How Practitioners Approach Confidence Assessment in Assurance Cases: An Interview Study


Core Concepts
While various methods for assessing confidence in assurance cases (ACs) exist, a gap persists between these methods and the needs and practices of practitioners, who primarily rely on qualitative approaches like peer reviews and dialectic argumentation.
Abstract
  • Bibliographic Information: Diemert, S., Shortt, C., & Weber, J. H. (2024). How do practitioners gain confidence in assurance cases? arXiv preprint arXiv:2411.03657v1.
  • Research Objective: This research paper investigates how practitioners assess confidence in assurance cases (ACs) for real-world systems, focusing on the methods used and barriers encountered.
  • Methodology: The study employed a grounded theory methodology, conducting structured interviews with 19 AC practitioners from various industries. Data analysis involved open coding of interview transcripts to identify key themes and concepts.
  • Key Findings: The study found that practitioners rely heavily on qualitative methods for AC confidence assessment, particularly peer reviews and dialectic argumentation (using "defeaters"). While practitioners are aware of quantitative methods, they express concerns about their trustworthiness, their applicability to nuanced AC information, the difficulty of justifying them to external stakeholders, and the challenge of determining appropriate numerical inputs.
  • Main Conclusions: A significant gap exists between proposed confidence assessment methods (CAMs) and practitioners' needs. The authors suggest that future research should focus on aligning CAMs with established practices, facilitating communication with stakeholders, providing clear guidance on CAM application, and ensuring the trustworthiness of these methods.
  • Significance: This research provides valuable insights into the practical challenges of AC confidence assessment, highlighting the need for more practitioner-centric approaches in developing and refining CAMs.
  • Limitations and Future Research: The study acknowledges limitations due to the convenience sampling method and suggests exploring the use of quantitative methods in specific AC contexts and further investigating the identified barriers.

Stats
  • 19 practitioners were interviewed for the study.
  • Participants had an average of 23 years of general professional experience and 16 years of systems assurance experience.
  • The automotive industry (including autonomous vehicles) was the most represented industry among participants (47%).
  • All participants (100%) reported using some form of independent or peer review to assess confidence in ACs.
  • A majority of participants (63%) described experiences using dialectic arguments ("defeaters") to challenge positive arguments in ACs.
  • Just under half of the participants (47%) indicated using checklists for confidence assessment.
  • Only two participants (11%) reported using a quantitative method for confidence assessment in a real-world system.
Quotes
"Initially, it [the motivation for preparing an AC] was purely that the standard requires it. . . . it was really just: it’s a requirement of the standard that we were working to at the time and therefore we should put one together. Since then, of course, I have realized that it’s [preparing an AC] much more important than that." ". . . there was lots of extensive documentation about, various safety aspects of the system but then they weren’t particularly easy to navigate by themselves, so the assurance case effectively served as a way to quickly structure a lot of that information . . . where the role that each piece of evidence instead of just, you know, being spread out over hundreds of pages of documents, but so that was logically connected together to an argument" ". . . it’s not like you, you know, you put a bow on the assurance case and say, ‘we’re done’. You know, you’re constantly having to go back and revisit it. So, it’s like continuous process. It’s a live [artifact]”. "having, you know, a big open discussion between several reviewers and the authors of the case is, for me, is the best way to [gain] qualitative confidence." ". . . this was the big eye opener. We had created a number of safety cases for each release of our product . . . and for the first time, we used ‘eliminative induction’, ‘doubting’, or whatever. And it was remarkable. We discovered something like 25 problems that we had not seen previously. Although we had been producing safety cases. We have not seen these particular problems. Some of those problems we could immediately fix . . . something like 12 or so of them were problems that could not be easily fixed.”

Key Insights Distilled From

by Simon Diemert et al. at arxiv.org, 11-07-2024

https://arxiv.org/pdf/2411.03657.pdf
How do practitioners gain confidence in assurance cases?

Deeper Inquiries

How can the development of confidence assessment methods be made more inclusive of practitioner perspectives and feedback?

Integrating practitioner perspectives and feedback into the development of Confidence Assessment Methods (CAMs) for Assurance Cases (ACs) is crucial for their practical applicability and acceptance. Here's how:

  • Early and Continuous Engagement: Involve practitioners from the target industries throughout the CAM development lifecycle, including initial needs assessment, design iterations, prototype testing, and refinement based on real-world use cases.
  • Establish Feedback Mechanisms: Create structured channels for practitioners to provide feedback, such as:
    - Workshops and Focus Groups: Facilitate interactive sessions to gather input on CAM usability, understand practical challenges, and identify potential improvements.
    - Surveys and Questionnaires: Collect broader feedback on specific aspects of CAMs and their perceived strengths and weaknesses.
    - Pilot Studies: Conduct pilot deployments of CAMs in real-world settings and gather detailed feedback on their effectiveness and any implementation hurdles.
  • Address Practitioner Concerns: Actively listen to and address concerns raised by practitioners. This might involve:
    - Simplifying Complexity: Design CAMs that are easy to understand and apply, even for those without deep statistical or mathematical expertise.
    - Providing Clear Guidance and Training: Develop comprehensive guidance documents, training materials, and tools tailored to practitioners' needs and workflows.
    - Demonstrating Value and Practicality: Clearly articulate the benefits of using CAMs in terms of improved assurance, risk reduction, and stakeholder communication.
  • Promote Transparency and Collaboration: Foster open communication and collaboration between researchers and practitioners. Share research findings, best practices, and lessons learned through publications, conferences, and online forums.

By actively involving practitioners and incorporating their feedback, CAM development can shift from a primarily theoretical exercise to a collaborative effort that produces methods aligned with real-world needs and constraints.

Could the perceived limitations of quantitative methods stem from a lack of accessible tools and training specifically designed for their application in assurance cases?

Yes, the perceived limitations of quantitative CAMs for ACs could certainly be exacerbated by a lack of accessible tools and training tailored to their application. Here's why:

  • Conceptual Complexity: Quantitative methods often rely on probabilistic reasoning, statistical models, or other mathematical frameworks that can be challenging for practitioners without a strong background in these areas.
  • Tooling Gap: While some general-purpose statistical software might be applicable, there is often a lack of specialized tools designed specifically for applying quantitative CAMs to the structure and content of ACs. This makes their application cumbersome and error-prone.
  • Interpretation Challenges: Even when results are generated, interpreting the output of quantitative CAMs in the context of an AC, and making sound engineering judgments based on those results, requires specific knowledge and experience.

These limitations could be addressed through accessible tools and training:

  • User-Friendly Tools: Develop software tools that integrate seamlessly with common AC notations (e.g., GSN), provide intuitive interfaces for inputting data, defining parameters, and visualizing results, and offer guidance and error checking to minimize misuse. (A minimal sketch after this answer illustrates the kind of confidence propagation such a tool might automate.)
  • Targeted Training: Create training programs that explain the underlying concepts of quantitative CAMs in an accessible manner, provide hands-on experience using the tools with realistic AC examples, and focus on practical application, interpretation of results, and decision-making.

By bridging the tooling and training gap, the perceived limitations of quantitative CAMs can be mitigated, making them more accessible and potentially leading to wider adoption by practitioners.
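To make the tooling discussion concrete, the Python sketch below shows one hypothetical quantitative scheme: propagating numeric confidence values up a GSN-style argument tree. The paper does not prescribe any particular scheme; the node structure, the independence assumption, and the combination rules (product for conjunctive support, maximum over alternative strategies) are illustrative assumptions only.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Node:
        """A claim (goal) in the argument, supported by sub-claims or by evidence."""
        name: str
        children: List["Node"] = field(default_factory=list)
        evidence_confidence: Optional[float] = None  # leaf nodes only, in [0, 1]
        conjunctive: bool = True  # True: all children required; False: alternatives

    def confidence(node: Node) -> float:
        """Recursively combine child confidences into a confidence for `node`."""
        if not node.children:
            assert node.evidence_confidence is not None, f"{node.name} lacks evidence"
            return node.evidence_confidence
        child_vals = [confidence(c) for c in node.children]
        if node.conjunctive:
            # All sub-claims must hold: multiply, assuming independence. This is
            # a strong assumption, and exactly the kind of modelling choice that
            # feeds the trust concerns practitioners raised about quantitative methods.
            result = 1.0
            for v in child_vals:
                result *= v
            return result
        # Alternative argument legs: credit the strongest single line of argument.
        return max(child_vals)

    if __name__ == "__main__":
        top = Node("System is acceptably safe", children=[
            Node("All hazards identified", evidence_confidence=0.95),
            Node("All mitigations verified", evidence_confidence=0.90),
        ])
        print(f"Top-level confidence: {confidence(top):.2f}")  # prints 0.85

Even with such a tool, the chosen combination rules would still need to be justified to external stakeholders, which is precisely one of the barriers participants reported.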

What are the ethical implications of relying solely on qualitative methods for assessing confidence in critical systems, and how can these be addressed?

Relying solely on qualitative methods for assessing confidence in critical systems presents several ethical implications:

  • Subjectivity and Bias: Qualitative assessments are inherently subjective and prone to individual biases, potentially leading to inconsistent or unreliable confidence judgments. This is particularly concerning for critical systems where lives or significant assets are at stake.
  • False Sense of Security: Overly optimistic qualitative assessments, especially if not rigorously challenged, can create a false sense of security. This can lead to underestimation of risks and inadequate mitigation measures.
  • Lack of Transparency and Traceability: Qualitative judgments, if not well documented, can lack transparency and make it difficult to understand the rationale behind confidence levels. This hinders independent verification and accountability.

These ethical implications can be addressed as follows:

  • Combine Qualitative and Quantitative Methods: Strive for a balanced approach that leverages the strengths of both. Qualitative methods can provide context, identify potential weaknesses, and guide the selection of appropriate quantitative techniques; quantitative methods can introduce more rigor, reduce subjectivity, and offer a more nuanced understanding of confidence levels.
  • Structured Argumentation and Defeaters: Employ structured argumentation notations (e.g., GSN) to make qualitative reasoning more explicit and transparent. Encourage the use of "defeaters" to systematically challenge arguments and uncover potential weaknesses (see the sketch after this answer for one way to keep such challenges traceable).
  • Independent Review and Verification: Implement rigorous independent review processes involving experts who were not involved in the original AC development. This helps identify biases, challenge assumptions, and ensure a more objective assessment of confidence.
  • Documentation and Traceability: Maintain thorough documentation of all qualitative judgments, including the rationale, the evidence considered, and any limitations. This enhances transparency and enables traceability of decisions.
  • Continuous Improvement: Establish a culture of continuous improvement where confidence assessments are revisited and refined based on operational experience, new information, or changes in the system or its environment.

By acknowledging and addressing the ethical implications of relying solely on qualitative methods, we can strive for more robust, trustworthy, and ethically sound confidence assessments for critical systems.
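As one way of making the defeater and traceability points concrete, here is a minimal Python sketch of recording defeaters against claims together with their resolution status, so that a review can list every doubt that remains open. The class and field names are hypothetical illustrations, not an interface from the paper or from any existing AC tool.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Defeater:
        doubt: str           # the challenge raised against a claim
        resolved: bool = False
        rationale: str = ""  # documented reasoning, supporting traceability

    @dataclass
    class Claim:
        statement: str
        defeaters: List[Defeater] = field(default_factory=list)

    def open_doubts(claims: List[Claim]) -> List[str]:
        """Return a human-readable entry for every unresolved defeater."""
        return [
            f"{claim.statement}: {d.doubt}"
            for claim in claims
            for d in claim.defeaters
            if not d.resolved
        ]

    if __name__ == "__main__":
        claims = [
            Claim("Sensor failures are detected within 100 ms", defeaters=[
                Defeater("Fault-injection tests omit simultaneous dual failures"),
                Defeater("Detection latency unmeasured at low temperature",
                         resolved=True,
                         rationale="Covered by cold-chamber test campaign"),
            ]),
        ]
        for entry in open_doubts(claims):
            print("OPEN:", entry)

Keeping each doubt and its resolution rationale alongside the claim it targets directly supports the documentation and independent-review practices described above.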