This paper introduces BC-LLM, a method for learning Concept Bottleneck Models (CBMs) that leverages Large Language Models (LLMs) within a Bayesian framework to iteratively discover and refine interpretable concepts from data, attaining strong predictive accuracy while quantifying uncertainty over the discovered concepts.
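The mechanics can be pictured as posterior sampling over concept subsets. Below is a toy Metropolis-style sketch of that loop; the random single-concept swap stands in for the LLM's proposal step, and `extract` / `fit_score` are hypothetical stand-ins for concept annotation and CBM fitting, not BC-LLM's actual interfaces:

```python
import numpy as np

def bayesian_concept_search(candidates, extract, fit_score, num_iters=100, k=5, seed=0):
    """Toy posterior sampling over size-k concept subsets.

    candidates: list of candidate concept names
    extract(names) -> matrix of concept annotations for those names (hypothetical)
    fit_score(X)   -> held-out log-likelihood of a CBM fit on X (hypothetical)
    """
    rng = np.random.default_rng(seed)
    current = list(rng.choice(len(candidates), size=k, replace=False))
    current_score = fit_score(extract([candidates[i] for i in current]))
    samples = []
    for _ in range(num_iters):
        # Propose swapping one concept for a random candidate; in BC-LLM this
        # proposal comes from an LLM rather than a uniform draw.
        proposal = list(current)
        proposal[rng.integers(k)] = int(rng.integers(len(candidates)))
        prop_score = fit_score(extract([candidates[i] for i in proposal]))
        # Metropolis-style accept/reject on the score difference.
        if np.log(rng.random()) < prop_score - current_score:
            current, current_score = proposal, prop_score
        samples.append(list(current))
    # The empirical distribution of the samples expresses uncertainty over
    # which concepts belong in the bottleneck.
    return samples
```

The retained samples approximate a posterior over concept sets, which is where the uncertainty quantification comes from.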
Stochastic Concept Bottleneck Models (SCBMs) improve the interpretability and intervention effectiveness of traditional Concept Bottleneck Models (CBMs) by explicitly modeling dependencies among concepts, so that a user's correction of a single concept propagates to correlated concepts and interventions become more accurate and less labor-intensive.
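To make the dependency idea concrete, here is a minimal sketch of how a correction can propagate when concept logits are modeled jointly as a multivariate Gaussian. The conditioning rule below is standard Gaussian math, not necessarily the paper's exact intervention procedure, and `mu` / `Sigma` are assumed to come from a trained model:

```python
import numpy as np

def propagate_intervention(mu, Sigma, intervened_idx, intervened_vals):
    """Gaussian-conditioning update of unintervened concept logits.

    mu, Sigma: mean and covariance of the concept logits (assumed to be
               produced by a trained model for the current input).
    intervened_idx:  indices of the concepts the user corrected.
    intervened_vals: the corrected logit values.
    Returns the remaining indices and their updated (conditional) means.
    """
    rest_idx = np.setdiff1d(np.arange(len(mu)), intervened_idx)
    Sigma_ro = Sigma[np.ix_(rest_idx, intervened_idx)]
    Sigma_oo = Sigma[np.ix_(intervened_idx, intervened_idx)]
    # Conditional mean: mu_r + Sigma_ro Sigma_oo^{-1} (v - mu_o)
    delta = np.asarray(intervened_vals) - mu[intervened_idx]
    cond_mean = mu[rest_idx] + Sigma_ro @ np.linalg.solve(Sigma_oo, delta)
    return rest_idx, cond_mean
```

Because every concept correlated with the corrected one shifts its conditional mean, a single intervention can fix several concept predictions at once.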
This paper introduces a method that uses decision trees to inspect and control information leakage in Concept Bottleneck Models (CBMs), i.e., cases where soft concept predictions encode label-relevant information beyond the concepts' intended meaning, improving interpretability and reliability especially when the concept set is incomplete.
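One simple way to probe for such leakage, sketched below, is to compare a shallow decision tree trained on soft concept scores against one trained on hard (thresholded) concepts; a large accuracy gap suggests the soft scores carry extra label information. This probe is an illustrative heuristic in the spirit of the paper, not its exact procedure:

```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def leakage_gap(soft_concepts, labels, threshold=0.5, max_depth=4, seed=0):
    """Accuracy gap between trees fit on soft vs. hard concept predictions.

    A large positive gap suggests the soft scores carry label information
    beyond the concepts' intended meaning, i.e., leakage.
    """
    hard_concepts = (soft_concepts >= threshold).astype(float)
    Xs_tr, Xs_te, Xh_tr, Xh_te, y_tr, y_te = train_test_split(
        soft_concepts, hard_concepts, labels, test_size=0.3, random_state=seed
    )
    tree_soft = DecisionTreeClassifier(max_depth=max_depth, random_state=seed)
    tree_hard = DecisionTreeClassifier(max_depth=max_depth, random_state=seed)
    acc_soft = accuracy_score(y_te, tree_soft.fit(Xs_tr, y_tr).predict(Xs_te))
    acc_hard = accuracy_score(y_te, tree_hard.fit(Xh_tr, y_tr).predict(Xh_te))
    return acc_soft - acc_hard
```

Using a shallow tree as the probe keeps the comparison itself inspectable, which is in keeping with the paper's interpretability goal.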
This paper introduces Editable Concept Bottleneck Models (ECBMs), a novel approach to efficiently edit pre-trained Concept Bottleneck Models (CBMs) without expensive retraining, addressing challenges related to data privacy, mislabeling, and concept updates.
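The flavor of editing without retraining can be illustrated on the simplest possible head: a ridge-regression concept-to-label layer, where removing one training example admits a closed-form rank-one (Sherman-Morrison) downdate. This is a generic technique chosen for illustration, not ECBMs' actual editing rules:

```python
import numpy as np

def remove_example(A_inv, Xty, x_i, y_i):
    """Delete one training example from a ridge head without refitting.

    A_inv: inverse of (X^T X + lam * I) computed on the full training set.
    Xty:   X^T y on the full training set.
    x_i, y_i: the example (concept vector, label) to remove.
    Returns the updated weights and the downdated inverse.
    """
    Ax = A_inv @ x_i
    # Sherman-Morrison downdate of (A - x_i x_i^T)^{-1}; the denominator is
    # positive whenever lam > 0, since x_i's leverage is below 1.
    A_inv_new = A_inv + np.outer(Ax, Ax) / (1.0 - x_i @ Ax)
    Xty_new = Xty - y_i * x_i
    return A_inv_new @ Xty_new, A_inv_new
```

The same rank-one machinery extends to inserting examples or altering a concept column, which is the general flavor of edit the summary above refers to.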
The intervention effectiveness of Concept Bottleneck Models can be significantly improved by leveraging relations among concepts to realign the remaining concept assignments after a human intervenes on a subset of them.
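A schematic of what a learned realignment step might look like: a small network that takes the current concept predictions plus the user's intervention and outputs updated values for the unintervened concepts, keeping the corrections fixed. The MLP architecture below is an assumption for illustration, not the paper's design:

```python
import torch
import torch.nn as nn

class ConceptRealigner(nn.Module):
    """Schematic realignment module: given current concept predictions and a
    user's intervention on a subset, predict updated values for the rest."""

    def __init__(self, num_concepts: int, hidden: int = 64):
        super().__init__()
        # Input: current concepts, a 0/1 intervention mask, and the
        # intervened values, concatenated into one vector.
        self.net = nn.Sequential(
            nn.Linear(3 * num_concepts, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_concepts),
            nn.Sigmoid(),
        )

    def forward(self, concepts, mask, values):
        realigned = self.net(torch.cat([concepts, mask, values], dim=-1))
        # Keep the user's corrections fixed; realign only unintervened concepts.
        return mask * values + (1 - mask) * realigned
```

The realigned concepts are then fed to the usual label predictor, so a single correction improves the final prediction through the concepts it drags along.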