Core Concepts
Building interpretable clustering with coverage and discrimination constraints.
Abstract
The paper argues that clustering should be not only high-quality but also explainable. It introduces ECS, a method that integrates expert knowledge and constraints to produce interpretable clusterings. The framework centers on two properties, coverage and discrimination, so that each cluster comes with an explanation. The method generates candidate clusters, filters them against the constraints, and then selects clusters and constructs their explanations using Constraint Programming. Experiments on several datasets show how the coverage and discrimination parameters affect explanation quality.
- Explainable AI is crucial in clustering tasks.
- ECS method integrates expert knowledge and constraints for interpretable clusterings.
Introduction
- Clustering groups objects based on similarities.
- Proposed method aims for high-quality and explainable clustering.
Interpretable Clustering Formulation
- Data described by features and Boolean descriptors.
- Clustering and explanations built simultaneously.
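A minimal sketch of the data representation described above: each instance carries numeric features (used for clustering quality) and Boolean descriptors (used to build explanations). The instance values and descriptor names here are illustrative, not from the paper.

```python
# Each instance has numeric features plus a set of Boolean descriptors.
instances = [
    {"features": (1.0, 2.0), "descriptors": {"small", "red"}},
    {"features": (1.1, 2.1), "descriptors": {"small", "blue"}},
    {"features": (8.0, 9.0), "descriptors": {"large", "red"}},
]

# A candidate explanation is a set of descriptors; an instance satisfies
# the pattern when it carries every descriptor in it.
def satisfies(instance, pattern):
    return pattern <= instance["descriptors"]

print(satisfies(instances[0], {"small"}))  # True
print(satisfies(instances[2], {"small"}))  # False
```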
Interpretable Cluster Selection
- Method involves generating candidate clusters and selecting based on constraints.
- Constraint Programming used for cluster selection and explanation construction.
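Under a common reading of these two constraints (the paper's exact definitions may differ), coverage is the fraction of a cluster's instances that satisfy a pattern, and discrimination is the fraction of instances outside the cluster that do not. A sketch of the pruning step built on that reading:

```python
def coverage(pattern, cluster, descriptors):
    """Fraction of the cluster's instances that satisfy the pattern."""
    return sum(pattern <= descriptors[i] for i in cluster) / len(cluster)

def discrimination(pattern, cluster, descriptors):
    """Fraction of instances outside the cluster that do NOT satisfy it."""
    outside = [i for i in range(len(descriptors)) if i not in cluster]
    if not outside:
        return 1.0
    return sum(not (pattern <= descriptors[i]) for i in outside) / len(outside)

def admissible(pattern, cluster, descriptors, min_cov=0.8, min_disc=0.8):
    """Pruning: keep only (cluster, pattern) pairs meeting user thresholds."""
    return (coverage(pattern, cluster, descriptors) >= min_cov
            and discrimination(pattern, cluster, descriptors) >= min_disc)

descriptors = [{"small", "red"}, {"small", "blue"},
               {"large", "red"}, {"large", "blue"}]
print(coverage({"small"}, {0, 1}, descriptors))        # 1.0
print(discrimination({"small"}, {0, 1}, descriptors))  # 1.0
print(admissible({"red"}, {0, 1}, descriptors))        # False: coverage 0.5
```

The `min_cov` and `min_disc` thresholds stand in for the coverage and discrimination parameters the experiments vary.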
Experimental Results
- Impact of coverage and discrimination parameters on clustering quality.
- Comparison with decision tree-based methods.
- Importance of allowing unassigned instances for finding interpretable clusterings.
Stats
We aim to find a clustering that scores highly on classic clustering criteria and that is also explainable.
Our method relies on four steps: generation of a set of partitions, computation of frequent patterns for each cluster, pruning clusters that violate constraints, and selection of clusters and associated patterns.
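The four steps above can be sketched end to end. This is a toy stand-in, not the paper's implementation: the candidate clusters are hand-written rather than produced from base partitions, and a greedy pass replaces the CP model for the final selection; instances left uncovered stay unassigned, as the paper allows.

```python
from itertools import combinations

def frequent_patterns(cluster, descriptors, min_support, max_size=2):
    # Step 2: mine descriptor patterns supported by >= min_support of the cluster.
    universe = sorted(set().union(*(descriptors[i] for i in cluster)))
    patterns = []
    for k in range(1, max_size + 1):
        for combo in combinations(universe, k):
            pat = set(combo)
            if sum(pat <= descriptors[i] for i in cluster) / len(cluster) >= min_support:
                patterns.append(pat)
    return patterns

def select_clusters(candidates, descriptors, min_cov=0.8, min_disc=0.8):
    # Step 3: prune clusters lacking a pattern that meets both thresholds.
    admissible = []
    for cluster in candidates:
        for pat in frequent_patterns(cluster, descriptors, min_cov):
            outside = [i for i in range(len(descriptors)) if i not in cluster]
            disc = (sum(not (pat <= descriptors[i]) for i in outside) / len(outside)
                    if outside else 1.0)
            if disc >= min_disc:
                admissible.append((cluster, pat))
                break
    # Step 4: greedily pick disjoint clusters (stand-in for the CP model).
    selected, used = [], set()
    for cluster, pat in sorted(admissible, key=lambda cp: -len(cp[0])):
        if not (cluster & used):
            selected.append((cluster, pat))
            used |= cluster
    return selected

descriptors = [{"small", "red"}, {"small", "blue"},
               {"large", "red"}, {"large", "blue"}]
# Step 1 would generate candidate clusters from several base partitions;
# they are hand-written here for illustration.
candidates = [{0, 1}, {2, 3}, {0, 2}]
print(select_clusters(candidates, descriptors))
# → [({0, 1}, {'small'}), ({2, 3}, {'large'})]
```

Note how the overlapping candidate `{0, 2}` is admissible (pattern `{'red'}`) but dropped by the greedy pass because its instances are already assigned.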
The method can integrate prior knowledge in the form of user constraints, either before the CP model is built or within it.
Quotes
"The domain of explainable AI is of interest in all Machine Learning fields."
"We aim at leveraging expert knowledge on the structure of the expected clustering or on its explanations."