Key Concepts
Combining human and computer vision to enhance face verification interpretability.
Summary
The article discusses the importance of transparency, fairness, and accountability in AI decisions, specifically in face verification. It introduces an approach that combines human and computer vision to increase the interpretability of face verification algorithms. By leveraging Mediapipe for segmentation and model-agnostic algorithms for insights, the study aims to bridge the gap between how machines perceive faces and how humans understand them. The research focuses on explaining AI decisions by perturbing images based on semantic areas and extracting important concepts for face verification.
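The perturbation idea described above can be sketched minimally: hide one semantic face region at a time and observe how the verification output changes. In the article, region masks come from MediaPipe segmentation; in this hypothetical sketch they are hard-coded boolean rectangles, and the image is a toy array rather than a real photograph.

```python
import numpy as np

def perturb_region(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Return a copy of the image with the masked pixels zeroed out."""
    perturbed = image.copy()
    perturbed[mask] = 0
    return perturbed

# Toy 8x8 grayscale "face" and a mask standing in for an "eye" region.
face = np.ones((8, 8))
eye_mask = np.zeros((8, 8), dtype=bool)
eye_mask[1:3, 1:3] = True

occluded = perturb_region(face, eye_mask)
print(occluded[1, 1], occluded[4, 4])  # masked pixel -> 0.0, untouched pixel -> 1.0
```

Feeding such perturbed images to the (black-box) verification model and comparing scores against the unperturbed baseline is what makes the explanation model-agnostic.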
Statistics
Artificial Intelligence (AI) increasingly influences decision-making processes.
Saliency maps offer insight into which features are critical to a decision.
The model extracts features from each face to be compared.
The model's knowledge is translated into human-understandable terms through semantic segmentation.
Two model-agnostic algorithms are adapted to produce human-interpretable insights.
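One way the points above fit together can be sketched as an occlusion-style, model-agnostic importance score: occlude each semantic region, re-embed the face, and measure the drop in cosine similarity to a reference embedding. The `embed` function below is a placeholder for the real feature extractor (which the article treats as a black box), and the region masks are illustrative stand-ins for MediaPipe segments.

```python
import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    # Stand-in embedding: normalized flattened pixels (a real model would be a CNN).
    v = image.ravel().astype(float)
    return v / (np.linalg.norm(v) + 1e-9)

def region_importance(image, reference, masks):
    """Score each semantic region by how much occluding it lowers similarity."""
    ref_emb = embed(reference)
    base = float(embed(image) @ ref_emb)
    scores = {}
    for name, mask in masks.items():
        occluded = image.copy()
        occluded[mask] = 0
        scores[name] = base - float(embed(occluded) @ ref_emb)
    return scores

face = np.ones((8, 8))
masks = {"eyes": np.zeros((8, 8), bool), "mouth": np.zeros((8, 8), bool)}
masks["eyes"][1:3, 1:6] = True   # larger region
masks["mouth"][6:7, 2:6] = True  # smaller region

scores = region_importance(face, face, masks)
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)  # -> ['eyes', 'mouth']: the larger occlusion hurts similarity more
```

Ranking regions by their similarity drop yields the kind of human-interpretable, concept-level explanation the study aims at, without needing access to the model's internals.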
Quotes
"We present an approach to combine computer and human vision to increase the explanation’s interpretability of a face verification algorithm."
"Incorporating human-based semantics in the models’ explanation process can introduce human bias."
"Our primary objective is to translate the XAI solution into human decision-making meaningfully."