Core Concepts
Explainability of search systems is a multidimensional concept comprising a positive utility factor and a negative roadblocks factor, as identified through a psychometric study that leveraged crowdsourcing.
Abstract
The study aimed to establish a user-centric definition of search system explainability by leveraging psychometrics and crowdsourcing. The researchers conducted a comprehensive literature review to identify 26 potential aspects of explainability, which were then used to design a questionnaire.
Through exploratory factor analysis (EFA) on a sample of 200 crowdsourced responses, the researchers identified two key factors underlying explainability:
Utility: This factor encompasses positive attributes such as plausibility, justifiability, trustworthiness, informativeness, acceptability, understandability, and transferability. These aspects represent the overall usefulness and effectiveness of the explainable search system.
Roadblocks: This factor captures negative attributes, namely the lack of aspects such as decomposability, global and local interpretability, simulatability, faithfulness, algorithmic transparency, trustworthiness, causality, uncertainty, units of explanation, visibility, and counterfactuals. These aspects represent critical barriers that prevent users from fully understanding the search system's decision-making process.
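To make the EFA step described above concrete, here is a minimal sketch of how such an analysis could be run in Python with the factor_analyzer package; the data file, column labels, and rotation choice are illustrative assumptions rather than the authors' exact pipeline.

```python
# A minimal sketch of the exploratory factor analysis (EFA) step, assuming the
# crowdsourced Likert-scale responses sit in a CSV with one row per participant
# and one column per questionnaire item. File name, column labels, and the
# oblique rotation are illustrative assumptions.
import pandas as pd
from factor_analyzer import FactorAnalyzer

responses = pd.read_csv("crowdsourced_responses.csv")  # hypothetical file

# Extract two factors with an oblique rotation, allowing the utility and
# roadblocks factors to correlate.
efa = FactorAnalyzer(n_factors=2, rotation="oblimin")
efa.fit(responses)

# Items loading strongly on the first vs. second factor suggest the
# utility / roadblocks split described above.
loadings = pd.DataFrame(
    efa.loadings_,
    index=responses.columns,
    columns=["factor_1", "factor_2"],
)
print(loadings.round(2))

# Variance explained by each factor (sum of squared loadings, proportional,
# and cumulative variance).
print(efa.get_factor_variance())
```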
The researchers then confirmed this two-factor model through confirmatory factor analysis (CFA) on a held-out set of 259 crowdsourced responses. The CFA results showed that the proposed hierarchical two-factor model had a good fit to the data, outperforming alternative models.
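The CFA step could be sketched as follows with the semopy package; the lavaan-style model string, item names, and data file are hypothetical stand-ins, not the authors' exact model specification.

```python
# A minimal sketch of the confirmatory factor analysis (CFA) step on the
# held-out responses using semopy. Item names and file name are assumptions.
import pandas as pd
import semopy

data = pd.read_csv("heldout_responses.csv")  # hypothetical held-out sample

# Hierarchical two-factor model: a general explainability factor governs the
# utility and roadblocks sub-factors, which in turn load on their items.
# In practice, additional identification constraints may be needed for a
# second-order factor with only two sub-factors.
model_desc = """
utility =~ item1 + item2 + item3 + item4 + item5 + item6 + item7
roadblocks =~ item8 + item9 + item10 + item11 + item12 + item13
explainability =~ utility + roadblocks
"""

model = semopy.Model(model_desc)
model.fit(data)

# Fit indices (chi-square, CFI, TLI, RMSEA, etc.) used to compare this model
# against alternative factor structures.
print(semopy.calc_stats(model).T)
```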
The identified dimensions of explainability can be used to guide the design and evaluation of explainable search systems, enabling targeted improvements to address both positive utility and negative roadblock factors. The methodology introduced in this work can also be applied to other IR domains and the wider NLP and ML communities.
Stats
Questionnaire statements used to measure explainability, with positively framed (utility) items followed by negatively framed (roadblock) items:
The search system should work well in different search tasks.
I would use this search engine in my everyday life.
The results page provides me enough information to find the answers I am looking for effectively.
The presentation of the results leads me to believe the results are ordered correctly.
The results match my expectations and I agree with them.
I trust that the results are ordered correctly and the system will order results correctly for other queries.
I can easily understand the contents of the results page.
If I change the query, I do not know how it will affect the result ordering.
I do not understand why the results are ordered the way they are and would not be able to recreate the orderings myself.
I think I need more information to understand why the given query produced the displayed results.
I do not understand the document properties that cause some results to be ordered higher than others.
I am unable to see and understand how changes in the query affect the result ordering.
The result interface does not help me understand the true decision making process of the search engine ranker.
I do not understand why each result is ordered in a certain place.
I'm unable to follow how the search engine ordered the results.
It's difficult for me to break down each of the search engine's components and understand why the results are ordered the way they are.
I do not know how confident the search engine is that its displayed orderings are correct.
I do not trust that the results are ordered correctly and that the system will correctly order results for other queries.
The format and amount of information provided in the result interface is not enough to help me understand why the results are ordered the way they are.
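As a rough illustration of how responses to these statements could be aggregated into the two factors, the sketch below assumes a 5-point Likert scale and assigns the positively framed items to utility and the negatively framed items to roadblocks; the item identifiers and exact item-to-factor assignments are assumptions for illustration.

```python
# A rough sketch of scoring responses to the statements above, assuming a
# 5-point Likert scale. Item identifiers and the item-to-factor assignment
# are illustrative assumptions based on how the items are framed.
import pandas as pd

UTILITY_ITEMS = [f"item{i}" for i in range(1, 8)]      # 7 positively framed items
ROADBLOCK_ITEMS = [f"item{i}" for i in range(8, 20)]   # 12 negatively framed items

def score_responses(df: pd.DataFrame) -> pd.DataFrame:
    """Return per-participant subscale means for utility and roadblocks."""
    scores = pd.DataFrame(index=df.index)
    scores["utility"] = df[UTILITY_ITEMS].mean(axis=1)
    scores["roadblocks"] = df[ROADBLOCK_ITEMS].mean(axis=1)
    return scores
```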