The study explores the design and implementation of three conversational search systems: text-based, image-based, and a mixed system combining text and images. A diverse group of 21 participants, including students and adults over 50, interacted with these systems while sensors captured their physiological data (skin conductivity, heart rate, eye movements, and facial expressions).
Among the key findings, the study highlights the promise of the image-based conversational search system, especially when integrated into a mixed system, offering both clarity and engagement. The sensor-based feedback mechanism provides valuable insights into user experience and decision-making, guiding future system improvements.
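To illustrate what such a sensor-based feedback mechanism might look like in practice, here is a minimal sketch that fuses per-sample physiological readings into a single indicative engagement score. All signal names, units, and the equal-weight averaging are illustrative assumptions, not details taken from the paper:

```python
# Hypothetical sketch: combining physiological signal streams into a simple
# engagement score. Signals, units, and weighting are assumptions for
# illustration only, not the paper's actual method.
from statistics import mean

def normalize(samples):
    """Min-max normalize a list of readings to the [0, 1] range."""
    lo, hi = min(samples), max(samples)
    if hi == lo:
        return [0.0 for _ in samples]
    return [(x - lo) / (hi - lo) for x in samples]

def engagement_score(skin_conductance, heart_rate, fixation_duration):
    """Average the normalized channels into one score in [0, 1]."""
    channels = [normalize(skin_conductance),
                normalize(heart_rate),
                normalize(fixation_duration)]
    # Average across channels at each time step, then across time.
    per_sample = [mean(values) for values in zip(*channels)]
    return mean(per_sample)

score = engagement_score(
    skin_conductance=[0.2, 0.4, 0.8, 0.6],   # microsiemens (example values)
    heart_rate=[72, 75, 90, 85],             # beats per minute
    fixation_duration=[180, 220, 300, 260],  # milliseconds
)
print(round(score, 2))
```

A real system would need per-signal calibration and time alignment across sensors; this sketch only shows the basic fusion idea of reducing several streams to one interpretable indicator.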
The researchers conclude that these conversational search systems, particularly the mixed approach, hold significant potential in enhancing technological accessibility and fostering inclusivity for individuals with intellectual disabilities.
Key insights from the paper by Yue Zheng, Le... at arxiv.org, 04-01-2024.
Source: https://arxiv.org/pdf/2403.19899.pdf