Core Concepts
Interactive visualization can serve as a key enabling technology for human-centered AI (HCAI) tools, which amplify, augment, empower, and enhance human capabilities through AI models.
Summary
The paper discusses how interactive visualization can be a key enabling technology for creating human-centered AI (HCAI) tools. HCAI tools are interactive software tools that amplify, augment, empower, and enhance human performance using AI models, often novel generative or foundation models.
The paper first provides background on the history of AI and intelligence augmentation (IA), and how the convergence of these fields has given rise to HCAI. It then defines HCAI tools, their capabilities, and the human concerns they must address, including fairness, transparency, explainability, understandability, accountability, provenance, and privacy.
The paper then explains how interactive visualization can address these human concerns through design characteristics such as being open-ended and data-driven, facilitating user-computer conversations, externalizing data, serving as a shared data and task representation, and encouraging interaction.
The paper reviews four exemplar visualization-enabled HCAI tools from the authors' own work: TimeFork, HaLLMark, Outcome-Explorer, and uxSense. These tools demonstrate how visualization can amplify, augment, empower, and enhance human capabilities, while also addressing human concerns like transparency, explainability, and provenance.
Finally, the paper derives five design guidelines for creating visualization-enabled HCAI tools: (1) simple is plenty, (2) tackle human concerns directly, (3) encourage interaction, (4) show, don't tell, and (5) practice like you play. The paper concludes by discussing limitations and open problems in this area.
Statistics
"Harder, Better, Faster, Stronger: Interactive Visualization for Human-Centered AI Tools"
"Visualization has already been shown to be a fundamental component in explainable AI models, and coupling this with data-driven, semantic, and unified interaction feedback loops will enable a human-centered approach to integrating AI models in the loop with human users."
"Visualization has long played a key role in both camps: creating intelligent tools and systems that assist people in specific tasks while helping AI researchers and practitioners create better models and data."
Quotes
"human brains and computing machines will be coupled together very tightly, and [...] the resulting partnership will think as no human brain has ever thought..."
"Visualization can support AI transparency by visually representing concepts and models that are often abstract and complex."