The paper presents a method for enabling natural-language-driven manipulation of existing visualizations. Key highlights:
The authors propose a design space for representing visualization-related tasks, which includes operations such as filtering, identification, comparison, aggregation, and derivation.
They introduce a deep learning-based natural language-to-task translator (NL-task translator) that parses natural language queries into structured, hierarchical task descriptions (a hypothetical sketch of such a description follows these highlights).
To train the NL-task translator, the authors leverage large language models to help curate a diverse, cross-domain dataset of natural language expressions paired with their corresponding tasks (a rough sketch of this curation step also appears below the highlights).
The authors define a visualization manipulation space spanning four levels and seven manipulation types to support in-situ manipulation of visualizations, enabling fine-grained control over visual elements.
The NL-task translator and a visualization manipulation parser work together to transform natural language queries into a sequence of atomic visualization manipulations, which are then applied to the existing visualization (see the pipeline sketch after these highlights).
The effectiveness of the approach is demonstrated through real-world examples and experiments, highlighting accurate natural language parsing and smooth application of the resulting visualization manipulations.
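As a rough illustration of what a structured, hierarchical task description could look like, the sketch below encodes a filter-plus-aggregate comparison as nested operations. The field names (operation, target, condition, children) and the example query are assumptions chosen for illustration, not the paper's actual schema.

```python
# Hypothetical sketch of a hierarchical task description; the field names are
# assumptions for illustration, not the paper's actual representation.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Task:
    operation: str                    # e.g. "filter", "identify", "compare", "aggregate", "derive"
    target: Optional[str] = None      # data field or visual element the operation acts on
    condition: Optional[str] = None   # predicate or parameter, e.g. "year >= 2020" or "mean"
    children: List["Task"] = field(default_factory=list)  # sub-tasks, making the description hierarchical


# Possible encoding of: "Compare the average sales of products released after 2020"
example = Task(
    operation="compare",
    target="sales",
    children=[
        Task(operation="filter", target="release_year", condition=">= 2020"),
        Task(operation="aggregate", target="sales", condition="mean"),
    ],
)
```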
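The LLM-assisted dataset curation mentioned above could, in principle, look like the following sketch: a seed task is paraphrased into several natural language requests per domain, yielding (query, task) training pairs. The prompt wording and the call_llm placeholder are assumptions, not the authors' actual pipeline.

```python
# Hypothetical sketch of LLM-assisted curation of (query, task) training pairs.
import json


def call_llm(prompt: str) -> str:
    # Placeholder for whatever large-language-model API is used; not specified here.
    raise NotImplementedError("plug in an actual LLM client")


def generate_pairs(seed_task: dict, domain: str, n: int = 5) -> list:
    # Ask the model for n domain-specific phrasings of the same underlying task.
    prompt = (
        f"Write {n} different natural language requests, in the domain of {domain}, "
        f"that a user could issue to perform this visualization task:\n"
        f"{json.dumps(seed_task)}\n"
        f"Return a JSON list of strings."
    )
    queries = json.loads(call_llm(prompt))
    return [{"query": q, "task": seed_task} for q in queries]
```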
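Finally, a minimal sketch of how the translator and the manipulation parser might be chained to answer a query against an existing chart; all class, function, and field names here are placeholders, not the paper's implementation.

```python
# Hypothetical end-to-end pipeline: NL query -> task description -> atomic manipulations -> updated chart.
from typing import Any, Dict, List

ChartSpec = Dict[str, Any]        # e.g. a declarative chart specification
Manipulation = Dict[str, Any]     # e.g. {"level": "mark", "type": "highlight", "selector": "sales > 100"}


def translate_query(query: str) -> Dict[str, Any]:
    """Stand-in for the NL-task translator: natural language -> hierarchical task description."""
    raise NotImplementedError


def parse_manipulations(task: Dict[str, Any], chart: ChartSpec) -> List[Manipulation]:
    """Stand-in for the manipulation parser: task description -> atomic manipulations."""
    raise NotImplementedError


def apply_manipulation(chart: ChartSpec, manipulation: Manipulation) -> ChartSpec:
    """Stand-in for applying one atomic manipulation to the existing chart."""
    raise NotImplementedError


def handle_query(query: str, chart: ChartSpec) -> ChartSpec:
    task = translate_query(query)                   # step 1: parse the query into a task
    for m in parse_manipulations(task, chart):      # step 2: derive atomic manipulations
        chart = apply_manipulation(chart, m)        # step 3: apply them in sequence
    return chart
```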
Source: Can Liu, Jiac..., arxiv.org, 04-10-2024, https://arxiv.org/pdf/2404.06039.pdf