Core Concepts
HaLLMark enhances writers' control, transparency, and conformity in AI-assisted writing.
Abstract
The content discusses the development and evaluation of HaLLMark, a tool for visualizing interactions with Large Language Models (LLMs) in creative writing. It explores how HaLLMark affects writers' agency, ownership, communication of AI contributions, and adherence to AI-writing policies. The study involved 13 creative writers who used both HaLLMark and a baseline tool to write short stories while interacting with LLMs.
Structure:
Introduction to the HaLLMark System
Abstract on LLMs in Creative Writing
Related Work on Writing Support Tools and LLM Concerns
Formative Analysis of AI-Assisted Writing Policies
Design Rationale for the HaLLMark System
Visual Interface Components: Prompting LLMs, Prompt Card, Provenance Visualization, Linking Visualization and Artifact
Evaluation Study Setup: Tasks, Participants, Measures
Results Analysis for RQ1-RQ4: Interaction with LLMs, Agency & Ownership, Communication & Transparency, Conformity to Policies
Stats
"On average, the stories contained 13.66% text written by the AI when participants used the baseline."
"In comparison, the stories contained only 3.48% text written by AI when participants used HaLLMark."
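Percentages like these can be derived from span-level provenance tags that record who authored each piece of the final text. A minimal sketch of such a computation (the `Span` structure and `"human"`/`"ai"` labels are illustrative assumptions, not HaLLMark's actual data model):

```python
from dataclasses import dataclass

@dataclass
class Span:
    """A contiguous run of text with an authorship tag (illustrative schema)."""
    text: str
    author: str  # assumed labels: "human" or "ai"

def ai_text_percentage(spans):
    """Percent of characters in the final artifact attributed to the AI."""
    total = sum(len(s.text) for s in spans)
    ai = sum(len(s.text) for s in spans if s.author == "ai")
    return 100.0 * ai / total if total else 0.0

# Example: a story assembled from tagged spans
story = [
    Span("Once upon a time, ", "human"),
    Span("a dragon guarded the archive.", "ai"),
]
print(f"{ai_text_percentage(story):.2f}% AI-written")
```

A character-based measure is only one choice; word- or sentence-level accounting would give different figures, and the paper does not specify which unit underlies the reported percentages.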
Quotes
"I liked [HaLLMark] better because I was trying to use the AI without overusing it." - Participant 2
"[HaLLMark] made me feel less confused... what did I generate? What did the AI generate?" - Participant 7
"When I read the policy before the study... this will be such a pain... But then when I used the tool... this is easy!" - Participant 5