Core concepts
Users constantly face trade-offs between privacy, utility, and convenience when using large language model-based conversational agents. However, users' erroneous mental models and dark patterns in system design limit their awareness and control over privacy risks, leading to unintended sensitive disclosures.
Résumé
The study examined how users navigate disclosure risks and benefits when using large language model (LLM)-based conversational agents (CAs) like ChatGPT.
The researchers first analyzed a dataset of real-world ChatGPT conversations to understand users' sensitive disclosure behaviors (RQ1). They found that users disclosed various types of personally identifiable information (PII) about themselves and others, suggesting interdependent privacy issues. The researchers developed a typology of disclosure scenarios based on context, topic, purpose, and prompt strategy.
To further explore users' perspectives (RQ2, RQ3), the researchers conducted semi-structured interviews with 19 users of LLM-based CAs. They found that users' disclosure intentions were shaped primarily by their perceptions of the AI's capabilities, the convenience of the interaction, and their assessment of the data's sensitivity. Many users felt resigned, reasoning that their data was already accessible elsewhere and that the marginal risk of sharing it with CAs was therefore low.
However, users also expressed concerns about institutional misuse of their data, about others discovering their CA use, and about idea theft. The interviews revealed that users held varied and often flawed mental models of how LLMs work, which impaired their ability to reason about privacy risks. In addition, the human-like quality of the interactions encouraged more sensitive disclosures, further complicating users' ability to navigate the trade-offs.
The findings suggest the need for design interventions to improve user awareness, perceived control, and actual control over privacy when using LLM-based systems. Addressing the fundamental misunderstandings and dark patterns in the current systems will also require regulatory and structural changes.
Quotes
"I'm doing the same risk by using the app like Instagram or Facebook." (P1)
"Telling ChatGPT I live in [city name redacted], it's kind of like, saying I live on the earth." (P10)
"He asked me to talk to him about my brother. It's like a full conversation. He wanted to know everything." (P16)
"I hope they (my professors) will never know I used AI to do that (write emails)." (P8)
"I don't know if ChatGPT uses it (the fiction that I wrote) as inspiration for other people, or spits it out as it wrote it itself instead of me." (P2)