
Envisioning Large Language Model Use by Autistic Workers for Communication Assistance


Core Concepts
The authors explore the use of Large Language Models (LLMs) by autistic workers for communication assistance, highlighting participants' preference for an LLM over a human confederate despite concerns about the quality of its advice.
Abstract
The study investigates how autistic adults use LLMs at work, revealing a strong affinity for the LLM over a human confederate. Participants valued the LLM for its clear advice and the sense of being understood, despite concerns about the assumptions it made. The study sheds light on the potential benefits and challenges of using LLMs to support workplace communication for autistic individuals.
Stats
82% of participants preferred the LLM over the human confederate.
Participants rated Paprika (the LLM) higher than Pepper (the human confederate) on utility, understanding, intent to use, and dependability.
Some participants found the LLM's formatting, expedience, privacy, open-mindedness, conversational tone, convenience/availability, and affordability appealing.
Quotes
"It’s awfully earnest, and I don’t get that a lot." - Participant 1 "I would go to Paprika all the time if I could get it on my phone through Discord." - Participant 1 "I think I just haven’t found the right approach...and I haven’t found the best information to help with that." - Participant 9

Key Insights Distilled From

by JiWoong Jang... at arxiv.org 03-07-2024

https://arxiv.org/pdf/2403.03297.pdf
"It's the only thing I can trust"

Deeper Inquiries

How can LLMs be improved to address concerns about assumptions made in responses?

To address concerns about assumptions in LLM responses, several improvements could be implemented:
Contextual Understanding: Improving the model's ability to take context into account before responding can reduce inaccurate assumptions.
Transparency: Mechanisms that let the model explain its reasoning or cite sources for its suggestions increase transparency and help users evaluate the advice.
Feedback Loop: A feedback loop through which users can correct misconceptions or inaccuracies in the model's responses can improve future interactions (a sketch follows this list).
Diverse Training Data: Training data that is diverse and inclusive of varied perspectives and experiences helps mitigate bias and reduces assumptions based on limited information.
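For illustration, the feedback-loop idea could be prototyped by persisting user corrections and injecting them into subsequent prompts. The sketch below is a hypothetical minimal example, not something described in the paper: `FeedbackSession` and `generate` are invented names, and `generate` stands in for whatever LLM completion call is available.

```python
# Hypothetical sketch of the "feedback loop" improvement: carry user
# corrections forward as prompt context so later answers avoid assumptions
# the user has already flagged. `generate` stands in for any LLM API call;
# none of these names come from the paper.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class FeedbackSession:
    corrections: list[str] = field(default_factory=list)  # user-supplied fixes

    def correct(self, note: str) -> None:
        # Record a misconception the user wants future answers to avoid.
        self.corrections.append(note)

    def build_prompt(self, question: str) -> str:
        # Prepend all earlier corrections so the model sees them every turn.
        if not self.corrections:
            return question
        notes = "\n".join(f"- {c}" for c in self.corrections)
        return f"Known corrections from this user:\n{notes}\n\n{question}"

    def ask(self, question: str, generate: Callable[[str], str]) -> str:
        return generate(self.build_prompt(question))

# Example usage (with any completion function of type str -> str):
#   session = FeedbackSession()
#   session.ask("How should I phrase this request to my manager?", generate)
#   session.correct("My team is fully remote; don't suggest meeting in person.")
#   session.ask("How should I phrase this request to my manager?", generate)
```

The design choice here is deliberately simple: corrections accumulate as plain text rather than being fine-tuned into the model, which keeps the user's data local and makes the loop transparent to inspect.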

What are the ethical considerations of relying on AI chatbots like LLMs for sensitive communication needs?

Relying on AI chatbots like LLMs for sensitive communication needs raises several ethical considerations:
Privacy Concerns: Data shared with these chatbots may contain sensitive information, raising questions about data privacy, storage, and potential misuse.
Bias and Fairness: There is a risk of perpetuating biases present in the training data, leading to unfair treatment of or discrimination against certain groups.
Accountability: When an AI chatbot gives erroneous advice, it may not be clear who is responsible for the incorrect guidance.
Informed Consent: Users should be fully informed about how their data is used, especially when seeking support for sensitive topics.

How might access to affordable AI tools impact traditional support services for neurodivergent individuals?

Access to affordable AI tools could affect traditional support services for neurodivergent individuals in both directions:
Positive impacts:
Increased Accessibility: Affordable AI tools could make support accessible to people previously excluded by cost barriers.
Supplemental Support: These tools could complement traditional services by providing resources and assistance beyond what human practitioners offer.
Negative impacts:
Dependency Risk: Over-reliance on AI tools may lead individuals to forgo human interaction or professional guidance when it is needed.
Quality Concerns: Advice from AI tools may vary in quality compared to personalized support from trained professionals, potentially leading to suboptimal outcomes.
Balancing these factors will be crucial to ensuring that affordable AI tools enhance rather than replace traditional support services while maintaining ethical standards and effectiveness.