This position paper argues that large language models (LLMs) constitute promising yet underutilized academic reading companions capable of enhancing student learning. The authors detail an exploratory study examining Anthropic's Claude.ai, an LLM-based interactive assistant designed to help students comprehend complex qualitative literature.
The study combines quantitative survey data and qualitative interviews to compare outcomes between a control group and an experimental group that used Claude.ai over a semester across two graduate courses. Initial findings demonstrate tangible improvements in reading comprehension and engagement among participants using the AI agent compared with unsupported independent study. However, the authors acknowledge potential risks of overreliance and ethical considerations that warrant continued investigation.
By documenting an early integration of an LLM reading companion into an educational context, this work contributes pragmatic insights to guide the development of synthetic personae that support learning. The authors emphasize the need for responsible design and multi-stakeholder involvement to maximize the benefits of AI integration while prioritizing student wellbeing. They argue that thoughtful exploration of LLMs as academic aids, rather than reactionary policies or complacency, offers the best path forward in empowering students through evidence-based practices.
Source: Celia Chen, A..., arxiv.org, 03-29-2024, https://arxiv.org/pdf/2403.19506.pdf