This study explores the integration of a large language model, specifically OpenAI's GPT-3.5-Turbo, as an AI tutor within the Artemis automated programming assessment system (APAS). Through a combination of empirical data collection and an exploratory survey, the researchers identified two main user personas:
Continuous Feedback - Iterative Ivy: Students who relied heavily on the AI tutor's feedback before submitting their final solutions to the APAS. This group used the AI tutor to guide their understanding and iteratively refine their code (see the sketch after this list).
Alternating Feedback - Hybrid Harry: Students who alternated between seeking the AI tutor's feedback and submitting their solutions to the APAS. This group adopted a more trial-and-error approach to problem-solving, using the APAS's automated results alongside the tutor's hints.
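To make these workflows concrete, the sketch below shows what a single feedback request to the AI tutor could look like. This is not the paper's Artemis integration: the helper name, system prompt, and exercise text are illustrative assumptions; only the chat-completions call and the gpt-3.5-turbo model name reflect the study's setup.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical tutoring prompt; the paper does not publish its exact wording.
TUTOR_SYSTEM_PROMPT = (
    "You are a programming tutor. Point out logic errors, failing edge cases, "
    "and style issues in the student's code, but never write out the full solution."
)

def request_tutor_feedback(exercise_statement: str, student_code: str) -> str:
    """Ask the LLM for formative feedback on a draft solution (illustrative helper)."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": TUTOR_SYSTEM_PROMPT},
            {"role": "user", "content": (
                f"Exercise:\n{exercise_statement}\n\n"
                f"My current attempt:\n```\n{student_code}\n```\n"
                "What should I improve before I submit?"
            )},
        ],
        temperature=0.2,  # keep hints focused and repeatable across refinement rounds
    )
    return response.choices[0].message.content


# "Iterative Ivy" pattern: refine with AI hints first, submit to the APAS afterwards.
if __name__ == "__main__":
    exercise = "Implement bubble_sort(values) returning the list in ascending order."
    draft = "def bubble_sort(values):\n    return values  # TODO"
    print(request_tutor_feedback(exercise, draft))
```

Under these assumptions, Hybrid Harry's pattern would simply interleave such calls with regular APAS submissions instead of front-loading them.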
The findings highlight both advantages and challenges of the AI tutor integration. Advantages include timely feedback and scalability; challenges include generic responses, lack of interactivity, operational dependencies, and student concerns about over-reliance and inhibited learning progress. The researchers also identified instances where the AI tutor revealed solutions or provided inaccurate feedback.
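One possible mitigation for the solution-revealing behaviour, sketched below, is to pass tutor replies through a server-side filter that withholds any reply containing a long code block. The paper does not describe such a guard; the function name and threshold are hypothetical.

```python
import re

# Matches fenced code blocks in a tutor reply.
CODE_BLOCK = re.compile(r"```.*?```", re.DOTALL)

def filter_tutor_reply(reply: str, max_code_lines: int = 3) -> str:
    """Withhold replies whose fenced code blocks look like a complete solution."""
    for block in CODE_BLOCK.findall(reply):
        code_lines = len(block.splitlines()) - 2  # drop the two fence lines
        if code_lines > max_code_lines:
            return ("The tutor's reply contained a near-complete solution and was "
                    "withheld. Try asking for a hint about a specific step instead.")
    return reply
```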
Overall, the study demonstrates the potential of large language models as AI tutors in programming education, but also underscores the need for further refinement to address the identified limitations and ensure an optimal learning experience for students.
Source: Eduard Frank..., arxiv.org, 04-04-2024, https://arxiv.org/pdf/2404.02548.pdf