Core Concepts
The author explores the feasibility of using large language models (LLMs) to generate code explanations for programming education, comparing them with explanations from experts and students.
Summary
In programming education, worked examples are crucial for understanding coding concepts, yet writing detailed code explanations is time-consuming for instructors. This study investigates whether LLMs such as ChatGPT can automate the generation of code explanations, comparing the results with explanations written by experts and by students. By analyzing metrics such as lexical diversity, readability, and similarity, the study assesses how AI-generated code explanations measure up against human-authored ones.
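The study's exact metric implementations are not given here, but two of the measures it names can be sketched in a minimal form: lexical diversity as a type-token ratio, and similarity as cosine similarity over word-count vectors. Both functions below are illustrative assumptions, not the paper's actual code.

```python
import math
from collections import Counter

def type_token_ratio(text):
    """Lexical diversity as unique words / total words (a common simple proxy)."""
    words = text.lower().split()
    return len(set(words)) / len(words) if words else 0.0

def cosine_similarity(text_a, text_b):
    """Similarity of two explanations via cosine over bag-of-words counts."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0
```

Applied to an expert and a ChatGPT explanation of the same code, these would give a rough sense of vocabulary richness and overlap, though the study's measures may be more sophisticated (e.g., length-corrected diversity indices).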
Statistics
Instructors generally do not have time or patience to properly author explanations of their examples.
Creating just one explained example could take 30 minutes even with authoring tools.
An ANOVA analysis indicated statistically significant variations among experts, ChatGPT, and students in terms of lexical diversity.
Explanations produced by students are shorter than those by experts and ChatGPT but have higher lexical density.
Gunning-Fog readability scores significantly differ across expert, student, and ChatGPT explanations.
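The Gunning-Fog index mentioned above estimates the years of schooling needed to understand a text on first reading. The standard formula is 0.4 × (average sentence length + percentage of complex words), where complex words have three or more syllables. The sketch below uses a crude vowel-group heuristic for syllables; it is an approximation, not the study's implementation.

```python
import re

def count_syllables(word):
    """Approximate syllables as contiguous vowel groups (crude heuristic)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def gunning_fog(text):
    """Gunning-Fog index: 0.4 * (avg words per sentence + % complex words)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    complex_words = [w for w in words if count_syllables(w) >= 3]
    return 0.4 * (len(words) / len(sentences)
                  + 100 * len(complex_words) / len(words))
```

Comparing this score across expert, student, and ChatGPT explanations of the same example is the kind of analysis the readability finding describes: a lower score suggests shorter sentences and simpler vocabulary.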
Quotes
"Most approaches for presenting code examples to students are based on line-by-line explanations."
"Instructors rarely have time to provide detailed explanations for many examples used in programming classes."
"Using LLMs like ChatGPT could potentially resolve the authoring bottleneck in creating code explanations."