This study examined the relationships between students' rate of programming errors and their grades on two exams in an introductory Java programming course. Data were collected from 280 students using an online integrated development environment, including 51,095 code snapshots with compiler and runtime errors.
Three error measures were explored to identify which best explains the variability in exam grades.
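If "EQ" here refers to Jadud's Error Quotient, a common measure in this literature, it can be sketched as follows: consecutive pairs of compilation events are scored (8 points if both end in an error, plus 3 if the error type repeats), each pair's score is normalized by the maximum of 11, and the scores are averaged. The function and event format below are illustrative, not the authors' implementation.

```python
def error_quotient(events):
    """Sketch of Jadud's Error Quotient.

    events: list of (had_error: bool, error_type: str | None) snapshots,
    in chronological order. Returns a value in [0, 1]; higher means the
    student struggled more with repeated errors.
    """
    pairs = list(zip(events, events[1:]))
    if not pairs:
        return 0.0
    total = 0.0
    for (e1_err, e1_type), (e2_err, e2_type) in pairs:
        score = 0
        if e1_err and e2_err:
            score += 8          # both events end in an error
            if e1_type == e2_type:
                score += 3      # the same error repeats
        total += score / 11     # normalize by the maximum pair score
    return total / len(pairs)

# Hypothetical compilation session for illustration:
session = [
    (True, "missing semicolon"),
    (True, "missing semicolon"),   # repeated error: (8 + 3) / 11
    (True, "cannot find symbol"),  # both errors, new type: 8 / 11
    (False, None),                 # error resolved: 0
]
print(round(error_quotient(session), 3))  # → 0.576
```

An EQ near 0 indicates a student who resolves errors quickly; an EQ near 1 indicates long runs of the same error.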
The results showed that models using EQ (Error Quotient) outperformed models using the other two measures, both in explained variability in grades and in Bayesian Information Criterion (BIC). Compiler errors were significant predictors of exam 1 grades, which focused on introductory programming topics, but only runtime errors significantly predicted exam 2 grades, which covered more complex topics.
Overall, the error measures did not explain most of the observed variability in exam grades, suggesting that other factors, such as students' problem-solving strategies and background knowledge, may play a significant role in determining performance in introductory programming courses.