
Automating Computer Programming Assessments and Project Submissions: Experiences from a University Department


Key Concepts
This paper describes the authors' experiences in automating computer programming assessments and project submissions using CodeRunner and GitHub Classroom, two freely available tools (CodeRunner is open source; GitHub Classroom is free for educational use) that have helped improve the efficiency and effectiveness of teaching and learning in programming courses.
Summary

The paper discusses the authors' experiences in automating computer programming assessments and project submissions at their university department over the past six years.

Key highlights:

  1. Challenges in traditional programming courses: Setting up programming environments, verifying student code logic, helping with debugging, and manually evaluating student work.

  2. Adoption of CodeRunner (CR) and GitHub Classroom (GHC) to address these challenges:

    • CR provides a single interface for multiple programming languages, allowing automated grading of programming exercises.
    • GHC enables online programming assignments and project submissions, with features like version control, autograding, and feedback.
  3. Installation and customization of CR and GHC:

    • CR was installed on a separate server and integrated with the university's Moodle learning management system.
    • GHC was used to create classrooms, assign programming exercises, and manage project submissions.
  4. Experiences with using CR and GHC:

    • CR allows closed-book programming assessments, while GHC is suitable for open-book exercises and project submissions.
    • Both tools support multiple programming languages and enable parallel assessments.
    • Students benefit from the real-time feedback and industry-ready environment provided by GHC.
    • Instructors save time on manual evaluation and can focus on helping students understand programming concepts.
  5. Lessons learned:

    • CR and GHC are free to use and well-suited for educational settings (CR is open source; GHC is free for educational accounts).
    • Initial learning curve for students, but they become more comfortable after a few attempts.
    • Importance of faculty training and addressing potential issues like plagiarism.
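The automated grading at the core of both tools can be illustrated with a minimal sketch: student code is run against instructor-defined test cases and scored by comparing actual to expected output. This is only an illustrative approximation, not CodeRunner's or GHC's actual implementation; the test cases and function name here are hypothetical.

```python
import subprocess
import sys

# Hypothetical test cases an instructor might define for a
# CodeRunner-style question: each pairs stdin input with the
# expected stdout.
TEST_CASES = [
    {"stdin": "2 3\n", "expected": "5\n"},
    {"stdin": "10 -4\n", "expected": "6\n"},
]

def grade_submission(source_path: str) -> float:
    """Run a student's Python script against every test case and
    return the fraction of tests passed (0.0 to 1.0)."""
    passed = 0
    for case in TEST_CASES:
        result = subprocess.run(
            [sys.executable, source_path],
            input=case["stdin"],
            capture_output=True,
            text=True,
            timeout=5,  # guard against infinite loops in student code
        )
        if result.stdout == case["expected"]:
            passed += 1
    return passed / len(TEST_CASES)
```

A production system such as CodeRunner additionally sandboxes execution (e.g. via a separate job server) so that untrusted student code cannot affect the grading host.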

The paper provides a comprehensive overview of the authors' experiences in automating programming assessments and project submissions, highlighting the benefits and insights gained from using CR and GHC in their university department.


Statistics
Consider a typical institution conducting a programming lab course. Each student is expected to keep a lab notebook and complete the assigned programming exercise each week in a well-formatted structure. With a class size of 60, it is close to impossible for the course instructor to carry out all of these tasks in the given time.
Quotes
"With CR or GHC, course instructors distribute programming assignments to students with appropriate instructions, all in virtual mode. Then students work independently based on the instructions, and the code is verified automatically. Here, there is no manual evaluation, so the course instructor does not need to go to individual students' places and check the output of his code."

"To encourage the collaborative environment, we use GHC for the submission of students' code. This method provides the course instructors with information about code submissions, frequency of these submissions with changes, and the individual contribution, all in online mode. This reduces the manual time of checking the students' progress in evaluating projects."
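The submission information GHC exposes (who committed and how often) can be summarized directly from a repository's commit history. The sketch below uses a hypothetical in-memory commit-record format for illustration; in practice these records would be pulled from the Git log or the GitHub API.

```python
from collections import Counter
from datetime import date

# Hypothetical commit records, as might be extracted from a team
# repository's history; the record format is an assumption.
commits = [
    {"author": "alice", "date": date(2024, 3, 1)},
    {"author": "bob",   "date": date(2024, 3, 1)},
    {"author": "alice", "date": date(2024, 3, 3)},
    {"author": "alice", "date": date(2024, 3, 5)},
]

def contribution_summary(commits):
    """Count commits per author, a rough proxy for individual
    contribution in a group project."""
    return Counter(c["author"] for c in commits)

summary = contribution_summary(commits)
```

Commit counts are only a coarse signal; instructors would typically combine them with the content of the changes when judging individual contribution.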

Key insights from

by Bama Sriniva... at arxiv.org 04-09-2024

https://arxiv.org/pdf/2404.04521.pdf
Automated Computer Program Evaluation and Projects -- Our Experiences

Deeper Questions

How can the automated assessment tools be further extended to provide personalized feedback and adaptive learning experiences for students?

Automated assessment tools can be extended to provide personalized feedback and adaptive learning experiences for students by incorporating machine learning algorithms. These algorithms can analyze students' performance data, such as their coding patterns, errors made, and areas of strength and weakness. Based on this analysis, the tools can generate personalized feedback tailored to each student's needs. For example, if a student consistently struggles with a particular programming concept, the tool can provide additional practice exercises or resources specifically targeting that concept. Furthermore, the tools can adapt the difficulty level of assignments based on individual student progress, ensuring that each student is appropriately challenged. By leveraging data-driven insights, automated assessment tools can offer a more personalized and adaptive learning experience for students.
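One simple form of the personalized feedback described above can be sketched without any machine learning at all: a rule-based mapping from a student's recurring error categories to targeted practice suggestions. The categories and feedback strings below are illustrative assumptions, not part of either tool.

```python
from collections import Counter

# Illustrative mapping from recurring error categories to targeted
# practice suggestions; a real system would curate these per course.
FEEDBACK_RULES = {
    "IndexError": "Review list indexing and off-by-one errors.",
    "TypeError": "Review Python's type system and implicit conversions.",
    "RecursionError": "Revisit base cases in recursive functions.",
}

def personalized_feedback(error_history):
    """Given the error names from a student's past submissions,
    return feedback targeting the most frequent one."""
    if not error_history:
        return "No recurring issues detected."
    most_common, _ = Counter(error_history).most_common(1)[0]
    return FEEDBACK_RULES.get(most_common, "Keep practicing!")
```

A machine-learning variant would replace the hand-written rules with a model trained on historical submission data, but the interface (error history in, targeted suggestion out) stays the same.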

What are the potential challenges in scaling the use of these tools to larger class sizes or across multiple institutions, and how can they be addressed?

Scaling the use of automated assessment tools to larger class sizes or across multiple institutions may pose several challenges.

One major challenge is ensuring the reliability and accuracy of the automated grading system, especially when dealing with a large volume of submissions. To address this, institutions can implement rigorous testing and validation processes to verify the effectiveness of the automated assessment tools across different scenarios and student populations. Providing adequate training and support for instructors and students on how to use the tools effectively can also mitigate issues that arise during scaling.

Another challenge is maintaining data security and privacy, particularly when handling a large amount of student data. Institutions must implement robust data protection measures, such as encryption and access controls, to safeguard student information and ensure compliance with data privacy regulations.

Finally, ensuring the scalability and performance of the tools under a high volume of concurrent users is essential for a seamless user experience. This can be achieved by optimizing the infrastructure supporting the automated assessment tools and regularly monitoring and tuning system performance.

How can the integration of these tools with other educational technologies, such as learning analytics platforms, enhance the overall teaching and learning experience?

Integrating automated assessment tools with learning analytics platforms can significantly enhance the overall teaching and learning experience by providing valuable insights into student performance and engagement. By combining the data generated by automated assessment tools with learning analytics, instructors can gain a comprehensive understanding of students' learning progress, identify areas where students may be struggling, and tailor their teaching strategies accordingly. For example, learning analytics can track students' progress over time, highlight trends in performance, and provide early-intervention alerts for at-risk students.

Furthermore, the integration can enable instructors to create personalized learning pathways based on students' individual learning styles and preferences, offering targeted interventions, adaptive learning resources, and personalized feedback to improve learning outcomes. It can also support continuous improvement of teaching methodologies by giving instructors actionable insights for refining their instructional practices and course materials. Ultimately, this integration can lead to a more data-driven and student-centered approach to teaching and learning.
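The early-intervention alerts mentioned above can be as simple as a threshold rule over autograded results. The following sketch flags students whose pass rate falls below a cutoff; the record format and the 0.5 threshold are illustrative assumptions.

```python
# Illustrative learning-analytics early-warning rule: flag students
# whose pass rate on autograded exercises falls below a threshold.
RECORDS = {
    "alice": [1, 1, 0, 1, 1],  # 1 = exercise passed, 0 = failed
    "bob":   [1, 0, 0, 0, 1],
}

def at_risk(records, threshold=0.5):
    """Return the students whose pass rate is below the threshold."""
    return [
        student for student, results in records.items()
        if sum(results) / len(results) < threshold
    ]

flagged = at_risk(RECORDS)
```

In a real deployment the pass/fail records would come from the autograder's exports, and the threshold would be tuned per course rather than fixed at 0.5.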