
Students' Perceptions and Use of Generative AI Tools for Programming in Different Computing Courses: A Longitudinal Study


Key Concepts
Students are increasingly using GenAI tools as programming aids, and their acceptance of this practice varies based on their level of study and the type of course, with a greater acceptance in advanced courses where programming is not the primary learning objective.
Summary
  • Bibliographic Information: Keuning, H., Alpizar-Chacon, I., Lykourentzou, I., Beehler, L., Köppe, C., de Jong, I., & Sosnovsky, S. (2024). Students’ Perceptions and Use of Generative AI Tools for Programming Across Different Computing Courses. Proceedings of the ACM Conference on International Computing Education Research - ICER '24. https://doi.org/10.1145/nnnnnnn.nnnnnnn
  • Research Objective: This study investigates how students in various computing courses perceive and utilize generative AI (GenAI) tools for programming-related tasks, examining trends in usage and acceptance over an academic year.
  • Methodology: The researchers conducted three consecutive surveys (fall '23, winter '23, and spring '24) among students enrolled in different computing programs (Bachelor and Master) at a large European research university. The surveys explored students' perceptions of GenAI's impact on learning, job prospects, ethics, and classroom use, with a focus on programming-related tasks across different course types (programming-centric, programming-required, and programming-optional).
  • Key Findings:
    • Students increasingly perceive GenAI as a valuable support tool for programming, with their preference for GenAI over teachers and TAs growing over time.
    • MSc students and those in courses where programming is not the primary focus show a higher acceptance of GenAI use for assignments.
    • Students primarily utilize GenAI for debugging, understanding programming concepts, and generating boilerplate code in courses where programming is not the main learning objective.
    • There is a concerning trend: over time, students increasingly regard submitting an entirely AI-generated assignment as less unethical.
  • Main Conclusions: The study highlights the growing adoption of GenAI tools by students and the need for educators to adapt their teaching practices and policies to address the ethical and pedagogical implications of these tools in computing education. The authors emphasize the importance of aligning GenAI use with course learning goals, promoting responsible use, and fostering critical thinking skills in students.
  • Significance: This research provides valuable insights into the evolving landscape of computing education in the age of GenAI, informing educators about student perceptions, usage patterns, and ethical considerations surrounding these tools.
  • Limitations and Future Research: The study was conducted at a single institution, potentially limiting the generalizability of the findings. Future research should explore the long-term impact of GenAI on student learning outcomes, develop effective pedagogical strategies for integrating GenAI in a way that enhances learning, and investigate cross-cultural differences in GenAI adoption and perception.

Statistics
  • 99.2% of respondents in Period 1 found it unethical to submit an entirely AI-generated assignment; this percentage dropped to 92% in Period 2 and 89.2% in Period 3.
  • 115 out of 269 responses indicated no use of GenAI for coding tasks.
Quotes
  • "For a ‘basics’ course like this you should be able to do everything in this course by yourself."
  • "I think it should be disallowed as it might influence the understanding students develop for the course content."
  • "in real-world software development you can also use generative AI"
  • "For learning skills, people should be able to come up with results on their own. Don’t give responsibility away to AI!"
  • "Forbidding it is futile"
  • "Ideally disallowed but there is no way to control it"
  • "You’re not teaching programming here, so just let people use AI"

Deeper Questions

How can educational institutions effectively incorporate ethical guidelines and policies surrounding GenAI use into their curricula to ensure academic integrity while preparing students for future workplaces where these tools are likely to be commonplace?

Answer: Educational institutions can navigate the evolving landscape of GenAI and uphold academic integrity by taking a multi-pronged approach:
  • Developing Clear Policies: Institutions should establish transparent and comprehensive policies on GenAI use, specifying permissible and unacceptable uses in different academic contexts. These policies should be readily accessible to students and faculty.
  • Integrating Ethics into Curricula: Ethical considerations of GenAI should be woven into the fabric of various courses, particularly within computing disciplines. This can involve:
    • Dedicated Modules: Incorporating dedicated modules or workshops on AI ethics, responsible AI development, and the societal impact of AI.
    • Case Study Discussions: Analyzing real-world case studies highlighting ethical dilemmas related to AI bias, fairness, and accountability.
    • Critical Thinking Exercises: Engaging students in exercises that challenge them to evaluate the ethical implications of GenAI applications.
  • Promoting Transparency and Attribution: Students should be educated on the importance of transparency and proper attribution when using GenAI. This includes:
    • Citing AI Tools: Developing clear guidelines on how to cite AI tools and generated content in academic work.
    • Disclosing AI Assistance: Encouraging students to disclose the use of AI assistance in their assignments, even when permitted.
  • Fostering Digital Literacy: Institutions should equip students with the digital literacy skills needed to critically evaluate and use GenAI tools. This includes:
    • Understanding AI Capabilities and Limitations: Educating students on what these tools can and cannot do, emphasizing that they are not infallible.
    • Developing Critical Evaluation Skills: Teaching students how to assess AI-generated content for accuracy, bias, and originality.
  • Adapting Assessment Methods: Assessment methods should be revisited and potentially redesigned to ensure they effectively measure student learning outcomes in the age of GenAI. This may involve:
    • Authentic Assessments: Emphasizing assessments that require critical thinking, problem-solving, and the application of knowledge in real-world scenarios.
    • Process-Oriented Evaluation: Shifting toward evaluation that values the steps involved in problem-solving, not just the final output.
  • Faculty Training and Support: Providing faculty with professional development opportunities on GenAI, its ethical implications, and strategies for integrating it into their teaching.
By embracing these strategies, educational institutions can foster a culture of responsible GenAI use, ensuring academic integrity while preparing students to navigate the ethical complexities of an AI-driven world.

Could the increasing acceptance of GenAI use for assignments, despite concerns about its impact on learning, be indicative of a shift in students' priorities from deep understanding to task completion, potentially driven by external pressures and assessment methods?

Answer: The increasing acceptance of GenAI for assignments, despite concerns about its impact on learning, presents a complex issue that could indeed reflect a shift in student priorities, influenced by several factors:
  • Pressure for Efficiency and Grades: In a fast-paced academic environment, students often face immense pressure to complete assignments efficiently and achieve high grades. GenAI tools, by offering a shortcut to generate content quickly, can become appealing, especially when assessment methods focus on final outputs rather than the learning process.
  • Shifting Perceptions of Learning: The widespread availability and increasing sophistication of GenAI tools might be subtly shifting students' perceptions of learning. Some students might perceive using these tools as a legitimate form of assistance, blurring the line between collaboration and over-reliance.
  • External Pressures and Competition: Students might feel pressured to keep up with peers who are using GenAI, fearing they will be at a disadvantage if they don't. This competitive environment can further incentivize prioritizing task completion over deep understanding.
  • Assessment Design: If assessment methods do not adequately address the use of GenAI, or fail to emphasize critical thinking and problem-solving skills, students might be more inclined to use these tools purely for task completion.
However, it is crucial to avoid generalizations: not all students who use GenAI are solely focused on task completion. Some use these tools strategically to enhance their learning, such as seeking clarification on complex concepts or exploring alternative solutions. To address this potential shift, educational institutions should:
  • Redesign Assessments: Develop assessments that emphasize critical thinking, problem-solving, and the application of knowledge, making it harder for students to rely solely on GenAI-generated content.
  • Promote a Growth Mindset: Encourage a growth mindset among students, emphasizing the value of effort, learning from mistakes, and seeking genuine understanding over shortcuts.
  • Facilitate Open Dialogue: Create open forums for students and faculty to discuss the ethical implications of GenAI use and explore strategies for responsible integration.
By addressing these underlying factors and fostering a learning environment that values deep understanding, educational institutions can mitigate the potential negative consequences of GenAI on student learning.

What are the broader societal implications of increasing reliance on AI tools, not just in education but in various professions, and how can we foster critical thinking and human agency in an increasingly automated world?

Answer: The increasing reliance on AI tools across various sectors has profound societal implications, demanding careful consideration and proactive measures to ensure a future where human agency and critical thinking remain paramount.
Societal Implications:
  • Job Displacement and Economic Inequality: As AI automates tasks previously performed by humans, concerns about job displacement and widening economic inequality arise. This necessitates reskilling and upskilling initiatives to prepare the workforce for evolving job markets.
  • Algorithmic Bias and Fairness: AI systems are susceptible to inheriting and amplifying biases present in their training data, potentially leading to unfair or discriminatory outcomes in areas like hiring, lending, and criminal justice. Addressing algorithmic bias requires diverse datasets, rigorous testing, and ongoing monitoring.
  • Erosion of Critical Thinking and Problem-Solving: Over-reliance on AI tools without a deep understanding of their limitations can erode critical thinking and problem-solving abilities, underscoring the importance of education systems that prioritize these skills.
  • Dependence and Loss of Agency: Excessive dependence on AI can diminish human agency, leading to a sense of powerlessness and reduced autonomy in decision-making.
Fostering Critical Thinking and Human Agency:
  • Education Reform: Revamping education systems to prioritize critical thinking, problem-solving, creativity, and ethical reasoning. This includes:
    • Project-Based Learning: Incorporating project-based learning that encourages students to tackle real-world problems, fostering collaboration and critical thinking.
    • Digital Literacy: Equipping students with the skills to critically evaluate information, understand the limitations of AI, and use technology responsibly.
  • Ethical Frameworks and Regulations: Developing robust ethical frameworks and regulations for AI development and deployment, ensuring transparency, accountability, and fairness.
  • Human-Centered AI Design: Promoting design principles that prioritize human well-being, autonomy, and control over technology.
  • Upskilling and Reskilling Programs: Investing in comprehensive programs to equip individuals with the skills needed to thrive in an AI-driven job market.
  • Public Awareness and Engagement: Fostering public awareness and engagement on the societal implications of AI, encouraging informed discussion and responsible innovation.
By embracing these measures, we can harness the transformative potential of AI while safeguarding human agency, critical thinking, and a just and equitable society.