A study was conducted to assess OpenAI's ChatGPT tool for engineering coursework at Texas A&M University. The research aimed to understand perceptions, analyze survey data, and evaluate ChatGPT's performance across various engineering courses. Findings suggest that while ChatGPT excelled at basic tasks, it struggled with more complex assignments, raising concerns about academic integrity and educational implications.
The study involved surveys distributed to students, faculty, and staff regarding their perceptions of ChatGPT's impact on academia. Responses showed mixed views on whether ChatGPT facilitates academic dishonesty, and faculty expressed discomfort with students using external resources like ChatGPT for coursework. Network analysis revealed distinct clusters of faculty attitudes toward generative AI (GAI) systems.
Performance assessments showed that while ChatGPT performed adequately in lower-level courses, it struggled in higher-level engineering coursework, often providing general answers that lacked the depth or accuracy required for a passing grade. Future research aims to refine teacher development programs that integrate emerging technologies such as GAI systems.
Overall, the study sheds light on the evolving role of generative AI in education and emphasizes the need for continuous adaptation to technological disruptions in academia.
Key insights from Lance White et al., arxiv.org, 03-05-2024. https://arxiv.org/pdf/2403.01538.pdf