Core Concepts
Middle school girls initially overtrusted generative AI, but deliberate exposure to its limitations shifted their perspectives to a more nuanced understanding, while maintaining optimism about future possibilities.
Abstract
The study examined how middle school girls (N=26) perceived and reasoned about generative AI such as ChatGPT. Initially, participants did not distinguish generative AI from conventional AI systems or other computational tools, and they placed excessive trust in its capabilities. After being shown examples of generative AI's limitations and mistakes, however, they became more discerning, recognizing its potential to generate misinformation and bias.
The participants' mental models were shaped by factors such as the aesthetic legitimacy and perceived transparency of the AI's outputs. While some learners were persuaded by these superficial cues, others came to evaluate the outputs critically after watching their peers check the answers.
Despite this shift toward a more nuanced understanding of generative AI's limitations, the participants remained optimistic about its future applications. They discussed both the benefits and drawbacks of using generative AI in educational settings, raising concerns about academic integrity and equity of access that mirror ongoing debates among adult stakeholders.
The findings suggest a need to explicitly teach children about the unique characteristics of generative AI and to provide experiential learning opportunities that expose them to the technology's limitations. Such experiences can help children develop more accurate mental models and the critical thinking skills needed to navigate the evolving landscape of AI-powered tools.
Stats
There are other cities with more bridges than the study city.
ChatGPT's multiplication output was incorrect.
ChatGPT's list of related papers on machine learning was incorrect.
ChatGPT's instructions on how to add a line break in Google Docs were incorrect.
Ohio and Pennsylvania have never fought a war.
Quotes
"If you use it for medical instructions, and it gets it wrong, then the person is dead."
"But should we trust it with medical questions? If it can't even do a multiplication problem, why should we trust it with medical questions?"
"Isn't Grammarly an AI that helps you write essays? People already use it, so I think [using ChatGPT in school] would be fair."