Everyone is talking about AI models like ChatGPT and DALL-E today, but what place does AI have in education? Can it help students, or does it pose more risks than benefits? As impressive as this technology is, there are some serious pitfalls of AI-based learning that parents, teachers and students should be aware of.
1. The Spread of Misinformation
One of the biggest issues with AI today is misinformation and “hallucinated” information. This is a particularly prominent challenge with chatbots like ChatGPT. These AI models are adept at natural language processing but don’t always provide correct or real information. As a result, they can give answers that sound authoritative while providing flawed or completely made-up facts, references or statements.
Chat AI models like ChatGPT and Bing AI regularly give wrong answers, a phenomenon known as “hallucinating.” The AI is not actually capable of understanding a fact the way a human can; it has no concept of true or false. It is simply trained to produce answers that fit the pattern of a question, format or other context.
This poses a serious risk for students, who may be unable to tell when an AI gives inaccurate information. In fact, ChatGPT has even been known to create entirely fictional “references” for seemingly factual answers, making misinformation even more convincing. This could lead students to base whole essays and research projects on false information.
The risk of misinformation applies to teachers as well as students. They cannot trust AI-based tools to provide correct or reliable information for tasks like grading or study-guide generation. Teachers who are not careful could end up giving a student an incorrect grade or passing along inaccurate material.
“These AI models are adept at natural language processing but don’t always provide correct or real information.”
2. Cheating and Over-Reliance on AI
Now that AI can quickly generate convincing essays and study guides, cheating is a serious concern. Modern AI chatbots’ natural language processing capabilities can allow students to effortlessly cheat, commit plagiarism and rely too heavily on AI. Not only does this threaten educational integrity, but it also endangers the effectiveness of coursework.
Students may lose important critical thinking skills and fail to learn valuable concepts when they can simply type their homework into a chatbot. Since AI can craft such convincing content, it can be very difficult for teachers to tell when a student used an AI to complete an essay or assignment. Those gaps in learning may only become apparent once students take tests or exams.
3. Undercutting the Role of Teachers
There is a popular narrative that AI can replace humans in countless jobs, but teaching is not one of them. Teachers play an invaluable role in education, one a piece of software cannot replicate. AI has the potential to seriously undercut the role of teachers, undermining their instruction, authority and mentorship.
In fact, AI can even compromise the quality of education and the value of customized educational experiences schools can provide. For example, no AI can truly replicate the experience of attending a Montessori school, which focuses on teaching soft skills like empathy and independence through individualized learning techniques.
AI-based learning can boil education down to simply sharing facts or feeding users data based on an algorithm. In reality, education is about personal growth, life skills, socialization and creativity, in addition to gaining knowledge. Only teachers can provide the human guidance students need.
“AI-based learning can boil education down to simply sharing facts or feeding users data based on an algorithm.”
4. Student Data Privacy
AI-based learning can also pose technical and legal challenges — especially when it comes to the handling of students’ data. AI models learn by tracking and digesting all the data they encounter. This can include things like students’ test answers, questions typed into a chatbot, and characteristics like age, gender, race or first language.
The black-box nature of most AI models makes it difficult or even impossible for anyone to see how the AI uses the data it collects. As a result, there are real ethical issues with using AI in education. Parents, teachers and students may want their data kept from an AI out of concern for their privacy. This is especially true with AI platforms that personalize students’ experiences through surveillance, such as tracking their activity or keystrokes.
Even in cases where an AI-based learning platform does ask for users’ consent to use their data, privacy is still at risk. As studies point out, students are often not equipped to understand data privacy consent. Additionally, if a school requires an AI-based platform, students and teachers may have no choice but to consent to give up their personal information.
“AI models learn by tracking and digesting all the data they encounter. This can include things like students’ test answers, questions typed into a chatbot, and characteristics like age, gender, race or first language.”
5. Uneven Education and Data Bias
While AI might be able to “personalize” education, it can also lead to uneven or unequal experiences. Equal educational opportunities rely on having some standard baseline for the content all students learn. Personalized learning through AI can be too unpredictable to ensure a fair experience for all students.
Additionally, data bias threatens racial and gender equality in education. There has been evidence of bias in AI for years. In 2018, for example, Amazon came under fire for using a hiring AI that discriminated against applicants based on gender indicators such as the word “women’s” or the name of a women’s college. AI is not as objective as many believe; it is only as unbiased as the training data it learns from.
As a result, underlying societal biases can easily leak into AI models, even down to the language the AI uses in certain contexts. For instance, an AI might only use male pronouns to describe police officers or government officials. Likewise, it might regurgitate racist or offensive content it learned from poorly filtered training data.
Bias and inequality are not conducive to safe, fair and supportive learning. Until AI can be trusted to remain truly fair, it poses a threat to equal opportunities in education.
How Should AI Be Used in Education?
These five significant pitfalls of AI-based learning require careful consideration as this technology becomes more commonplace. Like any technology, AI should be a tool, not a fix-all solution. Teachers can use AI to automate low-risk tasks and improve the quality of the education they provide, but AI is not a replacement for teachers themselves.
Educators should also take steps to help students understand the uses and risks of AI so they can make informed choices about their data privacy. Ultimately, AI-based learning is best in moderation, not as a stand-in for conventional learning experiences.
Also read: Are AI Tools Ready To Be Trusted and Used as Educational Resources?