The rise of artificial intelligence (AI) tools is posing an unprecedented challenge for education.
Take-home assignments and essays, a staple for decades, are quickly becoming obsolete as students can rely on chatbots to complete them in seconds. The biggest question for educators today is: where is the line between honest learning and cheating?
Casey Cuny, an English teacher at Valencia (California, USA), said cheating is at an alarming level: any work done at home now has to be assumed to involve AI.
To cope, he has students do much of their writing during class, monitors their computer screens, and integrates AI into his lessons as a learning aid rather than banning it outright.
The situation is similar in Oregon. Teacher Kelly Gibson has given up on take-home essays, saying they have become almost an invitation for students to cheat. Instead, Gibson holds in-person discussions and oral assessments to ensure students truly understand the reading.
Students admit that they often turn to AI for support, from summarizing documents to editing grammar. However, the lack of clear boundaries leaves many wondering whether they are actually cheating.
Lily Brown, a student at a liberal arts college on the East Coast, shared: "If I write in my own words and ask AI to edit it, is that cheating? It's hard to pinpoint."
One reason students are confused is that AI policies in schools are not yet consistent. Even within the same institution, one teacher may encourage the use of ChatGPT for analysis while another imposes an outright ban.
This leaves students like Jolie Lahey (Valencia) feeling that the rules are outdated when they are not allowed to take advantage of useful tools.
Recognizing the problem, many universities have begun to develop more detailed guidelines. The University of California, Berkeley requires lecturers to state clearly whether AI is allowed in each course.
Carnegie Mellon University has also reported a rise in academic integrity violations, often because students did not realize how much AI they had used.
A typical case involved students running their essays through DeepL without realizing the tool restructures the language, which led to their work being flagged by detection systems.
Rebekah Fitzsimmons, chair of the AI advisory committee at Carnegie Mellon, acknowledged that enforcing academic discipline is becoming more complicated, because a blanket ban on AI is no longer feasible.
Rather than simply banning the tools, lecturers need to rethink teaching and assessment methods, such as holding in-person discussions or in-class tests, according to Ms. Fitzsimmons.
The AI boom is forcing education to redefine what counts as cheating. Instead of treating AI as a threat, many schools are shifting toward an AI-informed approach to learning, teaching students to use the tools responsibly so that AI becomes an assistant rather than a cheat sheet.