A new study from the US nonprofit Center for Countering Digital Hate (CCDH) is sounding the alarm about the potential dangers of adolescents interacting with AI chatbots such as ChatGPT.
Although designed as a support tool, the chatbot can unintentionally become a source of encouragement for self-harm, drug use, or eating disorders, especially among young users.
Through hundreds of simulated conversations between ChatGPT and researchers posing as vulnerable adolescents, CCDH found that the AI would issue initial warnings about risks, but then go on to provide alarmingly detailed plans.
From writing a suicide note and planning a drug-fueled party to giving instructions for a dangerous diet, more than half of the 1,200 responses were assessed as "harmful".
Imran Ahmed, CEO of CCDH, said the suicide notes ChatGPT wrote for a researcher impersonating a 13-year-old girl made him cry. Although OpenAI (the developer of ChatGPT) said it is working to improve how the chatbot handles sensitive situations and to encourage users to seek professional help, the company did not directly respond to the research findings.
A worrying point is that chatbots tend to be overly agreeable, readily complying with users' requests when given pretexts such as "for a presentation" or "to help a friend".
With its human-like interface and conversational style, ChatGPT is regarded by many adolescents as a close friend, which makes its influence stronger and harder to control than that of a regular search engine.
According to a survey by Common Sense Media (USA), more than 70% of adolescents in the US use AI chatbots as companions, and half of them use them regularly.
OpenAI CEO Sam Altman has also acknowledged that young people's "excessive emotional dependence" on chatbots is increasing.
Although ChatGPT is classified as "moderate risk" compared with some chatbots featuring romantic elements, CCDH's research shows that a savvy teenager can easily bypass its safeguards. Notably, ChatGPT does not require age verification: users can register simply by self-reporting a date of birth.
As AI is becoming more popular, experts call on technology companies to take more drastic action to protect young users.
"A real friend is someone who knows how to say no when needed and doesn't simply go along with everything," Ahmed emphasized.