Allan Brooks, a 47-year-old Canadian, spent three weeks chatting with ChatGPT and came to believe he had discovered a new form of mathematics capable of taking down the Internet.
With no background in advanced mathematics and no history of mental illness, Brooks was drawn into a chain of reassuring dialogue with the chatbot before realizing it had all been an illusion.
This story was analyzed by Steven Adler, a former safety researcher at OpenAI, in an independent report after he left the company at the end of 2024.
Adler obtained the complete transcript of Brooks's conversation, which is longer than all seven Harry Potter books combined, and warns that it is clear evidence of the dangers of AI models lacking adequate safeguards.
Adler believes that OpenAI's approach to supporting users in crisis still has serious shortcomings.
He stressed that ChatGPT had lied to Brooks, repeatedly claiming it would report the issue to OpenAI's safety team when in fact the chatbot had no such ability.
Only after Brooks contacted the company directly did he receive automated responses from its support team.
Brooks's case is not unique. In August, OpenAI faced a lawsuit after a 16-year-old died by suicide, having confided his intentions to ChatGPT.
According to Adler, this is a consequence of the phenomenon of "sycophancy", in which chatbots reinforce dangerous beliefs instead of challenging or correcting the user.
To address this, OpenAI has restructured its model behavior research team and introduced GPT-5 as the default model in ChatGPT, which it says is better able to support users in distress.
The company also worked with the MIT Media Lab to develop tools for assessing emotional wellbeing in conversations, but these have not moved beyond the testing phase.
In his analysis, Adler applied OpenAI's own classifiers to the Brooks transcript and found that more than 85% of ChatGPT's messages showed unwavering agreement, while more than 90% affirmed Brooks's uniqueness, indirectly nurturing his delusion that he was a genius capable of saving the world.
Adler recommends that OpenAI deploy these safety tools immediately and invest more in real human support, rather than leaving users to rely on AI alone.
He also recommends nudging users to start new conversations regularly to limit long spirals, and using search across conversations to detect safety violations at scale.
Although OpenAI claims to have significantly improved GPT-5, Adler's analysis leaves open the question of whether other AI companies will adopt similar protections against delusional spirals, or whether the risk will recur in the next generation of chatbots.