A new study by Anthropic (a US artificial intelligence company), conducted in collaboration with the University of Toronto, is raising deep concerns about how users interact with AI chatbots.
According to the report, titled "Who is in power? Models of deprivation of power in the actual use of LLM", users are increasingly inclined to trust and follow AI advice without question, even ignoring their own intuition and personal judgment.
Based on an analysis of more than 1.5 million anonymized conversations with the chatbot Claude, the researchers found that a small but significant proportion of interactions showed signs of "weakening user autonomy".
About 1 in 300 conversations showed signs of reality distortion, and roughly 1 in 6,000 involved action distortion.
Although these rates are relatively low, Anthropic emphasizes that at a scale of millions of users, the real-world impact can be very large.
The research identifies three main forms of negative impact from AI chatbots: reality distortion (confirming false beliefs or conspiracy theories), trust distortion (convincing users that they are being manipulated in their relationships), and action distortion (encouraging users to take actions inconsistent with their personal values).
Anthropic also identified several factors that make users more vulnerable.
First, when users treat Claude as an absolutely reliable source of information.
Second, when they form a close personal relationship with the chatbot.
Third, when they are in a vulnerable state because of a crisis or major life event.
These factors create conditions for AI to play an increasingly large role in shaping human thoughts and decisions.
Worryingly, the share of conversations at risk of "weakening autonomy" has been rising over time, particularly from late 2024 through late 2025.
As exposure to chatbots increases, users become more comfortable sharing sensitive issues and seeking personal advice, which in turn makes them easier to influence.
These findings come amid growing public concern about a phenomenon referred to as "AI-induced mental disorder", a non-clinical term used to describe users who develop false beliefs, delusions, or extreme thinking after prolonged conversations with chatbots.
The AI industry is facing closer scrutiny from policymakers, educators, and child protection organizations.
Some reports indicate that a small percentage of users show signs of serious mental health problems after prolonged interaction with chatbots, fueling demands for safety measures and content controls.
However, Anthropic also acknowledges the limitations of the study.
The analysis measures only "potential harm" rather than confirmed impact, and it relies on automated assessments of subjective phenomena.
The company also emphasized that users are not entirely passive: they sometimes proactively delegate judgment to the AI, creating a feedback loop that can weaken personal autonomy.