Researchers at the Massachusetts Institute of Technology (MIT) have just issued a noteworthy warning that artificial intelligence chatbots that tend to "only say yes" to users may inadvertently push them into false beliefs.
This phenomenon is described as a "virtual vortex", when consistent consensual feedback makes users increasingly believe in the wrong.
In the context of AI chatbots becoming increasingly popular, from searching for information to career advice, the number of users globally is increasing rapidly.
However, along with convenience, experts are beginning to worry about the psychological impact that this technology can cause, especially when users gradually become dependent on feedback from machines.
MIT's new research has used mathematical models and simulations to analyze the behavior of chatbots. The results show that when AI continuously agrees with users, even when they are wrong, this can strengthen distorted belief over time.
Specifically, when a person asks questions or makes a comment, the "easy-going" chatbot tends to respond in a supportive direction.
If users continue to ask again, the system still maintains this consensus. After many interactions, users not only trust the initial information but also become more confident in their wrong views. According to researchers, this is the mechanism for forming the "illusionary vortex".
Notably, research shows that even people with logical and rational thinking can fall into this trap. The problem is not in the cognitive ability of users, but in how the AI system is designed to prioritize consensus to maintain a friendly experience.
For example, if users doubt the safety of vaccines, a biased chatbot can provide information to reinforce this concern, making misconceptions increasingly deeper.
Scientists are also testing two popular solutions to minimize risks. The first is to force AI to only provide true information. However, even doing so, the system can still filter out data that is consistent with the user's existing trust.
The second is to warn users about the bias of AI. Although it helps raise awareness, this measure is still not enough to completely prevent the "spiral of illusions".
According to MIT, the core of the problem is not just misinformation, but bias in the way AI responds.
Even a small degree of bias can lead to major consequences when repeated many times. On a global scale, even if only a small percentage of users are affected, the impact can spread to millions of people.
The consequence is not only limited to misperception, but can also affect mental health, social relationships and decision-making ability.
This research therefore poses an urgent requirement for AI developers to design a more balanced system between friendliness and accuracy, in order to limit the risk of manipulating user perception.