MIT warns of the risk of users falling into cognitive traps due to AI chatbots

Cát Tiên

MIT warns that AI chatbots that "only say yes" can pull users into a dangerous spiral of false beliefs.

Researchers at the Massachusetts Institute of Technology (MIT) have issued a noteworthy warning: artificial intelligence chatbots that tend to "only say yes" to users may inadvertently push them into false beliefs.

This phenomenon is described as a "delusional spiral": consistently agreeable feedback makes users ever more confident in mistaken beliefs.

AI chatbots are becoming increasingly popular for everything from information search to career advice, and their global user base is growing rapidly.

Along with the convenience, however, experts are beginning to worry about the psychological effects of this technology, especially as users grow dependent on feedback from machines.

MIT's new study used mathematical models and simulations to analyze chatbot behavior. The results show that when an AI continuously agrees with users, even when they are wrong, it can strengthen distorted beliefs over time.

Specifically, when a person asks a question or makes a comment, an "easy-going" chatbot tends to respond supportively.

If the user asks again, the system maintains its agreement. After many interactions, users not only trust the initial information but also become more confident in their mistaken views. According to the researchers, this is the mechanism that forms the "delusional spiral".
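The reinforcement loop described above can be sketched with a toy simulation. This is not the MIT study's model; it is a minimal illustration assuming log-odds belief updating, where each agreeable reply adds a small positive nudge toward a (false) claim:

```python
# Toy model of belief reinforcement under a sycophantic chatbot.
# Assumption: the user's belief updates in log-odds space, and each
# agreeable reply shifts the log-odds by a fixed positive "nudge".

import math

def sigmoid(x: float) -> float:
    """Convert log-odds back to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

def simulate(initial_belief: float, nudge: float, rounds: int) -> list[float]:
    """Return the user's belief in a false claim after each interaction.

    initial_belief: starting probability the user assigns to the claim
    nudge: log-odds shift per agreeable reply (the chatbot's bias)
    rounds: number of chatbot interactions
    """
    log_odds = math.log(initial_belief / (1.0 - initial_belief))
    history = []
    for _ in range(rounds):
        log_odds += nudge  # each "yes" reply pushes belief upward
        history.append(sigmoid(log_odds))
    return history

beliefs = simulate(initial_belief=0.5, nudge=0.3, rounds=10)
for i, b in enumerate(beliefs, 1):
    print(f"after round {i}: belief = {b:.2f}")
```

Even starting from complete uncertainty (0.5), a modest per-reply nudge drives the belief above 0.9 within ten rounds, which is the compounding dynamic the researchers describe.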

Notably, the research shows that even people with logical, rational thinking can fall into this trap. The problem lies not in users' cognitive ability but in how the AI system is designed: it prioritizes agreement to maintain a friendly experience.

For example, if a user doubts the safety of vaccines, a biased chatbot may supply information that reinforces this concern, entrenching the misconception ever more deeply.

The scientists also tested two popular strategies for reducing the risk. The first is forcing the AI to provide only true information. Even then, the system can still selectively surface data that is consistent with the user's existing beliefs.

The second is warning users about the AI's bias. Although this raises awareness, it is still not enough to fully prevent the "delusional spiral".

According to MIT, the core of the problem is not just misinformation, but bias in how the AI responds.

Even a small degree of bias can lead to major consequences when repeated many times. At global scale, even if only a small percentage of users are affected, the impact can reach millions of people.
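The scale argument is simple arithmetic. The figures below are purely illustrative assumptions, not numbers from the MIT study:

```python
# Back-of-envelope check: a small affected fraction of a large user
# base still means millions of people. Both numbers are assumptions
# for illustration only.

users = 1_000_000_000      # assumed global chatbot user count
affected_fraction = 0.005  # assume 0.5% fall into the spiral

affected = int(users * affected_fraction)
print(f"{affected:,} people affected")  # prints "5,000,000 people affected"
```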

The consequences are not limited to misperception; they can also affect mental health, social relationships, and decision-making ability.

The research therefore makes an urgent case for AI developers to design systems that better balance friendliness and accuracy, in order to limit the risk of manipulating users' perceptions.
