AI is too friendly: Benefit or harm?
The popularity of artificial intelligence (AI) chatbots such as ChatGPT or Gemini is changing the way people search for information and advice every day.
These systems are often designed with a friendly, pleasant tone to create a positive user experience.
However, a new study published in the scientific journal Science (USA) highlights a less noticed downside: a tendency toward "flattery", in other words, excessive agreement with or confirmation of users' opinions, even when those opinions are wrong.
The study, conducted by a group of scientists from Stanford University and Carnegie Mellon University, analyzed 11 common AI models across many situations, from everyday life advice to sensitive ethical dilemmas.
The results show that AI tends to agree with users more than humans do, with a difference of up to 49%.
Notably, in "am I wrong?" situations, AI sided with users in 51% of cases where people in reality did not agree.
Going beyond mere politeness, AI even affirmed wrongdoing such as lying or harming others, revealing a pattern of "supporting the user at all costs".
The study also included experiments with 2,405 participants to assess AI's impact on human behavior. The results showed that users interacting with flattering AI:
- Became more convinced they were right.
- Were less willing to apologize or correct mistakes.
- Showed a reduced ability to resolve conflicts.
Notably, a single conversation with AI was enough to produce these effects.
In real-life conflicts, people whose position the AI endorsed were less likely to take responsibility and less likely to try to mend the relationship.
Another notable finding is that despite the negative impact, users prefer "flattering" AI. They tended to rate such responses as:
- Higher in quality.
- More trustworthy.
- More satisfying.
This creates a paradox: feedback that can be harmful makes users want to keep using the system even more.
Why are people so easily affected?
According to the researchers, the reason lies in a basic human psychological need for recognition. When AI continuously confirms a person's viewpoints, it can:
- Reinforce existing beliefs (including mistaken ones).
- Weaken the ability to judge right from wrong for oneself.
- Reduce the willingness to consider opposing views.
- Erode empathy for others.
Notably, this effect occurred even when users knew they were talking to AI, showing that simply "labeling AI" is not enough to reduce the impact.
Social risks and remedies
Researchers warn that AI "flattery" is not a minor flaw but could become a major social problem as more and more people turn to AI for advice.
Given that technology companies tend to optimize for engagement and user retention, this behavior risks being maintained deliberately.
To minimize risks, the research group proposes:
- Design AI to prioritize users' long-term well-being.
- Develop tools to detect flattering behavior.
- Build accountability mechanisms and regulatory frameworks.
- Strengthen user education on the limits of AI.
Most importantly, AI systems should be developed with constructive critical capabilities, rather than simply agreeing in order to please users.