Reasons why AI chatbots easily change answers when questioned by users

Cát Tiên |

AI chatbots are often confident when answering, but just one suspicious question can make them change their stance significantly, confusing users.

Artificial intelligence chatbots such as ChatGPT, Claude or Gemini are increasingly popular in work and daily life thanks to their fluent and confident answering ability.

However, many users notice a strange phenomenon that just by asking questions again in a suspicious way like "Are you sure?", the chatbot often reconsiders and gives new answers, sometimes contradictory to itself before.

According to experts, this is not a random error but a consequence of the training method. In a blog post, Dr. Randal S. Olson, co-founder and Chief Technology Officer of Goodeye Labs, called this phenomenon "sycophancy", one of the most obvious failures of modern AI.

He argued that the system tends to yield to users instead of defending the initial conclusion, even if it has accurate data.

The problem stems from enhanced learning from human feedback (RLHF), which is widely used to help AI communicate more naturally and friendly.

However, Anthropic's research shows that models trained in this way tend to give more "pleasant" answers than absolute honesty.

In other words, a system that agrees with users will be rated higher, creating a loop that makes AI increasingly easy to control.

An independent study examining advanced models such as OpenAI's GPT-4o, Claude Sonnet's, and Gemini 1.5 Pro showed that they changed answers in nearly 60% of cases when challenged by users.

Specifically, the reversal rates are about 58%, 56% and 61% respectively. This shows that this is a common behavior, not an exception.

The problem became apparent in 2024 when the GPT-4o update made the chatbot too flattering, to the point of being difficult to use in some situations.

CEO Sam Altman has admitted the mistake and said the company has fixed it, but experts believe the root causes still exist.

Studies also show that the longer the conversation, the more likely chatbots are to reflect user opinions. Users using the first person like "I believe that..." also increase the probability of AI agreeing.

The reason is that the system tries to maintain harmony in conversation, instead of playing an independent critical role.

Some solutions are being tested, such as AI training based on the Constitutional AI principle, direct preference optimization, or inference model requirements from a third-person perspective. These methods can reduce flattery by more than 60% in some cases.

According to Mr. Olson, users can also proactively limit errors by asking chatbots to check assumptions, specify when data is missing, or provide additional professional context.

When AI understands the goals and criteria for making decisions of users, it has a basis for more solid reasoning instead of just compromising.

Cát Tiên
RELATED NEWS

AI chatbots flood schools, experts call for caution

|

Large technology corporations are promoting the introduction of AI into schools, opening up opportunities for education innovation but also raising many concerns.

Italy asks Meta to allow third-party AI chatbot to run on WhatsApp

|

Italy's Competition Commission has asked Meta to temporarily suspend its policy of limiting third-party AI chatbot on WhatsApp, due to concerns about abusing market dominance.

The word me and the power of personalizing AI chatbots

|

AI Chatbot's use of the largeword "I" is said to make conversation more natural, but also blurs the line between computing tools and humans.

Hanoi gateway traffic clear on the 28th of Tet

|

Hanoi - On the 28th of Tet, the density of vehicles on the capital's roads decreased significantly compared to previous days, and people moved conveniently.

Cuba still receives aid despite the US trying every way to prevent it

|

Despite tighter US sanctions, Cuba continues to receive humanitarian aid shipments from many countries.

Official list of 864 candidates for National Assembly Deputies of the 16th term announced

|

The National Election Council has just announced 864 people in the list of candidates for the 16th National Assembly in 182 constituencies nationwide.

Ministry of Education announces university admission regulations for 2026

|

The Ministry of Education and Training has just announced the university admission regulations for 2026 with many new points related to the number of aspirations, academic records, IELTS scores,...

Restaurant suspended in Nghe An after many people hospitalized

|

Nghe An - An Nhien Restaurant in Tan Phu commune was suspended from operation after many customers were hospitalized with symptoms suspected of food poisoning.

AI chatbots flood schools, experts call for caution

Cát Tiên |

Large technology corporations are promoting the introduction of AI into schools, opening up opportunities for education innovation but also raising many concerns.

Italy asks Meta to allow third-party AI chatbot to run on WhatsApp

Cát Tiên |

Italy's Competition Commission has asked Meta to temporarily suspend its policy of limiting third-party AI chatbot on WhatsApp, due to concerns about abusing market dominance.

The word me and the power of personalizing AI chatbots

Cát Tiên |

AI Chatbot's use of the largeword "I" is said to make conversation more natural, but also blurs the line between computing tools and humans.