New warning about the reliability of AI chatbots

Cát Tiên |

The new warning shows that the more empathetic an AI chatbot is, the more likely it is to be inaccurate, raising concerns about the reliability of information.

A new study from Oxford University (UK) shows that artificial intelligence (AI) models that are fine-tuned to be "warm" and "friendly" to users may have to pay for accuracy.

These models are capable of generating incorrect answers up to 60% higher than the original version.

According to a research group at the Internet Institute of Oxford University, large language models (LLM), when trained to express empathy and friendliness, tend to "soften" unacceptable truths.

Instead of providing absolutely accurate information, they may prioritize maintaining positive emotions for users, even confirming incorrect beliefs, especially when users are in a sad or sensitive state.

In a study published in the journal Nature, scientists tested many AI models, including open source systems such as Llama, Mistral, Qwen and an exclusive model called GPT-4o. These models are refined to use a friendly language, expressing interest and empathy to users.

Then, the research team compared performance between the refined version and the original version through a series of questions related to misinformation, conspiracy theory and medical knowledge.

The results showed that these models not only had higher error rates but were also easily influenced by user emotions.

When users express sadness, the error rate increases sharply. Conversely, when users maintain a neutral or respectful attitude, the error rate decreases.

Another test also showed that user-friendly models tend to please users. When faced with questions containing false information, such as misjudgments about a country's capital, these models are prone to giving consensus answers instead of accurate rebuttals. This raises concerns about the risk of spreading false information in reality.

Researchers say that the core problem lies in the refinement process. When the goal is to make AI more useful and comfortable, the system may inadvertently learn to prioritize user satisfaction over honesty.

This is considered a large gap in the current AI industry, especially when these systems are increasingly being used in sensitive contexts such as healthcare, education or personal counseling.

However, the research team also acknowledged some limitations. The experiment is mainly based on small-scale or older generation models, not fully representing the most advanced systems today. Therefore, the level of trade-off between friendliness and accuracy may vary in practice.

However, the research results still issue an important warning that, as AI becomes increasingly "human-like" in communication, ensuring information accuracy and safety needs to be put first.

Cát Tiên
RELATED NEWS

MIT warns of the risk of users falling into cognitive traps due to AI chatbots

|

MIT warns that AI chatbots that "only say yes" can cause users to fall into a dangerous vortex of mistrust.

New research reveals the limitations of AI chatbots in the field of mental health

|

New research shows that talking to real people, including strangers, helps reduce loneliness more effectively than AI chatbots, although technology still provides short-term support.

Benefits and risks of asking AI chatbots about health issues

|

AI is becoming increasingly popular, causing many people to ask chatbots about health. However, users need to understand both benefits and risks before believing in medical advice.

President Trump launches ship escort campaign across Hormuz from today May 4

|

President Trump's announcement was made in the context of a cargo ship near the Strait of Hormuz being attacked on the morning of May 3.

Murder in Quang Tri leaves 1 dead, 1 injured

|

Quang Tri - The murder case occurred in Dong Hoi ward, causing 2 casualties, related to the victim's son.

Live football Man United vs Liverpool in round 35 of the Premier League

|

Live football Man United - Liverpool in round 35 of the Premier League season 2025-2026, taking place at 9:30 PM on May 3rd (Vietnam time).

MIT warns of the risk of users falling into cognitive traps due to AI chatbots

Cát Tiên |

MIT warns that AI chatbots that "only say yes" can cause users to fall into a dangerous vortex of mistrust.

New research reveals the limitations of AI chatbots in the field of mental health

Cát Tiên |

New research shows that talking to real people, including strangers, helps reduce loneliness more effectively than AI chatbots, although technology still provides short-term support.

Benefits and risks of asking AI chatbots about health issues

Cát Tiên |

AI is becoming increasingly popular, causing many people to ask chatbots about health. However, users need to understand both benefits and risks before believing in medical advice.