A new study published by the European Broadcasting Union (EBU) in collaboration with the BBC finds that leading AI assistants routinely misrepresent news, raising concerns about public trust as more and more people turn to AI in place of traditional search engines.
The international study analyzed 3,000 responses from popular AI assistants, including ChatGPT, Copilot, Gemini, and Perplexity, across 14 languages.
Experts evaluated the answers for accuracy, sourcing, and the ability to distinguish fact from opinion.
The results showed that 45% of responses contained at least one significant issue, while 81% contained some form of error.
One of the most prominent problems was sourcing.
About a third of AI responses had serious sourcing errors, such as missing, misleading, or incorrect attribution.
Google's Gemini had sourcing problems in up to 72% of its responses, compared with under 25% for the other assistants.
On accuracy, 20% of responses contained outdated or incorrect information: Gemini, for example, misreported changes to the law on disposable e-cigarettes, and ChatGPT continued to call Pope Francis the current Pope several months after his death.
The AI companies have responded to the findings. Google said Gemini welcomes feedback to improve the platform.
OpenAI and Microsoft acknowledged that hallucinations, in which an AI generates inaccurate information, remain a challenge they are working to address.
Perplexity said its Deep Research mode achieves 93.9% factual accuracy.
The study involved 22 public media organizations from 18 countries, including France, Germany, Spain, Ukraine, the UK and the US.
EBU media director Jean Philip De Tender emphasized: "When people don't know what to believe, they end up believing nothing, and that can hinder democratic participation."
According to the Reuters Institute's 2025 Digital News Report, about 7% of online news consumers, and 15% of those under 25, now use AI assistants to get their news.
Given this reality, the study calls on AI companies to take responsibility and improve the quality of their news-related responses, in order to protect public trust and ensure reliable information in the digital age.