Concerns about "hallucinations" produced by artificial intelligence (AI) are growing in academia, as fabricated citations begin to appear even at the world's most prestigious scientific conferences.
According to a new report by the US AI-detection startup GPTZero, 51 research papers accepted at the Neural Information Processing Systems conference (NeurIPS) were found to contain fabricated citations generated by AI. In total, more than 100 nonexistent citations were found across these papers.
NeurIPS is one of the largest and most influential annual conferences in the field of artificial intelligence and machine learning (AI/ML).
GPTZero said it scanned 4,841 papers accepted at NeurIPS 2025, held last December in San Diego, California (USA), to detect both hallucinated citations and AI-generated content.
Although 51 out of 4,841 papers is a small fraction, under NeurIPS' policy on the use of large language models (LLMs), even a single fabricated citation can be grounds for rejecting or retracting a paper.
"These papers were accepted, presented in person, and officially published. Given that the NeurIPS 2025 acceptance rate was just 24.52%, each of these papers beat out more than 15,000 other submissions, yet still contains one or more hallucinations," GPTZero stated.
This finding is particularly worrying because NeurIPS gathers the world's leading experts in artificial intelligence. That rigorously reviewed work can still contain fabricated citations shows that even AI researchers struggle to control the accuracy of the tools they use.
NeurIPS is not an isolated case. In December last year, GPTZero also discovered more than 50 hallucinated citations in research papers under review for the 2026 ICLR conference.
In addition, preprint servers such as arXiv are seeing a growing number of low-quality papers generated or heavily assisted by AI.
An analysis cited in The Atlantic (USA) found that scientists using tools based on large language models publish about 33% more papers than those who do not use such tools.
To detect fabricated citations, GPTZero uses its own AI tool, "Hallucination Check," which flags cited sources that cannot be found online.
Flagged citations are then checked manually by humans; the company calls the confirmed fakes "hallucinated citations": references that look plausible but do not actually exist.
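The two-stage pipeline described above (automated flagging followed by manual review) can be sketched roughly as follows. This is a minimal illustrative mock-up, not GPTZero's actual tool: the function name, the citation records, and the toy "index" standing in for an online bibliographic database are all assumptions for the sake of the example.

```python
# Sketch of a citation-verification pass: flag any cited title that cannot
# be matched against a reference index, so a human can review it.
# All names here are hypothetical; the real "Hallucination Check" tool
# and its data sources are proprietary.

def flag_unverifiable(citations, known_titles):
    """Return the citations whose titles are absent from the index."""
    index = {title.lower() for title in known_titles}
    return [c for c in citations if c["title"].lower() not in index]

# Toy index standing in for an online bibliographic database.
known = [
    "Attention Is All You Need",
    "Deep Residual Learning for Image Recognition",
]

# One real citation and one plausible-looking fabrication.
cited = [
    {"title": "Attention Is All You Need", "authors": "Vaswani et al."},
    {"title": "A Comprehensive Survey of Nonexistent LLMs", "authors": "Doe et al."},
]

flagged = flag_unverifiable(cited, known)
# Flagged entries would then go to a human reviewer for confirmation.
```

In practice the lookup would query real bibliographic services rather than an in-memory list, and the automated pass only narrows the candidates; the final judgment that a citation is fabricated stays with a human reviewer, as the report describes.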
GPTZero said it has made the tool available to authors, editors, and conference chairs to catch citation errors early, helping make academic evaluation faster and more accurate in the era of generative AI.