Yann LeCun, a computer scientist and dual citizen of France and the United States, is considered one of the "godfathers of artificial intelligence". He has been blunt in saying that the intellectual abilities of large language models (LLMs) are overrated.
Speaking at a recent conversation chaired by Janna Levin, scientific director of Pioneer Works, with the participation of Adam Brown, who heads a research team at Google DeepMind, LeCun said that LLMs can extract and reproduce meaning from language, but only superficially.
According to him, unlike human intelligence, the intelligence of these models is not grounded in physical experience or in reasoning about the real world.
LeCun pointed out that a typical LLM is trained on about 30 trillion words, almost all of them publicly available on the internet.
Reading all of that text would take a person more than 500,000 years. Yet, he emphasized, a four-year-old child has already taken in a comparable amount of visual information and practical experience in the first years of life, and far richer and more complex at that.
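As a rough sanity check on that figure, here is a back-of-envelope calculation; the reading speed and daily reading hours are illustrative assumptions, not numbers from the talk:

```python
# Back-of-envelope check of the "more than 500,000 years" claim.
# Assumptions (not from the talk): a reader sustains ~250 words/minute
# for 8 hours a day, every day, with no breaks or rereading.
corpus_words = 30e12          # ~30 trillion words of training text
words_per_minute = 250        # assumed sustained reading speed
hours_per_day = 8             # assumed daily reading time

minutes_total = corpus_words / words_per_minute
years = minutes_total / 60 / hours_per_day / 365

print(f"{years:,.0f} years")  # ~685,000 years, same order as LeCun's figure
```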
To LeCun, this shows that living in and interacting with the world yields far deeper knowledge than merely reading text.
At a time when AI and automation are being widely deployed, LeCun warned that the world is being misled by LLMs' impressive facility with language.
He recalled that since the 1950s, generation after generation of AI scientists, from Marvin Minsky to Allen Newell, Herbert Simon, and Frank Rosenblatt, have believed that machines would reach human-level intelligence within just a decade.
All of them were wrong, and the current LLM generation is no different, LeCun said, adding that he has watched three such hype cycles over his career.
This view runs counter to the prevailing sentiment in Silicon Valley, where LLMs are seen as the shortest path to artificial general intelligence (AGI).
LeCun argued that the relentless scaling of data and computing power merely repeats a cycle of expectation and disappointment that has played out for more than 70 years.
To illustrate LLMs' limitations, LeCun offered a very everyday example: clearing the dinner table and loading the dishes into the dishwasher. Even a model that can pass the bar exam or solve complex problems, he said, still lacks the physical intuition that a 10-year-old child, or even an animal, possesses. "We don't have a robot that understands the physical world as well as a cat," he stressed.
On the technical side, LeCun explained that LLMs work by predicting tokens, essentially the next word in a sequence. This approach suits language, but it breaks down when applied to the real world, which is continuous, high-dimensional, and contains countless possibilities. "I have tried for 20 years and it has not been effective," he admitted.
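For readers unfamiliar with the mechanism, below is a minimal sketch of next-token prediction; the toy vocabulary, the stand-in scoring function, and the greedy decoding loop are illustrative assumptions, not any production model's actual internals:

```python
import math
import random

# Toy illustration of next-token prediction: a model assigns a score
# (logit) to every token in its vocabulary, converts scores into
# probabilities with softmax, and emits one token at a time.
VOCAB = ["the", "cat", "sat", "on", "mat", "."]

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def toy_model(context):
    """Stand-in for a trained network: returns one logit per vocab token.
    A real LLM computes these from billions of learned parameters."""
    random.seed(" ".join(context))  # deterministic per context, purely for demo
    return [random.uniform(-1, 1) for _ in VOCAB]

def generate(prompt, n_tokens=5):
    context = prompt.split()
    for _ in range(n_tokens):
        probs = softmax(toy_model(context))
        # Greedy decoding: pick the single most probable next token.
        context.append(VOCAB[probs.index(max(probs))])
    return " ".join(context)

print(generate("the cat"))
```

LeCun's point is that choosing one token from a finite list is tractable, whereas the space of possible future states of the physical world cannot be enumerated this way.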
Though skeptical of LLMs, LeCun is not pessimistic about AI in general. He champions alternative approaches such as world models and the JEPA (Joint Embedding Predictive Architecture), which let AI learn abstract representations of reality and reason about the consequences of actions.
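To give a flavor of the idea, here is a heavily simplified sketch of a JEPA-style training step; the encoder shapes, the frozen target encoder, and all tensor dimensions are assumptions chosen for illustration, not the published architecture's specifics:

```python
import torch
import torch.nn as nn

# JEPA-style idea in miniature: instead of predicting raw pixels or tokens,
# predict the *embedding* of a hidden part of the input from the embedding
# of its visible context. All dimensions here are arbitrary.
DIM = 64

context_encoder = nn.Sequential(nn.Linear(128, DIM), nn.ReLU(), nn.Linear(DIM, DIM))
target_encoder = nn.Sequential(nn.Linear(128, DIM), nn.ReLU(), nn.Linear(DIM, DIM))
predictor = nn.Linear(DIM, DIM)  # maps context embedding -> predicted target embedding

# The target encoder is typically not trained by gradients (e.g., an EMA copy).
for p in target_encoder.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(
    list(context_encoder.parameters()) + list(predictor.parameters()), lr=1e-3
)

# Fake batch: 'context' and 'target' stand in for two views of one scene,
# e.g., visible patches of an image and a masked region.
context = torch.randn(32, 128)
target = torch.randn(32, 128)

pred = predictor(context_encoder(context))   # predicted latent of the target
with torch.no_grad():
    tgt = target_encoder(target)             # actual latent of the target

loss = nn.functional.mse_loss(pred, tgt)     # loss lives in latent space
loss.backward()
optimizer.step()
print(f"latent prediction loss: {loss.item():.4f}")
```

Because the loss is computed in latent space rather than over raw observations, the model can discard unpredictable detail and keep only abstract structure, which is what LeCun argues reasoning about actions requires.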
What worries him most right now is that LLMs are draining resources, talent, and funding, leaving other research directions neglected.
In LeCun's view, AI should be developed as a tool that supports and amplifies human intelligence, focused on practical applications already saving lives, such as automatic emergency braking and medical image analysis, rather than on chasing flashy chatbots.