In an effort to close the gap between artificial intelligence and humans' natural ability to learn, Google has just announced a new AI model called HOPE (Hierarchically Optimized Progressive Encoder).
This is seen as a major step forward on the journey toward artificial general intelligence (AGI), a type of AI that can learn, adapt, and improve itself over time.
According to Google's blog announcement on November 8 (local time), HOPE is built on the Nested Learning concept, a new approach invented by Google's research team.
Unlike traditional linear training, this approach treats an AI model as a system of multi-level learning problems that are linked and optimized simultaneously, helping the AI handle long-term context and learn continuously without forgetting old knowledge.
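The description of "multi-level learning problems, linked and optimized simultaneously" can be pictured as a learner with several update timescales. The sketch below is a minimal, hypothetical Python illustration of that general idea, not Google's HOPE implementation: a fast level adapts on every step, while a slow level periodically consolidates what the fast level has learned so that earlier knowledge accumulates. The toy task, the two-level split, the learning rates, and the consolidation rule are all assumptions made for illustration.

```python
# Illustrative sketch only: a toy multi-timescale learner in plain NumPy.
# This is NOT Google's HOPE; the structure and hyperparameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy streaming regression task: y = x . w_true + noise
w_true = np.array([2.0, -1.0, 0.5])

def sample_batch(n=8):
    x = rng.normal(size=(n, 3))
    y = x @ w_true + 0.01 * rng.normal(size=n)
    return x, y

# Two "nested" levels that update at different frequencies:
#   - fast weights: adjusted on every step (short-term adaptation)
#   - slow weights: consolidated every `consolidate_every` steps (long-term memory)
fast = np.zeros(3)
slow = np.zeros(3)
fast_lr, slow_lr = 0.05, 0.2
consolidate_every = 10

for step in range(1, 201):
    x, y = sample_batch()
    # The prediction combines the stable slow component and the adaptive fast one.
    pred = x @ (slow + fast)
    grad = x.T @ (pred - y) / len(y)

    # Inner (high-frequency) level: update the fast weights on every step.
    fast -= fast_lr * grad

    # Outer (low-frequency) level: periodically fold the fast weights into the
    # slow weights and shrink the fast component, so knowledge accumulates in `slow`.
    if step % consolidate_every == 0:
        slow += slow_lr * fast
        fast *= 1.0 - slow_lr

print("recovered weights:", np.round(slow + fast, 3))  # close to [2.0, -1.0, 0.5]
```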
Researchers say this method can overcome catastrophic forgetting (CF), an inherent weakness of today's large language models (LLMs).
Although LLMs can write poetry, generate code, or converse naturally, they cannot yet learn from their own experience, something the human brain does every day.
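For readers unfamiliar with the term, catastrophic forgetting can be shown with a very small, hypothetical example (not taken from Google's paper): a single model trained by ordinary gradient descent on one task, then on a second task, loses most of its accuracy on the first because the new updates overwrite the old weights.

```python
# Illustrative sketch only: a toy demonstration of catastrophic forgetting.
# The tasks, model, and training loop are hypothetical.
import numpy as np

rng = np.random.default_rng(1)

def make_task(w_true, n=200):
    """Linear regression task with its own ground-truth weights."""
    x = rng.normal(size=(n, 3))
    y = x @ w_true
    return x, y

def train(w, x, y, lr=0.1, epochs=50):
    """Plain gradient descent on mean squared error."""
    for _ in range(epochs):
        grad = x.T @ (x @ w - y) / len(y)
        w = w - lr * grad
    return w

def mse(w, x, y):
    return float(np.mean((x @ w - y) ** 2))

task_a = make_task(np.array([1.0, 2.0, 3.0]))
task_b = make_task(np.array([-3.0, 0.5, 1.0]))

w = np.zeros(3)
w = train(w, *task_a)
print("after task A | MSE on A:", round(mse(w, *task_a), 4))

w = train(w, *task_b)  # continue training on task B only
print("after task B | MSE on A:", round(mse(w, *task_a), 4),
      "| MSE on B:", round(mse(w, *task_b), 4))
# The error on task A rises sharply after training on task B: the model has
# overwritten the weights it needed for A, i.e. catastrophic forgetting.
```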
According to Google, Nested Learning opens up an entirely new direction for AI design, in which the model and its training algorithm are treated as two sides of the same structure.
By combining interleaved levels of learning, HOPE can remember, adapt, and optimize its behavior based on previous experience, something current models are largely unable to do.
The well-known researcher Andrej Karpathy, who previously worked at Google DeepMind, has commented that AGI is still a long way off, because no existing system is truly capable of continuous learning. The arrival of HOPE, however, could be the first sign of that gap narrowing.
In a scientific paper presented at the NeurIPS 2025 conference, the Google research team reported that HOPE not only has lower computational complexity but also achieves higher accuracy than modern models across a range of language and reasoning tasks.
By applying Nested Learning principles, engineers can design learning components with greater depth, helping AI learn more systematically and respond flexibly to new data.
"We believe this approach provides a foundation for narrowing the gap between current LLMs and the remarkable learning ability of the human brain," Google emphasized.
If successful, HOPE could mark a turning point for the artificial intelligence industry, with machines that not only simulate thinking but also learn on their own, improve themselves, and retain knowledge over the long term, as humans do.