An important step forward in artificial intelligence (AI) research has just been announced: scientists at the Singapore-based AI company Sapient have introduced the Hierarchical Reasoning Model (HRM), inspired by the way the human brain processes information.
Test results show that HRM outperforms many of today's large language models (LLMs), including ChatGPT.
Unlike typical LLMs, which rely on billions to trillions of parameters, HRM uses only 27 million parameters and about 1,000 training examples, yet still achieves outstanding performance.
According to the research team, HRM mimics the brain's hierarchical, multi-timescale processing: a high-level module handles slow, abstract planning, while a low-level module handles fast, detailed computation.
Thanks to this design, HRM can reason sequentially within a single forward pass, instead of working through many intermediate steps as in the chain-of-thought (CoT) method commonly used in modern LLMs.
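The two-timescale idea described above can be sketched in a few lines of Python. This is a minimal toy illustration under assumptions of the article's description (a slow outer state updated rarely, a fast inner state updated often); the update rules are placeholders, not Sapient's actual HRM code.

```python
# Toy sketch of a two-timescale hierarchical loop: a slow "planning" state
# is updated only after the fast "detail" state runs several quick steps.
# The arithmetic is an illustrative placeholder, not the real HRM model.

def hierarchical_reasoning(x, n_high=3, n_low=4):
    z_high = 0.0                  # slow, abstract "plan" state
    z_low = float(x)              # fast, detailed "work" state
    for _ in range(n_high):       # slow outer loop: abstract planning
        for _ in range(n_low):    # fast inner loop: detailed computation
            z_low = 0.5 * (z_low + z_high + x)   # placeholder update
        z_high = 0.9 * z_high + 0.1 * z_low      # plan absorbs the details
    return z_high, z_low          # one pass, no chain-of-thought text

plan, detail = hierarchical_reasoning(1.0)
```

The nesting is the point: the inner loop iterates many times per single outer update, so the two states evolve on different timescales within one forward pass.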
On the ARC-AGI benchmark, a measure of progress toward artificial general intelligence (AGI), HRM achieved impressive results.
On ARC-AGI-1, the model scored 40.3%, surpassing OpenAI's o3-mini-high (34.5%), Claude 3.7 (21.2%) and DeepSeek R1 (15.8%). On the more difficult ARC-AGI-2, HRM still reached 5%, while many other models barely scored at all.
Notably, HRM also solves difficult Sudoku puzzles and finds paths through mazes, problems that LLMs often fail to solve.
Another distinctive feature of HRM is its capacity for iterative refinement: it starts with a rough answer, then gradually improves it through many short bursts of thinking, checking continuously and stopping once the result is good enough. This approach helps the model handle logical and structured problems more efficiently.
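The refine-and-check loop described above follows a familiar pattern. The sketch below illustrates it with a deliberately simple stand-in problem (Newton's method for a square root); the task and every name here are illustrative assumptions, not HRM's actual procedure.

```python
# Toy sketch of iterative refinement: begin with a rough answer, improve it
# in short bursts, and self-check after each burst to decide when to stop.
# Guessing a square root stands in for HRM's reasoning tasks.

def refine(target, guess=1.0, tol=1e-9, max_rounds=50):
    rounds = 0
    for rounds in range(1, max_rounds + 1):
        guess = 0.5 * (guess + target / guess)   # one short refinement burst
        if abs(guess * guess - target) < tol:    # continuous self-check
            break                                # stop when good enough
    return guess, rounds

root, rounds = refine(2.0)   # rough start, refined toward sqrt(2)
```

The key property mirrored here is that compute is spent adaptively: easy inputs stop after a few bursts, harder ones take more, rather than a fixed number of reasoning steps for every problem.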
However, experts note that the study has so far appeared only on the arXiv preprint server and has not yet been peer-reviewed.
The ARC-AGI review team has confirmed many of the results since HRM was open-sourced, but believes the improvement does not come entirely from the hierarchical architecture and may be related to a refinement process during training.
Although it still needs further verification, HRM opens up the prospect of compact, data-efficient AI models capable of strong reasoning, a step closer to the era of artificial general intelligence.