HRM Brain-Inspired AI Model Surpasses LLMs in Reasoning Accuracy

Singapore-based Sapient’s HRM, modeled on the human brain, outperforms larger AI models on tough reasoning benchmarks, signaling a new era for efficient, smart AI systems.


By Jace Reed


Image Credit: Unsplash

A new chapter in artificial intelligence has arrived as Singapore-based Sapient claims its Hierarchical Reasoning Model (HRM) can outperform heavyweight large language models (LLMs) despite a radically lean design.

The HRM, reminiscent of the brain’s layered thinking, uses only 27 million parameters, a tiny fraction of the trillions reportedly used by cutting-edge models like OpenAI’s GPT-5.

Scientists tested HRM against industry leaders, finding that Sapient’s system delivers not only greater accuracy on tough reasoning benchmarks but also speed and resource efficiency that could shift AI development priorities.

HRM’s Brain-Inspired Architecture Explained

HRM’s secret lies in mimicking the human brain’s twofold reasoning: a high-level module takes charge of abstract planning, while a low-level module manages rapid, detailed computations.

Instead of slow, step-by-step logic typical of LLMs, HRM cycles through strategic and tactical thinking, allowing it to decide, iterate, and refine without explicit step supervision.

This allows it to handle complex visual puzzles and logical challenges in a single forward pass.

The two modules communicate constantly during problem-solving, much like a chess master collaborating with an assistant.

The model adaptively decides how many reasoning cycles each task needs, a flexibility that echoes human cognition.
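In code, the dual-module loop described above might look like the following minimal Python sketch. This is an illustrative toy, not Sapient’s implementation: the weights are random stand-ins for trained parameters, and the dimensions and cycle counts are invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def step(state, inp, W, U):
    """One recurrent update: a simple tanh cell."""
    return np.tanh(W @ state + U @ inp)

def hrm_forward(x, n_cycles=4, t_low=3, dim=16):
    # Randomly initialized weights stand in for trained parameters.
    W_h, U_h = rng.normal(0, 0.1, (dim, dim)), rng.normal(0, 0.1, (dim, dim))
    W_l, U_l = rng.normal(0, 0.1, (dim, dim)), rng.normal(0, 0.1, (dim, dim))
    h = np.zeros(dim)  # high-level (abstract planning) state, updated slowly
    l = np.zeros(dim)  # low-level (detailed computation) state, updated quickly
    for _ in range(n_cycles):       # slow, strategic cycles
        for _ in range(t_low):      # several fast, tactical steps per cycle
            l = step(l, h + x, W_l, U_l)   # low level is guided by the plan
        h = step(h, l, W_h, U_h)           # high level reads the result back
    return h

out = hrm_forward(rng.normal(size=16))
print(out.shape)  # (16,)
```

The key point the sketch captures is that the whole nested loop runs inside one forward pass, with the two states exchanging information every cycle rather than emitting explicit intermediate steps.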

Did you know?
HRM’s 27 million parameters make it orders of magnitude smaller than top LLMs, yet it can beat them on key reasoning benchmarks.

Benchmark Scores: HRM vs Leading LLMs

On the ARC-AGI-1 benchmark, designed to measure progress toward artificial general intelligence, HRM scored an impressive 40.3% accuracy, surpassing OpenAI’s o3-mini-high at 34.5%, Anthropic’s Claude 3.7 at 21.2%, and DeepSeek R1 at 15.8%.

Even on the newer and harder ARC-AGI-2 benchmark, HRM managed 5%, ahead of its high-powered rivals.

These benchmarks pose abstract reasoning tasks that have stymied even the largest and most advanced models.

Despite its modest size, HRM delivers results where others lag, breaking the assumption that ever-larger parameter sets equal smarter AI. With computational costs dropping dramatically, models like HRM signal a rethink in what truly matters for reasoning performance.


Efficiency at Scale: The Future of Lean AI

Unlike GPT-5, rumored to have up to five trillion parameters, HRM uses a streamlined architecture that drastically cuts resource requirements, learning tasks from just 1,000 training examples.

Traditional LLMs rely on chain-of-thought reasoning and vast amounts of data, often struggling with task decomposition and high latency.

In contrast, HRM’s approach lets it operate with minimal oversight and fine-tuned allocation of computational effort.

Reinforcement learning within the model enables it to “think fast” on easy tasks and “think slow” when dealing with complexity, adapting like a human under pressure.

This kind of agility is crucial for real-world applications where time and hardware constraints matter and hints at a broader industry trend toward smaller, smarter models.

HRM’s success challenges the prevailing race to build ever-bigger AIs by showing that intelligent architecture and training can unlock higher reasoning with fewer resources.

As benchmarks get tougher and costs soar, brain-inspired models could democratize advanced reasoning, making powerful AI widely accessible rather than just a feat of scale.

The industry debate now centers on whether future AI innovation will belong to nimble, brain-like models or to the giants of parameter scaling.

The next wave of breakthroughs may depend more on clever design and targeted training than sheer computational bulk.



MoneyOval

MoneyOval is a global media company delivering insights at the intersection of finance, business, technology, and innovation. From boardroom decisions to blockchain trends, MoneyOval provides clarity and context to the forces driving today’s economic landscape.

© 2025 MoneyOval.
All rights reserved.