Samsung’s Tiny Recursive Model Outperforms Gemini and DeepSeek
Samsung researchers have unveiled the Tiny Recursive Model (TRM) — a neural network so small it defies expectations, outperforming massive AI systems from Google and DeepSeek in reasoning benchmarks.
A 7-million-parameter breakthrough
The TRM contains just 7 million parameters, less than 0.01% of the size of today's leading large language models. Yet it achieved 45% accuracy on the ARC-AGI-1 benchmark, surpassing models such as DeepSeek R1, Gemini 2.5 Pro, and o3-mini.
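To put that ratio in perspective, a quick back-of-the-envelope check (using DeepSeek R1's published 671-billion total parameter count; the TRM figure is from Samsung's paper):

```python
trm_params = 7e6            # TRM, per Samsung's paper
deepseek_r1_params = 671e9  # DeepSeek R1's published total parameter count

ratio = trm_params / deepseek_r1_params
print(f"{ratio:.6%}")  # prints ~0.001043%, well under 0.01%
```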
Samsung’s AI lab attributes this performance to a recursive reasoning process: the model drafts an initial answer, evaluates it, and repeatedly refines it through multiple internal passes. This lets a two-layer network emulate the multi-stage reasoning of much deeper architectures.
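To make the loop concrete, here is a minimal PyTorch sketch of the idea, not Samsung's actual implementation: the class name, layer sizes, and iteration counts are illustrative placeholders, and the real architecture lives in the GitHub release described below.

```python
import torch
import torch.nn as nn

class TinyRecursiveSketch(nn.Module):
    """Illustrative sketch of TRM-style recursive refinement.
    All names and dimensions here are hypothetical."""

    def __init__(self, dim: int = 128):
        super().__init__()
        # A single tiny two-layer core is reused at every step.
        self.core = nn.Sequential(
            nn.Linear(3 * dim, dim),
            nn.GELU(),
            nn.Linear(dim, dim),
        )
        # Maps the current draft and latent state to a revised answer.
        self.answer_head = nn.Linear(2 * dim, dim)

    def forward(self, x, n_inner: int = 6, n_outer: int = 3):
        # Start from a blank draft answer y and a latent "scratchpad" z.
        y = torch.zeros_like(x)
        z = torch.zeros_like(x)
        for _ in range(n_outer):
            # Inner loop: refine the latent reasoning state given
            # the question x and the current draft answer y.
            for _ in range(n_inner):
                z = self.core(torch.cat([x, y, z], dim=-1))
            # Outer step: revise the draft answer from the updated latent.
            y = self.answer_head(torch.cat([y, z], dim=-1))
        return y

# Usage: refine a small batch of (hypothetically) embedded puzzle inputs.
model = TinyRecursiveSketch(dim=128)
x = torch.randn(4, 128)
answer = model(x)
print(answer.shape)  # torch.Size([4, 128])
```

The design point the sketch illustrates: depth comes from iteration rather than stacked layers. The same tiny core runs again and again, so the parameter count stays in the millions while the effective reasoning depth grows with every pass.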
Smarter, faster, and lighter
Despite its tiny footprint, TRM performs complex symbolic reasoning tasks (the paper evaluates Sudoku, maze-solving, and ARC puzzles) that typically require far larger transformer-based systems. It not only runs faster but also consumes dramatically less power, making it well suited to on-device AI and edge applications.
The research team described TRM as a “proof of concept” showing that intelligence does not necessarily require massive scale. Instead, the recursive self-improvement loop may hold the key to efficient artificial cognition.
Open-source release
Samsung has released the full implementation and training code on GitHub, inviting the AI community to experiment with and extend the recursive design. Early interest from independent developers suggests the method could inspire a new generation of compact reasoning agents.
If TRM’s approach continues to prove effective, it could reshape the current race for larger models — replacing “bigger is better” with “smarter and recursive.”
Editorial Team — CoinBotLab