A new frontier for onboard artificial intelligence
What if robots became truly autonomous, without relying on a cloud connection? On June 24, 2025, Google DeepMind unveiled Gemini Robotics On-Device, an embedded version of its Gemini artificial intelligence model designed to run directly on robots. This release marks a strategic advance in adaptive robotics, with one guiding principle: local responsiveness, free of network latency.
This launch is part of a broader trend towards miniaturizing and optimizing foundation models so that they can run locally on resource-constrained hardware platforms while maintaining a high level of cognitive performance.
What uses for AI embedded in robots?
Gemini Robotics On-Device has been designed to equip general-purpose robots, enabling them to understand, adapt to and interact precisely with their physical environment. Unlike traditional cloud-dependent robotics models, this AI operates in real time, even without a connection.
Concrete use cases include:
- Handling non-rigid or unstable objects, such as pouring water without spilling or folding clothes;
- Navigation in a dynamic environment, with continuous adaptation to changing obstacles;
- Performing complex tasks in a domestic environment, such as loading a dishwasher, sorting objects or reorganizing a space;
- Intervention in industrial or medical environments where network latency is critical (assistance or inspection robots).
According to DeepMind researchers, Gemini Robotics combines motor planning, visual understanding, spatial reasoning and closed-loop adaptation: in other words, the robot perceives, understands and adjusts its actions without external supervision.
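The closed-loop behavior described above can be illustrated with a minimal sketch. This is not DeepMind's actual architecture: the `perceive`, `plan` and `act` functions below are hypothetical stand-ins for the model's vision, reasoning and motor stages, shown only to make the perceive-understand-adjust cycle concrete.

```python
# Illustrative sketch of a closed-loop perceive-plan-act cycle.
# All function names and the 1-D world model are assumptions for this example.

def perceive(world):
    """Sense the current state (here, a 1-D object position)."""
    return world["object_pos"]

def plan(observed_pos, target_pos):
    """Decide a corrective motion: a proportional step toward the target."""
    error = target_pos - observed_pos
    return 0.5 * error  # move halfway toward the goal each cycle

def act(world, motion):
    """Apply the planned motion to the environment."""
    world["object_pos"] += motion

def closed_loop(world, target_pos, tolerance=0.01, max_cycles=50):
    """Repeat perceive -> plan -> act until the error is within tolerance."""
    for cycle in range(max_cycles):
        pos = perceive(world)
        if abs(target_pos - pos) < tolerance:
            return cycle  # converged: number of cycles used
        act(world, plan(pos, target_pos))
    return max_cycles

world = {"object_pos": 0.0}
cycles = closed_loop(world, target_pos=1.0)
print(cycles, round(world["object_pos"], 3))  # converges in a handful of cycles
```

The point of the loop structure is that each correction is computed from fresh sensor data rather than a fixed script, which is what lets a robot keep adjusting as the environment changes.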
A technical breakthrough: the fusion of language and movement
One of the special features of this embedded version is its ability to interpret commands in natural language and translate them into coordinated physical actions. This relies on tight integration between the Gemini language models and the motor-control engines (motion planning).
In a demonstration, a robot equipped with Gemini On-Device was able to execute an instruction as vague as “clean up this mess” and determine the gestures required to pick up, sort and put away objects in an unfamiliar environment.
This fusion of language, vision and action makes it possible to design a new generation of versatile robots, capable of adapting to unscripted tasks in real-world contexts.
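The interface between instruction and action can be sketched in miniature. In Gemini Robotics On-Device this mapping is learned end to end by the model; here a simple keyword lookup stands in for it, and `TASK_LIBRARY`, the trigger phrases and the primitive names are all invented for the example.

```python
# Hypothetical language-to-action sketch: a keyword table stands in for the
# learned vision-language-action model, purely to show the interface shape.

TASK_LIBRARY = {
    "clean up": ["scan_area", "pick_object", "classify_object", "place_in_bin"],
    "pour": ["locate_container", "grasp_vessel", "tilt_slowly", "monitor_level"],
    "fold": ["flatten_item", "locate_edges", "fold_in_half", "smooth_surface"],
}

def instruction_to_plan(instruction: str) -> list[str]:
    """Map a free-form instruction to an ordered list of motor primitives."""
    text = instruction.lower()
    for trigger, primitives in TASK_LIBRARY.items():
        if trigger in text:
            return primitives
    return ["request_clarification"]  # fall back when no known task matches

print(instruction_to_plan("Please clean up this mess"))
```

What a learned model adds over such a table is generalization: it can decompose instructions and objects it has never seen, which is exactly what the "clean up this mess" demonstration illustrates.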
Towards more autonomous, energy-efficient and safe robotics
Running the AI without the cloud offers several structural advantages:
- Reduced latency, crucial for real-time micro-adjustments in motor tasks;
- Enhanced security and confidentiality, as data remains localized;
- Robustness under extreme conditions (no network, jamming, energy constraints);
- Reduced energy dependency on the cloud, a central challenge for more sustainable AI.
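The latency point can be made concrete with back-of-the-envelope arithmetic. The numbers below are illustrative assumptions, not measured figures for Gemini Robotics On-Device: the sketch only shows how removing a network round trip raises the achievable control frequency.

```python
# Illustrative latency budget: how fast a closed control loop can run
# when each decision requires one model inference (and, for the cloud
# case, one network round trip). All millisecond values are assumptions.

def max_control_rate_hz(inference_ms: float, network_ms: float = 0.0) -> float:
    """Highest closed-loop control frequency the total latency allows."""
    total_seconds = (inference_ms + network_ms) / 1000.0
    return 1.0 / total_seconds

local = max_control_rate_hz(inference_ms=25.0)                    # on-device
cloud = max_control_rate_hz(inference_ms=25.0, network_ms=75.0)   # via cloud
print(round(local), "Hz locally vs", round(cloud), "Hz through the cloud")
```

For motor micro-adjustments, a loop running several times faster is the difference between catching a slipping object and dropping it, which is why on-device inference matters beyond mere convenience.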
According to a study by Boston Consulting Group (2024), the global market for embedded robotics will reach $92 billion by 2027, driven by the logistics, healthcare, defense and personal assistance sectors [1].
Democratization worth watching: what are the limits?
While Gemini Robotics’ technical performance has been hailed, its deployment raises critical issues. What safeguards should be put in place to allow robots to make autonomous decisions without human supervision? How can we avoid a fragmentation of uses according to hardware capabilities? What are the ethical standards for AI operating locally?
For the time being, Gemini On-Device remains reserved for selected partners and experimental environments. But its potential for mass deployment over the next few years could accelerate the transition to ubiquitous, unobtrusive robotics, integrated into our daily lives.
References
1. Boston Consulting Group. (2024). The Rise of Embedded AI in Robotics.
https://www.bcg.com/publications/2024/embedded-ai-robotics