Google DeepMind CEO Demis Hassabis says the AI robotics revolution is now unfolding, arguing that recent breakthroughs in AI software intelligence are removing the biggest barrier that kept robots from becoming broadly useful in the real world.
Why this shift is happening now
Hassabis has framed robotics’ historical limits as an intelligence problem more than a hardware problem, saying the bottleneck was the missing software intelligence needed for robots to deal with messy, changing environments.
That view aligns with Google DeepMind’s push to extend Gemini beyond text and images into vision-language-action systems that can perceive a scene, interpret an instruction, and output motor commands to complete a task.
In late 2025, DeepMind positioned these advances as a step toward general-purpose robots, pairing embodied reasoning (planning and tool use) with action models that can execute multi-step physical work.
What Google DeepMind launched (and what it does)
Google DeepMind’s Gemini Robotics family is designed to power an era of physical agents, enabling robots to perceive, plan, use tools, and act to solve complex tasks.
The company says Gemini Robotics models can generalize to tasks they were not explicitly trained for, and can work across different robot embodiments (different shapes and hardware).
Gemini Robotics 1.5 and Robotics‑ER 1.5
DeepMind introduced two connected models: Gemini Robotics 1.5 (a vision-language-action model that turns visual inputs and instructions into motor commands) and Gemini Robotics‑ER 1.5 (an embodied reasoning model that plans and can call tools like search).
DeepMind says Robotics‑ER 1.5 is available to developers through the Gemini API in Google AI Studio, while Gemini Robotics 1.5 is available to select partners.
DeepMind describes Robotics‑ER 1.5 as orchestrating the robot’s activity with multi-step planning, while Gemini Robotics 1.5 executes actions and can explain its thinking for transparency.
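The division of labor DeepMind describes, a reasoning model that plans and an action model that executes, can be sketched as a simple loop. Everything below is a hypothetical illustration with stubbed model calls; none of the function names or step lists come from DeepMind's actual APIs.

```python
# Hypothetical sketch of the planner/executor split: an embodied-reasoning
# model decomposes an instruction into steps, and a vision-language-action
# model executes each step. Both model calls are stubbed for illustration.

from dataclasses import dataclass

@dataclass
class Step:
    description: str
    done: bool = False

def plan(instruction: str) -> list[Step]:
    """Stand-in for the orchestrator (ER-style model): break the task into steps."""
    if "sort" in instruction:
        return [Step("identify objects"), Step("pick object"), Step("place in bin")]
    return [Step(instruction)]

def execute(step: Step) -> Step:
    """Stand-in for the action model: turn a step into motor commands and run it."""
    # A real system would emit motor commands from combined vision + language input.
    step.done = True
    return step

def run(instruction: str) -> list[Step]:
    steps = plan(instruction)
    return [execute(s) for s in steps]

completed = run("sort the laundry by color")
assert all(s.done for s in completed)
```

The key design point this mirrors is separation of concerns: the planner can replan or call tools between steps without the action model needing any task-level context.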
On-device robotics (local inference)
DeepMind also announced Gemini Robotics On‑Device, positioned as a vision-language-action model optimized to run locally on robot hardware to address latency and connectivity constraints.
DeepMind released a Gemini Robotics SDK to help developers evaluate and adapt the on-device model via a trusted tester program.
A public Gemini Robotics SDK repository describes access pathways for trusted testers and notes model support within the SDK.
Timeline: From new models to revolution talk
DeepMind’s robotics push in 2025 combines model releases and platform access with Hassabis’s public messaging that the AI robotics revolution is no longer theoretical.
| Date (2025) | What happened | Why it matters |
| --- | --- | --- |
| March 10, 2025 | DeepMind published the Gemini Robotics model page describing a vision-language-action model and an embodied reasoning model for robots. | Establishes the physical agents direction and cross-robot generalization goal. |
| June 24, 2025 | DeepMind announced Gemini Robotics On‑Device and a Gemini Robotics SDK aimed at local execution and easier evaluation/adaptation. | Targets real-world constraints like low latency and unreliable connectivity. |
| Sept. 24, 2025 | DeepMind introduced Gemini Robotics 1.5 and Gemini Robotics‑ER 1.5, including API availability for Robotics‑ER 1.5. | Brings agentic planning + tool use into robotics workflows at scale. |
| Dec. 25, 2025 | Hassabis publicly said the robotics bottleneck has been software intelligence and suggested the inflection point is here. | Signals a strategic claim: robotics is moving from bespoke engineering to foundation-model generalization. |
Where robots may show up first
DeepMind’s framing suggests early adoption will favor settings with structured tasks that nonetheless benefit from better generalization, such as logistics, warehouses, and controlled service environments.
Industry data also points to demand drivers like labor shortages, which the International Federation of Robotics (IFR) cites as a key factor behind professional service robot growth.
IFR reports that professional service robot sales reached nearly 200,000 units in 2024, up 9%, highlighting expanding use across sectors such as logistics and healthcare.
Service robot demand signals (IFR-reported)
The IFR’s World Robotics reporting shows continued momentum in service robotics, including strong growth in logistics and notable increases in medical robots.
| Segment (2024) | Units / change | Detail |
| --- | --- | --- |
| Professional service robots (total) | ~200,000 units (+9%) | IFR says labor shortages are a key driver. |
| Transport & logistics robots | 102,900 units (+14%) | Largest application category in the IFR’s reporting. |
| Medical robots | ~16,700 units (+91%) | IFR-reported surge, with sharp growth in multiple sub-categories. |
| Consumer service robots | ~20 million units (+11%) | Growth led by domestic tasks like floor cleaning and lawn mowing. |
Safety and governance: What DeepMind says it is doing
DeepMind says it is taking a comprehensive approach to safety, including safeguards and collaboration with experts, policymakers, and an internal Responsibility and Safety Council.
In its Gemini Robotics 1.5 announcement, DeepMind described alignment and safety work spanning semantic reasoning, adherence to safety policies, and triggering low-level robot safety subsystems (for example, collision avoidance).
DeepMind also points to the ASIMOV benchmark as part of evaluating semantic safety for models acting as robot brains, and described releasing an upgraded version alongside its robotics updates.
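The layered pattern DeepMind describes, semantic policy checks on top of low-level safety subsystems such as collision avoidance, can be illustrated with a toy gate around command execution. All rule names, thresholds, and return strings below are invented for illustration and do not reflect DeepMind's implementation.

```python
# Hypothetical illustration of layered robot safety: a semantic policy check
# before execution, plus a low-level trigger (collision stop) during execution.

BLOCKED_ACTIONS = {"strike", "throw"}  # toy semantic policy layer
MIN_CLEARANCE_M = 0.05                 # toy low-level collision threshold (meters)

def semantic_check(action: str) -> bool:
    """Semantic layer: reject actions that violate the (toy) safety policy."""
    return action not in BLOCKED_ACTIONS

def execute_with_safety(action: str, clearance_m: float) -> str:
    """Gate an action through both safety layers before reporting execution."""
    if not semantic_check(action):
        return "refused: policy violation"
    if clearance_m < MIN_CLEARANCE_M:
        return "halted: collision avoidance triggered"
    return f"executed: {action}"
```

The point of the layering is that the semantic check can refuse a request outright, while the low-level subsystem can still interrupt an approved action mid-execution.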
What Hassabis’s claim means for the robotics industry
Hassabis’s central argument is that foundation models are compressing timelines by reducing the need for hand-coded behaviors and bespoke engineering for each new environment and task.
DeepMind’s product strategy reflects that claim by combining a planner (Robotics‑ER 1.5) that can create multi-step plans and call tools with an action model (Robotics 1.5) that can execute and adapt mid-task.
DeepMind also highlights learning across embodiments, describing transfer of skills between platforms such as ALOHA robots, the bi-arm Franka, and Apptronik’s humanoid Apollo.
Final thoughts
The AI robotics revolution narrative now has concrete product pillars—agentic planning models, action models, and on-device deployment options—that are aimed at moving robotics from controlled demos into repeatable operations.
The next test is whether these systems can maintain reliability, safety, and cost-effectiveness outside labs while handling the long tail of real-world variability that has historically broken robotics rollouts.
If DeepMind’s approach delivers consistent performance across tasks and robot bodies, it could accelerate adoption in logistics and healthcare—two areas highlighted in IFR reporting as major growth engines for service robots.