Google has opened the Gemini Deep Research agent to developers via a new Interactions API, while adding new security layers to Gemini-powered Chrome to reduce prompt-injection risks as AI browsing becomes more action-oriented.
Google opens Gemini Deep Research agent to developers
Google says it has “reimagined” the Gemini Deep Research agent and is now making it accessible to developers through a new Interactions API, enabling third parties to embed long-running research and synthesis workflows directly into their own apps. The company announced the rollout on December 11, 2025, positioning Deep Research as a built-in agent developers can call as part of a broader push toward agent-based software.
What Deep Research does (and how it works)
Deep Research is designed for multi-step investigations: it plans an approach, runs searches, reads results, identifies gaps, and repeats—then compiles findings into a report. Google describes it as optimized for “long-running context gathering and synthesis tasks,” and says the system is trained to reduce hallucinations and improve report quality.
In Google Cloud’s Gemini Enterprise documentation, Deep Research is described as a “Made by Google” agent that generates a research plan, streams progress as it works, and produces a final report with citations and an audio summary.
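The plan-search-read-repeat loop described above can be sketched in a few lines of control flow. Everything below is an illustrative stand-in, not Google code: the helper functions (`run_search`, `read_and_extract`, `find_gaps`, `compile_report`) are hypothetical stubs standing in for the agent's real capabilities.

```python
# Illustrative sketch of a Deep Research-style loop: plan, search, read,
# find gaps, repeat, then compile a cited report. All helpers here are
# hypothetical stubs, not part of any Google API.

def run_search(query):
    # Stub: a real agent would run web searches here.
    return [f"result for: {query}"]

def read_and_extract(results):
    # Stub: a real agent would read pages and extract cited findings.
    return [{"claim": r, "source": "https://example.com"} for r in results]

def find_gaps(question, findings):
    # Stub: a real agent would ask the model which sub-questions remain.
    return []  # pretend the first pass answered everything

def compile_report(question, findings):
    lines = [f"Report: {question}"]
    lines += [f"- {f['claim']} ({f['source']})" for f in findings]
    return "\n".join(lines)

def deep_research(question, max_rounds=5):
    plan = [question]   # open sub-questions to investigate
    findings = []       # accumulated evidence with sources
    for _ in range(max_rounds):
        if not plan:    # no remaining gaps: stop early
            break
        query = plan.pop(0)
        findings.extend(read_and_extract(run_search(query)))
        plan.extend(find_gaps(question, findings))
    return compile_report(question, findings)

print(deep_research("How do agentic browsers defend against prompt injection?"))
```

The point of the sketch is the shape of the loop, not the stubs: each round consumes one open question and may add new ones, so the agent keeps iterating until its plan is empty or a round budget runs out.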
The model behind it
Google says the agent’s “reasoning core” uses Gemini 3 Pro, which it calls its “most factual model yet,” and that Deep Research has been trained specifically to improve reliability during complex tasks.
Interactions API: Google’s new “unified” interface for models and agents
Alongside the Deep Research announcement, Google introduced the Interactions API, calling it a unified interface for interacting with Gemini models and specialized agents. It is available in public beta through the Gemini API in Google AI Studio, using a Gemini API key.
What’s new for developers?
Google frames the Interactions API as a foundation for building “agentic” apps, where messages, tool calls, and state can be managed more cleanly than in a simple request/response workflow. It highlights several capabilities aimed at long-running tasks:
- Single REST endpoint (/interactions) for models and agents.
- Server-side state (optional) to reduce client-side history management.
- Background execution for long-running loops without keeping a client connection open.
- Remote MCP tool support so models can call Model Context Protocol servers as tools.
- A defined way to call agents by setting an “agent” parameter, not only a “model” parameter.
Google also lists the currently supported Deep Research agent identifier as deep-research-pro-preview-12-2025.
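A minimal sketch of starting a Deep Research run over the new interface follows. The /interactions endpoint, the “agent” parameter, and the agent identifier come from Google's announcement; the base URL, the request field names, and the “background” flag are assumptions that should be verified against the Gemini API documentation.

```python
# Sketch of starting a Deep Research run via the Interactions API.
# The /interactions endpoint, "agent" parameter, and agent ID come from
# Google's announcement; the base URL, field names, and "background" flag
# are assumptions -- verify against the Gemini API docs before use.
import json
import urllib.request

API_KEY = "YOUR_GEMINI_API_KEY"  # from Google AI Studio
BASE = "https://generativelanguage.googleapis.com/v1beta"  # assumed base URL

def build_interaction_request(prompt: str) -> dict:
    return {
        "agent": "deep-research-pro-preview-12-2025",  # "agent", not "model"
        "input": prompt,                               # assumed field name
        "background": True,  # assumed: run server-side, poll for the report
    }

def start_deep_research(prompt: str) -> dict:
    req = urllib.request.Request(
        f"{BASE}/interactions",
        data=json.dumps(build_interaction_request(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "x-goog-api-key": API_KEY},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # would carry the interaction's ID and state
```

The key difference from a classic chat call is visible in the payload: the request names an agent rather than a model, and the background flag reflects the idea that the client can disconnect while the server keeps the research loop running.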
Where Deep Research goes next
Google says Deep Research will “soon” be available or upgraded across more of its own products—specifically naming Google Search, NotebookLM, and Google Finance, in addition to improvements coming to the Gemini app.
Google also says the Interactions API and Gemini Deep Research are “coming soon” to Vertex AI, and that its Agent Development Kit (ADK) and Agent2Agent (A2A) protocol already support the Interactions API.
Benchmarks and numbers Google is using to sell “better research”
Google is backing the Deep Research upgrade with benchmark results and a new dataset.
DeepSearchQA: Google’s new benchmark for research agents
Google says it is open-sourcing DeepSearchQA, describing it as a benchmark intended to better reflect real-world multi-step web research. The company says DeepSearchQA contains 900 “causal chain” tasks across 17 fields, where each step depends on prior analysis, and that it is designed to test both comprehensiveness and precision.
Reported performance on multiple evaluations
Google published these performance figures for Deep Research:
- 46.4% on Humanity’s Last Exam (HLE) full set.
- 66.1% on DeepSearchQA.
- 59.2% on BrowseComp.
SiliconANGLE adds context: HLE includes more than 2,500 questions, many related to math, physics, and programming, and the outlet corroborates Google’s stated 46.4% score.
What Google announced for Deep Research (Dec. 11, 2025)
| Item | What Google says | Why it matters |
| --- | --- | --- |
| Deep Research via API | Available to developers through Interactions API | Makes Deep Research a platform capability, not just a Gemini app feature |
| DeepSearchQA benchmark | Open-sourced; 900 tasks across 17 fields | Creates a public yardstick for multi-step web research agents |
| Product expansion | “Soon” coming to Search, NotebookLM, Finance | Suggests Deep Research is moving into mainstream Google surfaces |
Gemini-powered Chrome expands—and Google emphasizes safety
While Google pushes research agents outward to developers, it is also extending Gemini into the browser and tightening defenses.
Gemini arrives in Chrome on iPhone and iPad
Google has started rolling out Gemini integration for Chrome on iOS, with a Gemini icon appearing in the address bar (replacing the Google Lens entry point). Users can ask Gemini about the current page or request a summary.
The Verge and 9to5Google report that access requires being signed into Chrome, that the feature does not work in Incognito, and that the rollout is U.S.-first with English-only support at launch.
Desktop rollout context
Gemini in Chrome also expanded on desktop earlier this year. 9to5Google reported that Gemini began rolling out to all Chrome users on Windows and Mac in the U.S. (English) on September 18, 2025, following an earlier launch period tied to Google’s developer events and staged availability.
The guardrails: Chrome’s “agentic” future meets prompt-injection threats
Google’s security focus is tied to what it calls agentic capabilities—AI that can do more than summarize, potentially taking actions online on a user’s behalf.
In a December 8, 2025 post, Google’s Chrome security team describes the “primary new threat facing all agentic browsers” as indirect prompt injection. The company warns that malicious instructions can appear on hostile websites, inside third-party iframes, or in user-generated content, and can trick an agent into unwanted actions such as financial transactions or data exfiltration.
What Google says it is adding
Google describes a layered defense that combines deterministic controls with model-based checks. Key measures include:
- A user alignment critic, where a separate model vets whether actions match the user’s intent, isolated from untrusted content
- Expanded origin-isolation constraints to limit which web origins the agent can interact with based on relevance to the task
- User confirmations for critical or sensitive steps
- Work logs and observability to help users understand what the agent did and stop it
- Ongoing red-teaming and response efforts
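The layering described above can be illustrated with a toy gate: a deterministic origin check runs first, a (stubbed) critic model then vets the action against the user's stated intent, and sensitive steps still require explicit user confirmation. All names here (`AgentAction`, `critic_approves`, and so on) are illustrative, not Chrome internals.

```python
# Toy illustration of layered agent-action gating: deterministic origin
# checks first, then a model-based "alignment critic", then a user
# confirmation for sensitive steps. All names are illustrative; this is
# not Chrome's actual implementation.
from dataclasses import dataclass
from urllib.parse import urlparse

SENSITIVE = {"purchase", "send_form", "download"}

@dataclass
class AgentAction:
    kind: str          # e.g. "click", "purchase"
    url: str           # target of the action
    user_goal: str     # what the user actually asked for

def origin_allowed(action: AgentAction, allowed_origins: set) -> bool:
    # Deterministic check, in the spirit of "Agent Origin Sets": the
    # agent may only act on origins relevant to the current task.
    return urlparse(action.url).netloc in allowed_origins

def critic_approves(action: AgentAction) -> bool:
    # Stub for the separate critic model, which would judge whether the
    # action matches user intent while seeing no untrusted page content.
    return action.kind != "purchase" or "buy" in action.user_goal.lower()

def gate(action: AgentAction, allowed_origins: set, confirm) -> bool:
    if not origin_allowed(action, allowed_origins):
        return False               # hard, deterministic block
    if not critic_approves(action):
        return False               # model-based veto
    if action.kind in SENSITIVE:
        return confirm(action)     # human-in-the-loop confirmation
    return True
```

The ordering matters: the deterministic origin check cannot be talked out of its decision by injected page content, so it runs before any model-based judgment, and the confirmation step ensures that even an approved sensitive action still surfaces to the user.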
Independent coverage of the same changes highlights Google’s naming of components such as User Alignment Critic and Agent Origin Sets, framing them as defenses against prompt injection and unsafe cross-site behavior.
Deep Research vs. Gemini-powered Chrome (what each is optimized for)
| Product | Primary use | “How it helps” | Main risk area |
| --- | --- | --- | --- |
| Gemini Deep Research agent | Long-form research & synthesis | Plans multi-step investigation, searches, fills gaps, drafts reports with sources | Source integrity and data-access control during long runs |
| Gemini-powered Chrome | In-the-moment browsing assistance | Page-aware Q&A and summaries; pathway to task-oriented browsing | Prompt injection and unintended actions during browsing |
Why Google is doing both at once
Seen together, these updates show the same strategy from two angles:
- On the developer side, Google is turning Deep Research into something developers can build on via a unified agent API (Interactions API).
- On the consumer side, Google is embedding Gemini deeper into browsing, while publicly outlining a security architecture meant to keep AI actions aligned with user intent.
Google’s message is that research agents and browsing agents are becoming part of normal software—and that “helpfulness” must ship alongside clear constraints when models start interacting with the real web.
Takeaways: What comes next
Google’s December announcements point to a near-term future where AI is not just a chat interface, but an execution layer across products. Opening the Gemini Deep Research agent to developers via the Interactions API could accelerate new tools for market research, due diligence, and knowledge work, especially if the beta matures into stable production use and expands to Vertex AI as Google has indicated.
At the same time, Chrome’s evolving “agentic” direction makes safety design central. By publishing its approach to indirect prompt injection and alignment checks, Google is signaling that action-taking AI in browsers will rise or fall on user trust, permissions, and transparency—not just model quality.