TL;DR
- Malicious browser extensions are secretly monitoring and exfiltrating users’ AI conversations, a practice dubbed “prompt poaching.”
- Bad actors are cloning popular extensions, or adding the functionality after establishing a large user base, to harvest sensitive AI chat data.
- Organizations should restrict unapproved extensions, review permissions, and steer users toward official AI tools in order to avoid exposing sensitive data.
For many folks, using an AI assistant in the browser means opening a new tab, navigating to a website, and asking questions. This works for many use cases, but it often means bringing content to the agent, either by summarizing it or copy/pasting it from other locations. The assistant has no awareness of the conversations, context, or history in other browser tabs; in short, the agent is effectively siloed. This isolation is good from a security and privacy perspective, but it presents challenges from a usability standpoint.
This usability gap has led to tools that give AI assistants broader awareness of the browser itself. While this shift has taken several forms, one area of rapid growth is AI-powered browser extensions. These extensions let users work across browser tabs, simplifying the ingestion of content into the AI agent and significantly streamlining the experience.
This convenience comes with a tradeoff. The AI agent becomes more deeply embedded in the rest of your browsing activity, sitting alongside your banking tab, email, and even personal documents. The same assistant that peeks at your current tab to help summarize an article also creates an opportunity for something more sinister: some of these browser extensions silently monitor, copy, and exfiltrate your AI conversations without your knowledge.
Prompt poaching
We’ve fielded several dozen incidents in the last month involving Chrome browser extensions engaging in outright malicious activity. These extensions provide functionality similar to legitimate alternatives; however, they also actively seek out tabs containing AI conversations, collect the content, and send it to external servers.
This activity has been dubbed prompt poaching by the security firm Secure Annex. The technique is fairly straightforward: the browser extension monitors open tabs and, upon seeing an AI client loaded, collects questions and answers using API interception or DOM scraping. The extension then packages them up and sends them to an external server run by the extension’s developers.
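That combination—code that targets AI chat hostnames and also makes outbound network calls—is something a reviewer can heuristically scan for in an extension's source. Below is a minimal Python sketch of such a scan; the hostname and network-call patterns are illustrative assumptions, not Secure Annex's actual detection signatures, and a match only means the script warrants manual review.

```python
import re

# Illustrative indicator lists (assumptions, not vendor signatures):
# common AI chat hostnames, and JavaScript APIs used to send data out.
AI_HOSTS = re.compile(r"chatgpt\.com|chat\.openai\.com|claude\.ai|chat\.deepseek\.com")
EXFIL_CALLS = re.compile(r"\bfetch\s*\(|XMLHttpRequest|navigator\.sendBeacon")

def flag_prompt_poaching(js_source: str) -> bool:
    """Flag a script that both references AI chat hosts and makes
    outbound network calls -- a signal for review, not proof of malice."""
    return bool(AI_HOSTS.search(js_source) and EXFIL_CALLS.search(js_source))
```

Static scans like this are easy to evade (string obfuscation defeats them), so they complement, rather than replace, permission reviews and network monitoring.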
Monkey see, monkey dupe
In many of these cases, the bad actors have simply taken popular browser extensions and cloned them, adding the malicious functionality that collects and exfiltrates AI conversations. For instance, several of the malicious extensions that we’ve seen appear to be copies of an extension developed by AITOPIA, only with added, malicious functionality.
| Name | Extension ID |
|---|---|
| Chat GPT for Chrome with GPT-5, Claude Sonnet & DeepSeek AI | fnmihdojmnkclgjpcoonokmkhjpjechg |
| AI Sidebar with Deepseek, ChatGPT, Claude, and more | inhcgfpbfdjbjogdfjbclgolkmhnooop |
| Talk to ChatGPT | hoinfgbmegalflaolhknkdaajeafpilo |
Not all extensions collecting AI conversations are clones, however. Another example is Urban VPN Proxy, which began as a legitimate, useful tool. Then, after establishing a large enough user base, the actors behind the extension inserted the malicious functionality. Any user who already had the extension installed, or who installed it after this point, would have their AI conversations exfiltrated without their knowledge or consent.
| Name | Extension ID |
|---|---|
| Urban VPN Proxy | eppiocemhmnlbhjplcgkofciiegomcon |
The risks
It almost goes without saying that these extensions open the door to several risks, including identity theft, targeted phishing campaigns, and sensitive data being put up for sale on underground forums. In organizations where employees have unwittingly installed these extensions, intellectual property, customer data, or other confidential information may have been exposed.
In the end, these browser extensions are collecting data that is bound to contain sensitive information. From a business standpoint, we strongly recommend prohibiting the use of such extensions in your environment and managing browser extensions through the management tools available to you.
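For managed Chrome deployments, one way to enforce this is an allowlist-only extension policy. The sketch below blocks all extensions except explicitly approved IDs using Chrome's `ExtensionInstallBlocklist` and `ExtensionInstallAllowlist` enterprise policies; the extension IDs shown are placeholders you would replace with your approved set.

```json
{
  "ExtensionInstallBlocklist": ["*"],
  "ExtensionInstallAllowlist": [
    "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
    "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb"
  ]
}
```

On Windows these policies are typically delivered via Group Policy templates or the Google Admin console; on Linux, a JSON file like this can be dropped into Chrome's managed policy directory (`/etc/opt/chrome/policies/managed/`).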
What to look out for
There is a clear use case for browser extensions that allow for a more seamless experience across browser tabs, but it’s important to be wary of malicious extensions. Here are several suggestions for approaching AI-related browser extensions:
- If users are installing these extensions, it could speak to a productivity gap present in your organization’s current workflows. Identifying these gaps and recommending sanctioned tools can go a long way in solving this issue, and reducing the number of unauthorized extensions in use.
- Stick with extensions developed directly by the AI company in question. Most major AI companies now offer them, and there are also several desktop clients and mobile apps that can provide similar functionality.
- Review the permissions of browser extensions before installing them. Look out for extensions where the requested permissions extend beyond the advertised functionality.
- Manage browser extensions within the workplace using Group Policy or browser management consoles, limiting usage to extensions that have been reviewed and approved.
- Perform periodic extension inventory audits, and monitor browser network activity for recurring connections to unknown or unexpected domains.
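The permission-review and inventory-audit steps above can be partially automated. The Python sketch below walks a Chrome profile's `Extensions` directory, parses each `manifest.json`, and flags broad permissions; the `BROAD_PERMISSIONS` list is a judgment call on our part, not an official risk taxonomy, and the profile path shown in the usage note varies by OS.

```python
import json
from pathlib import Path

# Permissions that let an extension read arbitrary tabs -- the access a
# prompt-poaching extension needs. This risk list is illustrative.
BROAD_PERMISSIONS = {"tabs", "webRequest", "scripting", "cookies", "<all_urls>"}
ALL_HOSTS = ("*://*/*", "http://*/*", "https://*/*")

def risky_permissions(manifest: dict) -> list[str]:
    """Return the broad permissions a manifest requests (MV2 or MV3)."""
    requested = set(manifest.get("permissions", []))
    requested |= set(manifest.get("host_permissions", []))  # MV3 host access
    flagged = sorted(requested & BROAD_PERMISSIONS)
    if any(pattern in requested for pattern in ALL_HOSTS):
        flagged.append("all-hosts access")
    return flagged

def audit_profile(extensions_dir: Path) -> dict[str, list[str]]:
    """Map each installed extension ID to its flagged permissions."""
    report = {}
    # Layout: .../Extensions/<extension-id>/<version>/manifest.json
    for manifest_path in extensions_dir.glob("*/*/manifest.json"):
        manifest = json.loads(manifest_path.read_text(encoding="utf-8"))
        report[manifest_path.parts[-3]] = risky_permissions(manifest)
    return report
```

On Linux the default profile's extensions typically live under `~/.config/google-chrome/Default/Extensions/`; on Windows, under `%LOCALAPPDATA%\Google\Chrome\User Data\Default\Extensions\`. Broad permissions alone don't prove malice (a legitimate VPN extension needs wide host access), so treat the output as a review queue, not a verdict.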
