According to TechRadar, security researchers at Koi Security discovered two malicious AI-powered extensions on Microsoft’s official Visual Studio Code Marketplace. The extensions, “ChatGPT – 中文版” (“Chinese version”) and “ChatMoss,” had a combined total of over 1.5 million installs. They were part of a campaign dubbed “MaliciousCorgi” and exfiltrated sensitive user data to a third-party server in China. The mechanisms included real-time uploading of any opened file’s entire contents, a server-controlled command that could steal up to 50 workspace files, and hidden tracking iframes built on commercial analytics SDKs. Microsoft confirmed it is looking into the situation, but as of the report, the malicious add-ons were still available for download from its marketplace.
The breach mechanics
Here’s the thing that’s particularly chilling about this attack: its simplicity and audacity. We’re not talking about some complex, nation-state-level exploit. The moment you opened a file in your editor—just opened it—the extension would read the entire thing, encode it, and ship it off. Not snippets. The whole file. And that’s before the hidden server-controlled command that could grab dozens more files from your project workspace. It’s a stark reminder that even tools from an official marketplace, especially those riding the AI hype wave, can be wolves in sheep’s clothing. The use of commercial analytics SDKs inside a hidden iframe is another clever, scummy layer, quietly building profiles of developers’ behavior. It’s a full-spectrum surveillance operation wrapped in a “helpful” AI assistant.
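To make the “whole file on open” pattern concrete, here’s a minimal sketch of what such behavior boils down to. This is not the actual extension code: `buildExfilPayload` is a hypothetical name, and base64 is an assumed encoding (the report doesn’t specify one). In a real VS Code extension, a hook like `vscode.workspace.onDidOpenTextDocument` would fire this on every file you open and POST the result to a remote server.

```typescript
// Hypothetical sketch of the reported exfiltration pattern — illustrative only.
function buildExfilPayload(
  filePath: string,
  contents: string
): { path: string; data: string } {
  // Per the report, the *entire* file body is captured, not a snippet.
  // Base64 is assumed here; the actual encoding was not disclosed.
  return {
    path: filePath,
    data: Buffer.from(contents, "utf8").toString("base64"),
  };
}
```

The unsettling part is how little code this takes: a legitimate-looking event listener plus one network call, indistinguishable at a glance from ordinary telemetry.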
A marketplace trust problem
So, what does this say about Microsoft’s vetting process for the VS Code Marketplace? Racking up 1.5 million installs isn’t an overnight achievement; these extensions had time to operate and steal data at scale. The fact that they were still live after the findings were reported to BleepingComputer is a bad look. It points to a reactive, not proactive, security model for these storefronts. Developers assume a baseline of trust when installing from the official source, and this incident shatters that assumption. It’s a wake-up call: treat every extension, no matter its source, as a potential security risk. I think we’ll see a lot more scrutiny of AI tool extensions specifically now, and we probably should.
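If you want to do some of that scrutiny yourself, one cheap heuristic is to grep an extension’s bundled JavaScript (everything under `~/.vscode/extensions`) for hardcoded remote endpoints. The sketch below shows the core of that idea; `findEndpoints` is a hypothetical helper, and a crude regex like this will produce false positives (CDNs, docs links), so treat hits as leads to investigate, not verdicts.

```typescript
// Rough audit heuristic: surface hardcoded URLs in extension source.
// A sketch of the idea, not a vetting tool.
function findEndpoints(source: string): string[] {
  // Crude URL matcher — enough to flag obvious upload/telemetry targets.
  const urlPattern = /https?:\/\/[^\s"'`)<>]+/g;
  return source.match(urlPattern) ?? [];
}
```

Running something like this over a freshly installed extension won’t catch dynamically constructed or obfuscated URLs, but it would have flagged a plain `fetch` to an unfamiliar server immediately.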
The broader implications
This isn’t just about stolen code. Think about what lives in a developer’s workspace: API keys, database credentials, internal configuration files, proprietary algorithms, unreleased product code—the crown jewels of a company. For individual developers, it could be sensitive personal projects or client data. The exfiltration to a server in China adds a geopolitical dimension that many organizations’ security teams will find deeply concerning. And let’s be real: how many of those 1.5 million users will ever know their data was taken? The silent, automated nature of the theft is what makes it so effective and dangerous. It’s a perfect storm: a desirable tool (AI coding help), a trusted platform (Microsoft’s marketplace), and a complete betrayal of that trust.
