Dark LLMs Are Helping Petty Criminals, Not Cyber Masterminds


According to Dark Reading, three years after ChatGPT’s November 30, 2022 release sparked fears of AI-powered cyberattacks, the reality of “dark LLMs” is both concerning and underwhelming. Palo Alto Networks’ Unit 42 analyzed WormGPT 4 and KawaiiGPT, finding they help low-level hackers write rudimentary malware and phishing emails across language barriers. WormGPT 4 uses a tiered subscription model while KawaiiGPT offers free access and has reached over 500 registered users with about half being active. Both tools have Telegram communities with hundreds of subscribers, and KawaiiGPT’s creator claims it helps novice hackers through every attack step. Despite this activity, researchers lack hard evidence that these dark LLMs are having significant real-world impact because detecting AI-generated malware remains nearly impossible without attacker mistakes.


Democratizing cybercrime

Here’s the thing about these dark LLMs: they’re not creating cyber super-villains, but they are making basic cybercrime more accessible. WormGPT 4 can generate “hackneyed but grammatically flawless” ransom notes and basic file lockers, while KawaiiGPT drafts competent phishing messages and simple Python scripts for data exfiltration. They’re basically the cyber equivalent of giving someone who can barely cook a detailed recipe – they still need to follow instructions, but the barrier to entry drops significantly.

And that’s exactly what’s happening. Check Point’s Oded Vanunu notes that hackers are actively competing and developing tools that build on predecessors like WormGPT. There’s both a commercial market with paid subscriptions and private development where skilled actors build proprietary models integrated directly into their infrastructure. The market is flourishing, but it’s mainly helping people who couldn’t previously execute basic attacks rather than enabling sophisticated new threats.

Why AI malware still sucks

So why aren’t we seeing the AI cyber-pocalypse that everyone feared? Kyle Wilhoit from Unit 42 points to several fundamental limitations. LLMs still hallucinate, generating plausible-looking but factually incorrect code. The abstract knowledge needed to create fully functioning malware is difficult for these models to construct. Human oversight is still required to check for hallucinations or adapt to network specifics.

Basically, these tools are cribbing from existing malware samples available on the web rather than producing novel outputs. As Vanunu puts it, “advancement is slow because AI currently brings no new technological gap or advantage to the fundamental mechanics of the cyberattack process.” The most popular dark LLMs today are essentially sophisticated copy-paste machines with better grammar checking.

The silver lining for defenders

Now here’s some good news: because these AI tools are mainly repackaging known malware techniques, existing security measures still work. Andy Piazza from Unit 42 explains that “the vast majority of the dark-LLM generated malware is based on known malware samples, which means we have existing tools and signatures in place to detect the common malware techniques.”
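Piazza's point — that dark-LLM output largely recycles known malware samples — is exactly why plain signature matching still catches most of it. Here's a minimal sketch of the idea in Python, using made-up signature names and byte patterns for illustration; real engines like YARA use far richer rule languages (wildcards, conditions, entropy checks) than a substring scan:

```python
# Toy signature-based scanner. The signature names and byte patterns
# below are hypothetical examples, not real detection rules.
SIGNATURES = {
    "generic_filelocker": b"AES.new(key, AES.MODE_",   # common file-locker idiom
    "generic_exfil_post": b"requests.post(exfil_url",  # naive exfiltration idiom
}

def scan(sample: bytes) -> list:
    """Return the names of all known signatures found in a sample."""
    return [name for name, pattern in SIGNATURES.items() if pattern in sample]

print(scan(b"cipher = AES.new(key, AES.MODE_CBC, iv)"))  # flags the file-locker pattern
print(scan(b"print('hello world')"))                      # clean sample, no matches
```

The takeaway matches Piazza's quote: if the generated code reuses known techniques, the pattern is already in the defender's database, however fluent the surrounding phishing email is.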

But there’s a catch – researchers admit they lack the tools to reliably detect AI’s hand in malicious artifacts unless attackers make mistakes. This creates a weird situation where we know these tools are being used, but we can’t easily measure their impact or prevalence in the wild. It’s like knowing someone is using a new type of lockpick, but you can only tell when they leave fingerprints.

What this means for businesses

For enterprises, particularly those in manufacturing and industrial sectors, the current state of dark LLMs is both reassuring and concerning. While sophisticated AI-powered attacks remain unlikely, the lowered barrier to entry means more attackers can attempt basic intrusions. Companies relying on industrial computing systems need security that can handle an increased volume of basic attacks, even unsophisticated ones — attack attempts are becoming more numerous, if not more clever.

The bottom line? We’re not in a cyber-AI arms race yet. These tools are helping petty criminals more than they’re enabling new threats. But the democratization of basic cybercrime means organizations need to maintain strong fundamentals – because sometimes the most dangerous attacker isn’t the smartest one, but the one who just got access to tools they couldn’t previously use.
