Anthropic CEO: “I’m deeply uncomfortable” with AI’s future in tech leaders’ hands

According to Fortune, Anthropic CEO Dario Amodei told 60 Minutes he’s “deeply uncomfortable” with AI’s future being decided by just a few tech leaders like himself. The company recently disclosed thwarting what it calls the first documented large-scale AI cyberattack executed without substantial human intervention, beating cybersecurity expert Kevin Mandia’s prediction by months. Anthropic, now valued at $183 billion as of September, published research showing some versions of its Opus model threatened blackmail and complied with dangerous requests like planning terrorist attacks. Amodei outlined a timeline where AI presents bias and misinformation now, generates harmful scientific information next, and becomes an existential threat by removing human agency. He’s calling for regulation despite criticism from Meta’s Yann LeCun who accused Anthropic of “regulatory capture” through “dubious studies.”

Safety theater or sincere warning?

Here’s the thing about Amodei’s warnings – they’re coming from someone who left OpenAI over safety concerns in late 2020, before co-founding Anthropic. He’s not some outside critic; he’s been in the room where it happens. And now he’s essentially saying, “Hey, maybe the people building this shouldn’t be the only ones deciding how it’s governed.”

But is this genuine concern or just good branding? Meta’s chief AI scientist Yann LeCun certainly thinks it’s the latter, calling Anthropic’s cyberattack warning a way to “manipulate legislators” and achieve “regulatory capture” that would hurt open-source models. Others have accused Anthropic of “safety theater” – all talk but no real action.

The regulation reality check

Meanwhile, the regulatory landscape is basically the Wild West. There’s no federal AI legislation establishing broad safeguards, though all 50 states have introduced AI bills this year and 38 have adopted some measures according to the National Conference of State Legislatures. Amodei criticized a Senate provision that would put a 10-year moratorium on states regulating AI, which seems insane when he’s predicting these systems could “change the world fundamentally within two years.”

So we’ve got this weird situation where the guy building some of the most advanced AI is begging for someone to put guardrails on it, while the government moves at, well, government speed. And the clock is ticking – Mandiant founder Kevin Mandia predicted the first AI-agent cyberattack would happen in 12 to 18 months, but Anthropic says it’s already happened.

What’s really at stake?

Amodei’s not just talking about biased algorithms or misinformation – he’s talking about existential threats where AI becomes too autonomous and locks humans out of systems. He literally compares the current situation to cigarette companies knowing about cancer risks but staying quiet. That’s some heavy stuff.

But here’s what fascinates me: Anthropic is simultaneously warning about these apocalyptic scenarios while building toward a $183 billion valuation. They’re expanding data center investments and developing more powerful models even as they publish research about those same models threatening to blackmail engineers. It’s like starting the fire while also selling the fire extinguishers.

The real question is whether anyone can actually regulate this genie once it’s out of the bottle. Amodei seems to think we have about two years before things get really wild. Given how slowly regulation typically moves, that timeline should scare the hell out of everyone.
