According to Forbes, California has passed significant AI legislation requiring large AI developers to share frameworks for frontier models and disclose catastrophic risk assessments to state agencies. The bill, championed by California state Senator Scott Wiener and signed by Governor Gavin Newsom, is a scaled-back version of an earlier attempt and attracted limited industry support during legislative debate. Anthropic was initially the only major AI company to formally endorse the bill, though OpenAI later expressed support after participating in amendment discussions. The legislation has sparked conflict between Anthropic and White House AI and Crypto Czar David Sacks, who accused the company of pursuing a “regulatory capture strategy based on fear-mongering.” This tension highlights the broader challenge facing AI companies as they navigate conflicting regulatory approaches from state legislatures and federal authorities.
California’s Regulatory Gravity
California’s position as the nation’s de facto technology regulator isn’t accidental—it’s the product of decades of precedent. The state’s market size, combined with its role as home to Silicon Valley, gives its regulations disproportionate influence. When California sets standards for artificial intelligence safety or data privacy, companies often adopt them nationwide rather than maintaining separate compliance frameworks for different states. This “California effect” has previously shaped national policy on issues from vehicle emissions to consumer privacy, and AI appears to be following the same pattern. The state’s willingness to regulate emerging technologies before federal consensus emerges creates a powerful centripetal force that pulls national policy toward California’s preferences.
Federal Paralysis and Its Consequences
The absence of comprehensive federal AI legislation creates a vacuum that states are rushing to fill. Unlike Europe, with its AI Act, or China, with its centralized approach, the U.S. faces a fragmented regulatory landscape where companies must comply with potentially contradictory requirements across multiple jurisdictions. This patchwork approach increases compliance costs disproportionately for smaller startups, potentially cementing the dominance of established players who can afford sophisticated legal and compliance teams. The failed attempt to include preemption language in broader legislation signals that federal lawmakers recognize the problem but lack the political consensus to solve it. With Congress unlikely to pass sweeping AI regulation given current political divisions, companies face years of regulatory uncertainty.
Industry Schisms and Deeper Divides
The public disagreement between Anthropic and the White House reflects fundamental philosophical divides within the AI industry itself. Companies pursuing more cautious, safety-focused approaches naturally align with stricter regulatory frameworks, while those prioritizing rapid innovation tend to resist oversight. This isn’t merely a business strategy difference—it reflects genuine disagreement about AI’s risks and development timeline. The tension becomes particularly acute when companies’ regulatory positions appear to align with their competitive positioning. When safety-focused companies advocate for regulations that might disadvantage faster-moving competitors, it creates perception challenges even when the policy arguments have merit.
Compliance Realities Emerging
For AI companies operating nationally, California’s legislation creates immediate practical challenges. The requirement to share “frameworks for frontier models” and conduct catastrophic risk assessments demands significant technical and legal resources. Companies must now develop internal processes for these disclosures while navigating uncertain definitions around what constitutes a “frontier model” or “catastrophic risk.” The legislation’s focus on larger developers creates a regulatory threshold that may influence corporate structure decisions—some companies might deliberately stay below certain size metrics to avoid compliance burdens. This could ironically slow innovation by creating disincentives for growth.
Future Regulatory Battlefields
Looking ahead, several other states are likely to follow California’s lead with their own AI legislation, creating an increasingly complex compliance landscape. New York, Illinois, and Washington have all shown interest in AI regulation, potentially creating conflicting requirements around issues like algorithmic transparency, bias testing, and liability. The situation mirrors what happened with data privacy laws, where companies now navigate a growing patchwork of state-level regimes. The ultimate resolution may come through court challenges rather than legislation, as companies test the boundaries of state authority to regulate technologies with national security implications. Until then, AI companies face the unenviable task of pleasing multiple masters with fundamentally different priorities.