EU May Water Down AI Act After US Pressure

According to Fortune, the European Union is considering major delays to its landmark AI Act following pressure from U.S. tech giants and the Trump administration. The Financial Times reports seeing a draft proposal that would give companies deploying high-risk AI systems a one-year “grace period” before enforcement begins. Penalties for transparency violations could be postponed until August 2027, giving firms “sufficient time” to adjust. The European Commission is internally discussing these changes ahead of an expected November 19 adoption date, though no formal decision has been made. Commission spokesperson Thomas Regnier confirmed that “various options are being considered” regarding potential implementation delays.

US pressure mounting

Here’s the thing: this isn’t just about European companies complaining. An unnamed EU official told the Financial Times that Brussels has been actively “engaging” with the Trump administration on these potential adjustments. That’s significant: it amounts to direct U.S. government intervention in European regulatory policy. Remember when U.S. Vice President J.D. Vance publicly warned about “excessive regulation” at the Paris AI Summit? That wasn’t just talk – it appears to be part of a coordinated effort to soften Europe’s approach.

What’s actually changing?

The AI Act itself isn’t being rewritten – at least not yet. We’re talking about implementation delays. High-risk AI systems in healthcare, policing, and employment would get an extra year before facing enforcement. Transparency requirement penalties would wait until 2027. But let’s be honest: delayed enforcement often becomes de facto policy change. Companies know that by the time these deadlines roll around, there might be more delays, or the political landscape could shift entirely. It’s basically kicking the can down the road when everyone knows the road might disappear.

Tech company complaints

Meta, Alphabet, and other tech giants have been warning that the AI Act’s broad “high-risk” definitions could stifle innovation. They argue compliance costs are too high, especially for smaller developers. And you know what? They’re not entirely wrong about the bureaucratic burden. But here’s my question: when has “self-regulation” ever worked with Big Tech? We’ve seen this movie before with privacy, with content moderation, with competition. The pattern is always the same – complain about regulation, get delays, then complain some more.

Broader simplification push

The European Commission is framing this as part of its wider “simplification agenda” to create a “more favorable business environment.” They opened a call for evidence in September to gather research on simplifying data, cybersecurity, and AI rules. That sounds reasonable, right? Who doesn’t want simpler regulations? But there’s a fine line between simplification and gutting meaningful protections. The timing here is suspicious – just as global AI competition heats up and U.S. pressure intensifies.

Global implications

This matters far beyond Europe’s borders. The AI Act was supposed to set the global standard, much like GDPR did for privacy. Now we’re seeing the first major retreat. The Trump administration’s light-touch approach is gaining influence, while Europe’s precautionary principle appears to be weakening. And in the background, China continues its own aggressive AI development with very different regulatory priorities. We’re basically watching the opening moves in what will become a decades-long global AI governance struggle. The question isn’t whether regulation will happen – it’s whose approach will dominate.
