Europe Stands Firm on AI Liability Rules Amid Trump Administration Pressure
The EU denies scrapping AI liability rules due to US pressure, citing competitiveness goals, as transatlantic tech tensions escalate.
The European Union has firmly denied allegations that its decision to abandon the AI Liability Directive resulted from pressure by the Trump administration, instead framing the move as part of a broader strategy to streamline regulations and enhance tech sector competitiveness. The development coincides with heightened transatlantic friction over AI governance, as U.S. Vice President JD Vance criticized EU tech rules at the Paris AI Summit.
EU Prioritizes Competitiveness Over Liability Framework
The European Commission confirmed the withdrawal of the 2022 AI Liability Directive in its 2025 work program, emphasizing a shift toward reducing bureaucratic hurdles for AI innovation. Digital Chief Henna Virkkunen stated the bloc will instead focus on implementing its AI Act and associated codes of practice, which outline lighter reporting requirements for AI developers.
The scrapped directive aimed to make it easier to sue over harm caused by AI systems, such as discriminatory hiring algorithms. Critics like the Centre for Democracy and Technology warn that its removal weakens accountability mechanisms, calling it a “significant setback” for victims of AI-related harms.
US-EU Tech Policy Divide Widens
The decision follows direct appeals from Trump administration officials urging Europe to ease AI restrictions. During the Paris summit, Vance argued that strict regulations risk “paralyzing one of the most promising technologies,” while pledging to challenge foreign tech rules affecting U.S. firms.
| Key Regulatory Conflict | EU Position | US Position |
|---|---|---|
| AI Liability Rules | Directive withdrawn | Opposed |
| Content Moderation (DSA) | Enforcing | Views as censorship |
| Market Dominance (DMA) | Investigating | Views as anti-competitive |
Major U.S. tech firms face mounting EU scrutiny:
- Meta fined €1.2B in 2023 for GDPR data-transfer violations
- Apple ordered to pay €13B in back taxes
- Google facing ongoing antitrust probes
Enforcement Flexibility Sparks Debate
While EU lawmakers urge steadfast enforcement of laws like the Digital Services Act (DSA) and Digital Markets Act (DMA), analysts note the Commission retains discretion in applying these rules. For instance, platforms can adjust content moderation policies without automatic penalties if overall risk mitigation efforts appear sufficient.
The EU’s AI Act now carries greater weight following the liability directive’s withdrawal, with binding requirements for high-risk AI systems already taking effect. A revised Product Liability Directive also modernizes compensation rules for defective tech products.
Reactions to the Policy Shift
- Enza Iannopollo (Forrester): “The AI Act remains the regulatory cornerstone – firms must stay focused on compliance”
- Laura Lazaro Cabrera (CDT): “Withdrawal undermines rights protections, prioritizing innovation over accountability”
As Europe mobilizes €300B for AI development through initiatives like the Paris summit, the regulatory recalibration signals a strategic pivot toward nurturing homegrown tech champions while managing geopolitical pressures.