Trump’s AI Strategy Exposes Europe’s Strategic Ambiguity


In a move to ensure U.S. global leadership in AI, on March 20, 2026, the Trump Administration announced a comprehensive AI strategy.[1] The strategy is designed to advance AI innovation across all sectors, with an emphasis on speed, free speech, and minimal regulation. For Europe, it is a double-edged sword: while AI innovation advances at lightning speed in the U.S., Europe faces a threat to its digital sovereignty, regulatory authority, and citizens’ personal data.

Europe often views U.S. AI policy through a familiar lens: light-touch regulation, market primacy, and a tolerance for risk. But beneath this surface lies a deliberate strategic logic: artificial intelligence is treated as a general-purpose technology where scale and coherence determine who wins, not just who regulates.

The Trump administration’s framework is not ideological laissez-faire – it is a blueprint for consolidating advantage across compute, data, and models, pre-empting state-level fragmentation while leaving room for targeted safeguards on fraud, consumer protection, and children.[2]

The controversy over federal pre-emption highlights a classic tension in high-stakes technologies. Critics, including U.S. Representatives Don Beyer, Doris Matsui, Ted Lieu, Sara Jacobs, and April McClain Delaney, along with Senator Brian Schatz, warn of a potential “regulatory vacuum” that leaves Americans exposed.[3] On March 20, 2026, they introduced the GUARDRAILS Act (Guaranteeing and Upholding Americans’ Right to Decide Responsible AI Laws and Standards Act), which aims to repeal the pre-emption provisions of the executive order. The lawmakers’ statements make clear their concern that the administration is pre-empting state AI laws “without setting even minimally acceptable federal guardrails,” exposing the country to AI-related risks in areas such as safety, bias, consumer protection, and democracy, and creating a “regulatory vacuum.” Beyer called the result a “lawless Wild West” for AI companies, while Schatz called it “absurd and dangerous.”[4] Yet the administration sees the greater threat differently: fragmented compliance across fifty different legal regimes risks stalling innovation, slowing deployment, and undermining the creation of foundational scale. In technologies with high fixed costs and network effects, early scale is not a reward for success; it is a prerequisite.


Europe’s model: normative power without industrial depth


Europe, by contrast, has excelled at regulatory sophistication. The Artificial Intelligence Act, the GDPR, the Digital Services Act, and the Data Act have created a global standard-setting machine. Through this legislation, the EU has constructed a dense regulatory framework grounded in risk management, fundamental rights, and market oversight.

The “Brussels effect” has extended European standards beyond the Union’s borders, shaping corporate behaviour globally. Yet this success has also created a form of strategic inertia.

Europe has become highly effective at governing digital markets, but less effective at shaping their underlying industrial structure. Fragmented investment across member states, underdeveloped cloud and semiconductor infrastructure, and reliance on non-European capital leave Europe able to dictate rules but not yet to enforce them through operational capability.

This structural asymmetry is particularly evident in cloud infrastructure. U.S. hyperscalers (Amazon Web Services, Microsoft Azure, and Google Cloud) control roughly 65–75% of Europe’s cloud market.[5] Beyond switching costs, this dependence limits Europe’s technological sovereignty, amplifies foreign influence, and grants these providers economic and information leverage over strategic systems. Lobbying efforts to maintain this status quo, while framed as reassurance to European businesses, are in reality an effort to slow European industrial capacity-building. Emphasising the difficulty of change risks entrenching the status quo. It shifts the policy discussion from how to build capacity to why capacity cannot be built.

In this sense, the debate is not purely technical. It is political.

From regulation to industrial policy, quietly

Recent European discussions suggest an awareness of this shift.

The European Commission’s so-called “Omnibus” agenda, aimed at simplifying overlapping regulatory requirements, reflects concern that cumulative compliance burdens may be weighing on competitiveness.[6] While not AI-specific, it signals a broader recognition that regulation has opportunity costs.

More striking are emerging proposals that move beyond regulation altogether.

The French company Mistral AI has suggested the introduction of a levy on providers offering AI or cloud services within the EU.[7] The rationale is not punitive, but redistributive: to capture a share of the value generated by dominant providers and redirect it towards European capacity-building.

Such proposals would once have been politically marginal. Their growing visibility reflects a subtle but important shift: from market-correcting regulation to market-shaping intervention.

Yet Europe’s approach remains, at present, incomplete. Regulation continues to operate largely independently of industrial policy. Initiatives to support infrastructure, whether through federated cloud projects or targeted investment, remain relatively modest in scale. Meanwhile, debates oscillate between calls for openness and warnings against protectionism.

The result is a form of strategic ambiguity. Europe is neither fully committed to building autonomous capacity nor entirely reconciled to dependence. It seeks to preserve openness while mitigating its consequences, to regulate markets while remaining embedded within them. This may prove difficult to sustain.

What the U.S. debate reveals

The internal U.S. debate is instructive precisely because it makes these trade-offs explicit. Critics of the Trump administration emphasise the need for safeguards, accountability and democratic oversight. Supporters emphasise coherence, scale and speed.

Both are, in different ways, correct.

But the structure of the American system allows these tensions to play out within a framework that assumes continued industrial leadership.

Europe faces a different challenge. It must resolve similar tensions while simultaneously building the capacity that would make them less constraining.

The central issue, ultimately, is temporal. The U.S. approach assumes that governance can evolve alongside deployment. Europe’s approach assumes that governance should precede it.

Neither assumption is universally valid. But in technologies characterised by rapid iteration and path dependence, early advantages can become entrenched. Markets consolidate. Standards emerge. Ecosystems form.

At that point, regulation becomes less a tool of design than one of adjustment.

Conclusion: beyond the comfort of rules

The contrast between the U.S. and European approaches is often framed in normative terms: innovation versus precaution, speed versus safety.

In reality, it is about different theories of how power is created and sustained in technological systems.

The United States is aligning its governance model with an existing base of industrial and technological strength. Europe is attempting to extend regulatory influence into domains where its capacity is still developing.

Bridging that gap will require more than incremental reform. It will require a clearer acceptance that regulation, while necessary, is not sufficient, and that capability must be built, not assumed.

The question for Europe is no longer whether it can shape the rules of the AI age.

It is whether it can shape the system to which those rules apply.


[1] “President Donald J. Trump Unveils National AI Legislative Framework,” The White House, March 20, 2026, available at https://www.whitehouse.gov/articles/2026/03/president-donald-j-trump-unveils-national-ai-legislative-framework/

[2] Executive Order 14365, “Ensuring a National Policy Framework for Artificial Intelligence” (signed December 11, 2025), White House. The order establishes a “minimally burdensome national policy framework” for AI, directing an AI Litigation Task Force to challenge conflicting state laws on grounds including interstate commerce and pre-emption, while preserving limited state authority (e.g., child safety). Available at: https://www.whitehouse.gov/presidential-actions/2025/12/eliminating-state-law-obstruction-of-national-artificial-intelligence-policy/.

[3] U.S. Representatives Sara Jacobs, Don Beyer, Doris Matsui, Ted Lieu, and April McClain Delaney (with companion Senate bill by Sen. Brian Schatz) introduced the GUARDRAILS Act (March 2026) to repeal the executive order’s effective moratorium on state AI policies, arguing it preempts safeguards without federal replacements and creates a regulatory vacuum. See press release: https://sarajacobs.house.gov/news/press-releases/jacobs-beyer-matsui-lieu-mcclain-delaney-introduce-legislation-to-repeal-white-house-ai-moratorium.

[4] “Beyer, Matsui, Lieu, Jacobs, McClain Delaney Introduce Legislation to Repeal White House AI Moratorium” available at: https://beyer.house.gov/news/documentsingle.aspx?DocumentID=9009

[5] Synergy Research Group data (2025) indicates U.S. hyperscalers (AWS, Azure, Google Cloud) hold approximately 70% of the European cloud market, with local European providers at around 15%. This aligns with broader estimates of 65–75% dominance in enterprise cloud infrastructure services in Europe. See: https://www.srgresearch.com/articles/european-cloud-providers-local-market-share-now-holds-steady-at-15 (and related 2025 reports).

[6] The European Commission’s “Omnibus” packages (multiple in 2025–2026, starting with Omnibus I on sustainability reporting and due diligence in February 2025) aim to simplify overlapping rules, reduce administrative burdens by billions of euros, and boost competitiveness. This includes ten omnibus proposals in 2025 cutting recurrent costs by €11.9–15 billion. See: European Commission, Simplification, and related 2025–2026 work programmes. Available at: https://commission.europa.eu/law/law-making-process/better-regulation/simplification-and-implementation/simplification_en

[7] Mistral AI CEO Arthur Mensch proposed (March 20, 2026) a revenue-based levy (1–1.5%) on commercial AI/cloud providers operating in the EU, to fund cultural sectors and content creation, applied equally to foreign providers. See op-ed in Financial Times, available at https://www.ft.com/content/d63d6291-687f-4e05-8b23-4d545d78c64a, and Le Monde: https://www.lemonde.fr/en/international/article/2026/03/20/mistral-ceo-demands-eu-ai-levy-to-pay-cultural-sector_6751643_4.html

Cristina Vanberghen
Prof. Dr. Cristina Vanberghen, Faculty of International Relations, Yerevan State University.