The Governance Gap
Artificial intelligence is developing faster than the institutions designed to govern it. Machine learning systems are being deployed in hiring decisions, loan approvals, criminal sentencing recommendations, medical diagnostics, and military targeting — often with limited transparency and minimal regulatory oversight. Governments around the world are now scrambling to catch up, but they are doing so with very different philosophies, priorities, and legal traditions.
The European Union: The Rights-Based Model
The EU's Artificial Intelligence Act, which entered into force in 2024, represents the world's most comprehensive binding AI regulation to date. Built on a risk-tiered framework, it:
- Bans certain AI applications outright, including social scoring systems and real-time remote biometric identification in publicly accessible spaces (subject to narrow law-enforcement exceptions)
- Imposes strict requirements on "high-risk" AI used in critical infrastructure, employment, education, and law enforcement
- Requires transparency disclosures for AI systems that interact with the public
- Establishes enforcement through national market surveillance authorities and an EU AI Office
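The tiered structure above can be sketched in code. The sketch below is purely illustrative: the tier names, the example domain strings, and the lookup-table design are my own simplifications, not the Act's legal text, which defines categories in its articles and annexes rather than in any simple mapping.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative tiers loosely mirroring the AI Act's structure."""
    PROHIBITED = "prohibited"      # banned outright, e.g. social scoring
    HIGH_RISK = "high-risk"        # strict requirements, e.g. employment
    TRANSPARENCY = "transparency"  # disclosure duties, e.g. public chatbots
    MINIMAL = "minimal"            # largely unregulated

# Hypothetical mapping of application domains to tiers;
# the real Act enumerates these in far more detail.
DOMAIN_TIERS = {
    "social_scoring": RiskTier.PROHIBITED,
    "realtime_public_biometrics": RiskTier.PROHIBITED,
    "employment_screening": RiskTier.HIGH_RISK,
    "critical_infrastructure": RiskTier.HIGH_RISK,
    "public_chatbot": RiskTier.TRANSPARENCY,
}

def classify(domain: str) -> RiskTier:
    """Return the tier for a domain, defaulting to minimal risk."""
    return DOMAIN_TIERS.get(domain, RiskTier.MINIMAL)
```

The design choice worth noting is the default: anything not explicitly listed falls into the minimal tier, which mirrors the Act's logic that obligations scale with enumerated risk rather than applying to all AI uniformly.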
The EU approach prioritizes fundamental rights and precautionary governance. Critics argue it may impede innovation and put European AI companies at a competitive disadvantage relative to less-regulated rivals.
The United States: Sectoral and Voluntary Approaches
The United States has not enacted comprehensive federal AI legislation. Instead, regulation has proceeded through existing agencies applying existing authorities — the FTC on consumer protection, the EEOC on employment discrimination, the FDA on medical AI — supplemented by executive orders and voluntary industry commitments.
This fragmented approach reflects American regulatory culture and deep political disagreement about the proper role of federal intervention in technology markets. Proponents argue it is flexible and avoids stifling innovation. Critics contend it leaves critical gaps — particularly for high-stakes applications that fall between regulatory silos.
China: Strategic Development with State Oversight
China has adopted a distinctive model: aggressive state support for AI development as a strategic national priority, combined with targeted regulations on specific applications. Beijing has issued regulations on algorithmic recommendation systems, deepfakes, and generative AI — focused partly on preventing content that challenges political stability. The state maintains strong oversight over data flows and AI deployment, particularly in sensitive sectors.
China's approach treats AI governance as inseparable from national competitiveness and domestic security imperatives, a framing quite different from the rights-centered EU model or the market-led U.S. approach.
The United Kingdom: Pro-Innovation Flexibility
Post-Brexit, the UK has positioned itself as offering a more permissive regulatory environment than the EU. Rather than creating a new AI-specific regulator, the UK government has asked existing sector regulators to apply AI oversight within their domains, guided by cross-cutting principles rather than binding rules. This approach bets on agility and industry-friendliness but may produce inconsistent protection across sectors.
Key Unresolved Questions
Regardless of jurisdiction, regulators grapple with similar fundamental challenges:
- Transparency and explainability — how to require meaningful disclosure from systems whose decision logic is often opaque even to their developers
- Liability — who is legally responsible when an AI system causes harm
- Frontier models — how to govern the most capable general-purpose AI systems before their full capabilities and risks are understood
- International coordination — preventing "races to the bottom" where development migrates to the least regulated jurisdiction
Why the Choices Made Now Matter
AI governance decisions made in the next few years will shape the development trajectory of a technology with transformative potential across every sector of society. The norms, standards, and legal frameworks established now — or notably absent — will be path-dependent, difficult to reverse, and deeply consequential. For citizens, understanding what their governments are and are not doing to govern AI is no longer optional background knowledge. It is a civic necessity.