
2026 Guide to AI Regulations and Policies in the US, UK, and EU

11 min read

Introduction

The rapid evolution of Artificial Intelligence (AI), particularly Generative and Agentic AI, over the past year has opened up new opportunities across every industry. According to data from Microsoft, 78% of enterprises have now adopted AI technologies, generating an average of $3.70 in ROI for every dollar invested. But in the wrong hands, AI can be used to perpetrate fraud, discrimination, and disinformation, stifle healthy competition, disenfranchise workers, and even threaten national security. These risks are amplified as AI is increasingly integrated with critical infrastructure.

In response, several countries around the world, including the United States, the European Union, and the United Kingdom, have moved from policy discussions to enforcement reality. The EU is now actively penalizing non-compliance, the UK is transitioning from principles to binding legal frameworks, and the US is navigating a fundamental shift in policy between innovation-focused deregulation and state-level protective measures.

Here is a quick overview of the key regulations being implemented in these three regions.

AI Regulations in the EU: The European Union’s Artificial Intelligence Act

The European Parliament adopted the Artificial Intelligence Act in March 2024. The Act establishes a uniform, technology-neutral definition of AI that can be applied to future AI systems. Furthermore, it aims to ensure that AI systems within the EU are safe, transparent, traceable, non-discriminatory, environmentally friendly, and overseen by people rather than by automation. The law employs a risk-based approach, with varying requirements depending on the level of risk.

Risk level definition: The Act sets obligations for providers and users according to an AI system’s risk level; the two most heavily regulated tiers are described below (a simplified classification sketch follows these lists):

  • Unacceptable Risk AI Systems - These are considered a clear threat to people and are banned:
    • Cognitive behavioral manipulation of people or vulnerable groups
    • Social scoring or segmentation of people based on behavior, personal characteristics or socio-economic status
    • Biometric categorization of people based on sensitive characteristics
    • Real-time remote biometric identification in publicly accessible spaces, such as live facial recognition (subject to narrow law-enforcement exceptions)
  • High Risk AI Systems - AI systems that can negatively impact fundamental rights and/or the safety of people:

    AI systems used in products covered by the EU’s product safety legislation, such as toys, aviation devices and systems, cars, medical devices, and elevators.

    AI systems in specific areas that have to be registered with an EU database:

    • Systems used for managing or operating critical infrastructure
    • Systems used for educational and vocational training
    • Those used in employment, worker management, and access to self-employment
    • Those involved in access to and use of essential private and public services and benefits
    • Law enforcement systems
    • Systems involving migration, asylum, and border control management
    • Those providing legal interpretation and application of laws
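
To make the tiered structure concrete, here is a minimal Python sketch of how a GRC team might triage AI use cases against these tiers. The tier names come from the Act itself, but the keyword rules and the triage helper are purely illustrative assumptions, not the Act’s actual legal tests.

```python
# A minimal, illustrative sketch (not legal advice) of triaging AI use cases
# into the EU AI Act's risk tiers. The tier names follow the Act; the keyword
# matching below is a deliberately simplified assumption, not the Act's tests.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "registration, conformity assessment, human oversight"
    LIMITED = "transparency obligations (e.g., disclosing AI-generated content)"
    MINIMAL = "no additional obligations"

# Hypothetical keyword-based triage; a real assessment requires legal review
# of the prohibited practices and the high-risk areas listed above.
PROHIBITED_TERMS = {"social scoring", "cognitive manipulation",
                    "real-time biometric identification"}
HIGH_RISK_TERMS = {"critical infrastructure", "education", "employment",
                   "essential services", "law enforcement", "migration", "justice"}

def triage(use_case: str) -> RiskTier:
    """Map a free-text use-case description to an indicative risk tier."""
    text = use_case.lower()
    if any(term in text for term in PROHIBITED_TERMS):
        return RiskTier.UNACCEPTABLE
    if any(term in text for term in HIGH_RISK_TERMS):
        return RiskTier.HIGH
    return RiskTier.MINIMAL  # LIMITED-tier checks omitted in this sketch

print(triage("Resume screening for employment decisions"))  # RiskTier.HIGH
```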

Transparency requirements – While the Act does not classify Generative AI as high risk, it mandates transparency requirements and compliance with EU copyright laws:

  • Disclosures that state the content was generated by AI
  • Designing the model to prevent it from generating illegal content
  • Publishing summaries of copyrighted data used for training

Supporting Innovation – The Act aims to help startups and small to medium-sized businesses leverage AI by providing opportunities to develop and train AI algorithms before public release. National authorities must provide companies with testing environments that simulate real-world conditions.

In 2025, the EU finalized the Code of Practice for general-purpose AI (GPAI), which serves as the framework for compliance with the Act’s transparency, safety, and copyright measures. Non-compliance with the EU AI Act carries severe penalties: up to 35 million euros or 7% of global annual turnover, whichever is higher.
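
As a quick illustration of how that penalty ceiling works in practice, here is a minimal sketch; the turnover figure is invented for the example.

```python
# Illustrative only: the EU AI Act's top-tier fine for the most serious
# violations is the higher of EUR 35 million or 7% of global annual turnover.

def max_eu_ai_act_fine(global_annual_turnover_eur: float) -> float:
    """Ceiling for the most serious violations: whichever amount is higher."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# Hypothetical firm with EUR 2 billion in global annual turnover:
print(f"EUR {max_eu_ai_act_fine(2_000_000_000):,.0f}")  # EUR 140,000,000
```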

Key Implementation Dates for the EU AI Act

July 12, 2024: AI Act published in the Official Journal of the EU

August 1, 2024: The AI Act officially entered into force (20 days after publication)

February 2, 2025 (6 months after entry into force): Bans on unacceptable risk AI systems (e.g., social scoring, certain biometric systems) take effect. AI literacy requirements begin

August 2, 2025 (12 months after entry into force): Rules for General-Purpose AI (GPAI) models apply. Governance structures, penalties, and confidentiality rules start applying

August 2, 2026 (24 months after entry into force): The majority of the AI Act's provisions become applicable. Obligations for most high-risk AI systems (Annex III) come into force

August 2, 2027 (36 months after entry into force): Rules for high-risk AI systems that are safety components of products (e.g., in medicine, aviation) apply

This phased approach rolls out obligations for different AI categories over time, with full applicability expected by around 2027.

Notable Developments Since the EU AI Act Became Law

  • Establishment of a European AI Office: The Commission set up the European Artificial Intelligence Office to coordinate implementation and oversee GPAI obligations; the decision establishing it entered into force on 21 February 2024.
  • Code of Practice Uptake & Industry Reaction: The GPAI Code of Practice is voluntary and intended to give legal clarity for signatories; some major companies (e.g., Meta) publicly declined to sign the code, citing concerns about legal uncertainty and scope, while others indicated they would sign. There were also calls from many large firms for delays to implementation. This is an important practical development for industry compliance choices.
  • Enforcement Nuance: While several GPAI rules and transparency obligations apply from August 2, 2025, practical enforcement and supervisory arrangements are being phased in, with national regulators and the AI Office progressively taking on enforcement tasks. Expect cross-border coordination via the European AI Board.

AI Regulations in the UK: United Kingdom’s Response to the White Paper Consultation on Regulating Artificial Intelligence

In February 2024, the UK Government published its response to the 2023 white paper consultation on AI regulation. Its pro-innovation stance on AI follows an outcomes-based approach, focusing on two key characteristics – adaptivity and autonomy – that guide domain-specific interpretation.

It provides preliminary definitions for three types of powerful AI systems that are integrated into downstream AI systems:

  • Highly capable GPAI – large language models fall into this category. These are foundational models that can carry out a wide range of tasks. Their capabilities can range from basic to advanced and can even grow to outpace the most advanced models in use currently.
  • Highly Capable Narrow AI – these can carry out a limited range of tasks within a specific field or domain and can meet or outpace the most advanced models in use today within those domains.
  • Agentic AI – this is an emerging subset of AI technology that can complete numerous sequential steps over long periods of time using tools like the Internet and narrow AI models.

It sets out five cross-sectoral principles for regulators to use when driving responsible AI design, development, and application:

  • Safety, security, and robustness
  • Appropriate transparency and explainability
  • Fairness
  • Accountability and governance
  • Contestability and redress

The principles are to be implemented on the basis of three foundational pillars:

  • Working with existing regulatory authorities and frameworks – The UK will not be instituting a separate AI regulator. Instead, existing regulatory offices such as the Information Commissioner's Office (ICO), Ofcom, and the FCA will implement the five principles as they oversee their respective domains and use existing laws and regulations. They are expected to quickly implement the AI regulatory framework within their domains. Their strategy must include an overview of the steps taken to align their AI plans with the principles defined in the framework, an analysis of AI-related risks, and an overview of their ability to manage these risks.
  • Creating a central function for risk monitoring and regulatory coordination – The UK has set up a central function within the Department for Science, Innovation and Technology (DSIT) to monitor and evaluate AI risks and address gaps in the regulatory environment, since AI opportunities and risks cannot be addressed in isolation.
  • Fostering innovation via a multi-agency advisory service – A multi-regulator advisory service, the AI and Digital Hub, will be launched to help innovators ensure legal and regulatory compliance before they launch their products.

Key Developments in UK AI Regulation in 2025

A private member’s bill, the Artificial Intelligence (Regulation) Bill, was reintroduced in the House of Lords on 4 March 2025. If passed, the Bill would create a central “AI Authority” (or at least mandate its establishment) to oversee AI governance, define obligations for businesses using AI, and ensure adherence to principles such as safety, accountability, and governance.

The Bill also proposes other measures:

  • requiring businesses that develop or deploy AI to designate an AI officer responsible for compliance
  • enabling the establishment of AI sandboxes – controlled testing environments with regulatory supervision – allowing innovators to experiment under relaxed rules while maintaining oversight

Key concerns around the Bill’s implementation include:

  • As of now (December 2025), the Bill remains a private member’s bill; it lacks clear government backing, and its passage into law is far from guaranteed.
  • Recent public reporting suggests that the government may delay AI regulation while it prepares a more comprehensive, government-backed AI bill, likely addressing safety, copyright, transparency, and broader governance. The delay could push such a bill into the next parliamentary session, possibly not until 2026 or later.
  • The issue of copyright and data-use transparency for AI training remains unresolved. For instance, amendments proposed in the private member’s bill (and related data bills) that would force AI firms to disclose their use of copyrighted content have been blocked or stripped out during parliamentary processes, signalling the government’s resistance to imposing such requirements in piecemeal fashion.

AI Regulations in the United States: America’s AI Action Plan (July 2025)

In July 2025, the Trump Administration unveiled Winning the Race: America’s AI Action Plan (the Plan), a comprehensive federal strategy to secure U.S. leadership in AI by accelerating innovation, building infrastructure such as data centers, promoting U.S. AI exports, and deregulating to boost development. The Plan sets out more than 90 federal actions, with key themes including exporting American AI technology, easing data center permitting, reducing ideological bias (“woke AI”), and strengthening biosecurity, all aimed at outpacing global competitors.

The Plan is structured around three strategic pillars, each encompassing several policy measures and initiatives.

Pillar I — “Accelerate AI Innovation”

Major Focus Areas

  • Deregulation and rapid adoption
  • Enabling private-sector innovation
  • Federal procurement as a lever
  • Workforce & skills development

Key Initiatives

  • Review and rollback “unnecessary regulatory barriers” and existing investigations/orders that “unduly burden” AI development
  • Federal funding and procurement to favor AI projects; steer funds away from states with what the plan deems “burdensome” AI regulation
  • Support for open-source and open-weight models; encourage open innovation
  • Workforce training and education: integrate AI skill development into federal education/workforce programs; establish dedicated AI workforce research hubs; support retraining, apprenticeships, tax-free employer-sponsored training for AI skills

Pillar II — "Build American AI Infrastructure"

Major Focus Areas

  • Expand the compute and energy infrastructure
  • Strengthen domestic hardware ecosystems
  • Support critical infrastructure for AI at scale

Key Initiatives

  • Expedite permitting and regulatory approvals for data centers, semiconductor fabrication plants (“fabs”), and associated infrastructure (power, cooling, networking)
  • Modernize and stabilize the U.S. power grid; promote advanced, reliable energy generation (e.g. next-gen sources) to meet the high energy demands of AI workloads
  • Revive and expand U.S.-based semiconductor manufacturing to reduce dependency on foreign supply chains
  • Build secure, resilient data centers and infrastructure for critical sectors (defense, intelligence, healthcare, etc.), with enhanced cybersecurity and incident-response capabilities

Pillar III — “Lead in International AI Diplomacy and Security”

Major Focus Areas

  • Use U.S. AI leadership to shape global standards
  • Influence export regimes
  • Ensure security and protect intellectual property
  • Promote American tech values abroad

Key Initiatives

  • Export “American AI technology stack” (hardware, software, models, standards) to allies, ensuring secure, full-stack AI export packages
  • Re-orient federal procurement guidelines to favor frontier AI developers that can guarantee neutrality/objectivity (avoiding “ideological bias”), signaling a preference for frontier models aligned with U.S. values
  • Strengthen national security posture against misuse, theft or exploitation of AI by malicious actors, including export control, cybersecurity, and IP protection measures
  • Use international diplomacy and collaboration to shape global AI governance, standards, and supply-chain resilience

Key Dates and Deadlines

  • 23 July 2025 — Official publication of the AI Action Plan; the ~90 policy actions become the work program for federal agencies.
  • The Plan envisages near-term execution by federal departments on many of the measures (per the official White House fact sheet). Because the Plan is a non-statutory roadmap and relies on existing agencies and existing laws, implementation timing will differ per agency, per initiative. Some items (e.g., revising permitting procedures, procurement guidelines) may take effect quickly; others (infrastructure build-out, semiconductor revitalization, energy-grid reforms) will take years.

The Blueprint for an AI Bill of Rights (2022) remains a voluntary framework guiding AI development and deployment. The White House Office of Science and Technology Policy formulated the Blueprint around five principles to guide the design, use, and deployment of AI systems.

The five principles are:

  • Safe and Effective Systems
  • Algorithmic Discrimination Protections
  • Data Privacy
  • Notice and Explanation
  • Human Alternatives, Consideration, and Fallback

The AI Bill of Rights remains nonbinding and advisory, with no federal statutory mandate. Federal agencies do not have new authority to enforce these principles. Instead, enforcement still relies on existing laws such as:

  • FTC Act (unfair/deceptive practices)
  • Civil rights statutes
  • Consumer protection and privacy laws

U.S. State Laws for AI Regulation

Several state-level laws and regulations related to Artificial Intelligence cover general AI systems, transparency, voice/deepfakes, government use, and other areas. Because many states enact legislation through targeted measures (rather than a single “AI Act”), the types of regulation vary widely.

President Donald Trump has now signed an executive order addressing AI regulation across the US. Signed on December 12, the order centralizes AI regulation under federal authority, aiming to prevent states from imposing separate or conflicting AI rules.

“There must be only One Rulebook if we are going to continue to lead in AI,” Trump said in a Truth Social post. “We are beating ALL COUNTRIES at this point in the race, but that won’t last long if we are going to have 50 States, many of them bad actors, involved in RULES and the APPROVAL PROCESS.”

Stay tuned to this space for ongoing updates on these critical developments in AI regulations that are shaping the GRC landscape.

AI-First Compliance Management with MetricStream

Transform your compliance management with MetricStream's AI-first Compliance Management solution. The solution empowers organizations to adopt an integrated, cost-efficient approach to managing cross-industry regulations while enhancing visibility and reducing redundancies.

Use the power of AI to automatically ingest regulatory updates, map your compliance profile, test controls, and gather evidence, ensuring continuous regulatory effectiveness. Simplify policy management and streamline compliance processes, including:

  • Mapping regulations to processes, assets, risks, controls, and issues (a simplified illustration follows this list)
  • Identifying, prioritizing, and monitoring high-risk compliance areas
  • Performing automated control testing and continuous monitoring
  • Creating, managing, and communicating corporate policies
  • Capturing and managing the impact of regulatory changes
  • Managing incidents and cases for better corrective and preventive actions
  • Generating detailed reports with drill-down capabilities

Want to see it in action? Request a personalized demo today!


Tharika Tellicherry, Manager, Product Marketing, MetricStream

Tharika is a Product Marketing Manager at MetricStream, where she leads go-to-market strategy, messaging, and sales enablement for Cyber GRC products. With over eight years of experience driving growth for AI, analytics, and SaaS solutions, she specializes in translating complex technologies into clear, customer-centric narratives that accelerate adoption. A storyteller at heart, she’s passionate about connecting product innovation with meaningful market impact.