The rapid evolution of Artificial Intelligence (AI), particularly Generative and Agentic AI, over the past year has opened up new opportunities across every industry. According to data from Microsoft, 78% of enterprises have now adopted AI technologies, generating an average of $3.70 in ROI for every dollar invested. But in the wrong hands, AI is capable of unleashing fraud, discrimination, and disinformation; stifling healthy competition; disenfranchising workers; and even threatening national security. These risks are amplified as AI is increasingly integrated with critical infrastructure.
In response, several countries around the world, including the United States, the European Union, and the United Kingdom, have moved from policy discussions to enforcement reality. The EU is now actively penalizing non-compliance, the UK is transitioning from principles to binding legal frameworks, and the US is navigating a fundamental shift in policy between innovation-focused deregulation and state-level protective measures.
Here is a quick overview of the key regulations being implemented in these three regions.
The European Parliament adopted the Artificial Intelligence Act in March 2024. The Act aims to establish a uniform, technology-neutral definition of AI that can be applied to future AI systems. Furthermore, it seeks to ensure that AI systems within the EU are safe, transparent, traceable, non-discriminatory, environmentally friendly, and overseen by people rather than by automation. The law employs a risk-based approach, with varying requirements depending on the level of risk.
Risk level definition: The Act classifies AI systems into tiers ranging from unacceptable risk (banned outright) to high, limited, and minimal risk, and states obligations for providers and users depending on the risk level:
High-Risk AI Systems – AI systems that can negatively impact fundamental rights and/or the safety of people:
AI systems used in products covered by the EU’s product safety legislation, such as toys, aviation devices and systems, cars, medical devices, and elevators.
AI systems in specific areas that have to be registered in an EU database:
Transparency requirements – While the Act does not classify Generative AI as high risk, it mandates transparency requirements and compliance with EU copyright laws:
Supporting Innovation – The Act aims to help startups and small to medium-sized businesses leverage AI by providing opportunities to develop and train AI algorithms before public release. National authorities have to provide companies with suitable testing conditions that simulate real-world conditions.
In 2025, the EU finalized the Code of Practice for General-Purpose AI (GPAI), which serves as the framework for compliance with the Act's transparency, safety, and copyright measures. Organizations that fail to comply with the EU AI Act face severe consequences: penalties of up to €35 million or 7% of global annual turnover, whichever is higher.
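To make the "whichever is higher" rule concrete: the fine effectively scales with company size once 7% of global turnover exceeds the €35 million floor. Below is a minimal Python sketch of the calculation; the function name and the €2 billion turnover figure are illustrative, not taken from the Act itself:

```python
def max_eu_ai_act_penalty(global_annual_turnover_eur: float) -> float:
    """Upper bound on fines for the most serious EU AI Act violations:
    EUR 35 million or 7% of global annual turnover, whichever is higher."""
    FIXED_CAP_EUR = 35_000_000
    TURNOVER_SHARE = 0.07  # 7% of global annual turnover
    return max(FIXED_CAP_EUR, TURNOVER_SHARE * global_annual_turnover_eur)

# Hypothetical firm with EUR 2 billion in global annual turnover:
print(f"EUR {max_eu_ai_act_penalty(2_000_000_000):,.0f}")  # EUR 140,000,000
```

For any organization with global turnover above €500 million, the turnover-based figure dominates, which is why larger enterprises face materially higher exposure under the Act.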
July 12, 2024: AI Act published in the Official Journal of the EU
August 1, 2024: The AI Act officially entered into force (20 days after publication)
February 2, 2025 (6 months after entry into force): Bans on unacceptable risk AI systems (e.g., social scoring, certain biometric systems) take effect. AI literacy requirements begin
August 2, 2025 (12 months after entry into force): Rules for General-Purpose AI (GPAI) models apply. Governance structures, penalties, and confidentiality rules start applying
August 2, 2026 (24 months after entry into force): The majority of the AI Act's provisions become applicable. Obligations for most high-risk AI systems (Annex III) come into force
August 2, 2027 (36 months after entry into force): Rules for high-risk AI systems that are safety components of products (e.g., in medicine, aviation) apply
This phased approach introduces obligations for different AI categories over time, with full applicability expected around 2027.
In February 2024, the UK Government announced its response to the 2023 whitepaper consultation on AI regulation. Its pro-innovation stance on AI follows an outcome-based approach, focusing on two key characteristics – adaptivity and autonomy – that will guide domain-specific interpretation.
It provides preliminary definitions for three categories of powerful AI systems that can be integrated into downstream AI systems:
It sets out five cross-sectoral principles for regulators to use when driving responsible AI design, development, and application:
The principles are to be implemented on the basis of three foundational pillars:
On 4 March 2025, a private member’s bill, the Artificial Intelligence (Regulation) Bill, was reintroduced in the House of Lords. If passed, this Bill would create a central “AI Authority” (or at least mandate its establishment) to oversee AI governance, define obligations for businesses using AI, and ensure adherence to principles such as safety, accountability, and governance.
The Bill also proposes other measures:
In July 2025, the Trump Administration unveiled Winning the Race: America’s AI Action Plan (the Plan), a comprehensive federal strategy to secure U.S. leadership in AI by accelerating innovation, building infrastructure (like data centers), promoting U.S. AI exports, and deregulating to boost development. The Plan sets out more than 90 federal actions, with key themes including exporting AI technology, easing data center permitting, reducing bias ("woke AI"), and strengthening biosecurity, all aimed at outpacing global competitors.
The Plan is structured around three strategic pillars, each encompassing several policy measures and initiatives.
Pillar I — “Accelerate AI Innovation”
Major Focus Areas
Key Initiatives
Pillar II — “Build American AI Infrastructure”
Major Focus Areas
Key Initiatives
Pillar III — “Lead in International AI Diplomacy and Security”
Major Focus Areas
Key Initiatives
The Blueprint for an AI Bill of Rights (2022) remains a voluntary framework guiding AI development and deployment. The White House Office of Science and Technology Policy formulated the Blueprint around five principles to guide the design, use, and deployment of AI systems.
The five principles are:
The AI Bill of Rights remains nonbinding and advisory, with no federal statutory mandate. Federal agencies do not have new authority to enforce these principles. Instead, enforcement still relies on existing laws such as:
Several state-level laws and regulations related to Artificial Intelligence cover general AI systems, transparency, voice/deepfakes, government use, and other areas. Because many states enact legislation through targeted measures (rather than a single “AI Act”), the types of regulation vary widely.
President Donald Trump has signed a new executive order on AI regulation across the US. Signed on 12th December, the order centralizes AI regulation under federal authority, preventing states from imposing separate or conflicting AI rules.
“There must be only One Rulebook if we are going to continue to lead in AI,” Trump said in a Truth Social post. “We are beating ALL COUNTRIES at this point in the race, but that won’t last long if we are going to have 50 States, many of them bad actors, involved in RULES and the APPROVAL PROCESS.”
Stay tuned to this space for ongoing updates on these critical developments in AI regulations that are shaping the GRC landscape.
Transform your compliance management with MetricStream's AI-first Compliance Management solution. The solution empowers organizations to adopt an integrated, cost-efficient approach to managing cross-industry regulations while enhancing visibility and reducing redundancies.
Use the power of AI to automatically ingest regulatory updates, map your compliance profile, test controls, and gather evidence, ensuring continuous regulatory effectiveness. Simplify policy management and streamline compliance processes, including:
Want to see it in action? Request a personalized demo today!