Blog  |  April 30, 2025

The Updated State of AI Regulations for 2025

In January of last year, we discussed some of the current AI regulations that exist, as well as resources to stay current as the AI regulation landscape changes.

Of course, as Ferris Bueller would say: “Life moves pretty fast. If you don’t stop and look around once in a while, you could miss it.” That statement certainly applies to AI regulations, as a lot has changed over the past 15 months. With that in mind, we have updated last year’s article on the landscape of global AI legislation to reflect the changes that have occurred since our original post.

Updates to the Global AI Legislation Landscape

As we did last year, we have organized the update by jurisdiction to illustrate the current global landscape, as follows:

United States

The U.S. still lacks a comprehensive federal AI law, but 2024 saw a surge of proposals and state-level actions. Congress introduced numerous AI bills focused on accountability and transparency, though none has become law yet. At the state level, however, at least 45 states proposed AI-related bills in 2024, and 31 states and territories enacted AI laws or resolutions. Examples include:

  • Colorado passed the first broad AI law requiring developers of “high-risk” AI to use reasonable care to prevent algorithmic bias and to disclose AI use to consumers.
  • New Hampshire criminalized malicious deepfakes.
  • Tennessee passed the Ensuring Likeness, Voice, and Image Security (ELVIS) Act, barring unauthorized AI simulations of a person’s likeness or voice.
  • Maryland set rules for AI use in state government.
  • California enacted a package of AI laws in September 2024 on topics from deepfakes to transparency. These include the Defending Democracy from Deepfake Deception Act (AB 2655), mandating that large online platforms detect and label materially deceptive AI-generated election content, and the AI Transparency Act (SB 942, effective Jan. 2026), requiring AI services with 1M+ users to disclose AI-generated content and implement content detection measures.

Similar to data privacy laws, the patchwork of state AI laws is rapidly expanding in the absence of overarching federal regulation.

On the federal level, executive actions have shaped AI policy. In October 2023, President Biden issued Executive Order 14110 on “Safe, Secure, and Trustworthy AI”, directing a broad range of AI oversight measures (from safety standards to equity and civil rights protections). However, in January 2025 the new administration took a different course: President Trump rescinded Biden’s order and replaced it with a new order, “Removing Barriers to American Leadership in AI,” which shifts toward deregulation and explicitly prioritizes AI innovation and U.S. competitiveness, instructing agencies to eliminate policies that might “hinder American AI dominance”.

European Union

As we discussed last year, the European Union achieved a landmark milestone by enacting the world’s first comprehensive AI law. The EU AI Act was formally adopted in mid-2024 after intensive inter-institutional negotiations. On 12 July 2024, the final Artificial Intelligence Act (Regulation (EU) 2024/1689) was published in the EU’s Official Journal. This regulation, effective August 1, 2024, establishes harmonized rules for AI across all 27 EU states. Most provisions, however, will not be enforced until August 2, 2026, allowing a two-year phase-in period for compliance.

The EU AI Act takes a risk-based approach: it prohibits a few unacceptable AI practices outright and imposes escalating obligations on other AI systems based on their risk level. For example, AI systems deemed “high-risk” (such as AI in medical devices, recruiting, or credit scoring) must meet strict requirements for risk management, data governance, transparency, human oversight, and conformity assessments before being deployed. Providers of high-risk AI will have to register their systems in an EU database and obtain CE marking. The Act also bans uses of AI that are seen as contrary to EU values – notably real-time biometric identification in public spaces for law enforcement, “social scoring” of citizens by governments, and AI that exploits vulnerable groups in harmful ways (with limited exceptions).

United Kingdom

Outside the EU, the United Kingdom has adopted a different approach, preferring guidance over enacting new laws so far. In March 2023, the U.K. government published a white paper outlining a “pro-innovation” AI regulatory framework. Instead of a single AI statute, the U.K. is empowering sectoral regulators (like the health, financial, or transportation regulators) to issue AI rules tailored to their domains. The strategy centers on five principles (safety, transparency, fairness, accountability, and contestability), which regulators are expected to enforce using existing powers. Companies in the U.K. are advised to follow the white paper principles and monitor regulatory guidance. The landscape may shift in 2025 if the U.K. moves forward with an AI Act of its own, potentially influenced by international developments like the EU AI Act and US policies.

Canada

Canada’s effort to pass a federal AI law, the Artificial Intelligence and Data Act (AIDA), has seen twists and delays since 2024. AIDA was proposed as part of Bill C-27 (an omnibus bill that also updates privacy law). The act would establish rules for “high-impact” AI systems – requiring impact assessments, mitigation of biases, and registration of such systems – and prohibit reckless AI deployments that could cause serious harm. However, as of early 2025, AIDA has not yet been enacted.

Throughout 2024, Bill C-27 progressed through Parliament but faced extensive review and debate. In January 2025, the Canadian Parliament was prorogued, effectively killing Bill C-27 on the order paper. This means AIDA in its current form has “died” and would need to be reintroduced in a new parliamentary session.

China

China has been rapidly expanding its AI regulatory framework, focusing on guiding the development of AI in a safe and government-supervised manner. In mid-2023, China implemented pioneering rules on generative AI services, and it has built on those in 2024–2025. The Interim Measures for Generative AI Services (effective August 15, 2023) require providers of generative AI that is accessible to the public to ensure content is lawful, truthful, and labeled if AI-generated, and to register their algorithms with regulators.

Since January 2024, Chinese authorities have released further refinements: for example, in May 2024 the government’s standards body (NISSTC) issued draft Security Requirements for Generative AI, detailing technical measures to secure training data and models. Also, one notable new regulation is China’s mandatory labeling rule for AI-generated content. In March 2025, the Cyberspace Administration of China (CAC) issued the final “Measures for Labeling AI-Generated Content”, which take effect on September 1, 2025. These rules compel all online services that create or distribute AI-generated content to clearly label such content.

Beyond content rules, China continues to refine its AI governance frameworks at a high level. In September 2024, China’s national committee on AI governance released an AI Safety Governance Framework aligned with Beijing’s Global AI Governance Initiative. This framework lays out broad principles – a “people-centered approach” and AI “for good” – and classifies AI risks to guide policymakers. It emphasizes ethics, transparency, and continuous risk monitoring, reflecting concerns about bias and security in AI.

Brazil

In December 2024, Brazil’s Senate approved a comprehensive AI Bill (No. 2338/2023), which adopts an EU-like risk-based model. The bill defines categories of AI systems and corresponding obligations, and it now awaits approval in Brazil’s Chamber of Deputies. If enacted, Brazil’s law would be one of the first in the region to specifically regulate AI development and use.

Other Countries

Many other countries around the world have published or proposed national guidance, but (to our knowledge) none of them have binding laws – yet.

Additional Resources for Keeping Current on AI Regulations

In addition to the two key resources we discussed last year – the OECD AI Policy Observatory (OECD.AI) and the Global AI Legislation Tracker – here are two additional resources for keeping current on AI policies and regulations in the EU and US:

  • EU AI Act Resources: The European Commission’s EU AI Act page and the official EUR-Lex entry for Regulation (EU) 2024/1689 contain the text of the law and updates on its implementation timeline. In addition, ArtificialIntelligenceAct.eu is an unofficial tracker that posts the Act’s legislative history, key documents, and timelines.
  • NCSL Artificial Intelligence Legislation Tracker: The US National Conference of State Legislatures maintains an updated tracker summarizing AI-related bills in all U.S. states, including enacted laws (like those in Colorado, California, etc.) and pending bills.

Conclusion

There have been many changes to the AI regulation landscape in the last 15 months, and the expectation is that this space will continue to evolve for the foreseeable future. Cimplifi will continue to revisit the AI regulation landscape periodically to keep you up to date on the latest global AI guidance and regulations that may impact you.

For more on Cimplifi’s specialized expertise in AI & machine learning, click here.
