In an increasingly AI-driven world, government leaders, both in the U.S. and abroad, continue to grapple with how to regulate this transformative technology while striking a delicate balance between protecting citizens and enabling innovation.
This summer, the European Union (EU) took a major step forward by enacting the EU AI Act, the world’s first comprehensive AI law. The landmark legislation establishes a regulatory framework built on a “risk-based” approach, in which the level of regulation corresponds to the potential societal risks posed by an AI application:
- High-risk uses, such as applications in healthcare, law enforcement, and transportation systems, must meet stringent standards for transparency, safety, and data governance.
- Applications deemed to pose an “unacceptable risk,” such as those used for mass surveillance, are banned outright.
Although the new law governs AI use only within the EU, its impact will extend far beyond Europe’s borders. Because many of the world’s most advanced AI systems are developed by U.S.-based companies, those companies will need to comply with the EU’s stringent requirements to operate in European markets, and that compliance will likely shape U.S.-facing systems as well. The EU’s regulatory framework is also expected to influence legislative efforts in other countries, including the U.S., as policymakers worldwide seek to address the safety, ethical, and societal challenges posed by AI.
In the U.S., AI regulation has likewise been a priority for government leaders:
- The White House has convened task forces and issued guidance for responsible AI development, and more than 120 legislative proposals have been introduced in this Congress.
- In May, Senate Majority Leader Chuck Schumer (D-NY) and a bipartisan group of lawmakers, the so-called “AI gang,” released a 31-page framework intended to serve as a blueprint for congressional committees shaping AI regulation bills, including some proposals that appear to mirror the EU’s new rules.
But while momentum for federal action on AI is growing, lawmakers appear unlikely to pass a comprehensive AI bill before the end of this Congress.
California, home to many of the world’s leading tech companies, is emerging as a leader in AI regulation at the state level. Just last week, Governor Gavin Newsom signed five AI-related bills into law, including measures designed to curb the spread of election deepfakes and AI-generated election misinformation and to protect the digital likeness of performers and celebrities. More than 35 additional AI-related bills still await the Governor’s signature or veto.
As the Governor weighs safety protections against the need to preserve his state’s early lead in AI innovation, the most closely watched measure is the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), which would impose a range of safety restrictions and requirements on advanced AI models. While this bill and dozens of others remain pending, all eyes are on California, as the state’s legislative efforts will have lasting effects on the use of AI across the U.S.
While California’s leadership on comprehensive AI regulation offers a blueprint for other states, the absence of a comprehensive federal law means a growing patchwork of state-level AI rules:
- According to the National Conference of State Legislatures (NCSL), more than 300 AI bills have been introduced across at least 45 states in 2024, up from 125 bills introduced in 2023.
- Based on NCSL’s estimates, over 30 states have adopted resolutions or enacted legislation on a variety of AI topics.
Without a cohesive national standard, the U.S. risks a fragmented regulatory landscape that could stifle innovation and place the country at a competitive disadvantage in AI.