AI Takes Center Stage at Aspen Ideas Festival as Congress Weighs Regulation

The transformative power and implications of artificial intelligence (AI) were among the most anticipated topics at this week’s Aspen Ideas Festival. With AI technologies advancing rapidly, panel discussions on their future were a major focus of this year’s event.

During the conference, top tech leaders discussed AI from every angle: how it will change the world for the better, the risks that must be considered, and potential areas for regulation. IBM Chief Privacy & Trust Officer Christina Montgomery discussed the need for transparency when people are interacting with AI, while former Google CEO Eric Schmidt warned about potential threats to democracy and impacts on the next election cycle, citing an anticipated rise in misinformation and disinformation spreading so quickly that “you can’t trust anything that you see or hear.” You can watch an insightful roundup of top leaders’ discussions on AI by NBC News here.

The spotlight on AI at the Aspen Ideas Festival comes at a time when public support for federal regulation of AI is growing as these technologies become more intertwined with our daily lives. The widespread adoption of ChatGPT, which now has 100 million monthly active users, has sparked debate among federal lawmakers over the urgent need to regulate AI technologies. In a May 2023 Reuters/Ipsos poll, more than two-thirds of Americans expressed concern about the negative effects of AI, and 61% said they believe it could threaten civilization.

In the absence of comprehensive federal legislation on AI, states are taking matters into their own hands, creating a patchwork of state laws. According to the National Conference of State Legislatures, 27 states have considered over 80 AI bills this year alone. Since 2018, at least seven states have passed laws on AI, ranging from consumer protections meant to mitigate bias and increase transparency to limits on the use of automated systems, and at least 13 states have established commissions to study AI.

As states continue to lead on AI, Members of Congress are debating how to regulate this rapidly advancing technology. Just last week, Senate Majority Leader Chuck Schumer announced the “SAFE Innovation Framework” after weeks of collaborative discussions with experts; the framework outlines pillars meant to serve as guiding principles for AI in areas such as security and accountability while continuing to support innovation. A series of “AI Insight Forums” is expected to bring AI experts to Capitol Hill to brief Members of Congress.

Also this month, two bipartisan bills were introduced in the Senate. The Transparent Automated Governance Act, introduced by Sens. Gary Peters (D-MI), Mike Braun (R-IN), and James Lankford (R-OK), would require the U.S. government to be transparent about its use of AI. The Global Technology Leadership Act, introduced by Sens. Todd Young (R-IN), Michael Bennet (D-CO), and Mark Warner (D-VA), would establish a dedicated office to assess U.S. competitiveness in emerging technologies, including AI.

These legislative developments come after a Senate Judiciary Committee hearing in May on regulating AI, with a focus on ChatGPT. Executives from tech companies including OpenAI, Microsoft, and Google largely agreed with lawmakers on the need to regulate this technology as it becomes increasingly powerful, and some even encouraged them to do so.

As pressure mounts, the Biden Administration has ramped up its efforts, with the White House confirming that staff have been meeting several times a week to discuss how the federal government can ensure consumer protections around artificial intelligence.

As U.S. federal leaders race against time to regulate AI, the European Union has moved closer to passing the AI Act, which would be the world’s first comprehensive AI law. The law would regulate the use of AI in the European Union, including banning high-risk AI practices, such as real-time facial recognition technology, and imposing strict disclosure requirements on generative AI models like ChatGPT.

The pressure is on for the U.S. to lead on the regulation of AI technologies, and momentum in Congress is clearly building. It would be a major accomplishment for Congress to legislate proactively on an emerging technology, anticipating and preventing potential issues, rather than reactively, as we have seen in recent years on other critical technology policy issues.