The University’s Miller Center hosted a panel of experts in government, academia and the private sector Thursday to discuss the lack of federal regulations surrounding artificial intelligence. Panelists discussed AI’s transformative role in the current “historical moment” and how — and whether — to address the fragmented nature of AI regulations across states and countries in the Trump era.
The Miller Center is a nonpartisan institution at the University that studies the American presidency and engages scholars with citizens, helping to “solve major problems” in American democracy. The panelists were Seth Center, former acting special envoy for critical and emerging technologies at the State Department; Public Policy Prof. Allan Stam; and Astri Kimball Van Dyke, director of Google’s global competition policy team. The discussion was moderated by Chris Lu, one of the 2025-26 James R. Schlesinger distinguished professors at the Miller Center.
Panelists opened the discussion by explaining how the exponential rate of technological development in the 21st century poses unique challenges for democratic institutions and corporations seeking to regulate AI. Stam said he believes that AI technology’s growth is outpacing the creation of government regulatory measures.
“I think our democratic institutions are stuck a little bit,” Stam said. “[This also] creates demand for some of the more pernicious political changes.”
Van Dyke agreed with Stam that the rate of innovation has made AI particularly difficult to regulate internally at Google. She emphasized that the many variations of AI — from chips to data center infrastructure to large chat models and apps — and their broad impacts across fields may require a tailored approach to regulation that considers both the model in question and its intended field of use.
Another topic of discussion among panelists was how the current, highly “fragmented” AI regulatory system, in which the regulatory approach differs by state and country, came about. Upon taking office in 2025, President Donald Trump issued executive orders revoking Biden-era regulations on the use of AI. Stam explained that the Trump administration’s approach to AI policy has included attempts to supersede existing state policies with executive orders, but that the judicial branch has been “unsympathetic” to these efforts.
“The courts have said … ‘sorry … legislatures … [and] state legislatures … carry the day,’” Stam said. “I think until Congress is able to come together on this, for better or for worse … we’re going to continue to see this heterogeneity across the states.”
Trump released a blueprint Friday to guide Congress in establishing a more permanent legislative policy. In it, he calls for legislators to protect children and empower parents, safeguard American communities, respect companies’ intellectual property rights, prevent censorship, support innovation, ensure American AI dominance and educate the public on AI.
Lu asked Center for his thoughts on whether the Biden-era approach — which Lu described as securing a set of voluntary commitments from private companies and codifying them via executive order — could be taken any further.
Center responded that it is not unusual for an early technological development to be “self-policed” by companies. He said that this then creates pressure on shareholders to self-regulate as well.
Van Dyke addressed the fact that companies currently have the power to regulate themselves and set their own AI standards. She assured the audience that, in the absence of extensive federal regulation, Google establishes strict internal standards to maintain public trust, including rules governing watermarks and transparency about whether AI was used to create a product.
Later in the evening, Van Dyke also emphasized Google’s voluntary commitment to open-source AI — which can be examined, altered and distributed for any purpose. One example is Google DeepMind’s AlphaFold database, which maps protein structures and has been accessed by three million scientists from over 120 countries to advance their research.
“That kind of choice that a company makes to be open source, to lean into that I think is something that we should be celebrating and talking about,” Van Dyke said. “We don’t want, though, the ‘just trust us’ [approach].”
Panelists also discussed AI regulation at the international level. Center warned that Europe has regulated heavily through the AI Act, which outlaws AI that poses “unacceptable risk,” such as social scoring systems and manipulative AI. He said he believes that European countries got “buyer’s remorse” as AI technology subsequently developed.
Despite this, Center said he was surprised by many countries’ agreement with the basic U.S. regulatory framework, which emphasizes transparency, an expectation that AI promote societal good and information sharing among governments.
“What we have in front of us, actually, is a fairly clear path to developing a governance framework around what I think are shared principles,” Center said. “[They can be] reflected largely through U.S. principles that I think we could increase substantial consensus for.”
Panelists concluded the discussion by fielding questions from students in the audience. Second-year College student Owen Wenwatzlavicko asked Van Dyke why members of the public should trust companies like Google to create their own sets of internal standards.
Van Dyke responded that, in a fiercely competitive market, Google must earn users’ trust. She further noted that Google runs thousands of tests each year on its search engine for reliability and relevance.
Speaking with The Cavalier Daily after the event, Wenwatzlavicko said he is passionate about ensuring that AI is used for “positive social good.” He said his exchange with Van Dyke left him with concerns about the role corporations play in an unregulated AI environment.
“I think the lack of a good answer is more indicative that Google is not as public facing as I would have hoped,” Wenwatzlavicko said.
The Miller Center will host its next event, a series on former President Barack Obama’s time in office, starting Wednesday at 5:30 p.m.