The University’s Digital Technology for Democracy Lab hosted an event Tuesday evening with accomplished technology journalist Karen Hao to discuss her latest book, “Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI.” The talk was moderated by incoming Media Studies Prof. Seth C. Lewis and Mona Sloane, co-lead of the Digital Technology for Democracy Lab and assistant professor of Data Science and media studies.
Introduced by Christa Acampora, dean of the College of Arts and Sciences, the evening opened with Hao delivering a talk on her book, continued with a conversation with Lewis and Sloane and concluded with a book signing. The book, a New York Times bestseller, addresses the rapid expansion and advancement of artificial intelligence, the corporate entities behind it, particularly OpenAI, and the long-term implications of this expansion.
Acampora stressed that AI has the ability both to save and to destroy humanity, and she tied this argument to the importance of a liberal arts education. She believes that the point of the liberal arts is to educate people beyond training in a single craft. She added that the point was to be enlivened by the cultivation of human capacities to seek and understand knowledge, discern fact from fiction, explore how to live together and understand the complexity of our emotional lives.
“AI can not only save the liberal arts, but can also draw on them as reservoirs of wisdom, virtue and creativity that are also necessary for centering human needs and possibilities within and alongside the development of AI,” Acampora said.
Hao’s book draws on interviews and investigative reporting from throughout her career, mainly from around 2017 to 2024, to make a key argument — the AI industry, and OpenAI in particular, resembles an empire in formation. During her talk she emphasized the need for governance and control over AI’s future and development. She believes that the cause is urgent, but that humanity is at a pivotal moment where it is still possible to exert some control over the future. Hao said that everyone has a role in shaping this future.
“Policymakers can implement strong data privacy and transparency rules and update intellectual property protections to return people's agency over their data and work,” Hao said. “We can all resist the narratives that OpenAI and the AI industry have told us to hide the mounting social and environmental costs of this technology behind an elusive vision of progress.”
Throughout the talk, Hao returned to the sentiment in Silicon Valley that the people behind AI felt they were truly creating these models for the greater good. She explained that the pursuit of AGI — artificial general intelligence, a more advanced, human-like AI — makes these tech giants feel as though they are going to help solve everyone’s problems.
Moreover, she mentioned how quasi-religious groups around AGI have developed in Silicon Valley — the “boomers” and the “doomers.” The boomers believe that this cataclysmic transformation will be positive, while the doomers think such transformation will be negative.
She calls these groups quasi-religious because no actual scientific data backs these ideas about the future. As a result, these vague and extreme visions of the future distract from what is actually happening and perpetuate the consolidation of power in the hands of a few people. As an example, she pointed to Sam Altman, co-founder and CEO of OpenAI, who recently shared his vision to build 250 gigawatts of data centers by 2033 — a plan that would cost $10 trillion and consume as much power as 50 New York Cities.
“People that are making the AI are using it as cover to ultimately take control of land, take control of our power lines, take control of our water lines, take control of all of our data,” Hao said.
This concern is growing across generations, according to Lewis. In particular, he mentioned students who are unsure how to grapple with all of the uncertainty.
“It feels kind of like a runaway train, where is this going, at what speed and with what impact?” Lewis said.
In response, Hao mentioned a recent Stanford study titled “Canaries in the Coal Mine?,” which found that since late 2022 employment has fallen by 13 percent for workers aged 22 to 25 in AI-exposed jobs. Lewis said the study shed light on an emerging shift in how human labor is valued. He believes that in the future, instead of supervising 50 humans, one will be supervising 50 AI agents.
Hao pointed out that Silicon Valley is trying to do something impossible — selling the world an “everything machine.” She argued that framing AGI as something that can cure cancer and solve climate change is an exaggeration, and ultimately a scam.
“If someone comes up to you and is like, ‘I have a solution for all of your problems, and the price is everything you have ever owned, and don't worry, it's gonna work, just give me five more years and five more years and five more years and $10 trillion,’ you realize what's actually happening,” Hao said.
Fourth-year Batten students Celia Calhoun and Owen Kitzmann attended the talk because of their interest in AI and policy, but also due to their involvement as research assistants in the Sloane Lab — a research group focused on AI led by Sloane. They expressed the importance of including students’ voices in this conversation and explained how their development of the Student Technology Council Project — an initiative of the Sloane Lab — plays into the bigger picture of “AI empires.”
“We've been working to see how we could build a governing body with the input of students, where they could work with the administration and secure their data and make sure that the policies reflect what they support and what they see,” Calhoun said.
Kitzmann built on this by reiterating one of the talk’s central themes — that students are at the crux of this revolution and will ultimately be most affected by the era of AI.
“We need to put students at the forefront of conversations around data and technology at the University,” Kitzmann said. “To have all these decisions made above them without students being involved is antithetical to the whole idea of the University being an academic village.”
Second-year College student Rishi Chandra was interested in attending the talk after a class he took with Media Studies Prof. Siva Vaidhyanathan on the intersection of democracy and AI. He is now hosting workshops with a club called Societal AI about how AI can both undermine and support democratic processes. One of his biggest takeaways from the talk is the idea that people should remember their agency and power as individuals.
“People like Sam Altman and other AI executives are very adamant about AGI and about the natural progression towards elite super artificial intelligence, but we as people also have the power to reframe how we think about AI,” Chandra said.