The Cavalier Daily
Serving the University Community Since 1890

GARVIE: U.Va. should focus on understanding AI before implementing it

As the incorporation of AI into education raises complications, the University should step back and acknowledge AI’s consequences

Using AI literacy as a guide can assist the broader knowledge that the University has about its use

As generative artificial intelligence has expanded over the past several years, University professors have taken a variety of stances on its use, ranging from complete bans to no regulation at all. Karen Hao, an award-winning journalist who covers the impact of AI on society, gave a talk last semester highlighting how the University could better integrate AI literacy — an understanding of how AI works and what consequences exist — rather than simply using or encouraging it. Because there is no singular or correct approach in an educational world of individualized, class-specific policies, the University must mandate AI literacy training on the ethical, environmental and practical concerns of generative AI.

Hao’s talk broached topics ranging from the potential benefits and harms of AI on society to the differing ethical standards of various models. In her discussion, she acknowledged one of the most common arguments against its use — the worry that individuals will lose their critical thinking skills as they rely solely on AI to provide them with answers. Another common argument, though, is that when done right, AI may strengthen students’ ability to learn, as demonstrated in Harvard University’s experimentation with AI tutor bots. Despite these opposing interpretations, it is important to note that generative AI is not going anywhere. Thus, the University should use AI literacy as a guide that can incite ethical discussions about its use.

University professors have been instructed to develop their own guidelines for AI, which leaves them alone at the crux of various ethical dilemmas about AI use. Many professors rightfully worry that students depend too much, or even entirely, on AI, preventing them from actually integrating their knowledge and developing new skills. Some also worry that students are letting AI complete their assignments for them, bringing academic integrity into question. By mandating courses which prioritize understanding over punishment, professors and students alike can further develop their own perspectives on the ethical challenges and necessary choices behind AI use.

Besides the need to analyze AI’s use in the University classroom, it is essential not to overlook the lasting impact generative AI has on the environment. Northern Virginia is considered the data center capital of the world for its sheer number of data centers, many built with the explicit goal of powering AI. These centers, along with being largely powered by fossil fuels, can consume up to five million gallons of fresh water a day. Many consider this outside their sphere of responsibility, especially within a University where AI is widely integrated. However, nobody who uses or condones the technology is above its impacts, meaning it is our collective responsibility to ensure AI leads to more good than harm. Some models, such as DeepSeek, use significantly less energy and water to run than larger models like ChatGPT. In addition to providing more comprehensive information to the University community about the environmental impacts of AI, the University can and should also distinguish which models cause less environmental destruction.

In thinking about how these proposed actions can be feasibly implemented, it is clear that the University already has the tools it needs to launch programs such as learning modules. Adding in key information about generative AI — such as information about its composition, the variety of models that exist, the potential impacts it can have on learning and how it influences the environment — would enable more ethical use of AI across the University community. This will not stop every single person from using AI, but it will create more informed users who are guided by knowledge of the repercussions of its use.

There is a lot that, as a society, we cannot control when it comes to AI. Its rapid development, how those in power decide to use it and potential future policies regarding its use are far beyond our control. However, we can do our part as a leading university to remain informed about generative AI and the implications of using it. The University should work to expand the information it provides to faculty and students on generative AI models, ensuring that the community is prepared for an increasingly AI-centric future.

Adeline Garvie is an opinion columnist who writes about health, technology and environment for The Cavalier Daily. She can be reached at opinion@cavalierdaily.com.

The opinions expressed in this column are not necessarily those of The Cavalier Daily. Columns represent the views of the authors alone.
