The Cavalier Daily
Serving the University Community Since 1890

Honor Week panel discusses the future of artificial intelligence in academic integrity

The panelists also explored the possibility of a required general education course in AI literacy and a University-wide AI usage policy

Honor Week artificial intelligence panel, photographed Feb. 20, 2026.

The Honor Committee hosted a panel Feb. 20 as the concluding event of Honor Week to discuss the future of artificial intelligence in academia, the honor system and the community of trust at the University. Panelists specifically discussed a proposed mandatory AI general education course, the influence of AI tech corporations, AI detection software and whether the University should implement a general AI policy. 

The event featured Thomas Ackleson, Committee chair and fourth-year Engineering student; Ella Duus, graduate Batten student; Mona Sloane, assistant professor of Data Science and Media Studies and founder of Sloane Lab; Leo Lo, dean of libraries and advisor to the provost on AI literacy; and Matthew Kirschenbaum, professor of English and AI. The panel was moderated by fourth-year College Rep. Jack Wallace.

The panel opened with a discussion on whether the Provost’s Office — which regulates academic policy, research activities and the academic curriculum at the University — should implement a University-wide AI policy.

The panel unanimously agreed that a general AI policy issued by the Provost’s Office would not be the best step forward to regulate AI in academia. Panelists expressed concern that a University-wide policy could impose potential restrictions on learning about AI, constrain academic freedom across departments and limit students' ability to develop AI-related skills that are valuable to employers. 

Lo said that he believes a general restrictive AI policy would hinder people’s learning about the developing technology. He said that instead of a restrictive policy, it would be more beneficial for the Office of the Provost to share guidelines on AI usage that would give students and faculty the opportunity to explore and experiment with the technology. 

“We are facing a technology that is so disruptive … that I have never seen something like this disrupting education in my lifetime, calculators, internet and computers — I don't think any of them can compare to what is happening right now,” Lo said. “We are learning as we go … and I think a policy by default restricts that.”

Ackleson and Duus both shared that they believe an AI policy should emerge from a bottom-up process. They said that students and faculty should have opportunities to discuss AI use and ethics in classrooms, rather than having a general University-wide AI policy imposed from above. 

The panel also discussed the possibility of implementing a mandatory AI literacy course for all students at the University. The proposed course would not only focus on how to use AI, but also on AI ethics and its potential ramifications on academia. The panelists noted that these implications could include changing computer science and engineering-based coursework as well as stricter grading policies in writing classes. 

Sloane said that she believes a standardized AI literacy course would benefit students not only by teaching AI usage, but also by explaining how to work with AI and helping them understand the way AI learns. She also said that it would be valuable if such a course discussed AI’s disruptive impact on education — both socially and politically. 

“I think here at U.Va., we are quite well positioned to provide [an AI literacy course], because we are very strong in the social sciences and humanities, which have historically grappled with these kinds of questions that have now shown up with this new, massively disruptive technology,” Sloane said. 

Duus said that it is important for these potential courses to have a balance that benefits students and faculty with a wide variety of perspectives on the use of AI in education. She added that it is important to ensure that students feel engaged in these courses so that the content feels relevant and valuable rather than simply a requirement. 

Wallace asked the panel what one thing they would include in an AI policy, should one be implemented tomorrow. Lo said that it should prohibit faculty from using AI detection tools because the detectors can be ineffective and foster distrust and anxiety. 

Ackleson shared a similar sentiment, saying that a policy restricting the use of AI detection software would be in the best interest of students at the University because of the tools’ inaccuracy. 

Kirschenbaum shared that he believes the University should implement a policy that would help establish a culture of documenting AI use, so that when students utilize the software, they record what they used it for and can reflect on how it was used. 

“If you use AI, own it. Acknowledge it, document it, be transparent about what you're doing … to create [a] culture of openness, documentation and curation around our use of [AI], and get away from deceptive practices [instead of] trying to maintain the subterfuge that AI isn’t really in all of our browsers,” Kirschenbaum said. 

Some of the panelists raised concerns that the corporations behind popular AI software do not share the same interests as institutions of higher education. Kirschenbaum said he believes many of these companies aim to put these institutions “out of business” by disrupting how they operate and reshaping education. Sloane shared that she believes the University has the power to influence these companies not only through the financial value of its software licensing agreements but also through the University's reputation as a customer. 

“We are a big institution, and centrally procured licenses for big institutions are valuable, even for big tech companies in terms of money but also in terms of reputation,” Sloane said. “When such a license is procured, we do have the power to actually make demands … and we’ve not leveraged that [power enough].” 

At the conclusion of the event, Sloane said that she believes it is important for faculty, students and administrators to engage in conversations about AI starting in the classroom. She also highlighted the need to create more opportunities for faculty to discuss the future of AI in higher education.
