Last semester, the Honor Committee found a student guilty of using ChatGPT on an assignment. This is the first time a student has received an infraction for artificial intelligence use. Of course, Honor infractions are not cause for celebration. However, this case demonstrates the laudable steps our community is taking to deal with AI. While the University’s AI policy must continue to adapt to new developments, the Committee and University administration alike have, so far, proven that they are up to the challenges set by ChatGPT and other AI platforms. This success has in large part been thanks to their unwillingness to use a strict rubric in handling AI-related infractions — a decision that allows the Committee to tailor its responses to the various ways students can use AI. It is time other universities follow our lead and avoid developing rigid rules on handling AI.
When ChatGPT reached college campuses, almost every student had the same thought — class just got a whole lot easier. On the other side, educators feared that the new tool meant the beginning of the end for original thought. Overnight, take-home quizzes fell by the wayside, professors relied more on old-fashioned paper exams and universities were forced to grapple with unprecedented challenges. The University was not, and is not, immune to this struggle. As such, the Committee has spent a great deal of time addressing the AI issue. In March, the University created the Generative AI In Teaching and Learning Task Force. The group is made up of professors and faculty who consult with the Committee chair to help address academic integrity concerns regarding AI. The task force has faced numerous challenges — first and foremost was a lack of definitive guidance on how the Committee would handle cases involving the use of AI.
Prior to the recent Committee Case, the University community had very little information on definitive guidelines for AI use, much less their implementation. Despite this, the conversations around AI were very promising. Committee members indicated an understanding of the benefits of AI, going as far as to say that some use of ChatGPT and other such software is not against the Honor code. Furthermore, they have avoided the pitfalls of AI-detection software by declining to use it as evidence.
Last semester, the Committee was able to conduct a fair and impartial trial without any use of these faulty AI detection programs. Instead, they relied on comparative analysis between the student’s previous work and the work allegedly produced with AI. This decision should give students hope — it shows that the Committee carefully considered how the software was used, relying on human judgment rather than faulty detection programs.
While some schools took similar steps to the University, convening groups to study the impact of AI and advise their respective administrations, other universities opted for quick action. Both Hofstra University and Montclair State University, for example, updated their plagiarism policies to define any use of ChatGPT or other AI services as plagiarism. I understand the appeal of these policies. They are definitive and clear. But sweeping policies and strict rubrics like these fail to account for both the versatility and the benefits of AI.
Go on ChatGPT right now and play around with it for a minute. You can get it to do nearly anything, from crafting recipes and workout plans to full-fledged essays and speeches. This raises a very obvious question — when you can use ChatGPT in so many different ways, why would every use be handled the same? For this reason, the Committee has avoided drafting universal guidelines on how cases involving AI should be handled. This ensures that students who use AI to help digest information are not caught in the same boat as students who access ChatGPT on a closed-book exam.
In addition to its versatility, AI is an exceptional educational tool. It can be used to help students, particularly those with learning disabilities, digest complicated information. Whether you are an economics major learning supply and demand curves — I am assuming that is what economics majors do — or a philosophy major trying to understand David Hume, AI can play a vital role in helping you grow as a student. These are benefits that the Committee recognizes while maintaining the fair stance that AI is not a substitute for original student work.
Without a nuanced policy on AI, one that recognizes both its benefits and drawbacks, students and professors alike miss out on what is a revolutionary teaching tool. Rather than ban AI outright — a task that is increasingly proving futile — colleges and universities should strive to create a policy that mirrors the University’s by allowing students to utilize AI while still upholding strict expectations of originality. By avoiding the development of a strict rubric for cases where a student uses AI in an academically dishonest manner, the University avoids boxing itself into a corner on a complicated and ever-changing issue.
If you hadn’t guessed already, I use AI in a planning capacity on almost all my assignments. I am lucky that I picked a university that put in the work to understand the benefits of AI before banning it entirely. At the University, considerable time and resources have been poured into ensuring that each move made in relation to AI is properly informed. Other colleges and universities would do well to follow the lead of the University and the Committee. With the first AI-related Committee Case now behind us, the cards are on the table — and compared with the responses of other schools, the University is coming up all aces.
Dan Freed is an opinion editor who writes about academics for The Cavalier Daily. He can be reached at email@example.com.
The opinions expressed in this column are not necessarily those of The Cavalier Daily. Columns represent the views of the authors alone.