The University’s early decision and early action applications close Nov. 1. The admissions office then reviews these applications holistically, meaning that algorithms are not used in the decision-making process, which instead takes into account a range of factors. Ultimately, such a process should produce a richly diverse incoming class whose variety of life experiences furthers the institutional mission and stated values of the University. However, this year, the entire higher education community is confronting a twin set of issues — the legal ban on affirmative action and the rise of ChatGPT — which appear to complicate the entire basis of the holistic admissions process and its stated goal of producing a “varied and dynamic” class. Despite the clarity of this new challenge, we have no proof that the University is adequately equipped or prepared to handle one, let alone both, of these problems simultaneously. In fact, upon closer examination, its response to the ban on affirmative action is entirely undermined by the presence of ChatGPT. The admissions office must act proactively to address this threat in a way that accounts for the unique intersection of these problems.
At the end of June, the Supreme Court banned race-based admissions decisions, holding that affirmative action policies violate students’ 14th Amendment right to equal protection under the law. This unprecedented ruling precipitated a series of debates within higher education regarding how best to alter admissions practices to comply with both the law and the contemporary imperative for diversity. The University, like other institutions of higher education, has taken a fairly straightforward approach to the issue of affirmative action — a new essay question has been added to the application, asking about applicants’ life experiences and how those experiences have shaped their worldview. In a vacuum, or even in the 2022 admissions cycle, this change would be sufficient — perhaps even ingenious in its subtle ability to evade legal constraints while simultaneously interrogating students’ relation to diverse lived experiences. But today is not one year ago, and we most certainly do not live in a vacuum.
Rather, today’s world is complicated by the very real presence of artificial intelligence platforms such as ChatGPT, which make plagiarism easier than ever. It is true that we have no clear data yet on precisely how AI will affect college admissions processes. However, all signs suggest that AI will profoundly shape how students navigate the admissions process, and it should therefore alter how admissions policies are developed. It is indisputable that students have access to these platforms and even more indisputable that they are using them. In this new milieu, the personal essay, once a unique window into the thought processes and expression of prospective students, becomes a complicated site — one which may no longer be entirely genuine. Not only is it easy to find tips on how best to use ChatGPT for outlining, drafting and writing, but its use is also becoming more prevalent, including in contexts which require a high degree of emotional intelligence — Vanderbilt University used it to write a statement following the Michigan State University shooting. It would be exceptionally short-sighted for universities to ignore the threat these platforms pose to a holistic admissions process of which the written personal essay is a dominant feature.
It is not simply that the University has not considered how to revamp its admissions process in a way that is cognizant of AI technologies — the admissions office has completely failed to provide applicants with any guidance regarding AI platforms, an oversight which implies a dangerous disregard for the contemporary significance of AI. This lack of guidance means that prospective students have no clarity on the permissibility of using AI to brainstorm or even write their essays. Consider the admissions FAQ page — none of the questions and answers even allude to artificial intelligence platforms. This is clearly antiquated and out of touch with the realities that applicants are experiencing during this application cycle.
This is especially concerning given that the University relies entirely upon essay responses to produce a diverse class of students. ChatGPT is more than capable of answering the new University admissions question in a sufficient and nuanced way, as my brief experimentation with the platform demonstrated. After I entered the question, ChatGPT quickly identified that an appropriate response would include references to multiculturalism and social justice. While the original answer was exceptionally vague, the response became more specific as I gave the platform more directives. Similarly, it would be easy for students to add their own examples to a piece inspired by AI platforms. Thus, not only does AI compromise the integrity of college essays, it is also cognizant of what appeals to admissions officers.
It is not in my purview to do the admissions office’s job for it and provide a solution to the interconnected dilemmas of the affirmative action ban and the prevalence of artificial intelligence platforms. This is not an easy problem, and its solution will not be straightforward. The University has sought to address the new ban on affirmative action, but its complete and utter neglect of the presence of ChatGPT means that its policies will not only fail to achieve their stated goals of diversity but also compromise the efficacy of our school’s holistic admissions process. The admissions office must begin living within this new reality — it must find proactive ways to safeguard campus diversity while counterbalancing the prevalence of AI.
Naima Sawaya is a Senior Associate opinion editor who writes about Academics for The Cavalier Daily. She can be reached at firstname.lastname@example.org.