As artificial intelligence becomes an increasingly common learning tool in classrooms, several professors have also begun studying its possible uses in research. From economics to medicine to the humanities, professors are experimenting with AI to speed up tedious tasks, analyze patterns in data and even help shape new topics of research. How these tools are used varies greatly by discipline.
Assoc. Psychology Prof. Hudson Golino has been researching how AI is used at the University and how the University compares to 13 peer institutions. He found that over the past two and a half years, schools and departments across Grounds have increasingly used AI in research, whether to analyze data or to understand challenging concepts, expanding researchers' capacity to work more efficiently.
“It's just like introducing electricity in universities in the early 20th century. Everybody's using it,” Golino said.
Compared to similar universities, Golino found that the University ranks toward the bottom in the measured impact of AI-assisted research. However, Golino said that the need for a greater understanding of AI is not specific to the University. Nationally, the academic world is navigating the same shift and adapting these tools to make research methods more efficient.
The University does, however, excel specifically in machine learning-related research, which Golino attributes to an early investment in that area through the School of Data Science. He thinks the University now needs to make a similar investment in the applied work that can be done with AI tools.
Thus far, applied AI research at the University — projects that use AI to answer social questions or transform professional practices — has had less reach than that of its peers, according to Golino, who says it is important to let researchers focus on adapting to new advancements in AI.
“It's a gold rush,” Golino said. “Everybody's investing in AI, everybody is using AI [and] everybody is developing things. So if you take time away from research because people need to create a new course, or because they need to teach large courses, you're not optimizing your talent.”
In line with Golino’s focus, other professors at the University are also beginning to experiment with AI in research in new ways. Economics Prof. Anton Korinek, recently named to Time’s list of the 100 most influential people in AI, has been teaching a graduate-level course on how to use AI in research, ECON 8991, “Research Methods in Economics.” Through this course, Korinek urges his students to become more comfortable with large language models, treating them like a research assistant who is smart, incredibly motivated and eager to help, but who completely lacks the context of what you are doing.
Korinek’s own research has focused on how to utilize AI in economics. In a previous study, he reviewed the capabilities of AI across a variety of research categories, including ideation and feedback, writing, background research, coding, data analysis, math and research promotion, and rated AI’s effectiveness at completing these tasks.
Korinek has found that AI is highly effective at synthesizing and editing text, as well as writing and debugging code. However, tasks such as deriving equations or setting up models in mathematics remain experimental and require significant human oversight. By rating these capabilities, he aims to help researchers understand what AI can truly save time on and where human expertise is still indispensable.
Although professors like Korinek are beginning to do more research to understand the best uses for AI, Golino said that part of what makes doing research with AI so experimental is the fact that many still do not fully understand how these models function.
“I’m using methods I developed to understand how human beings work, but now I'm applying these methods and, of course, adapting these methods to understand how these transformer models work,” Golino said.
Golino has translated his research on how the human brain works into the study of AI, adapting computational methods from psychology to better understand how these models function, despite their significant differences from the human brain.
Mona Sloane, assistant professor of Data Science and media studies, has incorporated AI tools like the search engine Perplexity into her lab’s workflow as a way for students and researchers to learn through experimentation. She emphasized that disclosure is key — whenever AI has been used in her classroom or research, she requires that its use be acknowledged explicitly. In her view, part of preparing students for a future shaped by AI is ensuring they can critically evaluate both the benefits and the limitations of these tools.
“It's never going to be a silver bullet. It's always going to come with risks [as an] epistemic technology that reconfigures knowledge production,” Sloane said.
Renee Cummings, assistant professor of Practice in Data Science, also emphasized that understanding how AI models work is important for ensuring that research is conducted ethically. She said guardrails need to be put in place because even the creators of these systems do not fully understand their limits.
“I just always like to think about — what's the diversity of perspectives?” Cummings said. “Is it just a Western perspective? Is it a modern perspective? Whose voices are being amplified by the AI and whose voices are being excluded?”
Cummings also encouraged students and professors alike to ask challenging questions when thinking about intellectual equity. For her, using AI well means going further than efficiency; it means ensuring that research aided by AI still reflects a diversity of perspectives and does no harm.
While Cummings does not use AI to generate original research, she has deployed it as a comparative tool to scan for missing pieces, describing it as a way to catch details the human eye might miss. Moving forward, Cummings believes that research with AI tools needs to be approached with curiosity and accountability.
“A critical aspect of transparency and disclosure is a documenting of the process, documenting the prompts that are used, the tools that are being used, or the tools that were used and the sort of decision making steps that were used, almost like lab protocols,” Cummings said.