David Danks joined the University in January as the William L. Polk Jr. and Carolyn K. Polk Jefferson Scholars Foundation distinguished University professor of philosophy, artificial intelligence and data science. He teaches students about the ethics and philosophy of AI in both the data science and philosophy departments. This spring, he is teaching DS 2004, “Data Ethics,” in the School of Data Science. At the core of his teaching and research, Danks focuses on the ethical choices embedded in the design and use of AI systems.
Danks comes from the University of California, San Diego, where he served as a professor of data science, philosophy and policy. He will continue at the University the Data Intelligence Values Ethics Research Lab he founded at UCSD. Here, he researches the intersection of data, cognition and values, using methods from philosophy, psychology, machine learning and public policy. Danks said he plans to teach one class in the SDS and one in the philosophy department of the College each semester, starting in Fall 2026.
A significant part of Danks’ research examines data and model bias, interrogating whether algorithms should be used to compensate for human bias. He points to real-world examples, such as AI being used in the legal system to help determine bail and the ethical trade-offs involved in building technologies like self-driving cars.
Specifically, Danks said there are ongoing debates about whether self-driving cars that use AI are safer when they are built to follow the law. He cited adherence to speed limits as an example — self-driving cars programmed to follow the speed limit could pose a danger to human drivers who exceed it.
“Who wants to be the engineer who has to go tell the legal department, oh, yeah, we designed a self-driving car to break the law,” Danks said. “[While] you don't want to do that, on the other hand, you're deliberately making something that's less safe, which also seems wrong, but you can't have both.”
Danks also said that many tech companies are not considering how they are impacting local communities with the products they create. He said that these companies are filled with engineers whose training focuses on technology, not ethical impact. In his teaching, he hopes to bring this ethical impact back to the forefront of conversations about AI and technology.
“If you talk to one of the engineers at a place like Waymo, they'll … understand that there are these ethical components to it,” Danks said. “They just say, but that's not my job. Nobody's ever taught me how to [consider the ethical impact].”
According to Danks, the self-driving car example shows how important it is to not only have a legal team but also a team focused on the ethics and morals of these decisions. Danks said that while some AI and technology companies do have ethics teams, there is often a disconnect between the ethics team and the software engineers.
In his teaching, Danks hopes to fill this gap he has witnessed in the workforce by showing students the ethical implications of the skills they learn in school. In his Data Ethics course this semester, Danks said he teaches about the many decisions that go into building and deploying AI and data science models. From how data are collected to how outputs are interpreted, each step reflects value judgments about who benefits and what outcomes are prioritized.
According to Danks, developers often fail to consider the local social norms of the communities where these AI technologies are implemented. As a result, he encourages students to leave his class with an awareness of who the technology benefits, where the data come from and what data could be missing.
Danks’ arrival reflects a broader University-wide effort to advance responsible AI governance. The University has launched multiple initiatives and research centers, including the LaCross Institute for Ethical AI in Business, which provides leadership training and guidance on ethical implementation. Similarly, the School of Law has developed an AI Accountability Framework focused on transparency and fairness.
Danks said that he was drawn to the University because of the history and tradition of focusing on human rather than solely technical skills. He said he views the SDS as one of the strongest programs in the world for its emphasis on the social and human dimensions of technology.
“The only way that we're going to make real progress on these kinds of challenges of AI and the benefits of AI is by having the humanistic and social just embedded from the very beginning,” Danks said.
Danks is also a member of a National AI Advisory Committee focused on AI assistance and interaction in the context of mental health. He works in public policy at the intersection of AI and mental health to ensure technology helps, rather than hurts, its users.
“How do we bend the technology a bit more towards the good?” Danks said. “Which means everything from working directly with companies to having better algorithms to teaching the next generation how not to make the mistakes that my generation has made in building these systems.”
Looking ahead, Danks said he sees AI as a force that is reshaping education. In the short term, he said he believes instructors are still learning how to integrate AI into their teaching, but in the long run, he sees potential for more reflective and intentional learning alongside AI. He said AI will force both educators and students to be a lot more explicit about metacognition and consider the technology’s broader role.
“We have to think about, okay, why am I using this? How is this changing how I think? And those skills take time to develop,” Danks said.
In addition to his academic work, Danks is engaged in AI policy and governance, an area he is eager to pursue from a location closer to Washington, D.C. His guiding questions remain the same across research, teaching and policy — what is the role of government in shaping AI’s future? How can society build responsible systems that align with human values?
“If I could change one thing about the AI industry, it would be to change the AI industry so that it produces products that fit us, rather than making us fit the products,” Danks said.