
As 2022 came to a close, English faculty member Joe Essid—now retired—encountered an artificial intelligence bot’s writing output for the first time. Then the director of writing at the Weinstein Learning Center, Essid held high standards for writing proficiency. He considered the prose carefully and soon recognized the precipice he stood upon.
“I had an aha moment when I saw how good the output from AI was,” Essid said. “I was reminded of when I was at Michigan Tech doing a summer workshop back in the early 1990s—when I saw my first web browser. It was Mosaic, the predecessor to Netscape, and I saw a moving weather pattern on a computer. I said then, ‘The world had changed.’”
AI—or more specifically, generative AI—has since dominated public attention. Generative AI is a machine-learning model designed to create something new that resembles the data it was trained on. For example, if developers fed it millions of lines of computer code, it could generate new code. The same goes for any data: Bots can also learn to create from images, literature, and other art forms, though this has commonly occurred without artists’ awareness or permission, triggering ongoing debate over whether the training process constitutes theft. In the case of the chatbot Essid encountered, the training data would have been samples of prose gathered from wherever the developers could access them. The chatbot then crafted something similar from that knowledge bank.
Generative AI’s potential applications are vast and can be helpful or harmful to the student experience. The University of Richmond is adopting a nuanced approach to AI, one that emphasizes human judgment, critical thinking, and ethical deployment of these powerful new tools. “It’s wonderful to know that, unlike many schools, we are pushing a lot in this space,” said Saif Mehkari, economics professor and co-developer of Spider AI, the university’s custom AI platform—more on that shortly. “We are competing with some of the top schools out there.”
▪ ▪ ▫ ▪ ▫ ▫ ▫ ▪ ▪
What is Richmond’s AI policy?

Saif Mehkari teaching in the Robins School of Business
The answer depends on the application, and academic excellence is always the first consideration. Rather than imposing sweeping, top-down policies on AI use, Richmond gives individual professors the autonomy to develop appropriate guidelines for their specific courses and disciplines.
“The university’s policy is the one I like most because, above all, the reason I’m an academic is for autonomy and academic freedom,” Essid said. “I think the university’s light-handed approach is the best one.”
This flexible framework allows faculty to experiment with AI in ways that enhance rather than detract from learning objectives. For Andrew Bell, technology consultant and Faculty Hub operations manager, the goal isn’t to push AI adoption, but rather to “take a really considered position, a very well-researched and nuanced position on the technologies, and be able to communicate that to faculty and help them see its role in their teaching.”
The university made advanced AI tools accessible through Spider AI, an internally developed platform that provides faculty and students with access to leading AI models from companies like OpenAI, Anthropic, and Google. Spider AI emerged from a faculty learning community exploring AI applications in teaching and has evolved into a robust system used campuswide.
▪ ▪ ▫ ▪ ▫ ▫ ▫ ▪ ▪
How is AI being implemented in classrooms?
While much public attention has focused on AI writing tools like ChatGPT, University of Richmond faculty are continually uncovering creative applications across many academic fields. In Mehkari’s first-year seminar, students use AI to create complete marketing campaigns—generating scripts, synthesizing voiceovers, and producing AI-generated images and videos. In computer programming courses, AI serves as a coding assistant to help students learn syntax while developing their core problem-solving skills.
Shital Thekdi, who teaches in the Robins School of Business’ analytics concentration, has students experiment with prompt engineering—learning to effectively query AI systems to support statistical modeling and data analysis. But she emphasizes that the technology doesn’t replace the need for human judgment.
“The class becomes less about coding and more about the analyst, their role, and their judgment,” Thekdi said. “We still rely on the human to design how to use the model. You could think of a model as a black box. You have inputs, you have outputs, and the human decides what the inputs are and how to interpret the outputs.”
This emphasis on human judgment and critical thinking extends to how faculty approach assessment and academic integrity. Rather than relying on AI detection tools, which can produce false positives, professors are redesigning assignments to highlight higher-order thinking skills that can’t simply be automated.
“If I can feed a test question to AI and it gives me the answer, then what was I really testing in the first place?” Thekdi asked. “If we’re teaching things that can easily be automated or replaced by algorithm, then we need to focus on delivering higher levels of learning that cannot be automated or entrusted to an algorithm.”
▪ ▪ ▫ ▪ ▫ ▫ ▫ ▪ ▪
How does AI integration help graduates become workplace-ready?

Today’s students will graduate into a world where AI is ubiquitous in the workplace, a fact that shapes Richmond’s approach to AI integration. Faculty are focused on helping students develop AI literacy—the ability to understand AI’s capabilities and limitations while maintaining their own critical thinking skills.
“Our students in particular are perfectly poised to handle this next generation of technology because they have access to resources that let them understand and use the technology, but they also have this broad understanding of science, art, math, and economics, and the many other disciplines within business, liberal arts, and leadership,” Thekdi said. “That helps them put perspective on what they’re doing.”
Richmond views this combination of technical literacy and liberal arts education as a boon to graduates looking to add value in an AI-enhanced workplace. “If you’re just crunching numbers, proofreading, or doing some other rote tasks,” Essid said, “I can hire a machine that never asks for a day off, that never gets sick, that needs an occasional upgrade, and costs me $20 to $100 a month, which is cheaper than I can pay for any human employee.”
The key, faculty argue, is helping students understand how to leverage AI while maintaining their own judgment and creativity. “We need to prepare students for a world where this technology is going to be as common as a web browser,” Essid said.
▪ ▪ ▫ ▪ ▫ ▫ ▫ ▪ ▪
What’s next for AI at Richmond?
To keep pace with AI’s rapid evolution, the university is launching the Center for Liberal Arts and AI (CLAAI) in fall 2025. This initiative, supported by funding from the National Endowment for the Humanities, will “bring together researchers, students, and experts to think about the social, political, and cultural possibilities and challenges of AI,” said Lauren Tilton, who will lead the center. “The humanities are central to this, thinking about how we can use AI creatively while also thinking about its impacts.”
CLAAI will operate as part of a broader collaboration with the Associated Colleges of the South, connecting Richmond with 15 other small liberal arts colleges across the region. “The idea here was that small liberal arts colleges could be even stronger by being connected together and harnessing all the research and teaching and pedagogical opportunities across them,” Tilton said.
The center will focus on four key areas: faculty fellowships will support research projects with the opportunity for student collaboration; course development grants will help professors create new classes or redesign existing ones to thoughtfully integrate AI; a workshop and lecture series will bring expertise to campus while fostering broader community engagement; and the university will develop expertise in visual AI and the analysis of images, photography, and film.

Lauren Tilton, center, teaching outside Weinstein Hall
“I don’t think of research and teaching as disconnected,” Tilton said. “When you have exciting research, it also creates exciting classes. And when students get really excited, it infuses the research. So while they’re divided into four areas, I see them as deeply connected.”
Tilton sees CLAAI as a natural extension of the university’s strengths as a liberal arts institution. “We think interdisciplinarily and transdisciplinarily and integrate many ways of thinking together. That’s the value of a liberal arts education. CLAAI is taking those values and ways of thinking and integrating them into a different way to do AI.”

Faculty also acknowledge the need to stay current with AI’s development. Training and consultation are available to teaching staff through Bell and the Faculty Hub. “I also think there’s going to be learning on both ends,” Mehkari said, describing how faculty can also learn from their students about effective AI implementation.
“I think the University of Richmond is in a great place—definitely a better place in the higher education landscape than other institutions—and that’s because we have such small class sizes,” said Bell. “The interventions that are shown to be beneficial are where we maintain our humanity, where we maintain the human in the loop.”
This perspective shapes how faculty ultimately view AI: not as a replacement for human intelligence, but as a tool that—when properly understood and thoughtfully deployed—enhances learning and prepares students for the future while maintaining the essential human elements of education.
“The value of the education at the University of Richmond is to understand the tools and how to use those tools, but not entrust those tools to replace one’s own critical thinking,” Thekdi said. “Students learn how to use those tools, but they also get their liberal arts education and learn how to use those tools effectively, ethically, and wisely and use those tools to do great things that the tools could not have done.”