There’s no playbook for universities to rely on when it comes to figuring out what generative artificial intelligence means for higher education and, specifically, the liberal arts. Its impact is more revolutionary than evolutionary, promising both disruption and opportunity in novel, even unforeseeable ways.
As the University of Richmond considers what generative AI means for students’ education, it is relying on the bedrock skills Spiders have always excelled at: curiosity, critical thinking, and collaboration. On campus, one key effort is the Presidential Advisory Group on AI convened by President Kevin F. Hallock. This small, dedicated group of staff and faculty is engaging the campus community to gather insights and make recommendations to Hallock as new AI-related opportunities and challenges become apparent. Additionally, faculty across all five schools are exploring AI and sharing ideas and best practices, including through programming and learning communities in the Faculty Hub.
The university is taking a leadership role beyond campus through another key effort: the Center for Liberal Arts and AI. CLAAI brings together faculty, researchers, and others from 15 liberal arts universities to explore pressing social, cultural, and legal dimensions of artificial intelligence.
Richmond is a key driver of the conversations. The university is the center’s host, and CLAAI’s founding director is UR’s E. Claiborne Robins Professor of Liberal Arts and Digital Humanities Lauren Tilton.
At the start of the academic year, CLAAI identified approximately two dozen fellows from among the institutions. Tilton describes the CLAAI fellowship as something more fundamental than a typical research cohort. The fellowship deliberately creates time and collaborative space for faculty to think broadly across schools and fields about what generative AI means for students and liberal arts education.
“That’s a harder space to get than people realize,” Tilton says.
A handful of UR faculty are among the fellows, and they have some thoughts. Here’s what’s on some of their minds as they think about AI in their disciplines and how it is reshaping students’ education and futures.
STEPHANIE SPERA: SCALING UP RESPONSIBLY
“I generally loathe that everyone has access to [all] things AI,” says Stephanie Spera, associate professor of geography, environment, and sustainability. “Do I want people to use it to solve global environmental problems and make the world better? Sure, yes. Do I want students to waste energy and water by asking it how to write an email? Absolutely not.”
Her field has been using what she describes as “a version of AI” for years, which helps her track the effects of climate change on the Amazon rainforest and in Acadia National Park. “Now with the cloud and big data, you can scale this up immensely,” Spera says. “Models that used to take forever to run no longer do. More advanced models like neural networks and deep learning are becoming more accessible.”
But accessibility creates new challenges. “A lot of people just type something into ChatGPT or Claude or whatever and hope it will solve their problems, but you have to have background in what you’re studying,” Spera cautions. “You need context. You need content. ... You need to know what you’re doing in order to be able to use it as a tool.”
In her Python coding class, Spera allows students to use AI for help, knowing they’ll quickly discover its limitations. “What we knew and the students realized is that AI is not perfect,” she says. “And again, if you don’t have the foundational background in your topic — you not only won’t be able to ask useful questions that can help solve your problem, but you also won’t be able to troubleshoot when it inevitably fails.”
Her teaching philosophy centers on a few key principles: Acknowledge the huge environmental footprint and don’t use AI unnecessarily; don’t let it replace original thoughts; and recognize that AI is biased and imperfect.
“AI is a tool,” she says, “and like all tools, it only works if you know how to use it.”
VLADIMIR CHLOUBA: THE HUMAN CONNECTION
Vladimir Chlouba says that in some aspects of his field, “AI changes nothing.”
Chlouba, an assistant professor of leadership studies, researches traditional leadership. This work regularly takes him into the field across sub-Saharan Africa, where he interviews traditional chiefs, runs focus groups with citizens, and combs through archival materials.
“No algorithm will ever replace sitting down to sip tea with villagers in remote regions of Namibia, hearing firsthand what they think about their chief,” he says.
Yet he’s not dismissive of AI’s utility. “AI can transcribe my interviews with shocking accuracy, and it helps me tackle administrative datasets that were once too massive and unwieldy to analyze in any reasonable time frame,” he says. “The bottom line is that technology does not replace the human connection at the heart of fieldwork. It amplifies what I can do with the insights I gather there.”
In the classroom, particularly where writing is concerned, AI poses difficult questions without clear answers, he says. But he offers one prediction.
“The importance of self-regulation in the lives of our students will increase,” he says. “Those who can use AI to enhance their productivity while resisting the temptations to offload the learning process will benefit.”
This creates a novel hurdle. “Navigating this tradeoff is something that no prior generation had to face to the same extent,” he says. “How do you teach this sort of willpower? Another good question to which I have no ready-made answers.”
MEGAN DRISCOLL: THE CRITICAL ENGAGEMENT
For Megan Driscoll, assistant professor of art history, AI presents a dual challenge. As someone who focuses on contemporary art, she thinks about how AI affects both the practices of artists and the work of art history researchers.
“Many artists feel that their jobs and livelihoods are under threat” from AI, Driscoll says. But her focus isn’t on the controversies around image generation and training sets. Instead, she’s interested in artists who have been actively engaging with AI and related technologies for years.
“I see a lot — the kinds of exhibitions that are happening and new works that artists are coming out with — there’s really a flood of artists trying to dig into what it actually means to start having these tools really rapidly available,” Driscoll says.
With respect to research, Driscoll sees practical benefits emerging, particularly as universities and libraries improve their digitization efforts. “Already, I can do so much more primary source research with my students at UR than I could as an undergrad,” she says. Better data sets combined with AI tools that sort and organize information can increase research opportunities, especially for undergraduates.
In her classroom, Driscoll takes a pragmatic approach to how students use AI for writing assignments. She’s developed specific prompts that use AI as what she calls “a second pair of eyes” for student writing. She has found the AI feedback consistent, helping students identify whether they’ve written a clear thesis statement and providing basic structural guidance.
Driscoll’s careful to emphasize AI’s limitations. She doesn’t allow students to use AI for brainstorming on visual analysis papers — “that’s not the skill that they’re developing.” Instead, the tool helps them learn to edit their own work, mimicking the self-editing skills that experienced writers possess.
MARY FINLEY-BROOK: THE ENVIRONMENTAL CONCERNS
Mary Finley-Brook, professor of geography, environment, and sustainability, brings a sustainability perspective to the CLAAI fellowship program. Her research on data centers and AI’s environmental impact adds a critical dimension that other fellows are finding eye-opening.
The scale of the issue is staggering. “There are currently hundreds of data centers in the Commonwealth of Virginia and hundreds more in the pipeline,” Finley-Brook says, citing a total of 668. “The majority of these facilities in Virginia are hyperscale — greater than 10,000 square feet with more than 5,000 servers — to meet the massive computational demands of AI.”
She doesn’t mince words about the consequences. “Forecasted energy use for AI and hyperscale data centers is unrealistic.”
Finley-Brook envisions a different path than the one we’re on. “Infrastructure projects for technology like AI should be beneficial to the local host area, and companies need to minimize [environmental] harm — this has not been the trend in data center development,” she says, arguing that a lack of transparency means sustainability commitments require more accountability and independent public reporting.
In her classroom, Finley-Brook focuses on preparing students for the realities of this landscape. “Richmond students can use their critical thinking skills to unpack the claims of tech companies,” she says.
She’s working with fellow CLAAI members on practical resources, including a one-page guide on how to use AI more sustainably. “Not every assignment or project needs to use the largest, biggest, highest parameter model,” she says. “Some projects just need to use a smaller model.”
DANIEL HOCUTT: THE TOOL IN THE LOOP
“I don’t see generative AI as being a creative agent,” says Daniel Hocutt, adjunct professor of liberal arts in the School of Professional & Continuing Studies. “I see it as being a creative tool that extends or augments human capabilities.”
The distinction matters deeply to him. “The term ‘human in the loop’ gives generative AI too much agency,” he says. “I believe we should focus on ‘tool in the loop’ approaches that demonstrate the reality that generative AI can, at its best, augment the work of humans.”
For Hocutt, AI isn’t a distant future concern. He has watched AI’s integration into social media and search marketing for at least five years, though only recently has it become part of a marketer’s required expertise. Looking ahead, he sees profound changes coming to search and digital communication. “Look for search results to be completely AI-generated in the coming years and for search engine results pages to become the gold standard for authority — more so, probably, than even a product or service’s website,” he predicts. “If one’s web content doesn’t appear in AI-generated summaries, then it will not be considered authoritative.”
He integrates AI into his Business and Professional Communication classes using SpiderAI — a UR-built generative AI tool for students and faculty — having students test AI tools toward becoming more critical users. “I want the tool in the loop of human communication practices,” Hocutt says, “not the human in the loop of digital communication practices.”
His advice for everyone is direct: “Immediately start using generative AI because you need to know its strengths and limitations.”
SONJA BERTUCCI: STRANGE APPROPRIATIONS
“AI will increasingly present artists with creative and ideological problems rather than solutions,” says filmmaker Sonja Bertucci, an assistant professor of languages, literatures, and cultures. She foresees “enormous upheaval” in both film production and academic disciplines.
“I can say that AI, in its current form, when it produces creative work — scripts, images, films — is still too formulaic, and palpably and paradoxically strange in its mastery of the codes,” she says. “Art depends in some way on deviating from formulas, on taking risks and manipulating ‘flaws.’ ... That is dissonant with, if not contrary to, the very idea of algorithmic predictability.”
She points to artists who engage with AI in ways that are at once critical and creative.
“To the extent that AI turns artists into prompters, I believe it will be catastrophic,” she says. “To the extent that AI can become the basis of enhanced critical practices, exercises, and strange appropriations — that human beings can do something strange and wonderful with it — there is the slightest possibility that it will provide an ecosystem for creativity and reflection.”
In her classes, Bertucci takes a balanced approach. “The goal of my classes is not to be either ‘pro’ or ‘contra’ AI, but rather to raise an awareness about the ideological, technical, and creative problems, potentials, and limitations of AI,” she says.
Her students’ reactions are revealing. “Most students laughed at AI aesthetics — its glossy, flashy sheen — when we analyzed it together in class, as it was so obviously and exaggeratedly kitschy,” Bertucci recalls.
She’s developing what she calls “a commitment to a process over time” as a counter-practice to AI’s promise of immediate mastery.
“An important strand in art production still believes in the necessity of effort, in long years of training and study as part of the formation of a personality and a style,” she says.
PREPARING STUDENTS
As the CLAAI fellows at Richmond and elsewhere continue to analyze and assess, the group plans to produce practical resources: a guide for more environmentally sustainable AI use and a broader articulation of the relationship between a liberal arts education and generative AI.
But perhaps more valuable than any single document is the space the fellowship creates — space for faculty to think deeply across disciplinary and institutional boundaries, to question assumptions, and to prepare students not just for a world with AI, but for critical engagement with it.
“I have no doubt that there are going to be some unexpectedly great outcomes that [arise] from this group,” says Tilton.
The conversations happening among these fellows suggest they’re asking the right questions.