
How Loyola is bringing artificial intelligence and ethics into the classroom
August 11, 2025
It’s no secret that artificial intelligence is transforming research and business practices. In seconds, generative artificial intelligence applications like OpenAI’s ChatGPT can write code, spin up marketing copy, suggest medical diagnoses, or turn your Spotify playlist into a work of abstract expressionist art.
Yet, as they integrate AI in their classrooms, many professors at Loyola University Chicago, a Jesuit, Catholic university anchored by an ethical mission, are attempting to strike a delicate balance: embracing the technology for education and research aimed at benefiting the common good, while heeding the warnings of Pope Leo XIV, who, in his inaugural address to the College of Cardinals, noted the risks that artificial intelligence, if not properly addressed, poses to “human dignity, justice and labor.”
“We want to equip students with the preeminent skillsets, and this particular skillset, AI, is just disrupting industry and the way we work in massive ways,” says Executive Lecturer Steven Keith Platt, who directs the AI Business Consortium and the Lab for Applied AI in the Quinlan School of Business. Using artificial intelligence ethically and responsibly—to support and augment critical thinking, rather than to stand in for it—will “make a big, big difference in our students’ abilities to get jobs,” Platt said.
New AI minors prepare students for emerging applied AI roles
This fall, Loyola University Chicago will launch two new undergraduate AI minors: a Business of Artificial Intelligence Minor based in Quinlan and an Artificial Intelligence Minor in the Department of Computer Science. Plans to launch an Interdisciplinary Artificial Intelligence and Human Flourishing Minor in fall 2026, co-led by the computer science and philosophy departments, are also in the final stages of review.
Meanwhile, the Lab for Applied Artificial Intelligence on the third floor of the Quinlan School of Business, funded by a two-year, $200,000 grant from the National Science Foundation, is engaging undergraduate and graduate students in multidisciplinary AI research and providing a hub for data scientists to support faculty projects across the University.
And at regularly occurring conferences and symposia, such as the 14th Annual International Symposium on Digital Ethics hosted by the Center for Digital Ethics and Policy in March 2025, Loyola faculty and staff are convening professors, technologists, theologians, attorneys, policy makers, and ethicists from across universities, government agencies, and businesses to share knowledge and discuss what rapidly advancing AI technology could mean for society.
“So, there’s a lot happening, but the important thing is that university-wide, people are actively trying to advance the science,” Platt said.
Many Loyola faculty members are already embedding AI directly into their curricula, whether as a research and writing tool or as the core instructional focus. These offerings, says George Thiruvathukal, a Loyola professor and visiting scientist at Argonne National Laboratory who chairs the computer science department, are designed to prepare students for a technological shift he believes will be as profound as the arrival of “the printing press or the internet.”
In undergraduate and graduate courses at Quinlan, for instance, Platt’s students have applied AI models in culminating assignments and capstone projects to assess the relationship between Environmental, Social, and Governance (ESG) initiatives and U.S. banks’ financial performance, analyze retail traffic patterns in a large supermarket chain, improve the efficacy of job posting sites, and flag inconsistencies in tax documents comprising tens of thousands of pages.
These real-world projects, sponsored by private companies such as Deloitte and Glassdoor, are a differentiating feature of the Business of Applied Artificial Intelligence Minor, Platt says, and are designed to equip students with the skills needed for emerging roles at the intersection of business and AI engineering and deployment.
“Our goal is to have these students become what’s known as ‘business translators,’” Platt said. “That role is incredibly unique. On the one hand, they know enough about engineering and algorithms and math to be conversant, but they’re also steeped in knowledge on the business side.”
Similar pedagogical initiatives are taking place across the University. At the Stritch School of Medicine, in a course called Machine Learning for Molecular Biologists, Qunfeng Dong, the director of the Center for Biomedical Informatics and a professor of computational biology, is teaching students principles of machine learning, with the goal of improving their ability to evaluate medical literature critically and ask empirically informed research questions.
“[Students are] not just going to take AI-driven research results at face value,” Dong said. “They’re going to be able to ask, ‘What is your training data set?’ ‘How did you train your machine learning algorithm?’ ‘How did you evaluate your results?’ ‘Is there a bias?’”
Dong says AI advances have led to groundbreaking new research methods in molecular biology that would have been unfathomable in the recent past. Just as large language models, like ChatGPT, can make next-word predictions and construct human-seeming sentences and texts based on massive corpora of training data, neural networks can be applied to strings of amino acids in protein sequences to reveal insights that help molecular biologists predict diseases and inform vaccine development.
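To make the analogy concrete, here is a minimal, purely illustrative sketch; it is not drawn from Dong’s course, and it assumes the open-source Hugging Face transformers library and Meta’s publicly released ESM-2 protein language model. It masks a single residue in an arbitrary amino-acid sequence and asks the model which amino acids are most plausible at that position, the protein analogue of a text model predicting a missing word.

```python
# Illustrative only: score plausible amino acids at a masked position using the
# publicly available ESM-2 protein language model (an assumed example, not the
# specific tooling used in Dong's course).
import torch
from transformers import AutoTokenizer, EsmForMaskedLM

model_name = "facebook/esm2_t6_8M_UR50D"  # small public ESM-2 checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = EsmForMaskedLM.from_pretrained(model_name)
model.eval()

sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # arbitrary example sequence
inputs = tokenizer(sequence, return_tensors="pt")

# Mask one residue, just as a text model would hide a word and predict it from context.
masked_ids = inputs.input_ids.clone()
position = 10
masked_ids[0, position] = tokenizer.mask_token_id

with torch.no_grad():
    logits = model(input_ids=masked_ids, attention_mask=inputs.attention_mask).logits

# Probability distribution over possible amino acids at the masked position.
probs = torch.softmax(logits[0, position], dim=-1)
top = torch.topk(probs, k=3)
for token_id, p in zip(top.indices.tolist(), top.values.tolist()):
    print(tokenizer.convert_ids_to_tokens(token_id), round(p, 3))
```

Students who have worked through an exercise like this, in Dong’s framing, are better equipped to ask how a published model was trained and evaluated rather than taking its results at face value.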
However, given how fast the technology has emerged, few medical students understand these AI models beyond the superficial level of buzzwords, a gap Dong is striving to close. “That’s our goal for students,” Dong says. “You come in knowing almost nothing. You come out understanding the basic principles of machine learning and AI.”
Loyola leads the way in AI ethics
Loyola is not alone in its efforts to integrate AI into its instructional framework. In June, Ohio State University announced that starting in fall 2025, “every student will be asked to use artificial intelligence,” according to a report from NBC. That same month, Northwestern University launched a university-wide network “dedicated to the integration of data science and AI across all aspects of research and education.”
But Loyola’s scholarly focus on AI ethics sets it apart among universities, positioning it as a moral authority able to help steer the broader philosophical debate about how to address the threats experts say the technology poses to everything from privacy and data security to job security, academic integrity, and human dignity.
“Especially in a Jesuit university with a strong philosophy department, it would be a travesty not to think about not just the ethical implications of AI, but also the philosophical and even theological questions to some extent—to really grapple with issues, like, ‘What is it that makes us human?’” Thiruvathukal said.
Such ontological questions, and the thorny ethical concerns that accompany them, are not just being voiced by Luddites and Silicon Valley antagonists. OpenAI CEO Sam Altman, speaking at the Capital Framework for Large Banks conference at the Federal Reserve in late July, told the audience AI could eliminate entire job categories, including many customer service roles. An investigation by Time, meanwhile, found that OpenAI employed Kenyan workers paid less than $2 per hour to purge its training data of toxicity, hate speech, and bias. And a preliminary MIT Media Lab study of 54 participants writing essays, with and without ChatGPT, found that those relying on the large language model showed weaker brain connectivity, poorer memory recall, and less ownership of their writing.
ChatGPT enters the classroom
In her Ethics in Business class, Diana Acosta Navas, an assistant professor of management at Quinlan, asks students to wrestle with such questions, though not in the way one might expect. One assignment challenges students to analyze a business fraud case, such as disgraced FTX founder Sam Bankman-Fried’s misappropriation of billions of dollars in customer funds from the now-collapsed cryptocurrency exchange and its associated trading firm, Alameda Research. Students use a generative AI chatbot to draft preliminary ethical analyses of the fraud cases, but they are graded on their ability to identify errors and biases and “take a critical stance vis-a-vis the response they get from the AI,” Navas said.
More broadly, Navas strives to show students that AI is a tool that can augment their work, rather than replace it: “What I think Pope Leo XIV is thinking about when he talks about AI’s threat to human dignity is the loss of cognitive capabilities,” she said, adding that she shares this concern. “So, teaching students how to think critically, without outsourcing everything to AI, is really important.”

Florence Chee, an associate professor who directs the Center for Digital Ethics and Policy, applies a similarly reflective approach in a communications studies course called Digital Media Ethics. While acknowledging the almost magical power of ChatGPT to “generate actors’ voices, resurrect loved ones [as avatars], and write songs and essays,” she says the tool’s ability to provide valid citations for the content it generates is dubious at best. Rather than trying to catch students misusing the tool, she has devised creative “ChatGPT-proof” assignments. For instance, students create digital media toolkits (websites, public service announcements, TikTok vertical videos) that educate young people and older adults about data privacy risks.
“Students tend to start my courses with a very instrumental, transactional mindset,” she said. “By drawing on Ignatian pedagogical principles, I incorporate practices that touch upon reflection, storytelling, and learning assessments that focus on outcomes. I start by trusting students.”
Still, moving away from traditional teaching methods is not always easy, particularly in the context of academic writing.
Julie Chamberlain, an advanced lecturer in the English department, says she has had to abandon research summaries and reading response assignments, which are too easy for ChatGPT to crib. “Students do not even have to read the Wikipedia page for a literary text or attempt to paraphrase the abstract of an article—they can complete the assignment without the slightest thought. This violates the core principles of why we assign students writing tasks: to get them to sit with a text and reflect on it.”
Her solution? That depends on the course and assignment context. In Business Writing, students are free to use AI as they see fit, with the caveat that they’re responsible for anything AI writes that is inaccurate or inappropriate for the assignment. “There’s a real emphasis in business writing on efficiency, and, to some extent, boilerplate language that can be used and adapted to similar situations,” Chamberlain said.
In her first-year research writing and literature courses, however, the bar is higher for original authorship and citation; students can use AI to organize ideas and revise drafts, but they must keep receipts documenting the entire process. “The onus is on them to establish their originality,” she said. In an Exploring Drama course, she has moved in the other direction, asking students to demonstrate their knowledge of The Second Shepherds’ Play, a medieval mystery play, by using AI to render images of imagined stage sets and present their visions to the class.
“What do their choices convey about the play? How did they adapt or change particular elements? What are the implications of their choices? So, they’re really having to engage in critical thinking,” Chamberlain said.
The larger lesson may be that to prepare students for emerging AI-driven jobs and new ways of organizing and conveying knowledge, professors will need to think creatively to adapt their teaching and evaluation methods.
“For a long time, we’ve relied on written, exam-based assessments,” Thiruvathukal said. “We should really have more of an oral process for assessing knowledge. When you have to do a PhD defense, it’s not like you can say, ‘Excuse me, committee members, I have to step away to ask ChatGPT what my work is about.’”