
Living the AI Experiment

When philosophy professor James Brusseau, PhD, introduced his students to the Caffeinated Professor, a generative artificial intelligence (AI) chatbot trained on his business ethics textbook, he wasn’t trying to replace traditional teaching by handing the classroom over to a robot.
He was embarking on an experiment into uncharted educational territory, a journey without a map and only one direction of travel.

“I don’t know all the ways that it will help and hurt my students,” said Brusseau, who unveiled the AI professor to his Philosophy 121 class this semester. Students are encouraged to converse with the bot day or night, just as they might with him. “When answers are a few keystrokes away, there’s a clear pedagogical negative to introducing a tool like this.”
“But if I didn’t build this, someone else would have,” he added. “While I can’t control the world’s ‘AI experiment,’ I do have the opportunity to see for myself how it’s working.”
The rise of generative AI—tools like ChatGPT, Gemini, and Grok that generate original text, images, and videos—has sent shockwaves through many industries. For some observers, fear is the dominant emotion, with concerns that AI could take jobs or lead to humanity’s downfall.
Professors and researchers at Pace University, however, see a different future. For them, AI anxiety is giving way to a cautious acceptance of a technology that’s transforming how we live, work, study, and play. While creators urge caution and experts debate regulations, scholars are concluding that, for better or worse, AI is here to stay.
The real question is what we choose to do with that reality.
At Pace, experimentation is the only way forward. In Fall 2024, Pace added an AI course—Introduction to Computing—to its core curriculum for undergraduates, bringing the number of courses that incorporate AI at the undergraduate and graduate levels to 39.
Pace is also leading the way in cross-disciplinary AI and machine learning research. At the Pace AI Lab, led by pioneering AI researcher Christelle Scharff, PhD, faculty, staff, and students integrate their knowledge areas into collective problem solving powered by the technology.
In doing so, Pace’s academics are writing and revising the script for how to balance the dangers and opportunities that AI presents. “We’re living in a heuristic reality, where we experiment, see what happens, and then do another experiment,” said Brusseau.
A Defining Moment
Jessica Magaldi’s AI experiment began with revenge porn. Early in her career, the award-winning Ivan Fox Scholar and professor of business law at the Lubin School of Business studied intellectual property law and transactions for emerging and established companies.

In 2020, she turned her attention to laws criminalizing the sharing of sexually explicit images or videos of a person online without consent. Shockingly, most revenge porn laws were toothless, she said, and there was very little public or political appetite to sharpen them.
Now, fast forward to January 2024, when fake sexually explicit images of singer Taylor Swift went viral on X. Public outrage was immediate. Users demanded accountability, and fans initiated a “Protect Taylor Swift” campaign online. In Europe, lawmakers called for blood.
For Magaldi, something didn’t add up. “We were at a moment when AI-generated content that everyone knows is fake was producing more outrage than so-called revenge porn photos, images that are real.” Understanding that contradiction could offer clues on how to draft laws that are more effective for victims, she said.
Eventually, it might even teach us something about ourselves. “My greatest hope is that we can use what we learn about the differences between how we feel about what is real and what is AI to explore what that means for us and our collective humanity,” she said.
Optimism Grows
Harnessing the benefits of AI is also what occupies Brian McKernan, PhD, an assistant professor of communication and media studies at the Dyson College of Arts and Sciences.

McKernan, who describes himself as cautiously optimistic about AI, could be forgiven for taking a less rosy view of the technology. His research areas include misinformation, cognitive biases, and political campaign transparency—topics where the use of AI is rarely benevolent. In a 2024 study of the 2020 US presidential election, McKernan and his collaborators found that President Donald Trump used the massive exposure offered by popular social media platforms in an attempt to sow distrust in the electoral process.
And yet, McKernan remains upbeat, an optimism stemming from the fact that AI helps him keep tabs on what politicians are saying, and doing, online.
“It’s a data deluge,” he said. To help sort through it, McKernan and colleagues at the Illuminating project, based at Syracuse University, train supervised AI models to classify and analyze social media content. Researchers check the models’ performance before making their findings public.
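The workflow McKernan describes, training a supervised classifier on hand-labeled posts and checking its performance on held-out data before publishing findings, can be illustrated with a minimal sketch. Everything below is hypothetical: the example posts, labels, and model choice assume the scikit-learn library and are not the Illuminating project’s actual code.

# Minimal sketch of a supervised text classifier with a held-out check.
# Hypothetical data and labels; not the Illuminating project's pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hand-labeled posts: 1 = claims of election fraud, 0 = other campaign content
posts = [
    "The vote counts were rigged in several states",
    "Join us for a town hall on infrastructure this Friday",
    "Mail-in ballots are being thrown away by officials",
    "Our campaign released its education plan today",
]
labels = [1, 0, 1, 0]

# Hold out data so researchers can verify the model before trusting it at scale
X_train, X_test, y_train, y_test = train_test_split(
    posts, labels, test_size=0.5, stratify=labels, random_state=0
)

vectorizer = TfidfVectorizer()
model = LogisticRegression()
model.fit(vectorizer.fit_transform(X_train), y_train)

# Humans review held-out performance before publishing model-derived findings
print(classification_report(y_test, model.predict(vectorizer.transform(X_test))))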
“There are great uses for AI, particularly in cases with huge amounts of data. But we will always need humans involved in verifying,” he said.
Racing to Regulate?
To be sure, there are social and ethical dangers inherent in AI’s application—even when people are at the keyboard. One concern is access. Many generative AI tools are free, but they won’t be forever. When people can’t afford “the shiniest tools,” McKernan said, the digital divide will deepen.
Other challenges include maintaining data privacy, expanding availability of non-English tools, protecting the intellectual property of creators, and reducing biases in code. Even AI terrorism is an area of increasing concern for security experts.
Emilie Zaslow, PhD, a professor and chair of communication and media studies at Pace, said that, given these concerns, a regulatory framework for AI might eventually be wise.

“In media, we have examples of both government regulatory oversight, through the Federal Communications Commission, for example, and industry self-regulation, such as the Motion Picture Association film rating system,” Zaslow said. “There is also government involvement in evaluating new consumer products; take the Food and Drug Administration, for example. Every time a new drug comes to market, the FDA evaluates it, tests it, and decides whether it gets released and with what kind of warnings.”
“There should be increased regulatory oversight for technology,” she said.
Regulations are emerging. In Europe, the AI Act bans certain applications deemed to pose an “unacceptable risk” to citizens. Prohibited applications include social scoring systems; real-time facial recognition and other forms of biometric identification that categorize people by race, sex life, sexual orientation, and other attributes; and “manipulative” AI tools.
Companies face fines of up to €35 million (about $35.8 million) or 7% of their global annual revenue, whichever is higher.
Brusseau, while sensitive to the dangers, doubts that the punitive approach will pay off. “The internet has no geography; it isn’t anywhere,” he said. “How do we prohibit something that isn’t anywhere?”
He suggests a different approach: using technology to regulate itself. He calls this acceleration ethics, the idea that the most effective way to address the risks raised by innovation is with still more innovation.
In a recent paper, Brusseau examined how TELUS, a Canadian telecommunications company, developed an automated safety tool to monitor its customer-service chatbot. When the safety tool detected hallucinations, phishing threats, or privacy risks in the chatbot’s answers, it flagged them for human review.
“While the purity of theoretical positions is blurred by real-world ambiguities,” Brusseau wrote, “the TELUS case illustrates how the acceleration strategy transforms AI ethics into an innovation catalyst.”
Risks Worth Taking
Ask ChatGPT whether it’s dangerous, and its response is unequivocal: “I’m here to help and have meaningful conversations.”
Ask ChatGPT whether AI is dangerous, and the reply is a bit murkier: “It depends on how it’s used.”
But point out that ChatGPT is AI, and the contradiction isn’t lost on the technology itself. “What I meant to convey is that I am designed to be helpful, safe, and non-threatening. But it’s true that, like any tool, the potential for harm exists if used irresponsibly.”
When scholars and historians look back at this era of AI experimentation, they may be similarly conflicted. Magaldi, who understands how devastating sexually explicit deepfake images can be, also recognizes the usefulness of AI’s creativity. In Spring 2024, she even used AI to help her flesh out an idea for a class on Taylor Swift, partly as an exercise in using AI creatively herself.
“With ChatGPT, I was able to build an entire music industry law class based on Swift's disputes and lawsuits,” Magaldi said. After lots of tweaking, she ended up with the syllabus for a three-credit class exploring the singer’s experiences with copyright infringement, music industry contracts, trademark law, and ticketing practices.
It was a massive success. TikTok videos were made about the class, registration closed in minutes, and students are eager for it to run again.
This type of human-AI interaction—using the technology as a “thought partner,” as Magaldi puts it—is the sweet spot in AI’s societal integration.
It’s also why Brusseau is upbeat. “I'm not worried in the least,” he said. “Humans produce knowledge through causality, while machines do it exclusively through correspondence. They reason wrong.”
That certainty, however, doesn’t mean he has all the answers. With AI, there are only questions. “Like buying a one-way plane ticket, it’s not the destination that matters, but the journey,” he said. “That’s why I built the Caffeinated Professor—to see where it takes us.”
More from Pace
From privacy risks to environmental costs, the rise of generative AI presents new ethical challenges. This guide, developed by the Pace Library, explores some of the key issues and offers practical tips for addressing them while embracing AI innovation.
With artificial intelligence remodeling how healthcare is researched and delivered, Pace experts are shaping the technology—and erecting the guardrails—driving the revolution.
Pace President Marvin Krislov recently participated in a conversation at Google Public Sector GenAI Live & Labs as part of the Future U. podcast. He joined higher ed leader Ann Kirschner, PhD, and Chris Hein, Field CTO at Google Public Sector, to discuss the evolving role of AI in higher education.