
Critical Thinking About Generative AI

Communication and media scholars think critically about the introduction of new technologies, exploring what society gains and loses as new tools for communicating and new forms of media production and distribution become integrated into society. Rather than being motivated by fear of a new technology or rushing to label it “good” or “bad,” communication and media scholars apply critical thinking strategies to consider its benefits and challenges as well as its design and uses. Through these lenses, scholars argue that the design of each technology, including Artificial Intelligence (AI), affords us the ability to use it in certain ways, including uses we find beneficial and uses we find harmful. At the same time, we understand that a technology does not determine its own impact on the world; instead, those who design, distribute, profit from, and use the technology are all part of the mélange of factors that shape how it will be integrated into, and possibly change, society. Since AI is always developing, here are some questions you can ask to think critically about its value, use, and impact:
We encourage you to spend some time thinking about your own answers to these questions.
What might we gain and/or lose from the introduction of Generative AI (GenAI)?

As many of us have experienced, GenAI can increase efficiency and process large amounts of information. It may even increase creativity by helping us think outside the confines of human thought. On the one hand, these tools could help us develop complex perspectives and stronger evidence-based arguments. On the other hand, they could also lead us to rely less on our own memories and analytical skills, potentially atrophying our abilities to think critically, develop expertise, and exercise moral judgment.
What moral codes and ethical principles does AI use as it creates communications and media?

Like all new technologies, AI’s processes are encoded with the biases of its developers. In a capitalist society, developers are likely to value profit over human wellness. In addition, some scholars are concerned that because GenAI relies on user prompts and develops its intelligence by building on existing information and patterns, it is not equipped to challenge social norms or stereotypes. It is essential to consider when it is imperative for AI to value humanity or ecology over profit. If we first specify, through prompts, what a GenAI tool should value, it may comply, but what about when we don’t? In other words, what is or should be the AI moral default? Alternatively, can AI be developed that helps users critically reflect on their own biases and consider alternative ideologies?
How might AI impact the creative industries?

Much of the buzz surrounding GenAI has focused on its potential uses in artistic endeavors, such as creating literature, music, and video games. It’s worth considering the potential benefits and challenges of using AI in these areas. AI might lower the barrier to entry for creative work and thus help even more people create and share their artistic visions with the world. Yet it’s possible that AI trained primarily on media content that reflects predominant power dynamics and stereotypes would largely generate output reflecting those same power dynamics and stereotypes, thus impeding the creation and spread of innovative and resistive creative ideas and expressions. AI has already provoked a “crisis” in intellectual property, with artists expressing concern that AI is using their works without permission and threatening their livelihood. AI thus raises critical questions about what it means to “own” an idea or creative expression as well as the meaning of creativity in general.
As users who interact with new and expanding AI tools, how can we help shape their use?

It’s important to recognize that AI is a technology made by and ultimately used by humans, thus giving us influence over how AI is designed and implemented. New literacies must be developed to help people learn how to use AI safely and responsibly. New norms, expectations, and regulations are needed to make sure AI is used ethically and to hold accountable those who fail to do so. Serious consideration must also go into developing and implementing strategies to prevent AI from exacerbating the digital divide. Who will have access to the highest quality AI? Will it remain free and open, or will those with greater privilege have access to more powerful and advanced tools? What might be the long-term socio-political and economic impact of this divide?