When OpenAI launched ChatGPT in November 2022, questions that had long been simmering about the impact of artificial intelligence suddenly felt more urgent. Within two months, ChatGPT had reached 100 million users. Giant corporations such as Microsoft and Google are now incorporating AI into software that millions of workers use every day. All of this underscores the relevance of Leading AI: Organizations, Ethics and Society, a class in the Kellogg School of Management’s new MBAi Program that examines the ethics and implications of using artificial intelligence.

Leading AI is taught by three faculty members whose research approaches the topic from complementary directions. Adam Waytz, the Morris and Alice Kaplan Chair in Ethics and Decision Management and professor of management and organizations, uses social psychology and cognitive neuroscience to explore ethical considerations around AI. Hatim Rahman, assistant professor of management and organizations, investigates how AI is changing the nature of work and employment relationships. Leslie DeChurch, professor of communication studies at Northwestern, studies the factors that make collaborations between humans and AI successful.

In the class, students discuss articles about the potential effects of AI in business, analyze case studies and design an interaction between humans and AI. They also write reflection papers to help them apply the course materials to their own careers.

As new commercial applications of AI proliferate, the professors are constantly updating the course. “When we first taught this class in the summer of 2022, ChatGPT hadn’t been released,” DeChurch says. “The target is moving rapidly when it comes to social interactions, the management of AI and other critical decisions.”

The three instructors offer an overview of the questions they explore in their class.

What are the pressing ethical issues around AI?

ADAM WAYTZ: We talk about privacy and moral responsibility, and whether you are responsible for how your technology is used, even if you developed it for a different purpose.

We also discuss algorithmic bias and how even well-intentioned business strategies that use AI can unintentionally contribute to bias. This leads into a discussion of the idea that, as biased as algorithms are, they might still be less biased than humans, and that appropriate monitoring could reduce their bias further.

In response to the current panic over AI and technology, I point out that most of the problems attributed to AI are generated by human beings. Perpetuating bias, spreading misinformation, eliminating jobs — these are all things humans have done and are doing. At the end of the day, these are just machines. We have to be careful not to let humans off the hook.

How will AI affect organizations and individuals?

HATIM RAHMAN: Technology tends to make things better and worse at the same time, and the technical features of a technology almost never predict its impact. We examine this pattern, for example, in the context of autonomous vehicles (AVs). In 2015, many predicted that AVs would be fully autonomous by 2020, but those predictions overlooked the social and political dynamics that shape how a technology is adopted.

We also talk about how AI could affect careers. Only one occupation has been completely eliminated by technology since the 1950s: elevator operator. It takes time for technology to change the nature of work in a meaningful way, so there will be time to adjust. We review academic research on reskilling and on which types of skills are likely to be in demand.

New technologies have created unprecedented wealth, but there isn’t much convincing evidence that they have improved productivity over the past few decades. We need leaders who will think carefully about how technology is integrated into their organizations.

How will teams adapt to collaborating with AI?

LESLIE DECHURCH: Through research on teams, we know a lot about what happens when you put experts together. How they interact can either lead to novel ideas or impede their performance.

Adding intelligent technologies creates new kinds of social demands: some members of a group are real people, and some are synthetic teammates. I have students design a physical embodiment of an AI for their team, drawing on findings about how aspects of embodiment affect trust.

I’m excited about how AI could help people collaborate better. There’s a lot of research on the importance of collaboration in society. AI could be the ultimate team member, detecting problem-solving errors and helping team members draw on one another’s expertise.

Fast forward: What will AI look like in 10 years?

LESLIE DECHURCH: It’s going to be completely unpredictable. If the past 10 months have taught us anything, it’s that the things we’re really worried about today might not be the most pressing issues of the future.

ADAM WAYTZ: My main concern is that AI will contribute broadly to the homogenization of art and culture. Algorithms have learned which cultural products tend to be financially successful, so AI will keep regurgitating those things rather than creating novel works of art.

HATIM RAHMAN: I believe we will simultaneously experience the best of AI and the worst. Which scenario dominates will depend on social, cultural, political and organizational factors, not on the technical advancements made in AI.