In Interactions with AI, Does Race Matter?

GW researchers found that consumers report higher satisfaction when interacting with a chatbot whose avatar is Black than when the same bot appears to be white or Asian.

June 27, 2023

The three chatbot avatars—Black, Asian and white—as they appear in an explanatory video created by the research team.

Between human beings, racial prejudices shape all kinds of interactions—from which hairstyles are considered professional to who gets to coach professional sports teams. But as more interactions become partially or wholly digital, and as artificial intelligence becomes a more central part of those digital interactions, how might these prejudices and stereotypes translate into the digital environment? How does perceived race affect human interaction with AI—say, when talking to a consumer-help chatbot with a cartoon human avatar?

An interdisciplinary team led by researchers from the George Washington University is exploring the question, and the researchers say the answers aren’t necessarily what a layperson might expect. When interacting with three identical chatbots with different racialized cartoon avatars—one white, one Asian and one Black—consumers rated the Black avatar highest on scales of competence, warmth and “humanness.” Customers who interacted with the Black bot also reported higher satisfaction levels, the researchers found.

“We had predicted the opposite, because our predictions were based on humans,” said study co-author Vanessa Perry, vice dean for strategy and professor of marketing at the GW School of Business. Perry and Nils Olsen, assistant professor of organizational sciences in the Columbian College of Arts and Sciences, are co-authors of “I’m Only Human?: The Role of Racial Stereotypes, Humanness, and Satisfaction in Transactions with Anthropomorphic Sales Bots,” published this year in the Journal of the Association for Consumer Research. (Their co-authors from other institutions are Nicole Davis of the University of Georgia, Marcus Stewart of Bentley University and Tiffany White of the University of Illinois Urbana-Champaign.)

In the study, participants engaged in a simulated booking of a several-day trip to New York City with the goal of negotiating the best possible price for their stay. The bot they interacted with was functionally identical across all participant groups, but the avatar representing it was stylized with three different color schemes—one suggesting a Black character, one Asian and one white. After the interaction, researchers asked participants about their perceptions of the avatar and how they felt about the negotiation and its outcome.

According to existing stereotype research, Black people are generally perceived as lower in both competence and warmth than white and Asian people. Yet participants in the study reported the opposite pattern. Part of that mismatch might be due to an effect known as “expectancy violation,” Perry and Olsen said.

“Stereotypes cause us to form particular expectations, and when some cue or some signal is not consistent with those expectations, then it can cause a more extreme and opposing reaction to the expectation,” Perry explained. “It is still relatively rare to have Black males in positions of negotiation representing corporations—that’s just the demographic reality of that—and there’s some evidence suggesting that when you find people in unexpected roles, then this can cause stereotypes to flip.” Because Black bots are more unusual or unexpected in an AI setting, their mere digital presence may increase perceptions of competence and humanness.

An element of status might also play into this particular expectancy violation. Since this bot represented the owner of a short-term rental property, participants might have inferred that they were speaking with a homeowner. If participants considered Black people less likely to own property, expectancy violation would suggest they might be particularly impressed by this “homeowning” bot.

Olsen and Perry said their next step will be to broaden their study of the demographic factors that could play into AI perception, expanding from race to gender and even educational background. Would consumers be more impressed by a bot avatar wearing a Harvard t-shirt than one rocking state university colors? The field is a rich one, especially as more companies incorporate AI into consumer-facing roles.

While the internet once seemed to provide a level demographic playing field—one in which socioeconomic status, race, gender, age and other dimensions of bias were not so legible—companies are now “strategically reintroducing” certain demographic characteristics, Olsen pointed out. And the implications of that are still unknown. If consumers tend to trust AI more when it is associated with a Black character, will companies start to introduce more Black chatbots? If they do, does that have implications for the treatment of Black people? Does satisfaction with a corporate bot translate into respect for a human being who shares its racialized characteristics?

Whatever the case, Olsen and Perry suspect that as bots become more widely used, human interaction—and human decision-making—is likely to become a luxury amenity rather than a baseline expectation. And that might be concerning.

“The human is still the last bastion of being able to make very creative, unique judgments that can bring a lot of complex dynamics into play and understand all kinds of social contexts,” Olsen said. And as digital and human spaces become more and more interconnected, “it becomes way, way more important that we continue to have flexible, diverse, dynamic human beings somewhere in that food chain.”


The George Washington University is at the forefront of artificial intelligence research, from professors who explore the technology’s use in the classroom to thinkers examining the promise and challenge of an AI-integrated future. In May, GW’s status as a leader in the field was cemented by its selection as co-lead institution of the National Science Foundation’s $20 million Institute for Trustworthy AI in Law and Society (TRAILS). Follow GW Today for more stories on how this technology of the future is part of the university’s present.