GW Study: Offline Events Trigger Spikes in Online Hate Speech

From Black Lives Matter protests to presidential elections, real-world events often lead to increased online bigotry. A new study reveals that the targeting doesn’t stop with any single group.

January 29, 2023


GW researchers found a 250% spike in online racist posts during the summer 2020 Black Lives Matter protests.

In summer 2020, the murder of George Floyd and the Black Lives Matter protests sparked a racial reckoning that brought issues of social justice into the national spotlight.

But, according to a new study by George Washington University researchers, those events also triggered a more troubling trend: dramatic spikes in online hate speech across both fringe and mainstream social platforms. And the targets included groups who had little connection to the actual events.

“More than any other event we studied, the murder of George Floyd and ensuing protests triggered sharp increases in online hate speech,” said Yonatan Lupu, associate professor of political science in the Columbian College of Arts and Sciences (CCAS) and lead author of the study. “In the online communities we study, levels of hate speech have still not returned to where they were before the murder.”

Associate Professor of Political Science Yonatan Lupu co-authored the study.

Titled “Offline Events and Online Hate” and published last week in the journal PLOS ONE, the study revealed that real-world events are often followed by surges in online hate speech. While racist rhetoric constituted the overwhelming majority of bigoted remarks online, the researchers discovered that expressions of hate toward other groups abounded as well.

The research team trained an algorithm to analyze seven types of online hate speech: racism, misogyny, anti-LGBTQ, anti-Semitism, anti-religion, anti-immigrant and xenophobia. The machine learning analysis covered six interconnected online platforms, from mainstream sites like Facebook to websites notorious for hosting offensive content, such as 4Chan and Gab. In all, researchers collected 59 million English-language posts from approximately 1,150 online communities.
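While the article does not detail the classifier itself, the task it describes, automatically tagging posts with one or more of seven hate categories, is a standard multi-label text-classification problem. The sketch below is illustrative only: it assumes a simple scikit-learn pipeline, and the posts and labels are invented placeholders, not the research team’s actual model or data.

```python
# Illustrative sketch only: a minimal multi-label classifier for the seven
# hate-speech categories the study names. The posts and labels below are
# invented placeholders, not data from the study.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

CATEGORIES = [
    "racism", "misogyny", "anti-LGBTQ", "anti-Semitism",
    "anti-religion", "anti-immigrant", "xenophobia",
]

# Placeholder training posts, each tagged with one or more categories.
train_posts = [
    "example post containing racist language",
    "example post containing misogynistic language",
    "example post containing anti-LGBTQ and anti-Semitic language",
    "example post containing anti-religious language",
    "example post containing anti-immigrant and xenophobic language",
]
train_labels = [
    ["racism"],
    ["misogyny"],
    ["anti-LGBTQ", "anti-Semitism"],
    ["anti-religion"],
    ["anti-immigrant", "xenophobia"],
]

# Encode the label sets as a binary indicator matrix (one column per category).
mlb = MultiLabelBinarizer(classes=CATEGORIES)
y = mlb.fit_transform(train_labels)

# TF-IDF features feeding one logistic-regression classifier per category.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),
)
model.fit(train_posts, y)

# Tag a new post with every category whose classifier fires.
new_posts = ["example post to classify"]
for post, row in zip(new_posts, model.predict(new_posts)):
    tags = [cat for cat, flag in zip(mlb.classes_, row) if flag]
    print(post, "->", tags or "no hate categories detected")
```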

Racist posts skyrocketed by 250% during the summer 2020 Black Lives Matter protests. The research team observed that many other types of online hate speech, particularly anti-LGBTQ and anti-Semitic content, also increased, even though those groups had little direct connection to the demonstrations for racial justice. Facebook experienced the largest increase in racist content during the George Floyd demonstrations, outpacing even some unmoderated web forums.

“These events highlight both the importance of content moderation and the challenges of effectively implementing this,” Lupu said. “Hateful users online mobilize quickly and move across platforms easily. Our evidence suggests content moderators on mainstream platforms should be keeping an eye on how discourse and narratives evolve on fringe platforms.”

The study took place between June 2019 and December 2020. Researchers did not collect any user information and did not investigate how online hate speech influences offline events. Although the study has concluded, the research team continues to monitor approximately 2,000 online hate communities, where bigoted content remains rampant at elevated levels.

“Hate online is still very much alive, and our team is continuing to study its evolution and spread,” said Richard Sear, B.S. ’21, a data analyst in the CCAS Department of Physics and co-author of the study. Sear began working with the research team while an engineering undergraduate at GW.

The team’s latest efforts track not only how different types of hate fluctuate in response to real-world events, but also how they respond to attempts by moderated platforms to remove hateful content, known as “deplatforming.” Sear said the researchers hope to inform removal strategies that avoid further escalating hate levels or allowing removed content to re-emerge.

“When you ban someone from a social media platform, they don’t just stop using the internet,” he said. “They regroup on other platforms and figure out how to more effectively dodge content moderation. It’s a constant push-pull. The mainstream platforms are trying to get rid of hate speech online whereas people who are promoting it want to be on mainstream platforms because that’s where they can recruit new members.”

In addition to Lupu and Sear, the research team includes CCAS Professor of Physics Neil F. Johnson; former GW researchers Nicolás Velásquez, Rhys Leahy and Nicholas Johnson Restrepo; and Beth Goldberg of Google. The study was funded by the U.S. Department of Defense, the National Science Foundation and Jigsaw, a unit of Google.