Study: Presidential Elections Bring Online Hate Communities Together

As the election approaches, GW researchers detailed how major events strengthen global hate networks online and incite new content around hot-button issues.

October 29, 2024

Demonstrating how online hate networks strengthen around U.S. elections, this illustration shows a subset of Telegram-connected networks before and after Election Day 2020.

A new study led by George Washington University researchers detailed the ways in which the 2020 U.S. election not only incited new hate content in online communities, but also brought those communities closer together around online hate speech.

The research has wider implications for better understanding how the online hate universe multiplies and hardens around local and national events such as elections. It also reveals how smaller, less regulated platforms like Telegram play a key role in that universe by creating and sustaining hate content.

“Politics can be a catalyst for potentially dangerous hate speech. Combine that with the internet, where hate speech thrives, and that’s an alarming scenario,” said Neil Johnson, professor of physics at GW’s Columbian College of Arts and Sciences and an author on the study. “This is why it’s critical to understand exactly how hate at the individual level multiplies to a collective global scale. This research fills in that gap in our understanding of how hate evolves globally around local or national events like elections.”

The study, “How U.S. Presidential elections strengthen global hate networks,” was published in the journal “npj Complexity,” part of the Nature portfolio of journals. It found that the 2020 U.S. election drew approximately 50 million accounts in online hate communities closer together and into closer proximity with the broader mainstream of billions of users.

The research also found the election incited new hate content around specific issues, such as immigration, ethnicity and antisemitism, themes that often align with far-right conspiracy theories. It identified a significant uptick in hate speech targeting these three issues around Nov. 7, 2020, when Joe Biden was declared the winner of the U.S. presidential race. The team also identified a similar surge in anti-immigration content on and after Jan. 6, 2021.

The team, which also included GW researchers Rick Sear and Akshay Verma, developed a powerful new tool to take a closer look at the online world and the hate content spreading there. They built an “online telescope” that maps the online hate universe at an unprecedented scale and resolution.

They found that the social media platform Telegram acts as a central hub for communication and coordination among hate communities. Yet Telegram is often overlooked by U.S. and E.U. regulators, Johnson said.

Moving forward, the researchers suggested that current policies focused only on the most popular platforms, such as Facebook, Twitter or TikTok, will not be effective in curbing hate and other online harms, since different platforms play different roles in the online hate ecosystem.

Additionally, they recommend that anti-hate messaging deployed to combat online hate speech should not be tied exclusively to the event itself, since hate speech around real-world events often incorporates adjacent themes. Messaging targeted only at a U.S. election, for example, may fail to reach audiences spreading hate speech around immigration, ethnicity or antisemitism.

The U.S. Air Force Office of Scientific Research and The John Templeton Foundation funded the research.