Malicious AI Activity Likely to Escalate into Daily Occurrence in 2024

As more than 50 countries including the U.S. gear up for national elections, GW’s Neil Johnson has published the first quantitative scientific analysis of AI misuse by bad actors.

January 23, 2024


A new study led by researchers at the George Washington University's Columbian College of Arts and Sciences predicts that bad-actor artificial intelligence (AI) activity will escalate into a daily occurrence by mid-2024, increasing the threat that it could affect election results in the more than 50 countries set to hold national elections this year.

While analysts have long sounded the alarm on the threat of bad actors using AI to disseminate and amplify disinformation during election seasons, “Controlling bad-actor-AI activity at scale across online battlefields,” published this week in the journal PNAS Nexus, is the first quantitative scientific analysis predicting how bad actors will misuse AI globally.

“Everybody is talking about the dangers of AI, but until our study there was no science of this threat,” said Neil Johnson, lead study author and a professor of physics at GW. “You cannot win a battle without a deep understanding of the battlefield.”

The researchers say the study answers what, where and when AI will be used by bad actors globally, and how this threat can be controlled. Among their findings:

  • Bad actors need only basic Generative Pre-trained Transformer (GPT) AI systems to manipulate and bias information on platforms, rather than more advanced systems such as GPT-3 and GPT-4, which tend to have more guardrails to mitigate malicious activity.
  • A “road network” linking 23 social media platforms, mapped in Johnson’s prior research, gives bad-actor communities direct routes to billions of users worldwide, often without those users’ knowledge.
  • Bad-actor activity driven by AI will become a daily occurrence by summer 2024. To reach this estimate, the researchers used proxy data from two historical, technologically similar incidents involving the manipulation of online electronic information systems: automated algorithm attacks on U.S. financial markets in 2008 and Chinese cyberattacks on U.S. infrastructure in 2013. By analyzing these data sets, they extrapolated how quickly attacks escalated in each chain of events and interpreted that trend in the context of the current pace of AI progress (a simplified illustration of this kind of extrapolation appears after this list).
  • Social media companies should deploy tactics to contain the disinformation rather than trying to remove every piece of content. According to the researchers, that means removing the larger pockets of coordinated activity while tolerating the smaller, isolated actors (sketched in the second example below).
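
The escalation forecast described above rests on extrapolating how often proxy incidents occurred over time. The snippet below is only a rough illustration of that style of extrapolation, not the study's actual data or model: the monthly event counts are invented, the growth is assumed to be exponential, and the "daily" threshold of roughly 30 events per month is a stand-in.

```python
import numpy as np

# Hypothetical proxy data (invented for illustration): monthly counts of
# incidents during an escalating campaign, indexed by months since onset.
months = np.arange(12)
events_per_month = np.array([1, 1, 2, 3, 4, 6, 9, 13, 19, 28, 41, 60])

# Assume exponential growth: log(events) = a + b * t, fit by least squares.
b, a = np.polyfit(months, np.log(events_per_month), 1)

# "Daily occurrence" proxy threshold: ~30 events per month.
threshold = 30.0
t_daily = (np.log(threshold) - a) / b
print(f"Projected month when activity becomes daily: {t_daily:.1f}")
```

The containment recommendation can likewise be sketched as a graph problem. The example below is a hypothetical illustration rather than the paper's algorithm: coordinated "pockets" are modeled as connected components in a coordination graph (using the networkx library), and only components above an assumed size cutoff are removed.

```python
import networkx as nx

# Build a toy coordination graph: nodes are bad-actor accounts/communities,
# edges indicate coordinated behavior. All names and sizes are invented.
G = nx.Graph()
G.add_edges_from([
    ("a1", "a2"), ("a2", "a3"), ("a3", "a4"), ("a1", "a4"),  # a coordinated pocket
    ("b1", "b2"),                                            # a small isolated pair
])
G.add_node("c1")                                             # a lone actor

SIZE_CUTOFF = 3  # assumed threshold separating "pockets" from isolated actors

# Remove every connected component at or above the cutoff; leave the rest.
to_remove = [n for comp in nx.connected_components(G)
             if len(comp) >= SIZE_CUTOFF for n in comp]
G.remove_nodes_from(to_remove)
print(sorted(G.nodes))  # the smaller, isolated actors are left alone
```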
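In both sketches the specific numbers and thresholds are placeholders; the study's quantitative claims come from its own proxy data sets and modeling, not from these toy examples.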

The research was funded by the U.S. Air Force Office of Scientific Research and the Templeton Foundation.