At the Institute for Data, Democracy & Politics, Forestal will research the processes by which “incels” become radicalized online.
The Institute for Data, Democracy & Politics (IDDP) provides research fellowships to support projects by scholars holding a Ph.D. or other terminal degree at any career stage. The fellow’s research should align with IDDP’s mission to help the public, journalists, and policymakers understand digital media’s influence on public dialogue and opinion and to develop sound solutions to disinformation and other ills that arise in digital spaces. This Q&A is one in a GW Today series introducing fellows to the university community.
Jennifer Forestal is Helen Houlahan Rigali Assistant Professor of Political Science at Loyola University Chicago. She is a political theorist whose research interests include platform design and governance, digital culture and democratic theory. She is the author of Designing for Democracy: How to Build Community in Digital Environments (Oxford University Press, 2022) as well as the co-editor of The Wives of Western Philosophy: Gender Politics in Intellectual Labor (Routledge, 2021). Her research has been supported by the National Endowment for the Humanities, the Notre Dame Institute for Advanced Study, and the Faculty Resource Network at New York University.
Q: What is the aim of your research project with IDDP?
A: My project explores processes of online radicalization by examining the transformation of “incels” (short for “involuntarily celibate”) from their origins as a supportive self-help group into the deadly misogynist movement they are today. More specifically, I’m interested in the role of the community’s organizational characteristics—its social structure, platform design, governance decisions and so on—in facilitating that transformation.
Q: In your view, is content moderation an infringement of a person’s First Amendment right?
A: No. The First Amendment of the U.S. Constitution is pretty clear: it prevents Congress from making laws that would inhibit (among other things) the freedom of speech. Content moderation, at least the way we commonly discuss it, is conducted by private companies—and the First Amendment doesn’t say anything about that.
When people talk about free speech and content moderation, however, what I think they’re really concerned about isn’t the technical legal question, but rather what kinds of speech are “allowed” in the public sphere more generally—which is much more complex. Content moderation is a limitation on people’s ability to speak in public on a specific platform. But limitations on speech have always existed, often for reasons that support democracy. The question of what those limits are, and who should enforce them, is a question we still need to figure out (and the answer might—and probably should—vary between platforms!).
Q: What role has technology played in worsening the “misinfodemic”?
A: There can be no doubt that the speed and ease with which information spreads through social media has aided the spread of misinformation. But in addition, the ways that social networks are reconfigured on many prominent social media platforms (like Twitter) also make those networks less able to “correct” misinformation once it’s out there.
Q: What can individuals do to reduce the spread of misinformation?
A: The addition of “friction”—of making it harder to share content online—is helpful in disrupting the speed and ease that make “virality” possible. Pausing before sharing—to verify accuracy, to think about how or why to post, etc.—is one tool individuals have.
“Correcting” is also a useful tool, though it works better in certain circumstances. If users see a friend post misinformation, for example, it can be productive to comment on the post with accurate information instead—though this is most effective if there’s a trusting relationship between the poster and the commenter.
But the spread of misinformation is a collective problem as much as an individual one. If platforms were redesigned to foster more robust communities, we could see more collective—and therefore more effective—responses to misinformation as well.
Q: How have digital platforms aided extremist groups? Do you believe this has a direct correlation to democratic backsliding?
A: Digital platforms provide people with new ways of connecting with one another; extremist groups have done a really good job of taking advantage of that affordance to find one another.
In addition, the same platforms that connect people are also reshaping communities and the social dynamics that go along with them. So, it isn’t just a matter of extremists finding one another on Facebook. It’s also that these platforms are creating spaces where traditional forms of social sanctioning that might signal widespread disapproval of extremist views—like public shaming—don’t have the same effect. One effect of this is that we are seeing these groups not just gather, but also become emboldened in ways they might not have been in traditional off-line spaces.
That said, I wouldn’t lay the blame for democratic backsliding solely, or even mostly, on the shoulders of platforms. Instead, at least in the U.S., I think we need to talk seriously about the role of political leaders and mainstream media elites in failing to protect democratic norms and institutions in the face of a mainstream political party’s determination to undermine them.
Q: Are there any pertinent policy solutions you would like platforms to adopt to better identify and rectify mis- and disinformation?
A: I would like to see platforms take a position on what constitutes mis- and disinformation and regulate it as such. Right now, platforms’ policies are opaque and unevenly enforced—or else nonexistent, which is itself a choice about how to regulate misinformation. With clear guidelines that can be discussed and contested by the user base, it will be easier to hold platforms to account for their decisions.
Q: What are some of the challenges with studying online platforms?
A: They change so quickly! The academic publishing process is often a very slow one; this means that by the time articles are accepted, let alone published, they’re often out of date. In one particular case, I wrote an article about Gawker’s commenting system—only to find that by the time the article was published, the platform no longer existed.
Another challenge is the sheer amount of research on online platforms occurring across disciplines. This means that anyone interested in studying digital platforms must keep up with a lot of work—both traditional academic publications and the really excellent scholarship being produced for public audiences.
Q: What do you recommend social media platforms do to build trust in online environments for both researchers and the general public?
A: There are many things that I’d like to see platforms do. Transparency is a big one; right now, we have a staggering amount of evidence that platforms are making all kinds of decisions with information that would be useful for the public to know as well—but it’s not currently shared. Having access to that information—as researchers, journalists, or even just users—would help create that trust.
But information alone isn’t enough to cultivate the kind of accountability that ultimately leads to trust. For that, I’d also like to see platforms create dedicated spaces where users and others are able to gather to discuss that information so that they can take action in response to it.