Q & A: IDDP Fellow Jeffrey Lees

At the Institute for Data, Democracy & Politics, Lees will examine the spread of election-related misinformation.

July 18, 2022


IDDP fellow Jeffrey Lees.

The Institute for Data, Democracy & Politics (IDDP) provides research fellowships to support projects by Ph.D. holders or others with terminal degrees at any career stage. The fellow’s research should align with IDDP’s mission to help the public, journalists and policymakers understand digital media’s influence on public dialogue and opinion and to develop sound solutions to disinformation and other ills that arise in digital spaces. This Q & A is one in a GW Today series introducing fellows to the university community.

Jeffrey Lees is a visiting assistant professor at Clemson University’s Media Forensics Hub and an incoming associate research scholar at Princeton University’s Andlinger Center for Energy and the Environment. He researches the psychology of inaccurate beliefs and their consequences for political, organizational and social life. He was recognized as an Emerging Scholar by the Center for the Science of Moral Understanding in 2020, and his research has been published in peer-reviewed journals including Nature Human Behaviour and the Proceedings of the National Academy of Sciences, and featured in Harvard Business Review.

Q: What is the aim of your research project with IDDP?
A: My project seeks to understand the spread of election-related misinformation, namely misinformation that challenges the legitimacy of democratic processes. The project tracks how this misinformation spreads through social networks on Twitter and what characteristics of people predict posting, sharing and engaging with it. For example, does ideological extremity predict someone’s likelihood of spreading election-related misinformation? Do people share misinformation because of heightened emotional states, like anger? And how do the source and the content of a piece of misinformation each matter for its successful spread on Twitter?

Q: In your view, is content moderation an infringement of a person’s First Amendment right?
A: I think content moderation is necessary to mitigate harm, at the very least. However, my concern with how content moderation is handled on most platforms is that it is often completely opaque. The guiding principles and intent of moderation are vague at best, the criteria for review are unclear if stated at all, the individuals (or algorithms) conducting moderation are unknown, and the outcomes of moderation decisions are rarely transparent. Community members rarely have any influence over these rules and processes. And often the motives of the moderators (i.e., the profit motive) are at odds with stated goals such as reducing harm. A more ethical approach to content moderation needs to involve significantly greater transparency and input from the community being moderated.

Q: What role has technology played in worsening the “misinfodemic”?
A: Social media didn’t create political polarization, but it probably exacerbates it. Technology didn’t create con artists who hawk fake medicine, but it allows them to reach larger audiences. Technology didn’t create violent extremists, but it helps them coordinate. It’s tempting to say that the core affordances of technology and social media are the exacerbating factors here, but I’m not convinced that’s the case. Set aside the fact that the misinfodemic is about objectively false information, and its characteristics look a lot like those of successful marketing campaigns. That observation suggests it isn’t technology per se that’s causing these issues, but rather the incentive systems set up by those who control the technologies. Changing incentives, not technological affordances, will do more to address the misinfodemic.

Q: What can individuals do to reduce the spread of misinformation?
A: Honestly, I think the best thing people can do is be more thoughtful. There’s a large body of research showing that people accidentally share misinformation because they thoughtlessly click “share” without asking, “Is this accurate?” Simply prompting people to consider accuracy is very effective at getting them to discern fake news from real news. This same research shows that, in general, there’s a disconnect between what people are willing to share and what they actually believe to be true. Recognizing that we all have this proclivity to hit “share” without considering the truthfulness of what we’re sharing will help us all be more thoughtful.

Q: How have digital platforms aided extremist groups? Do you believe this also has a direct correlation to democratic backsliding?
A: Digital platforms have made it easy for extremists and non-extremists alike to coordinate. Democratic backsliding (in the United States at least) has deeper historical roots, going back to opposition to the civil rights movement. The actors today who are doing everything in their power to subvert our democratic institutions are a direct outgrowth of that reactionary movement against the expansion of democratic rights and institutions. There’s a reason the folks who stormed the Capitol on January 6 carried Confederate flags, not flags with Facebook’s logo. Social media didn’t create their animosity toward democracy; the possibility that the United States might evolve from a white supremacist caste system into an egalitarian, multiethnic society did.

Q: Are there any pertinent policy solutions you would like platforms to adopt to better identify and rectify misinformation?
A: Honestly, regulating producers is going to be a lot more effective than regulating the platforms where misinformation spreads naturally. I’m not saying that’s easy, or that I have the solution for doing so—I don’t—but regulating social media in the hopes of stopping misinformation seems like regulating door makers in order to stop home invasions. It won’t hurt, but most houses have lots of windows.

Q: What are some of the challenges with studying online platforms?
A: Lack of transparency is the biggest problem. Most social media platforms are uninterested in or hostile toward researchers, which creates massive barriers to any sort of systematic research on what is happening on those platforms.