Q & A: IDDP Fellow Rachel Moran

At the Institute for Data, Democracy & Politics, Dr. Moran will study trust in digital information environments and the role of trust in spreading mis- and disinformation.

August 20, 2021

Rachel Moran. (Courtesy Rachel Moran.)

IDDP’s research fellowships support projects by Ph.D. or other terminal degree holders at any stage of their careers. The research should align with IDDP’s mission to help the public, journalists and policymakers understand digital media’s influence on public dialogue and opinion and to develop sound solutions to disinformation and other ills that arise in these spaces.

This is the third of three Q & As in which GW Today introduces the fellows to the university community.

Rachel Moran is a postdoctoral researcher at the University of Washington’s Center for an Informed Public and a faculty fellow at the University of North Carolina’s Center for Information, Technology, and Public Life. She was the 2018-19 Oakley Endowed Fellow at the University of Southern California, where she earned her Ph.D., and received the 2018 Charles Benton Junior Scholar Award. Her research explores the role of trust in digital information environments and is particularly concerned with how trust is implicated in the spread of mis- and disinformation.

Q: What is the aim of your research project with IDDP?

A: I am exploring the spread of misinformation in Vietnamese American communities with my talented UW Ph.D. student, Sarah Nguyễn. Our research will examine how misinformation spread in Vietnamese across social media platforms during the 2020 presidential election. In addition to capturing which misinformation narratives spread and the platforms they spread on, we will be working with community organizations that are fighting misinformation, such as VietFactCheck. We hope to better understand how misinformation is affecting Vietnamese communities across the United States and how we can build a healthier information ecosystem that curbs the spread of non-English-language misinformation and instead elevates accurate, community-grounded political and civic information.

Q: What is your favorite platform to study?

A: My recent research has focused more on visual platforms like Instagram and YouTube. I’m really interested in ideas around trust—how people build relationships of trust with media and information sources, and how those relationships can be weaponized to spread mis- and disinformation. Visual platforms are an especially interesting site for these relationships of trust for a couple of reasons. First, we so often think “seeing is believing,” and so we tend to trust (or at least not question as much) the things we see photo or video “evidence” of. Second, we build quite intimate parasocial relationships with complete strangers on these visual platforms because they are spaces for personal sharing. I think both of these elements (and more!) make Instagram and YouTube particularly vulnerable to the spread of bad information.

Q: What do you recommend social media platforms do to build trust in online environments?

A: This is a big question and one that threads through all of my research. The frank answer is that the profit logic of social media platforms is not conducive to building a trusted and trustworthy environment. Realistically speaking, I do believe that platforms should be making adjustments, such as adding friction to sharing through extra steps or fact-check flags, to at least try to move incrementally toward a healthier digital environment.

Q: What can individuals do to reduce the spread of misinformation?

A: We have a phrase here at the Center for an Informed Public that I think is pretty apt—“Think more, share less.” Particularly if you read something that elicits an emotional reaction, it’s good to step back and rethink hitting that share button.

Q: What is the role of language online, and how should it be handled in online spaces to reduce inflammatory content?

A: Our preliminary research into misinformation in Vietnamese highlights the sheer inaction of social media platforms in moderating online communities that do not converse in English. During the 2020 election period and the ongoing COVID-19 pandemic, we’ve seen Facebook, Twitter and others put information flags on posts that mention “COVID-19” or “vaccines” or election-related terms. These flags are designed to offer more context to users and to reduce the spread of dangerous misinformation related to these topics. But they rarely, if ever, come up on the same kinds of posts in Vietnamese and other non-English languages. Or, if a post is flagged, the warning message is in English and redirects to another authoritative English-language site, such as the CDC’s website or the platform’s own “election integrity” page. This is not only useless for Vietnamese-speaking users but also allows misinformation in Vietnamese to flourish almost unchecked. We need social media companies to invest their resources in better understanding how non-English-language misinformation is proliferating on their platforms and how marginalized and immigrant groups in the United States are becoming targets for misinformation.

Q: Should governments be involved in regulating online spaces?

A: Yes. So long as online spaces continue to act as public spaces, and given that being “online” is a necessary part of everyday life, I do believe governments should play a role in ensuring that their constituents have access to affordable internet, have control over their data and privacy online, and are assured physical and mental safety in both the offline and online worlds. I can’t say I will ever have a clear idea of what this looks like in terms of federal, state or even local legislation—in part because of the lobbying money that runs from Silicon Valley to politicians at every level of government. The problem of misinformation is as much a political one as an informational one.