IDDP’s research fellowships support projects by holders of a Ph.D. or other terminal degree at any stage of their careers. The research should align with IDDP’s mission: to help the public, journalists and policymakers understand digital media’s influence on public dialogue and opinion, and to develop sound solutions to disinformation and other ills that arise in these spaces.
This is the first of three Q&As in which GW Today introduces the fellows to the university community.
Josephine Lukito is a first-year assistant professor at the University of Texas at Austin’s School of Journalism and Media. Her research focuses on multi-platform media systems, political disinformation and global communication. She uses computational and quantitative methods to analyze news and social media over time.
Dr. Lukito’s past research analyzed linguistic intergroup bias in international news, differences in news language about 20th- and 21st-century social movements, and social media discourse about the 2016 U.S. presidential election. Her ongoing research focuses on state-sponsored disinformation and cross-platform flows of malicious political content. She has discussed her research in Columbia Journalism Review and on CNN, and her work was cited in Robert Mueller’s 2019 “Report on the Investigation into Russian Interference in the 2016 Presidential Election.”
Dr. Lukito answered questions about her research for GW Today:
Q: Can you tell us the aim of your research project with IDDP?
A: My project explores the relationship between state-sponsored disinformation on social media, news coverage, and state violence. I’m motivated to understand how social media disinformation amplifies political tribalism and dehumanization, and how it justifies the use of violence against social groups, particularly minorities and political dissenters.
Q: What is your favorite platform to study? Why?
A: I like to study a lot of platforms. Given the growing fragmentation of media audiences and the platformization of national media ecologies, I think it’s important for mis/disinformation researchers to understand how political actors coordinate message dissemination across platforms. For example, disinformation can often “trade up the chain” from social media messages to news media articles. The platforms I’m currently studying include Twitter, Facebook, Parler and Reddit.
Q: What do you recommend social media platforms do to build trust in online environments?
A: Transparency from these companies is essential to building trust between a social media platform and its users. For example, Twitter has been quite forthcoming in sharing data from state-sponsored information operations, which has been critical for studying state-sponsored disinformation. However, we know little about how Twitter identifies an information operations campaign, which is troubling.
Q: What can individuals do to reduce the spread of misinformation?
A: Individuals can learn and develop healthy media verification habits to reduce the spread of misinformation. For example, before sharing a news story, people can check to make sure that other outlets have covered that news story.
However, I think it’s also important to recognize that there is no one strategy to prevent misinformation entirely. In fact, there is probably no combination of strategies to completely remove misinformation, as many stories and anecdotes cannot be independently verified.
Q: What is the role of language online and how should it be tailored in online spaces to reduce inflammatory content?
A: Given my Ph.D. minor in English Language & Linguistics and methodological focus on natural language processing, I believe that language is absolutely essential to online political communication. On the internet, minorities and disenfranchised publics can advocate for policies and share their stories using language. However, language can also be used by those in power, such as governments. Given the overwhelming power of governments to engage in violent activities, I’m motivated to understand the language these governments use to justify, validate or encourage violence.
From a research perspective, I do think that language must be carefully studied. Though computational fields such as natural language processing and text-as-data have been useful for scaling up analyses, they are no substitute for reading the language data ourselves, even if only to verify that the computational tool has accurately captured the concept we are operationalizing.
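To make that verification step concrete, here is a minimal Python sketch of the kind of check Dr. Lukito describes: drawing a random sample of texts flagged by a simple dictionary-based measure and reading them by hand. The mini-corpus, keyword list and flag function here are hypothetical, invented purely for illustration, not her actual method or data.

```python
import random

# Hypothetical mini-corpus; in practice these would be tweets or news articles.
documents = [
    "Officials confirm the election results after a full audit.",
    "BREAKING: secret audit proves the election was stolen!",
    "Fact check: no, the audit did not find widespread fraud.",
    "New restaurant audit finds kitchen spotless.",
]

# A naive dictionary-based "fraud claim" measure: flag any text containing
# one of these keywords (an assumption made for this illustration).
KEYWORDS = {"stolen", "fraud", "rigged"}

def flag(text: str) -> bool:
    tokens = {t.strip(".,!?:").lower() for t in text.split()}
    return bool(KEYWORDS & tokens)

flagged = [d for d in documents if flag(d)]

# Draw a random sample of flagged texts for human review, to check whether
# the measure captured the intended concept. Here the fact-check is a
# false positive: it is corrective content, not a fraud claim.
random.seed(42)
for doc in random.sample(flagged, k=min(2, len(flagged))):
    print("REVIEW:", doc)
```

Even a small reviewed sample like this can reveal that a keyword measure is catching corrective or unrelated content rather than the concept the researcher set out to operationalize.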
Q: Are there any pertinent policy solutions you would like platforms to adopt to better identify and rectify mis- and disinformation?
A: I would like to see platforms implement their policies more consistently. As it stands, what is or is not worth taking down seems rather subjective. I think it is also important for platforms to acknowledge which mis/disinformation is easy or difficult to remove from their platform. For example, removing content based on the use of a specific keyword can also remove non-malicious content or corrective information, as the sketch below illustrates.
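Here is a toy Python sketch of that failure mode (the banned keyword and the posts are hypothetical, chosen only for illustration): a naive keyword takedown rule sweeps up a scam post, a warning about the scam, and an honest question alike.

```python
# A toy moderation rule, illustrative only: remove any post containing
# the hypothetical banned keyword "miraclecure".
posts = [
    "Buy MiracleCure now, it heals everything overnight!",             # malicious promotion
    "PSA: 'MiracleCure' is a scam; it heals nothing. Please report.",  # corrective warning
    "Has anyone reviewed miraclecure? Looks suspicious to me.",        # honest question
]

removed = [p for p in posts if "miraclecure" in p.lower()]

# All three posts are removed, including the corrective warning and the
# question, which is exactly the over-removal problem described above.
for p in removed:
    print("REMOVED:", p)
```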