Q & A: IDDP Fellow Madeline Jalbert

At the Institute for Data, Democracy & Politics, Jalbert will study how our contexts and experiences impact the way we judge truth and anticipate risk.

August 16, 2022

Madeline Jalbert

IDDP's research fellowships support projects by Ph.D. or other terminal degree holders at any stage of their career. The research should align with IDDP’s mission to help the public, journalists and policymakers understand digital media’s influence on public dialogue and opinion and to develop sound solutions to disinformation and other ills that arise in these spaces. This Q & A is part of a series in which GW Today introduces the fellows to the university community.

Madeline Jalbert is a postdoctoral researcher in the Information School at the University of Washington. She studies how context and subjective experiences influence memory, judgment and decision-making. Her work focuses on factors impacting judgments of truth and risk, as well as how these judgments play out in naturalistic contexts, with the goal of developing effective strategies to prevent and correct the spread of misinformation. Her project will be conducted with University of Washington political science doctoral student Morgan Walk.


Q: What is the aim of your research project with IDDP?

A: As it has elsewhere, misinformation in sub-Saharan Africa has spread alongside the growing use of modern portable communication devices such as cell phones. While awareness of this “misinfodemic” and the resulting spread of political misinformation is high, the impact of these narratives on electoral outcomes remains understudied, particularly in the Global South.

The aim of our research project is to study the interplay between misinformation narratives and perceptions of election integrity over the course of Kenya’s 2022 general election. We will conduct large-scale surveys before and after the election that include measures of familiarity with and belief in misinformation narratives, trust in the electoral process, voting intent and behavior, and other key metrics.

Our surveys will be completed by the same individuals both before and after the election. This will allow us to investigate how perceptions of election integrity and individual endorsement of misinformation narratives function in the build-up to an election, as well as whether, and how, they change afterward. Specifically, we’ll investigate key questions including:

  • Where are people being exposed to different misinformation narratives?
  • How do familiarity with, and belief in, these misinformation narratives impact voting behavior, measures of trust and the likelihood of considering engaging in violence?
  • How do these perceptions change as a result of the election outcome? Do people double down in belief in misinformation narratives when their party loses, or do they update their beliefs to be more in line with those of the prevailing political party?


Q: What role has technology played in worsening the “misinfodemic”?

A: Technology has lowered the barriers to sharing information with large audiences. In the past, people primarily got their news through newspapers, television and radio. These sources typically have a process of vetting and verifying information before it is shared, and they limit who can share information. Now, any group or individual can easily create and share content without having it checked by others. Concerningly, people often don’t even check the things they share themselves. It’s common for people to share an article without even clicking past the headline.

Technology has also worsened the “misinfodemic” through algorithms that prioritize showing users content that has received a lot of engagement. The information that gets a lot of engagement is not always the highest quality; it’s usually the most emotionally charged and polarizing. When these more controversial posts are prioritized over more factual and credible information, it creates a problematic information environment for the consumer to wade through: The world seems more polarized and unknowable than it really is, expert input is lost, and practical, usable information becomes buried by sensationalist content.

Q: What can individuals do to reduce the spread of misinformation?

A: As individuals, one of the simplest and most effective things we can do is consider whether information is true before we like or share it. Our default is to assume that information is true, especially when it comes from a source that we trust, when it is consistent with our beliefs or when it aligns with our worldviews and political identities. Taking the extra minute to slow down, read a full article (rather than just a headline), and actually consider how we know something is true can help us catch false information before we spread it to others.

Q: Elections are approaching both domestically and around the world. What measures would you like to see digital platforms take ahead of them to ensure a healthy democracy?

A: My research background is in cognitive psychology. From my work and the work of others in my field, we know that the best way to prevent belief in misinformation is to stop people from being exposed to it in the first place. Once people have been exposed to misinformation, it can become very difficult to fully correct. And even if we can correct specific pieces of misinformation, our past exposure can still have an impact on our broader attitudes and impressions. Given this, the measures I would like to see digital platforms enact focus on removing or deprioritizing non-credible information and misinformation and, instead, prioritizing quality information.

Here are some examples of what this could look like:

  • A small number of people are responsible for an outsized share of the misinformation that spreads. Platforms can remove these repeat misinformation spreaders. In an ideal world, this would be done in a coordinated effort across platforms so these spreaders don’t simply hop from one platform to another.
  • Platforms can continue to work on the best ways to remove or hide problematic information quickly before it has a chance to spread.

Digital platforms can also adjust their algorithms to promote quality information rather than controversial content.