Q & A: IDDP Fellow Enrique Armijo

At the Institute for Data, Democracy & Politics, Armijo will examine the theories underpinning legal protection of disinformation.

July 6, 2022

 

IDDP fellow Enrique Armijo.

The Institute for Data, Democracy & Politics’ (IDDP) research fellowships support projects by Ph.D. or other terminal degree holders at any stage of their career. The research should align with IDDP’s mission to help the public, journalists and policymakers understand digital media’s influence on public dialogue and opinion and to develop sound solutions to disinformation and other ills that arise in these spaces. This Q & A is part of a series in which GW Today introduces the fellows to the university community.

Enrique Armijo is a professor of law and an affiliate fellow of the Yale Law School Information Society Project and the UNC-Chapel Hill Center for Information, Technology and Public Life. He teaches and researches in the areas of the First Amendment, constitutional law, torts, administrative law, media and internet law and international freedom of expression. Armijo’s current scholarship addresses the interaction between new technologies and free speech. His work has been cited by the Federal Communications Commission, the Federal Election Commission and other agencies as well as in testimony before the U.S. Senate Committee on Governmental Affairs.

Q: What is the aim of your research project with IDDP?
A: My project seeks to examine the theoretical basis for legal protection of disinformation. First Amendment theory has long favored counter-speech over government intervention as the primary remedy for private-speaker misinformation. That preference goes back to the conviction that false speech should be counteracted with true speech, and it underpins the marketplace-of-ideas metaphor that buttresses many of our laws, policies and norms around freedom of expression.

But in modern speech markets, not all lies are met with truth; lies are also met with other lies. These “counter-lies” can increase the harm of the initial lie, and that harm often falls unequally on underrepresented speakers and their views. The project will examine how and whether “counter-lies” are deserving of protection under our governing theoretical models of free speech, and how to counteract the harms they can cause in the larger project of knowledge production.

Q: In your view, is content moderation an infringement of a person’s First Amendment right?
A: When we talk about First Amendment rights in the context of content moderation, the proper starting point is the rights of platforms, not users. That may seem paradoxical, because in this context the platforms have so much more substantive and procedural power—i.e., power to both define and exclude—than users do. But there is no real question that platforms can and should exercise that power to shape the informational environment in ways they see fit, as informed by their own judgments about the kinds of speech and speakers they want to amplify and associate themselves with.

Platform speech rights around content moderation also involve more than just removal or deprioritization of content that the platform believes is materially false or harmful. For example, there can be content that might otherwise be infringing if posted by a private person but in the public interest if posted by a public figure, such as a political ad that makes assertions that are dubious but not imminently harmful. In such a circumstance, platforms can label that content instead of removing it.

Q: How have digital platforms aided extremist groups? Do you believe this also has a direct correlation to democratic backsliding?
A: I don’t know that anyone can credibly argue that there are more extremists now than there were before social media. But it is certainly true that platforms have made it easier for extremists to find other extremists, both to commiserate and to plan bad acts. This is an unfortunate but necessary consequence of platforms’ erasure of the need for shared physical space as part of social interaction. Many of the same features of the online environment that enable extremists to find one another also help minorities and gender-nonconforming people find acceptance and share their views with like-minded allies across the Internet.

But platforms can and should do more to prevent hate groups, hate speech and intentional disinformation from being spread online. Their processes in doing so should be informed, predictable and established in advance, and the default position of any platform should be to leave content up in the absence of emergency. These procedures will never be perfect, because perfection in content moderation is impossible. But nothing in the law obligates platforms to host speech that they view as harmful to other users or society at large.

Q: Should governments be involved in regulating online spaces?
A: The best thing governments can do in regulating online spaces is to create the conditions by which more spaces can begin and develop. Section 230 of the Communications Decency Act has been critical in this regard—it ensures that new platforms and websites can host user-generated content without fear of liability for that content. Without Section 230, the Internet of user-generated content that we know today would likely cease to exist.

The next set of conflicts in online speech will take place across borders. Proposed legislation and regulation like the EU’s Digital Services Act seek to adopt “notice-and-takedown”-based liability rules for user content in ways that will encourage platforms to take down much more of that content. So far international bodies have respected longstanding bars on imposing general monitoring obligations on platforms, but that may change as well. We should not take for granted that in ten years the Internet will look largely the same as it does now.

Additionally, because platforms are private companies, they often argue that much of the information about their operations and their use of user data is proprietary—in the view of the platforms, any decision to share it with users or researchers is an act of grace, not obligation. And ironically, platforms sometimes assert the privacy rights of their users to justify their refusals to provide requested data to researchers.