
Moderate Them, Not Us

Sep 25, 2025

by Julie Hawke

A portion of these thoughts contributed to reporting in The Hill on Republican calls for social media crackdowns.

I saw the news about Charlie Kirk’s murder at Utah Valley University first on social media. Many of my own connections are either in Utah or have kids in Utah, and people came online looking for answers before any news outlets were reporting. I watched intently as the conversations shifted from rumors (“I heard something happened at UVU; My daughter’s roommate was there; Is there an active shooter; I’m driving there now to find my kids”) to confirmations: “Charlie Kirk was shot at an event.” Videos spread to confirm what happened and to answer everyone’s question at the time: did he survive? The most graphic of the videos circulated seemingly as an unofficial confirmation that he had not.

As they do, the images and videos continued to spread. As Surina Venkat outlines in her article linked above, Republican officials who had spent years railing against “Big Tech censorship” suddenly became content moderation’s most vocal advocates. There were calls for the content itself to be taken down out of respect for the Kirk family, but also calls to censor users seen to be mocking, justifying, or celebrating Kirk’s death, including at least one call for wide-ranging civil consequences, like taking driver’s licenses away.

I have some initial thoughts about what is happening, and about why this seemingly contradictory move away from First Amendment defense isn’t that surprising.

TL;DR: If we’re serious about addressing the role of social media in political violence and polarization, we need to move beyond the binaries of “moderate more, moderate less” and “moderate them, not us” and start making more productive demands about the incentive structures, algorithmic design choices, and political strategies that make viral violence profitable and politically useful in the first place.

The politics of whose harm counts

Content moderation operates as a classification system, and classification is always an exercise of power. The tradeoff is that while securing classifications for terms like “hate speech” and “misinformation” can provide important resources and policies to protect marginalized communities, these same classifications can be weaponized to invalidate alternative knowledge or perspectives that challenge dominant narratives. What counts as “acceptable” content becomes a mechanism for determining whose voices and experiences are legitimate. This tradeoff is much older and broader than social media.

When platforms receive political pressure — whether from Republican officials about Charlie Kirk videos or from advocacy groups about other content — they’re being asked to adjudicate between competing claims about whose harm counts. I think content moderation emerges as a politicized weapon precisely because there’s actually broad agreement about its necessity. When the rubber meets the road, platforms cannot function without moderation. Calls for “no moderation” quickly collide with the reality that some content is broadly recognized as harmful, and that we agree for the most part about what that is. The political conflict is instead about the perceived uneven application of moderation, not the concept itself. So the loudest voices against moderation are typically those closest to the people whose speech gets restricted, and the loudest voices for it are those closest to the people being harmed.

While the conversation here is about an asymmetry of application in the U.S. political context, uneven moderation by platforms is old news globally, and it is skewed towards countries where platforms have the most human, technical, and linguistic resources. In the U.S., we can demand nuanced decisions about when death-related content serves a public interest and when it causes harm. That is not true for most of the world, where classifications have to be applied with a blunt edge.

Attribution bias in conflict

So, moderation debates are less about principle and more about whose identity feels under attack. That is classic social psychology. People judge the same behavior differently depending on whether it comes from their group or another group. In particular, we have an attribution bias. In-group misdeeds get explained as situational and disconnected (“we had no choice” or “that isn’t us”). Out-group misdeeds get explained as dispositional and cohesive (“they’re violent by nature”). Perceptions, not just material interests, lock groups into cycles of hostility. When a group is classified as dispositionally corrupted, that classification places its members outside the moral community entirely, which justifies extraordinary punishment. So censorship in principle is bad, but if you consider people “evil, sick animals,” as Rep. Clay Higgins does, then censorship, cancelling, and doxxing are absolutely fair game. And Mike Lee can make a sick joke online after the political murders of the Hortmans and keep his job in Congress, while teachers are being fired for less over Kirk.

We know engagement-based algorithms put social psychology on steroids

Social media algorithms privilege content that shocks or provokes, because that’s what keeps people engaged. A large body of research shows that in polarized environments, material that humiliates or vilifies an out-group spreads even faster, and content framed in moral terms travels further still. Both become ways for groups to reinforce identity and cohesion.
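
To make the incentive concrete, here is a minimal, hypothetical sketch of an engagement-ranked feed. Nothing in it reflects any platform’s actual ranking system; the Post fields, the predicted_clicks and predicted_outrage signals, and the engagement_score weighting are all invented for illustration. The point is only that an objective which maximizes predicted engagement will boost provocative material whenever provocation predicts engagement.

```python
# Toy illustration only: a hypothetical engagement-ranked feed,
# not any platform's actual algorithm. All fields and weights are invented.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float   # model's guess at clicks/comments/shares
    predicted_outrage: float  # how provocative the post reads, 0..1

def engagement_score(post: Post) -> float:
    # The objective doesn't "want" outrage; it simply rewards whatever
    # the model predicts will hold attention. If provocation predicts
    # engagement, provocation gets boosted.
    return post.predicted_clicks * (1.0 + post.predicted_outrage)

feed = [
    Post("Local council passes budget", predicted_clicks=2.0, predicted_outrage=0.1),
    Post("THEY are destroying everything", predicted_clicks=2.0, predicted_outrage=0.9),
]

# Rank the feed by predicted engagement: the provocative post comes first
# even though both posts draw the same baseline clicks.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(round(engagement_score(post), 2), post.text)
```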

In moments as big as this, it feels wild to me that we are still having “take down or don’t take down” discussions and leaving it at that. We need structural solutions for structural problems. All of this points to deeper fault lines of polarization and mistrust, and to the financial and social incentives people and platforms have to create them. See: The Algorithmic Management of Polarization and Violence on Social Media.

If we’re serious about addressing the role of social media in political violence and polarization, we need to move beyond the binaries of “moderate more, moderate less” and “moderate them, not us” and start making more productive demands about the incentive structures, algorithmic design choices, and political strategies that make viral violence profitable and politically useful in the first place.

The First Amendment isn’t set in stone?

Debates about social media policies are the visible manifestation here, but they point to a political system under stress, one where traditional mechanisms of legitimacy and control are being dismantled and those in power are experimenting with increasingly authoritarian responses disguised as content moderation. As long as it’s aimed at the “other side.”

Content moderation is necessary. Platforms cannot function without it, and we broadly agree on what constitutes harmful content. The political theater around moderation obscures this basic reality. These fights aren’t really about moderation policy; they’re about who gets to wield classificatory power, and when. We need more focus on the structural changes that matter: algorithmic design that doesn’t reward outrage, economic models that don’t profit from division, and political incentives that don’t make viral violence useful for fundraising and mobilization.

I believe that our inability to confront structural incentives leaves a vacuum easily filled by authoritarian logics cloaked in the language of public safety and platform responsibility. That’s how you get calls for what would once have been considered extreme punitive measures, framed not as deviations from democratic First Amendment values but as their supposed defense.

Getting caught up in an idea of the ‘other side’ is an illusion that keeps us from building, or maybe in this case defending, the systems we all actually need.
