Helena Puig Larrauri proposes a practical framework for mediators and peacebuilders to identify divisive behaviors observable on social media that are most relevant to their work.
One of the main reasons mediators and peacebuilders are interested in social media analysis is to understand whether there is evidence of divisive behavior online that is affecting the prospects for peace.
Mediators need this evidence of relevant, divisive behavior online in order to determine what aspects of social media are “mediatable”: for example, whether a social media code of conduct, outlining principles that conflict parties sign up to, is appropriate, or whether a clause on the conduct of conflict parties on social media should be included in other agreements being negotiated (e.g. ceasefire agreements). Mediation teams may also need this evidence to monitor any agreed code of conduct or social media clause, or to take proactive measures to protect a peace process from disruption via social media.
Peacebuilders need this evidence of relevant, divisive behavior online in order to determine what aspects of social media need to be addressed through dialogue or trust-building, for example whether a dialogue with religious leaders is needed to address divisive discourse shared on social media, or whether a narrative change campaign is needed to address intergroup tensions expressed online.
That’s a lot of possible behaviors… so at Build Up, we’re using a series of questions to help arrive at concrete signals we can look for on social media. This is useful for any kind of social media listening — whether you’re browsing X, using CrowdTangle, or scraping data to later classify in some automated way. Whatever your qualitative or quantitative analysis method, here are some questions that can help at the start.
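As a toy illustration of what turning these questions into “concrete signals” might look like in a quantitative workflow, here is a minimal keyword-matching sketch over a batch of collected posts. The lexicon and post texts are invented placeholders; a real analysis would use context-specific terms developed with local analysts, and likely a proper classification step rather than substring matching.

```python
from collections import Counter

# Hypothetical signal lexicon: each signal maps to placeholder keywords.
# In real work these would be context-specific terms chosen with local analysts.
SIGNAL_KEYWORDS = {
    "dehumanizing_language": ["vermin", "parasites"],
    "outgroup_blame": ["they are destroying", "them and their"],
}

def find_signals(posts):
    """Count which divisive-behavior signals appear in a batch of post texts."""
    counts = Counter()
    for post in posts:
        text = post.lower()
        for signal, keywords in SIGNAL_KEYWORDS.items():
            if any(kw in text for kw in keywords):
                counts[signal] += 1
    return counts

posts = [
    "They are destroying everything we built.",
    "Lovely weather for the harvest festival!",
]
print(find_signals(posts))  # Counter({'outgroup_blame': 1})
```

Even a crude pass like this can help triage a large dataset before slower qualitative analysis, but keyword counts alone say nothing about intent, which is where the next questions come in.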
Question 1: what kind of behaviors are likely to impact the prospects of peace?
The divisive behaviors on social media likely to impact the prospects of peace are those that either impact or signal affective polarization. Why the emphasis on affective? Polarization analysis is generally bifurcated into issue-based and relationship-based (or identity-based) polarization:
- Issue-based polarization refers to the ideological distance between parties on policy areas. It is connected to constructive conflict, the kind that allows differences of opinion to co-exist in society and enables the democratic deliberation that can lead to important transformations in society.
- Relationship- or identity-based polarization is more precisely referred to as affective polarization, meaning the increasing dislike, distrust, and animosity towards those from other parties or groups. It is a dynamic process intertwined with conflict escalation, in which a self-reinforcing spiral separates ideologies or identity groups into increasingly distanced and aggregated adversaries.
In other words, where issue-based polarization leads to the constructive conflict necessary for a peaceful society, affective polarization leads to destructive conflict that can undermine the prospects for peace. Deliberate tactics impact affective polarization; contextual changes signal it. This is why we define divisive behaviors on social media as those which impact or signal affective polarization, and not differences of opinion that impact or signal issue-based polarization.
Question 2: are we looking for deliberate tactics or contextual changes?
Divisive behaviors can come about because someone deliberately wants to act in a divisive way or because the context of social media is influencing behaviors to be more divisive. It’s useful to distinguish between:
- Deliberate tactics that some actors use to harass or manipulate others on social media. These are things that people do on social media. These tactics are often amplified by algorithms, but they have an attributable source.
- Contextual changes that result from the amplification of harassment and manipulation on social media. These are things that happen to people on social media. These contextual changes are network effects that result from the interaction of platform design with human psychology; they cannot be attributed to a single source, but they do need to be understood.
Question 3: are deliberate tactics and contextual changes distinct?
Deliberate tactics and contextual changes interact. For example, disinformation is a deliberate tactic employed by actors wishing to sow division through the production of false or misleading information.
- Misinformation happens when people unwittingly spread disinformation, without the intention to deceive. Misinformation spreads when people’s interests are so polarized that they will believe a piece of information largely based on who shares it: the virality of disinformation is a contextual change connected to changes in what people are interested in.
- Coordinated harassment is a deliberate tactic employed by actors wishing to sow division by discrediting an individual or group. This tactical discrediting makes it easier for ‘clapbacks’ and outrageous claims to go viral: when groups are discredited, the norms according to which people behave on social media change.
Question 4: who is best placed to address deliberate tactics? And contextual changes?
Despite this interaction, it’s useful to think about deliberate tactics and contextual changes separately because we can think of distinct strategies to address each:
- Deliberate tactics can be more easily attributed, and addressing them is about asking people or groups to stop doing something on social media. In this sense, it is more about an agreement to a “code of conduct” or a social media “ceasefire”; addressing deliberate tactics may be of more interest to people acting as mediators.
- Contextual changes are not easily attributable and may require more proactive strategies to reverse pervasive behaviors or beliefs, such as narrative change campaigns or dialogue-based initiatives. Addressing contextual changes may be of more interest to people working on broader social cohesion or peacebuilding programming.
Question 5: what are types of deliberate tactics? what are types of contextual changes?
There are probably lots of answers out there to this question; here is what we use internally to categorise these types.

Deliberate tactics:
- Production of harmful content
- Coordinated harassment of individuals or institutions
- Coordinated harassment of identity groups
- Inflation of positions along a dividing line

Contextual changes:
- Attitude polarization: perceptual shifts towards stereotypes, dehumanization, deindividuation and vilification of the “other”
- Interaction polarization: reduction in the quantity and deterioration in the quality of meaningful communication across groups
- Interest polarization: specific issues of contention give way to more general, simplified and unspecified claims
- Affiliation polarization: aggregation of actors from formerly neutral, adjacent, or cross-cutting positions into a limited number of adversarial groups with increasing in-group cohesion
- Norm polarization: formation of new combative norms of interaction that displace empathy or curiosity and reify the erosion of trust between people
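If you are tagging posts or incidents against this taxonomy, whether manually or in an automated pipeline, it can help to pin the categories down in a small data structure. The sketch below mirrors the lists above; the short tag names and the idea of a tagging workflow are our illustrative assumptions, not part of the framework itself.

```python
# Sketch of the taxonomy above as a tagging structure for annotation work.
# Tag names are invented shorthands; descriptions paraphrase the lists above.
DELIBERATE_TACTICS = {
    "harmful_content": "Production of harmful content",
    "harassment_individuals": "Coordinated harassment of individuals or institutions",
    "harassment_groups": "Coordinated harassment of identity groups",
    "position_inflation": "Inflation of positions along a dividing line",
}

CONTEXTUAL_CHANGES = {
    "attitude": "Perceptual shifts towards stereotypes and vilification of the 'other'",
    "interaction": "Less frequent, lower-quality communication across groups",
    "interest": "Specific contentions give way to general, simplified claims",
    "affiliation": "Aggregation of actors into a few adversarial groups",
    "norm": "New combative norms that displace empathy and erode trust",
}

def classify(tag: str) -> str:
    """Say whether an annotation tag names a deliberate tactic or a contextual change."""
    if tag in DELIBERATE_TACTICS:
        return "deliberate tactic"
    if tag in CONTEXTUAL_CHANGES:
        return "contextual change"
    raise KeyError(f"unknown tag: {tag}")
```

A structure like this also keeps Question 4 operational: tags that classify as deliberate tactics point towards mediation-style responses, while contextual changes point towards broader peacebuilding programming.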
Question 6: how do I find signals on social media of a particular divisive behavior?
That answer isn’t going to fit in a blogpost. But we’ve made our internal manual publicly available here, and if you have any questions about it, please drop us a note in the comments section below. Thanks for reading, and we hope this is useful in your work!