Digital Harms in Conflict

May 15, 2025

A framework for mediators and peacebuilders to understand and address different categories of digital harm, and what we know about their impact on conflict.

Over the past weeks, under commission from the UK Foreign, Commonwealth and Development Office’s Migration and Conflict Directorate, we have been conducting a global review into how people and organizations working in the peacebuilding and mediation fields identify, track, and understand digital harms and their impact on conflict in an evidence-based way. The final report is available here, or read below for the main insights. Have thoughts on this report? Join us at a launch event on May 29.

The technology-conflict nexus

As digital technologies become ubiquitous, governments supporting the fields of peacebuilding and mediation need to consider the impact of digital harms on conflict, just as they consider all other aspects of human interaction and power dynamics that impact the prospects of peace.

Despite this, the technology-conflict nexus may often seem overwhelming to understand or engage with, leaving many key actors lacking the knowledge or capacity to effectively respond. To clarify what digital harms in conflict encompass, we must first define three terms:

  • Harm in conflict is anything that creates or increases divisions between groups (affective polarization), eroding social cohesion and potentially leading to violence.
  • Digital affordances are what digital technologies enable one to do, that is, the different ways to produce and distribute information.
  • Digital harm is attributable or incidental harm that results from digital affordances, including the contextual impact of digital affordances.

Digital harm in conflict is the production and distribution of information in a digital space that creates or increases divisions between people or groups.

Digital harm in conflict results both from the deliberate use of digital technologies by a conflict party and from the unintended impact of technology design on division and polarization.

From this working definition of digital harm in conflict, and building on global best practices, we derive a framework to help peacebuilding and mediation practitioners mainstream the understanding, analysis, and response to digital harms in conflict.

Five digital affordances to understand digital harm in conflict

Practitioners need to distinguish five digital affordances that inflict harm in conflict, namely:

  1. Offensive cyber operations for the extraction and release of sensitive data used to harass, bully and intimidate an individual or group.
  2. Network control, including digital surveillance that targets individuals or groups and network shutdowns used to silence entire populations.
  3. Information deception & manipulation through the production of divisive content.
  4. Manipulative influence operations to further the spread of divisive content.
  5. Algorithmic amplification that inadvertently spreads divisive content.

Connecting analysis of offline harm to digital harms

Practitioners should first assess relevant harms that can be impacted by digital affordances, and the structural factors that enable digital affordances, before investigating specific digital harms. A traditional conflict analysis — looking at the political, economic, and socio-cultural factors of a conflict — is the starting point to assess the main areas of harm that interact with digital affordances, namely:

  • Direct calls for division, incitement to hatred and violence, or recruitment to armed groups;
  • Areas of concern regarding erosion of social cohesion;
  • Ongoing military operations;
  • Areas of direct or structural discrimination that can aggravate legitimate grievances and/or be manipulated to further division.

Building on an analysis of these areas of harm offline, practitioners should consider investigating digital affordances that can inflict or amplify these harms in digital space, as follows:

  1. Engage cybersecurity firms and government counterparts to understand offensive cyber operations, network control, and digital surveillance.
  2. Conduct digital forensics and narrative digital media analysis to investigate the production and distribution of divisive content.
  3. Undertake impact assessments of the information ecosystem to investigate the algorithmic amplification of divisive content.

Cybermediation and digital peacebuilding responses to address digital harm

With a solid conflict analysis that integrates digital harms, practitioners will be better placed to systematically address digital harms as part of conflict prevention, peacebuilding or mediation programming. While new practices are constantly emerging, a review of global best practices reveals two main broad areas of programming.

1. Protecting from offensive cyber operations and defending against abuses of networks

Both protecting from offensive cyber operations and defending against abuses of networks require a combination of digital literacy programming to help those impacted by these harms and advocacy or diplomacy efforts to change norms around the deployment of these digital harms in conflict.

Concretely, to protect from offensive cyber operations aimed at extracting sensitive data that can be used to harass, bully and intimidate an individual or group, educate users in cybersecurity and deploy technological solutions that can help prevent unauthorized data extraction. To defend against abuses of networks for digital surveillance that targets individuals or groups, advocacy and diplomacy for robust legal protections and adherence to international norms are essential and can deter unjustified network control. Where such efforts do not bear fruit and civilian populations are being harmed, support the use of circumvention tools and decentralized networks.

2. Addressing the impact of divisive content on conflict

In order to effectively address the complex ways divisive content impacts conflict, it is important to distinguish between approaches that address each digital affordance, namely the production of deceptive & manipulative information, the deliberate spread of such information, and the algorithmic amplification of such information.

The production of deceptive and manipulative information is primarily a regulatory issue, addressed through both national legislation and international norms. Concretely, diplomatic efforts should focus on creating frameworks that regulate information campaigns employing AI-generated content.

Addressing the deliberate spread of deceptive and manipulative information is the focus of programs that employ either content moderation or counter-influence approaches. Counter-influence approaches include debunking, alternative narratives, bystander and protection initiatives, digital literacy, and codes of conduct.

To many peacebuilders and mediators, content moderation and counter-influence strategies can feel like an uphill battle against a torrent of divisive content in a polarized information ecosystem primed to believe and promote digital harm. Where algorithmic amplification makes divisive content pervasive, more systemic interventions can begin to build resilience to divisions caused by digital harm. Such systemic approaches fall into two broad categories. First, working with companies to change the design of platforms in ways that reduce incentives to produce and distribute divisive content, and dampen the ways that platform design can enable mass harassment and manipulation. Second, implementing digital peacebuilding programs that seek to change how content is curated and provide opportunities for connection across divisions. While evidence on digital peacebuilding approaches is still thin, the theory of change is that supporting a spectrum of activities that transform divisive behaviors and promote sustained social healing online shifts norms in ways that offer sustained resilience to digital harms in conflict.

Towards mainstreaming digital harms in conflict prevention

Digital harms are present in most conflict settings. To integrate these harms among the range of conflict dynamics relevant to their work, peacebuilders and mediators need to become more nuanced in their understanding of the different digital affordances that can inflict harm in conflict, and differentiate between them systematically. Conflict analysis should integrate this detailed understanding of digital harms as a matter of course, to assess where conflict prevention, peacebuilding or mediation programming needs to address digital harms systematically, not as an add-on or afterthought.



Written by Build Up

Build Up transforms conflict in the digital age. Our approach combines peacebuilding, participation and technology.