AI Intimacy & Mediation
by Helena Puig Larrauri
Applications of artificial intelligence to mediation are all in one way or another focused on sensemaking of the opinions or positions of conflict parties, key stakeholders or the general public on matters related to a peace process. This includes making it easier to interpret information coming from consultative processes, understanding the positions of parties from previous statements, interacting with a digital twin to play a conflict forward, or finding bridge-building positions.
With caveats about the danger of biases and the importance of not losing sight of the human element of mediation, these applications are important advances that will continue to improve and bring value to the mediation field as AI models get more sophisticated. But I was recently introduced to a different kind of value that AI might bring, an intimacy dividend, that points to a new application of AI to mediation that (as far as I know) we haven’t seen any experimentation with yet.
The AI intimacy dividend
The idea of AI intimacy comes from Shuwei Fang, who wrote this reflection on how AI might transform news media consumption. Starting from the growing phenomenon of AI therapy, she identifies an intimacy dividend from AI:
Value is created by this new and fascinating willingness people seem to have to open up to conversational AI interfaces in ways not possible before, and it’s about to transform how we engage with news and information. — Shuwei Fang
Her core premise is that where much attention is being paid to how AI is transforming content production and dissemination (often in pernicious ways that favour misinformation, polarizing content, and other trust-busting information disorders), there is also a potential transformation to content consumption.
We are increasingly interacting with content via AI agents. While social media has dominated, our consumption has been public (measured in likes, comments and shares), recorded for posterity, and often judgemental or even retaliatory. If AI agents become the main way we interact with content, then we may return to a more private consumption of information, away from the performative forces that too often prevent authentic engagement with opinions and entrench us in publicly stated positions. A return to the intimacy of private thought, an unexpected dividend. She describes it like this:
Rather than ignoring or simply consuming an article and moving on (perhaps without fully understanding it), an AI companion could now help us explore questions like: “What does this actually mean for someone like me?”, “What are the assumptions behind these different perspectives?”, or even “I feel anxious about this but I’m not sure why — can you help me understand my reaction?” — Shuwei Fang
Consuming information via an AI agent can, for example, offer a safe space to ask difficult or embarrassing questions, new ways to explore viewpoints on divisive topics in private, and support in working through difficult reactions to information. Fang explores how this might change the business model for news media, and although this is not her explicit focus, she identifies a set of functions on the horizon that have clear applications to conflict transformation at large, and concretely to the task of mediation in peace processes.
A mediator’s AI companion
There is some experimental work in creating AI companions that support the work of peacebuilders and mediators.
- Akord AI is an AI co-pilot designed to help Sudanese peacebuilders navigate a body of knowledge related to the Sudan peace process, and best practices around it.
- The UN’s experiment with AI agents includes “Ask Abdalla”, an AI persona representing a Rapid Support Forces combatant in eastern Sudan, with whom mediators can simulate negotiations, helping them play out likely responses and behavioural patterns.
- CulturePulse’s digital twins create a simulated model of a society in conflict based on real or generated data, which mediators can interact with to see how different people in the community react to negotiation strategies, and what potential outcomes might be.
These AI agents are primarily focused on helping mediators and peacebuilders navigate a context, acquire knowledge, and understand what is possible. But they all assume that these practitioners are able to integrate what they learn impartially, or at least multi-partially, into their existing worldview. Yet we know that mediators have biases, stated or implicit; that biased mediators make worse agreements; and that hegemonic power shows up in how these biases manifest.
What if an AI companion could help mediators consume information about a context in a different way, one that helps them reconcile new information with their existing beliefs and life experience? Fang calls this potential function of AI “narrative integration”. We can imagine an AI companion that helped to navigate any of the knowledge bases of Akord, “Ask Abdalla” or CulturePulse while at the same time helping a mediator understand their thought patterns with respect to this new information.
Belief updating for conflict parties
There is a growing body of work on integrating deliberative platforms into peacebuilding and mediation processes.
- UN DPPA has used AI-assisted digital dialogues to engage diverse voices in peace processes in Iraq, Yemen, Libya, Afghanistan and more.
- ALLMEP has facilitated large-scale AI-enabled discussions among Jewish Israelis, Palestinians in the occupied territories, and Palestinian Citizens of Israel who work in the peacebuilding field.
- At Build Up, we are working on constructive public policy conversations with thousands of young Kenyans, combining analogue and AI-enabled digital channels.
Broadly speaking, these processes use AI for sensemaking in two ways: identifying key topics or issues of debate, and ranking those topics or issues according to different metrics. A number of deliberative platforms use bridge-building algorithms to identify topics or issues that are most likely to result in consensus. While finding common ground is an important component of mediation processes, we know that at times common ground becomes a weak or watered-down compromise, and that there will be no true, solid common ground unless the positions and beliefs of conflict parties shift. Could AI help overcome that hurdle?
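To make the ranking idea concrete, here is a minimal sketch of one common bridge-ranking heuristic, in the spirit of the group-aware consensus metrics used by deliberative platforms such as Polis: score each statement by its minimum agreement rate across opinion groups, so that a statement ranks high only if every group tends to agree with it. The statements, groups and vote counts below are hypothetical, and real platforms use more sophisticated clustering and scoring.

```python
# Illustrative bridge-ranking heuristic: a statement's score is its
# minimum agreement rate across opinion groups, so partisan statements
# (loved by one group, rejected by another) rank low.
# All statements, groups and vote counts here are invented for illustration.

def bridge_score(votes_by_group):
    """votes_by_group maps each opinion group to (agrees, total_votes)
    for a single statement. Returns the lowest agreement rate across
    groups, i.e. how acceptable the statement is to its least
    sympathetic group."""
    return min(agrees / total for agrees, total in votes_by_group.values())

statements = {
    "Invest in shared water infrastructure": {
        "group_a": (80, 100), "group_b": (70, 100),
    },
    "The other side is to blame for the conflict": {
        "group_a": (95, 100), "group_b": (5, 100),
    },
}

# Rank statements from most to least bridging.
ranked = sorted(statements, key=lambda s: bridge_score(statements[s]),
                reverse=True)
```

In this sketch the first statement scores 0.7 (its weakest group still mostly agrees) while the second scores 0.05 despite near-unanimous support in one group, which is exactly the distinction a bridging metric is meant to draw.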
In The Moral Imagination, John Paul Lederach emphasizes the critical role of relationships, especially across divides, and the importance of creating spaces where “unlikely people in unlikely places” can come together. This is critical because it offers opportunities to open oneself to new ways of seeing, to reexamine and shift one’s beliefs, and to take a moral leap of imagination towards a possible future peace. Fang talks about how AI could provide what she calls “belief updating assistance”: support for working through cognitive dissonance when confronted with information that challenges one’s existing views. We can imagine an AI agent that was trained to represent the positions, interests and needs of a population, and that helped conflict parties engage with this information in private, and update their beliefs as if in an unlikely encounter.
Trauma healing & information therapy
There are countless studies showing that information overload, especially when that information is served up by engagement-based algorithms that are likely to prioritize the most emotionally triggering content, impacts our mental health. In war, information overload is often weaponized to target a population, looking more like information warfare, and can result in trauma. We have seen this in Gaza. We recently wrote an analysis of the impact of social media narratives on Sudan (coming soon, webinar here), and concluded that the deep divisions fuelling the war in Sudan are made worse by communication methods that create a violent online environment and make it harder for Sudanese people to recover together, amplifying and extending the conflict.
Misinformation thrives under conditions of information warfare, not only because its tactics are amplified by algorithms, but also because people become entrenched in their positions, overwhelmed by information overload. No amount of digital literacy is going to work in these conditions; perhaps trauma healing would be a better strategy to address the spread of misinformation.
What would it take for media consumption to shift in ways that are supportive of trauma healing for populations subjected to information warfare? Fang points to the possibilities for AI-enabled “information therapy”, where people have access to tools for healthy information consumption. This new horizon for media consumption could move us away from the divisive tactics that thrive on algorithmically mediated platforms and towards an information environment that is more conducive to peace.
Closing thoughts: the ethics of AI agents in peace processes
As AI agents become ubiquitous across many fields, concerns are being raised about the risks associated with their use. Fang points out particular challenges around sycophancy and persuasion. In the context of peace processes, we might also pay particular attention to the potential misuse of agents for surveillance of conflict parties, and consider the risks of security breaches and leaks of sensitive information.
There is much to be considered in the conflict-sensitive design and use of this technology, and more that will emerge as practical applications unfold. Yet the potential application of AI agents to mediation is worth exploring. In A Psalm for the Wild-Built, a science fiction novella about a monk and a robot who become friends in a world where robots have gained self-awareness and wandered into the wilderness, a turning point comes when Mosscap (the robot) shifts from observing Dex (the monk) to offering connection by making him tea and creating a space to listen. We may not be at tea-making robots yet, but building on the idea of the AI intimacy dividend as a willingness to engage more authentically with AI than with humans, we can explore how AI might support, through narrative integration, belief updating and information therapy, one of the core goals of mediation: to achieve the shift from positions to interests and needs that paves the way for constructive negotiations.
