Artificial Intelligence in Peacetech

Build Up
May 16, 2019

If the heralds of new tech are to be even half-believed, Artificial Intelligence will revolutionize our computerized, data-driven world. At the very least, careful consideration suggests new opportunities in how we approach work, interaction, and data. In this post, we begin to grapple with what AI can do for peacebuilding.

These tools demand caution. Successful implementation of AI requires specific knowledge and resources, and the process raises challenging ethical questions. Yet the technology is here to stay: no longer the next frontier, AI is becoming embedded in the tech industry, and like blockchain, it has grabbed our collective imagination. We need to understand how it can affect peace, positively and negatively, and how to approach it critically, so we can use it without falling into the traps imagined in science fiction or realized in today's actual implementations of AI.

Opportunities for AI

AI is often understood as facilitating cooperative intelligence: the idea that a network of humans and machines can solve more problems, and new kinds of problems, more easily than any one of the networked entities could alone. In this frame, AI can fall into one of the following categories:

  • Tool — the computer performs a task while humans monitor it; examples include autocomplete, spreadsheets, and tools that connect people
  • Assistant — works without direct attention and takes a more active role in solving problems, like IBM Watson or a chatbot
  • Peer — the computer performs tasks similar to a person's, while people handle some cases and the unusual ones
  • Manager — the computer assigns tasks, evaluates work, and trains people

If you interact with services from major tech companies, drive a newer car, play video games, or use social media, chances are you use AI in some of these capacities without even realizing it.

Recently, we’ve been paying close attention to three tools in the AI toolkit — machine learning (ML), natural language processing (NLP), and robotics. Each of these is a relatively specific class of tool in its own right:

Machine learning is used for sensing and prediction tasks. ML can recognize repeatable patterns in large and complex data sets, which can be used to surface information (for example, the image recognition fed into driverless car systems) or to make recommendations based on previously recognized patterns, as in online shopping or image generation techniques. In peacebuilding, we imagine that ML could help identify common cultural touchpoints that signify dividers or connectors, in order to assist ethnographic work.

Natural language processing is a specialized subset of ML used for a wide range of tasks around text and language, such as search, information extraction, machine translation, text generation, and sentiment analysis. We can imagine NLP in peacebuilding processes assisting chatbots and conversational interactions, rumor tracking, and social media mapping. It could also be used to analyze historical records and assist human researchers, for instance in our work with the Center for Democratic Constitutional Design and their process to archive important documents related to the Icelandic constitutional process.
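
To make one of these tasks concrete, here is a minimal sketch of how a rumor-flagging text classifier could look in Python with scikit-learn. The messages, labels, and choice of library are our own illustrative assumptions, not a description of any deployed system, and a real tool would need far more data and careful validation.

```python
# Minimal sketch of an NLP classifier that flags rumor-like messages.
# The messages and labels below are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = [
    "The water point in the north district reopened today",
    "They say the other community is planning an attack tonight",
    "Community meeting on Friday to discuss the new school",
    "Everyone from that group is secretly armed, pass it on",
]
labels = ["not_rumor", "rumor", "not_rumor", "rumor"]  # hand-tagged by a human reviewer

# Turn text into word-frequency features, then fit a simple classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(messages, labels)

# Flag new messages for human review: the model assists, it does not decide.
new_message = ["People are saying the market will be attacked tomorrow"]
print(model.predict(new_message))        # e.g. ['rumor']
print(model.predict_proba(new_message))  # confidence scores for a human to check
```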

Robotics is the fusion of sensors and mechanisms: robots take input from the real world and use machine learning and other rules to direct their mechanical actions. Today’s robots are designed for very specific tasks, such as manufacturing in factories, picking, sorting and moving merchandise in warehouses, or delivering, manipulating, or locating objects in other spaces. Practitioners have been imagining possibilities for robots in peace work for a while now, especially around how drones could be used to deliver aid and keep communities connected.

Can we trust AI to build peace?

Introducing artificial intelligence to peacebuilding work presents challenges similar to those of other peacetech, only magnified by the nature of the process. In broad strokes, to build an ML system you supply a data set and define a set of rules, which the code iterates over to refine its processing capabilities. You can either tag the data set directly with examples of what you want and ask the system to learn those tags so it can apply them to similar unknown data, which is known as supervised learning; or you can use unsupervised learning, which means finding correlations in a sufficiently large data set where the inputs are known but the outputs are not.
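
To make that distinction concrete, here is a minimal sketch in Python with scikit-learn (the data, tags, and choice of algorithms are invented for illustration): the supervised model learns from human-supplied tags, while the unsupervised model only groups similar records together without knowing what the groups mean.

```python
# Sketch contrasting supervised and unsupervised learning on invented data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans

# Each row is a made-up pair of features describing a message or event.
data = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])

# Supervised: a human has tagged each row, and the model learns the tags.
tags = ["connector", "connector", "divider", "divider"]
classifier = DecisionTreeClassifier().fit(data, tags)
print(classifier.predict([[0.85, 0.95]]))  # -> ['divider']

# Unsupervised: no tags are given; the model just groups similar rows.
clusters = KMeans(n_clusters=2, n_init=10).fit(data)
print(clusters.labels_)  # e.g. [0 0 1 1] -- groups, not meanings
```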

At this point, it’s quite a technical task, requiring an AI expert to help formulate this learning process. Bringing in an expert adds additional burdens to anyone trying to implement AI in peacebuilding. Not only do resource costs go up, but an organization must also identify AI practitioners who are value-aligned.

Trust between system operators and other stakeholders is especially important, because AI systems do not see the world the way people do, and the differences are neither trivial nor always easy to understand.

Bias can escalate quickly. It creeps in through the hidden features of data sets and the blind spots in developers’ vision. High-profile cases have shown time and again that, despite the best efforts of developers, facial recognition can be biased against people of color, resume-reading AI can be biased against women, and chatbots such as Microsoft’s Tay can be trained by malicious actors to use racist and genocidal language. Biased systems have led to false identification of terror suspects, which can cause immediate and irreversible harm.

One current challenge in the field of AI is building systems that can explain their choices, so humans can account for their actions. Any AI engagement must contend with the strangeness and unreliability this pseudo-actor introduces, which makes deep trust in the expert essential.

Design considerations in AI

With all of this in mind, how can we safely and ethically engage with AI as it grows in importance and use across the tech sector? Here are a few key thoughts to guide AI development:

  • AI is not a peacebuilding methodology in itself, but its outputs can improve existing proven methodologies.
  • AI should be considered an assistant and tool for human practitioners, but never a replacement, and its outputs need to be verified.
  • No system should be built with AI as a single point of failure; every system should have a monitoring and interruption mechanism built in (a minimal sketch of this follows the list).
  • AI should go through its own human-centered design process and Do No Harm analysis, in addition to any other design processes taking place.
  • Stakeholders should be given the opportunity to learn about the AI. They should be able to give input into how it’s trained, what data it uses, and how the program uses the outputs.
  • Set aside dedicated time to actively consider bias in the data and the methodology, because bias will be magnified by the AI process.
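
As one minimal sketch of that monitoring-and-interruption principle, here is a human-in-the-loop wrapper in Python. The confidence threshold, function names, and routing logic are entirely our own illustrative assumptions: outputs only proceed automatically when the AI is confident, and a human operator can pause the whole process.

```python
# Sketch of a human-in-the-loop wrapper: the AI assists, a person decides.
# The threshold and the route_to_human function are illustrative assumptions.
CONFIDENCE_THRESHOLD = 0.8

def route_to_human(item, reason):
    """Placeholder: queue the item for review by a human practitioner."""
    print(f"Needs human review ({reason}): {item}")

def handle_prediction(item, label, confidence, paused=False):
    """Act on an AI prediction only when it is confident and not paused."""
    if paused:  # interruption mechanism: a human can halt the AI entirely
        route_to_human(item, "system paused by operator")
    elif confidence < CONFIDENCE_THRESHOLD:  # monitoring: low confidence goes to people
        route_to_human(item, f"low confidence {confidence:.2f}")
    else:
        print(f"Logged as '{label}' (still auditable): {item}")

handle_prediction("message 1042", "rumor", 0.65)                # routed to a human
handle_prediction("message 1043", "not_rumor", 0.93)            # logged, but auditable
handle_prediction("message 1044", "rumor", 0.91, paused=True)   # operator pause wins
```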

Given the resources needed to set up an AI system (a well-tagged or sufficiently large data set, plus the budget for a highly trained expert), it is not appropriate for early-stage or pilot programs. Well-trained models and large datasets take time to develop, and simpler, more accessible tech processes will help refine the core approach first. It is easy to imagine AI distracting from the actual peacebuilding objectives of a process. As Diana Dajer and Jonathan Stray said at Build Peace in 2016, an application shouldn’t be built until it’s been proven on a small scale, by hand in spreadsheets.

That said, at Build Up we are starting to envisage AI as a future component of some new processes. We keep AI in mind from the beginning of any project, and plan for it as a possibility during early data collection, in order to be in a better strategic position to implement it in future iterations or expansions. We know that if our intention is to implement AI at a later point, collecting the right data and tagging the key features in the data early will save us time and resources in the future. (In fact, tagging data is so expensive that AI startups are turning to prison labor, and Google has embedded tagging into the ubiquitous Captcha process.)
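
As a small illustration of what collecting and tagging key features early could look like in practice (the field names, tags, and file format below are hypothetical choices for this sketch, not a Build Up standard):

```python
# Sketch of tagging key features at collection time, so the data is ready
# for supervised learning later. Field names and tags are hypothetical.
import json

records = [
    {
        "text": "Rumor circulating about the border checkpoint",
        "source": "community_whatsapp",   # where the item was collected
        "collected_on": "2019-05-16",
        "tags": {"topic": "security", "divider_or_connector": "divider"},
    },
    {
        "text": "Joint football match announced between the two towns",
        "source": "facebook_page",
        "collected_on": "2019-05-16",
        "tags": {"topic": "community_event", "divider_or_connector": "connector"},
    },
]

# One JSON object per line (JSONL) is a common, tool-friendly training format.
with open("tagged_data.jsonl", "w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")
```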

Let’s keep talking about AI for peacebuilding

Artificial intelligence is a complicated toolkit that’s hard to utilize effectively in ambiguous situations like peacebuilding. As peacebuilders, we need to give careful thought to all the challenges it brings into our work: resource requirements, ethical data collection, transparency, and trust.

Are you exploring AI for peacebuilding? We want to hear from you, to share resources, and to think together about how we can embrace AI while remaining realistic and critical about how it can add to or hinder the actual work of peacebuilding.

Build Up

Build Up transforms conflict in the digital age. Our approach combines peacebuilding, participation and technology.