Manifest a world unseen

Opening remarks for the Build Peace 2022 conference by Helena Puig Larrauri

This year is the 9th edition of the Build Peace conference.

In its essence, Build Peace is a conversation that evolves organically, and we try to stay true to that. One way we do this is in how we decide on themes: always in response to an interest, always listening to what people around us are saying and what we are picking up as key issues for reflection.

Maybe that’s why, when you string them together, the themes since 2014 really feel like the arc of an ongoing inquiry into the relationship between the digital era we live in and the opportunities for and challenges to peace that we see around us.

A brief history of the Build Peace conference, in themes

At the first Build Peace conference in 2014, which took place at MIT in Boston, we wanted to understand, in broad overview, what technology and innovation could do in peacebuilding, focusing on four areas — information, communications, gaming and networking. In 2015, Build Peace was in Nicosia, Cyprus — Europe’s last divided capital city. We asked ‘by whom and for whom is innovation used to build peace’ and unpacked ways in which innovation changes who participates in peacebuilding. In 2016, in Zürich, we tackled the question of change, of transformation, by asking why and how we use innovation to build peace. And then in 2017, in Colombia, we deepened the inquiry by looking at a specific application of innovation to improve participation in peace agreements.

Those first four years were a broad inquiry into the what, who, how and why of innovation for peacebuilding. And we were mostly focused on the potential and opportunities that innovation and technology afford peacebuilders. Since 2018, the themes and contributions from speakers have shifted from opportunity to challenge — we began to look critically at how innovation and technology might be changing conflict.

In 2018, in Belfast, Northern Ireland, we challenged an assumption often held by those of us working in peacetech or civic tech: that technology is just a tool, and what matters is how we choose to use it. That digital technologies are essentially neutral. Instead, we started to question how digital technology is tooling us, how it is altering the human experience, and how that is relevant to conflict and peace. We saw that, from personal incentives to strive for online approval to the polarization of political discourse, digital technologies are changing how we form our identities.

That led straight into the 2019 conference theme, which looked at borderlands and was held on the border between the USA and Mexico. Because what are identities if not borders? The ultimate borders, the thing that drives, fuels and explains so many of the conflicts we all work on. That year, we looked at how technologies both harden and bridge the social, political and physical borders that shape conflict.

And then came the COVID-19 pandemic. In 2020, at a conference that was supposed to take place in Cape Town but ended up being an online event, we reflected that as peacebuilders, we need to pay attention to what the pandemic is doing to the fabric that keeps our societies together. What divisions are being exacerbated, and how does that shift the balance of power? In a nutshell, we reflected that social distancing means more of our lives are taking place online, which makes it imperative that we dig a little deeper into understanding the socio-technical ecology, this evolving system we exist in, because it is changing how conflicts unfold and how we engage with conflict.

That year I kept asking myself: How much is it worth risking polarisation and fragmentation in order to build a truly plural and inclusive society? How do we influence this balance in our socio-technical ecology? I actually think we went through some kind of unveiling in 2020 — that somehow the pandemic made this socio-technical soup we’re swimming in so much more evident.

Perhaps that’s why we stuck with a version of the same theme in 2021, reflecting on digital adaptations, digital conflict, and the long-term impacts of the pandemic on peacebuilding and social justice.

Exploring the unseen in 2022

We’re still deep in this multi-year reflection that started in 2018, unpacking how technology and innovation are changing conflict: if 2020 and 2021 are about the big shadows that the pandemic threw into digital space, then 2022 is about all that goes unseen.

Exploring the unseen is about dealing with the less-visible sides of digital and physical societal space, which are often obscured by the dominant narratives that emerge amidst conflict. Our online and offline realities cannot be divorced from each other — we can look at them together to understand what unites us, what divides us, and how, in between the cracks of those answers, lies our capacity to respond to each other peacefully. We need to understand what lies below the surface, together. Explore the unseen.

Something that often goes unseen, and that we’ve talked about before at Build Peace, are the dynamics that lead to supremacy and polarization. Physical and structural violence targeting specific groups or individuals — like the violent events in Chemnitz in 2018 — is often the visible tip of the societal iceberg. But there are many unseen dynamics of supremacy and polarization that impact the way humans are able to connect within and between societal groups.

I’ve been interested in how this plays out in online space for years. A lot of us here are thinking about disinformation and misinformation. Just so we’re clear on the distinction: disinformation is creating and spreading incorrect information to intentionally deceive or manipulate others; misinformation is spreading incorrect information without the intent to deceive. Most disinformation is aimed at alienating a person or a group — in other words, disinformation uses a lie to close debate, to simplify the narrative rather than allow for plurality. But then that lie spreads, and it becomes misinformation. Why? Why don’t people simply stop spreading it?

There are many possible answers to this question — and I know many speakers will be addressing it in different ways. Most answers seem to fall into two categories. One is that it’s something to do with human neurology and behaviour: basically, our brains kind of love heuristic simplicity. The other is that it’s something to do with the spaces in which we now communicate: basically, misinformation spreads more because most digital technologies amplify this very human weakness of preferring simplicity.

This second point about the digital spaces in which we communicate really worries me. Many of us have discussed how Meta, and more specifically Facebook, builds algorithms that maximise for quantity of engagement. Algorithms that want more of our attention on stuff, any stuff. Pushing polarizing content serves this attention extraction model. Lately, I’ve also started paying attention to how this same surveillance capitalism business model is actually financing people who produce disinformation content with the revenue from online ads, and mostly from Google ads. In 2019, the Global Disinformation Index said disinformation websites earned $250 million in ad revenue, and Google was responsible for 40% of those ads. By 2021, that amount had risen to $2.6 billion.

Stop for a moment to think about that figure, and the very fast rise. What’s more, this is not only an issue affecting places like Myanmar or Brazil, where there has been much attention on the impact of misinformation on conflict and violence. To give an example closer to where we sit today: of 30 German-language sites the EU DisinfoLab identified as consistent sources of false content, more than 30% earn money with Google ads. Many of these sites mix far-right narratives with COVID-19 disinformation.

So of course we need to worry about the production of hate speech or misinformation content — who produces it, and how, and why. But while the business model of surveillance capitalism remains intact, it really feels like the odds are stacked against us. What needs to change is not only the content, but also the space. How can we create digital spaces that are connecting spaces — rather than drivers of division?

Recommender algorithms as unseen dividers and potential connectors

There are so many possible answers to this question — and again I can’t wait to hear what yours are. One that I’ve been thinking about lately, largely thanks to Jonathan Stray’s writing on this subject, is whether and how recommender algorithms could create more connection. For all my criticism of algorithms earlier, and how they push polarizing content, I also don’t think we could just do away with them. There is just so much content out there now that we need some machine-supported way to sort through it.

That’s what some algorithms solve for: recommenders are algorithms that are supposed to help us navigate content by deciding what to present to us. And because algorithms are just opinions embedded in code, recommenders just do whatever their designers think they should do. Or as Stuart Russell neatly simplifies it:

“Like any rational entity, the algorithm learns how to modify the state of its environment — in this case, the user’s mind — in order to maximise its own reward.”

So if we want a user mind that engages with as much content as possible, then we will reward the recommender for “more quantity of engagement”, and that’s all the recommender maximises for. The fact that polarizing content is what gets that result is irrelevant. Or in the words of Krueger and his co-authors:

“We care which means the algorithm used to solve the problem, but we only told it about the ends, so it didn’t know not to cheat.”
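To make that concrete, here is a toy Python sketch of an engagement-only recommender. Everything in it is invented for illustration (the items, the numbers, the function names); it is not any platform’s actual code, just the bare logic of rewarding quantity of engagement and nothing else:

```python
# A toy illustration, not any platform's real system: a recommender
# whose only reward signal is predicted engagement. All item names
# and numbers below are invented.

items = [
    {"title": "local news update",        "polarizing": False, "predicted_engagement": 0.04},
    {"title": "nuanced policy explainer", "polarizing": False, "predicted_engagement": 0.02},
    {"title": "outrage-bait hot take",    "polarizing": True,  "predicted_engagement": 0.11},
]

def reward(item):
    # The designer's opinion embedded in code: we tell the algorithm
    # about the ends (engagement) and nothing about the means.
    return item["predicted_engagement"]

def recommend(items):
    # Maximise the only thing the reward mentions.
    return max(items, key=reward)

print(recommend(items)["title"])  # -> outrage-bait hot take
```

Note that the `polarizing` field never enters the reward function: the algorithm cannot see the harm it is optimising toward.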

And what exactly is it to cheat here? At its simplest, a recommender presents content that influences us to want to see other content, in order to predict and then optimise what we see. That might initially sound like something we don’t want, but not all influence is a bad thing, necessarily. If you forget about algorithms and instead think about your interactions with people, you know instinctively that influence can be a support to curiosity or education, but influence can also be a means to manipulate or coerce.

It’s hard to distinguish between these, but we can ask some questions to try to draw this line. The one I think works best is: does the interaction satisfy the needs of both the influencer and the influenced? In other words, something is influence and not manipulation when it feels more like a dialogue, more agonistic than antagonistic. That is, when it’s not a driver of conflict. So maybe the question we need to ask ourselves is whether a recommender is manipulating us. The distinction between influence and manipulation may be easy to spot in a conversation, in a book or in a lecture, but how does it translate into code?

One way some people have suggested is to build a recommender that rewards positive interactions across diverse audiences, including around divisive topics. This seems hard to do, because someone would have to define what a “positive interaction” is and which kinds of diversity are good. Another option is to introduce some uncertainty into the predictions of the recommender, some randomness and noise in the recommendations; a sketch of this follows below. I like this option better because it strikes me as a way to foster complexity that reflects what happens in offline interactions.
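As a rough sketch of that second option, with entirely invented items and a made-up `noise_scale` knob, injecting uncertainty might look something like this:

```python
import random
from collections import Counter

# A minimal sketch of adding uncertainty to a recommender's
# predictions. The items, numbers and the noise_scale knob are all
# invented; this is a thought experiment, not a tested design.

items = [
    {"title": "local news update",        "predicted_engagement": 0.04},
    {"title": "nuanced policy explainer", "predicted_engagement": 0.02},
    {"title": "outrage-bait hot take",    "predicted_engagement": 0.11},
]

def noisy_recommend(items, noise_scale=0.05):
    def noisy_score(item):
        # Gaussian noise on top of each prediction: the top-scoring
        # item usually wins, but lower-scored, less sensational items
        # now surface some of the time.
        return item["predicted_engagement"] + random.gauss(0, noise_scale)
    return max(items, key=noisy_score)

# Over many draws, the recommendation is no longer deterministic:
print(Counter(noisy_recommend(items)["title"] for _ in range(1000)))
```

How much noise is the right amount is of course a judgement call; the sketch only shows that the dial exists.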

That might seem like a long digression about one specific type of algorithm, but the reason I’m so interested in this question of recommenders is that it makes me think that we are at a turning point. We are in a time when we can still see the changes in influence and manipulation that digital spaces are having on the way we are with each other, before they become so prevalent, so embedded, that they become unseen.

We’re at a turning point that is the right time to be asking some important questions. How can the digital era be a time of curiosity and exploration, rather than of exclusion and ideology? What do we need to manifest now to build peace in the digital era? What social, political and technological ideas and practices do we need to nurture?

To answer these questions, we need to dream up a world that doesn’t yet exist. One thing I’ve discovered about the Build Up team is that quite a few of us love science fiction. I wonder if the same is true of all of you? I love science fiction because it is a way of manifesting a world that is as yet unseen. One of my favourite science fiction authors, Ursula K. Le Guin, wrote:

“The way to see how beautiful the earth is, is to see it as the moon.”

I love that image — the earth as the moon. Only with distance and interval can we see beauty. And only then can our moral imagination kick in, so we can manifest new ways to live together. Turning points are moments when we slow down to give space to our moral imagination so we can see what is possible.

In the run up to the conference, Anooj hosted online sessions that some of you attended. Reflecting on these sessions, he wrote that “Without space to slow down and reflect together on the work we do in the world, this work can easily shift into dynamics that are void of a responsibility to one another.” So maybe this conference can be an interval. A space to take distance and reconsider. Its own turning point of sorts.

Every year at Build Peace, we have a conference slogan. It’s an invitation for how to be in this space for the next days. Our invitation this year is that together, we manifest a world, unseen.
