Take back the analysis: Five things you can actually learn about a conflict context from social media

Mira El Mawla & Krystel Tabet

Originally published on ZIF TECHPOPS

A simple Google search for ‘social media analysis’ returns hundreds of pages of tools, platforms and literature on analyzing your own social media: who is reading your material? Who is interacting with it? What is your reach among the intended audience? Etc. However, very little has been shared about how peacekeepers, peacebuilders and mediators can use social media analysis to create more impact and influence change.

Most social media analysis helps corporations develop targeted strategies to increase sales or reach specific consumers. Can similar analysis also help peacebuilders and mediators understand misinformation, hate speech, and division, and design interventions based on data from social media? Based on Build Up’s recent work in Lebanon and Libya, we believe it can. This article offers an introduction to how social media analysis of conflict-related conversations works, and five things you can actually learn about a context from social media.

As social media becomes an increasingly important platform for people to express and inform their political beliefs, monitoring and understanding online conversations can provide invaluable insights into public attitudes.
“Social Media Analytics to Break Through the Political Noise,” University of Connecticut, March 2, 2020

Understanding the basics

As peacebuilders and mediators, whenever we want to design a program or intervention, we look for specific information about the context:

  1. We need to understand key stakeholders: who will we engage? Why these profiles specifically? How much influence do they have? Etc.
  2. We need to understand their agenda / what they’re pushing for / their key arguments: what information is being shared? Is it negative or positive? Is there a specific agenda being pushed at this level?
  3. We need to understand how they relate to and engage with other influencers and followers: who allies with whom? How do they engage together? What form do their conversations take?

The methodology of collecting data is iterative and depends on several factors, ranging from the level of access to big data on any given platform, to the scope of the study, i.e. what needs to be isolated and what will be shared publicly. We found that even within this wide range of possibilities, a hybrid method combining automated tools and more manual analysis is the best approach to reaching valuable findings.

Social media research allows us to collect a wealth of data: any influential Facebook page can have thousands of interactions on a single post, for example. Creating visualizations and conducting automated language analysis on this data can help detect patterns of speech and hate speech, repeated discourse, and macro-level influence, for instance by mapping who follows which pages.

What automated analysis does not tell us is why certain narratives are used, what offline events they are responding to, and how nuances in micro-level engagement differ from page to page. We therefore bolster key findings by selecting individual interactions that mirror the larger dynamics surfaced by automated analysis, to crystallize our conclusions and present them in a way that is familiar to readers.

Manual compartmentalization of data is a helpful and complementary prerequisite to successful automated analysis. This can include tasks like manually assigning sentiments to words, defining slurs or colloquial language as hate speech, and categorizing pages and influencers according to political/community affiliation.

Drawing on this, let’s look at what varieties of information we can gather from social media, concretely thinking about Facebook and Twitter (these categories may differ for other platforms):

  • Posts: content and profiles
  • Comments on posts: content and users, plus sub-comments on comments
  • Reactions (likes, dislikes, etc.)
  • Shares (retweets): who is sharing what, and what is being massively shared
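The categories above can be sketched as a minimal data model. This is an illustrative sketch only; the class and field names are our assumptions, not any platform’s real API schema:

```python
from dataclasses import dataclass, field

# Hypothetical data model mirroring the categories above:
# posts, comments, sub-comments, reactions, and shares.
@dataclass
class Comment:
    author: str
    text: str
    replies: list = field(default_factory=list)   # sub-comments on comments

@dataclass
class Post:
    author: str
    text: str
    reactions: dict = field(default_factory=dict)  # e.g. {"like": 120, "angry": 4}
    shares: int = 0
    comments: list = field(default_factory=list)   # list of Comment

# Toy example of one collected post with one comment and a reaction count.
post = Post(author="news_page", text="Budget vote today")
post.comments.append(Comment(author="user_a", text="About time!"))
post.reactions["like"] = 3
```

Treating the collection as one linked structure like this is what lets the questions below be asked of the data as a whole, rather than post by post.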

If we look at this collection as interactive and interdependent, we can start to imagine what kinds of questions this data allows us to ask and answer about a context:

  1. Who are the influencers: Who are the most followed profiles? What is their agenda? Which camp do they belong to? What is their reach? Who engages with them? Is there a certain cluster of topics, style of communication, or type of engagement (humor, sarcasm, news-style updates, documentary-style videos) that attracts a large following? What does that tell us about the culture in the context we are observing?
  2. Who are the followers (users): Who do they follow (patterns)? Which conversations are they most interested in? How do they engage in conversations about specific topics? Is there a difference in engagement when it comes to other topics? How do they influence each other? Etc.
  3. What are trending topics: What are the most discussed topics? How are they connected to each other? Are there certain narratives that go together?

Five things to learn

In our efforts to understand social media behaviors and conversations in Lebanon and Libya (separately), our team gathered data of this kind, and with it we understood the following:

1. Sentiments around key issues and actors

A key question in social media analysis is what different narratives (or agendas) are being pushed on social media. We found that by developing a list of key terms that together signal a specific sentiment (anti, pro, neutral, etc.), we could quickly filter thousands of comments and get an understanding of how various groupings of users, or followers of a certain agenda, feel about a specific issue or cause. For example, in our assessment of the Libyan scene, we were able to identify users’ sentiments towards foreign actors (namely the USA, Egypt and Turkey), and to draw conclusions about the issues that concern people and how they feel about them. Additionally, we were able to match our findings with key offline events to better understand how these events shaped the online conversations, or the other way around.
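A minimal sketch of this keyword-based sentiment filtering, assuming toy term lists; in practice the lists are built manually with native speakers and refined iteratively against the context:

```python
import re

# Hypothetical keyword lists signalling sentiment toward one actor.
# Real lists are context-specific and curated by hand.
PRO_TERMS = {"ally", "support", "welcome"}
ANTI_TERMS = {"occupier", "interference", "leave"}

def label_sentiment(comment: str) -> str:
    """Label a comment pro/anti/neutral by counting signal terms."""
    words = set(re.findall(r"\w+", comment.lower()))
    pro = len(words & PRO_TERMS)
    anti = len(words & ANTI_TERMS)
    if pro > anti:
        return "pro"
    if anti > pro:
        return "anti"
    return "neutral"

comments = [
    "They are a true ally, we welcome the support",
    "Foreign interference again, they should leave",
    "The meeting is tomorrow",
]
labels = [label_sentiment(c) for c in comments]
# With these toy lists: ["pro", "anti", "neutral"]
```

The same labels, counted over thousands of comments and matched against a timeline of offline events, are what turn a manual read into a quantified picture of sentiment.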

The difference between manual assessment and an automated social media analysis is the ability to analyze thousands of comments and users and quantify their reactions, topics of conversations, sentiments, etc.

2. Trending topics and misinformation

A second question is simply what users are discussing online. Are the topics related to specific offline events? Or are the topics of conversation (e.g. public services, governance, rights, access to services, political commentary) created online without direct correlation to the current offline context? What can these topics tell us about what engages users most, both positively and negatively? We could find keywords being repeated and links and stories being shared that are not factually correct (i.e. misinformation and disinformation). It is of course possible to do this manually by following specific users of interest. The difference between manual assessment and automated social media analysis is the ability to analyze thousands of comments and users and quantify their reactions, topics of conversation, sentiments, etc.

3. Network maps and echo chambers

Mapping social media networks gives us a better understanding of the different ways individuals form groups and exchange online. One of the most important outcomes that social media analysis can generate is network maps that connect users or topics. Network maps rely on individual users’ journeys online (i.e. which pages they follow, where they comment, where they comment positively or negatively, where they engage with others and where they only leave a reaction, etc.) and how they interact with high-reach content. A network graph can show you how groups of users are moving between content and topics and who they interact with the most. As a result, you can infer where there are organized online campaigns, and which content gets engagement from all ‘camps’ or groups. Here you can also identify where a campaign started, and who is pushing for a specific hashtag or campaign. This also helps us understand the extent of polarization around a specific topic, event, or user, visually showing whether people are talking to each other, or whether users are only following pages and people aligned with their perspectives and opinions (creating echo chambers and siloed conversations).

A network graph can show you how groups of users are moving between content / topics and who they interact with the most.

Social media network graphs can be helpful in learning more about the variety of social structures that are emerging (or that have emerged). This is helpful in highlighting strategic locations or roles in these webs of connection.

Image 1: Network Visualization, showing connections between Twitter users in Lebanon (2019)

This visualization maps all Twitter users who follow the 79 pages we had isolated for the study, i.e. those that discuss issues related to Syrian refugees in Lebanon. The data analysis tool groups together users who talk to and influence one another, producing clusters that we can analyze as echo chambers that may be reinforcing or affecting online narratives and opinions. For example, we saw that international organizations and agencies, especially English-speaking pages (grey cluster), formed an echo chamber of their own and were distant from other users, which meant they did not exert much influence over the domestic Lebanese audience. The diagram also shows that the green section, representing Lebanese influencers and public figures, has more overlap with other clusters, meaning more users from other clusters engage with it. This was not only a valuable finding about the current reality; it could also be used to help actors determine entry points for interventions that could influence the narrative around Syrian refugees in the country.
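One simple way such clusters can emerge from the data is by linking users whose follow lists overlap strongly. This is a toy sketch under invented names and a made-up threshold, not the tool used in the study:

```python
from itertools import combinations

# Toy data: user -> set of pages followed. All names are invented
# for illustration; real data comes from platform exports or APIs.
follows = {
    "u1": {"ngo_en", "un_agency"},
    "u2": {"ngo_en", "un_agency"},
    "u3": {"local_influencer", "news_ar"},
    "u4": {"local_influencer", "news_ar", "un_agency"},
}

def jaccard(a: set, b: set) -> float:
    """Overlap between two follow lists (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b)

# Link users whose follow lists overlap strongly, then take connected
# components of that link graph as rough "echo chamber" clusters.
THRESHOLD = 0.5
edges = {u: set() for u in follows}
for u, v in combinations(follows, 2):
    if jaccard(follows[u], follows[v]) >= THRESHOLD:
        edges[u].add(v)
        edges[v].add(u)

def clusters(edges):
    seen, out = set(), []
    for start in edges:
        if start in seen:
            continue
        group, stack = set(), [start]
        while stack:
            node = stack.pop()
            if node in group:
                continue
            group.add(node)
            stack.extend(edges[node])
        seen |= group
        out.append(group)
    return out
```

Here the English-speaking followers (u1, u2) and the local-page followers (u3, u4) fall into separate clusters, the same kind of separation the grey cluster showed in the visualization above.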

4. How people talk to each other

Looking at and studying how people talk to each other can tell you a lot about the divisions, rumors and false news spreading, and about key cultural traits that might impede an impact process you’re working on. This is a qualitative, manual read of specific content to understand tone and nuance, done after we have understood which narratives and topics were most prevalent, how users were engaging with them, and how users were influencing each other’s opinions or polarizing conversations further. We find value in maintaining a human touch for this type of analysis, especially since we draw on literature reviews, stakeholder interviews, and in-depth contextual assessments before looking at how this materializes online.

In our assessment of Libya, for example, we were able to determine which forms of hate speech were used against which communities, to map different divisions in Libyan society, and to better understand how they materialize online. In Lebanon, we went through tweets and Facebook posts manually to understand user behavior, much as market researchers do in a process called netnography. Using the offline events that happened during the data collection period as a benchmark, we were able to study exchanges between and among Lebanese and Syrian users, not to isolate their interactions as content, but to study the character of their exchanges as cultural insights. We knew that Facebook, for example, was a site of more antagonistic sentiment than Twitter, but how was this materializing under each post? Which symbols are more engaging, and how do users and influencers alike utilize them to mobilize people with or against a community, in this case Syrian refugees? Which conversations happen in public fora, and which are only discussed in isolated and more homogeneous safe spaces, like closed Facebook groups? And finally, which identities and intersections affect patterns of speech and tone? How does the power dynamic change when a commenter is a woman, a refugee, a Muslim, or a combination of these?

Looking at and studying how people talk to each other can tell you a lot about the divisions, rumors and false news spreading, and about key cultural traits that might impede an impact process you’re working on.

One more valuable finding from in-depth, micro-level analysis is the power dynamics between pages and their users, revealed by monitoring if and how conversations are moderated. Depending on the political and security context, online fora ranging from news pages to influencer profiles can have heavily moderated comments and strict posting permissions. Probing which information is kept or highlighted, versus which parts are censored and edited, tells us a lot about who tries to control conversations, how polarizing a certain topic (or person) can be, and whose voices are silenced in these spaces. Here, we also try to keep people’s intersections in mind, and always leave room for validating these assumptions with researchers and community members in closer proximity to the offline context in question.

5. Coordinated behavior and astroturfing

Through both qualitative and quantitative analysis, we quickly learned that coordinated inauthentic behavior (CIB) to spread rumors, misinformation, and propaganda is common in politically charged contexts and is used to bolster conflict narratives. It can take the form of established networks with a significant following online, many of which have already been taken down from Facebook and Twitter.

Our research also uncovered some examples of astroturfing, the practice of having many users repeat a certain narrative with the aim of making it seem like a genuine shift in public opinion. Studying these processes and how others react to them builds a more layered understanding of the political and security climate, as such behaviors often mirror the power play happening offline. Quantitatively, looking closer at the frequency of such activities gave us an overview of the reach of such networks and where they were concentrated, since our database told us which users were “spamming” certain pages with politically charged comments. Qualitatively, we were able to explore the responses and counter-narratives, whether equally polarized or diffusing the online tension. Findings from this analysis help us identify and design recommendations for more strategic social media listening, engagement and change.
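The frequency side of this can be sketched very simply: flag messages that many distinct accounts post verbatim across pages. The data and threshold below are invented; real detection would also weigh timing, account age, and network ties:

```python
from collections import defaultdict

# Toy comment stream: (user, page, text). All names are illustrative.
stream = [
    ("acct1", "news_a", "Leader X saved the economy"),
    ("acct2", "news_b", "Leader X saved the economy"),
    ("acct3", "news_a", "Leader X saved the economy"),
    ("acct4", "news_a", "When is the next match?"),
]

def normalize(text: str) -> str:
    """Collapse case and whitespace so near-identical copies match."""
    return " ".join(text.lower().split())

# Group distinct accounts by the (normalized) message they posted.
by_text = defaultdict(set)
for user, page, text in stream:
    by_text[normalize(text)].add(user)

# Flag any message pushed verbatim by 3 or more distinct accounts.
suspect = {t: users for t, users in by_text.items() if len(users) >= 3}
```

Counting per page as well shows where such networks concentrate, which is the quantitative overview described above; the qualitative read of responses and counter-narratives remains manual.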

Creating impact and influencing change

This knowledge and analysis of social media is valuable to us as peacebuilders and mediators, as it gives us an in-depth understanding of what people are talking about online, in which way, and with whom. This type of exercise is becoming increasingly important in today’s digital world, where most populations turn to social media to voice their opinions, questions, concerns, and interests. Peace operations can look to the insights gathered from peacebuilding and mediation efforts as they contend with the challenges and opportunities that result from the growing use of social media in their areas of deployment.

Build Up transforms conflict in the digital age. Our approach combines peacebuilding, participation and technology.
