Peacetech: remarks at the Geneva Peace Talks

Last Friday, I had the pleasure to speak at the 2015 Geneva Peace Talks, organized by the Geneva Peacebuilding Platform. I was humbled to be in the company of such a great line-up of speakers, all addressing the subject “It’s time for peace”. This blogpost is a write-up of what I said.

The peace of the graveyards

I’m from Spain and as you know Spain had a fascist dictator, Franco, until 1975. In 1964, Franco celebrated 25 years in power with the slogan: “25 years of peace”. The counter-slogan from anti-fascist activists, like my parents, was “We don’t want the peace of the graveyards.”

It’s a slogan that I still find relevant, a call to remember that a peaceful society is not one void of conflict, but one where all voices can negotiate a shared understanding of peace. It asks the question: who gets to decide what peace means? Who decides what peace we are working to build?

Peacebuilding as civic engagement

So that’s an interesting anecdote, but you might be wondering what it’s got to do with the subject of my talk. I co-direct a social enterprise called Build Up that works at the intersection of technology, civic engagement and peacebuilding. The provocation we put forward with our work is that we need to re-interpret peacebuilding as civic engagement. And that technology plays a key role in that reinterpretation.

In other words, we believe that the key thing technology does is broaden participation in peacebuilding processes, so that really what they are is civic engagement processes that deal with conflict. That also means that we can do away with the idea that conflict is something that happens in far-flung transition places or in the Global South. Conflict is in every society. Peacebuilding as civic engagement is needed everywhere, and technology is changing how it’s done everywhere.

Peacetech in the Somali Region and South Sudan

Since this may sound abstract, I want to illustrate it with two concrete examples of work Build Up is currently undertaking.

For the past 2 years, we have been working with Interpeace to support two local peacebuilding organizations in the Somali Region. The organizations we are working with have decades of experience doing qualitative research to understand what Somalis are thinking about the conflict. With the information they gather they run local peace processes and work to influence Somali policy makers.

So they do incredible work. We’ve helped them introduce a few technology tools that build on this. We worked with them to design a participatory polling methodology that introduced a mobile data collection tool linked to an online data management and visualization tool (read more about it here). We’ve also helped them come up with ways to share their findings and messages with more people. They were already doing paper reports and film screenings. Now they are also using social media, learning simple animation and making shorter films to be distributed online.

Perhaps you had an image of Somalia as a black hole where nothing works. In fact, it’s an incredibly resourceful place: there are more people online and on Facebook than you might initially think, especially since the fiber optic cable reached Mogadishu.

The second project I want to tell you about is one we implemented earlier this year in South Sudan. USAID funds the VISTAS program, which has been working on the Sudan – South Sudan border, supporting peace committees to make local agreements and manage divisions across the border.

These committees do important work. They convene elders to negotiate rights of passage and then drive around in cars and read out agreements over megaphones. But with this approach, there are only so many people they can reach and only so many voices that can be represented. In other words: the reasons why traders and cattle keepers want peace are clear, and they hear the agreements. But what about women? Or unemployed young men? They’re not at the negotiation table.

So we worked with one cross-border committee to identify a group of young men and a group of women, mixed Sudanese – South Sudanese, and then over three weeks, we supported these groups in making two short films.

None of the participants had touched a camera before. Only 5 of them could read and write. Many told us it was the first time they had been asked to express their opinion. Yet the films they made were entirely led by them. They planned the stories, filmed every shot, recorded every interview, and chose images and voices for the final cuts.

For the films’ opening night, we projected onto a white sheet strung up in the town’s dusty football field. Hundreds of people came to watch. The groups are now working to screen the films in other towns along the border, show them at video clubs, distribute them via mobile phones, and then make more films. (Read more about this here.)

Technology is just a tool

These are two powerful examples of projects that use technology to build peace. But you may be thinking, what of the risks? Isn’t technology also used for war and oppression? And of course it is: technology is just a tool – but for every negative use, I can probably come up with a positive counter-use.

On Facebook, we listen mostly to people we already agree with, which can make views more polarized, and hate speech is rampant. But groups like Peace Factory are using Facebook to connect ordinary Israelis to ordinary people in Iran, Palestine or Jordan, and groups like Umati (Kenya) or Proxi (Spain) are using social media to monitor and counter hate speech.

Technology can be used to cut off communications, as the government in Sudan has done, but also in Sudan a local NGO has set up a community communications system that links SMS to radio to help sustain local peace agreements. Videogames teach war, but Games for Peace uses Minecraft to bring Israeli and Palestinian teenagers together. Drones can bomb, or they can be tools for peacekeepers. And so on.

Peacetech can learn from civic tech

I think that what is happening with technology in peacebuilding is similar to how technology is affecting other areas of public life. And so I’m taking you back to Spain: with an economic crisis compounded by many corruption cases, Spanish people over the past few years have been asking themselves: who gets to decide what democracy looks like?

Many of the grassroots movements that started with this question are now turning into political forces to be reckoned with in the electoral arena. And in their organizing process, and in their ability to shift the public discourse, technology has played an instrumental role.

And this phenomenon of greater participation, greater empowerment via technology disrupting traditional processes is not just about Spain, and it’s not just about political activism. It’s happening in governments through civic tech movements too. Citizens are talking directly to governments, on their own terms. And governments are setting up web platforms, apps and social media campaigns to reach citizens.

But this change towards greater participation through technology has not yet reached formal peace negotiations. Peace negotiations continue to take place in closed rooms, between governments or warring parties, away from the people most affected, and with limited participation from civil society.

The examples I gave earlier, the many other examples of peacetech that are out there, they’re all happening among civil society. It’s true that the more effective ones manage to connect with ongoing governance / conflict management processes – like the example I gave you from Somalia. And this is very important, but it’s quite limited.

What is the e-governance of peace processes?

In other policy areas, the civic tech movement has meant that it is no longer acceptable for governments to fail to communicate and consult with citizens regularly: technology makes it easy and cost effective, removing any permissible excuse. I think we need a peacetech movement that does something similar for peacebuilding, and I think we need this urgently.

Peacebuilders on the ground are demonstrating how technology makes it easier to broaden participation. They are using technology to reinvent peacebuilding at the grassroots, in track 3. We can use this experience to also re-invent track 1 formal negotiations. We can start using technology to connect the conversations of conflict-affected communities directly to formal negotiation tables in Addis Ababa, New York, or Geneva. We know from past experience that if these two tracks are not connected, what is signed at negotiation tables won’t take root on the ground.

However different the contexts might be, I see digital activism, civic tech and peacetech as part of a global paradigm shift that leverages technology to disrupt what otherwise remains an unquestioned status quo, dominant power structure, a majority’s perspective as the only truth. Because people don’t want the peace of the graveyards.

Peacekeepers in the sky

A few months ago, Patrick Meier wrote about common misconceptions of Humanitarian UAVs. This post is part of his broader interest in the use of Unmanned Aerial Vehicles (UAVs) for humanitarian response (Patrick founded UAViators, the Humanitarian UAV Network). I responded with comments specific to the use of UAVs in conflict contexts, to which Patrick answered (as did Sanjana Hattotuwa). What we both agreed on was that the use of UAVs in conflict settings is complicated by a number of issues related to perceptions, politics, ethics and empowerment.

We’ve just co-authored a paper that tries to unpack some of these issues in the specific case of the use of UAVs for peacekeeping. It’s not got all the answers, and it’s not meant to – we want to spark more debate on this topic. We pay particular attention to questions around the data privacy of civilians (non-combatants) and the keystone humanitarian principle of informed consent, which we believe have so far largely been ignored. Edit (September 1, 2015): the paper has now been published in full by the ICT 4 Peace Foundation.

We are not peacekeeping or military experts, so our assessment of the use of UAVs in military operations will inevitably fall short of that of such experts. What we hope to bring to this discussion is an ethical exploration based on an understanding of grassroots action and how the introduction of new technologies can alter the balance of power. In the case of UAVs, and given the multidimensional nature of peacekeeping operations, we believe it is important to assess their use from this perspective too, and not only focus on military utility.

Next week, Patrick will be speaking at the Build Peace conference about lessons from humanitarian UAVs for peacebuilders. His talk (and the rest of the conference plenary sessions) will be livestreamed: follow #buildpeace and @howtobuildpeace on Twitter to get the livestream links and join the conversation.

Drones, ethics and conflict

Earlier this year, I was invited to speak on the Technology for Peace panel at IPI’s 44th Vienna Seminar. Ameerah Haq (Under-Secretary-General, United Nations Department of Field Support) was also on this panel, and explained how drones are increasingly becoming a feature of DPKO missions. As proof of the importance of this innovation, she recounted a story about the first flight of the MONUSCO drones, operated by UN peacekeeping troops stationed in Goma, North Kivu, in the Democratic Republic of Congo. Goma is on the shores of Lake Kivu, and the most common mode of transport between Goma and Bukavu is by unsafe, overcrowded boat across the lake. On their test flight, the UN drones sent back real-time imagery of a boat that was sinking in the middle of the lake. The peacekeepers quickly deployed a few UN boats and saved many passengers from drowning.

Boat on Lake Kivu (CC-BY-SA 2.0 by Julien Harneis)


That UN peacekeepers were able to undertake a rescue thanks to their new drones is laudable. But the key purpose of deploying UN troops to Goma is to guarantee the safety and protection of civilians in an area where violence from non-state armed groups is all too common. Why did Ms. Haq choose to share a story about a humanitarian action peripheral to the central purpose of DPKO missions? Is it early days, and there wasn’t much else to share? Or was this the only story that could be shared, because the others would compromise the intelligence gathering that drones are allowing the mission to undertake?

The second thought stayed with me. UN peacekeepers are actively collecting data on civilian (and military?) activities in the Kivus (and elsewhere). Does the local population get a say in what data is collected, and to what purpose? How relevant is this question in conflict settings? Do the same standards apply as elsewhere? Patrick Meier has been doing some great work on the ethics of humanitarian UAVs, but I wonder if we need a concrete discussion on the ethics of drone use for conflict prevention. OCHA recently published a policy brief on the use of UAVs by humanitarian actors where it directly recommends against using UAVs in conflict settings:

“Focus on using UAVs in natural disasters and avoid use in conflict settings: The use of UAVs in conflict settings is still too complex and hard to separate from military uses.”

I understand that OCHA may not want to complicate the still-nascent discussion on humanitarian UAVs by considering conflict settings. However, if drones are starting to be used for non-military purposes in places like the DRC, then we need to begin to discuss this. Here are three problems and two possible solutions to start a conversation on drones, ethics and conflict.

Problem 1: privacy and consent. The discussion around data privacy and UAVs centers on two issues: consent and the imperative to save lives. Consent is critical to any data collection and dissemination in conflict settings, whether via UAVs or otherwise. It is often difficult to meet Do No Harm principles because the unintended consequences of data collection in complex conflict environments are so hard to predict. An important way to mitigate this risk is to obtain the consent of those being surveyed who are most likely to understand these unintended consequences. But if the purpose of the MONUSCO UAVs is to allow peacekeepers to monitor a broader area than they can cover over-land, then how operationally viable is it to obtain consent for UAV-collected data? Humanitarian actors at times argue that the imperative to save lives trumps the need for consent in certain situations and / or at certain levels of data aggregation. This is an important argument to make in humanitarian crises, but how applicable is it to collecting data on civilian protection? It is much harder to draw the line on what is life-threatening in a conflict context. UAVs cannot detect intent, so how are imagery analysts to determine if a situation is likely to result in loss of life?

OCHA’s DJI Phantom UAV

Problem 2: fear and confusion. In describing common misconceptions about humanitarian UAVs, Patrick Meier argues that most drones used by the UN / NGOs are perceived by local communities as toys, not as threatening military equipment. In speaking with local peacebuilders in the Somali Region and in Pakistan, I wonder whether the same is true in (at least some) conflict contexts. There is significant trauma among local populations who have witnessed drone strikes that appeared to come from nowhere. There is also much greater suspicion of anything that looks like an instrument to spy, to relay information to places of power far away, and that might (even unintentionally) make them a target for military action. This blogpost by the IRC raises similar concerns about the difficulty that local populations may have in distinguishing drones-for-good in conflict settings. When the MONUSCO drones first started to operate, a consortium of NGOs working in the Kivus warned that they might (at least in the eyes of local beneficiaries) appear to blur the lines between military and humanitarian actors. The OCHA policy brief reinforces these concerns, arguing that painting and signaling humanitarian UAVs to distinguish them from military drones works well in natural disasters, but is unlikely to be sufficient to overcome the fears of local populations in conflict settings.

Problem 3: response and deterrence. Whether collected with UAVs, via SMS-enabled crowdsourcing or at community meetings, a key issue with any system that gathers data in or about a conflict is that it raises expectations for a response. This risk is especially concerning for MONUSCO, who have in the past been criticised for inadequate response to known threats to civilians. Is it ethical for MONUSCO or other UN / NGO actors to deploy UAVs if they do not have the capacity to respond to increased information on threats? One possible counter-argument is to say that the presence of UAVs is in itself a deterrent (just as the presence of UN peacekeepers is meant to be a deterrent). In fact, the head of DPKO has suggested that deterrence is a direct aim of UN drones. Other initiatives using satellite imagery to monitor violence, such as the Satellite Sentinel Project, have similarly argued that surveillance of conflict areas acts as a deterrent. But the notion that a digital Panopticon can deter violent acts is disputable (see for example here), since most conflict actors on the ground are unlikely to be aware that they are being watched and / or are immune to the consequences of surveillance.

Solution 1: education and civic engagement. Educating communities where drones are deployed is one way to address the issues above. OCHA’s policy brief indicates that it is important to increase “the degree of transparency, acceptance and community engagement of the UAV program”. An open conversation with communities can include considerations about the potential risks of drone-enabled data collection and whether communities believe these risks are worth taking. This can make way for informed consent about the operation of drones, allowing communities to engage critically, offer grounded advice and hold drone operators to account. Still a question remains: what happens if a community, after being educated and openly consulted about a UAV program, decides that drones pose too much of a risk or are otherwise not beneficial? In other words, can communities stop UN- or NGO-operated drones from collecting information they have not consented to sharing? Education will be insufficient if there are no mechanisms in place for participatory decision-making on drone use in conflict settings.

Solution 2: from civic engagement to empowerment. Perhaps civic engagement in how outside actors use humanitarian UAVs is not sufficient. In my view, the critical ethical question about drones and conflict is how they shift the balance of power. As with other data-driven, tech-enabled tools, ultimately the only ethical solution (and probably also the most effective at achieving impact) is community-driven implementation of UAV programs. Drones flown by communities as part of their own conflict prevention processes and activities. If you think that’s a crazy undertaking, consider that something similar is already happening for community-led, UAV-enabled disaster risk reduction in Haiti. And there is plenty that local peacebuilders could use drones for in conflict settings: from peace activism using tactics for civil resistance, to citizen journalism that communicates the effects of conflict, to community monitoring and reporting of displacement due to violence.

I’m guessing this second solution is not going to sit easily with most readers. If you think it would never fly because people would be taken for spies and military / government officials would be afraid of them, then doesn’t that reinforce the three ethical problems outlined above? The more I consider how drones could be used for good in conflict settings, the more I think that local peacebuilders need to turn the ethics discourse on its head: as well as defending privacy and holding drone operators to account, start using the same tools and engage from a place of power.