Humanitarian technology: revisiting the ‘do no harm’ debate

September 18, 2015
Katja Lindskov Jacobsen
Photo: An iris scanner in use in Baghdad, Iraq

For more than a decade, humanitarian actors have embraced a range of new digital technologies in the hope of finding solutions to a variety of age-old challenges. Drones can help expand access by delivering medical aid. Biometric registration of refugees can improve accountability by generating more reliable data, while also reducing the administrative burden by speeding up the registration process.

Although these new technologies have been used with the best of intentions – namely, to improve humanitarian protection of and assistance to people in need – there have been instances where they have caused ‘harm’ to their intended beneficiaries.

What is ‘do no harm’?

Since the late 1990s, ‘Do No Harm’ has become an increasingly mainstream concept in humanitarian thinking. The approach assumes that humanitarians can cause harm through their actions and must therefore assess how their assistance practices affect local conflict dynamics.

The approach has been translated into a framework through which humanitarian actors can map “the interactions of their aid with contexts of conflict”, where such ‘interactions’ refer to “the resources being brought into a context” (e.g. food, medicine) and “the people bringing the resources” (i.e. aid workers); see, for example, “Key Principles in Do No Harm” (http://www.cdacollaborative.org/programs/do-no-harm/key-principles-in-do-no-harm-and-conflict-sensitivity/). Yet the application of new digital technologies illustrates that technologies, as well as resources and people, can cause harm.

How can technology cause harm?

New technologies can have adverse effects when their application in humanitarian contexts is more experimental than evidence-based. Sometimes new digital technologies are tested only in laboratory settings or controlled environments before being rolled out in harsh field settings, where they are expected to function properly the very first time they are used in the ‘real world’. Introducing technologies in this way may compromise not only their performance but also the safety and security of the humanitarian subjects implicated in such ‘field tests’.

The most familiar illustrations of the pitfalls of field-testing come from the medical domain: new vaccines, for example, can have negative effects on the very people they are meant to help. But as we shall see in the next section, there are good reasons to think along the same lines when considering the possible implications of introducing new digital technologies into contemporary humanitarian practice.

Before considering the particular harms of specific technologies, it is important to stress that damaging effects do not stem solely from more or less experimental uses of new technology. Beyond decisions about specific applications, we must accept that technology has a certain degree of agency, which should in turn lead us to recognise that technology can independently cause other, less acknowledged forms of harm. For example, biometrically registering a refugee produces a digital refugee body – a digitised version of the refugee’s fingerprint or iris pattern – alongside the existing physical body in need of protection. If we fail to take account of technology’s constitutive effects, we risk failing to recognise how this digital refugee body can come to encounter harm through new forms of intrusion.

Looking at UNHCR’s use of biometric registration in the Afghan-Pakistan borderland

This section examines the technology-related harms associated with the UN High Commissioner for Refugees’ (UNHCR) use of biometric registration – more specifically, iris recognition technology – as a mandatory part of its 2002–2008 repatriation programme in the Afghan-Pakistan borderland. The case illustrates both the harm that can arise from the design of a biometric registration system and the harm that stems from the technology itself.

Ensuring that the humanitarian use of new digital technology – like biometrics – does ‘no harm’ requires two things. First, it is crucial to recognise that the technology is not flawless and that technological failures may translate into humanitarian failures. In the case of Afghan refugees, iris recognition was deployed in an unforgiving setting that differed significantly from the environments in which it had previously been used. Untested conditions of heat, dust and humidity could be expected to affect performance in ways that needed to be identified and corrected for; otherwise, the failure of iris recognition technology to correctly identify a refugee could have lasting implications for that refugee at every point of assistance.
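
To make the failure mechanism concrete, here is a minimal illustrative simulation in Python of how a Daugman-style iris system decides a match: two binary ‘iris codes’ are compared by their Hamming distance against a fixed threshold. The code length, bit-flip rates and threshold below are assumptions chosen for illustration – this is not UNHCR’s actual system – but they show how degraded capture conditions can flip enough bits to push a genuine refugee’s scan past the rejection threshold.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2048  # bits in a simplified iris code (real systems use codes of this order)

def hamming(a, b):
    """Fraction of bits on which two iris codes disagree."""
    return float(np.mean(a != b))

enrolled = rng.integers(0, 2, N)  # template captured at registration

# A clean re-scan of the same iris: a few bits flip due to ordinary sensor noise.
clean_scan = enrolled.copy()
clean_scan[rng.random(N) < 0.05] ^= 1

# A degraded re-scan (heat, dust, glare, squinting): far more bits flip.
degraded_scan = enrolled.copy()
degraded_scan[rng.random(N) < 0.35] ^= 1

THRESHOLD = 0.32  # a decision threshold of the kind reported in the iris literature

for name, scan in [("clean", clean_scan), ("degraded", degraded_scan)]:
    d = hamming(enrolled, scan)
    print(f"{name}: distance = {d:.3f} -> {'match' if d < THRESHOLD else 'REJECTED'}")
```

A rejection here is a false non-match: the same person stands in front of the scanner, but the system no longer recognises her – and in a repatriation programme that can mean being turned away at every subsequent point of assistance.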

The second step is to recognise that the new digital refugee body is also vulnerable and in need of protection. When a refugee’s eyes are scanned during registration with a humanitarian agency, a particularly sensitive type of data is produced: a digital representation of a unique physical feature. It was iris recognition technology of this kind that was used to find the famous ‘Afghan girl’ who appeared on the cover of National Geographic in 1985 – years after the original picture was taken.

In other words, the kind of data produced by humanitarian uses of biometric registration technology can subsequently be used to identify individual aid recipients. In sensitive conflict settings, there is a risk that the biometrically registered refugee becomes vulnerable to identification by actors whose intentions are not necessarily humanitarian or benign. In the case of UNHCR’s use of iris recognition in the Afghan-Pakistan borderland, no data was collected on whether this possibility of tracing individual aid recipients was a concern among the refugees involved.
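
The tracing risk can be sketched in the same illustrative terms: anyone who obtains a database of enrolled templates can run a one-to-many search against a fresh scan. The registration IDs, database size and noise model below are invented for the example, but the logic shows how little is needed to link a new scan back to a registered identity once templates have leaked.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 2048

def hamming(a, b):
    return float(np.mean(a != b))

def noisy_rescan(code, flip_rate=0.05):
    """Simulate re-scanning the same iris with ordinary sensor noise."""
    scan = code.copy()
    scan[rng.random(N) < flip_rate] ^= 1
    return scan

# A leaked database of enrolled templates, keyed by (hypothetical) registration ID.
database = {f"REG-{i:04d}": rng.integers(0, 2, N) for i in range(1000)}

# A fresh scan of one registered person, taken by a hostile actor at a checkpoint.
fresh_scan = noisy_rescan(database["REG-0042"])

# One-to-many search: the template with the smallest distance wins.
best_id = min(database, key=lambda rid: hamming(database[rid], fresh_scan))
print(best_id)  # recovers "REG-0042" despite the noise
```

Unrelated iris codes disagree on roughly half their bits, so the genuine template stands out unambiguously – and unlike a password, the underlying iris cannot be revoked and reissued.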

However, in a more recent humanitarian context where iris recognition has also been used – the Syrian refugee crisis – some misgivings have indeed been expressed. Refugees have said, for example, that they are worried that UNHCR may share their biometric data with the Lebanese government (Knutsen and Kullab 2014), and that such data sharing could have implications for their safety. Others have voiced concerns about the possibility of the Syrian government getting hold of biometric refugee data (ECRE 2013), a scenario that would have severe implications for the prospect of a safe return for the individuals concerned. Yet the current ‘Do No Harm’ approach does not address these kinds of issues.

Adopting a more critical approach to using new technology

Importantly, these forms of harm stemming from the introduction of new digital technology can occur despite the best of intentions; UNHCR does its very best to aid refugees globally. Nor is the intention here to suggest that all humanitarian uses of new digital technology will always cause harm. Rather, the aim is to encourage the humanitarian community to consider how, and under what circumstances, the use of new technologies in humanitarian contexts may give rise to harmful effects rather than only to improved humanitarian protection. Arguably, such critical reflection is largely absent from current representations of the wide range of challenges that various new technologies can help ‘solve’.

Reflecting on the biometric registration of Afghan returnees and Syrian refugees should prompt us to revisit the ‘Do No Harm’ debate. We need to debate seriously how the humanitarian use of new technologies can cause harm despite good intentions, particularly in the absence of clear policies and transparent procedures.

These two cases help conceptualise technology as a new dimension of the ‘Do No Harm’ approach by illustrating the need to pay more attention to the subtle but crucially important political effects of technology. Most notably, humanitarians need to be mindful that agencies collect sensitive biometric data that, in certain contexts, will be considered highly relevant by various political actors – be they donors, host governments or the very regimes refugees are fleeing.

In short, humanitarians need to critically assess how using new technologies can potentially expose already vulnerable populations to further risks and insecurities, even where intentions are at their best and conditions at their most challenging.

Katja Lindskov Jacobsen is Assistant Professor in International Risk & Disaster Management at Metropolitan University College, Denmark.
