Debating Accountability
June 2003

Defining what we mean by ‘quality’ is surprisingly difficult. Is quality associated with uniqueness, like a masterpiece in art? Or is it measured by the extent to which something adheres to a set of norms and standards? Or is it a matter of perception – is ‘quality’ in fact in the eye of the beholder? The same is true of ‘accountability’, which is intriguingly difficult to express in a number of languages, not least French, which requires a whole sentence to convey the sense. Despite these difficulties, the idea of making humanitarian action accountable – to donors and to recipients – has gained increasing currency. The most concrete expressions of this trend are the Sphere project, a mechanism for the quantitative benchmarking of humanitarian action, and the Humanitarian Accountability Project (HAP). Yet by no means all humanitarians agree with what Sphere and the HAP represent. This article argues that the kind of benchmarking they envisage is unhelpful, perhaps even dangerous, and describes an alternative approach to the question of quality and accountability. Everyone working in humanitarianism is concerned with improving the quality of the assistance delivered, and we should all acknowledge the need for accountability. Whether Sphere and the HAP are the right mechanisms for achieving this is, however, another matter.

Critiques

The Sphere project has two parts: a Humanitarian Charter and a set of Minimum Standards. Both are open to criticism: the charter because it endangers existing texts and laws, and allocates to NGOs responsibilities that are not theirs; the standards because many of their technical points are neither universally accepted nor universally relevant. In its desire to standardise, the Sphere project risks mechanising the operation of humanitarian aid. Using technical indicators as the standards by which ‘quality’ is measured ignores the diverse cultural, political and security contexts in which aid is delivered, and against which the relevance and appropriateness of aid need to be judged. Universal benchmarks ignore the fact that each humanitarian emergency is unique, and each calls for different, perhaps original, responses. If funding bodies adopt these standards as decision-making criteria, agencies will increasingly be compelled to demonstrate ‘success’ in ways that do not reflect the totality of humanitarian action, which has important aspects that are not open to measurement in any formal sense. How do you quantify compassion and solidarity? Worse, Sphere may be used by donors as a way of withholding funding from NGOs deemed not to ‘conform’. Standardisation based on benchmarks developed by Northern NGOs also risks penalising agencies from the developing world, turns NGOs into service providers and casts victims merely as bodies to be fed, sheltered or transported. A market logic of supply and demand would take over from factors like solidarity and justice in the provision of aid.

The HAP began life as the Ombudsman project, which aimed to create a ‘complaints procedure’ whereby beneficiaries of humanitarian aid could judge the adequacy of the assistance they had received. One of the main problems with this is that it absolves both the local authorities and the international community of responsibility for people’s welfare by shifting the focus exclusively onto the NGOs, and making them accountable for the well-being of the population at risk. The second problem concerns methodology. How do you define the victims and identify who should speak in their name – and hence how do you identify the people to whom humanitarians should be held accountable? Is a victim simply someone who receives assistance? As the Interahamwe showed us in the camps in former Zaire in 1994–96, this is an unreliable guide.

The Quality Platform

The Quality Platform (QPF) is one expression of the opposition to Sphere and the HAP. It was set up by a group of French NGOs in mid-2000, and within a few months NGOs from nine other countries had joined. It is designed to raise awareness that there is disagreement over the value of Sphere and the HAP, and to give voice to the reaction against what one African NGO has termed Sphere’s ‘bulldozer’ approach.

The QPF argues that technical standards can only be used within the framework of policies that pay much greater attention to the specific and diverse contexts in which humanitarian aid is delivered. It advocates the enhancement of local participation, improved analysis of the political context, a better understanding of the impact of aid on the local environment, greater attention to staff training and a reaffirmation that states, not NGOs, have the primary responsibility for safeguarding their citizens. This responsibility includes respecting international human-rights and humanitarian law and allowing NGOs free access to people in need. There has been regular contact between the Sphere team and the agencies behind the QPF, and these concerns have been frequently aired. In some areas – the potential for Sphere to be manipulated by donors, for instance – there is a certain level of agreement. However, the QPF team has reached the view that there is little willingness to rethink the process as a whole.


The Quality Project

Agencies have also developed a Quality Project (QP), which sets out alternative ways of improving humanitarian assistance. The QP is built around the three stages of the project cycle: initial diagnosis and context analysis; design and implementation; and evaluation and learning.

Many funding proposals prepared by humanitarian agencies contain little information about the context in which assistance is to be delivered. In effect, these proposals represent an agency ‘offer’ rather than a real analysis of the problem at hand – the local needs, the constraints on humanitarian action and the local capacities available. This tendency will be accentuated if the approach to quality is dominated by norms and standards in predefined fields.

By contrast, the QP seeks to take account of the diversity of situations in which humanitarian assistance might be delivered, and tries to frame the programmes most appropriate to these circumstances. Doing so means developing tools for context analysis, needs appraisal and capacity assessment. The response to an acute emergency is not the same as the response to a protracted crisis, and the response required in fragile situations between peace and war is different again. In South Sudan, for example, such an assessment might dictate wide-ranging support to livelihoods; in Albania, the core of the programme might be helping families sheltering Kosovar refugees. Where this kind of context analysis is not developed, assistance tends to default to standard responses, channelled primarily to camps. Similarly, the particular and extreme conditions in Grozny make an approach based on imported norms and standards absurd and unhelpful to the handful of aid actors still operating there.

The second component of the QP is the elaboration of a process for assisting with programme design. Since the initial needs appraisal might result in a number of possible alternatives, the QP is developing ‘filters’ to help in the decision as to how programmes should be designed.

The last component of the QP relates to evaluation. Here, the focus is on learning rather than on accountability. It is assumed that NGOs are committed to being accountable to their donors: most of the French NGOs involved in the QP are members of the Comité de la Charte, an institution designed to ensure financial transparency and accountability in the use of funding. The best way to ensure accountability to beneficiaries is to develop and strengthen participatory mechanisms in the diagnosis, design and implementation of programmes. This precludes the use of preset formulas.

People, not processes or ‘technics’

Ultimately, people, not processes, hold the key to high-quality humanitarian action. Thus, developing training modules is an integral part of the QP. As part of the ongoing research around the QP, missions are planned to a range of countries, including El Salvador, Nicaragua, Afghanistan and Sudan. The smartest way to measure the accountability of institutions and the quality of their actions would be to use two ‘proxy indicators’: the percentage of the agency’s financial resources allocated to evaluation and learning; and the percentage of the evaluation report that goes into the public domain. These two indicators would underline a public and transparent commitment to doing better, and a greater willingness to allow public scrutiny.


François Grünewald is Chairman of the Groupe URD and Associate Professor at Paris XII University; Claire Pirotte and Veronique de Geoffroy are both on the staff of the Quality Project. For more information, contact HumaQuality@aol.com.
