Issue 3 - Article 4

Accountability in Disaster Response: Assessing the Impact and Effectiveness of Relief Assistance

April 1, 1995
Humanitarian Practice Network

There are three powerful reasons why the effectiveness and impact of relief programmes should be regularly assessed.

First, they are intended to save lives and reduce suffering and their effectiveness is therefore crucial to the affected population.

Second, the international community spends hundreds of millions of dollars each year on the provision of relief assistance, and the costs are rising.

Third, evaluations can, potentially at least, fulfil an important accountability function. In contrast to financial accountability mechanisms, which are now comparatively well developed within donor and implementing agencies, the mechanisms for ensuring accountability to the population being served are poorly developed.

Perhaps as recently as ten years ago a prevalent attitude among relief agencies was that assessments of the effectiveness and impact of relief programmes were unnecessary, if not unwelcome.

When articulated, the sentiment was effectively: ‘Our intentions were good, we did our best under difficult circumstances, so why should we now subject ourselves to critical examination?’ Fortunately, such attitudes are now less prevalent.

However, the situation remains highly unsatisfactory. There are powerful factors which prevent or discourage the evaluation of relief programmes and which limit the ability of those evaluations which do take place to assess properly the effectiveness and impact of the assistance provided.

Many of these factors stem from the nature of emergency programmes and the organisational, technical, methodological and political difficulties that this creates for evaluators.

The methods and approaches used to evaluate development assistance have often been transposed onto relief programmes. A review of experience with the evaluation of development assistance raises a number of points.

First, the dominant approach in the evaluation of development assistance, at least until recently, required that project outputs could be readily identified, measured and valued.

Second, approaches to the measurement of efficiency and impact required very substantial investment in time-consuming data collection and analysis in order to produce statistically valid results.

Finally, the body of knowledge which exists on the evaluation of development assistance is currently in flux. A number of ‘new’ techniques such as Rapid Rural Appraisal have been developed over the last decade to take account of shifts in the content and objectives of development projects.

The role of NGOs in the field of development is growing, and there is increased respect for the views of the populations being served. Participatory techniques are only just beginning to be used in the evaluation of relief programmes, but it would appear that these offer a rich source of methods.

Evaluating Relief: Defining the Difficulties

Transposing the methods used to evaluate development programmes to the evaluation of relief interventions poses a number of difficulties.

First, there is the matter of terminology. Strictly speaking, studies which do not address all five aspects of evaluation (relevance, efficiency, effectiveness, impact and sustainability) should not be called ‘evaluations’. They fall more properly into the category of reviews or audits, and it is these which predominate amongst assessments of the impact of relief programmes.

A second central problem faced by all evaluators of relief programmes is the lack of appropriate data, caused primarily by the severe time pressure characteristic of virtually all relief programmes. It would also seem that certain types of information go uncollected because too little thought is given, at the outset, to what will be needed to assess impact and outcomes.

This lack of an information strategy is both one of omission and of commission. The total absence of basic data such as utilisation rates of vehicles or timing of seed distributions is frequently encountered. Equally, where information is collected, for example through surveys, inconsistent methods are often used in successive surveys, depriving the evaluators of the opportunity to construct a valid longitudinal picture.
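
The point about consistency can be made concrete with a small sketch. Everything in it is hypothetical: the survey fields, method names and figures are invented for illustration, not drawn from any actual programme. It simply shows how holding the record format and sampling method constant across survey rounds preserves the longitudinal comparison that a change of method destroys.

    from dataclasses import dataclass

    @dataclass
    class SurveyRound:
        # One nutritional survey round; all field values are hypothetical.
        date: str
        method: str               # sampling method; must match for a valid comparison
        sample_size: int
        malnutrition_rate: float  # proportion of children below a chosen cut-off

    def longitudinal_trend(rounds):
        # Compare successive rounds, but only where the same method was
        # used in both; otherwise the comparison is invalid and skipped.
        trend = []
        for earlier, later in zip(rounds, rounds[1:]):
            if earlier.method == later.method:
                change = later.malnutrition_rate - earlier.malnutrition_rate
                trend.append((earlier.date, later.date, change))
        return trend

    rounds = [
        SurveyRound("1994-01", "30x30 cluster", 900, 0.14),
        SurveyRound("1994-04", "30x30 cluster", 900, 0.11),
        SurveyRound("1994-07", "convenience", 250, 0.09),  # method changed: lost to the trend
    ]
    print(longitudinal_trend(rounds))  # only the Jan-Apr comparison survives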

Ethical considerations also constrain the application of conventional criteria and methods of evaluation.

For example, the use of cost-benefit analysis to measure the efficiency of emergency medical intervention is viewed with distaste by many. The intensive feeding of severely malnourished children is known to be very expensive per beneficiary in comparison with less intensive supplementary feeding.

Such programmes also make substantial demands on the time of skilled personnel, possibly diverting them from involvement in programmes that would benefit and perhaps save the lives of much larger numbers of children.

Nevertheless, the chance of providing assistance to children who are close to death is frequently regarded as sufficient justification for the inclusion of intensive feeding components in most relief programmes. If decisions are made on the basis of non-economic factors, then evaluation techniques developed within an economic framework are, if not redundant, difficult to utilise.
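
A crude worked example shows the shape of the tension. Every figure below is hypothetical, chosen only to illustrate the arithmetic: on a pure cost-efficiency criterion the cheaper programme always wins, which is precisely why agencies resist applying that criterion alone.

    # Illustrative arithmetic only: all figures are hypothetical.
    budget = 100_000          # funds available (USD)
    cost_intensive = 500      # per severely malnourished child, intensive feeding
    cost_supplementary = 50   # per child, targeted supplementary feeding

    # The same budget reaches very different numbers of children:
    print(budget / cost_intensive)      # 200 children, intensive
    print(budget / cost_supplementary)  # 2,000 children, supplementary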

The environment of change and uncertainty which characterises relief interventions also raises difficulties for evaluation. In particular, the setting of objectives by which interventions will be judged is highly problematic: the goal posts are moving continuously. For instance, a delay in the arrival of food aid for use in general ration distributions might lead an agency to establish a targeted supplementary feeding programme in an attempt to prevent a deterioration in the nutritional status of children and other physiologically ‘vulnerable’ groups. In the face of such uncertainty, many relief agencies describe their objectives only in very general terms. This lack of specificity and identification of indicators of achievement means that conventional evaluation techniques cannot be transferred uncritically.

In contrast to development programmes which usually involve only a limited number of agencies, a relief programme typically involves a large number of organisations and agencies. Formal agreements and undertakings usually exist ‘vertically’ between donor organisations providing resources and those agencies responsible for receiving and distributing the assistance.

However, ‘horizontally’ the relationships between organisations are often informal and, depending on the context, the nature of the organisations involved and the personalities of key individuals, may be characterised by misunderstanding and rivalry. The lack of formal agreements between agencies reduces the ‘points of reference’ available to evaluators for comparing performance against agreed and documented roles and responsibilities.

In such a context there are obvious limitations to the usefulness of evaluations which focus only on the activities of particular agencies.

The case for joint evaluations involving several donor organisations and relief agencies was clearly illustrated by the experience in southern Africa during 1993–94, when at least a dozen agencies undertook independent evaluations of their response to the 1991–92 drought.

A multi-agency overall evaluation combined with more focused agency-specific case studies would probably have been more cost-effective, and, in terms of lessons for the international relief system as a whole, a more valuable approach. This has been recognised in the current evaluation of international responses to the crisis in Rwanda, described in more detail below.

The final, and perhaps most fundamental, difficulty confronting the evaluation of relief interventions is their high political and media profile. Relief programmes undertaken in areas of conflict require donor agencies to consider factors such as sovereignty, international law, the appropriate ‘balance’ of aid between opposing sides and perhaps also national foreign policy interests – factors which would not normally be considered when responding to a natural disaster in an otherwise peaceful country.

Evaluations of such responses therefore involve the evaluators in the examination of matters which would normally be kept out of the public domain, and which require different analytical skills and methods.

Most relief programmes, whether in response to natural or conflict-generated disasters, have a high media profile which may also influence decisions on the timing, scale and nature of the response.

Such a profile generally extends to the evaluations of these programmes. Taken together, the high media profile and the frequent involvement of political and legal questions in the response to relief needs in areas of conflict make the evaluation of relief programmes far more sensitive than that of the majority of development programmes.

Towards Improved Evaluation

Given all these difficulties, where does this leave the practice of evaluation in relation to relief programmes?

A starting point has to be that evaluations are seen as crucial to the process of learning from experience and improving upon past performance, and that the evaluation process and the information requirements for effective evaluation are accorded higher priority.

If agencies involved in relief operations are genuinely committed to improving their performance, then evaluations should be undertaken more frequently, and appropriate resources allocated to them.

There is a need for basic agreement among donor and implementing actors as to the criteria which should be used to measure the ‘success’ of relief interventions in conflict and man-made disasters. These criteria could be used to guide planning, implementation and evaluation, and to facilitate inter-agency comparison.

It will also be important that evaluators solicit the views of the population in the affected area. In most evaluations it will not be possible to undertake extensive surveys capable of yielding statistically valid results. Rapid Rural Appraisal and crude sampling techniques could be used to gather the views of small but reasonably representative samples of the population regarding the appropriateness, effectiveness and timeliness of the relief assistance, and its impact on existing coping strategies and community organisation.
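
As a minimal sketch of the kind of crude but roughly proportional sampling this paragraph has in mind (the site names, population sizes and number of interviews are all hypothetical):

    import random

    # Hypothetical sites and household counts; illustrative only.
    sites = {"Camp A": 4000, "Camp B": 2500, "Village C": 1500}
    total = sum(sites.values())
    interviews = 60   # small sample: indicative, not statistically valid

    random.seed(1)    # fixed seed so the sketch is reproducible
    for site, households in sites.items():
        n = round(interviews * households / total)
        chosen = random.sample(range(1, households + 1), n)
        print(site, n, "households:", sorted(chosen)[:5], "...")

Allocating interviews in rough proportion to site size is the simplest hedge against obvious bias: it does not yield statistically valid results, but it does yield a reasonably representative spread of views.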

Improving the accountability of relief interventions will depend, therefore, upon developing appropriate criteria and methods to measure ‘success’. But, perhaps more importantly, it will also rely upon the creation of a management culture which exacts high standards of accountability from donor and implementing agencies to recipient communities.

This article draws on a chapter by John Borton from the World Disaster Report 1995, which will be published shortly by the International Federation of the Red Cross and Red Crescent Societies.
