Issue 26 - Article 5

The Joint Evaluation of Emergency Assistance to Rwanda

April 1, 2004
John Borton, John Borton Consulting

The 1994 genocide and the ensuing relief operations provoked an unprecedented international collaborative evaluation process – the Joint Evaluation of Emergency Assistance to Rwanda (JEEAR) – which has remained unsurpassed in terms of its scope and scale, and arguably its impact. This article reviews the JEEAR and follow-up process, and offers some personal observations on the evaluation’s impact eight years on.

The Joint Evaluation

The JEEAR process was first proposed by the Danish government’s aid agency Danida in September 1994, just two months after the end of the genocide and the influx of almost two million refugees into eastern Zaire. An approach to the OECD’s Development Assistance Committee (DAC) Expert Working Group to approve the process as a DAC activity did not receive the full support of all DAC member governments, and so in November 1994 Danida’s Evaluation Department organised a meeting of organisations interested in participating in a collaborative evaluation process.

The meeting, in Copenhagen, was attended by a broad range of bilateral and multilateral donors, UN agencies and NGOs. It agreed the organisational structure for managing and overseeing what would clearly be a complex and unprecedented evaluation process. The whole effort would be guided by a 38-strong Steering Committee representing the international aid community, while day-to-day management would be entrusted to a Management Group comprising the heads of the evaluation departments of the Swedish aid agency Sida, Norway’s Norad, Danida, the UK’s Overseas Development Administration (now DFID) and the US Agency for International Development (USAID).

Danida acted as the chair. The Steering Committee held its first meeting in December, at which terms of reference were approved for five separate studies. Each member of the Management Group took responsibility for managing one of the five.

Study 1, on historical perspectives, produced its report first, so that it could act as a resource for the other studies. Studies 2, 3 and 4 all circulated their draft reports to the Steering Committee in October 1995, and each team gave a presentation to the November Steering Committee meeting in Copenhagen. Work on the synthesis began in December 1995, merging the main findings, conclusions and recommendations from studies 2, 3 and 4 into one overall report containing 64 recommendations.

All five reports were published in March 1996. Simultaneous launch events were held in Geneva, New York and Nairobi, with a press launch in London. Over 5,000 copies were printed and distributed.

The scale of the process was unprecedented. Overall, 52 researchers and consultants were employed on the five studies, and the cost of the whole process including translation and dissemination of the published reports was $1.7 million. The largest of the studies, Study 3 on humanitarian aid, cost $580,000 and had a team of 20 specialists and support staff with a combined input of four person-years.

The JEFF process: an early assessment of impact

At its meeting in November 1995, the Steering Committee agreed to review the impact of the JEEAR reports one year after their publication, and a second process, the Joint Evaluation Follow-up, Monitoring and Facilitation Network (JEFF), was set up to monitor and report on the evaluation’s 64 recommendations. JEFF was a small network of 11 individuals representing the Management Group, the study teams and the Steering Committee, with a part-time secretariat and a modest budget. In the 15 months following publication, JEFF members participated in a total of 73 events. JEFF’s final report was issued in June 1997.

The JEFF process assessed the status of each of the 64 recommendations according to four principal categories (A–D) and two mixed categories (A/D and C/D).

Two-thirds of the recommendations were judged to have had at least some positive outcomes. The main areas of progress were:

  • the strengthening of human rights machinery in Rwanda;
  • the development of early-warning information systems in the Great Lakes region;
  • the broadly supported efforts within the NGO community to improve performance through the development of standards and self-regulation mechanisms; and
  • the commitment shown by donors, UN agencies and NGOs to improve accountability within humanitarian aid.

The main areas where no progress was found were:

  • ‘Fostering Policy Coherence’ (directed at the UN Security Council, Secretariat and General Assembly); and
  • ‘Effective Prevention and Early Suppression of Genocide’ (directed at the UN Security Council, the secretary-generals of the UN and the Organisation of African Unity (OAU), and the High Commissioner for Human Rights).

The four recommendations that had been formally considered and rejected (category B) involved the more radical of the options offered on UN coordination, the regulation of NGO performance and mechanisms for improving accountability.

The longer-term impact of the JEEAR

The evaluation literature identifies four main ways in which evaluations are used:

  1. Guidance for action – the direct use of the evaluation to change programmes or policies
  2. Reinforcement of prior beliefs – reaffirms and bolsters the confidence of those who want to press for change
  3. Mobilisation of support – providing ammunition for a particular change
  4. Enlightenment – a general increase in understanding that may not itself lead to action, but that leads to changes in thinking and the reordering of priorities that may eventually result in a change.

While evaluations certainly are used directly to effect change, this appears to be the least common outcome. This is broadly the case with the JEEAR. Whilst the evaluation can claim to have had a direct impact on certain programmes and policies, it has also had many other less direct impacts and uses, though these are often difficult to measure and assess objectively.

Three personal observations are offered below on the areas where the impact of the JEEAR seems to have been more and less evident.

  1. The JEEAR’s impact is most evident in the areas of humanitarian accountability and evaluation

At least three of the significant initiatives aimed at improving accountability and performance in the humanitarian sector over the last eight years – the Sphere Project, the Active Learning Network for Accountability and Performance in Humanitarian Action (ALNAP) and the Humanitarian Accountability Project (HAP) – stemmed directly from, or were substantially influenced by, the JEEAR.

Although Sphere’s beginnings just predated the JEEAR, the evaluation gave the project a fillip, partly by encouraging the initiative as a piece of welcome self-regulation, and partly by raising the prospect of external regulation of the NGO community.

ALNAP, a network bringing together bilateral and multilateral donors, UN agencies, NGOs and the Red Cross, grew out of a European bilateral donor meeting in 1996 to consider the JEEAR, and was significantly influenced by the inclusiveness and perceived value of the JEEAR Steering Committee. Finally, while the JEEAR’s recommendation for a ‘humanitarian ombudsman’ was initially rejected, British NGOs nevertheless set up the Humanitarian Ombudsman Project, out of which grew the HAP.

The JEEAR also appears to have made significant contributions to the evaluation of humanitarian action through:

  • direct and indirect contributions to thinking on the methods and approaches to the evaluation of humanitarian action; and
  • providing a ‘demonstration effect’ that encouraged the greater use of evaluation in the sector and setting a ‘gold standard’ for it.


  2. The JEEAR’s impact is much less evident in relation to the discourse on the prevention of genocide, and to political and military processes in the Great Lakes

The JEEAR made an important contribution to the understanding of early-warning signals and decision-making processes in the UN and Western capitals. Indeed, it can legitimately claim to have put into the public domain Major-General Romeo Dallaire’s now-famous cable of 11 January 1994, discussed by Randolph Kent in his article (pages 9–11).

However, the JEEAR’s contribution to the discourse on how to prevent genocide seems to have been less clear. The events of 1994 have been the subject of numerous publications, including by people involved in the JEEAR. In addition, the genocide was the subject of official international investigations, including by the UN in 1999 and the OAU the following year. The JEEAR was therefore one study among many, and its contribution and ability to provide a focus for the debates on how best to prevent genocide were, it would seem, correspondingly diluted.

During the JEEAR process and for ten months after its publication, 1.8 million Rwandans lived as refugees in neighbouring countries. The new Rwandan government struggled to establish its control over the country and its international credentials. In November 1996, many of the refugees in the camps around Goma returned to Rwanda as a result of Rwandan military action against the Hutu militia who had been controlling the camps.

Whilst this broke the impasse with the refugees and enabled the Rwandan government to focus on reintegration and stabilisation inside Rwanda, it also saw the start of several years of direct and indirect Rwandan involvement in the civil war and ethnic conflict in large areas of Zaire (now the DRC). This fundamentally altered the context in which the study and the recommendations had been generated, and may have made its conclusions appear less relevant than at the time of publication.


  3. The recommendations on policy coherence were misinterpreted by some actors

The JEEAR argued that the lack of effective political responses to the genocide, and to the problem of Hutu militia control of the camps in Zaire, forced humanitarian agencies to work in situations that were untenable. However, the JEEAR’s call for more effective political action and greater policy coherence between the aid and political spheres seems to have been interpreted by some donor organisations as a call for the integration of humanitarian assistance within an overall political framework. For instance, the British government appears to have pursued a policy of not funding humanitarian aid in Sierra Leone after the March 1997 coup there, fearing that the aid would sustain the (unwelcome) new regime.

Conclusions

These observations are subjective and impressionistic. It is highly likely that a more thorough exploration will reveal other areas where a linkage between changes in policy and practice can be traced to the JEEAR. It may also be that the effects in relation to genocide prevention and politico-military processes in the region have been more positive than appears to be the case to this observer, at this stage.

A larger study of the legacy of the JEEAR is planned for presentation to the ALNAP Biannual Meeting in Copenhagen in June 2004. Whatever its outcome, it is clear that the JEEAR represented a unique process – a product of the shock felt by so many of those working in the aid community at what had happened, and been allowed to happen, in Rwanda.

Under the able leadership of Niels Dabelstein, the Head of Evaluation at Danida, that sense of shock was used to galvanise the aid community into undertaking a collaborative process that has had a fundamentally positive impact within the humanitarian sector, and in other areas as well. Efforts at similar collaborative, system-wide evaluations following Hurricane Mitch in 1998 and the conflict and humanitarian crisis in Kosovo in 1999 failed to bear fruit. Although the benefits of such evaluative exercises are readily apparent, it seems that it takes events as shocking as those in Rwanda in 1994 to generate the effort and collaborative spirit required.

John Borton is a freelance consultant specialising in evaluation and learning activities in the humanitarian sector. He was formerly an ODI Research Fellow, during which time his roles included being Coordinator of HPN and, more recently, of ALNAP. His email is johnborton@ntlworld.com.

References and further reading

The five JEEAR reports can be downloaded from the evaluation section of the Danida website at www.um.dk/danida/evalueringsrapporter/1997_rwanda.

See also:

The Joint Evaluation of Emergency Assistance to Rwanda: A Review of Follow-up and Impact Fifteen Months After Publication (London and Copenhagen: ODI and Danida, 1997).

John Borton, ‘Doing Study 3 of the Joint Evaluation of Emergency Assistance to Rwanda: The Team Leader’s Perspective’, in Adrian Wood, Raymond Apthorpe and John Borton (eds), Evaluating International Humanitarian Action: Reflections from Practitioners (London: Zed Books/ALNAP, 2002).

Margaret Buchanan-Smith, How the Sphere Project Came Into Being: A Case Study of Policy Making in the Humanitarian Aid Sector and the Relative Influence of Research, Working Paper 215 (London: ODI, 2003).

Niels Dabelstein, ‘Evaluating the International Humanitarian System: Rationale, Process and Management of the Joint Evaluation of the International Response to the Rwanda Genocide’, Disasters, vol. 20, no. 4, December 1996.

Organisation of African Unity, Rwanda the Preventable Genocide, International Panel of Eminent Personalities to Investigate the 1994 Genocide in Rwanda and the Surrounding Events (Addis Ababa: OAU, 2000).

United Nations, Report of the Independent Inquiry into the Actions of the United Nations During the 1994 Genocide in Rwanda, UN Doc. A/54/549, 15 December 1999.

Carol Weiss, Evaluation (Upper Saddle River, NJ: Prentice Hall, 1998).
