Issue 7 - Article 4

'One year on': follow-up monitoring of the Rwanda Evaluation

February 1, 1997
Humanitarian Practice Network

What happens to evaluations of humanitarian aid programmes once the final versions have been completed? Who is responsible for ensuring that the recommendations are given proper consideration and the outcomes recorded?

For most evaluations the answer to these questions is clear – responsibility for follow-up rests solely with the commissioning organisation. The commissioning organisation determines how the recommendations are responded to and, in many cases, chooses whom it informs about the evaluation and its findings. Whilst some commissioning organisations may decide to implement the recommendations and modify their structures and procedures accordingly (or request their partner organisations to do so), others may decide to ‘bury’ the report and take no heed of its recommendations.

For an unprecedented, system-wide evaluation commissioned by a 37-organisation Steering Committee – such as the Joint Evaluation of Emergency Assistance to Rwanda – responsibility for follow-up is bound to involve greater ambiguity. The Synthesis Report of the Joint Evaluation contained no fewer than 64 individual recommendations addressed to different parts of the international community, including the Security Council, heads of UN agencies, donor organisations, NGOs and media organisations. In late 1995, the Steering Committee agreed that it would reconvene one year after publication of the Evaluation to review the impact of the Study. At that time, however, it was unclear what mechanism (if any) would be used to monitor the follow-up to the Study and prepare a ‘one year on’ report to the Steering Committee. Sida took the initiative in establishing a group to undertake this role, and in May 1996 the ‘Joint Evaluation Follow-up Monitoring and Facilitation Network’ (JEFF) was formed. In mid-February 1997, just as we were going to press, the Steering Committee met to consider a draft version of the ‘one year on’ report prepared by JEFF.

JEFF was made up of 11 individuals based in seven countries, representative of the four Study Teams of the Joint Evaluation, the Management Group, and the types of organisation which comprised the Steering Committee. Funding and capacity support were provided to JEFF by CIDA, Danida, Sida and USAID, and ODI served as the ‘hub’ of the network. JEFF members participated in meetings where the Joint Evaluation was discussed and, in addition, attempted to monitor the outcome of those meetings and processes in which they were not able to participate. Documentation from these meetings was collected and held centrally by ODI. Members of the Steering Committee were also requested to prepare reports on follow-up within their organisations.

For the purposes of preparing the ‘one year on’ report to the Steering Committee, each of the 64 recommendations was accorded a ‘response status’ category:

‘A’ – not formally discussed by the organisations to which it had been addressed;
‘B’ – formally discussed but rejected;
‘C’ – formally discussed but not yet resulting in resolution or action;
‘D’ – resulting in resolution or action.

Because many recommendations had multiple addressees, some had been discussed and acted upon by certain addressees but not by others; these recommendations had to be accorded a dual category status.

The draft ‘one year on’ report prepared by JEFF concluded that the Joint Evaluation has had a substantial impact. Considerable debate has been provoked and, in a number of areas of policy and practice, significant progress has been achieved. According to the categorisation system used, more than half of the recommendations had seen some resolution or action. Follow-up on recommendations addressed to the Security Council and the UN Secretariat was found to have been very limited; but with the arrival of a new UN Secretary-General and recent evidence that the Security Council is more open to the perspectives of the humanitarian community (see article below), the prospects for follow-up in these areas over the coming weeks have improved.

Whatever the eventual impact of the Joint Evaluation, it is hoped that JEFF has demonstrated the value of mechanisms designed to monitor the follow-up to large, system-wide evaluations of the international community’s response to complex emergencies. The practice of undertaking automatic ‘one year on’ reviews of major evaluations is not widespread, even among bilateral and multilateral aid organisations, and the JEFF experience may encourage a trend in this direction. Lessons can also be learnt from the way JEFF functioned; the experience will be useful in the design of future follow-up monitoring mechanisms.

Within a system as complicated and as densely populated by organisations as the international community’s system for responding to complex emergencies, it is only too easy for individual organisations to avoid giving proper consideration to the recommendations addressed to them, or to reform proposals involving them, even when these have been prepared after substantial and careful study by external reviewers and researchers. As the various ‘Watch’ organisations have shown, well-informed scrutiny and the ability to place information in the public domain are critical components of accountability. In efforts to improve the functioning of the international community’s humanitarian system, mechanisms such as JEFF will have an important role to play in maintaining the pressure necessary to keep the system improving.
