Issue 52 - Article 4

Real Time Evaluations: contributing to system-wide learning and accountability

Riccardo Polastro
DARA

Over the last 20 years or so the humanitarian community has introduced a number of initiatives to improve accountability, quality and performance. Codes of conduct, standards, principles, monitoring frameworks and Real Time Evaluations (RTEs) have all been rolled out, and a new humanitarian evaluation architecture has emerged, in which RTEs are becoming a central pillar.

What is an RTE?

An RTE is a participatory evaluation intended to provide immediate feedback during fieldwork to the stakeholders who execute and manage the response at field, national, regional and headquarters levels. An RTE provides instant input to an ongoing operation and can foster policy, organisational and operational change to increase the effectiveness and efficiency of the overall disaster response. For further information on the characteristics of RTEs see A. Jamal and J. Crisp, Real-time Humanitarian Evaluations: Some Frequently Asked Questions (EPAU/2002/05), UNHCR Evaluation and Policy Analysis Unit, May 2002, http://www.unhcr.org/research/RESEARCH/3ce372204.pdf.

RTEs are formative evaluations of intermediate results. They can ease operational bottlenecks and provide real-time learning. An RTE is intended to be a support measure for learning in action. RTEs are also improvement-oriented reviews – dynamic tools used to adjust and improve planning and performance. They can contribute to reinforcing accountability to beneficiaries, implementing partners and donors, and can bridge the gap between monitoring and ex-post evaluation.

RTEs are, in principle, carried out in the midst of an emergency operation. They are interactive, involving a wide range of stakeholders and therefore contributing to peer-to-peer learning and accountability. Because the results and recommendations are intended to be applied immediately, RTEs must be rapid, flexible and responsive. In contrast, mid-term evaluations look at the first phase of the response in order to improve the second phase, and ex-post evaluations are essentially retrospective: they examine and learn from the past. Monitoring in humanitarian aid is often absent and, when it is in place, is not adapted to the changing realities on the ground. An RTE can help bridge the gap as it provides an immediate snapshot that can help managers identify and address the strengths and weaknesses of the response.

RTEs are one of the most challenging types of evaluation because teams are usually fielded within six weeks to six months of a disaster, just when agencies are trying to scale up activities; the inter-agency RTE in Haiti in 2010, for instance, was deployed three months after the earthquake struck. In these circumstances an RTE can become burdensome to the agencies involved, and the exercise can turn into a ‘wrong time’ evaluation. RTEs also have to be carried out within relatively short periods of time: in general, teams have only two to three weeks to conduct the analysis and make the evaluation judgment before leaving the field. Findings are then fed back for immediate use. RTEs can potentially identify and suggest solutions to operational problems as they occur, and can influence decisions as they are being made by feeding back aid recipients’ and providers’ views.

RTEs can also reinforce the link between operations and policy formulation. This was the case in Mozambique, where the RTE examined how the UN humanitarian reforms were being rolled out in the field. A management matrix was drawn up and the recommendations were closely monitored by the UN Emergency Relief Coordinator (ERC), who looked at how Humanitarian Country Teams were applying lessons on UN humanitarian reform. See Tony Beck and Margie Buchanan-Smith, Joint Evaluations Coming of Age? The Quality and Future Scope of Joint Evaluations, ALNAP, 2008, http://www.alnap.org/pool/files/7rha-Ch3.pdf.

Methodological approaches

Evaluations of humanitarian aid demand specific methodological approaches because of the speed and turbulence of these interventions and the fast-evolving contexts in which they take place. Baselines are often absent and staff turnover is high. Evaluation teams must be small and flexible, with a very light footprint in the field: ‘all the team must fit in a Land Cruiser’. As with other evaluations, RTEs essentially use qualitative methods, including interviews (purposeful snowball sampling with ‘information-rich’ individuals, group discussions, etc.), extensive field travel to sample sites, peer review, observation and documentary research.

An RTE is more interactive than other types of evaluation – the evaluator acts as a facilitator, and there is sustained dialogue with key stakeholders throughout the evaluation in the field, in the national capital, at regional level and at HQ. The level of interactivity must be high and continuous in order to identify and resolve problems with organisational or operational performance and to act as a catalyst for improvements. The evaluator observes and advises on the emergency planning and operational process and fosters stakeholders’ involvement. As a result, during the RTE process stakeholders define what can be improved in the overall response, how and by whom, clearly outlining roles and responsibilities.

Single-agency and inter-agency RTEs

Single-agency RTEs focus on a particular agency’s response, while inter-agency or ‘joint’ RTEs evaluate the response of the humanitarian system as a whole. Joint RTEs adopt a broader perspective and offer a deeper understanding of cross-cutting elements such as the overall direction, coordination and implementation of the response, including needs assessments, threats to humanitarian space and coordination and operational bottlenecks. When done jointly, an RTE represents a learning and accountability opportunity for participating agencies and national and local governments, as well as affected communities. Actors involved in the response are consulted (the affected population, national government, local authorities, the military, local NGOs, international donors, the UN, the Red Cross/Red Crescent and INGOs), fostering increased learning and accountability across the humanitarian system. See John Cosgrave, Ben Ramalingam and Tony Beck, Real-time Evaluations of Humanitarian Action: An ALNAP Guide (Pilot Version), 2009, http://www.alnap.org/pool/files/rteguide.pdf.

Key stakeholders

Normally, the primary audience of an RTE is in the field, the secondary audience is at HQ and the tertiary audience is the humanitarian system as a whole. However, this strongly depends on who initiates the RTE and who raises the key issues to be addressed. If the evaluation is launched from headquarters, the level of ownership in the field is likely to be reduced. In this case, the RTE may be perceived as intrusive and primarily geared to upwards accountability, rather than facilitating joint learning and accountability on the ground. In contrast, when the exercise is initiated in the field – as was the case in the Mozambique inter-agency RTE of the response to the floods and cyclone in 2007, and in the RTE of the humanitarian response to Pakistan’s internal displacement crisis in 2010 (see Riccardo Polastro, Aatika Nagrah, Nicolai Steen and Farwa Zafar, Inter-Agency Real Time Evaluation of the Humanitarian Response to Pakistan’s 2010 Flood Crisis, DARA, March 2011, http://ochanet.unocha.org) – the RTE is usually welcome, as all actors believe that it can contribute to improving the ongoing response.

An RTE can contribute to improved accountability to different stakeholders by involving them in the process. With both inter-agency and single-agency RTEs the agencies whose activities are being evaluated are meant to act on the recommendations. However, feedback tends to be given mainly to peers and donors. Despite being the primary ‘clients’ of the aid industry, beneficiaries and local and national governments rarely receive feedback on the recommendations or how they are being implemented.

Challenges and limitations

There is a growing tendency to describe any evaluation as ‘real time’. When an RTE is fielded too late, after the emergency response is over, its relevance and the need for it should be questioned. During the first ten months following the earthquake in Haiti, ten separate RTEs (for donors, the Red Cross, UN agencies, NGOs and at inter-agency level) were fielded, which represented an enormous burden on staff and key informants.

Agencies initiating these simultaneous RTEs claim that they have individual learning and accountability needs, but there is no evidence to suggest that the added value outweighs the costs. Joint RTEs, in contrast, can add value by providing a mechanism for increased peer-to-peer accountability, particularly if the Humanitarian Coordinator (HC) implements recommendations. By involving aid beneficiaries and local authorities, joint RTEs can also reinforce downwards accountability and learning. Unfortunately, such conditions can be difficult to achieve; only in the Philippines (2010) and Pakistan (2011) inter-agency RTEs were governments involved throughout the evaluation process.

Another challenge concerns who initiates and owns the evaluation. If HQ initiates the evaluation, key stakeholders in the field are likely to be less involved in identifying the issues and key questions, as well as during implementation on the ground. For the evaluator, the challenge becomes identifying what the key questions are, who poses them and who will use the evaluation findings and recommendations. None of the Humanitarian Country Teams that had an RTE fielded in 2010 drew up management matrices defining which recommendations had been accepted, who was responsible for taking action on them and what the deadline was for doing so. Only in the cases of the Mozambique (2007) and Myanmar (2008) RTEs were management matrices drawn up after the reports were released.

Another recurrent problem in many types of evaluation is the limited time available for consultations with beneficiaries. Careful planning can ensure that what time there is is used to best effect, maximising stakeholder consultation. For instance, in Mozambique, because the inter-agency RTE was both initiated and supported by the field, four of the five team members were able to travel extensively and consult a representative sample of local people in the provinces affected by the cyclone and floods. Similarly, in the Pakistan 2010 floods RTE, incorporating lessons from previous RTEs, the team dedicated 80% of its time to field consultations, thanks to the involvement of all field hubs. It is important to strike a balance between site visits (to gather a representative sample of the affected population) and interviews with information-rich individuals (who tend to be in capitals managing the response). The lack of experienced evaluators is another key challenge, as suitable candidates are generally booked up three to six months in advance.

A final limitation is a lack of funding, even when calls for proposals for RTEs are launched. For instance, in July 2010 a call for proposals was issued for the Kyrgyzstan inter-agency RTE, but no funding was secured; the Flash Appeal was also underfunded due to the time of year and the focus on other emergencies, such as Haiti and Pakistan. In the case of the Pakistan 2010 RTE, donor funding took a long time to be disbursed.

Conclusion

RTEs have a key role to play in humanitarian aid. First, they can contribute to improved learning and accountability within the humanitarian system. Second, they can bridge the gap between conventional monitoring and evaluation. Third, they can influence policy and operational decision-making in a timely fashion, and can identify and propose solutions to operational and organisational problems in the midst of major humanitarian responses. That said, there is a risk that RTEs may become just a wasteful box-ticking exercise, especially when carried out too late. The tendency to use them primarily for upward accountability purposes rather than for field-level peer learning and accountability undermines the added value of RTEs for personnel involved in the response.

To improve the humanitarian system’s planning and performance, RTEs should be done at the right time. A triggering mechanism is needed to ensure that this happens and that adequate human and financial resources are allocated. Incentives for improving knowledge management and encouraging real-time learning and accountability must be identified at the field level. There is a need to specify who is responsible for implementing recommendations and action plans after they have been formulated. Workshops with key stakeholders can help to validate and prioritise recommendations presented in the draft report and assign responsibility for implementation.

Finally, to maximise the potential contribution of RTEs to accountability and lesson learning, it is key that they become exercises that are ‘owned’ and utilised by Humanitarian Country Teams (HCTs), rather than headquarters-imposed exercises carried out by flown-in evaluators who come and go. For inter-agency RTEs, the ERC must hold Humanitarian Coordinators accountable for developing, monitoring and reporting on the implementation of action plans agreed by the HCTs. In addition to ensuring adequate involvement of field-level stakeholders in the RTE, including aid recipients and local authorities, initiating organisations need to provide them with regular feedback on the implementation of recommendations. Last but not least, RTEs should be disseminated more widely so that the broader humanitarian system can benefit from them. Reports of all inter-agency RTEs carried out to date are publicly available at http://www.unocha.org/what-we-do/policy/thematic-areas/evaluations-of-humanitarian-response/reports.

Riccardo Polastro is Head of Evaluation at DARA. This article draws on a presentation on ‘Lessons Learned from Recent RTEs’ given by the author at the 26th ALNAP meeting in Kuala Lumpur, Malaysia, in November 2010. PowerPoint slides of the talk are available at http://www.alnap.org/pool/files/ia-rtes-alnap-riccardo.pdf.
