Real-Time Evaluation: where does its value lie?
The practice of humanitarian evaluation has come a long way. Fifteen years ago, evaluations of emergency assistance or disaster relief programmes were rare; today, by contrast, we may be seeing an unprecedented evaluation boom. The evaluative response to the crisis in Darfur and to the tsunami emergency suggests that this boom may be getting louder, with many evaluation teams despatched to both regions. It is incumbent on the humanitarian evaluation community to examine the effects of this boom, and to ensure that evaluation is better able to influence performance.
One significant development in this context has been the growth of Real-Time Evaluation (RTE), as opposed to traditional ex-post evaluation. The key principle underlying RTE is that it can affect programming as it happens. This makes it similar to monitoring, and challenges the conventional categorisation of activities as either monitoring or evaluation. This article looks at the practice of RTE, asks how it relates to traditional evaluation practice and theory, and considers where its value lies.
RTE: a brief history of concept and practice
In other sectors, one finds In-Process Reviews, Process Evaluations and similar exercises, and RTEs have long been common practice in various fields of science and technology. One of the first references to RTE in the humanitarian literature is in Alistair Hallam's Good Practice Review, Evaluating Humanitarian Assistance Programmes in Complex Emergencies, published by the Humanitarian Practice Network in 1998. It is likely that the immediate desire to save more lives played a significant part in RTE's adoption by front-line agencies, which were concerned that traditional, after-the-fact evaluations came too late to affect the operations they were assessing. Given that institutional memory in many organisations is weak, there was also concern that these evaluations were not influencing future operations either.
In the humanitarian sector, UNHCR's Evaluation and Policy Analysis Unit (EPAU) was for several years the chief proponent of RTE, publishing an FAQ page on its website, as well as carrying out RTEs and putting the reports on the web. WFP, UNICEF, the Humanitarian Accountability Project, CARE, World Vision, Oxfam GB, the IFRC, FAO and others have all to some degree taken up the practice. The UK's Disasters Emergency Committee (DEC) carries out similar processes under the title of 'Monitoring Mission', and Groupe-URD has an Iterative Evaluation Process (L'évaluation itérative avec mini-séminaires (EIMS); see http://www.urd.org).
RTE methodology
Although there are diverse methodological approaches to RTE, there are also some perceptible common characteristics.
- The RTE takes place during the course of implementation (EPAU recommends that it starts as early as possible).
- Like monitoring, it may aim to be iterative rather than one-off; hence the idea of 'ongoing evaluation'.
- The time-frame is short, with each exercise typically lasting days, rather than weeks.
- The methodology pays the usual attention to secondary sources of information, but is then interactive. Most RTEs are carried out through field visits combined with headquarters meetings, although some have been based purely on telephone interviews with field-based staff.
- RTEs use internal consultants rather than, or sometimes alongside, external or independent ones. The number of team members varies from one to many, and may include sectoral or other specialists, and local staff or consultants.
- Restricted use is made of the DAC evaluation criteria, with greater emphasis on process.
- The emphasis is on immediate lesson-learning rather than on impact evaluation or accountability.
- 'Quick and dirty' results enable a programme to be changed in mid-course. This brings RTE closer to monitoring, which primarily tracks progress, than to evaluation, which makes value judgements.
RTE outcomes
An RTE is an improvement-oriented review; if it can be carried out with programme or project staff, it can be regarded more as an internal function than as an external process. That said, it is also important that management initiate and support the RTE, and own the results; as we shall see below, many of an RTE's findings are directed at management too.
Unlike the majority of final ex-post evaluations, the process and products of an RTE are integrated within the programme cycle. Interaction with programme staff and managers during the course of implementation means that discussion, which may or may not be reflected in a final document, can help to bring about changes in the programme, rather than just reflecting on its quality after the event. As EPAU notes: 'a real-time evaluator is actually a facilitator, encouraging and assisting staff to take a critical look at their operation and to find creative solutions to any difficulties they are encountering'.
Staff, especially junior staff, often find this very supportive: it acts as a kind of catharsis, providing a way of expressing tensions and concerns in a relatively unthreatening setting. Those familiar with the stresses of a full-on emergency response will see that this should be more than an incidental effect; it should be a core outcome of the RTE process. This is not, of course, to deny the challenge of arriving with an extra agenda and working with teams that are under great pressure to scale up, to set up and to deliver results.
The report that an RTE produces is, in that sense, not as central to the evaluation as it is in a final ex-post evaluation. Indeed, if the process has helped to encourage change, the report should be out of date by the time it is produced and circulated. Nevertheless, it should also constitute a written record for longer-term institutional lesson-learning. The difficulty is that the RTE will inevitably capture a snapshot of the situation, which runs the risk of fixing a picture that, by the time the report is read, is no longer valid. It is important to highlight this risk in the report. Alternatively, there is the chance that an ever-changing picture will make it all but impossible to complete the exercise or finalise a report for dissemination.
Terms of reference, findings and recommendations
An ex-post evaluation should have terms of reference (ToR) that define the areas of investigation, which may or may not cover all that is proposed in the ALNAP Pro Forma for evaluations; for example, the evaluation may be aimed at learning specific technical lessons, or at capturing management practices only. The practice for RTEs is less settled, although their ToR tend towards a short list of topics covering both programming and management, rather than a more carefully defined list of specific areas of concern.
Bernard Broughton suggests looking at the operation's relevance and design, progress in achieving the operation's objectives (i.e. results), any gaps or unintended impacts, the effectiveness and efficiency of the mode of implementation, and the appropriateness and application of operational guidelines and policies. The DEC's Monitoring Mission ToR specify the last of these as adherence to the Code of Conduct for the RC/RC Movement and NGOs in disaster relief, the Sphere guidelines, and the best practice encapsulated in People in Aid and the IASC code on protection from sexual abuse and exploitation. UNHCR suggests that RTEs will 'reverse [the conventional] process [which tends to look at specific situations and draw general conclusions]: the real-time evaluator will be aware of such general lessons and apply them to specific situations'. This emphasis appears to be common in ToR for RTEs.
The judgements made in RTEs generally concern how results are being achieved: they look at process rather than impact. Impact is not only harder to pin down in general but, given the timing of an RTE, meaningful statements about it cannot be expected. However, an RTE can still play a valuable part here: it can provide a descriptive baseline, at the point at which it is carried out, for future evaluative efforts, and it can propose areas for a later ex-post evaluation to concentrate on.
Ex-post evaluations have always tried to fulfil two functions: learning and accountability. These have often not fitted comfortably together, and have even involved a trade-off. Trying to be accountable through evaluation should not be at odds with learning from what has been done, but the time-frames are incompatible. The accounting is a closed exercise, telling us what was done and how, with the associated costs. The lesson-learning, as far as ex-post evaluations are concerned, is a doorway into a process that needs to be taken up in contexts separate from the one in which the lessons were generated: taken back into an organisation, or transferred to other field locations. An RTE is unlikely to enhance 'backwards' accountability, that is, to donors; on the other hand, if carried out in the field and including consultations with actual or potential beneficiaries, it may promote 'forwards' accountability. However, an RTE is not strongly directed at accountability unless, as UNHCR has generally done, the reports are put into the public domain. A priori, it is likely that an RTE undertaken early in a humanitarian response will point to failings in both programming and management, and some organisations have not felt able to publish such reports. An RTE's ability to address the question of impact is also probably weak, depending on the phase of the programme at which it takes place, despite the aspirations expressed in many RTE ToR.
As the evaluation industry has expanded, there has been concern that this growth has come at the expense of evaluation's poorer cousin, monitoring. This has led to a disproportionate emphasis on accountability and a focus on longer-term outcomes and impacts, rather than on real-time programmatic change and improvement. It is likely that RTEs have emerged as a way of bridging this widening gap between conventional ideas of monitoring and evaluation. Looked at from another point of view, one might even ask whether the evaluation lexicon has simply colonised some of the language more commonly associated with monitoring.
Conclusions
What should we make of this increased interest in RTEs? Inconsistencies in methodology, an absence of theoretical underpinnings and resistance to categorisation and standardisation have left some in the evaluation community wondering whether RTEs are sufficiently rigorous tools. Sceptics may have a point. However, let us first recognise and underline the importance of responding to the humanitarian imperative, and of encouraging evaluative mechanisms geared to helping people, rather than to improving the system (important though this is). In this light, RTEs may be seen as a natural corrective to an over-emphasis on ex-post evaluation, and to the institutional agenda that this has generated.
Specifically, RTE gets closer to two sets of people who are critical for effective humanitarian action: field staff and the people affected by disasters. Each year, the ALNAP Review of Humanitarian Action emphasises that field staff are the lynchpin of effective humanitarian assistance. Each year, it also highlights that field staff, particularly national staff, are often neglected. The facilitatory element in RTE has the potential to offer much-needed support at this level. This can only be a good thing. RTEs are also much better than ex-post evaluations at getting close to the people affected by crisis, and this should pay dividends in improving accountability to beneficiaries, still generally a lamentable omission among the improvements the sector has made in the past 20 years.
Maurice Herson (m.herson@odi.org.uk) is ALNAP Senior Projects Manager. John Mitchell (j.mitchell@odi.org.uk) is ALNAP Coordinator.