Issue 74 - Article 3

Re-centering our focus in humanitarian response

February 5, 2019
Bronwyn Russel
CFP team holding a focus group discussion with earthquake-affected Dalit women in Nuwakot district.

After the 2015 earthquake in Nepal, an experiment long in the making was brought to life. The experiment? To see if a common service approach to system-wide community engagement and accountability could successfully serve both the humanitarian community and the people affected by the earthquake. Nepal was not a simple context in which to realise this experiment. Over 800,000 households were affected by the earthquake, spread across half the country and some of the world’s most extreme mountainous terrain, with many communities having no road access at all. Despite these challenges, Nepal did benefit from a perfect combination of actors – including the UK Department for International Development (DFID), Ground Truth Solutions, the UN Office for the Coordination of Humanitarian Affairs (OCHA) and the Humanitarian Coordinator (HC) at the time – who shared the will and the expertise to bring this vision to life.

The Inter Agency Common Feedback Project (CFP) was born in June 2015 in the UN Resident Coordinator’s Office. It was conceptualised as a common platform to aggregate and consolidate feedback from earthquake-affected communities and provide inputs to the Humanitarian Country Team, Inter Cluster Coordination Group and clusters on the perspectives of earthquake-affected communities. The goal was to ensure that the voices of affected people had a place at the decision-making table and were able to influence response and recovery efforts.

It was simultaneously a novel concept and an obvious step. At the root of humanitarian activities, whether an individual’s, an organisation’s or a country’s, the underlying intention is to help people who are suffering as a result of natural or human-made crisis. If the goal is to help people, then it is undeniable that what those people think and feel about the way we are responding to their needs should be the primary measure of whether or not we’re achieving our goals.

How it works

Several components make up the CFP’s community engagement and accountability platform. Working together, they are designed to ensure that feedback from communities is collected regularly and processed efficiently, that it systematically feeds into senior decision-making, and that communities get answers to their urgent questions, concerns and grievances.

Feedback collection

The beauty of a common platform for feedback collection is that there is no targeting of ‘beneficiaries’. Just as it works to serve the entire humanitarian community, it also targets all affected people, so everyone gets an equal opportunity to have their voice heard. Ensuring that everyone has an ‘equal’ chance means using as rigorous a statistical sampling method as possible to achieve maximum representativeness. Although the sampling strategy has changed over time to reflect changing circumstances, the CFP generally collects over 2,000 household-level perception surveys at regular 2–3-month intervals. All of this feedback can be disaggregated by gender, caste/ethnicity, age group, disability and geographical location. While the sampling strategy is random, the demographic profile of respondents roughly matches that of the earthquake-affected areas. To ensure gender parity, enumerators request a respondent of a different age and gender from the pool of eligible respondents (household members over 15 years) at each successive household.

This is the foundation of the feedback component, and in fact the entire project. Feedback from communities is sought in a statistically significant, quantitative manner, instead of waiting for it to come in through other, more programme-specific mechanisms, such as hotlines or suggestion boxes. However, quantitative feedback alone has its limitations, which is why the CFP complements this data collection with focus group discussions, conducted in the majority of survey districts by project field staff. These qualitative insights help to provide depth to the quantitative findings and put a human face on a particularly salient issue. Feedback also comes in from partner agencies and organisations on a voluntary basis, through a feedback 3W (who is saying what, where) populated by regular accountability and community engagement mechanisms. Agencies that report are credited for their collaboration, but feedback is not directly attributed to them.
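To make the sampling and rotation approach described above concrete, the sketch below shows one way a round’s visit plan could be generated. It is purely illustrative: the CFP’s actual sampling frame, quotas and tools are not described in this article, and the household identifiers, age groupings and sample size shown here are assumptions.

```python
# Illustrative only: alternate the requested respondent's gender (and vary the
# age group) across a randomly drawn sample of households, so that each round
# stays roughly balanced. The CFP's real sampling frame and quotas are not public.
import random

AGE_GROUPS = ["15-24", "25-39", "40-59", "60+"]  # hypothetical grouping

def plan_visits(sampled_households):
    """Attach a requested gender and age group to each sampled household."""
    plan = []
    for i, household_id in enumerate(sampled_households):
        plan.append({
            "household": household_id,
            "requested_gender": "female" if i % 2 == 0 else "male",
            "requested_age_group": AGE_GROUPS[i % len(AGE_GROUPS)],
        })
    return plan

# Stand-in sampling frame and a simple random draw of ~2,000 households per round.
household_frame = [f"HH-{n:05d}" for n in range(1, 10001)]
sample = random.sample(household_frame, k=2000)
print(plan_visits(sample)[:3])
```

In practice the CFP’s strategy also reflects geography and changes over time, so this should be read only as shorthand for the rotation logic, not as the project’s actual procedure.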

Hotlines, information booths, suggestion boxes and other similar methods will tend to attract those who have a particular issue, or who feel confident enough to lodge a complaint or provide feedback. In a society with as much socio-cultural stratification as Nepal, these methods of gathering feedback will not necessarily appeal to marginalised and vulnerable people. While these mechanisms can play an important role in supporting programmatic accountability, without overarching coordination they cannot provide comprehensive insight into the issues and concerns of the people we aim to serve on a response-wide level. To ensure that the response is listening to a diversity of views, it needs to bring the opportunity to provide feedback directly to people’s doorsteps.

Analysis and reporting

The breadth and richness of the data allows the CFP to report confidently to humanitarian leadership and all humanitarian partners on the main issues communities are facing in the response. Once each round of data collection is complete, the information is processed and analysed, and a report is produced within two weeks. This tight turnaround ensures that feedback reaches humanitarian actors in near real time, while course corrections can still be made. The report presents its findings in an easily digestible way, with tables and infographics accompanied by short text on each question, and an overview page on key findings and recommendations. Each report is supplemented by a five-minute infographic video, uploaded to CFP’s YouTube channel. Humanitarians are busy. The CFP strives to hook decision-makers into the findings and analysis by making it easy to understand feedback from affected communities over a morning coffee.

Despite being short and easy to read, the analysis is also insightful, as it presents the information coming from communities in context. Reports disaggregate each question according to age, sex, caste and ethnicity, occupation, geographic location and disability, and provide a thorough understanding of the ways in which all of these socio-cultural variables interact and shape the way individuals and communities experience the response.
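As a rough sketch of what that disaggregation step can look like in practice – with hypothetical column names and values, since the CFP’s real dataset and schema may differ – a single perception question can be cross-tabulated against demographic variables in a few lines of Python:

```python
# Minimal sketch of disaggregating one perception question, assuming survey rows
# with hypothetical column names; the CFP's actual data schema may differ.
import pandas as pd

df = pd.DataFrame({
    "gender":          ["female", "male", "female", "male"],
    "caste_ethnicity": ["Dalit", "Brahmin/Chhetri", "Janajati", "Dalit"],
    "district":        ["Nuwakot", "Dhading", "Nuwakot", "Sindhupalchok"],
    # Perception question scored from 1 (very negative) to 5 (very positive).
    "needs_addressed_score": [2, 4, 3, 2],
})

# Cross-tabulate mean scores by gender and caste/ethnicity – the kind of
# breakdown that appears as a table or infographic in each report.
breakdown = (
    df.groupby(["gender", "caste_ethnicity"])["needs_addressed_score"]
      .agg(["mean", "count"])
      .round(2)
)
print(breakdown)
```

The same pattern extends to age group, disability, occupation and geography, which is what allows each report to show how these variables interact rather than presenting a single headline figure.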

It’s impossible to present everything that could be significant to every actor in a short, predominantly visual report. Likewise, it’s impossible to know everything that may be important, or even relevant, to every actor in a response. For this reason, another key element of the CFP is that all data is open and publicly accessible, and a visualisation platform has been jointly developed with HDX to allow users to interact with and query data to produce whatever analysis is most useful for their programmatic needs.

Advocacy

The next step for the CFP is to do justice to the feedback collected from affected communities. The CFP, and any initiative like it, will always be judged on the impact of the feedback it gathers. Commitment from agencies to follow up on feedback from communities has never been guaranteed; it is something that has to be negotiated. For this reason, the outcome of feedback is considered from the outset of research design. It is important to ask, almost exclusively, questions that are actionable and that can have an impact on programmatic decisions. When questions are asked at the right level of specificity – with a direct link to programmes or potential programmes, but without being too detailed – practical recommendations can be made around which strong advocacy can be pursued. For instance, if protection is a concern, asking ‘who are the main perpetrators of violence?’ in an anonymous survey is impractical. However, asking ‘in which areas of the community is violence likely to occur?’ can make it possible to identify potential interventions, such as investments in lighting.

When looking at feedback from communities, the findings are, of course, not always positive. This is what scares a lot of people away from this type of work. Sometimes affected communities are not satisfied with the assistance they are receiving, want an entire agency to leave their community or report corruption. The way in which these findings are presented and stakeholders engaged on the issues has an impact on their willingness to hear the voices of communities, investigate and make necessary changes.

These are sensitivities the CFP has learned to navigate, while remaining strong on advocacy. Everyone in a humanitarian response is working hard and wants their work to have a positive impact on the lives of people who are suffering through terrible circumstances. It is essential to recognise that, when advocating around feedback from communities, the intention is not to scold humanitarians, or slap them on the wrist if you find that affected communities aren’t satisfied. The purpose is to support the humanitarian community to continuously check the pulse of its work, implement course corrections as needed and ensure that all the hard work and efforts of humanitarian actors and agencies are having the desired impact.

Closing the feedback loop

Finally, in order to be truly accountable to affected people, the CFP does its best to let communities know what happened as a result of the feedback they provided, as well as to fill in the information gaps they have identified so that they can make informed decisions about their own recovery. Crisis-affected people are not passive, helpless recipients of assistance. They have agency and resources, and they will make decisions, based on the information they have, about how to recover and move forward. They have a right to know about the factors that will affect their lives and their decisions. If the humanitarian system does not do its absolute best to ensure that the information affected people have is clear, accurate and timely, it does not support the fulfilment of that right.

The CFP uses three complementary mechanisms to close the feedback loop with affected communities. The most direct is an interactive voice response (IVR) system that sends an audio message to the mobile phone of each survey respondent who provided a phone number. During data cleaning and processing, key concerns, information gaps and questions are grouped in the IVR system, verified answers are sought, and audio messages are recorded and then sent out. This ensures that each respondent hears the outcome of the time they spent providing feedback to an enumerator. This method reaches 70–80% of respondents who provide a phone number when surveyed.
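A rough sketch of that grouping step is shown below. It is purely hypothetical – the CFP’s actual IVR provider, keyword lists and audio files are not described in this article – but it illustrates how free-text questions might be bucketed into themes and matched with verified, pre-recorded answers before dispatch.

```python
# Hypothetical sketch: group respondents' questions into broad themes and pair
# each phone number with the verified audio answer for that theme. Keywords,
# themes and file names below are invented for illustration.
from collections import defaultdict

THEME_KEYWORDS = {                      # keyword -> theme (assumed mapping)
    "grant": "reconstruction_grant",
    "tranche": "reconstruction_grant",
    "winter": "winterisation_support",
    "list": "beneficiary_lists",
}

AUDIO_ANSWERS = {                       # recorded answers, verified before sending
    "reconstruction_grant": "answers/grant_tranches_ne.mp3",
    "winterisation_support": "answers/winter_support_ne.mp3",
    "beneficiary_lists": "answers/beneficiary_lists_ne.mp3",
}

def build_dispatch_queue(responses):
    """responses: list of dicts with 'phone' and free-text 'question' fields."""
    queue = defaultdict(list)
    for r in responses:
        if not r.get("phone"):
            continue                    # only respondents who shared a number
        for keyword, theme in THEME_KEYWORDS.items():
            if keyword in r["question"].lower():
                queue[AUDIO_ANSWERS[theme]].append(r["phone"])
                break
    return queue  # audio file -> phone numbers, handed to the IVR provider

responses = [
    {"phone": "+977-98XXXXXXXX", "question": "When will the second grant tranche arrive?"},
    {"phone": None, "question": "Will we get winter support this year?"},
]
print(build_dispatch_queue(responses))
```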

The second method is through community meetings, held in rural areas with local government officials, local I/NGO workers, media and other relevant stakeholders. In these meetings, organised by CFP, key issues arising from the most recent feedback from that particular area are addressed, followed by a question and answer session. Often these questions are around government policies, timelines and support packages. In many instances questions deal with exclusions from beneficiary lists, and often the authorities are able to rectify an oversight such as this on the spot. Local media cover the event for radio and print media to ensure that the questions and answers reach a broader audience than just those in attendance.

The least direct mechanism is community radio programmes. The CFP has a partnership with the Association of Community Radio Broadcasters to support local radio stations to produce content in local languages based on the findings of community perception surveys. Additionally, national-level programmes take pressing issues from community feedback and follow up with government and other decision-makers at the policy level. For instance, when rumours were circulating that households unable to reconstruct on time would be blacklisted and denied government services, national radio programmes broadcast interviews with the National Reconstruction Authority spokesperson stating that no one would lose government services as a result of an inability to rebuild by the deadline.

What does it take to make it work?

With over three years of experience behind it, the Common Feedback Project has learned a great deal about what it takes to get a common service for community engagement off the ground and keep it running through emergency response, recovery, reconstruction and preparedness. Although the components and processes of future models will depend on context, four main factors have led to the CFP’s long-term success and should be replicated in similar projects.

  1. One of the most important features of a good feedback system is flexibility. A common feedback and accountability service should collect feedback with a regularity that makes sense given its objectives. Regular collection allows important issues to be tracked over time, but the approach itself must stay flexible: sometimes issues fade away, and new ones crop up. In a humanitarian setting, circumstances, information and needs change rapidly, and a collective platform needs to be able to react to this.
  2. It is essential that the methodology by which feedback is collected is extremely strong. The best offence is a good defence: the first task is to make sure that no one can throw the findings out. People don’t want change, so if they can find a way to disregard findings, especially unflattering ones, they will. Make it impossible. The best methodology will depend on the context, objectives and scope of each system, but it should be as rigorous as possible given those factors.
  3. The platform manager or coordinator should be a diplomat, or a lobbyist, rather than a technical specialist. Technical specialists are essential to make sense of the large amounts of information a common platform receives. But for the outward face of the platform, it is essential to have someone who can negotiate buy-in from even the most reluctant to change. The biggest mistake that data-rich projects make is to assume that people will know what to do with the vital information they provide. This is not the case. In order for the voices of affected people to connect with humanitarian programming it is necessary to have a person who can communicate that voice effectively, who can get people on board, who can navigate difficult findings with partners in a diplomatic way, who can build strong and strategic partnerships to mitigate risk, and who can mentor and coach stakeholders through internalising feedback data to ensure that that voice is heard.
  4. To get communities a seat at the most senior decision-making tables, a common platform must be positioned close to senior leadership. It is very difficult for an agency with a programmatic mandate to host a truly common platform, so the more central and neutral the decision-making body the platform can be positioned within, the better. Additionally, for greatest impact, the manager or coordinator of the platform should have direct access to senior response leadership, so that they can develop that individual as a champion for the voices and perspectives of affected people. This can take different forms at different levels. For instance, national and local decision-making powers and processes may be de-coupled, in which case a successful common platform would want to position itself close to the decision-making bodies at both levels.

Conclusion

The CFP in Nepal has achieved a great deal in the past three years, not least setting a global example that collective community engagement in emergencies, and in subsequent recovery, is not only feasible but also useful in improving the response. However, it remains to be seen whether the Nepal example will be a one-off or whether it can be successfully replicated and adapted in other contexts. Buy-in from senior leadership is not a given in every emergency, and programme and policy actors globally have not yet learned how to internalise feedback from communities. Both are necessary for the success of any common platform.

The CFP in Nepal is part of a global change management initiative to recast the terms of humanitarian engagement and stop circumventing those who are most important at the decision-making table – the very people who are the reason there is a decision to be made at all. Change is not easy. People don’t like it. It’s scary, and it’s uncomfortable. This means we all need to work, day by day and brick by brick, to convince the system, and the people who make it up, to change. It will take time. These things don’t change overnight. They don’t change by declaring commitments, strategies or policies. They happen through negotiation, through diplomacy, through coordination. In Nepal this change has run its course and come out the other side, proving that it is possible to give people affected by crisis a real voice in emergency response and recovery.

Bronwyn Russel is Team Leader for ACAPS in Cox’s Bazar. Previously, she was Project Manager of the Inter Agency Common Feedback Project, UN Resident Coordinator Office (UNRCO) Nepal.
