Ten things we’ve learned about tracking perceptions

October 19, 2020
Meg Sattler and Nick van Praag
Photo: Perceptions survey being conducted in Haiti

Back in the early days of Ground Truth Solutions (GTS), the field director of an international relief agency was incredulous when told that our work entailed tracking how people affected by humanitarian crises perceive the provision of aid. ‘Why should I care about the perceptions of affected people?’, he asked.

That was then. Here and now, things are different. Perception surveys are all the rage and no humanitarian crisis is complete without teams of enumerators asking questions about how affected people see things. This is a positive development. Perception surveys, done right, can enhance the quality of humanitarian action because they offer practical intel from people well placed to know what’s working and what’s not and, more importantly, how to fine-tune programme design and implementation.

But getting a handle on perceptions is not as straightforward as simply asking questions and turning the answers into graphs. Done badly, perception tracking does no justice to those whose opinions we seek to amplify. As with many activities in the accountability space, we’ve grown accustomed to collectively celebrating approaches deemed adequate simply because they are better than doing nothing. But, as the Covid-19 response has highlighted, if perception surveys are to serve a useful purpose, they should respect ten essential precepts:

  1. Worry the questions. The first step is to consider whether a perception survey is the right way to get the information you need, or whether there are better alternatives. If a survey it is, then each question requires careful formulation to make sure it corresponds to the actual concept you want to measure and yields consistent results across demographic groups and locations. Preferably, questions should allow for actionable follow-up. This is more likely if they are based on clear hypotheses. In our work on Covid-19, one of our hypotheses is that people are less concerned about the health consequences of the virus than about its economic and social impacts – something health experts might overlook. It’s also important to have more than one question on a theme. No one would measure intelligence using a single question, and the same goes for issues like trust and equity, which are central to successful humanitarian action. But this requires careful crafting, because the questionnaire itself should be brief, to respect people’s time.
  2. Link your inquiry to accepted norms and programme goals. The survey should link to concrete stuff. This can be a normative framework, like the Core Humanitarian Standard, which means you are asking about issues broadly accepted as central to humanitarian action. Alternatively, the questions can relate to the goals of a particular programme or the overarching objectives of a response-wide plan. Take the work GTS is doing with the CHS Alliance in Chad. There, the Humanitarian Country Team is using perception indicators and targets to assess progress in meeting both the CHS commitments and the goals of the Humanitarian Response Plan. Linking survey instruments to accepted norms and operational goals can increase the likelihood of buy-in by humanitarian actors, and along with it, the likelihood that findings will trigger action.
  3. Sample rigorously and target accurately. You need a sample large enough to ensure confidence in the data. But larger is not automatically better. Targeting is essential if you are to collect feedback from the people whose views you seek – the customers of humanitarian services. The sample must aim to include a representative cross-section, with appropriate gender and age balance and the inclusion of people with distinct needs or physical challenges. Although we at GTS strive to get it right, merging data science with rapidly evolving field realities, it’s never easy. (A back-of-envelope sketch of the sample-size arithmetic follows this list.)
  4. Test, test and test again. It’s important to test the survey instrument with a small sample, to make sure that the questions, as formulated, get at the intended issues. This invariably results in adjustments. In most instances, your survey will need to be translated. This is never seamless. Concepts can take on wildly different meanings in different languages, or be lost in translation altogether.
  5. Supervise data collection assiduously. Supervision is essential if well-laid plans are not to go awry, irrespective of whether data is collected face-to-face or remotely, by phone or SMS. Supervisors must ensure the randomness of collection – every fifth tent, say, or every tenth telephone number on a recipient list (see the systematic-sampling sketch after this list). They must also check that each interview is neither suspiciously short nor exaggeratedly long. The purpose of the survey must also be made clear, to reduce the chances of courtesy bias. This is why simply tacking perception questions onto other data collection exercises, like needs assessments, rarely works.
  6. Never take your data at face value. You should do your best to understand your data in context. Limitations in sampling must be noted and, where possible, weighting techniques applied so that the data can be presented with adequate nuance (a simple weighting sketch follows this list). In Bangladesh, our data is consistently positive, which may say more about barriers to candid feedback than about the aid being delivered. To get to the bottom of this, we are embarking on a full methodology review with our partners in Cox’s Bazar. With Covid-19 complicating both sampling and targeting, we must think more critically than ever.
  7. Get back to communities and probe the responses. Why do people see things the way they do? How can their concerns be addressed? Closing the loop is key to ensuring that affected people feel like respected participants in an inclusive process of inquiry, rather than extras in one that is essentially extractive. It also means they can use the findings for their own understanding and advocacy.
  8. Champion the feedback with operational agencies and donors, so they act on it. That’s the whole point – and it won’t happen without strenuous follow-up and dialogue. Engage the people who are best placed to act, encouraging them to reflect on the findings and work out how they can respond.
  9. Keep going. One-off surveys are pretty useless. The power of the data comes from successive waves of surveys, tracking the way people see things over time and giving implementers a regular yardstick against which to adjust their programming and behaviours. It also gives affected people the opportunity to provide their perspective – and thus influence decisions that matter to them – on an ongoing basis.
  10. Combine forces. The voices of affected people are more likely to get a hearing if they are woven together with strong contextual analysis and fact-based data. The latter abounds in most humanitarian crises but, as ACAPS’ analysis of the information ecosystem in South Sudan illustrates, different data streams are too rarely combined in ways that decision-makers can use.
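
A quick illustration of the sample-size point in lesson 3. The sketch below uses Cochran’s formula with a finite population correction; the population figure, confidence level, and margins are hypothetical, not drawn from any GTS survey. It shows why larger is not automatically better: roughly quadrupling the sample only halves the margin of error.

```python
import math

def sample_size(population: int, margin_of_error: float = 0.05,
                z: float = 1.96, p: float = 0.5) -> int:
    """Cochran's formula with finite population correction.

    z = 1.96 corresponds to 95% confidence; p = 0.5 is the most
    conservative assumption about how varied responses will be.
    """
    n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2  # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)                # correct for finite population
    return math.ceil(n)

# A hypothetical camp of 20,000 households:
print(sample_size(20_000))          # 377 interviews for a ±5% margin
print(sample_size(20_000, 0.025))   # 1,428 interviews merely to reach ±2.5%
```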
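
Lesson 5’s ‘every fifth tent’ is systematic sampling with a random start. A minimal sketch of that selection, plus the interview-length check supervisors perform, might look like the following; the duration thresholds are illustrative assumptions, not GTS rules.

```python
import random

def systematic_sample(recipients: list, step: int) -> list:
    """Select every `step`-th entry after a random starting offset,
    so the sample does not always begin at the top of the list."""
    start = random.randrange(step)
    return recipients[start::step]

def flag_durations(minutes: list[float], low: float = 3.0,
                   high: float = 45.0) -> list[int]:
    """Return indices of interviews that are suspiciously short
    (possible fabrication) or exaggeratedly long (possible coaching)."""
    return [i for i, m in enumerate(minutes) if m < low or m > high]

recipient_list = [f"household-{i:03d}" for i in range(1, 101)]  # placeholders
print(systematic_sample(recipient_list, step=10))   # ten of the hundred entries
print(flag_durations([12.0, 1.5, 18.0, 90.0]))      # [1, 3]
```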
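
And for lesson 6, the simplest weighting technique is post-stratification: each group’s answers are scaled by the ratio of its population share to its sample share. The groups and figures below are hypothetical, and the function is our own naming.

```python
def poststratification_weights(sample_counts: dict[str, int],
                               population_shares: dict[str, float]) -> dict[str, float]:
    """weight_g = (population share of group g) / (sample share of group g)."""
    n = sum(sample_counts.values())
    return {g: population_shares[g] / (sample_counts[g] / n)
            for g in sample_counts}

# Women are 52% of the (hypothetical) population but only 40% of
# respondents, so each woman's answer should count 1.3 times.
print(poststratification_weights(
    sample_counts={"women": 160, "men": 240},
    population_shares={"women": 0.52, "men": 0.48},
))  # {'women': 1.3, 'men': 0.8}
```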

We’ve learned most of this by trial and error. If these lessons are respected, affected people’s perceptions can provide valuable insight into whether a humanitarian response is living up to accepted norms of humanitarian behaviour and delivering on the objectives of a particular programme or response-wide plan. If they are not, the risk is that perception data becomes just another source of noise in a system that needs clear signals to drive action in the right direction. GTS was founded on the principle that combining people’s perceptions with statistical rigour could fast-track humanitarian reform. It is an area where getting it right is harder than it looks.

Meg Sattler is a Director at Ground Truth Solutions. She leads on response-wide programmes, focusing on advocacy and global initiatives.

Nick van Praag is the Founder and Director of Ground Truth Solutions. He focuses on building the organisation’s team and its activities.
