A new tool in the toolbox: using mobile text for food security surveys in a conflict setting

February 20, 2014
Jean-Martin Bauer, Koffi Akakpo, Marie Enlund and Silvia Passeri

Primary collection of household food security data is typically both expensive and cumbersome. As a result, decisions on humanitarian assistance are often based on information that is out of date, or on unsatisfactory aggregate proxy indicators. However, thanks to increasingly widespread access to mobile telephony, many survey respondents can now be contacted through their mobile phones, offering the possibility of much cheaper, faster and more timely data collection.

In order to assess the suitability of mobile text surveys for household food security assessments, the World Food Programme (WFP) conducted a field test in North Kivu province in the Democratic Republic of Congo (DRC) in mid-2013. Approximately half of households in the province own a mobile phone (above the average for Sub-Saharan Africa). The field test took place at a time of active conflict, restricted humanitarian access and large-scale displacement.

How did we evaluate the performance of mobile text surveys?

WFP conducted three monthly mobile text survey rounds in July–September 2013 in North Kivu, and compared the results with a face-to-face emergency food security assessment implemented in March and April 2013 in the same area. This approach allowed data quality, cost and timeliness to be compared across the two survey types.

WFP partnered with GeoPoll, a US-based polling company, to implement the mobile text survey rounds. GeoPoll maintains a database of 7.9 million mobile phone subscribers in the DRC; the 500,000 subscribers identified as living in North Kivu constituted the sampling frame for the field test. Each month, some 1,500 randomly selected respondents received questionnaires by text message. In addition to an identifier question on displacement, either the food consumption score (FCS) module (ten questions) or the reduced coping strategies index (rCSI) module (five questions) was collected. Responding to the mobile text survey was free of charge. The March–April 2013 face-to-face assessment collected the same indicators from a sample of 2,713 households.
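For readers less familiar with these two indicators, the short Python sketch below shows how they are conventionally calculated, using the standard WFP food group weights for the FCS and the standard severity weights for the rCSI. It is an illustrative sketch only: the exact wording and grouping of the questions used in the field test may differ from this simplified formulation.

```python
# Illustrative sketch of the two indicators (standard WFP formulations and
# weights; the exact module used in the field test may differ).

# FCS: weighted sum of the number of days (0-7) each food group was eaten
# in the past seven days.
FCS_WEIGHTS = {
    "staples": 2.0, "pulses": 3.0, "vegetables": 1.0, "fruit": 1.0,
    "meat_fish": 4.0, "milk": 4.0, "sugar": 0.5, "oil": 0.5,
}

# rCSI: weighted sum of the number of days (0-7) each coping strategy was
# used in the past seven days.
RCSI_WEIGHTS = {
    "less_preferred_food": 1.0, "borrow_food": 2.0, "limit_portion_size": 1.0,
    "restrict_adult_consumption": 3.0, "reduce_meals": 1.0,
}

def food_consumption_score(days_by_group):
    """days_by_group maps each food group to days eaten in the past week (0-7)."""
    return sum(FCS_WEIGHTS[g] * min(d, 7) for g, d in days_by_group.items())

def reduced_coping_strategies_index(days_by_strategy):
    """days_by_strategy maps each coping strategy to days used in the past week (0-7)."""
    return sum(RCSI_WEIGHTS[s] * min(d, 7) for s, d in days_by_strategy.items())

def fcs_category(score):
    """Standard FCS cut-offs: poor <= 21, borderline 21.5-35, acceptable > 35."""
    if score <= 21:
        return "poor"
    return "borderline" if score <= 35 else "acceptable"

# Example household (made-up values, for illustration only).
example = {"staples": 7, "pulses": 2, "vegetables": 5, "fruit": 1,
           "meat_fish": 1, "milk": 0, "sugar": 3, "oil": 4}
score = food_consumption_score(example)
print(score, fcs_category(score))  # 33.5 borderline
```

A higher FCS indicates better food consumption, while a higher rCSI indicates more severe coping behaviour; a lower FCS and a higher rCSI therefore both point to greater food insecurity.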

Data quality: why did the rCSI perform better than the FCS?

The proportion of female respondents in both mobile text and face-to-face surveys was approximately 30%, indicating that using a mobile text survey does not necessarily lead to large gender bias. Overall, mobile text survey results tracked the correct seasonal patterns and identified the right groups as being most food insecure.

The FCS data collected through the field test reproduced the vulnerability patterns apparent in past face-to-face surveys. Notably, displaced households had a lower mean FCS than non-displaced households in all rounds. FCS text data also correctly tracked seasonal trends in food consumption, showing a steady increase from July to September, which coincided with the second season harvest in North Kivu. However, FCS text data tended to produce lower estimates of the prevalence of ‘borderline’ food insecurity than the comparator survey (p<0.01 for all rounds). As Figure 1 shows, some of the bias in FCS data collected through mobile text is attributable to respondents being somewhat better off in the first place. However, the large mismatch between the distribution of responses from mobile phone owners interviewed face-to-face and the distribution of responses obtained through the mobile text survey suggests that other factors account for much of the bias.

The rCSI module produced results that better matched data obtained through face-to-face surveys. Text message results showed that displaced people had a higher mean rCSI than the non-displaced, as past surveys have also found. The mobile text survey results also showed that the mean rCSI decreased from July to August for non-displaced people, a trend consistent with the expected post-harvest improvement at that time of year. The mean rCSI results produced through mobile survey rounds were statistically similar to the results of the face-to-face round for that indicator (p=0.46). Whereas large differences in the distribution of responses were observed for the FCS, the distribution of data collected by text message closely matched that of data collected through face-to-face interviews for the rCSI (Figure 1).

[Figure 1: distribution of FCS and rCSI responses, mobile text survey vs face-to-face survey]
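The p-values quoted above are reported without naming the tests used. As a hedged illustration only, a comparison of mean rCSI across survey modes could be run as a two-sample t-test, and a comparison of the prevalence of ‘borderline’ food consumption as a chi-square test on a contingency table; the sketch below uses synthetic placeholder data, not the field-test results.

```python
# Hypothetical sketch of the kind of significance tests behind the p-values
# quoted above; the article does not name the exact tests, and the data
# below are synthetic placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Mean comparison (e.g. mean rCSI, text vs face-to-face): two-sample t-test.
rcsi_text = rng.poisson(12, size=300)   # synthetic rCSI scores, text mode
rcsi_f2f = rng.poisson(12, size=300)    # synthetic rCSI scores, face-to-face
t_stat, p_mean = stats.ttest_ind(rcsi_text, rcsi_f2f, equal_var=False)

# Prevalence comparison (e.g. share of 'borderline' FCS): chi-square test
# on a 2x2 table of counts by survey mode and category.
table = np.array([[80, 220],     # text:         borderline, not borderline
                  [130, 270]])   # face-to-face: borderline, not borderline
chi2, p_prev, dof, _ = stats.chi2_contingency(table)

print(f"mean comparison p={p_mean:.2f}, prevalence comparison p={p_prev:.3f}")
```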

User-friendliness seems to explain why the rCSI produced ‘better’ data than the FCS. While on average 43% of respondents who started an rCSI questionnaire completed it, only 30% did so for the FCS. As both indicators have identical recall periods (seven days) and response ranges (0–7), questionnaire length seems to have been a key factor. Respondents dropped off with each additional question, making length an obstacle to achieving high completion rates for the ten-question FCS, while being less of a problem for the five-question rCSI. Overall, the short, simple rCSI is a more user-friendly module than the more complex FCS, whose complexity meant that a disproportionate number of mobile text responses came from better-off mobile phone users with the literacy skills or patience to complete the questionnaire. The complexity of the FCS may also have introduced bias more directly. For instance, trivial amounts of food (such as milk in coffee) should not be reported in the FCS, but conveying that instruction to respondents by text message was difficult, which probably contributed to over-reporting of consumption in the FCS module.
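One way to see why length matters so much is a simple compounding drop-off model: if a roughly constant share of respondents abandons the survey at each additional question, completion falls geometrically with questionnaire length. The sketch below back-solves the per-question retention rate implied by the completion figures above; the constant drop-off assumption, and the use of the module lengths (five and ten questions) as the question counts, are simplifications for illustration only.

```python
# Back-of-envelope check: with a constant per-question drop-off rate,
# completion ~= retention ** n_questions. Solving for retention from the
# observed completion rates gives similar values for both modules, which
# is consistent with questionnaire length driving the gap.
completion = {"rCSI": (0.43, 5), "FCS": (0.30, 10)}  # (completion rate, questions)

for module, (rate, n_questions) in completion.items():
    retention = rate ** (1 / n_questions)  # implied per-question retention
    print(f"{module}: ~{retention:.0%} of respondents retained per question")
# Both implied retention rates land in the mid-to-high 80s per cent, so a
# single per-question drop-off rate roughly explains both completion figures.
```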

Cost and timeliness

In North Kivu, WFP’s face-to-face food security surveys cost an average of $22 per respondent (including enumerator training, transportation, fuel and per diems), more than four times the cost of a text survey. Had a larger volume of mobile text questionnaires been sent out, the cost per complete questionnaire could have dropped to $3, indicating the potential for economies of scale. In value terms, however, the comparison is less straightforward as different survey types produce different types of information. At $5 per text questionnaire, the amount of information generated per household is limited to very basic profile information and a single food security outcome indicator – either the FCS or the rCSI. At $22 per respondent, face-to-face surveys produce more detailed information, including data on household demographics, income, expenditure and exposure to shocks, as well as both the FCS and rCSI for each respondent. If all the information collected in a face-to-face survey were instead collected remotely, the cost of the mobile survey would vastly exceed that of the traditional survey.
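The economies-of-scale point follows from the usual split between fixed set-up costs and per-questionnaire costs: the more completed questionnaires the fixed costs are spread over, the lower the unit cost. The figures in the sketch below are illustrative placeholders, not WFP's or GeoPoll's actual cost structure, which the article does not break down.

```python
# Illustrative cost-per-completed-questionnaire model (placeholder figures,
# not the actual cost breakdown of the field test).
def cost_per_complete(fixed_cost, cost_per_send, completion_rate, n_sent):
    """Fixed set-up costs are spread over completed questionnaires."""
    completes = n_sent * completion_rate
    return (fixed_cost + cost_per_send * n_sent) / completes

# Hypothetical numbers: $3,000 set-up per round, $0.90 per questionnaire sent,
# 40% completion. The unit cost falls as the volume of questionnaires grows.
for n_sent in (1_500, 5_000, 20_000):
    print(n_sent, round(cost_per_complete(3_000, 0.90, 0.40, n_sent), 2))
```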

[Table: cost and timeliness of mobile text and face-to-face surveys]

Data collection by text message was quicker than the comparator face-to-face survey. Each monthly mobile text survey round, which gathered between 1,000 and 2,000 responses, took one to two weeks to complete. By comparison, the face-to-face survey took approximately six weeks.

Discussion: opportunities and trade-offs

The field test identified success factors, notably that mobile text messaging proved to be a user-friendly, rapid, low-cost way of collecting data from households. Data collected by text survey was of sufficient quality to meet monitoring needs and, for the rCSI, the quality was comparable to a face-to-face survey. The fact that three survey rounds were successfully run under the difficult conditions that prevailed in North Kivu in mid-2013 bears witness to the potential of mobile text surveys in low-access humanitarian settings. Data collection was possible without putting enumerators in harm’s way. The higher frequency of reporting that mobile text surveys allow could enhance decision-making tools, notably the Integrated Food Security Phase Classification vulnerability assessment process, which relies on up-to-date household food security data.

On the other hand, text messaging was not well suited to the FCS, a more complex indicator used to estimate the prevalence of food insecurity. Our analysis suggests that the challenge was not the technology itself: respondents were able to answer the simpler mobile text survey we implemented. The fact that the results of the more complex FCS module were skewed suggests that the problem lay in its characteristics, rather than in the technology used to collect the data. Our experience supports the contention that ‘text messaging may be appropriate where relatively simple responses to a small number of simple questions are required’. As respondents were asked either the FCS or the rCSI module, but not both, the data did not offer the possibility of cross-tabulating indicators, a common analytical approach in traditional surveys. Only very limited household profile information was available, restricting the analytical applications of the data. This challenge argues for the application of mobile text surveys in contexts where baseline information already exists.

The field test brought to light the trade-off that mobile text surveys involve: gains in efficiency imply a loss in the depth of the information that is captured. This trade-off has broad implications for the appropriate use of mobile text surveys. Under the sampling arrangements used for the field test, mobile text surveys would be suited to monitoring and early warning functions.

Conclusion: towards mixed-mode data collection systems

The results presented here indicate that the question should not be whether mobile text messaging should be used to collect household food security data. Rather, the debate should be about how the modality can be introduced into existing information systems, in full recognition of the trade-offs and complementarities of different data collection methods. The Organisation for Economic Co-operation and Development (OECD) has called for ‘mixed mode’ data collection systems that combine remote and face-to-face surveys, building on the respective strengths of various modalities. In such mixed-mode systems, mobile text surveys could take place between face-to-face surveys, or when security or other factors restrict physical access to respondents.

As collecting food security data through mobile text technology is new, results should continue to be triangulated with other sources – such as remote sensing, market data or qualitative information – in order to avoid errors of interpretation. Complementarities with face-to-face surveys for sampling should be exploited. Respondents of a face-to-face survey could be asked to provide their mobile phone numbers (or given a phone if they do not own one) and to agree to being contacted in the future. This would deepen knowledge of the household characteristics of mobile survey respondents. Distributing phones to the poorest and most vulnerable respondents would ensure that they are not excluded from these surveys. Using voice calls would allow illiterate respondents to participate.

In order to reap the benefits of mobile text household surveys, the collection of the rCSI by mobile text should continue on a smaller sample in North Kivu, to build up a knowledge base with which future mobile text survey rounds could be compared. Improvements or alternatives to the FCS module could be tested. Live or automated voice surveys should also be implemented as a complement or alternative to text surveys. Additional field testing would determine which modules work best with remote mobile data capture and would guide analysts in their choice of data capture method.

Jean-Martin Bauer is Programme Advisor (WFP Rome), Koffi Akakpo is VAM Officer (WFP DRC), Marie Enlund is Food Security Analyst (WFP Rome) and Silvia Passeri is Data Analyst (WFP Rome). The authors are grateful to Arif Husain, Chief Economist, WFP, for his comments.

This is an article in HPN’s Online Exchange. To read other Exchange articles, please visit https://odihpn.org/humanitarian-exchange-magazine.
