Civilian deaths: a murky issue in the war in Iraq
In September 2004, a study was undertaken by Al-Mustansiriya University in Baghdad to estimate the number of civilian deaths during the war in Iraq. The researchers visited 33 randomly selected neighbourhoods across Iraq, interviewing 30 households in each. Security constraints were extreme, and to limit the risk to the interviewers the sample was neither stratified nor enlarged beyond the standard minimum. Interviewers asked about the age and gender of the people living in each home, the composition of that household on 1 January 2002, and any deaths or departures up until the date of the interview. Eighty-one percent of reported deaths were confirmed by death certificates.
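To make the logic of such a survey concrete, the sketch below shows, with entirely hypothetical household data, how recall interviews of this kind yield crude mortality rates before and after the invasion. The study's actual analysis was more sophisticated, but the underlying arithmetic is the same.

```python
# Minimal sketch of the survey arithmetic, using made-up cluster data.
# Recall period: 1 January 2002 to the interview date, split at the
# March 2003 invasion (roughly 14.6 months before, 17.8 months after).

MONTHS_PRE, MONTHS_POST = 14.6, 17.8

def mortality_rate(deaths: int, person_months: float) -> float:
    """Crude mortality rate in deaths per 1,000 people per year."""
    return deaths / person_months * 12 * 1_000

# Hypothetical cluster totals: (residents, deaths before, deaths after).
clusters = [(180, 1, 3), (175, 2, 2), (190, 1, 5)]

residents = sum(c[0] for c in clusters)
rate_pre = mortality_rate(sum(c[1] for c in clusters), residents * MONTHS_PRE)
rate_post = mortality_rate(sum(c[2] for c in clusters), residents * MONTHS_POST)

print(f"pre-invasion rate:  {rate_pre:.1f} per 1,000 per year")
print(f"post-invasion rate: {rate_post:.1f} per 1,000 per year")
print(f"relative risk:      {rate_post / rate_pre:.2f}")
```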
The study found that deaths from violence had increased 58-fold after the US-led invasion in March 2003, that violence had become the leading cause of death, and that airstrikes by coalition forces accounted for most reported violent deaths. The total number of deaths was less clear. In one neighbourhood, in the city of Falluja in Anbar Province, almost a quarter of residents had died, implying perhaps 200,000 deaths in the province as a whole. In most neighbourhoods, fewer than one percent of residents had died as a consequence of the invasion and occupation. The cluster death rate in Falluja was so high that it was set aside when the death toll was calculated. Results from the other 32 neighbourhoods surveyed suggested that some 100,000 deaths had occurred.
Public perceptions of the study
The study was published by the Lancet, and was put online on 29 October 2004. The results received a great deal of attention in the world's press, though far less in the United States, where most major papers picked it up only as a wire-service story. The New York Times covered it on page 8, the Washington Post on page 12. Both stories attempted to paint the report as controversial. In particular, the Post quoted Marc Garlasco of Human Rights Watch, a weapons analyst and author of a respected report on the relative lethality of the various coalition weapons used in Iraq, as saying that the 100,000-death estimate seemed too high. What the Post did not report is that Garlasco also said that he had not seen the report; he has since stated (see http://chronicle.com/free/2005/01/2005012701n.htm) that he wished he had not aired his initial doubts.
By 30 October, two discussions had appeared on the internet which helped to defuse the politically volatile results of the survey. One was an online critique by a long-time US Defense Department official, Anthony Cordesman; the other, in the online magazine Slate, was by the reporter Fred Kaplan. Both were complimentary about the researchers, both discussed the difficulty of this kind of work, and both focused on the imprecision of the results. They ignored the Anbar Province data, ignored the 58-fold increase in violence and ignored the interpretation of the data by the authors and the Lancet's reviewers. Instead, they focused on the results from the safest 32 neighbourhoods. In these 32 neighbourhoods, the study reported that 98,000 people had died, with a 95% confidence interval from 8,000 to 194,000. This means that, if the study were repeated 100 times with exactly the same method but different sampling locations, 95 of the repeats would be expected to estimate the death toll at between 8,000 and 194,000.
Both writers concluded that this result added little new information, since the range included the most widely-quoted estimate at the time, which was about 15,000 violent deaths. Kaplan said of the Lancet study: 'This isn't an estimate. It's a dart board' (http://slate.msn.com/id/2108887/). Both authors implied that the 95% confidence interval for the 32 neighbourhoods meant that the true result was equally likely to fall anywhere between 8,000 and 194,000. In fact, the most likely number is the study's own estimate; the further one moves from it in either direction, the less likely the result becomes. The reported distribution implied that there was only a 2.5% chance that the true number was below 8,000, and only a 10% chance that it was below 44,000. When the extremely high outlier cluster of Falluja is included, there appears to be little chance that the death toll was below 100,000 at the time of publication.
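These tail probabilities are easy to reproduce. The sketch below assumes the estimate and its confidence interval can be approximated by a normal distribution; the study's own model was slightly asymmetric, so the figures differ a little from those quoted above.

```python
# Tail probabilities under a normal approximation to the published result.
from scipy import stats

estimate = 98_000
lower, upper = 8_000, 194_000           # published 95% confidence interval
se = (upper - lower) / (2 * 1.96)       # standard error implied by the CI width

dist = stats.norm(loc=estimate, scale=se)

print(f"P(deaths <  8,000) = {dist.cdf(8_000):.3f}")   # ~0.03 here; ~0.025 under the study's model
print(f"P(deaths < 44,000) = {dist.cdf(44_000):.3f}")  # ~0.13 here; ~0.10 under the study's model
print(f"P(deaths < 98,000) = {dist.cdf(98_000):.3f}")  # 0.50: half the probability lies either side of the estimate
```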
This spin on the story spread through the US with astonishing speed. Talk-radio hosts and ministers alike passed the word that the true number might be only 8,000. By US election day on 2 November, my next-door neighbour had not heard the actual Lancet estimate, but she had heard on talk radio that the Lancet study, which supposedly estimated only 8,000 deaths, was flawed.
What does this mean?
The Lancet study raises two issues for humanitarian workers who document hardships in politically volatile settings:
1) How do we articulate the complexity of imprecise results in language that the press will understand and report accurately?
2) Are we responsible for the digestion of our information by the public once it is released?
Most of us have been exposed to the idea of a normal distribution, but few of us really understand its nuances. In particular, the probability that a specific number is the true measure declines the further one moves from the mid-point of the distribution. Across the 32 neighbourhoods of our study, excluding the Anbar Province cluster, there was only a 7.5% chance that the true number of related deaths was between 8,000 and 44,000, but about a 42% chance that it was between 44,000 and 98,000. Scientists use 95% confidence intervals as a default criterion to prevent the subjective judgement of individual researchers from influencing their conclusions, but the default is somewhat arbitrary. When dealing with the press, an 80% confidence interval would probably communicate imprecision more effectively than a 95% interval, because the small and unlikely outcomes in the tails of the distribution would be excluded. For the 32 neighbourhoods discussed above, we could have stated that there was an 80% chance that the true number of deaths was between 44,000 and 152,000, instead of a 95% chance that it was between 8,000 and 194,000. The former implies that the researchers were 80% sure that the commonly-quoted estimate at the time was at least three times too low; the latter, according to Cordesman and Kaplan, implied that the researchers were not sure whether their results differed from the existing 15,000-death estimate.
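The comparison is simple to compute. The sketch below uses the same normal approximation as before; the 44,000-152,000 figures quoted above come from the study's own, slightly asymmetric model, so the output will not match them exactly.

```python
# 80% vs. 95% intervals under a normal approximation to the published result.
from scipy import stats

estimate = 98_000
se = (194_000 - 8_000) / (2 * 1.96)   # standard error backed out of the 95% CI

for level in (0.80, 0.95):
    lo, hi = stats.norm.interval(level, loc=estimate, scale=se)
    print(f"{level:.0%} interval: {lo:,.0f} to {hi:,.0f}")
```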
A separate issue concerns judgement. The Falluja data was set aside because, statistically, it did not belong with the other 32 neighbourhoods when describing the range. Many lay people took this to mean that the data had been discarded. Anyone watching the news during the summer of 2004 would have had reason to believe that a death rate in Falluja 25 times higher than the average elsewhere was entirely plausible. In keeping with sampling theory, the Falluja cluster implied that about 200,000 deaths had occurred in Anbar Province, although the precision of this estimate was essentially unquantifiable. Thus, looking at the dramatic increase in violence and the evidence of far more deaths in Anbar Province, the investigators were confident that the death toll was far more likely to be over 100,000 than under it. The abstract of the Lancet article concludes: 'Making conservative assumptions, we think that about 100,000 excess deaths, or more have happened since the 2003 invasion of Iraq'. The 'or more' part was discussed extensively in the European press, but almost never mentioned in the limited US coverage.
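Why set an extreme cluster aside rather than simply average it in? A toy illustration, with entirely made-up cluster death counts, shows how a single Falluja-like cluster can dominate both the mean and the spread of an estimate:

```python
# Illustration (made-up numbers) of how one extreme cluster dominates
# both the mean and the variability of a cluster-survey estimate.
import statistics

typical = [2, 1, 3, 0, 2, 1, 4, 2, 1, 3]   # hypothetical deaths per cluster
with_falluja = typical + [52]              # add one Falluja-like cluster

for label, data in (("without outlier", typical), ("with outlier", with_falluja)):
    mean = statistics.mean(data)
    sd = statistics.stdev(data)
    print(f"{label:16s} mean = {mean:5.1f}, stdev = {sd:5.1f}")
```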
The question arises, when the public interpretation of science is done either deceptively or incompetently: what is the investigators' role in responding to the misunderstanding? In this case, the investigators were hampered by several factors. The timing of the study's publication, five days before the US election, was unfortunate. The investigators had planned to conduct the study in April 2004, but videotaped beheadings convinced them to delay until June; by June, security was worse still. The lead investigator had five months of teaching obligations beginning in the last week of October. Thus, the final preparations were made in August, and the survey began in early September, ending in Falluja on 20 September. The data were entered and an initial analysis completed on 24 September, and the manuscript was submitted to the Lancet on 1 October. The timing may have made some members of the press wary, especially given a scandal in the preceding weeks, when documents alleging that President Bush had shirked his National Guard duties during the Vietnam War appeared to have been faked. Had the Lancet article appeared a week or two earlier, it might have received more attention in the US.
It was also a mistake for the lead investigator, faced with repeated questioning by an Associated Press reporter, to admit that he had been opposed to the invasion of Iraq. This was not a particularly controversial position, given that most people on the planet had opposed the invasion. The reporter included this in her piece, without mentioning that other investigators had been in favour of the invasion, and without mentioning his first response to the question: that this was primarily a study of the occupation, which all of the investigators wanted to go well and peacefully. Cordesman cited this AP-reported bias as another reason for disregarding the study's findings. The blunder highlights how poorly equipped most relief workers and scientists are to manage messages.
Time favours truth
Time will reveal a more precise estimate of the death toll from the war in Iraq. According to a July 2004 New England Journal of Medicine article, 12% of returning army ground forces and 24% of returning marine ground forces reported being responsible for the death of an Iraqi non-combatant. The NGO Coordinating Committee of Iraq (NCCI) has been recording twice as many Iraqi deaths as the most widely cited website, Iraqbodycount.net. What matters is not that the Lancet study's 100,000 figure will almost certainly prove to be an underestimate. What matters is that the recording of tens of thousands of Iraqi deaths at the hands of the country's occupiers did not produce a meaningful response, either to limit civilian deaths in Iraq or to bolster the human rights community so that it might convince the world that pre-emptive war should be viewed as incompatible with civil society.
Les Roberts is a Research Associate at the Center for International Emergency, Disaster, and Refugee Studies, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD.