This memo was prepared for a WPF seminar on “New Wars, New Peace” held at the Fletcher School, January 12-13, 2012.


–       I didn’t have any accurate numbers, so I just made this one up. Studies have shown that accurate numbers aren’t any more useful than the ones you make up.

–       How many studies show that?

–       Eighty-seven.

This amusing exchange from Dilbert introduces a chapter in the book Sex, Drugs and Body Counts, edited by Peter Andreas and Kelly Greenhill. The book provides an at times shocking overview of the ‘politics of numbers’, and reading it will make anyone deeply mistrustful of war casualty statistics. In Bosnia, what posed as a statistic more resembled a game of Chinese whispers: a Bosnian foreign minister plucked a casualty figure of 250,000 out of thin air, which was then repeated so often by the media and international organizations that it became the accepted (and grossly overstated) ‘truth’, to the point that Bosnians who tried to challenge it were labelled traitors. In eastern Congo, 700,000 Rwandan refugees ‘disappeared’ overnight when the US government stopped recognizing their existence, in order to avoid having to send troops to the region.

The book is an excellent antidote to misleading, blind faith in numbers. However, I would argue that there is also such a thing as unproductive scepticism when it comes to statistics.

To some, numbers are cloaked in a magic glow; they seem objective (numbers don’t lie, after all), precise and true. Some say that you can’t argue with numbers. I say that you can. Numbers are never ‘just out there’: they are collected by someone, and that someone can be good or bad at the job. What is more, that someone may have an interest in being bad at the job. As a result of such interests, and the predictable difficulties of gathering information in war zones, a lot of the casualty numbers in circulation are inaccurate, exaggerated, downplayed or simply invented.

So not all numbers are ‘Mister Right’, and some are downright liars. But, to push the analogy a bit, that is no reason to conclude that ‘all numbers are pigs’ and throw our hands up in despair. Recognizing the most blatantly bogus numbers is not rocket science: if the source of a number is not mentioned, there may not be a reliable one; if basic questions about how the number was gathered go unanswered, there is no reason to trust it; and if the maker of the statistic has a stake in the number, be doubly on guard.

What seems a more powerful reason to despair is that, in many cases, two decent methodologies come up with radically different absolute numbers of war casualties. The best example is the number of Iraqi casualties. Between 2003 and 2006, the Iraq Body Count project recorded about 67,000 civilian casualties; the Iraq Family Health Survey came up with 151,000 ‘violent deaths’; the Lancet study’s final tally was over 600,000 ‘violent deaths’; and the Opinion Research Business survey provided the highest estimate (albeit over a period one year longer), at over a million. If ever there was a reason to dismiss numbers, this seems to be one.

That these studies come up with such different numbers is not because there are gaping holes in their methodologies. The methodologies differ, but all are reasonable. The Iraq Body Count (IBC) is based on morgue records, adding deaths confirmed by multiple English-language news sources. As not every body makes it to a morgue and not every death makes the news, IBC numbers are predictably low. The other numbers come from surveys, in which researchers ask a sample of Iraqis whether any family members have died a violent death. As people are inclined to give the answers they believe the interviewer wants to hear, these estimates are higher. Differences among the survey results stem from relatively predictable methodological choices: some surveys required a death certificate before registering a death, the definition of ‘family’ differed from survey to survey, and so on.

Is it a problem that the resulting numbers differ so widely? It is important here to distinguish between the absolute and the relative use of numbers. An absolute number tells us exactly how many deaths there were in Iraq. A relative number tells us, for example, whether there are more or fewer deaths in Iraq today than there were yesterday, or whether there are more deaths in some areas of Iraq than in others.

Absolute numbers are important in certain respects: a (high) absolute number can attract policy makers’ attention; determining the exact number of victims may be important for judicial proceedings; and, from a moral perspective, every death should ‘count’. As Iraq makes obvious, it is virtually impossible to reach consensus on the ‘correct’ absolute number, if only because there is generally no agreed definition of what we are trying to measure. But how important is this? When it comes to understanding a war, absolute numbers tell us very little. We would like to know, for example, whether areas have become safer after certain interventions, or whether areas with more ethnic diversity, natural resources, troops and so on are more or less violent. A single absolute number does not help us here: we need to track trends over time and across regions. Indeed, the fact that absolute numbers differ widely need not matter at all: as long as they track the same trends, analysing them leads to the same conclusions. Imagine analysing the effect of exercise on an individual’s weight measured in both ounces and kilograms; the results of the two analyses would not differ.
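To make the ounces-versus-kilograms point concrete, here is a minimal sketch with made-up data (the exercise hours, weights and noise are all hypothetical, purely for illustration). Regressing weight on exercise in the two units yields slopes that differ only by the unit-conversion factor (1 kg is about 35.274 oz), while the t-statistics, and hence the conclusion about the trend, are identical:

```python
import numpy as np

# Hypothetical data: weight falls with weekly exercise, plus noise.
rng = np.random.default_rng(0)
exercise_hours = rng.uniform(0, 10, size=200)
weight_kg = 80 - 1.5 * exercise_hours + rng.normal(0, 4, size=200)
weight_oz = weight_kg * 35.274  # the same data, rescaled to ounces

def slope_and_tstat(x, y):
    """OLS slope of y on x (with intercept) and its t-statistic."""
    X = np.column_stack([np.ones_like(x), x])
    beta, ssr, *_ = np.linalg.lstsq(X, y, rcond=None)
    dof = len(y) - X.shape[1]
    se = np.sqrt((ssr[0] / dof) * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1], beta[1] / se

for unit, y in [("kg", weight_kg), ("oz", weight_oz)]:
    b, t = slope_and_tstat(exercise_hours, y)
    print(f"{unit}: slope = {b:9.3f}, t-stat = {t:7.2f}")

# The slopes differ by exactly the factor 35.274; the t-statistics are
# identical, so both analyses support the same conclusion about the trend.
```

The same logic applies to casualty counts: two sources whose totals differ systematically can still support identical conclusions about where and when violence rises or falls.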

This is why I argue that we should spend less time despairing over inconsistent absolute numbers, and fewer resources on multiple surveys employing different methodologies in pursuit of that one elusive ‘correct’ casualty count. Instead, we should use our resources to run the same survey methodology in multiple years, creating relative data. In the end, the absolute number matters less than the understanding we can derive from it.

Anouk Rigterink is a PhD student at the Department of International Development at the London School of Economics and Political Science.
