This memo was prepared for a WPF seminar on “New Wars, New Peace” held at the Fletcher School, January 12-13 2012.[i]
Contemporary policymakers face something of a conundrum. Concerns about the human costs of conflict, including refugees, internally displaced persons and military and civilian casualties, tend to figure prominently in decision-making processes—strategically, operationally and politically. And pressures to count and take account of these costs are on the rise, from growing Western casualty sensitivity on the military side to a greater emphasis on the prevention of collateral damage on the civilian side, and, more generally, a growing fetishism toward quantification and counting. Yet these imperatives do not come without their own costs and challenges. Despite significant recent technological innovations—both in the context of an improved capacity to observe from above and enhanced computing power on the ground—in many contexts, accurately assessing the human costs of conflict can be difficult at best. Given increased attention and the attendant pressures associated with counting these costs, the incentives to distort and politicize these numbers can be profound. Four overlapping, yet distinct, factors tend to impede accurate and agreed-upon measurement of the human costs of conflict: data availability; data reliability; measurement disparities; and political imperatives and biases. Below I outline the challenges each of these factors can pose to conflict measurement, drawing upon recent examples to illustrate these challenges in action. I conclude with a brief set of recommendations for consumers and producers of conflict-related statistics.
Contemporary armed conflicts by their very nature often occur in dangerous and difficult-to-access terrain, amongst hostile parties, making acquisition of accurate conflict-related statistics especially arduous. Consider, for example, the fact that most of the coverage of the 1994 Rwandan genocide focused on the humanitarian disaster that beset those Hutu who fled to Zaire in its aftermath rather than on the horror show that was the bloodbath itself. Consequently, estimates of the total number killed during the genocide still vary by as much as half a million people, from under 500,000 to well over one million.[ii]
Moreover, in many parts of the world the relevant data gathering apparatuses may be internally inept, externally obstructed, or simply corrupt even before the outbreak of hostilities; situations such as these can hardly be expected to improve under fire. Among other problems, hospital and morgue reporting systems are often disrupted. Separating combatants from non-combatants can be problematic even under the best of conditions, and in unconventional and asymmetric conflicts like those that dominate the contemporary landscape, conditions are anything but ideal. These challenges may be further exacerbated by the fact that pre-war statistics, such as census data, may have been inaccurate, making pre- and post-violence comparisons still more onerous. Further, active impediments or deterrents are sometimes placed in the way of those tasked with gathering “inconvenient” conflict data.[iii]
Measurement can be challenging even when those responsible for tallying a conflict’s toll are not directly or indirectly menaced and even when data are available. This is true in no small part because the data gathered may simply be inaccurate and it may be difficult, if not impossible, to verify it. A wide array of psychological and anthropological studies has shown, for instance, that under certain conditions subjects will (whether consciously or unconsciously) provide false and/or biased information. Anthropologists further note that cultural considerations may inhibit what might be viewed as honest responses, due to differences in storytelling norms across cultures and societies. Respondents may engage in duplicity or embroider answers to protect themselves; they may do so for ego-related reasons, to avoid embarrassment, or out of straightforward personal or communal fear. People may likewise dissemble for self-promotional reasons; they may anticipate pecuniary or other positive incentives to result, or they may wish to appear more important and/or their (family’s) role in a conflict more consequential.
Research has also repeatedly demonstrated that respondent answers may differ materially depending on how, when and in what order questions are posed.[iv] Respondents may also exhibit social desirability bias, a common cognitive phenomenon whereby respondents provide what is believed to be the “right” answer from the enumerator’s perspective, whether that news is positive or negative and the numbers involved large or small. Finally, it is by now well known that interviewees in-country (and policymakers farther afield) often believe it is perfectly legitimate to lie, if they are doing so in the context of a larger “truth.”[v] The list above is not exhaustive, but it should be sufficient to demonstrate that reliability issues can undermine attempts to accurately measure costs of conflict—whether one is engaged in small-N ethnographic research or employing extrapolation as part of a multiple systems estimation (or other large-N) counting enterprise.[vi]
Parties to conflict often also have operational incentives to hide, undercount or downplay their own military casualties as well as the civilian casualties they’ve inflicted, while the other side has incentives to inflate the damage they have inflicted on their adversary and the number of civilian casualties they have suffered. This kind of manipulation can be useful operationally as well as in the court of public opinion. This is hardly a new phenomenon, but is an important one.
Even when multiple and studious attempts have been made to gather data, estimates of the costs of conflict may vary wildly due to measurement disparities and inconsistencies. Take for instance the case of Iraq, which has been one of the most widely measured (and yet also most numerically contentious) conflicts in recent memory. Why, from a data disparity perspective, have estimates varied so significantly? For one thing, different studies have examined different periods of time. This has significant implications, given that levels of violence in Iraq have waxed and waned tremendously in the period since the March 2003 invasion, with the years 2006 (especially) and 2007 being the deadliest. Different sources and studies have also used different definitions of war dead and thus have counted different groups of people, key scope conditions that have often been omitted when these statistics have appeared in the press. Some studies have included soldiers in their tallies, while others have focused solely on non-combatants. Some sources have included indirect deaths, while others have counted only violent, apparently intentional, deaths. The disparities inherent in these apples-versus-oranges comparisons are made still more acute by the fact that definitions of what constitutes violent and non-violent causes of death also vary.
Perhaps most critically, fundamentally different methods for counting the dead have been utilized in different studies. Employing what are called “passive surveillance” techniques, the UK-based research group Iraq Body Count (IBC) has cross-referenced fatalities reported in the media with figures from Iraqi hospitals, morgues and NGOs and estimates that there have been approximately 100,000 violent civilian deaths since 2003. IBC expects to raise this tally by about 19 percent, following inclusion of new data gleaned from the mass of classified US government documents released by WikiLeaks in late 2010. IBC acknowledges that their figures are most likely an underestimate, because passive techniques tend to suffer from under-counting and under-reporting.[vii]
Significantly higher estimates have resulted from the use of active survey techniques. At least four household surveys have been conducted, in each of which Iraqis were asked to identify the family members they had lost—and in some cases, to provide documentation of their deaths. (In some cases, however, it appears survey respondents’ answers were limited to immediate family members, while in others, they included extended family members.) Survey results were then extrapolated to generate nationwide estimates ranging from several hundred thousand, in a generally highly regarded World Health Organization-sponsored (WHO) study, to well over one million, in a highly contentious study undertaken by a group called Opinion Research Business (ORB). Arguably, the most highly publicized—and most broadly criticized—of these studies were conducted by public health researchers from Johns Hopkins University. In results published in the British medical journal the Lancet, they suggested that war-related deaths, broadly defined, numbered between the low 400,000s and just under 800,000. Although the Lancet study figures are significantly lower than the ORB estimates, they still dwarf the WHO study numbers. This still sizable gap is a key reason why a fourth factor—namely, politics—enters the picture.
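The arithmetic of survey extrapolation helps explain why these estimates diverge so sharply. A minimal sketch, using hypothetical round numbers (the population figure, sample sizes, and rates below are illustrative assumptions, not figures from any of the studies discussed), shows how a seemingly modest difference in a sampled excess-death rate balloons into a gap of hundreds of thousands of deaths once multiplied across an entire population:

```python
# Illustrative sketch with hypothetical numbers: extrapolating a sampled
# excess-death rate to a national total, as household surveys of mortality do.
# A small absolute difference in the sampled rate produces a very large
# difference in the nationwide estimate.

def national_estimate(excess_deaths_in_sample: float,
                      person_years_sampled: float,
                      national_population: float,
                      years_of_conflict: float) -> float:
    """Scale a sampled excess-death rate up to a nationwide total."""
    rate = excess_deaths_in_sample / person_years_sampled  # deaths per person-year
    return rate * national_population * years_of_conflict

POPULATION = 27_000_000   # rough national population (assumption)
YEARS = 3.3               # length of the period surveyed (assumption)

# Two hypothetical surveys differing by a few recorded deaths per
# 1,000 person-years yield national estimates ~400,000 apart.
low = national_estimate(300, 100_000, POPULATION, YEARS)    # 3.0 per 1,000 py
high = national_estimate(780, 100_000, POPULATION, YEARS)   # 7.8 per 1,000 py

print(f"low-rate survey  -> {low:,.0f} excess deaths")
print(f"high-rate survey -> {high:,.0f} excess deaths")
```

The same multiplier effect means that any sampling error, recall bias, or definitional difference in the underlying survey is scaled up along with the rate itself, which is one reason careful consumers of such figures attend closely to confidence intervals and methodology.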
Since conflict measurement is politically consequential, it also tends to be politically contentious. Stakeholders will tend to emphasize or downplay the costs and consequences of a particular conflict, depending on their position, goals and imperatives. They may likewise stress or ignore the complexity, uncertainty and methodological issues tied to a particular set of conflict numbers. To paraphrase an old political truism, “How and what you count depends on where you sit.” But does it ultimately matter? Given the existence of stark political and objective challenges to conflict measurement and the nearly inescapable conclusion that conflict-related statistics will tend to be suspect, do the source, size, and ultimate credibility of such statistics matter? Should we be concerned, in other words, if conflict-related social facts are often not facts at all, but rather (often politically motivated), socially constructed inventions?
Contrary to the accepted wisdom in some circles—that the ends effectively justify the means, and thus if one’s intentions are good, numerical manipulation and misrepresentation is legitimate[viii]—I argue that the veracity and ultimate credibility of conflict statistics do matter. A failure to at least strive for statistical accuracy in the realm of warfare can prove demonstrably counterproductive and enduringly damaging, from political, humanitarian, juridical and scholarly (knowledge-focused) perspectives. These numbers shape both public and closed-door policy debates. They serve to legitimize some positions and undercut others. They help us understand what is actually happening (or has happened in the past). Moreover, the “right” numbers can confer a certain kind of authority—even if it is potentially perishable. And once statistics have been released and widely adopted, they are very hard to unseat and replace, even if and when better data become available. Conflict statistics consequently carry significant implications for a wide range of security-related policies.
In the context of conflict itself, prevailing estimates of the scale of violence, its complexion, and its measurable consequences undeniably play a role in shaping policy priorities and objectives. Operating under false pretenses therefore compromises the ability of both politicians and their polities to assess what those priorities and goals should be, and how and when they may need to be revisited or revised. This in turn may serve to actually exacerbate suffering and ultimately increase the human costs of conflict. In particular, politically motivated statistical distortion can lead to the following counterproductive policy outcomes:
- a misallocation of resources, such that conflicts in which figures are inflated receive disproportionate shares of human, military and financial resources that might be better spent elsewhere—a problem that may be exacerbated by pecuniary incentives to distort;[ix]
- a degradation in the conduct and/or efficacy of military operations and implementation of related policies—whether by adversely affecting levels of public support, or by undermining leaders’ own ability to honestly evaluate the magnitude and scope of a problem;[x]
- the prolongation of wars, both by contaminating battlefield assessments and by misleading those who are destined to fight them;
- muddied evaluations of policy success and failure, which in turn can affect levels of support for current and future operations;[xi]
- political cover for inaction if actors are eager to avoid undesirable missions;[xii]
- damage to interstate relations;[xiii] and
- the distortion of history and collective knowledge and the creation of effective political ammunition that can be mobilized by enterprising politicians to help fuel perceptions of victimhood and also help justify retaliation and even mass killing.
If the policy repercussions of statistical inaccuracy and manipulation were just a matter of misdirected resources (read money), one could argue that politicization problems could be dismissed. But what is often at stake is more consequential than cash; reliable statistics are a matter of life and death. Moreover, the effects of politicization tend to be enduring and not necessarily even limited to the conflict in which they originated. Inflated, deflated, and inaccurate conflict-related data can be readily manipulated: by the media to stir up public opinion, by organizations to further their missions and imperatives, by political entrepreneurs and by governments to justify the political objectives they embrace and avoid taking on others.
Furthermore, once produced, numbers are not dependent on their creators to be perpetuated and legitimated. The public announcement of an impressively large sounding number, regardless of its origins or validity, can generate prominent press coverage, which in turn further legitimates and perpetuates the use of the number, helping conflict-related statistics take on lives of their own once they have been made public. Skeptical treatments of statistics, on the other hand, tend to receive significantly less media attention. This is due in part to the fact that many people are relatively innumerate. They consequently have trouble thinking critically about statistics and overly rely on the presumed expertise of their producers. To complicate matters further, for a variety of psychological and cognitive reasons, people tend to “anchor” their beliefs most strongly on the first number they hear, particularly if it is shocking and precise. Once a piece of information has been rooted in this manner, it becomes stickily and stubbornly resistant to updating even when new, more reliable information becomes available.
A FEW RECOMMENDATIONS
A) FOR CONSUMERS OF STATISTICS: Ask the following set of simple questions when evaluating conflict data:
1) What is/are the source(s) of the numbers?;
2) How is what is being measured defined—for instance, who is a combatant? And what constitutes a combat-related death?;
3) What are the interests of those providing the numbers? What do these actors stand to gain or lose if the statistics in question are (not) embraced or accepted?;
4) What methodologies were employed in acquiring the numbers?; and
5) Are there potentially competing figures, and, if so, what do we know about their sources, measurements, and methodologies?
B) FOR PRODUCERS AND GATHERERS OF CONFLICT DATA: (These are generally well understood, but nevertheless worth reiterating)
1) As circumstances permit, use what experts view as the most appropriate methodology/ies for measuring the conflict in question;
2) Whenever possible, aim to replicate the exercise with a complementary, alternative source of measurement; and
3) Remain cognizant of, and vigilant regarding, the complicating factors outlined above when granting credibility and authority to any (set of) conflict statistics.
Kelly Greenhill is a Research Fellow in the Belfer Center’s International Security Program at Harvard University and Associate Professor of International Relations and Security Studies at the Fletcher School.
[i] Some of the material in this memo has been drawn from Kelly M. Greenhill, “Counting the Human Cost in Iraq,” British Broadcasting Corporation (BBC) (May 2011); and from Peter Andreas and Kelly M. Greenhill (eds.), Sex, Drugs and Body Counts: The Politics of Numbers in Global Crime and Conflict (Cornell University Press, 2010).
[ii] Such a gap is particularly striking given that Rwanda’s total pre-war population was under eight million.
[iii] For example, in the aftermath of the 2003 US-led invasion, the Iraqi health ministry reportedly initially tried to keep a count based on morgue records, but stopped releasing numbers because of international pressure. Likewise, the director of the morgue in Baghdad allegedly received death threats due to the “embarrassment” he was causing by publishing death tolls. Jonathan Steele and Suzanne Goldenberg, “What is the Real Death Toll in Iraq?,” The Guardian, March 18, 2008.
[iv] See, for instance, Roger Tourangeau and Tom W. Smith, “Asking Sensitive Questions: The Impact of Data Collection Mode, Question Format, and Question Context,” Public Opinion Quarterly 60 (1996): 275-304.
[v] During the 1999 Kosovo conflict, for instance, it was discovered that Raimonda, a young Albanian who claimed to be killing Serbs to avenge the murder of her little sister, made up the whole story for the benefit of the Western TV journalist who interviewed her. Later, when Raimonda’s sister was found to be alive and well, her family was unrepentant. “If her little lie helped the Albanian cause, that’s just fine,” her father reportedly commented.
[vi] When survey methods are used to provide baseline numbers from which extrapolation may be undertaken, apparently marginal errors can multiply, even exponentially. One increasingly embraced method of addressing such eventualities is through the use of multiple counting strategies in the same conflict. However, this presupposes the existence of sufficient resources and capacity to undertake multiple counting exercises in potentially quite perilous conditions; such circumstances are apt to remain the exception rather than the rule for the foreseeable future.
[vii] That said, IBC figures are in the same ballpark as those offered in the Washington-based Brookings Institution’s Iraq Index, which also employs passive techniques, but relies on a somewhat different bundle of information than IBC.
[viii] This is the position embraced by what I term the “intentions-based” school. See Greenhill, Chapter 6 in Sex, Drugs and Body Counts.
[ix] For instance, during the Mozambican Civil War, local officials quickly learned that inflating numbers of “needy populations” could bring more food aid to their localities, which in turn offered an increased possibility of diverting a portion for public or private benefit. As a result, the figures sent to the government by provincial authorities “were highly inflated, at the same time as corruption was surfacing as a serious issue in the relief operations carried out by the government.” Sam Barnes, Humanitarian Aid Coordination During War and Peace in Mozambique, 1985-1995 (Studies on Emergencies and Disaster Relief, no. 7), 13.
[x] The George W. Bush Administration’s stubborn refusal to acknowledge the size of the insurgency it faced in Iraq following the 2003 occupation significantly delayed the deployment of troops and materiel needed to engage the enemy and secure the population.
[xi] “Misremembered” assessments of what NATO bombing accomplished in Bosnia, for instance, made the Clinton Administration overly sanguine about what “a few days of bombing” would do to change the mind of Slobodan Milosevic on the issue of Kosovo.
[xii] The systematic underestimation of the scale of killing in Rwanda in 1994 and the Darfur region of Sudan in the early 2000s provides just a few recent examples of where inaction was rationalized in this manner.
[xiii] Such as in the context of the U.S. Civil War, when British pro-Southern journalistic coverage of the war “caused such a cleavage between the nations that it required a generation to heal it.”