An October report by Columbia Law School's Human Rights Clinic claims to have found significant flaws in media reports regarding casualties caused by U.S. drone operations in Pakistan. Three organizations, the Bureau of Investigative Journalism, the Long War Journal and the New America Foundation, maintain databases that collect casualty information for each strike, and their research is regularly cited in congressional reports and news articles. While the Columbia report laments that these estimates can only "substitute for hard facts and information that ought to be provided by the U.S. government," it proceeds to weigh in on the casualty debate. After a strike-by-strike comparison of the three databases' 2011 data, Columbia concludes that two of these organizations "significantly undercount the number of civilians killed by drone strikes," while singling out the Bureau as the most accurate and reliable source of information on drone casualties.
The Columbia study is quick to critique the drone data compiled by the New America Foundation and the Long War Journal, yet it devotes negligible attention to potential shortcomings in the Bureau's reporting. The study repeatedly applauds the Bureau's investigative practices, analysis criteria and the breadth of its sources. Its lone criticism is guarded, noting only that "we do not agree with the Bureau's analysis of media sources in all cases." Upon reading Columbia's "Counting Drone Strike Deaths," one is led to conclude that the Bureau's casualty estimates are both methodologically rigorous and empirically sound.
And yet, a careful reading of the separate 65-page dataset, which details the findings of Columbia's exhaustive comparison, reveals numerous instances in which the Columbia researchers reject the Bureau's interpretation of the evidence or dispute the credibility of its sources, criticisms that receive no mention in the widely circulated report itself.
Columbia analyzed reports only for 2011, but had the researchers continued, they would have found that these problems pervade the Bureau's reporting on strikes from 2004 through 2012.
Based on this tenuous evidence the Bureau has claimed 45-240 civilian casualties. Taken together, this methodologically flawed reporting accounts for over 25 percent of the 474-884 civilians the Bureau claims died between 2004 and 2012. While some of these dead may in fact have been civilians, in the face of so much ambiguity it would be more prudent to label the deceased as unidentified or unknown. This would more accurately represent the evidence and acknowledge that, despite the best attempts to gather information, much uncertainty remains about the outcome of individual strikes and the overall effect of the U.S. drone program.
The trends highlighted above point to three broader methodological flaws in the Bureau's analysis that the Columbia researchers fail to address. The first is a problem of evidence. The Columbia report suggests that the widest range of sources provides the most credible evidence, and that because the Bureau cites the largest body of sources and reports the highest casualty figures, its numbers are the most reliable. Beyond the fact that the Bureau is a notable outlier compared to the other two datasets, produced by the New America Foundation and the Long War Journal, it is a mistake to privilege quantity of evidence over quality. Pakistan Body Count, the South Asia Terrorism Portal and the Long War Journal are secondary sources that rely on reporting from other media outlets and should not count as corroborating sources, yet the Bureau treats them as such. Antiwar.com, sify.com, Prison Planet and the World Socialist Web Site are simply not credible news outlets, yet these are among the sources the Bureau is praised for citing.
The second problem is the absence of transparency in investigations of drone strikes carried out by the Bureau's own researchers. The Bureau says it has conducted independent investigations of certain strikes, and its database includes 13 strikes where the sole source of information citing civilian deaths is "the Bureau's own researchers." These uncorroborated claims account for at least 56-64 of the Bureau's reported civilian casualties. The strike on January 6, 2010 includes a typical description: "According to the Bureau's researchers five rescuers died, named as Khalid, Matiullah, Kashif, Zaman and Waqar, all belonging to the Utmanzai Wazir tribe." However, there is no indication of who these researchers are or what standards they apply to their reporting.
The same criticisms that the Columbia report levels at unnamed Pakistani government officials, who discuss drone strikes on the condition of anonymity, could just as easily be aimed at the Bureau's own reporting:
We do not know who the unnamed Pakistani officials are, although observers believe they are Pakistani army officials. What definition these officials use to categorize a person as a militant or civilian is unknown. Nor do we know how the Pakistani Army confirms such deaths or the quality of information it is able to rely on given the limited accessibility of some of the tribal regions to even the Army.
The Bureau's researchers might well be the sort of local journalists or "stringers" the Columbia report is quick to term unreliable. Nor does the Bureau mention who its sources are, when or where they were interviewed, or what was said. If the Bureau wants its findings to be taken seriously by other researchers, then it should provide independent reports of its investigations rather than cursory references in the midst of its dataset.
The third problem is one of interpretation. The Bureau consistently counts references to "local" deaths as civilian casualties, but as the Columbia dataset notes, these descriptors are not synonymous. The media reports are riddled with references to "local militants" and "tribal militants." It stands to reason that a significant number of the militants operating in the tribal regions of Pakistan would live in the area, so the mere fact that the deceased are reported as local is not sufficient to establish that they are civilians. And yet, the Bureau consistently claims just that. Even worse, it frequently labels fatalities as civilians when the media accounts refer to them in neutral terms such as "people" or admit that their identity is unclear.
Furthermore, the Bureau's written methodology provides limited insight into how it makes these interpretations. The methodology makes no mention of how the Bureau treats reports of "local" deaths. Nor does it explain how the Bureau handles conflicting reports. The methodology says that when reports differ the Bureau provides a range of total casualties, but it does not explain how the Bureau determines whom to count as a civilian. It goes on to state that "where media sources refer only to 'people' killed... we indicate that civilian casualties may be possible." One would assume the Bureau indicates this by way of an asterisk or a note, but in most instances it appears instead to report a range of civilian casualties with a low end of zero and a high end of the total killed. This denotes the uncertainty but potentially inflates the high end of the range of civilian deaths. Moreover, it signals a clear preference for counting unidentified casualties as civilians.
Admittedly, this somewhat esoteric discussion about the veracity of the Bureau's claims versus those of other databases, or the appropriate methodology for counting casualties, risks losing sight of the broader picture. These are not merely numbers; these are people. And no matter which database you reference, civilians are being killed by the hundreds. While this consideration should be paramount, an assessment of the drone program should also take into account those factors that are less quantifiable: the elevated rates of PTSD in areas where drones operate, the dangerous example being set for other states, most notably China and Russia, and the increase in anti-U.S. sentiment in Pakistan that risks endangering the lives of American citizens.
But as the Columbia study points out, numbers matter. Numbers drive our public discourse. Numbers are how politicians measure outcomes. Numbers are how we make sense of our world. And numbers are vulnerable to manipulation, a distortion that is equally dangerous whether it involves government officials lowballing civilian casualty reports or independent researchers potentially inflating them.
Meg Braun is a Rhodes Scholar and MPhil candidate in International Relations at Oxford, where she is researching the evolution of U.S. drone policy. She was an intern at the New America Foundation during summer 2012, where she worked to revise and update its drone database.
Correction: This post initially stated incorrectly that "The Bureau says it has conducted independent investigations of certain strikes and their database includes 15 strikes where the sole source of information citing civilian deaths is from 'the Bureau's own researchers.' These uncorroborated claims account for at least 65-73 of the Bureau's reported civilian casualties." The correct numbers are 13 strikes and 56-64 reported civilian casualties.