War and Peace

Do we need to take cyberattacks more seriously?

Thomas Rid's warning against cyberwar hype ("Think Again: Cyberwar," March/April 2012) would be more useful if it were not also a contribution to the jousting match between those who claim we're fighting one already (Rid's right; we're not) and those who say there's nothing to worry about (he's wrong; there is).

We're in a period between war and peace when even supposedly secret communications are vulnerable and nasty cyberoperations are becoming the norm. What confuses matters is that we use "attack" to refer to everything from a network probe, to a penetration that steals information, to a penetration that destroys or degrades a system or the information on it. Even real attacks do not amount to acts of war unless they have physical results.

Rid's dismissal of supply-chain operations is also troublesome. He pooh-poohs accounts of the 1982 Soviet pipeline explosion because the KGB denied it (duh!) and the CIA declined to confirm it (double duh!). Every large corporation I deal with is concerned about supply-chain security -- not just the Pentagon, which has already suffered system degradations. This isn't just a threat from commercial counterfeiting. Supply-chain attacks are a bread-and-butter technique of foreign intelligence services.

Rid is similarly cavalier when arguing that cyberattacks on infrastructure are now more difficult to execute because "[s]ensitive systems generally have built-in redundancy and safety systems." If only this were true of our electric grid! As the North American Electric Reliability Corp. reported in 2010, supposed efficiency improvements "have allowed some inherent physical redundancy within the system to be reduced."

I don't know a single front-line cyberoperator who agrees with his assessment that offensive tactics do not have an inherent advantage over defensive measures in the present state of technology. We are being penetrated, and our infrastructure is vulnerable.

Joel Brenner
Author, America the Vulnerable: Inside the New Threat Matrix of Digital Espionage, Crime, and Warfare
Cooley, LLP
Washington, D.C.

Thomas Rid replies:

The consulting outfit Digital Bond recently demonstrated that industrial control systems -- specifically, certain components of so-called SCADA systems -- run software that does not prioritize security. These field devices control all sorts of things that move, from trains to oil to elevators. Worse, many such systems expose their open flank to the Internet.

So Joel Brenner is right: Yes, we're very vulnerable. Yes, plenty of actors out there have malicious intent. Yes, we're being penetrated. And yes, we should do something about these problems -- urgently. But harping on about wholesale cyberwar is counterproductive. Discussing WikiLeaks, commercial espionage, financial fraud, and the most potent intelligence operations in the same breath displays a lack of much-needed nuance. Only the thin top end of that range of threats is really scary. "Supply-chain attacks," Brenner insists, "are a bread-and-butter technique of foreign intelligence services."

Why, then, have we not seen a more serious attack against SCADA systems? Stuxnet has been the most spectacular attack to date. Yet even that mean worm, in the larger scheme of things, neither halted nor significantly dented Iran's nuclear program. No serious treatment of cyberwar can dodge this question any longer. Only scaremongers can.

My colleague at King's College London, Peter McBurney, and I have tried to answer this question. Developing and deploying a destructive cyberweapon requires significant resources, intelligence, and time. And the more destructive the design of such a weapon, the smaller the number of targets, the smaller the risk of collateral damage, and, ultimately, the smaller the political utility of cyberweapons.


Trust But Verify

How important is representative data in human rights work?

There is an urgency to humanitarian response that Tina Rosenberg ("The Body Counter," March/April 2012) and her subject, Patrick Ball, do not seem to appreciate. Representative numbers are important in the human rights field, but only to the extent that they actually improve people's lives. The U.S. Marine Corps used the SMS data mapped on the Ushahidi-Haiti platform to save hundreds of lives after the 2010 earthquake. Something is wrong when self-styled human rights defenders attack lifesaving volunteer work.

Ball does not typically work with representative samples. He simply applies methods that assume he has nicely behaved random samples. Validation studies are needed to demonstrate that a technique can perform well despite substantial real-world violations of its assumptions. Validation works by applying the technique to a case with an already known answer to test whether one gets the right answer. There are, unfortunately, few validation studies in Ball's area of work. Many of his studies therefore ride entirely on assumptions.

In the case of Haiti, we actually have a strong validation study. This research was produced independently by the European Commission's Joint Research Centre and survived a peer-review process, which is how scientific work is validated. The Ushahidi data provided a truer guide to the damage in Haiti than Ball's alternative -- a map of buildings -- would have. (Note that no such map existed at the time anyway.)

Ball and Rosenberg also appear to be confused about the Ushahidi platform, which is simply an information collection and visualization tool that is equally usable for either representative or nonrepresentative data. Columbia University researchers Macartan Humphreys and Peter van der Windt, for example, used Ushahidi to collect and visualize representative cell-phone data in their Voix des Kivus project in eastern Congo.

Finally, Rosenberg states that I "ultimately retreated to a narrower set of claims" after defending the European Commission's analysis. Absolutely not. I fully stand by my original arguments.

Patrick Meier
Director of Crisis Mapping, Ushahidi
Nairobi, Kenya

Tina Rosenberg replies:

Patrick Meier is incorrect in thinking I am attacking Ushahidi's lifesaving work. The information Ushahidi collects is invaluable for first responders during times of man-made or natural disaster. The question is whether Ushahidi can go beyond this core mission to map a disaster accurately.

Meier contends that the Ushahidi platform can be used to collect and visualize representative cell-phone data. But it is almost never used this way, and the Voix des Kivus project he cites shows why. It tries to create a representative sample by pre-positioning cell phones and pre-training villagers who are selected because they are members of different groups.

Most disasters, of course, do not wait for this kind of preparation. In the vast majority of cases, then, Ushahidi's data can pinpoint reports of violence or destruction, but cannot reliably describe their pattern or multitude.  

As Tina Rosenberg's profile of statistician Patrick Ball details, measuring suffering is a complex endeavor. And the International Rescue Committee (IRC) agrees with Ball that practitioners must use the best available data and scientific methods in obtaining numbers. We disagree, however, with his suggestion that the IRC erred in its estimation that 5.4 million people died from conflict-related causes in the Democratic Republic of the Congo between 1998 and 2007.

Over a seven-year period, the IRC partnered with leading epidemiologists to conduct five mortality surveys in Congo. To estimate the number of war-related deaths, our experts needed to know the prewar mortality rate. The best source available was Congo's official crude mortality rate of 1.3 deaths per 1,000 people per month, based on the country's most recent national census.

Ball is quoted as saying that the prewar rate we selected was "far too low," leading to a higher excess death estimate. In fact, to be conservative in our calculations, the IRC deliberately used a baseline rate for sub-Saharan Africa that was 15 percent higher than Congo's official rate and 20 percent higher than that used by UNICEF. Using this higher rate resulted in a lower estimate of conflict-related deaths.

We also disagree with Ball's claim that a "correct" baseline estimate would have resulted in "an excess death figure that is only one-third or one-fourth as high." To arrive at such a low figure, one would need to assume that the prewar baseline mortality rate for Congo was 2.85 deaths per 1,000 people per month -- more than double the rate used by Congo's government and UNICEF and higher than any rate ever reported for an African country. This is simply not a plausible baseline figure.
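The baseline dispute above is, at bottom, simple arithmetic. The sketch below is a hypothetical illustration, not part of either letter: it assumes excess deaths are proportional to (wartime rate minus baseline) times person-time, and back-solves the wartime mortality rate implied by the figures given here (the official 1.3 deaths per 1,000 people per month, the IRC's roughly 15 percent higher baseline, and the claimed 2.85 threshold).

```python
# Hypothetical sensitivity check on the excess-death arithmetic.
# Figures taken from the letters: Congo's official prewar crude
# mortality rate was 1.3 deaths per 1,000 people per month, and the
# IRC's baseline was about 15 percent higher. The wartime rate below
# is back-solved from the claims, not a figure given in the source.

IRC_BASELINE = 1.3 * 1.15  # ~1.5 deaths per 1,000 per month

def excess_deaths(wartime_rate, baseline, person_months):
    """Excess deaths = (wartime rate - baseline) x person-time."""
    return (wartime_rate - baseline) / 1000.0 * person_months

# Ball's claimed baseline of 2.85 would yield one-third the excess:
#   (w - 2.85) = (w - IRC_BASELINE) / 3  =>  w = (3 * 2.85 - IRC_BASELINE) / 2
w = (3 * 2.85 - IRC_BASELINE) / 2
print(round(w, 2))  # ~3.53 deaths per 1,000 per month

# The implied ratio between the two estimates (person-time cancels):
pm = 1_000_000  # arbitrary person-months
ratio = excess_deaths(w, 2.85, pm) / excess_deaths(w, IRC_BASELINE, pm)
print(round(ratio, 3))  # 0.333
```

The point the sketch makes concrete: because the excess is a difference of two rates, the estimate is highly sensitive to the baseline, which is why both sides argue over whether 2.85 deaths per 1,000 per month is a plausible prewar figure.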

Prior to the IRC surveys, the world made wild guesses about Congo's war-related mortality. Today, there is hard evidence that millions died. Like Ball, we believe that "measurement matters."

Senior Health Director, International Rescue Committee
New York, N.Y.

Tina Rosenberg replies:

In its Congo studies, the International Rescue Committee (IRC) compared the death rate during the war with a prewar baseline death rate and then calculated the excess. Obviously, if the baseline death rate is too low, then the excess will be too high.

What was the IRC's baseline? The average death rate in sub-Saharan Africa. But by any measure, conditions in the Democratic Republic of the Congo -- even before the war -- were worse than those almost anywhere else on the continent. The death rate in Congo was likely much higher than the African average.

While the IRC's figures have been widely cited in the media, they are controversial among demographers. One study by two Belgian demographers using a higher baseline came up with an estimate of 200,000 excess deaths due to the war between 1998 and 2004. For the same period, the IRC estimated 3.9 million excess deaths -- 20 times as many.