The Body Counter

Meet Patrick Ball, a statistician who's spent his life lifting the fog of war.

BY TINA ROSENBERG | MARCH/APRIL 2012

There are two ways to get those answers. Most often we guess. But those guesses are based on data that can mislead us utterly. "People still take the answer seriously," says Ball, "even though the science underneath it may not be any better than in the '60s and '70s, when we just made the stuff up. It's not only easy to misinterpret these numbers; it's substantially easier to misinterpret them than to interpret them correctly. You look at the graph and say, 'Now I know what happened in Liberia.' No, you know what happened to people who talked to the truth commission in Liberia. You can say there were at least 100 people killed. But you can't say we had five in the north and five in the south and start drawing patterns."

Ball's accomplishment has been to provide an alternative to guessing: With statistical methods and the right kind of data, he can make what we know tell us what we don't know. He has shown human rights groups, truth commissions, and international courts how to take a collection of thousands of testimonies and extract from them the magnitude and pattern of violence -- to lift the fog of war.

IN THE LATE 1990s, Ball was commuting between Cape Town and Guatemala City, working for both the South African and Guatemalan truth commissions, immersed in the varying atrocities of apartheid and genocide committed by two very different regimes. Then in June 1998, when the staff of the South African Truth and Reconciliation Commission was writing its report, commissioner Mary Burton asked him what should have been a simple question: How many killings had there been in apartheid South Africa?

"According to the testimonies, 20,000," said Ball.

"No," said Burton. "How many were there in all?"

Uh-oh, thought Ball. "I don't have an answer," he told Burton.

The commission had been hobbled by a boycott from the Inkatha Freedom Party, the second-largest black political group, which had blocked its members from testifying because its leaders thought the commission was biased toward the African National Congress, with which it was involved in a long, simmering war. But just before the deadline for testifying expired, Inkatha lifted the ban and its members came forward. Those testimonies completely rewrote the commission's conclusions, as some 8,000 Inkatha members gave evidence -- about one-third of the commission's final total. What, Ball wondered, would have happened if Inkatha hadn't decided to participate? "Are people really coming forward in the same proportion as they suffered?" he asked himself.

As he was talking to Burton, it hit him: We don't know what we don't know. "It wasn't enough to take good care of the evidence we have," he says. "We have to transcend what people are willing to tell us. You're thinking about: What can I fix about the data coming in? But the big problem is: What's true in the world?"

At the University of Michigan, where Ball studied sociology in a program heavy on statistics, the solution to such problems was clear: Go out and take a random sample. You choose households at random and survey them about what happened. Since your sample is representative of the whole, you can easily extrapolate the results to the larger universe. But this was not something human rights groups knew how to do, and it would have been prohibitively expensive. It was not the answer.

Working on the Guatemalan data, Ball found the answer. He called Fritz Scheuren, a statistician with a long history of involvement in human rights projects. Scheuren reminded Ball that a solution to exactly this problem had been invented in the 19th century to count wildlife. "If you want to find out how many fish are in the pond, you can drain the pond and count them," Scheuren explained, "but they'll all be dead. Or you can fish, tag the fish you catch, and throw them back. Then you go another day and fish again. You count how many fish you caught the first day, and the second day, and the number of overlaps."

The number of overlaps is key. It tells you how representative a sample is. From the overlap, you can calculate how many fish are in the whole pond. (The actual formula is this: Multiply the number of fish caught the first day by the number caught the second day. Divide the total by the overlap. That's roughly how many fish are really in the pond.) It gets more accurate if you can fish not just twice, but many more times -- then you can measure the overlap between every pair of days.
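
The two-catch arithmetic described above is known as the Lincoln-Petersen estimator, and it takes only a few lines of code. A minimal sketch, with invented numbers for illustration:

```python
def lincoln_petersen(first_catch, second_catch, overlap):
    """Estimate total population size from two tagged samples.

    first_catch: number caught and tagged on day one
    second_catch: number caught on day two
    overlap: how many of day two's catch were already tagged
    """
    if overlap == 0:
        raise ValueError("no overlap: the estimate is undefined")
    return first_catch * second_catch / overlap

# Hypothetical pond: 60 fish tagged on day one; 50 caught on day
# two, 15 of them already tagged -> roughly 200 fish in the pond.
print(lincoln_petersen(60, 50, 15))  # 200.0
```

The intuition: if 15 of the 50 fish caught on day two carry tags, tagged fish make up about 30 percent of the pond, so the 60 tags released on day one imply a pond of about 200 fish.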

Guatemala had three different collections of human rights testimonies about what had happened during the country's long, bloody civil war: from the U.N. truth commission, the Catholic Church's truth commission, and the International Center for Research on Human Rights, an organization that worked with Guatemala's human rights groups. Working for the official truth commission, Ball used the count-the-fish method, called multiple systems estimation (MSE), to compare all three databases. He found that over the time covered by the commission's mandate, from 1978 to 1996, 132,000 people were killed (not counting those disappeared), and that government forces committed 95.4 percent of the killings. He was also able to calculate killings by the ethnicity of the victim. Between 1981 and 1983, 8 percent of the nonindigenous population of the Ixil region was assassinated; in the Rabinal region, the figure was around 2 percent. In both those regions, though, more than 40 percent of the Mayan population was assassinated.

It was the first time anyone had applied MSE to human rights work. "He produced numbers that provided a strong, crisp basis for drawing the conclusions the commission did about violence, in a way you can't get from testimony," says Kate Doyle, a senior analyst with the National Security Archive at George Washington University, who has worked extensively with archives in Guatemala.

Photograph by Peter DaSilva for FP

Tina Rosenberg is co-writer of the Fixes column on NYTimes.com and author, most recently, of Join the Club: How Peer Pressure Can Transform the World.

2GEORGE

11:06 AM ET

March 1, 2012

counting is counter-productive

"One death is a tragedy, a million a statistic," said a famous mass murderer. Alas, he's right. Statistics and human rights talk, being abstract, sap the motivating power of the values that human rights are meant to assert. It's an ironic tragedy: http://papers.ssrn.com/sol3/papers.cfm?abstract-id=1330693

 

SAKSIN

5:45 PM ET

March 1, 2012

...but maybe not always

I agree with 2GEORGE that death-tolls somehow miss the moral point. Both suffering and morality are personal, and nothing is more impersonal than numbers. Yet Ball's use of numbers plus analysis occasionally is able to reveal patterns that bear on intent, as apparently happened in the Yugoslav case, and then you are approaching the universe of moral discourse. So I would not dismiss his efforts altogether. Having read only the cited paper's abstract, I look forward to reading the rest.

 

ANDYWILCOXSON

12:52 AM ET

March 4, 2012

He Proved Nothing About Kosovo

Ball's report was rejected by the Tribunal. Dr. Eric Fruits totally discredited Ball's methodology and his conclusions regarding Kosovo.

 

PATRICKBALL

8:02 PM ET

March 4, 2012

Re: mortality in Bosnia

I did comment on the Bosnia mortality statistics, and my comments were specifically in favor of the enumeration by the Sarajevo-based Research and Documentation Centre that found approximately 97,000 deaths (see http://www.hicn.org/research_design/rdn5.pdf). The RDC's enumeration was subsequently validated as nearly complete by an MSE estimate done by ICTY demographers Jan Zwierzchowski and Ewa Tabeau (I think their excellent study was published only at the Feb 2010 Households in Conflict Network conference in Berlin, but it might have come out elsewhere). If I understand your comment correctly, it seems that we largely agree on the total mortality for the Bosnian conflict.

 

THAMUS

11:41 AM ET

March 1, 2012

Those honest earthquakes

"Ball ... says; at least earthquakes don't have bad guys actively trying to keep people from releasing information."

Oh yeah? Ever heard of the Chinese government?

 

SAKSIN

5:55 PM ET

March 1, 2012

Where I'd like to see Ball go next

It seems to me the Israeli-Palestinian conflict needs some of Ball's best medicine. I am guessing that there are few places on earth where the media image - at least as far as I know it in Europe - is farther from realities than there.

 

EBOY

6:05 PM ET

March 2, 2012

A Fine Article

What a guy. But I so wish Tina Rosenberg hadn't used that word "evangelical" to describe Ball.

 

MARTIAL

7:03 PM ET

March 3, 2012

The methodology appears a bit odd

Reports of deaths are not independent samples of fish in a pond. Whereas fish caught in a pond can be objectively ascertained & analyzed, reports by survivors or interviews by objective persons of those living in affected areas might well have common systemic biases. The persons being interviewed are people, not fish. Different people might have heard of the same person's being dead; this does not mean, like with the fish, that the same fish has been caught. One can count corpses; using estimates from people who might have bias is not as secure.

Another problem:

Page 20: Many people in Kosovo have similar surnames, and it can be difficult to distinguish between people by last name alone

Page 24: The primary keys for identifying duplicate reports of the same individuals were last and first names. . . . When records contained similar names, they were considered to be matches unless other information clearly distinguished them from each other

____________

People commonly misremember names. The misremembering may be either common (person X & person Y are more likely to misremember name A than name B) or differential (persons in group X more likely remember name A than name B).

 

MARTIAL

7:41 PM ET

March 3, 2012

cont.

The errors can go both ways, of course. The problem is that the matching may either undercount or overcount the number of matches because of errors of memory. Also, the circumstances are precisely those that produce false impressions. When bombs fly & people are being killed, your activities as a witness are primarily to do what is needed to avoid becoming a corpse. Your attention to everything else is as small as it ever will be. Whereas normally, you might well be expected to remember the names of those who perished from natural causes with accuracy (that's questionable in itself), in a wartime situation, any assumption of that sort cannot hold eo ipso.

No matter how elegant the mathematics of such computations are, they cannot correct for inaccuracies of persons reporting the deaths. Like it or not, only exhumations can produce accurate body counts.

The real key, before accepting Prof. Ball's method for future events, is to see how well it works against a gold standard. When one knows in actuality the number of deaths in a calamity, war or famine being two examples, one could then apply the method & see if it holds up. Otherwise, there is no reason to trust the numbers of deaths. There is, by the way, no reason to accept this analysis simply because it had been done before, perhaps with great inaccuracy, in Nicaragua.

 

MARTIAL

7:53 PM ET

March 3, 2012

Happened before

Robert Conquest used aberrant mathematics to estimate Stalinist deaths. The use of his method would show the US during the depression killed even more.

 

ANDYWILCOXSON

1:03 AM ET

March 4, 2012

Methodology

If his methodology seems odd it's because it's bogus.

Go to icr.icty.org and download Dr. Fruits' refutation of Ball's findings. Dr. Fruits' expert report is exhibit 3D00893 in the Milutinovic et al trial. He lays it all out and exposes Ball for the fraud that he is.

The Tribunal rejected Ball's findings, his work basically amounts to garbage in = garbage out. His data was flawed, his methodology was flawed, and his conclusions were unsound -- and the trial judgment says the judges didn't even trust his objectivity as an expert.

 

PATRICKBALL

8:07 PM ET

March 4, 2012

Re: a gold standard

It's tough to have a gold standard, but there are two ways to come at it: first, through simulations. We can simulate in a computer random populations, then draw samples (even very biased samples) from the simulation, then make estimates using MSE from the samples. One way to address your concern about the difficulty of uniquely identifying people is to add error to the simulated linkages. In this way, we can assess the impact of identification error on the estimates. I've done this for a couple of projects, and the changes in the estimates tend not to be sufficient to affect interpretation. That is, in most real-world situations, overlap measurement error does not seem to contribute substantially to estimate bias.
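
A minimal sketch of the kind of simulation check described here; the population size, capture probabilities, and 10 percent linkage-error rate are invented for illustration, not taken from Ball's actual studies:

```python
import random

random.seed(42)

N = 10_000           # true (normally unknown) population size
p1, p2 = 0.30, 0.25  # capture probabilities for the two "lists"

# Draw two independent samples from the simulated population.
list1 = {i for i in range(N) if random.random() < p1}
list2 = {i for i in range(N) if random.random() < p2}
overlap = list1 & list2

# Two-list (Lincoln-Petersen) estimate with perfect record linkage.
est_clean = len(list1) * len(list2) / len(overlap)

# Simulate linkage error: 10% of true matches go unrecognized
# (e.g. misspelled names), shrinking the measured overlap and
# pushing the estimate upward.
missed = {i for i in overlap if random.random() < 0.10}
est_noisy = len(list1) * len(list2) / (len(overlap) - len(missed))

print(round(est_clean), round(est_noisy))
```

Repeating this across many simulated populations and error rates shows how sensitive the estimates are to identification error, which is the assessment described above.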

Another question you raise is the independence of samples. This is a key point, and in the MSE literature it is addressed as "list dependence." The solution is to use more than two lists, and then estimate the correlation between subgroups of lists, thus correcting for list dependence. The classic source on this method is Bishop, Fienberg, and Holland, _Discrete Multivariate Analysis_, ch6. In Kosovo, we used 4 lists.
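
Under the no-three-way-interaction log-linear model from Bishop, Fienberg, and Holland, the three-list case even has a closed-form estimate of the never-recorded cell; the cell counts below are invented for illustration:

```python
def three_list_estimate(n):
    """Estimate total population from three overlapping lists.

    n maps capture patterns to counts: '101' means recorded on
    lists 1 and 3 but not list 2. The never-recorded cell '000'
    is estimated under the no-three-way-interaction model.
    """
    n000 = (n['111'] * n['100'] * n['010'] * n['001']) / (
        n['110'] * n['101'] * n['011'])
    return sum(n.values()) + n000

# Invented counts for illustration.
counts = {'111': 20, '110': 40, '101': 35, '011': 30,
          '100': 150, '010': 120, '001': 100}
print(round(three_list_estimate(counts)))  # 1352
```

This model allows pairwise dependence between lists; only the three-way interaction is assumed away, which is what lets the unobserved cell be inferred from the observed ones.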

Another approach is to look at a real-world situation with multiple methods. Kosovo March-June 1999 offers such a case. There were two probability-sample surveys (one done by Physicians for Human Rights, the other by a group from the CDC led by Paul Spiegel and Paul Salama); the MSE that my colleagues and I did; and recently, a near-census of all killings done by the Belgrade-based Humanitarian Law Centre. The MSE and the surveys agree closely with an overall estimate of total killings March-June 1999 of approximately 10,000. The surveys do not offer enough detail to test the time and space patterns. The HLC near-census matches the time and space patterns found in the MSE remarkably closely. The data for the HLC and the MSE estimates I did are all online; it's easy to check out. A similar validation has been done with a near-census and MSE in Bosnia.

Whether this example validates MSE in general for conflict-related mortality is an ongoing debate, but I think evidence from multi-method approaches -- in Kosovo, in Bosnia, and in Timor-Leste -- is accumulating in favor.

 

JPMAHER

11:00 PM ET

March 4, 2012

Ball thinks..

"I think evidence from multi-method approaches -- in Kosovo, in Bosnia, and in Timor-Leste -- is accumulating in favor." -- Therefore it is?

 

MARTIAL

11:59 PM ET

March 4, 2012

There still is no gold standard,

so the method, which is very odd to me, lacks any real proof of validity. Robert Conquest estimated excess deaths under Stalin by the use of two census figures; complete garbage. Let's go through your methods, if we might.

 

PATRICKBALL

11:16 AM ET

March 5, 2012

re: a gold standard

In the statistical measurement of social phenomena, there are three steps in validation. First, does the math work in a formal sense, given relevant assumptions? Second, does the estimation agree with known-good ("ground truth") simulations, and with how much sensitivity to perturbations of the assumptions? Third, when there are multiple measurements of a real-world case of the phenomenon using different methods, do the methods agree?

MSE has passed the first test; there is a wide and growing literature in demography, ecology, and biological and mathematical statistics. Evidence from the second test is emerging: there's a lot done in the academic literature, and there are several studies that I know of (some by me and my colleagues) looking specifically at the conditions we expect in the measurement of conflict-related mortality. The third test (multiple measures) is expensive and consequently rare. In the three cases we have, MSE has been found consistent with the other measures.

Certainly much more can and should be done. A number of scholars have posed hypothetical situations in which they believe MSE would fail, and I am actively engaged in exploring the implications of those concerns. So far, however, all the empirical results are in favor of MSE's validity.

 

ANDYWILCOXSON

10:26 PM ET

March 3, 2012

Ball's Findings Were Rejected By The Tribunal. He's a Fraud.

The ICTY rejected Ball's findings in the Kosovo 6 trial. Here's the judgment (see paragraphs 21-29):
http://www.icty.org/x/cases/milutinovic/tjug/en/jud090226-e3of4.pdf

The trial chamber doubted Ball's objectivity as an expert and they found his methodology to be flawed and his conclusions to be unsound. The guy got his hat handed to him.

Dr. Eric Fruits successfully refuted Ball's work. Dr. Fruits' expert report is exhibit 3D00893 in the Milutinovic et al trial if anybody wants to look it up in the tribunal's archives at icr.icty.org and read it.

Dr. Fruits' trial testimony can be read here:
http://www.icty.org/x/cases/milutinovic/trans/en/080423IT.htm

Oh and you want to know why the migration stopped on Easter: the border was closed.

I can't believe that Ms. Rosenberg was so lazy that she didn't even bother to read the verdict in the trial to see if Ball's work had even been accepted. I'm not surprised that he didn't tell her it wasn't -- but if he gave her that silly picture of himself looking like a character out of Tron, it should have raised some red flags.

 

MARTIAL

12:44 AM ET

March 4, 2012

Happens more frequently than we like to admit.

and not on "one" side, if you know what I mean. There was this pair that claimed they could prove by virtue of statistics that the Iranian election was fraudulent. If you knew about number crunching it became apparent fairly quickly that they were taking the results, hunting for a weird pattern (they always exist) & then doing a statistical analysis on it.

In this case, the abuse of statistics is worse because it ignores vital differences between fish that are labelled by humans & then counted, & reports from distressed humans who almost by definition will provide inaccurate reports. Virtually everyone reading this can recall a minor emergency they experienced when they were certain something happened that did not actually occur, or did not see something obvious going on that everyone else noticed. Now imagine how accurate you would be in a war zone, when your life & the lives of your family & friends are at risk. Don't you think you just might get details wrong, like who had actually died? All you care about is saving the living.

 

PATRICKBALL

8:13 PM ET

March 4, 2012

the findings in Milutinovic

The ICTY judgment explicitly found that I was *not* biased: from the section that you cite above, para 24, "His evidence will therefore be examined below on a substantive basis."

You're right that the trial chamber did not accept my evidence, but I think they erred in doing so. Dr Fruits' primary objection was that we did not include Yugoslav force activity in our analysis. True, but we did not make conclusions about Yugoslav force culpability -- we were doing something different, which was rejecting the possibility that NATO or KLA were plausible causes of killing and migration. Dr Fruits managed to confuse the judges on this point, but of course that was his job as a defense expert, after all.

Dr Fruits further criticized our findings by raising a number of tests that he thinks we should have done; he did not conduct those tests himself. After reading his report, we did some of the tests he suggested, and we found that the tests confirmed our original findings. Unfortunately for the debate, the prosecutor chose not to present this in cross-examination. The problem is that there are so many possible tests that not all can be done in any study. We did the ones that we felt were most relevant. Of course, as Dr Fruits noted, there are more tests that could be done, but the judges did not seem to understand that Dr Fruits was raising essentially redundant and irrelevant concerns. Alas, this is the risk of legal vs. scientific debates.

As noted in the article above, the Yugoslav authorities announced a unilateral cease fire on Tuesday 6 April 1999, and they maintained their cease fire until Saturday 10 April (the cease fire stopped the day before Orthodox Easter). The Yugoslav authorities did close the border, as you note. During this period, KLA and NATO activity increased, doubling and tripling relative to the previous week. However, and this is the key point, *killings* declined from their second-highest point in the conflict 1-6 April to nearly zero during the 7-10 April cease fire. When the Yugoslav forces announced that they were resuming operations, killings increased rapidly to the third-highest peak on 14-15 April.

The argument is that if the KLA or NATO were responsible for killings, we would expect that their increased activity during the 6-10 April period would be associated with an increase in killings; in fact, the opposite occurred. When the Yugoslav forces publicly announced that they were suspending operations, killings declined to near zero, and when the Yugoslav forces announced that they were resuming operations, killings increased. This isn't proof that the Yugoslav forces committed the killings, but, as I said in both Milosevic and Milutinovic, we observe only that this correlation is consistent with the claim that the Yugoslav forces were responsible for the killings. The report is here: http://shr.aaas.org/kosovo/icty_report.pdf

Finally, Tina and I discussed the Milutinovic case at length, but she chose not to include it in her article. Not my call. Also the photo was taken by FP's photographer, he conceptualized it and set it up, and FP editors chose this photo in preference to more traditional images. Again, not my call.

 

MARTIAL

11:51 PM ET

March 4, 2012

Labelled fish are not reports from possibly biased opinions.

How can you say otherwise? What proof have you that this technique actually works from a test against a gold standard in human reports from a calamity?

 

ANDYWILCOXSON

1:11 AM ET

March 5, 2012

Did you tell her that your evidence was rejected?

Mr. Ball,

I can forgive you for the picture. That could have happened to anybody.

But para 24 of the judgment says, "The Chamber considers that the evasive nature of the witness’s responses casts doubt upon his objectivity as an expert witness. However, Ball’s expert reports do not, on their face, display any signs of bias in respect of their preparation and contents. Moreover, Ball displayed no bias during his oral testimony before the Chamber. His evidence will therefore be examined below on a substantive basis."

If they explicitly found that you were not biased, then what did they mean when they said: "The Chamber considers that the evasive nature of the witness’s responses casts doubt upon his objectivity as an expert witness." ?

And when you say, "Tina and I discussed the Milutinovic case at length, but she chose not to include it in her article." Does that mean that you told her that your evidence had been rejected by the Tribunal? In your lengthy discussions, did you ever tell her that?

Lying by omission is still lying. If you told her that your evidence was rejected, and she chose to withhold that information from her readers, then she's dishonest. If you never told her, then you're dishonest. Which is it, did you tell her or not?

 

PATRICKBALL

11:07 AM ET

March 5, 2012

re: Milutinovic

The substantive parts of para 24 say that my expert reports and oral testimony do not display bias; this seems to me the conclusion. The part the court didn't like was a line of questioning in which, frankly, they confused me and, in the moment, I couldn't figure out what they were getting at. I disagree that I was evasive.

When I discussed the case with Tina during our interview, I told her that the court rejected the evidence, and I explained why I think they erred in their judgment. It would be interesting to revisit the technical debate in a scientific forum moderated by statisticians. Perhaps AAAS or Am Stat Ass'n could organize a panel.

 

ANDYWILCOXSON

1:07 PM ET

March 5, 2012

Bias

I owe you an apology if you really did tell Ms. Rosenberg that your evidence was rejected. I'm going to e-mail her and see what she has to say about that, because if you were honest and you told her, and she concealed that information from her readers, then she's not a very credible journalist, is she?

I also note that para 24 doesn't exactly give your reports the good housekeeping seal of approval. The judgment isn't as definite as saying that your reports are not biased. It says, "Ball’s expert reports do not, on their face, display any signs of bias in respect of their preparation and contents."

By using the qualifier "on their face" they left open the possibility that your reports are biased. What they're doing in para 24 is giving you the benefit of the doubt. Even though they felt that you had testified in an evasive manner and they had doubts about your objectivity as an expert, they nonetheless evaluated your evidence on a substantive basis, rather than dismissing it outright, because they did not detect signs of bias in your work itself -- at least not on its face.

The reason why they rejected your evidence was because they felt that Dr. Fruits had successfully undermined it. His report is exhibit 3D00893 in the Milutinovic trial, I encourage anyone reading this to go to icr.icty.org and download a copy of it. I'm not a scientist, but it appears to me that his critique of your work goes much deeper than you've indicated here.

Your remark that "Dr Fruits managed to confuse the judges on this point, but of course that was his job as a defense expert, after all." is telling. How did you view your job as a prosecution expert? Was it your job to confuse the judges in order to secure a conviction for the prosecutor?

 

WONDERING14

2:36 AM ET

March 4, 2012

Keep improving

Dr. Ball's efforts to make a miasma solid are laudable, but they face obstacles similar to those confronting climate change enthusiasts. One is their very enthusiasm, which unavoidably affects the soundness of their work. Another obstacle is that the chaos of mass killing, like the chaos of climate, may be known but may not be eliminated. Another is that multiple databases may have borrowed from one another, incurring ruinous dependency. Another enduring problem is sheer methodology and modeling, which are almost always inherently manipulable for the purposes mentioned in the article, and others. Additionally, I second MARTIAL's comments.

Surely Ball's work is much better than the almost always misinforming estimates of journalists and human rights organizations, which cheapen their efforts so. The mere involvement of a dedicated, good statistician, experienced in the field, adds much credibility to that which was previously preposterous. May the next generation of believability arrive soon.

 

MARTIAL

9:49 AM ET

March 4, 2012

Perhaps you should look at matters differently

Journalists do not appear in courtrooms. All they do is report information to the public that may or may not be accurate. Sometimes, as in the present case, they do not completely research the subject. These are common flaws. Incomplete research is characteristic of all of us, journalists or otherwise, because there's always stuff out there we did not read.

In the case of statisticians who blow it, often they believe in what they are doing. They are simply wrong. Again, this is characteristic. Which of us can say we have never been wrong? Sometimes, of course, as with the Iranian election pair whose work appeared in the Washington Post, that is impossible to believe. The reason is that they used different means of assessing the election results in Iran than they did in their prior study, rendering reasonable the inference of post-diction (e.g., looking for the result in the data that would yield P < 0.05).

Failure in a courtroom is far worse than failure in a journal, magazine, or newspaper. Prof. Ball's error was infinitely more serious than Ms. Rosenberg's mistake. The reason to think his failings were not simply the being wrong that afflicts everyone is that he appears to have used the same flawed method in subsequent cases. Unless he admits he was wrong & changes his methods of analysis, he should never again be listened to by anyone.

 

JPMAHER

10:11 AM ET

March 4, 2012

Ms R's "mistake"

Her "mistake", like Dobbs's "mistake" was to recite fraudulent claims and stamp them as true.

 

ANDYWILCOXSON

1:43 PM ET

March 4, 2012

Ms. Rosenberg Can Redeem Herself By Issuing A Retraction

@ Martial - I do attribute Ms. Rosenberg's error to incomplete research. It certainly appears as though Patrick Ball misled her when she interviewed him. It appears that he held up his work in Kosovo as a shining example of his expertise and, in the hopes that she would write a favorable article about him, neglected to tell her that his evidence was rejected by the Tribunal because his data and methodology were found to be unsound and the judges doubted his objectivity.

He told her point blank, "People who want to dismiss us say we're just making s**t up. If they're ever right when they say that, we're in big trouble." Clearly, he led her to believe that his work was sound and that it had withstood scrutiny when, as we can see from the judgment and the evidence in the Kosovo 6 trial, it hasn't.

The fact that he would mislead her, and do it in such a brazen way, says something disturbing about his mental state. The fact that he could look her in the eye and make a cocky remark like that when he knew that his work on Kosovo had been discredited and rejected by the court -- it says a lot about the kind of person he is.

I think he fooled her and took advantage of her in a cynical effort to enhance his own reputation. Ms. Rosenberg's error can be forgiven if she goes on record and sets the record straight. If she publishes a retraction saying that he misled her, then she can be forgiven. But if she doesn't do that, if she chooses to let this deceptive article stand, then she's as guilty as he is. A person like Patrick Ball can't be trusted to work in the field that he's in. He needs to be exposed.

 

MARTIAL

5:04 PM ET

March 4, 2012

The proper procedure for journalists is not

my realm of expertise. You may be right; on the other hand, Foreign Policy & journalists in general may consider the printing of uncontested contradictory argument sufficient. Comments sections are vital, the nitpicking, petulance, & dishonesty of commentators notwithstanding; they help ensure journalistic integrity.

Online publications that most loudly bruit "Zionist control" are often the least likely to permit comments, or the most likely to police them. Consider Counterpunch (http://www.counterpunch.org/2012/02/22/what-really-happened-in-the-yom-kippur-war/), which regularly posts Mr. Israel Shamir, in reality the Norwegian neo-Nazi Mr. Joran Jermas (http://my.telegraph.co.uk/updates/tag/joran-jermas/). Neither Counterpunch nor Mr. Jermas' website (http://www.israelshamir.net/English/Putin.htm) permits comments.

 

MARTIAL

5:16 PM ET

March 4, 2012

The Big Lie

is not a Nazi concept as most would conceive of it; rather, it is a tactic the Nazis alleged that Jews used to control the planet. Prof. Bytwerk, as usual, has words concerning this that cannot be surpassed:

Hitler and the “Big Lie”

The false Goebbels quotation above is actually a take-off on Hitler's familiar statement in Mein Kampf, which is often misunderstood. Hitler stated:

“In this they [the Jews] proceeded on the sound principle that the magnitude of a lie always contains a certain factor of credibility, since the great masses of the people in the very bottom of their hearts tend to be corrupted rather than consciously and purposely evil, and that, therefore, in view of the primitive simplicity of their minds, they more easily fall victim to a big lie than to a little one, since they themselves lie in little things, but would be ashamed of lies that were too big. Such a falsehood will never enter their heads, and they will not be able to believe in the possibility of such monstrous effrontery and infamous misrepresentation in others.…” (p. 231 of the Manheim translation)

Hitler is accusing the Jews of the Vienna press of this strategy. It is often taken as evidence that Hitler advocated the “Big Lie.” He is, in fact, accusing his enemies of lying.

Now, Hitler was entirely willing to lie — but in public he insisted that he and his propaganda were truthful.

http://bytwerk.com/gpa/falsenaziquotations.htm

He is a wonderful man, Prof. Bytwerk. A point that I always make because of people like him is that no German or Japanese born after say, 1935, is responsible in the slightest for fascist atrocity. I once begged him to translate all the Nazi propaganda films, but he said that had already been done. Wish they were online!

 

MARTIAL

11:47 PM ET

March 4, 2012

I now lack direct proof,

but it's definitely there, fully documented. You can find it if you look. Appearances can be quite deceiving; some women reported feelings of great comfort & safety when being escorted to cars by Mr. Ted Bundy. My slightly more generous approach to Prof. Ball than a colleague here may stem from abandoning Wilcoxon's T tests for the bootstrap, yielding greater sensitivity & fewer assumptions.
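The percentile bootstrap alluded to here can be sketched in a few lines of Python. The per-village death counts are invented purely for illustration, and `bootstrap_ci` is a hypothetical helper, not anyone's published code:

```python
import random

def bootstrap_ci(sample, stat, n_boot=10_000, alpha=0.05, seed=42):
    """Percentile-bootstrap confidence interval for an arbitrary statistic."""
    rng = random.Random(seed)
    n = len(sample)
    # Resample with replacement, compute the statistic each time, sort.
    boots = sorted(stat([rng.choice(sample) for _ in range(n)])
                   for _ in range(n_boot))
    lo = boots[int(alpha / 2 * n_boot)]
    hi = boots[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Invented data: reported deaths per village (illustrative numbers only).
data = [3, 0, 7, 2, 1, 0, 12, 4, 2, 5, 1, 8]
mean = lambda xs: sum(xs) / len(xs)
low, high = bootstrap_ci(data, mean)
print(f"mean = {mean(data):.2f}, 95% CI ~ ({low:.2f}, {high:.2f})")
```

Unlike a Wilcoxon test, nothing here assumes a symmetric distribution; the interval is read straight off the resampling distribution.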

 

JPMAHER

10:06 AM ET

March 4, 2012

rhetoric "surely" - a pious wish

"Surely Ball's work is much better than etc ..." -- Why should anyone be sure? Writers who preface a statement with the word "surely" really mean "I want to believe it and I want you to believe it, but don't bother me with any demand for evidence".

 

XFUNC_CARTER

7:24 PM ET

March 4, 2012

A statistical view of war is foolish.

There are two ways to view war (in general):

Statistically or historically.

The statistical approach is recent, but it provides the illusion of understanding without real understanding.

As a case in point: the United States tried, in Vietnam, to win that war using quantitative, statistical methods. These were actually implemented in the field, in two particular measures: the Bodycount policy and the Hamlet Evaluation System.

The Bodycount was the idea that if you killed enough of them, you would win.

Anyway, we all know what happened in Vietnam.

Numbers often provide the illusion of truth.

Contrast this with Malaya: similar to Vietnam, except the British won.

What approach did they use? White ops. That is, enforce the law. Aggressively pursue the insurgents. Don't use black ops (things like assassinations, etc.). In short, this was an approach based in good faith -- that high notions like law and order will prevail.

Numbers will not set us free. Only high understandings will. Unless you have high order understandings, like the sanctity of human life, you will just continue to fool yourself with numbers.

 

JPMAHER

11:19 PM ET

March 4, 2012

rhetorical bluster & Just the place for a snark...

Prime example of rhetorical paintballs is this from Dobbs: "Milosevic in his four wars had killed some 125,000 people, more than anyone in Europe since Stalin. But now the Butcher of the Balkans sat in a courtroom that looked rather like a community college classroom, with two Dutch police officers behind him and his cell waiting for him at the end of each day's session, rhetorical bluster his only available weapon against Ball's evidence." In any case Dobbs is the avatar of Lewis Carroll's Bellman -- "what I say three times is true." It wasn't Milosevic who started the war(s), but the secessionist proxies CRO, SLO, & B--H. Nor was Milosevic out for a "Greater Serbia"... That was a warm-over of Austria-Hungary's equivalent of "WMD".

 

MICHAEL DOBBS

12:55 PM ET

March 6, 2012

get your quotes right

I am not the author of the quote you attribute to me. You are confusing me with Tina Rosenberg.

 

MARTIAL

12:38 AM ET

March 5, 2012

To start with, justify a paragraph

A direct estimate of killings documented in the four data sources used in this study is 4,400, and a direct interval estimate of the total number of killings is given as 7,155 to 10,259. There is good reason to believe that these numbers do not represent an accurate count of the killings in Kosovo during this time period. These data were compiled from a collection of interview and exhumation data. Believing that all killings were documented by these sources assumes that all relevant bodies were exhumed and identified, or that all killings were witnessed and reported in the survivors' interviews, or that all killings were captured by at least one of these processes. This scenario is implausible.

_____________

By heavens, what an odd set of sentences. Let's do the same with cancer shall we?

A direct estimate of the incidence of lung cancer cases is X, from histologically proven cases in four different databases, with bootstrap 95% confidence intervals of X − A to X + A. There is good reason to believe that these numbers do not accurately represent the incidence of lung cancer in the United States during this time period. Believing that all cancers were documented by these sources assumes that all tumors were biopsied and correctly identified. This scenario is implausible.

____________

To me there is no doubt your paragraph is invalid. One may not "capture" all murders by exhumation any more than one might "capture" all cancers by biopsy. Nonetheless, we require biopsy proof before saying a cancer is present. How much more so should we require exhumation proof, ultimately, to say that murder is present.

 

MARTIAL

7:51 AM ET

March 5, 2012

Exhumation is "corpus delicti" if you will

One may use proofs, as with the Nazi numbers provided by the Nazis, to determine numbers of deaths. What one should avoid is uncorroborated accounts of refugees, who, being in grave distress, are very likely to remember with certainty that which did not occur and also to fail to remember that which was rather obvious. Again, consider your own life, with minor emergencies, not a battlefield, & see if you do not find the above to be the case.

Also, corpses exhumed are different objects than witnessed deaths. Essential would be checking to see if a different modeling procedure was appropriate for the analysis of corpse counts than of witnessed deaths.

 

PATRICKBALL

11:30 AM ET

March 5, 2012

total estimate

The point of the paragraph is to say that although only 4,400 deaths were documented by name, we believe that there were more deaths that had not (at that point) been documented. This is one of the various estimates we made of the total deaths. (We made a series of estimates using slightly different methods, and by stratifying time and space in different ways.)

I entirely agree with you that exhumations are an important way to document war deaths, and exhumations were one of the four data sources we used in the estimates. There were about 4000 exhumed cadavers when my colleagues and I were doing this research in 2000-2001. Of those, approximately 2,000 had been identified, if I recall correctly. The exhumations proved that there were at minimum 4,000 violent deaths (there have been more exhumations in Kosovo and in Serbia since I got the data in 2001, but I don't have the newer data).

The statistical work showed the larger pattern, including deaths not documented, the pattern of those deaths over time and space, and the relationship of the time-space patterns with time-space patterns of other phenomena (migration, NATO airstrikes, KLA clashes with Yugoslav forces).

Thus I think the forensic and statistical work are complementary, addressing different questions.
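The multiple-systems logic described here can be illustrated in its simplest two-list form with the Chapman-corrected capture-recapture estimator. The counts below are invented and are not the Kosovo figures:

```python
def lincoln_petersen(n1, n2, m):
    """Chapman's bias-corrected two-list capture-recapture estimate.

    n1, n2: records on each list; m: records matched on both lists.
    """
    if m == 0:
        raise ValueError("no overlap: the lists cannot calibrate each other")
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# Invented example: list A names 1,200 victims, list B names 900,
# and 400 people appear on both lists.
n_hat = lincoln_petersen(1200, 900, 400)
print(f"estimated total deaths ~ {n_hat:.0f}")
```

The estimate exceeds the 1,700 unique names actually observed, which is the sense in which the statistical work reaches deaths no list documented.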

 

MARTIAL

2:44 PM ET

March 5, 2012

Prof. Ball, Good Sir,

fish labeled by human scientists are not corpses in the ground, let alone deaths reported by refugees. No mathematical manipulation of unreliable observations achieves reliable conclusions. You render equivalent exhumed body counts & reported deaths; no statistical analysis properly assesses a chimera. All material submitted for use by a criminal trial, which yours was, they say, is forensic by definition.
Another point.
ABA “All interviews were conducted using a standardized questionnaire that allowed for a narrative description of events. . . . For the statistical purposes of the present study, all of the data were recategorized from the original database into new data structures. All data were recoded from their original formats into standard geographic classifications and date precision codes.”

HRW “All interviews were conducted to elicit open narratives of what the interviewee had seen. Standardized questionnaires were not used (HRW 2001). Despite not having used a standardized questionnaire, the interviews were rich sources of information about killings. They were coded and entered into a database. Coding for the present study was independent of the original HRW database and the statistical work presented earlier in HRW.”

OSCE “OSCE-KVM used standardized interview forms similar to those used by ABA/CEELI. The information was then entered into a database, also similar to that used by the ABA. For our study, the data were independently recoded, as we did for HRW. The semi-structured OSCE interviews were reformatted to be compatible with the format we developed for use with the ABA/CEELI data.”

Differences in data acquisition & handling render likely a person’s being declared unidentified by one system & named by another; inflating the number of unique names overestimates total deaths. This is not a statistical practice Fisher would have applauded.

 

PATRICKBALL

3:52 PM ET

March 5, 2012

unidentified deaths

I share your concern with unidentified deaths failing to match identified deaths. Therefore we excluded all unidentified deaths from the lists used for estimation; there's a section toward the end of Appendix 2 about what the patterns of unidentified deaths look like relative to the identified ones.

However, only records that had at least a first and last name were included in the matching and the estimation. There were of course many slight variations in place and date of death, as well as name spellings. We assumed that whenever the records might plausibly match, they did in fact match. This assumption probably biased downward the total count of observed deaths (because it may have counted two unique deaths only once), and biased downward the estimate of total deaths (because it increased the overlap between systems).
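The matching step described here can be sketched as follows. The names and the crude normalization rule are invented for illustration and are not the study's actual record-linkage procedure; the point is only that treating plausible variants as the same person raises the overlap between lists:

```python
import unicodedata

def normalize(name):
    """Crude illustrative normalization before matching:
    strip accents, fold case, collapse whitespace."""
    s = unicodedata.normalize("NFKD", name)
    s = "".join(c for c in s if not unicodedata.combining(c))
    return " ".join(s.lower().split())

# Invented names on two hypothetical lists.
list_a = ["Agim Krasniqi", "Besnik Gashi", "Fatmire Berisha"]
list_b = ["agim  krasniqi", "Besnik Gashi", "Valon Hoxha"]

strict = len(set(list_a) & set(list_b))            # exact string match only
lenient = len({normalize(n) for n in list_a} &
              {normalize(n) for n in list_b})      # plausible variants merged
print(strict, lenient)
```

Under the lenient rule the overlap grows, and a larger overlap lowers the capture-recapture total, which is the downward bias direction stated above.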

 

MARTIAL

7:04 PM ET

March 5, 2012

Ah but there may

still be a problem. Were dead persons A, B, C, D named in list 1 and not in lists 2, 3, or 4, even though they were seen, then there exist four falsely unique names even when unidentified persons are excluded.

Good Sir, I do not wish to cause distress, but must tell you your approach has problems. Wartime civilian corpse counts, having limited accuracy & precision, are perhaps better viewed as perfume to be sniffed than Thunderbird to be guzzled (always get bleary eyed over the stuff). At the very least, it can be said you made a diligent effort at a noble task, one you believed strongly in.

 

D-MAN

4:16 PM ET

March 10, 2012

Complicated, but not the way you think

@MARTIAL,

I think your objections are overblown. You raise some valid points, but you're confusing some things. First, what makes a dataset "garbage" is not how it was created, but attributing to it properties it doesn't have. The assumptions of MSE are, in this sense, surprisingly mild, allowing you to use data that would otherwise be considered "garbage".

It should be clear by now that MSE methods do not need unbiased or representative samples from the target population to work. What they need are just enough samples so that the joint inclusion mechanism, which includes biases and induced correlations, can be reasonably approximated and estimated. Of course, what "reasonably approximated" means is rightly debatable. However "debatable" does not mean "impossible".

There's a conceptual confusion in your discussion of unidentified deaths. MSE methods do not model deaths but reports of deaths. The key detail is how you define a "report". In the Kosovo work, to call something a "report" they required name and surname. This means that if one of the projects mentioned an unidentified death, that was treated as a non-report.

So, if "dead persons A, B, C, D are named in list 1 and not in list 2, 3, or 4, even though they were seen", there *do not* exist "four falsely unique names", as you imagine. What you have are four individual records with a pattern of being listed with name and surname (i.e., "reported") in list 1 but not in the other lists. This is data you can work with.

Under that framework, the question MSE answers *is not* "how many casualties weren't mentioned in the lists?" but "how many casualties weren't listed by name and surname in any of the lists?" A subtle but important difference.
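That "reports, not deaths" framing can be made concrete with a toy sketch. The names, list labels, and records below are all invented; the coding rule (name and surname required, unnamed mentions treated as non-reports) is the one described above:

```python
from collections import Counter

# Invented records: (name or None, labels of the lists mentioning the death).
records = [
    ("Arben Shala",  {"ABA", "HRW"}),
    ("Drita Morina", {"ABA"}),
    (None,           {"HRW"}),          # unidentified body: a non-report
    ("Luan Berisha", {"HRW", "OSCE", "EXH"}),
    ("Drita Morina", {"OSCE"}),         # same person reported by a second list
]

# Merge named records into one capture history per person,
# dropping records with no name (non-reports under the rule).
histories = {}
for name, lists in records:
    if name is None:
        continue
    histories.setdefault(name, set()).update(lists)

# Count how many people share each pattern of list inclusion;
# these pattern counts are what an MSE model is actually fit to.
patterns = Counter(frozenset(h) for h in histories.values())
for pat, count in patterns.items():
    print(sorted(pat), count)
```

The model then estimates the size of the one unobservable cell: people reported by name on none of the lists.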

 

FLEM

1:32 PM ET

March 5, 2012

Counting bodies by itself

Counting bodies by itself doesn't conclusively provide evidence on how the fatalities were killed.