FP Explainer

How Does the CIA Know If Its Intel Is Any Good?

Common sense, mostly.

On Sunday, the website Wikileaks published more than 91,000 military documents from the war in Afghanistan, among them reports alleging that Pakistan's top military intelligence service is aiding Taliban fighters. The Pakistani government has claimed that the documents are based on inaccurate field reports that neither it nor U.S. intelligence agencies are taking seriously. So how do intelligence analysts determine whether reports are credible or not?

With common sense, mostly. When they receive a piece of intelligence -- whether it comes from a phone intercept or satellite imagery -- analysts at the Central Intelligence Agency, Defense Intelligence Agency, and elsewhere subject it to more or less the same smell tests that police detectives and reporters use when trying to sort out a story: Is the information internally consistent? Is it consistent with what they're hearing from other sources?

In the case of human intelligence, the source is put to the same scrutiny: Who provided the intelligence? What's in it for them? What axe do they have to grind? (If they're a mentally unstable con man, that's a good thing to know.) And there's what you might call the Bruce Willis Rule: If something sounds too much like the plot of an action movie -- say, Osama bin Laden buying missiles from North Korea -- you probably want to get a second opinion.

Of course, the fact that vetting intelligence is largely intuitive doesn't mean that it's easy. What's simple in theory is immensely difficult in practice: Analysts are deluged with information, some of it good, much of it bad or simply irrelevant, and virtually all of it ambiguous. "It's like trying to put together a jigsaw puzzle," says former CIA officer and presidential adviser Bruce Riedel. "You maybe have 200 pieces of the puzzle. The first thing you don't know is, is this a 500-piece or 1,000-piece puzzle? And then with the 200 pieces you have, maybe half of them don't belong to this puzzle at all. They're in the wrong box. And then every hour or so, someone comes along and dumps 10 more pieces on your desk -- and nine of them aren't even part of it."

There are rules of thumb, official and otherwise. For instance, analysts err on the side of caution with any piece of threat-related intelligence; even questionable chatter about a potential terrorist attack is sent up the agency ladder, with the appropriate caveats. And in general, analysts deal in probabilities more than they do flat-out assertions; there are few absolutes in spycraft.

Still, there are plenty of ways even an expert can go wrong. For instance, when a fresh bit of information contradicts what's known already, it can be hard to distinguish outliers from genuinely new information -- analysts, like journalists, can be seduced by a novel storyline or trapped by a familiar one. Consider American intelligence agencies' assessment of Iran's nuclear program: In 2007, the intelligence community issued a National Intelligence Estimate judging that Iran had given up its nuclear weapons program several years earlier, an assessment that in retrospect may have been overstated. Did the analysts give too much credence to new information they received simply because it was new? Conversely, U.S. intelligence agencies notoriously missed the coming collapse of the Soviet Union and Mikhail Gorbachev's emergence as a reformer in part because they were caught up in their own narrative of the Soviets' power and the venality of their leaders. It turned out that Gorbachev actually meant what he said.

There's also the hazard of information taken out of context. Nothing gets thrown away in intelligence-gathering -- even data and reports that are considered false or irrelevant upon arrival are filed away in vast databases in case they later turn out to matter. But that long-shot information can easily be dusted off and used piecemeal -- willfully or accidentally -- to support wildly incorrect conclusions. In 2002, the Defense Department created the Office of Special Plans, which produced a report -- based on raw data gathered by the DIA, but not the agency's own analysis of it -- alleging that Saddam Hussein had close ties to al Qaeda and had developed weapons of mass destruction. This was news to the DIA's own analysts and those in other agencies, who had long since judged most of the intelligence that the office used in the report as spurious, and indeed no such ties or weapons were ever found.

A directive issued by the director of national intelligence in 2007 put in writing the best practices underlying good intelligence analysis, in order to avoid these types of failures in the future. Agencies are now required to ask common-sense questions about their data, and to identify and assess the credibility of the people who gave it to them. In reality, however, intelligence veterans worry less about the kind of intelligence failures that led to Iraq than the kind that led to September 11; the problem is less bad analysis than it is good analysis that doesn't get the right attention. Intelligence agencies are vast bureaucracies, and plenty of information falls through the cracks within and between them. Umar Farouk Abdulmutallab, the would-be Christmas Day bomber, nearly managed to pull off his airliner attack because the State Department reportedly failed to act on its own intelligence suggesting that Abdulmutallab had visited terrorist training camps in Yemen, and failed to sound the alarm about what it knew. Making sense of information from the field wasn't the problem; getting it to the right people was.

Thanks to Pat Lang, Paul R. Pillar of the Georgetown Center for Peace and Security Studies, and Bruce Riedel of the Brookings Institution.


FP Explainer

What Do Militaries Actually Practice During War Games?

Communications and figuring out who's good at what.

After a meeting in Seoul Tuesday, U.S. Defense Secretary Robert Gates and South Korean Defense Minister Kim Tae-Young announced that the United States and South Korea would conduct massive joint naval drills this weekend. The exercises will include the U.S. aircraft carrier George Washington as well as 20 other ships and submarines and 100 aircraft. According to the joint statement, the drills are "designed to send a clear message to North Korea that its aggressive behavior must stop, and that we are committed together to enhancing our defensive capabilities." The political message sent by exercises like these is certainly clear enough, but are they actually practicing anything?

Yes. Preparedness drills are a constant part of modern naval operations. Almost any time two U.S. naval vessels are in the same area, they will stage some type of combat simulation. On an international level, joint drills might be intended to send a political message to enemies or enhance relations with an ally, but they're also a useful opportunity to prepare for potential military action and detect weaknesses.

The biggest of these weaknesses is typically "interoperability." Differences in equipment can make it difficult for allied militaries to communicate in the heat of battle -- a problem between the U.S. Army and Navy as well -- and differing command structures can make lines of communication unclear. War games give militaries a chance to set procedures for how information is shared and orders are implemented.

Another important purpose of war games is to assign roles for a potential combat scenario. Most U.S. allies don't have the same ability to project force as the U.S. military, so they generally take the role of holding territory while the American war machine counterattacks. In the closing days of the Cold War, for instance, German troops would practice holding off a Soviet thrust into Central Europe until the arrival of NATO reinforcements.

Similarly, South Korean combat scenarios focus on the "battle of the buildup." South Korea's outnumbered troops, as well as the small U.S. combat force based in the country, would have to hold off invading North Korean forces until the U.S. backup team swept in to save the day. Recent drills have involved procedures for bringing in new troops and equipment in the event of a contingency.

The details of the upcoming war games are classified, but the focus will probably be on antisubmarine warfare. The March sinking of a South Korean warship -- most likely by a North Korean submarine -- exposed a weakness in South Korean antisubmarine tactics, and more practice is clearly needed. North Korean subs, because of their small size and relatively simple design, are difficult to detect with sonar.

In an antisubmarine drill, a friendly sub posing as an enemy breaks off from the group and attempts to approach without being detected. Other scenarios include surface ship-to-surface ship combat and amphibious landings.

With approximately 10 such drills planned by the United States and South Korea over the next few months, they should have plenty of opportunities to practice.

Thanks to John Arquilla, professor of defense analysis at the U.S. Naval Postgraduate School, and retired U.S. Navy commander Bryan McGrath.
