National Security

Fix Bayonets!

If the Pentagon cuts R&D, the U.S. military will take a huge step backward.

During the presidential campaign, the importance of advances in military technology made a surprisingly high-profile appearance, when President Obama made his now-famous quip that, yes, the United States has fewer ships than it did in 1916, but it also has "fewer horses and bayonets." Beneath the zinger was an important point: quality can matter more than quantity, capabilities can matter more than numbers. But the reason the United States has been able to go from horses and bayonets to, as Obama put it, "these ships that go underwater, nuclear submarines" is that the Pentagon has long invested a percentage of its budget in basic science and technology. That legacy is now under threat amid budget cuts and the pressure to invest in programs that have immediate economic payoff -- an approach that could have dire consequences for military innovation. As the president enters budget negotiations, here are five things he needs to remember if he doesn't want to leave a legacy of bayonets.

1) Protect the Defense Advanced Research Projects Agency (and remember that DARPA is an agency, not a slogan). DARPA is the Pentagon's far-forward-looking research and development arm, currently involved in everything from flying cars to a secretive "Plan X" to help the Pentagon wage cyberwar. Unlike the military services, where research is often focused on specific requirements, DARPA's mandate is to look far out into the future, investing in science and technology that may not pay off for years. During the presidential campaign, the agency proved popular with both candidates: Mitt Romney's energy plan, called "Believe in America," even made direct reference to the agency, saying the DARPA model, which provides "long-term, non-political sources of funding for a wide variety of competing, early-stage technologies," should be applied to energy. Obama has also heaped praise on DARPA and its model of innovation. But talk is cheap, and DARPA's large discretionary budget makes it an attractive target for cuts. DARPA's continuing success relies on protecting its funding and independence. While the administration was praised for only modestly reducing its 2013 budget request for DARPA -- to $2.8 billion -- that number is still substantially less than the request of four years earlier, which stood at $3.3 billion.

2) Set long-term research goals (and remember that long-term doesn't mean right now). The Pentagon's science and technology cadre thrives on ambitious goals, particularly those set by the president, whether in the space program or computing. But those goals must be long-term, and by their nature, they will not always have an immediate political payoff. Obama appeared last year at Carnegie Mellon University's National Robotics Engineering Center in Pittsburgh, praising a DARPA project that seeks to crowd-source manufacturing of military vehicles, a clear hat tip to his message about job creation: "As futuristic and, let's face it, as cool as some of this stuff is, as much as we are planning for America's future, this partnership is about new, cutting-edge ideas to create new jobs, spark new breakthroughs, reinvigorate American manufacturing today. Right now." Of course, "right now" is precisely not what long-term research is usually about: the idea is that the country invest in basic research today in the hopes of reaping technological and economic payoffs that may be years away. The president should set clear goals in key military technology areas, such as cybersecurity, aviation, and space -- and stick to them.

3) Beware white elephants (they will eat you out of house and home). Research and development funding too often falls victim to large procurement programs. As major weapons balloon in cost, the easiest way to cover the difference is to steal money from the research budgets (for example, the Army's stealth reconnaissance helicopter, the Comanche, was blamed for tying up the service's rotorcraft research and development budget for many years before it was finally canceled). The Joint Strike Fighter, with its trillion-dollar price tag, is fast becoming the white elephant to dwarf all other white elephants, and will make it difficult, if not impossible, for the services to invest in other aircraft programs. That means, in the case of aviation, investments in future capabilities are likely to go nowhere while the Pentagon struggles to cover the cost of its current procurement. The president, who has spoken in support of capabilities over numbers, needs to ensure the Pentagon reviews its largest weapons programs and actually cancels those that have chronically underperformed.

4) Sometimes picking "winners" works (but only sometimes). Both Obama and Romney agreed on the need to fund basic science and technology, but Romney argued that he would take a different approach. Romney focused on Solyndra, the failed solar company, as an example of the administration's failed strategy of "picking winners." Yet in the defense and intelligence realm, no one seems to complain about precisely this strategy, which is employed by In-Q-Tel, the venture capital arm of the CIA. In-Q-Tel is essentially in the business of picking winners: it invests in early-stage companies that have technology it believes will be of use to the intelligence community. Though that model has yielded an occasional loser, like its investment in a Buck Rogers-style lightning weapon, it has also resulted in some much-heralded successes, most notably its investment in Keyhole, the company that developed the technology that became Google Earth, which has widely benefited the intelligence community, as well as the general public. Picking winners can work in this case because, unlike in the Solyndra case, the organization doing the picking, In-Q-Tel, has a direct connection to the consumer, the intelligence community. The question that the president will face is which models work for which problems: just as the "DARPA model" may not work for other agencies (the Department of Homeland Security's "ARPA" has been a notable disappointment), investing in companies typically works best when there's an intimate understanding of -- and even influence over -- what the customer wants. That's rarely the case outside of the defense and intelligence business.

5) Invest now in science and technology, or pay the price later. Back in 2001, the Pentagon pledged to keep the science and technology investment at 3 percent of its total budget, a benchmark recommended by the Quadrennial Defense Review. That number was tossed out the door in the years following 9/11, when the ballooning Pentagon budget would have created, at least in the eyes of some defense officials, an unduly large hike in spending for research and development. The Pentagon's spending doubled over the course of that decade to about $800 billion a year. Not so for science and technology, which reached a high point of $14 billion in 2004 and has since dropped to $10 billion -- around the same as in 2001. There is now no clear benchmark for science and technology spending. Former Defense Secretary Robert Gates promised 2 percent real growth each year in the Pentagon's basic science budget (though not for applied research), but it's unclear whether that policy is still in place. Worse, sequestration will hit research and development, just as it hits other parts of the budget, triggering across-the-board cuts that won't allow managers to protect key efforts.



Why the World Can't Have a Nate Silver

The quants are riding high after Team Data crushed Team Gut in the U.S. election forecasts. But predicting the Electoral College vote is child's play next to some of these hard targets.

After a presidential election that Nate Silver and a smattering of other statistical modelers forecast with remarkable accuracy, quantitative enthusiasts -- quants -- are talking some hard-earned smack. "This is about the triumph of machines and software over gut instinct," Dan Lyons exulted at the tech blog ReadWrite. "The age of voodoo is over. The era of talking about something as a 'dark art' is done. In a world with big computers and big data, there are no dark arts."

If only. As a practicing forecaster who prefers algorithms to expert judgment, I'm thrilled to see statistical forecasting so publicly vindicated, but I'd also like to engage in a bit of expectations management about how quickly these methods might transform international politics. As sci-fi writer William Gibson famously said, "The future is already here -- it's just not very evenly distributed." As imperfect as they still are, statistical forecasts of U.S. elections are on the leading edge of that distribution. Meanwhile, most things foreign policymakers care about are closer to the far edge.

To see why, it's important to understand that Silver and his ilk didn't succeed simply by using "math" instead of "gut." Yes, the method matters, but statistics isn't alchemy. To build forecasting models that work well, you need reliable measures of things that are usefully predictive. Even tougher is that you need those measures not just for today, but also for a long- and broad-enough swath of history to be able to test your beliefs about what predicts what against some hard evidence before diving into prognostication.
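The discipline described above -- testing beliefs about what predicts what against the historical record before prognosticating -- can be sketched in miniature. All numbers and the prediction rule below are hypothetical, standing in for a real fitted model:

```python
# Minimal backtest sketch (all numbers hypothetical): before trusting a
# model for prognostication, check its predictions against past elections.

# Hypothetical past elections: (incumbent's final poll lead in points,
# whether the incumbent actually won).
history = [
    (8.0, True),
    (-0.5, False),
    (2.0, True),
    (-7.0, False),
    (-1.0, True),   # an upset: incumbent trailed in the polls but won
]

def predict(poll_lead):
    # Crude rule standing in for a fitted statistical model:
    # forecast an incumbent win iff the poll average favors the incumbent.
    return poll_lead > 0

# Score the rule against history it claims to explain.
hits = sum(predict(lead) == won for lead, won in history)
print(f"Backtest hit rate: {hits} of {len(history)}")  # 4 of 5
```

The point is not the toy rule itself but the workflow: without a long-enough record of both the predictor (poll leads) and the outcome (who won), there is nothing to score the rule against, and "math" is no better than gut.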

Routine elections in rich countries like the United States are some of the softest targets in political forecasting. Rules are transparent; high-quality data, including surveys of would-be voters, are often available; and the connection between those data and the outcome of interest is fairly straightforward.

Even in these relatively easy cases, though, forecasting can still be challenging. In 2010, Silver -- the man the Economist called "the finest soothsayer this side of Nostradamus" -- tried to predict the outcome of parliamentary elections in Britain and missed pretty badly.

Of course, elections in obviously authoritarian regimes are even easier to forecast. Until Mikhail Gorbachev rolled around, no one needed a model to predict who was going to win election to the Supreme Soviet of the USSR. The task is much tougher in competitive authoritarian regimes, where subtler forms of coercion tilt the field in favor of one party, but don't quite guarantee a specific outcome.

Take October's legislative election in Georgia, where the Georgian Dream coalition upset President Mikheil Saakashvili's ruling United National Movement after late opinion polls appeared to show a solid lead for the incumbents. As Mark Mullen, the chairman of Transparency International Georgia, pointed out, what simple readings of those pre-election polls overlooked was the large share of respondents -- a whopping 46 percent -- who refused to pick a favorite. According to Mullen, that refusal was probably driven by fear of "taking risks that could have put [respondents] on the wrong side of the authorities." In an atmosphere of fraud or intimidation, it is a lot harder to make accurate forecasts, even in the rare cases for which we have professional polling data.

When it comes to predicting major political crises like wars, coups, and popular uprisings, there are many plausible predictors for which we don't have any data at all, and much of what we do have is too sparse or too noisy to incorporate into carefully designed forecasting models. In a perfect world, forecasters would routinely receive survey data that would shed light on the sentiments and intentions of the people who might engage in these activities. In the real world, it's tough to get honest answers to questions about people's willingness to participate in extralegal activities like protests or rebellion -- and that's assuming they could even be reached in the first place.

Absent direct measures of interests and intentions, we're forced to rely on measures of structural conditions that might shape political behavior. This is what some forecasters of presidential elections do, using things like incumbency, job growth, and changes in income to generate predictions months ahead of the vote. These kinds of models perform pretty well, but the forecasts they produce are typically less accurate than their poll-averaging counterparts.

The same logic holds in international affairs. Pretty much every theory of domestic political instability starts from the assumption that, other things being equal, poorer countries are more susceptible to crisis than wealthier ones. Simple, right? Just toss per capita GDP into your algorithm and move on to the next predictor.

Not so fast. As it happens, GDP estimates are produced by government agencies whose data-making capacity is directly related to the thing they're trying to measure. Some countries, including Cuba and North Korea, don't even report national economic statistics to the international bodies that collect them. And that's close to the best-case scenario. Reliable measures of many other oft-mentioned risk factors, like unemployment and income inequality, were simply unavailable for almost all countries until very recently, and coverage is still largely confined to richer parts of the world.

These gaping holes in the historical record don't make it impossible to generate useful statistical forecasts of international affairs. They do mean, however, that the forecasts we can make are much less accurate than the ones the poll-averaging modelers can produce for U.S. elections.

For rare events like coups or outbreaks of civil war -- in most years, only a few of these events will occur worldwide -- it's easy to be right almost all the time by saying nothing will happen anywhere, but that's also not particularly useful. The harder task is identifying where and when the occasional exceptions will occur without crying wolf too often.
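The base-rate trap described above is easy to make concrete. With hypothetical but realistically scaled numbers, the do-nothing forecast scores impressively while warning of nothing:

```python
# Toy illustration (hypothetical numbers): with rare events, the trivial
# forecast "nothing will happen anywhere" scores high accuracy while
# providing no useful warning at all.

# Suppose 3 coups actually occur among 160 countries in a given year.
n_countries = 160
n_coups = 3

# The trivial forecaster predicts "no coup" for every country.
correct = n_countries - n_coups
accuracy = correct / n_countries

print(f"Accuracy of 'nothing happens': {accuracy:.1%}")   # 98.1%
print(f"Coups correctly anticipated: 0 of {n_coups}")
```

This is why accuracy alone is a poor yardstick for rare-event forecasting; what matters is flagging the handful of true positives without drowning analysts in false alarms.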

This problem bears some resemblance to forecasting U.S. presidential elections, in which most of the 50 states dependably vote Democrat or Republican; the hard part is predicting the dozen or so swing states. In international politics, there are many cases that seem reliably "immune" to certain crises, and there's often also a small but self-evident set of usual suspects. It's the small but critical set of cases in between those two extremes that make us work to earn our paychecks.

Again, though, difficult does not mean impossible. As Pennsylvania State University political scientist Philip Schrodt has pointed out, well-designed models have achieved a respectable level of accuracy on a range of forecasting problems, including outbreaks of civil war and mass atrocities and the occurrence of coups d'état. Still, these models usually aren't as precise as we'd like. For every high-risk case that suffers a crisis, there is usually at least a handful of them that don't, and occasionally a supposedly low-risk case just plain surprises us.

Data gleaned from the deluge of information now pouring over the Internet may soon help fill some of these gaps, but we're not there yet. In the meantime, we must create forecasts with the data we have, not the data we want. It's great that statistical forecasters won wider respect for their methods by nailing the outcome of this year's U.S. presidential election. But it's important for people to appreciate that not every forecasting problem can be solved by sprinkling it with math and silicon.