National Security

Evil in a Haystack

How do you find a terrorist hidden in millions of gigabytes of metadata?

Over the last week, critics and defenders of the National Security Agency have heatedly debated the merits of metadata -- information about the phone activity of millions of Americans that was given to the government via a secret court order.

The information collected includes records of every call placed on the Verizon communications network (and, it appears, on every other U.S. phone carrier): times, dates, call lengths, and the phone numbers of the participants, but not the names associated with the accounts.

For some, the collection of these data represents a grave violation of the privacy of American citizens. For others, the privacy issue is negligible as long as the collection helps keep us safe from terrorism.

There are indeed privacy issues at play here, but they aren't necessarily the obvious ones. In order to put the most important questions into context, consider the following illustration of a metadata analysis using sample data derived from a real social network. The sample data isn't derived from telephone records, but it's close enough to give a sense of the analysis challenges and privacy issues in play.

While this example is relevant to what happens behind the NSA's closed doors, it is not in any way intended to be a literal or accurate portrayal. Every effort was made to keep this example close to reality, but a wide range of hypotheticals and classified procedures ensures the reality is somewhat different.

We start with a classic scenario. U.S. intelligence officials have captured an al Qaeda operative and obtained the phone number of an al Qaeda fundraiser in Yemen.

You are an analyst for a fictionalized version of the NSA, and you have been authorized to search through metadata in order to expose the fundraiser's network, armed with only a single phone number as a starting point.

The first step is refreshingly simple: You type the fundraiser's phone number into the metadata analysis software and click OK.

In our example data, the result is a list of 79 phone numbers that were involved in an incoming or outgoing call with the fundraiser's phone within the last 30 days. The fundraiser is a covert operator and this phone is dedicated to covert activities, so almost anyone who calls the number is a high-value target right out of the gate.

Using the metadata, we can weight each phone number according to the number of calls it was involved in, the lengths of the calls, the location of the other participant, and the time of day the call was placed. Your NSA training manual claims these qualities help indicate the threat level of each participant. Your workstation renders these data as a graph. Each dot represents a phone number, and the size of the dot is bigger when the number scores higher on the "threat" calculus.
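For readers who want to see the mechanics, here is a minimal sketch of how such a weighting might work. The call-record fields, the weights, and the "suspicious" criteria are all invented for illustration; the actual scoring formula is classified.

```python
# Hypothetical contact scoring -- fields, weights, and criteria are
# invented for illustration; the real formula is classified.
from collections import defaultdict

def threat_scores(call_records, suspicious_hours=range(0, 5)):
    """Score each contact by call count, duration, time, and location."""
    scores = defaultdict(float)
    for call in call_records:
        score = 1.0                           # every call counts a little
        score += call["duration_sec"] / 600   # longer calls weigh more
        if call["hour"] in suspicious_hours:  # odd-hour calls weigh more
            score += 2.0
        if call["location"] in {"Yemen", "Pakistan"}:  # example regions
            score += 3.0
        scores[call["other_number"]] += score
    return dict(scores)

calls = [
    {"other_number": "+967-1-5550101", "duration_sec": 840,
     "hour": 2, "location": "Yemen"},
    {"other_number": "+1-612-5550199", "duration_sec": 60,
     "hour": 14, "location": "United States"},
]
print(threat_scores(calls))
```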

This is already a significant intelligence windfall, and you've barely been at this for five minutes. But you can go back to the metadata and query which of these 79 people have been talking to each other in addition to talking to the fundraiser.

Using a common mathematical calculation, each phone number can be weighted based on how it provides links to other numbers in the network (the math is similar to the formula Google uses to rank pages). High-scoring accounts are almost always extremely significant to a social network (although that's not the same thing as being an important terrorist).
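A rough sketch of that weighting, using PageRank (the Google formula the comparison above refers to) as a stand-in for whatever centrality measure is actually used:

```python
# PageRank over a toy call graph, as a stand-in for the real
# network-position scoring.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("fundraiser", "A"), ("fundraiser", "B"), ("fundraiser", "C"),
    ("A", "B"), ("B", "C"),  # contacts who also call each other
    ("C", "D"),              # D never calls the fundraiser directly
])

ranks = nx.pagerank(G, alpha=0.85)
for number, score in sorted(ranks.items(), key=lambda kv: -kv[1]):
    print(f"{number}: {score:.3f}")
```

High scores flag numbers that sit at the crossroads of the network, which is precisely the caveat above: structurally central, not necessarily dangerous.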

Your search reveals that many of these people are talking to each other and not just to the fundraiser. That suggests they may be coordinating their activities. It might suggest that their conversations pertain to al Qaeda business, especially if you factor in the criteria from the first graph (the graph above reflects network position only).

What if you looked at all of the incoming and outgoing calls for all 79 of the phone numbers you have examined so far? This is where the rubber hits the Big Data road.

The reason we differentiate Big Data from plain old data is that it exposes insights you can't extract simply by reading and understanding the information itself, in this case a very long list of phone calls. When you collect a lot of data, you can do mathematical analysis to highlight important relationships that would be invisible to manual inspection.

You add in the new numbers, now two degrees of separation from the original fundraiser, including all the calls made and received by each of the 79 phone numbers on the list. Using the sample data, this results in a new network of 47,923 phone numbers, 67,535 relationships, and hundreds of thousands of calls, each with lengths, times, and locations.
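In graph terms, this is a two-hop "ego network" expansion around the fundraiser. A toy sketch, with a handful of numbers standing in for the 47,923:

```python
# Two-hop expansion around the seed number; the graph is a toy
# stand-in for the 47,923-number network in the example.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("fundraiser", "A"), ("fundraiser", "B"),
    ("A", "X1"), ("A", "X2"), ("B", "X2"), ("B", "X3"),
    ("X3", "Y1"),  # three hops out -- excluded from this search
])

two_hop = nx.ego_graph(G, "fundraiser", radius=2)
print(two_hop.number_of_nodes(), "numbers,",
      two_hop.number_of_edges(), "relationships")
```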

You've been hired by the NSA, so odds are you're pretty good at math. But the software you're using doesn't necessarily require that you understand the principles. You click OK and it gives you results based on mathematical formulas that have been described to you in very general terms.

In this case, it gives you a set of scores to rank the importance of each account. These scores suggest you can cut the size of the dataset to between 1,200 and 22,500 phone numbers that are statistically significant.

This range is wide, but it's not arbitrary. In this particular case, the set of 22,500 numbers is most mathematically advantageous. You can cut the number of accounts by more than half, but only reduce the scores for importance by about 7.5 percent. That is the best bargain you can get with this particular dataset. (This is an overly simplified explanation of what happens in this transaction, but it roughly conveys the point.)

The more you cut, the worse the bargain gets. If you reduce the set to 1,200 phone numbers, you've cut the number of accounts by 97.5 percent, but you've cut the sum total importance of those accounts by 93 percent. You're still getting a discount, but it's not nearly as attractive.
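The shape of that bargain is easy to demonstrate. The sketch below ranks a synthetic, heavy-tailed set of importance scores (a placeholder for the real ones, which tend to follow similarly skewed distributions) and shows how much total importance survives each cutoff:

```python
# Cut-versus-retention tradeoff on synthetic, heavy-tailed scores.
import random

random.seed(1)
scores = sorted((random.paretovariate(1.2) for _ in range(47923)),
                reverse=True)  # a placeholder for the real scores
total = sum(scores)

for keep in (22500, 4500, 1200):
    retained = sum(scores[:keep]) / total
    print(f"keep top {keep:>6}: retain {retained:.1%} of importance")
```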

And no matter how you slice it, we're still talking about a lot of accounts. So you select some additional analysis options from a menu and click OK. The software quickly identifies a number of different segments in the data, which it illustrates using colorful visual graphs.


The graph on top shows groupings of phone numbers based on who calls whom most frequently, separating numbers based on their degrees of separation from each other. (Although all the numbers are two degrees of separation from the fundraiser, that doesn't mean they're all two degrees from each other.) These are large groupings, ranging from hundreds of accounts to thousands. The odds are pretty good that if you examined the content of each group individually, you would find some kind of thematic connection, possibly based around where the people who own the phone numbers live or what organizations they support. But until you look at the names of the people, where they live, and maybe the content of calls, you won't know exactly what those themes are.
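One standard way to produce groupings like these is modularity-based community detection, which partitions a graph by who calls whom most often. The algorithm choice here is an assumption; this is a sketch on a toy graph:

```python
# Community detection as a plausible stand-in for the color groupings.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.Graph()
G.add_edges_from([
    ("A", "B"), ("B", "C"), ("A", "C"),  # one tight calling circle
    ("D", "E"), ("E", "F"), ("D", "F"),  # another
    ("C", "D"),                          # a single bridging call
])

for i, group in enumerate(greedy_modularity_communities(G)):
    print(f"group {i}: {sorted(group)}")
```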

The lower graph uses colorful dots to highlight "cliques," which are smaller groupings ranging from a handful to a couple dozen accounts each, fewer than 400 accounts in total. These are the most concentrated and active relationships -- very small groups that likely share some kind of direct relationship -- and you can tell (to a certain extent) which of these cliques are most connected to the original fundraiser.
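Clique-finding is a textbook graph operation: a clique is a set of numbers in which every member has called every other member. A minimal sketch, filtered to cliques of three or more:

```python
# Enumerate maximal cliques; an analyst would filter by size.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("A", "B"), ("B", "C"), ("A", "C"),  # a three-member clique
    ("C", "D"),                          # D is attached but not inside
])

for clique in nx.find_cliques(G):
    if len(clique) >= 3:
        print(sorted(clique))
```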

You have access to half a dozen variations on this kind of analysis, each yielding different lists of potential targets, some very small and others quite large.

You could continue to widen the net -- for instance, traveling another degree of separation from the fundraiser -- but then you'd be analyzing millions of accounts and facing ever more challenging complexities. So you call a halt.

It's time to decide.

Which of these accounts should be associated with names and addresses and other data? Which accounts merit additional investigation and more intrusive scrutiny? (This might include cross-referencing the numbers against other databases, referring them to the FBI or CIA for additional investigation, or asking a supervisor to initiate the process of wiretapping the phones and listening to what the subscriber is talking about.)

You can recommend one of the following lists of numbers for additional investigation or data collection:

1. The 79 phone numbers that called or were called by the original number;

2. The 24 most important members of the 79-number set;

3. The 47,923 phone numbers that are two degrees of separation from the original;

4. The 1,200 phone numbers (out of 47,923) with the top scores for importance (but representing only 7 percent of the sum total of all importance scores in the network);

5. The 4,500 top-scoring phone numbers, representing 21.5 percent of the total;

6. The 22,500 top-scoring phone numbers, representing 92.5 percent of the total;

7. One of the color groups identified by your analysis, which range in size from hundreds to thousands of accounts (but which color to choose?);

8. One of the cliques, which limits the set to just a few hundred accounts showing high levels of activity, but incurs the risk of missing an al Qaeda operative or cell that deliberately keeps communication to a minimum.

What will you do?

You've already "touched" tens of thousands of customer records, including many belonging to American citizens. Most of those phone records have been used only to perform the necessary math to identify smaller groups that will receive more intrusive scrutiny. So far, you haven't even looked at an individual phone number.

But phone numbers are extremely structured data, if you do choose to look. The list of 47,923 includes thousands of phone numbers for accounts based in the United States. Thanks to the area codes, exchanges, and cell phone location metadata, you can easily click a button and get a list showing which towns are associated with the people in the whole set or any of the smaller sets.
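A sketch of just how structured those numbers are. The tiny area-code table is illustrative, not a real dataset:

```python
# Map a North American number to a region by area code.
AREA_CODES = {"612": "Minneapolis, MN", "212": "New York, NY"}

def locate(number):
    """Return a town for a +1 number, based on its area code."""
    digits = "".join(ch for ch in number if ch.isdigit())
    if digits.startswith("1") and len(digits) == 11:
        return AREA_CODES.get(digits[1:4], "unknown U.S. region")
    return "non-U.S. number"

print(locate("+1-612-5550199"))  # -> Minneapolis, MN
```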

The NSA says it has "minimization" procedures to prevent unnecessary intrusions on the privacy of American citizens, presumably by blocking the analyst's access to information on U.S. phone numbers to a greater or lesser degree. But if the list reveals a dozen well-connected phone numbers based in, say, Minneapolis, isn't that exactly the kind of thing you're supposed to detect? When does the relevance of a U.S. account outweigh privacy concerns? When it's part of the 79 original accounts, or the 22,500 most mathematically relevant?

This leads to another critical question: How much should you trust this math? You have access to multiple types of analysis; each one has strengths and weaknesses. Which ones are a good fit for this data? Are any of them?

Network analysis has proven reliability in discovering nodes that are important... to the structure of networks. But that's not necessarily the same thing as being an important or dangerous terrorist. The wider you cast the net, the greater the chance you will find yourself analyzing a social network instead of a terrorist network.

One of the most important parts of your analysis uses the duration and time of day of a call in an effort to determine which calls are more likely linked to terrorist operations. Do these criteria reflect historical trends or are they constantly updated? More importantly, have they been tested for accuracy?

Is there any way to conduct a credible test other than by saying "privacy be damned" and collecting the call content of all the people in a large sample network so you can compare the actual content of the network to your predictions? Can you trust this kind of analysis if it isn't periodically tested?

There are no clear objective answers to most of these questions. But there are factors that influence how the government chooses to answer them.

For one thing, U.S. policies are still informed by the idea that all terrorist attacks should be interdicted. A frequently expressed corollary to that premise states that, while tradeoffs against civil liberties might be bad in the abstract, those issues are meaningless when faced with a ticking time bomb.

But we don't know how many "imminent" terrorist attacks have been prevented by these techniques. Does anyone act on your analysis in real time?

Privacy aside, it's also important to keep your data focused and avoid bloat. When you start off with one seed account (the fundraiser), it's possible that investigating fewer data -- in this case the 79 accounts that contacted the fundraiser -- will produce a better result in terms of both civil liberties and counterterrorism.

But if you have multiple seeds -- say the fundraiser and his banker and four couriers they work with -- that opens the door to much stronger mathematical analysis, at the expense of exponentially increasing the number of accounts you need to analyze.

Analysts are understandably greedy for data even when they don't necessarily need it, and much of government is filled with people for whom Big Data might as well be magic. The inevitable result is that when presidents, lawmakers, and judges are told in vague but enthusiastic terms that more data equals less terrorism, they might be inclined to write blank checks.

But while these matters are complex, they are not impenetrable. Once we get beyond the obvious and not-insignificant issue of whether the Foreign Intelligence Surveillance Act was intended to authorize such broad collection, there are important questions that must be addressed if we're going to continue to use these techniques -- which we almost certainly are.

1. How much contact can an analyst have with a U.S. person's data before it becomes a troublesome violation of privacy? Is it a violation to load a phone record into a graph if the analyst never looks at it individually? Is it a violation to look at the number individually if you don't associate a name? Is it a violation to associate a name if you never take any additional investigative steps?

2. Metadata analysis is more accurate when the data is more complete. Should minimization practices filter metadata on American citizens out of the analysis altogether? What if that means targeting might be less accurate and, ironically, more likely to designate innocent people for more intrusive scrutiny?

3. What percentage of phone traffic to targeted numbers travels only on foreign carriers? Does the absence of those data skew analysis and possibly overemphasize the scoring of phone numbers used by American citizens?

4. On a fundamental level, are we willing to trust mathematical formulas and behavioral models to decide who should receive intrusive scrutiny?

5. Metadata analysis rarely deals in certainties; it almost always produces probabilities. What probability of evil intent should these models demonstrate before the government uses them to help justify a phone tap, or a house search, or a drone strike? 90 percent? 60 percent? Should we allow incremental collection of slightly more intrusive data if they can clarify a marginal case?

6. Have we tested our analytical math to see how accurate its predictions are relative to the actual content of calls? If so, how were these tests done? If not, are we willing to trust these models based on their success in other fields, or do they need to be tested specifically for counterterrorism?

7. If we believe the models do need to be tested for accuracy, are we willing to endure the privacy violations such tests would almost certainly entail? Will more accurate models lead to better privacy in the long run by reducing the number of innocent people subjected to more intrusive scrutiny?

8. Are we willing to trust the government to hold this data? Although the government says this data is currently focused on foreign counterterrorism, do we believe the president might not order the NSA to access metadata in the wake of a terrorist attack of domestic origin?

9. On a related note, what happens if the origin of an attack isn't immediately clear, as in the Boston Marathon bombing? Should the NSA immediately begin a broad analysis of metadata and continue until it's clear where the responsibility lies?

10. If we were to allow the use of this technology in domestic terrorism investigations, during a crisis or otherwise, how do we avoid collecting information on legal political dissent? For instance, targeting anarchists might inadvertently produce a list of influential leaders in the Occupy movement. Targeting militia groups might create a database of gun sellers. When you plunge into a huge dataset, you sometimes get insights you didn't expect.

None of these questions is simple or easy. None of them lends itself to polling or punditry. They aren't easy to discuss in a reasoned and accurate manner during a two-minute TV hit or on the floor of the House of Representatives.

Yet they cut straight to the intersection of Big Data, counterterrorism, and the U.S. legal system, including constitutional protections against unreasonable searches. The founding fathers couldn't have imagined that one day a machine using advanced math might provide an argument in favor of a search warrant.

Our technological capabilities far exceed the wildest dreams of the authors of the Fourth Amendment, and neither the courts nor our laws have kept pace.

If America can't muster the energy to tackle these questions thoughtfully, we are likely to lose control of the outcome and become less free, less secure, or both.

And no one will be able to explain why.


Argument

Trade Secrets

The U.S.-EU free trade agreement could be a boon for the global economy, but confidential negotiations are a dangerous threat to democracy.

The winners of the 2012 and 2009 Nobel Peace Prizes are hooking up. At the G8 summit on June 17, President Barack Obama announced that the United States and the European Union would begin trade talks in Washington in July. British Prime Minister David Cameron predicted that if negotiations for the Transatlantic Trade and Investment Partnership (T-TIP) succeed, the trade agreement would bring millions of jobs to the nations bordering the Atlantic and could be "the biggest bilateral trade deal in history." But as important as a U.S.-EU trade pact would be for the global economy -- the resulting free trade area could amount to as much as 40 percent of global gross domestic product (GDP) -- it has even more important implications for the future of democracy.

Trade diplomats from both the United States and the 27 member states of the European Union say they want to create a 21st century trade agreement. They stress that in order to achieve that goal, they must not only reduce visible barriers to trade such as tariffs, but also achieve coherence among a wide range of social and environmental regulations -- everything from food safety and data protection to banking, labor, and environmental standards. Diplomats note that although these regulations have legitimate objectives, they may unintentionally raise costs for foreign producers relative to domestic ones. Firms selling to both markets potentially have to comply with regulations from the United States, the 27 EU countries, and the European Commission (the EU's executive bureaucracy). However, if the United States and European Union can find common ground on these regulations, firms would have one set of common rules, the costs of production could decrease, more jobs could be created, and trade would expand. So far, however, neither side has made clear whether the end goal for regulatory coherence is harmonization, convergence, or some form of mutual recognition in which both parties accept the other's regulations without demanding change.

American and European trade negotiators may find that regulatory coherence is difficult to achieve. First, both the European Union (at the national and EU-wide levels) and the United States have honed these regulations over time based on public and business preferences. Regardless of their impact on trade efficiency, the public on both sides of the Atlantic accepts these regulations as democratically determined and hence legitimate. But U.S. and EU citizens may not feel the same about regulatory compromises developed in secret by trade negotiators. Second, the United States and European Union have very different approaches to designing and implementing such regulations -- differences that stem from two equally different approaches to democratic capitalism and governance. In general, the European Union focuses on risks to society that stem from under-regulation -- such as injury or death from unsafe food, medicine, or workplaces. The United States, by contrast, is more concerned about the cost-effectiveness of regulations. Hence U.S. regulators weigh whether the costs of regulating outweigh the benefits, and whether market forces can better achieve these goals.

Not surprisingly, the two trade giants also have different regulatory strategies. The European Union tends to regulate in a top-down, state-controlled manner with labor, business, and civil society input. The United States, meanwhile, tries to encourage business self-regulation or, when regulating directly, tries to use regulation that encourages market forces (such as transparency) rather than the visible hand of government. Given these fundamental differences, trade diplomats may find that some citizens on both sides of the Atlantic oppose the T-TIP, whether because they believe attempts to achieve regulatory coherence mean deregulation or because they see them as defining regulations downward. At the same time, given the EU's stronger regulatory regime, some trade critics also see opportunity in the T-TIP. According to Leo Gerard, head of the United Steel Workers, because European workers have achieved higher workplace standards and maintained greater union clout, "An agreement, properly designed and implemented, could be a force for progress." The obvious solution to this problem is to facilitate direct public input into the negotiations. Yet that is not the current strategy.

Trade policymaking in both the United States and European Union remains stuck in a 19th century time warp of opacity and secrecy. While trade negotiators require secrecy to discuss sector-specific tariffs or confidential business information, it's hard to understand why such secrecy should apply to the negotiation of chapters on regulatory issues like labor rights, data protection (what the U.S. calls privacy), or the environment. Diplomats have long argued that secrecy builds trust between countries, as they must count on counterparts to keep information confidential. But in this type of negotiation, there is little to be gained from keeping the objectives, strategy, or progress secret. On the contrary, by keeping so much of the negotiation behind closed doors, negotiators may engender public distrust.

The United States has sought public comments on the negotiation, and the European Parliament has given its assent to the actual negotiating draft. (Although individual members of Congress have weighed in on the agreement, Congress has not yet held hearings on the talks.) But neither the United States nor the EU has clearly delineated how it might incorporate public comments into the negotiation process.

The United States, in particular, has not met its promises to ensure transparent and accountable governance. During his first campaign for the presidency, then-Sen. Obama promised to restore the American people's trust in government by making it more open and transparent. The president fulfilled this pledge, at least in part, with his Open Government Directive, issued in 2009, requiring government agencies to go public with their data. Nonetheless, the administration's approach to trade negotiations remains decidedly closed. For example, the website for the Office of the U.S. Trade Representative -- the agency charged with negotiating on behalf of the United States -- is essentially a dissemination device rather than an interactive forum on which citizens can register their input. At a minimum, the website should be used to facilitate a broader dialogue with Americans concerned about trade issues; it should be interactive, with staff designated to respond to citizen comments.

In general, trade policies in both the United States and the European Union are dictated by senior government officials who are generally responsive to a small group of concerned citizens and business interests. The U.S. Trade Representative does allow some individuals greater insight into the negotiations. For example, cleared advisors, including some members of Congress and congressional staff, are allowed to see up-to-date information about the negotiations -- but they are required to hold a secret clearance and to keep this information confidential. The bulk of these advisors represent commercial and economic interests -- or are individuals with connections to the current administration. Neither the United States nor the European Union has developed an advisory committee infrastructure to examine how to achieve regulatory coherence in a transparent, accountable manner -- so here are two suggestions:

First, Congress and the European Parliament should keep a close watch on the negotiations. Both bodies should also clarify whether achieving regulatory coherence means harmonization, mutual recognition, or some other approach. Second, the U.S. Trade Representative and the other agencies involved in the negotiations should become more proactive as well as more interactive online. The Obama administration should develop a website encouraging consistent public feedback and dialogue on the T-TIP throughout the course of the negotiations, rather than solely at the beginning and end. The website should clearly delineate the objectives of negotiations on regulatory coherence as well as the administration's desired outcome. It should also include regular updates on the negotiations for each chapter of the proposed agreement, particularly those that relate to environmental and social regulations.

Given the economic and political import of these negotiations, neither the United States nor the EU can afford to lose public trust in them. Policymakers cannot long proceed with secretive negotiations over policies that are not accountable to citizens, as the very public scuttling of the Anti-Counterfeiting Trade Agreement and the Multilateral Agreement on Investment reveals. In fact, in its required Open Government Plan, USTR agreed that it needed to change its culture and become more responsive to the public at large, but it is struggling to figure out how to do just that.

Regulatory coherence is an important objective for the United States and the European Union. If the two trade behemoths can find common ground on regulations, their shared standards will set the bar for the global economy and facilitate high standards worldwide. They will also enhance the clout of the world's oldest and largest democracies in the global economy. But policymakers must negotiate these regulations in a transparent and accountable manner that reflects 21st century standards for democratic governance. After all, even in the 18th century, policymakers such as James Madison recognized that a "popular government without popular information, or means of acquiring it, is but a Prologue to a Farce or a Tragedy, or perhaps both."
