
Think Again: Cyberwar

Don't fear the digital bogeyman. Virtual conflict is still more hype than reality.

"Cyberwar Is Already Upon Us."

No way. "Cyberwar is coming!" John Arquilla and David Ronfeldt predicted in a celebrated Rand paper back in 1993. Since then, it seems to have arrived -- at least by the account of the U.S. military establishment, which is busy competing over who should get what share of the fight. Cyberspace is "a domain in which the Air Force flies and fights," Air Force Secretary Michael Wynne claimed in 2006. By 2012, William J. Lynn III, the deputy defense secretary at the time, was writing that cyberwar is "just as critical to military operations as land, sea, air, and space." In January, the Defense Department vowed to equip the U.S. armed forces for "conducting a combined arms campaign across all domains -- land, air, maritime, space, and cyberspace." Meanwhile, growing piles of books and articles explore the threats of cyberwarfare, cyberterrorism, and how to survive them.

Time for a reality check: Cyberwar is still more hype than hazard. Consider the definition of an act of war: It has to be potentially violent, it has to be purposeful, and it has to be political. The cyberattacks we've seen so far, from Estonia to the Stuxnet virus, simply don't meet these criteria.

Take the dubious story of a Soviet pipeline explosion back in 1982, much cited by cyberwar's true believers as the most destructive cyberattack ever. The account goes like this: In June 1982, a Siberian pipeline that the CIA had virtually booby-trapped with a so-called "logic bomb" exploded in a monumental fireball that could be seen from space. The U.S. Air Force estimated the explosion at 3 kilotons, equivalent to a small nuclear device. Targeting a Soviet pipeline linking gas fields in Siberia to European markets, the operation sabotaged the pipeline's control systems with software from a Canadian firm that the CIA had doctored with malicious code. No one died, according to Thomas Reed, a U.S. National Security Council aide at the time who revealed the incident in his 2004 book, At the Abyss; the only harm came to the Soviet economy.

But did it really happen? After Reed's account came out, Vasily Pchelintsev, a former KGB head of the Tyumen region, where the alleged explosion supposedly took place, denied the story. There are also no media reports from 1982 that confirm such an explosion, though accidents and pipeline explosions in the Soviet Union were regularly reported in the early 1980s. Something likely did happen, but Reed's book is the only public mention of the incident and his account relied on a single document. Even after the CIA declassified a redacted version of Reed's source, a note on the so-called Farewell Dossier that describes the effort to provide the Soviet Union with defective technology, the agency did not confirm that such an explosion occurred. The available evidence on the Siberian pipeline blast is so thin that it shouldn't be counted as a proven case of a successful cyberattack.

Most other commonly cited cases of cyberwar are even less remarkable. Take the attacks on Estonia in April 2007, which came in response to the controversial relocation of a Soviet war memorial, the Bronze Soldier. The well-wired country found itself at the receiving end of a massive distributed denial-of-service attack that emanated from up to 85,000 hijacked computers and lasted three weeks. The attacks reached a peak on May 9, when 58 Estonian websites were attacked at once and the online services of Estonia's largest bank were taken down. "What's the difference between a blockade of harbors or airports of sovereign states and the blockade of government institutions and newspaper websites?" asked Estonian Prime Minister Andrus Ansip.

Despite his analogies, the attack was no act of war. It was certainly a nuisance and an emotional strike on the country, but the bank's actual network was not even penetrated; it went down for 90 minutes one day and two hours the next. The attack was not violent, it wasn't purposefully aimed at changing Estonia's behavior, and no political entity took credit for it. The same is true for the vast majority of cyberattacks on record.

Indeed, there is no known cyberattack that has caused the loss of human life. No cyberoffense has ever injured a person or damaged a building. And if an act is not at least potentially violent, it's not an act of war. Separating war from physical violence makes it a metaphorical notion; it would mean that there is no way to distinguish between World War II, say, and the "wars" on obesity and cancer. Yet those ailments, unlike past examples of cyber "war," actually do kill people.


"A Digital Pearl Harbor Is Only a Matter of Time."

Keep waiting. U.S. Defense Secretary Leon Panetta delivered a stark warning last summer: "We could face a cyberattack that could be the equivalent of Pearl Harbor." Such alarmist predictions have been ricocheting inside the Beltway for the past two decades, and some scaremongers have even upped the ante by raising the alarm about a cyber 9/11. In his 2010 book, Cyber War, former White House counterterrorism czar Richard Clarke invokes the specter of nationwide power blackouts, planes falling out of the sky, trains derailing, refineries burning, pipelines exploding, poisonous gas clouds wafting, and satellites spinning out of orbit -- events that would make the 2001 attacks pale in comparison.

But the empirical record is less hair-raising, even by the standards of the most drastic example available. Gen. Keith Alexander, head of U.S. Cyber Command (established in 2010 and now boasting a budget of more than $3 billion), shared his worst fears in an April 2011 speech at the University of Rhode Island: "What I'm concerned about are destructive attacks," Alexander said, "those that are coming." He then invoked a remarkable accident at Russia's Sayano-Shushenskaya hydroelectric plant to highlight the kind of damage a cyberattack might be able to cause. Shortly after midnight on Aug. 17, 2009, a 900-ton turbine was ripped out of its seat by a so-called "water hammer," a sudden surge in water pressure that then caused a transformer explosion. The turbine's unusually high vibrations had worn down the bolts that kept its cover in place, and an offline sensor failed to detect the malfunction. Seventy-five people died in the accident, energy prices in Russia rose, and rebuilding the plant is slated to cost $1.3 billion.

Tough luck for the Russians, but here's what the head of Cyber Command didn't say: The ill-fated turbine had been malfunctioning for some time, and the plant's management was notoriously poor. On top of that, the key event that ultimately triggered the catastrophe seems to have been a fire at Bratsk power station, about 500 miles away. Because the energy supply from Bratsk dropped, authorities remotely increased the burden on the Sayano-Shushenskaya plant. The sudden spike overwhelmed the turbine, which was two months shy of reaching the end of its 30-year life cycle, sparking the catastrophe.

If anything, the Sayano-Shushenskaya incident highlights how difficult a devastating attack would be to mount. The plant's washout was an accident at the end of a complicated and unique chain of events. Anticipating such vulnerabilities in advance is extraordinarily difficult even for insiders; creating comparable coincidences from cyberspace would be a daunting challenge at best for outsiders. If this is the most drastic incident Cyber Command can conjure up, perhaps it's time for everyone to take a deep breath.


"Cyberattacks Are Becoming Easier."

Just the opposite. U.S. Director of National Intelligence James R. Clapper warned last year that the volume of malicious software on American networks had more than tripled since 2009 and that more than 60,000 pieces of malware are now discovered every day. The United States, he said, is undergoing "a phenomenon known as 'convergence,' which amplifies the opportunity for disruptive cyberattacks, including against physical infrastructures." ("Digital convergence" is a snazzy term for a simple thing: more and more devices able to talk to each other, and formerly separate industries and activities able to work together.)

Just because there's more malware, however, doesn't mean that attacks are becoming easier. In fact, potentially damaging or life-threatening cyberattacks should be more difficult to pull off. Why? Sensitive systems generally have built-in redundancy and safety features, so an attacker's objective will rarely be simply to shut a system down: merely forcing one control system offline, at a power plant, say, could trigger a backup and send operators looking for the bug. To work as an effective weapon, malware would have to influence an active process -- but not bring it to a screeching halt. If the malicious activity extends over a lengthy period, it has to remain stealthy. That's a more difficult trick than hitting the virtual off-button.

Take Stuxnet, the worm that sabotaged Iran's nuclear program in 2010. It didn't just crudely shut down the centrifuges at the Natanz nuclear facility; rather, the worm subtly manipulated the system. Stuxnet stealthily infiltrated the plant's networks, then hopped onto the protected control systems, intercepted input values from sensors, recorded these data, and then provided the legitimate controller code with pre-recorded fake input signals, according to researchers who have studied the worm. Its objective was not just to fool operators in a control room, but also to circumvent digital safety and monitoring systems so it could secretly manipulate the actual processes.

Building and deploying Stuxnet required extremely detailed intelligence about the systems it was supposed to compromise, and the same will be true for other dangerous cyberweapons. Yes, "convergence," standardization, and sloppy defense of control-systems software could increase the risk of generic attacks, but the same trend has also caused defenses against the most coveted targets to improve steadily and has made reprogramming highly specific installations on legacy systems more complex, not less.


"Cyberweapons Can Create
Massive Collateral Damage."

Very unlikely. When news of Stuxnet broke, the New York Times reported that the most striking aspect of the new weapon was the "collateral damage" it created. The malicious program was "splattered on thousands of computer systems around the world, and much of its impact has been on those systems, rather than on what appears to have been its intended target, Iranian equipment," the Times reported. Such descriptions encouraged the view that computer viruses are akin to highly contagious biological viruses that, once unleashed from the lab, will turn against all vulnerable systems, not just their intended targets.

But this metaphor is deeply flawed. As the destructive potential of a cyberweapon grows, the likelihood that it could do far-reaching damage across many systems shrinks. Stuxnet did infect more than 100,000 computers -- mainly in Iran, Indonesia, and India, though also in Europe and the United States. But it was so specifically programmed that it didn't actually damage those machines, afflicting only Iran's centrifuges at Natanz. The worm's aggressive infection strategy was designed to maximize the likelihood that it would reach its intended target. Because that final target was not networked, "all the functionality required to sabotage a system was embedded directly in the Stuxnet executable," the security software company Symantec observed in its analysis of the worm's code. So yes, Stuxnet was "splattered" far and wide, but it only executed its damaging payload where it was supposed to.

Collateral infection, in short, is not necessarily collateral damage. A sophisticated piece of malware may aggressively infect many systems, but if there is an intended target, the infection will likely have a distinct payload that will be harmless to most computers. Especially in the context of more sophisticated cyberweapons, the image of inadvertent collateral damage doesn't hold up. They're more like a flu virus that only makes one family sick.



"In Cyberspace, Offense Dominates Defense."

Wrong again. The information age has "offense-dominant attributes," Arquilla and Ronfeldt wrote in their influential 1996 book, The Advent of Netwar. This view has spread through the American defense establishment like, well, a virus. A 2011 Pentagon report on cyberspace stressed "the advantage currently enjoyed by the offense in cyberwarfare." The intelligence community stressed the same point in its annual threat report to Congress last year, arguing that offensive tactics -- known as vulnerability discovery and exploitation -- are evolving more rapidly than the federal government and industry can adapt their defensive best practices. The conclusion seemed obvious: Cyberattackers have the advantage over cyberdefenders, "with the trend likely getting worse over the next five years."

A closer examination of the record, however, reveals three factors that put the offense at a disadvantage. First is the high cost of developing a cyberweapon, in terms of the time, talent, and target intelligence needed. Stuxnet, experts speculate, took a superb team and a lot of time. Second, the potential for generic offensive weapons may be far smaller than assumed for the same reasons, and a highly specific attack program, despite significant investment, may be deployable only against a very limited target set. Third, once developed, an offensive tool is likely to have a far shorter half-life than the defensive measures put in place against it. Even worse, a weapon may only be able to strike a single time; once the exploits of a specialized piece of malware are discovered, the most critical systems will likely be patched and fixed quickly. And a weapon, even a potent one, is not much of a weapon if an attack cannot be repeated. Any political threat relies on the credible ability to attack or to repeat a successful attack; if that ability is in doubt, the coercive power of a cyberattack is drastically reduced.


"We Need a Cyberarms Control Agreement."

We don't. Cyberwar alarmists want the United States to see cybersecurity as a new challenge on a geopolitical scale. They see cyberspace becoming a new area for military competition with rivals such as Russia and China, and they believe new cyberarms limitation agreements are needed to prevent this. There are some rumblings to establish international norms on this topic: The British government convened a conference in London in late 2011, originally intended to make the Internet more secure by agreeing on new rules of the road, and Russia and China proposed at the U.N. General Assembly last September the establishment of an "international code of conduct for information security." Now, diplomats are debating whether the United Nations should try to forge the equivalent of nuclear arms control in cyberspace.

So, should it? The answer is no. Attempts to limit cyberweapons through international agreements have three principal problems. The first difficulty is drawing the line between cybercrime and potentially political activity in cyberspace. In January, for instance, a Saudi hacker stole about 20,000 Israeli credit card numbers from a shopping website and leaked the information to the public. In retaliation, a group of Israeli hackers broke into Saudi shopping sites and threatened to release private credit card information.

Where is the dividing line? Even if it were possible to distinguish criminal from state-sponsored political activity, the two often use the same means. A second hitch is practical: Verification would be impossible. Accurately counting the size of nuclear arsenals and monitoring enrichment activities is already a huge challenge; installing cameras to film programmers and "verify" that they don't design malicious software is a pipe dream.

The third problem is political, and even more fundamental: Cyberaggressors may act politically, but in sharp contrast with warfare, they are likely to have a strong interest in avoiding attribution. Subversion has always thrived in cyberspace because preserving one's anonymity is easier to achieve than ironclad attribution. That's the root of the political problem: Having a few states agree on cyberarms limitation is about as realistic as a treaty to outlaw espionage and about as practical as outlawing the general subversion of established order.


"The West Is Falling Behind Russia and China."

Yes, but not how you think. Russia and China are busy sharpening their cyberweapons and are already well steeped in using them. The Russian military clandestinely crippled Estonia's economy in 2007 and Georgia's government and banks in 2008. The People's Liberation Army's numerous Chinese cyberwarriors have long inserted "logic bombs" and "trapdoors" into America's critical infrastructure, lying dormant and ready to wreak havoc on the country's grid and bourse in case of a crisis. Both countries have access to technology, cash, and talent -- and have more room for malicious maneuvers than law-abiding Western democracies poised to fight cyberwar with one hand tied behind their backs.

Or so the alarmists tell us. Reality looks quite different. Stuxnet, by far the most sophisticated cyberattack on record, was most likely a U.S.-Israeli operation. Yes, Russia and China have demonstrated significant skills in cyberespionage, but the fierceness of Eastern cyberwarriors and their coded weaponry is almost certainly overrated. When it comes to military-grade offensive attacks, America and Israel seem to be well ahead of the curve.

Ironically, it's a different kind of cybersecurity that Russia and China may be more worried about. Why is it that those countries, along with such beacons of liberal democracy as Uzbekistan, have suggested that the United Nations establish an "international code of conduct" for cybersecurity? Cyberespionage was elegantly ignored in the suggested wording for the convention, as virtual break-ins at the Pentagon and Google remain a favorite official and corporate pastime of both countries. But what Western democracies see as constitutionally protected free speech in cyberspace, Moscow and Beijing regard as a new threat to their ability to control their citizens. Cybersecurity has a broader meaning in non-democracies: For them, the worst-case scenario is not collapsing power plants, but collapsing political power.

The social media-fueled Arab Spring has provided dictators with a case study in the need to patrol cyberspace not only for subversive code, but also for subversive ideas. The fall of Egypt's Hosni Mubarak and Libya's Muammar al-Qaddafi surely sent shivers down the spines of officials in Russia and China. No wonder the two countries asked for a code of conduct that helps combat activities that use communications technologies -- "including networks" (read: social networks) -- to undermine "political, economic and social stability."

So Russia and China are ahead of the United States, but mostly in defining cybersecurity as the fight against subversive behavior. This is the true cyberwar they are fighting.



Think Again: Microfinance

Small loans probably won't lift people out of poverty or empower women. But that doesn't mean they're useless.

 

"Microcredit Is a Proven Weapon Against Poverty."

Alas, no. Microcredit, the strategy of lending sums as small as $100 to help poor people start tiny businesses, has won acclaim like few other recent concepts in economic development, drawing plaudits from political leaders, titans of industry, and celebrities. Bill Clinton and Tony Blair love microcredit. So do Queen Rania and Natalie Portman. More than 100 million people in more than 100 countries have received microloans, thanks in no small part to billions of dollars from foreign aid agencies, philanthropists, and "social investors" looking to do well while doing good. In 2006, microcredit pioneer Muhammad Yunus and the Grameen Bank he founded in Bangladesh shared the Nobel Peace Prize. Microcredit has gained a global reputation for lifting people out of poverty and empowering women.

What has made so many so sure of microcredit? The ideas are powerful: a blend of self-reliance and liberation that appeals across the political spectrum. Microfinance promoters told compelling stories of individual men and women whose successes embodied those ideas, and papers in prestigious journals gave convincing evidence that the loans, especially when they went to women, made borrowers less poor.

But the old studies are now discredited. Newer, better ones have found that microloans rarely make an impact on bottom-line indicators of poverty, such as how much a household spends each month and whether its children are in school.

The reversal of this academic verdict is a sign of a larger shift in development economics, toward randomizing in order to pin down cause and effect. If you observe that less-poor people are more likely to have taken microcredit, it is hard to know what caused what: Did the microcredit make them better off, or did being better off make them readier to borrow? If you instead flip a coin to decide who in a village will be offered microcredit and who will not -- randomizing -- and then observe that the fates of the two groups diverge over time, you can more accurately observe what effect the loans are having on those who receive them.
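
To make that logic concrete, here is a minimal, purely hypothetical sketch in Python of the two comparisons described above. None of the numbers come from the studies cited; the sample size, the spending figures, and the zero "true" effect are invented for illustration. The point is only that when a coin flip decides who is offered a loan, the gap between the two groups' average outcomes isolates the offer's effect, whereas a naive comparison of borrowers with non-borrowers also picks up the fact that better-off households borrow more readily.

```python
import random
import statistics

random.seed(42)

N = 2000  # hypothetical households across the study villages

# Baseline monthly household spending in dollars (illustrative numbers only).
baseline = [random.gauss(120, 30) for _ in range(N)]

# Randomize: a coin flip, not a household's circumstances, decides who is
# offered microcredit.
offered = [random.random() < 0.5 for _ in range(N)]

# Simulated spending at an 18-month follow-up. The "true" effect of the offer
# is set to zero here, echoing the recent randomized findings; each household
# also drifts by some noise unrelated to the loans.
TRUE_EFFECT = 0.0
followup = [
    b + random.gauss(5, 25) + (TRUE_EFFECT if offer else 0.0)
    for b, offer in zip(baseline, offered)
]

treatment = [y for y, offer in zip(followup, offered) if offer]
control = [y for y, offer in zip(followup, offered) if not offer]

# Because assignment was random, this simple difference in means is an
# unbiased estimate of the effect of being offered a loan.
rct_estimate = statistics.mean(treatment) - statistics.mean(control)
print(f"Randomized estimate of the offer's effect: ${rct_estimate:+.2f}/month")

# For contrast, a naive observational comparison: suppose better-off
# households are simply more likely to choose to borrow. Comparing borrowers
# with non-borrowers then mixes the loan's effect with who chose to borrow.
took_loan = [random.random() < (0.7 if b > 120 else 0.3) for b in baseline]
borrowers = [y for y, took in zip(followup, took_loan) if took]
non_borrowers = [y for y, took in zip(followup, took_loan) if not took]
naive_gap = statistics.mean(borrowers) - statistics.mean(non_borrowers)
print(f"Naive borrower-vs-non-borrower gap: ${naive_gap:+.2f}/month")
```

Run under these assumptions, the randomized estimate hovers near zero while the naive gap does not, which is exactly the reverse-causation trap the earlier, nonrandomized studies could not escape.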

Recent randomized studies in India, Mongolia, Morocco, and the Philippines have found that access to microcredit does stimulate microbusiness start-ups -- raising chickens, say, or sewing saris. But across the 12-18 months over which progress was tracked, the loans did not reduce poverty. So today the best estimate of the impact of microcredit on poverty is zero. (In retrospect, reverse causation cannot be ruled out as the source of the more upbeat findings of earlier, nonrandomized studies.)

This finding clashes with the microcredit mythology. But it comports with common sense. If you're reading this article online, you probably belong to the global middle class, the billion or so people who earn steady wages and lead lives of material comfort. What in your family history lifted you to your enviable perch? It probably wasn't tiny loans to your indigent ancestors so they could raise goats. Then, as now, most poor people's best hope for escaping poverty lies in graduating from tenuous self-employment to steady employment -- to jobs, which are the fruit of industrialization.


 

"Microfinance Is Useless."

No. It would be wrong to overreact to the hype about microloans and dismiss the entire enterprise as a waste of money and effort. Twenty years ago, journalist Helen Todd spent a year following the lives of 62 women in two Bangladeshi villages served by Yunus's famous Grameen Bank. Of the 40 who took microcredit from Grameen, all stated business plans in order to get the loans: They would buy cows to fatten or rice to husk and resell. A few actually did those things, but most used the money to buy or lease land, repay other loans, stock up on rice for the family, or finance dowries and weddings.

That's probably just fine. As the book Portfolios of the Poor shows, the people said to live on $2 a day actually don't. They live on $3 one day, $1 the next, and $2.50 the day after. Or they are farmers who earn money once a season. But their children need to be fed every day, and husbands don't fall ill on convenient schedules. The need to match an unpredictable income to spending needs with different rhythms generates an intense demand among poor people for financial services that help them set aside money in good times, when they need it less, and draw it down in bad.

All financial services help meet this demand, however imperfectly: loans, savings accounts, insurance, money transfers. A mother can pay the doctor for treating her daughter by getting an emergency loan from a friend, depleting savings, persuading her brother in the city to send money, or even -- if she is very lucky -- using health insurance. That is why the microcredit movement became the microfinance movement and today supports other services along with loans.

Poor people have less money than the rich, but they aren't dumber; in fact they are generally more resourceful out of necessity. If a woman uses a microloan to buy rice or repair a roof instead of starting a business, I hesitate to second-guess her. People in wealthy countries see fit to buy everything from food to houses on credit. Should we expect the poor to differ?


 

"Muhammad Yunus Invented Microcredit."

Yes, just as Henry Ford invented the car. Where Ford had the assembly line, Yunus's breakthrough innovation was joint liability, the practice of making small groups of borrowers -- the women of a particular village, for instance -- collectively responsible for each other's loans. Peers vouching for one another substituted for collateral and produced astonishingly high repayment rates.

Joint liability was not new, however. Proverbs 11:15 warns, "A foolish man hands over his bounty which he pledges for his neighbor as security." A similar concept was also at the core of the credit cooperatives that sprouted across Germany starting in the 1850s, in which groups of poor people would band together, borrow from outside benefactors, and then divvy out the credit among themselves. Around 1900, seeking to quell unrest, the British introduced credit groups into colonial India, which included the territory of modern Bangladesh. In the late 1970s, these already functioning cooperatives inspired Yunus and his students as they built their own microcredit method by trial and error.

Yet the comparison to the carmaker is apt. Truly, Yunus is the Henry Ford of microfinance. Over the course of 28 years, until Bangladesh's prime minister forced him out in 2011 in an act of political spite, Yunus built a bank with thousands of employees delivering useful services to millions of customers. He inspired competition within Bangladesh and imitation beyond, which led to a steady stream of new innovations in the name of serving the poor, including savings accounts and more flexible loans. He was the first leader of the modern microcredit movement to operate in a relatively businesslike way: to mass-produce services and charge the poor enough interest to cover most operating costs so that the bank could expand to serve more people.


 

"Microcredit Empowers Women."

Not so much. The microcredit movement began in the 1970s. In sync with the global movement for gender equality that began at the same time, microcredit has focused mainly on women. Promoters have asserted that the loans "empower" female borrowers. Women who came home with loans, it was said, gained more leverage vis-à-vis their husbands in household decisions about whether to buy food or beer, to invest or consume. Meanwhile, women who had been traditionally confined by their culture to the domestic sphere, as in Bangladesh, found liberation in being able to conduct business in public at the weekly meetings where loan installments were paid. Some nonprofit microfinance programs include classes about such subjects as basic accounting and prenatal nutrition.

But though credit is a source of possibilities, it is also a bond -- potentially an oppressive one when enforced through peer pressure. Indeed, greater sensitivity to social pressure helps explain why microlenders have favored women: In many cases, they have paid back more reliably, putting up less argument than men.

Anthropological studies have found a mix of stories about the link between credit and empowerment. In some cases, women gain increments of liberation, just as hoped. After studying female microcredit users in Bangladesh in the mid-1990s, Syed Hashemi, Sidney Schuler, and Ann Riley concluded in the academic journal World Development that the Grameen Bank had empowered female borrowers on average. They wrote:

Several of the women … told the field investigators that through Grameen Bank they had "learned to talk," and now they were not afraid to talk to outsiders. In both programs some members have the opportunity to play leadership roles. One woman told the researchers, "I have been made the [borrowing group] Chief. Now all of the other women listen to me and give me their attention. Grameen Bank has made me important."

But there are also sad stories. Anthropologist Lamia Karim has documented how in Bangladesh, where most borrowers are female, women who defaulted have had their possessions -- in extreme cases, their houses -- carted off by their jointly liable peers to be sold to repay their loans.

From what I can tell from the fragmentary evidence, the most famous form of microcredit -- group-based credit as pioneered by Grameen -- is the least empowering and most fraught with risk, because of the way it marshals peer pressure to enforce loan repayment. Individual microloans, given one-on-one, without the burdens of weekly group meetings and peer pressure, appear to have less of a dark side. If microbank staff can't outsource loan decisions to the group, though, they must spend more time vetting customers, making the whole enterprise less profitable and less likely to focus on the neediest.


 

"Microcredit Is Immune to the Irrationalities of Mainstream Finance."

Absolutely not. The hype made it seem as if more money for microcredit were always better. But microcredit is actually more prone than conventional credit to overheating and bubbles. It suffers from two vulnerabilities: a general lack of credit bureaus to track the indebtedness of low-income people, which leaves creditors flying blind; and the irrational exuberance about microcredit as a way to help the poor, which has unleashed a flood of capital from well-meaning people and institutions.

Most of this cross-border capital flow -- some $3 billion in 2010 -- has gone straight into microloans rather than business-building activities such as training and computer purchases. The stock of outstanding microdebt has grown 30 percent or more per year in many countries. The pace has proved faster than some lenders and borrowers could safely manage. In Nicaragua, after a nationwide debtors' revolt won backing from President Daniel Ortega, the tide of defaults destroyed one of the largest microcreditors, Banex. In the last five years, bubbles have also inflated and popped in Bosnia-Herzegovina, Morocco, and parts of Pakistan. In the short run, that has been good news for borrowers who took loans and then defaulted. (After all, if the lenders lost a lot of money, that money went somewhere!) But in the long view, damaging the industry reduces access to finance.

Then there is the Indian state of Andhra Pradesh, where, before the overheated market could implode on its own, the state government in 2010 essentially shut down the industry overnight. Visiting shortly afterward, I learned of villages where microcreditors were so plentiful that they were known by the day of the week on which their clients gathered to get loans and make payments. Some women had loans for every day of the week.

The bottom line: Microfinance is no silver bullet for poverty, but it does have things to offer. The strength of the movement is not in reducing poverty or empowering women, but in building dynamic institutions that deliver inherently useful services to millions of poor people. Imagine your life without financial services: no bank account, no insurance, no loans for a house or an education; just cash in your pocket or under your mattress. Poor people transact in smaller denominations, but they have to solve financial problems at least as tough as yours. They need and deserve such services too, just as they do clean water and electricity. The microfinance movement is about building businesses and business-like nonprofits that mass-produce financial services for the poor -- not just microcredit, but microsavings, microinsurance, and micro money transfers too.

The well-meaning flood of money into microcredit distorts the industry toward overreliance on this one, risky service. It is the greatest threat to the greatest strength of microfinance as a whole. That is why the hype about microcredit has been not merely misleading but destructive. And that is why less money should go into microcredit, not more.
