Opening Gambit: Moore's Flaw

Why the tech industry's unbridled optimism won't save the world.

BY CHARLES HOMANS | NOVEMBER 2010

Next year marks the 50th birthday of the first commercially produced silicon microchip, a device that has arguably reordered society more profoundly than any invention since the internal combustion engine. It will also be the 46th birthday of Moore's Law, the prediction made by Gordon Moore, the billionaire co-founder of Intel, who more or less told us this would happen.

In an April 1965 essay in the trade journal Electronics, Moore predicted that the number of transistors that could fit onto a single chip would double every year -- later revised to every 18 to 24 months -- causing the power of chips to increase, and their price to fall, exponentially over time. It was an audacious claim, and one that proved accurate enough for Moore's Law to make the leap from obscure technical rule of thumb to omnipresent nugget of pop business philosophy, the kind of thing that MBAs love to invoke alongside Sun Tzu's battlefield aphorisms and Malcolm Gladwell's tipping point.

Moore's Law was a perfect fit with the broader zeitgeist of the early '00s. When the United States invaded Iraq with shock and awe, ABC News technology columnist Michael S. Malone crowed that Defense Secretary Donald Rumsfeld's military strategy, favoring high-tech weapons over heavy troop deployments, demonstrated how the "U.S. military has now jumped aboard Moore's Law." A 2004 editorial in the British Journal of Anaesthesia argued that a Moore's Law-like effect was at work in cardiovascular science and medicine, "both of which are advancing at a similar breakneck pace." And environmentally minded politicians like former U.S. Vice President Al Gore have made a habit of invoking it in the service of the most pressing technological struggle of the early 21st century: the effort to wean humanity off its reliance on fossil fuels.

But is the silicon chip really such a great model for solving the world's problems? Moore's prediction, formulated when its author was a 36-year-old researcher working at a Palo Alto semiconductor laboratory, was never the immutable natural principle its most ardent apostles believed it to be. "There's nothing law-like about Moore's Law," says W. Patrick McCray, a historian of technology at the University of California, Santa Barbara. What it was, first and foremost, was an economic argument.

When Moore wrote his essay, the silicon microchip was one of several possible technologies jostling in the primordial soup of midcentury computer science. Moore was arguing that an investment in silicon-based chips then under development at labs like his own would pay off faster, and more dramatically, than an investment in the alternatives. The particulars may have been dry, but Moore's Law struck a peculiarly American chord, with its promise of a rapturously unlimited future and its swaggeringly specific confidence. It had a whiff of Manifest Destiny, and a bit of Babe Ruth pointing to the bleachers at Wrigley Field.

It became a spectacularly successful self-fulfilling prophecy. By 2000, the integrated circuit had made semiconductor manufacturing into a $200 billion-plus industry and transformed the broader economy dramatically enough for U.S. Federal Reserve Chairman Alan Greenspan to remark that "information innovation lies at the root of productivity and economic growth." Moore's prediction of endless computing power and ever-shrinking prices, meanwhile, largely came to pass. A microchip in the early 1970s contained a couple thousand transistors, each of which cost just shy of a dollar. Intel's Xeon processor, introduced in 2007, is loaded with 820 million transistors, each one less than 1/800th the width of the thinnest human hair and costing less than 1/100,000th of a cent.
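The article's figures line up with Moore's cadence closely enough to check by hand. A minimal sketch in Python, using an assumed early-1970s starting count of roughly 2,300 transistors and a 24-month doubling period (both illustrative round numbers, not Intel's exact product history):

```python
def projected_transistors(start_count, start_year, end_year, months_per_doubling=24):
    """Project a transistor count forward under periodic doubling."""
    doublings = (end_year - start_year) * 12 / months_per_doubling
    return start_count * 2 ** doublings

# ~2,300 transistors in the early 1970s, projected to 2007:
estimate = projected_transistors(2_300, 1971, 2007)
print(f"{estimate:,.0f}")  # prints 602,931,200
```

Eighteen doublings over 36 years lands on the order of 600 million transistors -- the same ballpark as the 820 million in the 2007 Xeon the article cites.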

The exponential growth had a profound effect on how the Silicon Valley entrepreneurs of the 1980s and 1990s viewed the world. In the 2000s, many used their newly minted fortunes to start enterprises with the aim of tackling the world's big technological problems, the sort of things they had dreamed about in their geeky and idealistic youth. The multibillionaire founders of Amazon and Microsoft launched commercial spaceflight ventures. PayPal's Elon Musk started the electric car company Tesla Motors. Former dot-com executives founded Complete Genomics, a firm that hopes to transform genetic sequencing into an affordable consumer service, like getting an X-ray. And the venture capitalists who funded the 1990s tech boom have embraced the clean-energy business en masse, investing $12 billion from 2001 to 2009 in companies hoping to devise and market alternatives to oil and coal.


Charles Homans is an associate editor at Foreign Policy.

ED_KORCZYNSKI

2:41 PM ET

October 13, 2010

Moore's Law envy

I've interviewed Gordon Moore (1997, Solid State Technology magazine; http://www.electroiq.com/index/display/semiconductors-article-display/3232/articles/solid-state-technology/volume-40/issue-7/features/industry-insights/moores-law-extended-the-return-of-cleverness.html), and he always acknowledges that the "so-called" Moore's Law was really marketing hype driven by economics. In 1965, discrete semiconductors were the standard, while ICs suffered from horrible yield losses and were generally considered unreliable. Motivated to convince people to start designing ICs, Moore noticed that the periodic doubling of devices per IC (strictly speaking, the Law says that the number of all devices in an IC--not just transistors--doubles periodically) would lead to tipping points (couldn't resist using the term) for different applications, so that reliable electronic functionality could gradually be deployed in one high-volume market after another. First calculators, then PCs, then cellphones, then medical diagnostics, and new applications continue to open up--all driven by exponentially higher-function and ever lower-priced ICs.

While in any given moment it is often correct to extrapolate a straight line forward, no exponential goes on forever. As Gordon Moore stated back in 1997, physical atoms are the limit to Moore's Law--and today's state-of-the-art "32nm node" ICs already contain certain critical structures that are merely one to two atoms thick. However, before we reach the physical limits, we reach electrical and thermal limits, as Moore predicted in 1997. Note that commercial IC clock speeds have topped out in the low single digits of GHz, since faster chips generate excessive heat and require expensive active cooling. Consequently, as we reach electrical and thermal limits, we reach economic limits. How many more nodes are left ahead of us today? Intel's own website says the limit could be reached by 2020, which means that after 20+ nodes we have maybe 3-5 remaining. After that, all bets are off. Innovation will continue, but the pace will likely be ploddingly linear.
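That "3-5 remaining" estimate can be sanity-checked with back-of-the-envelope arithmetic, assuming the industry's customary ~0.7x linear shrink per node and an illustrative ~10nm floor where critical layers approach atomic thickness (both assumptions mine, not the commenter's):

```python
import math

# Each process node is conventionally a ~0.7x linear shrink (halving area).
# Count how many such shrinks separate the 32nm node from an assumed ~10nm floor.
SHRINK_PER_NODE = 0.7
FLOOR_NM = 10
START_NM = 32

nodes_remaining = math.log(FLOOR_NM / START_NM) / math.log(SHRINK_PER_NODE)
print(round(nodes_remaining, 1))  # prints 3.3
```

About three more halvings of area -- consistent with the handful of nodes the comment describes.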

To be sure, the "end of Moore's Law" has been predicted to be 3-5 nodes out for decades! At one time, it was considered to be too expensive to shrink ICs below 1 micron in critical dimension, yet engineers prevailed. The difference this time is the intrinsic atomic limits. Past limits were based on engineering, even though the science showed possible ways forward. Today's perceived limits are based on both science and engineering, with no proven concept of an economically viable way forward. This time is really different, and IC industry conferences now search for a future under the concept of "More than Moore."

No other industry has ever shown a dynamic like Moore's Law, despite ample wishful thinking. Photovoltaics may scale some physical features, but such work produces at best linear improvements at equivalent cost, and its cost reductions come from manufacturing volume. Medical technologies likewise continue to improve, but not at exponential rates. It is a flaw to think that a new high-tech development will follow exponentials in both functionality improvement and price reduction, but it is certainly understandable why many people have "Moore's Law envy."

 

PAUL 3GV2.COM

4:31 AM ET

October 17, 2010

Hello Charles, I Strongly Disagree ...

though not with the article itself, which is very reasonable.

No, I disagree with the headline which brought me to this article: "No, We're Not Going to Invent Our Way Out of Global Warming".

Being one who doesn't believe even a smidgen that "Global Warming" is real, I'm not concerned about it. It's simply a way to tax the air we breathe, because they have taxed everything else and need a new way.

I object then to the thought that WERE it real, we would not be smart enough to invent our way to a solution. On the contrary - I believe I've done just that very thing.

My idea has yet to be built and yet to be tested (I'm searching for funding opportunities now)--but I feel it is completely sound in theory. I believe it will achieve the objectives I've set out for it--namely, completely eliminating the need to get electricity by any other means: oil, coal, nuclear, wind, solar, bio, water, magnets, trash, algae, and whatever else--and it does so less expensively than is currently possible or available. All other "clean tech" solutions that everyone else has been working on with regard to the generation of electricity should become obsolete if I'm correct (and again, I believe I am).

For those who DO believe in Global Warming, this should be a good thing as it means that, in fact, the headline which I strongly disagree with and which brought me to this article (again "No, We're Not Going to Invent Our Way Out of Global Warming") was and is, completely wrong.

Time will tell--I could be severely wrong and misguided in my invention efforts. However, the fact that we have the potential to fail doesn't mean we shouldn't try--and it certainly doesn't mean that it can't be done.

In that spirit, I hope you wish me the best and, if so compelled, make introductions that could lead to this becoming a reality.

 

PADDYP

7:26 AM ET

November 9, 2010

Paul

Good luck with the funding - have you tried the Flat Earth Society?