Doomsday Preppers

At a new center in Cambridge, a philosopher, an astronomer, and a software pioneer are looking for ways to save humanity from itself.

BY JOSHUA E. KEATING | FEBRUARY 4, 2013

"Sometimes I feel I'm irrationally optimistic," says Huw Price. This is, perhaps, an unlikely statement for the co-founder of an organization dedicated to studying scenarios for the end of life as we know it. Price, an Australian philosopher known for his work on time and cosmology, is working to build the Centre for the Study of Existential Risk (CSER) -- a proposed new think tank at Cambridge University with the ambitious goal of "ensuring that our own species has a long-term future." A less cheery way of putting it is that the center will study possible ways that humanity is planting the seeds of its own destruction. "One of the problems we need to deal with is to find ways of coping with our own optimism," Price says.

To that end, he has partnered with two thinkers who couldn't really be described as glass-half-full guys. Martin Rees, a Cambridge astrophysicist who serves as Britain's Astronomer Royal, is the author of Our Final Century, a 2003 book predicting that, due to a lethal combination of possible natural and man-made catastrophes, our civilization has only a 50 percent chance of surviving through the year 2100. (In the United States, the book was published as Our Final Hour, because, Rees jokes, "Americans like instant gratification.") A veteran of the nuclear disarmament movement, he has also predicted that by 2020, "bioterror or bioerror will lead to 1 million casualties in a single event."

Rees seems positively cautious compared with the third member of the unlikely trio, Estonian computer programmer and technology theorist Jaan Tallinn, one of the key developers of Skype and, before that, the file-sharing service Kazaa. It was Tallinn who inspired Price to start the center: while the two were splitting a cab at a conference in Copenhagen last year, Tallinn stated matter-of-factly that he believes he has a greater chance of being killed by an artificial intelligence-related accident than by cancer or heart disease -- the leading causes of death for men in his demographic. After all, every advance in technology makes these natural causes less likely and an AI disaster more likely, he explained.

CSER's founders aim to make scientists and developers of new technologies think more about the long-term consequences of their work. They also make the suggestion -- a somewhat radical one in scientific circles -- that some new scientific knowledge is not worth acquiring. Research on developing more deadly strains of the influenza virus might be one example. "We're trying to embed people whose job it is to think about risks into technology development teams in order to raise the consciousness of people in technology about potential risks," Price says. They hope that the message might resonate more coming from figures like Rees and Tallinn, whom nobody could accuse of Luddism.

The center is still in its fundraising stage, but it has already attracted a list of high-profile advisors from a variety of fields, including their Cambridge colleague Stephen Hawking, the renowned astrophysicist. Depending on the level of funding they receive, Price says he imagines the center will consist of more than a dozen postdocs working with faculty advisors and will serve as a kind of clearinghouse for research on catastrophic risk from specialists around the world. "People interested in these issues tend to be very scattered in different disciplines and geographically," Price explains.

The eclectic cast that has already signed up ranges from development economist Partha Dasgupta -- whose work has explored the value society ought to place on future lives, as opposed to current ones, in the context of disasters like climate change -- to Nick Bostrom, the philosopher of technology known for posing such Matrix-esque questions as "Do we live in a computer simulation?"

Some of the risks the center will tackle are well known and frequently discussed -- nuclear war, for instance. "The threat of nuclear annihilation is only in temporary abeyance," says Rees. "We were lucky to get through the Cold War without a catastrophe. Even though the risk of tens of thousands of bombs going off now is less than it was then, we can't rule out a shift in the next 50 years that could lead to a new standoff, handled less well than the Cold War was."

Other subjects the center hopes to tackle are a bit more exotic, such as Tallinn's fears about hyperintelligent machines. Tallinn's ideas build on the work of past theorists like the pioneering computer scientist I.J. Good, who predicted in the 1960s that once machines became intelligent enough to design still better machines, the result would be an "intelligence explosion" that would leave human beings in the dust. "The first ultraintelligent machine is the last invention that man need ever make," Good wrote in 1965. Futurists like Vernor Vinge and Ray Kurzweil developed the idea into the concept of a technological "singularity" -- the point at which artificial intelligence advances so quickly that the consequences become nearly impossible to predict. Tallinn believes there is a "double-digit" chance of the singularity occurring this century.

Joshua E. Keating is an associate editor at Foreign Policy.