What Richard III Can Teach Us Today

The world is grown so bad that wrens make prey where eagles dare not perch. Can Shakespeare’s fallen tyrant help us set it to rights?

I like the idea of the hunchbacked Richard III, newly exhumed from his final resting spot beneath a parking lot in Leicester, England, visiting the Oval Office. You can imagine the late, unlamented English monarch exchanging pleasantries with U.S. President Barack Obama about horseback riding and complaining about what a pain it is to deal with the intolerable French. They might also exchange notes on the inevitable headaches of leadership -- though, in Obama's case, he's not likely to take his skeet-shooting gun and parachute into Helmand province to battle the Taliban.

But the conversation could quickly take a more somber turn. If there is a lesson from the 1485 fall of Richard's House of York, it's that there are worse things than judicious appeasement.

The Wars of the Roses -- the 30-year civil war between Richard's Yorkist family and their Lancastrian cousins -- continued a three-way policy dance that dated far back into the Hundred Years War. The Lancastrians often found support among the French; their bitter enemies, the Yorkists, found their friends among the powerful Dukes of Burgundy. In 1475, Edward IV, Richard's older brother, became the first solvent English king in decades by finally cutting a good deal with the French. In exchange for no longer pressing his dubious claim to be the legitimate king of France, the French gave him a cozy pension and lucrative trading benefits. Yes, Richard's very distant cousin Henry V had been more dashing in all his belligerence at Agincourt, but it had cost his realm dearly. Edward was learning that diplomatic success was easier and cheaper to achieve than military victory, and often had longer-lasting results.

By the end of Edward's reign, however, that agreement between France and Yorkist England was starting to weaken. Edward's brother Richard III came to the throne determined to ignore it and to readopt a more swashbuckling attitude toward France. Huge mistake: France eagerly dumped money and resources into the invasion planned by Richard's Lancastrian enemy, Henry Tudor. (Henry had been living in exile first in Brittany and later in France itself.) If Richard had not antagonized the French, they almost certainly would not have supported Henry -- whose claim to the succession was weak at best -- and Richard might have escaped the double ignominy of death in battle and eventual burial under a parking lot, though perhaps he would not have been immortalized by history's greatest playwright.

In general, Richard III is less explicitly focused on foreign policy than Shakespeare's other early history plays. In terms of dramatic interest, murdering little princes and drowning people in malmsey butts trump treaty negotiations and ambassadorial exchanges. But as Richard rallies his troops to fight Henry, he speaks disparagingly of the "overweening rags of France" who have accompanied Henry back to England. Discounting the significance of French support was a mistake. A fatal one.

Lessons for today? Richard's radically unstable England does not look much like a 21st-century democracy. It lacked the clear principles of succession and state monopolies on violence that characterize the United States and its Western European allies. But with its armed factions marauding over the country, 15th-century England does resemble some of the countries with which the American president has to deal. Shakespeare's hunchbacked monarch would not be the last world leader to muscle himself into power through murder and intimidation. Nor would he be the last to swashbuckle on the international front in a desperate effort to solidify support at home.

Although writing at a moment when dynastic ambition still dictated policy, Shakespeare intuited the intimate relationship between foreign and domestic affairs that later characterized the Westphalian state. What was true of the feudal past remains true of the failed states that trouble the modern world. Countries that lack reliable successions and whose rulers fail to achieve a governing consensus pose a constant threat to their neighbors. From Mali to Eritrea to Syria, strongmen and would-be dictators are the first to disregard international agreements and rake up old quarrels in the vain hope of legitimizing their regimes.

Fortunately for England, Henry Tudor learned from his predecessors' mistakes. Although Henry VII's claim to the throne was even shakier than Richard's, he learned quickly that making peace with his neighbors was a better route to legitimacy than belligerence. The treaties he negotiated with France, Spain, and Scotland brought his realm a stability and prosperity that it had not known for two centuries.

Obama and his advisors should take note of how England's neighbors, France in particular, availed themselves of Richard's fall to settle years of border disputes and to integrate England into a more cooperative set of interstate relations. France even provided Henry a generous annual subsidy that helped him stabilize his realm. At least until the accession of his foolhardy son Henry VIII two decades later, England and all its neighbors profited from Henry VII's wise domestic governance. There is hope for failed states. They can be reclaimed and integrated within the community of nations. But world governments must recognize and support potential leaders, like Henry Tudor, who will work with them to forge lasting, stabilizing agreements.

And what lessons does Richard III hold for the world's petty tyrants? Nothing is more tempting when you have trouble at home than shaking fists at your traditional enemy. That strategy rarely works and often backfires. Damaging your credibility on the international front will only exacerbate your problems at home. It might even get you buried under a parking lot.



Doomsday Preppers

At a new center in Cambridge, a philosopher, an astronomer, and a software pioneer are looking for ways to save humanity from itself.

"Sometimes I feel I'm irrationally optimistic," says Huw Price. This is, perhaps, an unlikely statement for the co-founder of an organization dedicated to studying scenarios for the end of life as we know it. Price, an Australian philosopher known for his work on time and cosmology, is working to build the Centre for the Study of Existential Risk (CSER) -- a proposed new think tank at Cambridge University with the ambitious goal of "ensuring that our own species has a long-term future." A less cheery way of putting it is that the center will study possible ways that humanity is planting the seeds of its own destruction. "One of the problems we need to deal with is to find ways of coping with our own optimism," Price says.

To that end, he has partnered with two thinkers who couldn't really be described as glass-half-full guys. Martin Rees, a Cambridge astrophysicist who serves as Britain's Astronomer Royal, is the author of Our Final Century, a 2002 book predicting that, due to a lethal combination of possible natural and man-made catastrophes, our civilization has only a 50 percent chance of surviving through the year 2100. (In the United States, the book was published as Our Final Hour, because, Rees jokes, "Americans like instant gratification.") A veteran of the nuclear disarmament movement, he has also predicted that by 2020, "bioterror or bioerror will lead to 1 million casualties in a single event."

Rees seems positively cautious compared with the third member of the unlikely trio, Estonian computer programmer and technology theorist Jaan Tallinn, one of the key developers of Skype and, before that, the file-sharing service Kazaa. Tallinn inspired Price to start the center when, as the two split a cab at a conference in Copenhagen last year, he stated matter-of-factly that he believes he has a greater chance of being killed by an artificial intelligence-related accident than by cancer or heart disease -- the leading causes of death for men in his demographic. After all, every advance in technology makes these natural causes less likely and an AI disaster more likely, he explained.

CSER's founders aim to make scientists and developers of new technologies think more about the long-term consequences of their work. They also make the suggestion -- somewhat radical in scientific circles -- that new scientific knowledge is not always worth acquiring. Research on developing more deadly strains of the influenza virus might be one example. "We're trying to embed people whose job it is to think about risks into technology development teams in order to raise the consciousness of people in technology about potential risks," Price says. They hope that the message might resonate more coming from figures like Rees and Tallinn, whom nobody could accuse of Luddism.

The center is still in its fundraising stage, but has already attracted a list of high-profile advisors from a variety of fields, including their Cambridge colleague Stephen Hawking, the renowned astrophysicist. Depending on the level of funding they receive, Price says he imagines the center will consist of more than a dozen postdocs working with faculty advisors and will serve as a kind of clearinghouse for research on catastrophic risk from specialists around the world. "People interested in these issues tend to be very scattered in different disciplines and geographically," Price explains.

The eclectic cast of advisors who have already signed up ranges from development economist Partha Dasgupta -- whose work has explored the value society ought to place on future lives, as opposed to current lives, in the context of disasters like climate change -- to Nick Bostrom, the philosopher of technology known for posing such questions as the Matrix-esque, "Do we live in a computer simulation?"

Some of the risks the center will tackle are well known and frequently discussed -- nuclear war, for instance. "The threat of nuclear annihilation is only in temporary abeyance," says Rees. "We were lucky to get through the Cold War without a catastrophe. Even though the risk of tens of thousands of bombs going off now is less than it was then, we can't rule out a shift in the next 50 years that could lead to a new standoff, handled less well than the Cold War was."

Other subjects the center hopes to tackle are a bit more exotic, such as Tallinn's fears about hyperintelligent machines. Tallinn's ideas build on the work of past theorists like the pioneering computer scientist I.J. Good, who predicted in the 1960s that once machines became intelligent enough to reproduce themselves, it would trigger an "intelligence explosion" that would leave human beings in the dust. "The first ultraintelligent machine is the last invention that man need ever make," Good wrote in 1965. Futurists like Ray Kurzweil and Vernor Vinge developed the idea with their concept of a technological "singularity" -- the point at which artificial intelligence develops so quickly that the consequences become nearly impossible to predict. Tallinn believes there is a "double-digit" chance of the singularity occurring this century.

So what makes ultraintelligent machines so dangerous? Couldn't we coexist with our new robot cohorts? Well, for one thing, as Tallinn points out, it's not at all clear they would need to keep us around. "The idea of robots having their own society is just hopelessly anthropomorphic," he says. "They are not part of biological evolution. So you might not get a society of machines. You might just get a single machine that bootstraps itself into some really unseen level of intelligence. We're talking about living on a planet whose environment we no longer control, just as other species no longer control their environment right now."

Rebellious machines that turn on humanity are a staple of pop culture, from the 1921 Czech play that coined the word "robot" to modern classics like The Terminator and Battlestar Galactica. The motivation behind Tallinn's work with the center is to discuss such scenarios outside the realm of science fiction and encourage those who work in technology to take them seriously. "I'm not advocating refraining from technology development," he says. "But as tech gets more powerful, we need to consider all the consequences -- positive and negative -- before we proceed. But we can't just be permanently techno-optimist or techno-utopian."

As for Rees, what really keeps him up at night isn't nuclear war or robot uprisings or even asteroid impacts, which he sees as a straightforward problem with a "quantifiable risk which we can say is worth a few hundred million dollars to mitigate." It's the risks we haven't even thought of yet -- the unknown unknowns, if you will.

"The financial crisis was an example of something that no one predicted that went global because of interconnectedness in the world," Rees says. "It's a metaphor for what might happen with other kinds of breakdowns due to error or terror." The work of the center will be to separate the risks that are worth worrying about from those that can be left to the sci-fi authors. "We're probably talking about things that have a less than 50 percent chance individually. But that's true of the fire insurance on your house. Chances are your house won't burn down, but it's worth taking precautions to minimize that risk."

The founders also hope to get an early start in talking about these risks, as it might not be easy to convince policymakers to take them seriously. "If you look at the kinds of influence serious climate scientists have on policy, it isn't very encouraging," Price says. If scientists can't get political leaders to take serious action on an ongoing problem whose worldwide effects are easily noticeable and quantifiable, it's hard to imagine they'll have better luck warning of the risks of, say, hyperintelligent machines.

Then there's the question of whether encouraging more fear in the public is really a worthwhile course of action. From the serious (terrorist attacks) to the depressingly mundane (household products causing cancer) to the ridiculous ("Razor blades in your child's Halloween candy! News at 11."), we're constantly bombarded by things to be afraid of. But Price says this only makes it more important that we prioritize what we should be worried about.

"People tend to worry about the wrong things and make bad judgments about risk," he says. "People worry that their kids might be grabbed by strangers on the way to school, so they won't let them walk to school. Instead, they drive them to school and put them at much higher risk of injury. As a community, given that these kinds of risks are on the table, we need to be better at dealing with them."

A robot uprising might not be high on the list right now, but you can never be too careful.