By Duncan Pike, M.A. Candidate at the Munk School of Global Affairs, University of Toronto.
I want to talk about the apocalypse, and propose measures that we might take to avert it. The prospect of humanity dying off, thereby joining the more than 99% of all species ever to have existed that are now extinct, has grown more likely in the past century, as our increasingly sophisticated understanding of the natural world gives us powerful new tools of destruction. Alongside the old threats of deadly pandemics, asteroid strikes and supervolcanoes, we must face the new prospect that we may be the authors of our own destruction, either unwittingly or through the deliberate act of a crazed minority.
The 1945 Trinity nuclear test inaugurated the atomic age, and with it the potential for humans to exterminate all life on earth in a matter of hours. Mass extinctions have occurred at least five times in the history of our planet; since Trinity we have had it in our power to initiate the sixth. At the same time, increased global connectedness and technological maturity give us the capability to avert potential threats to humanity. Nuclear war is now only one of several ‘global catastrophic risks’ to humanity that carry a low probability but extremely high potential harm.
Security studies has evolved since the end of the Cold War away from an exclusive focus on ‘hard’ security issues of military strategy, defence spending, and the causes of war, towards a more comprehensive understanding of how and why threats and catastrophes occur, whether natural or man-made. One could hardly conceive of a more comprehensive failure of ‘security’ than the extinction of humanity, or a more appropriate expression of global interdependence and cooperation than shared efforts towards its prevention. We can thus see the study of catastrophic or existential risk as an essential part of the natural evolution of global security policy.
While the probability of any current existential risk is low, the expected value of averting it is very high, given the enormous stakes. Martin Rees, the British Astronomer Royal, offered this rather sour assessment of humanity’s prospects in 2003:
I think the odds are no better than fifty-fifty that our present civilization on Earth will survive to the end of the present century. Our choices and actions could ensure the perpetual future of life (not just on Earth, but perhaps far beyond it, too). Or in contrast, through malign intent, or through misadventure, twenty-first century technology could jeopardise life’s potential, foreclosing its human and post-human future. What happens here on Earth, in this century, could conceivably make the difference between a near eternity filled with ever more complex and subtle forms of life and one filled with nothing but base matter.
Rees speculates, and most researchers in the field agree, that the greatest threat in the near term comes from new and emerging technologies. Specifically, these are technologies that may radically expand our ability to manipulate the natural world and biological processes, such as molecular nanotechnology, particle accelerator experiments, artificial general intelligence, and biotechnology.
The mitigation of existential risks has gone under-studied and under-funded relative to its consequences. Catastrophic risk reduction is a global public good: everyone benefits from reductions to existential risk, whether or not they contributed to them. This creates a free-rider problem and leaves mitigation vulnerable to market failure. The problem is compounded by the low probability of the risks in question, and by the enormous time-scales used to calculate the expected value of mitigation efforts. Small, hypothetical risks are easily forgotten, and those that are sufficiently large or worrisome to gain attention can be dismissed with the thought that, given the danger, someone must be dealing with it.
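The expected-value logic behind this argument can be made concrete with a toy calculation. The numbers below are entirely hypothetical, chosen only to illustrate how a tiny probability multiplied by enormous stakes can still justify substantial mitigation spending:

```python
# A toy expected-value calculation for existential-risk mitigation.
# All inputs are hypothetical illustrations, not estimates from the
# risk literature.

def expected_lives_saved(p_catastrophe, relative_risk_reduction, lives_at_stake):
    """Expected lives saved by an intervention that cuts the
    probability of catastrophe by a given relative fraction."""
    return p_catastrophe * relative_risk_reduction * lives_at_stake

# Hypothetical inputs: a 1-in-1,000 chance of catastrophe this century,
# an intervention that shaves 1% off that risk, and 10 billion lives
# at stake.
saved = expected_lives_saved(0.001, 0.01, 10_000_000_000)
print(f"Expected lives saved: {saved:,.0f}")
```

Even with these deliberately modest assumptions, the intervention is worth on the order of a hundred thousand lives in expectation, before counting any future generations.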
This is obviously a hazardous assumption to make, but it does give Canada a unique opportunity to make a potentially momentous contribution to the global interest. The Canadian government should begin to fund the study of these risks, so that we can better understand their likelihood and possible means of mitigation. We could spearhead and host an international consortium along the lines of Richard Horton’s proposal in The Lancet, in which he details a ‘World Institute for Risk Evaluation’ or ‘WIRE’:
This new institute would be an independent research-based agency, mandated to assess and adjudicate global risks… WIRE would provide a view on what is known about a given risk, the likely size of that risk, the precision of such an estimate, areas of uncertainty that required resolution, and data supporting interventions to limit the effects of that risk. WIRE would set the global agenda on threats to human survival. It would aggregate the evidence and make its conclusions available to all.
Aside from funding such an agency, there are various specific measures that can be taken to increase resiliency, or to ensure that survivors of a catastrophe are equipped with the tools and resources needed to rebuild civilization. This could include measures akin to the Svalbard Global Seed Vault, such as massive grain stores in underground concrete silos.
Canada is a medium-sized country possessing an ambition to ‘punch above its weight’ in the world but lacking the budget necessary to make a corresponding impact. A wise strategy, then, would be to look for areas where we could have a large impact for comparatively little investment. Catastrophic risk reduction fits the bill perfectly, and is an ideal match for our historical self-conception as an international do-gooder. Time, then, for Canada to don the Captain Canuck uniform, pony up the funding, and prepare to play the hero.