From nuclear war to rogue AI, the top 10 threats facing civilisation

The Centre for the Study of Existential Risk has outlined Earth's apocalyptic threats and how likely they are to happen in our lifetime

In this month's cover story, we meet Earth's Guardians: a team of scholars working to protect the planet from existential threats.

The team, made up of academics, lawyers and philosophers, forms the Centre for the Study of Existential Risk (CSER, pronounced "caesar") and the Leverhulme Centre for the Future of Intelligence. At times of global threat, the experts meet to assess the biggest dangers facing Earth and what can be done to save civilisation from apocalypse.

Below are the risks the team believes are the most pressing. But don't worry: it's coming up with a plan for each of them.

1. Artificial intelligence takes over the world

Machines developing intelligence superior to, and autonomous from, human beings is one of the most important concerns in the X-risk community.

Nick Bostrom, who directs Oxford's Future of Humanity Institute (FHI), says that if we experience a rate of change comparable to the industrial and agricultural revolutions, the AI train "might not pause or even decelerate at Humanville Station". Expert surveys he cites put a 90 per cent probability on machines attaining human-level intelligence by 2075.

The other reason for the urgency, as Jaan Tallinn, the Skype co-founder who also helped found the Future of Life Institute, points out, is that AI risk amplifies all the other risks. In other words, there is a risk that intelligent robots will run wild and screw up the environment, or cause a nuclear winter.

However, sci-fi scenarios are not on the agenda. If the community has a catchphrase, it is: "This isn't about The Terminator." If you can imagine specific, tractable risks, many of them cease to be risks because it's possible to take preventative action. The challenge is to imagine what we can't imagine and deal with it.

The research in this area ranges from the philosophical (should we align AI moral values with those of humans? In which case, what are our moral values?) to the direct and practical (how exactly should autonomous weaponry be regulated?). It is also controversial. Some AI researchers argue that it doesn't pose an existential threat. The X-risk argument isn't that it does, but that it could, and therefore is high on the organisations' threat agenda.

Likely date: 2075 Priority: Very high

2. Pandemic diseases threaten humanity

Natural and engineered pandemic disease is one of the most-studied global risks. It is an area given new urgency by the controversy over "gain of function" experiments, which involve taking a known pathogen and adding extra, risky functionality. In 2011, for example, virologists Ron Fouchier and Yoshihiro Kawaoka created strains of the bird flu virus that could be transmitted between ferrets, in order to better understand how the virus might develop transmissibility in the wild. Such experiments can head off certain risks but create an arguably greater one: the modified organism might escape the lab and cause a global pandemic.

The risk here is particularly great because it is self-replicating. Whereas a nuclear explosion is localised, in our highly connected world a synthetic, incurable virus could spread around the planet in days. In the past, natural pandemics such as the Black Death have killed millions and effected wholesale social changes. In the 21st century, advanced biotechnology could create something that makes the Black Death look like a nasty cold.

The FHI has studied the "pipeline risk" of how such viruses might escape. One possibility is the disgruntled individual, perhaps a lab employee, who might create or steal a virus and travel around the world releasing it. Researcher and author Anders Sandberg is working on a paper exploring the motives of people who, in a Bond-villain mould, want to destroy humanity. "It's hard going actually because relatively few people want to do it. There is very little source material," Sandberg says. This has led him to analyse the extent to which religions and cults might sanction mass murder (most don't, according to Sandberg). When CSER carried out similar research, it led to an important practical insight: biotech labs have no provision for psychological profiling of their employees.

Sandberg suspects that, because of the many conditions that would need to coincide, the "lone person" scenario is far less likely than "a disgruntled post-doc, or a laboratory accident due to a biotech startup cutting corners".

Likely date: Today Priority: Very high

3. AI-powered weapons seize control and form a militia

South Korea currently guards the border with its northern neighbour using Samsung-built robot sentries that can fire bullets, so it's safe to say autonomous weapons are already in use. It's easy to conceive of future versions that could, say, use facial-recognition software to hunt down targets, and of 3D-printing technology that would make arms stockpiling easy for any kleptocrat or terrorist.

As ever, there is a paradox. The cold-war nuclear arms race involved states building enormous bombs (blast areas had to be maximised because targeting technology was so poor) and deploying human troops, who suffered and acted irrationally. Robotic weapons allow for precise targeting, and robot soldiers would neither suffer nor intimidate locals.

However, they will also be so small and cheap that ownership will be decoupled from statehood. The most dystopian scenario is that military power becomes so removed from the size of a state that, as Tallinn says: "You might have five guys with two truckfuls of tiny automated weapons taking out whole cities. It's possible to imagine a world where you wake up in the morning to news that another list of cities has been destroyed, and no one knows for certain who is behind the attacks."

Stuart Russell, a computer-science professor at the University of California, Berkeley, worries that an arms-race mentality will kick in before rational debate and consensus-building can lead to the UN ban on autonomous weapons currently under discussion. Russell recalls hearing a US military figure say at a conference: "Bring it on. We already have stuff that China can only dream of."

If that seems scary, consider too that a new arms race could speed the development of risky AI, including machines capable of acquiring arms.

Likely date: Any time Priority: Low

4. Nuclear conflict brings about the end of civilisation

Although nuclear conflicts are far less discussed now than during the cold war, many thousands of nuclear weapons still exist, and there are serious tensions - in Kashmir, Taiwan and Ukraine, for example - between nuclear states.

The Global Catastrophic Risk Institute (GCRI) studies the subject closely, paying particular attention to the possibility of an accidental war between Russia and the US, which between them own 90 per cent of the world's nuclear arsenal. An accidental war happens when one side mistakes a false alarm for a real attack and "retaliates" with what is in fact a first strike. For 90 per cent of the scenarios the GCRI studied, the annual probability was between 0.07 and 0.00001. The 0.07 figure means that, on average, an accidental nuclear war could be expected roughly every 14 years.
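
That last step is simple arithmetic. As a minimal sketch, treat each year as an independent trial with probability p of an accidental war breaking out; the expected waiting time is then the mean of a geometric distribution:

\[ E[T] = \frac{1}{p} = \frac{1}{0.07} \approx 14.3 \ \text{years} \]

At the other end of the GCRI's range, p = 0.00001 corresponds to an expected gap of 100,000 years.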

Radiation from nuclear strikes is not necessarily a threat to the whole of humanity, just the sections of it unlucky enough to be targets. The risk comes from the nuclear winter that could occur if the bombing of enough cities (roughly 100) sent a soot cloud into the stratosphere, blocking the Sun's heat and reducing Earth's temperature. (The "nuclear" prefix here is a misnomer, because any explosion could cause this were it big enough. Some scientists say wildfires already cause localised "winters" that reduce the Earth's temperature by a degree or more.)

The resulting fall in temperatures could reduce food production in the affected areas. If this were to occur in the US, Russia and Europe, the decrease in food supply could be sufficient to trigger the collapse of the remaining intact societies around the world, and thus bring about the end of civilisation.

Likely date: Any time Priority: Low to medium

5. Extreme climate change triggers collapse in infrastructure

Climate change is a major topic of research at CSER, but some X-risk scholars class it as a less urgent problem. The FHI, for instance, used to refuse to deal with it because "it's too small a problem," according to Sandberg. That's partly because so many others are already researching the subject and the fact that climate change is risky is well established - but also, paradoxically, because there are too many unknowns. "It's possible that this century we'll get technology that will fix climate change," Sandberg says. "But then again, we might also get technology that makes it much worse. We don't know how much temperatures will change as a result of human activity, so it's actually very hard to model predictions accurately."

Many of the potentially disastrous effects commonly associated with climate change are of little interest to those studying X-risk. Barring a temperature rise that kills people through heatstroke en masse (very unlikely), the greatest risk lies in climate change's potential to trigger the collapse of human and natural infrastructure - for example, a species dying out and bringing down an entire ecosystem through knock-on effects. In other words, what could wipe us out is the resulting pandemics or nuclear wars over dwindling resources.

This kind of catastrophic knock-on effect is known as systemic risk, currently the subject of collaborative research by Sandberg and the GCRI in New York.

Likely date: Any time Priority: Low to medium

6. An asteroid impact destroys all traces of life

Earth would be hit by small asteroids constantly were it not for the atmosphere, which burns up anything less than ten metres across. That is convenient, as even a ten-metre rock carries kinetic energy comparable to that of the Hiroshima nuclear bomb. The planet is hit by an asteroid or comet measuring more than ten metres once or twice every 1,000 years, and roughly once every million years by an asteroid spanning at least one kilometre - large enough to affect the climate and cause crop failures that would put the population at risk.
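
The Hiroshima comparison is easy to sanity-check. In the sketch below, the density (about 3,000 kg/m³, typical of a stony asteroid) and the entry speed (15 km/s) are assumed figures, not ones given by the researchers:

\[ m = \tfrac{4}{3}\pi r^{3}\rho \approx \tfrac{4}{3}\pi \times 5^{3} \times 3000 \approx 1.6\times10^{6}\ \text{kg} \]
\[ E = \tfrac{1}{2}mv^{2} \approx \tfrac{1}{2} \times 1.6\times10^{6} \times (1.5\times10^{4})^{2} \approx 1.8\times10^{14}\ \text{J} \]

At about 4.2 × 10¹² joules per kiloton of TNT, that is some 40 kilotons - the same order of magnitude as the roughly 15-kiloton Hiroshima bomb.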

The really serious, existential-threat-level strikes, such as the Chicxulub impactor, which gouged out a 180km crater and wiped out the dinosaurs around 66 million years ago, come once every 50 to 100 million years. That may be enough to cause worry, but it's reassuring to know that a) astronomers keep a close eye on larger objects posing a danger to Earth, and b) there is a whole interdisciplinary community of scientists working out what to do if they get too close. ("It's a pretty wonderful community actually," Sandberg says.)

Astronomers look for asteroids, mathematicians calculate their orbits, geophysicists think about impacts and consequences, and space engineers work out the best ways to deflect one. Possible tactics include painting the asteroid white so that reflected sunlight nudges it off course; using a gravity-tractor spacecraft to pull it away; crashing spacecraft into it; and firing lasers or thermonuclear bombs at it.

X-risk scientists are unlikely to study Earth-threatening asteroids themselves. Instead, they might ask specialists in pandemics or biosecurity to look at the knock-on effects of a strike.

Likely date: 50 to 100 million years Priority (in 2017): Low

7. Life as we know it proves a complex simulation

In order to calculate some complex risks accurately, X-risk researchers have to factor in the limitations on what they can know for sure. At the most esoteric end of such work lies the possibility that, far from being the most intelligent life forms in the Universe (at least until computers overtake us), we are in fact minor players caught up in something we cannot understand.

The simulation hypothesis is the supposition that humans, with all our history and culture, are just an experiment or plaything of a bigger entity, as explored in The Truman Show. At the FHI, Bostrom says that this thinking is considered because it "is a constraint on what you might believe about the future and our place in the world".

Similar is the Boltzmann brain concept, named in 2004 after the 19th-century physicist Ludwig Boltzmann. Essentially, it is the idea that a human mind could be one random coming-together of matter in a multiverse containing far more than we will ever know about. Quantum mechanics suggests that tiny fluctuations of energy can occasionally generate particles of matter. It therefore follows that, given infinite time, such fluctuations could randomly generate a self-aware brain - though it wouldn't necessarily comprehend anything beyond its own experience.

Some proponents use this concept to explain why the Universe seems so incredibly well-ordered. Other philosophers and scientists work hard at proving why Boltzmann brains cannot exist. "Very few people take Boltzmann brains very seriously," Sandberg says, "but they are an annoying issue." They are rarely investigated as existential risks, partly because there's nothing anyone could do about them even if they were true and, as Sandberg adds, "there are only 24 hours in the day, after all."

Likely date: Unknowable Priority: Very low

8. Food shortages cause mass starvation

The global population is forecast to hit 9.6 billion by 2050. Some observers argue that to avoid mass starvation, we will need to increase food production by 70 per cent in just over 30 years. The challenge is that the advances in food-production techniques that have allowed humans to keep pace with population growth since 1950 have largely relied on fossil fuels. In addition, cultivable land is being lost to factors including topsoil erosion.
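
For scale, here is a minimal sketch of the growth rate that target implies, assuming the 70 per cent increase applies over the 33 years from 2017 to 2050:

\[ (1.70)^{1/33} - 1 \approx 0.016 \]

That is, food output would need to grow about 1.6 per cent a year, every year, without the cheap fossil-fuel inputs that powered the previous expansion.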

There are also risks associated directly with the nature of the foods we eat. It's widely believed that humans will need to eat less meat and more grain. However, while advances in crop development have produced varieties that can grow in inhospitable places, they have also increased vulnerability to disease. Whole tracts of wheat, the world's third-most popular cereal crop, could be wiped out by fungal infections, for example; synthetic viruses can only increase the risk of catastrophe.

Experts have predicted that the impact will be felt through sharp price rises around 2020, with the situation becoming critical in developing countries by the middle of the century.

Typically, food shortages lead to riots and political instability, yet less work has been done to model the resulting social breakdown than might be imagined. "It is one of CSER's ambitions," says Shahar Avin, a research associate, "to get a holistic picture of these catastrophes that includes technology, media, ecosystems and health shocks."

Likely date: 2050 Priority: High

9. A true vacuum sucks up the universe at the speed of light

In the esoteric world of existential-risk analysis there is a point at which technically serious doomsday scenarios can easily begin to merge with sci-fi speculation. Particle-accelerator accidents are a case in point.

This is a complex area. For the past ten years there has been conjecture that particle collisions in accelerators could trigger a reaction changing the make-up of all matter and the laws of physics. The worry rests on the idea that what we call empty space is not truly empty, and that our Universe may not occupy the lowest-energy vacuum state possible: it may instead rest in a metastable "false vacuum" above a "true vacuum" of absolute nothingness. If a bubble of true vacuum were nucleated - by a sufficiently energetic particle collision, so the argument goes - it would expand at the speed of light, destroying everything in its path, in a process called vacuum decay. Some physicists disagree, noting that Earth has always coped with cosmic rays carrying more energy than any collider can produce. Earth is still here, they argue, so the vacuum must be sufficiently robust.

But at the FHI, that argument doesn't convince. Bostrom's PhD dissertation explored how probability calculations can be skewed by our own perspective. His researchers say that our existence cannot be used to prove the impossibility of this kind of decay, because millions of other Earths could have been wiped out by it in the past.

This led to a paper examining the risk of scientific papers themselves being wrong. It found that around one per cent ought to be retracted because of calculation and modelling errors. "So even a reassuring paper should only ever make you 99 per cent certain there is no risk," Sandberg says. "One per cent is always unknown."
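
One way to formalise Sandberg's point (a sketch, assuming a sound paper settles the question and a flawed one tells you nothing either way):

\[ P(\text{no risk}) = 0.99 \times 1 + 0.01 \times P(\text{no risk} \mid \text{paper flawed}) \]

The paper alone gets you to 99 per cent; the final 1 per cent depends on what you believed before reading it.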

Likely date: Technically now Priority: Very low

10. A tyrannical leader undermines global stability

The morning after Donald Trump was elected US president, staff at CSER held a group meeting to discuss whether his election constituted an existential threat. They held a similar meeting following the EU referendum.

Such events can have significant effects on humankind, in particular through their implications for our ability to co-ordinate globally to tackle problems such as climate change, and to avoid potentially disastrous conflicts. Even more important, however, is how such events change the way policy decisions are made and communicated - such as the rise of "post-factualism", as CSER's Julius Weitzdörfer puts it. "With Trump, we are moving away from a world in which scientific evidence counts in debates," he says. "That impedes our ability to deal with any kind of threats. It makes our governance worse and increases risk."

Some political observers in the X-risk community, such as CSER's Simon Beard, argue that treating the US as a whole as a greater threat to global stability may be an overreaction, given that Hillary Clinton won the popular vote. There may even be a slight benefit, at least for the public profile of X-risk as a discipline. "Artificial intelligence draws a lot of attention for us," Weitzdörfer says. "But with the Trump situation it's becoming plausible to more and more people that there are other serious risks we need to think about."

Likely date: Now Priority: Medium

This article was originally published by WIRED UK