Thoughts on Existential Risk

Existential Risk is an important topic for long-termists. The idea is that since humanity has the potential to spread throughout the universe (or at least continue existing), any extinction event would prevent potentially trillions of happy lives from ever occurring. In other words, extinction has an enormous opportunity cost. Nick Bostrom refers to this as “Astronomical Waste”.
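To get a feel for the scale of that opportunity cost, here is a back-of-the-envelope sketch in Python. The population, habitability, and lifespan numbers are illustrative assumptions of mine, not figures from Bostrom's paper:

# A back-of-the-envelope sketch of the "astronomical waste" idea.
# Every number here is an illustrative assumption, not a real estimate.

earth_population = 10e9   # assume ~10 billion people alive at any time
habitable_years = 500e6   # assume Earth remains habitable for ~500 million years
lifespan = 80             # assume an 80-year average lifespan

# Potential future lives if humanity merely survives on Earth until the end:
future_lives = earth_population * habitable_years / lifespan
print(f"{future_lives:.2e} potential future lives on Earth alone")  # 6.25e+16

# Settling other star systems would multiply this by many orders of magnitude,
# which is the sense in which extinction has an astronomical opportunity cost.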

If you accept this premise, then it becomes critical to reduce the chance of human extinction. This warrants work on asteroid defense, AI safety, and nuclear disarmament, among other fields. One could even argue that these fields are vastly more important than all others (though, to my understanding, this is not a common view even among long-termists).

I want to point out several crucial distinctions here.

First, there is an important difference between a global catastrophic risk and an existential risk. Catastrophic risks are events that cause massive loss of human life or damage to society. For example, detonating a nuclear weapon could kill millions of people. This would be catastrophic, but it does not fall into the category of an existential risk, because a single nuclear detonation poses virtually no threat of wiping out humanity as a whole. Catastrophic risks are very important to prevent, but from the point of view of existential risk, they are not a priority. For someone concerned with the long-term future of society, catastrophic events are something humanity can recover from. In fact, we have faced many such catastrophes in our history (wars, pandemics, natural disasters) and continued to advance despite them. Existential events are of an entirely different flavor. By definition, we have never been through a species-ending event, and it would only take a single such catastrophe to foreclose billions of years of human society. Additionally, there is no way to learn from our mistakes the way we can with catastrophic risks. As such, it becomes important to eliminate any chance of human extinction.

To be clear, I believe that catastrophic risks are very important, but the “existential risk lens” that I am adopting in this post prioritizes the long term and treats catastrophic risks as more manageable. The confusion over this distinction is evident in this Hacker News thread.

Second, there is a large difference between ending humanity and ending all life. Ending all life on earth is extremely hard. Life has been around for a long time and has gotten into everything; even a large asteroid impact wouldn’t be able to eliminate life in the deep ocean. This matters because intelligent life developed in only about 500 million years from small multicellular organisms. It is possible that even if humans get wiped out, intelligent life will re-evolve, build a society, and spread throughout the universe anyway. Of course, we shouldn’t take this possibility for granted (What if intelligent life never re-forms? What if a catastrophe occurs while it is forming? What if their society never leaves earth? What if their society contains more suffering?).

Third, we only care about the things we can change. There is no point in worrying about an unstoppable black hole hurtling towards earth if we don’t have the ability to get out of the way! We should instead focus on the things we have the ability to prevent.

Fourth, the higher the background existential risk, the lower the value we should place on reducing any particular existential risk! Reusing my example: if an unstoppable black hole is hurtling towards earth, there is no point in worrying about (say) asteroids.
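A toy sketch of this point, with made-up probabilities: the value of eliminating one risk is scaled by the chance that we survive everything else.

# A toy illustration of how background risk discounts the value of reducing
# a specific risk. All probabilities here are made up.

def value_of_removing_a_risk(p_survive_everything_else, long_term_value=1.0):
    # Even if we fully eliminate the risk we are working on, the long-term
    # future is only realized if we also survive every other risk.
    return p_survive_everything_else * long_term_value

# Low background risk: removing our target risk preserves most of the future.
print(value_of_removing_a_risk(0.9))   # 0.9

# Very high background risk (e.g. the unstoppable black hole): removing our
# target risk buys almost nothing.
print(value_of_removing_a_risk(0.01))  # 0.01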

Fifth, some events that don’t end humanity can still prevent people from going to the stars or vastly reduce the future population. For example, a global totalitarian government could ban space travel, capping how many humans could ever exist. This isn’t typically included in discussions of existential risk, but I will write about it below.

With that aside, let’s look at some possible existential risks:

Supervolcano: A volcanic eruption that blocks the sun for an extended period could potentially end humanity if we are not prepared. Fortunately, these are unlikely.

Cosmic Disasters: These include things like gamma ray bursts and asteroid impacts, which would likely end humanity if they were large enough. Asteroid impacts are preventable; gamma ray bursts probably are not. Both are very infrequent.

Nuclear war: This would be catastrophic, but could it end humanity? Probably not. Still, reducing nuclear stockpiles would make me feel safer in this regard.

Runaway Climate Change: This would be catastrophic, but many societies live in places that would be able to adapt to a much warmer earth. Thankfully, most known feedback loops are pretty slow. Still, the uncertainties here are high, so emissions reduction and further research on climate feedbacks are important.

Nanotechnology disaster: We don’t have anything close to this technology (it may not even be practical), so this risk is harder to assess. I believe this is more of an AI risk anyway. If it did happen, we could likely find a way to prevent the nanobots from eating everything (nanobot-eating nanobots?).

Particle physics disaster: Extremely unlikely with current technology. A mini black hole would probably evaporate quickly anyway.

False Vacuum: Extremely unlikely, and probably impossible to prevent.

Pandemics and Biotech accidents: I see the irony in writing this during a pandemic, but it seems very unlikely that a pandemic could end the human race. It clearly would be (and has been) catastrophic, so investments in mitigating future pandemics are crucial (EDIT 12/27: it seems to me that pandemics present the most important catastrophic risk). Fortunately, the physical isolation of many settlements, the genetic diversity of the human population, the human immune system, optimal virulence, and innovations such as vaccines, variolation, and antimicrobials suggest that it would be extremely hard for even engineered pathogens to end humanity. The probability of this happening is uncertain, so more research on these risks is important.

Global Totalitarianism: It is hard to assess how likely this is. It is possible that a global government could prevent space travel and human flourishing. However, successful space travel only has to happen once for humanity to spread to the stars and out of the reach of totalitarianism. The Archipelago might also be a way to combat this risk.

AI Risk: This has an unknown likelihood and an unknown impact. It is conceivable that an AI could end humanity (though this seems unlikely). These uncertainties make it one of the more important risks.

Thankfully, it is very hard to prevent humanity from carrying on, which makes it rare for something to present a true existential risk. Additionally, because our species has already survived hundreds of thousands of years of asteroids, supervolcanoes, and other risks of non-human origin, we can bound the overall risk of human extinction from these “natural” causes (see the rough sketch below). I think this suggests that we should focus on human-made risks (AI, biotechnology, climate change, etc.) over others. It is important to note that for several of these problems we can “kick the can down the road” and rely on future generations to solve them, but for some, it is important to act now. Overall, AI risk and global totalitarianism stand out to me as the most dangerous, immediate, and neglected risks, but there are good arguments for many of the items on this list.
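Here is a rough sketch of that bound on natural risks. The 200,000-year figure for the age of our species is approximate, and this is a crude heuristic rather than a careful statistical estimate:

# Crude "track record" bound on natural extinction risk.
# Assumption: Homo sapiens has survived roughly 200,000 years of asteroids,
# supervolcanoes, and other natural hazards.

years_survived = 200_000

# If the annual probability of a natural extinction event were p, the chance
# of surviving this long would be (1 - p) ** years_survived. An annual risk
# much above ~1 / years_survived would make our survival very surprising.
rough_annual_bound = 1 / years_survived
print(f"annual natural extinction risk is likely below ~{rough_annual_bound:.0e}")  # ~5e-06

# Human-made risks (AI, engineered pathogens) have no such track record,
# which is one reason to prioritize them.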

Before rushing to study existential risks, we face an important unanswered question: does reducing existential risk matter more than the other things we could spend our time on? Catastrophic risks seem important in their own right; why worry about the long term if, as Keynes put it, “in the long run we are all dead”? Aside from these morbid topics, there is a host of charities making a positive impact today. How do they stack up against work on the topics I have listed? Attempting to answer these questions is a topic for future posts.
