Possible global catastrophic risks

I previously discussed our view that in general, further economic development and general human empowerment are likely to be substantially net positive, and are likely to lead to improvement on many dimensions in unexpected ways. In my view, the most worrying counterpoint to this view is the possibility of global catastrophic risks. Broadly speaking, while increasing interconnectedness and power over our environment seem to have many good consequences, these things may also put us at greater risk for a major catastrophe – one that affects the entire world (or a large portion of it) and threatens to reverse, halt, or substantially slow the ongoing global progress in living standards.

This post lists the most worrying global catastrophic risks that I’m aware of, and briefly discusses the role that further technological and economic development could play in exacerbating – or mitigating – them. A future post will discuss how I think about the overall contribution of economic/technological development to exacerbating/mitigating global catastrophic risks in general (including risks that aren’t salient today). The purpose of this post is to (a) continue fleshing out the broad view that further economic development and general human empowerment are likely to be substantially net positive, which is one of the deep value judgments and worldview characteristics underlying our approach to giving recommendations; (b) catalogue some possible candidates for philanthropic focus areas (under the theory that major global catastrophic risks are potentially promising areas for philanthropy to address).

Possible global catastrophic risks that I’m aware of
I consider the following to be the most worrying possibilities I’m aware of for reversing, halting, or substantially slowing the ongoing global progress in living standards. There are likely many such risks I’m not aware of, and likely many such risks that essentially no one today is aware of. I hope that readers of this post will mention important possibilities that I’ve neglected in the comments.

In general, I’m trying to list factors that could do not just large damage, but the kind of damage that could create an unprecedented global challenge.

  1. More powerful technology – particularly in areas such as nuclear weapons, biological weapons, and artificial intelligence – may make wars, terrorist acts, and accidents more dangerous. Further technological progress is likely to lead to technology with far more potential to do damage. Somewhat offsetting this, technological and economic progress may also lead to improved security measures and lower risks of war and terrorism.
  2. A natural pandemic may cause unprecedented damage, perhaps assisted by the development of resistance to today’s common antibiotics. On this front I see technological and economic development as mostly risk-reducing, via the development of better surveillance systems, better antibiotics, better systems for predicting/understanding/responding to pandemics, etc.
  3. Climate change may lead to a major humanitarian crisis (such as unprecedented numbers of refugees due to sea level rise) or to other unanticipated consequences. Economic development may speed this danger by increasing the global rate of CO2 emissions; economic and technological development may mitigate this danger via the development of better energy sources (as well as energy storage and grid systems and other technology for more efficiently using energy), as well as greater wealth leading to more interest in – and perceived ability to afford – emissions reduction.
  4. Technological and economic progress could slow or stop due to a failure to keep innovating at a sufficient rate. Gradual growth in living standards has been the norm for a long time, and a prolonged stagnation could cause unanticipated problems (e.g., values could change significantly if people don’t perceive living standards as continuing to rise).
  5. Global economic growth could become bottlenecked by a scarcity of a particular resource (the most commonly mentioned concern along these lines is “peak oil,” but I have also heard concerns about supplies of food and of water for irrigation). Technological and economic progress could worsen this risk by speeding our consumption of a key resource, or could mitigate it by leading to the development of better technologies for finding and extracting resources and/or effective alternatives to such resources.
  6. An asteroid, supervolcano or solar flare could cause unprecedented damage. Here I largely see economic and technological progress as risk-reducing factors, as they may give us better tools for predicting, preventing and/or mitigating damage from such natural disasters.
  7. An oppressive government may gain power over a substantial part of the world. Technological progress could worsen this risk by improving the tools of such a government to wage war and monitor and control citizens; technological and economic progress could mitigate this risk by strengthening others’ abilities to defend themselves.

I should note that I perceive the odds of complete human extinction from any of the above factors, over the next hundred years or so, to be quite low. #1 would require the development of weapons with destructive potential far in excess of anything that exists today, plus the deployment of such weapons either by superpowers (which seems unlikely if they hold the potential for destroying the human race) or by rogue states/individuals (which seems unlikely since rogue states/individuals don’t have much recent track record of successfully obtaining and deploying the world’s most powerful weapons). #2 would require a disease to emerge with a historically unusual combination of propensity-to-kill and propensity-to-spread. And in either case, the odds of killing all people – taking into account the protected refuges that many governments likely have in place and the substantial number of people who live in remote areas – seem substantially less than the odds of killing many people. We have looked into #3 and parts of #6 to some degree, and currently believe that there are no particularly likely-seeming scenarios with risk of human extinction.

Global upside possibilities
In addition to global catastrophic risks, there are what I call “global upside possibilities.” That is, future developments may lead to extremely dramatic improvements in quality and quantity of life, and in the robustness of civilization to catastrophic risks. Broadly speaking, these may include:

  • Massive reduction or elimination of poverty.
  • Massive improvements in quality of life for the non-poor.
  • Improved intelligence, wisdom, and propensity for making good decisions across society.
  • Increased interconnectedness, empathy and altruism.
  • Space colonization or other developments leading to lowered potential consequences of global catastrophic risks.

I feel that humanity’s future may end up being massively better than its past, and unexpected new developments (particularly technological innovation) may move us toward such a future with surprising speed. Quantifying just how much better such a future would be does not strike me as a very useful exercise, but very broadly, it’s easy for me to imagine a possible future that is at least as desirable as human extinction is undesirable. In other words, if I somehow knew that economic and technological development were equally likely to lead to human extinction or to a brighter long-term future, it’s easy for me to imagine that I could still prefer such development to stagnation.

I see technological and economic development as essential to raising the odds of reaching a much brighter long-term future, and I see such a future as being much less vulnerable to global catastrophic risks than today’s world. I believe that any discussion of global catastrophic risks (and the role of technological/economic development in such risks) is incomplete if it leaves out this consideration.

A future post will discuss how I think about the overall contribution of economic/technological development to our odds of having a very bright, as opposed to very problematic, future. For now, I’d appreciate comments on any major, broad far-future considerations this post has neglected.


Comments


  1. Holden, have you considered a significant extension in human lifespan (not just in developing countries) as an upside possibility? I know that some of what Peter Thiel is funding is related to that.

  2. For risks #1 and #7, a couple of different framings for “is economic/technological progress good or bad for this risk?” are as follows:
    1. Will a higher rate of growth make the world adapt to the new technology more quickly or more slowly? Under a higher rate of growth, the world may adapt to new challenges more quickly than it otherwise would, since more empowered people can, in general, meet new challenges more quickly than less empowered people can. This factor could mean less total danger from the risky tech.
    2. How is the *rate* of technological progress related to the world’s *level* of risk-preparedness at any given level of development? On one hand, people may be less prepared if technology arrives more quickly. On the other, some people have made arguments which suggest that a higher *rate* of progress results in a more open, democratic, moral society at any given *level* of development. (Arguments in this book http://www.amazon.com/Moral-Consequences-Economic-Growth/dp/1400095719 support this view, though the issues are complicated and I haven’t reviewed this book yet.)

    If you answer both questions favorably, it would strengthen the case that a higher rate of economic growth is good with respect to the global catastrophic risks in question. Perhaps these are the answers we’d arrive at if we keep thinking about the issue.

  3. To clarify, I see the above comments as an advance on

    > Further technological progress is likely to lead to technology with far more potential to do damage. Somewhat offsetting this, technological and economic progress may also lead to improved security measures and lower risks of war and terrorism.

    because it may be more feasible to answer the questions I posed than it is to compare the effects mentioned here.

  4. I think it’s important to stress the conceptual and normative difference between global catastrophic risks and existential risks. You do this to some degree, but I still get the impression that your analysis could benefit from greater clarity on this point. For example, you present global upside possibilities as the flip side of global catastrophic risks; but while your analysis of such risks highlights that most do not constitute major existential threats, you conclude by saying that a realized upside possibility may be as good, relative to the status quo, as an extinction event would be bad. In the future, you may want to discuss these two types of risks–catastrophic and existential–separately, perhaps even in separate posts.

  5. Related to other items on your list (most obviously #1 and #7, also #3 and #5 and possibly others) is the risk of global war on the scale of, say, WWI or WWII. People doing straight-line extrapolation of economic trends as of, say, 1910 might be off (warning: what follows is both vague and guesswork) by half a century in terms of their predictions of when certain benchmarks would be hit.

  6. I am far from a singularity believer, but one of the inventors of the singularity in science fiction, Vernor Vinge, makes what I think is a relevant point in his book Rainbows End.

    The problem with the advance of potentially dangerous technology is that the amount of resources to do something might become progressively smaller, so eventually things that would have previously required a superpower require only a group of a few or even one person. I think genetic engineering of pathogens might be one avenue where this could happen.

    This is obviously a problem, since it is much easier to see where a threat is coming from when the infrastructure being used is the size of a country’s rather than a single person’s.

    However, an important lesson we learn from many attempts by individuals or groups to attack people today is that a lot of them are really not very good at it (see, e.g., the real but basically unworkable plans of the people whose discovery triggered the current ban on flying with liquids). So combined with the inherent uncertainty of something like a biological agent, I don’t imagine that complete human extinction by such a method would be very likely.

    Basically what I am saying is that I agree with the premise that somehow attempting to stem the tide of technological change a) wouldn’t work and b) would prevent huge positive changes, especially because the regulation would be based on what is known, not what may be discovered.

  7. Exactly agree — we need to look at issues broadly, globally, futuristically. I may have missed this, it may be so obvious to everyone, but a personal concern I have is hacking — stated specifically and not just framed as part of “technology,” as I believe the implications are different and solutions require a different approach. It seems to me that there are a number of skirmishes going on in this realm, and that there is potential for enormous harm to be done, especially with regard to risk #1. Have we been hearing for so long about potential harm to our power grids, nuclear in particular, that we have become inured to it? Is this problem immediate or chronic? Are the implications largely limited in scope, or much broader than I am imagining? In addition, besides the most ominous concerns like penetration of nuclear plants, what would hacking of our financial system do? Are we studying these sufficiently? Can we bring the hackers over to the “good” side? In addition to everything else mentioned, this is the kind of thing that makes me lie awake at night.

  8. Things may be gently globally progressing, or not. But if you live in a place or population with one or more of these big risks, the very idea of this kind of progress (the “whiggish” view, as the Economist calls it, or Pinker’s point, to others) will do you no good. To the contrary, it may hurt you if it makes the world relaxed or complacent. Further, calculating low probabilities of risk at the global level will tend to mask higher probabilities for particular places and populations. Better to remain agnostic on the global view, keep the skeptics on board, and focus on actions needed now and in the future.

  9. Thanks for the comments, all.

    Rohit, I agree that a significant increase in human lifespan should be considered an upside possibility.

    Nick, this post focuses on the role of economic/technological progress in global catastrophic risks, with a frame of “does progress make things more dangerous relative to no progress?” I agree that in some ways the more important question is “Is faster progress preferable to slower progress if we take some rate of progress as given?” A future post will focus on that frame.

    Pablo, the point I was trying to make was that (a) I don’t see specific high-probability risks of human extinction in the next 100 years but that (b) even if I did, I could imagine that such risks could be outweighed (for purposes of answering “Is economic/technological progress desirable?”) by upside possibilities.

    Colin, agreed.

    Tom, I largely agree. I also think it is worth noting how long the lag seems to have generally been between the invention of a powerful weapon and its availability to malicious individuals (nuclear weapons and many other military weapons have been available to governments and not individuals for a long time).

    Dawn, good point – I think the risk of hacking or computer glitches becoming more dangerous as the world becomes more connected and digitized belongs on the list. I think further development can worsen this risk by increasing the degree of complexity and interconnectivity, but may also lower it via redundancies and response plans.

    Jonathan, our values are global, so to the extent we’re interested in the risks to particular populations, it’s because addressing such risks would have outsized “good accomplished per dollar spent.” We are interested in such opportunities, but they weren’t the focus of this post.

  10. Holden, can you (and Rohit) please elaborate on the extension of lifespan as an “upside possibility?” Do you mean that longer life is intrinsically better? Also, I am not fully satisfied about what you are saying about response times with regard to my hacking concern. Seems like a 13 year old could pretty much take the place down now if he or she wanted to. Institutions have a natural disadvantage in the cyber-security race because knowledge is outdated pretty much as it is rendered usable.

  11. Dawn, I do think that a longer healthy lifespan (as opposed to extending life merely by adding on low-quality, poor-health years) is an intrinsic improvement. I believe the vast majority of people would prefer longer healthy lifespans if given the choice.

    I disagree with your statement that “a 13 year old could pretty much take the place down now if he or she wanted to.” If this were true, I would expect this to have happened by now; I don’t think it’s the case that our systems are relatively stable only because no technically competent people want to disrupt them. Overall, it seems to me that viruses and spam have become less of a problem, not more of one, over the last decade, which provides some evidence against the claim that “institutions have a natural disadvantage,” but my original response to you didn’t presume that institutions or hackers have an advantage. Rather, it stressed the potential value of “redundancies and response plans” in lowering the consequences (as opposed to the probability) of a major disruption.
