Our basic framework
For work in this category, we prioritize the value of the far future. Accordingly, we use the term “global catastrophic risks” to refer to risks that could be globally destabilizing enough to permanently worsen humanity’s future or lead to human extinction.
In choosing focus areas, we’ve looked for causes that are strong on some combination of the following criteria:
- Importance: How damaging and destabilizing could a catastrophe be, and how likely are particularly dangerous scenarios to occur over the next century?
- Neglectedness: Are there opportunities to make a difference, or important aspects of the risk, that receive relatively little attention and support? When investigating a cause, we tend to consider multiple different kinds of activities that might make a difference, looking for major gaps. Even if a risk gets major attention from governments, if it gets little attention from philanthropy, there may be an important role for us to play.
- Tractability: What sorts of activities could a philanthropist undertake today to reduce the risk?
Focus areas
Currently, our top two priorities are Biosecurity and Pandemic Preparedness and Potential Risks from Advanced Artificial Intelligence. We see these as the two risks most likely to lead to globally destabilizing scenarios that could permanently worsen humanity’s future, and we believe both receive little attention from philanthropy while presenting reasonable opportunities for action. For more detail on how we investigate and prioritize causes, see our process.
More on this topic, from the blog
- Open Philanthropy Project Update: Global Catastrophic Risks (March 2015). We announced our initial set of focus areas and described the evolution of our thinking in general, including our interest in being able to work opportunistically across several focus areas.
- The Long-Term Significance of Reducing Global Catastrophic Risks (August 2015). We discussed what sorts of risks could permanently worsen humanity’s future. We made a case for a “dual focus”: both on risks that could lead directly to extinction and on more likely, less damaging (but still potentially globally destabilizing) risks.
- Potential Risks from Advanced Artificial Intelligence: The Philanthropic Opportunity (May 2016). We explained why we planned to devote significant senior staff time to this cause, seeing it as a risk worth taking.
- The Moral Value of the Far Future (July 2014). We examined the idea that most of the people we can help (with our giving, our work, etc.) are people who haven’t been born yet.
- Potential Global Catastrophic Risk Focus Areas (June 2014). We laid out the causes we saw as most promising at the time, along with our reasoning.