Updated Sept. 12, 2014 to change “GiveWell Labs” to “Open Philanthropy Project,” in line with our August 2014 announcement. Throughout the post, “we” refers to GiveWell and Good Ventures, who work as partners on the Open Philanthropy Project.
This post draws substantially on our recent updates on our investigation of policy-oriented philanthropy, including using much of the same language.
As part of our work on the Open Philanthropy Project, we’ve been exploring the possibility of getting involved in efforts to ameliorate potential global catastrophic risks (GCRs), by which we mean risks that could be bad enough to change the very long-term trajectory of humanity in a less favorable direction (e.g. ranging from a dramatic slowdown in the improvement of global standards of living to the end of industrial civilization or human extinction). Examples of such risks include a large asteroid striking Earth, worse-than-expected consequences of climate change, or a threat from a novel technology, such as an engineered pathogen.
In our annual plan for 2014, we set a stretch goal of making substantial commitments to causes within global catastrophic risks by the end of this calendar year. We are still hoping to decide whether to make commitments in this area, and if so which causes to commit to, on that schedule. At this point, we’ve done at least some investigation of most of what we perceive as the best candidates for more philanthropic involvement in this category, and we think it is a good time to start laying out how we’re likely to choose between them (though we have a fair amount of investigative work still to do). This post lays out our current thinking on the GCRs we find most worth working on for the Open Philanthropy Project.
We believe that there are a couple of features of global catastrophic risks that make them a conceptually good fit for a global humanitarian philanthropist to focus on. These map reasonably well to two of our criteria for choosing causes, though GCRs generally seem to perform relatively poorly on the third:
- Importance. By definition, if a global catastrophe were to occur, the impact would be devastating. However, most natural GCRs appear to be quite unlikely, making the annual expected mortality from natural GCRs low (e.g., perhaps in the hundreds or thousands; more on the distinction between natural and anthropogenic GCRs below). The potential importance of GCRs comes both from novel technological threats, which could be much more likely to cause devastating impacts, and from considering the very long-term impacts of a low-probability catastrophe: depending on the moral weight one assigns to potential future generations, the expected harm of (even very unlikely) GCRs may be quite high relative to other problems.
- Crowdedness. Because GCRs are generally perceived to have a very low probability, many other social agents that are normally devoted to protecting against risks (e.g. insurance companies, governments in wealthy countries) appear not to pay them much attention. This should not necessarily be surprising, since much of the benefit of averting GCRs seems to accrue to future generations, who cannot hold contemporary institutions accountable; to the extent benefits accrue to present generations, they are distributed very widely, with no concentrated constituency that has an incentive to prioritize them. The possibility that a long time horizon may be required to justify investment in averting GCRs also seems to make them a good conceptual fit for philanthropy, which, as GiveWell board member Rob Reich has argued, is unusually institutionally suited to long time horizons. This makes it all the more notable that, with the key exception of climate change, most potential global catastrophic risks seem to receive little or no philanthropic attention (though some receive very significant government support). The overall lack of social attention to GCRs is not dispositive, but it suggests that if GCRs are genuinely worthy of concern, a new philanthropist aiming to address them may encounter some low-hanging fruit.
- Tractability. The very low frequencies of GCRs suggest that tractability is likely to be a challenge. Humanity has little experience dealing with such threats, and it may be important to get them right the first time, which seems likely to be difficult. A philanthropist would likely struggle to know whether they were making a difference in reducing risks.
Our tentative conclusion on GCRs as a whole is that the balance of strong performance on the importance and crowdedness criteria outweighs low expected tractability, but we are open to revising that view on the basis of deeper explorations of particularly promising-seeming GCRs.
We have completed shallow investigations of the following GCRs:
- Anthropogenic climate change
- Near-Earth asteroids
- Large volcanic eruptions
- Nuclear weapons
- Antibiotic resistance
- Biosecurity risks (e.g. pandemics, bioterrorism)
- Geomagnetic storms
We also have an investigation forthcoming on potential risks from artificial intelligence, and we commissioned former GiveWell employee Nick Beckstead to do a shallow investigation of efforts to improve disaster shelters to increase the likelihood of recovery from a global catastrophe. We are still hoping to conduct shallow investigations of nanotechnology, synthetic biology governance (aimed more at ecological threats than biosecurity), and the field of emerging technology governance, though we may not do so before prioritizing causes within GCRs.
Beyond the shallow level, we have done a deeper investigation on geoengineering research and continued our investigation of biosecurity through a number of additional conversations.
Our investigations have been far from comprehensive; we’ve prioritized causes we had some reason to think were particularly promising, often because we suspected a lack of interest from other philanthropists relative to the causes’ humanitarian importance or because we encountered a specific idea from someone in our network.
We have also sought out conversations with people who think broadly and comparatively about global catastrophic risks. As far as we can tell, most such people tend to be connected to the effective altruist community (to which we have strong ties and which tends to take a strong interest in GCRs). Many of our conversations with such people have been informal, but public notes are available from our conversations with Carl Shulman, a research associate at the Future of Humanity Institute, and Seth Baum, executive director of the Global Catastrophic Risk Institute.
“Natural” GCRs appear to be less harmful in expectation.
After a number of shallow investigations, we’ve tentatively concluded that “natural” (i.e. not human-caused) GCRs seem to present smaller threats than “anthropogenic” (i.e. human-caused) GCRs. The specific examples we’ve examined and a general argument point in the same direction.
The general argument for being more worried about anthropogenic GCRs is as follows. The human species is fairly old (Homo sapiens sapiens is believed to have evolved several hundred thousand years ago), giving us a priori reason to believe that we do not face high background extinction risk: if we had a random 10% chance of going extinct every 10,000 years, we would have been unlikely to have survived this long (0.9^30 = ~4%). Note that anthropic bias can make this kind of reasoning suspect, but this reasoning also seems to map well to available data about different potential GCRs, as discussed below (i.e., we do not observe natural risks that appear likely to cause human extinction). By contrast with “natural” risks, anthropogenic risks present us with potentially unprecedented situations, for which history cannot serve as much of a guide. Atomic weapons and biotechnology are only decades old, and some of the most dangerous technologies may be those that don’t yet exist. With that said, some “natural” risks could present us with somewhat unprecedented situations, due to the modern world’s historically high level of interconnectedness and reliance on particular infrastructure.
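The survival arithmetic in the previous paragraph is easy to check directly; here is a minimal sketch in Python (the 10%-per-10,000-years figure is the post’s illustrative assumption, not an empirical estimate):

```python
# If each 10,000-year interval carried an independent 10% extinction
# risk, how likely would surviving ~300,000 years (30 intervals) be?
def survival_probability(risk: float, periods: int) -> float:
    """Chance of surviving `periods` intervals, each with extinction probability `risk`."""
    return (1 - risk) ** periods

p = survival_probability(risk=0.10, periods=30)
print(f"{p:.1%}")  # about 4%, matching the post's 0.9^30 figure
```

The low observed value is the point: our long track record makes high background rates of natural extinction risk implausible, subject to the anthropic caveat noted above.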
On the specifics of various “natural” GCRs:
- Near-Earth asteroids. A 2010 U.S. National Research Council report estimates that the background annual probabilities of an impact as large as the one that is believed to have caused the extinction of the dinosaurs and of a “possible global catastrophe” are 1/100 million and 1/700,000 respectively (PDF, page 19). NASA reports that it has tracked 93% of the near-Earth asteroids large enough to cause a “possible global catastrophe” and all of the ones as large as the one believed to have caused the extinction of the dinosaurs (and none of them are on track to hit Earth in the next few centuries), suggesting a residual probability of a “possible global catastrophe” of ~1/100,000 during the next century (and likely lower). There may be a comparable remaining risk from comets—Vaclav Smil claims that “probabilities of the Earth’s catastrophic encounter with a comet are likely less than 0.001% during the next 50 years,” which would be about the same as the remaining asteroid risk—but our understanding is that comets are much harder to detect. As a result of the attention from NASA and the B612 Foundation, this cause also appears more “crowded” than others, though seemingly more tractable as well.
- Large volcanic eruptions. Estimates of the frequency of volcanic eruptions large enough to count as global catastrophic risks differ by several orders of magnitude, but our current understanding is that volcanic eruptions large enough to cause major crop failures are likely to occur no more frequently than 1/10,000 years, and perhaps significantly less frequently (suggesting a <1% chance of such an eruption in the next century). Large volcanic eruptions may be much more of a cause for concern than asteroid strikes, but this cause performs relatively poorly on tractability, since our ability to predict eruptions is limited, and we are not currently capable of preventing an eruption.
- Antibiotic resistance. Microbes are currently evolving to be resistant to antibiotics faster than new antibiotics are being developed, posing a growing public health threat. However, antibiotic resistance is unlikely to represent a threat to civilization, since humanity survived without antibiotics until ~1940, including during the period when most gains against infectious diseases were made. We also expect other actors to work to address antibiotic resistance as it continues to become a more pressing public health issue. (More at our writeup.)
- Geomagnetic storms. The major threat from geomagnetic storms is to potentially imperil some large-scale power infrastructure, but the risks are not well-understood. A consultant who has contributed to many of the published reports on the topic contends that a worst-case, 1/200 year storm could result in a “years-long global blackout,” but other sources show less concern (e.g. modeling the impact of a ~200 year storm as a risk of a blackout for ~10% of the U.S. population for somewhere between 2 weeks and 2 years).
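The back-of-envelope figures in the asteroid and volcano items above can be reproduced directly; a quick sketch using only the numbers cited in this post:

```python
# Residual "possible global catastrophe" asteroid risk, combining the
# 2010 NRC background rate with NASA's ~93% tracking figure cited above.
annual_prob = 1 / 700_000          # background annual impact probability
untracked_fraction = 1 - 0.93      # share of large asteroids not yet tracked
residual_per_century = annual_prob * untracked_fraction * 100
print(f"asteroids: ~1 in {round(1 / residual_per_century):,} per century")

# Volcanic analogue: an eruption rate of at most 1/10,000 per year
# implies just under a 1% chance of such an eruption in a century.
volcano_century = 1 - (1 - 1 / 10_000) ** 100
print(f"volcanoes: {volcano_century:.3%} per century")
```

These are simple multiplications of cited rates, not independent estimates; they just confirm the ~1/100,000 and <1% per-century figures are internally consistent.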
The only GCRs that receive large amounts of philanthropic attention are nuclear security and climate change.
We do not have precise figures aggregated across causes, but our impression is that climate change is an area in which hundreds of millions of dollars a year are spent by U.S. philanthropic funders, while philanthropic funding addressing nuclear security appears to be in the tens of millions.
We don’t know of philanthropic funding for any of the other GCRs exceeding the single digit millions of dollars per year.
Biosecurity
By biosecurity, we mean the constellation of issues around pandemics, bioterrorism, biological weapons, and biotechnology research that could be used to inflict great harm (“dual use research”). Our understanding is that natural pandemics (especially flu pandemics) likely present the greatest current threat, but that the development of novel biotechnology could lead to greater risks over the medium or long term. We see this GCR as having a strong case for “importance” because it seems to combine relatively credible, likely, current threats with more speculative potential longer-term threats in a fairly coherent program area. The space receives significant attention from the U.S. government (with ~$5 billion in funding in 2012) but little from foundations: the Skoll Global Threats Fund is the only U.S. foundation we know to be engaging in this area currently, at a relatively low level, though the Sloan Foundation also used to have a program in this area. (We believe the distinction between government and philanthropic funding is at least potentially meaningful, as the two types of actors have different incentives and constraints; in particular, philanthropic funding could potentially influence a much larger amount of government funding.) Although we are not sure of the activities that would be best for a philanthropist to support, many people we spoke with argued that current preparedness is subpar and that there is significant room for a new philanthropic funder.
Although we have had a number of additional conversations since the completion of our shallow investigation, we continue to regard the question of what a philanthropist should fund within this broad issue as an open one. We expect to address it with a deeper investigation and a declared interest in funding.
Geoengineering research and governance
We see a twofold case for the importance of work on geoengineering research and governance:
- Climate change could turn out to be much worse than anticipated, and solar geoengineering could potentially offer a cheap (in purely financial, not necessarily cost-benefit, terms) and fast-acting response if it does. Further research to determine the viability of solar geoengineering could accordingly be quite valuable. However, our understanding is that geoengineering, should it work, would be a distant second best to a policy of cutting emissions now, and some people have argued that research on geoengineering could undermine current efforts to reduce emissions, making further research potentially harmful.
- The incentives of different countries to adopt solar geoengineering could differ dramatically, and it might be cheap enough for even small countries to do unilaterally, potentially leading to conflict. Questions about whether and how solar geoengineering could be governed are accordingly increasingly salient.
Although solar geoengineering is in the news periodically, research on the science or governance appears to receive relatively little dedicated funding: our rough survey found about $10 million/year in identifiable support from around the world (mostly from government sources), and we are not aware of any institutional philanthropic commitment in the area (though Bill Gates personally supports some research in the area).
Our conversations have led us to believe that there is significant scientific interest in conducting geoengineering research and that funding is an obstacle, but, as with biosecurity, we do not have a very detailed sense of what we might fund. We take seriously the concern that further geoengineering research could undermine support for emissions reductions, but we regard that outcome as relatively unlikely, and we also find it plausible that further research could contribute significantly to governance efforts.
We expect to address the question of what a philanthropist could support in this area with a deeper investigation and a declared interest in funding. Note that we don’t envision ourselves as trying to encourage geoengineering, but rather as trying to gain better information and governance structures for it, which could make the actual use more or less likely (and given the high potential risks of both climate change and geoengineering, we could imagine that shifting the probabilities in either direction – depending on what comes of more exploratory work – could do great good).
Potential risks from artificial intelligence
We are earlier in this investigation than in investigations of the above two causes, and have not yet produced a writeup. There is internal disagreement about how likely this cause is to end up as a priority; I don’t feel highly confident that it should be above some of the other contenders not discussed in depth here.
In brief, it appears possible that the coming decades will see substantial progress in artificial intelligence, potentially even to the point where machines come to outperform humans in many or nearly all intellectual domains, though it is difficult or impossible to make confident forecasts in this area. Such a scenario could carry great benefits, but also significant dangers (e.g. technological disemployment, accidents, crime, extremely powerful autonomous agents). The majority of academic artificial intelligence researchers seem not to see the rapid development of powerful autonomous agents as a substantial risk, though they do believe that there are some potential risks worth preparing for now (such as accidents in crucial systems or AI-enabled crime; see slides 20-22). However, some people, including the Machine Intelligence Research Institute and computer scientist Stuart Russell, feel that there are important things that should be done today to substantially improve the social outcomes associated with the rapid development of powerful artificial intelligence.
In general, my inclination would be to defer to the preponderance of expert opinion, but I think this area could potentially be promising for philanthropy partly because I have not seen a rigorous public assessment by credible AI researchers to support the (seemingly predominant) lack of concern over risks from the rapid development of powerful autonomous agents. Since this topic seems to be drawing increasing attention from some highly credentialed people, supporting such a public assessment seems like it could be valuable, even if the conclusion is that most researchers are right to not be concerned. The fact that a substantial portion of mainstream AI researchers also seem to think that more traditional risks from AI progress (e.g. accidents, crime) are worth addressing in the near term does increase my interest in the area, though not by much, since I don’t see those issues as GCRs, whereas the rapid development of powerful autonomous agents could conceivably be one. Should we decide to pursue this area further, I would guess that it would be at a lower level of funding than the other potential priority areas described above.
Note from Holden: I currently see this cause as more promising than Alexander does, to a fairly substantial degree. I agree that there are reasons, including the preponderance of expert opinion, to think that there is little preparatory work worth doing today; however, I see the stakes as large enough to justify work in this area even at a relatively low probability of having impact. I would like to see reasonably well-resourced, full-time efforts – with substantial input from mainstream computer scientists – to think about what preparations could be done for major developments in artificial intelligence, and my perception is that efforts fitting this description do not exist currently. We are currently working on trying to understand whether the seeming lack of activity comes from a place of “justified confidence that action is not needed now” or of “lack of action despite a reasonable possibility that action would be helpful now.” My current guess is that the latter is the case, and if so I hope to make this cause a priority.
We will be writing more on this topic in the future.
Why these three risks stand out
Generally speaking, the causes highlighted above (geoengineering, biosecurity, and, pending more investigation, artificial intelligence) seem to us to have:
- Greater potential for the most extreme direct harms (extreme enough to make a substantial change to the long-term trajectory of civilization likely) relative to other risks we’ve looked at, with the exception of nuclear weapons (an area that we perceive as more “crowded” than these three).
- A very difficult-to-quantify but potentially reasonably high (1%+) risk of such extreme harm in the next 50-100 years.
- Very little philanthropic attention.
Our guess is that most other candidate risks would, upon sufficient investigation, appear less worth working on than at least one of our top candidates – due to presenting less potential for harm, less tractability, or more crowdedness, while being roughly comparable on other dimensions. That said, (a) the specific assessment of artificial intelligence is still in progress and we don’t have internal agreement on it, as discussed above; (b) we have low confidence in our working assessment, and plan both to do more investigation and to seek out more critical viewpoints on our current priorities.
Our shallow investigations have generated a number of follow-up questions that we would like to resolve before committing to causes:
- Our current understanding is that major volcanic eruptions are currently neither predictable nor preventable, making this cause apparently rather intractable. To what extent could further research help remedy these shortcomings, and are there other ways a philanthropist could help address the risk from a large volcanic eruption?
- How do risks from comets compare to the remaining risks from untracked near-Earth asteroids? Our understanding is that these risks are likely to be an order of magnitude or two lower than volcanic eruption risks that would cause similar harm, but we aren’t sure how they compare in tractability. What could be done about potential risks from comets?
- How credible are existing estimates of the potential harm of geomagnetic storms? In particular, how do experts assess the risks to the power grid from a rare geomagnetic event? How prepared are power companies for geomagnetic storms?
- Are there any important gaps in current funding for efforts to improve nuclear security?
In addition, we are still hoping to conduct shallow investigations of nanotechnology, synthetic biology governance (aimed more at ecological threats than biosecurity), and the field of emerging technology governance as a whole, which we think could potentially be competitive with some of the risks described as potential focus areas.
Comments
This article may be of interest to you. “Existential Risks: Exploring a Robust Risk Reduction Strategy” http://link.springer.com/article/10.1007%2Fs11948-014-9559-3
Sooooo is anyone ever going to explain what these things are catastrophic risks to? What will we have accomplished when we reach 2100CE without going extinct?
Eli, the answer to the first part of your question, I would say, is humans. As George Berkeley famously asked, “If a tree falls in a forest and no one is around to hear it, does it make a sound?” Catastrophes and disasters, by their very definition, are what happens at the intersection of human activity and any accompanying events of adverse effect (on humans), be they “natural” or human-induced. This has been the central argument of disaster researchers since the early 1920s, when this particular field began emerging as an area of study in the U.S. That potential GCRs would most likely stem from human activity is becoming a more pressing issue; as Alexander illustrates, there are two sides to progress in both biotech and AI.
The key obstruction to humans circumventing issues like climate change–despite overwhelming evidence suggesting that it is largely a product of human activity, with the potential to affect billions (that’s right, not millions)–is the fact that many existing industries thrive on the very processes that exacerbate climate change, yet the pace at which such industries can change (whether through alternative energy or a complete overhaul of business operations) is too slow, whether through lack of incentives or otherwise.
Where I think the philanthropic funding could eventually encourage or influence government funding is by not just supporting those charitable groups to make further progress in vital scientific or social research where it is due, but by strengthening the public agency (while of course maintaining neutrality in such issues through evidence based research). I read a very interesting article about the open source revolution here http://bit.ly/1nnuu8U where they present a compelling argument by the late Harvard business school professor C. K. Prahalad that “the collective buying power of the five billion poor is four times that of the one billion rich”.
Alexander, I also believe that your turning away from the research on disaster shelters may have been premature (though I have my own biases as an architect and certainly appreciate the fact that GiveWell’s current priority is to conduct research on causes and projects that support the most tangible impact for donors), mostly because this is in fact a massive industry with high stakes. It would also depend on where you draw the definition of disaster shelters. Fallout shelters and bunkers are most likely not specialized facilities built solely in anticipation of GC events, but public facilities and common spaces that can double as shelters if and when needed. I am fascinated by the concept of geoengineering as an artificial means to combat the effects of climate change, but I remain skeptical because of the unforeseeable and unintended net-negative side effects that investing in such high-risk projects could have for humanity.
Volcanic eruptions are certainly not preventable, but geologists have developed scientifically validated indicators that can now fairly reliably predict when an eruption is going to happen. An obvious strategy for preventing human casualties would be not to build in close proximity to active volcanoes, much in the same way that building below sea level in an area known for seasonal floods is inadvisable (but developers are happy to buy up these lands cheaply, and many home buyers move in and live with the risk because they have no alternatives – Hurricane Katrina in 2005 being a prime example).
That humans can build something “disaster-proof” is a complete myth. The best we can do is build something that resists the effects of disaster just long enough for us to escape harm, and cultivate our own resilience by continuing to support positive social change (such as poverty alleviation, nutrition, and education).
Just for me, personally, I am interested in how GiveWell measures the intangible impacts of the charities that you endorse!
Alex, yes, of freaking course people. That still doesn’t specify an actual goal. To minimize risk you must possess an asset, preferably an asset of great value.
It’s easy to say, “ah, well, human lives are of immense value and we mustn’t risk them!”, but that statement is not consistent with most of the rest of our actions, as individuals and as societies. Hence my finding an immense irony: the Great and the Good turn their eye towards preventing global, catastrophic “risks” to an asset (humanity) on which they place damn near zero actual value.
Eli,
Do you plan to say more about your take on GCR reduction as a focus? I’m very interested in your take given your above comments.
Alex – thanks for the comments. I just wanted to reply on volcanoes: while I think “not building near volcanoes” is likely to be an effective strategy for dealing with more common volcanic eruptions, large volcanic eruptions of the kind discussed in our investigation could affect whole continents or the world, rendering that strategy rather inadequate.
Alexander, I wonder whether the following comment should be qualified in light of other information about the opinions of AI researchers:
> The majority of academic artificial intelligence researchers seem not to see the rapid development of powerful autonomous agents as a substantial risk, but to believe that there are some potential risks worth preparing for now (such as accidents in crucial systems or AI-enabled crime; see slides 20-22).
Researchers at the Future of Humanity Institute recently did a survey of the 100 most cited researchers in AI on closely related questions. 29 answered it. The median estimated probability of a rapid (2-year) transition from human-level AI to superintelligent AI was “small” (5%); the median probability assigned to an “extremely bad outcome (existential catastrophe)” from superintelligent AI was also “small” (10%).
There are questions about how to interpret these survey results, but a couple of points stick out to me if you take the survey results at face value.
(1) If you broaden from “rapid AI transition leading to a very bad outcome” to “AI development eventually leading to a very bad outcome,” you couldn’t characterize the risk as insubstantial.
(2) If you just multiplied those numbers through (which I think would be more likely to underestimate the risk than overestimate it), you get a median opinion that creating human-level AI will swiftly lead to an extremely bad outcome/existential catastrophe with a probability of 0.5%, a risk that I would also regard as substantial.
The analysis of these results has not been published yet, but I’ll e-mail what we have at the moment.
I agree that the AAAI panel did not think that this risk was substantial. But I am not convinced that their opinions are representative of the field as a whole. I think the claim that the AI researchers see little to do about global catastrophes from advanced AI is on stronger footing.
Jason, Eli is probably Eliezer Yudkowsky of MIRI and LessWrong, or at least someone else from LessWrong. There are a lot of us who think that human culture has a much more cavalier attitude toward human deaths than it should.
More importantly, some of those are strong advocates for cryonics as a way to even stand a chance of preventing the permanent destruction of people. It’s an unorthodox idea, but you have to admit, the huge difference in moral response to a thousand people dying in a terror attack and a thousand people dying of old age doesn’t make a ton of sense.
I’m happy to see such a great job done covering GCRs. I have a clarifying question.
> I don’t see those issues as GCRs, whereas the rapid development of powerful autonomous agents could conceivably be one.
There are 3 claims this could be referring to:
1) Rapid development of powerful autonomous agents is conceivable.
2) It is conceivable that the rapid development of powerful autonomous agents will happen soon.
3) It’s conceivable that the rapid development of powerful autonomous agents would be a GCR.
I think that (1) is pretty much indisputable, (2) is disputable, with arguments on either side, and (3) is difficult to dispute, though there are arguments on either side. I’d like to know Alex, Holden, and other GiveWell staff’s opinions on these.
Bravo on suggesting that geo-engineering (climate engineering) is, at least, worthy of research. It may be risky but some ways are less risky than others. Bravo. Hope to see more on this…
Clarification: when I wrote:
> (2) If you just multiplied those numbers through (which I think would be more likely to underestimate the risk than overestimate it), you get a median opinion that creating human-level AI will swiftly lead to an extremely bad outcome/existential catastrophe with a probability of 0.5%, a risk that I would also regard as substantial.
I meant that the process would be more likely to underestimate the level of risk that the AI researchers were implicitly assuming, without commenting on whether this implicit assumption was an overestimate or underestimate of risk.
Nick – thanks for the comments. I’ve read previous versions of the data you point to. I think most of our disagreement is linguistic – is a half a percent chance “substantial”? I was trying to say something like “the majority of credible AI researchers would say that the probability of the rapid development of human-level or superintelligent AI causing a global catastrophe in the next century is small,” but I don’t have a great sense for what “small” would have to mean. I find it totally plausible that you could get a majority (of credible researchers) to agree that that probability is above a tenth of a percent, though I don’t know. I also agree that “the claim that the AI researchers see little to do about global catastrophes from advanced AI is on stronger footing.” However, I want to note that I find the AAAI panel, for all its issues, a better gauge of expert opinion on this topic than a survey of very highly cited AI researchers with a 29% response rate. I found the paper’s argument that survey nonresponse was uncorrelated with substantive views unconvincing.
Andrew – I was saying #3.
Alexander — thanks for your response. I’m not sure that our disagreement is purely verbal. It seems to me that you have offered something like “AI researchers don’t think AI risk is very large” as a consideration in favor of not taking AI risk very seriously. We agree that a sizable fraction of the field may think the risk of a global catastrophe following rapid AI development could be somewhere around 0.1% to 0.5%. I believe these numbers are sufficiently large that, for most people who hadn’t thought about AI risk in detail, they would make them more inclined to think that AI risk was an important issue, not less.
I agree that the AAAI panel is a more important source of information regarding the opinions of AI researchers, but I also believe that the information from the FHI survey should be taken into account. Even if the people who didn’t answer the survey generally think that a rapid AI transition is unlikely to happen or be dangerous, the survey would still suggest that there is a sizable fraction of the field with opinions whose significance is poorly summarized by phrases like “low likelihood of radical outcomes,” even if it’s literally true that the likelihoods are low.
To elaborate a bit on why I think the FHI survey info is also relevant, consider that:
* The AAAI panel may have been organized mainly by people who are less concerned about risk
* The write-up may primarily reflect the view of most people in the field, but neglect a substantial dissenting minority
* The AAAI people may have thought that a 5% chance of a rapid AI transition and a 10% chance of an existential catastrophe from superintelligence were reasonably summarized as “low likelihood of radical outcomes” (implying the issue is not extremely important), even if you and I would think that if these numbers were true, they could potentially be extremely important.
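For concreteness, here is a minimal sketch of the multiplication discussed in this thread, using the 5% and 10% figures quoted above (illustrative numbers, not exact survey statistics):

```python
# Illustrative figures quoted in this thread (not exact survey statistics):
p_rapid_transition = 0.05         # ~5% chance of a rapid AI transition
p_catastrophe_given_rapid = 0.10  # ~10% chance of existential catastrophe, given a rapid transition

# "Multiplying those numbers through," as described in the earlier comment:
p_combined = p_rapid_transition * p_catastrophe_given_rapid
print(f"{p_combined:.3%}")  # 0.500%
```

This is just the product of the two conditional estimates; whether 0.5% counts as “substantial” is exactly the verbal dispute above.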
Nick – I agree that “a sizable fraction of the field may think the risk of a global catastrophe following rapid AI development could be somewhere around 0.1% to 0.5%,” because I have no idea how most members of the field would respond to that question (and I agree that it is totally compatible with the content of the AAAI language). But my intuition is that people are often instinctively unwilling to give 1000:1 odds, even on things that they believe are qualitatively very unlikely. So I would definitely update toward viewing the potential risk as less important if, in a representative survey, a majority of experts gave odds longer than 1000:1. But I’m pretty ambivalent about the epistemic value of those odds, given potential biases in both directions, and accordingly think the summary “the majority of academic artificial intelligence researchers seem not to see the rapid development of powerful autonomous agents as a substantial risk” is qualitatively correct, while being totally compatible with probability estimates that, if true, would be enough to warrant our concern.
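To make the odds language here concrete, a small sketch (my own illustrative helper, not from the post) converting “odds against” into a probability:

```python
def odds_against_to_prob(odds: float) -> float:
    """Convert 'odds:1 against' into a probability: 1 success per (odds + 1) trials."""
    return 1.0 / (odds + 1.0)

# Odds of exactly 1000:1 against correspond to just under a tenth of a percent,
# so "odds longer than 1000:1" means a probability below roughly 0.1%.
p = odds_against_to_prob(1000)
print(f"{p:.4%}")  # 0.0999%
```

So a survey majority giving odds longer than 1000:1 would place the probability below the 0.1% lower bound of the range discussed above.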
Why categorize things into “anthropogenic” and otherwise? There are a number of dichotomies available.
Separately:
“The general argument for being more worried about anthropogenic GCRs is as follows. The human species is fairly old (Homo sapiens sapiens is believed to have evolved several hundred thousand years ago), giving us a priori reason to believe that we do not face high background extinction risk.”
…nor a high self-extermination risk, by that logic, right?
Biosecurity: I understand the “giving to learn” concept that drives the declared interest in funding. But the fact that other philanthropists aren’t involved doesn’t by itself create potential for impact. What are the preliminary reasons to believe you could have an impact?
Climate change: the fact that something (e.g. climate change) could be worse than anticipated is not a call to arms, as anyone who has eaten oysters recently can attest. Can something be cheap in “purely financial, [but] not necessarily in cost-benefit terms”? Lastly, on the subject of current funding: if you slice up climate change, or any issue, finely enough, you will always find a dearth of funding. As much as “research on the science or governance” of solar geoengineering gets overlooked, surely you would find yourself in a crowded climate change philanthropy marketplace.
Artificial intelligence for another time. I love what GiveWell does. Thanks for the best source of information and inspiration around.
I congratulate Holden Karnofsky for that piece. Global catastrophic events are very popular musings, heavily supported by the media, governments, lawyers, and most NGOs. Today, people who do not subscribe to some of them, like catastrophic climate change, are demonized. Nevertheless, the truth is that, as the author says above, the predictions are not robust, a very elegant and educated word that shows respect for the current believers in the modern, invented “crisis.” Historically, each and every catastrophic prediction has been a failure. The reasons are, in my view (and others’), the inability or refusal to understand the power of human ingenuity driving civilization as well as science and technology. The talk by Matt Ridley* on receiving the Julian Simon Memorial Award dissects the current pessimism. Besides, I embrace the author’s preferences with enthusiasm, and more: I would say that it is 100 times more robust, efficient, and humane to address current, objective problems like disease and poverty. Alleviating poverty, want, and disease will liberate an army of future citizens, educated and able to address whatever problems the future has in store, problems that we can only partially guess.
* http://blog.skepticallibertarian.com/2012/06/20/julian-simon-award-lecture-matt-ridley/
Sincerely
Francisco G Nobrega
Here’s a link to the survey by Muller and Bostrom, discussed in my comments above: http://www.sophia.de/pdf/2014_PT-AI_polls.pdf
Chris – thanks for the thoughts and questions. Addressing them in order: