Last year, we wrote:
A major goal of 2017 will be to reach and publish better-developed views on:
- Which worldviews we find most plausible: for example, how we allocate resources between giving that primarily focuses on present-day human welfare vs. present-day animal welfare vs. global catastrophic risks.
- How we allocate resources among worldviews.
- How we determine whether it’s better to make a given grant or save the money for a later date.
This post gives an update on this work.
The questions we’re tackling here are complex, and we are still far from having a fully developed framework.
However, we do have a tentative high-level approach to these questions, and some rough expectations about a few high-level conclusions (at least as far as the next few years are concerned). Hopefully, laying these out will clarify - among other things - (a) why we continue to work on multiple highly disparate causes; (b) ranges for what sort of budgets we expect in the next few years for each of the focus areas we currently work in; (c) how we decided how much to recommend that Good Ventures donate to GiveWell’s top charities for 2017.
In brief:
- When choosing how to allocate capital, we need to decide between multiple worldviews. We use “worldview” to refer to a highly debatable (and perhaps impossible to evaluate) set of beliefs that favor a certain kind of giving. Worldviews can represent a mix of moral values, views on epistemology and methodology, and heuristics. Worldviews that are particularly salient to us include:
  - a “long-termist” worldview that ascribes very high value to the long-term future, such that it assesses grants by how well they advance the odds of favorable long-term outcomes for civilization;
  - a “near-termist, human-centric” worldview that assesses grants by how well they improve the lives of humans (excluding all animal welfare considerations) on a relatively shorter time horizon;
  - a “near-termist, animal-inclusive” view that focuses on a similarly short time horizon but ascribes significant moral weight to animals.
- If we evaluated all grants’ cost-effectiveness in the same terms (e.g., “Persons helped per dollar, including animals” or “Persons helped per dollar, excluding animals”), this would likely result in effectively putting all our resources behind a single worldview. For reasons given below (and previously), we don’t want to do this. Instead, we’re likely to divide the available capital into buckets, with different buckets operating according to different worldviews and in some cases other criteria as well. E.g., we might allocate X% of capital to a bucket that aims to maximize impact from a “long-termist” perspective, and Y% of capital to a bucket that aims to maximize impact from a “near-termist” perspective, with further subdivisions to account for other worldview splits (e.g., around the moral weight of animals) and other criteria. These buckets would then, in turn, allocate “their” capital to causes in order to best accomplish their goals; for example, a long-termist bucket might allocate some of its X% to biosecurity and pandemic preparedness, some to causes that seek to improve decision-making generally, etc.
- In setting these allocations, we will need to:
  - Consider how much credence/weight we assign to each worldview, which in turn will be informed by a number of investigations and writeups that are in progress.
  - Consider how appropriate it is to (a) model the different worldviews as different “agents” that have fundamentally different goals (and can make trades and agreements with each other), vs. (b) model them as different empirical beliefs that we can assign probabilities to and handle in an expected-value framework.
  - To the extent we model the different worldviews as different “agents”: consider what deals and agreements they might make with each other - such as trying to arrange for worldviews to get more capital in situations where the capital is especially valuable. (In other words, we don’t want to allocate capital to worldviews in a vacuum: more should go to worldviews that have more outstanding or “outlier” giving opportunities by their own lights.)
  - Allocate some capital to buckets that might not be directly implied by the goals of any of our leading worldviews, for purposes of accomplishing a number of practical and other goals (e.g., practical benefits of worldview diversification).
- This process will require a large number of deep judgment calls about moral values, difficult-to-assess heuristics and empirical claims, etc. As such, we may follow a similar process to GiveWell’s cost-effectiveness analysis: having every interested staff member fill in their inputs for key parameters, discussing our differences, and creating a summary of key disagreements as well as median values for key parameters. However, ultimately, this exercise serves to inform funders about how to allocate their funds, and as such the funders of Good Ventures will make the final call about how to allocate their capital between buckets.
- We don’t expect to set the capital allocation between buckets all at once. We expect a continuing iterative process in which - at any given time - we are making enough guesses and tentative allocations to set our working budgets for existing focus areas and our desired trajectory for total giving over the next few years. We expect the detail and confidence of our capital allocation between buckets to increase in parallel with total giving, and to be fairly high by the time we reach peak giving (which could be 10+ years from now).
- This post includes some high-level outputs from this framework that are reasonably likely to guide our giving over the next 1-5 years, though they could still easily change. These are:
  - We will probably recommend that a cluster of “long-termist” buckets collectively receive the largest allocation: at least 50% of all available capital. Grants in these buckets will be assessed by how well they advance the odds of favorable long-term outcomes for civilization. We currently believe that global catastrophic risk reduction accounts for some of the most promising work by this standard (though there are many focus areas that can be argued for under it, e.g. work to promote international peace and cooperation).
  - We will likely want to ensure that we have substantial, and somewhat diversified, programs in policy-oriented philanthropy and scientific research funding, for a variety of practical reasons. I expect that we will recommend allocating at least $50 million per year to policy-oriented causes, and at least $50 million per year to scientific-research-oriented causes, for at least the next 5 or so years.
  - We will likely recommend allocating something like 10% of available capital to a “straightforward charity” bucket (described more below), which will likely correspond to supporting GiveWell recommendations for the near future.
  - Other details of our likely allocation are yet to be determined. We expect to ultimately recommend a substantial allocation (and not shrink our current commitment) to farm animal welfare, but it’s a very open question whether and how much this allocation will grow. We also note that making informed and thoughtful decisions about capital allocation is very valuable to us, and we’re open to significant capital allocations aimed specifically at this goal (for example, funding research on the relative merits of the various worldviews and their implicit assumptions) if we see good opportunities to make them.
- A notable outcome of the framework we’re working on is that we will no longer have a single “benchmark” for giving now vs. later, as we did in the past. Rather, each bucket of capital will have its own standards and way of assessing grants to determine whether they qualify for drawing down the capital in that bucket. For example, there might be one bucket that aims to maximize impact according to a long-termist worldview, and another that aims to maximize impact according to a near-termist worldview; each would have different benchmarks and other criteria for deciding on whether to make a given grant or save the money for later. We think this approach is a natural outcome of worldview diversification, and will help us establish more systematic benchmarks than we currently have.
- Key worldview choices and why they might call for diversification
  - Animal-inclusive vs human-centric views
    - Handling uncertainty about animal-inclusive vs. human-centric views
      - Issue 1: normative uncertainty and philosophical incommensurability
      - Issue 2: methodological uncertainty and practical incommensurability
      - Issue 3: practical considerations against “putting all our eggs in one basket”
      - Issue 4: the “outlier opportunities” principle
      - A simple alternative to the default approach
  - Long-termist vs. near-termist views
  - Some additional notes on worldview choices
- Some other criteria for capital allocation
- Allocating capital to buckets and causes
- Likely outputs
  - Global catastrophic risks and other long-term-oriented causes
  - Policy-oriented philanthropy and scientific research funding
  - Straightforward charity
  - Other outputs
- Smoothing and inertia
- Funding aimed directly at better informing our allocation between buckets
- 2017 allocation to GiveWell top charities
- No more unified benchmark
- Our future plans for this work
Key worldview choices and why they might call for diversification
Over the coming years, we expect to increase the scale of our giving significantly. Before we do so, we’d like to become more systematic about how much we budget for each of our different focus areas, as well as about what we budget for one year vs. another (i.e., how we decide when to give immediately vs. save the money for later).
At first glance, the ideal way to tackle this challenge would be to establish some common metric for grants. To simplify, one might imagine a metric such as “lives improved (adjusted for degree of improvement) per dollar spent.” We could then use this metric to (a) compare what we can accomplish at different budget sizes in different areas; (b) make grants when they seem better than our “last dollar” (more discussion of the “last dollar” concept here), and save the money instead when they don’t. Our past discussions of our approach to “giving now vs. later” (here and here) have implied an approach along these lines. I will refer to this approach as the “default approach” to allocating capital between causes (and will later contrast it with a “diversifying approach” that divides capital into different “buckets” using different metrics).
A major challenge here is that many of the comparisons we’d like to make hinge on very debatable questions involving deep uncertainty. In Worldview Diversification, we characterized this dilemma, gave examples, and defined a “worldview” (for our purposes) as follows:
I’ll use “worldview” to refer to a set of highly debatable (and perhaps impossible to evaluate) beliefs that favor a certain kind of giving. One worldview might imply that evidence-backed charities serving the global poor are far more worthwhile than [other options]; another might imply that farm animal welfare is; another might imply that global catastrophic risk reduction is. A given worldview represents a combination of views, sometimes very difficult to disentangle, such that uncertainty between worldviews is constituted by a mix of empirical uncertainty (uncertainty about facts), normative uncertainty (uncertainty about morality), and methodological uncertainty (e.g. uncertainty about how to handle uncertainty …)
Below, we list some of what we view as the most crucial worldview choices we are facing, along with notes on why they might call for some allocation procedure other than the “default approach” described above - and a brief outline (fleshed out somewhat more in later sections) on what an alternative procedure might look like.
Animal-inclusive vs human-centric views
As we stated in our earlier post on worldview diversification:
Some people think that animals such as chickens have essentially no moral significance compared to that of humans; others think that they should be considered comparably important, or at least 1-10% as important. If you accept the latter view, farm animal welfare looks like an extraordinarily outstanding cause, potentially to the point of dominating other options: billions of chickens are treated incredibly cruelly each year on factory farms, and we estimate that corporate campaigns can spare over 200 hens from cage confinement for each dollar spent. But if you accept the former view, this work is arguably a poor use of money.
(Note: this quote leaves out the caveat that this picture could change dramatically if one adopts the “long-termist” view discussed in a later section. For simplicity, the rest of this section will assume that one is focused on relatively near-term good accomplished rather than taking the “long-termist” view discussed below.)
We’ve since published an extensive report on moral patienthood that grew out of our efforts to become better informed on this topic. However, we still feel that we have relatively little to go on, to the point where the report’s author wasn’t comfortable publishing even his roughest guesses at the relative moral weights of different animals. Although he did publish his subjective probabilities that different species have “consciousness of a sort I intuitively morally care about,” these are not sufficient to establish relative weight, and one of the main inputs into these probabilities is simple ignorance/agnosticism.
There are many potential judgment calls to be made regarding moral weight - for example, two people might agree on the moral weight of cows (relative to humans) while strongly disagreeing on the moral weight of chickens, or agree on chickens but disagree on fish. For our purposes, we focus on one high-level disagreement. The disagreement is between two views:
- An “animal-inclusive” view that assigns moral weight using explicit subjective estimates with a high degree of agnosticism, and generally considers members of most relevant1 species of fish, birds, and mammals to carry at least 1% as much moral weight as humans. At a high level, taking an “animal-inclusive view” allows - and, given the current state of the world, is likely to often result in2 - letting the interests of nonhuman animals dominate giving decisions. Beyond that, different versions of this view could have different consequences.
- A “human-centric” view that effectively treats all non-human animals as having zero moral weight. This view could be driven by something like: (a) being suspicious, methodologically speaking, of estimating moral weights in an explicit (and in practice largely agnosticism-based) framework, and therefore opting for a conventional/“non-radical” set of views in one’s state of ignorance; (b) having criteria for moral patienthood that revolve around something other than consciousness, such as the idea that we have special obligations to humans (relative to animals).3
This “animal-inclusive” vs. “human-centric” split is a crude simplification of the many possible disagreements over moral weights, but in practice I believe most people fall into one camp or the other, and that the two camps can have radically different implications (at least when one focuses on near-term good accomplished rather than taking the “long-termist” perspective discussed below).
Handling uncertainty about animal-inclusive vs. human-centric views
If we were taking the “default approach” noted above, we could handle our uncertainty on this front by assigning a subjective probability to each of the “animal-inclusive” and “human-centric” views (or, for more granularity, assigning subjective probability distributions over the “moral weights” of many different species relative to humans) and making all grants according to whatever maximizes some metric such as “expected years of life improved,4 adjusted for moral weight.” For example, if one thinks there’s a 50% chance that one should be weighing the interests of chickens 1% as much as those of humans, and a 50% chance that one should not weigh them at all, one might treat this situation as though chickens have an “expected moral weight” of 0.5% (50% * 1% + 50% * 0) relative to humans. This would imply that (all else equal) a grant that helps 300,000 chickens is better than a grant that helps 1,000 humans, while a grant that helps 100,000 chickens is worse.
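To make the arithmetic of the default approach concrete, here is a minimal sketch using the hypothetical numbers above (illustrative figures only, not actual estimates):

```python
# Illustrative only: the hypothetical numbers from the example above, not actual estimates.
p_animal_inclusive = 0.5       # credence that chickens carry ~1% of a human's moral weight
weight_if_inclusive = 0.01     # chicken's moral weight under the animal-inclusive view
weight_if_human_centric = 0.0  # chicken's moral weight under the human-centric view

# "Expected moral weight" of a chicken relative to a human under the default approach
expected_weight = (p_animal_inclusive * weight_if_inclusive
                   + (1 - p_animal_inclusive) * weight_if_human_centric)  # 0.005, i.e. 0.5%

def human_equivalents(chickens_helped: int) -> float:
    """Convert chickens helped into expected human-equivalents."""
    return chickens_helped * expected_weight

print(human_equivalents(300_000))  # 1500.0 -> beats a grant helping 1,000 humans
print(human_equivalents(100_000))  # 500.0  -> loses to a grant helping 1,000 humans
```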
This default approach has several undesirable properties. We discussed these somewhat in our previous post on worldview diversification, but since our thinking has evolved, we list the main issues we see with the default approach below.
Issue 1: normative uncertainty and philosophical incommensurability
The “animal-inclusive” vs. “human-centric” divide could be interpreted as being about a form of “normative uncertainty”: uncertainty between two different views of morality. It’s not entirely clear how to create a single “common metric” for adjudicating between two views. Consider:
- Comparison method A: say that “a human life improved5” is the main metric valued by the human-centric worldview, and that “a chicken life improved” is worth >1% of these (animal-inclusive view) or 0 of these (human-centric view). In this case, a >10% probability on the animal-inclusive view would lead chickens to be valued >0.1% as much as humans, which would likely imply a great deal of resources devoted to animal welfare relative to near-term human-focused causes.6
- Comparison method B: say that “a chicken life improved” is the main metric valued by the animal-inclusive worldview, and that “a human life improved” is worth <100 of these (animal-inclusive view) or an astronomical number of these (human-centric view). In this case, a >10% probability on the human-centric view would be effectively similar to a 100% probability on the human-centric view.
These methods have essentially opposite practical implications. Method A is the more intuitive one for me (it implies that the animal-inclusive view sees “more total value at stake in the world as a whole,” and this implication seems correct), but the lack of a clear principle for choosing between the two should give one pause, and there’s no obviously appropriate way to handle this sort of uncertainty. One could argue that the two views are “philosophically incommensurable” in the sense of dealing with fundamentally different units of value, with no way to identify an equivalence-based conversion factor between the two.
This topic is further discussed in Chapter 4 of MacAskill 2014.
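To make the divergence concrete, here is a toy calculation using hypothetical numbers (including an arbitrary stand-in for “an astronomical number”), showing how the same credences rank the same two grants oppositely under the two comparison methods:

```python
# Toy illustration of comparison methods A and B; all numbers are hypothetical.
p_animal_inclusive = 0.5   # credence in the animal-inclusive view
chickens_helped = 300_000  # hypothetical animal-welfare grant
humans_helped = 1_000      # hypothetical human-focused grant

# Method A: common unit = "a human life improved".
# A chicken is worth 1% of a human (animal-inclusive) or 0 (human-centric).
chicken_in_human_units = p_animal_inclusive * 0.01 + (1 - p_animal_inclusive) * 0.0
value_A_chicken_grant = chickens_helped * chicken_in_human_units  # 1,500 human-units
value_A_human_grant = humans_helped * 1.0                         # 1,000 human-units

# Method B: common unit = "a chicken life improved".
# A human is worth 100 chickens (animal-inclusive) or an astronomically large number
# of chickens (human-centric) -- 1e12 is an arbitrary stand-in for "astronomical".
human_in_chicken_units = p_animal_inclusive * 100 + (1 - p_animal_inclusive) * 1e12
value_B_chicken_grant = chickens_helped * 1.0                     # 300,000 chicken-units
value_B_human_grant = humans_helped * human_in_chicken_units      # ~5e14 chicken-units

print(value_A_chicken_grant > value_A_human_grant)  # True: method A favors the chicken grant
print(value_B_chicken_grant > value_B_human_grant)  # False: method B favors the human grant
```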
Issue 2: methodological uncertainty and practical incommensurability
As stated above, a major potential reason for taking the human-centric view is “being suspicious, methodologically speaking, of estimating moral weights in an explicit (and in practice largely agnosticism-based) framework, and therefore opting for a conventional/’non-radical’ set of views in one’s state of ignorance.” Yet the default approach essentially comes down to evaluating this concern using an explicit (and in practice largely agnosticism-based) framework and embracing whatever radical implications result. It therefore seems like a question-begging and inappropriate methodology for handling such a concern.
It’s not clear what methodology could adjudicate a concern like this in a way that is “fair” both to the possibility this concern is valid and the possibility that it isn’t. Because of this, one might say that the two views are “practically incommensurable”: there is no available way to reasonably, practically come up with “common metrics” and thus make apples-to-apples comparisons between them.
Issue 3: practical considerations against “putting all our eggs in one basket”
We believe that if we took the default approach in this case, there’s a strong chance that we would end up effectively going “all-in” on something very similar to the animal-inclusive view.7 This could mean focusing our giving on a few cause areas that are currently extremely small, as we believe there are very few people or organizations that are both (a) focused on animal welfare and (b) focused on having highly cost-effective impact (affecting large numbers of animals per dollar). Even if these fields grew in response to our funding, they would likely continue to be quite small and idiosyncratic relative to the wider world of philanthropic causes.
Over time, we aspire to become the go-to experts on impact-focused giving; to become powerful advocates for this broad idea; and to have an influence on the way many philanthropists make choices. Broadly speaking, we think our odds of doing this would fall greatly if we were all-in on animal-focused causes. We would essentially be tying the success of our broad vision for impact-focused philanthropy to a concentrated bet on animal causes (and their idiosyncrasies) in particular. And we’d be giving up many of the practical benefits we listed previously for a more diversified approach. Briefly recapped, these are: (a) being able to provide tangibly useful information to a large set of donors; (b) developing staff capacity to work in many causes in case our best-guess worldview changes over time; (c) using lessons learned in some causes to improve our work in others; (d) presenting an accurate public-facing picture of our values; and (e) increasing the degree to which, over the long run, our expected impact matches our actual impact (which could be beneficial for our own, and others’, ability to evaluate how we’re doing).
Issue 4: the “outlier opportunities” principle
We see a great deal of intuitive appeal in the following principle, which we’ll call the “outlier opportunities” principle:
if we see an opportunity to do a huge, and in some sense “unusual” or “outlier,” amount of good according to worldview A by sacrificing a relatively modest, and in some sense “common” or “normal,” amount of good according to worldview B, we should do so (presuming that we consider both worldview A and worldview B highly plausible and reasonable and have deep uncertainty between them).
To give a hypothetical example, imagine that:
- We are allocating $100 million between cage-free reforms and GiveWell’s top charities, and estimate that: (a) work on cage-free reforms could improve 1000 chicken-life-years per dollar spent (e.g. improve 200 chickens’ lives for 5 years each), while (b) GiveWell’s top charities could improve one human-life-year by an equivalent amount for every $500 spent. Suppose further that we imagine these figures to apply to large numbers of giving opportunities, enough to absorb the full $100 million easily, and to be broadly representative of the best giving opportunities one encounters in normal circumstances.
- Along the lines of the default approach, we value chicken-life-years at 0.5% as much as human-life-years. This would imply that cage-free reforms could improve the equivalent of 5 human-life-years per dollar spent, making them 2500x as cost-effective as GiveWell’s top charities from our perspective.
- We then notice a particular opportunity to improve 2 million human-life-years (by an equivalent amount) for $1 million, perhaps by funding a highly promising treatment for a rare disease, and believe that this opportunity would not get funded in our absence. Call this an “outlier opportunity,” notable for its unusual cost-effectiveness in the broader scheme of giving opportunities for the human-centric worldview.
In this hypothetical, the outlier opportunity would be ~1000x as cost-effective as the other top human-centric opportunities, but still <50% as cost-effective as the vast amount of work to be funded on cage-free reforms. In this hypothetical, I think there’s a strong intuitive case for funding the outlier opportunity nonetheless. (I think even more compelling cases can be imagined for some other worldview contrasts, as in the case of the “long-termist” vs. “near-termist” views discussed below.)
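Spelling out the arithmetic behind this hypothetical (all figures are the made-up ones above):

```python
# Re-deriving the figures in the hypothetical above; all numbers are illustrative.
chicken_weight = 0.005             # a chicken-life-year valued at 0.5% of a human-life-year

cage_free = 1000 * chicken_weight  # 5.0 human-life-year-equivalents per dollar
givewell_top = 1 / 500             # 0.002 human-life-years per dollar
outlier = 2_000_000 / 1_000_000    # 2.0 human-life-years per dollar

print(cage_free / givewell_top)  # 2500.0 -> cage-free vs. GiveWell top charities
print(outlier / givewell_top)    # 1000.0 -> the outlier vs. other human-centric opportunities
print(outlier / cage_free)       # 0.4    -> the outlier is still <50% as cost-effective as cage-free
```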
The outlier opportunities principle could be defended and debated on a number of grounds, and some version of it may follow straightforwardly from handling “incommensurability” between worldviews as discussed above. However, we think the intuitive appeal of the principle is worth calling out by itself, since one might disagree with specific arguments for the principle while still accepting some version of it.
It’s unclear how to apply the outlier opportunities principle in practice. It’s occurred to us that the first $X we allocate according to a given worldview might, in many cases, be an “outlier opportunity” for that worldview, for the minimum $X that allows us to hire staff, explore giving opportunities, make sure to fund the very best ones, and provide guidance to other donors in the cause. This is highly debatable for any given specific case. More broadly, the outlier opportunities principle may be compelling to some as an indication of some sort of drawback in principle to the default approach.
A simple alternative to the default approach
When considering the animal-inclusive vs. human-centric worldview, one simple alternative to the default approach would be to split our available capital into two equally sized buckets, and have one bucket correspond to each worldview. This would mean allocating half of the capital in whatever way maximizes humans helped, and half in whatever way maximizes a metric more like “species-adjusted persons helped, where chickens and many other species generally count more than 1% as much as humans.”
I’ll note that this approach is probably intuitively appealing for misleading reasons (more here), i.e., it has less going for it than one might initially guess. I consider it a blunt and unprincipled approach to uncertainty, but one that does seem to simultaneously relieve many of the problems raised above:
- If the two views are incommensurable (issues 1 and 2 above), such that one cannot meaningfully reach “common units” of value between them, one way of imagining this situation (metaphorically) might be to imagine the two views as though they are two different agents with fundamentally different and incommensurable goals, disagreeing about how to spend capital. A first-pass way of allocating capital “fairly” in such a situation would be to divide it evenly between the two agents; the approach described here is largely equivalent to that solution.
- This approach would probably ensure that most of the “practical benefits of worldview diversification” (being able to provide tangibly useful information to a large set of donors; developing staff capacity to work in many causes; etc.) are realized.
- It would also ensure that “outlier opportunities” according to both worldviews are funded.
As discussed later in this post, I think this approach can be improved on, but I find it useful as a starting-point alternative to the default approach.
Long-termist vs. near-termist views
We’ve written before about the idea that:
most of the people we can help (with our giving, our work, etc.) are people who haven’t been born yet. By working to lower global catastrophic risks, speed economic development and technological innovation, and generally improve people’s resources, capabilities, and values, we may have an impact that (even if small today) reverberates for generations to come, helping more people in the future than we can hope to help in the present.
In the >3 years since that post, I’ve come to place substantially more weight on this view, for several reasons:
- In my experience, the people who seem most knowledgeable and reflective about how to apply the principles of effective altruism disproportionately favor this view.
- I’ve learned more about the case for this view. In particular:
  - I now believe that there is a substantial chance (easily over 10%) that if civilization survives the next few hundred years, it could lead to an extremely large, overwhelmingly positive world that is much more robust to global catastrophic risks than today’s world. We expect to publish analysis on this topic in the future.
  - My intuitions, like those of many others, say it is far more important to improve the lives of persons who do (or will) exist regardless of our actions, than it is to increase the odds that a large number of positive lives exist in the future. Because of this, I’m instinctively skeptical that the possibility of even a very large positive world should carry too much weight in our moral calculations. However, I now believe there are good arguments to the contrary, such that I assign a substantial chance (easily over 10%) that I will eventually change my mind and come to believe that, e.g., preventing extinction is worth at least a trillion times more (due to the future generations it allows to exist) than a simple “lives saved today” calculation would imply. The arguments on this topic are in the population ethics literature; we expect to publish a review and summary in the future.
  - I’ve come to believe that there is highly important, neglected, tractable work to do that is suited to improving long-run outcomes for large numbers of generations. Most of this falls under global catastrophic risk reduction. (Note: I don’t mean to imply that global catastrophic risk reduction can only be justified by appealing to its impact on future generations. Sufficiently significant global catastrophic risk reduction could be justified by its benefits for the current generation alone, and thus could be strong according to the “near-termist” view discussed below.)
I characterize the “long-termist view” as combining: (a) population ethics that assigns reasonably high moral weight to the outcome of “a person with high well-being, who otherwise would not have existed” (at least 1% relative to e.g. “a person with high well-being, who would otherwise have had low well-being”);8 (b) methodological comfort with statements such as “Good philanthropy may have a nontrivial impact on the likelihood that future civilization is very large (many generations, high population per generation) and very high in well-being; this impact would be very high relative to any short- or medium-term impact that can be had.” I believe that in practice, one who is comfortable with the basic methodological and moral approach here is likely to end up assessing grants primarily based on how well they advance the odds of favorable long-term outcomes for civilization.
(One could also reach a long-termist conclusion (assessing grants primarily based on how well they advance the odds of favorable long-term outcomes for civilization) without accepting the population ethics laid out in (a). However, I feel that this would likely require an even greater degree of (b), methodological willingness to give based on speculation about the long-term future. For example, one might devote all of one’s effort to minimizing the odds of a very large long-term future filled with suffering or attempting to otherwise improve humanity’s long-term trajectory. From a practical perspective, I think that’s a much narrower target to hit than reducing the odds of human extinction.)
An alternative perspective, which I will term the “near-termist view,” holds some appeal for me as well:
- I have substantial uncertainty about population ethics. My personal intuitions still lean against placing too-high moral weight on the outcome of “a very large positive world, which would otherwise not have existed” relative to the outcome of “a moderate number of persons with high well-being, who otherwise would have had low well-being.”
- I find it reasonable to be suspicious, from a heuristic perspective, of reaching fairly unusual and “radical” conclusions based on speculation about the long-term future, even when such speculation is discounted based on subjective probability estimates. One reason is that such speculation seems (or at least is widely believed) to have a very poor historical track record. (We have some ongoing analysis aiming to examine whether the track record is really as poor as seems to be widely believed.)
A “near-termist” view might call for assessing grants based on the amount of good done per dollar that could be observable, in principle, within the next ~50 years;9 or perhaps might be willing to count benefits to future generations, but with a cap of something like 10-100x the number of persons alive today. I think either of these versions of “near-termism” would reduce the consequences of the above two concerns, while having the obvious drawback of excluding important potential value from the assessment of grants.
Similarly to the “animal-inclusive” vs. “human-centric” split, the “long-termist” vs. “near-termist” split is a crude simplification of many possible disagreements. However, in practice, I believe that most people either (a) accept the basic logic of the “long-termist” argument, or (b) reject its conclusions wholesale (often for ambiguous reasons that may be combining moral and methodological judgments in unclear ways) and could reasonably be classified as “near-termist” according to something like the definitions above.
The two views have radically different implications, and I think all four issues listed previously apply, in terms of reasons to consider something other than the default approach to allocation:
- Issue 1: normative uncertainty and philosophical incommensurability. I think a key question for long-termism vs. near-termism is “How should we weigh the outcome of ‘a very large positive world, which would otherwise not have existed’ relative to the outcome of ‘a moderate number of persons with high well-being, who otherwise would have had low well-being’?” Answering this question alone doesn’t resolve whether one should be a long-termist or near-termist, but I think people who weigh the former highly compared to the latter are likely to see a much stronger case that there are many promising long-termist interventions. And like the question about how to weigh animal vs. human welfare, this question could be interpreted as a question of “normative uncertainty” (uncertainty between two different views of morality), in which case it is not clear how to create a single “common metric” for adjudicating between two views. (See above for more detail on the challenges of a common metric. I think a similar analysis applies here.)
- Issue 2: methodological uncertainty and practical incommensurability. As stated above, a major potential reason for taking the near-termist view is being “suspicious, from a heuristic perspective, of reaching fairly unusual and ‘radical’ conclusions based on speculation about the long-term future.” Yet the default approach to allocation essentially comes down to evaluating this concern via an expected-value calculation that is likely (in practice) to be dominated by a highly speculation-based figure (the amount of value to be gained via future generations), and embracing whatever radical implications result. It therefore seems like a question-begging and inappropriate methodology for handling such a concern.
- Issue 3: practical considerations against “putting all our eggs in one basket.” We believe that if we took the default approach in this case, there’s a strong chance that we would end up effectively going “all-in” on something very similar to the long-termist view. I think this would likely result in our entire portfolio being composed of causes where most of our impact is effectively unobservable and/or only applicable in extremely low-probability cases (e.g., global catastrophic risk reduction causes). I think this would be a problem for our ability to continually learn and improve, a problem for our ability to build an informative track record, and a problem on other fronts as discussed above.
- Issue 4: the “outlier opportunities” principle. I think the “outlier opportunities” principle described above is quite relevant in this case, even more so than in the “animal-inclusive vs. human-centric” case. Due to the very large amount of potential value attributable to the long-term future, a long-termist view is likely to de-prioritize even extremely outstanding opportunities to do near-term good. That isn’t necessarily the wrong thing to do, but could certainly give us pause in many imaginable cases, as discussed above.
For these reasons, I think there is some appeal to handling long-termism vs. near-termism using something other than the “default approach” (such as the simple approach mentioned above).
Some additional notes on worldview choices
We consider each possible combination of stances on the above two choices to be a “worldview” potentially worthy of consideration. Specifically, we think it’s worth giving serious consideration to each of: (a) the animal-inclusive long-termist worldview; (b) the animal-inclusive near-termist worldview; (c) the human-centric long-termist worldview; (d) the human-centric near-termist worldview.
That said, I currently believe that (a) and (c) have sufficiently overlapping practical implications that they can likely be treated as almost the same: I believe that a single metric (impact on the odds of civilization reaching a highly enlightened, empowered, and robust state at some point in the future) serves both well.
In addition, there may be other worldview choices that raise similar issues to the two listed above, and similarly call for something other than the “default approach” to allocating capital. For example, we have considered whether something along the lines of sequence vs. cluster thinking might call for this treatment. At the moment, though, my best guess is that the main worldviews we are deciding between (and that raise the most serious issues with the default approach to allocation) are “long-termist,” “near-termist animal-inclusive,” and “near-termist human-centric.”
Some other criteria for capital allocation
Our default starting point for capital allocation is to do whatever maximizes “good accomplished per dollar” according to some common unit of “good accomplished.” The first complication to this approach is the set of “worldview choices” discussed above, which may call for dividing capital into “buckets” using different criteria. This section discusses another complication: there are certain types of giving we’d like to allocate capital to in order to realize certain practical and other benefits, even when they otherwise (considering only their direct effects) wouldn’t be optimal from a “good accomplished per dollar” perspective according to any of the worldviews discussed above.
Scientific research funding
We seek to have a strong scientific research funding program (focused for now on life sciences), which means:
- Retaining top scientists as advisors (both part-time and full-time).
- Accumulating significant experience making and evaluating grants for scientific research, which ought to include the ability to identify and evaluate breakthrough fundamental science.
- Having a strong sense, at any given time, of what is scientifically promising, particularly but not exclusively among science relevant to our focus areas.
We think the benefits of such a program are cross-cutting, and not confined to any one of the worldviews from the previous section:
- When there is promising scientific research relevant to any of the worldviews named above, we will be in a position to quickly and effectively identify and support it.
- Being knowledgeable about scientific research funding will, in my view, greatly improve our long-term potential for building relationships with donors beyond the ones we currently work most closely with.
These benefits are similar to those described in the capacity building and option value section of our previous post on worldview diversification.
In order to realize these benefits, I believe that we ought to allocate a significant amount of funding to scientific research (my current estimate is around $50 million per year, based on conversations with scientific advisors), with a reasonable degree of diversity in our portfolio (i.e., not all on one topic) and a substantial component directed at breakthrough fundamental science. (If we lacked any of these, I believe we would have much more trouble attracting top advisors and grantees and/or building the kind of general organizational knowledge we seek.)
Currently, we are supporting a significant amount of scientific research that is primarily aimed at reducing pandemic risk, while also hopefully qualifying as top-notch, cutting-edge, generically impressive scientific advancement. We have also made a substantial investment in Impossible Foods that is primarily aiming to improve animal welfare. However, because we seek a degree of diversity in the portfolio, we’re also pursuing a number of other goals, which we will lay out at another time.
Policy-oriented philanthropy
We seek to have a strong policy-oriented philanthropy program, which means:
- Retaining top policy-oriented staff.
- Accumulating significant experience making and evaluating policy-oriented grants.
- Having a sense of what policy-oriented opportunities stand out at a given time.
We think the benefits of such a program mirror those discussed in the previous section, and are similarly cross-cutting.
In order to realize these benefits, I believe that we ought to allocate a significant amount of funding to policy-oriented philanthropy, with some degree of diversity in our portfolio (i.e., not all on one topic). In some causes, it may take ~$20 million per year to be the kind of “major player” who can attract top talent as staff and grantees; for some other causes, we can do significant work on a smaller budget.
At the moment, we have a substantial allocation to criminal justice reform. I believe this cause is currently very promising in terms of our practical goals. It has relatively near-term ambitions (some discussion of why this is important below), and Chloe (the Program Officer for this cause) has made notable progress on connecting with external donors (more on this in a future post). At this point, I am inclined to recommend continuing our current allocation to this work for at least the next several years, in order to give it a chance to have the sorts of impacts we’re hoping for (and thus contribute to some of our goals around self-evaluation and learning).
We have smaller allocations to a number of other policy-oriented causes, all of which we are reviewing and may either de-emphasize or increase our commitment to as we progress in our cause prioritization work.
Straightforward charity
Last year, I wrote:
I feel quite comfortable making big bets on unconventional work. But at this stage, given how uncertain I am about many key considerations, I would be uncomfortable if that were all we were doing … I generally believe in trying to be an ethical person by a wide variety of different ethical standards (not all of which are consequentialist). If I were giving away billions of dollars during my lifetime (the hypothetical I generally use to generate recommendations), I would feel that this goal would call for some significant giving to things on the more conventional side of the spectrum. “Significant” need not mean “exclusive” or anything close to it. But I wouldn’t feel that I was satisfying my desired level of personal morality if I were giving $0 (or a trivial amount) to known, outstanding opportunities to help the less fortunate, in order to save as much money as possible for more speculative projects relating to e.g. artificial intelligence.
I still feel this way, and my views on the matter have solidified to some degree. I now would frame this issue as a desire to allocate a significant (though not majority) amount of capital to “straightforward charity”: giving that is clearly and unambiguously driven by a desire to help the less fortunate in a serious, rational, reasonably optimized manner.
Note that this wouldn’t necessarily happen simply due to having a “near-termist, human-centric” allocation. The near-termist and human-centric worldviews are to some extent driven by a suspicion of particular methodologies as justifications for “radicalism,” but both could ultimately be quite consistent with highly unconventional, difficult-to-explain-and-understand giving (in fact, it’s a distinct possibility that some global catastrophic risk reduction work could be justified solely by its impact according to the near-termist, human-centric worldview). It’s possible that optimizing for the worldviews discussed above, by itself, would imply only trivial allocations to straightforward charity, and if so, I’d want to ensure that we explicitly set aside some capital for straightforward charity.
I still haven’t come up with a highly satisfying articulation of why this allocation seems important, and what is lost if we don’t make it. However:
- My intuition on this front is strongly shared by others who sit on the Board of Managers of the Open Philanthropy Project LLC (Alexander, Elie, Cari and Dustin).
- I personally see some appeal in an explanation that revolves around “clear, costly, credible signaling of the right values.” I tend to trust people’s values and motives more when I see them taking actions that have real costs and are clearly and unambiguously driven by a desire to help the less fortunate in a serious, rational, reasonably optimized manner. The fact that some alternate version of myself (less steeped in the topics I’m currently steeped in, such as potential risks of advanced AI) would instinctively distrust me without the “straightforward charity” allocation seems important - it seems to indicate that a “straightforward charity” allocation could have both direct instrumental benefits for our ability to connect with the sorts of people we’re hoping to connect with, and a vaguer benefit corresponding to the idea of “taking actions that make me the sort of person that people like me would rationally trust.”
- Other Board of Managers members generally do not share the above reasoning, but they do agree with the overall intuition that a significant “straightforward charity” allocation is important, and would likely agree to a broader statement along the lines of “Without a significant allocation to straightforward charity, I’d be far more nervous about the possibility and consequences of self-deception, especially since many of our focus areas are now quite ambitious.”
I think it’s likely that we will recommend allocating some capital to a “straightforward charity” bucket, which might be described as: “Assess grants by how many people they help and how much, according to reasonably straightforward reasoning and estimates that do not involve highly exotic or speculative claims, or high risk of self-deception.” (Note that this is not the same as prioritizing “high likelihood of success.”) GiveWell was largely created to do just this, and I see it as the current best source of grant recommendations for the “straightforward charity” bucket.
My interest in “straightforward charity” is threshold-based. The things I’m seeking to accomplish here can be fully accomplished as long as there is an allocation that feels “significant” (which means something like “demonstrates a serious, and costly, commitment to this type of giving”). Our current working figure is 10% of all available capital. Hence, if the rest of our process results in less than 10% of capital going to straightforward charity, we will likely recommend “topping up” the straightforward charity allocation.
In general, I feel that the ideal world would be full of people who focus the preponderance of their time, energy and resources on a relatively small number of bold, hits-based bets that go against established conventional wisdom and the status quo - while also aiming to “check boxes” for a number of other ethical desiderata, some of which ask for a (limited) degree of deference to established wisdom and the status quo. I’ve written about this view before here and here. I also generally am in favor of people going “easy on themselves” in the sense of doing things that make their lives considerably easier and more harmonious, even when these things have large costs according to their best-guess framework for estimating good accomplished (as long as the costs are, all together, reducing impact by <50% or so). Consistent with these intuitions, I feel that a <1% allocation to straightforward charity would be clearly too small, while a >50% allocation would be clearly too large if we see non-straightforward giving opportunities that seem likely to do far more good. Something in the range of 10% seems reasonable.
Causes with reasonable-length feedback loops
As noted above, one risk of too much focus on long-termist causes would be that most of our impact is effectively unobservable and/or only applicable in extremely low-probability cases. This would create a problem for our ability to continually learn and improve, a problem for our ability to build an informative track record, and problems on other fronts as discussed above.
Ensuring that we do a significant amount of “near-termist” work partially addresses this issue, but even when using “near-termist” criteria, many appealing causes involve time horizons of 10+ years. I think there is a case for ensuring that some of our work involves shorter feedback loops than that.
Currently, I am relatively happy with Open Philanthropy’s prospects for doing a reasonable amount of work with short (by philanthropic standards) feedback loops. Our work on farm animal welfare and criminal justice reform already seems to have had some impact (more), and seems poised to have more (if all goes well) in the next few years. So I’m not sure any special allocations for “shorter-term feedback loops” will be needed. But this is something we’ll be keeping our eye on as our allocations evolve.
Allocating capital to buckets and causes
Above, we’ve contrasted two approaches to capital allocation:
- The “default approach”: establish a common metric for all giving, estimate how all causes would likely perform on this metric, and prioritize the ones that perform best. As noted above, this approach seems prone to effectively going “all-in” on one branch of the “animal-inclusive vs. human-centric” choice, as well as the “long-termist vs. near-termist” choice.
- The “diversifying approach”: divide capital into buckets, each of which follows different criteria. There could be a “long-termist” bucket, a “near-termist human-centric” bucket, and a “near-termist animal-inclusive” bucket, each of which is then allocated to causes based on the goals and assumptions of the corresponding worldview. (These could be further divided into buckets with more specific criteria, as discussed below.)
The simplest version of the “diversifying approach” would be to divide capital equally between buckets. However, we think a better version of the diversifying approach would also take into account:
The credence/weight we place on different worldviews relative to each other. Simply put, if one thinks the long-termist worldview is significantly more plausible/appealing than the near-termist worldview, one should allocate more capital to the long-termist bucket (and vice versa). One way of approaching this is to allocate funding in proportion to something like “the probability that one would endorse this worldview as correct if one went through an extensive reflective process like the one described here.” This is of course a major and subjective judgment call, and we intend to handle it accordingly. We also think it’s important to complete writeups that can help inform these judgments, such as a review of the literature on population ethics and an analysis of some possibilities for the number and size of future generations (relevant to the long-termist vs. near-termist choice), as well as our already-completed report on moral patienthood (relevant to the animal-inclusive vs. human-centric choice).
Differences in “total value at stake.” Imagine that one is allocating capital between Worldview A and Worldview B, and that one’s credences in the two worldviews are 80% and 20%, respectively - but if worldview B is correct, its giving opportunities are 1000x as good as the best giving opportunities if worldview A is correct. In this case, there would be an argument for allocating more capital to the buckets corresponding to worldview B, even though worldview B has lower credence, because it has more “total value at stake” in some sense.10
Put another way, one might want to increase the allocation to worldviews that would be effectively favored under the “default” (as opposed to “diversifying”) approach. We expect to make some degree of compromise between these two approaches: worldviews favored by the default approach will likely receive more capital, but not to a degree as extreme as the default approach alone would imply.
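As a rough sketch of how “total value at stake” pulls against credence-proportional allocation, here is the hypothetical 80%/20%, 1000x example in code; the “compromise” rule shown is an arbitrary illustration, not a method we have settled on:

```python
# Hypothetical figures from the example above (80%/20% credence, 1000x value ratio).
# The "compromise" rule at the end is an arbitrary illustration, not a settled method.
credence = {"A": 0.8, "B": 0.2}
value_if_correct = {"A": 1.0, "B": 1000.0}  # relative value per dollar if that worldview is correct

# Credence-proportional allocation ignores differences in value at stake entirely.
proportional = dict(credence)

# The default (expected-value) approach goes nearly all-in on worldview B.
expected_value = {w: credence[w] * value_if_correct[w] for w in credence}  # A: 0.8, B: 200.0
total = sum(expected_value.values())
default_approach = {w: ev / total for w, ev in expected_value.items()}     # A: ~0.4%, B: ~99.6%

# One arbitrary compromise: average the two allocations.
compromise = {w: 0.5 * (proportional[w] + default_approach[w]) for w in credence}
print(compromise)  # B ends up with more than its 20% credence, but far less than ~99.6%
```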
Deals and fairness agreements. We suggest above that the different worldviews might be thought of as different agents with fundamentally different and incommensurable goals, disagreeing about how to spend capital. This metaphor might suggest dividing capital evenly, or according to credence as stated immediately above. It also raises the possibility that such “agents” might make deals or agreements with each other for the sake of mutual benefit and/or fairness.
For example, agents representing (respectively) the long-termist and near-termist worldviews might make a deal along the following lines: “If the risk of permanent civilizational collapse (including for reasons of extinction) in the next 100 years seems to go above X%, then long-termist buckets get more funding than was originally allocated; if the risk of permanent civilizational collapse in the next 100 years seems to go below Y%, near-termist buckets get more funding than was originally allocated.” It is easy to imagine that there is some X and Y such that both parties would benefit, in expectation, from this deal, and would want to make it.
We can further imagine deals that might be made behind a “veil of ignorance” (discussed previously). That is, if we can think of some deal that might have been made while there was little information about e.g. which charitable causes would turn out to be important, neglected, and tractable, then we might “enforce” that deal in setting the allocation. For example, take the hypothetical deal between the long-termist and near-termist worldviews discussed above. We might imagine that this deal had been struck before we knew anything about the major global catastrophic risks that exist, and we can now use the knowledge about global catastrophic risks that we have to “enforce” the deal - in other words, if risks are larger than might reasonably have been expected before we looked into the matter at all, then allocate more to long-termist buckets, and if they are smaller allocate more to near-termist buckets. This would amount to what we term a “fairness agreement” between agents representing the different worldviews: honoring a deal they would have made at some earlier/less knowledgeable point.
Fairness agreements appeal to us as a way to allocate more capital to buckets that seem to have “especially good giving opportunities” in some sense. It seems intuitive that the long-termist view should get a larger allocation if e.g. tractable opportunities to reduce global catastrophic risks seem in some sense “surprisingly strong relative to what one would have expected,” and smaller if they seem “surprisingly weak” (some elaboration on this idea is below).
Methods for coming up with fairness agreements could end up making use of a number of other ideas that have been proposed for making allocations between different agents and/or different incommensurable goods, such as allocating according to minimax relative concession; allocating in order to maximize variance-normalized value; and allocating in a way that tries to account for (and balance out) the allocations of other philanthropists (for example, if we found two worldviews equally appealing but learned that 99% of the world’s philanthropy was effectively using one of them, this would seem to be an argument - which could have a “fairness agreement” flavor - for allocating resources disproportionately to the more “neglected” view). The “total value at stake” idea mentioned above could also be implemented as a form of fairness agreement. We feel quite unsettled in our current take on how best to practically identify deals and “fairness agreements”; we could imagine putting quite a bit more work and discussion into this question.
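A minimal sketch of the kind of threshold-based deal described above; the thresholds, baseline split, and adjustment size are all hypothetical placeholders:

```python
# Illustrative sketch of a threshold-based deal between "agents" representing worldviews.
# All parameters (thresholds, baseline split, adjustment size) are hypothetical placeholders.

def adjusted_allocation(estimated_risk: float,
                        baseline_long_share: float = 0.5,
                        x_threshold: float = 0.20,  # "X%": risk above this favors long-termist buckets
                        y_threshold: float = 0.05,  # "Y%": risk below this favors near-termist buckets
                        adjustment: float = 0.25) -> dict:
    """Return bucket shares after 'enforcing' the deal, given an estimate of the risk of
    permanent civilizational collapse over the next 100 years."""
    long_share = baseline_long_share
    if estimated_risk > x_threshold:
        long_share += adjustment
    elif estimated_risk < y_threshold:
        long_share -= adjustment
    return {"long-termist": long_share, "near-termist": 1 - long_share}

print(adjusted_allocation(0.25))  # risk looks high: {'long-termist': 0.75, 'near-termist': 0.25}
print(adjusted_allocation(0.02))  # risk looks low:  {'long-termist': 0.25, 'near-termist': 0.75}
```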
Practical considerations. We are likely to recommend making some allocations for various practical purposes - in particular, creating buckets for scientific research funding, policy-oriented philanthropy, and straightforward charity, as discussed above.
How will we incorporate all of these considerations? We’ve considered more than one approach to allocation, and we haven’t settled on a definite process yet. For now, a few notes on properties we expect our process to have:
-
- We will likely list a number of buckets we could allocate capital to, each with its own criteria for grants, and each with its own notes on what the best giving opportunities by these criteria will probably look like at different levels of funding. Our take on what each bucket’s best giving opportunities look like will evolve considerably over time, and could become the focus of significant ongoing research. Some buckets might have quite broadly defined criteria (e.g. “maximize impact according to the long-termist worldview”), while some buckets might have considerably more tightly defined criteria (e.g., “fund the best possible breakthrough fundamental science” or even “reduce pandemic risk as much as possible subject to supporting interventions with certain properties”); we will likely experiment with different approaches and think about which ones seem most conducive to highlighting key disagreements.
- Interested staff members will consider both the giving opportunities that we’re guessing correspond to each bucket, and more abstract considerations (e.g., how much credence they place in long-termism vs. near-termism) and write down their own preferred working bucket allocations and total giving trajectory over the next few years. We will likely discuss our differences, and make observations about what each others’ working allocations imply for a number of practical and philosophical considerations, e.g., “This allocation implies very little in the way of redistribution via fairness agreements” or “This allocation misses out on a lot of value according to Worldview X (including by increasing giving too quickly or too slowly).” We’ll try to create a summary of the most important disagreements different staff members have with each other.
- We may try to create something similar to what GiveWell uses for its cost-effectiveness analysis: a spreadsheet where different people can fill in their values for key parameters (such as relative credence in different worldviews, and which ones they think should benefit from various fairness agreements), with explanations and links to writeups with more detail and argumentation for each parameter, and basic analytics on the distribution of inputs (for example, what the median allocation is to each worldview, across all staff members). A minimal illustration of this kind of aggregation appears just after this list.
- Ultimately, this exercise serves to inform funders about how to allocate their funds; the funders of Good Ventures will make the final call about how their capital is allocated, informed by summaries of the judgments and reasoning of staff.
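Here is a minimal sketch of the kind of aggregation such a spreadsheet might perform. The staff labels, bucket names, and all numbers are hypothetical.

```python
# Hypothetical sketch: computing the median allocation to each bucket
# across staff members' working allocations.
import statistics

staff_allocations = {
    "staff_a": {"long-termist": 0.60, "near-termist human-centric": 0.25,
                "near-termist animal-inclusive": 0.05, "straightforward charity": 0.10},
    "staff_b": {"long-termist": 0.45, "near-termist human-centric": 0.30,
                "near-termist animal-inclusive": 0.15, "straightforward charity": 0.10},
    "staff_c": {"long-termist": 0.70, "near-termist human-centric": 0.10,
                "near-termist animal-inclusive": 0.10, "straightforward charity": 0.10},
}

def median_allocation(allocations):
    """Median share assigned to each bucket across all staff inputs."""
    buckets = allocations[next(iter(allocations))].keys()
    return {b: statistics.median(a[b] for a in allocations.values()) for b in buckets}

print(median_allocation(staff_allocations))
# Note: medians of shares need not sum to exactly 1, so any output like this
# would need renormalizing (or further discussion) before use.
```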
Likely outputs
This section discusses some reasonably likely outputs from the above process. All of these could easily change dramatically, but the general outputs listed here seem likely enough to help give readers a picture of what assumptions we are tentatively allowing to affect our planning today.
Global catastrophic risks and other long-term-oriented causes
I see several reasons to expect that we will recommend a very large allocation to global catastrophic risks and other causes primarily aimed at raising the odds of good long-term outcomes for civilization:
-
- My impression is that this work is likely to be very strong according to the goals/metrics of both the “long-termist animal-inclusive” and “long-termist human-centric” worldviews. In fact, I suspect that for practical purposes, it may be reasonable to treat these as a single hybrid worldview.
- As discussed above, I suspect that the “default approach” to allocation would result in a heavy allocation to long-termist causes. In particular, long-termism has “the most value at stake” in some sense, as the potential value to be had via future generations is extremely large.11
- I also feel that long-termism is likely to perform quite well according to a variety of ways of implementing the “fairness agreements” framework. If our views on transformative artificial intelligence are correct, for example, that would seem to make this point in time something of an outlier, in terms of opportunities to positively affect likely long-term outcomes for civilization. (I think there are further, more general arguments that this point in time is particularly high-leverage.) Furthermore, I believe that global catastrophic risk reduction is quite neglected in the scheme of things. For both of these reasons, it seems that giving opportunities are in some sense “better than one might have initially expected” (more neglected, more tractable) for long-termism.
Put differently, despite my own current skepticism about the population ethics that seems most conducive to long-termism,
-
- It seems to me that there are strikingly strong (perhaps even historical outlier) opportunities to improve long-term outcomes for civilization.
- If (as seems likely enough) the corresponding population ethics is more reasonable than it currently seems to me, such improvement could have enormous stakes, far greater than the value at stake for any other worldview.
- This is true regardless of one’s position on the moral weight of nonhuman animals.
- Such opportunities are largely neglected by other philanthropists.
Given such a situation, it seems reasonable to me to devote a very large part of our resources to this sort of giving.
I note that the case for long-termism has largely been brought to our attention via the effective altruism community, which has emphasized similar points in the past.12 I think the case for this sort of giving is initially unintuitive relative to e.g. focusing on global health, but I think it’s quite strong, and that gives some illustration of the value of effective altruism itself as an intellectual framework and community.
I think it is reasonably likely that we will recommend allocating >50% of all available capital to giving directly aimed at improving the odds of favorable long-term outcomes for civilization. This could include:
-
- Expanding our current work on potential risks from advanced AI, biosecurity and pandemic preparedness and other global catastrophic risks.
- Giving aimed more broadly at promoting international peace and cooperation (in particular, reducing the odds of war between major military powers), which could reduce a number of global catastrophic risks, including those we haven’t specifically identified.
- Giving aimed at improving the process by which high-stakes decisions (particularly with respect to emerging technologies, such as synthetic biology and artificial intelligence) are made. This could include trying to help improve the functioning of governments and democratic processes generally.
- A variety of other causes, including supporting long-termist effective altruist efforts.
Policy-oriented philanthropy and scientific research funding
As indicated above, we will likely want to ensure that we have substantial, and somewhat diversified, programs in policy-oriented philanthropy and scientific research funding, for a variety of practical reasons. I expect that we will recommend allocating at least $50 million per year to policy-oriented causes, and at least $50 million per year to scientific-research-oriented causes, for at least the next 5 or so years.
Many details remain to be worked out on this front. When possible, we’d like to accomplish the goals of these allocations while also accomplishing the goals of other worldviews; for example, we have funded scientific research that we feel is among the best giving opportunities we’ve found for biosecurity and pandemic preparedness, while also making a major contribution to the goals we have for our scientific research program. However, there is also some work that will likely not be strictly optimal (considering only the direct effects) from the point of view of any of the worldviews listed in this section. We choose such work partly for reasons of inertia from previous decisions, the preferences of specialist staff, and the like, as well as an all-else-equal preference for reasonable-length feedback loops (though we will always take importance, neglectedness, and tractability strongly into account).
Straightforward charity
As discussed above, we will likely recommend allocating something like 10% of available capital to a “straightforward charity” worldview, which in turn will likely correspond (for the near future) to following GiveWell recommendations. The implications for this year’s allocation to GiveWell’s top charities are discussed below.
Other outputs
I expect to recommend a significant allocation to near-termist animal-inclusive causes, and I expect that this allocation would mostly go to farm animal welfare in the near to medium term.
Beyond the above, I’m quite unsure of how our allocation will end up.
However, knowing the above points gives us a reasonable amount to work with in planning for now. It looks like we will maintain (at least for the next few years), but not necessarily significantly expand, our work on criminal justice reform, farm animal welfare, and scientific research, while probably significantly expanding our work on global catastrophic risk reduction and related causes.
Smoothing and inertia
When working with cause-specific specialist staff, we’ve found it very helpful to establish relatively stable year-to-year budgets. This helps them plan; it also means that we don’t need to explicitly estimate the cost-effectiveness of every grant and compare it to our options in all other causes. The latter is relatively impractical when much of the knowledge about a grant lives with the specialist while much of the knowledge of other causes lives with others. In other words, rather than decide on each grant separately using information that would need to be integrated across multiple staff, we try to get an overall picture of how good the giving opportunities tend to be within a given focus area and then set a relatively stable budget, after which point we leave decisions about which grants to make mostly up to the specialist staff.
We’ve written before about a number of other benefits to committing to causes. In general, I believe that philanthropy (and even more so hits-based philanthropy) operates best on long time frames, and works best when the philanthropist can make use of relationships and knowledge built over the course of years.
For these reasons, the ultimate output of our framework is likely to incorporate aspects of conservatism, such as:
-
- A tendency to stay in causes for which we have built knowledge and connections, hired staff, and started having impact, for at least long enough that we have the opportunity to have significant impact and perform a fair assessment of our work as a whole, as well as fulfill any implicit commitments we’ve made to grantees and potential grantees. We will wind down areas for which we no longer see any strong justification, but when a cause performs well on many of our practical goals, it will often be easier to stay in the cause than to switch to causes that are slightly better in the abstract (but incur large switching costs for ourselves and for others).
- A tendency to avoid overly frequent changes in our allocations to causes. We might tend toward taking a rough guess at what causes seem most promising for a given set of goals, using this to determine likely cause allocations and the accompanying plans around staffing and budgeting, and then sticking with these plans for significant time, revisiting only periodically.
- A practice of “smoothing” major budgetary changes, especially when we are decreasing the budget to a given cause. When we determine that the optimal allocation to a cause would be $X/yr, and it’s currently $Y/yr, we might move from $Y/yr to $X/yr gradually over the course of several years.
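As a simple illustration of the “smoothing” idea in the last bullet, here is a minimal sketch that assumes, purely for illustration, that we close the gap in equal annual steps over a fixed number of years.

```python
# Hypothetical sketch: smoothing a budget change over several years.
def smooth_budget(current, target, years=3):
    """Return a budget path moving from `current` toward `target`
    in equal annual steps over `years` years (figures in $M/yr)."""
    step = (target - current) / years
    return [round(current + step * (i + 1), 2) for i in range(years)]

# Example: reducing a cause's budget from $30M/yr to $15M/yr over three years.
print(smooth_budget(30.0, 15.0))  # -> [25.0, 20.0, 15.0]
```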
Funding aimed directly at better informing our allocation between buckets
Making informed and thoughtful decisions about capital allocation is very valuable to us, and we expect it to be an ongoing use of significant staff time over the coming years.
We’re also open to significant capital allocations aimed specifically at this goal (for example, funding research on the relative merits of the various worldviews and their implicit assumptions) if we see good opportunities to make them. Our best guess is that, by default, there will be a relatively small amount (in dollars) of such opportunities. It’s also possible that we could put significant time into helping support the growth of academic fields relevant to this topic, which could lead to more giving opportunities along these lines; I’m currently unsure about how worthwhile this would be, relative to other possible uses of the same organizational capacity.
2017 allocation to GiveWell top charities
For purposes of our 2017 year-end recommendation, we started from the assumption that 10% of total available capital will eventually go to a “straightforward charity” bucket that is reasonably likely to line up fairly well with GiveWell’s work and recommendations. (Note that some capital from other buckets could go to GiveWell recommendations as well, but since the “straightforward charity” bucket operates on a “threshold” basis as described above, this would not change the allocation unless the total from other worldviews exceeded 10%; it is possible that this will end up happening, but we aren’t currently planning around that.)
We further split this 10% into two buckets of 5% each:
-
- One, the “fixed percentage of total giving” bucket, allocates 5%*(total Open Philanthropy giving) to straightforward charity each year. Because some of the goals of this worldview relate to signaling and to virtue-ethics-like intuitions, they can’t all be met by having given to straightforward charity in the past; they are best met by giving significantly each year. The “fixed percentage of total giving” bucket seeks to ensure that the “straightforward charity” allocation is at least 5% for each year that we are active.
- The other is the “flexible” bucket: 5% of all available capital, restricted to “straightforward charity” but otherwise allocated in whatever way maximizes impact, which could include spending it all during a particularly high-impact time. For this bucket, we presented GiveWell with a range of possible 2017 allocations ranging from “aggressive” (2017 giving that, if repeated, would spend down the whole “flexible” 5% allocation - net of investment returns - in 5 years) to “conservative” (2017 giving that, if repeated, would spend down the whole “flexible” 5% allocation - net of investment returns - in 50+ years).
- GiveWell settled close to the “aggressive” end of the spectrum. It reasoned that the advantages of giving now (mostly listed here) are larger than the expected financial returns, though it wanted to spend down slowly enough to preserve some option value in case it finds unexpectedly strong giving opportunities (something that seems most likely sometime in the next 10 years).
- We took the figure GiveWell had chosen and the figure we had estimated for the “fixed percentage of total giving” bucket, and added these to arrive at an estimated total budget for all GiveWell-related 2017 giving, including general operating support for GiveWell itself and support for GiveWell Incubation Grants. We then subtracted the amount that had already been allocated to the latter two to arrive at a tentative figure for 2017 giving to GiveWell’s top charities. Finally, we checked to make sure the change wasn’t too big compared to last year’s giving (in line with the “smoothing” idea discussed above) and rounded off to $75 million.
The result of all this was a $75 million allocation to GiveWell’s top charities for 2017. As GiveWell stated, “the amount was based on discussions about how to allocate funding across time and across cause areas. It was not set based on the total size of top charities’ funding gaps or the projection of what others would give.”
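To illustrate the arithmetic described above, here is a sketch with made-up placeholder figures; the post does not publish the underlying numbers (only the final $75 million is real), and the even spend-down below ignores investment returns for simplicity.

```python
# Hypothetical sketch of the 2017 GiveWell top charities calculation.
# Every dollar figure below is an illustrative placeholder.
def top_charities_estimate(total_open_phil_giving, flexible_capital,
                           spend_down_years, already_allocated_support):
    fixed_bucket = 0.05 * total_open_phil_giving           # 5% of this year's total giving
    flexible_bucket = flexible_capital / spend_down_years  # even spend-down, ignoring returns
    total_givewell_related = fixed_bucket + flexible_bucket
    # Subtract support already allocated to GiveWell operations and Incubation Grants.
    return total_givewell_related - already_allocated_support

# Placeholder inputs: $250M total giving, $400M of "flexible" capital spent
# down over 5 years (the aggressive end), $20M already allocated.
print(top_charities_estimate(250e6, 400e6, 5, 20e6))  # -> 72500000.0
# The real process then applied smoothing against last year's giving and rounding.
```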
No more unified benchmark
A notable outcome of the framework we’re working on is that we will no longer have a single “benchmark” for giving now vs. later, as we did in the past. Rather, grants will be compared to the “last dollar” spent within the same bucket. For example, we will generally make “long-termist” grants when they are better (by “long-termist” criteria) than the last “long-termist” dollar we’d otherwise spend.
We think this approach is a natural outcome of worldview diversification, and will make it far more tractable to start estimating “last dollar” values and making our benchmarks more systematic. It is part of a move from (a) Open Philanthropy making decisions grant-by-grant to (b) Open Philanthropy’s cross-cause staff recommending allocations at a high level, followed by its specialist staff deciding which grants to make within a given cause.
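In simplified terms, the per-bucket decision rule looks something like the sketch below; the units and numbers are hypothetical, and each bucket would score grants by its own criteria.

```python
# Hypothetical sketch of the per-bucket "last dollar" benchmark.
def should_fund(grant_value_per_dollar, bucket_last_dollar_value):
    """Fund a grant if it beats the estimated value of the last dollar
    the bucket would otherwise spend, by that bucket's own metric."""
    return grant_value_per_dollar > bucket_last_dollar_value

# Example: a long-termist grant scored at 12 "units" per dollar, against an
# estimated long-termist last-dollar value of 8 units per dollar.
print(should_fund(12, 8))  # -> True
```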
Our future plans for this work
This post has given a broad outline of the framework we are contemplating, and some reasonably likely outputs from this framework. But we have a lot of work left to do before we have a solid approach to cause prioritization.
Over the coming year, we hope to:
-
- Complete some public content on key inputs into the case for and against particular worldviews we’ve highlighted above. These will include writeups on population ethics and on possible properties (particularly size) of civilization in the future, both of which relate to the long-termist vs. near-termist choice.
- Complete an investigation into the literature on normative uncertainty and procedures for handling it appropriately, including the possibility of modeling different normative views as different agents with incommensurable goals who can make deals and “fairness agreements.”
- Further consider (though not settle) the question of which worldview choices we want to handle with something other than the “default approach” to capital allocation.
- Go through an iteration of the exercise sketched above and reach a working guess about our giving trajectory over the next few years, as well as how we will allocate capital to different buckets and how we will think about the threshold for making a given grant vs. saving the money for later over that time frame.
This work has proven quite complex, and we expect that it could take many years to reach reasonably detailed and solid expectations about our long-term giving trajectory and allocations. However, this is arguably the most important choice we are making as a philanthropist - how much we want to allocate to each cause in order to best accomplish the many and varied values we find important. We believe it is much easier to increase the budget for a given cause than to decrease it, and thus, we think it is worth significant effort to come to the best answer we can before ramping up our giving to near-peak levels. This will hopefully mean that the main thing we are deciding on is not what parts of our work to cut (though there may be some of this), but rather which parts of our work are to grow the most.
-
- 1.
Here “relevant” means “animals that we see significant opportunities to cost-effectively help.”
- 2.
See this footnote.
- 3.
I’ll note that I do not feel there is much of a place for a human-centric view based on (c) the empirical view that nonhuman animals are not “conscious” in a morally relevant way: I have become convinced that there is presently no evidence base that could reasonably justify high confidence in this view, and I think the “default approach” to budget allocation would be appropriate for handling one’s uncertainty on this topic.
- 4.
Adjusted for the degree of improvement.
- 5.
Adjusted for degree of improvement.
- 6.
Supporting this statement is outside the scope of this post. We’ve previously written about grants that we estimate can spare over 200 hens from cage confinement for each dollar spent; contrasting this estimate with GiveWell’s cost-effectiveness figures can give an idea of where we’re coming from, but we plan to make more detailed comparisons in the future.
- 7.
As noted above, a >10% probability on the animal-inclusive view would lead chickens to be valued >0.1% as much as humans (using the “method A” approach I find most intuitive), which would likely imply a great deal of resources devoted to animal welfare relative to near-term human-focused causes. For now, we bracket the debate over a “long-termist” worldview that could make this distinction fairly moot, though we note it in a later section of this post.
- 8.
Other assumptions could have similar consequences to (a). (a) has the relevant consequences because it implies that improving the expected long-term trajectory of civilization - including reducing the risk of extinction - could do enormous amounts of good. Other population ethics frameworks could have similar consequences if they hold that the size of the intrinsic moral difference between “humanity has an extremely good future (very large numbers of people, very excellent lives, filling a large fraction of available time and space)” and “humanity does not survive the coming centuries” is sufficiently greater than the intrinsic moral importance of nearer-term (next century or so) events.
- 9.
And barring sufficiently radical societal transformations, i.e., one might exclude impact along the lines of “This intervention could affect a massive number of people if the population explodes massively beyond what is currently projected.”
- 10.
Philosophical incommensurability is a challenge for making this sort of determination. We think it’s unclear where and to what extent philosophical incommensurability applies, which is one reason we are likely to end up with a solution between what the assumption of incommensurability implies and what the assumption of commensurability implies.
- 11.
See Greaves and Ord 2017 for an argument along these lines.
- 12.
-
Comments
This is useful to know,
This is useful to know, thanks for writing it all up. I’m repeating myself from a comment on one of Luke’s previous posts, but on 1.1.1.1 I think funding some philosophy PhD students to work on moral uncertainty could be a pretty good use of money. My impression is it’s not a very sexy subject yet, so more attention on it could bring results. Plus philosophers are cheap. :)
Out of interest, why are you ‘generally … in favor of people going “easy on themselves”’?
Speaking intuitively
Speaking intuitively/anecdotally, it seems to me that people who “push themselves” a little at a time (such that they remain somewhat close to their comfort zone at any given time) end up improving a lot over time, relative to people who try to “push themselves” immediately to the behavior/performance they explicitly estimate to be optimal. Thinking about this claim in the context of exercise, diet, or skills development is a good way to imagine the intuition I have here. I think of giving to “non-straightforward charity” as something that causes certain kinds of discomfort, such that the analogy is at least somewhat relevant.
Thanks. I agree, the key is
Thanks. I agree, the key is to drag your emotional comfort zones along with you and not get too far out ahead. I disagree with some of the other connotations of “go easy on yourself”, since I think it’s important to be ethically ambitious and aim at things over the long term that necessarily seem difficult and uncomfortable to your current self. Sounds like we don’t disagree much though, given your comment references pushing your comfort zones rather than settling into them.
The potential for different
The potential for different worldviews to trade against each other sounds generally like a good thing, but it seems worth being aware that it does to some extent allow errors in one bucket / by one “agent” to affect allocations in another bucket / agent.
It sounds like you are
It sounds like you are thinking hard about how to spend a limited amount of resources in a world where there are more needs than resources. Could another approach be to first determine a basic standard or quality of life across all the worldviews, ask how much it would cost to fund it all, and then raise the money? What I am getting at is whether it is correct to assume that we can’t find the money to solve multiple problems at the same time. Maybe we haven’t done it in the past not because the money isn’t there, but because the leadership to do the analysis and galvanize the money isn’t happening? In saying that, I’m not qualifying how the money is spent - some might go to direct relief, some to system-building solutions. Also, you might be interested in the philosopher Ruth Chang’s online talk on making hard decisions. She explains how, when you face a situation where two options are equally good or equally bad, you can’t use analysis to know what to do; rather, you have to make the decision based on who you want to become - which is a key part of how humans build character. In some ways I find it hard to think about a person making a decision that might impact millions of others based on the process of building their own character and values, yet on the other hand, the reason I trust other people with important decisions is precisely because of their character… thanks for sharing your thinking and opening this discussion.
I see that you recommended a
I see that you recommended a grant for suicide means restriction in the form of pesticide ban policy work. Pesticide restriction is by far the most important form of means restriction in the world, but I wonder if OpenPhil may be considering moving on to what is, imho, #2: charcoal bans in Asia Pacific? Ever consider that? In East Asia, charcoal has greatly increased overall suicides, according to numerous studies.
Keep up the good work!
Thanks Austen, I’ll pass this
Thanks Austen, I’ll pass this along to the folks at GiveWell who recommended that grant.
Suicide means restriction is
Suicide means restriction is an interesting one when it comes to worldview diversification… I’ve heard interesting arguments that it could actually be negative. Especially if the future turns out to be dystopian, and a moral norm against suicide prevents people from escaping.
Severe negative utilitarian,
Severe negative utilitarian, I’m familiar. I think that anyone reading about EA should be aware that it is used as a branding tactic by anti-altruists, people who want to cause as much harm to others as possible. Remember, anyone can CALL themselves an “effective altruist.”
I don’t think one needs to be
I don’t think one needs to be a severe negative utilitarian to see this position as worth considering.
I was glad to see that you
I was glad to see that you mentioned that some global catastrophic risk work could be justified from the near-termist perspective. I think this is particularly true for risks like pandemics and nuclear war. Have you thought about how to quantitatively evaluate that advantage? For instance, I have argued that work on alternate foods (https://en.wikipedia.org/wiki/Feeding_Everyone_No_Matter_What) - foods not dependent on sunlight, for catastrophes that block the sun - does well from a global catastrophic risk perspective (http://effective-altruism.com/ea/1g9/should_we_be_spending_no_less_on_alternate_foods/), but is also very cost-effective from a near-termist perspective (https://link.springer.com/article/10.1007/s13753-016-0097-2).
We haven’t vetted the cost
We haven’t vetted the cost-effectiveness argument linked here, but are likely to do so at some point.
The feedback loops point is
The feedback loops point is an interesting one. I suppose that if you want your grants to be maximally informative, the best grants to give will be the ones where you initially rate the probability of success to be about 50%.
Thanks for this interesting
Thanks for this interesting post. You say “I have substantial uncertainty about population ethics. My personal intuitions still lean against placing too-high moral weight on the outcome of “a very large positive world, which would otherwise not have existed” relative to the outcome of “a moderate number of persons with high well-being, who otherwise would have had low well-being.””
I think it needs to be made clear that this entails that you have substantial (?) credence in favour of a view (person-affecting theory) which implies that we should do absolutely nothing about climate change. This cannot, then, be presented as sturdy common sense, since common sense does not say that, if climate change is real and costly to future generations, we should do nothing about it.
(Re-posting so this gets
(Re-posting so this gets threaded properly)
I disagree that the view I articulated entails what you say it entails. There are many possible population ethics with the property I described; not all are person-affecting and I think few have the precise property you described re: climate change.
I am happy to concede that all approaches to population ethics seem to have some highly unappealing implications (including pluralistic compromises, and “non-philosophical” approaches that don’t have a fully specified utility function and are likely “living with inconsistencies” to a degree).