December 2016

Good Ventures and Giving Now vs. Later (2016 Update)

Note: in this post, “we” refers to the Open Philanthropy Project. I use “I” for cases where I am going into detail on thoughts of mine that don’t necessarily reflect the views of the Open Philanthropy Project as such, though they have factored into our decision-making.

Last year, we wrote about the question:

Once we have investigated a potential grant, how do we decide where the bar is for recommending it? With all the uncertainty about what we’ll find in future years, how do we decide when grant X is better than saving the money and giving later?

(The full post is here; note that it is on the GiveWell website because we had not yet launched the Open Philanthropy Project website.)

In brief, our answer was to consider both:

  • An overall budget for the year, which we set at 5% of available capital. This left room to give a lot more than we gave last year.
  • A benchmark. We determined that we would recommend giving opportunities when they seemed like a better use of money than direct cash transfers to the lowest-income people possible, as carried out by GiveDirectly, subject to some other constraints (being within the budget indicated above, having done enough investigation for an informed decision, and some other complicating factors and adjustments). A rough sketch of the shape of this decision rule follows this list.

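To make the shape of that decision rule slightly more concrete, here is a minimal sketch in Python. It is purely illustrative: the capital figure, the normalization of the benchmark to 1, and the should_recommend function are placeholders, and the real process involves judgment calls and adjustments that a few lines of code can't capture.

```python
# Illustrative sketch only: the numbers, the normalization of the benchmark to 1,
# and the function itself are hypothetical, not our actual figures or process.

AVAILABLE_CAPITAL = 10_000_000_000        # hypothetical available capital, in dollars
ANNUAL_BUDGET = 0.05 * AVAILABLE_CAPITAL  # the 5%-of-capital annual budget described above
BENCHMARK = 1.0                           # GiveDirectly-style cash transfers, normalized to 1

def should_recommend(cost_effectiveness, amount, committed_so_far, well_investigated):
    """Recommend a grant if it beats the cash-transfer benchmark, fits within
    the annual budget, and has been investigated enough for an informed decision."""
    beats_benchmark = cost_effectiveness > BENCHMARK
    within_budget = committed_so_far + amount <= ANNUAL_BUDGET
    return beats_benchmark and within_budget and well_investigated

# Example: a $5M grant estimated at 3x the benchmark, with $100M already committed this year.
print(should_recommend(cost_effectiveness=3.0, amount=5_000_000,
                       committed_so_far=100_000_000, well_investigated=True))
```
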
This topic is particularly important when deciding how much to recommend that Good Ventures donate to GiveWell’s top charities. It is also becoming more important overall because our staff capacity and total giving have grown significantly this year. Changing the way we think about the “bar for recommending a grant” could potentially change decisions about tens of millions of dollars’ worth of giving.

We have put some thought into this topic since last year, and our thinking has evolved noticeably. This post outlines our current views, while also noting that I believe we failed to put as much thought into this question as we should have in 2016, and that we hope to do more in 2017.

Worldview Diversification

In principle, we try to find the best giving opportunities by comparing many possibilities. However, many of the comparisons we’d like to make hinge on very debatable, uncertain questions.

For example:

  • Some people think that animals such as chickens have essentially no moral significance compared to that of humans; others think that they should be considered comparably important, or at least 1-10% as important. If you accept the latter view, farm animal welfare looks like an extraordinarily outstanding cause, potentially to the point of dominating other options: billions of chickens are treated incredibly cruelly each year on factory farms, and we estimate that corporate campaigns can spare over 200 hens from cage confinement for each dollar spent. But if you accept the former view, this work is arguably a poor use of money.
  • Some have argued that the majority of our impact will come via our effect on the long-term future. If true, this could be an argument that reducing global catastrophic risks has overwhelming importance, or that accelerating scientific research does, or that improving the overall functioning of society via policy does. Given how difficult it is to make predictions about the long-term future, it’s very hard to compare work in any of these categories to evidence-backed interventions serving the global poor.
  • We have additional uncertainty over how we should resolve these sorts of uncertainty. We could try to quantify our uncertainties using probabilities (e.g. “There’s a 10% chance that I should value chickens 10% as much as humans”), and arrive at a kind of expected value calculation for each of many broad approaches to giving. But most of the parameters in such a calculation would be very poorly grounded and non-robust, and it’s unclear how to weigh calculations with that property. In addition, such a calculation would run into challenges around normative uncertainty (uncertainty about morality), and it’s quite unclear how to handle such challenges. (A toy sketch of what such a calculation might look like follows this list.)

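To make the mechanics of that kind of calculation concrete, here is a toy sketch in Python. The only inputs taken from this post are the “10% chance of valuing chickens 10% as much as humans” example and the roughly 200-hens-per-dollar estimate above; every other parameter is an invented placeholder, and the point of the sketch is how sensitive the output is to poorly grounded inputs, not any particular result.

```python
# Toy expected-value sketch across "worldviews" about chicken moral weight.
# All parameters not quoted in the post are invented placeholders.

# (probability, moral weight of a chicken relative to a human)
chicken_weight_views = [
    (0.10, 0.10),    # 10% chance chickens matter 10% as much as humans (the example above)
    (0.90, 0.0001),  # 90% chance they matter far less (placeholder value)
]
expected_weight = sum(p * w for p, w in chicken_weight_views)

hens_spared_per_dollar = 200      # the corporate-campaign estimate cited above
benefit_per_hen_spared = 0.01     # hypothetical benefit of averting cage confinement, in human-benefit units

value_per_dollar = expected_weight * hens_spared_per_dollar * benefit_per_hen_spared
print(f"Expected value per dollar (arbitrary units): {value_per_dollar:.4f}")

# Nearly all of the output is driven by the placeholder parameters, which is
# exactly the non-robustness problem described above.
```
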
In this post, I’ll use “worldview” to refer to a set of highly debatable (and perhaps impossible to evaluate) beliefs that favor a certain kind of giving. One worldview might imply that evidence-backed charities serving the global poor are far more worthwhile than either of the types of giving discussed above; another might imply that farm animal welfare is; another might imply that global catastrophic risk reduction is. A given worldview represents a combination of views, sometimes very difficult to disentangle, such that uncertainty between worldviews is constituted by a mix of empirical uncertainty (uncertainty about facts), normative uncertainty (uncertainty about morality), and methodological uncertainty (e.g. uncertainty about how to handle uncertainty, as laid out in the third bullet point above). Some slightly more detailed descriptions of example worldviews are in a footnote.1

A challenge we face is that we consider multiple different worldviews plausible. We’re drawn to multiple giving opportunities that some would consider outstanding and others would consider relatively low-value. We have to decide how to weigh different worldviews, as we try to do as much good as possible with limited resources.

When deciding between worldviews, there is a case to be made for simply taking our best guess2 and sticking with it. If we did this, we would focus exclusively on animal welfare, or on global catastrophic risks, or on global health and development, or on another category of giving, with no attention to the others. However, that’s not the approach we’re currently taking.

Instead, we’re practicing worldview diversification: putting significant resources behind each worldview that we find highly plausible. We think it’s possible for us to be a transformative funder in each of a number of different causes, and we don’t, as of today, want to pass up that opportunity in order to focus exclusively on one cause and run into rapidly diminishing returns.
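As a purely illustrative sketch of the diminishing-returns point: suppose, hypothetically, that the good accomplished within a single cause grows only logarithmically with funding. Under that assumption, splitting a fixed budget across two comparably promising causes does more total good than concentrating it all in one. Nothing below is a model we actually use; it just shows why rapidly diminishing returns push toward spreading resources.

```python
import math

def value(funding_millions):
    """Hypothetical concave (logarithmic) returns to funding within one cause."""
    return math.log1p(funding_millions)

budget = 100  # an illustrative $100M budget

concentrated = value(budget)                          # everything into one cause
diversified = value(budget / 2) + value(budget / 2)   # split evenly across two causes

print(f"concentrated: {concentrated:.2f}  diversified: {diversified:.2f}")
# With these made-up assumptions, the split allocation does more total good,
# because the marginal dollar in an already heavily funded cause buys less.
```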