The GiveWell Blog
November 29th, 2011

Top charities for holiday season 2011: Against Malaria Foundation and Schistosomiasis Control Initiative

GiveWell has published our annual update on how to accomplish as much good as possible with your donations.

Our top two charities - out of hundreds we’ve examined - are (1) the Against Malaria Foundation, which fights malaria using insecticide-treated bednets, and (2) the Schistosomiasis Control Initiative, which treats children for intestinal worms.

Our update is the result of a full year of intensive research: examining hundreds of charities, contacting the most promising ones, and completing in-depth investigations that include

  • Conversations with representatives
  • Examination of internal documentation including monitoring and evaluation reports, budgets, and plans for using additional funding
  • Reviewing independent literature and evidence of effectiveness of the charities’ programs
  • Site visits to charities’ work in the field

We have published the full details of our process, including a list of all charities examined and reviews for those examined in-depth.

Our top two charities are outstanding on all fronts. They execute proven, cost-effective programs for helping people. They have strong track records. They have concrete future plans and room for more funding. They are transparent and accountable to donors.

We also have identified five other standout organizations for donors interested in other causes. These are GiveDirectly (cash grants to poor households in Kenya), Innovations for Poverty Action (research on how to fight poverty and promote development), Nyaya Health (healthcare in rural Nepal), Pratham (primary education in India), and Small Enterprise Foundation (microfinance in South Africa).

Note that last year’s top-rated charity, VillageReach, does not have projected short-term funding needs (it expects to be able to meet these needs with funds not driven by GiveWell), as discussed previously.

The charities above all work in the developing world. Our top recommendation for donors who want to support causes in the United States is KIPP Houston, an outstanding charter school network facing budget cuts.

Over the last year, we drove over $1.6 million to our top-rated charities. We hope to drive substantially more over the coming year.

November 18th, 2011

New charity recommendations forthcoming by December 1; blog posts on hold until then

As we wrote previously, we’re expecting to have a substantially revised set of charity recommendations by December 1. We’re currently in the final stages of writing up our cases for top contenders and discussing which will be top-rated. Because of this, we plan to suspend our blog posts (which are normally at least weekly) until then.

November 10th, 2011

Maximizing Cost-effectiveness via Critical Inquiry

We’ve recently been writing about the shortcomings of formal cost-effectiveness estimation (i.e., trying to estimate how much good, as measured in lives saved, DALYs or other units, is accomplished per dollar spent). After conceptually arguing that cost-effectiveness estimates can’t be taken literally when they are not robust, we found major problems in one of the most prominent sources of cost-effectiveness estimates for aid, and generalized from these problems to discuss major hurdles to usefulness faced by the endeavor of formal cost-effectiveness estimation.

Despite these misgivings, we would be determined to make cost-effectiveness estimates work, if we thought this were the only way to figure out how to allocate resources for maximal impact. But we don’t. This post argues that when information quality is poor, the best way to maximize cost-effectiveness is to examine charities from as many different angles as possible - looking for ways in which their stories can be checked against reality - and support the charities that have a combination of reasonably high estimated cost-effectiveness and maximally robust evidence. This is the approach GiveWell has taken since our inception, and it is more similar to investigative journalism or early-stage research (other domains in which people look for surprising but valid claims in low-information environments) than to formal estimation of numerical quantities.

The rest of this post

  • Conceptually illustrates (using the mathematical framework laid out previously) the value of examining charities from different angles when seeking to maximize cost-effectiveness.
  • Discusses how this conceptual approach matches the approach GiveWell has taken since inception.

Conceptual illustration

I previously laid out a framework for making a “Bayesian adjustment” to a cost-effectiveness estimate. I stated (and posted the mathematical argument) that when considering a given cost-effectiveness estimate, one must also consider one’s prior distribution (i.e., what is predicted for the value of one’s actions by other life experience and evidence) and the variance of the estimate error around the cost-effectiveness estimate (i.e., how much room for error the estimate has). This section works off of that framework to illustrate the potential importance of examining charities from multiple angles - relative to formally estimating their cost-effectiveness - in low-information environments.

I don’t wish to present this illustration either as official GiveWell analysis or as “the reason” that we believe what we do. This is more of an illustration/explication of my views than a justification; GiveWell has implicitly (and intuitively) operated consistent with the conclusions of this analysis, long before we had a way of formalizing these conclusions or the model behind them. Furthermore, while the conclusions are broadly shared by GiveWell staff, the formal illustration of them should only be attributed to me.

The model

Suppose that:

  • Your prior over the “good accomplished per $1000 given to a charity” is normally distributed with mean 0 and standard deviation 1 (denoted from this point on as N(0,1)). Note that I’m not saying that you believe the average donation has zero effectiveness; I’m just denoting whatever you believe about the impact of your donations in units of standard deviations, such that 0 represents the impact your $1000 has when given to an “average” charity and 1 represents the impact your $1000 has when given to “a charity one standard deviation better than average” (top 16% of charities).
  • You are considering a particular charity, and your back-of-the-envelope initial estimate of the good accomplished by $1000 given to this charity is represented by X. It is a very rough estimate and could easily be completely wrong: specifically, it has a normally distributed “estimate error” with mean 0 (the estimate is as likely to be too optimistic as too pessimistic) and standard deviation X (so 16% of the time, the actual impact of your $1000 will be 0 (“average”) or worse).* Thus, your estimate is denoted as N(X,X).

The implications

I use “initial estimate” to refer to the formal cost-effectiveness estimate you create for a charity - along the lines of the DCP2 estimates or Back of the Envelope Guide estimates. I use “final estimate” to refer to the cost-effectiveness you should expect, after considering your initial estimate and making adjustments for the key other factors: your prior distribution and the “estimate error” variance around the initial estimate. The following chart illustrates the relationship between your initial estimate and final estimate based on the above assumptions.

Note that there is an inflection point (X=1), past which your final estimate falls as your initial estimate rises. With such a rough estimate, the maximum value of your final estimate is 0.5 no matter how high your initial estimate says the value is. In fact, once your initial estimate goes “too high,” the final estimated cost-effectiveness falls.
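
A minimal sketch of this relationship (mine, not GiveWell's spreadsheet), assuming the standard normal-normal Bayesian update with the parameters above; the closed-form posterior mean, X / (X^2 + 1), reproduces the figures cited here:

```python
# Posterior mean of "good accomplished per $1000" after combining a N(0,1) prior
# with a single rough estimate distributed N(X, X) (mean X, standard deviation X),
# using the standard normal-normal Bayesian update.

def final_estimate(initial_estimate):
    X = initial_estimate
    prior_mean, prior_var = 0.0, 1.0
    est_var = X ** 2  # the estimate error has standard deviation X
    precision = 1 / prior_var + 1 / est_var
    return (prior_mean / prior_var + X / est_var) / precision  # = X / (X**2 + 1)

for X in [0.5, 1, 2, 5, 1000]:
    print(f"initial estimate {X:>6}: final estimate {final_estimate(X):.3f}")
# Peaks at 0.5 when X = 1, then falls toward 0 as the initial estimate grows.
```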

This is in some ways a counterintuitive result. A couple of ways of thinking about it:

  • Informally: estimates that are “too high,” to the point where they go beyond what seems easily plausible, seem - by this very fact - more uncertain and more likely to have something wrong with them. Again, this point applies to very rough back-of-the-envelope style estimates, not to more precise and consistently obtained estimates.
  • Formally: in this model, the higher your estimate of cost-effectiveness goes, the higher the error around that estimate is (both are represented by X), and thus the less information is contained in this estimate in a way that is likely to shift you away from your prior. This will be an unreasonable model for some situations, but I believe it is a reasonable model when discussing very rough (“back-of-the-envelope” style) estimates of good accomplished by disparate charities. The key component of this model is that of holding the “probability that the right cost-effectiveness estimate is actually ‘zero’ [average]” constant. Thus, an estimate of 1 has a 67% confidence interval of 0-2; an estimate of 1000 has a 67% confidence interval of 0-2000; the former is a more concentrated probability distribution.
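
One way to see this “constant probability of zero” property numerically (my own check under the model's assumptions, not anything from GiveWell's files): for any X, a N(X, X) estimate places the same roughly 68% (“67%”) of its probability mass between 0 and 2X.

```python
import math

def norm_cdf(z):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def mass_between_0_and_2X(X):
    """Probability that a N(X, sd=X) value lands in the interval [0, 2X]."""
    return norm_cdf((2 * X - X) / X) - norm_cdf((0 - X) / X)

for X in [1, 10, 1000]:
    print(f"X = {X:>5}: P(0 <= value <= 2X) = {mass_between_0_and_2X(X):.3f}")
# Prints ~0.683 for every X: the interval from 0 to 2X always has the same coverage,
# so larger estimates are spread over proportionally wider ranges.
```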

Now suppose that you make another, independent estimate of the good accomplished by your $1000, for the same charity. Suppose that this estimate is equally rough and comes to the same conclusion: it again has a value of X and a standard deviation of X. So you have two separate, independent “initial estimates” of good accomplished, and both are N(X,X). Properly combining these two estimates into one yields an estimate with the same average (X) but less “estimate error” (standard deviation = X/sqrt(2)). Now the relationship between X and adjusted expected value changes:

Now you have a higher maximum (for the final estimated good accomplished) and a later inflection point - higher estimates can be taken more seriously. But it’s still the case that “too high” initial estimates lead to lower final estimates.

The following charts show what happens if you manage to collect even more independent cost-effectiveness estimates, each one as rough as the others, each one with the same midpoint as the others (i.e., each is N(X,X)).

The pattern here is that when you have many independent estimates, the key figure is X, or “how good” your estimates say the charity is. But when you have very few independent estimates, the key figure is K - how many different independent estimates you have. More broadly - when information quality is good, you should focus on quantifying your different options; when it isn’t, you should focus on raising information quality.
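
Under the same assumptions (again my sketch, not GiveWell's calculations), K equally rough, independent estimates that all say X combine into a single estimate N(X, X/sqrt(K)), giving a final estimate of K*X / (X^2 + K); its maximum grows only like sqrt(K)/2:

```python
import math

def final_estimate(X, K):
    """Posterior mean with a N(0,1) prior and K independent estimates, each N(X, X).
    Combined, the K estimates behave like one estimate N(X, X / sqrt(K))."""
    return K * X / (X ** 2 + K)

for K in [1, 2, 4, 10]:
    best_X = math.sqrt(K)  # the initial estimate at which the final estimate peaks
    print(f"K = {K:>2}: peak at X = {best_X:.2f}, "
          f"max final estimate = {final_estimate(best_X, K):.2f}")
# Maxima of ~0.50, 0.71, 1.00, 1.58: adding independent angles (raising K) lifts
# the ceiling on how seriously a high estimate can be taken.
```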

A few other notes:

  • The full calculations behind the above charts are available here (XLS). We also provide another Excel file that is identical except that it assumes a variance for each estimate of X/2, rather than X. This places “0” just inside your 95% confidence interval for the “correct” version of your estimate. While the inflection points are later and higher, the basic picture is the same.
  • It is important to have a cost-effectiveness estimate. If the initial estimate is too low, then regardless of evidence quality, the charity isn’t a good one. In addition, very high initial estimates can imply higher potential gains to further investigation. However, “the higher the initial estimate of cost-effectiveness, the better” is not strictly true.
  • Independence of estimates is key to the above analysis. In my view, different formal estimates of cost-effectiveness are likely to be very far from independent because they will tend to use the same background data and assumptions and will tend to make the same simplifications that are inherent to cost-effectiveness estimation (see previous discussion of these simplifications here and here).

    Instead, when I think about how to improve the robustness of evidence and thus reduce the variance of “estimate error,” I think about examining a charity from different angles - asking critical questions and looking for places where reality may or may not match the basic narrative being presented. As one collects more data points that support a charity’s basic narrative (and weren’t known to do so prior to investigation), the variance of the estimate falls, which is the same thing that happens when one collects more independent estimates. (Though it doesn’t fall as much with each new data point as it would with one of the idealized “fully independent cost-effectiveness estimates” discussed above.)

  • The specific assumption of a normal distribution isn’t crucial to the above analysis. I believe (based mostly on a conversation with Dario Amodei) that for most commonly occurring distribution types, if you hold the “probability of 0 or less” constant, then as the midpoint of the “estimate/estimate error” distribution approaches infinity the distribution becomes approximately constant (and non-negligible) over the area where the prior probability is non-negligible, resulting in a negligible effect of the estimate on the prior.

    While other distributions may involve later/higher inflection points than normal distributions, the general point that there is a threshold past which higher initial estimates no longer translate to higher final estimates holds for many distributions.

The GiveWell approach

Since the beginning of our project, GiveWell has focused on maximizing the amount of good accomplished per dollar donated. Our original business plan (written in 2007 before we had raised any funding or gone full-time) lays out “ideal metrics” for charities such as

number of people whose jobs produce the income necessary to give them and their families a relatively comfortable lifestyle (including health, nourishment, relatively clean and comfortable shelter, some leisure time, and some room in the budget for luxuries), but would have been unemployed or working completely non-sustaining jobs without the charity’s activities, per dollar per year. (Systematic differences in family size would complicate this.)

Early on, we weren’t sure of whether we would find good enough information to quantify these sorts of things. After some experience, we came to the view that most cost-effectiveness analysis in the world of charity is extraordinarily rough, and we then began using a threshold approach, preferring charities whose cost-effectiveness is above a certain level but not distinguishing past that level. This approach is conceptually in line with the above analysis.

It has been remarked that “GiveWell takes a deliberately critical stance when evaluating any intervention type or charity.” This is true, and in line with how the above analysis implies one should maximize cost-effectiveness. We generally investigate charities whose estimated cost-effectiveness is quite high in the scheme of things, and so for these charities the most important input into their actual cost-effectiveness is the robustness of their case and the number of factors in their favor. We critically examine these charities’ claims and look for places in which they may turn out not to match reality; when we investigate these and find confirmation rather than refutation of charities’ claims, we are finding new data points that support what they’re saying. We’re thus doing something conceptually similar to “increasing K” according to the model above. We’ve recently written about all the different angles we examine when strongly recommending a charity.

We hope that the content we’ve published over the years, including recent content on cost-effectiveness (see the first paragraph of this post), has made it clear why we think we are in fact in a low-information environment, and why, therefore, the best approach is the one we’ve taken, which is more similar to investigative journalism or early-stage research (other domains in which people look for surprising but valid claims in low-information environments) than to formal estimation of numerical quantities.

As long as the impacts of charities remain relatively poorly understood, we feel that focusing on robustness of evidence holds more promise than focusing on quantification of impact.

*This implies that the variance of your estimate error depends on the estimate itself. I think this is a reasonable thing to suppose in the scenario under discussion. Estimating cost-effectiveness for different charities is likely to involve using quite disparate frameworks, and the value of your estimate does contain information about the possible size of the estimate error. In our model, what stays constant across back-of-the-envelope estimates is the probability that the “right estimate” would be 0; this seems reasonable to me.

November 4th, 2011

Some Considerations Against More Investment in Cost-Effectiveness Estimates

When we started GiveWell, we were very interested in cost-effectiveness estimates: calculations aiming to determine, for example, the “cost per life saved” or “cost per DALY saved” of a charity or program. Over time, we’ve found ourselves putting less weight on these calculations, because we’ve been finding that these estimates tend to be extremely rough (and in some cases badly flawed).

One can react to what we’ve been finding in different ways: one can take it as a sign that we need to invest more in cost-effectiveness estimation (in order to make it more accurate and robust), or one can take it as a sign that we need to invest less in cost-effectiveness estimation (if one believes that estimates are unlikely to become robust enough to take literally and that their limited usefulness can be achieved with less investment). At this point we are tentatively leaning more toward the latter view; this post lays out our thinking on why.

This post does not argue against the conceptual goal of maximizing cost-effectiveness, i.e., achieving the maximal amount of good per dollar donated. We strongly support this conceptual goal; rather, we are arguing that focusing on directly estimating cost-effectiveness is not the best way to maximize cost-effectiveness. We believe there are alternative ways of maximizing cost-effectiveness - in particular, making limited use of cost-effectiveness estimates while focusing on finding high-quality evidence (an approach we have argued for previously and will likely flesh out further in a future post).

In a nutshell, we argue that the best currently available cost-effectiveness estimates - despite having extremely strong teams and funding behind them - have the problematic combination of being extremely simplified (ignoring important but difficult-to-quantify factors), extremely sensitive (small changes in assumptions can lead to huge changes in the figures), and not reality-checked (large flaws can persist unchecked - and unnoticed - for years). We believe it is conceptually difficult to improve on all three of these at once: improving on the first two is likely to require substantially greater complexity, which in turn will worsen the ability of outsiders to understand and reality-check estimates. Given the level of resources that have been invested in creating the problematic estimates we see now, we’re not sure that really reliable estimates can be created using reasonable resources - or, perhaps, at all.

We expand on these points using the case study of deworming, the only DCP2 estimate that we have enough detail on to be able to fully understand and reconstruct.

Simplicity of the estimate

The estimate is extremely simplified. It consists of

  • Costs: two possible figures for “cost per child treated,” one for generic drugs and one for name-brand drugs. These figures are drawn from a single paper (a literature review published 3 years prior to the publication of the estimate); costs are assumed to scale linearly with the number of children treated, and to be constant regardless of the region.
  • Drug effectiveness: for each infection, a single “effectiveness” figure is used, i.e., treatment is assumed to reduce disease burden by a set percentage for a given disease. For each infection, a single paper is used as the source of this “effectiveness” figure.
  • Symptoms averted: the prevalence of different symptoms is assumed to be different by region, but the regions are broad (there are 6 total regions). Prevalence figures are taken from a single paper. The severity of each symptom is assumed to be constant regardless of context, using standard disability weights. Effective treatment is presumed to prevent symptoms for exactly one year, with no accounting for externalities, side effects, or long-term effects (in fact, in the original calculation even deaths are assumed to be averted for only one year).
  • Putting it all together: the estimate calculates benefits of deworming by estimating the number of children cured of each symptom for a single year (based on the six regional figures re: how common symptoms are), converting to DALYs using its single set of figures on how severe each symptom is, and multiplying by the single drug effectiveness figure. It divides these DALY-denominated benefits into the costs, which are again done using a single per-child figure.
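
To make this structure concrete, here is a stylized sketch of a calculation with this shape. The inputs below are hypothetical placeholders, not the DCP2's actual figures; the point is only how the pieces combine, and how a single prevalent, heavily weighted symptom can dominate the result.

```python
# Stylized shape of a DCP2-style deworming estimate (hypothetical inputs throughout).
# Benefits: children treated x symptom prevalence x drug effectiveness
#           x disability weight x 1 year of symptoms averted, summed over symptoms.
# Costs:    children treated x a single cost-per-child figure.

COST_PER_CHILD = 0.25        # hypothetical cost per child treated, in dollars
DRUG_EFFECTIVENESS = 0.9     # hypothetical fraction of disease burden removed

# (prevalence among treated children, disability weight) - placeholder values only
SYMPTOMS = [
    (0.20, 0.463),   # a common, heavily weighted symptom
    (0.001, 0.024),  # a rarer, milder symptom
]

def cost_per_daly(children_treated):
    dalys_averted = sum(
        children_treated * prevalence * DRUG_EFFECTIVENESS * weight * 1.0  # 1 year
        for prevalence, weight in SYMPTOMS
    )
    total_cost = children_treated * COST_PER_CHILD
    return total_cost / dalys_averted

print(f"${cost_per_daly(1_000_000):.2f} per DALY")
# Nearly all of the modeled benefit comes from the first symptom, so the result
# swings wildly if that symptom's prevalence or weight is revised.
```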

No sensitivity analysis is included to examine how cost-effectiveness would vary if certain figures or assumptions turned out to be off. No adjustments are made to address issues such as (a) the high uncertainty of many of the figures (which has implications for overall cost-effectiveness); (b) the fact that figures are taken from a relatively small number of studies, and are thus likely to be based on unusually well-observed programs.

In our view, any estimate this simple and broad has very limited application when examining a specific charity operating in a specific context.

Sensitivity of the estimate

The estimate is extremely sensitive to changes in inputs. In the course of examining it and trying different approaches to estimating the cost-effectiveness of deworming, we arrived at each of the following figures at one point or another:

Cost per DALY for STH treatment | Key assumptions behind this cost
$3.41 | original DCP2 calculation
$23.92 | +corrected disability weight of ascariasis symptoms
$256 | -corrected disability weight of ascariasis symptoms; +corrected prevalence interpretation for all STHs and symptoms and disability weight of trichuriasis symptoms
$529 | +corrected disability weight of ascariasis symptoms
$385 | +incorrectly accounting for long-term effects
$326 | -incorrectly accounting for long-term effects; +corrected duration of trichuriasis symptoms
$138 | +correctly accounting for long-term effects
$82.54 | Jonah’s independent estimate, implicitly accounting for long-term effects and using lower drug costs

Our final corrected version of the DCP2’s estimate varies heavily across regions as well:

Cost per DALY for STH treatment | Region
$77.39 | East Asia & Pacific
$83.16 | Latin America & Caribbean
$412.22 | Middle East & North Africa
$202.69 | South Asia
$259.57 | Sub-Saharan Africa

Lack of reality-checks

As we wrote previously, we believe that a helminth expert reviewing this calculation would have noticed the errors that we pointed to. This is because when one examines the details of the (uncorrected) estimate, it becomes clear that nearly all of the benefits of deworming are projected to come from a single symptom of a single disease - a symptom which is, in fact, only believed to be about 1/20 as severe as the calculation implies, and only about 1/100 as common.

So why wasn’t the error caught between its 2006 publication (and numerous citations) and our 2011 investigation? We can’t be sure, but we can speculate that

  • The DALY metric - while it has the advantage of putting all health benefits in the same units - is unintuitive. We don’t believe it is generally possible to look at a cost-per-DALY figure and compare it with one’s informal knowledge of an intervention’s costs and benefits (though it is more doable when the benefits are concentrated in preventing mortality, which eliminates one of the major issues with interpreting DALYs).
  • That means that in order to reality-check an estimate, one needs to look at the details of how it was calculated.
  • But looking at the details of how an estimate is calculated is generally a significant undertaking - even for an estimate as simple as this one. It requires a familiarity with the DALY framework and with the computational tools being used (in this case Excel) that a subject matter expert - the sort of person who would be best positioned to catch major problems - wouldn’t necessarily have. And it may require more time than such a subject matter expert will realistically have available.

In most domains, a badly flawed calculation - when used - will eventually produce strange results and be noticed. In aid, by contrast, one can use a completely wrong figure indefinitely without ever finding out. The only mechanism for catching problems is to have a figure that is sufficiently easy to understand that outsiders (i.e., those who didn’t create the calculation) can independently notice what’s off. It appears that the DCP2 estimates do not pass this test.

Our point here isn’t about the apparent lack of formal double-check in the DCP2’s process (though this does affect our view of the DCP2) but about the lack of reality-check in the 5 years since publication - the fact that at no point did anyone notice that the figure seemed off, and investigate its origin.

And the problem pertains to more than “catching errors”; it also pertains to being able to notice when the calculation becomes out of line with (for example) new technologies, new information about the diseases and interventions in question, or local conditions in a specific case. An estimate that can’t be - or simply isn’t - continually re-examined for its overall and local relevance may be “correct,” but its real-world usefulness seems severely limited.

The dilemma: the less simplified and sensitive, the more esoteric

It currently appears to us that the general structure of these estimates is too simplified and sensitive to be reliable without relatively constant reality-checks from outsiders (particularly subject matter experts), but so complex and esoteric that these reality-checks haven’t been taking place.

Improving the robustness and precision of the estimates would likely have to mean making them far more complex, which in turn could make it far more difficult for outsiders (including subject matter experts) to make sense of them, adapt them to new information and local conditions, and give helpful feedback.

The resources that have already been invested in these cost-effectiveness estimates are significant. Yet in our view, the estimates are still far too simplified, sensitive, and esoteric to be relied upon. If such a high level of financial and (especially) human-capital investment leaves us this far from having reliable estimates, it may be time to rethink the goal.

All that said - if this sort of analysis were the only way to figure out how to allocate resources for maximal impact, we’d be advocating for more investment in cost-effectiveness analysis and we’d be determined to “get it right.” But in our view, there are other ways of maximizing cost-effectiveness that can work better in this domain - in particular, making limited use of cost-effectiveness estimates while focusing on finding high-quality evidence (an approach we have argued for previously and will likely flesh out further in a future post).

October 26th, 2011

GiveWell is aiming to have a new #1 charity by December

Our current top-rated charity is VillageReach. In 2010, we directed over $1.1 million to it, which met its short-term funding needs (i.e., its needs for the next year or so).

VillageReach still has longer-term needs, and in the absence of other giving opportunities that we consider comparable, we’ve continued to feature it as #1 on our website. However, we’ve also been focusing most of our effort this year on identifying and investigating other potential top-rated charities, with the hope that we can refocus attention on an organization with shorter-term needs this December. (In general, the vast bulk of our impact on donations comes in December.) We believe that we will be able to do so. We don’t believe we’ll be able to recommend a giving opportunity as good as giving to VillageReach was last year, but given VillageReach’s lack of short-term (1-year) room for more funding, we do expect to have a different top recommendation by this December.

We haven’t been updating our rankings continuously; we prefer to do very deep investigations of top contenders, and aim for an all-at-once refresh in time for December. This is largely because we’ve continued to raise the bar for what it takes to become a top charity. For example, since we’ve found field visits to be useful, we now have a strong preference to avoid naming a charity “top-rated” before we’ve seen its work on the ground (for this reason, staff is currently split up between Malawi and India, visiting contender charities; we will post notes and pictures after we return and get the content approved by charities we’ve visited). More generally, we are looking to examine a charity from many different angles and have a high level of confidence before we start directing significant funds to it.

Bottom line - by December, we will have a new “top-rated” charity. This is not a “demotion” of VillageReach; rather, it reflects our success in directing enough funding to it to close its short-term gap.

October 18th, 2011

What it takes to evaluate impact

When someone asks me what makes GiveWell different from other third-party charity evaluators, I often answer by listing all the things we’ve done in order to investigate our current top-rated charity, VillageReach.

All in all, we’ve spent hundreds of hours examining VillageReach - yet we still feel very far from being “settled” on the question of how promising its activities are. Like any outstanding opportunity to do good, VillageReach’s work involves large and complex challenges. We’ll never have 100% of the relevant information or 100% certainty on its merits, but because we’ve recommended VillageReach so highly and moved over $1 million to it, it’s important to us that we do the best we can.

It isn’t realistic to do this kind of in-depth investigation for thousands (or even hundreds) of charities. We have to save our resources for the most promising charities if we want to have a reasonable level of confidence in our top recommendations. That means we take shortcuts on less promising charities, and we don’t put in the work it would take to distinguish between “worst,” “bad,” “mediocre” and “decent” groups - we’re laser-focused on the ones that we consider “best.”

Other independent charity evaluators tend to measure themselves by how many charities they rate. They exist largely for donors who already know where they want to give, and want a basic legitimacy check before they finalize the donation. To accommodate this goal, these other evaluators need to be far less thorough and more simplified than we are. That means - in our view - that they have no realistic chance of ever meaningfully rating impact, i.e., the degree to which a charity is succeeding at its mission.

GiveWell isn’t for everyone. Donors looking to check the charity they already want to give to are better off with other resources. But for donors who don’t already have a charity in mind and are looking to maximize their impact, we don’t know of any other group that provides a comparable product.

October 11th, 2011

GiveWell Labs: Our Criteria for Giving Opportunities

We’re starting a new initiative, GiveWell Labs, an arm of our research process that will be open to any giving opportunity, no matter what form and what sector.

This post lays out, very broadly, what qualities we are looking for in giving opportunities. Future posts will elaborate on each of these criteria, and we will also discuss how we think these criteria apply to specific areas of philanthropy. Readers will hopefully be left with a strong sense of our beliefs and biases and what we’re looking for.

The main things we’re looking for in a giving opportunity are:

  1. Upside: we’d prefer to fund projects that have the potential to go extremely well. Projects aiming to demonstrate a model that can be scaled up, generate new scientific knowledge that can be used by many others, or put a program in place that eventually becomes self-sustaining independent of philanthropic support all have “upside.” Simply aiming to deliver insecticide-treated nets using established delivery methods does not have much “upside” (though it may score well on many of these other criteria).
  2. High likelihood of success: we’d prefer to fund projects that are very likely to do a respectable amount of good per dollar. The “evidence base” of a project - i.e., the set of past well-understood events that can be used to put its likelihood of success in context - is key here. Obviously this criterion will often be in tension with the “upside” criterion; the ideal for us is a project that has both, i.e., a project that’s both very likely to do some good and has some possibility of doing enormous amounts of good (we think that giving to VillageReach in 2010 fit into this category).
  3. Accountability. We’re OK with funding a project that might fail, but it’s very important to us that we be able to recognize, document, publicly discuss, and learn from such a failure if it happens. We thus have a strong preference to fund projects with specific and meaningful deliverables that will give us a strong sense of whether things are going as hoped (as well as permission to publish updates on these deliverables).

    We are relatively new to giving and plan to be doing a lot more of it in the future, so making sure that early projects are learning opportunities is crucial.

  4. People we’re confident in. We prefer to fund projects where we are impressed by and confident in the people involved. However, our take on how to evaluate people seems to be different from that of some other funders; we’ll elaborate in a future post.
  5. Room for more funding. We prefer to fund projects that would not happen without our funding. This means that we aren’t actually looking for the “best ways to spend philanthropic funds”; we’re looking for the “best ways to spend philanthropic funds that aren’t already on the agendas of other funders.”

We don’t have an explicit formula for weighing the above criteria against each other. Broadly speaking, we’d prefer to fund an opportunity that is strong on all of the following: (a) at least one of #1 and #2; (b) at least one of #3 and #4; (c) #5. (Note that we do not feel the approach of estimating ‘expected good accomplished’ for each project, and simply ranking by this metric, is a good way to maximize actual expected good accomplished; for more, see the body and comments of a recent post on expected-value calculations.)

One more consideration is leverage: we prefer projects where our funding mobilizes more funding from other givers as well, thus multiplying the impact of our funds in some sense. However, we think this is far less important than the criteria listed above. We’d rather fund a great project all on our own, and leave other funders to spend on their own projects, than get a 5:1 or 100:1 funding match from others on a project that is weak on the above criteria.

If you think we’re missing any important impact-related criteria, please let us know.

October 5th, 2011

Update on GiveWell’s web traffic / money moved: Q3 2011

In addition to evaluations of other charities, GiveWell publishes substantial evaluation of itself, from the quality of its research to its impact on donations. This year, we have added quarterly updates regarding two key metrics: (a) donations to top charities made directly through our website and (b) web traffic.

Money moved

By “money moved” we mean donations to our top charities that we can confidently identify as being made on the strength of our recommendation. This update focuses only on “money moved” that comes through GiveWell’s website; we’ll report on all donations due to GiveWell’s research at the end of the year (when the majority of large gifts occur).

While money moved through the website is only a fraction of overall money moved (and is also far greater in December than in other months), we believe this is a meaningful metric for tracking our progress/growth (as opposed to overall influence).

The charts below show dollars donated and the number of donations by month. Overall, growth in 2011 has been strong.


We report money moved to each of our recommended charities annually, but we don’t plan on including this information in quarterly reports because (a) there are some donations that have been made but that we can’t yet attribute to an organization; (b) overall we don’t feel these figures are very meaningful or good predictors of what the year-end allocation will be.

Web traffic

The table below shows quarterly web traffic to GiveWell’s website.

Quarter Visitors Y/Y growth
Q1 2009 20,681 -
Q2 2009 14,974 -
Q3 2009 18,418 -
Q4 2009 45,956 -
Q1 2010 48,027 132%
Q2 2010 33,173 122%
Q3 2010 27,729 51%
Q4 2010 68,870 50%
Q1 2011 89,588 87%
Q2 2011 102,506 209%
Q3 2011 115,482 316%

The charts below show our web traffic over time, including the latest quarter.


September 29th, 2011

Errors in DCP2 cost-effectiveness estimate for deworming

Two notes on this post:

  • This post discusses flaws in a particular published cost-effectiveness estimate for deworming. It should not be taken as a general argument against deworming as a promising intervention, and it does not address various other publications on deworming including the 2003 paper by Edward Miguel and Michael Kremer.
  • Prior to publication, we sent a draft of this post to several relevant scholars including the authors of the estimate. They have reviewed our work and confirmed the major errors we point out.

Over the past few months, GiveWell has undertaken an in-depth investigation of the cost-effectiveness of deworming, a treatment for parasitic worms that are very common in some parts of the developing world. While our investigation is ongoing, we now believe that one of the key cost-effectiveness estimates for deworming is flawed, and contains several errors that overstate the cost-effectiveness of deworming by a factor of about 100. This finding has implications not just for deworming, but for cost-effectiveness analysis in general: we are now rethinking how we use published cost-effectiveness estimates for which the full calculations and methods are not public.

The cost-effectiveness estimate in question comes from the Disease Control Priorities in Developing Countries (DCP2), a major report funded by the Gates Foundation. This report provides an estimate of $3.41 per disability-adjusted life-year (DALY) for the cost-effectiveness of soil-transmitted-helminth (STH) treatment, implying that STH treatment is one of the most cost-effective interventions for global health. In investigating this figure, we have corresponded, over a period of months, with six scholars who had been directly or indirectly involved in the production of the estimate. Eventually, we were able to obtain the spreadsheet that was used to generate the $3.41/DALY estimate. That spreadsheet contains five separate errors that, when corrected, shift the estimated cost effectiveness of deworming from $3.41 to $326.43. We came to this conclusion a year after learning that the DCP2’s published cost-effectiveness estimate for schistosomiasis treatment - another kind of deworming - contained a crucial typo: the published figure was $3.36-$6.92 per DALY, but the correct figure is $336-$692 per DALY. (This figure appears, correctly, on page 46 of the DCP2.)

We do believe that the corrected DCP2 calculations are too harsh on deworming; our best estimate of the cost-effectiveness of deworming is in between the corrected and uncorrected DCP2 figures, at $30-$80 per DALY. In addition, there are strong arguments for deworming as an excellent intervention that do not depend on these figures. Overall we consider deworming a highly promising (though not the single most promising) intervention; we will be discussing our thoughts on this intervention further in the future. This post focuses not on deworming in general, but on the DCP2 figures and what lessons we should take from the flaws in them.

  • The estimates on deworming are the only DCP2 figures we’ve gotten enough information on to examine in-depth. Getting to this point took a lot of work and communication with a number of different scholars, so we aren’t sure of the extent to which other estimates might also turn out to be flawed if examined closely.
  • We believe that the errors we’ve found in the estimate would have been caught by a helminth expert independently examining the estimate. Therefore, the presence of these errors implies to us that there has been no such examination. If this is the case, it would argue against the reliability of the DCP2’s estimates in general.
  • We’ve previously argued for a limited role for cost-effectiveness estimates; we now think that the appropriate role may be even more limited, at least for opaque estimates (e.g., estimates published without the details necessary for others to independently examine them) like the DCP2’s.
  • More generally, we see this case as a general argument for expecting transparency, rather than taking recommendations on trust - no matter how pedigreed the people making the recommendations. Note that the DCP2 was published by the Disease Control Priorities Project, a joint enterprise of The World Bank, the National Institutes of Health, the World Health Organization, and the Population Reference Bureau, which was funded primarily by a $3.5 million grant from the Gates Foundation. The DCP2 chapter on helminth infections, which contains the $3.41/DALY estimate, has 18 authors, including many of the world’s foremost experts on soil-transmitted helminths.
  • It is possible that we have made errors in our corrections to the calculation. One of the reasons we go to great lengths to be transparent is because we want our errors to be caught as quickly as possible.

Outline for the remainder of this post:

  • About the DCP2’s estimate
  • Why we decided to look into the DCP2’s deworming estimates
  • Our process for investigating the estimate
  • Problems with the official estimate of the cost-effectiveness of deworming
  • Our independent estimate of the cost-effectiveness of STH treatment

About the DCP2’s estimate

The DCP2 was published by the Disease Control Priorities Project, a joint enterprise of The World Bank, the National Institutes of Health, the World Health Organization, and the Population Reference Bureau, which was funded primarily by a $3.5 million grant from the Gates Foundation.

The Gates Foundation also appears to have invested substantially in the dissemination of the DCP2’s findings, including a $4.4 million grant to the Population Reference Bureau to “disseminate key messages from [the DCP2].”

The DCP2 aims to estimate the cost-effectiveness of different health interventions, in terms of dollars per disability-adjusted life-year (DALY) saved, in order to prioritize the most cost-effective interventions–the ones that will have the largest effects in reducing mortality and morbidity for a given amount of funding. The DCP2’s published estimates imply that soil-transmitted helminth (STH) treatment is one of the cheapest ways to improve health: the same “amount of health” could be provided by spending $1 on STH deworming or roughly $34 on family planning programs or more than $90 on treating drug-resistant tuberculosis. In fact, it appears that the DCP2 rates STH treatment as the second most cost-effective health intervention of all, behind only hygiene promotion (p. 41).

The DCP2’s cost-effectiveness estimates for deworming have been cited widely to advocate a greater focus on treating STH infections, including in:

  • an article (PDF) in The Lancet
  • a report (PDF) by REACH, a consortium of large international NGOs and other organizations working to end child hunger, which labeled deworming one of 11 “promoted interventions”
  • the most-cited paper (PDF) published in the journal International Health
  • an editorial by Peter Hotez, a co-founder of the Global Network for Neglected Tropical Diseases, which has received more than $40 million in funding from the Gates Foundation
  • work by charity evaluators, such as GiveWell, Giving What We Can, and the University of Pennsylvania’s Center for High Impact Philanthropy.

Why we decided to look into the DCP2’s deworming estimates

We undertook this research because:

  • We wanted to do a case study of a cost-effectiveness estimate from the DCP2, understanding the full details of what goes into it and where the room for error is.
  • We were particularly curious about the estimate for treatment of soil-transmitted helminths since the published $3.41 per DALY averted figure didn’t seem to sync with what we knew about the costs and effectiveness of STH treatment (or the independent estimate of $280/DALY given by another study, as we’ve mentioned previously).
  • We also wanted to focus on STH treatment since the DCP2 rates it as the second most cost-effective health intervention of all, behind only hygiene promotion.
  • Finally, we wanted to learn more about deworming after Elie visited the Schistosomiasis Control Initiative in London and we became more optimistic about this organization than we had been.

Our process for investigating the estimate

GiveWell took the following steps to investigate the DCP2’s estimate for the cost effectiveness of STH deworming:

  • We initially contacted Peter Hotez, the lead author of the DCP2 chapter on intestinal nematode infections; he sent us several papers on the costs and effectiveness of deworming and referred us to another scholar to explain the calculation that the DCP2 had published.
  • This scholar, in turn, referred us to two more, who sent us further references in response to our questions.
  • At this point we had an extended back-and-forth trying to understand the details of the calculation that had been done, and since we weren’t sure we would reach a conclusion on this, we asked volunteer Jonah Sinick to use all the references we’d been sent to create his own best-guess estimate of the cost-effectiveness of deworming. This estimate implied a significantly higher cost per DALY than the published figure, which seemed strange since we were now using the references and inputs suggested to us by the chapter authors.
  • The scholars we had been corresponding with sent us a spreadsheet with the full details of the calculation, as well as an accompanying table, which we will call Table 9, that had been used to input some of the figures in the spreadsheet. Here is the PDF of Table 9 that we were sent.
  • However, the interpretation of the numbers from Table 9 was still unclear to us. Table 9 is not clearly labeled; the scholars involved in the calculation appeared to have conflicting interpretations of what the numbers meant, and both meanings were highly counterintuitive to us (details below).
  • So we contacted another scholar who had worked on Table 9 to get her help in interpreting it. She sent us the full paper from which Table 9 was taken, Intestinal Nematode Infections, and this paper appeared to have a different interpretation of Table 9 than the spreadsheet’s. We confirmed this with her.
  • We also found the disability weights being used counterintuitive, and after some investigation we received confirmation that they were erroneous (details below).
  • All in all, we found five errors in the estimate, not all of which were attributable to the creator of the spreadsheet.

Problems with the official estimate of the cost-effectiveness of deworming

The basic approach of the estimate is to:

  • Calculate the benefits of deworming by
    • Starting from a population of schoolchildren being dewormed;
    • Estimating the percentage of these children suffering from different symptoms of infection;
    • Using the above, estimating the number of children cured of these symptoms (the estimate assumes that they are cured for exactly one year, since reinfection can occur after deworming);
    • Incorporating the severity of symptoms to arrive at DALYs saved by the deworming
  • Separately calculate the costs of deworming this population of schoolchildren, and divide costs by DALYs to obtain the cost per DALY.

When we examined the details of the official estimate, it struck us that nearly all of the DALYs saved (i.e., nearly all of the benefit) were coming from the reduction of a single symptom of a single worm infection: cognitive impairment due to ascariasis (we abbreviate this as CIDTA). Specifically, the figures going into the estimate implied that:

  • In a hypothetical population of 208,530 children (age 5-14 in Latin America) treated, 45,060 suffer from CIDTA. (Cells C44 and L44 in “ascariasis” sheet). That’s about 22%.

  • The disability weight of CIDTA is 0.463 (cell E8). While these figures are difficult to interpret, this implies that having CIDTA is about half as bad as being dead (disability weight 1.0), and only slightly less debilitating than being blind (disability weight 0.6). (See the official list of disability weights published alongside the DCP2.) These figures implied (to us) that CIDTA was not a matter of subtle cognitive impairment, but of mental handicap so severe as to truly prevent normal functioning.
  • The intervention in question - a single dose of albendazole - could completely restore normal mental functioning (i.e., completely eliminate disability associated with CIDTA) for one year.

These implications didn’t sync with the information we had from other sources, such as the Global Burden of Disease (GBD) report published alongside the DCP2.

  • If ascariasis caused this sort of symptom, we’d expect to see much more focus on ascariasis (relative to other helminth infections) in the global health and deworming communities.
  • In addition (as we observed when trying to reconcile the official estimate with our own estimate), if 22% of the 110 million 5-14 year olds in Latin America (GBD, 198-199) had a disability with weight 0.463, then this - alone - would result in 11.2 million DALYs lost to ascariasis per year in this region (22% * 110 million * 0.463; a quick check of this arithmetic appears below). However, the official DALY burden for ascariasis (all symptoms) among this population is only 31,000 (GBD, 198-199) - in fact, the worldwide DALY burden for ascariasis is only 915,000 (GBD, 180-181).
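
For concreteness, here is that arithmetic as a short check, using only the figures cited above:

```python
# Sanity check of the DALY burden implied by the official figures (as cited above).
population_5_14 = 110_000_000   # 5-14 year olds in Latin America (GBD, 198-199)
share_with_cidta = 0.22         # share implied by the official calculation
disability_weight = 0.463       # published disability weight for CIDTA

implied_dalys = population_5_14 * share_with_cidta * disability_weight
print(f"{implied_dalys:,.0f} DALYs per year implied by these figures")  # ~11.2 million
# Versus ~31,000 DALYs per year officially attributed to ascariasis (all symptoms)
# in this population, and ~915,000 worldwide.
```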

We therefore did further investigation on the CIDTA symptom - both how prevalent it is and how severe it is. It turns out that the official calculation significantly overstates both. For example, among 5-14 year olds in Latin America, CIDTA affects about 0.23% of the population - not 21.6% as the official calculation suggests - and its correct disability weight is 0.024 (the same severity as anemia), not 0.463.

Specifics of these errors:

  • Prevalence of CIDTA. The official calculation starts from a hypothetical population of 1 million people of all ages, then calculates the number of 5-14 year olds (per million people) using demographic data, then takes the number of CIDTA cases directly from Table 9 (this figure is multiplied by 10 before being put in the official spreadsheet). For example, for 5-14 year olds in Latin America, Table 9’s “A/B” column has the figure, “4506”; the official calculation records “45060” for the number of CIDTA cases among 5-14 year olds.

    The labeling of Table 9 is ambiguous and doesn’t make it clear whether this is the intended meaning of the figures. We contacted one of the original authors who wrote the paper from which Table 9 is taken, received a copy of the (unpublished) paper from her, discussed it with her, and found that this figure’s intended interpretation is different from the official calculations, in two ways:

    • The figure in the “A/B” column refers to the number of people at risk for a given symptom, not the number of people suffering from that symptom. These are equivalent for Type A and Type C symptoms, but not for Type B symptoms including CIDTA. Intestinal Nematode Infections (PDF), the working paper that contains Table 9, says that “in any annual cohort of heavily infected children some 5% suffer [Type B symptoms, which are the only symptoms that have life-long effects]” (p. 26). Using the figures as the official calculation did would therefore lead to a 20x overstatement in the prevalence of CIDTA.

      This mistake applies not just to cognitive impairment due to ascariasis, but also to cognitive impairment due to trichuriasis and hookworms, similarly leading to a 20x overstatement of the prevalence of cognitive impairment due to those infections as well.

    • The figures in Table 9 refer to the number of children at risk, per 100,000 children of the age group indicated in the row. For 5-14 year olds in Latin America, the figure (for symptoms “A/B”) is “4506”; this means that 4506 out of 100,000 5-14 year olds are at risk for CIDTA. This in turn means that 45060 of every million 5-14 year olds are at risk. However, the official calculation assumes 45060 cases not for one million 5-14 year olds, but for only 208,530 5-14 year olds (which is the number of 5-14 year olds one would expect in a population of 1 million people across the three age groups). Thus, this difference results in overstating the prevalence of CIDTA by about 5x.

      This mistake applies to each of the symptoms of all three soil-transmitted helminths, not just to CIDTA, and therefore leads to an overstatement of the prevalence of every symptom of STHs by about 5x.

    Bottom line - the correct interpretation of Table 9 (for 5-14 year olds in Latin America) is that 45060 out of every million 5-14 year olds are at risk for CIDTA, and 5% of these actually have it - so 2253 out of every million 5-14 year olds have CIDTA. The official calculation assumes that in a population of 208,530 5-14 year olds, 45060 have CIDTA. The same types of errors apply to the other regions and conditions as well. (A short numerical check pulling these corrections together appears below, after the next bullet.)

  • Severity of CIDTA. The disability weight of 0.463 is correctly transcribed from the Global Burden of Disease official disability weights, which in turn takes the figure from the earlier 1996 edition (which we examined in a library). However, we still found this figure odd because of the contrast with the other two kinds of helminth infections:
    Disability weights and descriptions by helminth type (symptoms A, B, and C):
    Ascariasis - Symptom A (0.006): Reduction in cognitive ability in school-age children, which occurs only while infection persists. Symptom B (0.463): Delayed psychomotor development and impaired performance in language skills, motor skills, and coordination equivalent to a 5- to 10-point deficit in IQ. Symptom C (0.024): Blockage of the intestines due to worm mass.
    Trichuriasis - Symptom A (0.006): Reduction in cognitive ability in school-age children, which occurs only while infection persists. Symptom B (0.024): Delayed psychomotor development and impaired performance in language skills, motor skills, and coordination equivalent to a 5- to 10-point deficit in IQ. Symptom C (0.114-0.138): Rectal prolapse and/or tenesmus and/or bloody mucoid stools due to carpeting of intestinal mucosa by worms.
    Hookworm - Symptom A: NA. Symptom B (0.024): Delayed psychomotor development and impaired performance in language skills, motor skills, and coordination equivalent to a 5- to 10-point deficit in IQ. Symptom C (0.024): Anemia due to hookworm infection.

    It looked to us as though the weights may have been switched, in the case of ascariasis, for symptoms B and C. We contacted Colin Mathers, the second-listed author on the Global Burden of Disease publication, and he confirmed to us that the weights are in fact switched, stating, “We also noticed this and corrected it in the spreadsheets for WHO estimates, but possibly it has remained uncorrected in some of the summary tables of weights.” Thus, CIDTA’s correct disability weight is 0.024, but the published disability weight in both editions of the GBD - and the weight used in the official cost-effectiveness calculation - is 0.463.
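
Pulling the prevalence and severity corrections together, here is a rough numerical check (my own, using the Latin America figures described above):

```python
# Rough check of how the two CIDTA corrections compare with the official figures
# (Latin America, 5-14 year olds; figures as described above).

# Prevalence: Table 9 gives 4,506 at risk per 100,000 children, and ~5% of
# at-risk children actually suffer this (Type B) symptom.
at_risk_per_million = 4506 * 10                            # 45,060 at risk per million
corrected_cases_per_million = at_risk_per_million * 0.05   # ~2,253 actual cases

# Official calculation: 45,060 cases among only 208,530 children.
official_cases_per_million = 45060 / 208530 * 1_000_000    # ~216,000

# Severity: published disability weight vs. the corrected (un-switched) one.
official_weight, corrected_weight = 0.463, 0.024

print(f"prevalence overstated ~{official_cases_per_million / corrected_cases_per_million:.0f}x")
print(f"severity overstated ~{official_weight / corrected_weight:.0f}x")
# Roughly 96x and 19x - i.e., the symptom is about 1/100 as common and 1/20 as
# severe as the official calculation implies.
```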

We created a version of the official calculation that corrected for the above errors, as well as two other errors that we found in the process of checking the calculation as thoroughly as we could. (See Footnote 1 below.) Our version is here (XLS).

This calculation leads to a revised cost-effectiveness estimate of $326.43 per DALY, rather than the $3.41 per DALY in the original.

The DCP cost-effectiveness estimates only took into account short-term effects of the three diseases, even though they have some long-term effects. This seems to have been an intentional decision rather than an error, but our feeling is that a best estimate of the true cost-effectiveness of deworming would likely take these long-term effects into account. We therefore created another version of the estimate that does so, as best we can. (See Footnote 2 below.) Taking these long-term effects into account, our cost-effectiveness estimate for STH treatment moves to $138.28 per DALY.

These corrections also have implications for the cost-effectiveness estimate for combination deworming (simultaneously addressing both STH and schistosomiasis, another type of infection). The DCP2 reports a cost-effectiveness estimate of $8-$19/DALY averted for combined treatment, depending on whether generic or brand-name drugs are used for schistosomiasis treatment. Using our overall best guess for the revised DCP2 estimate for STH of $138.28/DALY and the DCP2’s estimate for generic schistosomiasis drugs of $336/DALY (note that this is incorrectly presented as “$3.36/DALY” on page 476, but the correct figure - without the erroneous decimal point - appears on page 46), we estimate the cost-effectiveness of a combined program, according to the DCP2, as $177/DALY. Ignoring the long-term effects of STH treatment, as the DCP2 does, changes that figure to $272/DALY.

In our first email to the author of the spreadsheet, we had only caught the first four of the five errors mentioned above, and made substantial mistakes in our attempts to take long-term effects into account. It was only when we checked the figures later that we noticed both of these mistakes. Mistakes are easy to make in this type of situation (for an interesting study on spreadsheet mistakes, see here). Transparency is the best way we can think of to avoid such mistakes. Now that we’ve published the spreadsheets, we look forward to hearing about any other mistakes you find - in the original or ours.

Our independent estimate of the cost-effectiveness of STH treatment

At the same time we were working through the DCP cost-effectiveness estimate for STH deworming, Jonah Sinick, a GiveWell volunteer, was working on an independent set of cost-effectiveness estimates for deworming, separately for both STH and a second type of worm-based disease, schistosomiasis. His report on the results is now available here. His bottom-line best guess for the cost-effectiveness of STH deworming is $82.54/DALY. Jonah’s calculation implicitly takes long-term effects into account, as we do in our more optimistic version of the calculation (the one that comes to $138.28 per DALY). Most of the discrepancy between Jonah’s $82.54/DALY figure and our $138.28 figure can be explained by the DCP’s use of a much higher cost-per-child treated ($0.225 vs. $0.085), though Jonah also finds different levels of disease burden and treatment effectiveness. (See footnote 3 below.)

Jonah also found more promising results for schistosomiasis treatment, another form of deworming that (as mentioned above) can be combined with STH treatment. His estimate ranges from $28.19-$70.48/DALY for schistosomiasis deworming. This is much more optimistic than the DCP's estimate of $336-$692/DALY because Jonah, following the current consensus in the literature, uses a much higher disability weight for schistosomiasis than the DCP did (0.02-0.05 vs. 0.005-0.006). The higher end of the DCP's range ($692/DALY) also reflects the use of much more expensive brand-name drugs, while the lower end ($336/DALY), like Jonah's estimate, assumes generics.

Conservatively combining Jonah's estimates for the cost-effectiveness of schistosomiasis and STH deworming (by assuming that no delivery costs are saved), we reach an estimate of $32-$72/DALY, depending on the disability weight of schistosomiasis. More liberally assuming that a combined program would eliminate delivery costs equal to half the per-child cost of STH treatment, Jonah's estimate of the cost-effectiveness of a combined program ranges from $29/DALY to $66/DALY, depending on the disability weight of schistosomiasis.
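
A rough way to reproduce the combination described above is to treat the combined program's cost per DALY as total cost per child divided by total DALYs averted per child. The sketch below does this; the $0.085 STH cost per child is Jonah's figure from above, while the schistosomiasis cost per child is an illustrative assumption (not a figure from this post), chosen so that the outputs land near the ranges quoted above.

```python
# Rough reconstruction of the combined-program arithmetic described above.
# COST_SCHISTO is an illustrative assumption, not a figure from the post.

def combined_cost_per_daly(ce_sth, ce_schisto, cost_sth, cost_schisto, delivery_savings=0.0):
    """Cost per DALY when one program delivers both treatments to each child."""
    dalys_per_child = cost_sth / ce_sth + cost_schisto / ce_schisto
    cost_per_child = cost_sth + cost_schisto - delivery_savings
    return cost_per_child / dalys_per_child

CE_STH = 82.54          # Jonah's $/DALY estimate for STH deworming
COST_STH = 0.085        # Jonah's cost per child treated for STH ($)
COST_SCHISTO = 0.385    # illustrative cost per child for schistosomiasis treatment ($)

for ce_schisto in (28.19, 70.48):   # the two ends of Jonah's schistosomiasis range
    conservative = combined_cost_per_daly(CE_STH, ce_schisto, COST_STH, COST_SCHISTO)
    liberal = combined_cost_per_daly(CE_STH, ce_schisto, COST_STH, COST_SCHISTO,
                                     delivery_savings=COST_STH / 2)
    print(f"${ce_schisto}/DALY schisto: combined ~${conservative:.0f}/DALY "
          f"(no shared delivery costs), ~${liberal:.0f}/DALY (shared delivery costs)")
```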

Implications for donors interested in deworming

These estimates are only a small part of the picture, in our view, regarding how promising deworming is as an intervention. We will be writing more about this in the future.

However, we think it is important to note that the DCP2's original published figures implied that deworming is among the most cost-effective interventions listed in the publication; with errors corrected, it appears comparable to treating drug-resistant tuberculosis; taking into account long-term effects, it seems comparable to providing family planning services. Neither of those interventions is traditionally considered especially cost-effective. (Note that according to the DCP2's original estimate, STH deworming is 30-100X more cost-effective than those interventions.)

Whether or not the long-term effects are taken into account, the corrected DCP2 estimate for STH treatment falls outside the under-$100/DALY range that the World Bank initially labeled as highly cost-effective (see page 36 of the DCP2). With the corrections, a variety of interventions, including vaccinations and insecticide-treated bednets, become substantially more cost-effective than deworming.

The more important takeaway, for us, concerns the DCP2’s cost-effectiveness estimates in general. We believe that the errors we’ve found in the estimate - described above - would have been caught by a helminth expert independently examining the estimate. Therefore, the presence of these errors implies to us that there has been no such examination. If this is the case, it would argue against the reliability of the DCP2’s estimates in general. We have not done similar investigations of other DCP2 estimates, and given the process it took to get the details of this one, we are not planning to do many more until and unless the details of estimates become available publicly.

Our takeaways

  • We’re now much more hesitant to place any weight on DCP2 cost-effectiveness figures except where we can fully understand and check the calculations.
  • More generally, we feel this case illustrates how opaque, formal calculations can obscure important information and can be highly sensitive to minor errors. We see this as support for our position that formalized cost-effectiveness analysis can do more harm than good in trying to maximize actual cost-effectiveness.
  • Explicit cost-effectiveness estimates will continue to play a relatively small role in our decisions between top charities, though we will still use them in deciding which charities are potential top candidates.
  • We’re continuing to investigate deworming as a promising intervention, but one of the most encouraging figures widely cited in its favor appears deeply flawed.
  • Transparency is crucial. Had the scholars we discussed these issues with been less willing to engage with us, or had we been unable to find Intestinal Nematode Infections or the spreadsheet, these substantial errors would not have come to light.

Footnote 1: The other two problems we found in the calculation both have to do with the burden of trichuriasis:

  • The spreadsheet swaps the disability weights for Type B and C symptoms of trichuriasis. In the Global Burden of Disease and Risk Factors (GBD) 1990, which the spreadsheet cites, the Type B symptom of trichuriasis is cognitive impairment, which has a disability weight of 0.024, while the Type C symptom is massive dysentery syndrome, with disability weights ranging from 0.116 to 0.138. In the ‘trichuriasis’ sheet of the spreadsheet, Type B morbidity has disability weights ranging from 0.116 to 0.138 while Type C morbidity has the lower disability weight of 0.024. In the original calculation, this leads to an overestimate of the burden of trichuriasis by nearly 4x, but once the main errors described above are corrected, correcting this error actually makes STH treatment appear more cost-effective.
  • The spreadsheet uses a duration of .05 years for trichuriasis symptom Type C, while Intestinal Nematode Infections suggests that the duration for trichuriasis symptom Type C should be 12 months (pg. 24). This mistake likely occurred because the duration for ascariasis symptom Type C is .05 years.

In the corrected spreadsheet, sheets 'a.3', 't.5', and 'h.3' contain our corrections to all five of the issues we have identified (for ascariasis, trichuriasis, and hookworm respectively). Most of the corrections should be fairly self-explanatory, but please don't hesitate to email us or comment here if you have questions. We corrected the second main error above by changing the population of 5-14 year olds treated to 1,000,000 (see, e.g., sheet 'a.3' cell C23).

Footnote 2: The Type B symptom of all three diseases treated by STH deworming is called “cognitive impairment,” has a disability weight of 0.024, and lasts a lifetime once it develops. Intestinal Nematode Infections implies that 3% of the population at risk for symptom B (that is, 3% of the population listed in the A/B columns in Table 9) newly acquires a lifelong disability each year (pg. 26). We therefore altered the calculation to reflect lifelong (not just 1-year) benefits for these 3% (replacing the 5% listed in #2 above because that 5% is the total proportion infected during a given year, not the total proportion newly infected). At the same time, we also changed DALYs saved due to prevented mortality to compound to the end of life, rather than just counting the one year of life saved during the treatment. (This, arguably, is an actual error in the DCP2 process, not just a disagreement about how to take long term effects into account. When an intervention prevents someone from dying, it does not seem reasonable to count just one extra year of life saved.)
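
To make the size of this change concrete, here is a minimal sketch of the disability portion of the adjustment only. The 0.024 weight and 3% annual onset rate are the figures cited in this footnote; the population size, remaining life expectancy, and absence of discounting are placeholder assumptions for illustration, not figures from the spreadsheet.

```python
# Illustration (not the DCP2 spreadsheet itself) of counting a lifelong
# disability for its full duration rather than for a single year.
# POPULATION_AT_RISK, REMAINING_YEARS, and the lack of discounting are
# placeholder assumptions.

POPULATION_AT_RISK = 1_000_000   # children at risk for the Type B symptom (placeholder)
ANNUAL_ONSET_RATE = 0.03         # 3% newly acquire the lifelong disability each year (pg. 26)
DISABILITY_WEIGHT = 0.024        # weight for "cognitive impairment"
REMAINING_YEARS = 50             # placeholder remaining life expectancy, undiscounted

one_year_burden = POPULATION_AT_RISK * ANNUAL_ONSET_RATE * DISABILITY_WEIGHT
lifelong_burden = one_year_burden * REMAINING_YEARS

print(one_year_burden)   # 720 DALYs if only one year of the disability is counted
print(lifelong_burden)   # 36,000 DALYs if the disability is counted for the remaining lifetime
```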

Footnote 3: We also looked into the possibility that the disability weights for helminth infections are “too low,” as implied by a passage in the DCP2:

The Disease Control Priorities Project helminth working group has determined that the WHO global burden of disease estimates are low because they do not incorporate the full clinical spectrum of helminth-associated morbidity and chronic disability, including anemia, chronic pain, diarrhea, exercise intolerance, and undernutrition (King, Dickman, and Tisch 2005). (DCP2, pg. 471)

Based on our review of the literature and correspondence with relevant scholars, we believe this argument has never been raised specifically in respect to STHs; most of the papers about it are about schistosomiasis, another type of worm infection. There is one paper (Chan 1997) that appears to imply a higher disability burden for STHs than the standard burden, which gives rise to Jonah’s more optimistic STH cost-effectiveness estimate of $11.25/DALY. We think the data from that paper is no longer credible: it appears to have been based on a lower worm threshold for experiencing morbidity than further research has found appropriate (Brooker 2010). Furthermore, the cited source of the relevant data is a working paper, the published version of which does not contain the data cited.

September 20th, 2011

GiveWell Labs: Plans for Our Process and Transparency

We’re starting a new initiative, GiveWell Labs, an arm of our research process that will be open to any giving opportunity, no matter what form and what sector.

One of the major challenges of this initiative (as mentioned in the previous post) will be remaining systematic and transparent despite the very broad mandate of GiveWell Labs. It's core to GiveWell that the thinking behind our recommendations

  • Comes from reasoning and principles that are applied consistently, not from whims.
  • Is transparent, i.e., interested people can read up on why we made the decisions we did and judge our thinking for themselves, with as little need as possible to have trust in us.

This post lays out our plan for accomplishing both.

Process

While we reserve the right to change plans mid-stream, our basic planned process is:

  1. Sourcing general ideas. We plan to cast a wide net initially, looking pretty much anywhere we can for general funding ideas - and/or organizations - that might be promising. Key sources will include:
  2. Going from general ideas to specific proposals. We will maintain a ranked list of the most promising ideas from step 1, and for high-ranked ideas, we will attempt to find the people and/or organizations who can make (and, potentially, execute on) specific proposals. At this stage we'll just be looking, in each proposal, for rough ideas of (a) costs; (b) which people/organizations will carry out the execution; (c) what the basic plan is.
  3. Detailed investigation of proposals. We will maintain a ranked list of the most promising proposals from step 2, and for the most promising ones, we will conduct in-depth investigations similar to those we’ve always conducted for promising charities. These will include in-depth conversations with the relevant people/organizations; conversations with others in their space, particularly those who have funded them or chosen not to fund them; site visits when applicable; and requesting technical reports, budgets, and other materials when applicable.
  4. Recommending and funding proposals. We will attempt to get any outstanding submissions from step 3 funded. We will have $1 million to spend at our discretion if we can raise no other funds, but we expect to be able to raise more if we succeed in finding great giving opportunities.

At this point we are most interested in funding others' ideas, and have a preference for cases where the implementing organization is the same as the organization that hatched the idea and strategy. We have the impression that much philanthropy works differently, as foundation staff design their own strategies and treat grantees to some extent as "contractors" for carrying them out; this model does not currently appeal to us, but we plan on further investigating the history of philanthropy (particularly success stories) to see whether there is more promise in this approach than we'd guess.

Transparency

With a project as broad and open-ended as GiveWell Labs, we expect to make a lot of guesses and judgment calls regarding promising areas/ideas/projects, and we expect to use heuristics and take shortcuts in narrowing the field. We don’t commit to detailed reviews of every idea or every proposal, or to the use of objective formulas to decide between them. (The same applies, and always has applied, to our existing research on top charities.)

However, a core value of ours is that interested parties - no matter who they are - ought to be able to understand as much as possible of (a) which options we considered; (b) why we chose the ones we chose. To this end, we plan on publishing:

  • Extensive discussion of the values and beliefs that are relevant to which sorts of sources we use and which areas we focus on investigating. This discussion will take place via future blog posts. We hope that anyone who reads these posts will understand why we look at the areas and sources we do, and if we aren’t accomplishing this we hope our readers will comment to let us know.
  • A list of the sources we use to generate ideas (step 1), along with detailed notes from particularly noteworthy conversations. Our goal here is to cast the net wide, so if you know of promising sources of ideas that fit with the values/principles we’re expressing and you don’t see us using them, we encourage you to comment.
  • Discussion of the general beliefs (and relevant facts) that lead us to discard certain ideas from step #1 while moving forward to step #2 on other ideas, again via the blog.
  • A full list of the proposals we consider (step #2), along with notes from discussing these proposals.
  • Discussion of the general beliefs (and relevant facts) that guide our choice of particular proposals (step #2) to move to the "deep investigation" phase (step #3), again via the blog.
  • Full details of the materials we collect via deep investigation (step #3) and our notes on the strengths and weaknesses of each giving option that makes it to this stage.

We will withhold information when necessary to respect confidentiality agreements. However, we will make our best effort to obtain clearance for - and share - all important/relevant information. This is the same policy we’ve used in charity investigations, and while some information remains confidential, we’ve still published the vast majority of the information we have (enough so that our views generally don’t need to be taken on trust).

Preserving GiveWell’s core values

GiveWell Labs is different in substantial ways from our existing research (which continues). However, we feel that we will be able to preserve the most important aspects of GiveWell:

  • A focus on finding the best giving opportunities in terms of positive impact, rather than in terms of telling compelling stories or making donors feel good.
  • Recommendations that are transparent enough to allow outsiders to draw their own conclusions and give meaningful feedback.

If we can preserve these things while working in a more open-ended way, we'll be able to find better giving opportunities and to demonstrate our principles' broad applicability. This means there will be fewer reasons than ever for other large givers to keep their own processes opaque.

September 12th, 2011

Why GiveWell Labs?

We previously announced GiveWell Labs, a new arm of our research process that will be open to any giving opportunity, no matter what form and what sector. Here I share a bit more of the thinking behind why we’re doing this.

What we’re trying to accomplish with this initiative

Our goals are twofold:

Find better giving opportunities. When we laid out our main goals for 2011, #1 was finding more great giving opportunities, and our possible strategies for doing so involved (a) broadening our scope and (b) considering project-based funding. With GiveWell Labs, we are doing both simultaneously.

  • We’ve previously come across groups that might have been able to offer great giving opportunities, if we had selected a specific project and provided all the funding necessary to carry it out. However, we couldn’t recommend them to individual donors, not knowing whether $1000 or $1 million would come in as a result. Now, we’ll plan to go back to these groups, open to anything. If we do end up wanting to raise specific amounts of money, this will be a more complex endeavor than simply publishing a recommendation on our website and saying “Give here,” but we now have enough connections to major donors and enough sense of our audience of smaller donors that we think it will be worthwhile to try.
  • We’ve previously come across interesting funding opportunities that didn’t fit neatly into the causes we had chosen to focus on. This won’t be an issue for GiveWell Labs.
  • Examining opportunities with the above qualities (project-based and/or outside the sectors we’re experienced in) will be hard to do systematically, and will be by nature a bit experimental. That’s why we’re allocating only 25% of our research time to GiveWell Labs, with the remainder allocated to carrying out our existing research process (which has some restrictions but is more established and systemized). However, we expect the things we learn through GiveWell Labs to eventually shape the evolution of our more systemized research process.

Position ourselves to advise seven-figure donors.

When analyzing our own impact, we’ve noted that it comes disproportionately from large donors. (We influence more $100 donors than $10,000 donors, but the ratio is far under 100:1, so the $10,000 donors end up accounting for the lion’s share of our money moved.)

This seems logical to us, when considering that GiveWell is a “niche product” - we don’t appeal to large interconnected groups of people, but the rare people who do resonate with our work resonate very strongly with it, and give a lot based on it. The logical implication is that our greatest potential for impact may come from very large donors - and we need to be positioned to be useful to these donors.

The research we’ve done to date - recommending direct-aid charities that can absorb arbitrary amounts of funding - seems best suited to those giving under $1 million per year. When we encounter people who give more, they generally are interested in funding whole projects at once, which gives them options that simply aren’t open to our standard research process. That means our current product is a poor fit with the people who may represent our most potentially impactful audience.

We need to address this issue, and GiveWell Labs will allow us to do so. The $1 million in pre-committed funding is coming from large donors who will be able to give more if we find them great opportunities. More importantly, GiveWell Labs will allow us to move closer to having the same universe of options that seven-figure donors have, which will hopefully improve our ability to connect with and influence seven-figure donors.

Pros and cons of issue-agnostic giving

GiveWell Labs is issue-agnostic, i.e., we are not restricting our work to particular areas of philanthropy (such as international aid, climate change, etc.). We will focus on what we consider the most promising areas, but we will be potentially open to anything.

There are clear disadvantages to issue-agnostic giving:

  • The more different sorts of projects we allow ourselves to consider, the greater the challenge of sorting through them in a coherent, principled, systematic way. It will be particularly challenging to make sure we are applying principles consistently, rather than giving based on whims.
  • There are conceptual advantages to “specializing” in particular sectors over time. Doing so means having the ability to
    • Learn from past successes and failures in an area.
    • Make contacts in an area.
    • Gather evidence about the most promising approaches in an area, particularly informal/qualitative evidence (e.g., site visits).

However, issue-agnostic giving has advantages as well.

  • First and foremost is that when you’re new to giving, you can’t tell where the best opportunities are going to be. Picking a “sector” could be the dominant determinant of how effective your giving will be; taking a guess and sticking with it, therefore, seems very dangerous for a donor seeking to maximize impact. (Note that I’ve changed my view of the most promising cause as I’ve learned more about the different causes we’ve studied.)
  • Even when you’re not new to giving, the highest-impact sectors can change rapidly and chaotically as new philanthropists come on the scene. Choosing to focus on developing-world-oriented medical research may have been a great idea before the Gates Foundation came along, but I’m guessing that opportunities in this area have fallen drastically since, as the Gates Foundation has attempted to fund the best ones.
  • There may be outstanding opportunities that get overlooked by other funders because they don’t fit neatly into a particular “sector.” I think this is possible in today’s environment, simply because issue-agnostic giving is so rare.
  • Regarding the above-mentioned advantages of specialization:
    • We are hoping for a relatively “low-touch” approach to funding: we seek people with ideas but not funding, and we seek to provide funding and not other kinds of support. We hope this approach will diminish the need for us to become “experts” in any given sector.
    • We hope to be very communicative with other funders and people with relevant expertise/experience. We won’t recommend a funding opportunity without getting as many relevant opinions as we can. If people with different specialties are open and communicative with each other, it mitigates the need for funders to have all the expertise themselves.
    • In practice, we will probably find ourselves focusing on certain sectors - not out of a pre-commitment, but because these sectors appear particularly promising to us. This is especially true in light of the fact that we prefer (as we always have) to fund things we can understand well - our past experience and established knowledge do matter. So being issue-agnostic doesn’t actually preclude specializing; it just means that any specialization will happen gradually and out of a desire to maximize impact, rather than being driven by up-front choices of particular sectors.

Bottom line - at this point in our development, we think the advantages of issue-agnostic giving outweigh the disadvantages for us.

September 8th, 2011

Announcing GiveWell Labs

The research we’ve been doing for the last couple of years has been constrained in a couple of key ways:

  • We’ve pre-declared areas of focus (based on our guesses as to where the most promising charities would be found), and disqualified charities for recommendations on the basis of their being “out of scope” (though we’ve been gradually broadening our scope).
  • We’ve needed to decide which organizations to recommend without being able to say in advance how much money would go to them as a result. This has led to challenges with the question of “room for more funding.” We’ve had to find charities that could essentially use any amount of funding (large or small) productively, and this has drastically narrowed our options.

We’re now launching a new initiative within GiveWell that will not be subject to either of these constraints. We plan to invest about 25% of our research time in what we’re calling GiveWell Labs: an arm of our research process that will be open to any giving opportunity, no matter what form and what sector.

Through GiveWell Labs, we will try to identify outstanding giving opportunities (whether they’re organizations or specific projects), publish rankings of these giving opportunities (separate from the top charities list we maintain using our existing research process) and try to raise money for these opportunities. Donors have pre-committed a minimum of $1 million to the GiveWell Labs initiative, meaning that we will have at least $1 million to commit to our choice of projects even if we are able to raise nothing else. (We expect to raise more if and when we find great giving opportunities; the $1 million has been committed based on donors’ trust in our ability to find such opportunities.)

Our existing work of finding outstanding international aid charities - using a more systematic process - continues. Over the coming year, we expect to spend about 75% of our research time on our existing work of finding outstanding international aid charities, and 25% of our research time on GiveWell Labs. Note that our “standard” process continues to gradually evolve and broaden its scope, and hopefully will come to incorporate insights gained through the work on GiveWell Labs. The distinction between the two may even dissolve over time. But at this time, GiveWell Labs is the arm of our process that is open to any giving opportunity, no matter what form and what sector.

In future blog posts, we’ll be giving a lot more information about this project, including:

  • More on why we’re moving in this direction at this time, and why we think a less-constrained, exploratory arm of our research process will help us find better giving opportunities.
  • Our planned process for finding great giving opportunities through GiveWell Labs, and what you can expect from us in terms of transparency.
  • The main qualities we’re looking for in a funding opportunity (when unconstrained by the form or sector of the opportunity), and why we’re looking for them.
  • The areas we think are most likely to yield great giving opportunities, and why.

In the meantime, if you know of any giving opportunities that are (a) not already funded or likely to be funded by others; (b) outstanding opportunities to have a large positive impact, please let us know.

August 30th, 2011

Somalia famine: update

Over the past month, we’ve worked to understand the situation in Somalia and make a recommendation to donors about where they should give. At this point we’re wrapping up our work with the following conclusions:

  • We wouldn’t recommend giving to support Somalia specifically over supporting everyday aid. While the needs are extreme, we aren’t convinced that individual donors can effectively cause more aid to be delivered via their donations.
  • For those who do want to give, we suggest The International Committee of the Red Cross (ICRC), the World Food Programme (WFP) or Doctors Without Borders (MSF), but with serious reservations about each of these.
  • There is a severe lack of transparency on the part of charities and funders, particularly the US government, that has hindered our ability to understand the situation and make a strong recommendation.

A brief note before getting into the details: most of this work was done by our summer intern, Josh Rosenberg. We couldn’t have learned as much as we did about this crisis without his help. Thanks, Josh.

Details follow.

Giving to Somalia relative to giving to everyday international aid

We believe strongly that the needs inside Somalia are great and that, were it possible to send food or medical supplies such that they would reach people in the region, that would accomplish a great deal of good. We recently spoke with an American journalist in the region, and he said:

I went to a hospital in Somalia recently, and we saw kids in very bad shape. There are just no resources there. They don’t have medicine, IV bags, solution. There are dozens, if not hundreds of people arriving each day that need hospitalization. It’s the same situation with camps inside Somalia.

Nevertheless, we don’t have confidence that it is in fact possible for donors to help get more food and supplies to those who need it. There are numerous reports of World Food Programme food aid being stolen by al-Shabaab, a group classified as a terrorist organization that governs much of the famine zone, and many Western NGOs have been explicitly banned from the region by al-Shabaab or have chosen to leave due to security concerns.

Even for the groups operating in the famine zone, it’s not clear to us that additional funds donated will lead to additional services provided, for reasons of donations’ fungibility. Our guess is that given the dire circumstances, aid organizations are likely to spend whatever resources they can, unrestricted or otherwise, to reach those in need, and individuals’ providing additional funds to the few organizations active in the famine zone may make little difference to this specific relief effort.

That said, we have had only limited contact with organizations operating inside famine zones, and we would be interested in hearing from any that feel they would reach additional people with additional funding. Were we to gain confidence that an organization could do this, we could plausibly view the donation opportunity as on par with our highest-rated organizations.

Assuming one wants to give to Somalia, which organization will be most effective?

We struggled to obtain substantive credible information to help us answer this question well. In our previous Somalia update, we listed the questions we sought to answer, and in most cases, we received vague information from charities such that we weren’t able to answer our questions well. For example, we often received budget proposals such as “$5 million for water and sanitation projects” with no detail regarding (a) the type of projects (e.g., digging wells vs trucking water vs purifying water) to be implemented, (b) the cost of each project component, or (c) the location where the projects would be implemented.

Given the lack of substantive, credible information, the three factors we focused most on are:

  1. Where is the organization operating? While we believe that there are great needs throughout the region–inside the famine zone, in the rest of Somalia, and in refugee camps and mainland areas of Ethiopia and Kenya–the greatest needs are in the famine zone. In our conversation with a journalist in the region, we asked “How do the conditions in refugee camps in Kenya and Ethiopia compare to conditions inside Somalia?” and he responded:

    My impression is that the refugee camps are pretty well taken care of right now. Even though they’re burgeoning with people, they’re doing OK. There’ve been some disease outbreaks in Ethiopia. They could use more help but there’s already a huge infrastructure there. In Dadaab there’s a huge compound for western aid workers. There’s a bar and restaurant. In Somalia, the people are near death and have no access to resources… I went to Dadaab, and I saw the same thing and saw starving kids and poor families, but there were people driving CARE cars and wearing MSF badges or Save the Children hats, so there are NGOs in the camps, but there’s no help inside Somalia.

    That is, while the people who reach the refugee camps need assistance, they are being served and we don’t have enough information to say that there is room for more funding in the camps.

  2. How transparent is the organization about its activities and spending? For Somalia, as with other disasters, we've found limited information about the impacts or results of charities' programs. We've therefore focused on organizations' transparency and openness to being held accountable for their activities as a proxy for which organizations are likely to be most effective.
  3. What other information do we have about the organization that would inform our conclusion about where to give now? To the extent we’ve considered the organization in other contexts, we’ve incorporated any additional information into our views here.

Having completed our analysis, three organizations stood out.

  • The International Committee of the Red Cross (ICRC). The ICRC is appealing for funds solely for use in the famine zone, and our understanding from them and from the journalist we spoke to is that they are active in famine areas. ICRC gave us a uniquely detailed plan for scaling up and using funds. Unfortunately, we aren’t cleared to share this plan publicly, but it was a more comprehensive and detailed plan than we received from other charities.

    However, the plan did not allow us to easily connect what ICRC plans to do with how it would spend money. Also, the journalist we spoke with told us:

    They are working in South Somalia in the al-Shabaab areas where no one else is. But, I've been told by some people that they screwed it up for other aid groups because they paid al-Shabaab a tax/bribe to work in those areas, and then al-Shabaab demanded it from other groups. Because al-Shabaab is designated a terrorist organization by the US government, aid groups had to leave because it wasn't legal for them to pay money to al-Shabaab. So, while ICRC is doing good work, there's some resistance to them from other NGOs.

    We have not verified this claim or questioned the ICRC about it.

  • The World Food Programme (WFP). WFP is the only organization we spoke with that makes its detailed reports publicly available on its website. The reports include detailed budgets as well as quantities of food to be delivered and targeted locations. In addition, WFP is one of the largest entities (if not the largest entity) operating in the region, and they have been criticized in the media for mistakes they've made. Other things equal, we feel donors are well served to support the groups that will ultimately be seen as "responsible for" the response because they are most likely to be held accountable by donors and the public. Note that al-Shabaab has denied access to WFP in areas it controls.

    The criticisms that have been made also give cause for concern, particularly the reports of World Food Programme food aid being stolen by al-Shabaab.

  • Doctors Without Borders (MSF). We've spoken several times with MSF, but have received limited information from them. MSF is operating in the famine zone. We maintain our generally good feelings about the organization, but this is based largely on MSF's transparency about its activities and needs for donations in past disasters. We are disappointed in MSF's lack of transparency in this case.

The journalist we spoke with also mentioned some other organizations. We don’t endorse these but include his comments here for those interested:

I interviewed one guy and came across one NGO I thought was doing decent work and one of the few that had American people on the ground. It’s the American Refugee Committee. They’re pretty small and maybe their smallness has helped them be more nimble. He has gone to Mogadishu, and come out and gone back in. A few people have asked me whom to give to and I said that I had seen the American Refugee Committee. That was one of the few western organizations working on the ground. I also saw IRC, the International Rescue Committee. I have some friends in the aid business, and they’ve told me that IRC is there. They’re trying to provide help at camps and hospitals….There is a local NGO in Somalia that is doing good work. It’s called the Hawa Abdi Foundation. There’s a Somali woman doctor who set up a clinic in a camp, and she’s helping a lot of people. She was named one of Glamour Magazine’s top ten women of the year. If you steer any donors to local groups, it’s a good one. She has a track record of doing good work and reaching people. World Vision has done some good work inside Somalia. They were run out but are now starting again. In North Kenya, they’ve done good famine prevention work and set up agricultural projects that are helping people in these drought areas become farmers and less reliant just on cattle. I’ve looked closely at the World Vision and they’re pretty brave. They’re working in areas no one wants to go to.

Our struggles and the lack of transparency

One of the most disappointing aspects of our Somalia research has been the opacity of charities and the US government. We are particularly disheartened by USAID's consistent position that they cannot help us, or, by extension, individual donors, in any meaningful way.

Over the past month, we have spoken with several representatives at USAID, all of whom have told us the same thing:

  • We cannot comment on or off the record about specific aid organizations.
  • We cannot offer any advice about which organizations are likely most effective or have the greatest need for funds.
  • We cannot share any of the information we’ve received from organizations about what they’re doing or how they’ll spend money.

While there may be specific cases of documents that must be kept private for safety or privacy concerns, we feel that most information can and should be shared. We've seen USAID documents in some cases because charities have voluntarily sent them to us, and we haven't seen anything about these documents that would clearly cause harm if shared more widely. Generally, when charities have asked us to keep information confidential (which we've honored), we've seen little that seems as though it would cause harm or danger if shared publicly; confidentiality concerns have seemed to have more to do with charities' not wanting to be judged in certain ways.

USAID is a government agency funded by the public. USAID has significant, detailed information about NGOs’ activities around the world, and sharing this information publicly would provide significant help to donors aiming to give more effectively. USAID has told us that the information they receive from charities is private and confidential and cannot be shared. This conclusion does not seem valid or just to us.

Donors who care about impact should continue to pressure the charities they support and the US government, which provides funding to them, to be more open with their information.

August 25th, 2011

Working for GiveWell

If you’re interested in working or volunteering for GiveWell, now would be a good time to let us know. We’ve been a 7-person team for the last couple of months, but since two of the hires were temporary, we’re soon going to be back down to 5. We were happy with our productivity when we had 7 people; we have the funds, the management capacity and the desire to get back up to that size if the right people come along.

About the role

We’re looking primarily for Research Analysts - people who will provide support to the goal of finding the best giving opportunities. Research Analyst duties mostly consist of:

  • Reviewing independent research on the best ways to help people and on other issues relevant to giving
  • Reviewing particularly promising charities - including speaking with their representatives and asking critical questions, reviewing and evaluating documents they send, and writing up their answers to critical questions, strengths, and weaknesses
  • Taking part in discussions of which giving opportunities are most promising and of general GiveWell strategy
  • Miscellaneous duties depending on individual preferences, including networking, outreach, writing (e.g., for the blog), and original analysis on research questions
  • We encourage analysts to push their abilities to the limit and take on as much responsibility as they can. An analyst can grow into a major role at GiveWell.

A few practical details on the role:

  • We are located in New York City and currently work in the Tribeca/Chinatown area. Hours are flexible and some telecommuting is allowed, though overall expectations for productivity are high.
  • The general environment is one of intense discussion and debate. We change course and rethink things frequently, and analysts are encouraged to challenge, question and critique their managers.

What we’re looking for

We believe the most important qualities for a Research Analyst are

  • Passion for finding great giving opportunities. There’s little precedent for the kind of work we do, and we can’t train people to the point where little of their own judgment is required. As a result, analysts end up making a lot of judgment calls and it’s important that these judgment calls be oriented toward finding great giving opportunities.

    In the past, we’ve found that the best employees are the ones who come to us looking to volunteer or work for us, demonstrating pre-existing interest in and passion for the project. That’s why we’re starting this search via our own blog - we think the most promising candidates are likely to be among our readers.

  • Critical thinking/analysis skills. Analysts need not have existing proficiency with data analysis (though it's a plus), but they do need to be able to approach claims about charities' impact with skepticism and good critical questions - whether those claims are made by charities, scholars, or Elie and me.
  • Attention to detail. Analysts need to do careful, reliable work whose conclusions we can trust.

How to apply

Email us with a resume and a note on why you’d like to work for GiveWell. We will most likely write back asking you to enter our volunteer process; we generally ask people to volunteer for us before being hired, so that we can get a strong sense of their fit with the organization.

We also appreciate referrals to people who might be a good fit.

August 18th, 2011

Why We Can’t Take Expected Value Estimates Literally (Even When They’re Unbiased)

While some people feel that GiveWell puts too much emphasis on the measurable and quantifiable, there are others who go further than we do in quantification, and justify their giving (or other) decisions based on fully explicit expected-value formulas. The latter group tends to critique us - or at least disagree with us - based on our preference for strong evidence over high apparent “expected value,” and based on the heavy role of non-formalized intuition in our decisionmaking. This post is directed at the latter group.

We believe that people in this group are often making a fundamental mistake, one that we have long had intuitive objections to but have recently developed a more formal (though still fairly rough) critique of. The mistake (we believe) is estimating the “expected value” of a donation (or other action) based solely on a fully explicit, quantified formula, many of whose inputs are guesses or very rough estimates. We believe that any estimate along these lines needs to be adjusted using a “Bayesian prior”; that this adjustment can rarely be made (reasonably) using an explicit, formal calculation; and that most attempts to do the latter, even when they seem to be making very conservative downward adjustments to the expected value of an opportunity, are not making nearly large enough downward adjustments to be consistent with the proper Bayesian approach.

This view of ours illustrates why - while we seek to ground our recommendations in relevant facts, calculations and quantifications to the extent possible - every recommendation we make incorporates many different forms of evidence and involves a strong dose of intuition. And we generally prefer to give where we have strong evidence that donations can do a lot of good rather than where we have weak evidence that donations can do far more good - a preference that I believe is inconsistent with the approach of giving based on explicit expected-value formulas (at least those that (a) have significant room for error and (b) do not incorporate Bayesian adjustments, which are very rare in these analyses and very difficult to do both formally and reasonably).

The rest of this post will:

  • Lay out the “explicit expected value formula” approach to giving, which we oppose, and give examples.
  • Give the intuitive objections we’ve long had to this approach, i.e., ways in which it seems intuitively problematic.
  • Give a clean example of how a Bayesian adjustment can be done, and can be an improvement on the “explicit expected value formula” approach.
  • Present a versatile formula for making and illustrating Bayesian adjustments that can be applied to charity cost-effectiveness estimates.
  • Show how a Bayesian adjustment avoids the Pascal’s Mugging problem that those who rely on explicit expected value calculations seem prone to.
  • Discuss how one can properly apply Bayesian adjustments in other cases, where less information is available.
  • Conclude with the following takeaways:
    • Any approach to decision-making that relies only on rough estimates of expected value - and does not incorporate preferences for better-grounded estimates over shakier estimates - is flawed.
    • When aiming to maximize expected positive impact, it is not advisable to make giving decisions based fully on explicit formulas. Proper Bayesian adjustments are important and are usually overly difficult to formalize.
    • The above point is a general defense of resisting arguments that both (a) seem intuitively problematic (b) have thin evidential support and/or room for significant error.

The approach we oppose: “explicit expected-value” (EEV) decisionmaking

We term the approach this post argues against the “explicit expected-value” (EEV) approach to decisionmaking. It generally involves an argument of the form:

    I estimate that each dollar spent on Program P has a value of V [in terms of lives saved, disability-adjusted life-years, social return on investment, or some other metric]. Granted, my estimate is extremely rough and unreliable, and involves geometrically combining multiple unreliable figures - but it’s unbiased, i.e., it seems as likely to be too pessimistic as it is to be too optimistic. Therefore, my estimate V represents the per-dollar expected value of Program P.
    I don’t know how good Charity C is at implementing Program P, but even if it wastes 75% of its money or has a 75% chance of failure, its per-dollar expected value is still 25%*V, which is still excellent.

Examples of the EEV approach to decisionmaking:

  • In a 2010 exchange, Will Crouch of Giving What We Can argued:
    DtW [Deworm the World] spends about 74% on technical assistance and scaling up deworming programs within Kenya and India … Let’s assume (very implausibly) that all other money (spent on advocacy etc) is wasted, and assess the charity solely on that 74%. It still would do very well (taking DCP2: $3.4/DALY * (1/0.74) = $4.6/DALY – slightly better than their most optimistic estimate for DOTS (for TB), and far better than their estimates for insecticide treated nets, condom distribution, etc). So, though finding out more about their advocacy work is obviously a great thing to do, the advocacy questions don’t need to be answered in order to make a recommendation: it seems that DtW [is] worth recommending on the basis of their control programs alone.

  • The Back of the Envelope Guide to Philanthropy lists rough calculations for the value of different charitable interventions. These calculations imply (among other things) that donating for political advocacy for higher foreign aid is between 8x and 22x as good an investment as donating to VillageReach, and the presentation and implication are that this calculation ought to be considered decisive.
  • We’ve encountered numerous people who argue that charities working on reducing the risk of sudden human extinction must be the best ones to support, since the value of saving the human race is so high that “any imaginable probability of success” would lead to a higher expected value for these charities than for others.
  • “Pascal’s Mugging” is often seen as the reductio ad absurdum of this sort of reasoning. The idea is that if a person demands $10 in exchange for refraining from an extremely harmful action (one that negatively affects N people for some huge N), then expected-value calculations demand that one give in to the person’s demands: no matter how unlikely the claim, there is some N big enough that the “expected value” of refusing to give the $10 is hugely negative.
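
To make the arithmetic of that last example concrete, here is a toy sketch of the Pascal's Mugging calculation; every number in it is hypothetical.

```python
# Toy Pascal's Mugging arithmetic. All numbers are hypothetical: a tiny
# probability that the threat is real, some dollar-equivalent harm per
# affected person, and increasingly large N.

P_THREAT_REAL = 1e-12        # hypothetical probability the mugger's claim is true
HARM_PER_PERSON = 1_000      # hypothetical dollar-equivalent harm per affected person
COST_OF_PAYING = 10          # the $10 demanded

for n_people in (10**6, 10**12, 10**18):
    eev_of_refusing = -P_THREAT_REAL * HARM_PER_PERSON * n_people
    print(n_people, eev_of_refusing)   # -0.001, -1000.0, -1e9

# For a large enough N, the naive expected value of refusing looks worse than
# the -$10 of paying, no matter how small P_THREAT_REAL is.
```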

The crucial characteristic of the EEV approach is that it does not incorporate a systematic preference for better-grounded estimates over rougher estimates. It ranks charities/actions based simply on their estimated value, ignoring differences in the reliability and robustness of the estimates.

Informal objections to EEV decisionmaking

There are many ways in which the sort of reasoning laid out above seems (to us) to fail a common sense test.

  • There seems to be nothing in EEV that penalizes relative ignorance or relatively poorly grounded estimates, or rewards investigation and the forming of particularly well grounded estimates. If I can literally save a child I see drowning by ruining a $1000 suit, but in the same moment I make a wild guess that this $1000 could save 2 lives if put toward medical research, EEV seems to indicate that I should opt for the latter.
  • Because of this, a world in which people acted based on EEV would seem to be problematic in various ways.
    • In such a world, it seems that nearly all altruists would put nearly all of their resources toward helping people they knew little about, rather than helping themselves, their families and their communities. I believe that the world would be worse off if people behaved in this way, or at least if they took it to an extreme. (There are always more people you know little about than people you know well, and EEV estimates of how much good you can do for people you don’t know seem likely to have higher variance than EEV estimates of how much good you can do for people you do know. Therefore, it seems likely that the highest-EEV action directed at people you don’t know will have higher EEV than the highest-EEV action directed at people you do know.)
    • In such a world, when people decided that a particular endeavor/action had outstandingly high EEV, there would (too often) be no justification for costly skeptical inquiry of this endeavor/action. For example, say that people were trying to manipulate the weather; that someone hypothesized that they had no power for such manipulation; and that the EEV of trying to manipulate the weather was much higher than the EEV of other things that could be done with the same resources. It would be difficult to justify a costly investigation of the “trying to manipulate the weather is a waste of time” hypothesis in this framework. Yet it seems that when people are valuing one action far above others, based on thin information, this is the time when skeptical inquiry is needed most. And more generally, it seems that challenging and investigating our most firmly held, “high-estimated-probability” beliefs - even when doing so has been costly - has been quite beneficial to society.
  • Related: giving based on EEV seems to create bad incentives. EEV doesn’t seem to allow rewarding charities for transparency or penalizing them for opacity: it simply recommends giving to the charity with the highest estimated expected value, regardless of how well-grounded the estimate is. Therefore, in a world in which most donors used EEV to give, charities would have every incentive to announce that they were focusing on the highest expected-value programs, without disclosing any details of their operations that might show they were achieving less value than theoretical estimates said they ought to be.
  • If you are basing your actions on EEV analysis, it seems that you’re very open to being exploited by Pascal’s Mugging: a tiny probability of a huge-value expected outcome can come to dominate your decisionmaking in ways that seem to violate common sense. (We discuss this further below.)
  • If I’m deciding between eating at a new restaurant with 3 Yelp reviews averaging 5 stars and eating at an older restaurant with 200 Yelp reviews averaging 4.75 stars, EEV seems to imply (using Yelp rating as a stand-in for “expected value of the experience”) that I should opt for the former. As discussed in the next section, I think this is the purest demonstration of the problem with EEV and the need for Bayesian adjustments.

In the remainder of this post, I present what I believe is the right formal framework for my objections to EEV. However, I have more confidence in my intuitions - which are related to the above observations - than in the framework itself. I believe I have formalized my thoughts correctly, but if the remainder of this post turned out to be flawed, I would likely remain in objection to EEV until and unless one could address my less formal misgivings.

Simple example of a Bayesian approach vs. an EEV approach

It seems fairly clear that a restaurant with 200 Yelp reviews, averaging 4.75 stars, ought to outrank a restaurant with 3 Yelp reviews, averaging 5 stars. Yet this ranking can’t be justified in an EEV-style framework, in which options are ranked by their estimated average/expected value. How, in fact, does Yelp handle this situation?
Unfortunately, the answer appears to be undisclosed in Yelp’s case, but we can get a hint from a similar site: BeerAdvocate, a site that ranks beers using submitted reviews. It states:

Lists are generated using a Bayesian estimate that pulls data from millions of user reviews (not hand-picked) and normalizes scores based on the number of reviews for each beer. The general statistical formula is:
weighted rank (WR) = (v ÷ (v+m)) × R + (m ÷ (v+m)) × C
where:
R = review average for the beer
v = number of reviews for the beer
m = minimum reviews required to be considered (currently 10)
C = the mean across the list (currently 3.66)

In other words, BeerAdvocate does the equivalent of giving each beer a set number (currently 10) of “average” reviews (i.e., reviews with a score of 3.66, which is the average for all beers on the site). Thus, a beer with zero reviews is assumed to be exactly as good as the average beer on the site; a beer with one review will still be assumed to be close to average, no matter what rating the one review gives; as the number of reviews grows, the beer’s rating is able to deviate more from the average.

To illustrate this, the following chart shows how BeerAdvocate’s formula would rate a beer that has 0-100 five-star reviews. As the number of five-star reviews grows, the formula’s “confidence” in the five-star rating grows, and the beer’s overall rating gets further from “average” and closer to (though never fully reaching) 5 stars.

I find BeerAdvocate’s approach to be quite reasonable and I find the chart above to accord quite well with intuition: a beer with a small handful of five-star reviews should be considered pretty close to average, while a beer with a hundred five-star reviews should be considered to be nearly a five-star beer.

However, there are a couple of complications that make it difficult to apply this approach broadly.

  • BeerAdvocate is making a substantial judgment call regarding what “prior” to use, i.e., how strongly to assume each beer is average until proven otherwise. It currently sets the m in its formula equal to 10, which is like giving each beer a starting point of ten average-level reviews; it gives no formal justification for why it has set m to 10 instead of 1 or 100. It is unclear what such a justification would look like.

    In fact, I believe that BeerAdvocate used to use a stronger “prior” (i.e., it used to set m to a higher value), which meant that beers needed larger numbers of reviews to make the top-rated list. When BeerAdvocate changed its prior, its rankings changed dramatically, as lesser-known, higher-rated beers overtook the mainstream beers that had previously dominated the list. (A small sketch after this list illustrates how sensitive rankings can be to the choice of m.)

  • In BeerAdvocate’s case, the basic approach to setting a Bayesian prior seems pretty straightforward: the “prior” rating for a given beer is equal to the average rating for all beers on the site, which is known. By contrast, if we’re looking at the estimate of how much good a charity does, it isn’t clear what “average” one can use for a prior; it isn’t even clear what the appropriate reference class is. Should our prior value for the good-accomplished-per-dollar of a deworming charity be equal to the good-accomplished-per-dollar of the average deworming charity, or of the average health charity, or the average charity, or the average altruistic expenditure, or some weighted average of these? Of course, we don’t actually have any of these figures.

    For this reason, it’s hard to formally justify one’s prior, and differences in priors can cause major disagreements and confusions when they aren’t recognized for what they are. But this doesn’t mean the choice of prior should be ignored or that one should leave the prior out of expected-value calculations (as we believe EEV advocates do).
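Returning to the first complication above: here is a minimal sketch (my own illustration; the m values are hypothetical, the ratings reuse the restaurant example from earlier, and C = 3.66 is kept from BeerAdvocate) of how much the choice of prior strength m can matter. With a very weak prior, the 3-review, 5-star newcomer outranks the 200-review, 4.75-star incumbent; with m = 10 or stronger, the incumbent wins comfortably:

```python
def weighted_rank(R, v, m, C=3.66):
    """Same weighted-rank formula as above, with m left as a free parameter."""
    return (v / (v + m)) * R + (m / (v + m)) * C

# Newcomer: 3 reviews averaging 5.00.  Incumbent: 200 reviews averaging 4.75.
for m in (0.5, 10, 100):
    newcomer = weighted_rank(5.00, 3, m)
    incumbent = weighted_rank(4.75, 200, m)
    print(f"m = {m:>5}: newcomer {newcomer:.2f}, incumbent {incumbent:.2f}")
```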

Applying Bayesian adjustments to cost-effectiveness estimates for donations, actions, etc.

As discussed above, we believe that both Giving What We Can and Back of the Envelope Guide to Philanthropy use forms of EEV analysis in arguing for their charity recommendations. However, when it comes to analyzing the cost-effectiveness estimates they invoke, the BeerAdvocate formula doesn’t seem applicable: there is no “number of reviews” figure that can be used to determine the relative weights of the prior and the estimate.

Instead, we propose a model in which there is a normally (or log-normally) distributed “estimate error” around the cost-effectiveness estimate (with a mean of “no error,” i.e., 0 for normally distributed error and 1 for log-normally distributed error), and in which the prior distribution for cost-effectiveness is normally (or log-normally) distributed as well. (I won’t discuss log-normal distributions in this post, but the analysis I give can be extended by applying it to the log of the variables in question.) The more one feels confident in one’s pre-existing view of how cost-effective a donation or action should be, the smaller the variance of the “prior”; the more one feels confident in the cost-effectiveness estimate itself, the smaller the variance of the “estimate error.”

Following up on our 2010 exchange with Giving What We Can, we asked Dario Amodei to write up the implications of the above model and the form of the proper Bayesian adjustment. You can see his analysis here. The bottom line is that when one applies Bayes’s rule to obtain a distribution for cost-effectiveness based on (a) a normally distributed prior distribution and (b) a normally distributed “estimate error,” one obtains a distribution with

  • Mean equal to the average of the two means weighted by their inverse variances
  • Variance equal to the harmonic sum of the two variances

The following charts show what this formula implies in a variety of different simple hypotheticals. In all of these, the prior distribution has mean = 0 and standard deviation = 1, and the estimate has mean = 10, but the “estimate error” varies, with important effects: an estimate with little enough estimate error can almost be taken literally, while an estimate with large enough estimate error ought to be almost ignored.

In each of these charts, the black line represents a probability density function for one’s “prior,” the red line for an estimate (with the variance coming from “estimate error”), and the blue line for the final probability distribution, taking both the prior and the estimate into account. Taller, narrower distributions represent cases where probability is concentrated around the midpoint; shorter, wider distributions represent cases where the possibilities/probabilities are more spread out among many values. First, the case where the cost-effectiveness estimate has the same confidence interval around it as the prior:

If one has a relatively reliable estimate (i.e., one with a narrow confidence interval / small variance of “estimate error”), then the Bayesian-adjusted conclusion ends up very close to the estimate. When we estimate quantities using highly precise and well-understood methods, we can use them (almost) literally.

On the flip side, when the estimate is relatively unreliable (wide confidence interval / large variance of “estimate error”), it has little effect on the final expectation of cost-effectiveness (or whatever is being estimated). And at the point where the one-standard-deviation bands include zero cost-effectiveness (i.e., where there’s a pretty strong probability that the whole cost-effectiveness estimate is worthless), the estimate ends up having practically no effect on one’s final view.
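For readers who want to reproduce the numbers behind these hypotheticals, here is a minimal sketch (my own illustration in Python) of the combination rule described above, applied to a prior with mean 0 and standard deviation 1 and an estimate with mean 10 and varying estimate error:

```python
import math

def bayesian_adjusted(prior_mean, prior_sd, est_mean, est_sd):
    """Combine a normal prior with a normally distributed estimate:
    mean = inverse-variance-weighted average of the two means,
    variance = harmonic sum of the two variances."""
    w_prior = 1.0 / prior_sd ** 2
    w_est = 1.0 / est_sd ** 2
    mean = (prior_mean * w_prior + est_mean * w_est) / (w_prior + w_est)
    sd = math.sqrt(1.0 / (w_prior + w_est))
    return mean, sd

# Prior: mean 0, SD 1.  Estimate: mean 10, with varying estimate error.
for est_sd in (0.5, 1, 2, 5, 10):
    mean, sd = bayesian_adjusted(0, 1, 10, est_sd)
    print(f"estimate error SD {est_sd:>4}: adjusted mean {mean:.2f}, adjusted SD {sd:.2f}")
```

With an estimate error SD of 0.5, the adjusted mean is 8.0 (the estimate is nearly taken literally); with an estimate error SD of 10, the adjusted mean is about 0.1 (the estimate is nearly ignored).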

The details of how to apply this sort of analysis to cost-effectiveness estimates for charitable interventions are outside the scope of this post, which focuses on our belief in the importance of the concept of Bayesian adjustments. The big-picture takeaway is that just having the midpoint of a cost-effectiveness estimate is not worth very much in itself; it is important to understand the sources of estimate error, and the degree of estimate error relative to the degree of variation in estimated cost-effectiveness for different interventions.

Pascal’s Mugging

Pascal’s Mugging refers to a case where a claim of extravagant impact is made for a particular action, with little to no evidence:

Now suppose someone comes to me and says, “Give me five dollars, or I’ll use my magic powers … to [harm an unimaginably huge number of] people.”

Non-Bayesian approaches to evaluating these proposals often take the following form: “Even if we assume that this analysis is 99.99% likely to be wrong, the expected value is still high - and are you willing to bet that this analysis is wrong at 99.99% odds?”

However, this is a case where “estimate error” is probably accounting for the lion’s share of variance in estimated expected value, and therefore I believe that a proper Bayesian adjustment would correctly assign little value where there is little basis for the estimate, no matter how high the midpoint of the estimate.

Say that you’ve come to believe - based on life experience - in a “prior distribution” for the value of your actions, with a mean of zero and a standard deviation of 1. (The units you use to value your actions are irrelevant to the point I’m making; in this case the units are simply standard deviations of your prior distribution for the value of your actions.) Now say that someone estimates that Action A (e.g., giving in to the mugger’s demands) has an expected value of X (in the same units) - but that the estimate itself is so rough that the right expected value could easily be 0 or 2X. More specifically, say that the error in the expected-value estimate has a standard deviation of X.

An EEV approach to this situation might say, “Even if there’s a 99.99% chance that the estimate is completely wrong and that the value of Action A is 0, there’s still a 0.01% probability that Action A has a value of X. Thus, overall Action A has an expected value of at least 0.0001X; the greater X is, the greater this value is, and if X is great enough, then you should take Action A unless you’re willing to bet at enormous odds that the framework is wrong.”

However, the same formula discussed above indicates that Action A actually has an expected value - after the Bayesian adjustment - of X/(X^2+1), or just under 1/X. In this framework, the greater X is, the lower the expected value of Action A. This syncs well with my intuitions: if someone threatened to harm one person unless you gave them $10, this ought to carry more weight (because it is more plausible in the face of the “prior” of life experience) than if they threatened to harm 100 people, which in turn ought to carry more weight than if they threatened to harm 3^^^3 people (I’m using 3^^^3 here as a representation of an unimaginably huge number).
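To see where X/(X^2+1) comes from: under the stated assumptions, the prior has mean 0 and variance 1, and the estimate has mean X and variance X^2 (its standard deviation is X). The inverse-variance-weighted mean from above then gives

adjusted expected value = (0 × 1/1 + X × 1/X^2) ÷ (1/1 + 1/X^2) = (1/X) ÷ ((X^2 + 1)/X^2) = X/(X^2 + 1)

which is just under 1/X for large X.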

The point at which a threat or proposal starts to be called “Pascal’s Mugging” can be thought of as the point at which the claimed value of Action A is wildly outside the prior set by life experience (which may cause the feeling that common sense is being violated). If someone claims that giving him/her $10 will accomplish 3^^^3 times as much as a 1-standard-deviation life action from the appropriate reference class, then the actual post-adjustment expected value of Action A will be just under (1/3^^^3) (in standard deviation terms) - only trivially higher than the value of an average action, and likely lower than other actions one could take with the same resources. This is true without applying any particular probability that the person’s framework is wrong - it is simply a function of the fact that their estimate has such enormous possible error. An ungrounded estimate making an extravagant claim ought to be more or less discarded in the face of the “prior distribution” of life experience.

Generalizing the Bayesian approach

In the above cases, I’ve given quantifications of (a) the appropriate prior for cost-effectiveness and (b) the strength/confidence of a given cost-effectiveness estimate. One needs to quantify both (a) and (b) - not just quantify estimated cost-effectiveness - in order to formally make the needed Bayesian adjustment to the initial estimate.

But when it comes to giving, and many other decisions, reasonable quantification of these things usually isn’t possible. To have a prior, you need a reference class, and reference classes are debatable.

It’s my view that my brain instinctively processes huge amounts of information, coming from many different reference classes, and arrives at a prior; if I attempt to formalize my prior, counting only what I can name and justify, I can substantially worsen its accuracy relative to going with my gut. Of course there is a problem here: going with one’s gut can be an excuse for going with what one wants to believe, and a lot of what enters into my gut belief could be irrelevant to proper Bayesian analysis. Formulas do have an appeal: they seem open to being checked by outsiders for fairness and consistency.

But when the formulas are too rough, I think the loss of accuracy outweighs the gains to transparency. Rather than using a formula that is checkable but omits a huge amount of information, I’d prefer to state my intuition - without pretense that it is anything but an intuition - and hope that the ensuing discussion provides the needed check on my intuitions.

I can’t, therefore, usefully say what I think the appropriate prior estimate of charity cost-effectiveness is. I can, however, describe a couple of approaches to Bayesian adjustments that I oppose, and can describe a few heuristics that I use to determine whether I’m making an appropriate Bayesian adjustment.

Approaches to Bayesian adjustment that I oppose

I have seen some argue along the lines of “I have a very weak (or uninformative) prior, which means I can more or less take rough estimates literally.” I think this is a mistake. We do have a lot of information by which to judge what to expect from an action (including a donation), and failure to use all the information we have is a failure to make the appropriate Bayesian adjustment. Even just a sense for the values of the small set of actions you’ve taken in your life, and observed the consequences of, gives you something to work with as far as an “outside view” and a starting probability distribution for the value of your actions; this distribution probably ought to have high variance, but when dealing with a rough estimate that has very high variance of its own, it may still be quite a meaningful prior.

I have seen some using the EEV framework who can tell that their estimates seem too optimistic, so they make various “downward adjustments,” multiplying their EEV by apparently ad hoc figures (1%, 10%, 20%). What isn’t clear is whether the size of the adjustment they’re making has the correct relationship to (a) the weakness of the estimate itself, (b) the strength of the prior, and (c) the distance of the estimate from the prior. An example of how this approach can go astray can be seen in the “Pascal’s Mugging” analysis above: assigning one’s framework a 99.99% chance of being totally wrong may seem to be amply conservative, but in fact the proper Bayesian adjustment is much larger and leads to a completely different conclusion.

Heuristics I use to address whether I’m making an appropriate prior-based adjustment

  • The more action is asked of me, the more evidence I require. Anytime I’m asked to take a significant action (giving a significant amount of money, time, effort, etc.), this action has to have higher expected value than the action I would otherwise take. My intuitive feel for the distribution of “how much my actions accomplish” serves as a prior - an adjustment to the value that the asker claims for my action.
  • I pay attention to how much of the variation I see between estimates is likely to be driven by true variation vs. estimate error. As shown above, when an estimate is rough enough so that error might account for the bulk of the observed variation, a proper Bayesian approach can involve a massive discount to the estimate.
  • I put much more weight on conclusions that seem to be supported by multiple different lines of analysis, as unrelated to one another as possible. If one starts with a high-error estimate of expected value, and then starts finding more estimates with the same midpoint, the variance of the aggregate estimate error declines; the less correlated the estimates are, the greater the decline in the variance of the error, and thus the lower the Bayesian adjustment to the final estimate (see the sketch after this list). This is a formal way of observing that “diversified” reasons for believing something lead to more “robust” beliefs, i.e., beliefs that are less likely to fall apart with new information and can be used with less skepticism.
  • I am hesitant to embrace arguments that seem to have anti-common-sense implications (unless the evidence behind these arguments is strong) and I think my prior may often be the reason for this. As seen above, a too-weak prior can lead to many seemingly absurd beliefs and consequences, such as falling prey to “Pascal’s Mugging” and removing the incentive for investigation of strong claims. Strengthening the prior fixes these problems (while over-strengthening the prior results in simply ignoring new evidence). In general, I believe that when a particular kind of reasoning seems to me to have anti-common-sense implications, this may indicate that its implications are well outside my prior.
  • My prior for charity is generally skeptical, as outlined in this post. Giving well seems conceptually quite difficult to me, and it’s been my experience over time that the more we dig on a cost-effectiveness estimate, the more unwarranted optimism we uncover. Also, having an optimistic prior would mean giving to opaque charities, and that seems to violate common sense. Thus, we look for charities with quite strong evidence of effectiveness, and tend to prefer very strong charities with reasonably high estimated cost-effectiveness to weaker charities with very high estimated cost-effectiveness.
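To illustrate the third heuristic above, here is a minimal sketch (my own illustration; all values are hypothetical) of how several fully independent estimates with the same midpoint reduce the aggregate estimate error, and therefore the size of the Bayesian discount, compared with a single estimate of the same quality:

```python
def combine(prior_mean, prior_var, estimates):
    """Inverse-variance-weighted combination of a normal prior with several
    independent normal estimates, each given as a (mean, variance) pair.
    (If the estimates' errors were correlated, the extra estimates would carry
    less effective weight, and the discount would shrink less.)"""
    weights = [1.0 / prior_var] + [1.0 / var for _, var in estimates]
    means = [prior_mean] + [mean for mean, _ in estimates]
    return sum(w * m for w, m in zip(weights, means)) / sum(weights)

# Prior: mean 0, variance 1.  Each independent estimate: mean 10, variance 100
# (i.e., each estimate alone would be heavily discounted).
for n in (1, 2, 5, 10, 25):
    adjusted = combine(0.0, 1.0, [(10.0, 100.0)] * n)
    print(f"{n:>2} independent estimates -> adjusted value {adjusted:.2f}")
```

A single such estimate moves the adjusted value only to about 0.1; twenty-five independent estimates sharing that midpoint move it to 2.0.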

Conclusion

  • I feel that any giving approach that relies only on estimated expected value - and does not incorporate preferences for better-grounded estimates over shakier estimates - is flawed.
  • Thus, when aiming to maximize expected positive impact, it is not advisable to make giving decisions based fully on explicit formulas. Proper Bayesian adjustments are important, and they are usually too difficult to formalize.