Last updated: August 2022
We aim to find giving opportunities that save or improve lives as much as possible. We generally start by identifying promising programs, then identify organizations that effectively implement those programs.1
Our program review process operates like a funnel: We conduct short, shallow reviews of a large number of programs, then prioritize more intensive reviews only for programs that seem more promising. We assess how promising a program seems based on strength of evidence, cost-effectiveness, and room for more funding. We provide the most in-depth reviews of the most promising programs. We designate organizations that effectively implement these programs and meet our criteria as top charities and recommend grants to support their work. We also recommend other grants to support additional research and/or fund high-impact programs.
Our prioritized list of programs is in this spreadsheet. We plan to update our list as we review additional programs and update or revisit our reviews of existing programs.
Our program review process
Why program research?
GiveWell aims to identify and direct funding to giving opportunities that save or improve lives as much as possible.
We assess opportunities based on the following criteria: evidence of effectiveness, cost-effectiveness, room for more funding, and transparency. We aim to direct grants to the highest-impact giving opportunities we identify, some of which may not be represented by organizations on our top charities list (organizations on that list must meet additional criteria).
The program an organization chooses to implement is a major factor in the overall impact of its work, and thus in its performance against our criteria. Because the choice of program(s) has such a strong effect on the amount of good an organization can accomplish, we typically focus on identifying promising programs before we start looking for organizations that are implementing them.
How we identify programs to review
We generally rely on independent, academic evidence to assess the effectiveness of programs at saving or improving lives. We also research the potential funding landscape for programs.
We focus on global health and poverty alleviation programs because that's where we've found donations can make the biggest difference, according to our criteria.
We identify programs to review from a number of different sources:
- We have set up a system of alerts on Google Scholar to learn of new evidence on program effectiveness, with a focus on newly published randomized controlled trials and Cochrane Library meta-analyses of programs that serve the global poor.2 We also follow blogs and social media sources to learn of new papers relevant to our work.
- We regularly speak with implementing organizations to understand whether the programs they implement fit our criteria and to ask for their input on programs we should consider investigating.
- We learn of some programs by speaking with organizations and academics working in the field of global health and development and by attending conferences.
- We have considered the largest sources of years of life lost and the largest sources of morbidity, according to the Institute for Health Metrics and Evaluation's Global Burden of Disease database, and searched for programs that aim to address them.
- We have sourced programs from reviews published by other organizations, including the Copenhagen Consensus, the Disease Control Priorities Report, Millions Saved, and the Tufts Medical Center Cost-Effectiveness Analysis Registry.3
- In past years, we have used additional approaches to identify promising programs and organizations. More information is here.
- We receive inbound suggestions for programs to review. If you have a suggestion, please email us at info@givewell.org.
Reviewing programs
We review dozens of programs each year. Our research process operates like a funnel: We start by reviewing a program at a shallow level and either (a) deprioritize it, if it does not seem likely to meet our criteria, or (b) assess it at increasing levels of depth, if it seems promising by our criteria. We spend the most time with the programs that seem the most promising. We recommend funding to organizations that are effectively implementing the most promising programs.
The programs that we consider most promising have the following characteristics, corresponding to our criteria for grantmaking:
- The evidence in support of their effectiveness appears to be strong.4
- They appear to be highly cost-effective.
- It is plausible that there is room for more funding for the program.
Because we ultimately aim to identify organizations that implement promising programs well, at the program review stage, we also take into account how likely it is that we would find a charity that implements a particular program and how challenging a program would be to implement.5
Prioritized list of programs
Our prioritized list of programs
A prioritized list of programs we have reviewed is below.
Full prioritized list of programs
How we categorize programs we've reviewed
We categorize the programs we've reviewed as follows:
- Actively supporting: We have recommended grants to support one or more organizations to implement this program, and the work supported by those grants is ongoing (or "active"). We would likely consider recommending additional grants to other organizations working on this program if we found opportunities to do so.
- Further work planned: We expect to learn more about these programs and may consider recommending grants to support them.
- Revisit in the future: We have a specific reason to wait to conduct further analysis. A common reason is that we're waiting for the completion of a study that we know is in progress. Another possible reason is that we're waiting for the scale-up of manufacturing of a critical commodity for the program.
- No further work planned: These are cases where we believe there isn't enough evidence of effectiveness or there is limited room for funding in the program, and/or where we estimate that the program's cost-effectiveness is well below the threshold at which we would consider directing funding. We think it is unlikely that additional analysis will meaningfully update our views on these programs, though we plan to consider updating our reviews of these programs if new information comes to our attention (e.g., additional research that we did not anticipate, updated information on program costs). We hope that if our reasoning is incorrect, this will be brought to our attention. You can reach us at info@givewell.org if you believe our assessment is flawed.
On the "Program reviews" sheet of our prioritized list of programs, programs are grouped by the above categories and listed in alphabetical order within each category grouping.
Programs for which our review is out of date are listed in a separate tab of the spreadsheet, "Out of date program reviews." These are typically cases where we believe that we would likely approach our investigation differently if we did it today, and so are unsure if we would reach the same conclusion.
How we plan to update our prioritized list of programs in the future
Adding programs
As of April 2021, this list does not fully capture all programs on which we have formed a view. We have written about our failure to publish our research in a timely manner here. Sharing the prioritized list of programs in April 2021 was part of our work to address this failure, but in some cases we have completed analytical work that we have not yet prepared for publication in the spreadsheet. We plan to add more of these programs over time.
We also plan to review additional programs that come to our attention. Our goal is to add them to the spreadsheet once we have spent one to five days reviewing them. This is the shallowest level of review we would expect to do before publishing our initial prioritization of the program and assessing whether to spend more time analyzing it.
Updating prioritization levels for programs we're actively considering
For all except the programs in the "no further work planned" category, we plan either to recommend funding to organizations implementing the program or to learn more about it. We plan to update our prioritized list of programs when:
- We recommend a grant for the first time to support a program or add a new implementing organization for a program we have supported before.
- We do additional research that updates a program's prioritization level or the reasons for that prioritization level.
- We decide not to pursue additional work on a program and designate it as "no further work planned."
Revisiting other programs
We plan to revisit our conclusions for programs listed as "no further work planned" either when new information comes to our attention or after five years, whichever comes first.
In addition, we plan to gradually update our reviews of the programs listed in the "Out of date program reviews" sheet.
Older versions of this page
- March 2021 version
- August 2017 version
- 2012 version
- Our summary of our 2009-2011 criteria for evaluating programs
- 2009 version
- 1A note on terminology: We use "program" to refer to an intervention an organization implements. For example, the Against Malaria Foundation is an organization (and specifically, a GiveWell top charity). The program it implements is distributing insecticide-treated nets to prevent malaria. Organizations may implement one program or multiple programs.
- 2Randomized controlled trial: "A randomized controlled trial (in this context) is a study in which a set of people is identified as potential program participants, and then randomly divided into one or more 'treatment group(s)' (group(s) participating in the program in question) and a 'control group' (a group that experiences no intervention). When this is done, it is generally presumed that any sufficiently large differences that emerge between the treatment and control groups were caused by the program. Many, including us, consider the randomized controlled trial to be the 'gold standard' in terms of causal attribution." The GiveWell blog: How we evaluate a study, August 23, 2012. Meta-analyses: "In some cases it is possible to perform meta-analysis: combining the results from multiple studies to get a single 'pooled' quantitative result." The GiveWell blog: Surveying the research on a topic, September 6, 2012.
- 3More information on how we used the Copenhagen Consensus and Millions Saved in our review process is here.
- 4More on how we assess evidence of effectiveness is in these blog posts:
- http://blog.givewell.org/2012/08/17/our-principles-for-assessing-evidence/
- http://blog.givewell.org/2012/08/23/how-we-evaluate-a-study/
- http://blog.givewell.org/2012/09/06/surveying-the-research-on-a-topic/
- 5All else equal, we prioritize programs where (i) organizations are likely to be implementing the program at scale already and (ii) implementation is less complicated, so there is a lower risk that an organization fails to implement the program well.