Would other organizations have funded AMF’s bednet distributions if AMF hadn’t?

An important question to ask when deciding where to give is “what would happen if this charity didn’t receive my donation?”

To investigate this, we focus on charities’ “room for more funding,” i.e., what additional funding would allow an organization to do that it would not be able to do without additional support from the donors GiveWell influences.

This question is relevant to the Against Malaria Foundation (AMF), currently our #1 rated charity, which provides funding to support malaria net distributions in Sub-Saharan Africa. In the past, we focused intensely on the question of whether AMF would be able to absorb and commit additional funds.

Recently, we asked another question: how likely is it that the bednet distributions that AMF supports would have been funded by others if AMF hadn’t provided funding? That is, would another funder have stepped in to provide funding in AMF’s absence?

If this were the case, our assessment of AMF’s impact would be diminished, because the distributions it supports would likely occur even without donations to AMF.

We can’t know what other funders might do in the future, so to learn more about this we looked back at cases from 2012 and 2013 where AMF had initially considered a distribution but then didn’t end up providing funding. We asked whether, and when, those distributions were eventually funded by others.

Our investigation

We looked at five cases where AMF considered funding a distribution but did not end up moving forward. In short:

  • In two cases, major delays (18 months and ~36 months) occurred before people in the area received bednets from other sources.
  • In two cases, other funders filled the gap six to nine months later than AMF would have.
  • In one case, funding was committed soon after AMF’s talks fell through.

(For context, we use an “8%-20%-50%” model to estimate the longevity of bednets, which assumes that 92% of nets are still in use through the first year, 80% through the second, and 50% through the third (and none after the end of the third year). On average, then, we estimate that nets last about 27 months.)
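
As a rough illustration, the ~27-month figure can be reproduced by summing the fraction of nets assumed to be in use through each year. The sketch below (in Python) shows that arithmetic; it is our shorthand reading of the model rather than an excerpt from our cost-effectiveness file.

```python
# Minimal sketch: reproduce the ~27-month average lifespan from the "still in use"
# fractions, treating each year's fraction as that many net-years of use per net.
in_use_by_year = [0.92, 0.80, 0.50]    # share of nets still in use through years 1-3

expected_years = sum(in_use_by_year)   # 2.22 years of use per net distributed
expected_months = expected_years * 12  # ~26.6, i.e. roughly 27 months
print(f"Expected lifespan: {expected_months:.1f} months")
```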

More details are available in our full report on this investigation.

Of course, these cases aren’t necessarily predictive:

  • It’s possible that these distributions were atypical, and that the reasons that led AMF not to carry them out were the same reasons that led other funders not to fund them. If so, a typical AMF distribution might in fact be more likely than these results suggest to be funded by someone else if AMF doesn’t fund it.
  • It’s possible the global funding situation has changed since the cases we investigated in 2012 and 2013 – if more funding is now available overall, it is more likely that another funder would step in if AMF didn’t carry out a given distribution.

That said, even if other funders would always step in when AMF didn’t carry out a distribution, AMF could still be increasing the total number of bednets distributed if there’s an overall funding gap for bednets globally; for this to be the case, there would likely need to be some additional pool of funding that can be directed to bednets when necessary. We’ve written more about the global bednet gap here.

Overall, we think that the cases we looked at offer support to our conclusion that there is a real need for additional funding for bednets, and that AMF is not primarily displacing other funding for bednets.

Deworming might have huge impact, but might have close to zero impact

We try to communicate that there are risks involved with all of our top charity recommendations, and that none of our recommendations are a “sure thing.”

Our recommendation of deworming programs (the Schistosomiasis Control Initiative and the Deworm the World Initiative), though, carries particularly significant risk (in the sense of possibly not doing much/any good, rather than in the sense of potentially doing harm). In our 2015 top charities announcement, we wrote:

Most GiveWell staff members would agree that deworming programs are more likely than not to have very little or no impact, but there is some possibility that they have a very large impact. (Our cost-effectiveness model implies that most staff members believe there is at most a 1-2% chance that deworming programs conducted today have similar impacts to those directly implied by the randomized controlled trials on which we rely most heavily, which differed from modern-day deworming programs in a number of important ways.)

The goal of this post is to explain this view and why we still recommend deworming.

Some basics for this post

What is deworming?

Deworming is a program that involves treating people at risk of intestinal parasitic worm infections with parasite-killing drugs. Mass treatment is very inexpensive (in the range of $0.50-$1 per person treated), and because treatment is cheaper than diagnosis and side effects of the drugs are believed to be minor, typically all children in an area where worms are common are treated without being individually tested for infections.

Does it work?

There is strong evidence that administration of the drugs reduces worm loads, but many of the infections appear to be asymptomatic and evidence for short-term health impacts is thin (though a recent meta-analysis that we have not yet fully reviewed reports that deworming led to short-term weight gains). The main evidence we rely on to make the case for deworming comes from a handful of longer term trials that found positive impacts on income or test scores later in life.

For more background on deworming programs see our full report on combination deworming.

Why do we believe it’s more likely than not that deworming programs have little or no impact?

The “1-2% chance” doesn’t mean that we think that there’s a 98-99% chance that deworming programs have no effect at all, but that we think it’s appropriate to use a 1-2% multiplier compared to the impact found in the original trials – this could be thought of as assigning some chance that deworming programs have no impact, and some chance that the impact exists but will be smaller than was measured in those trials. For instance, as we describe below, worm infection rates are much lower in present contexts than they were in the trials.
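
As a toy illustration (with purely hypothetical probabilities, not figures from our cost-effectiveness model), a single multiplier like this can be thought of as a weighted average of a scenario with no impact and a scenario with a much smaller impact than the trials found:

```python
# Toy illustration with hypothetical numbers: a single "replicability" multiplier
# summarizes a mix of scenarios rather than a literal 98-99% chance of zero effect.
p_no_effect = 0.5             # hypothetical chance deworming has no long-term impact
p_some_effect = 0.5           # hypothetical chance some impact exists...
relative_size_if_real = 0.03  # ...but only ~3% as large as in the original trials

multiplier = p_no_effect * 0.0 + p_some_effect * relative_size_if_real
print(f"Implied multiplier on the trial effect: {multiplier:.1%}")  # 1.5%
```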

Where does this view come from?

Our overall recommendation of deworming relies heavily on a randomized controlled trial (RCT) (the type of study we consider to be the “gold standard” in terms of causal attribution) first written about in Miguel and Kremer 2004, with 10-year follow-up data reported in Baird et al. 2011, which found very large long-term effects on recipients’ income. We reviewed this study very carefully (see here and here) and we felt that its analysis largely held up to scrutiny.

There’s also some other evidence, including a study that found higher test scores in Ugandan parishes that were dewormed in an earlier RCT, and a high-quality study that is not an RCT but found especially large increases in income in areas in the American South that received deworming campaigns in the early 20th century. However, we consider Baird et al. 2011 to be the most significant result because of its size and the fact that the follow-up found increases in individual income.

While our recommendation relies on the long-term effects, the evidence for short-term effects of deworming on health is thin, so we have little evidence of a mechanism through which deworming programs might bring about long-term impact (though a recent meta-analysis that we have not yet fully reviewed reports that deworming led to short-term weight gains). This raises concerns about whether the long-term impact exists at all, and may suggest that the program is more likely than not to have no significant impact.

Even if there is some long-term impact, we lower our expectation of its size because of factors that differ between real-world implementations and the Miguel and Kremer trial. In particular, worm loads were especially high during that trial (conducted in Western Kenya in 1998), in part due to flooding from El Niño, and baseline infection rates are lower in the places where SCI and Deworm the World work today than they were in the relevant studies.

Our cost-effectiveness model estimates that baseline worm infections in the trial we mainly rely on were roughly 4 to 5 times as high as in the places where SCI and Deworm the World operate today, and that El Niño further inflated worm loads during the trial. (These estimates combine data on the prevalence and intensity of infections, and are especially rough because there is limited data on whether prevalence or intensity is the bigger driver of impact.) Further, we don’t know of any evidence that would allow us to rule out the possibility that the relationship between worm infection rates and the effectiveness of deworming is nonlinear – for instance, that many children in the Miguel and Kremer trial were above a clinically relevant “threshold” of infection that few children treated by our recommended charities reach.

We also downgrade our estimate of the expected value of the impact based on: concerns that the limited number of replications and lack of obvious causal mechanism might mean there is no impact at all, expectation that deworming throughout childhood could have diminishing returns compared to the ~2.4 marginal years of deworming provided in the Miguel and Kremer trial, and the fact that the trial only found a significant income effect on those participants who ended up working in a wage-earning job. See our cost-effectiveness model for more information.

Why do we recommend deworming despite the reasonably high probability that there’s no impact?

Because mass deworming is so cheap, there is a good case for donating to support it even given substantial doubts about the evidence. We estimate that, in expectation, deworming programs are as cost-effective as any program we’ve found, even after the substantial adjustments discussed above: our best guess, accounting for those discounts, is that deworming is still roughly 5-10 times as cost-effective as cash transfers. But that expected value comes from combining the possibility of enormous cost-effectiveness with the alternative possibility of little or none.
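
To make the logic concrete, here is a stylized expected-value calculation with purely hypothetical numbers (it is not our actual cost-effectiveness model): a small probability of a very large payoff can dominate the expected value even when the most likely outcome is little or no impact.

```python
# Stylized expected-value calculation with hypothetical numbers (not our actual model).
p_large_impact = 0.02    # small chance the trial-sized impact is real
p_little_or_none = 0.98  # most likely outcome: little or no impact
value_if_large = 400     # hypothetical cost-effectiveness, in multiples of cash transfers
value_if_little = 0

expected_value = p_large_impact * value_if_large + p_little_or_none * value_if_little
print(f"Expected cost-effectiveness: {expected_value:.0f}x cash transfers")  # 8x in this example
```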

GiveWell isn’t seeking certainty – we’re seeking outstanding opportunities backed by relatively strong evidence, and deworming meets that standard. For donors interested in trying to do as much good as possible with their donations, we think that deworming is a worthwhile bet.

What could change this recommendation – will more evidence be collected?

To our knowledge, there are currently no large, randomized controlled trials being conducted that are likely to be suitable for long-term follow up to measure impacts on income when the recipients are adults, so we don’t expect to see a high-quality replication of the Miguel and Kremer study in the foreseeable future.

That said, there are some possible sources of additional information:

  • The follow-up data that found increased incomes among recipients in the original Miguel and Kremer study was collected roughly 10 years after the trial was conducted. Our understanding is that 15 year follow-up data has been collected and we expect to receive an initial analysis of it from the researchers this summer.
  • A recent study from Uganda didn’t involve data collection for the purpose of evaluating a randomized controlled trial; rather, the paper identified an old, short-term trial of deworming and an unrelated data set of parish-level test scores collected by a different organization in the same area. Because some of the parishes overlap, it’s possible to compare the test scores from those that were dewormed to those that weren’t. It’s possible that more overlapping data sets will be discovered and so we may see more similar studies in the future.
  • We’ve considered whether to recommend funding for an additional study to replicate Baird et al. 2011: run a new deworming trial that could be followed for a decade to track long-term income effects. However, it would take 10+ years to get relevant results, and by that time deworming may be fully funded by the largest global health funders. It would also need to include a very large number of participants to be adequately powered to detect plausible effects (since the original trial in Baird et al. 2011 benefited from particularly high infection rates, which likely made it easier to detect an effect), so it would likely be extremely expensive.

For the time being, based on our best guess about the expected cost-effectiveness of the program when all the factors are considered, we continue to recommend deworming programs.

Update on GiveWell’s web traffic / money moved: Q1 2016

In addition to evaluations of other charities, GiveWell publishes substantial evaluation of itself, from progress against our goals to our impact on donations. We generally publish quarterly updates regarding two key metrics: (a) donations to top charities and (b) web traffic (though going forward, we may provide less frequent updates).

The tables and chart below present basic information about our growth in money moved and web traffic in the first quarter of 2016 compared to the previous two years (note 1).

Money moved and donors: first quarter

[Table: money moved and donors, first quarter]

Money moved by donors who have never given more than $5,000 in a year increased about 50% to $1.1 million. The total number of donors in the first quarter increased about 30% to about 4,500 (note 2).

Most of our money moved is donated near the end of the year (we tracked 70% or more of our total money moved in the fourth quarter in each of the last three years) and is driven by a relatively small number of large donors. Because of this, we do not think we can reliably predict our growth, and we think that our year-to-date total money moved provides relatively limited information about what our year-end money moved is likely to be (note 3). We therefore look at the data above as an indication of growth in our audience.

Web traffic through April 2016

[Table: web traffic, first quarter]

Web traffic excluding Google AdWords grew about 10% in the first quarter. GiveWell’s website receives elevated traffic during “giving season” around December of each year. To adjust for this and emphasize the trend, the chart below shows the rolling sum of unique visitors over the previous twelve months, starting in December 2009 (the first period for which we have 12 months of reliable data, due to an issue tracking visits in 2008).

[Chart: rolling 12-month unique visitors]

We use web analytics data from two sources: Clicky and Google Analytics (except for those months for which we only have reliable data from one source). The raw data we used to generate the chart and table above (as well as notes on the issues we’ve had and adjustments we’ve made) is in this spreadsheet. (Note on how we count unique visitors.)
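
For readers curious about the adjustment described above, the sketch below (Python, with placeholder monthly counts rather than our actual data) shows how a rolling twelve-month sum of unique visitors is computed.

```python
# Sketch of the rolling-sum adjustment, using placeholder monthly visitor counts.
monthly_uniques = [30_000 + 1_000 * i for i in range(24)]  # hypothetical monthly uniques

rolling_12m = [
    sum(monthly_uniques[i - 11 : i + 1])      # unique visitors over the trailing 12 months
    for i in range(11, len(monthly_uniques))  # start once 12 months of data are available
]
print(rolling_12m[:3])
```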



Note 1: Since our 2012 annual metrics report we have shifted to a reporting year that starts on February 1, rather than January 1, in order to better capture year-on-year growth in the peak giving months of December and January. Therefore, metrics for the “first quarter” reported here are for February through April.

Note 2: Our measure of the total number of donors may overestimate the true number. We identify individual donors based on the reported name and email. Donors may donate directly to our recommended charities and not opt to share their contact information with us, or donors may use different information for subsequent donations (for example, a different email), in which case, we may mistakenly count a donation from a past donor as if it was made by a new donor. We are unsure but would guess that the impact of this issue is relatively small and that the data shown are generally reflective of our growth from year to year.

Note 3: In total, GiveWell donors directed $2.6 million to our top charities in the first quarter of 2016, compared to $2.0 million that we had tracked in the first quarter of 2015. For the reason described above, we don’t find this number to be particularly meaningful at this time of year.

Note 4: We count unique visitors over a period as the sum of monthly unique visitors. In other words, if the same person visits the site multiple times in a calendar month, they are counted once. If they visit in multiple months, they are counted once per month.

Weighing organizational strength vs. estimated cost-effectiveness

A major question we’ve asked ourselves internally over the last few years is how we should weigh organizational quality versus the value of the intervention that the organization is carrying out.

In particular, is it better to recommend an organization we’re very impressed by and confident in that’s carrying out a good program, or better to recommend an organization we’re much less confident in that’s carrying out an exceptional program? This question has been most salient when deciding how to rank giving to GiveDirectly vs giving to the Schistosomiasis Control Initiative.

GiveDirectly vs SCI

GiveDirectly is an organization that we’re very impressed by and confident in – more so than any other charity we’ve come across in our history.

But we estimate that marginal dollars to the program it implements — direct cash transfers — are significantly less cost-effective than bednets and deworming programs. Excluding organizational factors, our best guess is that deworming programs — which SCI supports — are roughly 5 times as cost-effective as cash transfers. As discussed further below, our cost-effectiveness estimates are generally based on extremely limited information and are therefore extremely rough, so we are cautious about assigning too much weight to them.

Despite the better cost-effectiveness of deworming, we’ve had significant issues with SCI as an organization. The two most important:

  • We originally relied on a set of studies showing dramatic drops in worm infection coinciding with SCI-run deworming programs to evaluate SCI’s track record; we later discovered flaws in the study methodology that led us to conclude that they did not demonstrate that SCI had a strong track record. We wrote about these flaws in 2013 and 2014.
  • We’ve seen limited and at times erroneous financial information from SCI over the years. We have seen some improvements in SCI’s financial reporting in 2016, but we still have some concerns, as detailed in our most recent report.

More broadly, both of these cases are examples of general problems we’ve had communicating with SCI over the years. And we don’t believe SCI’s trajectory has generated evidence of overall impressiveness comparable to GiveDirectly’s, discussed above.

Which should we recommend?

One argument is that GiveWell should only recommend exceptional organizations, and so the issues we’ve seen with SCI should disqualify them.

But, we think that the ~5x difference in cost-effectiveness is meaningful. There’s a large degree of uncertainty in our cost-effectiveness analyses, which is something we’ve written a lot about in the past, but this multiplier appears somewhat stable (it has persisted in this range over time, and currently is consistent with the individual estimates of many staff members), and a ~5x difference gives a fair amount of room for SCI to do more good even accounting both for possible errors in our analysis and for differences in organizational efficiency.

A separate argument that we’ve made in the past is that great organizations have upside that goes beyond the value of the specific program they’re implementing. For example, early funding to a great organization may allow it to grow faster and increase the amount of money going to its program globally, either by proving the model or through its own fundraising. GiveDirectly has also shown some propensity for potentially innovative projects.

We think that earlier funding to GiveDirectly had this benefit, but it’s less of a consideration now that GiveDirectly is a more mature organization. We believe this upside exists mainly for what we’ve called “capacity-relevant” funding: funding gaps that we expect will allow an organization to grow in an outsized way in the future, for instance by going into a new country. This is the type of funding need that we consider most valuable when ranking the importance of marginal dollars to each of our top charities.

Bottom line

Our most recent recommendations ranked SCI’s funding gap higher than GiveDirectly’s due to SCI’s cost-effectiveness. We think that SCI is a strong organization overall, despite the issues we’ve noted, and we think that the “upside” for GiveDirectly is limited on the margin, so ultimately our estimated 5x multiplier looks meaningful enough to be determinative.

We remain conflicted about this tradeoff and regularly debate it internally, and we think reasonable donors may disagree about which organization to support.

Why we don’t currently recommend charities focused on vaccine distribution

GiveWell does not recommend any charities focused on vaccine funding and distribution, but we remain excited about vaccinations as a health intervention. The vaccination programs we’ve researched are backed by strong, independent evidence of effectiveness and appear likely to be competitive with our top charities in cost-effectiveness. We’d be excited to support a charity implementing these programs; this post describes why we don’t currently do so.

In brief, we’ve been looking for vaccination giving opportunities over the last few years, but have continued to fail to find them. This is due to (a) lack of room for more funding and (b) UNICEF’s decision not to participate in our review process.

In particular, over the past 1-2 years, we’ve been looking for funding opportunities for measles, meningitis A, and maternal and neonatal tetanus vaccination. Each of these is discussed in greater detail below.

Measles and Rubella Initiative

We have nearly completed an assessment of the evidence and cost-effectiveness for supplementary measles and rubella campaigns. (We’ve summarized our current take below; more detail will be available in our full intervention report, which we hope to publish on our website this year.) These campaigns supplement routine childhood immunization and aim to vaccinate all children – typically those between the ages of 9 months and 14 or 15 years – against measles and rubella. The evidence that such campaigns are effective when targeting children under age 5, for whom the disease is most likely to be fatal, appears to be strong, and the cost-effectiveness (per measles death averted) is competitive with that of our top charities.

However, we don’t believe that the Measles and Rubella Initiative (the primary entity that supports these campaigns) has room for more funding to vaccinate children under age 5. We spoke with M&RI in January 2016, and representatives there told us (p. 4 at that link) that M&RI has a funding gap of approximately $36 million in 2016. Of that $36 million, $31 million would fund a campaign targeting 5-14 year olds in Ethiopia. Vaccinating children age 5-14 would, by reducing the number of people who could contract and transmit the disease, reduce infections in children under 5 and could potentially save lives. We have not pursued this opportunity because our guess is that it would be less cost-effective than our top charities. Gavi, a large alliance that funds vaccinations, has fully funded M&RI’s gap for vaccinating children under 5 in Ethiopia but has not fully filled the 5-14 year-old gap. Though we don’t have information about why Gavi made this funding decision, it is consistent with our impression that filling the 5-14 year-old gap may be less cost-effective than filling the under-5 gap.

We believe the remaining $5 million gap for 2016 is very small compared to M&RI’s total budget. As of November 2015, M&RI estimated (p. 15 at that link) $662.6 million in resource requirements for 2016; note that this figure does not include the large campaign in Ethiopia, which has been carried over from 2015 to 2016. (Details here (p. 35-36) and here (p. 3).)

The $5 million gap beyond the Ethiopia campaign is so small relative to the total that we would guess M&RI could raise funding for it if it represented a pressing need, either from Gavi or another source. (We would guess that it doesn’t, since Gavi funds so much of M&RI’s work that it seems very unlikely that it would leave such a small gap unfilled.)

Meningitis A

It also appears to us that there is no room for more funding in meningitis A vaccinations. We didn’t complete a full assessment of the evidence of effectiveness and cost-effectiveness for meningitis A vaccines because we learned early on in our investigation that there was unlikely to be room for more funding. However, our guess is that this intervention would be competitive with our top charities’ cost-effectiveness. Our meningitis A write-up concludes:

…[I]t seems unlikely that there will be room for more funding to support additional mass campaigns (or related immunization activities) in the meningitis belt in the near future. Gavi, a large funding vehicle for vaccinations, appears to have enough funding to fulfill its commitment to support all such activities in all 26 countries in the meningitis belt.

We’re not aware of any organizations other than Gavi funding meningitis A vaccine programs.

Maternal and neonatal tetanus campaigns

Vaccination campaigns to prevent maternal and neonatal tetanus appear potentially as cost-effective as our top charities. (More details are in our full report on this intervention.)

We have been following UNICEF’s work in this area, the Maternal and Neonatal Tetanus Elimination Initiative, since 2012. UNICEF recently informed us that it was declining to participate in our review process. We plan to write more about our understanding of UNICEF’s decision in the future. Our impression is that UNICEF is the primary funding vehicle for maternal and neonatal tetanus campaigns.

This is another example where we tried but failed to find a way to fund vaccinations. We are not aware of organizations conducting similar work, but would be interested in considering a similar opportunity if we identified one.

Challenges in finding a great Vitamin A charity

Vitamin A supplementation involves giving Vitamin A to children at risk of deficiency to prevent death and other negative health impacts. We’d be interested in supporting a charity to carry out this program, but so far we have not found one we’d like to recommend.

The evidence on the effectiveness of the program raises a number of questions that we’d need a charity to answer, and we haven’t found one that can satisfactorily answer them. This post lays out the state of the evidence regarding Vitamin A, and the questions charities would need to answer to receive our recommendation.

This is a summary of our full report on Vitamin A Supplementation as a charitable program. As with our recent post about water quality interventions, we’re interested in providing more accessible summaries of our research to illustrate the challenges of identifying the most effective charities.

The key points of this post:

  • Vitamin A supplementation has a mixed evidence base that seems to suggest that the program is particularly effective in certain circumstances.
  • Because of the mixed evidence base, we have a set of questions any Vitamin A charity would need to answer before we would be willing to make them one of our top charities.

What is the problem?

Vitamin A deficiency (VAD) is a common, potentially deadly, condition in the developing world. Symptoms include:

  • stunting
  • anemia
  • dry eyes and related eye damage (Vitamin A deficiency is the leading cause of preventable childhood blindness)
  • susceptibility to infection
  • death

Providing children with doses of Vitamin A two to three times per year can combat Vitamin A deficiency, and is typically relatively inexpensive. Doses cost less than a dollar per person per year, although distribution can be more costly.

Does Vitamin A supplementation work?

Studies of Vitamin A supplementation primarily focus on whether giving vitamin A pills to children can reduce their risk of dying.

Many large, randomized controlled trials (which we consider a particularly strong method of evaluating global health programs) have been conducted to determine the impact of Vitamin A supplementation. In these trials, children are randomly chosen to receive the pills or not, and the mortality rates of recipients and non-recipients are compared.

Most of the results look very promising: a Cochrane Collaboration review of seventeen randomized studies, mostly conducted in the 1980s and 1990s, found that Vitamin A supplementation reduces all-cause mortality by 24%.[1]

However, one major study, with four times as many participants as all the studies included in the Cochrane review combined, contradicts these results.

The Deworming and Enhanced Vitamin A Study (DEVTA) was published after the Cochrane review, and did not find a statistically significant effect of giving children Vitamin A pills.[2]

Reconciling DEVTA and earlier trials

What should we make of the fact that 17 trials found a large, significant effect of giving children Vitamin A supplements, while DEVTA found no statistically significant effect? Is Vitamin A still an effective program to support? There are a few possibilities:

  1. The world changed between the time the initial 17 studies were conducted and DEVTA, and Vitamin A supplementation is no longer as effective. Vitamin A supplementation may only be effective in areas with extremely high Vitamin A deficiency or child mortality – if so, worldwide improvements in health may mean Vitamin A supplementation is not as impactful as it once was, on average.
  2. The best guide to impact is averaging the effects of DEVTA and the other 17 trials. If we believe the difference between DEVTA and the earlier trials is largely due to chance, we could take the pooled estimate: a Cochrane meta-analysis that includes DEVTA implies an average effect of roughly a 12% reduction in childhood mortality, about half the size of the effect estimated prior to DEVTA.
  3. DEVTA’s lack of results was due to specific features of DEVTA, and charities conducting work under conditions more similar to the original 17 trials are likely to be highly impactful. Some differences between DEVTA and the original trials include (read more):
      1. DEVTA may have failed to reach enough children. DEVTA reported treating 86% of the children in the study, close to the rate achieved in previous trials, but some researchers have called this number into question, believing the study was not implemented as rigorously as previous trials and that it is implausible to achieve such high coverage at such low cost.
      2. DEVTA may have treated a population with less severe or less prevalent Vitamin A deficiency than in previous trials. However, DEVTA reports that a similar percentage of children had Vitamin A deficiency as in previous trials, although reliable, comparable data on Vitamin A deficiency is scarce.
      3. The population treated by DEVTA may have had better overall health than previously studied populations. Deaths prevented by Vitamin A Supplementation may be due to reduced mortality from diarrhea or measles, so if DEVTA participants were less vulnerable to dying from these diseases than participants in other studies, we would expect Vitamin A to have a smaller effect on mortality. The mortality rate in the control group in DEVTA was lower than control group mortality in 4 of the 5 trials that account for 80% of the weight in the Cochrane review, although the prevalence of diarrhea and measles does not appear to be very different.

We would guess that the best available explanation for the discrepancy between DEVTA and the 17 earlier trials is the lower baseline child mortality rate – and possibly better overall health – among DEVTA participants.
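
A simple way to see why baseline mortality matters: the same proportional reduction in mortality averts fewer deaths per child treated when fewer children die to begin with. The sketch below uses hypothetical mortality rates purely for illustration, not figures from any of these studies.

```python
# Hypothetical illustration: a fixed relative reduction averts fewer deaths per 1,000
# children treated when the baseline mortality rate is lower.
def deaths_averted_per_1000(baseline_deaths_per_1000, relative_reduction):
    return baseline_deaths_per_1000 * relative_reduction

higher_mortality_setting = deaths_averted_per_1000(20, 0.24)  # e.g. earlier trial populations
lower_mortality_setting = deaths_averted_per_1000(5, 0.24)    # e.g. a healthier population

print(higher_mortality_setting, lower_mortality_setting)      # 4.8 vs 1.2 deaths averted per 1,000
```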

Finding a Vitamin A charity to recommend

For this reason, we would be interested in recommending a charity that could demonstrate that conditions in the areas it targets are similar to those of the original 17 studies considered in the Cochrane review. Accordingly, we have a list of questions that a charity would have to answer in order for us to consider it as a top charity.

Our list of questions includes:

  • Is the charity working in an area with high child mortality?
  • To what extent do children in the targeted area suffer from Vitamin A deficiency?
  • Can the charity provide evidence that it successfully reaches the children it targets?

Our current conclusion

Considering all of these factors, we believe that Vitamin A supplementation may be one of the most cost-effective ways to save lives when the program is high quality and delivered in locations with high child mortality rates. Before recommending a charity, we’d need to see compelling evidence to satisfy these concerns about the conditions under which providing children with Vitamin A supplements is likely to be effective. Our concerns have not been satisfactorily addressed for any charity we’ve spoken with so far.

Notes

[1]

95% confidence interval: 17% to 31%

[2]

The study found a 4% reduction in child mortality, but with a 95% confidence interval between a 3% increase in child mortality and an 11% decrease in child mortality, so the study leaves open the possibility that giving children Vitamin A supplements has no effect on their risk of death.