2008 November | The GiveWell Blog
November 27th, 2008

Thanks

We want to thank the people who have invested the most of their own time and/or money to date. We’re lucky to be working on a project that brings out the passion and energy of such great people.

Our donors and GiveWell Pledgers, now including not only former coworkers but also ~60 people with no previous connection to us who have donated or pledged. These “early adopters” are putting big chunks of their charitable budgets behind our research, adding evidence, one donor at a time, to the notion that proving effectiveness can be a fundraising strategy.

Our Board of Directors, which has significantly stepped up its involvement and commitment this year.

Omar K, Gordon S, Tom R, and Ari H, who have been particularly aggressive - and successful - in getting us in front of potential donors and pledgers.

Simon K, Nick B, Brian S, Ron N, and Damian B, our strongest volunteers. They’ve done valuable work for our developing-world research.

Rob S and Phil S, who have taken on “mentor” roles, providing regular feedback on our plans and progress.

Peter Singer, Matthew Bonds, and Molly M, who act as “research advisors”: people with significant on-the-ground aid experience and/or significant knowledge of relevant literature, who provide regular feedback on our ongoing research.

Miriam M, Teddy K, David C, and Jordan of Fresh Milk Design for their feedback on our marketing materials and efforts.

Our friends and family for continuing to put up with us.

November 21st, 2008

We’re hiring

We are looking to hire a Research Analyst to help us collect and analyze data on several hundred international charities, and ultimately identify the ones that can best use donations to change lives. If you believe you are a good fit - or know someone who is - please send a resume to info@givewell.net.

About the role

We are currently conducting an in-depth examination of international aid charities. At first, the Analyst will focus on helping us to collect information from charities’ websites and annual reports about what sorts of programs they run; we will match this information with academic literature on which sorts of programs are highly cost-effective, in order to identify charities with the greatest potential to have a real impact. Over time, the Analyst may become a permanent member of our research team, with broader responsibilities.

This is an entry-level, full-time position. The start date will be on or before Jan. 1. You must have permission to work in the U.S.

No particular experience or skills are required. Instead, we are looking for a quick learner and independent thinker, with genuine passion for our mission and interest in our work.

The Analyst will be our third employee. The first two are located in New York City and Boston. The new employee can work from anywhere as long as s/he is accessible via phone and IM. This role is full-time, but will be terminated within a few months if the fit is not good. It is not a good fit for someone whose primary concern is job security. It is an excellent fit for someone who is genuinely passionate about our vision of a world where charities raise money not just by traditional marketing techniques, but by truly demonstrating their ability to change lives.

November 17th, 2008

Banerjee/Duflo interview

Philanthropy Action (co-maintained by Board member Tim Ogden) has an interview up with Abhijit Banerjee and Esther Duflo, principals of the Poverty Action Lab (one of our favorite groups). This quote (from Esther Duflo) particularly resonated with me:

There is no evaluation yet of the impact of a microfinance loan – we have the first preliminary results ever of the impact of a plain vanilla, group lending microfinance model. That’s it. It is not as if there have been mixed results before now. The studies don’t exist. And that is microfinance, where there are already a hundred economists studying it.

Microfinance fortunately did not go out of fashion before [randomized evaluation] came into fashion, so we have a chance to have a meeting of minds here. But other things did. Take fertilizer subsidies: at first they were fashionable, now they are unfashionable. In the meantime we have not learned what fertilizer subsidies do. They might be good or bad, at this point I don’t know. They’re coming back in fashion, by the way. All of this without a single piece of evidence about whether a subsidy changes the demand for fertilizer. This is just one example of what to me is the biggest mistake, which is doing the same things over and over and over again without learning from the experience, whether it is fertilizer subsidies or microfinance.

I recommend the whole thing.

November 13th, 2008

Finally, a competitor!

This week, The Chronicle of Philanthropy wrote an article about the creation of the Alliance for Effective Social Investing. We wholeheartedly applaud Steve Butz and other members of the Alliance for their efforts, and really hope they succeed. There are too few organizations focusing on the effectiveness of charitable programs, and we’re excited to see their first results.

I’ve briefly looked at the survey they plan to send to nonprofits, and here are some quick thoughts. Ultimately, the survey focuses on procedures and processes rather than on impact and results, and therefore has two problems:

  1. Charities often say they track outcomes even when they don’t. This happened to us consistently last year. (You can view the Round 1 application we sent to international charities, linked on this page along with all the materials we received. In particular, look at the answers to section III of the application.)
  2. There are no specifics about what each organization does or what effects it has. The questions ask abstractly whether each organization tracks its outcomes. Donors need to know what impact they can expect from their donations, not whether a charity has a “process in place” to track outcomes.

[As an aside, I’d really appreciate a tool that simply lists all of an organization’s programs. If there’s one type of information I’d like for all charities, it’s a simple specification of what they do and where they work. This is not currently available anywhere. (Guidestar offers very brief summaries of a charity’s programs off its 990, but nowhere can I see the specifics of each of its activities.) We’re currently working on building such a tool for international aid, with the help of some great volunteers.]
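
For concreteness, here is a minimal sketch (in Python, with entirely hypothetical class and field names) of the kind of per-charity program listing described above - illustrative only, not the actual tool we’re building.

```python
# Minimal sketch of a per-charity program listing: what each organization does and where.
# All class names, field names, and example data are hypothetical.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Program:
    name: str               # short description of the activity
    intervention_type: str  # category used to match against the cost-effectiveness literature
    countries: List[str] = field(default_factory=list)

@dataclass
class Charity:
    name: str
    programs: List[Program] = field(default_factory=list)

    def summary(self) -> str:
        """One line per program: what the charity does and where it works."""
        return "\n".join(
            f"{p.name} ({p.intervention_type}): {', '.join(p.countries) or 'locations unspecified'}"
            for p in self.programs
        )

# Hypothetical example:
example = Charity(
    name="Example International",
    programs=[Program("Bednet distribution", "malaria prevention", ["Zambia", "Malawi"])],
)
print(example.summary())
```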

Finally, I’m concerned that this survey won’t accomplish its most important goal: distinguishing between effective and ineffective organizations. We’ve analyzed the Children’s Scholarship Fund and the Nurse-Family Partnership. Both organizations collect a large amount of data about their clients, and I believe they would each answer the Alliance’s survey identically. Nevertheless, we believe that NFP is running an effective, impactful program and we strongly recommend them; we think that CSF’s strategy is marginally effective (i.e., not making a substantial difference) at best and ineffective at worst.

Any useful evaluation tool has to distinguish between two programs like CSF and NFP. If it doesn’t, it falls short in the most important way.

All of this criticism is offered in the hope of dialogue and improvement. We’re rooting hard for our competitor, and if they want any help or information from us, they’ll get it.

November 12th, 2008

Is volunteering just a show?

To me, the most interesting part of the recent discussion of FORGE (see the last several posts on Tactical Philanthropy) is the disclosure that moving to a more effective model directly caused a loss of revenue, because it lowered volunteer involvement.

In a nutshell, FORGE runs programs for refugee communities; it shifted from having volunteers manage the programs to having the refugees themselves manage them. (More here). I’ll take FORGE at its word that the refugees were easier to manage (it’s plausible to me that they were more plugged into their communities and therefore more effective).

But apparently, the lack of work for volunteers translated directly into a loss of funding, because volunteers doubled as fundraisers. Logically, I’d think that if you were volunteering for a cause you were passionate about, and then you were released in order to make the program more effective, you would now be more excited (not to mention having more time) to raise money from your friends. But that isn’t what happened.

This story matches anecdotes we’ve heard from many people in the nonprofit sector, claiming that volunteers are essentially useless in program terms (i.e., they cost more time to manage than the value they add). I believe that for many charities, using volunteers is a way to get people personally involved with, excited about, and invested in the organization so that they’ll donate and fundraise - the real value added.

I’ve generally found that adding a new person into a work process nearly always costs a lot of time, especially up front, for training and managing. It can be worth it if (a) they’re going to put in enough hours to eventually overcome that cost, or (b) the task they’re working on is extremely well-defined, meaning minimal management. As we get more systematic about our research process, we are able to use volunteers more effectively (and in fact have several working well now, with more slots open); but there have been times in the past when we’ve had far more requests for volunteer work than useful things for people to do. (When this has happened we’ve simply turned away the volunteers - our policy is to take volunteers only when we have good work for them.)
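
To make the break-even point concrete, here is a back-of-the-envelope sketch. The function and all the numbers are hypothetical - it just restates the condition that a volunteer adds net value only once the value of their hours exceeds the staff time spent training and managing them.

```python
# Back-of-the-envelope sketch of the volunteer break-even condition described above.
# All names and numbers are hypothetical illustrations, not real figures.

def volunteer_net_value(volunteer_hours, value_per_volunteer_hour,
                        training_hours, managing_hours_per_week, weeks,
                        staff_value_per_hour):
    """Value the volunteer adds, minus the staff time cost of training and managing them."""
    value_added = volunteer_hours * value_per_volunteer_hour
    staff_cost = (training_hours + managing_hours_per_week * weeks) * staff_value_per_hour
    return value_added - staff_cost

# Hypothetical example: 4 hours/week for 10 weeks on a well-defined task.
print(volunteer_net_value(volunteer_hours=40, value_per_volunteer_hour=15,
                          training_hours=10, managing_hours_per_week=1, weeks=10,
                          staff_value_per_hour=25))  # 600 - 500 = 100: barely worth it
```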

Next time you’re thinking of volunteering for a charity, ask yourself if you’re looking to do good or feel good. If the former, take a hard look at whether what you’re doing is really worth as much to the charity as a donation.

(As a side note on FORGE: I applaud FORGE’s honesty about past mistakes in this area. I agree with Sean’s claim that “in a world with limited transparency, we need to celebrate transparency on its own.” And I even think that there’s some argument to be made for promoting and supporting FORGE just for showing unusual honesty. However, I also agree with Curtis Chang that FORGE hasn’t yet made a good case for its actual impact on people’s lives.)

November 8th, 2008

General questions about international aid

In addition to our charity-specific investigations, we’re looking to review as much literature as possible on the following questions. Note that these were originally posted to our email list, before it went public.

  1. What is the evidence that aid works/has worked at all? That it has caused reductions in infant mortality, economic growth, or anything else?
  2. Has aid worked better in some parts of the world than others? Are there any broad patterns in where and when aid works (as opposed to what interventions)?
  3. Can we expect health aid to create economic growth? Can we expect economic aid to work in areas where health is poor?
  4. Why have some parts of the world emerged from poverty while others haven’t? Is there anything aid can do to make the former more likely? (The earlier questions ask whether aid has accomplished proximate goals like improving health; this one asks what the biggest success stories are and whether there’s any plausible case that aid *could* accelerate them.)
  5. What are the risks of aid causing harm, and what evidence is there for their severity? Possible ways that aid can cause harm include:
    • Overpopulation due to declining mortality
    • Crowding out government aid; encouraging governments to remain corrupt
    • Talent drain: turning all of Africa’s brightest into health/aid workers
    • Economic distortion: outcompeting private farmers and for-profit aid companies with subsidized prices
  6. What is the current allocation of aid across the world? How much of it is going to programs that don’t work or aren’t proven? How much of it is going to programs that appear overfunded?
  7. How can one determine whether an intervention is funded to capacity?

November 3rd, 2008

Discouraging evidence on preschool?

Via Joanne Jacobs: the San Francisco Chronicle reports that Oklahoma and Georgia have seen no improvement in achievement test scores since implementing universal preschool programs. It also refers to a discouraging-sounding large-scale study of Tennessee’s preschool program, although it doesn’t give a specific citation (and I can’t find one online).

A couple things to keep in mind:

  • All of the discouraging results cited here refer to achievement test scores. Possible impacts on mental health, later life outcomes, etc. are not discussed.
  • The Tennessee finding is reported as excluding “at-risk kids.” We’ve always thought it very possible that early childhood care is most beneficial to at-risk children, and indeed that the gains for such children may account for all of the observed effects.

Our existing position on large-scale preschool programs is that no strong evidence exists for their effectiveness. The programs discussed here are unusually high-intensity for large-scale efforts, so the findings do call into question whether replicating the encouraging results of model programs is even theoretically possible.

Note that none of this discussion pertains directly to our current top charity in early childhood care, the Nurse-Family Partnership (our review here).

November 3rd, 2008

Malaria and lymphatic filariasis

Malaria is one of sub-Saharan Africa’s biggest killers; lymphatic filariasis is one of its most debilitating diseases. Malaria Matters reports on new efforts to combat both at once, since both are mosquito-transmitted. We’re surprised that there isn’t a longer history of such efforts.