Scientific Research

Our ‘Second Chance’ Program for NIH Transformative Research Applicants

As part of getting started in science funding, we’ve explored several different methods of finding high-impact giving opportunities, including scanning published research, networking in fields of interest, and considering proposals sent to us by people we know. We recently announced four grants totaling $10.8 million that represent another approach: piggybacking on a government grant program designed to find transformative research.

The approach, in brief:

  • The National Institutes of Health has a program specifically for higher-risk, high-impact research.
  • The NIH has been able to fund only a small portion of proposals received through that program. Some projects considered worthy by peer review were ultimately rejected.
  • The NIH sent out a notice on our behalf to all unfunded 2016 applicants, and more than half re-submitted their applications to us. We received 120 proposals in three weeks.
  • We viewed this RFP as a way both to identify high-risk, high-reward projects and to test our hypothesis that high-risk, high-reward research is underfunded in general.

Update on Investigating Neglected Goals in Biological Research

We divide our scientific research funding into two categories: neglected goals and basic research. We believe that some research areas are underfunded because achieving the relevant research objectives is underrated by the “broad market” (according to our values). We call such research objectives “neglected goals.”

In 2014, we set a goal to be in a position to identify focus areas in science by the end of 2016. This post explains our initial plan for this work, our original hopes and expectations, what we have done so far, and our plans for work in this area going forward. In brief:

  • Our initial plan was to identify focus areas using a series of shallow and medium-depth investigations, analogous to the process we used to identify focus areas in U.S. policy and global catastrophic risks.
  • We found that our investigations took longer than expected and we felt that they gave us an inadequate basis to declare focus areas and hire specialist program staff to lead our work in those areas. Moreover, we could not envision investigations with acceptable time costs that would form an adequate basis for making such decisions.
  • However, our investigations did, in multiple cases, result in our science advisors’ identifying “standout” giving opportunities: giving opportunities that seemed unusually promising by the standards of the field they were investigating, and strong compared to giving opportunities we’ve seen generally.
  • We decided to pivot to a model in which generalist scientific advisors are given a broad mandate to opportunistically identify standout giving opportunities within about a dozen areas. Rather than investigating each area in depth and choosing a few as focus areas, they investigate one at a time, looking primarily for standout opportunities, and choose which area to investigate based on their subjective estimate of the odds of finding standout opportunities. We’re very excited by the giving opportunities that the science team is finding under this model, and it’s unclear whether it would have been better to use our previous model and hire staff specializing in just a couple of program areas.
  • A spreadsheet summarizing our list of priorities and cause-specific progress so far (listed in alphabetical order) is here.

We are likely to give a separate, shorter update on basic research in the future.

Differential Technological Development: Some Early Thinking

Note: this post aims to help a particular subset of our audience understand the assumptions behind our work on science philanthropy and global catastrophic risks. Throughout, “we” refers to positions taken by the Open Philanthropy Project as an entity rather than to a consensus of all staff.

Two priorities for the Open Philanthropy Project are our work on science philanthropy and global catastrophic risks. These interests are related because—in addition to greatly advancing civilization’s wealth and prosperity—advances in certain areas of science and technology may be key to exacerbating or addressing what we believe are the largest global catastrophic risks. (For detail on the idea that advances in technology could be a driver, see “ ‘Natural’ GCRs appear to be less harmful in expectation” in this post.) For example, nuclear engineering created the possibility of nuclear war, but also provided a source of energy that does not depend on fossil fuels, making it a potential tool in the fight against climate change. Similarly, future advances in bioengineering, genetic engineering, geoengineering, computer science (including artificial intelligence), nanotechnology, neuroscience, and robotics could have the potential to affect the long-term future of humanity in both positive and negative ways.

Therefore, we’ve been considering the possible consequences of advancing the pace of development of various individual areas of science and technology in order to have more informed opinions about which might be especially promising to speed up and which might create additional risks if accelerated. Following Nick Bostrom, we call this topic “differential technological development.” We believe that our views on this topic will inform our priorities in scientific research, and to a lesser extent, global catastrophic risks. We believe our ability to predict and plan for future factors such as these is highly limited, and we generally favor a default presumption that economic and technological development is positive, but we also think it’s worth putting some effort into understanding the interplay between scientific progress and global catastrophic risks in case any considerations seem strong enough to influence our priorities.

The first question our investigation of differential technological development looked into was the effect of speeding progress toward advanced AI on global catastrophic risk. This post gives our initial take on that question. One idea we sometimes hear is that it would be harmful to speed up the development of artificial intelligence because not enough work has been done to ensure that when very advanced artificial intelligence is created, it will be safe. This problem, it is argued, would be even worse if progress in the field accelerated. However, very advanced artificial intelligence could be a useful tool for overcoming other potential global catastrophic risks. If it comes sooner—and the world manages to avoid the risks that it poses directly—the world will spend less time at risk from these other factors.

Curious about how to compare these two factors, I tried looking at a simple model of the implications of a survey of participants at a 2008 conference on global catastrophic risk organized by the Future of Humanity Institute at Oxford University. I found that speeding up advanced artificial intelligence—according to my simple interpretation of these survey results—could easily result in reduced net exposure to the most extreme global catastrophic risks (e.g., those that could cause human extinction), and that what one believes on this topic is highly sensitive to some very difficult-to-estimate parameters (so that other estimates of those parameters could yield the opposite conclusion). This conclusion seems to be in tension with the view that speeding up artificial intelligence research would increase risk of human extinction on net, so I decided to write up this finding, both to get reactions and to illustrate the general kind of work we’re doing to think through the issue of differential technological development.

Below, I:

  • Describe our simplified model of the consequences of speeding up the development of advanced AI on the risk of human extinction using a survey of participants at a 2008 conference on global catastrophic risk organized by the Future of Humanity Institute at Oxford University.
  • Explain why, in this model, the effect of faster progress in artificial intelligence on the risk of human extinction is very unclear.
  • Describe several of the model’s many limitations, illustrating the challenges involved with this kind of analysis.
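To make the structure of this kind of model concrete, here is a toy numerical sketch. It is not the model described in this post, and the parameter values are hypothetical placeholders rather than figures from the 2008 survey; it only illustrates the basic trade-off: arriving at advanced AI sooner shortens the period of exposure to other risks, but may carry a higher direct risk if less safety work has been done by then.

```python
# Toy sketch only: parameter values below are illustrative placeholders,
# not estimates from the 2008 FHI survey or from Open Philanthropy.

def extinction_risk(ai_arrival_year, ai_risk, other_risk_per_year, start_year=2025):
    """Crude net extinction risk: risk from non-AI catastrophes accumulates each
    year until advanced AI arrives; AI itself carries a one-time risk, after
    which (in this toy model) the remaining risks are assumed to be mitigated."""
    years_exposed = max(ai_arrival_year - start_year, 0)
    p_other = 1 - (1 - other_risk_per_year) ** years_exposed
    return p_other + (1 - p_other) * ai_risk

# Hypothetical scenarios: accelerating AI moves arrival 20 years earlier but
# raises the direct AI risk (less time for safety work before arrival).
baseline    = extinction_risk(ai_arrival_year=2080, ai_risk=0.05, other_risk_per_year=0.001)
accelerated = extinction_risk(ai_arrival_year=2060, ai_risk=0.07, other_risk_per_year=0.001)

print(f"Baseline net extinction risk:    {baseline:.3f}")
print(f"Accelerated net extinction risk: {accelerated:.3f}")
```

With these placeholder numbers the two scenarios come out close to even, and small changes to the assumed per-year risk or to the AI risk penalty flip the comparison, which is the kind of parameter sensitivity described above.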

We are working on developing a broader understanding of this set of issues, as they apply to the areas of science and technology described above, and as they relate to the global catastrophic risks we focus on.

Our Updated Agenda for Science Philanthropy

Note: Before the launch of the Open Philanthropy Project Blog, this post appeared on the GiveWell Blog. Uses of “we” and “our” in the below post may refer to the Open Philanthropy Project or to GiveWell as an organization. Additional comments may be available at the original post.

We’re hoping to set the Open Philanthropy Project’s initial priorities within scientific research this year. That means being in a place roughly comparable to where we currently are on U.S. policy and global catastrophic risks: having a ranked list of focus areas and goals for hiring and grantmaking.

Science Policy and Infrastructure

Note: Before the launch of the Open Philanthropy Project Blog, this post appeared on the GiveWell Blog. Uses of “we” and “our” in the below post may refer to the Open Philanthropy Project or to GiveWell as an organization. Additional comments may be available at the original post.

We’ve tried to approach scientific research funding, focusing initially on life sciences, by looking for gaps and deficiencies in the current system for supporting scientific research. We’ve identified several possibilities, including a set of systematic issues that make it difficult to support attempts at breakthrough fundamental science.
