The Future of Humanity Institute (FHI) is an interdisciplinary research centre focused on predicting and preventing large-scale risks to human civilization. It was founded in 2005 as part of the Faculty of Philosophy and the Oxford Martin School at the University of Oxford, England, United Kingdom. Its director is philosopher Nick Bostrom, and its research staff and associates include futurist Anders Sandberg, engineer K. Eric Drexler, economist Robin Hanson, and Giving What We Can founder Toby Ord.
The Institute's stated objective is to develop and utilize scientific and philosophical methods for reasoning about topics of fundamental importance to humanity, such as the effect of future technology on the human condition and the possibility of global catastrophes. It engages in a mix of academic and outreach activities, seeking to promote informed discussion and public engagement in government, businesses, universities, and other organizations.
Nick Bostrom established the Institute in November 2005 as part of the Oxford Martin School, then known as the James Martin 21st Century School. Between 2008 and 2010, FHI hosted the Global Catastrophic Risks conference, wrote 22 academic journal articles, and published 34 chapters in academic volumes. FHI researchers have been mentioned over 5,000 times in the media and have given policy advice at the World Economic Forum, to the private and non-profit sectors (such as the MacArthur Foundation and the World Health Organization), and to governmental bodies in Sweden, Singapore, Belgium, the United Kingdom, and the United States. Bostrom and bioethicist Julian Savulescu also published the book Human Enhancement in March 2009. More recently, FHI has focused on the dangers of advanced artificial intelligence (AI). In 2014, its researchers published several books on AI risk, including Stuart Armstrong's Smarter Than Us and Bostrom's Superintelligence: Paths, Dangers, Strategies.
A global catastrophic risk is a hypothetical future event with the potential to seriously damage human well-being on a global scale. Such an event could destroy or cripple modern civilization. Any event severe enough to cause human extinction is known as an existential risk.
Potential global catastrophic risks include but are not limited to hostile artificial intelligence, nanotechnology weapons, climate change, nuclear warfare, and pandemics.
Human extinction is difficult to study directly, since it has never occurred. While this does not mean it will not occur in the future, it does make modelling existential risks difficult, due in part to survivorship bias.
Bostrom classifies risks according to their scope and intensity. He considers risks that are at least "global" in scope and "endurable" in intensity to be global catastrophic risks. Those that are at least "trans-generational" (affecting all future generations) in scope and "terminal" in intensity are classified as existential risks. While a global catastrophic risk may kill the vast majority of life on Earth, humanity could still potentially recover. An existential risk, by contrast, is one that either destroys humanity entirely or prevents any chance of civilization recovering. Bostrom considers existential risks to be far more significant.
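As a rough illustration of this two-dimensional classification, the sketch below encodes the scope and intensity orderings described above. The ordered category lists, the function name `classify`, and any labels not mentioned in the text (such as "personal", "local", and "imperceptible") are assumptions made for illustration, not Bostrom's formal definitions.

```python
# Illustrative sketch of the scope/intensity classification described above.
# The orderings and the extra category labels are assumptions for illustration.

SCOPES = ["personal", "local", "global", "trans-generational"]   # narrow -> broad
INTENSITIES = ["imperceptible", "endurable", "terminal"]          # mild -> severe


def classify(scope: str, intensity: str) -> str:
    """Label a hypothetical risk using the two dimensions described in the text."""
    s = SCOPES.index(scope)
    i = INTENSITIES.index(intensity)
    # At least trans-generational in scope and terminal in intensity -> existential.
    if s >= SCOPES.index("trans-generational") and i >= INTENSITIES.index("terminal"):
        return "existential risk"
    # At least global in scope and endurable in intensity -> global catastrophic.
    if s >= SCOPES.index("global") and i >= INTENSITIES.index("endurable"):
        return "global catastrophic risk"
    return "not a global catastrophic risk"


print(classify("global", "endurable"))             # global catastrophic risk
print(classify("trans-generational", "terminal"))  # existential risk
```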