Future of Humanity Institute

FHI is a multidisciplinary research institute at the University of Oxford. Academics at FHI bring the tools of mathematics, philosophy and social sciences to bear on big-picture questions about humanity and its prospects. The Institute is led by Founding Director Professor Nick Bostrom.

The Windfall Clause: Distributing the Benefits of AI

Over the long run, technology has improved the human condition. Nevertheless, economic progress from technological innovation has not arrived equitably or smoothly. While innovation often produces great wealth, it has also often been disruptive to labor, society, and world order. In light of ongoing advances in artificial intelligence (“AI”), we should prepare for the […]

The Vulnerable World Hypothesis

Scientific and technological progress might change people’s capabilities or incentives in ways that would destabilize civilization. This paper introduces the concept of a vulnerable world: roughly, one in which there is some level of technological development at which civilization almost certainly gets devastated by default, i.e. unless it has exited the “semi-anarchic default condition”.

Reframing Superintelligence: Comprehensive AI Services as General Intelligence

Abstract: Studies of superintelligent-level systems have typically posited AI functionality that plays the role of a mind in a rational utility-directed agent, and hence employ an abstraction initially developed as an idealized model of human decision makers. Today, developments in AI technology highlight intelligent systems that are quite unlike minds, and provide a […]

Quarterly Update Winter 2020

This is the FHI quarterly update for January to March 2020. It was an exceptional quarter due to the challenges and restrictions created by the COVID-19 pandemic. We implemented enhanced precautions a little while before the rest of the University and the UK as a whole kicked into action. We’re currently operating entirely online, with […]

Why we need worst-case thinking to prevent pandemics

A ‘long read’ piece in The Guardian newspaper today examines Covid-19 in the context of earlier pandemics, and reflects on how to manage the existential risk posed by bio-technology today. It is an edited extract from Toby Ord’s new book The Precipice. Read the piece on The Guardian. Toby is also cited in a piece […]

AI Alignment Visiting Fellowship

The Future of Humanity Institute is now opening applications for our AI Alignment Visiting Fellowship. This fellowship allows individuals to visit us for three or more months to pursue research related to the theory or design of human-aligned AI. It is supervised largely by Michael Cohen, Stuart Armstrong and Ryan Carey […]