Toby Ord and CSER’s Seán Ó hÉigeartaigh have released a joint statement on the UK’s newly released National AI Strategy: We are pleased to see the UK Government set out its National AI Strategy. AI is a transformative technology, with the potential to bring significant benefits to the UK. It is encouraging to see issues such […]
New paper from Robin Hanson, Daniel Martin, Calvin McCarter, & Jonathan Paulson. Forthcoming in The Astrophysical Journal, preprint on arXiv. Abstract: If life on Earth had to achieve n “hard steps” to reach humanity’s level, then the chance of this event rose as time to the n-th power. Integrating this over habitable star formation and planet […]
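A quick way to see where the power law comes from (a standard hard-steps calculation, sketched here for orientation rather than quoted from the paper): if step $i$ is a rare Poisson event with rate $\lambda_i$, and every $\lambda_i t \ll 1$, then the chance that all $n$ steps occur in sequence by time $t$ is

$$\Pr[\text{all } n \text{ steps done by } t] \;\approx\; \Big(\prod_{i=1}^{n}\lambda_i\Big)\,\frac{t^{\,n}}{n!} \;\propto\; t^{\,n},$$

so the probability of reaching humanity’s level rises as the $n$-th power of elapsed time, whatever the individual step difficulties.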
Learned, quasilinguistic neural representations (QNRs) that upgrade words to embeddings and syntax to graphs can provide a semantic medium that is both more expressive and more computationally tractable than natural language, a medium able to support formal and informal reasoning, human and inter-agent communication, and the development of scalable quasilinguistic corpora with characteristics of both literatures and associative memory. […]
The UK government must act now to ensure the UK is prepared for future extreme risks even greater than Covid-19, a new report finds. Disease warfare, leaks of dangerous pathogens from high-security labs, and misuse of artificial intelligence (AI) systems are identified as key threats to the UK and the wider world. The report, Future […]
Jan Brauner (DPhil Scholar) and Mrinank Sharma (DPhil Affiliate) published Inferring the effectiveness of government interventions against COVID-19 in Science. The paper currently has the 7th highest attention score (measured by Altmetric) of all Science publications.
In late 2020, a working group was established to use causal models to study incentive concepts and their application to AI safety. It was founded by Ryan Carey (FHI) and Tom Everitt (DeepMind) and includes other researchers from Oxford and the University of Toronto. Since its formation, the working group has published four […]
The UN’s latest Human Development Report features a 7-page piece by Toby Ord on existential risk and the protection of humanity’s longterm potential. Toby said of the piece: I’ve long admired the HDR, and referred to it often when I worked on global poverty. So it’s great to be able to give something back, and […]
The Future of Humanity Institute’s Research Scholars Programme is hiring a second cohort of research scholars, likely to start in Spring 2021. It is a selective, two-year research programme, with lots of latitude for exploration as well as significant training and support elements. We will offer around eight salaried positions to early-career researchers who aim […]
Next steps for departing research scholars: FHI is delighted to announce where the first scholars to graduate from the Research Scholars Programme are going next. Of the ten scholars who are leaving this autumn, two do not wish to make their plans public yet, and eight have plans they’re happy to have mentioned publicly: Ashwin […]
A team led by FHI researchers has released a large empirical study of the effects and perceived social burden of nonpharmaceutical interventions (NPIs) against COVID-19 transmission. To our knowledge, it is the largest data-driven study of NPI effectiveness to date. What did we do? We collected chronological data on 9 NPIs in 41 countries […]
A ‘long read’ piece in The Guardian newspaper today examines Covid-19 in the context of earlier pandemics, and reflects on how to manage the existential risk posed by bio-technology today. It is an edited extract from Toby Ord’s new book The Precipice. Read the piece on The Guardian. Toby is also cited in a piece […]
Applications for the 2020 Summer Research Fellowship have now closed. Impact of Covid-19 on the Summer Research Fellowship Timing: we confirm that the fellowship will take place in the 6 weeks from July 13th. In exceptional circumstances we may take applicants who are only available for some of that time period. Location: we have decided […]
Toby Ord’s new book The Precipice is about existential risk and the future of humanity. Humanity stands at a precipice. We live at a time of unprecedented innovation. Technology is accelerating faster than at any point in history, granting us ever greater power, and creating ever greater risk. In the twentieth century, we developed the […]
[Applications have now closed for this post.] Applications are invited for a Research Assistant for Professor Nick Bostrom, Director of FHI. We are looking for a general-purpose research assistant willing to conduct research on diverse topics meaningful to the work of the Director. We are able to sponsor a visa for applicants who do not […]
[Applications have now closed for this post.] Applications are invited for a Website and Media Outreach Manager. The successful candidate will be responsible for maintaining FHI’s website, managing social media outreach, and providing design support. The duties of this post are expected to evolve and change in response to the rapid advance of software technology and […]
THESE POSITIONS ARE NOW CLOSED The Future of Humanity Institute is opening several research positions to hire researchers who specialise either in one of our most visible current areas of research (Macrostrategy, Technical AI safety, Center for the Governance of AI, and Biosecurity), or in areas where we are looking to build capacity (mentioned below). As FHI grows in […]
THIS POSITION IS NOW CLOSED The Future of Humanity Institute’s Research Scholars Programme is hiring a Project Coordinator to manage day-to-day operations as the programme scales, and take a lead on side projects. About the Research Scholars Programme The Research Scholars Programme (RSP) was launched in October 2018. The programme employs a small number of […]
THIS POSITION IS NOW CLOSED Applications are invited for a high impact Project Manager for FHI’s Macrostrategy research group – senior researchers who investigate which crucial considerations are shaping what is at stake for the future of humanity. Macrostrategy research is developing better intellectual tools for analyzing the connections between current actions and long-term outcomes, and […]
The Future of Humanity Institute is delighted to open applications to its new DPhil Scholarships programme. The scholarships are open to incoming Oxford DPhil students whose research seeks to improve the long-term prospects of humanity, and offer full funding as well as office space within FHI. See here for more information about the programme and […]
THESE POSITIONS ARE NOW CLOSED Executive Assistant We will be hiring two Executive Assistants for the Future of Humanity Institute (FHI), a multidisciplinary research centre. We are looking to recruit one Executive Assistant to support the Director of the Institute, Professor Nick Bostrom, and a second Executive Assistant to support Senior Researchers at the Institute […]
THIS POSITION IS NOW CLOSED The Future of Humanity Institute’s Research Scholars Programme is excited to open applications for its 2019 cohort. RSP is a selective, two-year research programme, with lots of latitude for exploration as well as significant training and support elements. We will offer roughly six to ten salaried positions to early-career researchers who […]
A report published by the Center for the Governance of AI (GovAI), housed in the Future of Humanity Institute, surveys Americans’ attitudes on artificial intelligence. The impact of artificial intelligence technology on society is likely to be large. While the technology industry and governments currently predominate policy conversations on AI, the authors expect the public […]
Reframing Superintelligence. Abstract: Studies of superintelligent-level systems have typically posited AI functionality that plays the role of a mind in a rational utility-directed agent, and hence employ an abstraction initially developed as an idealized model of human decision makers. Today, developments in AI technology highlight intelligent systems that are quite unlike minds, and provide a […]
From 7-9 November 2018, 42 senior policy leaders and scientific and technical experts in science, engineering, bio-defence and bio-security, science policy, public health, infectious diseases, and catastrophic risks gathered at Wilton Park to consider powerful actor, high impact bio-threats. The initial report of the meeting is available here. This meeting was organised in partnership between Wilton Park, […]
THIS POSITION IS NOW CLOSED FHI is excited to invite applications for a full-time Website and Communications officer. The post is fixed-term for 24 months from the date of appointment. The role holder will be responsible for developing and implementing a communications strategy for all activities of the institute. They will develop and maintain FHI’s […]
A recent FHI project investigated whether AI systems can predict human deliberative judgments. Today’s AI systems are good at imitating quick, “intuitive” human judgments in areas including vision, speech recognition, and sentiment analysis. Yet some important decisions can’t be made quickly. They require careful thinking, research, and analysis. For example, a judge should not decide […]
A recent paper by FHI researcher Stuart Armstrong and former intern Soren Mindermann (now at the Vector Institute) has been accepted at NeurIPS 2018. The paper, Impossibility of deducing preferences and rationality from human policy, considers the scenario in which an AI system learns the values and biases of a human agent concurrently. This extends an existing […]
The applications for this position are now closed. The FHI is extremely excited to announce applications are now open for the position of a full-time Head of Operations. The Head of Operations will play a leadership and co-ordinating role for FHI’s operations. Reporting to the Director of the Future of Humanity Institute, the person will […]
Oxford University’s Future of Humanity Institute (FHI) is pleased to announce a contribution of up to £13.3 million from the Open Philanthropy Project. The donation, which includes a £6 million up-front commitment with the rest contingent on hiring, is the largest in the Faculty of Philosophy’s history. It will support FHI in its mission of […]
THIS POSITION IS NOW CLOSED FHI is excited to invite applications for a full time Senior Administrator to work with the Faculty of Philosophy, University of Oxford, with responsibility for overseeing the effective and efficient day-to-day non-academic management and administration of two of the Faculty’s research centres, the Future of Humanity Institute (FHI) and […]
THIS POSITION IS NOW CLOSED FHI is excited to invite applications for a full-time Research Fellow within the Future of Humanity Institute (FHI) at the University of Oxford. The post is fixed-term for 24 months from the date of appointment. You will be responsible for conducting technical research in AI Safety. You can find […]
THIS POSITION IS NOW CLOSED Applications are invited for a full-time Research Fellow within the Future of Humanity Institute (FHI) at the University of Oxford. This is a fixed-term post for 24 months from the date of appointment, located at the FHI offices in the beautiful city of Oxford. Reporting to the Director of Research […]
Ought and FHI’s AI Safety Group are collecting data on how people come to judgments over time. Take part and play games about: (A) Fermi arithmetic problems (B) Fact-checking political statements (decide if a statement is “fake news”) (C) Deciding how much you like a Machine Learning paper You’ll get feedback on your progress over […]
From the entire FHI team, we wish you all a great start to 2018! In this post, we would like to provide you with a summary of the FHI highlights over the last quarter of 2017. Highlights: FHI launches the Governance of AI Program. FHI is delighted to announce the formation of the Governance of AI […]
APPLICATIONS ARE NOW CLOSED FHI is excited to announce applications are now open for the position of full-time Executive Assistant to Professor Nick Bostrom, Director of the Future of Humanity Institute at the University of Oxford. This post is a core part of FHI’s operations team. By freeing Prof. Bostrom’s schedule, prioritising items for his […]
APPLICATIONS ARE NOW CLOSED FHI is excited to invite applications for a full-time Administrative Assistant within the Future of Humanity Institute (FHI) at the University of Oxford. The post is fixed-term for 12 months from the date of appointment. You will be responsible for providing broad secretarial and general office support to administrative and research […]
APPLICATIONS ARE NOW CLOSED FHI is excited to invite applications for a full-time Post-Doctoral Research Scientist within the Future of Humanity Institute (FHI) at the University of Oxford. The post is fixed-term for 24 months from the date of appointment. You will advance the field of AI safety by conducting technical research. You can find […]
Nick Bostrom, Miles Brundage and Allan Dafoe are advising the UK government on issues concerning developments in artificial intelligence. Miles Brundage presented evidence on 11 September on the topic ‘Governance, social and organisational perspective for AI’ (evidence meeting 5), looking at AI and cultural systems and new forms of organisational structure. On 10 October, […]
In the third quarter of 2017, FHI staff have continued their work in the institute’s four focus areas: AI Safety, AI Strategy, Biorisk, and Macrostrategy. Below, we outline some of our key outputs over the last quarter, current vacancies, and details on what our researchers have recently been working on. This quarter, we are saying […]
Three papers by FHI researchers in the area of biosecurity are forthcoming in the latest issue of Health Security.
How can AI systems learn safely in the real world? Self-driving cars have safety drivers, people who sit in the driver’s seat and constantly monitor the road, ready to take control if an accident looks imminent. Could reinforcement learning systems also learn safely by having a human overseer?
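As a concrete (and entirely toy) illustration of the idea, here is a minimal sketch under assumptions of my own rather than FHI’s actual setup: a tabular Q-learner on a short line must reach a goal while never falling off a “cliff”, and a hand-coded rule stands in for the human overseer, blocking catastrophic actions and converting them into training penalties. The point is that the learner is steered away from the catastrophe without ever experiencing it, which is exactly the role a safety driver plays.

```python
import random

# Toy setting (my own construction, not FHI's system): an agent on the
# line 0..9 must reach the goal at 9; position 0 is a "cliff" whose cost
# we never want to incur, even during training. An overseer screens each
# proposed action, blocks moves into the cliff, and substitutes a penalty.

GOAL, CLIFF, N = 9, 0, 10
ACTIONS = (-1, +1)

def overseer_blocks(pos, move):
    # Stand-in for a human overseer watching the agent act.
    return pos + move == CLIFF

def train(episodes=500, eps=0.1, alpha=0.5, gamma=0.95, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
    for _ in range(episodes):
        pos = 1
        while pos != GOAL:
            move = (rng.choice(ACTIONS) if rng.random() < eps
                    else max(ACTIONS, key=lambda a: q[(pos, a)]))
            if overseer_blocks(pos, move):
                # Blocked: the agent stays put, and the proposed action
                # is trained against with a negative reward.
                q[(pos, move)] += alpha * (-1.0 - q[(pos, move)])
                continue
            nxt = pos + move
            reward = 1.0 if nxt == GOAL else -0.01
            best_next = 0.0 if nxt == GOAL else max(q[(nxt, a)] for a in ACTIONS)
            q[(pos, move)] += alpha * (reward + gamma * best_next - q[(pos, move)])
            pos = nxt
    return q

q = train()
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(1, GOAL)})
```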
The Future of Humanity Institute is seeking a Senior Research Fellow on AI Macrostrategy, to identify crucial considerations for improving humanity’s long-run potential. We are looking for a polymath with an academic background related to economics, mathematics, physical sciences, computer science, philosophy, political science, or international governance, who has both outstanding analytical ability and a […]
Nick Bostrom delivered a talk at Group of Thirty (G30) in London on the 10th of June. Professor Bostrom spoke alongside DeepMind Co-Founder and CEO Demis Hassabis about Machine Learning and the AI Horizon.
At the World Intelligence Congress on the 1st of July in the city of Tianjin in China, Professor Bostrom spoke about the future of machine intelligence over the coming decades. Other speakers included leaders in Chinese tech and academia such as Jack Ma, Robin Li, and Bo Zhang. This follows a recent talk he gave to leading […]
Key outputs and activities from the second quarter of 2017 at FHI.
APPLICATIONS FOR THIS POSITION ARE NOW CLOSED. Please see our Jobs Page for information on current vacancies at FHI. — Applications are invited for a full time Research Assistant within the Future of Humanity Institute (FHI) at the University of Oxford. The post is fixed-term for 6 months from the date of appointment. Reporting to […]
The Future of Humanity Institute (FHI) will be joining the Partnership on AI, a non-profit organisation founded by Amazon, Apple, Google/DeepMind, Facebook, IBM, and Microsoft, with the goal of formulating best practices for socially beneficial AI development. We will be joining the Partnership alongside technology firms like Sony as well as third sector groups like […]
FHI’s Owain Evans, in collaboration with Andreas Stuhlmüller, John Salvatier, and Daniel Filan, has released an online book describing and implementing models of rational agents for (PO)MDPs and Reinforcement Learning. The book aims to educate its readers on the creation of richer models of human planning, capturing human biases and bounded rationality. The book uses […]
Key outputs and activities from the first quarter of 2017 at FHI.
The Future of Humanity Institute at the University of Oxford seeks interns to contribute to our work in the area of technical AI safety. Examples of this type of work include Cooperative Reinforcement Learning, Learning the Preferences of Ignorant, Inconsistent Agents, Learning the Preferences of Bounded Agents, and Safely Interruptible Agents.
The Open Philanthropy Project recently announced a grant of £1,620,452 to the Future of Humanity Institute (FHI) to provide general support as well as a grant of £88,922 to allow us to hire Piers Millett to lead our work on biosecurity. Most of the larger grant adds unrestricted funding to FHI’s reserves, which will […]
On the 19th and 20th of February, FHI hosted a workshop on the potential risks posed by the malicious misuse of emerging technologies in machine learning and artificial intelligence. The workshop, co-chaired by Miles Brundage at FHI and Shahar Avin of the Centre for the Study of Existential Risk, invited experts in cybersecurity, AI governance, […]
On the 10th February, the Future of Humanity Institute (FHI) hosted the Normative Uncertainty Workshop.
AI researchers gathered at Asilomar from the 3rd-8th of January 2017 for a conference on Beneficial Artificial Intelligence organised by the Future of Life Institute. Nick Bostrom spoke about his recent research on the interaction between AI control problems and governance strategy within AI risk, and the role of openness (slides/video). Bostrom and co-authors have […]
Future of Humanity Institute Annual Review 2016 [pdf] In 2016, we continued our mission of helping the world think more systematically about how to craft a better future. We advanced our core research areas of macrostrategy, technical artificial intelligence (AI) safety, AI strategy, and biotechnology safety. The Future of Humanity Institute (FHI) has grown by one third, […]
The workshop explored the potential technical overlap between AI Safety and blockchain technologies and the possibilities for using blockchain, crypto-economics, and cryptocurrencies to facilitate greater global coordination. Key topics of discussion were the coordination of political actors, AI strategy and policy, blockchain frontiers and trends, prediction markets, coordination failures, and the potential impact of blockchain on governance. Attendees included: Vitalik Buterin, the inventor of Ethereum; Jaan Tallinn, a founding engineer of Skype and Kazaa; and Wei Dai, the creator of b-money and Crypto++.
Machine superintelligence could plausibly be developed in the coming decades or century. The prospect of this transformative development presents a host of political challenges and opportunities.
FHI Research Associate Allan Dafoe and Stuart Russell have published “Yes, We Are Worried About the Existential Risk of Artificial Intelligence” in the MIT Technology Review as a response to an article by Oren Etzioni.
At the start of November FHI researchers Piers Millett and Eric Drexler participated in a one day biological engineering horizon scanning workshop hosted by the Centre for the Study of Existential Risk (CSER). The workshop was the culmination of a process that ran for several months, in which experts in the biosciences, biotechnology, biosecurity, and bioethics, as well as existential and global catastrophic risks, identified recent developments likely to have the greatest impact on our societies in the short to medium term.
The Future of Humanity Institute is delighted to announce the hiring of our first policy specialist on biotechnology, Piers Millett. Dr. Millett is the former Acting Head of the Implementation Support Unit, UN Biological Weapons Convention.
On the 19th October, the Future of Humanity Institute (FHI) organised a workshop and public talk on ‘The Age of Em: Work, Love, and Life when Robots Rule the Earth’ with Research Associate Professor Robin Hanson.
The UK House of Commons Science and Technology Committee have released a report concluding their recent inquiry on robotics and artificial intelligence. The report cites oral evidence given by FHI researcher Dr. Owen Cotton-Barratt, and discusses the work of FHI researcher Dr. Stuart Armstrong.
Wired magazine has published a long interview between MIT’s Joi Ito, Wired’s Scott Dadich, and US President Barack Obama. The interview, titled “Barack Obama, neural nets, self-driving cars, and the future of the world”, discusses a range of topics including Prof. Nick Bostrom’s work on superintelligence.
We introduce exploration potential, a quantity that measures how much a reinforcement learning agent has explored its environment class. In contrast to information gain, exploration potential takes the problem’s reward structure into account. This leads to an exploration criterion that is both necessary and sufficient for asymptotic optimality (learning to act optimally across the entire environment class). Our experiments in multi-armed bandits use exploration potential to illustrate how different algorithms make the tradeoff between exploration and exploitation.
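For orientation, here is a minimal multi-armed bandit sketch of the exploration/exploitation tradeoff the abstract refers to, using plain epsilon-greedy rather than the paper’s exploration potential (which is defined in the paper itself and not reproduced here):

```python
import numpy as np

def run_bandit(epsilon, true_means, horizon=5000, seed=0):
    """Epsilon-greedy on a Bernoulli bandit: a standard illustration of
    the exploration/exploitation tradeoff, not the paper's measure."""
    rng = np.random.default_rng(seed)
    n = len(true_means)
    counts = np.zeros(n)
    estimates = np.zeros(n)
    total = 0.0
    for _ in range(horizon):
        if rng.random() < epsilon:
            arm = int(rng.integers(n))       # explore: try a random arm
        else:
            arm = int(np.argmax(estimates))  # exploit: best estimate so far
        reward = float(rng.random() < true_means[arm])
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total += reward
    return total

means = [0.3, 0.5, 0.7]
for eps in (0.0, 0.1, 0.5):
    print(eps, run_bandit(eps, means))
```

With epsilon = 0 the agent can lock onto a mediocre arm it happened to try first; with epsilon too large it wastes pulls on arms it already knows are bad. Reward-aware exploration measures aim to quantify exactly this tradeoff.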
The Director of the United States Intelligence Advanced Research Projects Activity (IARPA) visited the Future of Humanity Institute (FHI) today. Dr. Jason Matheny joined researchers for discussions of biosecurity, artificial intelligence safety and existential risk reduction policy, among other topics.
This post outlines activities at the Future of Humanity Institute during July, August and September 2016. We published three new papers, attended several conferences, hired Prof. William MacAskill, hosted four interns and one summer research fellow, and made progress in a number of research areas.
The Future of Humanity Institute and DeepMind are co-hosting monthly seminars aimed at deepening the ongoing fruitful collaboration between AI safety researchers in these organisations. The Future of Humanity Institute played host to the seminar series for the first time last week.
Last month, FHI researchers in collaboration with the Centre for Effective Altruism met in Helsinki to discuss existential risk policy with a number of Finnish government agencies. A full-day workshop was followed by meetings held at the Office of the President, as well as with groups in policy planning and arms control.
FHI Research Fellow Miles Brundage recently met with policy-makers and analysts at the European Commission in Brussels. He participated in a roundtable discussion at the European Political Strategy Center (EPSC) featuring officials working on various aspects of trade, innovation, and research policy.
MIRI have uploaded a third set of videos from their co-hosted workshop with the Future of Humanity Institute, the Colloquium Series on Robust and Beneficial AI.
Miles Brundage, Anders Sandberg and Andrew Snyder-Beattie attended the Symposium on Ethics of Autonomous Systems (SEAS), an event organised by the Institute of Electrical and Electronics Engineers (IEEE) and attended by globally recognised experts from a diversity of fields.
We recently teamed up with the Machine Intelligence Research Institute (MIRI) to co-host a 22-day Colloquium Series on Robust and Beneficial AI (CSRBAI) at the MIRI office. The colloquium was aimed at bringing together safety-conscious AI scientists from academia and industry to share their recent work. The event served that purpose well, initiating some new collaborations and a number of new conversations between researchers who hadn’t interacted before or had only talked remotely.
The Future of Humanity Institute is delighted to announce the hiring of Jan Leike and Miles Brundage for the Strategic Artificial Intelligence Research Centre (SAIRC).
Jan Leike’s research, co-authored with Tor Lattimore, Laurent Orseau and Marcus Hutter, discusses a variant of Thompson sampling for nonparametric reinforcement learning in countable classes of general stochastic environments. These environments can be non-Markov, nonergodic, and partially observable. It shows that Thompson sampling learns the environment class in the sense that (1) asymptotically its value converges to the optimal value in mean, and (2) given a recoverability assumption, regret is sublinear.
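The paper works with general stochastic environments; for readers new to the algorithm, here is Thompson sampling in its simplest setting (a Bernoulli bandit with Beta priors), a sketch of my own for illustration rather than the paper’s construction:

```python
import numpy as np

def thompson_sampling_bernoulli(true_means, horizon, seed=0):
    """Thompson sampling for a Bernoulli bandit with Beta(1, 1) priors.

    A much simpler setting than the paper's general environments, but it
    shows the core idea: sample a model from the posterior, act greedily
    with respect to the sample, then update the posterior.
    """
    rng = np.random.default_rng(seed)
    n_arms = len(true_means)
    successes = np.ones(n_arms)   # Beta alpha parameters
    failures = np.ones(n_arms)    # Beta beta parameters
    total_reward = 0.0
    for _ in range(horizon):
        # Sample a plausible mean for each arm from its posterior.
        samples = rng.beta(successes, failures)
        arm = int(np.argmax(samples))    # act greedily on the sample
        reward = float(rng.random() < true_means[arm])
        successes[arm] += reward
        failures[arm] += 1.0 - reward
        total_reward += reward
    return total_reward

print(thompson_sampling_bernoulli([0.2, 0.5, 0.8], horizon=10_000))
```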
Future of Humanity Institute Research Fellow Jan Leike and Machine Intelligence Research Institute Research Fellows Jessica Taylor and Benya Fallenstein presented new results at UAI 2016 that resolve a longstanding open problem in game theory.
The paper describes the first general reduction of game-theoretic reasoning to expected utility maximization.
This Q2 update highlights some of our key achievements and provides links to particularly interesting outputs. We conducted two workshops, released a report with the Global Priorities Project and the Global Challenges Foundation, published a paper with DeepMind, and hired Jan Leike and Miles Brundage.
Congratulations to Research Associate Robin Hanson for the publication of his book, Age of Em. Summary from the book’s website: Robots may one day rule the world, but what is a robot-ruled Earth like? Many think the first truly smart robots will be brain emulations or ems. Scan a human brain, then run a model with […]
FHI Researcher Owen Cotton-Barratt recently gave evidence to the UK Parliament’s Science and Technology Commons Select Committee.
In keeping with past leadership efforts, the US National Academies of Sciences, Engineering, and Medicine have launched a new initiative to inform decision making related to recent advances in human gene-editing research. As part of the comprehensive study, the committee convened a group of experts in Paris to review the principles underlying human gene editing governance […]
This working paper, by Prof. Nick Bostrom, attempts a preliminary analysis of the global desirability of different forms of openness in AI development (including openness about source code, science, data, safety techniques, capabilities, and goals).
FHI’s Assistant Director Niel Bowerman gives oral evidence at the European Parliament
Nick Bostrom delivered a Flagship Seminar on Macrostrategy at the Bank of England. Go to the Bank of England event page for the video recording. Nick discussed some of the challenges that appear if one is seeking to maximize the expected value of the long-term consequences of present actions, particularly if one’s objective function has a time-neutral altruistic […]
This piece has been cross-posted from the Global Priorities Project. Please click here to see the original post. The recent US moratorium on certain types of Gain-of-Function (GoF) research made it clear that a new approach is needed to balance the costs and benefits of potentially risky research. Current risk management tools work well in […]
On February 8th and 9th, twenty leading academics and policy-makers from the UK, USA, Germany, Finland, and Sweden gathered at the University of Oxford to discuss the governance of existential risks. This brought together a mixture of specialists in relevant subject domains, diplomats, policy experts, and researchers with broad methodological expertise in existential risk. The event […]
A new research centre to explore the opportunities and challenges to humanity from the development of artificial intelligence has been launched this week after a £10 million grant from the Leverhulme Trust.
The New Yorker has published an article titled ‘The Doomsday Invention’ about Nick Bostrom and his bestselling book ‘Superintelligence: Paths, Dangers, Strategies’. The article’s author, Raffi Khatchadourian, who was nominated for a National Magazine Award in profile writing, profiles Nick Bostrom and sheds light on the Future of Humanity Institute’s research.
On October 7th Nick Bostrom will be speaking alongside Max Tegmark from the Future of Life Institute at the United Nations Headquarters in New York.
The event is titled CBRN National Action Plans: Rising to the Challenges of International Security and the Emergence of Artificial Intelligence.
On long timescales, where is humanity headed? What are the big uncertainties? What does that mean for decisions today? In this series of lectures, we will tackle these issues, and explore the questions that feed into them. Many are multidisciplinary, and progress often draws on knowledge and tools from economics and other sciences, philosophy, and mathematics.
The guests on HARDtalk are people who do much to shape our world. More often than not they’re testament to the talent and potential of the human species.
But what if we’re living on the cusp of a new era? Shaped not by mankind, but by machines using artificial intelligence to build a post-human world. Science fiction?
Not according to HARDtalk’s guest, scientist and philosopher Nick Bostrom, who runs the Future of Humanity Institute. Stephen Sackur asks: when truly intelligent machines arrive, what happens to us?
Nick Bostrom was recently awarded a €2 million ERC Advanced Grant, widely considered to be the most prestigious grant available from the European Research Council. The grant will allow Nick Bostrom and a team of FHI researchers to continue their work on existential risk and crucial considerations.
The title of the grant is “UnPrEDICT: Uncertainty and Precaution—Ethical Decisions Involving Catastrophic Threats.”
The Future of Humanity Institute at Oxford University and the Centre for the Study of Existential Risk at Cambridge University are to receive a £1m grant for policy and technical research into the development of machine intelligence.
The grant is from the Future of Life Institute in Boston, USA, and has been funded by the Open Philanthropy Project and Elon Musk, CEO of Tesla Motors and SpaceX.
At a lecture at the Cambridge Centre for the Study of Existential Risk, Dr. Toby Ord discussed the relative likelihood of natural existential risk, as opposed to anthropogenic risks. His analysis of the issue indicates a much higher probability of anthropogenic existential risk.
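A back-of-the-envelope version of the natural-risk side of the argument (my gloss, not necessarily the talk’s exact numbers): Homo sapiens has survived roughly $k \approx 2{,}000$ centuries. If the natural per-century extinction risk were a constant $r$, the probability of that track record would be

$$(1-r)^{k} \;\approx\; e^{-rk},$$

which for $r = 1\%$ is $e^{-20} \approx 2\times10^{-9}$. Our long survival therefore suggests natural risk sits well below the percent-per-century range, a constraint that brand-new anthropogenic risks do not enjoy.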
On June 2nd Professor Marc Lipsitch will be giving a public lecture at FHI on the ethics of creating potential pandemic pathogens. Professor Lipsitch is director of the Center for Communicable Disease Dynamics and Professor of Epidemiology at Harvard.
In a recent open letter, Toby Ord describes FHI’s position on experiments that create potential pandemic pathogens, noting that “the experiments involve risks of killing hundreds of thousands (or even millions) of individuals in the process.”
At the latest TED conference in Vancouver, Professor Nick Bostrom discussed concerns about machine superintelligence and FHI’s research on AI safety.
In a recent discussion with Baidu CEO Robin Li, Bill Gates discussed FHI’s research, stating that he would “highly recommend” Superintelligence.
In a newly published FHI Technical Report, “MDL Intelligence Distillation: Exploring strategies for safe access to superintelligent problem-solving capabilities”, Eric Drexler explores a general approach to separating learning capacity from domain knowledge, and then using controlled input and retention of specialised domain knowledge to focus and implicitly constrain the capabilities of domain-specific superintelligent problem solvers.
FHI researcher Toby Ord has published recent research on moral trade in Ethics. Differing ethical viewpoints can allow for moral trade, arrangements that improve the state of affairs from all involved viewpoints.
In a recent technical report, Dr. Owen Cotton-Barratt discusses how we ought to allocate existential risk mitigation effort across time. The primary finding is that, all else being equal, we should prefer to work earlier and to work on risks that might come early.
The Future of Humanity Institute is pleased to announce the results for the 2014 Thesis Prize Competition: Crucial Considerations for the Future of Humanity. Entrants submitted a two-page ‘thesis proposal’ consisting of a 300 word abstract and an outline plan of a thesis on crucial considerations for humanity’s future. Professor Nick Bostrom, Dr Toby Ord […]
In a recent report, FHI researchers examine the strengths and weaknesses of two existing definitions of existential risk, and suggest a new definition based on expected value. This leads to a parallel concept: ‘existential hope’, the chance of something extremely good happening.
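One illustrative way to formalize the expected-value framing (my notation, not the report’s): write $V$ for the value of humanity’s long-term future and, for an event $E$, define the relative loss

$$\operatorname{loss}(E) \;=\; \frac{\mathbb{E}[V]-\mathbb{E}[V\mid E]}{\mathbb{E}[V]}.$$

An existential catastrophe is then an event whose relative loss is close to 1, and existential hope is the mirror image: the chance of an event whose relative gain is similarly large.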
Over the weekend of January 2, much of our research staff from the Oxford Martin Programme on the Impacts of Future Technology attended The Future of AI: Opportunities and Challenges, a conference held by the Future of Life Institute to bring together AI researchers from academia and industry, AI safety researchers, lawyers, economists, and many […]
In 2014 FHI produced over 20 publications and policy reports, and our research was the topic of over 1000 media pieces. The highlight of the year was the publication of Superintelligence: Paths, Dangers, Strategies, which has opened a broader discussion on how to ensure our future AI systems remain safe.
Nick Bostrom has completed a draft paper on value porosity and utility diversification. This theory could be used as part of a ‘Hail Mary’ approach to the AI safety problem.
On December 16th, FHI researcher Carl Frey published a piece in Scientific American describing the challenges of a digital economy.
In a recent contribution to The Edge, Professor Stuart Russell describes FHI’s position on the opportunities and risks of future AI systems.
The 2014 UK Chief Scientific Advisor’s report has included a chapter on existential risk, written by FHI researchers Toby Ord and Nick Beckstead. The report describes the risks posed by AI, biotechnology, and geoengineering, as well as the ethical framework under which we ought to evaluate existential risk.
On November 5th, FHI’s recent work on the future dangers of artificial intelligence was featured in the New York Times.
On October 13th Professor Nick Bostrom will present his recent book Superintelligence: Paths, Dangers, Strategies at the Oxford Martin School. The lecture will be followed by a book signing and drinks reception, open to the public.
On October 13th, Dr. Seth Baum, the executive director of the Global Catastrophic Risk Institute, will lead a seminar on deterrence theory and global catastrophic risk reduction at FHI.
Today Carl Frey presented his economics research in an article in the Financial Times.
Professor Marc Lipsitch will be giving a talk on recent experiments with potential pandemic pathogens and their ethical alternatives on September 25th. Professor Lipsitch is a professor of epidemiology and the director of the Center for Communicable Disease Dynamics at Harvard University.
The Chronicle of Higher Education highlighted work done at FHI in an article about the risks of artificial intelligence and other advanced technologies.
Superintelligence: Paths, Dangers, Strategies has been featured on the New York Times science bestseller list, sharing the list with Malcolm Gladwell’s David and Goliath and Daniel Kahneman’s Thinking, Fast and Slow.
In an article featured in Scientific American, Oxford Martin and FHI research fellow Carl Frey discusses how cities can manage technological change, noting that the process of creative destruction works best when new occupations are fostered.
Superintelligence: Paths, Dangers, Strategies is now available in the United States. To mark the event, Nick Bostrom is starting his book tour in Washington DC at Noblis.
Nick Bostrom recently advised Obama’s Presidential Commission for the Study of Bioethical Issues on issues regarding ethical considerations in cognitive enhancement. Discussion included how concerns about distributive justice and fairness might be addressed in light of potential individual or societal benefits of cognitive enhancement.
Superintelligence: Paths, Dangers, Strategies has been featured on the New York Times bestseller list. Ranked in the top 25 nonfiction e-books this week, Superintelligence has topped books such as Daniel Kahneman’s Thinking, Fast and Slow.
Nick Bostrom will be touring the United States to discuss Superintelligence: Paths, Dangers, Strategies from September 3-12th.
On July 21st Professor Steve Stedman visited the Future of Humanity Institute to discuss global catastrophic risks and emerging technology. Professor Stedman is the former Assistant Secretary General to the United Nations, where he proposed and implemented the United Nations Task Force on Counter-terrorism, among other accomplishments.
A recent Financial Times review of Superintelligence states “there is no doubting the force of [Bostrom’s] arguments … the problem is a research challenge worthy of the next generation’s best mathematical talent. Human civilisation is at stake.”
In collaboration with the Machine Intelligence Research Institute, FHI hosted a MIRIx Workshop to develop the technical agenda for AI safety. Attendees generated new strategic considerations for technical agenda setting, technical research ideas, and comments on existing topics in the technical agenda.
Anders Sandberg gave an invited talk about enhancement ethics and emerging technologies at the Army Research Labs Adelphi Center. His main theme was how automation will shift occupational demand – both in society at large and in a military setting – more towards skills and abilities where human enhancement is relevant.
“Nick Bostrom makes a persuasive case that the future impact of AI is perhaps the most important issue the human race has ever faced. Instead of passively drifting, we need to steer a course. Superintelligence charts the submerged rocks of the future with unprecedented detail. It marks the beginning of a new era.” – Stuart Russell
Superintelligence: Paths, Dangers, Strategies has been placed at the top of the Financial Times scientific summer reading list, which states that “Bostrom … casts a philosopher’s eye at the past, present and future of artificial intelligence.”
Nick Bostrom will be presenting his new book Superintelligence: Paths, Dangers, Strategies at the Royal Society of Arts on July 3rd. Join the waiting list here.
How should one construct a prior for unprecedented events? Last week, Toby Ord described Laplace’s law of succession in an FHI seminar.
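For reference, the rule presumably under discussion: with a uniform prior on an unknown success probability and $s$ successes observed in $n$ independent trials, Laplace’s law of succession gives

$$\Pr[\text{success on trial } n+1] \;=\; \frac{s+1}{n+2}.$$

An unprecedented event has $s = 0$, so after $n$ event-free periods the rule assigns probability $1/(n+2)$ to the event occurring in the next period; for example, 100 event-free years yield $1/102 \approx 1\%$.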
Last month Edward Perello, Cofounder of Desktop Genetics Ltd, gave a lecture on emerging technologies and biosecurity at the Future of Humanity Institute. Topics included DNA synthesis, the biohacking movement, and government regulation.
Four representatives from the Future of Humanity Institute will be speaking at Good Done Right, a conference on effective altruism taking place on July 7th-9th at All Souls College in Oxford. The conference will seek to use insights from ethical theory, economics, and related disciplines to identify the best means to secure and promote the […]
Last week Anders Sandberg wrote an article in The Conversation entitled “The five biggest threats to human existence”. The article discusses the risks of nuclear weapons, biotechnology, and superintelligence.
Over the years a number of researchers have participated in our work but have since moved on to other positions.
On May 24, the Future of Life Institute will host an opening event at MIT with board members Jaan Tallinn, George Church, Alan Alda, and Frank Wilczek. The Future of Life Institute is dedicated to responsible innovation and the reduction of existential risk.
On May 12th, researchers from FHI participated in a public “ask me anything” series on Reddit, hosted by The Conversation. Topics covered climate change, pandemics, bioethics, artificial intelligence, and existential risk, with the session reaching Reddit’s front page.
Is artificial intelligence an existential threat to humanity? On May 13th, Dr. Joanna Bryson will be delivering a lecture at the Oxford Martin School discussing the notion of an intelligence explosion.
Nick Bostrom has been included in Prospect Magazine’s top 15 world thinkers, as the highest-ranked analytic philosopher and the 3rd highest-ranked philosopher overall.
Citing work done by the Future of Humanity Institute, Stephen Hawking warned that dismissing the dangers of advanced artificial intelligence could be the “worst mistake in history.”
Can philosophical research contribute to securing a long and prosperous future for humanity and its descendants? What would you think about if you really wanted to make a difference?
Following Stephen Hawking et al.’s article on superintelligence, Daniel Dewey discussed artificial intelligence and existential risk in an interview on Motherboard.
Nick Bostrom has been included in Prospect Magazine’s top 15 world thinkers, an honour shared with entrepreneur Elon Musk, Pope Francis, and Nobel Prize winners Peter Higgs and Daniel Kahneman. Out of all philosophers, Bostrom was ranked 3rd, and out of analytic philosophers Bostrom was ranked 1st.
Nick Bostrom’s latest book is now available for preorder. Superintelligence asks the questions: What happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us?
In an article published last week, notable scientists Stephen Hawking, Max Tegmark, Frank Wilczek, and Stuart Russell discuss the risks associated with advances in artificial intelligence, citing work done by the Future of Humanity Institute.
FHI’s Toby Ord recently presented his talk, “How to Save Hundreds of Lives,” at a TEDx event at Cambridge University. In it, Toby demonstrates that even a modest amount of giving can save thousands of quality-adjusted life years.
Could a “zero marginal cost society” maintain an equitable distribution of wealth? An article in The Guardian takes on this issue, citing Carl Frey’s work on automation and employment.
What are some plausible future risks from advanced AI? Last week Daniel Dewey was interviewed on NonTheology, a science and philosophy podcast.
Seán Ó hÉigeartaigh gave the opening talk at Belgium’s TEDx UHasselt Conference last Saturday, titled “Technological risk: how unexpected connections will help us tackle humanity’s greatest challenge.”
FHI director Nick Bostrom has made Prospect Magazine’s top 50 World Thinkers, along with Pope Francis, Peter Higgs, and Elon Musk. Online voting for a top World Thinker continues here until April 11th.
On March 28th, Toby Ord spoke at the Oxford Literary Festival about the forthcoming book he contributed to: Is The Planet Full? Toby’s chapter, ‘Overpopulation or Underpopulation?’ raises a number of philosophical questions concerning how we should think about population, involving ethics and technology.
Should the rate of innovation be accelerated? Nick Bostrom will discuss the risks and benefits in a keynote lecture at the Age of Wonder in Brussels this Friday.
On March 14th Dr. Ilya Shpitser discussed the merits of causal decision theory (CDT). His talk demonstrated the potential of CDT to resolve decision theory problems (such as Newcomb’s problem) so long as they are expressed in the correct causal graph.
On March 13th, Carl Frey and Mike Osborne presented their results on automation and employment at the Oxford Martin School. Topics included creative destruction, applications of machine learning, and the future role of human workers.
Last week Stuart Armstrong was interviewed on the risks of artificial intelligence. While predictions and consequences remain uncertain, he highlighted the need for more research on these issues.
Carl Frey discussed the future of automation and employment at the House of Commons on February 25th. Our current research within the Oxford Martin School’s Programme on the Impacts of Future Technology suggests almost half of the modern workforce is vulnerable to increased automation.
The Cambridge Centre for the Study of Existential Risk will be hosting a public lecture on February 26th entitled “Existential Risk: Surviving the 21st Century.”
As physics shines more light on the nature of our universe, Nick Bostrom’s simulation argument has received more attention. Last week, the New York Times cited Bostrom’s work in explaining why physicists are conducting certain experiments.
Would Asimov’s laws or a simple kill switch be sufficient to avoid the harms of a superintelligent AI? Nick Bostrom and James Barrat explain the difficulties of ensuring positive outcomes of an intelligence explosion on BBC Radio.
The Future of Humanity Institute is pleased to announce the Amlin-Oxford Martin School Conference on Systemic Risk, taking place on February 11th-12th. Speakers will include Lord Robert May, Professor Didier Sornette, Professor Ian Goldin, and Professor Doyne Farmer.
On February 4th, a diverse group met at the Future of Humanity Institute to discuss projects in agent-based modelling (ABM). Experts in cultural anthropology, neuroscience, complex systems, and ecology all shared insights into how ABM is used in their respective fields.
Today Dr. Stuart Armstrong lauded Google’s decision to establish an artificial intelligence (AI) ethics board following their acquisition of DeepMind. It is a positive step forward, addressing the risks associated with continuing improvements in AI.
David Christian, hosted in part by our Programme on the Impacts of Future Technology, will be presenting his work on “Big History” at the Oxford Martin School on January 31st at 16:00.
In the past, technological innovation has increased long run employment. Will this pattern hold in the future? The Economist has featured Carl Frey and Mike Osborne’s paper on automation and unemployment in the January 18th print edition.
Luke Muehlhauser, executive director of the Machine Intelligence Research Institute, was interviewed by io9 regarding a paper he co-wrote with Nick Bostrom about the dangers of artificial intelligence.
Eric Drexler, an academic visitor at the Future of Humanity Institute, will give a lecture on atomically precise manufacturing at the Oxford Martin School on January 22nd. The talk will be based on his new book, Radical Abundance.
The Future of Humanity Institute is hosting a maths workshop led by the Machine Intelligence Research Institute (MIRI). The week-long workshop covers topics such as mathematical logic and probability theory, and how these tools relate to artificial intelligence.
Nick Bostrom visited 10 Downing Street on Tuesday 12 November 2013 to advise on topics ranging from existential risk to more effective institutions.
On Wednesday 09 November, ideas from Stuart Armstrong and Anders Sandberg were featured in George Dvorsky’s article on self-replicating space probes.
On Saturday 02 November, Daniel Dewey will join speakers such as Aubrey de Grey and Mark Post at TEDx Vienna.
Professor Nick Bostrom, Dr. Anders Sandberg, and Dr. Eric Drexler will speak on September 28 at Futurefest 2013, an event designed to “enlarge our sense of what’s possible, so that we can all play our part in shaping things to come.”
The Future of Humanity Institute is pleased to announce the winners of our 2013 competition.
The Oxford Martin Programme on the Impacts of Future Technology will be co-hosting the 2013 Philosophy and Theory of AI conference in Oxford this weekend, September 21-22.
Professor Nick Bostrom will be giving a closing keynote at the Re.Work Technology Summit this evening at LSO St Luke’s, London.
Dr. Stuart Armstrong gave a talk at the IARU Summer School on the Ethics of Technology. The talk addressed many of the research areas of our institute.
Dr. Anders Sandberg comments at the Science Media Centre (SMC) on a recent PNAS study of rat neurophysiology following cardiac arrest.
On Friday 09 August Dr. Anders Sandberg will be one of the invited panelists discussing “The Future of AI: What if We Succeed?” at the 2013 International Joint Conference on Artificial Intelligence (IJCAI) in Beijing, China.
“Why is the FHI interested in looking for aliens? Isn’t it the Future of Humanity Institute?”
Professor Bostrom has given a closing keynote presentation at this year’s Guardian Activate London Summit which took place on Tuesday 09 July 2013.
Last week Dr. Toby Ord went to 10 Downing Street, where he met with a special advisor to the Prime Minister.
Professor Nick Bostrom is participating in the Annual Humanitarian Affairs Segment of the United Nations Economic and Social Council (ECOSOC), held in Geneva, 15-17 July 2013.
Dr. Anders Sandberg, on the Oxford Martin School blog: …unaccountable surveillance is much easier turned into a tool for evil than accountable surveillance: the key question is not who got what information about whom, or even security versus freedom, but whether there is appropriate oversight and safeguards for civil liberties.
The Future of Humanity Institute mourns the passing of James Martin, the visionary founder of the Oxford Martin School. The FHI owes its establishment and much of its success over the years to Dr. Martin’s support and guidance.
Professor Nick Bostrom is quoted in an editorial and a featured article in The Observer, discussing the topics of prosthetics and robotics, human enhancement, and transhumanism.
On 15 June 2013, Dr Anders Sandberg gave a talk entitled “Making Minds Morally: the Research Ethics of Brain Emulation” at the GF2045 International Congress.
Max Tegmark, from the Massachusetts Institute of Technology and the Foundational Questions Institute (FQXi), presents a cosmic perspective on the future of life, covering our increasing scientific knowledge, the cosmic background radiation, the ultimate fate of the universe, and what we need to do to ensure the human race’s survival and flourishing in the short […]
The Founders Forum is a global network of digital leaders which connects the brightest and most dynamic digital start-ups to key investors, select CEOs and policy makers. The members meet at four key events around the globe – in NYC, Mumbai, Rio and at its flagship event in London. Founders Forum events are all about […]
Vincent C. Müller and Anders Sandberg will present their paper on “Brain Surveillance” at the 2nd Ethics of Surveillance Conference, Leeds, June 24-25, 2013.
The Future of Humanity Institute and the Department of Physics at the University of Oxford are pleased to invite you to a talk by one of the world’s foremost researchers in the field of cosmology.
Why systemic risk in catastrophe modelling is crucially important.
The Future of Humanity Institute is pleased to announce the establishment of the FHI-Amlin Research Collaboration on Systemic Risk of Modelling.
The “Pint of Science Festival” in Oxford included a set of excellent talks on the brain, the body, and biotech in three of Oxford’s best pubs.
Professor Nick Bostrom, Director of the Future of Humanity Institute, will be speaking on both BBC Radio 4’s The World Tonight at 10pm and BBC Radio 5 Live at 10:30 tonight, elaborating on some of the topics discussed in today’s BBC article on existential risk.
BBC News covers the FHI: An international team of scientists, mathematicians and philosophers at Oxford University’s Future of Humanity Institute is investigating the biggest dangers.
Carl Frey joins Michael Osborne (Oxford Department of Engineering Science) in hosting an interdisciplinary workshop on the future effects of automation on the employment market.
On March 5, 2013, Nick Bostrom gave a talk at The Economist’s “Technology Frontiers” conference, on the topic of human nature and the future of humanity.
Ross Andersen has published an essay about his visit to the Future of Humanity Institute in Aeon Magazine: When we peer into the fog of the deep future, what do we see – human extinction or a future among the stars?
The Future of Humanity Institute and the Programme on the Impacts of Future Technology are pleased to announce the winners of the Crucial Considerations for the Future of Humanity Thesis Abstract Competition.
Date: 14-17 January 2011. Venue: St Catherine’s College; Jesus College, Oxford. This unusual conference, bridging philosophy, cognitive science, and machine intelligence, brought together experts and students from a wide range of backgrounds for a long weekend of intense deliberation about the big questions: What holds together our experiences? What forms can intelligence take? How can we create effective collective or artificial intelligence?