August 2020

Distraction and Vanity temporarily lowered: a good time for Philosophizing.

For more, see e.g. New Yorker profile (now a bit obsolete), Bio, CV.


Some recent additions

Selected papers

Ethics & Policy

The Reversal Test: Eliminating Status Quo Bias in Applied Ethics 

We present a heuristic for correcting for one kind of bias (status quo bias), which we suggest affects many of our judgments about the consequences of modifying human nature. We apply this heuristic to the case of cognitive enhancements, and argue that the consequentialist case for this is much stronger than commonly recognized.

[w/ Toby Ord] [Ethics, Vol. 116, No. 4 (2006): 656–679] [pdf]

Astronomical Waste: The Opportunity Cost of Delayed Technological Development

Suns are illuminating and heating empty rooms, unused energy is being flushed down black holes, and our great common endowment of negentropy is being irreversibly degraded into entropy on a cosmic scale. These are resources that an advanced civilization could have used to create value-structures, such as sentient beings living worthwhile lives...

Infinite Ethics

Cosmology shows that we might well be living in an infinite universe that contains infinitely many happy and sad people. Given some assumptions, aggregative ethics implies that such a world contains an infinite amount of positive value and an infinite amount of negative value. But you can presumably do only a finite amount of good or bad. Since an infinite cardinal quantity is unchanged by the addition or subtraction of a finite quantity, it looks as though you can't change the value of the world. Aggregative consequentialism (and many other important ethical theories) is threatened by total paralysis. We explore a variety of potential cures, and discover that none works perfectly and all have serious side-effects. Is aggregative ethics doomed?

[Analysis and Metaphysics, Vol. 10 (2011): 9–59] [Original draft was available in 2003] [html] [pdf]
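
The paralysis worry rests on a one-line fact of cardinal arithmetic (an illustrative gloss, not notation from the paper):

    \[
      \aleph_0 + n = \aleph_0 \qquad \text{for every finite } n,
    \]

so if the world already contains \aleph_0 units of positive value and \aleph_0 units of negative value, adding or removing any finite amount of either leaves both totals exactly as they were, which is why the individual agent seems unable to change the value of the world.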

The Unilateralist's Curse: The Case for a Principle of Conformity

In cases where several altruistic agents each have an opportunity to undertake some initiative, a phenomenon arises that is analogous to the winner's curse in auction theory. To combat this problem, we propose a principle of conformity. It has applications in technology policy and many other areas.

[w/ Thomas Douglas & Anders Sandberg] [Social Epistemology, Vol. 30, No. 4 (2016): 350–371] [pdf]
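
The mechanism can be seen in a small simulation (a sketch with illustrative numbers and a Gaussian noise model of my own choosing, not the paper's formal setup): each of N well-meaning agents receives an independent, unbiased estimate of an initiative's true value and proceeds if their estimate is positive; the initiative is launched if anyone proceeds.

    import random

    # Sketch: N agents, each with an unbiased but noisy estimate of the initiative's
    # true value; the initiative happens if at least one agent's estimate is positive.
    def p_undertaken(true_value, n_agents, noise_sd=1.0, trials=20_000, seed=0):
        rng = random.Random(seed)
        launches = 0
        for _ in range(trials):
            estimates = [rng.gauss(true_value, noise_sd) for _ in range(n_agents)]
            if max(estimates) > 0:      # one optimistic agent suffices
                launches += 1
        return launches / trials

    for n in (1, 5, 25):
        print(n, p_undertaken(true_value=-0.5, n_agents=n))
    # roughly 0.31, 0.84, ~1.0

Even with a clearly negative true value, the probability that someone launches the initiative rises steeply with the number of independent agents – the analogue of the winner's curse that the proposed principle of conformity is meant to counteract.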

Dignity and Enhancement

Does human enhancement threaten our dignity as some have asserted? Or could our dignity perhaps be technologically enhanced? After disentangling several different concepts of dignity, this essay focuses on the idea of dignity as a quality (a kind of excellence admitting of degrees). The interactions between enhancement and dignity as a quality are complex and link into fundamental issues in ethics and value theory.

[Human Dignity and Bioethics: Essays Commissioned by the President’s Council on Bioethics (Washington, D.C., 2008): 173–207] [pdf]

In Defense of Posthuman Dignity

A brief paper that critiques a host of bioconservative pundits who believe that enhancing human capacities and extending human healthspan would undermine our dignity.

[Bioethics, Vol. 19, No. 3 (2005): 202–214] [translations: Italian, Slovenian, Portuguese] [Was chosen for inclusion in a special anthology of the best papers published in this journal in the past two decades] [html] [pdf]

Human Enhancement

Original essays by various prominent moral philosophers on the ethics of human enhancement.

[Eds. Nick Bostrom & Julian Savulescu (Oxford University Press, 2009)]

Ethical Issues in Human Enhancement

Anthology chapter on the ethics of human enhancement

[w/ Rebecca Roache] [New Waves in Applied Ethics, eds. Jesper Ryberg et al. (Palgrave Macmillan, 2007): 120–152] [html] [pdf]

The Ethics of Artificial Intelligence

Overview of ethical issues raised by the possibility of creating intelligent machines. Questions relate both to ensuring such machines do not harm humans and to the moral status of the machines themselves.

[w/ Eliezer Yudkowsky] [The Cambridge Handbook of Artificial Intelligence, eds. Keith Frankish & William Ramsey (Cambridge University Press, 2014): 316–334] [translation: Portuguese] [pdf]

Ethical Issues In Advanced Artificial Intelligence

Some cursory notes; not very in-depth.

[Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, Volume 2, eds. I. Smit et al. (Int. Institute of Advanced Studies in Systems Research and Cybernetics, 2003): 12–17] [translation: Italian] [html] [pdf]

Smart Policy: Cognitive Enhancement and the Public Interest

Short article summarizing some of the key issues and offering specific recommendations, illustrating the opportunity and need for "smart policy": the integration into public policy of a broad spectrum of approaches aimed at protecting and enhancing cognitive capacities and epistemic performance of individuals and institutions.

[w/ Rebecca Roache] [Enhancing Human Capacities, eds. Julian Savulescu et al. (Wiley-Blackwell, 2011): 138–149] [pdf]

Transhumanism

Why I Want to be a Posthuman When I Grow Up

After some definitions and conceptual clarification, I argue for two theses. First, some posthuman modes of being would be extremely worthwhile. Second, it could be good for human beings to become posthuman.

[Medical Enhancement and Posthumanity, eds. Bert Gordijn & Ruth Chadwick (Springer, 2008): 107–137] [pdf]

Letter from Utopia 

The good life: just how good could it be? A vision of the future from the future.

This is an improved version (v. 3.0) from 2019
[Studies in Ethics, Law, and Technology, Vol. 2, No. 1 (2008): 1–7]
[translations: French, Italian, Hungarian, Russian] [html] [pdf] [mp3]

The Transhumanist FAQ

The revised version 2.1. The document represents an effort to develop a broadly based consensus articulation of the basics of responsible transhumanism. Some one hundred people collaborated with me in creating this text. It feels like a document from another era.

[2003] [translations: German, Hungarian, Dutch, Russian, Polish, Finnish, Greek, Italian] [pdf]

Transhumanist Values

Wonderful ways of being may be located in the "posthuman realm", but we can't reach them. If we enhance ourselves using technology, however, we can go out there and realize these values. This paper sketches a transhumanist axiology.

[Ethical Issues for the 21st Century, ed. Frederick Adams (Philosophy Documentation Center Press, 2003); reprinted in Review of Contemporary Philosophy, Vol. 4, No. 1–2 (2005)] [translations: Polish, Portuguese, Spanish] [html] [pdf]

A History of Transhumanist Thought

The human desire to acquire new capacities, to extend life and overcome obstacles to happiness is as ancient as the species itself. But transhumanism has emerged gradually as a distinctive outlook, with no one person being responsible for its present shape. Here's one account of how it happened.

[Journal of Evolution and Technology, Vol. 14, No. 1 (2005): 1–25] [translation: Spanish] [pdf]

Risk & The Future

Existential Risk Prevention as Global Priority

Existential risks are those that threaten the entire future of humanity. This paper elaborates the concept of existential risk and its relation to basic issues in axiology and develops an improved classification scheme for such risks. It also describes some of the theoretical and practical challenges posed by various existential risks and suggests a new way of thinking about the ideal of sustainability.

[Global Policy, Vol. 4, No. 1 (2013): 15–31] [translation: Portuguese] [html] [pdf]

How Unlikely is a Doomsday Catastrophe?

Examines the risk from physics experiments and natural events to the local fabric of spacetime. Argues that the Brookhaven report overlooks an observation selection effect. Shows how this limitation can be overcome by using data on planet formation rates.

[w/ Max Tegmark] [expanded; Nature, Vol. 438 (2005): 754] [translation: Russian] [pdf]

The Future of Humanity

This paper discusses four families of scenarios for humanity’s future: extinction, recurrent collapse, plateau, and posthumanity.

[New Waves in Philosophy of Technology, eds. Evan Selinger & Soren Riis (Palgrave Macmillan, 2009): 186–215] [pdf] [html]

Global Catastrophic Risks

Twenty-six leading experts look at the gravest risks facing humanity in the 21st century, including natural catastrophes, nuclear war, terrorism, global warming, biological weapons, totalitarianism, advanced nanotechnology, general artificial intelligence, and social collapse. The book also addresses overarching issues—policy responses and methods for predicting and managing catastrophes. Foreword by Lord Martin Rees.

[Eds. Nick Bostrom & Milan Ćirković (Oxford University Press, 2008)] [Introduction chapter available free here] [pdf]

The Future of Human Evolution

This paper explores some dystopian scenarios where freewheeling evolutionary developments, while continuing to produce complex and intelligent forms of organization, lead to the gradual elimination of all forms of being worth caring about. We then discuss how such outcomes could be avoided and argue that under certain conditions the only possible remedy would be a globally coordinated effort to control human evolution by adopting social policies that modify the default fitness function of future life forms.

[Death and Anti-Death, Volume 2, ed. Charles Tandy (Ria University Press, 2004): 339–371] [pdf] [html]

Technological Revolutions: Ethics and Policy in the Dark

Technological revolutions are among the most important things that happen to humanity. This paper discusses some of the ethical and policy issues raised by anticipated technological revolutions, such as nanotechnology.

[Nanoscale: Issues and Perspectives for the Nano Century, eds. Nigel M. de S. Cameron & M. Ellen Mitchell (John Wiley, 2007): 129–156] [pdf]

Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards

Existential risks are ways in which we could screw up badly and permanently. Remarkably, relatively little serious work has been done in this important area. The point, of course, is not to welter in doom and gloom but to better understand where the biggest dangers are so that we can develop strategies for reducing them.

[Journal of Evolution and Technology, Vol. 9, No. 1 (2002)] [html] [pdf] [translations: Russian, Belarusian]

Information Hazards: A Typology of Potential Harms from Knowledge

Information hazards are risks that arise from the dissemination or the potential dissemination of true information that may cause harm or enable some agent to cause harm. Such hazards are often subtler than direct physical threats, and, as a consequence, are easily overlooked. They can, however, be important.

[Review of Contemporary Philosophy, Vol. 10 (2011): 44–79 (first version: 2009)] [pdf]

What is a Singleton?

Concept describing a kind of social structure.

[Linguistic and Philosophical Investigations, Vol. 5, No. 2 (2006): 48–54] [translation: Polish]

Technology Issues

Embryo Selection for Cognitive Enhancement: Curiosity or Game-changer?

Embryo selection during IVF can be vastly potentiated once the technology for stem-cell-derived gametes becomes available for use in humans. This would enable iterated embryo selection (IES), compressing the effective generation time in a selection program from decades to months.

[w/ Carl Shulman] [Global Policy, Vol. 5, No. 1 (2014): 85–92] [pdf]

How Hard is AI? Evolutionary Arguments and Selection Effects

Some have argued that because blind evolutionary processes produced human intelligence on Earth, it should be feasible for clever human engineers to create human-level artificial intelligence in the not-too-distant future. We evaluate this argument.

[w/ Carl Shulman] [Journal of Consciousness Studies, Vol. 19, No. 7–8 (2012): 103–130] [pdf]

The Wisdom of Nature: An Evolutionary Heuristic for Human Enhancement

Human beings are a marvel of evolved complexity. Such systems can be difficult to enhance. Here we describe a heuristic for identifying and evaluating the practicality, safety and efficacy of potential human enhancements, based on evolutionary considerations.

[w/ Anders Sandberg] [Human Enhancement, eds. Nick Bostrom & Julian Savulescu (Oxford University Press, 2008): 375–416] [pdf]

Whole Brain Emulation: A Roadmap

A 130-page report on the technological prerequisites for whole brain emulation (aka "mind uploading").

[w/ Anders Sandberg] [Technical Report #2008–3, Future of Humanity Institute, Oxford University (2008)] [pdf]

Converging Cognitive Enhancements

Cognitive enhancements in the context of converging technologies.

[w/ Anders Sandberg] [Annals of the New York Academy of Sciences, Vol. 1093, No. 1 (2006): 201–227] [pdf]

Racing to the Precipice: a Model of Artificial Intelligence Development

Game theory model of a technology race to develop AI. Participants skimp on safety precautions to get there first. Analyzes factors that determine level of risk in the Nash equilibrium.

[w/ Stuart Armstrong & Carl Shulman] [Technical Report #2013–1, Future of Humanity Institute, Oxford University: 1–8] [AI & Society, Vol. 31, No. 2 (2016): 201–206] [pdf]

Cognitive Enhancement: Methods, Ethics, Regulatory Challenges

Cognitive enhancement comes in many diverse forms. In this paper, we survey the current state of the art in cognitive enhancement methods and consider their prospects for the near-term future. We then review some of the ethical issues arising from these technologies. We conclude with a discussion of the challenges for public policy and regulation created by present and anticipated methods for cognitive enhancement.

[w/ Anders Sandberg] [Science and Engineering Ethics, Vol. 15, No. 3 (2009): 311–341] [pdf]

Are You Living in a Computer Simulation? 

This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching the posthuman stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of its evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the naïve transhumanist dogma that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation. A number of other consequences of this result are also discussed.

[The Philosophical Quarterly, Vol. 53, No. 211 (2003): 243–255] [pdf] [html] [Also a Reply to Brian Weatherson's comments: The Philosophical Quarterly, Vol. 55, No. 218 (2005): 90–97; a Reply to Anthony Brueckner: Analysis, Vol. 69, No. 3 (2009): 458–461; and a further paper, w/ Marcin Kulczycki: Analysis, Vol. 71, No. 1 (2011): 54–61]
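
The trilemma turns on a simple observer-counting step, which can be sketched as follows (notation roughly along the lines of the paper; treat this as a gloss rather than a quotation). Writing f_P for the fraction of human-level civilizations that reach a posthuman stage, N̄ for the average number of ancestor-simulations run by such a civilization, and H̄ for the average number of individuals who live in a civilization before it reaches that stage, the fraction of human-type observers who are simulated is

    \[
      f_{\mathrm{sim}} \;=\; \frac{f_P \,\bar{N}\, \bar{H}}{f_P \,\bar{N}\, \bar{H} + \bar{H}}
                       \;=\; \frac{f_P \,\bar{N}}{f_P \,\bar{N} + 1}.
    \]

Unless f_P N̄ is very small – which is what propositions (1) and (2) amount to – f_sim is close to one, which is proposition (3).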

The New Book

“I highly recommend this book.”—Bill Gates
“terribly important … groundbreaking” … “extraordinary sagacity and clarity, enabling him to combine his wide-ranging knowledge over an impressively broad spectrum of disciplines – engineering, natural sciences, medicine, social sciences and philosophy – into a comprehensible whole” … “If this book gets the reception that it deserves, it may turn out the most important alarm bell since Rachel Carson's Silent Spring from 1962, or ever.”—Olle Häggström, Professor of Mathematical Statistics
“Nick Bostrom makes a persuasive case that the future impact of AI is perhaps the most important issue the human race has ever faced. … It marks the beginning of a new era.”—Stuart Russell, Professor of Computer Science, University of California, Berkeley
“Those disposed to dismiss an 'AI takeover' as science fiction may think again after reading this original and well-argued book.” —Martin Rees, Past President, Royal Society
“Worth reading…. We need to be super careful with AI. Potentially more dangerous than nukes”—Elon Musk
“There is no doubting the force of [Bostrom's] arguments … the problem is a research challenge worthy of the next generation's best mathematical talent. Human civilisation is at stake.” —Financial Times
“This superb analysis by one of the world's clearest thinkers tackles one of humanity's greatest challenges: if future superhuman artificial intelligence becomes the biggest event in human history, then how can we ensure that it doesn't become the last?” —Professor Max Tegmark, MIT
“a damn hard read” —The Telegraph

Anthropics & Probability

Anthropic Bias: Observation Selection Effects in Science and Philosophy 

Failure to consider observation selection effects results in a kind of bias that infests many branches of science and philosophy. This book presents the first mathematical theory of how to correct for these biases. It also discusses some implications for cosmology, evolutionary biology, game theory, the foundations of quantum mechanics, the Doomsday argument, the Sleeping Beauty problem, the search for extraterrestrial life, the question of whether God exists, and traffic planning.

[Routledge, 2002] [Complete book available for free online; also as paperback; there is also a brief primer] [primer translation: Belarusian]

Self-Locating Belief in Big Worlds: Cosmology's Missing Link to Observation

Current cosmological theories say that the world is so big that all possible observations are in fact made. But then, how can such theories be tested? What could count as negative evidence? To answer that, we need to consider observation selection effects.

[The Journal of Philosophy, Vol. 99, No. 12 (2002): 607–623] [html] [pdf]

Anthropic Shadow: Observation Selection Effects and Human Extinction Risks

"Anthropic shadow" is an observation selection effect that prevent observers from observing certain kinds of catastrophes in their recent geological and evolutionary past. We risk underestimating the risk of catastrophe types that lie in this shadow.

[w/ Milan M. Ćirković & Anders Sandberg] [Risk Analysis, Vol. 30, No. 10 (2010): 1495–1506] [Won best paper of the year award by the journal editors] [translation: Russian] [pdf]
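
A toy simulation conveys the effect (the parameters and the simple survival model are illustrative choices of mine, not the paper's model): catastrophes strike each world with a fixed per-epoch probability, observers survive any given catastrophe only with some probability, and we then ask how frequent catastrophes appear to have been in the recent past of worlds that still contain observers.

    import random

    # Sketch: true per-epoch catastrophe probability p; observers survive a given
    # catastrophe with probability s; we average the recent record only over worlds
    # where observers are still around to inspect it.
    def apparent_rate(p=0.2, s=0.5, epochs=10, worlds=200_000, seed=0):
        rng = random.Random(seed)
        catastrophes = observed_epochs = 0
        for _ in range(worlds):
            history = [rng.random() < p for _ in range(epochs)]
            survived = all(rng.random() < s for hit in history if hit)
            if not survived:
                continue                 # no observers left to examine this record
            catastrophes += sum(history)
            observed_epochs += epochs
        return catastrophes / observed_epochs

    print(apparent_rate())   # about 0.11, well below the true per-epoch rate of 0.2

Conditioning on the existence of observers shades part of the record out of view, so naive extrapolation from the observed past underestimates the true catastrophe rate.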

A Primer on the Doomsday argument

The Doomsday argument purports to prove, from basic probability theory and a few seemingly innocuous empirical premises, that the risk that our species will go extinct soon is much greater than previously thought. My view is that the Doomsday argument is inconclusive—although not for any trivial reason. In my book, I argued that a theory of observation selection effects is needed to explain where it goes wrong.

[Colloquia Manilana, Vol. 7 (1999)] [translation: Russian]
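
The standard toy version of the argument (illustrative numbers of my own, not figures from the primer) compares a "Doom Soon" hypothesis, on which a total of N_soon = 200 billion humans will ever have lived, with a "Doom Late" hypothesis, on which N_late = 200 trillion will. If you treat your own birth rank r (roughly 100 billion) as a random sample from all humans who will ever live, Bayes' theorem gives

    \[
      \frac{P(\mathrm{Soon} \mid r)}{P(\mathrm{Late} \mid r)}
        \;=\; \frac{1/N_{\mathrm{soon}}}{1/N_{\mathrm{late}}} \cdot \frac{P(\mathrm{Soon})}{P(\mathrm{Late})}
        \;=\; 1000 \cdot \frac{P(\mathrm{Soon})}{P(\mathrm{Late})},
    \]

so even a modest 5% prior in Doom Soon is driven above 98%. The primer's contention is that explaining where this reasoning goes wrong requires a worked-out theory of observation selection effects.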

Sleeping Beauty and Self-Location: A Hybrid Model

The Sleeping Beauty problem is an important touchstone for theories of self-locating belief. I argue against both of the traditional views on this problem and propose a new synthetic approach.

[Synthese, Vol. 157, No. 1 (2007): 59–78] [pdf]

Cars In the Other Lane Really Do Go Faster

When driving on the motorway, have you ever wondered about (and cursed!) the fact that cars in the other lane seem to be getting ahead faster than you? One might be tempted to account for this by invoking Murphy's Law ("If anything can go wrong, it will", discovered by Edward A. Murphy, Jr, in 1949). But there is an alternative explanation, based on observational selection effects…

[PLUS, No. 17 (2001)]
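
The selection effect is easy to reproduce in a small simulation (the lane structure and speeds are illustrative choices of mine, not from the article): because a driver spends more clock time on stretches where their own lane happens to be slow, a randomly chosen moment of the journey disproportionately finds the other lane moving faster, even when the two lanes are statistically identical.

    import random

    # Sketch: two statistically identical lanes made of unit-length stretches that
    # are independently "fast" or "slow"; the driver stays in their own lane and we
    # weight each stretch by the time spent traversing it.
    def other_lane_fractions(stretches=200_000, fast=2.0, slow=1.0, seed=0):
        rng = random.Random(seed)
        t_faster = t_slower = t_total = 0.0
        for _ in range(stretches):
            mine = rng.choice([fast, slow])    # my lane's speed on this stretch
            other = rng.choice([fast, slow])   # the other lane's speed
            dt = 1.0 / mine                    # time spent on a unit-length stretch
            t_total += dt
            if other > mine:
                t_faster += dt
            elif other < mine:
                t_slower += dt
        return t_faster / t_total, t_slower / t_total

    print(other_lane_fractions())   # roughly (0.33, 0.17)

Counting stretches, the other lane is faster exactly as often as it is slower; weighting by the time you actually spend, it is faster about twice as often.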

Cosmological Constant and the Final Anthropic Hypothesis

Examines the implications of recent evidence for a cosmological constant for the prospects of indefinite information processing in the multiverse.

[w/ Milan M. Ćirković] [Astrophysics and Space Science, Vol. 274, No. 4 (2000): 675–687] [pdf]

Philosophy of Mind

Quantity of Experience: Brain-Duplication and Degrees of Consciousness

If two brains are in identical states, are there two numerically distinct phenomenal experiences or only one? Two, I argue. But what happens in intermediate cases? This paper looks in detail at this question and suggests that there can be a fractional (non-integer) number of qualitatively identical experiences. This has implications for what it is to implement a computation and for Chalmers' Fading Qualia thought experiment.

[Minds and Machines, Vol. 16, No. 2 (2006): 185–200] [pdf]

Decision Theory

Bio

Nick Bostrom is a Swedish-born philosopher and polymath with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is a Professor at Oxford University, where he leads the Future of Humanity Institute as its founding director. (The FHI is a multidisciplinary university research centre; it is also home to the Centre for the Governance of Artificial Intelligence and to teams working on AI safety, biosecurity, macrostrategy, and various other technological and foundational questions.) He is the author of some 200 publications, including Anthropic Bias (2002), Global Catastrophic Risks (2008), Human Enhancement (2009), and Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller which helped spark a global conversation about artificial intelligence. Bostrom’s widely influential work, which traverses philosophy, science, ethics, and technology, has illuminated the links between our present actions and long-term global outcomes, thereby casting a new light on the human condition.

He is the recipient of a Eugene R. Gannon Award and has been listed twice on Foreign Policy’s Top 100 Global Thinkers list. He was included on Prospect’s World Thinkers list, the youngest person in the top 15. His writings have been translated into 28 languages, and there have been more than 100 translations and reprints of his works. He is a repeat TED speaker and has done more than 2,000 interviews with television, radio, and print media. As a graduate student he dabbled in stand-up comedy on the London circuit, but he has since reconnected with the doom and gloom of his Swedish roots.

For more background, see profiles in e.g. The New Yorker or Aeon.

Work

My interests cut across many disciplines and may therefore at the surface appear somewhat scattered, but they all reflect a desire to figure out how to orient ourselves with respect to important values. I refer to this as “macrostrategy”: the study of how long-term outcomes for humanity may be connected to present-day actions. My research seeks to contribute to this by answering particular sub-questions or by developing conceptual tools that help us think about such questions more clearly.

A key part of the challenge is often to notice that a problem even exists – to find it, formulate it, and then make enough initial progress in understanding it to let us break it into more tractable components and research tasks. Much of my work (and that of the Future of Humanity Institute) operates in such a pre-paradigm environment. We tend to work on problems that the rest of academia ignores either because the problems are not yet recognized as important or because it is unclear how one could conceivably go about doing research on them; and we try to advance understanding of them to the point where it becomes possible for a larger intellectual community to engage with them productively. For example, a few years ago, AI alignment fell into this category: hardly anybody thought it was important, and it seemed like the kind of thing a science fiction author might write novels about but that there was no way to study scientifically. By now, it has emerged as a bona fide research field, with people writing code and equations and making incremental progress. Significant cognitive work was required to get to this point.

I have also originated or contributed to the development of ideas such as the simulation argument, existential risk, transhumanism, information hazards, superintelligence strategy, astronomical waste, crucial considerations, observation selection effects in cosmology and other contexts of self-locating belief, anthropic shadow, the unilateralist’s curse, the parliamentary model of decision-making under normative uncertainty, the notion of a singleton, the vulnerable world hypothesis, along with a number of analyses of future technological capabilities and concomitant ethical issues, risks, and opportunities.

Technology is a theme in much of my work (and that of the FHI) because it is plausible that the long-term outcomes for our civilization depend sensitively on how we handle the introduction of certain transformative capabilities. Machine intelligence, in particular, is a big focus. We also work on biotechnology (both for its human enhancement applications and because of biosecurity concerns), nanotechnology, surveillance technology, and a bunch of other potential developments that could alter fundamental parameters of the human condition.

There is a “why” beyond mere curiosity behind my interest in these questions, namely the hope that insight here may produce good effects. In terms of directing our efforts as a civilization, it would seem useful to have some notion of which direction is “up” and which is “down”—what we should promote and what we should discourage. Yet regarding macrostrategy, the situation is far from obvious. We really have very little clue which of the actions available to present-day agents would increase or decrease the expected value of the long-term future, let alone which ones would do so the most effectively. In fact, I believe it is likely that we are overlooking one or more crucial considerations: ideas or arguments that might plausibly reveal the need for not just some minor course adjustment in our endeavours but a major change of direction or priority. If we have overlooked even just one such crucial consideration, then all our best efforts might be for naught—or they might even be making things worse. Those seeking to make the world better should therefore take it as important to get to the bottom of these matters, or else to find some way of dealing wisely with our cluelessness if it is inescapable.

The FHI works closely with the effective altruism community (e.g., we share office space with the Centre for Effective Altruism) as well as with AI leaders, philanthropic foundations, and other policymakers, scientists, and organizations to ensure that our research has impact. These communication efforts are sometimes complicated by information hazard concerns. Although many in the academic world take it as axiomatic that discovering and publishing truths is good, this assumption may be incorrect; certainly it may admit of exceptions. For instance, if the world is vulnerable in some way, it may or may not be desirable to describe the precise way it is so. I often feel like I’m frozen in an ice block of inhibition because of reflections of this sort. How much easier things would be if one could have had a guarantee that all one’s outputs would be either positive or neutral, and one could go full blast!

Contact

For administrative matters, scheduling, speaking engagements, etc., please contact my assistant:

+44 (0)1865 286800

FHI is always looking for top talent. If you're interested, please see: https://www.fhi.ox.ac.uk/vacancies/


If you need to contact me directly (I regret I am not always able to respond to emails):

Newsletter

Receive rare updates:

Virtual Estate

www.simulation-argument.com – Devoted to the question, "Are you living in a computer simulation?"

www.fhi.ox.ac.uk – Future of Humanity Institute

www.anthropic-principle.com – Papers on observational selection effects

www.existential-risk.org – Human extinction scenarios and related concerns



ON THE BANK

On the bank at the end
Of what was there before us
Gazing over to the other side
On what we can become
Veiled in the mist of naïve speculation
We are busy here preparing
Rafts to carry us across
Before the light goes out leaving us
In the eternal night of could-have-been

Some Videos & Lectures

TED2019  

Professor Nick Bostrom chats about the vulnerable world hypothesis with Chris Anderson.

Vancouver (2019), 21 mins

AI Podcast with Lex Fridman 

A discussion of the simulation argument.

Oxford (2020), 1 hour 41 mins

Some additional (old, cobwebbed) papers

Interviews

Omens 

Long article by Ross Andersen about the work of the Future of Humanity Institute

(February, 2013)

Interview for The European

Interviewed by Martin Eiermann about existential risks, genetic enhancements, and ethical discourses about technological progress.

(13 June 2011)

Policy

The Future of Identity

On the future of "human identity" in relation to information and communication technologies, automation and robotics, and biotechnology and medicine.

[w/ Anders Sandberg] [Report, Commissioned by the UK's Government Office for Science, 2011] [pdf]

Smart Policy: Cognitive Enhancement and the Public Interest

Summarizing some of the key issues and offering policy recommendations for a "smart policy" on biomedical methods of enhancing cognitive performance.

[w/ Rebecca Roache] [Enhancing Human Capacities, eds. Julian Savulescu et al. (Wiley-Blackwell, 2011): 138–149] [pdf]

Why we need Friendly AI

Humans will not always be the most intelligent agents on Earth, the ones steering the future. What will happen to us when we no longer play that role, and how can we prepare for this transition?

[w/ Luke Muehlhauser] [Think, Vol. 13, No. 36 (2014): 41–47] [pdf] [translation: Russian]

Three Ways to Advance Science

Those who seek the advancement of science should focus more on scientific research that facilitates further research across a wide range of domains – particularly cognitive enhancement.

[Nature Podcast, 31 January 2008] [pdf]

Miscellaneous

Golden

A fictional interview, conducted by Larry King, of an uploaded dog.

Synkrotron

A poetry cycle… in Swedish, unfortunately. I stopped writing poetry after this, although I've had a few relapses in the English language.

The World in 2050

Imaginary dialogue, set in the year 2050, in which three pundits debate the big issues of their time.

Moralist, meet Scientist

Review of Kwame Anthony Appiah's book "Experiments in Ethics".

[Nature, Vol. 453, No. 7195 (2008): 593–594] [pdf]

How Long Before Superintelligence?

This paper, now a few years old, examines how likely it might be that we will develop superhuman artificial intelligence within the first third of this century.

[Updated version (2008) of the original in International Journal of Futures Studies, Vol. 2 (1998)] [html] [translation: Russian]

When Machines Outsmart Humans

This slightly more recent (but still obsolete) article briefly reviews the argument set out in the previous one, and notes four immediate consequences of human-level machine intelligence.

[Futures, Vol. 35, No. 7 (2003): 759–764, where it appears as the target paper of a symposium, together with five commentaries by other people, to which I had the opportunity to reply in the subsequent issue.] [html]

Everything

Response to 2008 Edge Question: "What have you changed your mind about?"

Most Still to Come

Response to 2010 Edge Question: "How is the Internet changing the way you think?"

Dinosaurs, Dodos, Humans?

Short article on existential risks.

[Global Agenda, Feb (2006): 230–231; the annual publication of the World Economic Forum] [pdf]