Crucial Considerations
The rogue's yarn that runs through my works is a concern with "crucial considerations". A crucial consideration is an idea or argument that might plausibly reveal the need not just for some minor course adjustment in our practical endeavours but for a major change of direction or priority. If we have overlooked even one such consideration, then all our best efforts might be for naught---or less. When headed the wrong way, the last thing needed is progress. It is therefore important to pursue such lines of inquiry as have some chance of disclosing a crucial consideration to which we might hitherto have been oblivious.

Some of the relevant questions are about moral philosophy and values. Others have to do with rationality and reasoning under uncertainty. Still others pertain to specific issues and possibilities, such as existential risks, the simulation hypothesis, human enhancement, infinite utilities, anthropic reasoning, information hazards, the future of machine intelligence, and the singularity hypothesis.

My working assumption: these high-leverage questions deserve to be studied with at least the same level of scholarship that academics routinely apply to all manner of arcane trivia. This assumption might be wrong. Perhaps we are so irredeemably inept at thinking about the big picture that it is good that we usually don't. Perhaps attempting to wake up will only result in bad dreams. But how will we know unless we try?
A Patch for the Simulation Argument. w/ Marcin Kulczycki. Analysis, forthcoming
Anthropic Shadow: Observation Selection Effects and Existential Risks. w/ Milan Cirkovic & Anders Sandberg. Risk Analysis, forthcoming
Pascal's Mugging and The Simulation Argument: Some Explanations. Two short papers for Analysis.
The Future of Humanity. Book chapter on macro-prospects for humanity
Where Are They? Why I hope that the search for extraterrestrial life finds nothing. MIT Technology Review
ETHICS & POLICY
Astronomical Waste: The Opportunity Cost of Delayed Technological Development Suns are illuminating and heating empty rooms, unused energy is being flushed down black holes, and our great common endowment of negentropy is being irreversibly degraded into entropy on a cosmic scale. These are resources that an advanced civilization could have used to create value-structures, such as sentient beings living worthwhile lives... [Utilitas, Vol. 15, No. 3 (2003): pp. 308-314] [html | pdf]
Human Enhancement Original essays by various prominent moral philosophers on the ethics of human enhancement. [Eds. Nick Bostrom and Julian Savulescu (Oxford University Press, Oxford, 2009)]. Enhancement Ethics: The State of the Debate The introductory chapter from the book (w/ Julian Savulescu): pp. 1-22 [pdf]
TRANSHUMANISM
Transhumanist Values Wonderful ways of being may be located in the "posthuman realm", but we can't reach them. If we enhance ourselves using technology, however, we can go out there and realize these values. This paper sketches a transhumanist axiology. [Ethical Issues for the 21st Century, ed. Frederick Adams, Philosophical Documentation Center Press, 2003; reprinted in Review of Contemporary Philosophy, 2005, Vol. 4, May] [html | pdf] [translations: Polish]
GLOBAL RISK & THE FUTURE
Global Catastrophic Risks Twenty-six leading experts look at the gravest risks facing humanity in the 21st century, including natural catastrophes, nuclear war, terrorism, global warming, biological weapons, totalitarianism, advanced nanotechnology, general artificial intelligence, and social collapse. The book also addresses overarching issues: policy responses and methods for predicting and managing catastrophes. Foreword by Lord Martin Rees. [Eds. Nick Bostrom and Milan Cirkovic (Oxford University Press, Oxford, 2008)]. Introduction chapter free here [pdf]
TECHNOLOGY ISSUES
ANTHROPIC REASONING & PROBABILITY
Beyond the Doomsday Argument: Reply to Sowers and Further Remarks Argues against George Sowers's refutation of the doomsday argument, and outlines what I think is the real flaw. [pdf]
Cars In the Other Lane Really Do Go Faster When driving on the motorway, have you ever wondered about (and cursed!) the fact that cars in the other lane seem to be getting ahead faster than you? One might be tempted to account for this by invoking Murphy's Law ("If anything can go wrong, it will", discovered by Edward A. Murphy, Jr, in 1949). But there is an alternative explanation, based on observational selection effects... [PLUS, 2001, No. 17]
Observer-relative chances in anthropic reasoning? A paradoxical thought experiment. [Erkenntnis, 2000, Vol. 52, pp. 93-108]
Examines the implications of recent evidence for a cosmological constant for the prospects of indefinite information processing in the multiverse. Co-authored with Milan M. Cirkovic. [Astrophysics and Space Science, 2000, Vol. 279, No. 4, pp. 675-687] [pdf]
PHILOSOPHY OF MIND
DECISION THEORY
FAILED STAND-UP COMEDIAN
Prior to taking up my current post as the founding director of the Future of Humanity Institute at Oxford University, I was a British Academy Postdoctoral Research Fellow in the Faculty of Philosophy. Before that, I was a lecturer at Yale University, in the Department of Philosophy and the Institute for Social and Policy Studies. Besides philosophy, I also have a background in physics, computational neuroscience, mathematical logic, and artificial intelligence. My performance as an undergraduate set a national record in my native Sweden. I was a busy young man. Before degenerating into a tweedy academic, I also dabbled in painting, poetry, and drama, and for a while I did stand-up comedy in London.

I co-founded the World Transhumanist Association in 1998 to encourage public engagement with the prospect of future technologies being used to enhance human capacities---for example, prolonging healthy lifespan; augmenting elements of cognition such as memory, concentration, and mental energy; and improving emotional well-being. The WTA (later renamed "Humanity Plus") is a non-profit grassroots organization with some 5,000 members from all over the world and local chapters in many countries. In 2004, I co-founded the Institute for Ethics and Emerging Technologies, a virtual think tank on technology policy.

In the early days, a common reaction was "this is just science fiction". But in the last several years, academia (and to some extent the wider public) has begun paying attention to the possibility of using our growing technological powers to change not only the world around us but also ourselves. Discussions are no longer stuck on whether human enhancement will ever be possible; instead, the focus has shifted increasingly to ethics: whether it ought to be done. This is a bit of progress.
Hopefully, we are now finally entering the constructive phase, where we ask not whether biomedical enhancement is in general good, yes or no, but rather questions like: Which particular enhancements are worth pursuing? How do we overcome the vast technical difficulties? What kinds of social and regulatory changes might be needed? How does biomedical enhancement interact with our broader set of priorities? Pleased to have possibly made some contribution to those developments, I have since stepped down from active duty.

THE BIG PICTURE
My real focus, however, is research. Since 2006, I have been directing a unique multidisciplinary research institute at Oxford University, the preposterously but descriptively named Future of Humanity Institute; and I was made full professor in the Faculty of Philosophy in 2008. The FHI is part of the Faculty of Philosophy and the Oxford Martin School. As this page reveals, my research interests are multifarious. The common denominator is that they are all part of a quest to think more rationally about big-picture questions for humanity, in the hope that this will help improve the world.

Let's try a little credo: I see philosophy and science as overlapping parts of a continuum, and many of my interests lie in the intersection. I tend to think in terms of probability distributions rather than dichotomous epistemic categories. I guess that in the far future the human condition will have changed profoundly, for better or worse. I think there is a non-trivial chance that this "far" future will be reached within this century. Regarding many big-picture questions, I think there is a real possibility that our views are very wrong. Improving the ways in which we reason, act, and prioritize under this uncertainty would have wide relevance to many of our biggest challenges.
I'm probably best known for my work in four areas: (i) as a leading light of the transhumanist movement, with many related writings in bioethics and on the consequences of future technologies; (ii) for the concept of existential risk; (iii) for the simulation argument; and (iv) for the first mathematically explicit theory of observation selection effects. A fifth area of my work, which has attracted less attention but which I also think of as significant, concerns the question of what a consequentialist should do (see e.g. Astronomical Waste, Infinite Ethics, Technological Revolutions). I would like to think that my most important contributions are still to come.
VIRTUAL ESTATE
Papers on observational selection effects
Devoted to the question, "Are you living in a computer simulation?"
On the bank at the end (2002)
The astral glockenspiel quivers (2007)
See the plucked chicken See the man running See the fountain gushing (2008)
DRAFTS
POLICY
POWERPOINTS, VIDEO, INTERVIEWS, ...
MISCELLANEOUS
Fictional interview of an uploaded dog by Larry King. [html]