Crucial Considerations

The rogue’s yarn that runs through my works is a concern with "crucial considerations". A crucial consideration is an idea or argument that might plausibly reveal the need for not just some minor course adjustment in our practical endeavours, but a major change of direction or priority.

If we have overlooked even just one such consideration, then all our best efforts might be for naught---or less. When headed the wrong way, the last thing needed is progress. It is therefore important to pursue such lines of inquiry as have some chance of disclosing any crucial consideration to which we might hitherto have been oblivious.

Some of the relevant questions are about moral philosophy and values. Others have to do with rationality and reasoning under uncertainty. Still others pertain to specific issues and possibilities, such as existential risks, the simulation hypothesis, human enhancement, infinite utilities, anthropic reasoning, information hazards, the future of machine intelligence, or the singularity hypothesis.

My working assumption: These high-leverage questions deserve to be studied with at least the same level of scholarship that academics routinely apply to all manner of arcane trivia. This assumption might be wrong. Perhaps we are so irredeemably inept at thinking about the big picture that it is good that we usually don’t. Perhaps attempting to wake up will only result in bad dreams. But how will we know unless we try?

OCTOBER 2010
Currently mainly working on existential risks, in particular trying to understand the prospects and implications of future developments in machine intelligence. This work might develop into one or two books. Added two papers: one with a patch for the simulation argument, the other on anthropic shadow. Also uploaded an improved version of the Letter from Utopia. Other fairly recent items: named one of the world's top 100 thinkers by Foreign Policy Magazine, and awarded the Eugene R. Gannon Jr. Award.
RECENT ADDITIONS
A Patch for the Simulation Argument. w/ Marcin Kulczycki. Analysis, forthcoming
Anthropic Shadow: Observation Selection Effects and Existential Risks. w/ Milan Cirkovic & Anders Sandberg. Risk Analysis, forthcoming
The Future of Humanity. Book chapter on macro-prospects for humanity
Whole Brain Emulation: A Roadmap. w/ Anders Sandberg

Selected papers

ETHICS & POLICY

The Fable of the Dragon-Tyrant
Recounts the Tale of a most vicious Dragon that ate thousands of people every day, and of the actions that the King, the People, and an assembly of Dragonologists took with respect thereto. [J Med Ethics, Vol. 31, No. 5 (2005): pp. 273-277] [translations: Hebrew, Finnish, Spanish, French, Slovenian, Dutch, Russian] [html] [pdf] [mp3]
The Reversal Test: Eliminating Status Quo Bias in Applied Ethics
We present a heuristic for correcting for one kind of bias (status quo bias), which we suggest affects many of our judgments about the consequences of modifying human nature. We apply this heuristic to the case of cognitive enhancement, and argue that the consequentialist case for it is much stronger than commonly recognized. (w/ Toby Ord) [Ethics, Vol. 116, No. 4 (2006): pp. 656-680] [pdf]

Astronomical Waste: The Opportunity Cost of Delayed Technological Development
Suns are illuminating and heating empty rooms, unused energy is being flushed down black holes, and our great common endowment of negentropy is being irreversibly degraded into entropy on a cosmic scale. These are resources that an advanced civilization could have used to create value-structures, such as sentient beings living worthwhile lives... [Utilitas, Vol. 15, No. 3 (2003): pp. 308-314] [html | pdf]

Infinite Ethics new
Cosmology shows that we might well be living in an infinite universe that contains infinitely many happy and sad people. Given some assumptions, aggregative ethics implies that such a world contains an infinite amount of positive value and an infinite amount of negative value. But you can presumably do only a finite amount of good or bad. Since an infinite cardinal quantity is unchanged by the addition or subtraction of a finite quantity, it looks as though you can't change the value of the world. Aggregative consequentialism (like many other important ethical theories) is threatened by total paralysis. We explore a variety of potential cures, and discover that none works perfectly and all have serious side-effects. Is aggregative ethics doomed? (original 2003, revised version 2009) [pdf]
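The cardinal arithmetic driving the paralysis worry, stated here for illustration (a standard fact, not a quotation from the paper): for any finite quantity $n$,

\[
\aleph_0 + n \;=\; \aleph_0 \qquad\text{and}\qquad \aleph_0 - n \;=\; \aleph_0 ,
\]

so if the world's total of positive (or negative) value is an infinite cardinal, contributing or destroying any finite amount of good leaves that total exactly where it was.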
Dignity and Enhancement new
Does human enhancement threaten our dignity as some have asserted? Or could our dignity perhaps be technologically enhanced? After disentangling several different concepts of dignity, this essay focuses on the idea of dignity as a quality (a kind of excellence admitting of degrees). The interactions between enhancement and dignity as a quality are complex and link into fundamental issues in ethics and value theory. [In Human Dignity and Bioethics:  Essays Commissioned by the President’s Council on Bioethics (Washington, D.C.:  2008): pp. 173-207] [pdf]
In Defense of Posthuman Dignity
Brief paper, critiques a host of bioconservative pundits who believe that enhancing human capacities and extending human healthspan would undermine our dignity. [Bioethics, Vol. 19, No. 3 (2005): pp. 202-214] [translations: Italian, Slovenian] [Was chosen for inclusion in a special anthology of the best papers published in this journal in the past two decades] [html | pdf]

Human Enhancement new
Original essays by various prominent moral philosophers on the ethics of human enhancement. [Eds. Nick Bostrom and Julian Savulescu (Oxford University Press, Oxford, 2009)].

Enhancement Ethics: The State of the Debate
The introductory chapter from the book (w/ Julian Savulescu): pp. 1-22 [pdf]

Human Genetic Enhancements: A Transhumanist Perspective
A transhumanist ethical framework for public policy regarding genetic enhancements, particularly human germ-line genetic engineering [Journal of Value Inquiry, Vol. 37, No. 4 (2003): pp. 493-506] [html | pdf]
Ethical Issues in Human Enhancement
Anthology chapter on the ethics of human enhancement [In New Waves in Applied Ethics, ed. Jesper Ryberg et al. (Palgrave Macmillan, 2008)] [w/ Rebecca Roache] [pdf]
Ethical Issues In Advanced Artificial Intelligence
Some cursory notes; not very in-depth. [Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, Vol. 2, ed. I. Smit et al., Int. Institute of Advanced Studies in Systems Research and Cybernetics, 2003, pp. 12-17] [html | pdf] [translations: Italian]
Smart Policy: Cognitive Enhancement and the Public Interest new
Short article summarizing some of the key issues and offering specific recommendations, illustrating the opportunity and need for "smart policy": the integration into public policy of a broad spectrum of approaches aimed at protecting and enhancing the cognitive capacities and epistemic performance of individuals and institutions. [Forthcoming in Enhancing Human Capacities, eds. J. Savulescu, R. ter Meulen, and G. Kahane (Oxford: Wiley-Blackwell, 2009)] [w/ Rebecca Roache] [pdf]
Recent Developments in the Ethics, Science, and Politics of Life-Extension
A review/commentary on The Fountain of Youth (OUP, 2004). [Aging Horizons, No. 3, Autumn/Winter issue (2005): pp. 28-34] [html | pdf]

TRANSHUMANISM

Letter from Utopia
The good life: just how good could it be? A vision of the future from the future. [Studies in Ethics, Law, and Technology, Vol. 2, No. 1 (2008): pp. 1-7] [pdf is an improved version (2010), forthcoming in Nexus Journal] [translations: Italian, Spanish] [html] [pdf] [mp3]
Why I Want to be a Posthuman When I Grow Up new
After some definitions and conceptual clarification, I argue for two theses. First, some posthuman modes of being would be extremely worthwhile. Second, it could be good for human beings to become posthuman. [Medical Enhancement and Posthumanity, eds. Bert Gordijn and Ruth Chadwick (Springer, 2008): pp. 107-137] [pdf]
The Transhumanist FAQ
The revised version 2.1. The document represents an effort to develop a broadly based consensus articulation of the basics of responsible transhumanism. Some one hundred people collaborated with me in creating this text. [Published by the WTA; also in German, Hungarian, Dutch, Russian, Polish, Finnish, Greek, Italian] [pdf]

Transhumanist Values
Wonderful ways of being may be located in the "posthuman realm", but we can't reach them. If we enhance ourselves using technology, however, we can go out there and realize these values. This paper sketches a transhumanist axiology. [Ethical Issues for the 21st Century, ed. Frederick Adams, Philosophical Documentation Center Press, 2003; reprinted in Review of Contemporary Philosophy, 2005, Vol. 4, May] [html | pdf] [translations: Polish]

A History of Transhumanist Thought
The human desire to acquire new capacities, to extend life and overcome obstacles to happiness is as ancient as the species itself. But transhumanism has emerged gradually as a distinctive outlook, with no one person being responsible for its present shape. Here's one account of how it happened. [Journal of Evolution and Technology, 2005, Vol.14, No. 1] [pdf]

GLOBAL RISK & THE FUTURE

Information Hazards: A Typology of Potential Harms from Knowledge
Information hazards are risks that arise from the dissemination or the potential dissemination of true information that may cause harm or enable some agent to cause harm. Such hazards are often subtler than direct physical threats, and, as a consequence, are easily overlooked. They can, however, be important. (2009, draft) [pdf]
How Unlikely is a Doomsday Catastrophe?
Examines the risk from physics experiments and natural events to the local fabric of spacetime. Argues that the Brookhaven report overlooks an observation selection effect. Shows how this limitation can be overcome by using data on planet formation rates. [w/ Max Tegmark] [expanded; original in Nature, Vol. 438 (2005): p. 754] [translations: Russian] [pdf]

Global Catastrophic Risks
Twenty-six leading experts look at the gravest risks facing humanity in the 21st century, including natural catastrophes, nuclear war, terrorism, global warming, biological weapons, totalitarianism, advanced nanotechnology, general artificial intelligence, and social collapse. The book also addresses overarching issues: policy responses and methods for predicting and managing catastrophes. Foreword by Lord Martin Rees. [Eds. Nick Bostrom and Milan Cirkovic (Oxford University Press, Oxford, 2008)]. The introductory chapter is free here [pdf]

The Future of Humanity
This paper discusses four families of scenarios for humanity’s future: extinction, recurrent collapse, plateau, and posthumanity. [In New Waves in Philosophy of Technology, eds. Jan-Kyrre Berg Olsen, Evan Selinger, & Soren Riis (New York: Palgrave Macmillan, 2009)] [pdf]
The Future of Human Evolution
This paper explores some dystopian scenarios where freewheeling evolutionary developments, while continuing to produce complex and intelligent forms of organization, lead to the gradual elimination of all forms of being worth caring about. We then discuss how such outcomes could be avoided and argue that under certain conditions the only possible remedy would be a globally coordinated effort to control human evolution by adopting social policies that modify the default fitness function of future life forms. [In Death and Anti-Death, ed. Charles Tandy (Ria University Press, 2005)] [pdf | html]
Technological Revolutions: Ethics and Policy in the Dark
Technological revolutions are among the most important things that happen to humanity. This paper discusses some of the ethical and policy issues raised by anticipated technological revolutions, such as nanotechnology. [In Nanoscale: Issues and Perspectives for the Nano Century, eds. Nigel M. de S. Cameron and M. Ellen Mitchell (John Wiley, 2007): pp. 129-152.] [pdf]
Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards
Existential risks are ways in which we could screw up badly and permanently. Remarkably, relatively little serious work has been done in this important area. The point, of course, is not to welter in doom and gloom but to better understand where the biggest dangers are so that we can develop strategies for reducing them. [Journal of Evolution and Technology, 2002, vol. 9] [html | pdf] [translations: Russian]
Dinosaurs, Dodos, Humans?
Short article on existential risks. [Global Agenda, Feb (2006): pp. 230-231; the annual publication of the World Economic Forum] [pdf] [translations: Italian]
What is a Singleton?
Concept describing a kind of social structure. [Linguistic and Philosophical Investigations, Vol. 5, No. 2 (2006): pp. 48-54.]
Where Are They? Why I hope the search for extraterrestrial life finds nothing
Discusses the Fermi paradox, and explains why I hope we find no signs of life, whether extinct or still active, on Mars or anywhere else we may look. [Technology Review, 2008, May/June issue, pp. 72-77.] [pdf] [translations: Italian]

TECHNOLOGY ISSUES

The Wisdom of Nature: An Evolutionary Heuristic for Human Enhancement
Human beings are a marvel of evolved complexity. Such systems can be difficult to enhance. Here we describe a heuristic for identifying and evaluating the practicality, safety, and efficacy of potential human enhancements, based on evolutionary considerations. [w/ Anders Sandberg] [In Human Enhancement, eds. Julian Savulescu and Nick Bostrom (Oxford University Press, 2009): pp. 365-416] [pdf]
Whole Brain Emulation: A Roadmap
A 130-page report on the technological prerequisites for whole brain emulation (aka "mind uploading"). (w/ Anders Sandberg) [Technical Report #2008-3, Future of Humanity Institute, Oxford University (2008)] [pdf]
Converging Cognitive Enhancements
Cognitive enhancements in the context of converging technologies. [Annals of the New York Academy of Sciences, 2006, Vol. 1093, pp. 201-207] [w/ Anders Sandberg] [pdf]
When Machines Outsmart Humans
This slightly more recent article briefly reviews the argument set out in "How Long Before Superintelligence?" (below), and notes four immediate consequences of human-level machine intelligence. [Futures, 2003, Vol. 35, No. 7, pp. 759-764, where it appears as the target paper of a symposium, together with five commentaries by other people, to which I had the opportunity to reply in the next issue.]
How Long Before Superintelligence?
This paper, now a few years old, examines how likely it might be that we will develop superhuman artificial intelligence within the first third of this century. [Updated version of the original in Int. Jour. of Future Studies, 1998, vol. 2] [translations: Russian]
Cognitive Enhancement: Methods, Ethics, Regulatory Challenges
Cognitive enhancement comes in many diverse forms. In this paper, we survey the current state of the art in cognitive enhancement methods and consider their prospects for the near-term future. We then review some of the ethical issues arising from these technologies. We conclude with a discussion of the challenges for public policy and regulation created by present and anticipated methods for cognitive enhancement. [w/ Anders Sandberg] [Science and Engineering Ethics, 2009, forthcoming] [pdf]
Are You Living in a Computer Simulation?
This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching the posthuman stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of its evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the naïve transhumanist dogma that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation. A number of other consequences of this result are also discussed. [Philosophical Quarterly, Vol. 53, No. 211 (2003): pp. 243-255] [pdf | html] [Also a Reply to Brian Weatherson's comments, Philosophical Quarterly, Vol. 55, No. 218, pp. 90-97; and a Reply to Anthony Brueckner, Analysis, Vol. 69, No. 3 (2009): pp. 458-461; and a new paper co-authored w/ Marcin Kulczycki, Analysis, forthcoming (2010)] new
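For readers who want the bookkeeping behind the trilemma, the core quantity (given here in lightly simplified notation; see the paper for the exact derivation) is the fraction of all observers with human-type experiences who live in simulations:

\[
f_{\text{sim}} \;=\; \frac{f_p \, f_I \, \bar{N}}{f_p \, f_I \, \bar{N} \;+\; 1},
\]

where $f_p$ is the fraction of human-level civilizations that survive to a posthuman stage, $f_I$ the fraction of posthuman civilizations interested in running ancestor-simulations, and $\bar{N}$ the average number of such simulations run by the interested ones. Since a posthuman civilization could make $\bar{N}$ astronomically large, $f_{\text{sim}}$ ends up close to 1 (proposition 3) unless $f_p$ is close to zero (proposition 1) or $f_I \bar{N}$ is close to zero (proposition 2).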

ANTHROPIC REASONING & PROBABILITY

Anthropic Bias: Observation Selection Effects in Science and Philosophy
Failure to consider observation selection effects results in a kind of bias that infests many branches of science and philosophy. This book presents the first mathematical theory for how to correct for these biases. It also discusses implications for cosmology, evolutionary biology, game theory, the foundations of quantum mechanics, the Doomsday argument, the Sleeping Beauty problem, the search for extraterrestrial life, the question of whether God exists, and traffic planning. Five sample chapters are online, along with a brief primer. [primer translations: Russian] [Routledge, New York, 2002]
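The book's core principle, the self-sampling assumption, can be given a simple probabilistic form (a paraphrase for illustration; see the book's primer for the exact formulation): conditional on a world $w$ containing $N_w$ observers in one's reference class, one assigns equal credence to being each of them,

\[
P(\text{I am observer } o \mid w) \;=\; \frac{1}{N_w}, \qquad o = 1, \dots, N_w .
\]

Most of the applications, from the Doomsday argument to cosmology, turn on what follows when this uniform self-location assignment is combined with ordinary Bayesian conditionalization, and on how the reference class is chosen.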
Self-Locating Belief in Big Worlds: Cosmology's Missing Link to Observation
Current cosmological theories say that the world is so big that all possible observations are in fact made. But then, how can such theories be tested? What could count as negative evidence? To answer that, we need to consider observation selection effects. [Journal of Philosophy, Vol. 99, No. 12 (2002): pp. 607-623] [html | pdf]
The Mysteries of Self-Locating Belief and Anthropic Reasoning
Summary of some of the difficulties that a theory of observation selection effects faces and sketch of a solution. [Harvard Review of Philosophy, Vol. 11, Spring (2003): pp. 59-74] [pdf]
Anthropic Shadow: Observation Selection Effects and Human Extinction Risks
"Anthropic shadow" is an observation selection effect that prevents observers from observing certain kinds of catastrophes in their recent geological and evolutionary past. We risk underestimating the risk of catastrophe types that lie in this shadow. (w/ Milan Cirkovic & Anders Sandberg) [Risk Analysis, forthcoming (2010)] [pdf]
Observation Selection Effects, Measures, and Infinite Spacetimes
An advanced introduction to observation selection theory and its application to the cosmological fine-tuning problem. [Universe or Multiverse?, ed. Bernard Carr (Cambridge University Press, 2007)] [pdf]
The Doomsday argument and the Self-Indication Assumption: Reply to Olum
Argues against Olum and the Self-Indication Assumption. [Philosophical Quarterly, Vol. 53, No. 210 (2003): pp. 83-91] [w/ Milan Cirkovic] [pdf]
The Doomsday Argument is Alive and Kicking
Have Korb and Oliver refuted the doomsday argument? No. [Mind, Vol.108, No.431 (1999): pp. 539-550] [translations: Russian]
The Doomsday Argument, Adam & Eve, UN++, and Quantum Joe
On the Doomsday argument and related paradoxes. [Synthese, Vol. 127, No. 3 (2001): pp. 359-387] [html | pdf]
A Primer on the Doomsday argument
The Doomsday argument purports to prove, from basic probability theory and a few seemingly innocuous empirical premises, that the risk that our species will go extinct soon is much greater than previously thought. My view is that the Doomsday argument is inconclusive - although not for any trivial reason. In my book, I argued that a theory of observation selection effects is needed to explain where it goes wrong. [Colloquia Manilana (PDCIS), 1999, Vol. 7; reprinted in The Actuary, March 2001, and in ephilosopher.com, 2001] [translations: Russian]
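A toy version of the calculation the argument turns on (my illustrative numbers, not the paper's): compare hypothesis $S$, that 200 billion humans will ever have lived, with hypothesis $L$, that 200 trillion will. Your birth rank is roughly $r = 100$ billion. Treating yourself as a random sample from all humans who will ever live,

\[
\frac{P(S \mid r)}{P(L \mid r)}
\;=\; \frac{P(r \mid S)}{P(r \mid L)} \cdot \frac{P(S)}{P(L)}
\;=\; \frac{1/(2\times 10^{11})}{1/(2\times 10^{14})} \cdot \frac{P(S)}{P(L)}
\;=\; 1000 \cdot \frac{P(S)}{P(L)},
\]

a thousandfold update toward the smaller total, i.e. toward doom coming sooner. The controversy is over whether the random-sampling step is legitimate, and with respect to which reference class.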
Sleeping Beauty and Self-Location: A Hybrid Model
The Sleeping Beauty problem is an important touchstone for theories about self-locating belief. I argue against both of the traditional views on this problem and propose a new synthetic approach. [Synthese, Vol. 157, No. 1 (2007): pp. 59-78] [pdf]
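For orientation, the standard setup (summarized here for readers unfamiliar with it): Beauty is put to sleep; a fair coin is tossed; she is woken once if Heads and twice (with memory erasure in between) if Tails. On each waking she is asked her credence that the coin landed Heads. The two traditional answers are

\[
P_{\text{halfer}}(\text{Heads}) = \tfrac{1}{2}
\qquad\text{vs.}\qquad
P_{\text{thirder}}(\text{Heads}) = \tfrac{1}{3} .
\]

Halfers reason that waking was certain either way, so there is nothing to update on; thirders count the three subjectively indistinguishable awakenings, only one of which follows Heads.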

Beyond the Doomsday Argument: Reply to Sowers and Further Remarks
Argues against George Sowers's refutation of the doomsday argument, and outlines what I think is the real flaw. [pdf]

Cars In the Other Lane Really Do Go Faster
When driving on the motorway, have you ever wondered about (and cursed!) the fact that cars in the other lane seem to be getting ahead faster than you? One might be tempted to account for this by invoking Murphy's Law ("If anything can go wrong, it will", discovered by Edward A. Murphy, Jr, in 1949). But there is an alternative explanation, based on observational selection effects... [PLUS, 2001, No. 17]
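A minimal sketch of the selection effect (my own toy model, not the article's): suppose the road alternates between congested and free-flowing segments of equal length, with the two lanes out of phase, so whenever you are crawling the other lane is moving. Averaged over distance the lanes are equally fast, but you spend far more time in the slow stretches, so a randomly sampled moment of your journey is most likely one at which you are being overtaken.

```python
# Toy model: alternating congested/free segments of equal length;
# the other lane's phase is reversed. All numbers are illustrative.
SEGMENT_KM = 1.0
V_SLOW, V_FAST = 20.0, 100.0          # km/h

t_slow = SEGMENT_KM / V_SLOW          # hours spent crossing a congested segment
t_fast = SEGMENT_KM / V_FAST          # hours spent crossing a free segment

# Both lanes have the same average speed over a full cycle, yet:
p_other_faster = t_slow / (t_slow + t_fast)
print(f"P(other lane looks faster at a random moment) = {p_other_faster:.2f}")
# ~0.83 -- observations sampled by time, not by distance, are dominated
# by the slow stretches: an observation selection effect, not Murphy's Law.
```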

Observer-relative chances in anthropic reasoning?
A paradoxical thought experiment. [Erkenntnis, 2000, Vol. 52, pp. 93-108]

Cosmological Constant and the Final Anthropic Hypothesis
Examines the implications of recent evidence for a cosmological constant for the prospects of indefinite information processing in the multiverse. Co-authored with Milan M. Cirkovic. [Astrophysics and Space Science, 2000, Vol. 279, No. 4, pp. 675-687] [pdf]

PHILOSOPHY OF MIND

Quantity of Experience: Brain-Duplication and Degrees of Consciousness
If two brains are in identical states, are there two numerically distinct phenomenal experiences or only one? Two, I argue. But what happens in intermediary cases? This paper looks in detail at this question and suggests that there can be a fractional (non-integer) number of qualitatively identical experiences. This has implications for what it is to implement a computation and for Chalmers's Fading Qualia thought experiment. [Minds and Machines, 2006, Vol. 16, No. 2, pp. 185-200] [pdf]

DECISION THEORY

The Meta-Newcomb Problem
A self-undermining variant of the Newcomb problem. [Analysis, Vol. 61, No. 4 (2001): pp. 309-310] [html | pdf]
Pascal's Mugging new
Finite version of Pascal's Wager. [Analysis, Vol. 69, No. 3 (2009): pp. 443-445] [pdf]

FAILED STAND-UP COMEDIAN

Prior to taking up my current post as the founding director of the Future of Humanity Institute at Oxford University, I was a British Academy Postdoctoral Research Fellow in the Faculty of Philosophy. Before that, I was a lecturer at Yale University, in the Department of Philosophy and the Institute for Social and Policy Studies.

Besides philosophy, I also have a background in physics, computational neuroscience, mathematical logic, and artificial intelligence. My performance as an undergraduate set a national record in my native Sweden. I was a busy young man. Before degenerating into a tweedy academic, I dabbled in painting, poetry, and drama, and for a while I did stand-up comedy in London.

I co-founded the World Transhumanist Association in 1998 to encourage public engagement with the prospect of future technologies being used to enhance human capacities---for example, prolonging healthy lifespan, augmenting elements of cognition such as memory, concentration, and mental energy, and improving emotional well-being. The WTA (later renamed "Humanity Plus") is a non-profit grassroots organization with some 5,000 members from all over the world, and local chapters in many countries. In 2004, I co-founded the Institute for Ethics and Emerging Technologies, a virtual think tank on technology policy.

In the early days, a common reaction was "this is just science fiction". But in the last several years, academia (and to some extent the wider public) has begun paying attention to the possibility of using our growing technological powers to change not only the world around us but also ourselves. Discussions are no longer stuck on whether human enhancement will ever be possible; instead, the focus is increasingly on the ethics of whether it ought to be done. This is a bit of progress.

Hopefully, we are now finally entering the constructive phase where we ask, not whether biomedical enhancement is in general good, yes or no, but rather questions like: Which particular enhancements are worth pursuing? How do we overcome the vast technical difficulties? What kind of social and regulatory changes might be needed?  How does biomedical enhancement interact with our broader set of priorities?

Pleased to have possibly made some contribution to those developments, I have since stepped down from active duty.

THE BIG PICTURE

My real focus, however, is research. Since 2006, I’ve been directing a unique multidisciplinary research institute at Oxford University, the preposterously but descriptively named Future of Humanity Institute; and I was made full professor in the Faculty of Philosophy in 2008. The FHI is part of the Faculty of Philosophy and the Oxford Martin School.

As this page reveals, my research interests are multifarious. The common denominator is that they are all parts of a quest to think more rationally about big picture questions for humanity, in the hope that this will help improve the world.

Let’s try a little credo: I see philosophy and science as overlapping parts of a continuum. Many of my interests lie in the intersection. I tend to think in terms of probability distributions rather than dichotomous epistemic categories. I guess that in the far future the human condition will have changed profoundly, for better or worse. I think there is a non-trivial chance that this "far" future will be reached in this century. Regarding many big picture questions, I think there is a real possibility that our views are very wrong. Improving the ways in which we reason, act, and prioritize under this uncertainty would have wide relevance to many of our biggest challenges.

I’m probably best known for my work in four areas: (i) as a leading light of the transhumanist movement, with many related writings in bioethics and on the consequences of future technologies; (ii) for the concept of existential risk; (iii) for the simulation argument; and (iv) for the first mathematically explicit theory of observation selection effects. A fifth area of my work, which has attracted less attention but which I think of as also significant, is on the question of what a consequentialist should do (see e.g. Astronomical Waste, Infinite Ethics, Technological Revolutions).

I would like to think that my most important contributions are still to come.

CONTACT

For administrative matters, scheduling, invitations, etc. please contact Lisa Makros:

Email: fhi[at]philosophy[dot]ox[dot]ac[dot]uk
Phone: +44 (0)1865 286279

If you need to contact me directly (I regret it is not possible for me to respond to all emails):
Email: nick[at]nickbostrom[dot]com
Phone (cell): +44 (0)7789 74 42 42
Phone (office): +44 (0)1865 28 68 89
Snailmail: St. Cross College | St. Giles | Oxford | OX1 3LZ | United Kingdom

VIRTUAL ESTATE

anthropic-principle.com
Papers on observational selection effects
simulation-argument.com
Devoted to the question, "Are you living in a computer simulation?"

ON THE BANK

On the bank at the end
Of what was there before us
Gazing over to the other side
On what we can become
Veiled in the mist of naïve speculation
We are busy here preparing
Rafts to carry us across
Before the light goes out leaving us
In the eternal night of could-have-been

(2002)

PYTHAGOREAN JAMBOREE

The astral glockenspiel quivers
As our bodies align in the orbit of Venus;
Galloping stallions and mares
Print with their hooves, pixel by pixel,
The lights and shadows of mortal life,
Pink flesh for the gods’ inspection ‒
Who clap their hands together at the sight;
For the heavens love the authentic peep.
Whence the orbs appear to us sublunars
Empty, mute, and dimly lit;
While on the other side the jamboree,
Abuzz with primal harmony,
Fluoresces with the ecstasy of being.

(2007)

UNPRETTY POEM

See the plucked chicken
Its throat ineptly slit
Over the abattoir drain
Bleeding its life away

See the man running
Running for his life
Chased by a rabid dog
The pit-bull called Eternity

See the fountain gushing
From fifty fishes’ mouths
Various parabolas
Same filthy water

(2008)

DRAFTS

Ethical Principles in the Creation of Artificial Minds
A brief proposal. Revised in 2005. [html]
Discusses the role of time in desire-satisfactionism. E.g. is it more important that a desire gets satisfied if it has been held longer? Do past desires count? (Note: This paper needs major revising.) [pdf]

POLICY

Smart Policy: Cognitive Enhancement and the Public Interest new
Summarizing some of the key issues and offering policy recommendations for a "smart policy" on biomedical methods of enhancing cognitive performance. [Forthcoming in Enhancing Human Capacities, eds. J. Savulescu, R. ter Meulen, and G. Kahane (Oxford: Wiley-Blackwell, 2009)] [w/ Rebecca Roache] [pdf]
Three Ways to Advance Science
Those who seek the advancement of science should focus more on scientific research that facilitates further research across a wide range of domains---particularly cognitive enhancement. [Nature Podcast, 31 January 2008] [pdf]
Drugs can be used to treat more than disease
Short letter to the editor on obstacles to the development of better cognitive enhancement drugs. [Nature, Vol. 452, No. 7178 (2008): p. 520] [pdf]

POWERPOINTS, VIDEO, INTERVIEWS, ...

"Three Big Problems"
This short talk (prepared on short notice and delivered in a somewhat sleep-deprived state) was given to a popular audience at the TED conference in Oxford, July 2005. Not sure they knew what to make of it.
Global Catastrophic Risk new
Lecture for the Chancellor's Court of Benefactors, Oxford 22 Nov 2008 [video]
In the Great Silence there is Great Hope
Radio lecture on extraterrestrial life and the Fermi Paradox, commissioned for BBC Radio 3 (aired on 19 July 2007) [pdf] [no audio available]
Three Ways to Advance Science
[Nature Podcast, 31 January 2008] [pdf] [mp3] (my segment starts about 19:30 in)
Interview for Oxford Today Alumni Magazine new
With Peter Snow, July 17, 2009.
Short podcast segment on global catastrophic risks new
Interviewed by David Edmonds, October 6, 2009
Interview for Nature
With Kerri Smith, August 22, 2006.
Interview for The Guardian
With John Sutherland, May 9, 2006. [pdf]

MISCELLANEOUS

Fictional interview of an uploaded dog by Larry King. [html]
Synkrotron
An old volume of poetry... all in Swedish. I quit writing poetry because the world already has quite a lot of it. I've written just a few poems in English more recently (see above).
The World in 2050
Imaginary dialogue, set in the year 2050, in which three pundits debate the big issues of their time.
Transhumanism: The World's Most Dangerous Idea?
According to Francis Fukuyama, yes. This is my response. [Short version in Foreign Policy, in press; full version in Betterhumans, issue 10/19/2004] [html] [translations: Italian]
Moralist, meet Scientist
Review of Kwame Anthony Appiah's book "Experiments in Ethics". [Nature, Vol. 453 (2008): pp. 593-594] [pdf]
Everything
Response to 2008 Edge Question: "What have you changed your mind about?" [pdf]
Superintelligence
Response to 2009 Edge Question: "What will change everything?" [pdf]

SOME MOLDY OLD STUFF

Predictions from Philosophy?
How analytical philosophers could help forecast our technological future. Argues that academic philosophers can do something useful if they become scientific generalists, polymaths, with a thorough grounding in several sciences. Also contains specific remarks about the Fermi paradox, superintelligence, sociological attractors and other things. [Colloquia Manilana (PDCIS), 2000, Vol. 7]
What to say to the Skeptic
A discussion, in dialog form, of the position of the radical skeptic, who doubts that any inductive knowledge is possible. Very early work.
Human Reproductive Cloning from the Perspective of the Future
Boy have I been asked the cloning question too many times! But here is a statement of 27 Dec 2002.
Heart of the Matter, BBC1 Television
Script: "Against Aging". (March 2000).
The Epistemological Mystique of Self-Locating Belief
Some puzzling problems related to self-location
Cortical Integration
Possible Solutions to the Binding and Linking Problems in Perception, Reasoning and Long Term Memory. (My MSc-thesis from 1996 in computational neuroscience on the problem of finding neurologically plausible dynamical binding mechanisms in the brain for producing and storing structured representations.) [Consciousness and Cognition, 2000, Vol. 9, No. 2, pp. 39S-40S]
Understanding Quine's Theses of Indeterminacy
My old MA-thesis in philosophy. Boring. [Linguistic and Philosophical Investigations, 2005, Vol. 9, March]
Observational Selection Effects and Probability
Doctoral dissertation, which presented the first mathematically explicit "observation selection theory". It has now been transfigured into a book, which I'd recommend instead.
What is transhumanism?
An obsolete introduction but with a more recent postscript. [Earlier version in Sawaal, August 2000; reprinted in Doctor Tandy's First Guide to Life Extension and Transhumanity, 2001, Ria University Press, Palo Alto]
Some older online interviews
Nanotechnology now (2001) | Resonance Publications (2000)