David Pearce - Global Catastrophic & Existential Risk - Sleepwalking into the Abyss
Existential risk? I think the greatest underlying source of existential and global catastrophic risk lies in male human primates doing what evolution "designed" male human primates to do, namely wage war. (cf. http://ieet.org/index.php/IEET/more/4576) Unfortunately, we now have thermonuclear weapons with which to do so.
1) Does the study of existential risk reduction (ERR) diminish or enhance existential risk (ER)? One man's risk is another man's opportunity. 2) Is the existence of suffering itself a form of ER insofar as it increases the likelihood of intelligent agency pressing a global OFF button, cleanly or otherwise? If I focussed on ERR, phasing out suffering would be high on the to-do list.
AGI? Well, I'd argue it's a form of anthropomorphic projection on our part to ascribe intelligence or mind to digital computers. Believers in digital sentience, let alone digital (super)intelligence, need to explain Moravec's paradox. (cf. http://en.wikipedia.org/wiki/Moravec's_paradox) For sure, digital computers can be used to model everything from the weather to the Big Bang to thermonuclear reactions. Yet why is, say, a bumblebee more successful at navigating its environment in open-field contexts than the most advanced artificial robot the Pentagon can build today? The success of biological lifeforms since the Cambrian Explosion has turned on the computational capacity of organic robots to solve the binding problem (http://tracker.preterhuman.net/texts/body_and_health/Neurology/Binding.pdf) and generate cross-modally matched, real-time simulations of the mind-independent world. On theoretical grounds, I predict digital computers will never be capable of generating unitary phenomenal minds, unitary selves or unitary virtual worlds. In short, classical digital computers are invincibly ignorant zombies. (cf. http://ieet.org/index.php/IEET/more/pearce20120510) They can never "wake up" and explore the manifold varieties of sentience...
So why support initiatives to reduce existential and global catastrophic risk? Such advocacy might seem especially paradoxical if you're inclined to believe (as I am) that Hubble volumes where primordial information-bearing self-replicators arise more than once are vanishingly rare - and therefore cosmic rescue missions may be infeasible. Suffering sentience may exist in terrible abundance beyond our cosmological horizon and in googols of other Everett branches. But on current understanding, it's hard to see how rational agency can do anything about it...
The bad news? I fear we're sleepwalking towards the abyss. Some of the trillions of dollars of weaponry we're stockpiling, designed to kill and maim rival humans, will be used in armed conflict between nation states. Tens of millions, and possibly hundreds of millions, of people may perish in thermonuclear war. Multiple potential flashpoints exist. I don't know if global catastrophe can be averted. For evolutionary reasons, male humans are biologically primed for competition and violence. Perhaps the least sociologically implausible preventive measure would be a voluntary transfer of the monopoly of violence currently claimed by state actors to the United Nations. But I wouldn't count on any such transfer of power this side of Armageddon.
http://www.hedweb.com/social-media/pre2014.html
Does the study of existential and global catastrophic risk increase or decrease the likelihood of catastrophe?
The issue is complicated by the divergent senses that researchers attach to the term "existential risk":
http://www.abolitionist.com/anti-natalism.html
An important strand of Bostrom's research concerns the future of humanity and long-term outcomes. He introduced the concept of an existential risk, which he defines as one in which an "adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential." In the 2008 volume "Global Catastrophic Risks", editors Bostrom and Ćirković characterize the relation between existential risk and the broader class of global catastrophic risks, and link existential risk to observer selection effects[8] and the Fermi paradox.[9] In a 2013 paper in the journal Global Policy, Bostrom offers a taxonomy of existential risk and proposes a reconceptualization of sustainability in dynamic terms, as a developmental trajectory that minimizes existential risk. Bostrom has argued that, from a consequentialist perspective, even small reductions in the cumulative amount of existential risk that humanity will face are extremely valuable, to the point where the traditional utilitarian imperative - to maximize expected utility - can be simplified to the Maxipok principle: maximize the probability of an OK outcome (where an OK outcome is any that avoids existential catastrophe).
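A quick sketch of why Maxipok falls out of expected utility maximization (my gloss under a stylized assumption, not Bostrom's own formalism): suppose total utility U is effectively binary, taking some astronomically large value V if existential catastrophe is avoided and roughly zero otherwise. Writing P(OK) for the probability of avoiding catastrophe,

\[
\mathbb{E}[U] \approx P(\mathrm{OK}) \cdot V + \bigl(1 - P(\mathrm{OK})\bigr) \cdot 0 = P(\mathrm{OK}) \cdot V .
\]

Since V is a fixed, very large positive value, maximizing expected utility under this assumption reduces to maximizing P(OK): maximize the probability of an OK outcome.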
Subscribe to this channel: http://youtube.com/subscription_center?add_user=TheRationalFuture
Science, Technology & the Future: http://scifuture.org
Facebook group on Existential Risk: https://www.facebook.com/groups/ExistentialRisk