Wednesday, June 13, 2007

Existential Risks

Part of the evolutionary process involves development under risk: both human-created risks and more cosmic risks that lie beyond human hands. AI commentator Eliezer Yudkowsky has written extensively on these, and two PDF essays - "Cognitive Biases Potentially Affecting Judgment of Global Risks" and "Artificial Intelligence as a Positive and Negative Factor in Global Risk" - can be found on The Singularity Institute Blog post titled 'Artificial Intelligence, Cognitive Biases, and Global Risk'.

Below is an extract from the conclusion of 'Cognitive Biases Potentially Affecting Judgment of Global Risks':

In addition to standard biases, I have personally observed what look like harmful modes of thinking specific to existential risks. The Spanish flu of 1918 killed 25-50 million people. World War II killed 60 million people. 10^7 is the order of the largest catastrophes in humanity’s written history. Substantially larger numbers, such as 500 million deaths, and especially qualitatively different scenarios such as the extinction of the entire human species, seem to trigger a different mode of thinking - enter into a “separate magisterium”. People who would never dream of hurting a child hear of an existential risk, and say, “Well, maybe the human species doesn’t really deserve to survive.”


With numbers that large it's easy to reduce survival to a question of how many die - yet the future is about much, much more: what's at stake is the conscious part of a very cosmic experiment...

2 comments:

Anonymous said...

These claims seem to be somewhat strangely related to Stalin's famous aphorism that "a single death is a tragedy, a million deaths is statistics" - what would 500 million deaths be?

Kingsley said...

Mmmm.. yes - risks are easy to talk about from privileged positions..

Take a peek at: http://lifeboat.com/ex/programs

Cheers - Kingsley