Existential risk

[Image: Ex-risks.PNG. Scope/intensity grid from Bostrom's existential risk paper.]

In futures studies, an existential risk is a risk that is both global and terminal. Nick Bostrom defines an existential risk as a risk "where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential." The term is frequently used in transhumanist and Singularitarian communities to describe disaster and doomsday scenarios caused by bioterrorism, non-Friendly superintelligence, the misuse of molecular nanotechnology, or other sources of danger.

Among the grimmest warnings of existential risks from advanced technology are those of computer scientist Bill Joy, who envisages the possibility of global destruction as new technologies become increasingly powerful and uncontrollable, and of Martin Rees, who has written about an extensive range of risks to human survival. Environmentalist Bill McKibben fears that human life will come to seem meaningless if certain historical limits on human technological capability are exceeded. The risk perceived here is psychological rather than physical, but it would still involve a universal and irreversible diminution of human life.

While transhumanism advocates the development of advanced technologies to enhance human physical and mental powers, transhumanist thinkers typically acknowledge that the same technologies could bring existential risks. Transhumanists generally hold that the potential benefits are at least equal in scope and magnitude to the existential risks, or that the risky technologies would be impossible to prevent in any case, and many transhumanists, including Bostrom, are actively engaged in considering how these risks might best be reduced or mitigated.

Joel Garreau's book Radical Evolution contains extensive discussion of possible existential risks (and possible radical benefits) from emerging technologies.

The articles "Risks to civilization, humans and planet Earth" and "Human extinction" list a number of potential existential risks.


Quote

Our approach to existential risks cannot be one of trial-and-error. There is no opportunity to learn from errors. The reactive approach – see what happens, limit damages, and learn from experience – is unworkable. Rather, we must take a proactive approach. This requires foresight to anticipate new types of threats and a willingness to take decisive preventive action and to bear the costs (moral and economic) of such actions.
-- Nick Bostrom

Bibliography

  • Joel Garreau, Radical Evolution, 2005.
  • Martin Rees, Our Final Hour (UK title: Our Final Century), 2003, ISBN 0-465-06862-6.