Nick Bostrom

from Wikipedia, the free encyclopedia
Nick Bostrom (2014)

Nick Bostrom (born March 10, 1973, as Niklas Boström) is a Swedish philosopher at the University of Oxford.

He is known for his research and publications in the fields of bioethics and technology assessment, in particular on existential risk, the anthropic principle, ethical aspects of human enhancement, and the risks of superintelligence. He often employs formal epistemology, drawing on formalizations and results from statistics and decision theory when analyzing and developing arguments. Together with Toby Ord, he developed the "reversal test" as a heuristic for neutralizing status quo bias. He advocates consequentialism. Bostrom is a regular contributor to English-language media on topics such as ethics, politics, futurology, and transhumanism.

Career

In 1998, Bostrom founded the World Transhumanist Association with David Pearce. In 2004 he founded the Institute for Ethics and Emerging Technologies with James Hughes. He is no longer affiliated with either of these organizations. Bostrom received his doctorate (Ph.D.) from the London School of Economics in 2000.

In 2005 he became director of the newly created Future of Humanity Institute (FHI) in Oxford. As part of the effective altruism movement, the FHI pursues the goal of achieving the greatest possible long-term positive impact for humanity through its research.

His publication list includes over 200 titles, among them the New York Times bestseller Superintelligence: Paths, Dangers, Strategies and Anthropic Bias. He received the Eugene R. Gannon Award in 2009 and appears on Foreign Policy's list of the Top 100 Global Thinkers. Thanks to mentions by Elon Musk and Bill Gates, Bostrom's work on superintelligence also became known to a wider public. Together with Bostrom, Stephen Hawking, Max Tegmark, and Martin Rees, Musk is one of the signatories of an open letter of the Future of Life Institute that declares the study of superintelligence and its safe use a valid and urgent research topic.

Work

Existential risks

Bostrom is known for his research on existential risks: events capable of extinguishing the intelligent life that has arisen on Earth or of drastically and permanently curtailing its development.

Superintelligence

Bostrom has played a key role in shaping the discussion of superintelligence in recent years, notably in his book Superintelligence: Paths, Dangers, Strategies. He regards artificial intelligences that would be far superior to human intelligence both as a step towards transhumanism and as an existential risk. Bostrom considers a rapid transition to an intelligence explosion more likely than a moderate one, and much more likely than a slow transition. According to Bostrom, such a fast takeoff carries the risk that humans lose control of a technology that subsequently no longer needs them. That could mean the end of mankind.

Anthropic Reasoning

In his work on the anthropic principle, he diagnoses in previous debates a cognitive distortion traceable to observer effects: the observation selection bias. The selection and evaluation of possible answers to a question whose answer is underdetermined is influenced by the very fact that there is someone able to ask the question. Accordingly, every observer would regard events that make his own existence improbable as themselves improbable, or neglect them. In cosmology, reasoning from the anthropic principle in this way leads to inconclusive or incorrect assumptions about the probability that particular cosmological models are valid.

Bostrom formulates several maxims for arguments and considerations involving anthropic principles:

  • Self-Sampling Assumption (SSA): "One should reason as if one were a random sample from the set of all observers in one's reference class."
  • Strong Self-Sampling Assumption (SSSA): "One should reason as if the current observer-moment were a random sample from the set of all observer-moments in its reference class."
  • Self-Indication Assumption (SIA): "One should reason as if one were a random sample from the set of all possible observers."

Since the exact determination of the reference class, i.e. the class of all entities from which an observer can reasonably assume to have been randomly selected, is in many cases uncertain, Bostrom considers credible above all those anthropic arguments whose results are as independent as possible of the choice of reference class. Bostrom uses various thought experiments to test the validity of these maxims.
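The difference between SSA and SIA can be made concrete with one of the standard thought experiments from this literature, the "incubator": a fair coin toss creates one observer on heads and two on tails, and an observer asks what credence to give to heads. The following sketch is a hypothetical illustration (the function names and world model are assumptions for this example, not taken from Bostrom's text):

```python
# Toy model of the "incubator" thought experiment: a fair coin creates
# one observer (heads) or two observers (tails). Each world is mapped to
# (prior probability, number of observers in that world).

def ssa_credence(worlds):
    """SSA: pick a world by its prior, then an observer within it.
    Observer counts inside a world cancel out, so the credence equals the
    prior, renormalized over worlds containing at least one observer."""
    total = sum(p for p, n in worlds.values() if n > 0)
    return {w: p / total for w, (p, n) in worlds.items() if n > 0}

def sia_credence(worlds):
    """SIA: weight each world by prior times its number of possible
    observers, then renormalize."""
    total = sum(p * n for p, n in worlds.values())
    return {w: p * n / total for w, (p, n) in worlds.items()}

incubator = {"heads": (0.5, 1), "tails": (0.5, 2)}

print(ssa_credence(incubator))  # heads: 0.5 — SSA keeps the prior
print(sia_credence(incubator))  # heads: 1/3 — SIA favors observer-rich worlds
```

The divergence (1/2 versus 1/3 for heads) shows why the choice of assumption, like the choice of reference class, can change the conclusion of an anthropic argument.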

Bostrom expanded these self-sampling assumptions into a model of anthropic bias and anthropic reasoning under the uncertainty of not knowing one's own place in our universe, or even who "we" are. It could also be a way of overcoming various cognitive biases inherent in the humans who make the observations and express models of our universe through mathematics.

Simulation hypothesis

In a thought experiment that has also been widely received in popular science, Bostrom deals with the simulation hypothesis. In his argument, a more highly developed, posthuman civilization could be able and willing to simulate reality, including the entire universe, with computer technology. This could possibly already have happened. The possibility that our universe is a simulated reality, and that all living things are consequently part of this simulation, is one of three alternatives of which, according to Bostrom, at least one is true.

Bostrom compares three scenarios:

  1. The probability that a civilization reaches a "posthuman" stage of development is almost zero.
  2. Almost no "posthuman" civilization is interested in simulating its human predecessors.
  3. There is a high probability that we are living in a simulated reality.

If assumptions 1 and 2 are false, then, according to Bostrom, one must assume a high probability that our universe is in fact a simulation (the simulation hypothesis). He assumes that, if such civilizations are possible at all, there should be a great many of them. From this Bostrom concludes that either mankind will never reach the technological level needed to realize a simulation of our current state, or that we most likely already live in such a simulation.

He arrives at this trilemma through the following mathematical consideration.

Let

$f_P$ be the fraction of all human-level civilizations that reach a higher (so-called posthuman) level of development,
$\bar{N}$ the average number of ancestor simulations run by civilizations that have reached such a level,
$\bar{H}$ the average number of individuals who lived in a civilization before it was able to run such simulations.

Then the fraction of observers with human-like experiences who live in a simulation is

$$f_{\mathrm{sim}} = \frac{f_P \cdot \bar{N} \cdot \bar{H}}{f_P \cdot \bar{N} \cdot \bar{H} + \bar{H}} = \frac{f_P \cdot \bar{N}}{f_P \cdot \bar{N} + 1}.$$

If we now take $f_I$ as the fraction of posthuman civilizations in which there is an interest in simulations of reality (at least among individuals who have the necessary technology and resources), and $\bar{N}_I$ as the average number of ancestor simulations run by such an interested civilization, we get

$$\bar{N} = f_I \cdot \bar{N}_I.$$

It follows:

$$f_{\mathrm{sim}} = \frac{f_P \cdot f_I \cdot \bar{N}_I}{f_P \cdot f_I \cdot \bar{N}_I + 1}.$$

Since $\bar{N}_I$ assumes an enormous value, at least one of the following three propositions must be true:

$$f_P \approx 0, \qquad f_I \approx 0, \qquad f_{\mathrm{sim}} \approx 1.$$
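The behavior of this bound can be checked numerically. The sketch below is a hypothetical illustration; the function name and sample values are assumptions for this example, not figures from Bostrom's paper:

```python
# Numeric sketch of the simulation-argument formula.
# f_p: fraction of civilizations reaching a posthuman stage
# f_i: fraction of posthuman civilizations that run ancestor simulations
# n_i: average number of ancestor simulations per interested civilization

def simulated_fraction(f_p, f_i, n_i):
    """Fraction of observers with human-like experiences who live in a
    simulation: f_sim = f_p*f_i*n_i / (f_p*f_i*n_i + 1)."""
    x = f_p * f_i * n_i
    return x / (x + 1)

# Even modest fractions, combined with a huge number of simulations per
# civilization, drive f_sim toward 1:
print(simulated_fraction(0.01, 0.01, 10**6))   # ≈ 0.990
# Only if f_p (or f_i) is essentially zero does f_sim stay small:
print(simulated_fraction(1e-12, 0.01, 10**6))  # ≈ 1e-8
```

This is the arithmetic core of the trilemma: because the product f_p·f_i·n_i is dominated by the enormous n_i, the result is near 1 unless one of the two fractions is near zero.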

Works

Books

  • Nick Bostrom: Superintelligence: Paths, Dangers, Strategies. Oxford University Press, Oxford 2014, ISBN 978-0-19-967811-2.
    • German: Superintelligenz. Szenarien einer kommenden Revolution. Suhrkamp, Berlin 2014, ISBN 978-3-518-58612-9 (translated by Jan-Erik Strasser).
  • Nick Bostrom, Julian Savulescu: Human Enhancement. Oxford University Press, 2011, ISBN 978-0-19-959496-2.
  • Nick Bostrom, Milan M. Cirkovic: Global Catastrophic Risks. Oxford University Press, 2011, ISBN 978-0-19-960650-4.
  • Nick Bostrom: Anthropic Bias. Observation Selection Effects in Science and Philosophy. Routledge, New York 2002, ISBN 0-415-93858-9.

Articles (selection)

  • Nick Bostrom: The Vulnerable World Hypothesis. In: Global Policy. Vol. 10, 2019, pp. 455–476, doi:10.1111/1758-5899.12718 (archive.org [PDF]).
  • Nick Bostrom: Existential Risk Prevention as Global Priority. In: Global Policy. Vol. 4, No. 1, 2013, pp. 15–31 (archive.org).
  • Nick Bostrom: Infinite Ethics. In: Analysis and Metaphysics. Vol. 10, 2011 (archive.org [PDF]).
  • Nick Bostrom, Toby Ord: The Reversal Test: Eliminating Status Quo Bias in Applied Ethics. In: Ethics. Vol. 116, No. 4, 2006, pp. 656–679, doi:10.1086/505233 (nickbostrom.com [PDF]).
  • Nick Bostrom: Are We Living in a Computer Simulation? In: The Philosophical Quarterly. Vol. 53, No. 211, 2003, pp. 243–255, doi:10.1111/1467-9213.00309 (archive.org [PDF]).
  • Nick Bostrom: Astronomical Waste: The Opportunity Cost of Delayed Technological Development. In: Utilitas. Vol. 15, No. 3. Cambridge University Press, 2003, pp. 308–314, doi:10.1017/S0953820800004076 (archive.org [PDF]).

