Technological singularity

from Wikipedia, the free encyclopedia

Various theories in futurology are summarized under the term technological singularity. It is predominantly understood as a hypothetical future point at which artificial intelligence (AI) surpasses human intelligence, then rapidly improves itself and makes new inventions, so that technical progress becomes irreversible and accelerates to such an extent that the future of humanity beyond this event can no longer be predicted. The first superintelligence would thus be the last invention humanity needs to make, since later inventions would largely be developed by machines.

The term was popularized by the 2005 book The Singularity Is Near: When Humans Transcend Biology (German title: Menschheit 2.0. Die Singularität ist nah) by the American computer pioneer and author Raymond Kurzweil, who estimates the date of the singularity to be the year 2045.

Futurologists have repeatedly postponed the predicted time of the singularity by decades into the future. It is likely, however, that it would come as a surprise, possibly even to those involved in its development.

The term is closely related to the theories and ideas of transhumanism and posthumanism. Some of their representatives assume that the associated technological progress could significantly extend human life expectancy or even achieve biological immortality.

Content of the theory

Exponential performance development over the course of computer history

The expectation of a technological singularity rests on the observation that technology and science have developed ever faster since the dawn of humanity and that many quantitative measures, such as knowledge and economic output, have grown at least exponentially. This includes in particular the computing power of computers, as Gordon Moore pointed out in 1965 (Moore's law). Since around 1940, the computing power available for US$1,000 has doubled at ever shorter intervals, most recently every 18 months, so far by a total of twelve orders of magnitude. This rapid technical progress contrasts with the constant performance of the human brain with regard to certain abilities. Hans Moravec put the computing power of the brain at 100 teraflops, Raymond Kurzweil at 10,000 teraflops; supercomputers have already clearly exceeded both figures. For comparison, a graphics card for 800 euros (as of July 2019) delivers 11 teraflops. On this view, even if one ascribes greater computing power to human thought, the technological singularity would be only a matter of time.
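The figures quoted above can be combined into a small back-of-the-envelope calculation. The sketch below is only an illustration under the (contested) assumptions stated in the text: a constant 18-month doubling interval and the teraflop estimates of Moravec and Kurzweil; the function name and constants are introduced here purely for illustration.

```python
import math

TFLOPS_GPU_2019 = 11            # consumer graphics card, as quoted (7/2019)
TFLOPS_BRAIN_MORAVEC = 100      # Moravec's estimate of the brain
TFLOPS_BRAIN_KURZWEIL = 10_000  # Kurzweil's estimate of the brain
DOUBLING_YEARS = 1.5            # 18-month doubling interval (Moore's law)

def years_to_reach(target_tflops, start_tflops, doubling_years=DOUBLING_YEARS):
    """Years of steady exponential doubling needed to grow from start to target."""
    return doubling_years * math.log2(target_tflops / start_tflops)

# Under these assumptions, a 1,000-dollar device starting at 11 teraflops
# would match Moravec's estimate after roughly 5 years and Kurzweil's
# after roughly 15 years of continued doubling:
print(round(years_to_reach(TFLOPS_BRAIN_MORAVEC, TFLOPS_GPU_2019), 1))
print(round(years_to_reach(TFLOPS_BRAIN_KURZWEIL, TFLOPS_GPU_2019), 1))
```

The calculation shows only how short the horizons become once exponential growth is assumed; whether the assumptions hold is exactly what the critical assessment below disputes.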

Using the example of speech recognition, Anthony Berglas ventures a rough comparison of the computing power of humans and computers. As of 2012, desktop computers would suffice for speech recognition at a human level. In the human brain, the regions known to be used for speech recognition certainly make up more than 0.01% of the total brain. If the rest of human intelligence could be converted into algorithms just as easily, computers would lack only a few orders of magnitude before reaching the thinking ability of humans. Berglas does not discuss what the brain regions cited in this comparison do beyond the mere identification of spoken words.
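Berglas's estimate reduces to a single scaling step, sketched below under the assumptions given in the text (a desktop matches the speech-recognition regions, which make up about 0.01% of the brain); the variable names are illustrative only.

```python
import math

# Assumptions from Berglas's argument as summarized above:
# the brain regions used for speech recognition make up about 0.01%
# of the brain, and a 2012 desktop computer can match them.
speech_fraction = 0.0001               # 0.01% of the total brain
scale_factor = 1 / speech_fraction     # "desktops" needed for a whole brain
orders_of_magnitude = math.log10(scale_factor)

# scale_factor is about 10,000, i.e. roughly four orders of magnitude -
# the "few orders of magnitude" the text mentions, *if* the rest of
# intelligence were as easy to algorithmize as speech recognition.
print(scale_factor, orders_of_magnitude)
```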

Besides raw computing power, a further precondition for a singularity is strong artificial intelligence, i.e. an AI that does not have to be programmed for one specific task. A strong artificial intelligence with more computing power than the human brain would be a so-called superintelligence. If it improved itself, technical progress would become so rapid that humans could no longer follow it intellectually.

Besides artificial intelligence, other technologies are also discussed as possible paths to a technological singularity: technical implants with brain-computer interfaces or genetic engineering could increase the performance of the human mind to such an extent that people without this equipment could no longer follow developments. In futurology, these scenarios fall under the term augmented intelligence.

Development of theories

John von Neumann around 1940

First mentions (1958-1970)

The first known mention of the concept of a technological singularity goes back to Stanisław Ulam, who in May 1958 commented on a conversation with John von Neumann as follows:

“One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.”

In 1965, the statistician I. J. Good described a concept that came even closer to today's dominant meaning of singularity, in that it included the role of artificial intelligence:

“Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.”

The economic trend analysis Future Shock by Alvin Toffler , published in 1970, also referred to the singularity.

Vernor Vinge and Ray Kurzweil

Ray Kurzweil (2006)

In the 1980s, the mathematician and author Vernor Vinge began to speak of a singularity, and in 1993 he published his ideas in the article Technological Singularity. From it comes the frequently quoted prognosis: “Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.”

Vinge postulates that superhuman intelligence, regardless of whether it is achieved through cybernetically enhanced human intelligence or through artificial intelligence, will in turn be even better able to increase its own intelligence. “When greater-than-human intelligence drives progress,” says Vinge, “that progress will be much more rapid.” This kind of feedback is supposed to lead to enormous progress in a very short time (intelligence explosion).

In the article The Law of Accelerating Returns, published in 2001, Ray Kurzweil proposes that Moore's law is just a special case of a more general law governing all of technological evolution. He believes the exponential growth described by Moore's law will continue in the technologies that supersede today's microprocessors, ultimately leading to the singularity, which he defines as technological change “so rapid and profound it represents a rupture in the fabric of human history.”

Technologies

Futurologists speculate about various technologies that could contribute to the occurrence of a singularity. However, there is no consensus as to when and whether these technologies can be implemented at all.

An artificial intelligence with the ability to improve itself is called Seed AI. Many supporters of the singularity believe that a seed AI is the most likely cause of a singularity.

Many supporters of the singularity consider nanotechnology to be one of the greatest threats to the future of mankind (grey goo). For this reason, some call for molecular nanotechnology not to be pursued until a seed AI exists. The Foresight Institute, on the other hand, holds that a responsible use of this technology is possible even before the singularity, and that the realization of a singularity positive for humanity can thereby be accelerated.

In addition to artificial intelligence and nanotechnology, other technologies have also been associated with the singularity: Direct brain-computer interfaces, which belong to the field of augmented intelligence , could lead to improved memory, more extensive knowledge or greater computing capacity in our brain. Speech and handwriting recognition, performance-enhancing drugs and genetic engineering methods also fall into this area.

As an alternative route to creating artificial intelligence, mind uploading has been suggested, primarily by science fiction authors such as Roger Zelazny. Instead of programming intelligence directly, the structure of a human brain would be transferred to a computer via a scan. The required scanner resolution, computing power and knowledge of the brain make this rather improbable before the singularity.

In addition, the emergence of intelligent behavior from a sufficiently complex computer network, so-called swarm intelligence, has been considered, for example by George Dyson in the book Darwin Among the Machines.

The potential performance of quantum computers, should they ever be scalable to many qubits, is immense. At the same time, quantum computers are difficult to program because, firstly, every operation affects all 2^n superposed states and, secondly, the calculation cannot be observed in progress: only at the end, after the collapse of the wave function, does it yield a result of n bits. This paradox led Michael Nielsen to speculate at the Singularity Summit that perhaps only the artificial intelligences beyond the singularity could use quantum computers meaningfully, but that a second singularity might then arise, after which intelligences running on quantum computers would be just as incomprehensible to the AIs that programmed them as classical AIs are to us humans.
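The asymmetry described above is plain arithmetic and can be made concrete in a few lines (this is a counting illustration, not quantum code; the function name is introduced here for illustration):

```python
# n qubits span 2**n superposed basis states, yet a measurement
# collapses the register to only n classical bits.
def superposed_states(n_qubits: int) -> int:
    """Dimension of the state space of an n-qubit register."""
    return 2 ** n_qubits

for n in (10, 50, 300):
    print(f"{n} qubits -> {superposed_states(n):.3e} basis states, {n} bits out")

# Already at 300 qubits the number of basis states exceeds the roughly
# 10**80 atoms in the observable universe, while a measurement still
# returns only 300 bits.
```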

Effects

According to the proponents of the hypothesis, the technological singularity limits the human horizon of experience. The resulting superintelligence could attain an understanding of reality beyond imagination; its effects could not be grasped by human consciousness at any present time, since they would be determined by an intelligence continually superior to the human one. Evolution could switch from the field of biology to that of technology.

Many proponents from the ranks of trans- and posthumanism yearn for the technological singularity. They take the view that it is the end point toward which evolution will inevitably lead. Ultimately, they hope for the creation of superhuman beings who will provide an answer to the meaning of life or transform the universe into a more livable place. They see no danger in this higher intelligence because, precisely by being more highly developed, it would possess a peaceful ethical awareness superior to that of humans.

Critics, however, insist that the occurrence of a technological singularity must be prevented. A superior intelligence does not necessarily go hand in hand with a peaceful disposition, and an emerging superintelligence could easily exterminate humanity. They see a flaw already in the pursuit of a technological singularity, because the purpose of technology is precisely to make people's lives easier; technology that thinks for itself violates this purpose and is therefore not desirable.

The question of how goals and values could be determined ethically by machines, independently of humans, in the context of superintelligence and technological singularity is discussed as the value problem.

Critical assessment

The theory of technological singularity has been criticized from various quarters.

One point of criticism is faulty data interpretation and thus a lack of scientific grounding. Moore's law is extrapolated into the future, although the previous increase in computing power rests on shrinking computer circuits, which is physically limited. In addition, the performance figures are only theoretical maximum values, which diverge exponentially from practical values, since memory bandwidth grows much more slowly (memory wall). The brain, by contrast, does not store data separately from the circuits used to process it. Perceptrons (simple neural networks) do reproduce the human perceptual process with only a few neurons; however, the simulation of a complete brain is already in prospect in various projects such as the Human Brain Project.

Another point of criticism was the stagnation of research in the field of strong AI. Despite decades of research, no breakthrough was in sight for a long time. Economic interest in the topic had already died off in the 1970s and turned to weak AI, i.e. highly specialized problem solvers for individual tasks. Jordan Pollack dampened expectations of rapid progress in this direction: "Having worked on AI for 30 years, I can say with certainty that Moore's law doesn't apply to software design." This point of criticism has been weakened in recent years by breakthroughs such as DeepMind's victory in Go against Lee Sedol.

Other critics doubt the concept of singularity itself. There have been many points in human history at which the world we live in today would have been completely unimaginable. If the definition of the singularity is restricted to "exceeding the limits of our imagination", then the singularity is not a one-off but a recurring event.

Finally, there is criticism of the originality of the theory. For the science fiction writer Ken MacLeod, for example, the idea of the technological singularity is nothing more than a technological re-edition of the theological doctrine of hope for the perfection of the world (eschatology); Jaron Lanier argues similarly. The conception of the technological singularity is sometimes classified as a technical variant of the Übermensch. Thomas Wagner even sees in it the desire for man to become God. Kurzweil writes:

“When we have saturated all the matter and energy of the universe with our intelligence, the universe will awaken, become conscious - and have fantastic intelligence. I think that comes pretty close to God.”

Organizations

The Machine Intelligence Research Institute ( MIRI ) is a non-profit educational and research organization dedicated to the research and implementation of "friendly" artificial intelligence ( Friendly AI ). This institution was founded by Eliezer Yudkowsky , Brian Atkins and Sabine Atkins.

The Acceleration Studies Foundation (ASF) aims to draw the attention of business, science and technology to accelerating technological progress. To this end, it holds the annual Accelerating Change conference at Stanford University. The organization also runs Acceleration Watch, a futurological information website that views current developments in the light of increasing technological change.

The Future of Humanity Institute (FHI) is an interdisciplinary research center attached to Oxford University with a focus on predicting and preventing significant threats to human civilization. Research subjects include, among others, the dangers of artificial intelligence, the doomsday argument, and the Fermi paradox.

According to the German sociologist Sascha Dickel, the number of members and research activities of these institutes are "factually negligible".

Singularity University, founded in 2008 by Ray Kurzweil and Peter Diamandis, has operated since 2013 as a commercial educational institution, Singularity Education Group.

Prominent representatives

Literature

  • Damien Broderick: The Spike. How Our Lives Are Being Transformed by Rapidly Advancing Technologies. Forge, New York 2001, ISBN 0-312-87781-1 .
  • Roman Brinzanik, Tobias Hülswitt: Will we live forever? Talks about the future of people and technology. Suhrkamp, 2010, ISBN 978-3-518-26030-2.
  • David J. Chalmers : The Singularity: A Philosophical Analysis. In: Journal of Consciousness Studies. Volume 17, No. 9-10, pp. 7-65 (2010).
  • Raymond Kurzweil: The Singularity Is Near. When Humans Transcend Biology. Viking, New York 2005, ISBN 0-670-03384-7 .
  • Charles Stross : Accelerando. Heyne, Munich 2006, ISBN 3-453-52195-1 .
  • Vincent C. Müller, Nick Bostrom: Future progress in artificial intelligence: A survey of expert opinion. In: Vincent C. Müller (Ed.): Fundamental Issues of Artificial Intelligence (= Synthese Library). Berlin: Springer, 2016, pp. 553-571 (online).
  • Anders Sandberg: An overview of models of technological singularity. Conference "Roadmaps to AGI and the future of AGI", Lugano, Switzerland, March 2010 (online).
  • Alvin Toffler : The future shock. Strategies for the world of tomorrow. Goldmann, Munich 1983, ISBN 3-442-11364-4 .
  • Vernor Vinge: Article. VISION-21 Symposium, March 1993.
  • Philipp von Becker: The new belief in immortality. Transhumanism, Biotechnology and Digital Capitalism. Passagen Verlag, Vienna 2015, ISBN 978-3-7092-0164-0 .
  • Thomas Wagner : Robocracy. Google, Silicon Valley and humans are being phased out. Papyrossa, Cologne 2015, ISBN 978-3-89438-581-1 .

Web links

References

  1. Bernd Vowinkel: Is the technological singularity coming? In: Humanistic press service. September 2, 2016, accessed October 15, 2019.
  2. Ray Kurzweil: Menschheit 2.0. Die Singularität naht. Berlin 2013, ISBN 978-3-944203-04-1, p. 385.
  3. Vinge, 1993.
  4. One eighth of a life: Ray Kurzweil on his 60th birthday. At: heise online. February 12, 2008.
  5. Search for the new world. At: derStandard.at. February 10, 2012.
  6. RTX 2080 Super, benchmark review 11 TFlops. At: pcgameshardware.de. November 23, 2019.
  7. Anthony Berglas: Artificial Intelligence Will Kill Our Grandchildren (Singularity). Draft 2012 (English).
  8. Stanisław Ulam: Tribute to John von Neumann. In: Bulletin of the American Mathematical Society. 64, No. 3, Part 2, May 1958, p. 5 (online).
  9. Vernor Vinge: Technological Singularity. Department of Mathematical Sciences, San Diego State University, 1993 (online).
  10. Ray Kurzweil: The Law of Accelerating Returns. Essay, March 7, 2001.
  11. Michael Nielsen: Quantum Computing: What It Is, What It Is Not, What We Have Yet to Learn. Talk at the Singularity Summit. November 5, 2009, from 29:11.
  12. Peter Kassan: AI Gone Awry: The Futile Quest for Artificial Intelligence. At: Skeptic.com. Volume 12, 2006.
  13. Jordan B. Pollack: Seven Questions for the Age of Robots. Yale Bioethics Seminar, January 2004.
  14. Harald Bögeholz: Google AlphaGo beats top professionals 4:1 in Go. In: c't. Retrieved April 25, 2016.
  15. The Myth Of AI.
  16. Karen Joisten in: Nietzsche research. Volume 9, 2002, ISBN 3-05-003709-1, p. 39.
  17. Thomas Wagner: The advance of the robocrats. Silicon Valley and the self-abolition of people. Blätter für deutsche und internationale Politik, March 2015, p. 113.
  18. Acceleration Watch (English) - website on the subject of futurology.
  19. Sascha Dickel: Breaking the boundaries of feasibility? Biopolitical utopias of enhancement. In: Peter Böhlemann (Ed.): The feasible man? Lit, Berlin 2010, ISBN 978-3-643-10426-7, pp. 75-84.