Superintelligence

From Wikipedia, the free encyclopedia

Superintelligence (literally "over-intelligence") refers to beings or machines whose intelligence is superior to that of humans in many or all areas. The term is used in particular in transhumanism and in the field of artificial intelligence (AI). According to the current state of knowledge, no actually intellectually superior being that meets the criteria of a superintelligence is known.

Definition

Superintelligence is defined as an intellect that is superior to the best human brain in most or all areas, in terms of creative and problem-solving intelligence as well as social skills. Whether it could be realized biologically, technically, or as a hybrid of the two is left open, as is the question of whether it would possess self-awareness or memory. A distinction is made between strong and weak superintelligence. A weak superintelligence is an intellect that works qualitatively at the level of human thought processes but quantitatively many times faster. A strong superintelligence, by contrast, works at a qualitatively superior level.

Demarcation

Colloquially, highly gifted people or people with savant syndrome (so-called "savants") are sometimes described as superintelligent, but they lack the across-the-board abilities of a superintelligence. Likewise, particularly good or fast search engines and the Semantic Web are sometimes called superintelligent; however, they are cognitively superior to humans only in certain respects. A worldwide "community of researchers" cannot be classified as a superintelligence either, since it cannot be clearly delimited, let alone act as a single entity.

History

In 1965, I. J. Good was the first to articulate the idea of a superintelligence and of a possible intelligence explosion:

"Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion', and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make."

There is also an idea by Mihai Nadin, who in his 1997 work MIND - Anticipation and Chaos suggests that a so-called critical mass of normal intelligences could be connected through interaction and would then interact transcendently.

The author Ray Kurzweil assumes that computers will surpass humans in intelligence by 2030. He put forward this thesis in 1999 in his book The Age of Spiritual Machines (German title: Homo S@piens). The idea is developed further under the term technological singularity.

Realization

There is no targeted, serious project for the direct creation of a superintelligence. However, initial successes are emerging in the field of simulation: large-scale projects such as the EU's Human Brain Project (budget: EUR 1 billion) and the US Brain Activity Map Project (budget: USD 3 billion) are intended to lead to a complete replica of the human brain. A fully simulated brain could form the basis of a (weak) superintelligence.

In transhumanism and posthumanism there is so far no consensus on the concrete path to realizing a superintelligence, or on how to assess existing technology. It may be enough simply to combine existing and emerging technologies in the right way, or the corresponding technology, as well as the underlying concepts, may still have to be developed.

Some transhumanists assume that the ever faster pace of development in all areas of science could lead to the realization of a superintelligence within the next few decades (25 to 50 years). In 2009, Henry Markram believed an artificial brain on a par with the human brain to be possible within a decade.

Possible directions of development for the realization of a superintelligence are the technical advancement of computers, the genetic enhancement of humans, the fusion of both in cyborgs, and simulation at the neural level:

  • Artificial intelligence: Merely accelerating the computing speed of computers no more leads to higher intelligence than does implementing more powerful algorithms. However, it is considered possible to implement a self-learning program, a so-called strong artificial intelligence. Research has endeavored for decades to create self-improving artificial intelligences, but has so far succeeded only with weak AI. Nevertheless, of the four alternatives discussed here for realizing a superintelligence, the route via a strong artificial intelligence is given the best chances.
  • Biological cognition: Using genetic engineering, superhumans could be bred; they would not immediately achieve the status of a superintelligence, but over several generations there would at least be a chance of intelligence enhancement. In the context of embryo selection, shortening the cumbersome succession of generations is discussed: stem cells would be taken from embryos at an early stage and converted into germ cells, which would found a new generation. Birth and growth could be bypassed in this way, and the multi-generation selection process for the desired genetic properties of intelligence, which would have to be known, could be drastically shortened and made more efficient. This direction is largely rejected as eugenics. However, it cannot be ruled out that early positive successes in a country not opposed to it would change perceptions in critical states, as has happened in other medical cases in the past, for example with the contraceptive pill or organ transplants.
  • Brain-computer interface: The approach preferred by many transhumanists is to equip people with performance-enhancing implants, such as microprocessors, to massively increase their thinking abilities. There is disagreement about how to proceed in detail; the proposed end goal of transhumanism is the existence of human consciousness solely in digital memories housed in robotic bodies or cyborgs. Here, too, the technical implementation is still in its early stages. Breakthroughs in the integration of the human brain and artificial implants, especially prostheses for the treatment of disabilities, occasionally cause euphoria in the press, but an actual improvement of the human brain through a computer interface has yet to be achieved.
  • Brain emulation: Another possibility, which is also being pursued in practice, is to simulate the human brain completely in a computer and thus reproduce its functions as fast as possible (weak superintelligence). There are two major projects of this kind: the BRAIN Initiative is initially limited to a complete mapping, while the Human Brain Project aims at a full simulation. Full simulation has also been the goal of the Blue Brain project since 2005; its head, Henry Markram, considered an artificial brain possible by 2019. In 2007 a milestone was reached with the simulation of a complete neocortical column (consisting of 100 million synapses; the cerebrum consists of 1 million interconnected columns). In 2013, the Japanese research institute RIKEN, in cooperation with Forschungszentrum Jülich, succeeded in simulating about 1% of brain activity for 1 second on the 10-petaflop K supercomputer, the computation taking 40 minutes (1.73 billion neurons with 10.4 trillion synapses). The head of the research group predicts that with the new generation of exascale computers expected within the next decade, the whole brain will be fully simulable (see the back-of-envelope sketch after this list).
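
A back-of-envelope reading of the RIKEN figures shows what the exascale prediction in the last item amounts to. The linear-scaling assumption below is a simplification introduced here for illustration, not part of the source:

```python
# Back-of-envelope estimate from the RIKEN/Juelich figures quoted above.
# Simplifying assumption: compute cost scales linearly with the simulated
# fraction of the brain and inversely with hardware speed.

k_computer_flops = 10e15      # K computer: roughly 10 petaflops
simulated_fraction = 0.01     # about 1% of the brain's network
biological_seconds = 1.0      # simulated brain activity
wall_clock_seconds = 40 * 60  # 40 minutes of computation

slowdown = wall_clock_seconds / biological_seconds    # 2400x slower than real time
whole_brain_slowdown = slowdown / simulated_fraction  # ~240,000x for the full brain

# An exascale machine (~1e18 FLOPS) is ~100x faster than the K computer,
# which under this naive scaling just offsets the 100x larger network:
exa_slowdown = whole_brain_slowdown / (1e18 / k_computer_flops)

print(f"1% of the brain: {slowdown:.0f}x slower than real time")
print(f"whole brain on the K computer: {whole_brain_slowdown:.0f}x")
print(f"whole brain on an exascale machine: ~{exa_slowdown:.0f}x")
```

On this naive scaling, an exascale computer could indeed simulate the whole brain, but at roughly the same 2,400-fold slowdown the K computer achieved for 1% of it; "fully simulable" does not yet mean real-time.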

Valuation problem

A superintelligence would have to be able to set itself goals and values and to adapt them depending on the situation. Examples of such values are freedom, justice and happiness, or more specifically: "minimize injustice and unnecessary suffering", "be friendly", "maximize company profits". Technically, the valuation would have to be programmed as a mathematical utility function that assigns a measurable benefit to states of the superintelligence's environment. Today's programming languages, however, contain no value concepts such as "happiness"; using terms from philosophy proves intractable, because they cannot be converted into computer syntax. The problem mirrors the human problem of formulating and codifying goals and values in a measurable and verifiable way, since values arise in our complex world, often without our being aware of them. There are also good reasons to believe that our moral ideas are wrong in various respects, making them unsuitable, if not dangerous, for adoption in an AI system.

If instead the system is initially given the simplest possible values from the outside, from which it is to develop and learn its own values with the help of its seed AI, a variety of new problems arise. Values may no longer be desired in a changed world, or unforeseen conflicts between goals may arise that the system is expected to recognize and correct; in many cases, however, conflicting goals cannot be resolved. The valuation problem is therefore unsolved: it is not known how a superintelligence could be made to acquire comprehensible human values by way of value learning. Even if the problem were solved, the next problem would be which values to choose and which selection criteria to use. Further questions follow: Should the intentions of the superintelligence be reviewed by humans before they are carried out? Given the complexity of the issues, would that even be feasible for humans in the necessary time frame? And would the system tolerate such control in the long run?
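
To make the technical core of the problem concrete, here is a minimal, purely illustrative sketch in Python (the state representation and all names are invented; the paperclip goal anticipates Bostrom's example in the following section). A utility function that only counts paperclips is mathematically well-defined and measurable, yet silent about everything else the world contains:

```python
from dataclasses import dataclass

@dataclass
class WorldState:
    paperclips: int        # the quantity the utility function measures
    humans_alive: int      # conditions the utility function silently ignores
    resources_left: float  # fraction of usable resources remaining

def utility(state: WorldState) -> float:
    """Naive utility function for the goal 'maximize paperclips'.

    Mathematically well-defined and easy to optimize, but it assigns the
    same value to two worlds that differ only in humans_alive or
    resources_left -- the valuation problem in miniature.
    """
    return float(state.paperclips)

# An optimizer comparing candidate futures ranks them by paperclip count alone:
modest = WorldState(paperclips=10**6, humans_alive=8_000_000_000, resources_left=0.9)
bleak = WorldState(paperclips=10**7, humans_alive=0, resources_left=0.0)
assert utility(bleak) > utility(modest)  # the "better" world under this metric
```

Patching the function with further terms ("and keep humans alive", "and be friendly") runs into exactly the codification problem described above: each new term must itself be reduced to a measurable quantity.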

Control problem

The control problem arises from giving the superintelligence values that are, from a human point of view, wrong or insufficient. Its aim is to ensure that humanity remains in control of the machine. Bostrom illustrates the problem with the following example: "Imagine a machine that was programmed with the goal of producing as many paper clips as possible, for example in a factory. This machine does not hate people. Nor does it want to break free from its subjugation. All that drives it is to produce paper clips, the more the better. To achieve this goal, the machine must remain functional. It knows that. So it will prevent people from switching it off by any means. It will do everything possible to secure its energy supply. And it will grow, and it will not stop even after it has turned humanity, the earth and the Milky Way into paper clips. This follows logically from its goal, which it does not question but fulfills as well as possible."

How can this development be prevented? Bostrom describes two basic sources of danger. The first concerns the motivation of the designers of the superintelligent machine: do they develop it for their personal gain, out of scientific interest, or for the benefit of humanity? This risk can be averted if the client oversees the developers. The second concerns the control of the superintelligence by its designers: can a machine that ends its development with higher qualifications be monitored by less qualified developers? The necessary control measures would then have to be planned in advance and built into the machine in such a way that the machine cannot later manipulate them. There are two approaches to this: controlling the machine's capabilities and controlling its motivation. If even one of the two slips away, the superintelligence can seize control of humanity.

Intelligence explosion

In connection with superintelligence, I. J. Good already spoke of a possible intelligence explosion that could occur in a cycle of recursive self-improvement. Today this scenario is presented as a process in several stages. First, the current system has capabilities far below the human baseline, defined as general intellectual ability. At some point in the future it reaches this level; Nick Bostrom describes this point as the start of the takeoff. With further continuous progress, the superintelligence acquires the combined intellectual abilities of all of humanity. The system becomes a "strong superintelligence" and eventually raises itself to a level far above the combined intellectual capacities of contemporary humanity. Here the takeoff ends, and the system's intelligence increases only slowly from then on. During the takeoff, the system may cross a critical threshold beyond which improvements to the system come mostly from within; external interventions then become irrelevant.

The duration of the takeoff depends on many factors. Accelerating effects are summarized under the term "optimization power", inhibiting effects under the term "resistance". The rate of change, i.e. the possible rate of acceleration of an intelligence explosion, is then, according to Bostrom, optimization power divided by resistance. Factors influencing the optimization power or the resistance of the system include, for example, whether only one superintelligence undergoes the process or whether several or even many superintelligences are in competition with one another. Another decisive factor is what percentage of the gross world product is invested in the development of one or many superintelligences (construction effort). Rapid, drastic hardware improvements would be an indication of a fast takeoff and thus of an intelligence explosion; it could run its course in a few days or hours. The scale of an intelligence explosion is illustrated by the suggestion that global GDP would double within a year or two, or that the system could write a dissertation in a few minutes. Humans would hardly have time to react; their fate would largely depend on precautions taken beforehand. In the extreme case, they would face the loss of any capacity to act that comes with a technological singularity.

According to Bostrom's analysis, a moderate takeoff would lead to geopolitical, social and economic upheavals, as interest groups try to reposition themselves in terms of power politics in the face of the impending drastic change. A slow takeoff is, according to Bostrom, unlikely.
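
Written out as a formula (the symbols are chosen here for illustration; "resistance" corresponds to what Bostrom calls recalcitrance):

\[
\text{rate of change of intelligence} = \frac{\text{optimization power}}{\text{resistance}},
\qquad \text{e.g.} \quad \frac{dI}{dt} = \frac{D(t)}{R(I)} .
\]

The explosive dynamic arises once the system's own intelligence contributes substantially to the optimization power, so that D grows with I faster than the resistance R does.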

Criticism

The philosophical and ethical implications of a superintelligence are controversial both inside and outside the transhumanist movement. There are various criticisms of the goal of creating a superintelligence.

Skeptics doubt that a superintelligence is technically feasible at all. The processes in a biological brain, they argue, are far too complex to be deciphered, and also far too complex to be imitated by a technical device. Connecting human synapses to electronics in a cyborg is also problematic, since a fast but rigid electronic system cannot simply be wired to a slower but living brain. This criticism is countered, on the one hand, with the argument that the knowledge of the exact processes in a human brain is not in principle too complex ever to be understood; on the other hand, artificial intelligence is by no means limited to imitating a biological brain, but can also base its intelligence on a different mode of functioning.

Other critics object to the hubris of wanting to improve human beings. Improvement through genetic engineering, in particular, is socially proscribed as eugenics. There is also the question of whether superintelligent beings would use their abilities for the benefit or to the detriment of humanity. While proponents believe that a superintelligence must by definition be of better character than a normal human being today, an attempt to create it might implement the vision only partially and bring forth a malevolent intelligence. It should also be noted that even the legitimate interests of benevolent parties can collide, with the superior party prevailing.

Finally, there is a reluctance to become dependent on possibly faulty implants. Advocates of transhumanism in particular argue that those who refuse the new technology will sooner or later be displaced or left behind by the avant-garde. According to the critics, the consequence would be that richer people could buy more powerful brains and use their increased mental capacity to dominate or exploit non-augmented people all the more effectively. A tendency in this direction can already be seen in societies in which access to higher education is preferentially available to the upper social classes.

In fiction

Many science fiction novels and films feature superior beings. These mostly seemingly omniscient or omnipotent beings often meet the criteria of superintelligence outlined above. The idea of a completely spiritualized, i.e. disembodied, superintelligence first appeared in 1962 in the science fiction series Perry Rhodan, in the form of the superintelligence ES.

References

  1. Nick Bostrom: How long before Superintelligence? Last updated 2005.
  2. Terms of transhumanism, section "What is a superintelligence?" (Memento of July 16, 2011 in the Internet Archive).
  3. I. J. Good: Speculations Concerning the First Ultraintelligent Machine. 1965 (Memento of March 4, 2016 in the Internet Archive).
  4. Mihai Nadin: Mind - Anticipation and Chaos.
  5. Bostrom et al.: FAQ from 1999.
  6. Artificial brain '10 years away'. BBC News, July 22, 2009, accessed June 19, 2013.
  7. Hans Moravec: When will computer hardware match the human brain? 1997 (Memento of June 15, 2006 in the Internet Archive).
  8. Nick Bostrom: Superintelligence. Scenarios of a Coming Revolution. Suhrkamp, 2016, p. 68ff.
  9. Gerhard Fröhlich: Techno-utopias of immortality from computer science and physics. In: Ulrich Becker, Klaus Feldmann, Friedrich Johannsen (eds.): Death and Dying in Europe. Neukirchener Verlag, Neukirchen-Vluyn 1997, ISBN 3-7887-1569-3, pp. 184-213 (jku.at, PDF; 81 kB).
  10. First thought-controlled arm prosthesis. In: derstandard.at
  11. Prostheses - higher, faster, further. In: zeit.de
  12. Spiegel 1997: Live forever as a cyborg.
  13. Pacemaker for the brain. In: focus.de
  14. Spiegel 2008: The Language of the Brain.
  15. Largest neuronal network simulation achieved using K computer. RIKEN, August 2, 2013, accessed August 13, 2013.
  16. Nick Bostrom: Superintelligence. Scenarios of a Coming Revolution. Suhrkamp, 2016, pp. 260-291.
  17. Nick Bostrom: Superintelligence. Scenarios of a Coming Revolution. Suhrkamp, 2016, p. 299.
  18. Nick Bostrom: Superintelligence. Scenarios of a Coming Revolution. Suhrkamp, 2016, p. 270ff.
  19. Nick Bostrom: Superintelligence. Scenarios of a Coming Revolution. Suhrkamp, 2016, p. 277.
  20. Interview with Nick Bostrom in Die Zeit.
  21. The Future of Evolution. Comment by Bertram Köhler on Nick Bostrom's book Superintelligence.
  22. Graphic: the shape of the takeoff, after Bostrom 2016, p. 94.
  23. Nick Bostrom: Superintelligence. Scenarios of a Coming Revolution. Suhrkamp, 2016, p. 94.
  24. Nick Bostrom: Superintelligence. Scenarios of a Coming Revolution. Suhrkamp, 2016, p. 338f.
  25. Nick Bostrom: Superintelligence. Scenarios of a Coming Revolution. Suhrkamp, 2016, p. 95.
  26. Nick Bostrom: Superintelligence. Scenarios of a Coming Revolution. Suhrkamp, 2016, p. 96f.