History of Artificial Intelligence


The founding event of artificial intelligence as an academic field is considered to be the Dartmouth Conference in the summer of 1956 at Dartmouth College in Hanover, New Hampshire, a six-week workshop entitled Dartmouth Summer Research Project on Artificial Intelligence, organized by John McCarthy as part of a research project funded by the Rockefeller Foundation. The term "artificial intelligence" appeared for the first time in the funding application. Besides McCarthy himself, Marvin Minsky, Nathaniel Rochester and Claude Elwood Shannon took part.

Prehistory

The idea that human intelligence, or human thought processes in general, might be automated or mechanized, and that humans could design and build a machine exhibiting intelligent behavior in some way, is much older, however. The earliest source usually cited is Julien Offray de La Mettrie and his work L'Homme Machine, published in 1748. The idea of Laplace's demon, named after the French mathematician, physicist and astronomer Pierre-Simon Laplace, can also be counted among the theoretical forerunners of artificial intelligence, since it rests on the model that the entire universe, like clockwork, follows the rules of a mechanical machine, an idea that naturally includes human beings and their minds, their intelligence.

Historical automata and robots

At several points in history there are reports of mechanical machines for specific tasks that were built into a more or less human-like casing and were thus, to a certain extent, intended to convey the illusion of an artificial human being. In some cases these were fairground attractions, and the motif extends to figures such as C-3PO from Star Wars.

Homunculi, Golem and other artificial people

In addition to these automata, which at least their designers and builders generally understood as machines with limited mechanical capabilities, there were also theoretical or literary conceptions of artificially created living beings that were supposed to resemble humans in their capabilities and appearance. The general idea of a homunculus was already described in antiquity; a plan for the alleged production of a homunculus can be found in the text De natura rerum (1538), which is generally attributed to Paracelsus. Further examples are the Jewish legend of the golem in its various forms and Mary Shelley's novel Frankenstein.

Artificial intelligence in literature

The Polish philosopher and science fiction author Stanisław Lem illustrated this idea in numerous works of fiction.

Theoretical foundations in the early 20th century

Building on the work of Alan Turing, including the essay Computing Machinery and Intelligence, Allen Newell (1927–1992) and Herbert A. Simon (1916–2001) of Carnegie Mellon University in Pittsburgh formulated the Physical Symbol System Hypothesis. According to this hypothesis, thinking is information processing, and information processing is computation, a manipulation of symbols. The brain as such is not what matters in thinking: "Intelligence is mind implemented by any patternable kind of matter." This view, that intelligence is independent of its carrier substance, is shared by proponents of strong AI research. For Marvin Minsky (1927–2016) of the Massachusetts Institute of Technology (MIT), one of the pioneers of AI, "the goal of AI is to overcome death". The robotics specialist Hans Moravec (born 1948) of Carnegie Mellon University describes in his book Mind Children the scenario of the evolution of post-biological life: a robot transfers the knowledge stored in a human brain to a computer, so that the biomass of the brain becomes superfluous and a posthuman age begins in which the stored knowledge remains accessible for as long as desired.

Discussion of the possibility of artificial intelligence

Since the beginning of the scientific and philosophical discussion of artificial intelligence, the possibility of "strong" artificial intelligence in particular has been debated. It has even been questioned whether artificial systems that resemble humans, or are similar to them in a sense yet to be determined, can be conceived at all without contradiction.

Hubert Dreyfus's book The Limits of Artificial Intelligence (What Computers Can't Do) dates from 1972. Together with his brother Stuart E. Dreyfus, he described the "limits of the thinking machine" in 1986.

Karl Popper (1977), on the other hand, refers to a thesis of Alan Turing, who once said: "Specify exactly in what respect you believe a human being is superior to a computer, and I will build a computer that refutes your belief." Popper, however, puts this statement into perspective by recommending: "We should not accept Turing's challenge, because any sufficiently precise specification could in principle be used to program a computer." He also points out that no one has yet been able to formulate a definition of intelligence that is accepted by all relevant experts for human intelligence, and that consequently there is no generally accepted procedure by which the presence or the degree of "intelligence" in humans, and then possibly also in an artificial system, could be objectively checked or measured.

The discussion as to whether there can even be such a thing as an artificial intelligence that is equal to, or even superior to, human intelligence is, however, characterized by a fundamental asymmetry:

  • Authors such as the Dreyfus brothers argue that artificial intelligence in the strict sense of strong AI cannot exist; in formal-logical terms they are making a universal statement (with a negative sign), and the arguments and considerations they cite for this thesis can be attacked, disputed or possibly even refuted at many points.
  • Alan Turing, on the other hand, only claims that, under certain conditions or requirements, something is possible, and leaves it to others to meet these requirements in the first place. As long as this has not happened, Turing's assertion cannot be tested and consequently cannot be falsified, and in this respect, strictly speaking, it is not a scientific statement according to the criterion of scientificity formulated by Popper.

To decide whether a technical system has an intelligence similar to that of a human, reference is often made to a proposal by Alan Turing that has become known as the Turing test. Turing himself only outlined the general idea that could underlie such a test: if a person interacting with two "partners", one of which is another human and the other an artificial, technical system, can no longer find out or distinguish which of the partners is the human and which is the computer, then one could no longer deny that the technical system is also intelligent. (Turing initially left the precise details open; it is, of course, clear that the interaction in such a test situation must be designed, for example as a telephone conversation or a written question-and-answer exchange, so that no extraneous information can distort the assessment.)

When Alan Turing made this proposal around 1950, however, the field of artificial intelligence did not yet exist, and accordingly there was neither the distinction between strong and weak AI nor the dispute over whether strong AI in the narrower sense is possible at all. There were, of course, later various attempts to make Turing's idea concrete and to carry it out in practice, but all of them have been criticized or not recognized because of shortcomings in their conceptualization, and also in their practical implementation.

At the same time, the computing power of computers has been increasing at an exponential rate for over 50 years: in a 1965 article, Gordon Moore predicted a doubling roughly every two years, initially based only on the packing density of components on computer chips and initially only for the period up to 1975. Under the name Moore's Law, this prognosis became a rough rule for how the performance of computer systems develops; in 2015 this law celebrated 50 years of validity (over this period there were 25 doublings, i.e. a performance increase by a factor of about 33 million).
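The arithmetic behind this figure is straightforward; as a sketch, with a doubling period of two years the performance factor after t years is

\[
\text{factor} = 2^{t/2}, \qquad t = 50 \;\Rightarrow\; 2^{25} = 33\,554\,432 \approx 3.4 \times 10^{7}.
\]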

Against this background, and because the performance of the human brain is roughly constant, there is already a term for the point in time at which the performance of computers, and thus of artificial intelligence, could one day surpass that of the human brain: the technological singularity. From a purely technical point of view, in terms of the number of operations per unit of time and the available memory, today's expensive supercomputers clearly exceed the estimated performance of the human brain, but human brains are still considered superior (as of 2017) in tasks such as creativity, pattern recognition and language processing. In the summer of 2017, the Chinese researchers Feng Liu, Yong Shi and Ying Liu carried out IQ tests with publicly and freely available weak AIs such as Google AI and Apple's Siri, among others. These AIs reached a maximum of about 47, which is below the level of a six-year-old child in the first grade. An adult averages 100. Similar tests had been carried out in 2014, in which the AIs reached a maximum of 27.

Research directions and phases of AI

[Image caption: The computer scientist Hans-Werner Hein in 1986 on the poor image of AI]

The initial phase of AI was characterized by almost boundless expectations regarding the ability of computers to "solve tasks that require intelligence when they are performed by humans". In 1957, Herbert Simon predicted, among other things, that within the next ten years a computer would become world chess champion and would discover and prove an important mathematical theorem. These predictions did not come true. Simon repeated the prediction in 1990, but without specifying a time frame. After all, in 1997 the Deep Blue system developed by IBM succeeded in beating the world chess champion Garry Kasparov in a six-game match. In 2011, the computer program Watson won the quiz show Jeopardy! against the two most successful human players to date.

In the 1960s, Newell and Simon developed the General Problem Solver, a program that was intended to solve any problem using simple methods. After almost ten years of development, the project was finally discontinued. In 1958, John McCarthy proposed representing all human knowledge in a homogeneous, formal form of representation, first-order predicate logic.
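As a generic illustration of knowledge represented in first-order predicate logic (a textbook example, not McCarthy's own), a universally quantified rule plus a fact allows a new fact to be derived:

\[
\forall x\,\bigl(\mathrm{Human}(x) \rightarrow \mathrm{Mortal}(x)\bigr), \quad \mathrm{Human}(\mathrm{Socrates}) \;\vdash\; \mathrm{Mortal}(\mathrm{Socrates}).
\]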

Weizenbaum: ELIZA

At the end of the 1960s, Joseph Weizenbaum (1923–2008) of MIT developed the ELIZA program using a relatively simple procedure, in which a dialogue between a psychotherapist and a patient is simulated. The impact of the program was overwhelming. Weizenbaum himself was surprised at how easy it is to give people the illusion of an understanding partner. "Anyone who misunderstands the program may consider it a sensation," Weizenbaum later said of ELIZA. AI achieved success in a number of areas, such as strategy games like chess and checkers, mathematical symbol processing, the simulation of robots, the proving of logical and mathematical theorems, and finally expert systems. In an expert system, the rule-based knowledge of a specific subject area is formally represented. For specific questions, the system also applies these rules in combinations that human experts would not consider. The rules used to solve a problem can be displayed, i.e. the system can "explain" its result. Individual knowledge elements can be added, changed or deleted; modern expert systems have convenient user interfaces for this purpose.
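The "relatively simple procedure" was essentially keyword and pattern matching on the user's input. A minimal Python sketch in the style of ELIZA (illustrative patterns only, not Weizenbaum's original script) could look like this:

```python
import random
import re

# ELIZA-style rules: a regular expression plus response templates that reflect
# the user's own words back as a question. (Illustrative, not the original script.)
RULES = [
    (r"i need (.*)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r"i am (.*)",   ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (r"my (.*)",     ["Tell me more about your {0}."]),
    (r"(.*)",        ["Please go on.", "Can you elaborate on that?"]),  # fallback
]

def eliza_reply(utterance):
    text = utterance.lower().strip(".!?")
    for pattern, responses in RULES:
        match = re.match(pattern, text)
        if match:
            return random.choice(responses).format(*match.groups())

print(eliza_reply("I am feeling tired"))  # e.g. "How long have you been feeling tired?"
```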

Expert systems

One of the best-known expert systems was MYCIN, developed by T. Shortliffe at Stanford University in the early 1970s. It was used to support diagnosis and treatment decisions for blood infections and meningitis. An evaluation confirmed that its decisions were as good as those of an expert in the field in question and better than those of a non-expert. However, when it was given data on a cholera case, an intestinal rather than a blood infection, the system responded with diagnostic and therapeutic suggestions for a blood infection: MYCIN did not recognize the limits of its competence. This is called the cliff-and-plateau effect. It is typical of expert systems, i.e. computer programs used for diagnostic support (medical decision-support systems) that are highly specialized in a narrow field of knowledge. In the 1980s, in parallel with significant advances in hardware and software, AI was assigned the role of a key technology, particularly in the area of expert systems. A wide variety of industrial applications were hoped for, and "monotonous" human work (and its costs) was also expected to be replaced by AI-controlled systems. However, after many forecasts could not be met, industry and research funding scaled back their commitment. Such a period of declining expectations and investment is known as an AI winter.
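The rule-based representation and the "explanation" of a result described above can be illustrated with a tiny forward-chaining rule engine. This is only a sketch of the general technique; MYCIN itself was far larger, reasoned backward from hypotheses and used certainty factors, and the rule contents below are invented for illustration:

```python
# Each rule maps a set of required facts to a conclusion (invented examples).
RULES = [
    ({"fever", "stiff_neck"}, "suspected_meningitis"),
    ({"suspected_meningitis"}, "recommend_lumbar_puncture"),
]

def infer(initial_facts):
    """Apply rules until no new facts appear; keep a trace as the 'explanation'."""
    facts = set(initial_facts)
    trace = []
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                trace.append(f"{' & '.join(sorted(conditions))} -> {conclusion}")
                changed = True
    return facts, trace

facts, explanation = infer({"fever", "stiff_neck"})
print(facts)        # includes 'recommend_lumbar_puncture'
print(explanation)  # the fired rules, i.e. the system's 'explanation' of its result
```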

Machine learning and neural networks

Expert systems and other systems based on knowledge bases have had only moderate success, because it proved too difficult to translate the required knowledge into formal rules by hand. Machine learning circumvents this weakness. The computer system learns independently from the available data and is thus also able to detect hidden relationships that a human would not have taken into account. Classical methods learn an output function from previously extracted features that were derived from the input data by manual programming. Here, however, a problem similar to that of expert systems emerged: manual feature selection does not always lead to an optimal result. Artificial neural networks (ANNs) are currently a successful architecture for machine learning. They are based on the ability to learn the required features themselves from the raw data, for example directly from camera images.
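The classical setting can be sketched briefly: only the output function is learned, while the features themselves are fixed by the programmer. The data and the choice of features below are purely hypothetical:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical toy data: 200 random 8x8 "images" with random binary labels.
rng = np.random.default_rng(0)
images = rng.random((200, 8, 8))
labels = rng.integers(0, 2, size=200)

# Classical approach: features are chosen by hand (here simply the mean and the
# standard deviation of each image); only the mapping from features to labels is learned.
features = np.stack([images.mean(axis=(1, 2)), images.std(axis=(1, 2))], axis=1)

clf = LogisticRegression().fit(features, labels)
print(clf.predict(features[:5]))  # predictions based solely on the hand-crafted features
```

A neural network, by contrast, would receive the raw pixel values and learn a suitable internal representation itself.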

Historically, the first ANNs were developed as linear models, such as the McCulloch-Pitts cell in 1943 and the Adaline model in 1959. Based on neurophysiology, the information architecture of the human and animal brain was analyzed. Neuroinformatics developed as a scientific discipline to investigate these processes. Weaknesses in modeling even simple logical functions such as XOR with these linear models initially led to a rejection of ANNs and of biologically inspired models in general.
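The XOR weakness mentioned above can be reproduced directly: no linear decision boundary separates the four XOR cases, so a single linear unit cannot classify all of them correctly. A small sketch using scikit-learn's Perceptron:

```python
import numpy as np
from sklearn.linear_model import Perceptron

# XOR truth table: the two classes cannot be separated by a straight line.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

linear_model = Perceptron(max_iter=1000).fit(X, y)
print(linear_model.score(X, y))  # at most 0.75: a linear model cannot represent XOR
```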

Thanks to the development of non-linear, multi-layer, convolutional neural networks and the training methods they require, but also thanks to the availability of the necessary high-performance hardware and large training data sets (e.g. ImageNet), ANNs have achieved successes in numerous pattern-recognition competitions since 2009 and have dominated classical methods based on manually selected features. The multi-layer neural networks used for this are also summarized under the keyword deep learning.
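A minimal sketch of the kind of multi-layer convolutional network meant here, written in PyTorch, is shown below. The layer sizes are illustrative and do not correspond to any specific published architecture:

```python
import torch
from torch import nn

class TinyConvNet(nn.Module):
    """Two convolution/pooling stages followed by a linear classifier."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)            # features are learned from the raw pixels
        return self.classifier(x.flatten(1))

logits = TinyConvNet()(torch.randn(1, 3, 32, 32))  # one random 32x32 RGB image
print(logits.shape)  # torch.Size([1, 10])
```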

Furthermore, ANNs are also used as generative models, that is, to generate realistic-looking images, videos or sound recordings, which became possible in ever better quality from 2014 onwards with the invention of Generative Adversarial Networks. The results of a 2017 work building on this, which generates imaginary images of faces, were described by experts as "impressively realistic". With DeepFakes, the results became known to a broader public from 2017 onwards. In particular, the question was discussed of how far photo or video evidence can still be trusted if it becomes possible to automatically generate arbitrary realistic-looking images.
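The adversarial idea behind such generative models can be sketched on toy one-dimensional data: a generator learns to produce samples that a discriminator can no longer distinguish from real data. The networks, data distribution and hyper-parameters below are illustrative assumptions, not those of any published face-generation or DeepFake system:

```python
import torch
from torch import nn

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" data: a Gaussian centered at 3
    fake = generator(torch.randn(64, 8))

    # Discriminator step: label real samples 1 and generated samples 0.
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make the discriminator output 1 for generated samples.
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(generator(torch.randn(5, 8)).detach().squeeze())  # samples should cluster near 3
```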

Playing partner in board and video games

In the meantime, numerous subdisciplines have emerged within AI, such as special languages and concepts for representing and applying knowledge, models for questions of revisability, uncertainty and imprecision, and machine learning methods. Fuzzy logic has established itself as another form of weak AI, for example in machine control. Other successful AI applications lie in the areas of natural-language interfaces, sensor technology, cybernetics and robotics.

In March 2016, the AlphaGo system defeated the South Korean Lee Sedol, one of the world's best Go players. The program, developed by DeepMind, had previously evaluated millions of archived games with deep learning and had also played against itself several million times.

In August 2017, an artificial intelligence from the company OpenAI defeated some of the world's best professional players (including the professional Danylo "Dendi" Ishutin) at the computer game Dota 2, in a tournament with $24 million in prize money. Dota 2 is considered one of the most complex video games ever, more complex than Go or chess. However, Dota 2 was played here in one-on-one mode, not in the more complex team mode. OpenAI stated that it took the AI only four months to reach this skill level. The AI was trained by repeatedly playing against itself. It was given the same field of view as a human player and was only allowed to perform a limited number of actions simultaneously. OpenAI's goal is now to develop an AI that can defeat the best human players in team mode.


References

  1. A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence. (McCarthy et al.: funding application, August 1955, p. 1) (Memento of September 30, 2008 in the Internet Archive)
  2. Allen Newell, Herbert A. Simon: Computer Science as Empirical Inquiry: Symbols and Search. In: Communications of the ACM. Vol. 19, No. 3, March 1976, pp. 113–126 (PDF).
  3. Hubert L. Dreyfus: The Limits of Artificial Intelligence: What Computers Can't Do. Athenaeum, Königstein 1985 (English original: What Computers Can't Do: The Limits of Artificial Intelligence, 1972).
  4. Hubert L. Dreyfus, Stuart E. Dreyfus: Artificial Intelligence: From the limits of the thinking machine and the value of intuition . Rowohlt rororo, Reinbek 1986 (English Original: Mind Over Machine: The Power of Human Intuition and Expertise in the Era of the Computer . Free Press, New York 1986)
  5. K. R. Popper, J. C. Eccles: The Self and Its Brain (German edition: Piper, Munich 1982, p. 257; English original: Springer, Heidelberg 1977).
  6. G. E. Moore: Cramming more components onto integrated circuits. In: Electronics. Vol. 38, No. 8, 1965 (monolithic3d.com, PDF, 802 kB).
  7. Big data: Computer vs. Human Brain | MS&E 238 Blog. Accessed August 30, 2020 (English).
  8. Google AI twice as smart as Siri - but a six-year-old beats both. October 5, 2017.
  9. Minsky
  10. Documentary Plug & Pray with Joseph Weizenbaum and Raymond Kurzweil.
  11. Ian Goodfellow, Yoshua Bengio, Aaron Courville: Deep Learning. MIT Press, 2016, ISBN 978-0-262-03561-3, Chapter 1 (Introduction), pp. 1 ff. (deeplearningbook.org).
  12. Martin Giles: The GANfather: The man who's given machines the gift of imagination. In: MIT Technology Review (technologyreview.com, accessed November 14, 2018).
  13. David Silver, Aja Huang et al.: Mastering the Game of Go with Deep Neural Networks and Tree Search (PDF). Google DeepMind and Google, January 27, 2016, accessed December 10, 2018.
  14. Eike Kühl: Artificial intelligence: Now it also defeats professional gamers. In: zeit.de, August 19, 2017, accessed December 25, 2019.