Chinese room


The Chinese room is a thought experiment by the philosopher John Searle. With it, Searle attempts to refute the view that digital computers could gain consciousness simply by running a suitable program.

In the thought experiment, one imagines a closed room in which a person who understands no Chinese answers questions posed in Chinese script, in Chinese script, by following instructions written in their native language. People outside the room conclude from the answers that the person in the room speaks Chinese, although this is not the case.

The experiment is intended to show that a computer can execute a program and manipulate sequences of characters according to rules without understanding the meaning of those characters. According to Searle, the ability to follow syntax does not by itself give rise to semantics; for that, a computer would have to exhibit intentionality.

With his thought experiment, Searle sought to show that it is not enough for a programmed computer to pass the Turing test in order to count as intelligent. Passing the Turing test is therefore not a sufficient criterion for so-called strong artificial intelligence. The experiment also calls computational theories of mind into question.

Searle introduced the thought experiment in his 1980 essay Minds, Brains, and Programs.

Thought experiment

Searle described a closed room containing a person. Notes with stories written in Chinese are passed to this person through a slot in the door. The person does not speak Chinese and therefore understands neither the meaning of the individual characters nor the meaning of the story. They then receive a slip of paper with questions about the story (likewise in Chinese script). In addition, the person finds a pile of Chinese texts and a "manual" of rules written in their native language. The texts contain supplementary and background information on the stories (a kind of knowledge database). The manual makes it possible to relate the characters to the story, but only at the level of character recognition (by the shape of the characters). The person also takes instructions from the manual as to which characters to copy (depending on the characters in the story, the texts, and the questions) onto the answer sheet. They thus follow purely mechanical instructions and push the result (the "answers" to the questions) through the slot in the door without ever having understood the story or the questions.

Outside the door, a native Chinese speaker reads the answer sheets. From the meaningfulness of the answers, they conclude that there is a Chinese-speaking person in the room who has understood the story.
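
The purely mechanical character of this processing can be illustrated with a short sketch. The following Python fragment is not part of Searle's argument but a deliberately crude model of the room, in which the "rule manual" is reduced to a lookup table with invented entries:

    # A crude model of the Chinese room: the "rule manual" is a lookup
    # table mapping input character sequences to output character
    # sequences. Nothing in the program represents the meaning of any symbol.

    # Hypothetical rule entries; the questions and answers are invented
    # examples, not from Searle's text.
    RULE_MANUAL = {
        "你叫什么名字？": "我没有名字。",   # "What is your name?" -> "I have no name."
        "故事里有几个人？": "两个。",       # "How many people are in the story?" -> "Two."
    }

    def room_occupant(question: str) -> str:
        """Match the question purely by the shape of its characters and
        copy out the prescribed answer; no understanding is involved."""
        return RULE_MANUAL.get(question, "我不明白。")  # fallback: "I do not understand."

    print(room_occupant("你叫什么名字？"))  # correct Chinese, produced by pure matching

However long such a table is made, the program only ever matches shapes against shapes; this is precisely the gap between syntax and semantics that Searle points to.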

The thought experiment has been interpreted in several ways.

Views

Systems reply

Critics object that although the individual in the room does not speak Chinese, it is not clear why the room as a whole, i.e. the combination of executing agent (the human being), data, and instructions, should not understand the language. This combination is regarded as a thinking system. This position tends to support both the possibility of artificial intelligence (AI) and a functionalist view of what counts as a bearer of understanding.

This debate reveals differing conceptions of "understanding" and of cognitive processes in general. Many proponents of the strong AI thesis (the claim that running suitable programs does not merely simulate thinking or understanding but can actually produce it) hold that human thought processes are an (unconscious) processing of input (stimuli) that follows certain mechanizable rules. Critics counter that human thinking consists of more than this and cannot be abstracted into a program.

Searle responded to this criticism by arguing that the person would still not understand the language even if they memorized the data and the instructions for manipulating the characters, thereby becoming the entire thinking system themselves.

Another line of criticism questions the validity of the experiment itself. In particular, it is doubtful whether the required "rule manual" could even exist. The actual "intelligence" of the setup clearly resides in this set of rules, and it is questionable whether it could ever be extensive enough to link every conceivable question to a meaningful answer. If, on closer inspection, the set of rules cannot even exist in the postulated form in theory (for example because it would have to be infinitely large), any conclusions drawn from the experiment would be meaningless.
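
One way to make this worry concrete is a rough counting argument (an illustrative sketch; the figures are assumptions, not part of the original criticism): if questions are modeled as strings over a fixed character inventory and each must be answered by an explicit entry, the manual's size grows exponentially with question length.

    # A rough counting sketch (illustrative assumptions: questions are
    # strings over a fixed character inventory and are answered by
    # explicit one-to-one lookup in the manual).
    symbols = 3000       # rough order of magnitude of common Chinese characters
    max_length = 20      # even restricting to short questions
    possible_questions = sum(symbols ** n for n in range(1, max_length + 1))
    print(f"{possible_questions:.2e}")  # about 3.5e69 entries -- no physical book could list them

Whether general rules could compress this space without already encoding linguistic competence is exactly what this objection calls into question.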

The prior step, the creation of the rule manual, must also be considered part of the overall system. One can argue that the "intelligence" of the executing system (the room) in fact derives from the knowledgeable author of the manual, who must understand Chinese in order to formulate meaningful rules. The question of who holds the intelligence thus shifts to an external entity that was simply left out of the original thought experiment.

Discussions of the experiment often overlook the fact that intelligence and consciousness are two different qualities that must not be conflated. An artificial intelligence (in the weak sense) merely has to be able to solve problems; it need not experience the problem-solving consciously.

Robot reply

"You build a robot that not only takes in formal symbols but can also (controlled by a computer 'brain') interact with its environment. Such a computer could understand concepts on an entirely different level and have mental states."

One can object here that the robot, despite its different kinds of output, still has no intentional states (states that refer to something in the world); it reacts solely on the basis of its program. However, this objection rests on the misconception that biological cognitive systems are immune to the same criticism: babies, for example, initially also react only to the stimuli offered to them. Biological cognitive systems develop self-directed, complex behavior chiefly on the basis of their genetically determined and socially acquired information.

Brain simulator reply

"You build an artificial neural network that reacts exactly like a speaking or writing native Chinese speaker. Since this network provides functionality equivalent to that of the native speaker's brain, it will be equally capable of understanding the stories."

On the one hand, according to Searle, this approach misses the real question entirely, and he finds it rather strange that such a proposal should come from the proponents of strong AI, since the basic question of AI research is precisely to reveal how the mind works without having to appeal to the neuronal states of the brain.

"I thought the whole idea of ​​strong AI is that we don't need to know how the brain works to know how the mind works."

"I thought the whole idea of ​​strong AI is that we don't have to know how the brain works to know how the mind works."

- Searle (2000)

On the other hand, even if such a neural network could be simulated completely, this would in no way give rise to an understanding of a particular language or story. If, for example, the symbols in the Chinese room are replaced by pipes and valves (simulating the neural network of the brain), which the occupant turns according to the rules in order to produce an answer in Chinese, the occupant still understands nothing of the actual language or the story.
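
The pipes-and-valves image can be restated computationally: simulating a neural network amounts to repeatedly applying arithmetic update rules to numbers. A minimal sketch (the weights and inputs are arbitrary placeholder values, not a model of any real brain):

    import math

    # Minimal simulation of one layer of an artificial neural network.
    # The occupant of the room could evaluate these formulas by hand:
    # every step is pure arithmetic, with no reference to what the
    # numbers are supposed to mean.
    weights = [[0.2, -0.5, 0.1],   # placeholder values, chosen arbitrarily
               [0.7, 0.3, -0.2]]
    inputs = [1.0, 0.0, 1.0]

    def sigmoid(x: float) -> float:
        return 1.0 / (1.0 + math.exp(-x))

    # Weighted sum plus nonlinearity -- the entire content of the "simulation".
    outputs = [sigmoid(sum(w * x for w, x in zip(row, inputs))) for row in weights]
    print(outputs)

Whether carried out by valves or by arithmetic, every step can be performed without any reference to what the quantities stand for.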

Combination reply

"If the various approaches are combined, the result is a robot with a built-in computer in its head, programmed so that it can simulate all the synapses of a human brain. Its behavior would then be indistinguishable from that of a human being. And to this robot as a whole, one should ultimately be able to ascribe intentionality, and with it understanding."

At first glance, even Searle would ascribe understanding to such a robot. The "super" robot is in the same position as animals: judging by its behavior, we have to ascribe this ability to a creature that acts this way, because otherwise we could not explain its behavior to ourselves.

For Searle, however, this argument also misses the real problem. Strong AI seeks to show that correct answers to certain questions are a sufficient criterion for ascribing understanding to a program. The combination reply therefore overshoots the mark: one might well ascribe understanding to the "super" robot, but not to the small program in its head.


Further reading

A translation of the article, together with a commentary by one of the most important representatives of the "opposition", Douglas R. Hofstadter, can be found in:

  • Douglas R. Hofstadter, Daniel C. Dennett: The Mind's I: Fantasies and Reflections on Self and Soul. Basic Books, New York 1981.

A comprehensive treatment of the debates surrounding the Chinese room can be found in:

  • Martin Dresler: Artificial Intelligence, Consciousness and Language: The Thought Experiment of the Chinese Room. Königshausen & Neumann, Würzburg 2009, ISBN 978-3-8260-4054-2 .

A general discussion of AI (including the Chinese room) in fictional, lay-friendly form can be found in:


References

  1. John R. Searle: Minds, Brains, and Programs. In: The Behavioral and Brain Sciences, 1980 (3), pp. 417–457.
  2. Ian Ravenscroft: Philosophy of Mind: An Introduction. Reclam-Verlag, 2008, ISBN 978-3-15-018440-0, p. 168.
  3. Stevan Harnad: Searle's Chinese Room Argument. In: Encyclopedia of Philosophy. Macmillan, 2005.
  4. John R. Searle: Minds, Brains, and Programs. In: Robert Cummins, Denise D. Cummins: Minds, Brains, and Computers. The Foundations of Cognitive Science. An Anthology. Blackwell, Malden/MA 2000, ISBN 1-55786-877-8, p. 146.