Soar (cognition)

from Wikipedia, the free encyclopedia

Soar (originally SOAR, an acronym for State, Operator And Result) is a cognitive architecture: a theory that defines all the primitive mechanisms and structures underlying human cognition. These primitive principles remain constant over long periods of time and across different application domains. The most important of these primitive principles in Soar are:

  1. Problem solving is represented as search in problem spaces
  2. Permanent knowledge is represented by production rules (in production memory)
  3. Temporary knowledge is represented by objects (in working memory)
  4. New goals are generated only when dead ends (impasses) occur
  5. Learning mechanisms: chunking and, from version 9.0.0, also reinforcement learning

On the basis of this architecture, more complex human abilities can be modeled (e.g. mental arithmetic, language processing, learning processes). When these models are mature and complete, it should be possible to create an artificially intelligent agent that exhibits all forms of human behavior. Soar would then be the long-sought "unified theory of cognition" (Newell 1990), which unites all previous, incoherent theories.

Implementation of Soar

The above five primitive cognitive principles have been implemented in a computer program (current version: Soar Suite 9.0.0), which is available for free download (see links below).

How Soar Works

During problem solving, Soar moves from an initial state to one of possibly several goal states. Each state represents a particular situation within the problem space (e.g. the current location in a maze). Exactly one operator is applied to each state, producing a new state (e.g. a movement in the maze leads to a new location). This continues until a goal state is reached. A problem space never has to be represented completely, only one or more of its states.
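This state/operator loop can be sketched in Python (a hypothetical illustration, not Soar code; the function and parameter names are invented for this example):

```python
# Hypothetical Python sketch (not Soar code) of the loop described above:
# exactly one operator is selected for and applied to the current state,
# until a goal state is reached.

def run(state, is_goal, select_operator, apply_operator):
    trace = [state]
    while not is_goal(state):
        op = select_operator(state)         # exactly one operator per state
        state = apply_operator(state, op)   # the operator yields a new state
        trace.append(state)
    return trace

# Tiny "maze" on a line: positions 0..4, goal at position 4,
# the only operator is a step to the right.
trace = run(0,
            is_goal=lambda s: s == 4,
            select_operator=lambda s: +1,
            apply_operator=lambda s, op: s + op)
print(trace)  # [0, 1, 2, 3, 4]
```

Note that only the current state is ever held in full; the rest of the problem space remains implicit, exactly as the text describes.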

A program written in the Soar programming language looks, for example, like this (comments are marked with #):

# Production rule 1:
sp {propose*hello-world             # this rule proposes the "hello-world" operator
   (state <s> ^type state)          # condition: IF a state <s> exists in working memory, the rule "fires"
-->
   (<s> ^operator <o> +)            # action 1: propose applying an operator <o> to the current state
   (<o> ^name hello-world)}         # action 2: <o> is given the name "hello-world"

# Production rule 2:
sp {apply*hello-world               # this rule applies the "hello-world" operator once it has been selected
   (state <s> ^operator <o>)        # condition 1: an operator <o> has been selected
   (<o> ^name hello-world)          # condition 2: <o> has the name "hello-world"
-->
   (write |Hello World|)            # action 1: print "Hello World"
   (halt)}                          # action 2: halt the problem-solving process

When this program is loaded, the two production rules are added to production memory (permanent knowledge). If you now create an agent and start it, the agent prints "Hello World" and then stops. Without a program, Soar is just an architecture with no problem-solving capabilities; more complex programs can model human cognitive abilities (see above).

When an agent is started, Soar goes through a cycle:

  1. Input (optional)
    • New sensor data about the environment is written into working memory
  2. Operator proposal
    • Production rules test whether working memory (with the current state) meets certain conditions
    • In the example above, this is production rule 1
    • If so, the rules propose applying a certain operator (they "fire")
  3. Operator comparison
    • A preference is assigned to each operator candidate
  4. Operator selection
    • Case 1: One operator has a higher preference than all others; continue with step 5
    • Case 2: No single operator can be selected; this is called an impasse
      • → A substate (intermediate state) with its own goal is created
      • → The goal is now to resolve the impasse, e.g. by trial and error (a new cycle) or by asking the user
  5. Operator application
    • All matching production rules are applied; this can produce a new state or end the cycle
    • In the example above, this is production rule 2
  6. Output (optional)
    • Commands are passed on to the environment
  7. Continue with step 1

In the case of the above program, this cycle is run through only once, but there are also more complex programs.
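The phases of this cycle can be sketched in Python (a hypothetical illustration with invented names, not the real Soar API): proposal rules suggest operators with preferences, the best candidate is selected, and application rules change the state.

```python
# A rough Python sketch of Soar's decision cycle (hypothetical names, not
# the real Soar API).

def decision_cycle(state, propose_rules, apply_rules,
                   get_input=None, send_output=None):
    while True:
        if get_input:                                  # 1. input (optional)
            state.update(get_input())
        candidates = [op for rule in propose_rules     # 2. operator proposal:
                      if (op := rule(state))]          #    matching rules "fire"
        if not candidates:
            return state                               # nothing left to do
        best = max(candidates,                         # 3./4. compare preferences
                   key=lambda op: op["preference"])    #    and select (a tie would
                                                       #    be an impasse in Soar)
        for rule in apply_rules:                       # 5. operator application
            rule(state, best)
        if send_output:                                # 6. output (optional)
            send_output(state)
        if state.get("halted"):                        # a halt ends the run
            return state                               # 7. otherwise: next cycle

# The hello-world program from above, recast as propose/apply rules:
propose = [lambda s: None if s.get("halted")
           else {"name": "hello-world", "preference": 1}]

def apply_hello(state, op):
    if op["name"] == "hello-world":
        print("Hello World")        # action 1: print
        state["halted"] = True      # action 2: halt

decision_cycle({}, propose, [apply_hello])  # prints "Hello World" once
```

As in the real hello-world program, the cycle runs exactly once here because the apply rule halts the agent.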

Working memory

The current state, the current operator and any substates are stored here as so-called WMEs (working memory elements). A WME consists of an identifier (e.g. Operator1), an attribute (e.g. ^name) and a value (e.g. "hello-world"). All WMEs with the same identifier are combined into an "object". The value of an attribute can be a constant or the identifier of another object.
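This structure can be sketched as identifier-attribute-value triples in Python (a hypothetical representation for illustration, not Soar's internal format):

```python
# Hypothetical Python representation of working memory: a WME is an
# (identifier, attribute, value) triple, and all WMEs that share an
# identifier form one "object".

wmes = [
    ("S1", "^type", "state"),
    ("S1", "^operator", "O1"),        # value is the identifier of another object
    ("O1", "^name", "hello-world"),   # value is a constant
]

def obj(identifier, memory):
    """Combine all WMEs with the given identifier into one object."""
    return {attr: value for ident, attr, value in memory
            if ident == identifier}

print(obj("O1", wmes))  # {'^name': 'hello-world'}
```

Following the `^operator` value from the S1 object to the O1 object shows how objects link to one another through identifiers.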

Chunking learning mechanism

If an impasse is reached in step 4 of the decision cycle, it must be resolved (e.g. by a new cycle with a different goal). If the impasse can be resolved, a new production rule, called a "chunk", is created. Once this new rule has been added to production memory, a similar impasse will no longer occur in the future, since the new rule contains a way of resolving it (e.g. the correct choice of operator).
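The core idea can be sketched in Python (a loose analogy, not Soar's actual chunking mechanism; all names here are invented for illustration):

```python
# Hypothetical sketch of the idea behind chunking: once an impasse has been
# resolved (here: by an expensive resolve function standing in for search in
# a substate), the result is stored as a new rule (a "chunk"), so the same
# impasse never recurs.

chunks = {}   # learned rules: situation -> operator

def choose(situation, resolve_impasse):
    if situation in chunks:
        return chunks[situation]            # a chunk fires: no impasse
    operator = resolve_impasse(situation)   # impasse: resolve in a substate
    chunks[situation] = operator            # store the result as a chunk
    return operator

calls = []
def resolve(situation):
    calls.append(situation)                 # count the expensive resolutions
    return "go-left"

choose("fork-in-maze", resolve)
choose("fork-in-maze", resolve)             # second call: the chunk fires
print(len(calls))  # 1
```

The second call never reaches the expensive resolution step, mirroring how a learned chunk prevents the impasse from recurring.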

Applications of Soar

  • Modeling of cognitive processes in cognitive science and artificial intelligence
  • Predicting human performance
  • Commercial version of Soar: "KB Agent" (Explore Reasoning Systems, Inc.)
  • Soar as a robot control architecture
  • TacAirSoar: Simulation of opposing fighter pilots in a virtual environment
  • Simulation of opposing players in the computer game "Quake II"

History of Soar

At the beginning of its development, "SOAR" was written entirely in capital letters and stood for "State, Operator And Result", because problem solving in Soar is always a search in the associated problem space, in which an operator is applied to a state in order to obtain a result. Soar was developed by Allen Newell, John Laird and Paul Rosenbloom in the early 1980s.

Criticism

Cognitive architectures assume that all cognitive processes can be traced back to a few principles (e.g. the "firing" of rules). However, there is evidence suggesting a large number of highly specialized, neuroanatomically defined functions that cannot be reduced to such principles (e.g. Kolb & Whishaw 1990).

Literature

Web links