Uncertain knowledge


In artificial intelligence and knowledge representation, uncertain knowledge refers to information that is uncertain because of a lack of accuracy or reliability.

In formal logic and decision theory, such information is handled with instruments such as probabilistic logic (probabilistic reasoning), in which, for example, an expected value for the validity of a statement is quantified, fuzzy logic, in which, for example, "fuzzy" predicates can be taken into account, and techniques of non-monotonic reasoning.
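
The following minimal sketch (in Python, with invented names and numbers) illustrates the two ideas just mentioned: an expected value for the validity of a statement, obtained by summing the probabilities of the situations in which it holds, and a fuzzy predicate whose truth value is a degree of membership between 0 and 1 rather than a crisp true/false.

    # Invented example, standard library only: quantifying uncertain statements.

    def expected_validity(worlds):
        """Expected value for the validity of a statement: the sum of the
        probabilities of all situations in which the statement holds."""
        return sum(p for holds, p in worlds if holds)

    # Three possible situations with their probabilities; the statement
    # "the road is wet" holds in the first two of them.
    print(expected_validity([(True, 0.5), (True, 0.2), (False, 0.3)]))  # 0.7

    def tall(height_cm):
        """Fuzzy predicate "tall": a degree of membership in [0, 1]
        instead of a crisp true/false value."""
        if height_cm <= 160:
            return 0.0
        if height_cm >= 190:
            return 1.0
        return (height_cm - 160) / 30

    print(tall(175))  # 0.5, i.e. "somewhat tall"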

For example, software agents treat those components of their knowledge base as "uncertain" for which it cannot be established that they are irrefutably true; this generally applies to all knowledge an agent acquires on its own.

How does uncertain knowledge arise?

Uncertain knowledge can arise for various reasons.

  • First, the environment may be inaccessible, so that not all information about the environment is available to the agent.
  • Second, information may be inaccurate; incorrect information is also considered uncertain knowledge.
  • Third, the agent may misinterpret or misjudge the knowledge.

Spies distinguishes four types of uncertain knowledge:

  1. Fuzzy knowledge, which can be represented and processed with fuzzy logic, used especially in control engineering
  2. Probabilistic inference, i.e. drawing conclusions with the help of probability theory
  3. Plausible reasoning, i.e. reasoning with the help of plausibility as well as positive and negative evidence
  4. Recognizing and understanding incomplete and distorted patterns (pattern recognition)

For each of these types of uncertain knowledge, computer science and mathematics provide suitable methods.
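
As a sketch of the second type, probabilistic inference, the following example applies Bayes' rule to update the probability of a hypothesis in the light of new evidence; the scenario and all numbers are invented for illustration.

    # Probabilistic inference with Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E).
    # Scenario and numbers are invented for illustration.

    def bayes_update(prior, likelihood, likelihood_if_not):
        """Posterior probability of hypothesis H after observing evidence E."""
        evidence = likelihood * prior + likelihood_if_not * (1 - prior)
        return likelihood * prior / evidence

    # Hypothesis H: "the part is defective" (prior probability 1 %).
    # Evidence E: a test that flags 90 % of defective parts but also 5 % of intact ones.
    print(round(bayes_update(prior=0.01, likelihood=0.90, likelihood_if_not=0.05), 3))
    # 0.154 -- the evidence raises the probability, but does not make it certain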

How can agents plan with uncertain knowledge?

When knowledge is uncertain, the agent has two options for dealing with it.

Conditional planning

The agent plans for every possible outcome. This creates a binary tree that contains actions as nodes and the two possible outcomes "success" and "failure" as outgoing edges. If an action fails, the agent therefore always has an alternative plan available and can continue working.

Advantage: The agent is robust against failures; once the plan is in place, it always knows immediately what to do.
Disadvantage: Creating the plan can take a relatively long time, depending on how large it is.
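
A small sketch of the data structure described above, with hypothetical class and action names: each node holds an action and two outgoing branches for the outcomes "success" and "failure", so an alternative plan is available as soon as an action fails.

    # Conditional plan as a binary tree: nodes are actions, the two outgoing
    # edges cover the outcomes "success" and "failure".
    # Class name, action names and the example plan are invented.

    class PlanNode:
        def __init__(self, action, on_success=None, on_failure=None):
            self.action = action          # action to attempt at this node
            self.on_success = on_success  # subplan if the action succeeds
            self.on_failure = on_failure  # subplan ("plan B") if it fails

    def execute(node, try_action):
        """Walk the tree; try_action(action) reports whether the action succeeded."""
        while node is not None:
            print("attempting:", node.action)
            node = node.on_success if try_action(node.action) else node.on_failure

    # Example: the alternative for a blocked door is already part of the plan.
    plan = PlanNode("open door",
                    on_success=PlanNode("walk through"),
                    on_failure=PlanNode("use window"))
    execute(plan, try_action=lambda action: action != "open door")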

Execution monitoring

The agent creates only a "master plan" and executes it. The agent monitors the execution, however, and therefore notices immediately when a failure occurs. In that case it designs a new plan based on the current situation.

Advantage: The plan can be created comparatively quickly.
Disadvantage: In the event of a failure, the agent temporarily pauses execution in order to replan. This is particularly disadvantageous for time-critical tasks.
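
A sketch of this scheme, under the assumption that a planner and a step executor already exist; make_plan and try_step are hypothetical placeholders for them.

    # Execution monitoring: follow a single master plan, but observe every step
    # and replan from the current situation as soon as a step fails.
    # make_plan and try_step are placeholders for a real planner and executor.

    def run_with_monitoring(make_plan, try_step, situation, max_replans=5):
        plan = make_plan(situation)
        for _ in range(max_replans):
            for step in plan:
                situation, ok = try_step(step, situation)
                if not ok:
                    # Failure detected: pause, build a new plan from the
                    # current situation, and start executing it from scratch.
                    plan = make_plan(situation)
                    break
            else:
                return situation  # every step of the current plan succeeded
        raise RuntimeError("gave up after too many replanning attempts")

The for/else construct makes the replanning path explicit: a new plan is only built when a step reports failure, which mirrors the pause in execution described above.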

References

  1. Marcus Spies: Uncertain knowledge: probability, fuzzy logic, neural networks and human thinking. Spektrum Akademischer Verlag, Heidelberg 1993, ISBN 3-86025-006-X, pp. 17–19.

Literature

  • Walter Hehl: The uncanny acceleration of knowledge. Why we don't understand anything and still achieve great things. vdf Hochschulverlag AG at ETH Zurich, Zurich 2012, ISBN 978-3-7281-3455-4.
  • Gerhard Knorz: Uncertain and incomplete knowledge in knowledge-based systems, control of heuristics through systematic observation. In: Brigitte Endres-Niggemeyer and Jürgen Krause (eds.): Informatik-Fachberichte 114: Language processing in information and documentation: Hanover, 5–7 March 1985; Proceedings. Springer-Verlag, London 1985, ISBN 3-540-16071-X, pp. 203–215.
  • Marcus Spies: Uncertain knowledge: probability, fuzzy logic, neural networks and human thinking. Spektrum Akademischer Verlag, Heidelberg 1993, ISBN 3-86025-006-X.
  • Oliver Thomas: Service modeling: methods, tools and industry solutions. Ed.: Markus Nüttgens. Physica-Verlag, Berlin 2008, ISBN 978-3-7908-2098-0.
  • Stuart J. Russell, Peter Norvig: Artificial intelligence: a modern approach. Prentice-Hall, London 1995, ISBN 0-13-103805-2, Chapter V: Uncertain knowledge and reasoning, pp. 413–522.
