BDI agent


BDI agents are a type of software agent. BDI stands for Belief, Desire and Intention, the three main components of this architecture. The agents are provided with assumptions about their environment, knowledge of the target state and intentions on how to achieve this state.

Origin of the BDI architecture

The BDI architecture was originally developed in the Rational Agency Project at the Stanford Research Institute. It goes back to the work of Michael Bratman, professor of philosophy at Stanford University, who studied human decision-making and published his model in 1987. In 1995 it was adapted for practical use by Anand Rao and Michael Georgeff. For this purpose, the artificial agents were equipped with mental abilities: they received knowledge about their environment (beliefs), desirable states (desires) and currently pursued intentions (intentions).

Structure of a BDI agent

Like classical agents, a BDI agent is equipped with sensors that inform it about its environment and with effectors to influence the environment through actions. Between them sits an interpreter that processes the sensor input and selects actions as output. The decisive difference from other agent architectures lies in the data structures on which the interpreter works:

World knowledge (beliefs)

Information about the current state of the world in which the agent operates and reasons is important for all types of AI systems. It is stored in a knowledge base and contains facts about the current environment, the agent's internal state, and background knowledge that may be needed for drawing conclusions.

Any form of knowledge representation can be used for storage. The knowledge base must be updated continuously, driven by the agent's perception and its internal inferences, so that the model of the world remains as accurate as possible.
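The following Python fragment is a minimal sketch of such a belief base, assuming a simple key-value world model; the class name, the fact names and the inference rule are purely illustrative and not taken from any specific framework.

    class BeliefBase:
        def __init__(self):
            self.facts = {}  # current beliefs about the environment and internal state

        def update_from_percepts(self, percepts):
            # New perception overwrites older beliefs about the same aspect of the world.
            self.facts.update(percepts)

        def infer(self):
            # Placeholder for internal reasoning that derives new beliefs
            # from existing ones using background knowledge.
            if self.facts.get("battery_level", 1.0) < 0.2:
                self.facts["needs_charging"] = True

    beliefs = BeliefBase()
    beliefs.update_from_percepts({"battery_level": 0.15, "at_location": "corridor"})
    beliefs.infer()
    # beliefs.facts now also contains "needs_charging": True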

Goals (desires)

This data structure specifies the main goals of the agent, which fundamentally influence its further behavior. A goal is chosen from a set of options and pursued for a certain period of time. Desires are a crucial part of deliberative behavior, since the agent is not prompted to take any further action without a goal.

Goal-directed behavior also makes it possible to keep pursuing the selected goal after an unsuccessful action, for example through an alternative approach or a renewed attempt at a later time.
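A hedged Python sketch of this goal handling, assuming goals are simple labels checked against the belief base and that several candidate approaches exist per goal; all names are hypothetical:

    def select_goal(options, beliefs):
        # Choose the first option that the current beliefs do not already mark as achieved.
        for goal in options:
            if not beliefs.get(goal, False):
                return goal
        return None

    def pursue(goal, approaches):
        # The goal is kept even if an approach fails: try the alternatives in turn.
        for approach in approaches:
            if approach(goal):
                return True
        return False  # goal remains open and can be attempted again later

    goal = select_goal(["package_delivered", "battery_charged"], {"battery_charged": True})
    # goal == "package_delivered"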

Intentions

To achieve the desired goal, the agent has a plan library at its disposal from which it can select hierarchically organized plans that bring it closer to the goal. The selected plans are called intentions.

BDI agents use the hierarchically organized plans for their reasoning: first, a plan is selected that leads directly from the starting state to the target state. A plan normally consists of sub-goals, for each of which a separate, hierarchically subordinate plan is selected. This continues down to the level of elementary actions, for example commands for controlling a motor.
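A possible illustration of this decomposition in Python, assuming a plan library represented as a simple dictionary; the goals and elementary actions are made up for the example:

    PLAN_LIBRARY = {
        "deliver_package": ["pick_up_package", "drive_to_customer", "hand_over"],
        "pick_up_package": ["drive_to_depot", "load_package"],
    }

    ELEMENTARY_ACTIONS = {"drive_to_depot", "load_package",
                          "drive_to_customer", "hand_over"}

    def decompose(goal):
        # Expand a goal into the sequence of elementary actions (e.g. motor commands).
        if goal in ELEMENTARY_ACTIONS:
            return [goal]
        actions = []
        for sub_goal in PLAN_LIBRARY[goal]:
            actions.extend(decompose(sub_goal))
        return actions

    # decompose("deliver_package") yields
    # ['drive_to_depot', 'load_package', 'drive_to_customer', 'hand_over']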

Extension

An extension of the BDI agent architecture is the BOID agent architecture, which additionally takes the agents' obligations into account.[1]

See also

Frameworks for creating BDI agents

References

  1. J. Broersen, M. Dastani, J. Hulstijn, Z. Huang, L. van der Torre: The BOID architecture: conflicts between beliefs, obligations, intentions and desires. In: Proceedings of the Fifth International Conference on Autonomous Agents. ACM, New York 2001, ISBN 1-58113-326-X, pp. 9–16.