Jordan network

From Wikipedia, the free encyclopedia

A Jordan network is a partially recurrent artificial neural network that feeds the network's output back as part of the input in the subsequent time step. Like the Elman network, the Jordan network can therefore process sequences of inputs over time.

[Figure: Scheme of a Jordan network]

The context neurons can additionally feed back onto themselves, which controls how strongly older patterns influence the current state. The strength of this self-feedback is set by a fixed memory factor λ. Jordan networks can be trained with backpropagation, for example, but the backward-directed (feedback) edges are not adapted during training.
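The forward pass described above can be sketched as follows. This is a minimal illustration with hypothetical toy dimensions and randomly initialized weights, not a reference implementation; the names (`W_in`, `W_out`, `step`) and the use of tanh activations are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy sizes for illustration
n_in, n_hidden, n_out = 3, 5, 2
lam = 0.5  # fixed memory factor λ for the context self-feedback

# Only these weights would be adapted by backpropagation; the feedback
# edges carrying output -> context are fixed, as described in the text.
W_in = rng.standard_normal((n_hidden, n_in + n_out))  # sees input + context
W_out = rng.standard_normal((n_out, n_hidden))

def step(x, context):
    """One time step of a Jordan network.

    The context units hold a decaying trace of past outputs:
    context ← λ·context + previous output.
    """
    h = np.tanh(W_in @ np.concatenate([x, context]))
    y = np.tanh(W_out @ h)
    new_context = lam * context + y  # fixed, untrained feedback
    return y, new_context

# Run a short input sequence through the network
context = np.zeros(n_out)
for t in range(4):
    x = rng.standard_normal(n_in)
    y, context = step(x, context)
```

Note that the context vector necessarily has the same size as the output layer (`n_out`), which is the structural restriction discussed next.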

Compared with the similar Elman network, the Jordan network has the structural disadvantage that the number of feedback connections is fixed by the number of output neurons and is thus dictated by the problem.

In Elman networks, the outputs of the hidden layer are fed back instead of the output neurons, which removes this restriction of the Jordan network. For this reason, Elman networks generally give better results.
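For comparison, the Elman variant of the same sketch feeds back the hidden activations rather than the outputs, so the context size equals the (freely chosen) hidden-layer size. As above, the dimensions and weight names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 3, 5, 2

# The input layer now sees input + previous hidden state
W_in = rng.standard_normal((n_hidden, n_in + n_hidden))
W_out = rng.standard_normal((n_out, n_hidden))

def elman_step(x, context):
    # Elman: the context holds the previous hidden activations, so its
    # size (n_hidden) is independent of the number of output neurons.
    h = np.tanh(W_in @ np.concatenate([x, context]))
    y = np.tanh(W_out @ h)
    return y, h  # the new context is the hidden state itself

context = np.zeros(n_hidden)
x = rng.standard_normal(n_in)
y, context = elman_step(x, context)
```

Here the context vector has length 5 (the hidden-layer size) regardless of the 2 output neurons, which is exactly the restriction the Jordan network cannot avoid.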

Literature

  • Michael I. Jordan: Attractor dynamics and parallelism in a connectionist sequential machine. In: Joachim Diederich (Ed.): Artificial Neural Networks: Concept Learning. IEEE Computer Society, Washington, DC 1990, pp. 112–127, ISBN 0-8186-2015-3.
  • Matthias Haun: Introduction to the computer-based simulation of artificial life. expert Verlag, Renningen 2004, ISBN 3-8169-1856-5.