Markov logic network

From Wikipedia, the free encyclopedia

Markov logic networks combine probabilistic graphical models with first-order predicate logic and thus enable probabilistic inference. They were introduced in 2006 by Matthew Richardson and Pedro Domingos. In a Markov logic network, inference is typically carried out using Markov chain Monte Carlo (MCMC) methods.

Motivation

First-order predicate logic is well suited when reliable information is available and relationships hold with certainty. However, this is seldom the case in the real world. The inference from "It rained" to "The ground is wet" is often correct, but someone could also have wet the ground with a garden hose.

Probabilistic graphical models, on the other hand, allow such uncertain inference. If the random variables can be related by a directed acyclic graph (DAG), a Bayesian network is obtained. If the connection between the random variables is expressed by an undirected graph, a Markov network (Markov random field) is obtained. In Bayesian networks the edges stand for causality, in Markov networks only for correlation.

Definition

A Markov logic network is defined as follows:

A Markov logic network L is a set of tuples (F_i, w_i), where F_i is a formula of first-order predicate logic and w_i is a real-valued weight.

Together with a finite set of constants C = {c_1, …, c_n}, the Markov logic network L defines a Markov random field M_{L,C}. For this purpose, every possible grounding of every predicate in L with constants from C becomes a node (a binary random variable) in the Markov random field.
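The grounding step and the resulting distribution over worlds can be sketched in a few lines of Python. The toy model below (the single formula "Smokes(x) → Cancer(x)" with weight 1.5 and the constants Anna and Bob) is purely illustrative and not taken from the article; it follows the semantics of Richardson and Domingos, where a world x has probability proportional to exp(Σ_i w_i n_i(x)), with n_i(x) the number of satisfied groundings of formula F_i.

```python
import itertools
import math

# Illustrative toy MLN (names are hypothetical, not a real library):
# one first-order formula "Smokes(x) -> Cancer(x)" with weight 1.5,
# grounded over the constants {"Anna", "Bob"}.
constants = ["Anna", "Bob"]
weight = 1.5

# Ground atoms: every predicate applied to every constant becomes one
# binary node of the Markov random field.
atoms = [(pred, c) for pred in ("Smokes", "Cancer") for c in constants]

def n_satisfied(world):
    """Number of satisfied groundings of Smokes(x) -> Cancer(x)."""
    return sum(1 for c in constants
               if (not world[("Smokes", c)]) or world[("Cancer", c)])

# Enumerate all possible worlds (truth assignments to the ground atoms)
# and compute the unnormalized weight exp(w * n(x)) of each.
worlds = []
for bits in itertools.product([False, True], repeat=len(atoms)):
    world = dict(zip(atoms, bits))
    worlds.append((world, math.exp(weight * n_satisfied(world))))

Z = sum(w for _, w in worlds)            # partition function
probs = [(world, w / Z) for world, w in worlds]
# Worlds that satisfy more groundings of the formula get higher probability.
```

Note that the weight softens the formula: worlds violating a grounding are not impossible, only exponentially less likely, which is exactly the contrast with hard first-order logic drawn in the motivation section.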

Inference

One inference task in Markov logic networks is to find the most probable world (the most probable assignment of truth values to all ground atoms). In practice, inference in Markov logic networks is typically approximated by sampling methods such as Markov chain Monte Carlo (MCMC).
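As a minimal sketch of MCMC over the ground Markov random field, the following Gibbs sampler estimates a marginal probability in a toy model (the formula "Smokes(x) → Cancer(x)" with weight 1.5 and constants Anna and Bob; all names and numbers are illustrative assumptions, not part of the article):

```python
import math
import random

random.seed(0)
constants = ["Anna", "Bob"]
weight = 1.5
atoms = [(p, c) for p in ("Smokes", "Cancer") for c in constants]

def n_satisfied(world):
    """Satisfied groundings of the toy formula Smokes(x) -> Cancer(x)."""
    return sum(1 for c in constants
               if (not world[("Smokes", c)]) or world[("Cancer", c)])

def gibbs(steps=20000):
    """Estimate P(Cancer(Anna)) by Gibbs sampling, one form of MCMC."""
    world = {a: random.random() < 0.5 for a in atoms}  # random initial world
    count_true = 0
    for _ in range(steps):
        a = random.choice(atoms)
        # Conditional probability of a=True given the rest of the world,
        # computed from the unnormalized weights exp(w * n(x)).
        world[a] = True
        w_true = math.exp(weight * n_satisfied(world))
        world[a] = False
        w_false = math.exp(weight * n_satisfied(world))
        world[a] = random.random() < w_true / (w_true + w_false)
        count_true += world[("Cancer", "Anna")]
    return count_true / steps

print(gibbs())
```

Resampling one ground atom at a time from its conditional distribution lets the chain explore high-probability worlds without ever computing the partition function, which is what makes MCMC attractive here.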

Relationship to other statistical models

Several standard statistical models, for example Markov networks and Bayesian networks, are special cases of Markov logic networks.

References

  1. ^ a b Matthew Richardson, Pedro Domingos: Markov logic networks. In: Machine Learning. Vol. 62, No. 1–2, 2006, pp. 107–136, doi:10.1007/s10994-006-5833-1.