Decoding rule


In information theory and coding theory, more precisely in channel coding, a decoding rule is a rule that specifies which transmitted word is to be assigned to a received word.

Description

The problem is as follows: a transmitter (source) S sends a word c of n characters, made up of letters from an alphabet A with q letters. The transmission passes through a noisy channel, which means that individual letters may be changed by transmission errors. The receiver thus obtains a possibly altered word; its goal is to recover the word that was actually sent.

Simplifying assumptions are almost always made for the mathematical treatment:

  • the channel is discrete: the output signal can take only a finite number of different values,
  • the channel is memoryless: the probability of an error in a letter does not depend on the letters sent previously,
  • the channel is symmetric: the probability that a sent letter also arrives correctly is the same for all letters, and the probability of every possible error is the same. In formulas: P(a | a) = 1 - p and P(b | a) = p/(q - 1) for b ≠ a, where P(b | a) denotes the probability that the letter b is received when a was sent.

To make largely error-free decoding possible, redundancy is usually added to the words in a targeted manner by using error-correcting codes.
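These assumptions, together with the use of a repetition code for redundancy, can be illustrated with a small simulation. The following Python sketch is only illustrative: the binary alphabet, the error probability p = 0.1 and the code length 3 are assumed values chosen to match the example further below.

```python
import random

def symmetric_channel(word, p, alphabet=(0, 1)):
    """Transmit a word over a discrete memoryless symmetric channel.

    Each letter is received correctly with probability 1 - p; with
    probability p it is replaced by one of the other q - 1 letters,
    all equally likely (here q = 2, so the bit is simply flipped).
    """
    received = []
    for letter in word:
        if random.random() < p:
            others = [a for a in alphabet if a != letter]
            received.append(random.choice(others))
        else:
            received.append(letter)
    return tuple(received)

def repetition_encode(bit, n=3):
    """Add redundancy: repeat the source bit n times (code {000, 111} for n = 3)."""
    return (bit,) * n

# Example run (p = 0.1 is an assumed value for illustration):
codeword = repetition_encode(1)                    # (1, 1, 1)
received = symmetric_channel(codeword, p=0.1)
print(codeword, "->", received)
```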

Minimum error decoding

With minimum error decoding, one tries to find the word that was most likely sent. This depends on two main factors: on the one hand, the error probability of the channel; on the other hand, the entropy of the source, i.e. whether the sent words are uniformly distributed and whether they depend on one another. Minimum error decoding is always the optimal decoding rule, but it is generally difficult to determine.
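The rule can be written down directly as a small sketch: minimum error decoding chooses the code word c that maximizes the posterior probability P(c sent | w received), which is proportional to P(w received | c sent)·P(c sent). The following Python fragment assumes a binary symmetric memoryless channel with error probability p and a given prior distribution over the code words; the function names and the values in the usage line are chosen freely for illustration.

```python
def hamming_distance(x, y):
    """Number of positions in which the words x and y differ."""
    return sum(a != b for a, b in zip(x, y))

def channel_likelihood(received, codeword, p):
    """P(received | codeword sent) for a binary symmetric memoryless channel."""
    d = hamming_distance(received, codeword)
    return (p ** d) * ((1 - p) ** (len(codeword) - d))

def minimum_error_decode(received, code, priors, p):
    """Choose the code word with the largest posterior P(c | received).

    The normalizing factor P(received) is the same for every code word,
    so it suffices to maximize P(received | c) * P(c).
    """
    return max(code, key=lambda c: channel_likelihood(received, c, p) * priors[c])

# Example with the repetition code and a source that prefers 000 (values assumed):
print(minimum_error_decode("100", ["000", "111"], {"000": 0.75, "111": 0.25}, p=0.3))  # '000'
```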

Example: Let the alphabet be binary, A = {0, 1}, let the discrete memoryless symmetric channel have error probability p, and let the code be the binary repetition code of length 3, C = {000, 111}. For a fixed pattern of error positions, the probability is

  • for no transmission error: (1 - p)³
  • for one transmission error: (1 - p)²·p
  • for two transmission errors: (1 - p)·p²
  • for three transmission errors: p³

Suppose that, on a statistical average, 000 is sent three times as often as 111, so that P(X = 000) = 3/4 and P(X = 111) = 1/4, where the random variable X stands for the sent word and the random variable Y for the received word. Now a word y is received; consider the corresponding probabilities.

By Bayes' theorem, the probability that 000 was sent is

P(X = 000 | Y = y) = P(Y = y | X = 000) · P(X = 000) / P(Y = y),

and the probability that 111 was sent is

P(X = 111 | Y = y) = P(Y = y | X = 111) · P(X = 111) / P(Y = y).

The channel probabilities P(Y = y | X = c) are the error-pattern probabilities listed above, determined by the number of positions in which y differs from the code word c. The received word is decoded to the code word with the larger posterior probability; unlike pure maximum likelihood decoding, this decision also takes the unequal prior probabilities of 000 and 111 into account.
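For concreteness, one can put numbers into the comparison. The values below are assumptions chosen only for illustration: the channel error probability is taken as p = 0.3, and the received word is taken to differ from 000 in two positions and from 111 in one. With these values the larger prior probability of 000 outweighs the smaller Hamming distance to 111.

```python
p = 0.3   # assumed channel error probability (illustration only)
# Assumed received word: differs from 000 in two positions and from 111 in one.

# Unnormalized posteriors P(y | c) · P(c); the common factor 1 / P(y) is omitted.
post_000 = (1 - p) * p ** 2 * 0.75   # = 0.063 · 3/4 = 0.04725
post_111 = (1 - p) ** 2 * p * 0.25   # = 0.147 · 1/4 = 0.03675

print("decode to 000" if post_000 > post_111 else "decode to 111")  # decode to 000
```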

Maximum likelihood decoding

In the case of maximum likelihood decoding, a received word w is decoded to the (code) word that most likely generated it, i.e. to the code word c for which P(w received | c sent) is maximal. In contrast to minimum error decoding, maximum likelihood decoding is relatively easy to carry out: under the standard assumption of a discrete, memoryless, symmetric channel in which correct transmission of a letter is more likely than any particular error (for a binary channel, error probability p < 1/2), the code word is selected that differs from w in the fewest positions, i.e. the one with the smallest Hamming distance to w.
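Under that assumption the rule reduces to a nearest-neighbour search in Hamming distance. A minimal sketch (the received words in the usage lines are assumed examples):

```python
def hamming_distance(x, y):
    """Number of positions in which the words x and y differ."""
    return sum(a != b for a, b in zip(x, y))

def maximum_likelihood_decode(received, code):
    """Decode to the code word closest to the received word in Hamming distance
    (for a binary symmetric channel with p < 1/2 this is the code word that
    most likely generated it); ties are broken arbitrarily by min()."""
    return min(code, key=lambda c: hamming_distance(received, c))

print(maximum_likelihood_decode("100", ["000", "111"]))  # '000'
print(maximum_likelihood_decode("011", ["000", "111"]))  # '111'
```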

Maximum likelihood decoding is the decoding rule most commonly used in coding theory.

Provided that the source emits its characters/words memorylessly and uniformly distributed, and that the channel is discrete, symmetric and memoryless, maximum likelihood decoding is a minimum error decoding. For this reason, entropy coding (for example, compressing the data to be sent) is often carried out before block coding for error correction.

Example: In the above example, maximum likelihood decoding assigns the received word to the code word from which it has the smallest Hamming distance, regardless of how often 000 and 111 are sent. If the source sends 000 and 111 equally often, decoding according to maximum likelihood is also a decoding according to minimum error.
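This relationship can also be checked numerically. The sketch below again uses assumed values (error probability p = 0.3, received word differing from 000 in two positions and from 111 in one): with the skewed priors 3/4 and 1/4 the two rules can disagree, while with uniform priors they coincide.

```python
def likelihood(y, c, p):
    """P(y received | c sent) for a binary symmetric memoryless channel."""
    d = sum(a != b for a, b in zip(y, c))
    return (p ** d) * ((1 - p) ** (len(c) - d))

def minimum_error(y, code, priors, p):
    return max(code, key=lambda c: likelihood(y, c, p) * priors[c])

def maximum_likelihood(y, code, p):
    return max(code, key=lambda c: likelihood(y, c, p))

code, p, y = ["000", "111"], 0.3, "011"        # assumed values for illustration
print(maximum_likelihood(y, code, p))                         # '111'
print(minimum_error(y, code, {"000": 0.75, "111": 0.25}, p))  # '000'  (skewed source)
print(minimum_error(y, code, {"000": 0.5, "111": 0.5}, p))    # '111'  (uniform source, same as ML)
```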

Literature

  • Rudolf Mathar: Information Theory. Discrete Models and Procedures. Teubner, Stuttgart 1996, ISBN 3-519-02574-4 (Teubner Study Books: Mathematics).