Block code
Block codes are a type of channel coding in the family of (error-detecting and) error-correcting codes. They are characterized by a fixed block length n of symbols from a fixed alphabet Σ (Σ = {0, 1} for binary codes). In contrast to convolutional codes, individual blocks are encoded and decoded independently of one another.
Important properties of a block code are its information rate (the ratio of the amount of information carried to the total amount of data) and its correction capability (i.e. the ability to detect and/or correct errors). The two properties influence each other and together face a common, insurmountable bound. This bound can be approached through optimization, but the resulting codes are long and laborious to decode. Concatenating codes has proven to be the more practical solution here.
Although block codes are often not optimal in the sense of a minimal mean code word length, one often restricts oneself to them in practice. Linear codes and systematic codes are further specializations.
Construction
The alphabet Σ and the block length n yield q^n possible words, a subset C ⊆ Σ^n of which constitutes the valid code words. The cardinality of the alphabet is denoted by q; for binary codes q = 2. For many codes (always for linear codes) the cardinality of the code can be written as |C| = q^k. Given a block length of n symbols, such a code carries a payload of k symbols.
The information rate is R = k/n; the correction capability is limited by the (minimum) Hamming distance d of the code. The Hamming distance between two code words x and y is the number of symbol positions in which they differ; the (minimum) Hamming distance of a (whole) code is the minimum Hamming distance over all pairs of distinct code words, i.e. d = min {d(x, y) : x, y ∈ C, x ≠ y}. The latter limits the maximum guaranteed correction capability to e = ⌊(d − 1)/2⌋ symbol errors. In concatenated correction schemes, error detection plays a role in addition to error correction. Perfect codes cannot even detect certain multi-bit errors that they cannot correct; other codes, in contrast, can trade error-correction capability for further guaranteed error-detection capability, which helps subsequent correction stages: e errors can be corrected and f ≥ e errors simultaneously detected as long as e + f < d.
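These definitions can be checked directly in code. The following sketch computes the minimum Hamming distance of a code and the number of symbol errors it is guaranteed to correct; the three code words are a small hypothetical example, not one of the codes discussed in this article.

```python
from itertools import combinations

def hamming_distance(x, y):
    """Number of positions in which two equal-length words differ."""
    return sum(a != b for a, b in zip(x, y))

def minimum_distance(code):
    """Minimum Hamming distance over all pairs of distinct code words."""
    return min(hamming_distance(x, y) for x, y in combinations(code, 2))

code = ["00000", "01110", "10011"]   # hypothetical example code
d = minimum_distance(code)           # minimum distance of the code: 3
e = (d - 1) // 2                     # guaranteed correctable symbol errors: 1
```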
Different notations have been established for codes in the literature:
- (n; k; d)_q or [n; k; d]_q; the semicolon is often replaced by a comma and the square brackets by round brackets. The subscript q is often left out, and the same applies to d for the classic Hamming codes.
- Often, instead of k (the number of payload symbols), the code size |C| = q^k is given, or the code C itself is specified; this information is partly hidden in the type of brackets used.
This article and the related articles attempt to keep this notation, as well as the use of variable names, consistent.
A code C is commonly referred to as an (n; k; d)_q code if
- Σ is an alphabet with |Σ| = q,
- C ⊆ Σ^n is the code with |C| = q^k, and
- d is the (minimum) Hamming distance of C.
If one considers linear codes, one speaks of [n; k; d]_q codes or [n; k]_q codes, where k here is the dimension of C over the field F_q. n and d have the same meaning as for general block codes.
For given n, d and q, one is interested in maximizing the cardinality of the code, i.e. k, since this achieves the optimal information rate for these parameters. However, there are favorable parameter combinations that lead to more efficient codes than their neighboring parameters: among neighboring parameter choices, one code may require 11 protection bits where another requires 14, and two further codes alike require 17 protection bits.
There are bounds that estimate whether codes with given parameters can exist or would violate certain principles:
- Singleton bound (MDS codes)
- Hamming bound (perfect codes)
- Plotkin bound
- Gilbert bound, Varshamov bound, Johnson bound, Griesmer bound, Bassalygo-Elias bound
- Optimal code
Bounds indicate whether codes with given parameters may exist; they do not say whether such codes can actually be constructed.
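As an illustration, two of these bounds can be evaluated numerically. The sketch below implements the Singleton bound |C| ≤ q^(n−d+1) and the Hamming (sphere-packing) bound; the function names are our own, chosen for this example.

```python
from math import comb

def singleton_bound(q, n, d):
    """Singleton bound: |C| <= q^(n - d + 1)."""
    return q ** (n - d + 1)

def hamming_bound(q, n, d):
    """Hamming (sphere-packing) bound on the code size, as an integer."""
    e = (d - 1) // 2  # guaranteed correctable errors
    sphere = sum(comb(n, i) * (q - 1) ** i for i in range(e + 1))
    return q ** n // sphere

# The binary [7, 4, 3] Hamming code meets the Hamming bound exactly
# (it is a perfect code): 2^7 / (1 + 7) = 16 = 2^4 code words.
print(hamming_bound(2, 7, 3))    # -> 16
print(singleton_bound(2, 7, 3))  # -> 32
```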
Types of block codes
Formally, a subset C ⊆ Σ^n is called a block code, where Σ is the alphabet and n is the length of each code word.
Trivial block codes are codes that
- contain only a single word as the code: C = {w}. All transmission errors can be recognized, but no information can be transmitted; or
- contain all possible words as the code: C = Σ^n. No transmission errors can be detected, but the amount of information transmitted is maximal.
Linear block codes are codes C
that form a k-dimensional subspace of F_q^n. There is then a basis g_1, …, g_k of C. Combining this basis row-wise into a matrix
- G with rows g_1, …, g_k,
one obtains a generator matrix of this linear block code. The code words are obtained by multiplying the information word u by the generator matrix: c = u · G.
The main advantage of linear codes is that they can be encoded and decoded efficiently.
Remark: To encode a code with q^k code words, one only has to keep the k basis vectors in stock instead of all q^k code words. The same applies to decoding, with q^(n−k) syndromes versus q^n words.
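A minimal sketch of generator-matrix encoding over GF(2). The matrix G shown is the common systematic generator matrix of the binary [7, 4] Hamming code, used here purely as an illustration:

```python
# Systematic generator matrix G = (I_4 | P) of the binary [7, 4] Hamming code.
G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def encode(u, G):
    """c = u * G (mod 2); only the k rows of G are stored, not 2^k code words."""
    n = len(G[0])
    return [sum(u[i] * G[i][j] for i in range(len(G))) % 2 for j in range(n)]

print(encode([1, 0, 1, 1], G))  # -> [1, 0, 1, 1, 0, 1, 0]
```

Because the code is systematic, the first four bits of the code word repeat the information word and the last three are check bits.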
Parity codes
are linear, systematic and binary [n, n − 1, 2] codes with the check-symbol generator matrix
- P = (1, 1, …, 1)^T and the overall generator matrix G = (I_(n−1) | P).
They have a Hamming distance of 2 and represent [n, n − 1, 2] block codes. They can detect a single error, but cannot correct any error. Linear binary block codes with an odd Hamming distance d can be extended by an additional parity bit to form an [n + 1, k, d + 1] code.
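A minimal sketch of a single parity-check code, assuming binary symbols; the function names are our own:

```python
def parity_encode(u):
    """Append one check bit so that every code word has even weight."""
    return u + [sum(u) % 2]

def parity_check(c):
    """True if the received word is a valid code word (even weight)."""
    return sum(c) % 2 == 0

c = parity_encode([1, 0, 1])  # -> [1, 0, 1, 0]
print(parity_check(c))        # -> True
c[1] ^= 1                     # flip one bit: the error is detected ...
print(parity_check(c))        # -> False ... but cannot be located or corrected
```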
Systematic block codes are codes that
consist of information symbols at the beginning of the block and check symbols at the end of the block (see illustration at the beginning of the article). They can be linear block codes at the same time, but do not have to be: they are linear block codes if, in addition to the information symbols (which are trivially linear), the check symbols are also linear functions of the information symbols.
Perfect block codes are codes
in which each word of Σ^n has the smallest Hamming distance to exactly one code word (and not to several). Each word can thus be decoded unambiguously.
Hadamard codes
are linear, non-systematic block codes. The generator matrix has a very distinctive form.
They have a low information rate, but can still decode data from very noisy signals.
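As an illustration, the standard Walsh-Hadamard construction (which may differ in detail from the variant meant above) encodes a k-bit message m as the inner product ⟨m, x⟩ mod 2 evaluated for every k-bit vector x, giving a non-systematic [2^k, k] code with minimum distance 2^(k−1):

```python
def hadamard_encode(m, k):
    """Encode integer message m (k bits) into a 2^k-bit Walsh-Hadamard code word."""
    return [bin(m & x).count("1") % 2 for x in range(2 ** k)]

# With k = 3: 8-bit code words; any two distinct code words differ in
# exactly 2^(k-1) = 4 of the 8 positions, hence the low rate 3/8.
c1 = hadamard_encode(0b001, 3)
c2 = hadamard_encode(0b011, 3)
print(sum(a != b for a, b in zip(c1, c2)))  # -> 4
```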
Information rate for block codes
Let C ⊆ Σ^n be a block code and assume that the alphabet Σ has q different elements. Then the information rate is defined as:
- R = log_q(|C|) / n.
If, for example, C is a binary code with 2^k different elements, k bits are needed to distinguish between the 2^k code words. The information rate relates this lowest possible number of symbols to the number of symbols actually transmitted.
If the first k bits of a binary n-bit code word are information bits that run through all possible combinations, then the information rate is:
- R = k/n.
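The definition can be evaluated directly; the helper below (our own naming) applies it to the 3-fold binary repetition code {000, 111}:

```python
from math import log

def information_rate(code_size, n, q):
    """R = log_q(|C|) / n for a q-ary block code of length n."""
    return log(code_size, q) / n

# 3-fold binary repetition code {000, 111}: |C| = 2, n = 3, q = 2
print(information_rate(2, 3, 2))  # -> 0.333...
```

For the linear [11, 4] code of Example 2 below, the same formula gives log_2(16)/11 = 4/11.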
Examples of block codes
Example 1
The code words, in binary representation (# = 1, . = 0), are:
......... ....##### .###...## #.##.##.. ##.##.#.# ###.##.#.
There is no linear code with this cardinality: on the one hand, 6 is not a power of 2; on the other hand, the largest linear codes with these parameters are smaller. The code cannot be converted into a linear code.
Example 2
The code words, in binary representation (MSB left), are:
........... ...#...#### ..#..##..## ..##.####.. .#..#.#.#.# .#.##.##.#. .##.##..##. .#####.#..# #...##.#.#. #..###..#.# #.#.#.##..# #.###.#.##. ##...###### ##.#.##.... ###....##.. ####.....##
It is a linear systematic code with the basis
...#...#### ..#..##..## .#..#.#.#.# #...##.#.#.
The 16 code words can be generated by XORing those basis vectors whose information bits are set (hence a linear code). The information bits are the left 4 bits (bits 10 to 7), the protection bits the right 7 bits (bits 6 to 0) (hence a systematic code).
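This construction can be reproduced programmatically; the four integers below encode the basis vectors listed above (# = 1, . = 0, MSB left):

```python
basis = [
    int("00010001111", 2),  # ...#...####
    int("00100110011", 2),  # ..#..##..##
    int("01001010101", 2),  # .#..#.#.#.#
    int("10001101010", 2),  # #...##.#.#.
]

# Every code word is the XOR of the basis vectors selected by the
# information bits; 2^4 = 16 subsets give 16 distinct code words.
code = set()
for m in range(16):
    word = 0
    for i in range(4):
        if m & (1 << i):
            word ^= basis[i]
    code.add(word)

print(len(code))  # -> 16
```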
Example 3
The code words, in binary representation, are:
........ .....### ...##..# ...####. ..#.#.#. ..##.#.# .#..#.## .#.#.#.. .##....# .##.##.. .###..#. .####### #...##.. #..#..## #.#..##. #.#.#..# #.##.... ##....#. ##...#.# ##.##...
There is no linear code with this cardinality: again, on the one hand, 20 is not a power of 2; on the other hand, the largest linear codes with these parameters are smaller. The code cannot be converted into a linear code.
Error correction
Block codes can be used for error detection and error correction when transmitting data over noisy channels. The transmitter assigns to each information word of length k a code word of length n, where n > k. Adding the n − k additional symbols creates redundancy and lowers the information rate; in return, the receiver can use the redundant information to detect and correct transmission errors.
For example, if, in the case of binary coding, one uses the assignment
Information word | code word |
---|---|
0 | 000 |
1 | 111 |
then received code words with exactly one bit error can be corrected by flipping the deviating bit with the help of a majority function:
Incorrect code word | Corrected code word | Associated information word |
---|---|---|
001 | 000 | 0 |
010 | 000 | 0 |
011 | 111 | 1 |
100 | 000 | 0 |
101 | 111 | 1 |
110 | 111 | 1 |
However, if two bits are incorrect, an error is detected but corrected wrongly. If all three bits are wrong, the error cannot even be detected.
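The behavior described above can be sketched with a majority decoder for the 3-fold repetition code:

```python
def decode(c):
    """Majority vote over the three received bits of the repetition code."""
    return 1 if sum(c) >= 2 else 0

print(decode([0, 0, 1]))  # one error on 000  -> 0 (corrected)
print(decode([0, 1, 1]))  # two errors on 000 -> 1 (detected but miscorrected)
print(decode([1, 1, 1]))  # three errors on 000 -> 1 (not even detected)
```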
Literature
- Rudolf Nocker: Digital Communication Systems 1. Basics of Baseband Transmission. 1st edition, Friedrich Vieweg & Sohn Verlag, Wiesbaden 2004, ISBN 978-3-528-03976-9.
- Markus Hufschmid: Information and Communication. Basics of Information Transfer. Vieweg and Teubner, Wiesbaden 2006, ISBN 3-8351-0122-6.
- Bernd Friedrichs: Channel coding. Basics and applications in modern communication systems. Springer Verlag, Berlin / Heidelberg 1995, ISBN 3-540-59353-5 .
Web links
- Channel coding and block codes (accessed April 6, 2018)
- Linear Error Correcting Codes (accessed April 6, 2018)
- Proinformatik - Functional Programming (accessed April 6, 2018)
- Channel Coding Formula Collection (accessed April 6, 2018)
- Theory and Practice of Error Control Codes Block Code Performance (accessed April 6, 2018)