Hamming code

From Wikipedia, the free encyclopedia

The Hamming code is a linear error-correcting block code, developed by Richard Wesley Hamming, that is used in digital signal processing and communication technology for reliable data transmission and data storage.

The Hamming code is a class of block codes of different lengths, which are formed by a general formation rule. The special feature of this code is the use of several parity bits . These bits supplement differently selected groups of the useful data bits carrying the information. A clever selection of the grouping, the mathematical basis of which is described below, not only enables error detection, but also error correction of the transmitted data bits.

The individual code words of the Hamming code have a Hamming distance of 3. Due to this difference of three bit positions, the decoder can recognize one or two bit errors in a data block, but only correct one bit error. If there are two bit errors, the decoder delivers a valid but incorrect code word. The extended Hamming code with a Hamming distance of 4 can recognize up to three bit errors in a data block with an additional parity bit, but it can also correct only one bit error. With the extended Hamming code, two bit errors are recognized as an incorrect (invalid) code word which cannot be corrected.

Properties of the binary Hamming code
Number of digits: 2^k − 1, k ≥ 2 and integer
Weight: 3
Maximum distance: 3
Hamming distance: 3
Redundancy: k
(k is the number of parity bits per code word)

History

In the 1940s Richard Hamming was working at Bell Labs on a computer called the Bell Model V, which was equipped with fault-prone electromechanical relays running at two machine cycles per second. The punch cards used for data entry could acquire errors through wear during reading; during normal office hours these errors had to be corrected manually by Bell Labs employees. Outside office hours and on weekends, when Hamming typically worked, such reading errors simply caused the computer to skip the defective punch cards and continue with other jobs.

Hamming was frustrated by these errors and the repeated effort they caused, and subsequently developed a code that allowed the machine to recognize and, to a certain extent, correct reading errors on punched cards on its own. In 1950 he published this Hamming code, which is still widely used today, partly in extended forms, in the field of channel coding.

Structure of the code

Only binary Hamming codes are considered below. Binary Hamming codes are based on parity codes over a data block of fixed length. The data block, also referred to as the “data word” or “message word”, comprises n bits and carries the actual useful information. In the Hamming code, n can only take certain integer values, which result from the formation rule of this code. The bit combinations within the n-bit data block can in principle be chosen freely; any arbitrary bit combination is permissible.

The code word of the Hamming code is obtained from the data word by inserting additional control positions, the so-called parity bits. A fixed number k of control bits is inserted into each n-bit data word, resulting in a code word of length N = n + k. Since the control bits carry redundant information derived from the data block, only certain bit combinations are possible for a code word. This enables both error detection and error correction.

The rule that relates the number of parity bits k to the code word length N is:

N = 2^k − 1

This means that if, for example, k = 3 binary digits are available as control bits (parity bits), the code word necessarily has a length of N = 2^3 − 1 = 7 bits. The data word then has a length of n = 7 − 3 = 4 bits per code word, or in general:

n = 2^k − k − 1

The following table shows all possible Hamming codes of different code word lengths up to a block size of 255 bits:

Parameter combinations for Hamming codes
n = data bits (data word), k = parity bits (control bits), N = n + k = total length of the code word (message bits)

  n     k     N
  1     2     3
  4     3     7
  11    4     15
  26    5     31
  57    6     63
  120   7     127
  247   8     255
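The parameter combinations in the table follow directly from the two formation rules above. The following short sketch (in Python, for illustration only; names chosen freely) reproduces the table by evaluating N = 2^k − 1 and n = N − k:

  # Minimal sketch: enumerate the Hamming code parameters N = 2**k - 1 and
  # n = N - k for k = 2 .. 8, reproducing the table above.
  for k in range(2, 9):
      N = 2**k - 1      # total code word length (message bits)
      n = N - k         # number of data bits
      print(f"n = {n:3d}   k = {k}   N = {N:3d}")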

The following notation is usually used in the literature to classify Hamming codes of different lengths: (N, n) code. The first number indicates the number of message bits in the code word, the second the number of data bits per code word. For the sake of simplicity, the (7,4) Hamming code is often used, particularly in demonstration examples. For real applications its overhead, i.e. the ratio of control bits to data bits, is too unfavorable, which is why longer Hamming codes are used.

Sometimes the distance of the code is given as a third number in this classification. Because the Hamming distance is fixed at 3, however, usually only “(N, n) Hamming code” is written instead of “(N, n, 3) Hamming code”.

Calculation of the parity places

The parity positions (control bits) in a code word are calculated according to the same method as the simple parity check bit. As a rule, an even parity is agreed for all control bits: if the number of logical-1 data bits included in the respective control bit is even, the parity bit is set to logical 0; if the number of logical-1 data bits is odd, the parity bit is set to logical 1, so that the data bit positions and the parity bit together always contain an even number of logical 1s.
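As a minimal illustration of this even-parity rule, the following sketch (names chosen freely) computes a parity bit as the XOR of the covered data bits, so that the covered bits plus the parity bit always contain an even number of ones:

  # Minimal sketch of the even-parity rule described above: the parity bit is
  # chosen so that the covered data bits plus the parity bit contain an even
  # number of logical 1s (i.e. the parity bit is the XOR of the covered bits).
  def even_parity(bits):
      p = 0
      for b in bits:
          p ^= b
      return p

  print(even_parity([1, 0, 1]))  # -> 0 (two 1s, already even)
  print(even_parity([1, 1, 1]))  # -> 1 (three 1s, parity bit makes it even)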

The individual parity bits of a code word are not formed over all positions (bits) of the data word, but only over individual, selected data bits. An illustrative method can be used to construct which data bit positions are included in which control bits. First, the frame of the code word is formed from the data bits and the control bits:

  1. The individual parity control bits p are located in the code word at those positions that are powers of two (1, 2, 4, 8, 16, ...).
  2. The data bits d of the data word to be transmitted are entered into the remaining free positions of the code word, in ascending order from the left.

With k parity bits p, n bits of the data word d, and N bits of the code word c to be formed, a code word of the Hamming code constructed in this way has the following form (this representation can be extended accordingly for larger code word lengths):

Arrangement of the parity control bits p1 … pk and the data bits d1 … dN−k in a code word with the positions c1 … cN:

  Position: c1  c2  c3  c4  c5  c6  c7  c8  c9  …
  Content:  p1  p2  d1  p3  d2  d3  d4  p4  d5  …

Every second position of the code word is included in the first parity bit, starting with c3. For the first parity bit this yields all data bits located at odd positions of the code word:

p1 = c3 ⊕ c5 ⊕ c7 ⊕ c9 ⊕ c11 ⊕ …

The symbol ⊕ denotes the logical XOR operation (modulo-2 addition) and at the same time represents the formation rule for the control bits. As can be seen from the table above, only data bits occur at the bit positions on the right-hand side of the equation. This applies to all parity bits.

The second parity bit p2 includes the bit immediately to its right in the code word (c3); then two positions in the code word are skipped, the next two bits c6 and c7 are included, two positions are skipped again, and so on. In contrast to p1, two adjacent bits at a time are included and two positions are skipped. This results in the formation rule:

p2 = c3 ⊕ c6 ⊕ c7 ⊕ c10 ⊕ c11 ⊕ …

The third parity bit p3 includes the three positions immediately to its right in the code word (c5, c6, c7); four positions of the code word are then skipped, the next four bits are included, four positions are skipped again, and so on. Groups of four bits each are used to form this parity bit. This results in:

p3 = c5 ⊕ c6 ⊕ c7 ⊕ c12 ⊕ c13 ⊕ c14 ⊕ c15 ⊕ …

In general, the parity bit pi is calculated over all positions cj of the code word for which the i-th position of the binary representation of the index j contains a logical 1. This procedure is followed in the same way for the remaining parity bits until all parity bits of the selected Hamming code have been determined. In practical applications, the code word is computed by a so-called encoder.

In the specific case of the (7,4) Hamming code, this results in the following code word table; a short encoding sketch follows the table. Below the code word positions, the contributions of the individual parity bits are entered in the respective columns, from which the control matrix (also referred to as the check matrix) shown later for this example follows directly. In the columns of the last three rows, arrows are entered at those positions where a logical 1 appears in the control matrix. With a little effort, the control matrix can be determined for every Hamming code according to this pattern. One parity bit is shown per row with an upward-pointing arrow (↑), and the data bits used to determine that parity bit are marked with a downward-pointing arrow (↓).

Determination of the parity bits using the (7,4) Hamming code as an example

  Position: c1  c2  c3  c4  c5  c6  c7
  Content:  p1  p2  d1  p3  d2  d3  d4
  p1:       ↑       ↓       ↓       ↓
  p2:           ↑   ↓           ↓   ↓
  p3:                   ↑   ↓   ↓   ↓
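Based on the bit arrangement and the parity groupings in the table above, an encoder for the (7,4) Hamming code can be sketched in a few lines. The following Python fragment is illustrative only; the function name and data layout are chosen for this example:

  # Hedged sketch: encode a 4-bit data word with the (7,4) Hamming code using
  # the bit arrangement from the table above (c1=p1, c2=p2, c3=d1, c4=p3,
  # c5=d2, c6=d3, c7=d4).
  def hamming_7_4_encode(d):
      d1, d2, d3, d4 = d
      p1 = d1 ^ d2 ^ d4        # covers code word positions 3, 5, 7
      p2 = d1 ^ d3 ^ d4        # covers code word positions 3, 6, 7
      p3 = d2 ^ d3 ^ d4        # covers code word positions 5, 6, 7
      return [p1, p2, d1, p3, d2, d3, d4]   # c1 .. c7

  print(hamming_7_4_encode([1, 0, 1, 1]))   # -> [0, 1, 1, 0, 0, 1, 1]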

The arrangement of parity and data bits shown here is in principle arbitrary. A different order of the individual bits in the code word can be chosen without restriction and without changing the properties of the Hamming code. This fact is exploited in the systematic or separable Hamming code discussed below, in which the parity bits are always appended to the end of the data word to form the code word. The separable Hamming code is obtained by the same formation rule, but is characterized by a different arrangement of the rows in the generator matrix and, associated with this, a different control matrix.

Shortest Hamming code

The shortest Hamming code is the (3,1) Hamming code. A single useful data bit is mapped to a three-bit code word. The general construction above shows that in this case the two “parity bits” p1 and p2 are simply equal to the single useful data bit d1. There are therefore only the two valid code words 000 and 111. During decoding, the invalid code words 001, 010 and 100 are assigned to the code word 000 by majority decision, and 110, 101 and 011 to the code word 111. The (3,1) Hamming code is thus, as a special case, identical to a repetition code of length 3. It is also the only Hamming code whose code word structure is uniquely determined.

Properties

Correction capability

The Hamming code always has a minimum distance of three, regardless of the selected block size. This means that neighboring code words always differ in three bit positions. If an error occurs at one position of a code word, the received word is recognized as invalid and can be unambiguously assigned to the correct code word, i.e. the transmission error can be corrected. If, on the other hand, two errors occur in a code word, this assignment no longer works: the decoder incorrectly assigns the received code word to a different valid code word. This is known as miscorrection. Another form of failure occurs with three transmission errors: here the decoder accepts the erroneous code word as valid.

Hamming codes can only correct one bit error per data word. Because every received word can be assigned to a valid code word, the Hamming code is a perfect code. Most other binary codes, such as the extended Hamming code, are not perfect. With those codes, transmission errors can result in received words that the decoder recognizes as erroneous but cannot assign to any valid code word (decoding failure).

Linearity

Hamming codes are linear codes. With these codes, also known as binary group codes, the modulo-2 addition (XOR operation) of any two code words again yields a valid code word. One of the requirements of a linear code is the existence of a neutral element: in the case of a Hamming code this means that the zero word, all of whose positions are logical 0, must be a valid code word. For linear codes the minimum code weight equals the minimum distance; the so-called code weight of Hamming codes is therefore also three.

Another general property of group codes is that the individual valid code words c can be generated from a generator matrix G and the data words d (written as column vectors) in the following form, with all operations carried out modulo 2:

c = G · d

With the formation rule shown in the previous section and the calculation rules for matrices, this equation yields the following generator matrix for a non-separable (7,4) Hamming code:

G =
  [ 1 1 0 1 ]
  [ 1 0 1 1 ]
  [ 1 0 0 0 ]
  [ 0 1 1 1 ]
  [ 0 1 0 0 ]
  [ 0 0 1 0 ]
  [ 0 0 0 1 ]

The first two rows of the matrix form the first two parity bits of the code word; the ones in these rows indicate which data bits are included in the respective parity bit. The third row, like all other rows containing only a single one, maps a data bit directly into the code word. The fourth row forms the third parity bit.

The control matrix H (also called the check matrix), which the decoder uses to detect erroneous bit positions by means of a matrix multiplication, can be derived directly from the generator matrix. The check matrix must be chosen so that it is orthogonal to all valid code words c:

H · c = 0

For the generator matrix above, the following control matrix is obtained:

H =
  [ 1 0 1 0 1 0 1 ]
  [ 0 1 1 0 0 1 1 ]
  [ 0 0 0 1 1 1 1 ]

The content of the matrix H can also be read off from the generator matrix using the tabular method for determining the parity bits presented in the previous section.

If a single bit error e occurs, the following applies to the resulting invalid code word c′ = c ⊕ e:

H · c′ = H · (c ⊕ e) = H · e ≠ 0

Using the value of this equation, the incorrect bit position can be uniquely determined via a syndrome table and corrected by inverting.
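The following sketch illustrates this syndrome decoding for the (7,4) Hamming code with parity bits at the power-of-two positions; it is an illustrative example, not a reference implementation. Because each column of H is the binary representation of its position index, the syndrome value directly names the erroneous position:

  # Hedged sketch of syndrome decoding for the (7,4) Hamming code with parity
  # bits at positions 1, 2 and 4: each column j of the control matrix H is the
  # binary representation of the position index j, so the syndrome value is
  # directly the index of the erroneous bit (0 means "no error detected").
  H = [
      [1, 0, 1, 0, 1, 0, 1],   # rows of H: parity checks for p1, p2, p3
      [0, 1, 1, 0, 0, 1, 1],
      [0, 0, 0, 1, 1, 1, 1],
  ]

  def syndrome(c):
      # modulo-2 matrix-vector product H * c
      return [sum(h * b for h, b in zip(row, c)) % 2 for row in H]

  def correct(c):
      s = syndrome(c)
      pos = s[0] * 1 + s[1] * 2 + s[2] * 4   # syndrome bits weighted 1, 2, 4
      if pos:                                 # flip the erroneous bit
          c = c.copy()
          c[pos - 1] ^= 1
      return c

  received = [0, 1, 1, 0, 1, 1, 1]   # the code word [0,1,1,0,0,1,1] with bit 5 flipped
  print(correct(received))           # -> [0, 1, 1, 0, 0, 1, 1]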

A playful application of the (7,4) Hamming code in particular can be found in the solution of Ebert's hat problem.

Systematic Hamming Code

Due to the linearity, rows of the generator matrix can be swapped arbitrarily without changing the code properties; the particular form of the generator matrix only needs to be agreed between encoder and decoder. A systematic code is present if the code word contains first all data bits d_n and then all parity bits p_n. Because data and parity bits can be separated in this way, encoders and decoders can be implemented in digital circuits such as ASICs or FPGAs with less memory expenditure and lower latency. Separable codes are therefore also referred to as “systematic codes”.

The generator matrix above can be transformed, by reordering its rows, into the following generator matrix G′, so that the (7,4) Hamming code becomes a systematic code. One possible generator matrix of the systematic (7,4) Hamming code is:

G′ =
  [ 1 0 0 0 ]
  [ 0 1 0 0 ]
  [ 0 0 1 0 ]
  [ 0 0 0 1 ]
  [ 1 1 0 1 ]
  [ 1 0 1 1 ]
  [ 0 1 1 1 ]

The associated control matrix H′ is easier to determine, because for systematic block codes with modulo-2 operations the following applies: if the generator matrix consists of the identity matrix in the first n rows and a parity part A in the last k rows, the control matrix is obtained as

H′ = ( A | I )

where I is the k × k identity matrix.

Hence H′ is determined as follows:

H′ =
  [ 1 1 0 1 1 0 0 ]
  [ 1 0 1 1 0 1 0 ]
  [ 0 1 1 1 0 0 1 ]

The first 4 columns of the control matrix H′ consist of the last three rows of the generator matrix G′; the right-hand part of the control matrix is filled with the 3 × 3 identity matrix.
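The following small sketch (illustrative only, names chosen freely) reproduces this construction of H′ from the last rows of G′:

  # Hedged sketch: for the systematic (7,4) Hamming code the last three rows of
  # the generator matrix G' contain the parity rules. As described above, the
  # control matrix H' is obtained by appending the identity matrix to them.
  G_sys = [
      [1, 0, 0, 0],        # d1   (code word: d1 d2 d3 d4 p1 p2 p3)
      [0, 1, 0, 0],        # d2
      [0, 0, 1, 0],        # d3
      [0, 0, 0, 1],        # d4
      [1, 1, 0, 1],        # p1 = d1 + d2 + d4
      [1, 0, 1, 1],        # p2 = d1 + d3 + d4
      [0, 1, 1, 1],        # p3 = d2 + d3 + d4
  ]

  parity_rows = G_sys[4:]                      # last three rows of G'
  H_sys = [row + [1 if i == r else 0 for r in range(3)]
           for i, row in enumerate(parity_rows)]
  for row in H_sys:
      print(row)
  # [1, 1, 0, 1, 1, 0, 0]
  # [1, 0, 1, 1, 0, 1, 0]
  # [0, 1, 1, 1, 0, 0, 1]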

This shows that the designation “(N, n) Hamming code” alone does not uniquely describe the exact encoding and decoding process of a specific implementation. This is only guaranteed by specifying the respective generator matrix or, in the case of cyclic Hamming codes, the generator polynomial.

Cyclic Hamming code

In practical use, cyclic codes , in particular cyclic separable Hamming codes, play an important role. With these, the calculation of the individual check bits in the encoder and the decoding in the decoder can be implemented with minimal memory expenditure in the form of linear feedback shift registers (LFSR). Cyclic codes are linear codes for which the additional requirement applies that a rotation or cyclic shift of a code word must lead to a valid code word.

In a more general description, the cyclic Hamming codes can equivalently be understood as a subclass of the BCH codes (Bose-Chaudhuri-Hocquenghem codes). BCH codes are a very large family of cyclic block codes whose parameters and structure can be varied more widely than those of the Hamming codes.

Cyclic Hamming codes are generated, depending on the block length, by primitive generator polynomials of the appropriate degree k. The generator polynomial can be mapped directly into the LFSR that calculates the parity bits.

Typical generator polynomials are listed in the following table. Other generator polynomials can also be chosen in specific implementations without changing the properties of the Hamming code, provided the selected polynomial is primitive and has been agreed between encoder and decoder:

Cyclic Hamming codes

  k   N     n     Generator polynomial G(z)
  2   3     1     z^2 + z + 1
  3   7     4     z^3 + z + 1
  4   15    11    z^4 + z + 1
  5   31    26    z^5 + z^2 + 1
  6   63    57    z^6 + z + 1
  7   127   120   z^7 + z^3 + 1
  8   255   247   z^8 + z^7 + z^2 + z + 1
  9   511   502   z^9 + z^4 + 1

Remarks:

  • z is an alternative notation for z^1: z^6 + z + 1 → z^6 + z^1 + 1
  • 1 is an alternative notation for z^0: z^6 + z + 1 → z^6 + z^1 + z^0
  • The number of terms is always odd, and the polynomial always contains the powers z^k and 1: z^6 + z + 1
  • The exponents can always be mirrored, i.e. every z^l can be replaced by z^(k−l); this yields a different but equally functional cyclic block code: z^6 + z^1 + z^0 → z^0 + z^5 + z^6 → z^6 + z^5 + 1

Practical implementation of a Hamming encoder

(15,11) Hamming Code Encoder.

Cyclic separable Hamming codes can easily be implemented in digital electronic circuits. The figure shows, as an illustration, a (15,11) Hamming encoder with the generator polynomial G(z) = z^4 + z + 1.

In the figure, the data bits are read in as a serial data stream at the top and the code word is output serially on the right. The switches are initially in the data position: the data bits are output directly into the code word and simultaneously shifted into the LFSR. When all data bits of a data word have been read in, the two switches change to the parity position: the content of the LFSR, which corresponds to the parity bits, is now output bit by bit. When all parity bits have been output, the process starts again with the next data word. For the sake of simplicity, the necessary clock lines and synchronization circuits are not shown.

The Hamming decoder has a similar structure: the received serial bit stream of code words is shifted into a corresponding LFSR and at the same time into a separate shift register chain for latency compensation. After a code word has been received completely, the content of the LFSR serves as an address pointer into a syndrome memory, which is usually implemented as a fixed ROM table in the circuit. The data output of the syndrome memory acts directly on the serial stream of data bits and, if necessary, corrects bit positions recognized as erroneous by inverting them.
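The arithmetic performed by such an LFSR corresponds to a modulo-2 polynomial division. The following sketch (illustrative only, with freely chosen names) computes the parity bits of a cyclic, separable (7,4) Hamming code with G(z) = z^3 + z + 1 as the remainder of d(z)·z^3 divided by G(z), which is exactly the register content after all data bits have been shifted in:

  # Hedged sketch of what the LFSR computes for a cyclic, separable (7,4)
  # Hamming code with generator polynomial G(z) = z^3 + z + 1.
  GEN = 0b1011                          # z^3 + z + 1
  K = 3                                 # number of parity bits / degree of G(z)

  def poly_mod(bits, g=GEN, k=K):
      """Remainder of the polynomial given by `bits` (MSB first) modulo g(z)."""
      reg = 0
      for b in bits:
          reg = (reg << 1) | b
          if reg & (1 << k):            # degree reached k -> subtract (XOR) g(z)
              reg ^= g
      return [(reg >> i) & 1 for i in reversed(range(k))]

  data = [1, 0, 0, 1]
  parity = poly_mod(data + [0] * K)     # d(z) * z^3 mod G(z)
  codeword = data + parity              # separable code: parity appended
  print(codeword)                       # -> [1, 0, 0, 1, 1, 1, 0]

  # Decoder side: the remainder of the received word is the syndrome; it is 0
  # for a valid code word and nonzero if a bit error occurred.
  print(poly_mod(codeword))             # -> [0, 0, 0]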

Decoding

Various methods can be used for decoding; they differ in the complexity of the decoder and in the decoding performance. An essential method is based on the syndrome table determined above, which indicates which position in the code word is erroneous. This is a relatively simple procedure for received or read binary symbols. However, no general method is known with which a linear block code of arbitrary code word length can be decoded deterministically in polynomial time: for an (N, n) Hamming code the decoding effort depends on the code word length and grows exponentially. By using the syndrome in the decoding, the number of error combinations to be considered can be reduced from 2^N to 2^(N−n). In complexity theory, the corresponding class of decision problems is classified as NP-hard.

Another way to improve the decoding performance exploits the fact that in real communication systems the decoder usually receives the individual code words not as binary but as multi-level signals. The received analog signals are first quantized by an upstream analog-to-digital converter. The resulting gradations of the signal between logical 0 and logical 1 are interpreted by the decoder as probabilities, on the basis of which the code word is reconstructed iteratively. In the mostly English-language literature this procedure is referred to as soft decision, and it results in a higher coding gain.

The counterpart to this is the so-called hard decision, which can be seen as an extreme case of soft decision. Before decoding, the analog received signal is mapped to a digital input signal for the code words by means of a 1-bit analog-to-digital converter, a comparator. In this way, it is already fixed before decoding whether a certain bit of the received code word is logical 1 or logical 0.

Hard decision decoding

In this case, the received code words are already available as digital sequences, so the decoding process reduces to evaluating the syndrome table in a single step. This method is mostly used when the decoder is to be kept as simple as possible and no concatenated codes, i.e. combinations of different codes, are used.

In the representation using the control matrix introduced above, it was already shown that the product of the control matrix and the received code word assumes a value other than 0 in the event of an error:

H · c′ ≠ 0

With an appropriate arrangement of the parity positions, and thus an appropriate form of the control matrix, the value of this equation can in the simplest case be used directly as a syndrome to correct the relevant bit position. If this equation yields the value 1 for the (7,4) Hamming code, then exactly the first bit of the received code word is erroneous; if it yields the value 2, the second bit, and so on. The error can be corrected by inverting the relevant bit position in the code word. If there is no error, the equation yields the value 0 and no bit position is corrected.

This simple correspondence between the syndrome value and the erroneous bit position only holds for a Hamming code if the individual parity bits are located exactly at those positions of the code word that are powers of two. This is the case with the generator matrix G presented at the beginning. It eliminates the need for a syndrome table (ROM table) that first converts the value of the product of control matrix and code word into a specific bit position. This simplification of the decoding is also the reason why Hamming codes are mostly presented in examples in the form of the generator matrix G shown above.

To decode the separable (7,4) Hamming code shown above with its control matrix H′, however, the value of the test equation must be converted in order to determine the erroneous position in the code word. In the error-free case, the test equation yields, as with all Hamming codes, the value 0. In the event of an error, it yields a value other than 0, which does not have to correspond to the erroneous bit position in the code word. The conversion to the erroneous bit position can be carried out by means of a ROM memory whose addresses are given by the value of the test equation and whose data outputs indicate which bit position is to be corrected by inversion. For the separable (7,4) Hamming code specified above, the ROM memory must have the following contents:

  Input value              Output value
  (ROM memory address)     (erroneous bit position in the code word)
  0                        0
  1                        5
  2                        6
  3                        1
  4                        7
  5                        2
  6                        3
  7                        4
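The following sketch (illustrative only, names chosen freely) shows how such a ROM table is used together with the control matrix H′ to decode the separable (7,4) Hamming code:

  # Hedged sketch of the ROM-table decoding described above for the separable
  # (7,4) Hamming code: the syndrome from the test equation H' * c is used as
  # an address, and the stored value names the bit position to invert
  # (0 = no error).
  SYNDROME_ROM = {0: 0, 1: 5, 2: 6, 3: 1, 4: 7, 5: 2, 6: 3, 7: 4}

  H_SYS = [
      [1, 1, 0, 1, 1, 0, 0],
      [1, 0, 1, 1, 0, 1, 0],
      [0, 1, 1, 1, 0, 0, 1],
  ]

  def decode_systematic(c):
      s = [sum(h * b for h, b in zip(row, c)) % 2 for row in H_SYS]
      addr = s[0] * 1 + s[1] * 2 + s[2] * 4     # first check row is the LSB
      pos = SYNDROME_ROM[addr]
      c = c.copy()
      if pos:
          c[pos - 1] ^= 1                       # invert the erroneous position
      return c[:4]                              # data bits are the first 4 positions

  received = [1, 1, 1, 1, 0, 1, 0]              # code word for d = [1,0,1,1] with an error in d2
  print(decode_systematic(received))            # -> [1, 0, 1, 1]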

Extended Hamming Code

Since the Hamming code can only recognize and correct one bit error per data word, and two bit errors per data word already lead to a wrong code word at the decoder output, there is a desire to improve these properties. The improved code is called the extended Hamming code. For this purpose, a further parity bit is added to the Hamming code, into which all binary digits of the non-extended Hamming code enter. In this way, for example, the (7,4) Hamming code becomes an extended (8,4) Hamming code.

Extending a general block code by an additional control bit is only useful if the code weight is odd, since only then does this additional control bit provide additional information. This is always the case for Hamming codes with their code weight of 3. The extension increases the Hamming distance of the extended Hamming code from 3 to 4, and the extended Hamming code can detect or correct the following errors per code word:

  1. It can recognize and correct arbitrarily positioned single bit errors. In this case the syndrome value is not equal to 0 and the additional parity check yields 1.
  2. It can detect arbitrarily positioned double bit errors, but can no longer correct them. In this case the syndrome value is not equal to 0 and the additional parity check yields 0.
  3. It recognizes all triple bit errors either as an invalid code word that is assigned during decoding to a valid code word that was not sent, or it detects them as errors that cannot be corrected. Which case occurs depends on the positions of the three bit errors in the code word; the two cases differ in the resulting combination of syndrome value and additional parity check.
  4. Quadruple bit errors in a code word either appear as an invalid but apparently correctable code word, as in point 1, and are assigned during decoding to a valid code word that was not sent, or a valid code word that was not sent at all is received directly. Which case occurs again depends on the bit positions at which the bit errors occur in the code word. In the second case, the syndrome value is 0 and the additional parity check yields 0, which corresponds to a valid code word. In all cases of quadruple bit errors, code words other than the sent code word are output, which is referred to as decoding failure.

For the decoder of extended Hamming codes, which operates purely by hard decision, the following truth table can be set up; based on its input variables, the syndrome vector and the additional parity check, the decoder can decide whether there is no error, a correctable error, or an uncorrectable error:

  Syndrome vector   Additional parity check   Action of the decoder                    Received code word
  = 0               0                         no error                                 valid
  ≠ 0               1                         correctable error                        invalid
  = 0               1                         correctable error (in the parity bit)    invalid
  ≠ 0               0                         recognized, uncorrectable error          invalid
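The decision logic of this truth table can be expressed compactly, for example as in the following illustrative sketch (names chosen freely):

  # Hedged sketch of the hard-decision decision logic from the truth table
  # above for an extended Hamming code: the syndrome of the inner Hamming code
  # and the additional overall parity check together classify the received word.
  def classify(syndrome_is_zero, parity_check_failed):
      if syndrome_is_zero and not parity_check_failed:
          return "no error, code word valid"
      if not syndrome_is_zero and parity_check_failed:
          return "correctable single bit error"
      if syndrome_is_zero and parity_check_failed:
          return "correctable error in the additional parity bit"
      return "recognized, uncorrectable error (e.g. double bit error)"

  print(classify(True, False))   # -> no error, code word valid
  print(classify(False, True))   # -> correctable single bit error
  print(classify(False, False))  # -> recognized, uncorrectable error (e.g. double bit error)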

The extended Hamming code is not a perfect code, since not all invalid code words can be unambiguously assigned to valid code words. Processing stages after the Hamming decoder must decide what happens in cases of recognized but uncorrectable data errors. Furthermore, with three or more bit errors per code word a decoding failure can occur, which means that these multiple errors are either not recognized at all or are assigned to valid code words that were not sent. This is particularly important for Hamming codes with long code words, since this behavior does not change with the choice of the code word length.

The extended Hamming code is used, for example, as a so-called "inner block code" in turbo product codes , such as those used in wireless radio networks for data transmission according to the IEEE 802.16 standard within the framework of WiMAX on the radio interface.

Code shortening

Both the Hamming code and the extended Hamming code can be shortened in the length of their code words in order to obtain code words of a specific, fixed length for an application. This is known as code shortening. As shown, all Hamming codes offer only comparatively few selectable code word lengths N = 2^k − 1 in coarse steps, where k must be an integer greater than 1. Intermediate code word lengths are not possible with the Hamming code itself.

Code shortening makes it possible to select code word lengths between these coarse steps. Depending on the procedure, however, this advantage is bought either at the expense of a poorer ratio of data bits (useful data) to control bits in the code word, or by a reduction of the minimum distance of the code and thus of its correction capability.

To shorten the code, the following procedures can be used for all codes:

  1. On the encoder side, only those code words are selected and used as valid code words that always contain logical 0 at the first or last positions of the code word. Depending on the desired resulting code word length, a corresponding number of positions is selected, agreed between encoder and decoder, and no longer changed during operation. Because the omitted positions always have known values, they no longer need to be transmitted or stored: the resulting code word is shortened in length. With this method, the Hamming code's minimum distance of three, and thus its correction capability, is retained. However, the ratio of data bits to control bits in the code word becomes less favorable, i.e. these shortened Hamming codes have a larger proportion of control bits (parity bits) in the code word than would be optimal for Hamming codes of unshortened code word length. (A minimal code sketch of this procedure follows after this list.)
  2. Selected positions of the code word are punctured on the encoder side, i.e. deleted and set to a fixed value of either logical 0 or logical 1. Because of this fixed value, the corresponding binary digits no longer need to be transmitted or stored, and the length of the resulting code word is reduced accordingly. Depending on the choice of positions in the code word, the minimum distance of the code is reduced to varying degrees. Since the minimum distance of Hamming codes is always exactly three, puncturing would lead to a complete loss of the correction capability. Puncturing is therefore of no practical importance for block codes such as the Hamming code and is typically found in convolutional codes with a correspondingly larger minimum distance.
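As announced in procedure 1 above, the following toy sketch (illustrative only, names chosen freely) shortens the systematic (7,4) Hamming code by one data bit that is fixed to logical 0 and never transmitted, yielding a shortened (6,3) code with unchanged minimum distance:

  # Hedged toy sketch of shortening procedure 1, using the systematic (7,4)
  # Hamming code from above: the first data bit is fixed to logical 0, is never
  # transmitted, and the decoder re-inserts it before decoding. The result is a
  # shortened (6,3) code with the same minimum distance of 3.
  def encode_sys74(d):
      d1, d2, d3, d4 = d
      return [d1, d2, d3, d4, d1 ^ d2 ^ d4, d1 ^ d3 ^ d4, d2 ^ d3 ^ d4]

  def encode_shortened(d3bits):
      full = encode_sys74([0] + d3bits)     # fixed position is always 0 ...
      return full[1:]                       # ... and therefore not transmitted

  def expand_received(r6bits):
      return [0] + r6bits                   # decoder re-inserts the known 0

  short = encode_shortened([1, 0, 1])
  print(short)                              # 6-bit code word: 3 data + 3 parity bits
  print(expand_received(short))             # full 7-bit word for the normal decoder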

Practical applications of shortened and extended Hamming codes can be found, for example, in the correction of single memory errors and the reliable detection of double memory errors per address in DRAM memories. These inexpensive memories require only a small capacitor per bit for data storage, and bit errors can occur relatively easily due to interference. Commercially available memory modules typically have a data bus width of 36 bits or 72 bits per memory address, both values that cannot be achieved directly with the available code word lengths of the Hamming code.

By shortening the code according to the first method, a shortened Hamming code with a precisely matching code word length can be constructed relatively easily. Application notes of the company Xilinx on error correction with Hamming codes take an extended Hamming code based on the (127,120) Hamming code as the starting point. From this, a shortened (72,64) Hamming code is formed whose code word length corresponds exactly to the data bus width of the DRAM memory module and which can store 64 useful data bits per address. All parity bits of the extended Hamming code are carried over into the code word shortened to 72 digits. The data bit positions 65 to 120 are always set to logical 0 and are not stored.

Other number systems

With the binary Hamming code presented in this article, there are only two possible states per digit of the data word or code word (that is exactly what “binary” means). Mathematically this is expressed by means of Galois fields of characteristic two, abbreviated GF(2). A special feature of all binary error-correcting codes is that determining the error position is sufficient for error correction: since there are only two possible states per position, an error at a known position can always be corrected by inverting (0 ↔ 1) the relevant position.

In addition to the binary Hamming code, there are also generalizations to other, higher-order number systems, such as the ternary Hamming code over GF(3). The ternary Hamming code has three possible states per position: 0, 1 and 2. For error correction in all non-binary codes, in addition to locating the error position, additional information is required about which of the other possible values the affected position must be changed to.

In general, Hamming codes can also be formed over Galois fields GF(q), where q must be a prime power, i.e. q = p^m with a prime number p and a positive integer m.

Simplified variation with example

The following Hamming code variant offers a major simplification: the required Hamming bits are simply appended to the data word, so the data word does not have to be broken apart.


Transfer example: Decimal number 86 (stored in 8 bits) is to be transferred. 86 = 01010110

→ 4 Hamming bits are required.

01010110 has 1's bits at the positions (counting from the right): 2, 3, 5 and 7

Write the positions of the 1 bits in binary form one below the other and append a parity row, i.e. every column must contain an even number of 1 bits.

0010 (= 2nd digit)
0011 (= 3rd digit)
0101 (= 5th digit)
0111 (= 7th digit)
......
0011 (added so that there is an even number of 1s in each column)


Transmitted data with Hamming code:
01010110 | 0011
binary number | Hamming bits


Example troubleshooting: 01000110 | 0011 was received

Recalculate Hamming code: 01000110 has 1's in places: 2,3,7

0010 (= 2nd digit)
0011 (= 3rd digit)
0111 (= 7th digit)
0011 (= transmitted Hamming code)
......
0101 → because not 0000, but 0101 = 5 → 5th bit wrong

01010110 | 0011 = 86 from above [the Hamming code was able to correct the 1 bit error]
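The simplified variant of this example can be written down directly in a few lines; the following sketch (illustrative only, function names freely chosen) reproduces the numbers from the example above:

  # Hedged sketch of the simplified variant above: the check bits are the
  # column-wise XOR (even parity) of the binary position numbers of all 1 bits,
  # and a nonzero re-computed value directly names the erroneous bit position.
  def simple_hamming_bits(value, width=8, check_width=4):
      acc = 0
      for pos in range(1, width + 1):             # positions counted from the right
          if (value >> (pos - 1)) & 1:
              acc ^= pos                           # XOR of the positions of all 1 bits
      return acc & ((1 << check_width) - 1)

  data = 0b01010110                                # decimal 86
  check = simple_hamming_bits(data)
  print(f"{check:04b}")                            # -> 0011, as in the example

  received = 0b01000110                            # bit 5 flipped during transmission
  error_pos = simple_hamming_bits(received) ^ check
  print(error_pos)                                 # -> 5, so invert bit 5
  print(bin(received ^ (1 << (error_pos - 1))))    # -> 0b1010110 (= 86 again)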

Literature

  • Martin Bossert : Channel Coding . 2nd completely revised and expanded edition. Teubner, Stuttgart 1998, ISBN 3-519-16143-5 ( information technology ).
  • Todd K. Moon: Error Correction Coding. Mathematical Methods and Algorithms . John Wiley & Sons, Hoboken NJ 2005, ISBN 0-471-64800-0 .
  • André Neubauer: Channel coding. An introduction for engineers, computer scientists and natural scientists . J.Schlembach Fachverlag, Wilburgstetten 2006, ISBN 3-935340-51-6 .
  • Hermann Rohling : Introduction to information and coding theory . Teubner, Stuttgart 1995, ISBN 3-519-06174-0 ( Teubner study books - electrical engineering ).

Web links

Commons: Hamming codes – collection of images, videos and audio files

References

  1. Richard W. Hamming: Error Detecting and Error Correcting Codes. In: The Bell System Technical Journal, Vol. 29, No. 2, 1950, pp. 147–160.
  2. Error detection using a shortened Hamming code. (PDF; 184 kB) Company brochure (English)