# Perfect code

In coding theory, a perfect code (also called a densely packed code) is a block code $\mathcal{C} \subset \Sigma^n$ in which every word $w \in \Sigma^n$ has the smallest Hamming distance $d_w$ to exactly one code word $c \in \mathcal{C}$ (and not to several), where $d_w \leq \Delta(\mathcal{C})$.

Under the commonly used maximum-likelihood decoding this means that each received word $w$ has exactly one code word $c$ to which it has the smallest Hamming distance and to which it can therefore be unambiguously assigned. The name perfect derives from this: there are no ambiguous decoding options.

## Mathematical description

Let $\mathcal{C} \subset \Sigma^n$ be a block code with $|\Sigma| = q$, where $\Sigma$ denotes the alphabet used. Let all code words of $\mathcal{C}$ have a minimum Hamming distance of $d$. The code can then correct

$$t = \left\lfloor \frac{d-1}{2} \right\rfloor$$

errors.
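This relation between minimum distance and error-correcting capability can be sketched in a few lines of Python (the function name is illustrative):

```python
def correctable_errors(d: int) -> int:
    """Number of errors t = floor((d - 1) / 2) that a code with
    minimum Hamming distance d can correct."""
    return (d - 1) // 2

# A code with minimum distance d = 7 corrects t = 3 errors;
# with d = 3 it corrects a single error.
```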

For a code to be perfect, the minimum Hamming distance $d$ between two code words $c_1, c_2 \in \mathcal{C}$ with $\Delta(c_1, c_2) = d$ must be odd, because otherwise at least one word $w$ with $\Delta(c_1, w) = \tfrac{d}{2} = \Delta(w, c_2)$ would exist, which contradicts the definition of perfect codes.
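A small numeric sketch of this argument, assuming an even distance $d = 4$ (the code words and the function name are illustrative):

```python
def hamming(a: str, b: str) -> int:
    """Hamming distance between two equal-length words."""
    return sum(x != y for x, y in zip(a, b))

# With even distance d = 4, a received word can be equidistant
# to two code words and thus cannot be decoded unambiguously:
c1, c2 = "0000", "1111"
w = "1100"
print(hamming(c1, w), hamming(w, c2))  # both distances equal d/2 = 2
```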

Since the code corrects $t$ errors, a word $w \in \Sigma^n$ can be uniquely assigned to a code word $c \in \mathcal{C}$ if the Hamming distance satisfies $\Delta(c, w) \leq t$. Conversely, this means for a specific code word $c$ that all words $w$ with a Hamming distance of at most $t$ are decoded to $c$. As a set this is written:

$$\mathcal{B}_t(c) := \{ w \in \Sigma^n \mid \Delta(w, c) \leq t \}$$

This set is also known as a sphere (or ball) of radius $t$ around $c$. The number of elements of $\mathcal{B}_t(c)$ can be calculated as follows:

$$|\mathcal{B}_t(c)| = \sum_{i=0}^{t} (q-1)^i \binom{n}{i}$$
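The sphere size can be computed directly from this sum; a minimal sketch in Python (the function name is illustrative):

```python
from math import comb

def sphere_size(n: int, q: int, t: int) -> int:
    """|B_t(c)|: number of words of length n over a q-ary alphabet
    within Hamming distance t of a fixed code word."""
    return sum((q - 1) ** i * comb(n, i) for i in range(t + 1))

# For n = 7, q = 2, t = 1 the sphere contains 1 + 7 = 8 words.
```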

For $i$ errors there are $\binom{n}{i}$ possible positions for the errors, and for each error position there are $q-1$ incorrect symbol values. There are $|\mathcal{C}|$ such spheres, and they are disjoint subsets of $\Sigma^n$. This results in the inequality

$$|\Sigma^n| \geq |\mathcal{C}| \sum_{i=0}^{t} (q-1)^i \binom{n}{i}$$

Solving for $|\mathcal{C}|$ and using $|\Sigma^n| = q^n$ yields:

$$|\mathcal{C}| \leq \frac{q^n}{\sum_{i=0}^{t} (q-1)^i \binom{n}{i}}$$

This inequality for the number of code words is called the Hamming bound or also the sphere-packing bound.

A perfect code is characterized by the fact that all words $w \in \Sigma^n$ are contained in exactly one of the spheres (in other words: the spheres cover the whole space). For a perfect code, therefore, the Hamming bound holds with equality.
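This equality test can be written down directly; a sketch in Python for parameters $n$, $k$, $d$, $q$ with $|\mathcal{C}| = q^k$ (the function name is illustrative):

```python
from math import comb

def is_perfect(n: int, k: int, d: int, q: int) -> bool:
    """Check whether parameters [n, k; d, q] meet the Hamming bound
    with equality, i.e. whether q^n = q^k * |B_t(c)|."""
    t = (d - 1) // 2
    sphere = sum((q - 1) ** i * comb(n, i) for i in range(t + 1))
    return q ** n == q ** k * sphere

# The [7, 4; 3, 2] Hamming code meets the bound: 2^7 = 2^4 * (1 + 7).
```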

## Alternative interpretation

This bound can also be illustrated as follows (for the sake of simplicity, explained only for binary codes, i.e. for $q = 2$):

For a $t$-error-correcting code, the decoder must receive enough information to be able to distinguish all of the following cases:

• $|\mathcal{C}| = 2^k$ different information words and, for each of them,
• all possible patterns of $0 \ldots t$ bit errors in the $n$ bits of a code word

Since there are $\binom{n}{i}$ possibilities to distribute $i$ bit errors among $n$ bits, this results in a total of

$$2^k \cdot \sum_{i=0}^{t} \binom{n}{i}$$

cases that have to be distinguished with the $n$ available bits, that is

$$2^n \geq 2^k \cdot \sum_{i=0}^{t} \binom{n}{i}.$$

Equality holds for a perfect code, i.e. the $n$ bits carry exactly the minimum information required. This (in)equality corresponds to the general definition above for the case $q = 2$ and $|\mathcal{C}| = 2^k$.
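For the binary [7, 4] Hamming code with $t = 1$, this counting argument can be checked directly (a minimal sketch):

```python
from math import comb

# Binary [7, 4] Hamming code, t = 1: the decoder must distinguish
# 2^k information words times all error patterns of weight 0..t.
n, k, t = 7, 4, 1
cases = 2 ** k * sum(comb(n, i) for i in range(t + 1))
print(cases, 2 ** n)  # 128 128 -- the cases fit exactly into n = 7 bits
```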

Strictly speaking, one is only interested in correcting the $k$ information bits, for which less additional information would suffice. However, these $n-k$ bits of additional information would then have to be error-free themselves, which of course is usually not guaranteed. Correction of all $n$ bits is therefore necessary.

## Known Perfect Codes

If the alphabet size $q$ is a prime power, then the following holds:

If $\mathcal{C}$ is a perfect code with parameters $[n, k; d, q]$, where $k = \log_q(|\mathcal{C}|)$, then there is a linear code $\mathcal{D}$ with the same parameters, and $\mathcal{D}$ is one of the following codes:

• A trivial code: a code is called trivial here,
• if it contains only $q^0 = 1$ code word, i.e. a single code word, with parameters $[m, 0; 2m+1, q]$, or
• if it contains all $q^m$ possible code words of the given block length, with parameters $[m, m; 1, q]$.
• A binary repetition code with an odd code word length, with parameters $[2m+1, 1; 2m+1, 2]$.
• A Hamming code over the finite field $\mathbb{F}_q$, with parameters $\left[ \frac{q^m - 1}{q - 1}, \frac{q^m - 1}{q - 1} - m; 3, q \right]$.
• The binary Golay code $\mathcal{G}_{23}$ with parameters $[23, 12; 7, 2]$.
• The ternary Golay code $\mathcal{G}_{11}$ with parameters $[11, 6; 5, 3]$.

Here $m$ stands for a positive natural number, $m \in \mathbb{N}^+$.
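As a cross-check, all non-trivial perfect codes in this list meet the Hamming bound with equality; a sketch in Python (the function name is illustrative):

```python
from math import comb

def meets_hamming_bound(n: int, k: int, d: int, q: int) -> bool:
    """True iff q^n = q^k * sum_{i=0}^{t} (q-1)^i * C(n, i), t = (d-1)//2."""
    t = (d - 1) // 2
    sphere = sum((q - 1) ** i * comb(n, i) for i in range(t + 1))
    return q ** n == q ** k * sphere

# Parameters [n, k; d, q] of the non-trivial perfect codes above
codes = [
    (3, 1, 3, 2),    # binary repetition code, m = 1
    (7, 4, 3, 2),    # binary Hamming code over F_2, m = 3
    (23, 12, 7, 2),  # binary Golay code G_23
    (11, 6, 5, 3),   # ternary Golay code G_11
]
print(all(meets_hamming_bound(*p) for p in codes))  # True
```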

The codes $\mathcal{C}$ and $\mathcal{D}$ have the same parameters and can therefore correct the same number of errors at the same block length $n$. The conversion of a trivial code into a linear code with the same parameters is simple: if the code contains a single code word, the zero vector $c_0 = (0, \dots, 0)$ can serve as the only code word instead, and the resulting code is linear. The only remaining trivial code is the one that contains all $q^n$ possible words of the given block length, and this code is already linear. The other codes from the list are linear codes as well. Thus for every perfect code, even one that is not itself linear, there is a linear code with the same parameters, provided the size of the alphabet is a prime power.

It is an open question whether, and for which parameters, non-trivial perfect codes exist when the alphabet size is not a prime power.

For practical purposes the trivial codes are of no interest because either

• no information can be transmitted or
• no errors can be detected or corrected.
