Bit

The term bit (a portmanteau of the English words binary digit) is used in computer science, information technology, communications engineering and related fields with the following meanings:

  • as a unit of measurement for information content (see also shannon, nit, ban). 1 bit is the information content contained in a choice between two equally probable possibilities. Information content can take any non-negative real value (see the short sketch after this list).
  • as a unit of measurement for the data volume of digitally represented (stored or transmitted) data. The data volume is the maximum possible information content of data of the same size; this maximum occurs when all possible states are equally probable. It is an integer multiple of 1 bit, namely the number of binary elementary states used for the representation.
  • as a designation for a digit of a binary number (usually "0" and "1"), or more generally for a particular digit from a group of binary digits.
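
The information content of a choice among equally probable possibilities is the binary logarithm of their number, or more generally −log₂(p) for an outcome of probability p. A minimal Python sketch (the function name is illustrative, not a standard API):

    import math

    def information_content(p: float) -> float:
        # Information content, in bits, of an outcome with probability p.
        return -math.log2(p)

    # A choice between two equally probable possibilities carries exactly 1 bit:
    print(information_content(1 / 2))   # 1.0

    # Among six equally probable possibilities (a fair die), the information
    # content is a non-integer real value:
    print(information_content(1 / 6))   # about 2.585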

Word origin

The word bit is a contraction of binary digit. It was coined by the mathematician John W. Tukey, probably in 1946, or according to other sources as early as 1943. The term first appeared in print in 1948, on the first page of Claude Shannon's famous paper A Mathematical Theory of Communication. George Boole was the first to use bits as truth values.

Notation

The unit of measurement is called "bit" and, according to IEC, has "bit" as its unit symbol; the alternative "b" is not used. Analogous to writing "100 meter run" versus "100 m run", the unit name and the unit symbol can both be used in compounds such as "32-bit register". Unit prefixes are used in particular for specifying data rates, e.g. Mbit/s for megabits per second. As a unit of measurement, bit is used only in the singular, whereas the plural refers to particular bits of a group.
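
As an illustration of unit prefixes for data rates, the following Python sketch formats a rate in bit/s with decimal prefixes; the helper function is a hypothetical example, not part of any library:

    def format_bit_rate(bits_per_second: float) -> str:
        # Choose the largest decimal prefix that keeps the value at 1 or above.
        for prefix, factor in [("G", 1e9), ("M", 1e6), ("k", 1e3)]:
            if bits_per_second >= factor:
                return f"{bits_per_second / factor:g} {prefix}bit/s"
        return f"{bits_per_second:g} bit/s"

    print(format_bit_rate(100_000_000))  # 100 Mbit/s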

Representation of bits

Number of bits   States
       0         1
       1         2
       2         4
       3         8
       4         16
      ...        ...
       8         256
      10         1,024
      12         4,096
      16         65,536
      32         4,294,967,296
      64         ≈1.844674407 × 10¹⁹
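
Each value in the table follows from the relation states = 2ⁿ; the short Python loop below reproduces it (illustrative only):

    # Each additional bit doubles the number of representable states.
    for n in [0, 1, 2, 3, 4, 8, 10, 12, 16, 32, 64]:
        print(f"{n:>2} bits: {2 ** n} states")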

The smallest possible distinction that a digital technical system can make is between two possibilities, also referred to as states in computer science. A pair of defined states, for example

  • on or off as the position of a light switch,
  • low resistance or high resistance as the switching state of a transistor,

represents one bit.

In digital circuit technology, voltage levels are used for representation; within a given design (logic family) they lie in defined ranges, see logic level. If the voltage is in the high range, the state is H; if it is in the low range, L (from English high, low).

Symbolically, regardless of the physical representation, the two states of a bit are denoted as

  • true or false (for a Boolean variable), or
  • 1 or 0 (for a binary digit of a numeric variable).

The assignment H → 1, L → 0 is called positive logic; the reverse assignment is called negative logic.

While the physical representation with two states predominates in data processing, some storage technologies use several states per cell. A memory cell can store 3 bits if 8 different charge states can be reliably distinguished, see the table above. Similarly, many radio standards transmit several bits per symbol, see e.g. quadrature amplitude modulation.

Conversely, n bits, regardless of their physical representation, can encode 2ⁿ different logical states, see exponential function. With two bits, for example, 2² = 4 different states can be represented, e.g. the numbers zero to three as 00, 01, 10 and 11, see binary number.
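
Both directions of this relationship can be sketched in Python (variable names are illustrative):

    import math

    n = 2
    # All 2**n states of n bits, as zero-padded binary strings: 00, 01, 10, 11.
    print([format(i, f"0{n}b") for i in range(2 ** n)])

    # Conversely, a memory cell that reliably distinguishes 8 charge states
    # stores log2(8) = 3 bits.
    print(math.log2(8))  # 3.0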

Bit error

If individual bits change due to a disturbance during transmission or in a memory, one speaks of a bit error. A measure of how frequently bit errors occur is the bit error rate.
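
How a bit error rate could be measured is sketched below by comparing a transmitted and a received bit sequence in Python (the function is a hypothetical example):

    def bit_error_rate(sent: str, received: str) -> float:
        # Fraction of positions at which two equal-length bit strings differ.
        assert len(sent) == len(received)
        errors = sum(s != r for s, r in zip(sent, received))
        return errors / len(sent)

    print(bit_error_rate("10110100", "10010101"))  # 2 errors in 8 bits -> 0.25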

There are methods that detect such errors during the transmission and storage of data and can correct them within certain limits, see channel coding. They deliberately add just enough redundancy to the information to reach the required reliability, as the sketch below illustrates.
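
A minimal illustration of channel coding, assuming a simple (3,1) repetition code rather than any scheme used in practice: each bit is sent three times, and the decoder takes a majority vote, which corrects any single flipped bit per triple. Real channel codes (e.g. Hamming codes) achieve protection with far less redundancy.

    def encode(bits: str) -> str:
        # Repeat every bit three times: "10" -> "111000".
        return "".join(b * 3 for b in bits)

    def decode(coded: str) -> str:
        # Majority vote per triple corrects any single flipped bit in it.
        triples = (coded[i:i + 3] for i in range(0, len(coded), 3))
        return "".join("1" if t.count("1") >= 2 else "0" for t in triples)

    sent = encode("1011")        # "111000111111"
    corrupted = "110000111011"   # two single-bit errors, in different triples
    print(decode(corrupted))     # "1011" -- the original data is recovered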

Qubits in quantum information theory

The quantum bit (qubit for short) forms the basis for quantum computers and quantum cryptography in quantum information theory. The qubit plays the same role there as the classical bit in conventional computers: it serves as the smallest possible storage unit and, as a two-state quantum system, defines a measure of quantum information. Here "two-state" does not refer to the number of possible states, but to exactly two states that can be reliably distinguished in a measurement.

Trivia

In January 2012, researchers succeeded in storing 1 bit (2 states) in only 12 iron atoms, the smallest number of atoms used for magnetic storage to date. A stable alignment of the atoms was demonstrated for at least 17 hours at temperatures near absolute zero.

For comparison:

  • Current NAND flash cells require around one million electrons to store one bit for 10 years at room temperature.
  • DNA has an information content of 2 bits per base pair, corresponding to a molecular mass of about 315 daltons per bit, compared with 672 daltons for the 12 iron atoms above (see the arithmetic sketch below).
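
The mass-per-bit figures follow from simple arithmetic, assuming rounded masses of about 56 daltons per iron atom and about 630 daltons per DNA base pair:

    # 12 iron atoms per bit, roughly 56 daltons per Fe atom:
    print(12 * 56)    # 672 daltons per bit

    # DNA stores 2 bits per base pair of roughly 630 daltons:
    print(630 / 2)    # 315.0 daltons per bit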

References and comments

  1. bit (unit in IT). Duden, Bibliographisches Institut, 2016.
  2. Claude Elwood Shannon: A Mathematical Theory of Communication. (PDF) In: Bell System Technical Journal, Volume 27, pp. 379-423 and 623-656, July and October 1948.
  3. IEC 60027-2, Ed. 3.0 (2005-08): Letter symbols to be used in electrical technology - Part 2: Telecommunications and electronics.
  4. According to IEEE 1541 and IEEE 260.1.
  5. "b" as a unit symbol can easily be confused with "B", the unit symbol for the byte.
  6. Science, Vol. 335, p. 196, doi:10.1126/science.1214131.