# Landau symbols

Landau symbols (also O notation, English: big O notation) are used in mathematics and computer science to describe the asymptotic behavior of functions and sequences. In computer science they appear in the analysis of algorithms, where they measure the number of elementary steps or storage units required as a function of the size of the given problem. Complexity theory uses them to classify problems as "easy" or "hard" to solve: a problem counts as "easy" if there is an algorithm whose running time can be bounded by a polynomial, and as "hard" if no algorithm is known whose running time grows more slowly than exponentially. Such problems are called polynomially solvable or not polynomially solvable, respectively.

| Notation | Illustrative meaning |
|---|---|
| $f = o(g)$ or $f \in o(g)$ | $f$ grows more slowly than $g$ |
| $f = O(g)$ or $f \in \mathcal{O}(g)$ | $f$ does not grow substantially faster than $g$ |
| $f \in \Theta(g)$ | $f$ grows as fast as $g$ |
| $f = \Omega(g)$ | $f$ does not always grow more slowly than $g$ (number theory) |
| $f \in \Omega(g)$ | $f$ does not grow substantially more slowly than $g$ (complexity theory) |
| $f \in \omega(g)$ | $f$ grows faster than $g$ |
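The contrast between these rows can be made tangible numerically. The following sketch (purely illustrative; the functions $f(n) = n^2$ and $g(n) = 2^n$ are arbitrary example choices) computes the ratio $f(n)/g(n)$ for growing $n$; its tendency toward 0 is the behavior captured by $f \in o(g)$:

```python
# Illustrative sketch, not part of the formal definitions: compare the growth
# of f(n) = n**2 against g(n) = 2**n by sampling the ratio f(n)/g(n).
# The ratio shrinking toward 0 is consistent with f growing slower than g.

f = lambda n: n**2   # quadratic growth
g = lambda n: 2**n   # exponential growth

ratios = [f(n) / g(n) for n in (10, 20, 40, 80)]
print(ratios)  # decreases rapidly toward 0
```

A finite sample like this can of course only suggest, never prove, an asymptotic statement; the formal definitions below make it precise.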

## History of the O symbol

The German number theorist Paul Bachmann first used the symbol in 1894, expressing "through the sign $O(n)$ a quantity [...] whose order in relation to the order of $n$ does not exceed [...]". The likewise German number theorist Edmund Landau, through whom the $O$ and $o$ symbolism became widely known and with whose name it is associated today, especially in the German-speaking world, adopted Bachmann's notation and additionally introduced the symbol $o$ for "of smaller order".

## Special case: Omega symbol

### Two incompatible definitions

There are two common but incompatible definitions in mathematics for

$f(x) = \Omega(g(x)) \ (x \rightarrow a),$

where $a$ is a real number, $\infty$, or $-\infty$, and where the real functions $f$ and $g$ are defined on a neighborhood of $a$, with $g$ positive on this neighborhood.

The first is used in analytic number theory, the second in complexity theory. This situation can lead to confusion.

### The Hardy-Littlewood definition

In 1914, Godfrey Harold Hardy and John Edensor Littlewood introduced the symbol $\Omega$ with the meaning

$f(x) = \Omega(g(x)) \ (x \rightarrow \infty) \;\Leftrightarrow\; \limsup_{x \to \infty} \left\lvert \frac{f(x)}{g(x)} \right\rvert > 0.$

Thus $f(x) = \Omega(g(x))$ is the negation of $f(x) = o(g(x))$.

In 1916 the same authors introduced the two new symbols $\Omega_{R}$ and $\Omega_{L}$ with the meanings

$f(x) = \Omega_{R}(g(x)) \ (x \rightarrow \infty) \;\Leftrightarrow\; \limsup_{x \to \infty} \frac{f(x)}{g(x)} > 0;$

$f(x) = \Omega_{L}(g(x)) \ (x \rightarrow \infty) \;\Leftrightarrow\; \liminf_{x \to \infty} \frac{f(x)}{g(x)} < 0.$

Thus $f(x) = \Omega_{R}(g(x))$ is the negation of $f(x) < o(g(x))$, and $f(x) = \Omega_{L}(g(x))$ the negation of $f(x) > o(g(x))$.

Contrary to a later claim by Donald Ervin Knuth, Landau used these three symbols in 1924 with the same meanings.

These Hardy-Littlewood symbols are prototypes; they are never used in exactly this form today: $\Omega_{R}$ has become $\Omega_{+}$ and $\Omega_{L}$ has become $\Omega_{-}$.

These three symbols $\Omega, \Omega_{+}, \Omega_{-}$, as well as $f(x) = \Omega_{\pm}(g(x))$ (meaning that both properties $f(x) = \Omega_{+}(g(x))$ and $f(x) = \Omega_{-}(g(x))$ hold), are still used systematically in analytic number theory today.

#### Simple examples

We have

$\sin x = \Omega(1) \ (x \rightarrow \infty)$

and more specifically

$\sin x = \Omega_{\pm}(1) \ (x \rightarrow \infty).$

We have

$\sin x + 1 = \Omega(1) \ (x \rightarrow \infty)$

and more specifically

$\sin x + 1 = \Omega_{+}(1) \ (x \rightarrow \infty),$

but

$\sin x + 1 \neq \Omega_{-}(1) \ (x \rightarrow \infty).$
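These statements about $\sin x$ can be illustrated numerically. The following sketch (illustrative only) samples $\sin$ at the points $x_k = \pi/2 + k\pi$, where it alternates between $+1$ and $-1$: the function keeps exceeding $1/2$ and falling below $-1/2$ for arbitrarily large $x$, in line with $\sin x = \Omega_{\pm}(1)$:

```python
# Numeric sketch of sin x = Ω_±(1): along x_k = π/2 + kπ the values sin(x_k)
# alternate (up to rounding) between +1 and -1, so sin exceeds 1/2 and drops
# below -1/2 for arbitrarily large x. Sample count is an arbitrary choice.
import math

samples = [math.sin(math.pi / 2 + k * math.pi) for k in range(6)]
pos = [s for s in samples if s > 0.5]    # values near +1
neg = [s for s in samples if s < -0.5]   # values near -1
print(len(pos), len(neg))  # 3 3
```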

#### Number theoretic notation

The strict set-theoretic notation $f \in \Omega(g)$ is never used in number theory; one always writes the less strict $f = \Omega(g)$. This is read as "$f$ is an omega of $g$".

### Knuth's definition

In 1976, D. E. Knuth published an article whose main aim is to justify a different use of the symbol $\Omega$. He tries to convince his readers that, apart from a few older works (such as the 1951 book by Edward C. Titchmarsh), the Hardy-Littlewood definition is almost never used. He writes that it could not be found in Landau's work, and that George Pólya, who studied under Landau, confirmed the assessment that Landau probably did not use the symbol (in fact, there is a use in a treatise from 1924). Knuth writes: "For all the applications I have seen so far in computer science, a stronger requirement [...] is much more appropriate." He uses the symbol $\Omega(f(n))$ to describe this stronger requirement: "Unfortunately, Hardy and Littlewood didn't define $\Omega$ as I wanted to."

At the risk of misunderstanding and confusion, he also defines

$f(x) = \Omega(g(x)) \;\Leftrightarrow\; g(x) = O(f(x)).$

## Definition

In the following table, $f$ and $g$ denote either

• sequences of real numbers, in which case $x \in \mathbb{N}$ and the limit is $a = \infty$; or
• real-valued functions of the real numbers, in which case $x \in \mathbb{R}$ and the limit $a \in \mathbb{R} \cup \lbrace -\infty, +\infty \rbrace$ lies in the extended real numbers; or
• real-valued functions on an arbitrary topological space $(X, \mathfrak{T})$, in which case $x \in X$ and the limit $a \in X$ also lies in $X$. The most important special case here is $X = \mathbb{R}^{n}$.

The Landau symbols can then be defined formally using the limit superior and limit inferior as follows:

| Notation | Description | Mathematical definition |
|---|---|---|
| $f \in o(g)$ | asymptotically negligible compared to $g$ | $\lim_{x \to a} \lvert f(x)/g(x) \rvert = 0$ |
| $f \in \mathcal{O}(g)$ | asymptotic upper bound | $\limsup_{x \to a} \lvert f(x)/g(x) \rvert < \infty$ |
| $f = \Omega(g)$ (number theory) | asymptotic lower bound, $f$ is not in $o(g)$ | $\limsup_{x \to a} \lvert f(x)/g(x) \rvert > 0$ |
| $f \in \Omega(g)$ (complexity theory) | asymptotic lower bound, $g \in \mathcal{O}(f)$ | $\liminf_{x \to a} \lvert f(x)/g(x) \rvert > 0$ |
| $f \in \Theta(g)$ | asymptotically tight bound, both $f \in \mathcal{O}(g)$ and $f \in \Omega(g)$ | $0 < \liminf_{x \to a} \lvert f(x)/g(x) \rvert \leq \limsup_{x \to a} \lvert f(x)/g(x) \rvert < \infty$ |
| $f \in \omega(g)$ | asymptotically dominant, $g \in o(f)$ | $\lim_{x \to a} \lvert f(x)/g(x) \rvert = \infty$ |
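The limsup/liminf characterizations can be illustrated numerically. In the following sketch, the functions $f(n) = (3+\sin n)\,n$ and $g(n) = n$ are arbitrary example choices: the ratio $\lvert f/g \rvert$ oscillates between about 2 and 4, so the liminf is positive and the limsup finite, consistent with $f \in \Theta(g)$:

```python
# Numeric sketch of the limsup/liminf characterization: for
# f(n) = (3 + sin n) * n and g(n) = n the ratio |f/g| oscillates between
# roughly 2 and 4, so liminf > 0 and limsup < ∞ both hold, i.e. f ∈ Θ(g).
# The sample range is an arbitrary illustrative choice, not a proof.
import math

def f(n): return (3 + math.sin(n)) * n
def g(n): return n

ratios = [abs(f(n) / g(n)) for n in range(1, 2000)]
lo, hi = min(ratios), max(ratios)
print(lo, hi)  # close to 2 and 4, respectively
```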

In practice, the limit $\lim \tfrac{f(x)}{g(x)}$ usually exists, so that the estimate of the limit superior can often be replaced by the (simpler) computation of an ordinary limit.

Equivalently to the definition with limit symbols, the following definitions with quantifiers can be used for a metric space $(X, d)$, in particular for the cases $X = \mathbb{R}$ and $X = \mathbb{N}$:

| Notation | Definition for $x \to a$ ($a$ finite) |
|---|---|
| $f \in o(g)$ | $\forall\, C > 0 \ \exists\, \varepsilon > 0 \ \forall\, x \in \lbrace x : d(x,a) < \varepsilon \rbrace :\ \lvert f(x) \rvert < C \cdot \lvert g(x) \rvert$ |
| $f \in \mathcal{O}(g)$ | $\exists\, C > 0 \ \exists\, \varepsilon > 0 \ \forall\, x \in \lbrace x : d(x,a) < \varepsilon \rbrace :\ \lvert f(x) \rvert \leq C \cdot \lvert g(x) \rvert$ |
| $f = \Omega(g)$ (number theory) | $\exists\, c > 0 \ \forall\, \varepsilon > 0 \ \exists\, x \in \lbrace x : d(x,a) < \varepsilon \rbrace :\ c \cdot \lvert g(x) \rvert \leq \lvert f(x) \rvert$ |
| $f \in \Omega(g)$ (complexity theory) | $\exists\, c > 0 \ \exists\, \varepsilon > 0 \ \forall\, x \in \lbrace x : d(x,a) < \varepsilon \rbrace :\ c \cdot \lvert g(x) \rvert \leq \lvert f(x) \rvert$ |
| $f \in \Theta(g)$ | $\exists\, c > 0 \ \exists\, C > 0 \ \exists\, \varepsilon > 0 \ \forall\, x \in \lbrace x : d(x,a) < \varepsilon \rbrace :\ c \cdot \lvert g(x) \rvert \leq \lvert f(x) \rvert \leq C \cdot \lvert g(x) \rvert$ |
| $f \in \omega(g)$ | $\forall\, c > 0 \ \exists\, \varepsilon > 0 \ \forall\, x \in \lbrace x : d(x,a) < \varepsilon \rbrace :\ c \cdot \lvert g(x) \rvert \leq \lvert f(x) \rvert$ |

| Notation | Definition for $x \to \infty$ |
|---|---|
| $f \in o(g)$ | $\forall\, C > 0 \ \exists\, x_{0} > 0 \ \forall\, x > x_{0} :\ \lvert f(x) \rvert < C \cdot \lvert g(x) \rvert$ |
| $f \in \mathcal{O}(g)$ | $\exists\, C > 0 \ \exists\, x_{0} > 0 \ \forall\, x > x_{0} :\ \lvert f(x) \rvert \leq C \cdot \lvert g(x) \rvert$ |
| $f = \Omega(g)$ (number theory; the comparison function $g$ is always positive) | $\exists\, c > 0 \ \forall\, x_{0} > 0 \ \exists\, x > x_{0} :\ c \cdot g(x) \leq \lvert f(x) \rvert$ |
| $f \in \Omega(g)$ (complexity theory) | $\exists\, c > 0 \ \exists\, x_{0} > 0 \ \forall\, x > x_{0} :\ c \cdot \lvert g(x) \rvert \leq \lvert f(x) \rvert$ |
| $f \in \Theta(g)$ | $\exists\, c > 0 \ \exists\, C > 0 \ \exists\, x_{0} > 0 \ \forall\, x > x_{0} :\ c \cdot \lvert g(x) \rvert \leq \lvert f(x) \rvert \leq C \cdot \lvert g(x) \rvert$ |
| $f \in \omega(g)$ | $\forall\, c > 0 \ \exists\, x_{0} > 0 \ \forall\, x > x_{0} :\ c \cdot \lvert g(x) \rvert < \lvert f(x) \rvert$ |
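The quantifier form of the $\mathcal{O}$ definition lends itself to a finite sanity check: given hand-picked witnesses $C$ and $n_0$, one can verify $\lvert f(n) \rvert \leq C \cdot \lvert g(n) \rvert$ over a test range. The sketch below does this for the example $f(n) = 3n^2 + 5n + 7$ and $g(n) = n^2$; all names and constants are illustrative, and a finite check can support but never prove an asymptotic claim:

```python
# Sketch of the quantifier definition of f ∈ O(g) for x → ∞: check the
# condition |f(n)| <= C*|g(n)| for all n0 < n <= n_max, given concrete
# hand-picked witnesses C and n0.

def is_big_o_witnessed(f, g, C, n0, n_max=10_000):
    """Check |f(n)| <= C*|g(n)| for all n0 < n <= n_max."""
    return all(abs(f(n)) <= C * abs(g(n)) for n in range(n0 + 1, n_max + 1))

f = lambda n: 3 * n**2 + 5 * n + 7   # example function
g = lambda n: n**2                   # comparison function

# With C = 4 and n0 = 6 we have 3n^2 + 5n + 7 <= 4n^2 for all n > 6.
print(is_big_o_witnessed(f, g, C=4, n0=6))  # True
```

The same helper shows that the witnesses matter: with $C = 3$ the inequality already fails at $n = 7$.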

Analogous definitions can also be given for the case $x \to -\infty$ and for one-sided limits.

## Consequences

For every function $f$, each of

$\Omega(f),\ \mathcal{O}(f),\ \Theta(f),\ o(f),\ \omega(f)$

describes a set of functions. The following relationships hold between them:

$$
\begin{aligned}
\Theta(f) &\subseteq \mathcal{O}(f) \\
\Theta(f) &\subseteq \Omega(f) \\
\Theta(f) &= \mathcal{O}(f) \cap \Omega(f) \\
\omega(f) &\subseteq \Omega(f) \\
o(f) &\subseteq \mathcal{O}(f) \\
\emptyset &= \omega(f) \cap o(f)
\end{aligned}
$$
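A rough numerical experiment can illustrate the relation $\Theta(g) = \mathcal{O}(g) \cap \Omega(g)$. In the sketch below, each example function is compared against $g(n) = n^2$ via the ratio at a single large sample point; the thresholds and the sample point are crude illustrative heuristics, not a proof:

```python
# Heuristic classifier illustrating Θ(g) = O(g) ∩ Ω(g): membership flags are
# estimated from the ratio |f(n)/g(n)| at one large n. Thresholds eps/big and
# the sample point are arbitrary illustrative choices.

def classify(f, g, n=10**6, eps=1e-3, big=1e3):
    r = abs(f(n) / g(n))
    in_O = r < big          # ratio bounded above (heuristic)
    in_Omega = r > eps      # ratio bounded below (heuristic)
    in_Theta = in_O and in_Omega
    return in_O, in_Omega, in_Theta

g = lambda n: n**2
for f in (lambda n: n, lambda n: 7 * n**2, lambda n: n**3):
    in_O, in_Omega, in_Theta = classify(f, g)
    assert in_Theta == (in_O and in_Omega)   # Θ(g) = O(g) ∩ Ω(g)
```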

## Examples and notation

When using the Landau symbols, the function inside them is often given in abbreviated form. For example, instead of $\mathcal{O}(g)$ with $g \colon \mathbb{R} \to \mathbb{R},\ n \mapsto n^{3}$ one writes $\mathcal{O}(n^{3})$ for short; this is also done in the following examples.

The examples in the table all contain comparison functions $g$ that are monotonically increasing, so that their behavior for $n \to \infty$ is what matters. (The argument is often named $n$, frequently without explanation, because it is very often a natural number.) The rows are listed in ascending order, i.e. each complexity class is contained in those of the rows below it.

| Notation | Meaning | Illustrative explanation | Example runtimes |
|---|---|---|---|
| $f \in \mathcal{O}(1)$ | $f$ is bounded. | $f$ does not exceed a constant value (independent of the value of the argument $n$). | Determining whether a binary number is even; looking up the $n$-th element of an array on a register machine |
| $f \in \mathcal{O}(\log \log n)$ | $f$ grows double-logarithmically. | In base 2, $f$ increases by 1 when $n$ is squared. | Interpolation search in a sorted array with $n$ uniformly distributed entries |
| $f \in \mathcal{O}(\log n)$ | $f$ grows logarithmically. | $f$ grows by roughly a constant amount when $n$ doubles. The base of the logarithm does not matter. | Binary search in a sorted array with $n$ entries |
| $f \in \mathcal{O}(\sqrt{n})$ | $f$ grows like the square-root function. | $f$ roughly doubles when $n$ quadruples. | Number of divisions of the naive primality test (divide by every integer $\leq \sqrt{n}$) |
| $f \in \mathcal{O}(n)$ | $f$ grows linearly. | $f$ roughly doubles when $n$ doubles. | Search in an unsorted array with $n$ entries (e.g. linear search) |
| $f \in \mathcal{O}(n \log n)$ | $f$ has super-linear growth. | | Comparison-based algorithms for sorting $n$ numbers |
| $f \in \mathcal{O}(n^{2})$ | $f$ grows quadratically. | $f$ roughly quadruples when $n$ doubles. | Simple algorithms for sorting $n$ numbers |
| $f \in \mathcal{O}(n^{m})$ | $f$ grows polynomially. | $f$ grows roughly $2^{m}$-fold when $n$ doubles. | "Easy" algorithms |
| $f \in \mathcal{O}(2^{n})$ | $f$ grows exponentially. | $f$ roughly doubles when $n$ increases by 1. | Satisfiability problem of propositional logic (SAT) via exhaustive search |
| $f \in \mathcal{O}(n!)$ | $f$ grows factorially. | $f$ grows roughly $(n+1)$-fold when $n$ increases by 1. | Traveling salesman problem (with exhaustive search) |
| $f \in A(n)$ | $f$ grows like the modified Ackermann function. | | Problem is computable, but not necessarily primitive-recursive |
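Two rows of the table can be contrasted by instrumenting actual searches. The following sketch (an illustration, not a benchmark) counts the loop iterations of linear search, which is in $\mathcal{O}(n)$, against binary search, which is in $\mathcal{O}(\log n)$, on a sorted array:

```python
# Counting loop iterations of linear search (O(n)) versus binary search
# (O(log n)) on a sorted list; the worst-case key (last element) is chosen
# so the linear scan must visit every entry.

def linear_search_steps(arr, key):
    steps = 0
    for x in arr:
        steps += 1
        if x == key:
            break
    return steps

def binary_search_steps(arr, key):
    steps, lo, hi = 0, 0, len(arr) - 1
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if arr[mid] == key:
            break
        elif arr[mid] < key:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps

n = 1024
arr = list(range(n))
print(linear_search_steps(arr, n - 1))  # n steps: linear in n
print(binary_search_steps(arr, n - 1))  # about log2(n) steps
```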

Landau notation is used to describe the asymptotic behavior when approaching a finite or infinite limit. The big $\mathcal{O}$ is used to indicate a maximal order of magnitude. For example, by Stirling's formula, the asymptotic behavior of the factorial satisfies

$n! = \sqrt{2 \pi n}\, \left( \frac{n}{e} \right)^{n} \left( 1 + \mathcal{O}\left( \frac{1}{n} \right) \right)$ for $n \to \infty$

and

$n! = \mathcal{O}\left( \sqrt{n} \cdot \left( \frac{n}{e} \right)^{n} \right)$ for $n \to \infty$.

The factor $\sqrt{2 \pi}$ is only a constant and can be neglected in the order-of-magnitude estimate.

Landau notation can also be used to describe the error term of an approximation. For example,

$e^{x} = 1 + x + x^{2}/2 + \mathcal{O}(x^{3})$ for $x \to 0$

says that the absolute value of the approximation error is smaller than a constant times $x^{3}$ for $x$ sufficiently close to zero.
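This bound can be observed numerically: the scaled error $\lvert e^x - (1 + x + x^2/2)\rvert / \lvert x\rvert^3$ stays bounded as $x \to 0$ (in fact, by the Taylor series it approaches $1/6$). The sample points below are arbitrary illustrative choices:

```python
# Numeric sketch of e^x = 1 + x + x^2/2 + O(x^3) for x → 0: the error divided
# by |x|^3 stays bounded and approaches 1/6 (the next Taylor coefficient).
import math

def scaled_error(x):
    approx = 1 + x + x**2 / 2
    return abs(math.exp(x) - approx) / abs(x) ** 3

vals = [scaled_error(10.0 ** (-k)) for k in range(1, 5)]
print(vals)  # all close to 1/6 ≈ 0.1667
```

Pushing $x$ much below $10^{-4}$ is avoided here because floating-point cancellation would dominate the tiny error term.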

The small $o$ is used to say that an expression is negligibly small compared to the specified expression. For example, for differentiable functions $f$ we have

$f(x+h) = f(x) + h f'(x) + o(h)$ for $h \to 0$;

the error of the approximation by the tangent thus tends to $0$ faster than linearly.
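The $o(h)$ statement can be checked numerically for a concrete function: for the arbitrary example choice $f(x) = x^2$ at $x = 1$, the tangent error $f(1+h) - (f(1) + h f'(1))$ equals $h^2$, so the error divided by $h$ tends to $0$:

```python
# Numeric sketch of the o(h) tangent statement: for f(x) = x^2 at x = 1 the
# error of the tangent approximation is h^2, so error/h → 0 as h → 0.

def f(x):
    return x * x

def tangent_error_over_h(h, x=1.0):
    fprime = 2.0 * x                          # derivative of x^2
    error = f(x + h) - (f(x) + h * fprime)
    return error / h

vals = [tangent_error_over_h(10.0 ** (-k)) for k in range(1, 6)]
print(vals)  # shrinks like h itself
```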

## Notation traps

### Symbolic equal sign

The equals sign is often used in Landau notation in mathematics. It is, however, a purely symbolic notation and not a statement of equality to which, for example, the laws of transitivity or symmetry could be applied: a statement like $f(x) = \mathcal{O}(g(x))$ is not an equation, and neither side determines the other. From $f_{1}(x) = \mathcal{O}(g(x))$ and $f_{2}(x) = \mathcal{O}(g(x))$ it does not follow that $f_{1}$ and $f_{2}$ are equal. Nor can one conclude from $f(x) = \mathcal{O}(g_{1}(x))$ and $f(x) = \mathcal{O}(g_{2}(x))$ that $\mathcal{O}(g_{1}(x))$ and $\mathcal{O}(g_{2}(x))$ are the same class, or that one is contained in the other.

In fact, $\mathcal{O}(g(x))$ is a set containing all those functions that grow at most as fast as $g(x)$. The notation $f(x) \in \mathcal{O}(g(x))$ is therefore the formally correct one.

The notation with the equals sign, as in $f = \mathcal{O}(g)$, is nevertheless used extensively in practice. For example, the expression $f(n) = h(n) + \Theta(g(n))$ says that there are constants $c_{1}$ and $c_{2}$ such that

$h(n) + c_{1} \cdot g(n) \;\leq\; f(n) \;\leq\; h(n) + c_{2} \cdot g(n)$

holds for sufficiently large $n$.
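The two-sided bound can be checked for a concrete instance. In the sketch below, $f(n) = n^3 + 2n^2 + n$, $h(n) = n^3$, and $g(n) = n^2$ are arbitrary example choices, and the constants $c_1 = 2$, $c_2 = 4$ are hand-picked witnesses:

```python
# Sketch of f(n) = h(n) + Θ(g(n)): for f(n) = n^3 + 2n^2 + n, h(n) = n^3,
# g(n) = n^2, the constants c1 = 2 and c2 = 4 give
# h(n) + c1*g(n) <= f(n) <= h(n) + c2*g(n) (here checked from n = 1 onward).

def f(n): return n**3 + 2 * n**2 + n
def h(n): return n**3
def g(n): return n**2

c1, c2 = 2, 4
ok = all(h(n) + c1 * g(n) <= f(n) <= h(n) + c2 * g(n) for n in range(1, 1000))
print(ok)  # True
```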

### Forgotten limit

Another pitfall is that it is often not specified which limit the Landau symbol refers to. But the limit is essential: for example, $\tfrac{1}{x} \in o\left( \tfrac{1}{\sqrt{x}} \right)$ for $x \to \infty$, but not for the one-sided limit $x \downarrow 0$. Normally, however, the intended limit is clear from context, so ambiguities rarely arise here.

## Application in complexity theory


In complexity theory , the Landau symbols are mainly used to describe the (minimum, average or maximum) time or storage space requirements of an algorithm. One then speaks of time complexity or space complexity . The complexity can depend on the machine model used . As a rule, however, a “normal” model is assumed, for example one that is equivalent to the Turing machine .

It is often very laborious, or completely impossible, to specify for a problem $L$ a function $f_{L} \colon w \mapsto f_{L}(w)$ that assigns to every input $w$ the associated number of computation steps (or memory cells). Therefore, instead of treating every input individually, one is usually content to restrict attention to the input length $n = |w|$. Even then, however, it is usually too laborious to specify a function $f_{L} \colon n \mapsto f_{L}(n)$ exactly.

This is why the Landau notation was developed: it restricts attention to the asymptotic behavior of the function $f_{L}$. One considers the limits of the computational effort (the need for memory and computing time) as the input grows. The most important Landau symbol is $\mathcal{O}$ (the capital Latin letter "O"), with which one can state upper bounds; lower bounds are generally much harder to find. Here $f \in \mathcal{O}(g)$ (often written $f(n) = \mathcal{O}(g(n))$) means that a constant $c > 0$ and an $n_{0} \in \mathbb{N}$ exist such that for all $n > n_{0}$ the following holds: $f(n) \leq c \cdot g(n)$. In other words: for all input lengths, the computational effort $f(n)$ is not substantially larger, i.e. at most by a constant factor $c$, than $g(n)$.

The function $f$ is not always known; as $g$, however, one usually chooses a function whose growth is well understood (such as $g(x) = x^{2}$ or $g(x) = 2^{x}$). The Landau notation serves to estimate the computational effort (space requirement) when specifying the exact function would be too laborious or too complicated.
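This bounding of an unknown exact step count by a simple $g$ can be sketched concretely. Below, the comparisons of a straightforward insertion sort are counted on worst-case (reverse-sorted) input and checked against $g(n) = n^2$ with $c = 1$; the algorithm, the instrumentation, and the constants are illustrative choices:

```python
# Instead of deriving the exact step-count function of an algorithm, bound it
# by a simple g: count the comparisons of insertion sort on worst-case
# (reverse-sorted) input and check them against g(n) = n^2 with c = 1.

def insertion_sort_comparisons(arr):
    a = list(arr)
    comparisons = 0
    for i in range(1, len(a)):
        j = i
        while j > 0:
            comparisons += 1           # one comparison per inner-loop test
            if a[j - 1] > a[j]:
                a[j - 1], a[j] = a[j], a[j - 1]
                j -= 1
            else:
                break
    return comparisons

for n in (10, 100, 500):
    worst = list(range(n, 0, -1))      # reverse-sorted input
    steps = insertion_sort_comparisons(worst)
    assert steps <= n * n              # f(n) <= 1 * g(n) with g(n) = n^2
```

The exact worst-case count here is $n(n-1)/2$, but for the asymptotic statement the crude bound $n^2$ already suffices.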

The Landau symbols allow problems and algorithms to be grouped according to their complexity in complexity classes .

In complexity theory, the various problems and algorithms can then be compared as follows: with $\Omega$ one can state a lower bound for, say, the asymptotic running time of a problem, and with $\mathcal{O}$ correspondingly an upper bound. For $\mathcal{O}(f)$, the shape of $f$ (e.g. $f(n) = n^{2}$) is also referred to as the complexity class or measure of effort (e.g. "quadratic").

In this notation, as the definitions of the Landau symbols show, constant factors are neglected. This is justified because the constants largely depend on the machine model used or, in the case of implemented algorithms, on the quality of the compiler and various properties of the hardware of the executing computer . This means that their informative value about the complexity of the algorithm is very limited.

9. With the comment: "Although I have changed Hardy and Littlewood's definition of $\Omega$, I feel justified in doing so because their definition is by no means in wide use, and because there are other ways to say what they want to say in the comparatively rare cases when their definition applies."