Theoretical computer science

From Wikipedia, the free encyclopedia

[Figure: Mind map for a sub-area of theoretical computer science]

Theoretical computer science is concerned with the abstraction, modeling, and fundamental questions related to the structure, processing, transmission, and reproduction of information. Its subject areas include automata theory, the theory of formal languages, computability theory, and complexity theory, but also logic and formal semantics as well as information theory, algorithm theory, and database theory.

Theoretical computer science has been classified, by proponents of this scientific category, among the structural sciences. It provides the basis for the definition, verification, and execution of programs in programming languages, for the construction of compilers (compiler construction), and for the mathematical formalization and investigation of mostly discrete problems and their models. Through mathematical abstraction of the properties of these models, useful definitions, propositions, proofs, algorithms, applications, and solutions to problems are obtained. With its timeless mathematical truths and methods, theoretical computer science forms a formal skeleton that pervades practical computer science with concrete implementations. Using computability theory, theoretical computer science has identified many unsolvable problems, and complexity theory, often with constructive proofs, allows the practically efficiently solvable problems to be delimited from those for which the opposite holds.

The constructive methods of theoretical computer science also include the design of formal systems, automata, graphs, and syntax diagrams, as well as the definition of grammars and semantics, in order to capture a problem formally with mathematical expressions and to set it apart from the informal level. These constructs describe the inner logic of a problem with mathematical-logical statements, which in turn permits formal investigation and can yield new statements and algorithms about the formal models, supported by proofs, as results. Beyond the mathematical gain in knowledge, some of the solutions found can be implemented in practice, offering people the automated benefits of mathematics and computers through machine semantics.

History of theoretical computer science

Theoretical computer science is closely related to mathematics and logic. In the 20th century it emancipated itself from them and became established as an independent discipline.

The discipline's pioneers were Kurt Gödel , Alonzo Church , Alan Turing , Claude Shannon , John von Neumann , Hans Hermes and Noam Chomsky .

Automata Theory and Formal Languages

Automata theory defines and formalizes automata and computing machines and deals with their properties and computational power. Among other things, it examines which problems can be solved by the different classes of computing machines.

The theory of formal languages considers formal grammars and the formal languages generated by these grammars. It deals with syntactic and semantic features of these formal languages over an alphabet. The problem of whether a word belongs to a formal language is solved by automata; as a result, there is a close connection between the grammars that generate formal languages and the automata that recognize them.
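The word problem mentioned above can be made concrete with a small sketch: a deterministic finite automaton deciding membership in a regular language. The language (binary strings with an even number of 1s), the state names, and the table encoding are all illustrative choices, not taken from the text.

```python
# A deterministic finite automaton (DFA) deciding the word problem for a
# regular language: binary strings containing an even number of 1s.
# States, alphabet, and transition table are illustrative choices.

DFA = {
    "start": "even",
    "accept": {"even"},
    "delta": {
        ("even", "0"): "even",
        ("even", "1"): "odd",
        ("odd", "0"): "odd",
        ("odd", "1"): "even",
    },
}

def accepts(dfa, word):
    """Run the automaton over the word and test the final state."""
    state = dfa["start"]
    for symbol in word:
        state = dfa["delta"][(state, symbol)]
    return state in dfa["accept"]

print(accepts(DFA, "1010"))  # True  (two 1s)
print(accepts(DFA, "111"))   # False (three 1s)
```

The same table-driven pattern extends to any regular language: only the states and transitions change, not the driver loop.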

Chomsky hierarchy

Most formal languages that appear in practice, such as programming languages, have a simple structure and can be assigned to one of the known language classes of the Chomsky hierarchy according to their complexity. The Chomsky hierarchy, named after Noam Chomsky, a pioneer of language theory, consists of four classes. These are, in ascending order of expressive power, the regular languages (type 3), the context-free languages (type 2), the context-sensitive languages (type 1), and the recursively enumerable languages (type 0).

The language classes of the Chomsky hierarchy are related as follows:
Type 3 ⊂ Type 2 ⊂ Type 1 ⊂ Type 0, here represented by the inclusions
L_R ⊂ L_CF ⊂ L_CS ⊂ L_RE.
Regular languages are recognized by finite automata,
context-free languages by (nondeterministic) pushdown automata,
context-sensitive languages by linear bounded Turing machines, and
recursively enumerable languages by general Turing machines.

In terms of the classes of languages they generate and recognize, the four grammar classes and the four machine classes of the Chomsky hierarchy are equivalent: the formal languages generated by the respective grammar classes can, as listed above, be recognized by the corresponding machine classes, and vice versa.
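For the type-2 level, this correspondence can be exercised directly: the CYK algorithm decides membership for any context-free grammar in Chomsky normal form, doing the job of a pushdown automaton. The grammar below, for the language { aⁿbⁿ | n ≥ 1 }, and its nonterminal names are an illustrative choice.

```python
# CYK membership test for a context-free grammar in Chomsky normal form.
# Grammar (illustrative) for L = { a^n b^n | n >= 1 }:
#   S -> A X | A B,  X -> S B,  A -> a,  B -> b
TERMINALS = {"A": "a", "B": "b"}
BINARY = {"S": [("A", "X"), ("A", "B")], "X": [("S", "B")]}

def cyk(word):
    n = len(word)
    if n == 0:
        return False
    # table[i][length] = nonterminals deriving word[i : i + length]
    table = [[set() for _ in range(n + 1)] for _ in range(n)]
    for i, ch in enumerate(word):                 # length-1 substrings
        for nt, t in TERMINALS.items():
            if t == ch:
                table[i][1].add(nt)
    for length in range(2, n + 1):                # longer substrings
        for i in range(n - length + 1):
            for split in range(1, length):
                for lhs, rhss in BINARY.items():
                    for x, y in rhss:
                        if x in table[i][split] and y in table[i + split][length - split]:
                            table[i][length].add(lhs)
    return "S" in table[0][n]

print(cyk("aabb"))  # True
print(cyk("aab"))   # False
```

The cubic-time table construction is the standard dynamic-programming formulation; only the two rule dictionaries would change for a different grammar.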

Pumping and Jaffe lemmas

Well-known practical aids for characterizing regular and context-free languages are the pumping lemmas, which supply a necessary, but not sufficient, condition for a language generated by a grammar to be regular or context-free. Owing to the structure of their statements, the pumping lemma for regular languages is also called the uvw theorem and the pumping lemma for context-free languages the uvwxy theorem. In contrast to the pumping lemmas, extensions such as Jaffe's lemma provide a sufficient criterion.
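The positive direction of the uvw theorem can be observed mechanically: for a word in a regular language, some decomposition u·v·w exists such that u·vⁱ·w stays in the language for every i ≥ 0. The language (ab)+ and the chosen decomposition below are illustrative assumptions.

```python
import re

# Observing the uvw (pumping) theorem on the regular language L = (ab)+.
# For the accepted word "ababab" we pick u = "", v = "ab", w = "abab";
# the theorem guarantees some such decomposition with a pumpable middle part.
L = re.compile(r"(ab)+")
u, v, w = "", "ab", "abab"

pumped_words = [u + v * i + w for i in range(5)]   # i = 0, 1, 2, 3, 4
print(all(L.fullmatch(word) for word in pumped_words))  # True
```

The lemma only says such a decomposition exists; it is the contrapositive (no decomposition pumps) that is used to prove languages like { aⁿbⁿ } non-regular.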

Description of type 2 grammars

The Backus-Naur form (after John W. Backus and Peter Naur), or BNF, is a notational convention for context-free grammars and thus for context-free languages. BNF is used in practice, for example, to define the syntax of programming languages. The syntax of the programming languages Pascal and Modula-2 was defined in the extended Backus-Naur form (EBNF), which differs from BNF only in a few notational extensions.
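As a sketch of the notation, here is a small EBNF grammar for arithmetic expressions; the rule names and the exact bracketing conventions are an illustrative choice, not a fragment of any particular language standard.

```ebnf
(* Illustrative EBNF: arithmetic expressions with precedence and parentheses.
   Curly braces { } denote repetition, the EBNF extension over plain BNF. *)
expression = term , { ( "+" | "-" ) , term } ;
term       = factor , { ( "*" | "/" ) , factor } ;
factor     = number | "(" , expression , ")" ;
number     = digit , { digit } ;
digit      = "0" | "1" | "2" | "3" | "4" | "5" | "6" | "7" | "8" | "9" ;
```

The layering of expression, term, and factor encodes operator precedence directly in the grammar, a pattern used throughout programming-language definitions.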

Computability theory

Computability theory examines the algorithmic solvability of mathematical problems, i.e. their computability. In particular, it analyzes the internal structure of problems and classifies them according to different degrees of solvability or their unsolvability.

Intuitive and formal computability and Church's thesis

Starting from intuitive computability, the informal notion of those problems for which solutions can be imagined and formulated, computability theory developed a formal mathematical definition of computability with which mathematical proofs can be given that allow statements about computability to be verified or falsified. Attempts to formalize the concept of computability led to Church's thesis, which claims that the notion of mathematical computability was captured by the Turing machine and equally powerful formal models of computation. The actual insights and statements of computability theory rest on the foundation of these mathematical models of computability and Church's thesis.

Incompleteness Theorem, Halting Problem, and Rice's Theorem

With the methods of computability theory, for example, Kurt Gödel's incompleteness theorem can be formulated and proven. Another result of computability theory is the realization that the halting problem is undecidable: one cannot find an algorithm that examines arbitrary programs and decides whether they halt on a given input or not. By Rice's theorem, every non-trivial semantic property of programs in a sufficiently powerful programming language is likewise undecidable.
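The diagonalization behind the halting problem's undecidability can be sketched in code: given any alleged halting decider, one builds a program that does the opposite of what the decider predicts about it. The function names here are illustrative; only the "always answers no" case is actually run, since the other case loops forever.

```python
# Diagonalization sketch for the undecidability of the halting problem.
# `candidate` stands for any alleged total decider halts(program) -> bool;
# the construction below is wrong about itself for every such candidate.

def make_paradox(candidate):
    def paradox():
        if candidate(paradox):   # candidate claims paradox halts ...
            while True:          # ... so paradox deliberately loops forever.
                pass
        # Otherwise paradox returns immediately. Either way the claim fails.
    return paradox

# A candidate that always answers "does not halt" is refuted by running:
always_no = lambda program: False
p = make_paradox(always_no)
p()   # returns at once, so always_no was wrong about p
# A candidate answering "halts" would make p loop forever -- also wrong.
```

No single run proves the theorem, but the construction shows why every proposed decider has a counterexample: the program built from the decider itself.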

Complexity theory

Complexity theory examines the extent to which resources (such as CPU time and memory) must be expended to solve certain problems algorithmically. As a rule, problems are divided into complexity classes. The best known of these classes are P and NP. P is the class of efficiently solvable problems (more precisely: the class of problems that can be decided by a deterministic Turing machine in polynomial time), and NP is the class of problems whose solutions can be checked efficiently (or equivalently: the class of problems that can be decided by a nondeterministic Turing machine in polynomial time).

By specifying an algorithm that solves a problem, one obtains an upper bound for the resource requirements mentioned above. The search for lower bounds, on the other hand, turns out to be far more difficult: it must be shown that no algorithm using only a certain amount of resources can solve the problem.

A (if not the) central question of complexity theory, open for decades, is whether the classes P and NP coincide; solving a single NP-complete problem in deterministic polynomial time would suffice as a proof that they do. Equivalently, one can try to solve an NP-complete problem efficiently on any Turing-equivalent computer (see also Church's thesis).

The study of parameterized algorithms is a relatively new branch of theoretical computer science that investigates in detail which instances of NP-complete problems can be solved efficiently.

Formal semantics

Formal semantics deals with the meaning of programs described in a formal language. In mathematical terms, a semantic function is constructed that maps a given program to the function it computes.

Written sem : P → (Σ → Σ), sem stands for the semantic function, P for the set of syntactically correct programs, sem(p) for the function computed by the program p, and Σ for the set of possible memory states.

Depending on the mathematical approach, a distinction is made between operational semantics, denotational semantics, and axiomatic semantics.
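A semantic function in the denotational style can be sketched directly: each program construct is mapped, by structural recursion, to a function from states to states. The AST encoding as nested tuples and the construct names are illustrative assumptions.

```python
# A denotational-style semantic function sem : programs -> (state -> state)
# for a toy imperative language. States are dicts mapping variables to values.

def sem(program):
    """Map a program (nested tuples) to a state-transforming function."""
    kind = program[0]
    if kind == "skip":
        return lambda state: state
    if kind == "assign":                     # ("assign", var, expr_fn)
        _, var, f = program
        return lambda state: {**state, var: f(state)}
    if kind == "seq":                        # ("seq", p1, p2)
        _, p1, p2 = program
        s1, s2 = sem(p1), sem(p2)
        return lambda state: s2(s1(state))   # composition of meanings
    if kind == "if":                         # ("if", cond_fn, p1, p2)
        _, cond, p1, p2 = program
        s1, s2 = sem(p1), sem(p2)
        return lambda state: s1(state) if cond(state) else s2(state)
    if kind == "while":                      # ("while", cond_fn, body)
        _, cond, body = program
        s_body = sem(body)
        def loop(state):                     # fixed-point meaning, unrolled
            while cond(state):
                state = s_body(state)
            return state
        return loop
    raise ValueError(f"unknown construct: {kind}")

# sem[[ x := 0; while x < 3 do x := x + 1 ]] applied to the empty state:
prog = ("seq",
        ("assign", "x", lambda s: 0),
        ("while", lambda s: s["x"] < 3,
                  ("assign", "x", lambda s: s["x"] + 1)))
print(sem(prog)({}))  # {'x': 3}
```

Non-terminating programs make `sem(p)` a partial function, which is why full denotational semantics works over domains with a fixed-point construction rather than a plain Python loop.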

Information theory

The subject of information theory is the mathematical description of information. The information content of a message is characterized by its entropy, which makes it possible to determine the transmission capacity of an information channel; this explains the relationship to coding theory. Methods of information theory are also used in cryptology; for example, the one-time pad is an encryption method that is secure in the information-theoretic sense.
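Both notions mentioned above fit in a short sketch: Shannon entropy as bits per symbol, and the one-time pad as XOR with a pad as long as the message. The sample message and the fixed pad bytes are illustrative; a real one-time pad requires a truly random, never-reused key.

```python
import math
from collections import Counter

def entropy(message):
    """Shannon entropy in bits per symbol of a finite message."""
    counts = Counter(message)
    n = len(message)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def one_time_pad(data, key):
    """XOR each byte with the pad; the same call decrypts."""
    assert len(key) >= len(data)   # the pad must cover the whole message
    return bytes(d ^ k for d, k in zip(data, key))

print(round(entropy("aabb"), 2))        # 1.0 -- two equally likely symbols
msg = b"attack at dawn"
key = bytes([7] * len(msg))             # fixed bytes only for reproducibility
cipher = one_time_pad(msg, key)
print(one_time_pad(cipher, key) == msg) # True: XOR with the same pad inverts
```

With a uniformly random pad, every plaintext of the same length is equally consistent with a given ciphertext, which is the content of the information-theoretic security claim.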

Logic

Mathematical logic is used in many ways in theoretical computer science; conversely, computer science has also provided impulses for mathematical logic. Propositional logic and Boolean algebra are used, for example, to describe circuits, drawing on basic results of logic such as Craig interpolation. Basic concepts of the theory of programming can naturally be expressed in logic as well, beyond the semantics mentioned above, especially in the theory of logic programming. In the area of formal specification, various logics, including predicate logic, temporal logic, modal logic, and dynamic logic, are used to describe the intended behavior of software and hardware systems, which can then be verified by means of model checking or theorem provers. The logics used in artificial intelligence, e.g. modal logics representing the knowledge of an agent, are also the subject of theoretical investigation. Combinatory logic is used in the theory of functional programming.
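The use of propositional logic for verification can be shown in miniature: a formula is valid if it holds under every truth assignment, which for finitely many variables can be checked exhaustively, a tiny model check. The formula (one of De Morgan's laws) and the encoding of formulas as Python functions are illustrative choices.

```python
from itertools import product

# Miniature model checking: a propositional formula is valid iff it holds
# under all 2^n truth assignments to its variables.

def valid(formula, variables):
    """True iff the formula holds under every assignment."""
    return all(formula(dict(zip(variables, values)))
               for values in product([False, True], repeat=len(variables)))

# One of De Morgan's laws:  not (p and q)  <->  (not p) or (not q)
de_morgan = lambda v: (not (v["p"] and v["q"])) == ((not v["p"]) or (not v["q"]))
print(valid(de_morgan, ["p", "q"]))   # True

# A merely satisfiable (not valid) formula fails the check:
print(valid(lambda v: v["p"], ["p"]))  # False
```

Industrial model checkers work on transition systems and temporal-logic properties rather than single formulas, but the principle of exhausting a finite state space is the same.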

Literature

  • Alexander Asteroth , Christel Baier: Theoretical Computer Science. An introduction to predictability, complexity, and formal languages. Pearson, Munich 2003, ISBN 3-8273-7033-7
  • Katrin Erk, Lutz Priese: Theoretical Computer Science. A comprehensive introduction. 3rd edition, Springer, Berlin 2008, ISBN 3-540-76319-8
  • Bernhard Heinemann, Klaus Weihrauch: Logic for computer scientists. An introduction. Teubner, Stuttgart 2000, ISBN 3-519-12248-0
  • John E. Hopcroft , Rajeev Motwani , Jeffrey Ullman : Introduction to Automata Theory, Formal Languages, and Complexity Theory . 2., revised. Edition. Pearson Studium, Munich 2002, ISBN 3-8273-7020-5 (Original title: Introduction to automata theory, languages, and computation . Translated by Sigrid Richter, Ingrid Tokar).
  • John Kelly: Logic in Plain Text. Pearson, Munich 2003, ISBN 3-8273-7070-1
  • Uwe Schöning : Theoretical Computer Science - in a nutshell. Spektrum Akademischer Verlag, Heidelberg 2003, ISBN 3-8274-1099-1
  • Christian Wagenknecht: Formal languages, abstract automata and compilers: textbook and workbook for basic studies and advanced training . Vieweg + Teubner, 2009, ISBN 3-8348-0624-2
  • Ingo Wegener : Theoretical Computer Science. An algorithmic introduction. Teubner, Stuttgart 1999, ISBN 3-519-12123-9
  • Klaus W. Wagner: Introduction to Theoretical Computer Science. Basics and models. Springer, Berlin 1997, ISBN 3-540-58139-1
  • Klaus Weihrauch : Computability. Springer, Berlin 1987, ISBN 3-540-13721-1
  • Renate Winter: Theoretical Computer Science. Basics with exercises and solutions. Oldenbourg, Munich 2002, ISBN 3-486-25808-7
  • Rolf Socher : Theoretical basics of computer science. Hanser Verlag, Munich 2005, ISBN 3-446-22987-6
  • George M. Reed, AW Roscoe, RF Wachter: Topology and Category Theory in Computer Science. Oxford University Press 1991, ISBN 0-19-853760-3
  • Klaus Keimel: Domains and Processes. Springer 2001, ISBN 0-7923-7143-7
  • Dirk W. Hoffmann : Theoretical Computer Science . 1st edition. Hanser reference book, Munich 2007, ISBN 978-3-446-41511-9 .

References

  1. Uwe Schöning: Theoretical Computer Science - in a nutshell. Spektrum Akademischer Verlag, Heidelberg 2001, ISBN 3-8274-1099-1, pp. 39, 57.
  2. Uwe Schöning: Theoretical Computer Science - in a nutshell. Spektrum Akademischer Verlag, Heidelberg 2001, ISBN 3-8274-1099-1, pp. 149 ff.
  3. Uwe Schöning: Theoretical Computer Science - in a nutshell. Spektrum Akademischer Verlag, Heidelberg 2001, ISBN 3-8274-1099-1, pp. 127 ff.
  4. Uwe Schöning: Theoretical Computer Science - in a nutshell. Spektrum Akademischer Verlag, Heidelberg 2001, ISBN 3-8274-1099-1, pp. 130 ff.