# First-order predicate logic

First-order logic is a branch of mathematical logic. It deals with the structure of certain mathematical expressions and with the logical inference that leads from such expressions to others. Both the language and the inference can be defined purely syntactically, that is, without reference to mathematical meanings. The interplay of purely syntactic considerations on the one hand and semantic considerations on the other leads to findings that are important for all of mathematics, because mathematics can be formulated in first-order predicate logic using Zermelo-Fraenkel set theory. In contrast to propositional logic, predicate logic makes use of quantifiers.

## A motivating example

The definitions below will be motivated by the example of the theory of ordered Abelian groups. An ordered Abelian group consists first of an Abelian group $(G, +)$, that is, one has the following properties:

• Associative law: For all $x, y, z \in G$: $(x + y) + z = x + (y + z)$.
• Commutative law: For all $x, y \in G$: $x + y = y + x$.
• Neutral element 0: For all $x \in G$: $x + 0 = x$.
• Inverse element: For every $x \in G$ there is $-x \in G$ such that $x + (-x) = 0$.

In mathematical shorthand these axioms can be rendered as

$$\forall x y z: \, (x+y)+z = x+(y+z), \quad \forall x y: \, x+y = y+x, \quad \forall x: \, x+0 = x, \quad \forall x: \, x+(-x) = 0.$$

We write $\forall x$ in place of the frequently used $\forall x \in G$, because we do not want to say anything about the elements of the group anyway. Furthermore, we have a relation $\leq$ for the order on the group, which must satisfy the following axioms, given here in shorthand:

• Reflexivity: $\forall x: x \leq x$
• Transitivity: $\forall x y z: ((x \leq y \land y \leq z) \rightarrow x \leq z)$
• Group compatibility: $\forall x y z: (x \leq y \rightarrow x + z \leq y + z)$

Examples of ordered Abelian groups are $(\mathbb{R}, +, -, \leq)$ or $(\mathbb{Z}, +, -, \leq)$.
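
The axioms above lend themselves to a quick machine check. The following sketch spot-checks them for $(\mathbb{Z}, +, -, \leq)$ on a small sample of integers; the sample range is an arbitrary choice for illustration and, of course, no substitute for a proof.

```python
# Spot-check of the ordered-abelian-group axioms for (Z, +, -, <=)
# on a small finite sample of integers.
import itertools

sample = range(-3, 4)

# Group axioms (checked on the sample only, not proved):
assert all((x + y) + z == x + (y + z)
           for x, y, z in itertools.product(sample, repeat=3))   # associativity
assert all(x + y == y + x
           for x, y in itertools.product(sample, repeat=2))      # commutativity
assert all(x + 0 == x for x in sample)                           # neutral element
assert all(x + (-x) == 0 for x in sample)                        # inverse element

# Order axioms:
assert all(x <= x for x in sample)                               # reflexivity
assert all(not (x <= y and y <= z) or x <= z
           for x, y, z in itertools.product(sample, repeat=3))   # transitivity
assert all(not (x <= y) or x + z <= y + z
           for x, y, z in itertools.product(sample, repeat=3))   # compatibility
```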

Overall, we have used the so-called logical symbols $\forall, \exists, \land, \lor, \rightarrow, \leftrightarrow, \neg$, brackets as auxiliary symbols, as well as the equality sign and variables for the elements. The symbols characteristic of the theory of ordered Abelian groups are the constant 0, the functions + and − as well as the relation $\leq$, where the notations usual in mathematics were used, that is, $x + y$ instead of $+(x, y)$ and $x \leq y$ instead of $(x, y) \in {\leq}$. The two functions have different arities: + is binary, the formation of the inverse − is unary, and the order relation under consideration is binary. This describes the symbols of a very specific language in which the theory of ordered Abelian groups can be formulated. Some of the character strings formed from these symbols are "reasonable", i.e. formed according to certain rules, and some of these also express "true statements". This is generalized in the following, in particular the difference between the "reasonable" character strings formed according to rules and a possible "truth content" of such strings, as well as the resulting consequences, which are important for all of mathematics.

## The language of first order predicate logic

We describe the language used here in a purely syntactic way, that is, we specify the character strings under consideration, which we shall call the expressions of the language, without reference to their meaning.

### Symbols

A first-order language is built from the following symbols:

• logical symbols:
• Quantifiers: $\forall, \exists$
• Connectives: $\land, \lor, \rightarrow, \leftrightarrow, \neg$
• Technical symbols: $(, ), \equiv$
• Variable symbols: $v_0, v_1, v_2, \ldots$
• a (possibly empty) set $\mathcal{C}$ of constant symbols,
• a (possibly empty) set $\mathcal{F}$ of function symbols,
• a (possibly empty) set $\mathcal{R}$ of relation symbols.

### Terms

The character strings built according to the following rules are called terms :

• If $v$ is a variable symbol, then $v$ is a term.
• If $c$ is a constant symbol, then $c$ is a term.
• If $f$ is a unary function symbol and $t_1$ is a term, then $f t_1$ is a term.
• If $f$ is a binary function symbol and $t_1, t_2$ are terms, then $f t_1 t_2$ is a term.
• If $f$ is a ternary function symbol and $t_1, t_2, t_3$ are terms, then $f t_1 t_2 t_3$ is a term.
• and so on for 4-, 5-, 6-, …-ary function symbols.
Remarks
• If, for example, $c$ is a constant and $f$ and $g$ are unary and binary function symbols respectively, then $f g v_2 f c$ is a term, since it can be built by applying the above rules: $c$ is a term, hence so is $f c$; $f c$ and $v_2$ are terms, hence so is $g v_2 f c$, and ultimately so is $f g v_2 f c$.
• We do not use brackets and commas as separators, that is, we write $f g v_2 f c$ and not $f(g(v_2, f(c)))$. We thus implicitly assume that our symbols are designed so that terms remain uniquely readable. See Polish notation. Constants are sometimes treated as nullary functions, which is particularly natural in this notation.
• The rules for the function symbols are often summarized as follows:
o   If $f$ is an $n$-ary function symbol and $t_1, \ldots, t_n$ are terms, then $f t_1 \ldots t_n$ is a term.
This means nothing other than the infinite sequence of rules indicated above, because the three dots $\ldots$ do not belong to the agreed symbols. Nevertheless, this notation is commonly used.
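
The unique readability of bracket-free prefix notation can be illustrated in code. The following sketch reads a term in Polish notation given the arity of each symbol; the symbol names and arities are assumptions matching the remark above, not part of the general definition.

```python
# Parsing a term in Polish (prefix) notation, given each symbol's arity.
# Variables and constants are treated as arity-0 symbols here.
arity = {"f": 1, "g": 2, "c": 0, "v2": 0}

def parse_term(symbols, pos=0):
    """Parse one term starting at symbols[pos]; return (term, next_pos)."""
    head = symbols[pos]
    pos += 1
    args = []
    for _ in range(arity[head]):          # read exactly arity-many subterms
        sub, pos = parse_term(symbols, pos)
        args.append(sub)
    return (head, *args) if args else head, pos

# The term f g v2 f c from the remark parses uniquely, with no brackets:
term, end = parse_term(["f", "g", "v2", "f", "c"])
print(term)  # ('f', ('g', 'v2', ('f', 'c')))
```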

Further properties can be defined via the structure of the terms. First, the set $\mathrm{var}(t)$ of variables occurring in a term $t$ is defined recursively by the following three rules:

• If $v$ is a variable symbol, then let $\mathrm{var}(v) = \{v\}$.
• If $c$ is a constant symbol, then let $\mathrm{var}(c) = \emptyset$.
• If $f$ is an $n$-ary function symbol and $t_1, \ldots, t_n$ are terms, then let $\mathrm{var}(f t_1 \ldots t_n) = \mathrm{var}(t_1) \cup \ldots \cup \mathrm{var}(t_n)$.
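The three rules translate directly into a recursive function. In the sketch below the encoding is an assumption for illustration: variables are strings starting with "v" ($v_0, v_1, \ldots$), constants are other strings, and a compound term is a tuple whose first entry is the function symbol.

```python
# The recursive definition of var(t) over tuple-encoded terms.

def is_variable(t):
    return isinstance(t, str) and t.startswith("v")

def var(t):
    """Set of variables occurring in term t, following the three rules."""
    if is_variable(t):
        return {t}                                 # var(v) = {v}
    if isinstance(t, str):
        return set()                               # var(c) = {} for a constant
    # var(f t1 ... tn) = var(t1) ∪ ... ∪ var(tn)
    return set().union(*(var(s) for s in t[1:]))

# The example term f g v2 f c, in tuple form:
print(var(("f", ("g", "v2", ("f", "c")))))  # {'v2'}
```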

### Expressions

We now specify, by means of formation rules, which character strings we want to regard as expressions of the language.

Atomic expressions

• If $t_1$ and $t_2$ are terms, then $t_1 \equiv t_2$ is an expression.
• If $R$ is a unary relation symbol and $t_1$ is a term, then $R t_1$ is an expression.
• If $R$ is a binary relation symbol and $t_1, t_2$ are terms, then $R t_1 t_2$ is an expression.
• and so on for 3-, 4-, 5-, …-ary relation symbols.

The remarks made above regarding the notation of terms apply accordingly.

Compound expressions

We describe here how further expressions can be obtained.

• If $\varphi$ is an expression, then $\neg\varphi$ is also an expression.
• If $\varphi$ and $\psi$ are expressions, then so are $(\varphi \land \psi)$, $(\varphi \lor \psi)$, $(\varphi \rightarrow \psi)$ and $(\varphi \leftrightarrow \psi)$.
• If $\varphi$ is an expression and $x$ a variable, then $\forall x\,\varphi$ and $\exists x\,\varphi$ are expressions.

With this, all expressions of our language are fixed. For example, if $f$ is a unary function symbol and $R$ a binary relation symbol, then

$$\forall v_0\, ((R v_0 v_1 \lor v_0 \equiv f v_1) \rightarrow \exists v_2\, \neg R v_0 v_2)$$

is an expression, because it can be built using the above rules. It should be pointed out once again that we create the expressions purely mechanically by means of the stated rules, without the expressions necessarily having to denote anything.

### First order

Different first-order languages differ only in the sets $\mathcal{C}$, $\mathcal{F}$ and $\mathcal{R}$, which are usually combined into the symbol set $S$, also called the signature of the language. One then speaks more precisely of $S$-terms and $S$-expressions. The language, that is, the totality of all expressions formed according to the above rules, is denoted by $L(S)$, $L^S$ or $L_I^S$. In the latter, the Roman $I$ stands for the first order. This refers to the fact that, according to the last formation rule, one can only quantify over (individual) variables. $L_I^S$ does not provide for quantifying over all subsets of a set, over relations, or over functions. Thus the usual Peano axioms cannot be expressed in $L_I^S$, since the induction axiom makes a statement about all subsets of the natural numbers. This can be seen as a weakness of this language, but the axioms of Zermelo-Fraenkel set theory can all be formulated in the first order with the single symbol $\in$, so that first-order logic is in principle sufficient for mathematics.

### Free variables

Further properties of expressions of the language $L_I^S$ can also be defined purely syntactically. Following the structure of expressions given by the formation rules above, the set $\mathrm{free}(\varphi)$ of variables occurring freely in the expression $\varphi$ is defined recursively as follows:

• $\mathrm{free}(t_1 \equiv t_2) = \mathrm{var}(t_1) \cup \mathrm{var}(t_2)$
• $\mathrm{free}(R t_1 \ldots t_n) = \mathrm{var}(t_1) \cup \ldots \cup \mathrm{var}(t_n)$
• $\mathrm{free}(\neg\varphi) = \mathrm{free}(\varphi)$
• $\mathrm{free}((\varphi \land \psi)) = \mathrm{free}(\varphi) \cup \mathrm{free}(\psi)$, and analogously for $\lor, \rightarrow, \leftrightarrow$
• $\mathrm{free}(\forall x\,\varphi) = \mathrm{free}(\varphi) \setminus \{x\}$
• $\mathrm{free}(\exists x\,\varphi) = \mathrm{free}(\varphi) \setminus \{x\}$

Variables that are not free are called bound variables. Expressions $\varphi$ without free variables, that is, those with $\mathrm{free}(\varphi) = \emptyset$, are called sentences. All the axioms of ordered Abelian groups given in the motivating example above are sentences when translated into the language $L_I^{\{0, +, -, \leq\}}$, for example $\forall v_0 \forall v_1\, {+}v_0 v_1 \equiv {+}v_1 v_0$ for the commutative law.
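
The recursive definition of $\mathrm{free}$ can likewise be sketched in code. The tuple encoding is an assumption for illustration: variables are strings "v0", "v1", …; atoms are ("eq", t1, t2) or (R, t1, …, tn); connectives are ("not", φ) and ("and"|"or"|"imp"|"iff", φ, ψ); quantifiers are ("forall"|"exists", x, φ).

```python
# The recursive definition of free(φ) over tuple-encoded expressions.

def is_variable(t):
    return isinstance(t, str) and t.startswith("v")

def var(t):  # variables of a term
    if is_variable(t):
        return {t}
    if isinstance(t, str):
        return set()
    return set().union(*(var(s) for s in t[1:]))

def free(phi):
    tag = phi[0]
    if tag in ("forall", "exists"):
        return free(phi[2]) - {phi[1]}        # free(∀xφ) = free(φ) \ {x}
    if tag == "not":
        return free(phi[1])                   # free(¬φ) = free(φ)
    if tag in ("and", "or", "imp", "iff"):
        return free(phi[1]) | free(phi[2])    # union of both sides
    # atomic: equality or a relation symbol applied to terms
    return set().union(*(var(t) for t in phi[1:]))

# The commutative law ∀v0 ∀v1 +v0v1 ≡ +v1v0 is a sentence: no free variables.
law = ("forall", "v0", ("forall", "v1",
       ("eq", ("+", "v0", "v1"), ("+", "v1", "v0"))))
print(free(law))  # set()
```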

### Metalinguistic expressions

The example just given as a symbolization of the commutative law in the language $L_I^{\{0, +, -, \leq\}}$ shows that the resulting expressions are often difficult to read. Hence the mathematician, and often the logician as well, likes to return to the classical notation $\forall x, y: x + y = y + x$. The latter, however, is not an expression of the language $L_I^{\{0, +, -, \leq\}}$, but only a communication of such an expression using other symbols of another language, here the so-called metalanguage, that is, the language in which one speaks about $L_I^{\{0, +, -, \leq\}}$. For the sake of readability, one may also leave out unnecessary brackets. This does not lead to problems as long as it remains clear that the more easily readable strings could be translated back at any time.

### Substitutions

In mathematics, variables are often replaced by terms. This, too, can be explained here purely syntactically on the basis of our symbols. The following rules define what it is to mean to substitute the term $t$ for a variable $x$. We again follow the recursive structure of terms and expressions. The substitution is denoted by $[\,]\frac{t}{x}$, where the square brackets may be omitted.

For terms $s$, the substitution $s\frac{t}{x}$ is defined as follows:

• If $v$ is a variable symbol, then $v\frac{t}{x}$ equals $t$ if $v = x$, and $v$ otherwise.
• If $c$ is a constant symbol, then $c\frac{t}{x} := c$.
• If $f$ is an $n$-ary function symbol and $t_1, \ldots, t_n$ are terms, then $[f t_1 \ldots t_n]\frac{t}{x} := f\, t_1\frac{t}{x} \ldots t_n\frac{t}{x}$.

For expressions, we write square brackets around the expression in which the substitution is to be made. We determine:

• $[t_1 \equiv t_2]\frac{t}{x} := t_1\frac{t}{x} \equiv t_2\frac{t}{x}$
• $[R t_1 \ldots t_n]\frac{t}{x} := R\, t_1\frac{t}{x} \ldots t_n\frac{t}{x}$
• $[\neg\varphi]\frac{t}{x} := \neg[\varphi]\frac{t}{x}$
• $[(\varphi \lor \psi)]\frac{t}{x} := ([\varphi]\frac{t}{x} \lor [\psi]\frac{t}{x})$, and analogously for $\land, \rightarrow, \leftrightarrow$
• $[\exists x\,\varphi]\frac{t}{x} := \exists x\,\varphi$; analogously for the quantifier $\forall$
• $[\exists y\,\varphi]\frac{t}{x} := \exists y\,[\varphi]\frac{t}{x}$ if $x \neq y$ and $y \notin \mathrm{var}(t)$; analogously for the quantifier $\forall$
• $[\exists y\,\varphi]\frac{t}{x} := \exists u\,[\varphi]\frac{u}{y}\frac{t}{x}$ if $x \neq y$ and $y \in \mathrm{var}(t)$, where $u$ is a variable that appears neither in $\varphi$ nor in $t$, for example the first of the variables $v_0, v_1, v_2, \ldots$ that fulfills this condition. The analogous determination is made for $\forall$.

This definition is designed to ensure that variables do not inadvertently fall within the scope of a quantifier. If the bound variable occurs in the term, it is renamed beforehand in order to avoid a variable collision.
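
The substitution rules, including the renaming step, can be sketched as follows. The tuple encoding of terms and expressions is the same assumed one as in the earlier sketches (variables "v0", "v1", …; quantifiers as ("forall"|"exists", y, φ)).

```python
# Capture-avoiding substitution [φ] t/x, following the rules above.
import itertools

def is_variable(t):
    return isinstance(t, str) and t.startswith("v")

def var(t):  # variables of a term
    if is_variable(t):
        return {t}
    if isinstance(t, str):
        return set()
    return set().union(*(var(s) for s in t[1:]))

def all_vars(phi):
    """All variables appearing anywhere in φ (helper for picking a fresh one)."""
    tag = phi[0]
    if tag in ("forall", "exists"):
        return {phi[1]} | all_vars(phi[2])
    if tag == "not":
        return all_vars(phi[1])
    if tag in ("and", "or", "imp", "iff"):
        return all_vars(phi[1]) | all_vars(phi[2])
    return set().union(*(var(a) for a in phi[1:]))

def subst_term(s, t, x):
    if is_variable(s):
        return t if s == x else s             # v t/x = t if v = x, else v
    if isinstance(s, str):
        return s                              # c t/x = c
    return (s[0], *(subst_term(a, t, x) for a in s[1:]))

def subst(phi, t, x):
    tag = phi[0]
    if tag in ("forall", "exists"):
        y, body = phi[1], phi[2]
        if y == x:
            return phi                        # x is bound: nothing to do
        if y not in var(t):
            return (tag, y, subst(body, t, x))
        # y occurs in t: rename the bound variable to the first fresh v_i
        used = var(t) | all_vars(phi)
        u = next(f"v{i}" for i in itertools.count() if f"v{i}" not in used)
        return (tag, u, subst(subst(body, u, y), t, x))
    if tag == "not":
        return ("not", subst(phi[1], t, x))
    if tag in ("and", "or", "imp", "iff"):
        return (tag, subst(phi[1], t, x), subst(phi[2], t, x))
    return (tag, *(subst_term(a, t, x) for a in phi[1:]))  # atomic

# Substituting v1 for v0 in ∃v1 R v0 v1 must rename the bound v1 first:
phi = ("exists", "v1", ("R", "v0", "v1"))
print(subst(phi, "v1", "v0"))  # ('exists', 'v2', ('R', 'v1', 'v2'))
```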

### Final note on syntax

It should be emphasized again that everything said so far only describes the construction and manipulation of certain character strings; these are purely syntactic notions. Neither the character strings nor their manipulations are assigned any content. Of course, one involuntarily reads the "meanings" suggested by the example above, that is, one reads $\forall$ as "for all" and $\land$ as "and", and can only with difficulty set aside these colloquial meanings. However, one should be aware that such a "meaning" is not required for the concepts presented and is not used anywhere. It is an essential point that the intended meaning of these character strings, their semantics, is only added in a step presented below.

## Semantics

We assume a language $L_I^S$ is given. The expressions formed in this language according to the above rules are now to be related to mathematical structures. In these structures one can then examine the expressions for their truth content, which is made precise in the following.

### Structures

A structure $\mathcal{A}$ over a signature $S$ is a non-empty set $A$ together with

• an element $c^{\mathcal{A}} \in A$ for each constant symbol $c \in S$,
• a function $f^{\mathcal{A}}: A^n \rightarrow A$ for each $n$-ary function symbol $f \in S$,
• a relation $R^{\mathcal{A}} \subset A^n$ for each $n$-ary relation symbol $R \in S$.

In the example of ordered Abelian groups given at the beginning, $(\mathbb{R}, 0, +, -, \leq)$ is a structure over the signature $S = \{0, +, -, \leq\}$. Structures thus associate the symbols of $S$ with "real" constants, functions and relations.

### Interpretations

An interpretation of $L_I^S$ is a pair $\mathcal{I} = (\mathcal{A}, \beta)$ consisting of an $S$-structure $\mathcal{A}$ and a map $\beta: \{v_i;\, i \in \mathbb{N}_0\} \rightarrow A$.

This reflects the idea that the structure is the mathematical object to be described by the language, while the variables are assigned values from the underlying set $A$ by the map $\beta$, which is therefore also called an assignment. The assignment of an interpretation can easily be extended to terms; this extension depends on the interpretation of the constant and function symbols and is therefore also denoted by $\mathcal{I}$; one defines:

• If $v$ is a variable, then let $\mathcal{I}(v) := \beta(v)$.
• If $c$ is a constant symbol, then let $\mathcal{I}(c) := c^{\mathcal{A}}$.
• If $f$ is an $n$-ary function symbol and $t_1, \ldots, t_n$ are terms, then let $\mathcal{I}(f t_1 \ldots t_n) := f^{\mathcal{A}}(\mathcal{I}(t_1), \ldots, \mathcal{I}(t_n))$.

If one sets, for example, $\beta(v_i) = i \in \mathbb{R}$, then $\mathcal{I} = ((\mathbb{R}, 0, +, -, \leq), \beta)$ is such an interpretation. Then $\mathcal{I}({+}v_1{-}v_7) = {+}^{\mathcal{A}}(\mathcal{I}(v_1), {-}^{\mathcal{A}}(\mathcal{I}(v_7))) = 1 + (-7) = -6$.
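
The extension of $\beta$ to terms is again a straightforward recursion. The sketch below evaluates terms in the structure $(\mathbb{R}, 0, +, -, \leq)$ under the assignment $\beta(v_i) = i$ from the example; the tuple encoding is the assumed one used throughout these sketches.

```python
# Evaluating terms under the interpretation I = ((R, 0, +, -, <=), beta),
# following the three extension rules.

structure = {
    "0": 0,                      # constant symbol -> element of A
    "+": lambda a, b: a + b,     # binary function symbol
    "-": lambda a: -a,           # unary function symbol
}

def beta(v):
    return int(v[1:])            # the example assignment beta(v_i) = i

def is_variable(t):
    return isinstance(t, str) and t.startswith("v")

def interpret(t):
    if is_variable(t):
        return beta(t)                                   # I(v) = beta(v)
    if isinstance(t, str):
        return structure[t]                              # I(c) = c^A
    f, *args = t
    return structure[f](*(interpret(a) for a in args))   # I(f t1...tn)

# I(+ v1 - v7) = 1 + (-7) = -6, as computed in the example above:
print(interpret(("+", "v1", ("-", "v7"))))  # -6
```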

If we change an assignment $\beta$ only at the variable $x$ and map $x$ to $a \in A$, we write $\beta\frac{a}{x}$ for the assignment changed in this way, and $\mathcal{I}\frac{a}{x} := (\mathcal{A}, \beta\frac{a}{x})$. The assignment of the variables is often clear from context or unimportant; then, somewhat loosely but conveniently, we also call the structure $\mathcal{A}$ an interpretation.

### Models

We say that an $S$-interpretation $\mathcal{I} = (\mathcal{A}, \beta)$ is a model for an expression $\varphi$, and write $\mathcal{I} \vDash \varphi$, if this results from the following rules:

$$\begin{matrix}
\mathcal{I} \vDash t_1 \equiv t_2 &:\Leftrightarrow& \mathcal{I}(t_1) = \mathcal{I}(t_2)\\
\mathcal{I} \vDash R t_1 \ldots t_n &:\Leftrightarrow& R^{\mathcal{A}}(\mathcal{I}(t_1), \ldots, \mathcal{I}(t_n))\\
\mathcal{I} \vDash \neg\varphi &:\Leftrightarrow& \text{not } \mathcal{I} \vDash \varphi\\
\mathcal{I} \vDash (\varphi \land \psi) &:\Leftrightarrow& \mathcal{I} \vDash \varphi \text{ and } \mathcal{I} \vDash \psi\\
\mathcal{I} \vDash (\varphi \lor \psi) &:\Leftrightarrow& \mathcal{I} \vDash \varphi \text{ or } \mathcal{I} \vDash \psi\\
\mathcal{I} \vDash (\varphi \rightarrow \psi) &:\Leftrightarrow& \text{if } \mathcal{I} \vDash \varphi \text{ then also } \mathcal{I} \vDash \psi\\
\mathcal{I} \vDash (\varphi \leftrightarrow \psi) &:\Leftrightarrow& \mathcal{I} \vDash \varphi \text{ if and only if } \mathcal{I} \vDash \psi\\
\mathcal{I} \vDash \forall x\,\varphi &:\Leftrightarrow& \mathcal{I}\tfrac{a}{x} \vDash \varphi \text{ for all } a \in A\\
\mathcal{I} \vDash \exists x\,\varphi &:\Leftrightarrow& \text{there is an } a \in A \text{ with } \mathcal{I}\tfrac{a}{x} \vDash \varphi
\end{matrix}$$

This definition follows the recursive structure of the expressions of the language $L_I^S$. The dots in the second rule stand for a list of rules, one for each arity.

The concept of interpretation gave the variables and the symbols from $S$ a meaning. Through the model relation just defined, the logical symbols are now also interpreted for the first time.

For a set $\Phi$ of expressions we write $\mathcal{I} \vDash \Phi$ if $\mathcal{I} \vDash \varphi$ holds for all $\varphi \in \Phi$, and we say $\mathcal{I}$ is a model of $\Phi$. If, for example, $\Phi$ denotes the above axioms of ordered Abelian groups, then $\mathcal{I} = (\mathcal{A}, \beta) \vDash \Phi$ holds if and only if $\mathcal{A}$ is an ordered Abelian group. The assignment $\beta$ appears to play no role here, since $\Phi$ consists only of sentences, i.e. contains no free variables. This is indeed the case, as the so-called coincidence lemma states. In such a case one may omit $\beta$ and simply write $\mathcal{A} \vDash \Phi$. This means that $\mathcal{I} = (\mathcal{A}, \beta)$ is a model of all expressions from $\Phi$ for every assignment $\beta$.
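
For a finite structure, the recursive clauses of the model relation can be evaluated directly. The sketch below checks $\mathcal{I} \vDash \varphi$ over the small example structure $(\{0, 1, 2\}, \leq)$ with one binary relation symbol $R$ interpreted as $\leq$; both the structure and the tuple encoding are assumptions for illustration.

```python
# The model relation I ⊨ φ, by recursion on the expression,
# for the finite structure ({0, 1, 2}, <=).

A = {0, 1, 2}                              # the underlying set
relations = {"R": lambda a, b: a <= b}     # R interpreted as <=

def is_variable(t):
    return isinstance(t, str) and t.startswith("v")

def term_value(t, beta):
    if is_variable(t):
        return beta[t]                     # I(v) = beta(v)
    raise ValueError("no constants or functions in this signature")

def models(phi, beta):
    """True iff (A, beta) ⊨ phi."""
    tag = phi[0]
    if tag == "eq":
        return term_value(phi[1], beta) == term_value(phi[2], beta)
    if tag in relations:
        return relations[tag](*(term_value(t, beta) for t in phi[1:]))
    if tag == "not":
        return not models(phi[1], beta)
    if tag == "and":
        return models(phi[1], beta) and models(phi[2], beta)
    if tag == "or":
        return models(phi[1], beta) or models(phi[2], beta)
    if tag == "imp":
        return (not models(phi[1], beta)) or models(phi[2], beta)
    if tag == "iff":
        return models(phi[1], beta) == models(phi[2], beta)
    if tag == "forall":                    # I a/x ⊨ φ for all a in A
        return all(models(phi[2], {**beta, phi[1]: a}) for a in A)
    if tag == "exists":                    # some a in A with I a/x ⊨ φ
        return any(models(phi[2], {**beta, phi[1]: a}) for a in A)
    raise ValueError(f"unknown tag {tag}")

# ∀v0 ∃v1 R v0 v1 holds (take v1 = v0); ∀v0 ∃v1 ¬R v0 v1 fails at v0 = 0,
# since 0 <= v1 for every v1 in A.
print(models(("forall", "v0", ("exists", "v1", ("R", "v0", "v1"))), {}))
print(models(("forall", "v0", ("exists", "v1", ("not", ("R", "v0", "v1")))), {}))
```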

### Equality

Regarding the use of equality, note that we introduced the symbol $\equiv$ into the first-order language. An expression of the form $\varphi = \psi$ is not an expression of the first-order language, but a metalinguistic assertion of the equality of the two expressions $\varphi$ and $\psi$. The latter cannot be symbolized in the first-order language; there, only terms can be equated. Parallel to the setup considered here, there is also first-order predicate logic without equality, for which one removes the symbol $\equiv$ and the formation rule relating to it. Equality can then be brought back into play via a relation symbol, but this symbol is subject to interpretation, so that one does not obtain the same thing as a logic with equality. Logical equality, by contrast, means equality of individuals under every interpretation, and that is why one considers logics with equality.

## Mathematical reasoning

### Inferences

Let $\Phi$ be a given set of expressions, for example the above axioms of ordered Abelian groups. The mathematician is interested in the conclusions that can be drawn from them. We say the expression $\varphi$ follows from $\Phi$, and write $\Phi \vDash \varphi$, if every model of $\Phi$ is also a model of $\varphi$. This is the so-called semantic inference, since it refers to all possible interpretations of the symbols.

### Sequent calculus

As a rule, the mathematician does not reason semantically, but rather applies certain inference rules with which he works his way from one statement to the next, up to the assertion. Starting from a given sequence of expressions $\Phi\,\varphi\,\psi\,\ldots$, he passes to new sequences in order, finally, to have "proved" with a sequence $\Phi\,\varphi$ that $\varphi$ follows from $\Phi$. The "proof" is a finite list of such sequences. Here some of these inference rules are presented, their background explained, and then compared with semantic inference. In $\Phi\,\varphi\,\psi\,\ldots$, $\Phi$ is called the antecedent and the following expressions the succedent.

Premise rule: $\Phi\,\varphi$ is a permitted sequence if $\varphi \in \Phi$. Behind this is the simple fact that one may invoke one of the premises from $\Phi$ at any time.

Antecedent rule: If one already has $\Phi\,\varphi$, one may pass to $\Phi'\,\varphi$ if $\Phi \subset \Phi'$. If one can infer $\varphi$ from $\Phi$, then one can do so all the more under stronger premises.

Case distinction: If one already has $\Phi\,\psi\,\varphi$ and $\Phi\,\neg\psi\,\varphi$, one may pass to $\Phi\,\varphi$. One can infer $\varphi$ from $\Phi$ in the case $\psi$, and also in the case $\neg\psi$. Therefore one can infer $\varphi$ from $\Phi$ in any case.

Contradiction: If one already has $\Phi\,\neg\varphi\,\psi$ and $\Phi\,\neg\varphi\,\neg\psi$, one may pass to $\Phi\,\varphi$. If, in the sense of a proof by contradiction, one assumes $\neg\varphi$, then the premises yield both $\psi$ and $\neg\psi$, hence a contradiction. Therefore the assumption $\neg\varphi$ was wrong and one can infer $\varphi$.

Or-introduction in the antecedent: If one already has $\Phi\,\varphi\,\chi$ and $\Phi\,\psi\,\chi$, one may pass to $\Phi\,(\varphi \lor \psi)\,\chi$. Under the premises $\Phi$, $\chi$ follows both from $\varphi$ and from $\psi$. Therefore $\chi$ already follows when $\varphi$ or $\psi$ holds.

Or-introduction in the succedent: If one already has $\Phi\,\varphi$, one may pass to $\Phi\,(\varphi \lor \psi)$. This is clear, since with $\varphi$, all the more $\varphi \lor \psi$ holds. Accordingly, one may also pass to $\Phi\,(\psi \lor \varphi)$.

Equality: One may write down the sequence $t \equiv t$ at any time, where $t$ is any term. This rule needs no explanation.

The following three inference rules use the above-defined substitution of variables by terms:

Substitution: If one already has $\Phi\,\varphi\frac{t}{x}$, one may pass to $\Phi\,t \equiv s\,\varphi\frac{s}{x}$. If one can infer $\varphi\frac{t}{x}$ from $\Phi$, that is, $\varphi$ with the substitution of $t$ in place of $x$, then one can also infer $\varphi$ with the substitution of $s$ in place of $x$, provided $t$ equals $s$.

Existence introduction in the antecedent: If one already has $\Phi\,\varphi\frac{y}{x}\,\psi$, one may pass to $\Phi\,\exists x\,\varphi\,\psi$, provided $y$ does not occur free in $\Phi$, $\exists x\,\varphi$ or $\psi$. In order to work with the existence premise $\exists x\,\varphi$, one may use a $y$ for which $\varphi\frac{y}{x}$ holds. In proofs using this rule, the existence premise is typically followed by: let $y$ be such a ...

Existence introduction in the succedent: If one already has $\Phi\,\varphi\frac{t}{x}$, one may pass to $\Phi\,\exists x\,\varphi$. This rule is also plausible, for if one has found an example $t$ for which $\varphi$ holds, then one can infer the existence statement and need no longer mention the example.

The rules presented here, which form the so-called sequent calculus, are logically sound, as was argued after each rule. Mathematicians use a number of further inference rules, but it can be shown that they can all be derived from the above, that is, their application can be replaced by a finite chain of the above rules. If one has arrived at $\Phi\,\varphi$ after finitely many applications of these rules, then $\varphi$ has been derived from $\Phi$ in a logically sound way; we write $\Phi \vdash \varphi$ for this.

In contrast to the semantic inference explained above, proofs $\Phi \vdash \varphi$ are of a purely syntactic nature; they can be regarded as manipulations of character strings of the language under consideration. To apply the inference rules, one does not need to know what the symbols mean.

### Completeness and correctness

If the interpretation $\mathcal{I}$ is a model for a set $\Phi$ of expressions of the language $L_I^S$ and if $\Phi \vdash \varphi$, then $\mathcal{I}$ is also a model for $\varphi$, because the proof accompanying $\Phi \vdash \varphi$ can easily be carried out directly in the model. This is the so-called correctness theorem (soundness): $\Phi \vdash \varphi$ always implies $\Phi \vDash \varphi$.

Conversely, it is quite conceivable that the models of a set $\Phi$ of expressions happen to share a property that can be expressed in the first-order language by some $\varphi$, without there being any possibility of deriving $\varphi$ from $\Phi$ by the above syntactic string operations. That this is not the case, i.e. that semantic and syntactic inference are equivalent, is known as Gödel's completeness theorem and is a central result of first-order predicate logic. It can be shown that for second-order predicate logic, in which one also allows quantification over relations, no calculus equivalent to semantic inference can be found.

## Properties

### Satisfiability theorem

A set ${\displaystyle \Phi }$ of expressions of the language ${\displaystyle L_{I}^{S}}$ is called consistent if no expression of the form ${\displaystyle (\varphi \land \neg \varphi )}$ can be derived from ${\displaystyle \Phi }$. Consistency is therefore a purely syntactic notion. The following satisfiability theorem, which can be derived from Henkin's theorem and is closely related to Gödel's completeness theorem, holds:

• Every consistent set ${\displaystyle \Phi }$ has a model.
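In miniature, model existence can be made concrete by brute force. The following sketch (a toy with an invented encoding, not part of the theory above) searches exhaustively for a two-element model of the commutativity and neutral-element axioms from the introductory example:

```python
# Brute-force "model search" in miniature: look for a binary operation on
# the domain {0, 1} that satisfies commutativity and has 0 as a neutral
# element. Each candidate operation is encoded as a dict from pairs to values.

from itertools import product

domain = (0, 1)

def find_model():
    """Return a satisfying operation, or None if no model of size 2 exists."""
    for values in product(domain, repeat=len(domain) ** 2):
        op = dict(zip(product(domain, repeat=2), values))
        commutative = all(op[x, y] == op[y, x] for x in domain for y in domain)
        neutral = all(op[x, 0] == x for x in domain)
        if commutative and neutral:
            return op
    return None

print(find_model() is not None)  # True: the axiom set is satisfiable
```

For this tiny axiom set a model exists (addition modulo 2 is one); in general, of course, model existence for first-order theories cannot be decided by finite search.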

### Compactness theorem

• Compactness theorem: If ${\displaystyle \Phi }$ is a set of expressions of the language ${\displaystyle L_{I}^{S}}$ and if every finite subset of ${\displaystyle \Phi }$ has a model, then ${\displaystyle \Phi }$ itself has a model.

If there were no model for ${\displaystyle \Phi }$, then by the satisfiability theorem ${\displaystyle \Phi }$ would not be consistent, and there would then be a derivation ${\displaystyle \Phi \vdash (\varphi \land \neg \varphi )}$. Since a proof has only finite length and can therefore involve only finitely many of the expressions from ${\displaystyle \Phi }$, there must already be a finite subset ${\displaystyle \Phi _{0}}$ with ${\displaystyle \Phi _{0}\vdash (\varphi \land \neg \varphi )}$. By the soundness theorem it follows that ${\displaystyle \Phi _{0}\vDash \varphi \land \neg \varphi }$, that is, there can be no model for ${\displaystyle \Phi _{0}}$, contradicting the assumption.
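The chain of steps in this contraposition argument, for some finite ${\displaystyle \Phi _{0}\subseteq \Phi }$, can be written compactly as:

```latex
% Contraposition chain for the compactness theorem:
\[
  \Phi \text{ has no model}
  \;\Rightarrow\; \Phi \vdash (\varphi \land \neg\varphi)
  \;\Rightarrow\; \Phi_0 \vdash (\varphi \land \neg\varphi)
  \;\Rightarrow\; \Phi_0 \vDash \varphi \land \neg\varphi
  \;\Rightarrow\; \Phi_0 \text{ has no model}
\]
```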

The finiteness theorem is also called the compactness theorem for the following reason. Choose a model ${\displaystyle {\mathcal {A}}_{\Phi }}$ for every consistent set of sentences ${\displaystyle \Phi }$ and combine the models found in this way into a set ${\displaystyle {\mathcal {X}}}$. For a sentence ${\displaystyle \varphi }$ let ${\displaystyle X_{\varphi }:=\{{\mathcal {A}}\in {\mathcal {X}};\,{\mathcal {A}}\vDash \varphi \}}$. Then the sets ${\displaystyle X_{\varphi }}$ form the basis of a topology on ${\displaystyle {\mathcal {X}}}$, and the finiteness theorem is equivalent to the compactness of this space.
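The intersection-closure that makes the sets ${\displaystyle X_{\varphi }}$ a basis can be checked in a propositional toy version (an illustration only; the "models" here are truth assignments over two variables rather than first-order structures):

```python
# Propositional toy version of the basic sets X_phi = {A : A |= phi}.
# The key basis property is intersection-closure: X_phi ∩ X_psi equals
# X_{phi and psi}, so finite intersections of basic sets are again basic.

from itertools import product

# all four truth assignments over the variables p and q
models = [dict(zip("pq", bits)) for bits in product([False, True], repeat=2)]

def X(phi):
    """Indices of all models satisfying phi (phi: predicate on a model)."""
    return frozenset(i for i, m in enumerate(models) if phi(m))

X_p = X(lambda m: m["p"])
X_q = X(lambda m: m["q"])
X_pq = X(lambda m: m["p"] and m["q"])

print(X_p & X_q == X_pq)  # True: the basic sets are intersection-closed
```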

### Isomorphisms

From the finiteness theorem it follows:

• If a set ${\displaystyle \Phi }$ of expressions of the language ${\displaystyle L_{I}^{S}}$ has an infinite model, then it has models of arbitrarily large cardinality.

Namely, if ${\displaystyle \Phi }$ is given and ${\displaystyle \kappa }$ is a cardinal number, let ${\displaystyle \{c_{\alpha };\,\alpha <\kappa \}}$ be a set of constant symbols not contained in ${\displaystyle S}$. Every finite subset of ${\displaystyle \Phi \cup \{\neg c_{\alpha }\equiv c_{\beta };\,\alpha <\beta <\kappa \}}$ then has a model in the language ${\displaystyle L_{I}^{\tilde {S}}}$, where ${\displaystyle {\tilde {S}}}$ denotes the symbol set extended by the new constant symbols. By the finiteness theorem there is then a model for all of ${\displaystyle \Phi \cup \{\neg c_{\alpha }\equiv c_{\beta };\,\alpha <\beta <\kappa \}}$, and this model has cardinality at least ${\displaystyle \kappa }$. With a somewhat more careful argument one can even find a model of ${\displaystyle \Phi }$ of cardinality exactly ${\displaystyle \kappa }$, provided the cardinality of the symbol set is at most ${\displaystyle \kappa }$.
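The construction can be summarized as follows (the abbreviation ${\displaystyle \Phi _{\kappa }}$ is introduced here only for readability):

```latex
% Extended set of expressions forcing at least kappa distinct elements:
\[
  \Phi_{\kappa} \;:=\; \Phi \,\cup\,
  \{\, \neg\, c_{\alpha} \equiv c_{\beta} \mid \alpha < \beta < \kappa \,\}
\]
```

Any model of ${\displaystyle \Phi _{\kappa }}$ must interpret the constants ${\displaystyle c_{\alpha }}$ by pairwise distinct elements and therefore has at least ${\displaystyle \kappa }$ elements.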

This shows a weakness of first-order predicate logic: in a first-order language, a set of expressions with an infinite model can never characterize its models up to isomorphism, because the class of all models of such a set always contains models of arbitrarily large cardinality and hence non-isomorphic models. Two models are called elementarily equivalent if they are models of exactly the same sets of expressions. First-order languages can therefore characterize infinite structures only up to elementary equivalence.

### The Löwenheim–Skolem theorem

The Löwenheim-Skolem theorem can also be derived from Henkin's theorem:

• If an at most countable set ${\displaystyle \Phi }$ of expressions of the language ${\displaystyle L_{I}^{S}}$ has an infinite model, then it also has a countable model.

In the introductory example, ${\displaystyle (\mathbb {Z} ,+,-,\leq )}$ is a countable model. Countable models are very easy to find in many mathematical theories, but the Löwenheim–Skolem theorem also has profound applications in model theory.
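As a purely finite, hence non-conclusive, sanity check one can test the ordered abelian group axioms from the introduction on a sample of integers; the order axiom used below, compatibility of ${\displaystyle \leq }$ with addition, is one standard choice for the order axioms that the text leaves in shorthand:

```python
# Finite spot-check of the ordered abelian group axioms on a sample of
# integers. This is an illustration, not a proof: only finitely many
# instances of the universally quantified axioms are tested.

from itertools import product

sample = range(-3, 4)

for x, y, z in product(sample, repeat=3):
    assert (x + y) + z == x + (y + z)      # associativity
    if x <= y:
        assert x + z <= y + z              # compatibility of <= with +
for x, y in product(sample, repeat=2):
    assert x + y == y + x                  # commutativity
for x in sample:
    assert x + 0 == x                      # neutral element
    assert x + (-x) == 0                   # inverse element

print("all tested axiom instances hold on the sample")
```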

### Lindström's theorem

Because of the weaknesses of the first-order language mentioned above, one may look for suitable extensions. If one finds genuinely more expressive languages in this way (what this means precisely would of course still have to be specified), then Lindström's theorems show that one must then give up either the finiteness theorem or the Löwenheim–Skolem theorem. If one wants to keep both theorems, then first-order predicate logic is "the best" that one can achieve.
