# Propositional logic

Propositional logic (sentential logic, SL) is a branch of logic that deals with statements and their combination by connectives, starting from structureless elementary statements (atoms) to which a truth value is assigned. In classical propositional logic, each statement is assigned exactly one of the two truth values "true" and "false". The truth value of a compound statement can be determined from the truth values of its partial statements without additional information.

## History

Historically, propositional logic goes back to Aristotle, who was the first to discuss propositional principles: in his Metaphysics the principle of contradiction and the principle of the excluded middle, and in his Prior Analytics indirect proof. The two-valued propositional semantics was developed a little later by the Megarian philosophers Diodorus Cronus and Philo. Propositional semantics and axiomatics were combined by the Stoic Chrysippus of Soli, who formulated the first propositional calculus. The advancement of the propositional logic of the Stoa through the Middle Ages is often overlooked. A first complete and decidable formalization for propositional tautologies - though not yet for propositional reasoning - was created by George Boole in 1847 with his algebraic logic calculus. Gottlob Frege formulated the first propositional calculus with inference rules in his Begriffsschrift of 1879. It was the template for Bertrand Russell's propositional calculus of 1908, which later prevailed (see below).

## Differentiation from other logics

Since classical propositional logic has become decisive in today's mathematics, this article deals with this modern main type of propositional logic. In general, classical logic is characterized by two properties:

• Every statement has exactly one of two truth values, usually called "false" and "true" (principle of two-valuedness or bivalence principle).
• The truth value of every compound statement is uniquely determined by the truth values of its sub-statements (principle of extensionality or compositionality, see principle of extensionality).

The principle of bivalence is often confused with the law of the excluded middle.

Classical propositional logic is the area of classical logic that examines the internal structure of sentences (statements) with respect to how they are composed of other sentences (sub-sentences) and how these sub-sentences are connected. The internal structure of sentences that cannot in turn be broken down into further sub-sentences is not considered by propositional logic. An example: the statement "All cats are dogs and the earth is flat" is combined with the connective "and" from the two shorter statements "All cats are dogs" and "The earth is flat". These two statements cannot themselves be broken down into further statements and are therefore elementary or atomic from a propositional point of view. Other logical systems that build on propositional logic consider the internal structure of such atomic statements; an important example is predicate logic.

In contrast to classical logic, non-classical logical systems arise when the principle of bivalence, the principle of extensionality, or both principles are abandoned. Non-classical logics that arise from abandoning the principle of bivalence are called many-valued logics. The number of truth values (more common in this case: pseudo truth values) can be finite (e.g. three-valued logic), but is often infinite (e.g. fuzzy logic). By contrast, logics that arise from abandoning extensionality use connectives for which the truth value of the compound sentence can no longer be unambiguously determined from the truth values of its parts. An example of a non-extensional logic is modal logic, which introduces the one-place non-extensional operators "it is necessary that" and "it is possible that".

Logical systems are not in competition for truth or correctness within logic. The question of which logical system should be used for a particular purpose is more of a pragmatic one.

Often, logical systems and logical questions are confused or mixed up with extra-logical questions, e.g. with the metaphysical question of which logical system is "correct", i.e. describes reality. There are different positions on this question, including the positivist position that it is pointless. However, these questions fall into other areas, e.g. philosophy, philosophy of science, and linguistics.

When classical propositional logic is dealt with in this article, this is not to be understood as a metaphysical commitment, or even as the assertion that "all statements are true or false". It is merely that classical propositional logic deals only with statements that are true or false. This is a great formal simplification that makes the system comparatively easy to learn. If more than two truth values are needed for metaphysical or pragmatic reasons, classical propositional logic can serve as a starting point for setting up a suitable logical system.

## Colloquial introduction

### Simple statement (elementary statement)

A statement A is a sentence that is either true (w, true, 1) or not true (f, false, 0). This applies to both simple and compound statements. There are no "half-truths". A statement can come from ordinary language as well as from the language of mathematics. An elementary statement is a statement that contains no propositional connectives (not, and, or, if ... then, if and only if).

Examples of elementary statements:

• $A_1$: Munich is 781 km from Hamburg.
• $A_2$: 9 is divisible by 3.
• $A_3$: Eintracht Frankfurt will be German soccer champions next season.
• $A_4$: All cars are green.

$A_2$ is obviously true, but $A_4$ is false. One must first check $A_1$ before one can decide whether $A_1$ is true or false. At the moment, one cannot decide whether $A_3$ is true. That will only become apparent at the end of the next football season.

In classical propositional logic, a statement is either true or not true, even if one does not know its truth content. This is the case, for example, with unsolved mathematical problems.

Note: $A_4$ is a universal statement; the structure of such statements is the subject of predicate logic. In the sense of propositional logic, it is an elementary statement.

### Negative statement - negation

The negation (also: sentence negation, external negation, contradictory opposite) of a statement A is that statement ¬A which is true if and only if A is false, and which is false if and only if A is true. More simply: the negation of a statement A turns the truth value of A into its opposite.

The negation of a statement A can always be obtained by prefixing the phrase "It is not the case that". A natural-language sentence can also be negated by inserting the word "not" or another negative formulation in a suitable place - but it is not always easy to recognize which formulation is to be used and where. Formally, in the most common notation one writes ¬A for "not A", in English and in switching algebra also "NOT A", occasionally also "~A".

| $A$ | $\neg A$ |
| --- | --- |
| false | true |
| true | false |

We negate the above examples:

• $\neg A_1$: It is not the case that Munich is 781 km away from Hamburg.
• $\neg A_2$: It is not the case that 9 is divisible by 3.
• $\neg A_3$: It is not the case that Eintracht Frankfurt will become German football champions next season.
• $\neg A_4$: It is not the case that all cars are green. (There can be green cars, but there is at least one car that is not green.)

In general, the following applies to negation:

• If a statement $A$ is true, its negation $\neg A$ is false.
• If a statement $A$ is false, its negation $\neg A$ is true.
• A statement $A$ cannot be true and false at the same time.
• The statements $A$ and $\neg A$ cannot both be true at the same time.
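The rules above can be illustrated directly with Boolean values; a minimal sketch in Python (the helper name `neg` is purely illustrative):

```python
# Negation turns the truth value of a statement into its opposite.
def neg(a: bool) -> bool:
    return not a

print(neg(True))   # False
print(neg(False))  # True
```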

### And - conjunction

A conjunction is a statement composed of two statements which asserts the truth of all of its partial statements. Colloquially, two statements A and B are combined with the connective "and" into the conjunction "A and B"; in the logical language one usually uses the sign $\land$ (notation: $A \land B$), occasionally also the ampersand (&).

• Spoken: A and B
• Notation: $A \land B$
• in English and in switching algebra also A AND B

The statement $A \land B$ is true if and only if both A and B are true. Otherwise it is false, namely if A or B or both statements are false.

| $A$ | $B$ | $A \land B$ |
| --- | --- | --- |
| true | true | true |
| false | true | false |
| true | false | false |
| false | false | false |

Consider the statements A: "9 is divisible by 3" and B: "9 is a square number". These partial statements and their negations are now linked by $\land$:

• $C_1$: 9 is divisible by 3 and 9 is a square number.
• $C_2$: 9 is not divisible by 3 and 9 is a square number.
• $C_3$: 9 is divisible by 3 and 9 is not a square number.
• $C_4$: 9 is not divisible by 3 and 9 is not a square number.

$C_1 = A \land B$ is true because both $A$ and $B$ are true.
$C_2 = \neg A \land B$ is false because $\neg A$ is false.
$C_3 = A \land \neg B$ is false because $\neg B$ is false.
$C_4 = \neg A \land \neg B$ is false because both $\neg A$ and $\neg B$ are false.
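These evaluations can be checked mechanically; a small Python sketch (the variable names mirror the statements above and are purely illustrative):

```python
# A: "9 is divisible by 3" and B: "9 is a square number" -- both true.
A = 9 % 3 == 0
B = any(k * k == 9 for k in range(10))

C1 = A and B                  # true  (both parts true)
C2 = (not A) and B            # false (first part false)
C3 = A and (not B)            # false (second part false)
C4 = (not A) and (not B)      # false (both parts false)
print(C1, C2, C3, C4)  # True False False False
```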

### Non-exclusive or - disjunction

A disjunction is a compound statement that asserts that at least one of its sub-statements is true. The disjunction in this sense is also called the non-exclusive or. (But beware: the term "disjunction" was and is often also used for the exclusive or, "either ... or" - think of the concept of disjoint sets. Some authors therefore use the term adjunction for the non-exclusive or.) The symbol "$\vee$" comes from the Latin word "vel", which means "or".

• Spoken: "A or B"; more precisely: "A or B or both", often in legal or medical texts: "A and/or B"
• Notation: $A \vee B$
• in English and in switching algebra also A OR B

The statement $A \vee B$ is true if at least one of the partial statements A or B is true, including when both are true. Otherwise it is false, namely when both A and B are false.

• A: 9 is divisible by 3
• B: 9 is a square number
| $A$ | $B$ | $A \vee B$ |
| --- | --- | --- |
| true | true | true |
| false | true | true |
| true | false | true |
| false | false | false |

These partial statements and their negations are now linked by $\vee$:

• $C_5$: 9 is divisible by 3 or 9 is a square number.
• $C_6$: 9 is not divisible by 3 or 9 is a square number.
• $C_7$: 9 is divisible by 3 or 9 is not a square number.
• $C_8$: 9 is not divisible by 3 or 9 is not a square number.

$C_5 = A \vee B$ is true because both $A$ and $B$ are true.
$C_6 = \neg A \vee B$ is true because $B$ is true.
$C_7 = A \vee \neg B$ is true because $A$ is true.
Only $C_8 = \neg A \vee \neg B$ is false because both $\neg A$ and $\neg B$ are false.
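Again these evaluations can be checked mechanically; a small Python sketch (variable names are purely illustrative):

```python
# A: "9 is divisible by 3" and B: "9 is a square number" -- both true.
A = 9 % 3 == 0
B = any(k * k == 9 for k in range(10))

C5 = A or B                   # true (both parts true)
C6 = (not A) or B             # true (second part true)
C7 = A or (not B)             # true (first part true)
C8 = (not A) or (not B)       # false (both parts false)
print(C5, C6, C7, C8)  # True True True False
```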

### Material implication

The material implication , also called conditional or subjunction , expresses the sufficient condition : It says that the truth of one sentence is a sufficient condition for the truth of the other sentence. One writes

$A \rightarrow B$

or

$A \supset B$ (A 1) and reads:
• A is a sufficient condition for B.
• If A, then B.
• A assumes that B.
• B is a necessary condition for A. That B is a necessary condition for A if and only if A is a sufficient condition for B is a surprising and perhaps counterintuitive, but correct statement. The section Sufficient and necessary condition tries to make this connection more visible.
• A implies B.
• Only if B, then A.

or even just

• If A, then B.

In a conditional, A is called the antecedent and B the consequent.

Examples:

• That it rains is a sufficient condition for the road to be wet.
• Whenever it rains, the road is wet.
• When it rains, the road is wet.
• It can only rain when the road is wet.
• If person x has a car of brand y, x has a car.
• If a number n is divisible by 6, then the number n is divisible by 3.

The reading "if ... then" is problematic in that the natural-language "if ... then" primarily expresses contextual relationships such as causality or temporal proximity. The material implication does none of this; it states only the formal connection: "That it rains is a sufficient condition for the road to be wet." The material implication takes no position on the question of why this is a sufficient condition - whether because of a causal connection or even purely by coincidence.

| $A$ | $B$ | $A \rightarrow B$ | $\neg B \rightarrow \neg A$ |
| --- | --- | --- | --- |
| false | false | true | true |
| false | true | true | true |
| true | false | false | false |
| true | true | true | true |

The contraposition is the inference from $A \rightarrow B$ to $\neg B \rightarrow \neg A$. For the examples this means:

• If the road isn't wet, it won't rain.
• If person x does not have a car, then x does not have a car of brand y.
• If the number n is not divisible by 3, then n is not divisible by 6.

Colloquially, one can occasionally be  tempted to make further - incorrect - statements: (A 2)

• Because it wasn't raining, the road can't be wet.
This conclusion is wrong, as the road can also get wet for other reasons (burst pipe, practice by the fire brigade ...).
• x doesn't have a y brand car, so x doesn't have a car.
Wrong, because he could have a z brand.
• n is not divisible by 6, so n is not divisible by 3 either.
This conclusion is also wrong: the number 15 is not divisible by 6, but it is divisible by 3.

That means: even if the conditional $A \rightarrow B$ is true, no statement about B can be obtained from the statement ¬A; B can be true or false. ("Ex falso sequitur quodlibet" - "from a falsehood, anything follows")

Implication is an important tool in mathematics. Most mathematical proofs use the concept of implication.
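In a programming language, the truth-value profile of the material implication can be reproduced as "not A, or B"; the sketch below also checks that the contrapositive has the same profile (the function name `implies` is an illustrative assumption, not a built-in):

```python
# Material implication: false only when the antecedent is true
# and the consequent is false.
def implies(a: bool, b: bool) -> bool:
    return (not a) or b

# The contrapositive (not B) -> (not A) has the same truth-value profile:
for a in (False, True):
    for b in (False, True):
        assert implies(a, b) == implies(not b, not a)
print(implies(True, False))  # False -- the only false row of the table
```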

(A 1) Beware of confusion: this is not a superset symbol but the curve, arc, or horseshoe symbol (Peano-Russell notation) - see the article on implication; the similar-looking subset symbol opens on the other side.
(A 2) Also known in English as modus morons ("foolish mode, mode of the idiot") - a pseudo-Latin play on words alluding to various modus terms in propositional logic such as modus tollens, derived from the English moron 'fool, idiot'. In correct Latin it would - with the same meaning - be modus morans (a instead of o).

### Biconditional

The biconditional , often also called object-language equivalence or material equivalence , expresses the sufficient and necessary condition , i.e. says that a statement A applies exactly when a statement B applies. One writes:

$A \leftrightarrow B$

• A is the case if and only if B is the case.
• A if and only if B.
• A is equivalent to B.
• A precisely when B.

A purely formal statement is also made with the biconditional, which says nothing about any contextual connection between A and B.

Instead of $A \leftrightarrow B$, one can also say that A is a sufficient condition for B and that B is a sufficient condition for A, i.e. $(A \rightarrow B) \land (B \rightarrow A)$. In fact, these two statements are logically equivalent.

| $A$ | $B$ | $A \leftrightarrow B$ |
| --- | --- | --- |
| false | false | true |
| false | true | false |
| true | false | false |
| true | true | true |

Example:

• The natural number n is divisible by 6 if and only if n is divisible by 2 and by 3.
If n is divisible by 6, then it follows that n is divisible by 2 and 3. The reverse is also true: If n is divisible by 2 and by 3, then n is divisible by 6.
• Today is Tuesday exactly when tomorrow is Wednesday.

The biconditional as a compound statement within the logical language (see object language) is often confused or mixed up with the concept of logical equivalence. Logical equivalence is a metalinguistic, mostly natural-language property of two statements of the logical language. A connection between logical equivalence and the biconditional exists only insofar as the metatheorem holds that a biconditional $A \leftrightarrow B$ is a tautology if and only if the two statements A and B are logically equivalent.
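The equivalence of the biconditional with the two converse implications can be verified over all four truth-value combinations; a minimal Python sketch (the function names `implies` and `iff` are illustrative assumptions):

```python
def implies(a: bool, b: bool) -> bool:
    return (not a) or b

def iff(a: bool, b: bool) -> bool:
    # Biconditional: true exactly when both sides have the same truth value.
    return a == b

# A <-> B is logically equivalent to (A -> B) and (B -> A):
for a in (False, True):
    for b in (False, True):
        assert iff(a, b) == (implies(a, b) and implies(b, a))
print("equivalence verified over all four rows")
```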

### Exclusive or

| $A$ | $B$ | $\neg (A \leftrightarrow B)$ |
| --- | --- | --- |
| false | false | false |
| false | true | true |
| true | false | true |
| true | true | false |

The exclusive or (contravalence or antivalence), "either A or B", means that exactly one of the two statements it connects is true. Similarly, an exclusive or is false not only if both A and B are false, but also if both are true. (Some authors use the term alternative for the exclusive or .)

Although the exclusive or is a concept that occurs again and again in natural language, it is not introduced as a separate connective in most logical languages. Instead, the exclusive or is expressed, for example, as the negated biconditional, i.e. as $\neg(A \leftrightarrow B)$.

The exclusive or, on the other hand, enjoys great importance in switching algebra , where it is usually written down as XOR (eXclusive OR) .
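For truth values, the exclusive or coincides with inequality; a short Python sketch checking it against the negated biconditional (the name `xor` is illustrative; Python booleans also support the `^` operator):

```python
def xor(a: bool, b: bool) -> bool:
    # Exclusive or: true exactly when the two truth values differ.
    return a != b

# XOR is the negated biconditional: not (A <-> B).
for a in (False, True):
    for b in (False, True):
        assert xor(a, b) == (not (a == b))
print(xor(True, False), xor(True, True))  # True False
```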

### Negation of a linked statement (De Morgan's Laws)

#### Negation of a conjunction

The negation of the conjunction "A and B" (in logical notation: $A \land B$) reads "It is not the case that A and B hold" (in logical notation: $\neg(A \land B)$). This is logically equivalent to the statement "A is not the case, or B is not the case (or both)" (in logical notation: $\neg A \lor \neg B$).

| $A$ | $B$ | $\neg (A \land B)$ |
| --- | --- | --- |
| false | false | true |
| false | true | true |
| true | false | true |
| true | true | false |

An example:

If one wants to deny the statement

It is raining and the earth is flat.

one can either say

It is not the case that it is raining and the earth is flat.

or one says

It is not raining, or the earth is not flat (or both).

In switching algebra, the connective NAND is used very often, where "A NAND B" has the same truth-value profile as the expression $\neg(A \land B)$.
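De Morgan's law for the conjunction can be confirmed by running through all four truth-value combinations; a minimal Python sketch:

```python
# De Morgan: not (A and B) is logically equivalent to (not A) or (not B).
for a in (False, True):
    for b in (False, True):
        assert (not (a and b)) == ((not a) or (not b))
print("De Morgan's law for the conjunction holds in all four rows")
```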

#### Negation of a disjunction

The negation of the disjunction "A or B (or both)" (in logical notation: $A \lor B$) reads "It is not the case that A or B holds" (in logical notation: $\neg(A \lor B)$). This is logically equivalent to the statement "A is not the case and B is not the case" (in logical notation: $\neg A \land \neg B$).

| $A$ | $B$ | $\neg (A \vee B)$ |
| --- | --- | --- |
| false | false | true |
| false | true | false |
| true | false | false |
| true | true | false |

An example:

If one wants to deny the statement

The earth is a disk or the earth is a cube.

one says

It is not the case that the earth is a disk or that the earth is a cube.

According to De Morgan's law, however, one can now also say:

The earth is not a disk and the earth is not a cube

or, put more elegantly,

The earth is neither a disk nor a cube.

In switching algebra, the connective NOR is used, which has the same truth-value profile as the statement $\neg(A \lor B)$.
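The dual De Morgan law for the disjunction can be confirmed in the same way; a minimal Python sketch:

```python
# De Morgan: not (A or B) is logically equivalent to (not A) and (not B).
for a in (False, True):
    for b in (False, True):
        assert (not (a or b)) == ((not a) and (not b))
print("De Morgan's law for the disjunction holds in all four rows")
```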

### Sufficient and necessary condition

This section takes up and elaborates the relationship between sufficient and necessary condition addressed in the section on the material implication, a relationship that is often perceived as counterintuitive at first.

Let us look again at the material implication $A \rightarrow B$.

One says: A is sufficient for B: whenever A is the case, B is also the case.

Conversely, one can also say: B is necessary for A. Without B, A cannot be fulfilled.

How does this connection come about?

We know that the truth of A leads to the truth of B, because A is a sufficient condition for B. So it is simply not possible for A to occur without B also occurring: B is therefore necessarily the case if A is the case. B is "necessary" for A.

So this connection is actually quite simple; The main reason that it is often perceived as counterintuitive at first is probably the difficulty in making a strict distinction between the many meanings of the colloquial "if ... then" on the one hand and the purely formal sufficient and necessary condition on the other.

With the colloquial "if ... then" one would almost always want to express a contextual (causal or temporal) connection between antecedent and consequence: "Rain causes road wetness", "First the rain falls, only afterwards does the road get wet". If the sufficient condition is misunderstood in this sense, then it is clear that the necessary condition formulated in reverse order “Only when the road is wet does it rain” looks strange: “Rain does cause road wetness. How can it ever be concluded from this that wet roads cause rain? "

The material implication says none of this. "A is a sufficient condition for B" simply means that whenever statement A is true, statement B is also true - timelessly and without any contextual connection, not "afterwards" and not "because".

Analogously, the necessary condition, “B is a necessary condition for A”, only states that B is true if A is true. But that is exactly the definition of the conditional A → B.

## Formal approach

### Introduction

At the latest when reading aloud sentences like

"The statement $A \land B$ is true if and only if the statements A and B are true",

the self-confident layman will ask what this is all about.

The answer of the logician: An attempt should be made to bring security into the rules of logical inference. Since the sophists , it has been clear to the West that apparently compelling conclusions can lead to apparently absurd results. Again and again paradoxes were formulated and felt as a challenge by great thinkers. Logicians therefore try to keep the rules of argument as strict as possible.

The introductory example makes it clear that a separation of the language levels is essential for this: The formal statement A∧B should be explained by speaking about the statement A as well as the statement B on a metalinguistic level.

One attempt to do this is to define propositional logic as a formal system, specifically as a calculus (a certain type of formal system). The terms "true" and "false" do not appear in this system at first. Instead, axioms are posited that are viewed simply as character strings, from which further derivable character strings are obtained by means of certain inference rules.

The aim is, on the one hand, that in a formal system only those character strings (sentences) can be derived that are also true under a plausible interpretation. On the other hand, all sentences that can be interpreted as "true" should also be derivable. The first is the requirement of soundness (correctness), the second of completeness of the formal system; both properties are described under Calculus: the term calculus in logic.

For the classical propositional logic with which we are dealing here, there are calculi (formal systems) that are both correct and complete. For more complex logical systems (e.g. set theory ), however, it is impossible to set up a complete calculus that is also correct - this finding was proven by Kurt Gödel in 1931 ( Gödel's incompleteness theorem ).

### Syntax

There are many different ways of formally defining the syntax ("grammar") of a logical language; this usually happens within the framework of a calculus. The following definition is therefore only to be understood as an example of what a calculus for classical propositional logic can look like. Further examples of concrete calculi can be found under tree calculus, Begriffsschrift, systems of natural deduction, sequent calculus, or resolution calculus. Another axiomatic calculus is given as an example in the article Hilbert calculus, a graphical calculus in the article Existential Graphs.

#### Building blocks of propositional language

Sentence letters ("atomic formulas", sentence constants), connectives, and structural symbols serve as the building blocks of the propositional language. The sentence letters are the characters $P_0, P_1, P_2, \ldots$. The connectives are the signs ¬, ∧, ∨, → and ↔. Round brackets serve as structural symbols. (A 1)

Formally, this can be expressed, for example, in the following way:

Let V be the ( countably infinite ) set of atomic formulas (sentence letters):

$V = \{P_n \mid n \in \mathbb{N}_0\}$ ($\mathbb{N}_0$: set of natural numbers including 0), i.e. $V = \{P_0, P_1, P_2, P_3, \ldots\}$

Let J be the set of connectives and structural symbols:

J = {¬, ∧, ∨, →, ↔, (,)}

The alphabet of the logical language is the set V ∪ J, that is, the union of the sets of atomic formulas, connectives, and structural symbols.

#### Formation rules

The formation rules determine how sentences (formulas) can be formed from the building blocks of the propositional language. Here propositional formulas are to be defined inductively as words over the alphabet of the logical language, i.e. over V ∪ J as follows : (A 2)

• All atomic formulas F ∈ V (i.e. all sentence letters) are formulas.
• If F is a formula, then (¬F) is also a formula.
(This formula is called the negation of F.)
• If F and G are two (not necessarily different) formulas, then (F ∧ G) is also a formula.
(This formula is called the conjunction of F and G.)
• If F and G are two (not necessarily different) formulas, then (F ∨ G) is also a formula.
(This formula is called the disjunction of F and G.)
• If F and G are two (not necessarily different) formulas, then (F → G) is also a formula.
(This formula is called the material implication or conditional of F and G.)
• If F and G are two (not necessarily different) formulas, then (F ↔ G) is also a formula.
(This formula is called the biconditional of F and G.)
• Nothing else is a propositional formula.
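The inductive definition above can be mirrored in a program by a recursive data representation; a sketch in Python, where the tuple encoding (`"P"`, `"not"`, `"and"`, `"or"`, `"imp"`, `"iff"`) is an illustrative assumption and not part of the calculus itself:

```python
# Formulas as nested tuples, mirroring the formation rules:
#   ("P", n)                        -- atomic formula P_n
#   ("not", F)                      -- negation (¬F)
#   ("and"/"or"/"imp"/"iff", F, G)  -- the binary connectives

def is_formula(x) -> bool:
    """True exactly for objects built by the formation rules above."""
    if not isinstance(x, tuple) or not x:
        return False
    if x[0] == "P":
        return len(x) == 2 and isinstance(x[1], int) and x[1] >= 0
    if x[0] == "not":
        return len(x) == 2 and is_formula(x[1])
    if x[0] in ("and", "or", "imp", "iff"):
        return len(x) == 3 and is_formula(x[1]) and is_formula(x[2])
    return False  # nothing else is a propositional formula

f = ("imp", ("and", ("P", 0), ("not", ("P", 1))), ("P", 2))  # ((P0 ∧ (¬P1)) → P2)
print(is_formula(f))                  # True
print(is_formula(("xor", ("P", 0))))  # False -- no such formation rule here
```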

#### Inference rules

Inference rules are generally transformation rules that are applied to existing formulas and generate new formulas from them. If one sets up a calculus for a logical system, one chooses the transformation rules in such a way that, from existing formulas, they generate formulas that follow semantically from the initial formulas - hence the term "inference rule" (drawing an inference).

Within the syntax, however, the inference rules are purely formal transformation rules with no meaning in terms of content.

Only two concrete inference rules are given here: the modus ponendo ponens and the substitution rule.

Modus ponendo ponens
From a proposition of the form $\varphi \rightarrow \psi$ and a proposition of the form $\varphi$, one may infer a proposition of the form $\psi$; here $\varphi$ and $\psi$ are placeholders for arbitrary formulas. For example, by this inference rule one may conclude "The road surface is wet" from "If rain wets the road, the road surface is wet" and "Rain wets the road".
Substitution rule
In a sentence, all occurrences of an arbitrary atom (e.g. "P") may be replaced by an arbitrarily complex sentence (e.g. $(P \land \neg Q)$). However, all occurrences of the chosen atom really must be replaced, and they really must all be replaced by the same sentence.
For example, the substitution rule can be used to infer $(P \land \neg Q) \rightarrow ((P \land \neg Q) \lor Q)$ from $P \rightarrow (P \lor Q)$. One says that $(P \land \neg Q)$ is substituted (inserted) for P.
(A 1) For the predicate-logic correspondence, see First Order Predicate Logic - Symbols.
(A 2) For the predicate logic correspondence, see First Order Predicate Logic - Terms .

#### Axioms

Axioms are distinguished (in the sense of: singled out) formulas of the propositional language. Their distinction is that they may be used within a proof or a derivation (see below) without further justification.

In a pragmatic way, one chooses such formulas as axioms, which are semantically tautologies, i.e. always apply, and which help to shorten proofs. Within the syntax, however, the axioms are purely formal objects that have no meaning or justification whatsoever.

Axioms are generally optional, i.e. a calculus can do entirely without axioms if it has sufficiently many or sufficiently powerful inference rules. Axiom-free calculi are, for example, the systems of natural deduction or the tree calculi.

As an example, an axiomatic calculus is shown here, namely Russell's propositional calculus from his type theory of 1908, which he adopted in the Principia Mathematica of 1910. This calculus comprises the following axioms (of which the fourth is redundant, i.e. not strictly necessary, because it can be derived from the other axioms):

1. $(P \lor P) \rightarrow P$
2. $Q \rightarrow (P \lor Q)$
3. $(P \lor Q) \rightarrow (Q \lor P)$
4. $[P \lor (Q \lor R)] \rightarrow [Q \lor (P \lor R)]$
5. $(Q \rightarrow R) \rightarrow [(P \lor Q) \rightarrow (P \lor R)]$

In order to derive from these axioms valid sentences that contain connectives other than those occurring in the axioms, the additional connectives are reduced to the existing ones by the following definitions:

1. $(P \rightarrow Q) := (\neg P \lor Q)$
2. $(P \land Q) := \neg(\neg P \lor \neg Q)$

As an alternative to concrete axioms - as given here - one can also specify axiom schemes, in which case one can do without a substitution rule. If the above axioms are interpreted as axiom schemes, then e.g. the first axiom scheme, $(P \lor P) \rightarrow P$, stands for infinitely many axioms, namely all substitution instances of this scheme.

#### Derivation and proof

A derivation is a list of consecutively numbered sentences that begins with one or more assumptions (the premises of the derivation) or axioms. Each of the following sentences is either also an axiom (in some calculi further assumptions are also admissible) or has arisen from one or more of the preceding lines by applying inference rules. The last sentence in the list is the conclusion of the derivation.

A derivation without premises is called proof . However, the words “derivation” and “proof” are often used synonymously.

If it is possible to derive a conclusion P from a set of assumptions (premises) Δ, one also writes $\Delta \vdash P$.

If it is possible to derive (prove) a sentence P without the use of assumptions, one also writes $\vdash P$. In this case P is called a theorem.

The sign $\vdash$ goes back to the Begriffsschrift ("concept script"), the work in which Gottlob Frege gave the first formalization of predicate logic in 1879.

In classical propositional logic, the rules of inference are chosen in such a way that all valid arguments (and only valid arguments) can be derived from them; the question of validity is dealt with in the following section, “Semantics”.

### Semantics

Outside of logic, semantics refers to an area of ​​research that deals with the meaning of language and its parts. The word semantics is often used synonymously with the word meaning .

Even within logic, semantics is about meaning: namely, to assign a meaning to the expressions of a formal language - for example the language of propositional logic discussed here. In logic, this is usually done very formally.

At the center of (formal) semantics is an evaluation function (other terms: valuation function, denotation function, truth-value function), which assigns a meaning to the formulas of the logical language. Formally, the evaluation function is a mapping from the set of formulas of the language into the set of truth values. It is often denoted by the capital letter V.

In classical propositional logic, the evaluation function is very simple: the principle of two-valuedness requires that it deliver exactly one of exactly two truth values for each formula to be evaluated; and the principle of extensionality requires that, when evaluating a compound sentence, the evaluation function need only consider the evaluations of its subformulas.

Each atom, i.e. each sentence letter, is assigned a truth value by stipulation; one says the atoms are interpreted. Thus it is determined, for example, that P0 is true, that P1 is false, and that P2 is also false. This suffices to evaluate the building blocks of the logical language. Formally, such an assignment, called an interpretation and often denoted by the lowercase letter v, is a function in the mathematical sense, i.e. a mapping from the set of atoms into the set of truth values.

When the evaluation function V is applied to an atom, i.e. when it is to evaluate an atom, it returns the interpretation of that atom in the sense of the preceding paragraph. In other words, it returns the value that the interpretation v assigns to the atom.

In order to evaluate compound formulas, it must be defined for each connective which truth value the evaluation function returns for the various combinations of truth values that its arguments can take. In classical propositional logic this is usually done by means of truth tables, because there are only few possibilities.

The one-place connective ¬, negation, is defined in classical propositional logic so that it reverses the truth value of its argument, i.e. "negates" it: if the evaluation of a formula X is true, then the evaluation function returns false for ¬X; if X evaluates to false, then the evaluation function returns true for ¬X. The truth table looks like this:

| a | Negation ¬a |
|---|---|
| F | T |
| T | F |

The truth functions of the two-place connectives used here are defined in classical propositional logic as follows:

| a | b | Conjunction a ∧ b | Disjunction a ∨ b | Material implication (conditional) a → b | Biconditional a ↔ b |
|---|---|---|---|---|---|
| F | F | F | F | T | T |
| F | T | F | T | T | F |
| T | F | F | T | F | F |
| T | T | T | T | T | T |

In general there are four one-place and sixteen two-place connectives in classical propositional logic. The logical language treated here is limited to the connectives ¬, ∧, ∨, → and ↔, because these are the most common and their meaning is best known from everyday language. From a formal point of view, the only condition one wants to satisfy in choosing connectives is that all other theoretically possible connectives can be expressed with the chosen ones; one says that the set of chosen connectives is functionally complete. This requirement is met by the choice made here.
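The evaluation function described in this section can be sketched as a recursive procedure. The nested-tuple representation of formulas and the names below are assumptions made for illustration, not a fixed convention:

```python
# A recursive evaluation function V: given a formula and an interpretation v
# (a mapping from atoms to truth values), return the formula's truth value.
def V(formula, v):
    op = formula[0]
    if op == "atom":                       # e.g. ("atom", "P0")
        return v[formula[1]]
    if op == "not":
        return not V(formula[1], v)
    if op == "and":
        return V(formula[1], v) and V(formula[2], v)
    if op == "or":
        return V(formula[1], v) or V(formula[2], v)
    if op == "->":                         # material implication
        return (not V(formula[1], v)) or V(formula[2], v)
    if op == "<->":                        # biconditional
        return V(formula[1], v) == V(formula[2], v)
    raise ValueError(f"unknown connective: {op}")

# The interpretation from the text: P0 is true, P1 and P2 are false.
v = {"P0": True, "P1": False, "P2": False}
f = ("or", ("atom", "P0"), ("and", ("atom", "P1"), ("atom", "P2")))
print(V(f, v))   # P0 ∨ (P1 ∧ P2) is true under v
```

The case analysis mirrors the truth tables above: atoms are looked up in the interpretation, and each connective clause implements one table.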

More on the question of how many and which connectives there are, and how many are required for functional completeness, can be found in the chapter on connectives.

#### Semantic validity, tautologies

Semantic validity is a property of formulas or of arguments. (An argument is the claim that a certain statement, the conclusion, follows from certain statements, the premises.)

A formula of the propositional language is called semantically valid if and only if it is true under all interpretations, i.e. under all assignments of truth values to the atoms occurring in it; if it is, so to speak, universally valid; in other words, if the truth table for the statement shows the result true in every row. Semantically valid formulas are also called tautologies, and that a formula P is a tautology is formally written as follows:

$\models P$

An argument is called semantically valid if and only if the conclusion is true provided that all premises are true. In the formulation of Gottfried Wilhelm Leibniz: From truth follows only truth. This definition must of course also be made precise, which is done as follows: an argument is semantically valid if and only if, under every assignment of truth values to the atoms occurring in the premises and the conclusion for which the evaluation function returns the value true for all premises, it also returns the value true for the conclusion.

To express that a formula $P$ (the conclusion) follows semantically from a set $\Delta$ of formulas (the set of premises), one writes formally:

$\Delta \models P$

Note the graphical similarity and the difference in content between $\Delta \vdash P$ (chapter "Derivation and proof") and $\Delta \models P$ (semantic consequence): the first formulation, $\Delta \vdash P$, expresses the syntactic validity of the argument, i.e. says that the formula $P$ can be derived from the formulas in $\Delta$ with the inference rules of the chosen calculus. $\Delta \models P$, on the other hand, asserts semantic validity, which in classical propositional logic is captured, as in the preceding paragraphs, by Leibniz's "From truth follows only truth".
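For small formula sets, the semantic consequence relation Δ ⊨ P can be tested by brute force: the argument is valid iff every interpretation that makes all premises true also makes the conclusion true. The following sketch assumes, purely for illustration, that formulas are represented as Boolean functions of an interpretation dictionary:

```python
# Brute-force test of semantic consequence: enumerate all interpretations
# of the atoms and look for a counterexample (all premises true, conclusion false).
from itertools import product

def entails(premises, conclusion, atoms):
    for values in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False          # counterexample found
    return True

# Example: {P → Q, P} ⊨ Q (modus ponens) holds; {P → Q, Q} ⊨ P does not.
P = lambda v: v["P"]
Q = lambda v: v["Q"]
imp = lambda v: (not v["P"]) or v["Q"]     # P → Q

print(entails([imp, P], Q, ["P", "Q"]))    # True
print(entails([imp, Q], P, ["P", "Q"]))    # False (affirming the consequent)
```

The second call returns False because the interpretation P = false, Q = true makes both premises true but the conclusion false.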

#### Important semantic properties: satisfiability, refutability and unsatisfiability

Besides the property of validity (general validity) there are some other important properties: satisfiability, refutability and unsatisfiability. In contrast to validity, which can be a property of formulas or arguments, satisfiability, refutability and unsatisfiability are properties of sentences or sets of sentences.

• A formula is called satisfiable if there is at least one interpretation of the atoms (sentence letters) occurring in it under which the formula is true.
• A formula is called refutable if there is at least one interpretation of the atoms occurring in it under which the formula is false.
• A formula is called unsatisfiable if it is false under all interpretations of the sentence letters occurring in it.
• A set of formulas is called satisfiable if there is at least one interpretation under which all the formulas it contains are simultaneously true.

The question of whether a formula (or a set of formulas) has one of these properties cannot, just like the question of whether a formula is universally valid, i.e. a tautology, be answered efficiently for arbitrary formulas: the truth table is a decision procedure for each of these questions, but a truth table for a statement or a set of statements in n atoms comprises $2^n$ rows; the truth-table procedure is nothing more than a brute-force procedure.

Each of these questions can be reduced to the question of whether a certain formula is satisfiable:

• A formula $\varphi$ is a tautology if and only if $\neg \varphi$ is unsatisfiable.
• A formula $\varphi$ is refutable if and only if $\neg \varphi$ is satisfiable.
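The two reductions can be sketched directly; the representation of formulas as Boolean functions and the helper names below are illustrative choices:

```python
# Tautology, satisfiability and refutability by enumeration, and the
# duality between them via negation.
from itertools import product

def interpretations(atoms):
    """Yield every assignment of truth values to the given atoms."""
    for values in product([False, True], repeat=len(atoms)):
        yield dict(zip(atoms, values))

def satisfiable(f, atoms):
    return any(f(v) for v in interpretations(atoms))

def refutable(f, atoms):
    return any(not f(v) for v in interpretations(atoms))

def tautology(f, atoms):
    return all(f(v) for v in interpretations(atoms))

# phi = P ∨ ¬P is a tautology, hence ¬phi is unsatisfiable.
phi = lambda v: v["P"] or not v["P"]
neg_phi = lambda v: not phi(v)

assert tautology(phi, ["P"]) == (not satisfiable(neg_phi, ["P"]))
assert refutable(phi, ["P"]) == satisfiable(neg_phi, ["P"])
print("duality holds for the example")
```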

The question of whether a statement is satisfiable is called the satisfiability problem or SAT problem (from the English word satisfiability). The SAT problem plays an important role in theoretical computer science and complexity theory. The satisfiability problem for general (arbitrary) formulas is NP-complete, i.e. (provided that P is not equal to NP) not solvable in polynomial running time.

For certain proper subsets of the propositional formulas, however, the SAT problem is solvable faster, i.e. in polynomially bounded computing time. One such subset are the Horn formulas: conjunctions of disjunctions whose disjuncts are negated or non-negated atoms, where within each such disjunction at most one atom may be non-negated.
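Satisfiability of Horn formulas can be decided in polynomial time by a marking (unit propagation) procedure. The clause representation below is an assumption made for this sketch: each Horn clause is a pair (body, head), encoding the disjunction ¬b₁ ∨ … ∨ ¬bₖ ∨ head, with head None for a clause with no positive atom:

```python
# Polynomial-time marking algorithm for Horn satisfiability: repeatedly mark
# an atom true when all atoms in some clause body are already marked.
def horn_sat(clauses):
    true_atoms = set()
    changed = True
    while changed:
        changed = False
        for body, head in clauses:
            if set(body) <= true_atoms:          # all premises marked true
                if head is None:
                    return False, None           # purely negative clause violated
                if head not in true_atoms:
                    true_atoms.add(head)
                    changed = True
    return True, true_atoms                      # minimal model found

# Example: A ∧ (¬A ∨ B) ∧ (¬A ∨ ¬B ∨ C) is satisfiable with A = B = C = true.
sat, model = horn_sat([([], "A"), (["A"], "B"), (["A", "B"], "C")])
print(sat, sorted(model))   # True ['A', 'B', 'C']
```

The loop runs at most once per atom marked, so the total work is polynomial in the size of the formula.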

#### Algebraic view

If one examines the semantics set up here for classical propositional logic, one notices certain regularities. If, for example, the evaluation function is applied to a statement of the form X ∧ W, where W is an arbitrary true statement, then the evaluation function always returns the truth value of X for X ∧ W, that is, V(X ∧ W) = V(X) if V(W) = true. Structurally equivalent regularities also hold in other semantics, including those set up for entirely different, non-logical systems. For arithmetic, for example, the evaluation function there (here called V_arithmetic) always returns the value of X for an expression of the form X + Y, provided that the value of Y is zero: V_arithmetic(X + Y) = V_arithmetic(X) if V_arithmetic(Y) = zero.

A formal science that investigates such structural regularities is abstract algebra (mostly a branch of mathematics, but also of computer science). In abstract algebra one investigates, for example, for which operations there is a neutral element, i.e. an element N such that for an operation op the following holds (for arbitrary X): X op N = X. From an algebraic point of view, one would say that there is exactly one neutral element for the classical propositional conjunction, namely true, and that there is likewise exactly one neutral element for addition in arithmetic, namely the number zero. As an aside, there are neutral elements for other connectives as well; the neutral element for disjunction is false: V(X ∨ F) = V(X) if V(F) = false.
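The neutral elements named here can be checked directly; a trivial sketch using Python's Boolean operators in place of the evaluation function:

```python
# true is neutral for conjunction, false is neutral for disjunction:
# for every truth value X, X ∧ true = X and X ∨ false = X.
for X in (False, True):
    assert (X and True) == X     # X ∧ W with W true:  V(X ∧ W) = V(X)
    assert (X or False) == X     # X ∨ F with F false: V(X ∨ F) = V(X)
print("true is neutral for ∧, false is neutral for ∨")
```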

Formal algebra considers formal semantics purely according to their structural properties. If these are identical, then from an algebraic point of view there is no difference between them. From the point of view of formal algebra, the semantics of classical propositional logic is a two-valued Boolean algebra. Other formal systems whose semantics each form a Boolean algebra are switching algebra and elementary set theory; from an algebraic point of view there is no difference between these disciplines.

#### Normal forms

Every propositional formula can be transformed into an equivalent formula in conjunctive normal form and an equivalent formula in disjunctive normal form .
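One simple, if inefficient, way to obtain a disjunctive normal form is to read it off the truth table: one conjunction of literals for each interpretation that makes the formula true. This sketch assumes the same Boolean-function representation used above; the names are illustrative:

```python
# Build a DNF string from the truth table of a formula: each satisfying
# interpretation contributes one conjunction of literals.
from itertools import product

def dnf_from_truth_table(f, atoms):
    terms = []
    for values in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if f(v):
            lits = [a if v[a] else f"¬{a}" for a in atoms]
            terms.append("(" + " ∧ ".join(lits) + ")")
    return " ∨ ".join(terms) if terms else "⊥"

xor = lambda v: v["P"] != v["Q"]        # exclusive or as an example formula
print(dnf_from_truth_table(xor, ["P", "Q"]))
# → (¬P ∧ Q) ∨ (P ∧ ¬Q)
```

The dual construction, one disjunction per falsifying row (with all literals negated), yields a conjunctive normal form in the same way.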

### Metatheory

In the metatheory, the properties of logical systems are examined: The logical system is the object of investigation in the metatheory.

A metatheoretical question is, for example, whether a contradiction can be derived in a calculus.

The aim of this section is to consider some important metatheoretical questions from the perspective of propositional logic.

**Consistency**
A calculus is called consistent if and only if it is impossible to derive a contradiction with the help of its axioms and rules, i.e. a statement of the form P ∧ ¬P (e.g. "Hugo is tall and Hugo is not tall"). For a calculus usable in propositional logic, this is a minimum requirement.
If a contradiction can be derived in a calculus, the calculus is called inconsistent.
There are formal systems in which such a contradiction can be derived but which are nevertheless entirely reasonable. For such systems a different concept of consistency is used: a calculus is called consistent if not every formula can be derived in it (see paraconsistent logic).
It is easy to show that the two concepts of consistency coincide for classical logic: in classical logic, any sentence can be derived from a contradiction (this fact is called ex falso quodlibet), i.e. if a classical calculus can derive even a single contradiction, i.e. is inconsistent in the first sense, then it can derive every statement, i.e. is inconsistent in the second sense. Conversely, if a calculus is inconsistent in the second sense, i.e. every statement is derivable in it, then in particular every contradiction is derivable, and it is inconsistent in the first sense as well.
**Correctness**
A calculus is called correct (semantically correct, sound) if only semantically valid formulas can be derived in it. For classical propositional logic this means more simply: a calculus is correct if only tautologies can be proved in it and only valid arguments can be derived in it.
If it is possible in a propositional calculus to derive at least one invalid argument or to prove at least one formula that is not a tautology, the calculus is incorrect.
**Completeness**
A calculus is called complete (semantically complete) if and only if all semantically valid formulas can be derived in it; for classical propositional logic: if all tautologies are derivable in it.
With respect to a specific semantics, a calculus is called adequate if it is (semantically) correct and (semantically) complete.

A metatheoretical result is, for example, the statement that every correct calculus is also consistent. Another metatheoretical result is the observation that a consistent calculus need not automatically be correct: it is easily possible to set up a calculus in which no contradiction can be derived but in which, for example, the non-universally-valid statement of the form "A ∨ B" can be derived. Such a calculus would be consistent on account of the former, but incorrect on account of the latter.

Another, very simple result is the observation that a complete calculus need not automatically be correct or even consistent. The simplest example would be a calculus in which every formula of the propositional language can be derived. Since every formula is derivable, all tautologies, which are formulas, are derivable: that makes the calculus complete. But since every formula is derivable, the formula P0 ∧ ¬P0 and the formula A ∨ B are also derivable: the former makes the calculus inconsistent, the latter incorrect.

The ideal that a calculus should fulfill is correctness and completeness: if both hold, then it is an ideal calculus for a logical system, because it can derive all semantically valid sentences (and only these). So the two questions of whether a concrete calculus is correct and/or complete, and whether a correct and complete calculus is possible at all for a given logical system, are two particularly important metatheoretical questions.

## Delimitation and philosophy

Classical propositional logic as outlined here is a formal logical system. As such, it is one of many that are of equal standing from a formal point of view, each with quite specific properties: most are consistent, most are correct, some are complete, and some are even decidable. From a formal point of view, logical systems do not compete with one another with regard to truth or correctness.

To be clearly distinguished from such formal, intra-logical questions are extra-logical ones: those concerning the usefulness (applicability) of individual systems for a specific purpose, and those concerning the philosophical, especially metaphysical, status of individual systems.

The usefulness consideration is the simpler one, and differences of opinion about it are less profound or less serious. Classical propositional logic, for example, has proven itself in the description of electronic circuits (switching algebra) and in the formulation and simplification of logical expressions in programming languages. Predicate logic is often used when factual knowledge is to be formalized and conclusions are to be drawn from it automatically, as happens, among other places, in the programming language Prolog. Fuzzy logics, non-monotonic, many-valued and also paraconsistent logics are very welcome when knowledge bases are to be handled in which statements with different degrees of certainty or even contradictory statements are to be stored, and meaningful conclusions are to be drawn from the overall stock. Even if, depending on the application, there can be very large differences of opinion about which logical system is more suitable, the nature of the problem is immediately and equally accessible to everyone involved. Considerations and questions from the individual sciences predominantly take place in this area.

Questions of a philosophical or metaphysical nature are (even) more controversial than such pragmatic considerations. Almost paradigmatic is the question "which logical system is correct", where "correct" here means: which logical system does not merely simplify a partial aspect of reality in a model, but describes reality, being as a whole, adequately. There are many different opinions on this question, including the opinion introduced by philosophical positivism that the question as a whole is meaningless.

Among the metaphysical questions is also the question of whether there is such a thing as a metaphysical principle of two-valuedness, i.e. whether statements about reality consistently fit into the true/false scheme or not. This question is independent of whether working with two-valued or many-valued logics makes practical sense: even if a metaphysical principle of two-valuedness held, one could use many-valued logics in practical applications, for example to capture epistemic facts, say about statements that are metaphysically true or false but of which it is not (or not yet) known which of the two is the case. Conversely, even if such a metaphysical principle did not hold, one could prefer two-valued logic for its simplicity in applications that only deal with sentences which actually are true or false.

Like most metaphysical questions, the question of a metaphysical principle of two-valuedness has never been answered conclusively and satisfactorily. An early objection to such a principle, put up for discussion by Aristotle, concerned statements about future events ("Tomorrow it will rain"). If statements about the future are already true or false today, it is argued, then the future must be predetermined down to the last detail. Another objection that has been raised is that there are statements whose truth can be ascertained neither in practice nor in principle; for example, the truth of "The lawn of the White House consisted of exactly 6,120,375 blades of grass on February 1, 1870" simply cannot be determined.

Proponents of a metaphysical principle of two-valuedness often point to the behavior of metatheorists, i.e. mathematicians or logicians who make statements about formal systems: no matter how many-valued or non-classical the system under investigation is, the meta-level assumptions, assertions and stipulations made about it are always two-valued. A calculus, even a paraconsistent or non-monotonic one, is always regarded as either consistent or inconsistent, and a logical system is always either correct or incorrect, complete or incomplete, decidable or undecidable, never "a little bit" of both. Proponents interpret this as an indication that in reality there is indeed a strict distinction between true and false, or that it is at least sensible to assume one.

Another philosophical question is that of the metaphysical status of the objects investigated by logic, that is, what logical systems, calculi and truth values actually "are".

The Platonic point of view is that the signs and constructs used in logic have an extra-logical meaning, that they are names for objects that actually exist (albeit of course non-physical). In this sense there would be such a thing as the true and the false , abstract objects that are named by the signs “true” and “false”.

The opposite pole to Platonism would be nominalism, which ascribes existence only to the signs that are manipulated in logic. The subject of logic is signs, and the activity of logicians is the manipulation of signs. But the signs do not denote anything, so there is no such thing as the true or the false. In the foundational dispute of mathematics, the nominalist position would correspond to the formalist direction.

A middle position would be taken by philosophical constructivism , according to which the signs do not denote independently existing objects, but objects are constructed by dealing with the signs.