Falsificationism

Are all swans white? The classical view in the philosophy of science was that it is the task of science to “prove” such hypotheses or to derive them from observational data. This, however, seems difficult to do, since a general rule would have to be inferred from individual cases, which is not logically permissible. But a single black swan allows the logical conclusion that the statement “all swans are white” is false. Falsificationism therefore aims at the falsification of hypotheses instead of attempting to prove them.

Falsificationism (rarely: critical empiricism) is the philosophy of science of critical rationalism, originally developed by Karl R. Popper. With the demarcation criterion of falsifiability and the method of falsification, he proposes solutions to the demarcation problem and the induction problem, i.e. to the questions of where the limits of empirical science lie and which methods it should apply.

Overview

According to the philosophy of science founded by Karl Popper, progress in knowledge takes place through “trial and error”: we give tentative answers to open questions and subject them to strict examination. If they fail, we discard the answer and try to replace it with a better one.

Falsificationism therefore assumes that a hypothesis can never be proven, but can, where appropriate, be refuted. This basic idea is older than Popper; it can be found, for example, in August Weismann, who stated in 1868:

“A scientific hypothesis can never be proven, but if it is wrong it can be refuted, and the question therefore arises whether facts cannot be adduced which stand in irresolvable contradiction to one of the two hypotheses and thus bring it down.”

For Karl Popper, by his own account, the question of rationality in the scientific method arose through Einstein's theory of relativity. Until then, the prevailing opinion had been that a theory like Newton's described irrefutable laws of nature, and hardly anyone doubted the truth and finality of this theory. It was confirmed by numerous observations and made nontrivial prognoses possible. Einstein, however, had not only developed a novel, powerful theory; he had also considerably unsettled the traditional understanding of science. Popper was particularly impressed by Einstein's proposals to test his theory through qualified experiments, i.e. to examine prognoses through observations that could lead to a refutation (falsification) of the theory.

The question of whether the truth of a theory can be guaranteed at all led Popper to discuss the induction problem. The induction problem is the question of whether, and if so to what extent, it is possible to draw knowledge-expanding inductive inferences from empirical observations to general, especially law-like statements. This includes, for example, the problem of whether, and if so what, connection exists between the observation that the sun has risen every day and the assumption that it will also do so tomorrow. Hume and Peirce had already dealt with the problem of induction.

Popper came to the conclusion that induction does not exist. He found that the assumption that there are inductively confirming observations, which exclude contrary observations or make them improbable, leads deductively to contradictions. According to Popper, theories can only prove their worth (be corroborated); they cannot be made probable or proven true. For him, induction not only fails for these purposes, it does not exist at all, not even as a means of forming hypotheses. For the formation of generalizations from individual statements is logically impossible: even the most trivial conceivable individual statements are “theory-laden”, that is, they always contain theoretical elements. The theory must therefore always be there (possibly unconsciously) before individual statements can be made at all, for example by deductive derivation from this theory. Even in the attempt to generate the sentence “All swans are white” purely syntactically from the sentence “This swan is white”, closer examination reveals that the meaning of the word “swan” changes unsystematically because of these theoretical elements: in the generated sentence “All swans are white” the word has the meaning of a universal, while in “This swan is white” it still denoted an individual.

He conducted the discussion on this with representatives of the Vienna Circle, who were also concerned with the problem of demarcation. This refers to the question of whether there is an exact criterion by which a statement can be excluded as unscientific. They were particularly concerned with the propositions of metaphysical philosophy, which they regarded as scientifically meaningless. In the classical conception of the inductive method, demarcation was tied to the induction problem: scientific knowledge was knowledge that had been obtained from observational data by induction. The philosophers of the Vienna Circle assumed that this could also be decided syntactically, by analyzing the structure of sentences that can arise through inductive methods. Accordingly, a sentence is scientific if a condition for its truth can be specified that can be evaluated by empirical means (sensory perception, measurement, possibly supported by instruments), so that the statement can be verified. Popper rejected this answer, together with the existence of a rule of induction, because for him empirical theories are fundamentally not verifiable. Conversely, false theories can have true consequences: Newton's theory of gravity, for example, correctly predicted the existence of the planet Neptune. Between two false theories there can still be gradations of greater or lesser falsehood, and (also between two true theories) of greater or lesser explanatory value (verisimilitude, “closeness to truth”).

Popper had been dealing with a similar problem of demarcation since 1919 (albeit without publishing anything about it): the problem of distinguishing science from pseudoscience (among which he counted astrology and psychoanalysis). Starting from this problem, and with his finding that statements can only be refuted, not supported, by empirical factual reports and that a rule of induction is impossible, he arrived at a new, changed problem. It was now about the demarcation between empirical-scientific statements and all other statements, without his regarding these other statements as problematic or nonsensical per se. This problem was even more important to Popper than the induction problem. According to Popper, a theory can only be empirical if it is possible for observation sentences to contradict it. But this is possible only if the theory excludes certain observable facts from occurring. A theory with this property is falsifiable:

An empirical-scientific system must be able to fail because of experience. (Logik der Forschung, LdF for short, 17)

Correspondingly, a theory has the more empirical content the more narrowly it restricts what can be observed, i.e. the more potential observation reports can contradict it. Popper's claim is that the demarcation criterion of falsifiability provides a rational, systematic and objective, i.e. intersubjectively testable, instrument.

When Popper discussed these ideas with the representatives of the Vienna Circle, Feigl suggested in 1930 that he elaborate them and publish them as a book. Popper circulated the manuscript (The Two Basic Problems of Epistemology) privately among members of the circle. It was then reviewed positively by Carnap in the journal Erkenntnis. A substantially abridged and revised version appeared in 1934 under the title Logik der Forschung (The Logic of Scientific Discovery, LdF), Popper's basic epistemological work. Over a period of 60 years (a total of 10 editions appeared up to his death), he repeatedly supplemented it with appendices and discussion contributions in the footnotes (the last appendix in the year of his death), and he wrote a three-volume postscript to it.

Popper always emphasized that his research logic is itself not an empirical theory but a methodology, which presupposes a decision about what is to count as science. In particular, he opposed the naturalistic conception of methodology, according to which the scientific method is whatever scientists actually do. Because of its normative character, falsificationism itself cannot be falsified. It can only be critically weighed against the other known methods:

by analyzing their logical consequences, by pointing out their fertility, their enlightening power with regard to epistemological problems. (LdF, 14)

Falsifiability

Falsifiability is a property of logical statements. A statement is falsifiable if and only if there is an observation sentence that contradicts it, i.e. one that refutes the statement if the observation sentence is true. Falsifiability is a criterion intended to distinguish empirical from non-empirical statements. A theory is empirical if there is at least one observation sentence whose empirical examination can logically lead to a contradiction with the theory. “Tomorrow it will rain” is falsifiable, but not “Tomorrow it will rain or it will not rain” (a tautology that follows purely logically from the tertium non datur (Latin)). It cannot be ruled out that in practice, for lack of suitable experiments (e.g. in astronomy or atomic physics), a falsification cannot actually be carried out. Popper therefore made a fundamental distinction between “logical falsifiability” and “practical falsifiability”.
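At the propositional level, this distinction can be made mechanical. The following sketch (an illustration, not part of Popper's text) treats a sentence as falsifiable when at least one logically possible “observation” (truth assignment) makes it false; the function and variable names are illustrative:

```python
from itertools import product

def falsifiable(statement, variables):
    """A propositional sentence is falsifiable iff some truth
    assignment (a logically possible 'observation') makes it false."""
    return any(
        not statement(dict(zip(variables, values)))
        for values in product([True, False], repeat=len(variables))
    )

# "Tomorrow it will rain": false in the world where it does not rain.
rain = lambda w: w["rain"]
# "Tomorrow it will rain or it will not rain": a tautology.
rain_or_not = lambda w: w["rain"] or not w["rain"]

print(falsifiable(rain, ["rain"]))         # True  -> empirical
print(falsifiable(rain_or_not, ["rain"]))  # False -> tautology, not falsifiable
```

The tautology survives every possible world, so no observation sentence can contradict it; this is the logical core of why it carries no empirical content.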

He warned against misinterpretation: “[the] aim of demarcation [was] thoroughly misunderstood”. Falsifiability is not a criterion that characterizes rational acceptability, scientific recognition, scientific authority or the meaningfulness of a statement. Nor is it a quality criterion. It must not be confused with the criterion of “reinforced dogmatism” that Popper uses to characterize pseudoscience and pseudorationality. In critical rationalism, demarcation criteria fulfill the task of delimiting the areas in which a certain form of criticism can be effectively applied. Hans Albert in particular pointed out the danger that such criteria could be misused as “dogmatic shielding principles”, that such abuse could be promoted by scientific specialization, and that a criterion of this kind could make it easier for the representative of a discipline to restrict his critical attitude to the field in which he feels at home. (Albert admitted having once made this mistake himself with the falsifiability criterion.) William W. Bartley, after his extension of critical rationalism to pan-critical rationalism, judged the falsifiability criterion to be “relatively unimportant” and only of historical significance; Popper saw it differently: for him it was central.

Popper developed the demarcation criterion of falsifiability primarily as a counter-concept to verifiability. The proponents of logical empiricism regarded verifiability as a criterion of demarcation (also: criterion of meaning) between statements that have a cognitive meaning and those that do not. The latter can certainly have meaning in another sense (e.g. emotive or metaphorical), so they are not completely meaningless. According to Carnap, pseudoscientific statements, for example, can consist of cognitively meaningful sentences; the meaning criterion of logical empiricism and the falsifiability criterion of critical rationalism are therefore not comparable, because they are meant to solve two different problems. Verifiability in the strict sense means that a statement can be completely reduced to observation sentences, and thus makes considerably greater demands than falsifiability. For Popper, falsifiability was the criterion for distinguishing the theories of the empirical sciences from non-empirical theories. The latter include metaphysics in the broadest sense, pseudoscience, but also mathematics, logic, religion and philosophy. In contrast to the Vienna Circle, Popper was of the opinion that there is no such thing as an exact science.

Definitions are not falsifiable. Therefore, statements that implicitly contain a definition of what is said cannot be falsified. If the phrase “all swans are white” implies that it is an essential characteristic of swans to be white, it cannot be refuted by the existence of a black bird that otherwise has all the characteristics of a swan. If, on the other hand, the color is not part of the definition of a swan, the sentence “All swans are white” can be tested by confronting it with an observation sentence such as “There is a black swan in the Duisburg Zoo”, regardless of whether a black swan really exists.

Likewise, the axioms of mathematics are not falsifiable as stipulations. One can, however, check whether they are free of contradictions, independent of one another, complete, and also necessary for the derivation (deduction) of the statements of a theoretical system. The modification of the parallel axiom in the 19th century led to the development of further geometries alongside the Euclidean one. This did not, however, falsify Euclidean geometry. Yet without these non-Euclidean geometries, the development of the theory of relativity would not have been possible.

Only statements that are not tautologies can be falsifiable. Accordingly, the following sentence cannot be falsified: “All human actions are undertaken exclusively in selfish interest, and those that are apparently not selfish are undertaken with the selfish intention of not appearing selfish.” The combination of the two half-sentences excludes the description of any human action that could contradict this theory. Likewise, universal there-is-sentences cannot be falsified, for example, after seeing the black swan in Duisburg Zoo: “There is at least one black swan.” By contrast, the theory “All objects fall to the earth with an acceleration a = 10 m/s²” is falsifiable, because the value of a can be checked. A theory is falsifiable if the class of its potential falsifiers is not empty. (LdF, 62)

The criterion of falsifiability is based on a classification of sentences:

Explanation of a process

According to Popper, two types of sentences appear as premises in the explanation of a process: general sentences (theories, laws, hypotheses) and special sentences (also called “boundary conditions” by Popper), which refer to the special circumstances. From suitable premises of this kind, the truth of further special sentences (also called “prognoses”) can be inferred as conclusions. The prognoses describe the process to be explained. Conversely, by the deductive inference rule of modus tollens, the falsehood of at least one of the premises used can be inferred from the falsehood of a validly derived prognosis. The following sentences can serve as an example: “All ravens are white” as the general sentence or theory, “There is a raven on my desk” as the boundary condition, and “This raven is white” as the prognosis. The prognosis can be logically deduced from the theory together with the boundary condition. Conversely, from the appearance of a black animal on the desk it can be concluded either that it is not a raven, or that not all ravens are white. Exercise science uses this method, since individual case studies can be generalized and a systematically inductive procedure is also possible. The verification or falsification of training theories takes place again and again in competition, when athletes prepared according to different theories meet.
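The deductive schema above can be sketched in code. This is a minimal illustration of the raven example; the helper name `derive_prognosis` and the bird dictionary are assumptions for the sketch, not from the source:

```python
def derive_prognosis(theory_holds, bird):
    # From the general sentence "all ravens are white" (theory) and the
    # boundary condition "this bird is a raven", the prognosis
    # "this bird is white" follows deductively.
    if theory_holds and bird["species"] == "raven":
        return "white"
    return None

bird = {"species": "raven", "color": "black"}
prognosis = derive_prognosis(True, bird)

# Modus tollens: the observed color contradicts the prognosis, so the
# conjunction of the premises is false -- either the bird is no raven,
# or the theory "all ravens are white" is false.
falsified = prognosis is not None and bird["color"] != prognosis
print(falsified)  # True: the premise set is refuted
```

Note that the refutation hits the conjunction of premises, not the theory alone; singling out the theory requires the further stipulations discussed later in the section on falsification.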

Specific and numerical generality

Sentences of specific and of numerical generality differ, according to Popper, in that only sentences of specific generality refer to sets with an infinite number of elements. Sentences of numerical generality, since they refer to finite sets, can be replaced by conjunctions of finitely many special sentences. According to Popper, sentences of specific generality refer to all space-time regions. He assigns specific generality to the general sentences of explanations. He also calls sentences of this form “universal sentences”. The expression “the European ravens” corresponds to numerical generality when “European” means “the ravens now living in Europe”. By convention, the term “all ravens” can be used for specific generality. The set of ravens then, in theory, has an infinite number of elements.

Individual and universal terms

Popper considers the distinction between individual and universal terms indispensable and fundamental for clarifying the logical relationships of general and particular sentences. According to Popper's terminology, individual concepts can only be defined by using proper names; universal concepts, on the other hand, can do without them. Individual terms therefore refer to designated space-time regions, universal terms do not. Popper calls sentences in which only universal terms occur “universal sentences”. In addition to all-sentences, which Popper identifies as universal sentences, he also considers universal there-is-sentences significant. They assert the existence of a process in a completely indefinite way, not tied to any particular space-time region. This corresponds to the colloquial “sometime” or “somewhere”. The negation of a universal all-sentence has the form of a universal there-is-sentence. In the example used above, “Europe” is an individual term. If “raven” is explained using only universal terms, it is a universal term. The negation of “All ravens are white” is then “There are non-white ravens.”

Basic sentences

In the definition of falsifiability, Popper uses a further type of sentence: basic sentences. He characterizes them as singular there-is-sentences. Through the use of individual terms, they refer to a specifically designated space-time region and assert that a certain process takes place there. For basic sentences, this process must be observable. According to Popper, observability can be freely defined as motion of macroscopic objects. Popper calls the negations of singular there-is-sentences “singular there-is-not-sentences”. In the example above, “There is a raven on my desk” is a basic sentence. The individual terms used in it are “my” and the implicitly contained “now”, which is expressed by the present tense. Ravens, moreover, are observable.

Logical context

According to Popper, these stipulations result in the following logical relationships between the types of sentences mentioned: no basic sentences follow from theories alone, which are composed solely of universal sentences. However, further basic sentences can be derived from theories together with basic sentences. Since theories are equivalent to negated universal there-is-sentences, they are logically incompatible with the corresponding there-is-sentences. Universal there-is-sentences follow logically from basic sentences, which have the logical form of singular there-is-sentences. Thus basic sentences can contradict theories. The sentence “All ravens are white” is logically equivalent to “There are no non-white ravens.” From “Here is a black raven today” follows “There are black ravens” and thus “There are non-white ravens.” This sentence contradicts the universal sentence “All ravens are white”, which is equivalent to “There are no non-white ravens.” For Popper, the asymmetry between the falsifiability and verifiability of theories lies in the fact that, with regard to basic sentences, theories are only falsifiable and never verifiable. A theory as a universal sentence can contradict a basic sentence but can never be derived from one.
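These equivalences can be checked mechanically over small finite models. The sketch below is an illustration over worlds of at most three ravens (not a general proof); the predicate names and the color list are assumptions of the sketch:

```python
from itertools import product

COLORS = ["white", "black", "green"]

def all_ravens_white(world):   # universal sentence
    return all(c == "white" for c in world)

def no_nonwhite_raven(world):  # negated universal there-is-sentence
    return not any(c != "white" for c in world)

# A world is a tuple of raven colors; enumerate all worlds of 0..3 ravens.
worlds = [w for n in range(4) for w in product(COLORS, repeat=n)]

# "All ravens are white" is equivalent to "There are no non-white ravens":
print(all(all_ravens_white(w) == no_nonwhite_raven(w) for w in worlds))  # True

# No world containing a black raven is compatible with the theory:
print(any(all_ravens_white(w) for w in worlds if "black" in w))  # False
```

The empty world (no ravens at all) satisfies the universal sentence vacuously, which anticipates the point made in the example below that universal sentences only assert that something does not exist.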

Popper argues that the distinction between universal sentences and singular there-is-sentences cannot be captured by the classical logical division into general, particular and singular sentences, since general sentences, for example, refer to all elements of a certain class and do not necessarily have a spatio-temporally universal character. The general implication of the system of Principia Mathematica is also unsuitable for this, since basic sentences, for example, can likewise be expressed as general implications. From the standpoint of classical logic, the sentences “All ravens are white” and “All ravens living today are white” are both general sentences; classical logic cannot capture the distinction between universal sentences and singular there-is-sentences introduced by Popper. In the symbolism of Principia Mathematica, a general implication reads: (x)(f(x) → g(x)). (Read: for each x, the sentence f(x) implies the sentence g(x).) The singular sentence “Socrates was a wise man” can thus be written as a general implication by identifying “f(x)” with “x is Socrates” and “g(x)” with “x was a wise man”. (For all things x: if x is Socrates, then x was wise.) The general implication therefore does not correspond to universal sentences as Popper understands them.

Popper characterizes the falsifiability of a theory by the property of breaking down the set of all logically possible basic sentences into two non-empty subsets: the set of basic sentences with which the theory is incompatible (also called "empirical content") and the set with which the theory is compatible. According to Popper, in order to prove that a theory is falsifiable, it is sufficient to specify a logically possible basic sentence that contradicts the theory. This basic sentence does not have to be true, tested or recognized.

Example

If the term “raven” is used as a universal term, the sentence “All ravens are white” can be understood as a theory. From it alone no basic sentences follow, because basic sentences assert that something observable occurs in a certain space-time region. Universal sentences, on the other hand, are equivalent to negated there-is-sentences; they assert that something does not exist. “All ravens are white” and “All ravens are black” therefore do not necessarily contradict each other: both sentences only assert that something does not exist (non-white ravens in the one case, non-black ravens in the other), and both are true in the case that no ravens exist at all. But if a basic sentence is added, for example “There was a raven on my desk today”, then the sentence “There was a white raven on my desk today” follows. From the theory alone follows the sentence “There are no non-white ravens.” This is a negated universal there-is-sentence. It contradicts, for example, the universal there-is-sentence “There are green ravens”, which in turn follows from the singular there-is-sentence (basic sentence) “There was a green raven on my desk today.” The process that this sentence describes is observable, and the sentence is logically possible. The two sentences “All ravens are white” and “There was a green raven on my desk today” contradict each other. The theory is therefore falsifiable.

Falsification

In place of the verification of an empirical theory, Popper, who assumed a fundamental fallibilism (fallibility of humans), put the method of falsification, which leads to progress whenever an observation contradicts a theory. If, on the other hand, a theory withstands the test, it proves its worth (is corroborated), without this making the theory better (more probable, more credible). The method of falsification is at the heart of the critical rationalism founded by Popper. In later works, Popper broadened the method of falsification into the method of criticism (The Open Society and Its Enemies, German edition 1958, chapter 14; Conjectures and Refutations, 1963, chapter 8). Popper regarded the search for falsifications, for the conceivable cases in which theories fail, i.e. ultimately the search for errors, as crucial for the progress of knowledge. Only the correction of these errors through better theories leads to progress. William W. Bartley worked out how the method of criticism can be applied to itself (pan-critical rationalism).

According to Popper, the main purpose of the scientific method is to prevent a falsification from being circumvented. (Circumvention is always possible in principle, which is why Popper opposed the view that there can be such a thing as an exact science.) To this end he established methodological rules to exclude immunizing procedures, in particular (LdF, 57):

• Introduction of ad hoc hypotheses
• Modification of the definitions of the theory
• Criticism of the experimental setup of the experiments
• Reservations about the theorist's acumen

The method of falsification does not restrict research to a single positively prescribed approach; it merely excludes some of the possible approaches. Although many of the methodological rules focus on preventing a theory from escaping falsification, the method does not dictate that a theory must always be abandoned immediately when such a falsification occurs:

If recognized basic sentences contradict a theory, they are the basis for its falsification only if they simultaneously corroborate a falsifying hypothesis. (LdF, 63)

This falsifying hypothesis is the description of an effect that explains the falsifying basic sentences (and, since this hypothesis must at the same time be corroborated, it is not ad hoc).

For the falsification of a theory t it is necessary, according to Popper, that a prognosis p can be derived from t together with a boundary condition r, and that a recognized basic sentence b has been established that contradicts the prognosis p. An argument can then be formed that uses ¬p as a premise and contains the negation of the conjunction of t and r as its conclusion. This argument is a falsification. The falsification can be restricted to the theory alone only if further stipulations are made. If, for example, the boundary conditions are less problematic than the theory t and are also established as true, then the falsehood of the theory t follows. If several theories are used to derive the prognosis p, the falsification affects the entire system of theories used. A restriction to one theory can again only be made on the basis of stipulations.

Example

Let t = "All ravens are white" and the boundary condition r = "There was a raven on my table this morning". The prognosis p = "The raven on my table was white" then follows. If the basic sentence b = "There was a green raven on my table this morning" is established as true, then the prognosis p is false. At least one of the premises t or r must therefore be false. Popper calls this the retransmission of falsity from the conclusion to at least one of the premises. If r is now also established as true, the falsehood of t follows: t would be falsified. (For an example of the falsification of a probability hypothesis, see the section Probability hypotheses.)
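The inference schema behind this example, from "(t and r) implies p" together with "not p" to "not (t and r)", can be verified by enumerating all truth-value combinations. The function name below is an assumption of this sketch:

```python
from itertools import product

def valid():
    """Check that the falsification argument is truth-preserving:
    whenever (t and r -> p) and (not p) both hold, not (t and r) holds."""
    for t, r, p in product([True, False], repeat=3):
        premises = ((not (t and r)) or p) and (not p)  # (t ∧ r -> p) ∧ ¬p
        conclusion = not (t and r)                     # ¬(t ∧ r)
        if premises and not conclusion:
            return False
    return True

print(valid())  # True: no counterexample exists among the 8 cases
```

The enumeration confirms that falsity is retransmitted only to the conjunction of the premises; which single premise is at fault is a matter of further stipulation, exactly as the text states.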

Falsifications are statements about empirical facts and thus, according to Popper, can, like theories, never be conclusively decided. In the history of science, Popper sees attempts to immunize theories against falsifications through ad-hoc hypotheses or changes in the boundary conditions. Accordingly, falsifications in science are sometimes accepted very quickly, sometimes slowly and reluctantly. Successful immunization attempts can also lead to falsifications being shown to be inaccurate, or to their losing their basis through minor modifications of the criticized theory (cf. LdF XIV, 506–509).

Degrees of falsifiability

In the case of competing theories, degrees of falsifiability can, according to Popper, be determined in order to compare their quality. The higher its empirical content, the higher the quality of a theory. Popper develops two methods for carrying out a falsifiability comparison of theories: the comparison by means of the subclass relation and the comparison of dimensions. The two methods complement each other.

Subclass relation

A comparison based on the subclass relation is only possible if the empirical contents of the theories are nested in one another. A theory is falsifiable to a higher degree if its empirical content contains the empirical content of another theory as a proper subclass. To this end, Popper examines the relationship between empirical and logical content, as well as between empirical content and the absolute logical probability of theories. The logical content of a proposition is the set of all logical consequences of that proposition. Popper comes to the conclusion that for empirical sentences the empirical content increases with the logical content, so that for them the falsifiability comparison can be captured by the derivability relation, and that increasing empirical content goes along with decreasing absolute logical probability. According to Popper, the logically more general empirical sentence has the higher degree of falsifiability and is logically less probable.

Popper explains these relationships using the following four example sentences:

(p) All orbits of heavenly bodies are circles,
(q) All orbits of planets are circles,
(r) All orbits of heavenly bodies are ellipses,
(s) All orbits of planets are ellipses.

Since all planets are also heavenly bodies, (q) follows from (p) and (s) from (r). Since all circles are also ellipses, (r) follows from (p) and (s) from (q). From (p) to (q) the generality decreases; (p) is thus more easily falsifiable and logically less probable than (q). From (p) to (r) the definiteness decreases. From (p) to (s) both generality and definiteness decrease. The corresponding relations hold for degree of falsifiability and absolute logical probability.
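The subclass relations between the falsifier sets (the empirical contents) of the four sentences can be made explicit over a toy space of single observations. The labels below are illustrative simplifications, not from Popper; "ellipse" here stands for a non-circular ellipse, so that a circular orbit satisfies both "circle" and "ellipse" theories:

```python
from itertools import product

# A possible observation: a kind of heavenly body with an orbit shape.
observations = list(product(["planet", "comet"], ["circle", "ellipse", "other"]))

def falsifiers(pred):
    """The set of observations that contradict the theory (its empirical content)."""
    return {o for o in observations if not pred(o)}

p = lambda o: o[1] == "circle"                                   # all bodies: circles
q = lambda o: o[0] != "planet" or o[1] == "circle"               # planets: circles
r = lambda o: o[1] in ("circle", "ellipse")                      # all bodies: ellipses
s = lambda o: o[0] != "planet" or o[1] in ("circle", "ellipse")  # planets: ellipses

# Each derived sentence has a strictly smaller falsifier set:
print(falsifiers(q) < falsifiers(p))  # True: less generality, less content
print(falsifiers(r) < falsifiers(p))  # True: less definiteness, less content
print(falsifiers(s) < falsifiers(q))  # True
print(falsifiers(s) < falsifiers(r))  # True
```

The strict subset relation (`<` on Python sets) mirrors the claim that (p) is the most falsifiable of the four sentences and (s) the least.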

Popper emphasizes that the falsifiability comparison by means of the subclass relation of empirical contents is not always possible. He therefore additionally bases the falsifiability comparison on the concept of dimension.

Dimension

According to Popper, different theories can require basic sentences of different complexity for a falsification. Popper ties this complexity to the number n of basic sentences that are linked by conjunction. He calls the dimension of a theory the greatest number d for which the theory is compatible with any conjunction of d basic sentences. If a theory has the dimension d, it can only be refuted by a conjunction of at least d + 1 basic sentences. Popper does not consider it expedient to single out "elementary sentences" or "atomic sentences" so that dimensions could be assigned to theories absolutely. He therefore introduces "relatively atomic" basic sentences. The degree of falsifiability is then based on the reciprocal of the dimension, so that a higher dimension means a lower degree of falsifiability. Put simply: the fewer basic sentences suffice to refute a theory, the more easily it can be falsified. An example will clarify the comparison of dimensions.

example

Assume that one is interested in the lawful connection between two physical quantities. One can, for example, propose the theory that there is a linear relationship between them. The relatively atomic basic sentences then have the form: "The measuring device at point $A$ shows $k_a$, and the measuring device at point $B$ shows $k_b$." The linear theory is compatible with any single relatively atomic basic sentence, and also with every conjunction of two of them. Only conjunctions of at least three relatively atomic basic sentences can contradict the linear theory; it therefore has dimension 2. Expressed geometrically: two points determine a straight line, and only for three points can one decide whether or not they lie on a straight line. If the starting point of the system is specified, for example because the experimental setup requires it, the dimension changes: each specified point reduces the dimension by 1. If two points are given, a single relatively atomic basic sentence can falsify the theory. The linear theory can be represented by the function $f(x) = mx + n$; an alternative, quadratic theory by $f(x) = ax^2 + bx + c$. Fixing the point $(0,0)$ restricts the position of the graphs of the two theories to $f(x) = mx$ and $f(x) = ax^2 + bx$ (both pass through the origin of the coordinate system). The linear theory then has dimension 1 and the quadratic theory dimension 2; both satisfy the condition $f(0) = 0$. One can specify a further point, say $(1,1)$. For the linear theory this yields $f(x) = x$; for the quadratic theory, for example, $f(x) = x^2$. The dimensions have each been reduced by 1. A further measuring point $(2,3)$ then falsifies the linear theory, because with $m = 1$, i.e. $f(x) = x$, the condition $f(2) = 3$ cannot be satisfied. The quadratic theory fares differently: it can still be fitted to this condition; for example, $f(x) = (1/2)x^2 + (1/2)x$ satisfies $f(2) = 3$ while still passing through $(0,0)$ and $(1,1)$. Only the specification of a fourth point would make a falsification possible for the quadratic theory as well. The dimension of a theory can also be restricted in ways other than by specifying a point.
For the linear theory, for example, the slope $m$ can be specified instead. Expressed geometrically, this does not fix the position of the straight line in the coordinate system, but its inclination to the $x$-axis. (Popper calls the restriction of the dimension by specifying a point "material", and the restriction by specifying, for example, the slope or other properties that change the shape of the curve rather than its position "formal".) Specifying a point of the graphical representation of a theory increases the degree of falsifiability of that theory; the same applies to a formal restriction such as specifying the slope.
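The dimension comparison can be sketched in a few lines of code. This is only an illustrative sketch; the points $(0,0)$, $(1,1)$, $(2,3)$ and the two fitted functions are the ones from the example above:

```python
# Points from the example: the first two fix the linear theory to f(x) = x,
# so a single further point suffices to falsify it (dimension reduced to 0).
points = [(0, 0), (1, 1), (2, 3)]

def linear(x):
    # line through (0,0) and (1,1): f(x) = x
    return x

def quadratic(x):
    # parabola through all three points: f(x) = x**2/2 + x/2
    return 0.5 * x**2 + 0.5 * x

# The third measurement contradicts the linear theory (f(2) = 2, not 3),
# while the quadratic theory remains compatible with all three points.
print(all(linear(x) == y for x, y in points))     # False: linear theory falsified
print(all(quadratic(x) == y for x, y in points))  # True: quadratic still compatible
```

A fourth point would play the same role for the quadratic theory that the third point plays here for the linear one.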

Probability hypotheses

According to Popper, when the definition of falsifiability is applied to probability hypotheses, the logical relationships are not as unambiguous as they are for theories with the logical form of universal statements. Popper points out that probability hypotheses cannot stand in direct logical contradiction to basic sentences and therefore, strictly speaking, cannot be falsified. This is due to their logical form, which Popper characterizes as follows: probability hypotheses are logically equivalent to an infinite set of there-are sentences; from every probability hypothesis, there-are sentences can be derived, and in addition logically stronger, generalized there-are sentences. These have the form: for every index $x$ there is an index $y$ with the characteristic $z$. For example, from the hypothesis "the probability of a head toss $k$ under the conditions $b$ is $1/2$" (in short: $p(k,b) = 1/2$) one can deduce the sentence "for every index $x$ there is an index $y > x$ such that the $y$-th toss is heads". From this, in turn, sentences such as "the sequence contains both head tosses and tail tosses" follow, and so on. Neither type of sentence is falsifiable, however, since neither can contradict any finite conjunction of basic sentences. Nevertheless, Popper does not relax the methodological requirement of falsifiability for empirical theoretical systems; instead he analyzes the methodological decisions that make probability hypotheses falsifiable.

One resolution, as developed by Popper, consists in the requirement that finite empirical sequences, which are described by conjunctions of finitely many basic sentences, exhibit a high degree of approximation to the shortest ideally random mathematical sequences, for which Popper specifies a construction method. Falsifiability is then achieved by the requirement that finite sequences which do not approximate ideally random sequences from the start are regarded as logically excluded.
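One aspect of this approximation requirement can be illustrated in code. The following is a hedged sketch, not Popper's construction method: it checks only the simplest randomness condition, namely that the frequency of heads should be near $1/2$ even after a fixed predecessor, which a strictly alternating sequence already violates despite its overall frequency of exactly $1/2$:

```python
# Sketch of one aspect of Popper's "ideally random" sequences: the relative
# frequency of heads ('H') should be close to 1/2 not only overall, but also
# among the tosses that follow any fixed predecessor.
def freq_after(seq, predecessor):
    """Relative frequency of 'H' among elements following the given predecessor."""
    followers = [b for a, b in zip(seq, seq[1:]) if a == predecessor]
    return sum(1 for c in followers if c == "H") / len(followers)

alternating = "HT" * 50   # strictly alternating; overall frequency exactly 1/2
print(freq_after(alternating, "H"))  # 0.0: after H there is always T
print(freq_after(alternating, "T"))  # 1.0: after T there is always H
```

By this criterion the alternating sequence does not approximate an ideally random sequence and would count as excluded from the start.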

Popper approaches the problem of the falsifiability of probability hypotheses further by way of the so-called law of large numbers and the logical interpretation of the calculus of relative probability. Popper regards the logical interpretation of the probability calculus as a generalization of the concept of derivability. If a sentence $x$ follows logically from a sentence $y$, then the probability of $x$ with respect to $y$ is $1$ (abbreviated $p(x,y) = 1$, read: "the probability of $x$ with respect to $y$ is $1$"); this corresponds to the tautology. The probability $0$ corresponds to the logical contradiction (contradiction). Using this logical interpretation, Popper reads the law of large numbers as follows: from a probability hypothesis, a statement about the relative frequency can be almost logically derived for very large $n$ (the number of independent repetitions). "Almost logically derivable" here means a probability very close to $1$. Popper points out that for statements about relative frequencies lying outside a given small interval, this probability is almost $0$. Accordingly, probability hypotheses are falsifiable in the sense that they almost logically contradict statements about relative frequencies with deviating numerical values. The methodological decision required to make probability hypotheses falsifiable is to treat this almost-logical contradiction as a logical contradiction. Popper makes the term "almost logically derivable" mathematically precise by using the binomial distribution as the measure of relative logical probability. From the size of the selected sample and the permissible deviation of the relative frequency in the sample, one can then calculate the probability with which a test sentence follows from a probability hypothesis via the relative frequency (see example).

According to Popper, probability hypotheses cannot stand in direct logical contradiction to basic sentences or to conjunctions of finitely many basic sentences, but they can contradict their logically weaker consequences: the sentences about relative frequencies in finite empirical sequences. In this way they divide the set of all logically possible basic sentences into two subsets: those they contradict and those with which they are logically compatible. In this sense, according to Popper, probability hypotheses are falsifiable.

example

Suppose one wants to test empirically the hypothesis $h$ = "The probability of a head toss under the conditions $b$ is $1/2$". Under $b$ one may assume the usual conditions: smooth table, independent tosses, and so on. One can then form the test sentence $e$ = "The relative frequency of head tosses in a series of $n = 10000$ tosses under the conditions $b$ lies around $1/2 \pm 0.015$". One can then calculate the logical probability of the test sentence $e$ with respect to the hypothesis $h$: $p(e,h) = 0.997$. This uses the standard deviation $\sigma = \sqrt{np(1-p)} = 50$; a $3\sigma$ interval was chosen in order to obtain a high probability, which yields an interval between 4850 and 5150 head tosses around the exact value 5000. The test sentence $e$ can now be confronted with the result of an experiment. One does not use the conjunction of 10,000 basic sentences ("the first toss was heads and the second toss was heads ... and the 10,000th toss was tails"), but compares it with its logically weaker statistical consequence, e.g. "The relative frequency of head tosses in 10,000 coin tosses today under the conditions $b$ was $0.48 \pm 0.0005$". This statistical statement contradicts the test sentence $e$; the probability hypothesis $h$ would therefore be falsified. A sequence that merely alternates heads and tails for the first 100 tosses likewise falsifies the hypothesis, since it does not behave randomly.
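The numbers in this example can be checked with a short computation. A minimal sketch, assuming a fair coin and using the exact binomial distribution rather than the normal approximation (4850 and 5150 are the $3\sigma$ bounds from the text):

```python
from math import comb, sqrt

n, p = 10_000, 0.5                     # tosses, hypothesized P(heads)
sigma = sqrt(n * p * (1 - p))          # standard deviation: sqrt(2500) = 50
lo, hi = 4850, 5150                    # 5000 +/- 3*sigma, i.e. frequency 1/2 +/- 0.015

# p(e, h): exact probability that the number of heads falls in [4850, 5150]
prob = sum(comb(n, k) for k in range(lo, hi + 1)) / 2**n
print(f"sigma = {sigma:.0f}, p(e,h) = {prob:.3f}")   # roughly 0.997

# An observed relative frequency of 0.48 means 4800 heads, which lies outside
# the interval, so the statistical statement contradicts the test sentence e.
print(lo <= 4800 <= hi)   # False: the hypothesis counts as falsified
```

Run as written, this confirms both figures used in the example: the logical probability of roughly 0.997 and the fact that the observed frequency of 0.48 falls outside the tolerated interval.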

criticism

Positivism dispute

The criterion of falsifiability was criticized by representatives of the Frankfurt School during the so-called positivism dispute of the 1960s: not all theories have a prognostic character or yield predictions. They took the position that the scientific status of such theories could be established formally, without the criteria applied having to rest on falsifiability.

Paradigm shift according to Thomas S. Kuhn

Thomas S. Kuhn was of the opinion that scientists in normal science do not look for falsifications, but work within an accepted paradigm (a fundamental theory) to solve puzzles and clear up anomalies ("normal science"). "No process that has been uncovered by the historical study of scientific development bears any resemblance to the methodological template of falsification through direct comparison with nature." According to Kuhn, scientific change occurs only when the anomalies become so serious that a scientific crisis sets in. Such a crisis occurs when the paradigm loses its general acceptance because of the anomalies, and the consensus among scientists on fundamentals is thereby shattered. (For Popper, exactly the opposite holds: for him, highly developed rational science exists only where scientists disagree on fundamentals; he regards unity and general acceptance as a crisis: "orthodoxy is the death of knowledge, since the growth of knowledge depends entirely on the existence of disagreement.") Only then does the search for new fundamental theories, i.e. new paradigms, begin ("extraordinary science"). If anything, it is only these episodes that Popper's falsificationism describes. Such new paradigms are often incommensurable with the old ones; they represent structural breaks rather than an advance in knowledge in the sense of the accumulation of knowledge.

Kuhn also saw a fundamental mistake of Popper's in his conception of empirical observation sentences. To be effective as a scientific instrument, a falsification must provide definitive evidence that the tested theory has been refuted. But since falsifying hypotheses are themselves empirical, they can in turn be refuted. For Kuhn it followed that the critical discussion of competing theories cannot be decisive; switching to a new paradigm is therefore more like a political decision or a religious conversion.

Wolfgang Stegmüller has given several aspects of Kuhn's view a rational reconstruction within the framework of the structuralist conception of theories due to Joseph D. Sneed. For example, the failure of an application can always be treated rationally by excluding the physical system in question from the set of intended applications of the theory; the theory itself is then not falsified.

Sophisticated falsificationism according to Lakatos

The works of Imre Lakatos are essentially a defense and refinement of critical rationalism against Thomas Kuhn. A falsificationism in which theories are abandoned as a matter of principle once a falsification has occurred he called "naive falsificationism", a term Kuhn had used in his criticism of Popper in this context. He agreed with Kuhn that the history of science contains a large number of falsifications that did not lead to a change of theory. However, he regarded Kuhn's position as relativistic and quasi-religious: "According to Kuhn, scientific change, from one 'paradigm' to another, is an act of mystical conversion which is not and cannot be governed by rules of reason and which falls entirely within the realm of the '(social) psychology of research'" (ibid., p. 90).

Lakatos criticized Popper on the grounds that the conventional stipulation of which basic sentences are acceptable amounts to a kind of immunization of the falsification itself. The history of science shows that supposed falsifications can certainly have irrational origins. Because of these problems, a methodology was to be developed within the framework of a "sophisticated falsificationism" that makes it possible to set up a heuristic for research programs by which sequences of theories can be rationally justified. In particular, to be recognized as scientific, each new theory must have a surplus of empirical content, be able to explain the success of the old theory, and have some of that surplus content already confirmed.

This kind of methodology is particularly relevant to the falsification of complex theoretical systems with several hypotheses and boundary conditions. Since in such a case it is not clear which component of the system is responsible for the falsification, individual statements can be exchanged according to the principles just mentioned in order to test the system again. So that one can still speak of a single research program, the "hard core" of hypotheses is retained, while the less central hypotheses and boundary conditions are varied.

Epistemological anarchism according to Feyerabend

Paul Feyerabend fundamentally denied that it is possible to work with rational criteria within research programs. This does not mean that Feyerabend considered science an irrational endeavor; on the contrary, for him science is "the most rational enterprise that has been invented by humans to date." For him, research traditions operate on the principle of tenacity; at the same time, a pluralism of ideas runs through the ongoing scientific process. This does not by itself give rise to crises and revolutions, but incommensurabilities do arise.

New research programs in particular are exposed to considerable resistance, and it is largely a matter of chance whether and over what period they can establish themselves. There is therefore no reason not to help new theories along with irrational methods. In this sense, Feyerabend advocated a view that can be classified as epistemological and methodological relativism.

Holism according to Quine

The holism advocated by Willard Van Orman Quine contradicts Popper's view of science, e.g. regarding the role of falsification in theory change. The hypotheses of a theory are not independent of one another, so that in the case of a contradicting empirical observation no logical conclusion can be drawn as to which partial hypothesis or boundary condition is responsible for a possible falsification. Pierre Duhem had already drawn attention to this connection, which is why the view is known as the Duhem-Quine thesis. Quine concluded from it that such a system can only be examined by testing all related sentences together, and that in principle the system should then be rejected as a whole (holism). According to Quine, scientists react to a refutation with one of two options: a conservative option in normal-scientific periods, in which the smallest possible changes are made at the periphery of the theory in order to save it, and a revolutionary option, in which central elements of the theory are changed. In contrast to Popper, for Quine empirical refutation plays an important role only in normal-scientific periods, while considerations of simplicity predominate in revolutionary phases.

Theory dynamics according to Stegmüller

For Wolfgang Stegmüller, the requirement that test sentences be accepted did not solve the problem of induction, since the acceptance of test sentences rests on a stipulation, even if an intersubjectively recognized one. Stegmüller saw here the breaking-off of an infinite regress, analogous to Fries's trilemma. Even if justified differently, he located the problem, much as Kuhn did (whom he reproached for a lack of philosophical grounding), in the empirical character of the basic sentences, and came to the conclusion that between Popper's deductivism (corroboration) and Carnap's inductivism (confirmation) there are only slight formal differences. Stegmüller also accused critical rationalism of being an inhumane rationalism, since its normative methodological demands cannot be met by any scientist working in practice.

On the basis of his criticism of the purely propositional conception of theories, Stegmüller advocated a semantic view of scientific theories, drawing on the work of Patrick Suppes, Joseph D. Sneed, Ulises C. Moulines and Wolfgang Balzer. Here theories consist of a formal mathematical structural core, intended applications and special laws, linked to other theories by cross-connections. Compared with the conventional view of empirical theories as sets of laws, as represented by logical empiricism or critical rationalism, this yields improved explanations for a rational dynamics of theories.

Popper himself addressed the question of complex theoretical systems long before Quine and pointed out that a falsification does not logically refute individual components (cf. LdF, chapters 19-22). For Popper, however, the global holistic dogma is untenable, since analysis can identify partial hypotheses of a system as the source of a falsification.

literature

• Max Albert: The falsification of statistical hypotheses , in: Journal for General Philosophy of Science 23/1 (1992), 1–32
• Gunnar Andersson: Critique and History of Science. Mohr Siebeck, Tübingen 1988. ISBN 3-16-945308-4
• KH Bläsius, H.-J. Bürckert: Automation of logical thinking. Oldenbourg, Munich 1992 (2nd chapter online basics and examples. ). ISBN 3-486-22033-0
• Georg JW Dorn: Poppers two definitions of "falsifiable". A logical note on a classic passage from the “Logic of Research” , in: conceptus 18 (1984) 42–49
• Sven Ove Hansson : Falsificationism Falsified , in: Foundations of Science 11/3 (2006), 275-286
• Sandra G. Harding (ed.): Can Theories be Refuted? Essays on the Duhem-Quine Thesis , Dordrecht-Boston 1976 With important essays and excerpts by Popper, Grünbaum, Quine, Wedeking
• Richard C. Jeffrey: Probability and falsification: Critique of the Popper program, in: Synthese 30 (1975), 95–117
• Gary Jones / Clifton Perry: Popper, induction and falsification, in: Erkenntnis 18/1 (1982), 97–104
• Handlexikon zur Wissenschaftstheorie dtv, Munich 1992 (with contributions by Karl Popper himself). ISBN 3-423-04586-8
• Herbert Keuth: The philosophy of Karl Poppers Mohr Siebeck, Tübingen 2000. ISBN 3-16-147084-2
• I. Lakatos: Falsification and the Methodology of Scientific Research Programmes, in: Lakatos, I. / Musgrave, A. (eds.): Criticism and the Growth of Knowledge, CUP, Cambridge 1970
• David Miller: Critical Rationalism: A Restatement and Defense , Open Court, Chicago 1994. ISBN 0-8126-9198-9
• Hans-Joachim Niemann : Lexicon of Critical Rationalism. Mohr Siebeck, Tübingen 2004. ISBN 3-16-148395-2
• Karl R. Popper: Logic of Research. Springer, Vienna 1935; edited by Herbert Keuth, Mohr Siebeck, Tübingen 2005 (11th edition; online: 2nd edition 1966 with note). ISBN 3-16-146234-3
• Karl R. Popper: Falsifiability, two meanings of , in: Helmut Seiffert and Gerard Radnitzky (ed.): Handlexikon zur Wissenschaftstheorie , Ehrenwirth, Munich 1989, 82–85.
• Karl R. Popper: The two basic problems of epistemology. Ed. Based on manuscripts from the years 1930–1933. by Troels Eggers Hansen with a foreword by Karl Popper from 1978. Mohr Siebeck, Tübingen 1994 (2nd edition). ISBN 3-16-838212-4
• Karl R. Popper: Conjectures and Refutations. Edition in one volume. Mohr Siebeck, Tübingen 2000. ISBN 3-16-147311-6
• Gerhard Schurz and Georg JW Dorn: Why Popper's Basic Statements are not Falsifiable. Some Paradoxes in Popper's "Logic of Scientific Discovery" , in: Zeitschrift für Allgemeine Wissenschaftstheorie 19 (1988) 124-143
• Friedel Weinert: The Construction of Atom Models: Eliminative Inductivism and its Relation to Falsificationism , in: Foundations of Science 5/4 (2000), 491-531

Individual evidence

1. August Weismann: About the justification of Darwin's theory. Leipzig 1868, p. 14f. See also Franz Graf-Stuhlhofer : August Weismann - a "forerunner" of Poppers. In: Conceptus. Journal of Philosophy 20 (1986) 99f.
2. ^ Karl Popper: Autobiography. In PA Schilpp (ed.): The philosophy of Karl Popper (1974), section 8.
3. Logic of Research , Section 6.
4. Logic of Research , Section 1.
5. Autobiography, Section 9: “As it occurred to me first, the problem of demarcation was not the problem of demarcating science from metaphysics but rather the problem of demarcating science from pseudoscience. At the time I was not at all interested in metaphysics. It was only later that I extended my ' criterion of demarcation ' to metaphysics. "
6. ^ David Miller: The Objectives of Science. In: Philosophia Scientiæ 11:1 (2007), p. 27 (PDF, 263 kB; English).
7. Troels, Eggers, Hansen (ed.): The two basic problems of the theory of knowledge. Based on manuscripts from 1930–1933 . Tübingen 1979, p. XXVII.
8. WW Bartley: Rationality, Criticism, and Logic ( Memento of November 27, 2007 in the Internet Archive ) ( MS Word ; 283 kB). Philosophia 11 : 1-2 (1982), Section XXIII.
9. ^ Rationality, Criticism, and Logic, Sections XXI and XXII.
10. Tract , pp. 5 126f, 1–4 106.
11. Lorenzo Fossati: We're all just provisional! (PDF; 51 kB). Enlightenment and Criticism 2/2002, p. 8.
12. ^ Nicholas Maxwell: Review of Problems in the Philosophy of Science by I. Lakatos, A. Musgrave. The British Journal for the Philosophy of Science 20 : 1 (May 1969), pp. 81-83.
13. ^ Mariano Artigas: The Ethical Nature of Karl Popper's Theory of Knowledge (1999).
14. Arnd Krüger : Popper, Dewey and the theory of training - or what matters is on the spot, in: Leistungssport 33 (2003) 1, pp. 11–16; http://www.iat.uni-leipzig.de:8080/vdok.FAU/lsp03_01_11_16.pdf?sid=D60B688F&dm=1&apos=5235&rpos=lsp03_01_11_16.pdf&ipos=8483 .
15. Thomas S. Kuhn: The structure of scientific revolutions. Suhrkamp, Frankfurt a. M. 1976 (2nd edition), p. 90, ISBN 3-518-27625-5.
16. Cf. Imre Lakatos: Falsification and the Methodology of Scientific Research Programs. in: Imre Lakatos, Alan Musgrave (eds.): Critique and progress in knowledge. Vieweg, Braunschweig 1974, pp. 89-189, ISBN 3-528-08333-6 .
17. Cf. Paul Feyerabend: Against Method (Wider den Methodenzwang). Suhrkamp, Frankfurt 1983 (2nd edition), ISBN 3-518-57629-1.
18. ^ Paul Feyerabend: Against Method (Wider den Methodenzwang). Suhrkamp, Frankfurt 1983 (2nd edition), ISBN 3-518-57629-1, p. 80.
19. Cf. Willard Van Orman Quine: Two dogmas of empiricism. in: W. Van Orman Quine: From a logical point of view. Ullstein, Frankfurt 1979, pp. 27-50, ISBN 3-548-35010-0 .
20. Pierre Duhem: Aim and Structure of Physical Theories. Edited by Lothar Schäfer. Trans. V. Friedrich Adler. Meiner Felix, Hamburg 1978, 1998 (orig. Paris 1906), ISBN 3-7873-1457-1 .
21. See Wolfgang Stegmüller: The problem of induction. Hume's Challenge and Modern Answers. Knowledge Buchgesellschaft, Darmstadt 1974, especially pp. 8–50, ISBN 3-534-07011-9 .
22. Wolfgang Stegmüller: Problems and results of the philosophy of science and analytical philosophy. Volume II Theory and Experience, Second Part: Theory Structures and Theory Dynamics, Springer Verlag.
23. ^ Karl Popper: Conjectures and Refutations, pp. 348–350.