The poverty of the stimulus argument (POSA) holds that much of what is in the human mind is too complex and too diverse to have entered it from outside in the course of a life. The argument is often used by nativists in support of their position.
Historically, nativism is the doctrine of innate ideas: our ideas are in our minds from birth. Modern nativists locate the seat of these "innate ideas" in the genetic makeup. But what exactly is innate? The ideas are not literally present in the newborn's mind; their appearance depends on certain events or processes. Leibniz compared the mind to a block of marble: just as the chisel brings out the figure in the marble, events bring out the ideas. What is in our mind never got into it; it was there from the beginning.
Nativists usually do not justify their stance with positive evidence that an idea is innate; such positive evidence is in fact difficult to provide. Nativism is instead justified by denying empiricism, and the poverty of the stimulus argument is of particular importance here. Although the basic idea of the POSA goes back to antiquity, the term itself goes back to the linguist Noam Chomsky.
Chomsky's POSA in particular
Noam Chomsky is one of the most prominent nativists today. For the domain of language, his POSA holds that the structure of language essentially cannot be learned by an unprepared organism from data coming in from outside ("[The] narrowly limited extent of the available data… leaves little hope that much of the structure of language can be learned by an organism initially uninformed as to its general character").
The basic structure of the argument is as follows:
- There are patterns in every natural language that cannot be learned from positive evidence alone. Positive evidence consists of the utterances the child hears as its language develops. Negative evidence, by contrast, is information about which utterances are not well-formed; it occurs, for example, when a child makes an ill-formed utterance and is then corrected by its parents.
- In practice, children are only confronted with positive evidence.
- Children learn the grammar of their mother tongue.
- Hence, humans must have an innate and language-specific mechanism that includes knowledge of grammar.
Empiricists (such as B. F. Skinner), however, argue that there are general learning mechanisms sufficient to cope with any complex task. Nativists counter that the input received in the course of a life is too sparse: general learning mechanisms would not suffice to generate the knowledge we have in our heads. Chomsky concludes from this that there must be special, language-specific mechanisms to explain the acquisition of language.
Basis: the universal grammar
Chomsky's starting point is the "amazing fact" that by the age of about eight almost every child speaks the language of its language community fluently. Children seem to manage this without formal instruction, after having been confronted with only a relatively small sample of sentences. In the early versions of his theory, Chomsky viewed children as de facto linguists who form hypotheses about the syntax of a language on the basis of the input. So that the child does not "grope in the dark" endlessly, an innate universal grammar helps it to set up these hypotheses. Chomsky conceived of universal grammar as a system of principles, conditions, and rules that are contained in all human languages and that, in a sense, constitute the essence of language.
Chomsky later abandoned this approach in favor of the principles-and-parameters approach. He now denies that universal grammar contains rules; universal grammar is instead like a box of switches. The verbal input that a child hears causes certain switches to be set in one direction or the other. One such "switch" is the question of whether a pronominal subject can be left out in a language or not (the pro-drop parameter). In Italian, for example, the subject can be omitted (Sono italiano); in English and German it cannot ("I am Italian", not "*Am Italian"). The knowledge that there are null-subject and non-null-subject languages is innate, and the decision is made with the first linguistic input: the switch is flipped in one direction or the other. With little input, the child takes a big step forward in language development, because a specific, innate mechanism accelerates learning.
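The switch metaphor can be illustrated with a minimal sketch. This is a toy model invented here purely for illustration; the class, its trigger condition, and the one-shot setting are simplifying assumptions, not Chomsky's formalism:

```python
class ToyGrammar:
    """A 'switchbox' in the spirit of the principles-and-parameters
    metaphor: the space of options is innate, the setting is not."""

    def __init__(self):
        # Unset at birth; only the existence of the binary choice is given.
        self.pro_drop = None

    def hear(self, sentence, has_overt_subject):
        # A single triggering datum flips the switch: a well-formed
        # utterance without an overt subject signals a pro-drop language.
        if self.pro_drop is None and not has_overt_subject:
            self.pro_drop = True


child = ToyGrammar()
child.hear("Sono italiano", has_overt_subject=False)
print(child.pro_drop)  # True: one datum suffices to set the parameter
```

On this sketch, a child hearing only sentences with overt subjects simply leaves the switch unset, which already hints at how the parameter is supposed to be fixed without negative evidence.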
Three variants of the POSA
To support these two aspects of universal grammar (innate and language-specific), Chomsky relies on the poverty of the stimulus argument. According to Cowie, Chomsky's POSA comes in three variants.
The a posteriori POSA
This variant of the POSA is basically empirical: since language cannot be learned from the available input, the principles of universal grammar must be innate. Chomsky formulated this form of the POSA in response to Hilary Putnam's criticism of universal grammar. Putnam was responding to Chomsky's finding that mastery of the mother tongue is independent of the speaker's intelligence quotient. Putnam replied that this proves only that any normal adult can learn what any normal adult can learn; of course the "innate" human intellectual abilities are important for language learning. In sum, Putnam finds no evidence for a specific and innate language-learning ability.
Chomsky takes up Putnam's claim that someone who uses general learning mechanisms will always adopt the simplest possible hypothesis, and gives an example to show that children do not always use the simplest possible hypothesis when learning a language. Suppose a child often hears sentences like these:
- "Ali is happy"
- "Is Ali happy?"
Using general learning mechanisms, the child would first have to make the following hypothesis:
- H1 ( structure-independent rule ): "If you want to transform a statement into a question, take the first verb in the sentence and put it at the beginning".
But soon the child will hear sentences like these:
- "The man who is happy sings"
The child would have to transform this sentence using H1:
- "Is the man who happy sings?"
The child would now have to be confronted with negative feedback from the language community, and only then form the following hypothesis:
- H2 ( structure-dependent rule ): "If you want to convert a statement into a question, take the first verb that follows the subject phrase and put it at the beginning".
This rule leads to the following correct question:
- "Does the man who is happy sing?"
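The contrast between the two hypotheses can be made concrete with a small sketch. This is a toy model, not a parser: the verb inventory and the explicit subject-phrase length are supplied by hand (supplying the latter is exactly the structural analysis that H2 presupposes), and English do-support is sidestepped by using a sentence with an auxiliary:

```python
AUX = {"is"}  # toy "verb" inventory; a real learner has no such list

def h1_question(words):
    """H1 (structure-independent): front the first verb in the string."""
    i = next(k for k, w in enumerate(words) if w in AUX)
    return [words[i]] + words[:i] + words[i + 1:]

def h2_question(words, subject_len):
    """H2 (structure-dependent): front the first verb AFTER the subject
    phrase, whose length is given explicitly here."""
    i = next(k for k in range(subject_len, len(words)) if words[k] in AUX)
    return [words[i]] + words[:i] + words[i + 1:]

sentence = "the man who is happy is singing".split()

print(" ".join(h1_question(sentence)))     # is the man who happy is singing  (ill-formed)
print(" ".join(h2_question(sentence, 5)))  # is the man who is happy singing  (well-formed)
```

The sketch reproduces the contrast: H1 extracts the auxiliary out of the relative clause and yields an ill-formed question, while H2 fronts the main-clause auxiliary and yields a well-formed one.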
Chomsky argues as follows:
- The linguistic input is too sparse for the child to be able to reject H1 on its basis.
- No child makes mistakes like "Is the man who happy sings?"
He concludes from this that no child ever entertains H1 and therefore needs no input that could refute it. If the child acquired language with the help of general learning mechanisms, it should first show a preference for H1, in line with Putnam's claim that the simplest hypothesis is preferred. Since this is not the case, there must be a language-specific learning mechanism (universal grammar) containing a rule such as: "Construct structure-dependent rules and ignore structure-independent ones".
Chomsky's argument rests on the assumption that H1 is simpler than H2: H2 requires a syntactic analysis, whereas H1 is based solely on observation; H2 refers to what cannot be observed. With this, however, Chomsky contradicts his own statement (in his critique of structuralism) that grammatical hypotheses referring only to the observable are less simple and less elegant than hypotheses referring to the non-observable. By his own account, general learning mechanisms should therefore prefer H2.
Chomsky's assessment that structure-dependent rules are simpler than structure-independent ones assumes that syntactic properties are related to the learner's linguistic experience only through many intermediate steps, and that syntactic categories therefore cannot be learned.
However, the empirical findings contradict this assessment. Saffran, Aslin and Newport, for example, showed that eight-month-old infants are able to distinguish between words and non-words of an artificial language after just two minutes of exposure, apparently by exploiting statistical regularities in the material. The only logical conclusion from this (and numerous other) experiments is that children can learn syntactic categories using only general learning mechanisms.
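The statistical cue that Saffran and colleagues identified can be sketched as transitional probabilities between adjacent syllables: within an (artificial) word the probability of the next syllable is high, while it drops sharply at word boundaries. The three "words" below are invented for illustration and are not the original stimuli:

```python
import random
from collections import Counter

def transitional_probabilities(stream):
    """P(next syllable | current syllable) for adjacent pairs in the stream."""
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# A continuous syllable stream built from three invented trisyllabic "words",
# concatenated in random order with no pauses, as in the experimental design.
words = ["bi da ku", "pa do ti", "go la bu"]
random.seed(0)
stream = [syl for _ in range(300) for syl in random.choice(words).split()]

tp = transitional_probabilities(stream)
boundary = {k: v for k, v in tp.items() if k[0] in {"ku", "ti", "bu"}}

print(tp[("bi", "da")])        # within a word: exactly 1.0
print(max(boundary.values()))  # across word boundaries: well below 1.0, around 1/3
```

A learner tracking nothing but these conditional frequencies can thus posit word boundaries wherever the transitional probability dips, without any language-specific machinery.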
Chomsky's claim that the input is too sparse for children to discard H1 in a reasonable time must also be questioned. In an analysis of the Wall Street Journal text corpus, Pullum and colleagues found, among the first 500 questions, numerous examples that would refute a structure-independent rule. The same holds for an analysis of Oscar Wilde's The Importance of Being Earnest. Now, the Wall Street Journal and Oscar Wilde are not a child's usual linguistic input, but Pullum's results are likely to carry over to what children hear in their first years of life.
The POSA as a logical problem
This variant of the argument rests less on empirically testable assertions than on the claim that the data cannot in principle (a priori) be sufficient to enable the acquisition of grammatical rules. The language learner never hears ungrammatical sentences marked as counter-examples (negative evidence): no competent speaker gives a child a list of wrong sentences with the instruction to avoid them. The only option left to the child would be to formulate ungrammatical sentences and have its parents correct them, but that almost never happens; parents usually ignore the ungrammatical utterances of their children. There are also infinitely many well-formed sentences that the language learner never hears, so the learner cannot conclude from a sentence's absence in the input that it is ill-formed. Consequently, language-specific knowledge must be innate.
However, the logic of this form of the POSA is vulnerable. If the lack of negative evidence in the input were enough to postulate an innate and specific mechanism, the same could be shown in many other areas. Imagine a person learning what a goulash is. Almost everyone acquires a "culinary competence" in the course of life that enables them to distinguish the most varied kinds of food from one another, and food from non-food. But nobody tells the person that tacos, pizzas and steaks, let alone stones, dogs and clouds, are not goulash; the person is only ever presented with positive evidence for goulash. Despite the lack of counterexamples, everyone believes that a goulash is a goulash and nothing else. Yet no one would suspect an innate and food-specific mechanism behind this.
The claim that there is no negative evidence is also incorrect. The fact that we call hamburgers "hamburgers" and not "goulash", for example, is certainly a form of evidence that reinforces the speaker's concept of "goulash".
Above all, one should distinguish between "data" and "evidence". Data are facts as presented to experience; evidence is data used to confirm or refute a theory. Data can serve as both positive and negative evidence. A child therefore does not necessarily need explicit corrections in order to discover the falsity of a grammatical hypothesis. There are three sources of negative evidence:
Negative data as negative evidence
Nativists refer almost exclusively to a single study when they claim that children receive virtually no negative feedback during language development. Brown and Hanlon had observed three mother–child pairs. The mothers' explicit approval or rejection of the children's utterances did not correlate with the well-formedness of what the children said. This seemed to confirm that negative feedback (indeed, feedback of any kind) had no impact on language development.
However, later research refuted what other authors had inferred from Brown and Hanlon's data. Hirsh-Pasek, Treiman and Schneidermann showed, for example, that mothers of two-year-old children repeated (and corrected) their children's ungrammatical sentences far more often than their correct sentences. Hirsh-Pasek and colleagues conclude from this that the child's environment is full of subtle cues to the correctness of the child's utterances. Further research clearly showed that the feedback children receive on correct sentences differs from the feedback they receive on incorrect ones. Moerk was also able to show, in a re-analysis of the original data from Brown and Hanlon, that even Brown's own records contained an abundance of corrective feedback.
The claim that parents do not correct the ungrammatical utterances of their children is therefore simply wrong. According to Demetras and others, the fact that parents fail to correct some of a child's ungrammatical utterances is a problem only if one assumes that the child must master the whole system of grammar at once. Contrary to what Marcus suggests, children also do not have to repeat the same sentence over and over in order to receive enough feedback to test a rule; it is entirely sufficient that they utter sentences formed according to a certain rule and receive feedback on them.
Positive data as negative evidence
A hypothesis can also be refuted by positive data. For example, an English-speaking child who hears sentences like

- "The boy wants a curry" and
- "Dad wants a beer"

finds confirmation for the rule (1) that an -s must always be added to the verb stem. In the sense of a collection of restrictions (as assumed by Steven Pinker), one could now imagine that the child invents an arbitrary restriction (2) of this rule, e.g. that it applies only when the subject is animate. This hypothesized restriction is refuted as soon as the child hears a sentence like

- "The curry tastes good"

With the help of this positive evidence for rule (1), the child can ultimately correct its overly restricted rule (2).
The lack of data as negative evidence
The language learner should be regarded as an active hypothesis tester. If he finds that a sentence which should be possible according to a hypothesis is never uttered in his linguistic environment, he will reject the hypothesis. That infinitely many sentences go unuttered is irrelevant here: the decisive criterion for rejecting a hypothesis is the non-appearance of a particular sentence that would be expected in a particular situation, not the non-appearance of some sentence at some point. Cowie gives an example:
Many preschoolers seem to assume that all intransitive verbs can be used as causatives. A child hears, for example, sentences like "I melted it" and, by analogy, forms the ill-formed sentence "I giggled her" when it wants to express that it made someone giggle by tickling her. Now suppose the child sees its father knock over a coffee cup so that it falls off the table. On the basis of its hypothesis, the child could expect the father to say: "I fell the cup off the table". But this does not happen; the father says, for example, "I caused the cup to fall from the table". The non-occurrence of "I fell the cup off the table" in this situation is thus negative evidence against the child's hypothesis that all intransitive verbs can be used as causatives.
The "repeated" POSA
According to this variant of the argument, the rules of universal grammar cannot be formed, or even tested, on the basis of the linguistic input: these rules are so abstract that a prelinguistic child can find no information about them in the data available to it. Universal grammar must therefore be innate.
An example of a component of universal grammar has already been mentioned: the pro-drop parameter (see above). The knowledge that there are languages in which the subject can be left out and languages in which this is not possible cannot be derived from the linguistic input.
This variant of the POSA presupposes that the validity of universal grammar has been established. Using the example of the pro-drop parameter, however, it can be shown that the existence of the switches postulated by Chomsky is more than doubtful. Almost all children begin their language development as if their language were a null-subject language ("Want biscuit!"). If, as Chomsky and other nativists claim, they never receive negative feedback for this, how do they ever set the pro-drop parameter correctly? Moreover, even in non-null-subject languages the linguistic input contains many utterances in which the subject is missing ("Must go", "Don't believe a word", "Couldn't give a damn", etc.).
If the existence of universal grammar is doubtful, then there is no need to show how the child could acquire it.
Enlightened empiricism as an alternative to nativism
Cowie summarizes the empirical results by saying that the stimulus is in fact not as "poor" as Chomsky would have us believe. She opposes nativism with an enlightened empiricism, which assumes that there are indeed principles and structures that constrain the language learner's choices, but that these principles and structures are the result of previous learning experience. Nativists tend to overestimate the difficulties a language learner faces and to underestimate the resources he can fall back on. Contrary to what nativism imputes to empiricism, the child does not face every new step in language development as a tabula rasa; it uses its previous knowledge efficiently to extract meaningful rules from the input. Instead of moving toward the position of enlightened empiricism, however, the nativists' strategy, according to Cowie, is a different one: as soon as it is empirically shown that a certain grammatical rule can be learned with the help of general learning mechanisms, it is simply asserted that some other rule or principle is "not learnable". The debate about the POSA is therefore likely to continue for a long time.
- Alexander Clark, Shalom Lappin: Linguistic Nativism and the Poverty of the Stimulus. Wiley-Blackwell, 2010, ISBN 978-1-4051-8784-8 .
- Fiona Cowie: Innateness and Language. In: Edward N. Zalta (Ed.): Stanford Encyclopedia of Philosophy .
- Noam Chomsky: Rules and Representations. Basil Blackwell, Oxford 1980, ISBN 0-631-12641-4.
- Noam Chomsky: Aspects of the Theory of Syntax. MIT Press, Cambridge, MA 1965.
- Burrhus F. Skinner: Verbal Behavior. Copley Publishing Group, Acton, MA 1957.
- Noam Chomsky: Reflections on Language. Fontana, London 1975, ISBN 0-00-634299-X.
- Noam Chomsky: Bare phrase structure. In: Gert Webelhuth (Ed.): Government and Binding Theory and the Minimalist Program. Blackwell, Oxford 1995, pp. 383–440, ISBN 0-631-18059-1.
- Fiona Cowie: What's Within. Nativism Reconsidered. Oxford University Press, New York 1999, ISBN 0-19-512384-0.
- Hilary Putnam: The "innateness hypothesis" and explanatory models in linguistics. In: John Searle (Ed.): The Philosophy of Language. Oxford University Press, London 1971, pp. 130–139.
- Jenny R. Saffran, Richard N. Aslin, Elissa L. Newport: Statistical learning by 8-month-old infants. In: Science. Volume 274 (1996), pp. 1926–1928, ISSN 0036-8075.
- Geoffrey K. Pullum: Learnability, hyperlearning, and the poverty of the stimulus. In: Jan Johnson, Matthew L. Luge, Jeri L. Moxley (Eds.): Proceedings of the 22nd Annual Meeting. General Session and Parasession on the Role of Learnability in Grammatical Theory. Berkeley Linguistics Society, Berkeley, CA 1996, pp. 498–513 (ecs.soton.ac.uk, archived).
- Geoffrey K. Pullum, Barbara Scholz: Empirical assessment of stimulus poverty arguments. In: The Linguistic Review. Volume 19 (2002), pp. 9–50, ISSN 0167-6318 (PDF: ling.ucsd.edu; page no longer available).
- Roger Brown, Camille Hanlon: Derivational complexity and order of acquisition in child speech. In: John R. Hayes (Ed.): Cognition and the Development of Language. Wiley, New York 1970, pp. 11–53, ISBN 0-471-36473-8.
- Cf. also Ted Schoneberger: Three myths from the language acquisition literature. In: The Analysis of Verbal Behavior. Volume 26 (2010), pp. 107–131, ISSN 0889-9401, PMC 2900953 (free full text).
- Kathy Hirsh-Pasek, Rebecca Treiman, Maita Schneidermann: Brown and Hanlon revisited. Mothers' sensitivity to ungrammatical forms. In: Journal of Child Language. Volume 11 (1984), pp. 81–88, ISSN 0305-0009.
- Marty J. Demetras, Kathryn N. Post, Catherine E. Snow: Feedback to first language learners. The role of repetitions and clarification questions. In: Journal of Child Language. Volume 13 (1986), pp. 275–292, ISSN 0305-0009.
- John N. Bohannon, Laura B. Stanowicz: The issue of negative evidence. Adult responses to children's language errors. In: Developmental Psychology. Volume 24 (1988), pp. 684–689, ISSN 0012-1649.
- Ernst L. Moerk: Positive evidence for negative evidence. In: First Language. Volume 11 (1991), pp. 219–251, ISSN 0142-7237.
- Gary F. Marcus: Negative evidence in language acquisition. In: Cognition. Volume 46 (1993), pp. 53–85, ISSN 0010-0277.
- Steven Pinker: Productivity and conservatism in language acquisition. In: William Demopoulos, Ausonio Marras (Eds.): Language Learning and Concept Acquisition. Foundational Issues. Ablex, Norwood, NJ 1986, pp. 54–79, ISBN 0-89391-316-2.
- Steven Pinker: The Language Instinct. How the Mind Creates Language. Harper-Collins, New York, NY 1994, ISBN 0-06-097651-9.
- Nina Hyams: The pro-drop parameter in child grammars. In: Proceedings of the West Coast Conference on Formal Linguistics. Volume 2. Stanford Linguistics Association, Stanford, CA 1983, pp. 126–139, ISSN 1042-1068.
- Ruth A. Berman: In defense of development. In: Behavioral and Brain Sciences. Volume 14 (1991), pp. 612–613, ISSN 0140-525X.