Phonetics

Representation of the speech process in real-time magnetic resonance imaging

Phonetics (from Ancient Greek φωνητικός phōnētikós 'belonging to sounding, to speaking', from φωνή phōnḗ 'voice') is the scientific discipline that examines speech sounds under the following aspects: their production in the larynx, pharynx, mouth and nose, the acoustic properties of the sounds, and the perception and processing of the sounds by the ear and the human brain. Phonetics is an independent interdisciplinary subject situated between linguistics, anatomy, physiology, neurology, physics and mathematics. The subject area of phonetics is spoken language in all its realizations.

Like phonology, phonetics examines spoken language, but from a different perspective. Phonology, as a branch of linguistics, classifies sounds within individual languages based on their function of distinguishing meanings. Phonetics, on the other hand, deals with the physical, neurological and physiological aspects relevant to sound production, transmission and perception, and makes use of scientific methods.

Adjacent subjects and related disciplines

Differentiation from phonology

The linguistic discipline of phonology is closely related to phonetics. Phonology classifies sounds based on their distribution and function in a specific language. On the basis of minimal pairs such as German rot ('red') and tot ('dead'), phonology identifies the smallest meaning-distinguishing sounds of a language, the phonemes (in this case /ʀ/ and /t/). In contrast to phonology, phonetics examines the concrete articulatory and acoustic characteristics of the sounds of all languages, i.e. it deals with how speech sounds are formed, how they are picked up by the human ear and processed in the brain, and how spoken sounds can be measured and described acoustically. The smallest unit in phonetics is the sound or phone, the smallest unit of sound in the sound continuum of spoken language. These smallest units are identified by analyzing and decomposing linguistic utterances; phonetics describes, for example, by which articulation these sounds are produced. A phoneme, the abstract unit of phonology, corresponds to one or more phones in an utterance. Phones that count as variants of the same phoneme in a given language, i.e. that have the same function in that language, are called allophones of this phoneme.

Interdisciplinary subject

Phonetics is an interdisciplinary subject that uses results and methods from anatomy, physiology, neurology, physics and mathematics. It draws on findings from anatomy and physiology to describe how sounds are formed with the lungs, larynx, mouth and nose, and on neurological findings to describe the processing of sounds by the human brain. Physics, especially the subarea of acoustics, is relevant for describing the sound transmission of speech sounds, as is some knowledge from mathematics, which offers the mathematical framework for describing sound waves (e.g. Fourier analysis).

Phonetics is regarded in many publications as an interdisciplinary scientific subject; however, many introductions to linguistics also present it as a sub-area of linguistics and treat it together with the linguistic disciplines of phonology, morphology and syntax.

In addition to phonetics and phonology, the subjects of speech science, speech training, rhetoric, the art of speaking, clinical linguistics, speech-language pathology and speech therapy also deal with spoken language.

History of Phonetics

Jean-Pierre Rousselot was one of the pioneers of voice recording for scientific purposes. His central work on this, Principes de Phonétique Expérimentale (1897), influenced many researchers after him. The picture shows his device for voice recording (around 1900).

The origins of phonetics go back to a period between 800 and 150 BC on the Indian subcontinent, where Indian linguists described the phonetics of Sanskrit.

The foundations for a systematic description of the organs of articulation were laid in European antiquity and the Renaissance. In antiquity the physician Galen dealt with the structure of the larynx, and in the 11th century the physician and natural scientist Avicenna also dealt scientifically with phonetics. On the whole, however, the Middle Ages saw setbacks in knowledge and ideas about speech sound production and reception, which only changed again in the Renaissance. Leonardo da Vinci, too, can be named as a forerunner of the phoneticians, because his studies on dissected corpses contributed to knowledge of the structure of the larynx.

With the rise of the natural sciences in the modern era, the prerequisites for phonetics as a scientific discipline emerged, for example acoustic vibration theory, with which the mathematician Leonhard Euler attempted towards the end of the 18th century to describe the acoustic properties of vowels more precisely. The first attempts to produce speech artificially can also be found in the 18th century; an example is Wolfgang von Kempelen, who worked on a speaking machine from 1769.

Reconstruction of Kempelen's speaking machine

Phonetics experienced a breakthrough in the 19th century, when technical devices such as the phonograph became available with which speech sounds could be recorded and analyzed for the first time. Jean-Pierre Rousselot was one of the pioneers of voice recording for scientific purposes and can be named as one of the founding fathers of phonetics as a scientific discipline. In 1889 and 1890 Ludimar Hermann also succeeded in analyzing voice and sound curves with the help of mathematical principles; he also coined the term formant.

At the same time, towards the end of the 19th century, articulatory phoneticians recognized that speech sounds need their own system of description, because in most languages there is no longer a clear relationship between letters and sounds, so that the common alphabets do not suffice for describing the sounds of a language. Alexander Melville Bell, for example, published his Visible Speech in 1867, a phonetic script with which he tried to describe vowels precisely. These activities culminated in the founding of the International Phonetic Association in 1886 and the publication of the first International Phonetic Alphabet (IPA) in 1888.

In German-speaking countries, phonetics was first recognized as an independent discipline in 1919, when it was admitted as a major and minor subject for doctorates at the Philosophical Faculty of the University of Hamburg. The first regular extraordinary professorship for phonetics in Germany was established there in 1922.

Other important technical developments for phonetics included, for example, X-ray imaging and sonography at the beginning of the 20th century. Further advances in phonetics can be expected from technical developments; in recent years, for example, great progress has been made in real-time MRI, which makes it easier for phonetics to analyze acoustic signals and the physiological processes of sound production.

Subareas of phonetics

Main areas of work

General phonetics deals with the physical processes involved in concrete speech acts as well as with their measurement. It has the following subareas:

  • Articulatory phonetics is the study of the structure and function of the speech apparatus and its use in the production of language.
  • Acoustic phonetics examines the physical structure of sound waves as the carrier of spoken sounds.
  • Auditory or perceptual phonetics deals with the perception of speech sounds by the listener and the respective roles of the ear and the brain.

There is also the field of systematic phonetics, which seeks a systematic description of the sounds (phones) of the world's languages, including the description of the consonants and vowels of all human languages and their transcription into a phonetic script. Systematic phonetics also includes suprasegmental phonetics (prosody), i.e. the description of units beyond the individual sound, such as the syllable or the word.

Articulatory Phonetics

Anatomy of the mouth and nose with the most important organs of articulation such as the tongue, lips, lower jaw and soft palate

Articulatory phonetics deals with the interaction of respiration (which generates the necessary air pressure in the lungs), phonation in the larynx and articulation in the pharynx, mouth and nose (the vocal tract). The air pressure required for sound is generated in the lungs by breathing. The larynx contains the vocal folds, which create the vibrations in the air that are responsible for the sound. Finally, the pharynx, mouth and nose act as a filter that further modifies the sound, depending on the position of, for example, the palate or the tongue.

Articulatory phonetics is particularly interested in the role and position of the movable parts in the larynx and mouth, i.e. the tongue, lips, jaw, soft palate (velum) with the uvula, pharynx and glottis. Depending on the position of these organs of articulation, different speech sounds are produced. Phonetics speaks of different places of articulation when describing where (parts of) the tongue and/or the lips are located when consonants are produced. For example, sounds such as [b] or [m] are called bilabial sounds, because here the upper and lower lips are primarily involved in sound formation. For other consonants such as [d] or [g], the position of the tongue plays a role (dental, behind the upper teeth, or velar, at the soft palate).

Articulatory phonetics has various experimental techniques at its disposal to capture the behavior of the larynx and of the articulators. The laryngoscope, the laryngograph and photoelectroglottography are used for the larynx. Palatography, X-ray imaging, electromagnetic articulography, ultrasound measurement (sonography) and magnetic resonance imaging or real-time MRI are used to record articulatory geometry.

Acoustic Phonetics

Oscillogram (top), spectrogram (middle) and phonetic transcription (bottom) of the spoken word Wikipedia using the Praat software for linguistic analysis.

Acoustic phonetics deals with the description of speech sounds as sound vibrations as they are transmitted from speaker to listener. Its field of investigation thus lies in the area after articulation by the speaker and before the signal is picked up by the listener's ear. The foundations of acoustic phonetics come from a subarea of physics, acoustics. Acoustic phonetics describes the generation and transmission of the sound vibrations produced by speech sounds. Sound is understood to mean minimal fluctuations in air pressure that are audible. Speech sounds belong to a special type of sound vibration, namely complex tones: in contrast to pure tones (e.g. in music), they are composite sound vibrations; in contrast to noises, they are periodic sound oscillations. In acoustics, such tones (including speech sounds) are described in terms of sinusoidal vibrations.

More precisely, speech sounds are compound vibrations that can be broken down into individual sinusoidal vibrations. In such a decomposition, the amplitudes of the individual partial oscillations are determined. This yields a sound spectrum, and the method used to obtain it is called frequency analysis or Fourier analysis (after the French mathematician Jean Baptiste Joseph Fourier). Acoustic findings are relevant for phonetics because sound waves are generated during the production of speech sounds that travel from the larynx through the pharynx, mouth and nose. These sound waves can be measured and described with the methods of acoustics.
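
The decomposition into partial oscillations can be sketched numerically. The following example is only an illustration of the principle: the signal and its parameters (sampling rate, component frequencies and amplitudes) are invented, and NumPy's discrete Fourier transform stands in for the frequency analysis described above.

```python
# Minimal sketch: build a composite signal from two invented sine components
# and recover their frequencies and amplitudes with a discrete Fourier transform.
import numpy as np

fs = 16000                         # sampling rate in Hz (assumed)
t = np.arange(0, 0.05, 1 / fs)     # 50 ms of signal

# Composite "sound": two partial oscillations with different amplitudes
signal = 1.0 * np.sin(2 * np.pi * 200 * t) + 0.4 * np.sin(2 * np.pi * 600 * t)

spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), 1 / fs)
amplitudes = np.abs(spectrum) * 2 / len(signal)    # amplitude of each partial

# The two strongest components recover the original partials (about 200 Hz and 600 Hz)
for i in amplitudes.argsort()[-2:][::-1]:
    print(f"{freqs[i]:6.1f} Hz  amplitude ≈ {amplitudes[i]:.2f}")
```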

Acoustic phonetics uses different forms of representation to make the acoustics of linguistic utterances visible. An important form of representation is the oscillogram, which shows the sound vibrations as a graph along a time axis. The oscillogram represents the actual vibration process of the sound, i.e. it records the vibration of the air particles during the transmission of the sound waves.

Spectrogram of the sounds [i, u, ɑ] in American English; the formants are clearly visible

Often one wants to show not just the pure sound vibrations but, at the same time, which frequencies and amplitudes the sound waves of a linguistic utterance have and how they change over time. This is possible if the acoustic information of the sound vibrations is converted by mathematical methods into a spectrogram or sonagram, a graphic representation of the frequency spectrum of a signal. In the spectrogram, the time course is shown on the x-axis (from left to right), while the frequency is shown on the y-axis (from bottom to top). The amplitude of the sound waves is represented by shades of gray: the darker an area, the greater the amplitude. The bands in a spectrogram with a higher degree of blackness represent frequency bands with higher energy, the so-called formants. In the spectrogram, the formants are the graphic representation of the vowel sounds.
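
Such a spectrogram can be computed with general-purpose signal-processing tools. The following sketch only illustrates the representation described above (time on the x-axis, frequency on the y-axis, amplitude as a grey value); the file name utterance.wav is hypothetical and the analysis settings are generic defaults, not the parameters of Praat or any other phonetics software.

```python
# Minimal sketch: compute and display a grey-scale spectrogram of a speech recording.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram
import matplotlib.pyplot as plt

fs, samples = wavfile.read("utterance.wav")          # hypothetical mono recording
f, t, Sxx = spectrogram(samples, fs=fs, nperseg=512, noverlap=384)

# Darker areas correspond to more energy (higher amplitude)
plt.pcolormesh(t, f, 10 * np.log10(Sxx + 1e-12), cmap="Greys")
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.ylim(0, 5000)                                    # range where vowel formants lie
plt.show()
```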

An important focus of acoustic phonetics is the description and analysis of speech utterances using spectrograms. Other topics in the field of acoustic phonetics, made possible primarily by the increasing use of computers, are automatic speech recognition and speech synthesis.

Auditory or perceptual phonetics

Anatomy of the ear, with the external auditory canal, the middle ear with hammer, anvil and stapes (in gray) and the inner ear with the cochlea (in purple)

Auditory or perceptual phonetics deals with the recording and processing of linguistic sounds in the auditory organ and the auditory nervous system.

The sound waves of speech sounds are conducted via the outer ear and the middle ear into the inner ear, where the actual organ of hearing, the organ of Corti, is located. How language is processed in the ear and in the human brain is the subject of various theories of hearing, including the resonance hypothesis and the traveling wave theory of Georg von Békésy.

An important research area of auditory phonetics is the relationship between the subjective perception of speech sounds and the physically measurable parameters of the acoustic signal, such as perceived loudness and the measurable sound pressure level (in decibels, dB), as well as pitch. Groundbreaking for perceptual phonetics was, for example, research on auditory speech perception by Bell Laboratories in the middle of the 20th century, which sought to find out how far the speech signal could be reduced without becoming unintelligible, in order to make better use of the capacity of telephone lines.
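
The relationship between measurable sound pressure and the sound pressure level in decibels mentioned here can be illustrated with a short sketch; the level is computed relative to the conventional reference pressure of 20 µPa (approximately the threshold of human hearing), and the example pressure values are invented.

```python
# Brief illustration of the decibel scale: sound pressure level compares a
# measured sound pressure with the reference pressure of 20 µPa.
import math

P_REF = 20e-6  # reference sound pressure in pascals

def sound_pressure_level(pressure_pa: float) -> float:
    """Sound pressure level in dB SPL for a given RMS sound pressure in Pa."""
    return 20 * math.log10(pressure_pa / P_REF)

print(sound_pressure_level(0.02))   # ≈ 60 dB (order of magnitude of conversational speech)
print(sound_pressure_level(0.2))    # ≈ 80 dB: ten times the pressure adds 20 dB
```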

An important finding of auditory speech perception research in phonetics is, among others, that a linguistic utterance consists of a continuous acoustic signal. In the early days of phonetics, it was expected that clearly delimitable segments (vowels, consonants) could be identified in measurements of linguistic utterances and also generated synthetically. However, as the experiments with the pattern playback synthesizer of the Haskins Laboratories showed, this was possible for vowels but not for consonants. Experiments on speech perception led to the finding that people divide linguistic input into clearly defined categories: if the linguistic input is varied slightly (e.g. from [ ] to [ ] to [ ]), test subjects perceive mainly three categories (categorical perception). If musical tones or noises are used as input, test subjects can identify significantly more subtle differences (continuous perception). From these and other experiments, the researchers at the Haskins Laboratories developed their motor theory of speech perception.

Further possible classifications of the sub-areas of phonetics

If the subareas of phonetics are classified according to their methodological approach, the following can be distinguished:

  • Descriptive phonetics: description and analysis of sounds through the use of the ear ("ear phonetics")
  • Symbol phonetics: Representation of what is heard with the International Phonetic Alphabet (IPA)
  • Instrumental or signal phonetics: Research into linguistic sounds using mechanical and electronic devices
  • Experimental phonetics: Research into the connection between a spoken utterance and the perception of test subjects in the experiment

Phonetics of individual languages

In addition to describing and measuring the processes of speech production and speech perception, phonetics also helps to record the sound inventory of individual languages. The sounds or phones of a language are first identified through the phonetician's observations and then systematically described: consonants are described and classified according to their manner and place of articulation, vowels according to tongue position and lip rounding. For example, among the consonants of German one finds the nasals [m], [n] and [ŋ] (as in the words Damm 'dam', dann 'then' and Drang 'urge'). These are articulated bilabially (with both lips), alveolarly (with the tongue at the alveolar ridge behind the upper incisors) or velarly (at the soft palate). In French, by contrast, alongside [m] and [n] (as in pomme, panne) one also finds the palatal nasal [ɲ] (as in campagne).
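
As a simple illustration, this kind of classification can be written down as a small sound inventory; the following sketch merely restates, in Python notation, the nasals and places of articulation mentioned in this section.

```python
# Illustrative sketch only: the nasal consonants discussed above, recorded as a
# small inventory with place and manner of articulation (data from the text).
nasal_inventory = {
    "German": {
        "m": {"place": "bilabial", "manner": "nasal"},   # as in Damm
        "n": {"place": "alveolar", "manner": "nasal"},   # as in dann
        "ŋ": {"place": "velar",    "manner": "nasal"},   # as in Drang
    },
    "French": {
        "m": {"place": "bilabial", "manner": "nasal"},   # as in pomme
        "n": {"place": "alveolar", "manner": "nasal"},   # as in panne
        "ɲ": {"place": "palatal",  "manner": "nasal"},   # as in campagne
    },
}

# List all places of articulation used for nasals in French
print({entry["place"] for entry in nasal_inventory["French"].values()})
```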

The languages of the world make different use of the potentially possible phones. For example, there are languages whose sound inventory comprises only a small number of vowels or consonants, such as the Papuan language Rotokas with its only six consonants and five vowels. The other extreme is the southern African Khoisan language !Xũ, which has a total of 141 phonemes, including a very large number of consonants, clicks and diphthongs.

Phones are represented in writing by a phonetic transcription, for which the International Phonetic Alphabet (IPA) is considered the standard.

The phonetics of many individual languages has been well researched; the linguists Peter Ladefoged and Ian Maddieson provide an overview of the sound systems of the world's languages in their book The Sounds of the World's Languages. Introductions to phonetics are available for many European languages, e.g. for German, English or French. A milestone in the description of the English language is the book An Outline of English Phonetics by the phonetician Daniel Jones, published in 1922.

Applied Phonetics

The results of general and systematic phonetics flow into subareas of applied phonetics, e.g. forensic phonetics or clinical phonetics, and also into language acquisition research.

In forensic phonetics, phonetic knowledge is used to investigate a speaker's typical voice and speech characteristics, for example for forensic questions in criminalistics and forensic technology or when preparing forensic reports for court. Findings from phonetics form the basis for forensic experts who have to decide in court, for example, whether a defendant is the speaker on an audio recording. The methods used range from the expert simply listening to the recording to technical analyses using, for example, a spectrograph.

Clinical phonetics is an application-oriented branch of the linguistic discipline of phonetics. It deals with the description of symptoms and the diagnosis of speech, language and voice disorders in adults and of disorders of language acquisition or language development in children. Clinical phonetics began to establish itself as an independent discipline in the late 1970s; fundamental to the discipline was the publication of David Crystal's book Clinical Linguistics in 1981. The goals of clinical phonetics include applying findings from phonetics to the treatment of speech and language disorders in patients and integrating clinical results into linguistic theory. It is also concerned with expanding the International Phonetic Alphabet (IPA) to include transcription methods that reproduce the speech of speech-impaired individuals more appropriately.

Phonetic foundations are also relevant for language acquisition research, which examines the acquisition of speaking ability and individual sound development in (especially healthy) children. Basic phonetic knowledge also flows into orthoepy, the study and regulation of the standardized pronunciation of a language, which is intended to be free from regional influences (standard pronunciation).

Literature

Introductions

  • Bernd Pompino-Marschall: Introduction to Phonetics. 3rd edition. Walter de Gruyter, Berlin 2009, ISBN 3-11-022480-1.
  • Henning Reetz, Allard Jongman: Phonetics. Transcription, Production, Acoustics, and Perception . Wiley-Blackwell, Oxford 2008, ISBN 978-0-631-23226-1 .
  • Richard Wiese: Phonetics and Phonology. UTB, Tübingen 2010, ISBN 978-3-8252-3354-9 .

Articulatory, acoustic and auditory phonetics

  • Fabian Bross: Basics of Acoustic Phonetics. In: Helikon. A Multidisciplinary Online Journal. No. 1, 2010, pp. 89-104 (online; PDF; 1.3 MB).
  • Keith Johnson: Acoustic and Auditory Phonetics. 3rd edition. Wiley-Blackwell, Oxford 2012, ISBN 978-1-4051-9466-2.
  • Peter Ladefoged: Elements of Acoustic Phonetics. Chicago 1996, ISBN 0-226-46764-3 .
  • Joachim MH Neppert: Elements of an acoustic phonetics. 4th edition. Hamburg 1999, ISBN 3-87548-154-2 .
  • Henning Reetz: Articulatory and Acoustic Phonetics. Trier 2003, ISBN 3-88476-617-1 .

Phonetics of individual languages

  • Thomas Becker: Introduction to the Phonetics and Phonology of German. Wissenschaftliche Buchgesellschaft, Darmstadt 2012, ISBN 978-3-534-24949-7.
  • Paul Carley, Inger Margrethe Mees, Beverley Collins: English phonetics and pronunciation practice . Routledge, London 2018, ISBN 978-1-138-88634-6 .
  • Peter Ladefoged, Ian Maddieson: The Sounds of the World's Languages. Wiley-Blackwell, Oxford 1996, ISBN 0-631-19814-8 .
  • Elissa Pustka: Introduction to the Phonetics and Phonology of French. 2nd edition. Erich Schmidt Verlag, Berlin 2016, ISBN 978-3-503-16631-2.

Web links

Commons: Phonetics - collection of images, videos and audio files
Wiktionary: Phonetics - explanations of meanings, word origins, synonyms, translations

References

  1. Etymology according to Wahrig, Deutsches Wörterbuch, keyword: Phonetik.
  2. Bernd Pompino-Marschall: Introduction to Phonetics. 3rd edition. Walter de Gruyter, Berlin 2009, ISBN 3-11-022480-1, p. 178.
  3. Hadumod Bußmann: Lexicon of Linguistics (= Kröner's pocket edition. Volume 452). Kröner, Stuttgart 1983, ISBN 3-520-45201-4, p. 385.
  4. William O'Grady, Michael Dobrovolsky, Francis Katamba: Contemporary Linguistics. An Introduction. 4th edition. Longman, London / New York 1997, ISBN 0-582-24691-1, p. 18 (English).
  5. R. H. Robins: A Short History of Linguistics. 4th edition. Longman, London / New York 1997, ISBN 0-582-24994-5, p. 175 (English).
  6. I. Ormos: Observations on Avicenna's Treatise on Phonetics. In: Acta Orientalia Academiae Scientiarum Hungaricae. Volume 39, 1985, pp. 45-84.
  7. I. Ormos: A key factor in Avicenna's theory of phonation. In: Acta Orientalia Academiae Scientiarum Hungaricae. Volume 40, 1986, pp. 283-292.
  8. Giulio Panconcelli-Calzia: Historical Numbers of Phonetics. Source Atlas of Phonetics. Benjamins, Amsterdam / Philadelphia 1994, ISBN 90-272-0957-X, p. 18.
  9. Bernd Pompino-Marschall: Introduction to Phonetics. 3rd edition. Walter de Gruyter, Berlin / New York 2009, ISBN 978-3-11-022480-1, pp. 5-6.
  10. Giulio Panconcelli-Calzia: Historical Numbers of Phonetics. Source Atlas of Phonetics. Benjamins, Amsterdam / Philadelphia 1994, ISBN 90-272-0957-X, p. 60.
  11. Giulio Panconcelli-Calzia: Historical Numbers of Phonetics. Source Atlas of Phonetics. Benjamins, Amsterdam / Philadelphia 1994, ISBN 90-272-0957-X, p. 54.
  12. David Crystal: The Cambridge Encyclopedia of Language. 2nd edition. Cambridge University Press, Cambridge 1997, ISBN 0-521-55967-7, pp. 160-161 (English).
  13. Bernd Pompino-Marschall: Introduction to Phonetics. 3rd edition. Walter de Gruyter, Berlin / New York 2009, ISBN 978-3-11-022480-1, pp. 6-7.
  14. Giulio Panconcelli-Calzia: Historical Numbers of Phonetics. Source Atlas of Phonetics. Benjamins, Amsterdam / Philadelphia 1994, ISBN 90-272-0957-X, pp. 77-78.
  15. Bernd Pompino-Marschall: Introduction to Phonetics. 3rd edition. Walter de Gruyter, Berlin / New York 2009, ISBN 978-3-11-022480-1, p. 18.
  16. Bernd Pompino-Marschall: Introduction to Phonetics. 3rd edition. Walter de Gruyter, Berlin / New York 2009, ISBN 978-3-11-022480-1, pp. 78-85.
  17. Fabian Bross: Basics of Acoustic Phonetics. In: Helikon. A Multidisciplinary Online Journal. No. 1, 2010, p. 89 (online; PDF; 1.3 MB).
  18. Bernd Pompino-Marschall: Introduction to Phonetics. 3rd edition. Walter de Gruyter, Berlin / New York 2009, ISBN 978-3-11-022480-1, pp. 87-91.
  19. Fabian Bross: Basics of Acoustic Phonetics. In: Helikon. A Multidisciplinary Online Journal. No. 1, 2010, pp. 94-95 (online; PDF; 1.3 MB).
  20. Joachim MH Neppert: Elements of an acoustic phonetics. 4th edition. Hamburg 1999, ISBN 3-87548-154-2, p. 98.
  21. Henning Reetz, Allard Jongman: Phonetics. Transcription, Production, Acoustics, and Perception. Wiley-Blackwell, Oxford 2009, ISBN 978-0-631-23226-1, pp. 155-156 (English).
  22. Bernd Pompino-Marschall: Introduction to Phonetics. 3rd edition. Walter de Gruyter, Berlin / New York 2009, ISBN 978-3-11-022480-1, p. 108.
  23. Bernd Pompino-Marschall: Introduction to Phonetics. 3rd edition. Walter de Gruyter, Berlin / New York 2009, ISBN 978-3-11-022480-1, pp. 108-109, 132.
  24. Bernd Pompino-Marschall: Introduction to Phonetics. 3rd edition. Walter de Gruyter, Berlin / New York 2009, ISBN 978-3-11-022480-1, pp. 145-147, 153-158.
  25. Bernd Pompino-Marschall: Introduction to Phonetics. 3rd edition. Walter de Gruyter, Berlin / New York 2009, ISBN 978-3-11-022480-1, pp. 160-171.
  26. Henning Reetz, Allard Jongman: Phonetics. Transcription, Production, Acoustics, and Perception. Wiley-Blackwell, Oxford 2009, ISBN 978-0-631-23226-1, pp. 263-273 (English).
  27. Bernd Pompino-Marschall: Introduction to Phonetics. 3rd edition. Walter de Gruyter, Berlin / New York 2009, ISBN 978-3-11-022480-1, pp. 2-3.
  28. Bernd Pompino-Marschall: Introduction to Phonetics. 3rd edition. Walter de Gruyter, Berlin 2009, ISBN 3-11-022480-1, pp. 177-183, 221.
  29. Bernd Pompino-Marschall: Introduction to Phonetics. 3rd edition. Walter de Gruyter, Berlin 2009, ISBN 3-11-022480-1, p. 193.
  30. Bernd Pompino-Marschall: Introduction to Phonetics. 3rd edition. Walter de Gruyter, Berlin 2009, ISBN 3-11-022480-1, pp. 257-260.
  31. Peter Ladefoged, Ian Maddieson: The Sounds of the World's Languages. Wiley-Blackwell, Oxford 1996, ISBN 0-631-19814-8.
  32. Thomas Becker: Introduction to the Phonetics and Phonology of German. Wissenschaftliche Buchgesellschaft, Darmstadt 2012, ISBN 978-3-534-24949-7.
  33. Paul Carley, Inger Margrethe Mees, Beverley Collins: English phonetics and pronunciation practice. Routledge, London 2018, ISBN 978-1-138-88634-6.
  34. Elissa Pustka: Introduction to the Phonetics and Phonology of French. 2nd edition. Erich Schmidt Verlag, Berlin 2016, ISBN 978-3-503-16631-2.
  35. Daniel Jones: An Outline of English Phonetics. Teubner, Leipzig / Berlin 1922.
  36. Geoffrey Stewart Morrison, Ewald Enzinger: Introduction to forensic voice comparison. In: William F. Katz, Peter F. Assmann (Eds.): The Routledge Handbook of Phonetics. Routledge, London / New York 2019, ISBN 978-1-138-64833-3, pp. 599-634.
  37. David Crystal: Clinical Linguistics (= Disorders of Human Communication, Vol. 3). Springer, Vienna et al. 1981, ISBN 3-211-81622-4.
  38. William F. Katz: New horizons in clinical phonetics. In: William F. Katz, Peter F. Assmann (Eds.): The Routledge Handbook of Phonetics. Routledge, London / New York 2019, ISBN 978-1-138-64833-3, p. 527.