Universals of music perception


Universals of music perception are those elements of music perception and processing that are understood to be innate, i.e. culture-independent.

The view is often held that music is a universal form of expression. This implies the assumption that music has universal characteristics, i.e. characteristics common to almost all musical systems in the world, and that there are biological preconditions for the processing of music. It has been controversial since antiquity whether universals are constructions that are not real, or whether they can be recognized as facts; this question is known as the problem of universals.

A characteristic is called universal when it is not learned but appears spontaneously because it is latent in all normal persons, i.e. innate (Dissanayake, 2001). From this perspective, music is not a universal language; rather, the universals of music perception and processing describe the preconditions under which the characteristics of the music of different cultures develop.

Framework

Influences on hearing perception

Music perception rests on a number of different influences, some of which are almost universal, while others depend on personal or group-specific characteristics and attitudes:

  1. Physical influences, i.e. the physical sound signal and the way it is transmitted to the ear, as well as physical boundary conditions and laws (e.g. the uncertainty relation between frequency and time resolution). These influences are universal.
  2. Anatomical and physiological influences, e.g. the structure and function of the outer ear, middle ear and inner ear, the properties and behavior of nerve cells, and the "basic" structure and wiring of the brain. These influences are innate and apply to all people in general; exceptions occur in people with hearing impairments or congenital anatomical deviations. These influences do not apply to animals, or apply in a different form.
  3. Early childhood influences. In order to understand language, a toddler must learn to analyze the abundance of nerve impulses delivered by the inner ear and the brain areas behind it, so as to recognize the patterns of speech-relevant sounds. The analysis techniques learned in this way form the basis of hearing and are also used for music perception. Some basic components of language (voiced and unvoiced sounds, changes in pitch and volume) are used by most cultures, so some foundations of listening are certainly cross-cultural; details may differ between cultures.
  4. Experienced listening. Listening experiences collected later in life are used to classify and evaluate what is heard. These include, for example, the development of personal taste or the association of auditory events with personal experiences. Such influences are highly individual, at best group-specific, and the perceptions they shape cannot simply be generalized. Statements valid across all individuals can only be obtained in this area by statistical methods, and for general statements groups as heterogeneous as possible would have to be surveyed.

Only statements based on physical conditions, human anatomy, the basic signal-processing methods of the human ear and brain, and cross-group and cross-cultural aspects can be considered "universal".

Perception of sound signals

The sound signals arriving at the human ear are filtered and preprocessed by the outer, middle and inner ear, and by the subsequent signal processing in the brain, before they can be perceived. Because of this preprocessing, the perceived properties of a sound (e.g. the perceived pitch, timbre or loudness) can differ from its physically measured properties (e.g. the measured fundamental frequency, sound pressure level or spectrum). For piano tones, for example, the pitch determined with a frequency meter deviates from the pitch heard (see also stretched tuning). A frequency component with a given level may at times be perceived as very dominant and at other times not be heard at all (see also masking).

This means that if statements are to be made about the perception of music signals, a physical analysis of the sound is not sufficient; the processing of the sound in the human ear must also be taken into account. This requires psychoacoustic examinations.

Properties of music signals

One-dimensional vibrators (e.g. string and wind instruments)

Melody voices are often played on musical instruments that can be described as "one-dimensional vibrators". These include string instruments (a string vibrates up and down) and wind instruments (an air column vibrates back and forth in the pipe). The vibrations and the radiated sound are almost periodic. To a first approximation, the spectrum of these periodic vibrations can be described by a fundamental and its overtones, the overtones lying at integer multiples of the fundamental frequency. The perceived pitch then corresponds to the pitch of the fundamental.
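
To make this first approximation concrete, here is a minimal Python sketch (a model illustration with assumed example values, not a measurement of any real instrument): it sums a fundamental and overtones at exact integer multiples of the fundamental frequency.

```python
# Minimal sketch of an idealized "one-dimensional vibrator" tone:
# a fundamental f0 plus overtones at exact integer multiples of f0.
import numpy as np

def harmonic_tone(f0, amps, duration=1.0, sr=44100):
    """Sum of sinusoids at f0, 2*f0, 3*f0, ... with the given amplitudes."""
    t = np.arange(int(duration * sr)) / sr
    return sum(a * np.sin(2 * np.pi * (k + 1) * f0 * t)
               for k, a in enumerate(amps))

# A 220 Hz tone with six partials whose amplitudes fall off like 1/k
# (an assumed, roughly string-like spectrum); the perceived pitch is 220 Hz.
signal = harmonic_tone(220.0, [1.0 / k for k in range(1, 7)])
```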

On closer inspection, the frequencies of the fundamental and overtones of real musical instruments do not always stand in exact small-whole-number ratios. This gives rise to beats that make the instrument's sound "fuller".

In real musical instruments, non-periodic or noise components are added to the periodic vibrations (e.g. of the string or air column). Examples are the attack noises of string instruments and the blowing noises of wind instruments and organ pipes. These noises can be partly decisive for the sound impression: the sound of a panpipe would hardly be recognizable without the breath noise that arises when it is blown.

With many musical instruments, the spectrum changes while a tone is being played. The spectral changes during the build-up of the string or air-column vibration are often decisive for the sound of an instrument: if the first tenths of a second are cut off, many musical instruments can hardly be identified.

In addition, the frequency of a tone can change while it is being played. There are periodic frequency changes (e.g. vibrato on flutes) and non-periodic frequency changes (e.g. on the piano, the pitch is slightly higher at the moment the key is struck than during the decay).
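
A periodic frequency change such as vibrato can be sketched as follows (depth and rate are assumed example values; the phase is the integral of the instantaneous frequency):

```python
import numpy as np

def vibrato_tone(f0, depth_hz, rate_hz, duration=1.0, sr=44100):
    """Sinusoid whose instantaneous frequency is f0 + depth_hz*sin(2*pi*rate_hz*t)."""
    t = np.arange(int(duration * sr)) / sr
    # Integrating the instantaneous frequency gives the phase:
    phase = 2 * np.pi * (f0 * t
                         - depth_hz / (2 * np.pi * rate_hz)
                         * np.cos(2 * np.pi * rate_hz * t))
    return np.sin(phase)

tone = vibrato_tone(440.0, depth_hz=5.0, rate_hz=6.0)  # a flute-like vibrato
```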

Multi-dimensional vibrators (e.g. drums and bells)

Rhythm instruments (drums, timpani, cymbals) and bells are multi-dimensional vibrators: vibrations spread over a surface (drumhead, metal shell), and different vibration zones can form on the vibrating surface. The total vibration and the radiated sound are no longer periodic. Corresponding to the different excited modes, the sound signal contains not only the frequencies of a fundamental and its integer multiples but also components at non-integer multiples. The excited frequencies depend on the material, shape and dimensions of the vibrating body. If the vibrations do not deviate too much from periodic vibrations, or if there is a pronounced spectral maximum at one frequency, pitches can still be assigned to these sounds (e.g. for timpani and bells). If the deviations from periodic vibration are strong, no pitch can be assigned (e.g. for cymbals).

Analysis of music signals

There are several approaches to analyzing music signals:

  • Analysis of the vibration mechanics: an attempt is made to measure or model the vibration behavior of the individual components of a musical instrument (e.g. the vibration of the strings, the distribution of vibrations over the body, the build-up and decay of mechanical vibrations).
    Example: Which vibrations do the strings and body of a Stradivarius perform, and how does the distribution of vibrations on the body of a Stradivarius differ from that of other violins?
  • Signal-theoretic analysis: an attempt is made to analyze more precisely the acoustic signal radiated by a musical instrument (e.g. the temporal course of the spectrum, the level, the fundamental frequency).
    Example: What does the acoustic signal of a Stradivarius look like? How do the fundamental frequency and spectrum evolve during a Stradivarius tone, and how do they differ from other violins?
  • Psychoacoustic analysis: an attempt is made to analyze the perceptions a person has when the musical instrument is played (e.g. perceived pitch, perceived loudness, perceived timbre).
    Example: How is the sound of a Stradivarius perceived? Which components of sound perception are important for a Stradivarius sound, and how does the perception differ from other violins?

Since musical instruments can perform relatively complex vibrations, and the acoustic signals of musical instruments are anything but simple in structure, analyzing the vibration mechanics or the acoustic signal can be a mathematically demanding task. The same applies to the analysis of the perceptions these signals evoke.

Physiological foundations of music perception

Listening area

The range in which music can be perceived is limited by the human auditory field: humans can perceive frequencies between about 16 Hz and 20 kHz. The frequency range used for music, however, is essentially limited to frequencies between 40 Hz and 10 kHz.

Human hearing is least sensitive at the upper and lower limits of the perceptible frequency range and most sensitive in the range between 1000 and 5000 Hz, where the frequency ranges important for speech understanding lie.

Pitch perception

Pitch perception and frequency resolution in the audible range are closely tied to the physiology of the inner ear and the auditory brain. The inner ear carries out a frequency analysis of the incoming signal by filtering out different frequencies along the row of hair cells in the organ of Corti of the cochlea. This is where the synapses (connection points) of the nerve cells are located that transmit the signals for the respective frequencies to the brain for processing.

Two different mechanisms are available to the ear for pitch perception:

[Figure: Relationship between frequency and perceived pitch (pitch in mel); dashed line: evaluation of the oscillation period, dotted line: evaluation of the place of excitation on the cochlea.]

  • Evaluation of the oscillation period of a tone (dashed line in the figure). To evaluate the oscillation period, the excitation patterns of the nerve cells in the auditory midbrain (inferior colliculus) are examined for periodicities. The perceived pitch then corresponds to the fundamental frequency of the tone. This evaluation is only possible as long as the ear can still follow the period of the signal, which is the case, with individual variation, up to frequencies between 800 Hz (tone g²) and 1600 Hz (tone g³).
  • Evaluation of the place on the cochlea where nerve cells are excited (dotted line in the figure). The perceived pitch results from the distance between the position of maximum excitation along the row of hair cells and the end of the cochlea. The place on the cochlea determines the pitch when the ear can no longer follow the period of the signal, i.e. for fundamental frequencies above 800 to 1600 Hz.

These two mechanisms have different effects on the perception of tone intervals.

  • If the period of the tone can be evaluated, the perceived pitch corresponds to the fundamental frequency of the tone. In a tone interval, the fundamental frequencies of the tones differ by a certain factor, and this is perceived as a similar change in pitch regardless of register. That means: tone intervals and melodies sound almost the same at different pitches.
  • If the perceived pitch is determined by the place of maximum excitation on the cochlea, the relationship between perceived pitch and frequency becomes non-linear: the perceived pitch changes much less for the same frequency change than with the first mechanism. Tone intervals above 800 to 1600 Hz are perceived as smaller than their frequency ratio would suggest. That means: melodies in very high registers (above g² or g³) sound different than in lower registers, and the higher the pitch beyond this limit, the smaller the perceived intervals.

When pitch is perceived at lower frequencies, the composition of the tone from fundamental and overtones is irrelevant; only the period of the tone matters. The period of a tone, and thus the perceived pitch, is preserved even if the tone consists only of overtones and the fundamental is missing (residual tone).
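
The residual-tone effect can be checked numerically; a small sketch (assumed example values):

```python
import numpy as np

# A tone built only from the 2nd to 5th harmonics of 100 Hz: the 100 Hz
# fundamental itself is absent, yet the waveform still repeats every 10 ms,
# so the period (and hence the perceived residual pitch) is that of the
# missing 100 Hz fundamental.
sr = 44100
t = np.arange(sr) / sr
residual = sum(np.sin(2 * np.pi * k * 100.0 * t) for k in range(2, 6))

period = sr // 100  # 441 samples = 10 ms
print(np.allclose(residual[period:], residual[:sr - period]))  # True
```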

Pitch resolution

The achievable frequency and pitch resolution depends on the packing density of nerve-cell connections along the row of hair cells and on the brain's ability to process the signals with single-cell precision.

  • At low frequencies near the lower limit of hearing, a musical octave corresponds to less than a millimeter along the row of hair cells, so the possible pitch resolution is relatively low. Below 500 Hz, humans distinguish around 270 different pitches, spaced at a constant 1.8 Hz.
  • With increasing frequency, the length of the row of hair cells available for evaluating one octave increases, and the possible pitch resolution increases accordingly. From about 500 Hz it reaches its maximum, with roughly 6 mm of the hair-cell row per octave.
  • At medium and higher frequencies, above 500 Hz and up to about 3000 Hz, the length of the hair-cell row per octave, and thus the achievable pitch resolution, remains roughly constant (about 6 mm per octave). From 500 Hz to 15,000 Hz about 350 logarithmic pitch steps can be distinguished; experienced musicians can still distinguish pitch differences of about 1/33 of a semitone (3 cents), which at 500 Hz corresponds to a frequency difference of about 1 Hz (see the sketch after this list).
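
The cent values above can be verified with a few lines of Python (a sketch; the 500 Hz reference is the example value from the list):

```python
import math

def cents(f1, f2):
    """Size of the interval between two frequencies in cents (1 semitone = 100 cents)."""
    return 1200 * math.log2(f2 / f1)

# Around 500 Hz, a frequency difference of about 1 Hz is roughly 3 cents:
print(cents(500.0, 501.0))               # ~3.46 cents
print(500.0 * 2 ** (3 / 1200) - 500.0)   # 3 cents above 500 Hz: ~0.87 Hz
```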

The achievable frequency resolution sets limits on how finely the brain can categorize pitches, more precisely on how many tones the octave can be divided into. There is no direct connection between discrimination ability and the categorization of pitches into scales: these categories are much coarser and are mostly learned, aligned with consonant intervals.

Perception of musical voices

The physiology and processing steps of the human inner ear affect the perception of pieces of music. An essential effect of the inner ear is the so-called masking effect: if individual tones predominate in strength within a frequency range, the mechanics of the inner ear excite not only the nerve cells responsible for these tones but, to a considerable extent, also nerve cells in their neighborhood. Since perceived loudness depends on the overall excitation of the nerve cells in the inner ear, a melody voice is therefore perceived as louder than it is physically.

Music components without a single-tone character (chordal accompaniment, rhythm instruments) tend to stimulate a broad frequency range with their spectrum, so the masking effect recruits hardly any additional nerve cells, and there is hardly any increase in perceived loudness.

This helps a melody voice to remain clearly audible within the accompaniment, even if its sound level is not significantly higher than that of the accompanying instruments.

Perception of rhythms

The nerve cells of the inner ear have the property that their excitation decreases under constant stimulation. After a short period of rest they regenerate and emit particularly strong signals when stimulated again.

This effect emphasizes the rhythm of pieces of music. Instruments that carry the rhythm often sound only briefly and in frequency ranges where other musical voices are not currently present (e.g. the low bass range of a bass drum, the overtone-rich range of cymbals, or a rhythmic accompaniment one or more octaves below or above the melody voice).

Between the rhythmic beats these frequency ranges are relatively quiet, so the nerve cells responsible for them can recover. When a rhythmic beat arrives, these nerve cells then generate particularly strong signals.

This contributes to rhythm instruments being perceived very clearly, even if their sound level is not significantly higher than that of the other instruments.

Psychoacoustic basics of music perception

Physics and psychoacoustics of scales

The choice of scales is linked to the perception of amplitude and frequency fluctuations:

  • If the amplitude or frequency of a sound fluctuates very slowly (in the range of a few hertz), these fluctuations are perceived as changes in the loudness or pitch of the sound.
  • Faster fluctuations (above about 10 hertz) are perceived as a rough, "hard", less pleasant sound.
  • If the fluctuation frequency lies well above the threshold of tone perception (well above 20 hertz), the fluctuations can be heard as difference tones, which often give the sound a less pleasant character (a small numeric sketch follows this list).
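
A small numeric sketch of these ranges (the thresholds are the rough values named above; real perception varies between listeners):

```python
def fluctuation_percept(f1, f2):
    """Rough classification of how the fluctuation between two simultaneous
    sine tones is perceived, using the approximate thresholds named above."""
    rate = abs(f1 - f2)  # fluctuation (beat) rate in Hz
    if rate < 10:
        return f"{rate:.1f} Hz: slow fluctuation, heard as a loudness/pitch change"
    if rate <= 20:
        return f"{rate:.1f} Hz: rough, 'hard', less pleasant sound"
    return f"{rate:.1f} Hz: can be heard as a difference tone"

print(fluctuation_percept(440.0, 443.0))   # slow beating
print(fluctuation_percept(440.0, 475.0))   # difference-tone region
```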

The tones used in a scale should sound pleasant when they sound together. This holds not only when polyphony is used as a musical means of expression but also for monophonic music, because in a reverberant environment successive tones briefly sound simultaneously: the reverberation of the previous tone has not yet died away when the next tone begins.

[Figure: Amplitude fluctuations of chords: 1. C major, just intonation (blue curve); 2. C major, equal temperament (green curve); 3. C major, scale with steps that are too small (yellow curve); 4. dissonance C-F♯-B (red curve).]

If tones are to sound pleasant together, their combination should not produce strong and rapid amplitude fluctuations. This significantly influences the choice of a scale:

  • If the tones of a scale are in small-whole-number frequency ratios, a constant residual tone arises when they sound together. The residual tone is usually much lower than the individual tones presented; the individual tones are interpreted as overtones of the residual tone. The amplitude and frequency of the sound mixture remain constant. An example of such a scale is just intonation.
    Example: In a justly tuned major chord, the tone frequencies are in the ratio 4:5:6. The result is a residual tone two octaves lower; the notes of the chord become the 4th, 5th and 6th harmonics of the residual tone. The envelope of such a chord is constant (blue curve, top). Justly tuned major chords are generally judged to be euphonious.

  • If the tones of a scale deviate slightly from small-whole-number ratios, a residual tone with beats arises when they sound together. The beat frequency results from the size of the deviations. Examples of such scales are the equal temperament mostly used today and the earlier well-tempered tunings.
    Example: In an equal-tempered major chord, the individual tones deviate by a few hertz from just intonation. The envelope changes over time (green curve, second from top). These amplitude changes are slow enough not to be unpleasant, but an equal-tempered major chord no longer sounds quite as pure as a justly tuned one.

  • If the tones deviate strongly from small-whole-number ratios, very strong and rapid amplitude changes (rapid beats) occur when they sound together. The result is a rough, hard, rather unpleasant sound.
    With larger deviations from integer frequency ratios, the envelope of the chord changes quickly and abruptly (yellow curve, third from top), similar to the behavior of a dissonance (red curve, bottom). Such chords tend to be heard as dissonant.

The consequence is that scales are preferred whose tones stand in small-whole-number ratios to one another, or at least come close to them, because they produce the more pleasant sounds when played together. A small sketch comparing just and equal-tempered chords follows.
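
The envelope argument can be illustrated with synthesized chords (a sketch with assumed example values: harmonic tones with 1/k amplitudes, a just C major chord on 264 Hz versus an equal-tempered chord on the same root; the equal-tempered version should show the larger slow envelope fluctuation):

```python
import numpy as np

sr, dur = 44100, 2.0
t = np.arange(int(sr * dur)) / sr

def complex_tone(f0, n=6):
    """Harmonic tone with n partials at amplitudes 1/k (crude string-like spectrum)."""
    return sum(np.sin(2 * np.pi * k * f0 * t) / k for k in range(1, n + 1))

just  = sum(complex_tone(f) for f in (264.0, 330.0, 396.0))          # exactly 4:5:6
equal = sum(complex_tone(264.0 * 2 ** (s / 12)) for s in (0, 4, 7))  # tempered C-E-G

def rms_envelope(x, win=441):
    """RMS level in consecutive 10 ms windows."""
    x = x[: len(x) // win * win].reshape(-1, win)
    return np.sqrt((x ** 2).mean(axis=1))

for name, sig in (("just", just), ("equal", equal)):
    env = rms_envelope(sig)
    print(name, round(float(env.max() - env.min()), 3))  # spread of the envelope
```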

Universals of pitch and melody perception

Discrete pitch categories

The perception of discrete pitches is likely to be universal; even children seem predisposed to sing discrete pitches. This categorical pitch perception exists in all cultures and allows the musical message to be understood despite difficulties such as a noisy environment or poor intonation (Dowling & Harwood, 1986).

The purpose of forming categories is to reduce the amount of data to be processed and thus prevent overload when listening to and making music. The specific categories themselves, however, are learned and therefore differ from culture to culture.

Chroma and Octave Identity

According to the two-component theory of Géza Révész (1913), there is, in addition to the dimension of pitch height, a second dimension, the chroma, and with it the octave identity, which is also often regarded as a universal. Chroma denotes the cyclically recurring tonal character of tones an octave apart. This becomes clear, for example, in the fact that different variants of a melody are perceived as equivalent if the entire melody, or only individual tones of it, are shifted by an octave while the contour is retained. Without octave identity, every tone in the entire hearing range would have its own tonal character, which would mean enormous complexity; thanks to octave identity, the brain only has to identify as many tones as there are within one octave. The division into octaves thus orders and structures. All highly developed musical cultures give octave-spaced tones the same name. Octave identity is also perceived by monkeys, and recent brain research shows that other mammals likewise have an octave mapping, namely in the auditory thalamus, i.e. between the brain stem and the cerebrum (Braun and Chaloupka, 2005).

Intervals

In most cultures there are, besides the octave, also fifths and fourths. The brain apparently favors these categories because tone combinations whose frequency ratios are small integers generate additional periodic patterns in the nerve signals, in contrast to combinations with more complicated ratios (the octave has a frequency ratio of 1:2, the fifth 2:3, the fourth 3:4, whereas the tritone has 32:45). This is also suggested by experiments in which children and adults remembered tone sequences better when their tones stood in small frequency ratios, e.g. sequences with fifths and fourths better than ones with the tritone (Trehub, 2000). A small sketch of these common periodicities follows.
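
The periodicity argument can be made concrete: if the higher tone is (p/q) times the lower tone, with p/q in lowest terms, the combined waveform repeats at f_low/q, i.e. fast for small-integer ratios and very slowly for complicated ones. A sketch (the 200 Hz lower tone is an assumed example value):

```python
from fractions import Fraction

def common_repetition_rate(f_low, ratio):
    """Rate (Hz) at which the combined waveform of two tones repeats,
    given the lower tone and the reduced ratio f_high / f_low."""
    r = Fraction(ratio)
    return f_low / r.denominator  # one full pattern = `denominator` low-tone cycles

# Lower tone at an assumed 200 Hz:
for name, r in [("octave", Fraction(2, 1)), ("fifth", Fraction(3, 2)),
                ("fourth", Fraction(4, 3)), ("tritone", Fraction(45, 32))]:
    print(f"{name:8s}{common_repetition_rate(200.0, r):8.2f} Hz")
# octave 200.00, fifth 100.00, fourth 66.67, tritone only 6.25 Hz
```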

Exponential growth in frequency

When intervals are stacked, the frequency ratio grows exponentially with the number of intervals.

Example:
  Interval     Frequency ratio
  1 octave     1 : 2
  2 octaves    1 : 4
  3 octaves    1 : 8
  ...
  k octaves    1 : 2^k

See: interval space.

Conversely, pitch is related logarithmically to frequency. The resulting psychophysical scale is universal (Justus and Bharucha, 2002).
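
In formulas (standard psychophysical notation, ours rather than the cited authors'; f_ref is an arbitrary reference frequency):

```latex
% k stacked octaves multiply the frequency by 2^k; inverting this gives the
% logarithmic pitch scale (here expressed in octaves and in semitones).
\[
  f(k) = f_{\mathrm{ref}} \cdot 2^{k},
  \qquad
  k = \log_2 \frac{f}{f_{\mathrm{ref}}},
  \qquad
  n_{\mathrm{semitones}} = 12 \, \log_2 \frac{f}{f_{\mathrm{ref}}}
\]
```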

Scales and tone hierarchies

In all cultures scales have a relatively small number of degrees; almost everywhere they consist of five to seven notes per octave. This fits well with the short-term memory limit for categories being around seven (Miller, 1956).

How many steps the octave is divided into also depends on how finely tones can be categorized.

There are also hardly any equidistant scales; in other words, the intervals between adjacent scale degrees are almost never all the same size, e.g. the diatonic scale contains whole tones and semitones. In this way tonal reference points can be established: the tones stand in different relationships to the tonic, and the listener can at any moment sense where the music is in relation to its tonal center. This can create a perception of tension and resolution, which enlarges the possibilities of musical expression and experience (Sloboda, 1985).

These different relationships to the tonic create tone hierarchies, which are also found in almost every culture; that is, the notes of the scale have different functions and occur with different frequencies and at different positions in a melody. The specific tone hierarchies vary between cultures, however (Justus & Bharucha, 2002). There seems to be a universal processing predisposition for scales with unequal steps: such scales are easier to encode and retain than scales with equal intervals. This can already be seen in small children:

Trehub (2000) presented children with three scales (the major scale, a novel unequal-step scale, and an equidistant scale) and investigated whether they could tell when a note of the scale was shifted by three or four semitones. All three scales were presumably unknown to the children, yet they performed significantly better on the two scales with unequal steps than on the equal-step scale.

Melodic contour

Another universal of pitch and melody perception concerns the melodic contour. Listeners tend to process global information about the relationships between tones rather than precise absolute stimuli such as specific pitches or intervals (Trehub, 2000): after hearing an unknown melody, usually little more than its contour, i.e. the changes of pitch direction, is remembered. Furthermore, different tone sequences with the same contour are perceived as related. Even in toddlerhood the melodic contour is of great importance for the representation of melodies, which points to a universal. Experiments by Trehub (2000) show that toddlers treat a transposed melody (intervals unchanged) as identical to the original. Even if the intervals change but the contour is retained, the melody is treated as familiar rather than new; but if even one note is shifted so that the contour changes, the melody appears unknown to children and adults alike.

Grouping

The use of auditory grouping strategies is also universal. Organizing tones into perceptual units increases economy and efficiency in processing music, which is limited by short-term memory capacity. Tones are grouped and structured according to certain design principles, but it is questionable whether these principles are themselves universal: since musical perception is also shaped by learned categories and schemata, other ways of listening are always possible (Motte-Haber, 1996).

Universals of rhythm perception

Grouping and finding regularities

Grouping events into perceptual units in order to reduce information is also one of the universals of rhythm perception. This can be seen, for example, in the fact that we usually combine a series of beats into groups of two or three beats of different weight (Fricke, 1997).

In this context listeners also try to find a regular pulse around which the other events can be organized; regularities are actively sought for economical processing. This is confirmed, among other things, by experiments of Drake and Bertrand (2001) in which synchronization exceeded 90% when people were asked to tap the beat to music, and which show that even infants can adapt their sucking rate to the rate of an auditory sequence.

Organization on different levels

Rhythm is always organized on several levels: rhythmic patterns are superimposed on the regular pulse mentioned above, and the pulse is subdivided by asymmetrically placed sounds.

The details of rhythmic organization differ from culture to culture. One of the simplest rhythms is the dactyl (one long interval followed by two short ones); other cultures, for example in southern Africa or India, use more complex rhythms, where the number of beats within the pulse can be large and odd, e.g. 7 to 17 beats are common in India.

The asymmetry of the rhythmic patterns creates a sense of location within the measure. Stresses emerge that are essential to the music of almost all cultures; these reference points form the basis for a sense of movement and rest and also provide cues for coordinating the various parts in polyphonic music (Sloboda, 1985).

References

  1. Author unknown: The mechanism of octave circularity in the auditory brain (after 2005), at neuroscience-of-music.se

Literature

General

  • Ellen Dissanayake: Art as a human universal. An adaptionist view. In: Peter M. Hejl (Ed.): Universals and Constructivism. Suhrkamp, Frankfurt/M. 2001, ISBN 3-518-29104-1, pp. 206-234.
  • C. Drake, D. Bertrand: The quest for universals in temporal processing in music. In: Robert J. Zatorre et al. (Ed.): The biological foundations of music (Annals of the New York Academy of Sciences; vol. 930). Academy of Sciences, New York 2001, ISBN 1-573-31307-6, pp. 17-27.
  • W. Jay Dowling, Dane L. Harwood: Music cognition. Academic Press, Orlando, FL 1986, ISBN 0-122-21430-7.
  • J. P. Fricke: Rhythm as a factor of order. Information-psychological conditions for the organization of time. In: Axel Beer et al. (Ed.): Festschrift Christoph-Hellmut Mahling for his 65th birthday. Schneider, Tutzing 1997, ISBN 3-795-20900-5, pp. 397-412.
  • Robert Jourdain: The well-tempered brain. How music arises and works in the head. Spektrum Akademischer Verlag, Heidelberg 2001, ISBN 3-827-41122-X.
  • T. C. Justus, J. J. Bharucha: Music perception and cognition. In: Harold Pashler (Ed.): Stevens' Handbook of Experimental Psychology. Wiley, New York 2002.
  • G. A. Miller: The magical number seven, plus or minus two. Some limits on our capacity for processing information. In: Psychological Review, 63 (1956), pp. 81-97.
  • Helga de la Motte-Haber: Handbook of Music Psychology. Laaber-Verlag, Laaber 2002.
  • Géza Révész: On the foundation of tone psychology. Veit, Leipzig 1913.
  • John A. Sloboda: The musical mind. The cognitive psychology of music. Oxford University Press, Oxford 2003, ISBN 0-198-52128-6.
  • S. Trehub: Human processing predispositions and musical universals. In: Nils L. Wallin et al. (Ed.): The origins of music. Papers from a workshop on "The origins of music" held in Fiesole, Italy, May 1997. MIT Press, Cambridge, MA 2001, ISBN 0-262-23206-5.

Pitch perception

  • Daniel Bendor, Xiaoqin Wang: The neuronal representation of pitch in primate auditory cortex. In: Nature. Vol. 436, No. 7054, 2005, pp. 1161-1165, doi:10.1038/nature03867.
  • Martin Braun, Vladimir Chaloupka: Carbamazepine induced pitch shift and octave space representation. In: Hearing Research. Vol. 210, No. 1/2, 2005, pp. 85-92, doi:10.1016/j.heares.2005.05.015.
  • Ulrich W. Biebel, Gerald Langner: Evidence for "pitch neurons" in the auditory midbrain of chinchillas. In: Josef Syka (Ed.): Acoustical Signal Processing in the Central Auditory System. Plenum Press, New York 1997, ISBN 0-306-45608-7, pp. 263-269, doi:10.1007/978-1-4419-8712-9_24.
  • Ulrich W. Biebel, Gerald Langner: Evidence for interactions across frequency channels in the inferior colliculus of awake chinchilla. In: Hearing Research. Vol. 169, No. 1/2, 2002, pp. 151-168, doi:10.1016/S0378-5955(02)00459-8.
  • Adrian Rees, Ali Sarbaz: The influence of intrinsic oscillations on the encoding of amplitude modulation by neurons in the inferior colliculus. In: Josef Syka (Ed.): Acoustical Signal Processing in the Central Auditory System. Plenum Press, New York 1997, ISBN 0-306-45608-7, pp. 239-252, doi:10.1007/978-1-4419-8712-9_22.