Computer music

Computer music is music whose creation necessarily or essentially involves the use of computer systems. The term electronic music, by contrast, refers more generally to the electronic generation of sound signals.

History

As early as the 17th century there was a growing conviction that music is the art of skillfully ordering numbers. One of the earliest devices in this tradition is the Arca Musarithmica, described in the Musurgia Universalis of the Jesuit priest and music scholar Athanasius Kircher, printed in 1650. Writings on mechanical musical instruments have likewise been handed down from Gaspar Schott.

After 1770 the center of musical automata shifted from Germany to France, England and Switzerland. The inventions of the Frenchman Vaucanson are worth mentioning. The mechanical automaton was also the subject of several of E. T. A. Hoffmann's stories.

Among other things, Wolfgang Amadeus Mozart is credited with a musical dice game (KV 294d), a "guide to composing waltzes or Schleifer with two dice ...". In it, 3/8 measures in piano setting are listed in a table; measures are selected according to the totals thrown with the dice and notated one after the other, resulting in a finished composition.
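
The selection principle can be sketched in a few lines of program code. The following Python fragment is a minimal illustration using hypothetical measure labels, not Mozart's actual table:

    import random

    # A table in the spirit of Mozart's dice game (hypothetical measure labels,
    # not the actual KV 294d table): for each of the 16 bars of the waltz, one
    # pre-composed 3/8 measure is available per possible dice total (2-12).
    table = {bar: [f"measure_{bar}_{total}" for total in range(2, 13)]
             for bar in range(16)}

    def roll_waltz():
        waltz = []
        for bar in range(16):
            total = random.randint(1, 6) + random.randint(1, 6)  # throw two dice
            waltz.append(table[bar][total - 2])                  # select the bar
        return waltz

    print(roll_waltz())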

If the computer now "rolls the dice", i.e. generates random numbers, the numbers correspond to notes. At the same time, the computer is given rules that determine which of the randomly drawn notes are allowed and which must be rejected because they contradict the rules. The rules can be taken from a textbook, or the computer can be programmed to derive them itself, for example from a Bach chorale entered into it: which harmonies occur, which tone sequences occur more frequently, and so on. The Russian mathematician Andrei Andreyevich Markov introduced the Markov chains named after him in connection with studies of texts, in which transition probabilities for individual elements are considered. Claude Elwood Shannon, the founder of information theory, later pointed out that the Markov method could also be used in musical experiments.
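
A first-order Markov chain over notes can be sketched as follows; the corpus here is an invented note sequence standing in for an analyzed chorale:

    import random
    from collections import defaultdict

    # A minimal sketch of first-order Markov composition: count which pitch
    # follows which in the corpus, then sample the resulting transition table.
    corpus = ["C4", "D4", "E4", "D4", "C4", "E4", "G4", "E4", "D4", "C4"]

    transitions = defaultdict(list)
    for current, following in zip(corpus, corpus[1:]):
        transitions[current].append(following)  # empirical transition probabilities

    def generate(start="C4", length=12):
        melody = [start]
        for _ in range(length - 1):
            candidates = transitions.get(melody[-1]) or list(transitions)
            melody.append(random.choice(candidates))  # draw the next note
        return melody

    print(generate())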

In August 1951, music was played publicly on the Australian CSIRAC (Council for Scientific and Industrial Research Automatic Computer).

Finally, Lejaren A. Hiller and Leonard M. Isaacson called the fourth (experimental) movement of their Illiac Suite for string quartet (written in 1955/56, the first computer composition) Markov chain music. The first and second movements were governed by the rules of Palestrina counterpoint, which Johann Joseph Fux had formulated in his famous work Gradus ad Parnassum. The third and fourth movements were governed by more modern compositional rules such as twelve-tone technique and even stochastics. Hiller's second project, the Computer Cantata, used a special composition program called MUSICOMP.

At Bell Laboratories in the USA, Max Mathews originally worked on artificial speech synthesis. In order to use the IBM 7090 as a sound generator, he divided the sound-synthesis process into two phases: in the first, the instantaneous values of the waveform were written to a data store (magnetic tape); in the second, the stored values were read out and converted into audible signals. The MUSIC family of computer programs was created to replace the large quantities of numbers with a few musical parameters. In the MUSIC III program (1960), generated sounds could be used to modulate additional oscillators. Under the title The Technology of Computer Music, Mathews and his colleagues provided a thorough description of the MUSIC V programming language, which marked an important milestone in computer sound synthesis in the 1970s. The Max/MSP program (1997) by Miller Puckette is named after Mathews.

One of the first hybrid systems (analog and digital) was the GROOVE synthesizer, constructed in 1970 by Max Mathews and John R. Pierce at Bell Telephone Laboratories. It gave the composer the opportunity to perform a previously programmed piece in various interpretations. At the same time, the hybrid system MUSYS was developed in England by David Cockerell, Peter Grogono and Peter Zinovieff.

At the end of the 1970s, so-called mixed digital systems emerged, in which one computer could control another, sound-producing computer. Electronic Music Studios did pioneering work here. From 1976, Giuseppe Di Giugno developed several digital synthesizers for the Paris sound-research institute IRCAM, advised by Pierre Boulez and the composer Luciano Berio. The SSSP digital synthesizer was constructed by a research group at the University of Toronto, and the Fairlight CMI was designed in Australia. At the same time, the Americans Jon Appleton, Sydney Alonso and Cameron Jones developed the Synclavier.

For Iannis Xenakis, the central category of musical processes became the density of sound events and their arrangement according to the laws of mathematical probability. He used an IBM 7090 computer for his first works. Musical experiments drew on game theory, group theory and serial theory. With the UPIC system, still greater demands could be placed on the computer.

At the beginning of 1983, the leading synthesizer manufacturers agreed on a uniform standard so that as many synthesizer types as possible could be controlled by as many types of computer as possible: the Musical Instrument Digital Interface, or MIDI for short.

Composition

Music can be composed with the aid of computers. Score synthesis is an application of computer-aided composition in the form of computer-generated scores. A number of approaches have attempted to express such structures in program form, initially in the programming language Fortran.

Sound synthesis

With the advancing development of electronics and digital technology , the processes of sound synthesis have become increasingly flexible and powerful.

Fig. A: Sound synthesis from two frequencies

Additive sound synthesis

Additive synthesis is of interest for sound synthesis, but in principle its application is not tied to digital signal processing.

On the screen of an oscilloscope, a musical sound appears as a periodic, non-sinusoidal curve (e.g. the right-hand picture in Fig. A). As Jean-Baptiste Fourier established, every periodic oscillation can be understood as a superposition of sinusoidal curves of different frequencies and amplitudes. This makes it possible to assemble sounds (periodic curves) by adding individual tones (simple sine curves): sound synthesis. Selecting and varying, for example, the amplitudes of the individual components yields a variety of different sounds (additive synthesis, see Fig. A). However, since a sound, for example the note a' struck on the piano (with fundamental 440 Hz), changes over its duration (Jean-Claude Risset), this simple sound synthesis is not sufficient to produce a realistic piano sound. The transient, i.e. the time in which the sound builds up, turns out to be quite essential here. In addition, colored noise, which consists of an almost infinite number of partial oscillations around a frequency maximum, makes a particularly important contribution to the sound.
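
In program form, the principle can be sketched in a few lines. The following Python fragment uses invented partial amplitudes and decay rates, not measured instrument data:

    import numpy as np

    # A minimal sketch of additive synthesis: a few sine partials over a common
    # fundamental are summed, each with its own exponential decay serving as a
    # crude per-partial envelope.
    RATE = 44100                     # samples per second
    t = np.arange(0, 2.0, 1 / RATE)  # two seconds of time values

    def additive(f0=440.0, amps=(1.0, 0.5, 0.33, 0.25), decays=(1.0, 1.5, 2.5, 4.0)):
        signal = np.zeros_like(t)
        for k, (a, d) in enumerate(zip(amps, decays), start=1):
            envelope = a * np.exp(-d * t)              # each partial fades on its own
            signal += envelope * np.sin(2 * np.pi * k * f0 * t)
        return signal / np.max(np.abs(signal))         # normalize to [-1, 1]

    samples = additive()  # ready to be written to a WAV file or a sound device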

An important characteristic is the difference in timing of the build-up and decay of the overtones; every single overtone has its own complex envelope. In electronic organs, the implementation of the additive-synthesis principle is limited to the controlled addition of a few sinusoidal harmonics (usually fewer than ten) tuned to the fundamental at fixed intervals. Influencing the individual envelopes is therefore not possible there.

The fast Fourier transform brought a reduction in memory requirements, even on smaller computer systems. For a fully fledged additive synthesis it is necessary to shape the individual envelopes separately. Whether a common envelope is used for all overtones or several overtones of a sound are programmed individually makes a significant difference in the quality of the resulting sound; envelope copy functions can help here.

Synthesis by frequency modulation

If the frequency f of a tone changes periodically, these pitch fluctuations are called vibrato; the sound is said to be frequency-modulated (see Fig. B). If the carrier frequency and the modulation frequency approach one another (see Fig. C), overtone-rich results are obtained from just a few waveforms, e.g. the two in Fig. C. On the one hand, this FM synthesis is less complex than additive sound synthesis; on the other, it is a more flexible technique, because carrier and modulator oscillations can take any form (not just sinusoidal) and completely new sounds can be synthesized.
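
A minimal sketch of two-operator FM synthesis in Python, with invented parameters:

    import numpy as np

    # A modulator oscillator varies the phase of a carrier; the modulation
    # index controls the depth and therefore the richness of the spectrum.
    RATE = 44100
    t = np.arange(0, 2.0, 1 / RATE)

    def fm(carrier=220.0, modulator=110.0, index=5.0):
        return np.sin(2 * np.pi * carrier * t
                      + index * np.sin(2 * np.pi * modulator * t))

    samples = fm()  # integer carrier/modulator ratios yield harmonic spectra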

Synthesis by amplitude modulation

With the analog synthesizer it is possible to vary the strength of the modulation voltage and its frequency. As a rule, the low-frequency oscillator (LFO) is used for this. The lower limit of its frequency range lies below the human hearing range (around 0.01 Hz). If the volume of a sound is modulated in the VCA with an LFO frequency rising within the range from 0.01 to 16 Hz, the sound undergoes increasingly rapid rhythmic changes depending on the modulation frequency and depth. The well-known tremolo effect intensifies into a peculiar roughness of the sound; one speaks here of sub-audio control. Only when the LFO frequency rises into the human hearing range does a stationary sound result.
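
The behavior can be sketched digitally as follows; the parameters are invented, and the LFO/VCA pairing is simulated by a simple gain curve:

    import numpy as np

    # A minimal sketch of amplitude modulation: an LFO scales the gain of a
    # carrier tone between 1 - 2*depth and 1. Below about 16 Hz this is heard
    # as tremolo; an LFO in the audio range produces a stationary sound instead.
    RATE = 44100
    t = np.arange(0, 2.0, 1 / RATE)

    def am(carrier=440.0, lfo=6.0, depth=0.5):
        gain = 1.0 - depth + depth * np.sin(2 * np.pi * lfo * t)
        return gain * np.sin(2 * np.pi * carrier * t)

    samples = am()        # classic tremolo at 6 Hz
    rough = am(lfo=30.0)  # sub-audio boundary crossed: roughness sets in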

Waveform synthesis

The starting point for waveform synthesis is a sine wave whose amplitude is controlled by an envelope generator. This wave passes through an assembly called a non-linear processor (waveshaper), which converts the overtone-free sine wave into a sound signal with changing overtone content. At the beginning of the 1980s, the Casio company developed phase distortion synthesis, a variant of waveform synthesis, which was expanded into Interactive Phase Distortion in 1988.
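
The following Python sketch uses a common textbook shaping function (tanh), not Casio's actual phase-distortion circuit, to illustrate the principle:

    import numpy as np

    # A minimal sketch of waveshaping: a non-linear function bends an
    # overtone-free sine, and an envelope on the drive makes the overtone
    # content change over time.
    RATE = 44100
    t = np.arange(0, 2.0, 1 / RATE)

    def waveshape(f0=220.0, drive=3.0):
        sine = np.sin(2 * np.pi * f0 * t)
        envelope = np.exp(-2.0 * t)              # drive fades, so harmonics fade too
        return np.tanh(drive * envelope * sine)  # non-linear processor (waveshaper)

    samples = waveshape()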

Sound sampling

Sound sampling is a digital method of storing sounds. In practice, sound sampling in combination with digital and analog sound manipulation has become an independent sound-synthesis technique, although the principle involved is the reproduction of existing sounds. With the increasing storage capacity of computers, the waveforms of mechanical instruments, including their attack and decay transients, could be stored in a wavetable in the computer; by manipulation (filters, modulators, etc.) these values can be changed and effects (e.g. reverb) added.

Since hearing can be viewed as an analog process while the computer is a digital machine, the discrete numbers in the computer have to be converted into smooth (electrical voltage) curves on output so that they can ultimately be made audible through a loudspeaker. This is done by a digital-to-analog converter. Conversely, analog sound events enter the computer through an analog-to-digital converter. Here the instantaneous amplitudes of a sound curve (a pure tone, recorded through a microphone and made visible on the screen of an oscilloscope, yields a sine curve) are sampled in small time steps. From the (smooth) sound curve this produces a stepped curve that approximates the original the better, the more samples are taken per unit of time, i.e. the higher the sampling rate.

If the sound curve is composed of several sine curves, i.e. if a complex sound is recorded, then according to the sampling theorem of Nyquist and Shannon (1948) the sampling rate must be set at least twice as high as the highest frequency occurring in the sum (number of vibrations f, measured in hertz (Hz)) if the stored sound is to be captured or reproduced in all its subtleties. If, for example, the sampling rate on a compact disc is 44.1 kilohertz, then the highest (theoretically) representable frequency f is 22.05 kilohertz.
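
The sampling step itself can be sketched as follows; the frequencies and rates are illustrative values:

    import numpy as np

    # A minimal sketch of sampling: instantaneous amplitudes of a tone are
    # measured at discrete time steps. The Nyquist-Shannon rule requires the
    # sampling rate to exceed twice the highest frequency present.
    def sample_tone(f=440.0, rate=44100, seconds=0.01):
        times = np.arange(int(rate * seconds)) / rate  # discrete sampling instants
        return np.sin(2 * np.pi * f * times)           # stepped approximation

    ok = sample_tone(f=440.0, rate=44100)         # 44.1 kHz resolves up to 22.05 kHz
    aliased = sample_tone(f=30000.0, rate=44100)  # above Nyquist: folds to a false pitch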

Physical modeling

Once an instrumental sound has been analyzed mathematically, sounds can be created from mathematical specifications. Because of the very high computational requirements, substitute models have been sought (Julius O. Smith); for example, digital waveguides are examined in place of the longitudinal waves that occur in a vibrating pipe.
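
As a simple relative of such waveguide models, the Karplus-Strong plucked-string algorithm can be sketched in a few lines; the parameters are invented:

    import numpy as np

    # A noise burst circulates in a delay line whose length sets the pitch;
    # averaging adjacent samples acts as the damping filter of the "string".
    def pluck(f0=220.0, rate=44100, seconds=2.0):
        period = int(rate / f0)                  # delay-line length = one period
        line = np.random.uniform(-1, 1, period)  # initial excitation: noise burst
        out = np.empty(int(rate * seconds))
        for i in range(out.size):
            out[i] = line[i % period]
            mean = 0.5 * (line[i % period] + line[(i + 1) % period])  # lowpass
            line[i % period] = 0.996 * mean      # slight extra decay per pass
        return out

    samples = pluck()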

Granular synthesis

Granular synthesis is a sampling method that can be used to build sounds from extremely short fragments (grains). Forms of granular synthesis include glisson synthesis, pulsar synthesis and others (Curtis Roads).
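
A minimal Python sketch of the idea, with invented parameters and a plain sine tone as source material:

    import numpy as np

    # Very short windowed grains cut from a source waveform are scattered onto
    # an output timeline at random positions.
    RATE = 44100
    rng = np.random.default_rng()
    source = np.sin(2 * np.pi * 440.0 * np.arange(RATE) / RATE)  # 1 s source tone

    def granulate(grain_ms=30, n_grains=400, seconds=2.0):
        glen = int(RATE * grain_ms / 1000)
        window = np.hanning(glen)                      # smooth grain envelope
        out = np.zeros(int(RATE * seconds))
        for _ in range(n_grains):
            src = rng.integers(0, source.size - glen)  # where to cut a grain
            dst = rng.integers(0, out.size - glen)     # where to paste it
            out[dst:dst + glen] += window * source[src:src + glen]
        return out / np.max(np.abs(out))

    samples = granulate()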

Sound control

Sequencer programs

Sequencer or composer programs are used for external pitch control. The keyboard of a synthesizer is replaced here by a computer that controls the synthesizer with pre-programmed pitch sequences. Musical parameters can either be entered individually, or tones can be played in real time, saved and altered.

Audio editor programs

Sound editor or voicing programs serve to control the process of sound synthesis in a clear, visual way. The graphical representation of the parameters and of dynamic sound progressions, such as envelope displays on the screen, is advantageous. The trend here is towards universal editor programs that can be used for several types of synthesizer at once.

Tracker programs

Trackers are software sequencing programs; their user interfaces are mostly alphanumeric, and parameters or effects are entered in hexadecimal notation.

Literature

  • Hubert Kupper: Computer and musical composition. Braunschweig 1970.
  • Curtis Roads: The Computer Music Tutorial. MIT Press, 1996.
  • Martin Supper: Computer music. In: MGG – Die Musik in Geschichte und Gegenwart. Allgemeine Enzyklopädie der Musik. Kassel 1995, col. 967–982.
  • André Ruschkowski: Soundscapes – Electronic sound generation and music. 1st edition, 1990.

References

  1. Paul Doornbusch: The Music of CSIRAC: Australia's First Computer Music. Common Ground Publishing, Australia 2005, ISBN 1-86335-569-3 (with accompanying CD recording).