Mastering (audio)

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by Evinatea (talk | contribs) at 12:11, 29 March 2007 (obsolete tag removed).

Mastering, a form of post-production, is the process of preparing and transferring recorded audio to a data storage device (the master); the source from which all copies will be produced (via methods such as pressing, duplication or replication).

This cohesive audio program is fine-tuned, in a tonal context, through equalization and audio level compression. Further tasks such as editing, pre-gapping, leveling, fading in and out, noise reduction and other signal restoration and enhancement processes can also be applied as part of the mastering stage. Program material is usually put in its proper order at this stage as well; this is commonly called the assembly.

The specific medium varies, depending on the intended release format of the final product. For digital audio releases, there is more than one possible master medium, chosen based on replication factory requirements and/or record label security concerns.

A mastering engineer may be required to take other steps, such as the creation of a PMCD (Pre-Mastered Compact Disc), onto which the cohesive material is transferred for mass replication. A well-structured PMCD is crucial for a successful transfer to the glass master that will generate the stampers used for reproduction.

History

Pre-1940s

In the earliest days of the recording industry, all phases of the recording and mastering process were achieved entirely by mechanical means. Performers sang and/or played into a large acoustic horn, and the master recording was created by the direct transfer of acoustic energy from the diaphragm of the recording horn to the mastering lathe, which was typically located in an adjoining room. The cutting head, driven by the energy transferred from the horn, inscribed a modulated groove into the surface of a rotating cylinder or disc. These masters were usually made from either a soft metal alloy or from wax; this gave rise to the colloquial term "waxing", referring to the cutting of a record.

After the introduction of the microphone and electronic amplification in the late 1920s, the mastering process became electro-mechanical, and electrically-driven mastering lathes came into use for cutting master discs (the cylinder format by then having been superseded).

However, until the introduction of tape recording, master recordings were almost always cut direct-to-disc. Artists performed live in a specially designed studio and as the performance was underway, the signal was routed from the microphones via a mixing desk in the studio control room to the mastering lathe, where the disc was cut in real time.

Only a small minority of recordings were mastered using previously recorded material sourced from other discs.

Advances

The recording industry was revolutionized by the introduction of magnetic tape in the late 1940s, which enabled master discs to be cut separately in time and space from the actual recording process. Although tape and other technical advances dramatically improved the audio quality of commercial recordings in the post-war years, the basic constraints of the electro-mechanical mastering process remained, and the inherent physical limitations of the main commercial recording media — the 78 rpm disc and the later 7" single and LP record — meant that the audio quality, dynamic range and running time of master discs were still relatively limited compared to later media such as the compact disc.

Running times were constrained by the diameter of the disc and the density with which grooves could be inscribed on the surface without cutting into each other. Dynamic range was also limited by the fact that, if the signal level coming from the master tape was too high, the highly sensitive cutting head might jump off the surface of the disc during the cutting process.

From the 1950s until the advent of digital recording in the late 1980s, the mastering process typically went through several stages. Once the studio recording on multi-track tape was complete, a final mix was prepared and dubbed down to the master tape, usually either a single-track monophonic or two-track stereo tape.

Prior to the cutting of the master disc, the master tape was often subjected to further electronic treatment by a specialist mastering engineer. After the advent of tape it was found that, especially for pop recordings, master recordings could be 'optimized' by making fine adjustments to the balance and equalization prior to the cutting of the master disc.

Mastering became a highly skilled craft and it was widely recognized that good mastering could make or break a commercial pop recording. As a result, during the peak years of the pop music boom from the 1950s to the 1980s, the best mastering engineers were in high demand.

In large recording companies such as EMI, the mastering process was usually controlled by specialist staff technicians who were conservative in their work practices. These big companies were often reluctant to make changes to their recording and production processes — for example, EMI was very slow in taking up innovations in multi-track recording and did not install 8-track recorders in its Abbey Road Studios until the late 1960s, more than a decade after the first commercial 8-track recorders were installed by American independent studios. As a result, by the time The Beatles were making their groundbreaking recordings in the mid-Sixties, they often found themselves at odds with EMI's mastering engineers, who were unwilling to meet the group's demands to 'push' the mastering process, for fear that setting levels too high would cause the needle to jump out of the groove when listeners played the record.

Digital technology

In the 1990s, the old electro-mechanical processes were largely superseded by digital technology, with digital recordings transferred to digital masters by an optical etching process that employs laser technology. The digital audio workstation (DAW) became common in many mastering facilities, allowing the off-line manipulation of recorded audio via a graphical user interface (GUI). Although many digital processing tools are common during mastering, it is also very common to use analogue media and processing equipment for the mastering stage. Many mastering engineers feel that digital technology has not progressed enough in quality to supersede analogue technology.

Process

The process of audio mastering varies depending on the specific needs of the audio to be processed. Steps of the process typically include, but are not limited to:

  1. Transferring the recorded audio tracks into the Digital Audio Workstation (DAW) (optional).
  2. Sequencing the separate songs or tracks, and the spaces between them, as they will appear on the final product (for example, an audio CD).
  3. Processing or "sweetening" the audio to maximize its sound quality for the particular medium.
  4. Transferring the audio to the final master format (e.g., Red Book-compatible audio CD, CD-ROM data, 1/2" reel tape, PCM 1630 U-matic tape, etc.).
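As a rough illustration of the sequencing step, the sketch below joins finished tracks into one programme with silence between them. The function name, sample values and the 2-second default gap are illustrative choices only, not part of any prescribed workflow:

```python
def assemble(tracks, sample_rate=44100, gap_seconds=2.0):
    """Join mastered tracks into one programme, inserting silence
    between them (a 2-second gap is a common default on audio CDs)."""
    gap = [0.0] * int(sample_rate * gap_seconds)
    programme = []
    for i, track in enumerate(tracks):
        if i:                      # no gap before the first track
            programme.extend(gap)
        programme.extend(track)
    return programme

# Toy example with a 2 Hz "sample rate" so the output stays short:
track_a = [0.1, 0.1, 0.1]
track_b = [0.2, 0.2, 0.2]
print(assemble([track_a, track_b], sample_rate=2, gap_seconds=1.0))
# [0.1, 0.1, 0.1, 0.0, 0.0, 0.2, 0.2, 0.2]
```

In a real mastering session the gaps (pre-gaps) are set per track to suit the pacing of the programme rather than being a fixed length.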

Examples of possible actions taken during mastering:

  1. Adjusting volumes.
  2. Editing out minor flaws.
  3. Applying noise reduction to eliminate hum and hiss.
  4. Peak limiting the tracks.
  5. Applying dynamic compression.
  6. Applying dynamic expansion.
  7. Adjusting stereo "width".
  8. Adding ambience.
  9. Equalizing audio between tracks.
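Dynamic compression, one of the actions listed above, reduces levels above a threshold by a fixed ratio. A minimal sketch of a static compressor curve follows; the −18 dBFS threshold and 4:1 ratio are arbitrary example settings, and real compressors also have attack and release behaviour, which is omitted here:

```python
def compress_db(level_db, threshold_db=-18.0, ratio=4.0):
    """Static compressor curve: maps an input level in dB to an output
    level in dB; levels above the threshold are reduced by `ratio`."""
    if level_db <= threshold_db:
        return level_db           # below threshold: unchanged
    return threshold_db + (level_db - threshold_db) / ratio

print(compress_db(-30.0))  # -30.0 (below threshold, untouched)
print(compress_db(-6.0))   # -15.0 (12 dB over threshold -> 3 dB over)
```

Dynamic expansion is the inverse operation: levels below a threshold are pushed further down, widening rather than narrowing the dynamic range.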

The guidelines above are mainly descriptive of the mastering process and are not specific instructions that must be applied in every situation. Mastering engineers need to examine the type of input media, the expectations of the source producer or recipient, and the limitations of the end medium, and process the material accordingly. General rules of thumb can rarely be applied.

RMS in music, average loudness

Loudness is often one of many concerns in the mastering process. In audio production terminology, the root mean square (RMS) is a measure of average level and is found widely in software and hardware tools. In practice, a higher RMS number means a higher average level; e.g., −9 dBFS RMS is 2 dB louder than −11 dBFS RMS. The maximum possible RMS value is therefore zero. There is a standard loudness for digital media, −20 dBFS RMS, but this standard is almost never followed outside of motion picture production. If followed, however, this designated operating level would help ensure consistent playback levels across different programs.
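As a concrete illustration of the measure itself (not taken from the article), the RMS level of digital samples, with full scale taken as 1.0, can be computed and expressed in dBFS as follows:

```python
import math

def rms_dbfs(samples):
    """Return the RMS level of float samples (full scale = 1.0) in dBFS."""
    mean_square = sum(s * s for s in samples) / len(samples)
    return 20 * math.log10(math.sqrt(mean_square))

# A full-scale sine wave has an RMS level of about -3.01 dBFS,
# since its RMS amplitude is 1/sqrt(2) of its peak:
sine = [math.sin(2 * math.pi * k / 1000) for k in range(1000)]
print(round(rms_dbfs(sine), 2))  # -3.01
```

Note that meters differ in windowing and weighting, so two tools can report slightly different RMS figures for the same material.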

The loudest records of modern music are −5 to −9 dBFS RMS, the softest −12 to −16 dBFS RMS. The RMS level is no absolute guarantee of loudness, however; the perceived loudness of signals of similar RMS level can vary widely, since the perception of loudness depends on several factors, including the spectrum of the sound (see Fletcher-Munson) and the density of the music (e.g., a slow ballad versus metal rock). Overall loudness is adjusted by manual volume adjustments, equalization, compression/limiting and even deliberate distortion.

The recent push for ever higher RMS levels can be destructive to sound quality, as loudness often takes precedence over other attributes: the high compression ratios used to prevent peaking (overload distortion) take away the natural dynamics of the program. The term loudness war was coined to describe the competitive and often harmful nature of this practice. Since digital media clip all signals at 0 dBFS, compression, limiting or soft clipping must be applied to reduce peak levels before the average level can be boosted. If the signal were boosted without such processing, all information that would have been above 0 dBFS would be flattened to the 0 dBFS level, yielding severe distortion. Nevertheless, many people are so concerned with absolute loudness in digital media that this clipping is allowed in many recent recordings, on top of heavy limiting.
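The flattening of over-full-scale information can be demonstrated in a few lines; the gain figure and sample values below are arbitrary illustrations:

```python
def boost(samples, gain_db):
    """Raise the level by gain_db, then hard-clip at digital full scale
    (±1.0), as a digital medium does to anything exceeding 0 dBFS."""
    g = 10 ** (gain_db / 20)
    return [max(-1.0, min(1.0, s * g)) for s in samples]

# A 6 dB boost roughly doubles the samples; the 0.9 and 0.7 peaks
# both flatten to 1.0, while the quieter sample survives intact:
print(boost([0.9, -0.5, 0.7], 6.0))
```

This is why peak reduction (compression, limiting or soft clipping) is applied before the boost: it brings the peaks under full scale first, so the average level can rise without this hard flattening.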

Loudness for analogue media is somewhat different in nature. The goal is usually 0 dB relative to the operating level of the recording medium. For instance, the average RMS level of compact cassette tape is 0 dB relative to 250 nanowebers per metre (nWb/m). For newer 1/4" tape stock, something like +4 dB over 250 nWb/m may be more common. When a higher operating level such as 400 nWb/m (about +4 dB over 250 nWb/m) is used, it is usually because the tape itself is designed to hold a stronger magnetic charge, and can thus be recorded "hotter" without distortion than a formulation designed to run at 250 nWb/m. As of the late 1990s, peak levels can reach +12 dB over 250 nWb/m or more. Recording at such high levels allows for a greater gap between the signal and the inherent noise of the medium, which can aid in a better representation of the dynamic range. This, of course, applies only within the allowable dynamic range of the medium; when taken too far, the dynamic range can actually decrease.
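The dB figures above are ratios of fluxivity against the 250 nWb/m reference, so 400 nWb/m works out to roughly +4 dB. A quick check of that arithmetic (the function name is an illustrative choice):

```python
import math

def flux_db(flux_nwb_per_m, reference_nwb_per_m=250.0):
    """Tape operating level in dB relative to a reference fluxivity."""
    return 20 * math.log10(flux_nwb_per_m / reference_nwb_per_m)

print(round(flux_db(400), 2))  # 4.08 -> "about +4 dB over 250 nWb/m"
print(round(flux_db(250), 2))  # 0.0  -> the reference level itself
```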

This is important to understand even when the final output medium is digital, because mastering engineers often record to tape and then transfer the tape to digital. This serves two purposes. First, analogue tape has a longer shelf life than digital tape and many other digital media, so it serves for archival purposes; and because digital technology becomes obsolete so quickly, the analogue tape can be re-archived to the latest, higher-resolution digital technology as it becomes available. Second, since analogue tape saturates gradually, it can be used to help reduce the peak levels of the program material, allowing for a louder signal in the digital medium. Analogue tape, unlike digital, saturates gradually at around 12 dB to 15 dB above the designated operating level, and can be pushed a bit "hotter" than that operating level with only minor degradation to the peak levels of the signal. This can at times come at the cost of lost clarity.

Audio mastering tools

See also