CFT 402 – Sound in Production. Lecturer: Shapaya

Sound and the Microphone

Sound
Beside the world of vision in film there is the world of sound: another dimension, another aspect of reality. Perhaps the most significant thing about it is that it reaches us through a different sense organ, which determines the nature of our experience of it. Differences in its artistic use in the cinema stem from differences between the two senses of sight and hearing, but the general picture still holds good: the artist expresses his experience through a medium in which he cannot accurately reproduce physical reality, and which therefore offers opportunities for the exercise of his art.

Sound is a form of energy which, depending on its loudness, tone and concentration, can have tremendous power. It can start an avalanche, shatter glass, shake buildings and cripple human eardrums. This is because sound energy literally moves air, in wave-shaped patterns, which creates pressure on any surface it strikes.

Next to sight, hearing is the richest and most complex of our senses. Sound is the basis of one of the
greatest of the arts, music. As speech it forms a medium for thought, and is the most important means
of communication among human beings. As one can imagine, those who hoped the cinema would
create a total illusion of reality were not likely to be satisfied with sight alone, and from the very
beginning of the cinema every effort was made to incorporate sound.

The mechanical reproduction of sound was developed as early as the first motion pictures, but the problems of amplifying sound sufficiently for an audience and synchronising it with the film image were
not solved until the late 1920s. Although sound attracted crowds to the cinema to hear the new miracle,
the artistic levels of the best silent pictures were not reached immediately. The new ‘talkies’ were
mostly poor imitations of theatrical plays, with dialogue and sounds used indiscriminately.

The incorporation of sound in films in the late 1920s used a system of ‘optical sound’. The principle involved was that variations in sound waves are recorded as variations in light and shade on a separate sound track carried on the same strip of film; as the film is projected, these optical variations are turned back into sound waves, which the audience hears through loudspeakers at the same time as they see the pictures on the screen.

Sound brought with it both disadvantages and advantages. Just as, in its beginnings, the cinema had copied the scenery and static viewpoint of the theatre, so, when it was able to combine speech with its pictures, it copied the continuous dialogue of the theatre. Such stage dialogue, even with naturalistic acting, must be spoken clearly and loudly enough to be heard, even if it means talking in a ‘stage whisper.’

To offset its drawbacks, sound brought important advantages. The silent cinema of the 20s was silent only in name, for it had from the start been accompanied by music. In fact, to watch a silent film altogether in silence is a curious, incomplete experience. Often the accompaniments were bad, inappropriate or hackneyed.

An advantage of sound to film was that it freed the image to be itself; in other words, it relieved the image of the need to try to express sound in visual terms. In the silent film there was something particularly strained about shots of a factory siren blowing or women singing while the audience could not hear the actual sound of the subjects.



The possibility of using words at all was a great advantage. In silent cinema, the written captions were
always an alien element and never combined with the visuals as an artistic whole.

The coming of the sound film also enabled the artist to use silence in a film with a positive effect. On
the stage the effect of silence cannot be drawn out or made to last as it can in the cinema. In a film the
effect can be extremely vivid and varied and a silent glance can speak volumes.

Importance of sound
1. Attract attention
2. Increases emotional impact
3. Enhances understanding
4. It communicates
5. Increases recall of a visual message

The Microphone
A microphone is a device that transforms sound (acoustical) energy into electrical energy. Though relatively simple, it forms the basis of all voice communication, enabling sound to be transmitted and recorded by electronic means.

It is frequently assumed that by sticking a microphone into the scene at the last minute we have taken
care of the audio requirements, but good audio needs at least as much preparation and attention as the
video portion. Audio, like any other production element, should not be ‘added’ but integrated into the
production planning from the very beginning.

Because no camcorder’s built-in ‘squeaker’ can record more than general background sound – which
often includes a lot of out-of-shot and unwanted material – additional ‘specialised’ mics are often
essential.

The pickup of live sound is done through a variety of microphones. How good or bad a particular microphone is depends not only on how it is built, but especially on how it is used.

Since the invention of the microphone by Alexander Graham Bell, many types and designs of microphones have been developed over the years, each having served well until its replacement.

Microphones may be classified according to their physical design, such as carbon, capacitor, ribbon-
velocity, moving-coil, semiconductor, crystal and ceramic. They also may be classified according to
their polar patterns as omnidirectional, bi-directional, directional, superdirectional, and cardioid.
Nomenclature given to microphones designed for special use, such as wireless, dual-stereophonic, in-
line, and high-intensity, might be considered yet another category. However, whatever classification is
used, the designs of the microphones will vary according to the manufacturer.

Basic principles of microphone operation


Microphones are divided into two categories of operation: velocity and pressure. Pressure-operated microphones employ a diaphragm with only one surface exposed to the sound source. The
displacement of the diaphragm is proportional to the instantaneous pressure of the sound wave. At
lower frequencies such microphones are practically nondirectional.



A velocity microphone is one in which the electrical output substantially corresponds to the
instantaneous particle velocity in the impressed sound wave. The quality of a microphone can be
judged by the frequency response, sensitivity, distortion, internal noise, and field pattern.

Electronic characteristics of the microphone


In order to choose the most appropriate microphone and to operate it for optimal sound pickup, one should know the basic electronic characteristics of the microphone. These include: the sound-generating element, the pickup pattern and other special features.

1. Sound-generating element
All microphones transduce (convert) sound waves into electronic energy, which is amplified and
reconverted into sound waves by the loudspeaker. This initial conversion is accomplished by the
generating element of the microphone. There are three major sound converting systems that can also
be used to classify microphones: dynamic, condenser and ribbon.

There are four basic types of microphones.

i. The carbon microphone


It is the oldest and most widely used type. It is akin to the telephone transmitter.

This microphone consists of a metallic cup filled with carbon granules; a movable metallic diaphragm
mounted in contact with the granules covers the open end of the cup. Wires attached to the cup and
diaphragm are connected to an electrical circuit so that a current flows through the carbon granules.
Sound waves vibrate the diaphragm, varying the pressure on the carbon granules. The electrical
resistance of the carbon granules changes with the varying pressure, causing the current in the circuit
to change according to the vibrations of the diaphragm.

One of the principal disadvantages of the carbon microphone is its continuous high-frequency hiss, caused by the changing contact resistance between the carbon granules. In addition, the frequency response is limited and the distortion is rather high.

ii. The crystal microphone


It is widely used in public address systems and home recording work.

It depends for its action on the piezoelectric effect of certain crystals, most commonly Rochelle salts.
The term ‘piezoelectric’ refers to the fact that when pressure (in this instance - sound waves) is applied
to the crystals in the proper direction, a proportionally varying voltage is produced between opposite
faces of the crystal.

The basic concept of this microphone is that a voltage develops between two faces of the crystal when pressure is applied to it. Sound waves vibrate a diaphragm, which in turn varies the pressure on a piezoelectric crystal. This generates a small voltage, which is then amplified.

The advantages of the crystal microphone are its relatively high output voltage, acceptable sound
quality and low cost.

iii. Dynamic microphone



It depends on magnetism for the translation of sound energy into electrical energy. A dynamic
microphone employs a small diaphragm and a voice coil, similar to a dynamic loudspeaker, moving in
an intense permanent magnetic field. Sound waves striking the surface of the diaphragm cause the coil
to be moved in the magnetic field, thus generating a voltage proportional to the sound pressure at the
surface of the diaphragm. This microphone is also referred to as a pressure or moving-coil
microphone.

‘Dynamic microphone’ is a term that also incorporates the ribbon and velocity microphones.

The dynamic microphone in principle closely resembles the dynamic loudspeaker found in all radios and television receivers. In fact, a two- or three-inch dynamic speaker will make a satisfactory microphone for such limited-quality uses as intercommunication systems and is frequently so employed.

The dynamic microphone consists of a number of turns of wire wound in what is called a voice coil and rigidly attached to a diaphragm. This coil is suspended between the poles of a permanent magnet. Sound causes the diaphragm to vibrate, moving the coil back and forth between the poles and producing an alternating voltage proportional to the applied sound.

In ribbon microphones, a thin metallic ribbon is attached to the diaphragm and placed in a magnetic
field. When sound waves strike the diaphragm and vibrate the ribbon, a small voltage is generated in
the ribbon by electromagnetic induction.

The velocity microphone, a high-quality device widely used in commercial broadcasting and recording, is similar in principle. A coil of light wire is suspended between the poles of a permanent magnet. When vibrated by sound, the coil of wire cuts the magnetic lines between the poles in alternating directions, generating an alternating voltage across the length of the wire.

Currently, some modern microphones, designed to pick up sound from one direction only, combine
both ribbon and coil elements for a much richer sound pick-up.

Since the voltage generated by the dynamic microphone is very small, much greater amplification is
necessary for practical use than in the case of carbon and crystal microphones.

Generally, the dynamic mic is the most rugged. Dynamic mics can tolerate reasonably well the rough handling microphones usually receive. They can also withstand extremely high sound levels without damage to the microphone or excessive distortion of the incoming sound (input overload).

iv. Condenser microphone


The condenser microphone (at times also referred to as a capacitor mic) has two thin metallic plates placed close to each other that serve as a capacitor. The back plate of the capacitor is fixed, and the front plate serves as the diaphragm. Sound waves alter the spacing between the plates, changing the electrical capacitance between them. By placing such a microphone in a suitable circuit, these variations may be amplified, producing an electrical signal.

These microphones usually need a small battery to power their built-in preamplifier. Because of this preamplifier, they are more sensitive and powerful than other types of microphones.



Condenser microphones are among the highest quality microphones available, and are almost
universally used in the most exacting broadcasting and recording work. They usually produce higher
quality sound even when used at greater distances from the sound source.

Condenser microphones have a wide frequency response, low distortion, and little internal noise.

The electret condenser is a development of the condenser in which a ferroelectric material has been permanently electrically charged (polarized). These are the most common microphones in the industry because of their low production costs and ease of manufacture. Nearly all lavalier, headset and cell-phone microphones use electret condenser technology.

However, the condenser microphone differs from the dynamic microphone in that it is more sensitive to physical shock, temperature changes and input overload.

Condenser mics have also been noted to have a lower signal-to-noise ratio than dynamic mics. This is, though, only really obtrusive when recording quiet sounds in a quiet indoor environment (Richardson, 1992, 67).

2. Sound-pickup pattern
Like our ears, any type of microphone can hear from all directions as long as the sounds are within its hearing range. But whereas some microphones hear sounds from all directions equally well, others hear better in a specific direction. The territory within which a microphone can hear well is called its pickup pattern.

In film production, there are omni-directional and unidirectional microphones. The omni-directional microphone hears sounds from all directions equally well. The unidirectional microphone hears better in one direction: the front of the microphone. Because the polar patterns of unidirectional microphones are roughly heart-shaped, they are called cardioid. The super-cardioid, hyper-cardioid, and ultra-cardioid have progressively narrower pickup patterns, which means that their hearing is more and more concentrated on what is happening in front rather than to the side.

Which type you use depends primarily on the production situation and the sound quality required. If you require the actual scenery sounds for authenticity while your actors are performing, then an omni-directional mic will be more appropriate. If, on the other hand, you are in a studio trying to pick up the low-key, intimate conversation between two people, you need a unidirectional mic. For example, an ultra-cardioid mic, or shotgun mic, will give you a good pickup of their conversation, even if the mic has to be relatively far away from the people so as to be out of the picture. Unlike the omni-directional mic, the shotgun mic ignores most of the other sounds present, such as the inevitable noises of an active studio: people and cameras moving about, the humming of lights or the rumble of the air conditioner.
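
These pickup patterns can be described with the standard first-order polar equation. The sketch below is a minimal illustration, assuming the idealised textbook patterns rather than any particular microphone's data sheet; the mix coefficient p and the function name are my own:

import numpy as np

def sensitivity(theta_deg: float, p: float) -> float:
    """Ideal first-order polar pattern: p + (1 - p) * cos(theta)."""
    theta = np.radians(theta_deg)
    return p + (1.0 - p) * np.cos(theta)

# p = 1 gives an omni-directional pattern, p = 0.5 a cardioid,
# p of about 0.37 a super-cardioid with a narrower front lobe.
for name, p in [("omni", 1.0), ("cardioid", 0.5), ("super-cardioid", 0.37)]:
    front = sensitivity(0, p)    # source directly in front
    side = sensitivity(90, p)    # source to the side
    rear = sensitivity(180, p)   # source behind the mic
    print(f"{name:15s} front={front:+.2f} side={side:+.2f} rear={rear:+.2f}")

The cardioid line shows full sensitivity in front, half at the sides, and zero at the rear, which is exactly the heart-shaped behaviour described above.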

Recording Sound
Picking the right Microphone
Different microphones are used for different purposes. The choice of which mic to use depends on
many factors, such as:
1. Frequency range of the sound source.
2. The nature of the sound – It might be a percussive sound; a large, vibrating single source, such as a piano; a quiet temple bell; a screaming guitar; a string section.
3. The level of background noise in the recording area.
4. The acoustics – the good acoustics that are part of an orchestral recording, or the bad acoustics of a jazz club.
5. The approach to the recording – multi-mic, simple stereo, news gathering in which the sound is on the move and intelligibility is key, a lecture that includes questions from the audience.
6. Purpose of the recording – film, music, podcast.
7. Planned output – mono, stereo, 5.1.

Music/Sound Track
Alberto Cavalcanti, in his paper Sound in Films, declares with finality that the silent film never existed, simply because music had always been integrated with the motion picture (or at least attempts were made to integrate the two arts) from the turn of the century.

It is important to note that the art of recording sound was already present at the time of the invention of the moving image; the only obstacle was how to fuse the images to the sound. As soon as films were invented, and long before there were such things as picture palaces, filmmakers and showmen began to employ devices to provide a sound accompaniment and complete the illusion. First they used the phonograph, but not for long, because of its fragility.

The next device to which showmen turned was the ‘barker.’ In those early times, the bulk of film
distribution was in fairgrounds, where barkers were easy to find.

When the film started to move into special premises, called cinemas, the use of the barker in turn
ceased to be practical. A man’s voice could not be heard easily in a large hall. Besides, a running
commentary was monotonous in a full-length show.

As the barker went out, the inter-title came in to explain the action and comment upon it. Ambitious
filmmakers raided novels and stage successes for film subjects, without giving any thought to real
filmic possibilities, and indeed without any real conception of film itself.

Once films were shown in houses built for the purpose, the moving picture rose from the status of the pedlar to a more bourgeois standard, to which the greater refinement of a musical accompaniment was appropriate.

At the beginning music was used for two very different purposes at once: (a) to drown the noise of the
projectors; (b) to give emotional atmosphere.

As cinema developed commercially, the music became more elaborate and played a larger and larger
part in the show as a whole. Cinema owners vied with each other to attract the public: the piano
became the trio, the trio became a salon orchestra, the salon orchestra became a symphony orchestra.

Sometimes, for an important film, special music would be chosen or even composed for the occasion,
but, although many large cinemas had orchestras, these would mostly play music of their own
choosing. Many large cinemas had a special organ installed which was used to accompany all films
regardless. Most of the music was trite and unimaginative; sometimes it was excruciating. The music was not intended to increase the reality of the mute image by adding sound; however, it did increase the audience’s emotional receptiveness. Its main function was ‘to affirm and legitimate the silence’.
Thus, provided it was not actively offensive, the music could to a great extent be disregarded; it was
the visuals that mattered.



All the same, inferior music, if it did not invalidate silent films, did nothing to help them. Currently, because the sound track is an integral part of the sound film, the music can more easily be created as part of the total concept of the film and may be the work of a great composer. The music does not have to be continuous, and directors have learned the value of restraint. In most films today, music is used only intermittently, to heighten the mood at key points. One sees documentaries and even feature films in which extraneous music is avoided entirely, and the music in the film is that which arises necessarily from the action or the actual setting. Since the music is on the same strip as the visuals, it can be timed to fit them, and can be used to stress a rhythmic beat (a train or a galloping horse), to reinforce the noise of an angry crowd, or to sublimate a noise or a cry (the shriek of a tortured prisoner or the cry of a woman in childbirth) which merges into a musical phrase.

So far as the type of music is concerned, one can say generally that the best music is that which is
composed by a sensitive artist to fit the film. All kinds of concert music have been used for films, from
Byrd to boogie-woogie, but, however good the music itself, it may be inappropriate for the film
concerned. Several ‘new wave’ French directors have used Vivaldi, Bach and Diabelli, but such formal
music as this, with so strong a shape and rhythm of its own, tends to take too prominent a place.

Film music is necessarily ‘canned’ music and as such cannot attain the delicacy and richness of the
actual concert hall.

The Musical Film


Musical films can be defined as a genre in which songs are interwoven with the narrative. Ideally the songs enhance the plot and may at times incorporate dance. There are as many types of musicals as there are music genres. The genre was a natural development from the stage musical after the advent of sound in film.

Soon after the sound film was introduced, the ‘musical’ film came into being. This was at first an exact
analogy of the photographed stage play – only instead of a play, a big Broadway musical show was
photographed. So great were the opportunities for spectacle and mass effect, that this kind of film had
a big momentum at the beginning and for some years such spectacles continued to be produced. But
there was always something wrong with them – something that the public gradually recognised and
rejected. They were not films at all, in the pure sense of the word. Scenes stayed on the screen too
long. ‘Numbers’ dragged out their length on the track. The story was slight, and contained nothing
exciting.

The musicals took a different turn with the advent of the musical melodrama. The story was
strengthened and plot took shape. A great musical is an excellent (creative) combination between film
(image) and music.

However, it is important to note that there are two main categories of musicals – the backstage musical
and the straight musical.

The backstage musical focuses on musicians and their lives in relation to the music. It takes place in a show-business kind of set-up, i.e. organised singers who train to rise above the obstacles, or the single musician who has to fight for acceptance in the music industry.

The straight musical has people singing and dancing without any prior preparation. It is argued that straight musicals are often set in romantic situations. They have also been used a lot in children’s stories, with films ranging from The Wizard of Oz to The Lion King employing music as a main technique in their production.
By the 1980s and 1990s the live-action musical had become rare and animated films had largely taken
over the function of providing tales interspersed with musical numbers.

Rules governing the use of music


1. Time your music properly.
2. Never use familiar music as background, as it distracts the audience and draws attention to itself. Research has also shown that familiar music reminds the audience of the familiar issues attached to the song.
3. Select music that complements the voice. The voice has a tempo and rhythm which can be complemented by music.
4. Edit music so that it is a complete entity of its own – it must have a beginning, middle and ending.
5. Select music with a weak melody line – the melody line is the most memorable part of a piece of music, and a strong one will compete with the voice for attention.

Silence as a device
Silence is also an acoustic effect, but only where sounds can be heard. The presentation of silence is one of the most specific dramatic effects of the sound film. No other art can reproduce silence as film does. Even on the stage, silence appears only rarely as a dramatic effect, and then only for short moments. Radio plays cannot make us feel the depth of silence at all, because when no sound comes from our set the whole performance has ceased, as we cannot see any silent continuation of the action. The sole material of the wireless play being sound, the result of the cessation of sound is not silence but just nothing.

Silence and Space


How do we perceive silence? By hearing nothing? This is a mere negative. Yet man has few
experiences more positive than the experience of silence. Deaf people do not know what it is. But if a
morning breeze blows the sound of a cock crowing over to us from a neighbouring village, if from the
top of a mountain we hear the tapping of a woodcutter’s axe far below in the valley, if we can hear the crack of a whip a mile away – then we are hearing the silence around us. We feel the silence when we can hear the most distant sound or the slightest rustle near us. The silence is greater when we can hear very distant sounds in a very large space. A completely soundless space, on the contrary, never appears quite concrete and quite real to our perception; we feel it to be weightless and unsubstantial, for what we merely see is only a vision.

On the stage, a silence which is the reverse of speech may have a dramaturgical function, as for
instance if a noisy company suddenly falls silent when a new character appears; but such a silence
cannot last longer than a few seconds, otherwise it curdles as it were and seems to stop the
performance. On the stage, the effect of silence cannot be drawn out or made to last.

In film, silence can be extremely vivid and varied, for although it has no voice, it has very many expressions and gestures. A silent glance can speak volumes; its soundlessness makes it more expressive, because the facial movements of a silent figure may explain the reason for the silence, its tension. In film, silence does not halt action even for an instant, and such silent action gives the silence a living face.

Dialogue and Monologue


Dialogue
With the invention of sound in film, films went speech-mad. Now that films ‘could speak’, theatrical people descended on the studios to make films. They further confounded the situation, because they knew nothing about films, and started off with the assumption that in order to make a sound film it is only necessary to photograph a play. However, it never occurred to the theatrical people that a film is not, and never can be, the same thing as a play. The early silent directors had learned, by a process of trial and error which lasted many years, that the technique of stage acting is not the same as the technique of film acting – a lesson of which the theatrical people should have been informed.

On screen, stage gestures and attitudes are far too striking. By a long process, a technique of film acting was built up in which the skilful actor employed restrained gestures, attitudes and expressions which, magnified and emphasized on the screen, got him the effect he wanted. At the beginning of the sound period, when the actors from the theatre poured into the studios, these lessons had to be learned all over again.

Further, a simple analogy could have been drawn which would have indicated at the outset that, just as the screen required restraint in gesture, it also required restraint in the delivery of speech. The microphone is a very searching instrument. The round-mouthed oratory of stage delivery becomes intolerable affectation when it is amplified by loudspeakers in the cinema (unless the content justifies rhetoric). Film dialogue, it was discovered, was most effective and dramatic when it was uttered clearly, rapidly, and evenly, almost thrown away (Weis, 1985, 103). Emphasis and emotional effect must of necessity be left to the care of the visuals.

But the difference between stage and screen goes far beyond such externals as the technique of
miming and speaking. It is an organic difference. A play is all speech. When the early talkie directors
put whole plays on the screen, they were forgetting the lessons which the barker had taught them – that
the continuous utterance of words in the cinema is monotonous. More important, the preponderance of the speech element in the resulting film crushed out the other elements – visual interest, noise and music.

Moreover, films must move, or they become intolerable. Long stretches of dialogue inevitably cancel
movement and visual variety.

Film producers have learned that the use of speech must be economical and balanced with the other elements in the film, and that the style employed by dialogue writers must be literal, conversational, and non-literary.

When we need dialogue


1. Dialogue is essential in helping your audience understand your characters and plot.
2. Good dialogue should reveal the characters’ motivations and help explain why they act as they do. It should give the audience key information about the context and the setting of the story.

Good dialogue is only achieved when the scriptwriter thinks in character and understands each character’s motives and background in every scene.

Monologue
A monologue is a moment in a play, film, or novel, where a character speaks without being interrupted
by any other characters. These speeches can be addressed to someone, or spoken to the actor’s self or
to the audience, in which case they are called soliloquies. Another type of this speech, especially in
novels, is the interior monologue, where a character has a long bout of thinking personal thoughts,
which aren’t interrupted by speech or actions. This technique may also be used in film where a
voiceover provides the inner thoughts of the character.



The monologue can act in a number of ways. It can forward the plot by signifying the character’s
intentions, it can reveal information about the character’s thought processes, or it may simply serve to
more fully flesh out a character. It also gives actors an opportunity to express dramatic range and is
akin to “solos” in music. In fact, some operatic arias are considered monologues, since a character has
a chance to sing alone, and this tradition continues in the modern musical. Many musicals make use of
songs sung by an individual to flesh out characters, forward plot or explain details.

Uninterrupted spoken parts are commonplace in most plays, movies and teleplays. In auditions, actors
must find monologues that are usually no more than two minutes in length, and they may be asked to
perform two. Most seasoned actors, especially in the theater world, develop several pieces they
particularly like, and that most represent their dramatic range or their abilities to play very different
types of characters.

The student of drama may start learning how to act by first learning how to perform a monologue.
There are some common mistakes along the way, such as performing monologues that have been
“done to death.” It’s also usually important to not take a monologue out of context. Reading a play and
digging deep to understand why a character is saying what he/she is saying, and how the person might
deliver a two minute speech is very valuable.

Noise
Noise basically includes background dialogue and actual environmental sounds. This element of sound makes the scene and the whole film generally believable (realistic). Such sounds assist in building up the world of make-believe and absorbing the audience in the action.

It has been accepted that sound cannot be isolated from its acoustic environment; hence dialogue cannot occur in a vacuum. For a camera shot, what is not within the frame cannot be seen by us, even if it is immediately beside the things that are. In sound, things are different. An acoustic environment inevitably encroaches on the close-up shot, and what we hear in this case is not a shadow or a beam of light but sounds themselves, which can always be heard throughout the whole space of the picture, however small a section of that space is included in the close-up. Sound should not be blocked out.

Music played in a restaurant cannot be cut out if a special close-up of, say, two people softly talking together in a corner is to be shown.

Sound Effects
These are sounds generated to produce the illusion of reality – what is happening off-stage. However,
all sounds other than speech, music, and the natural sounds generated by the actors in synchronous
filming are considered sound effects, whether intended to be noticed by the audience or not. Big
studios and large independent services maintain vast libraries of effects ranging from unsurpassed
stand-bys to general sound effects. There are sound designers who specialize in electronic or
mechanical effects that often serve the illusion better than the actual thing.

War films, science-fiction films, and disaster films may use numerous effects tracks: gunshots on one, explosions on another, sirens on a third. Complex sounds may be built up from several tracks, the actual sound augmented or ‘sweetened’ with library effects.

If there is any single trend that can be observed in sound effects practice it is towards a much more
detailed background sound ambience.



Common problems associated with sound and the microphone
Wind roar
Wind roar (not to be confused with the sound of wind) is a low-frequency sound caused when wind strikes the mike. Even on a sunny day the wind speed can average 10 mph and buffet the mike, so there is no better cure than preventing wind from striking the mike in the first place.

Professionals use proper and expensive windbaskets in which the mike is held in an anti-shock mount
suspended centrally in a free air-space in a porous basket.

The object is to prevent vibration-transmitted sound from reaching the mike, and to channel the wind through numerous perforations and baffles until its velocity, upon reaching the mike, is reduced to zero.

It is important to cover the entire mike – not just the recording end. Any part left exposed will simply
transmit the wind roar through vibration.

Semi-professional equipment (microphone)


When working with semi-professional equipment, you have to watch the impedance and frequency response of the microphone. High-quality mics can pick up higher and lower sounds (frequencies) than lower-quality mics. Many high-quality mics are built to pick up sounds equally well over the entire frequency range (flat response).

Semi-professional microphones, which usually have low sensitivity, typically demand higher amplification, which also results in the amplification of noise.

A sound engineer must also ensure that the impedance of the microphone matches that of the cable, to avoid high resistance resulting in noise. It is also a rule that high-impedance microphones should not be used with long cables, as the cables will exert a damping effect on the high-frequency signals. If it does become necessary to use high-impedance microphones with long cable runs, impedance-converting transformers will be needed.

All professional microphones and microphone cables have a three-pronged connector – the XLR connector.

The dynamic microphone also lacks discrimination against low-frequency sounds arriving from random directions. Sounds which originate at the rear of the microphone will be bent around the microphone housing and actuate the diaphragm as if they arrived from the front. At higher frequencies, for sounds originating at its rear, the frequency response will drop off due to diffraction.

Electrical hum
Low-frequency noise, which can vary from a smooth, deep rumble to a spiky buzz, can be caused by a number of factors. The commonest are:
1. Unbalanced, unscreened and damaged cables
2. Dirty, corroded and broken connectors causing high resistance and imbalance
3. Moisture and condensation in the connectors, causing spurious leakage currents
4. Poorly screened microphones, causing static interference clicks and spurious hum leakages when handled or placed in a certain position
5. Microphones which are susceptible to interference from static and magnetic fields
6. Amplifying equipment linked incorrectly to the electricity mains, causing multi-path earth currents
Hum, radio break-through and various forms of static induction interference are minimised by using the correct cable. The cable designed for the job is the microphone cable (a tough plastic-covered, twin-conductor cable with an antistatic screen), and this should be used for all microphone circuits irrespective of the distance involved.

Hiss
Recordings need not be hissy. Noise-reduction systems are quite common in cassettes (otherwise inclined to be rather hissy). However, the problem of hiss is complex. It is mostly due to the thermal movement of electrons in the circuit resistance of amplifiers; the more one amplifies, the higher the hiss level. In other words, if one begins with an amplifier of good design (one in which the self-generated noise is very low), the main source of noise generation is the very first resistance in the circuit – the microphone. It is worthwhile to note that every microphone produces a certain amount of noise along with the desired signal.

Acoustic noise
This is noise from the environment. A problem which may arise when recording in a cove (or similar environments) is that noise from things beyond the horizon (aircraft or ships) can be picked up by a microphone placed at the focal point of the cove by a recordist who is unaware of its acoustic idiosyncrasies.

Editing Sound
Workspaces in Audition
Adobe video and audio applications provide a consistent, customizable workspace. Although each
application has its own set of panels (such as Project, Metadata, and Timeline), you move and group
panels in the same way across products.

The main window of a program is the application window. The default workspace contains groups of
panels as well as panels that stand alone. You customize a workspace by arranging panels in the layout
that best suits your working style. As you rearrange panels, the other panels resize automatically to fit
the window.

Viewing, zooming, and navigating audio


Comparing the Waveform and Multitrack editors
Adobe Audition provides different views for editing audio files and creating multitrack mixes. To edit
individual files, use the Waveform Editor. To mix multiple files and integrate them with video, use the
Multitrack Editor.

The Waveform and Multitrack editors use different editing methods, and each has unique advantages.

The Waveform Editor uses a destructive method, which changes audio data, permanently altering
saved files. Such permanent changes are preferable when converting sample rate and bit depth,
mastering, or batch processing. The Multitrack Editor uses a nondestructive method, which is
impermanent and instantaneous, requiring more processing power, but increasing flexibility. This
flexibility is preferable when gradually building and reevaluating a multilayered musical composition
or video soundtrack.

You can combine destructive and nondestructive editing to suit the needs of a project. If a multitrack
clip requires destructive editing, for example, simply double-click it to enter the Waveform Editor.
Likewise, if an edited waveform contains recent changes that you dislike, use the Undo command to
revert to previous states—destructive edits aren’t applied until you save a file.

Basic components of the editors


Though available options differ in the Waveform and Multitrack editors, both views share basic
components, such as the tool and status bars, and the Editor panel.

Switch editors
Do one of the following to switch editors:
• From the View menu, choose Waveform or Multitrack Editor.
• In the toolbar, click the Waveform or Multitrack Editor button.
• In the Multitrack Editor, double-click an audio clip to open it in the Waveform Editor.
Alternatively, double-click a file in the Files panel.
• In the Waveform Editor, choose Edit > Edit Original to open the multitrack session that created
a mixdown file.

Zoom audio in the Editor panel


Zoom into a specific time range - In either the zoom navigator or the timeline ruler, right-click and drag. The magnifying glass icon creates a selection showing the range that will fill the Editor panel.

Zoom into a specific frequency range - In the vertical ruler for the spectral display, right-click and
drag.

Extend or shorten the displayed range - Place the pointer over the left or right edge of the highlighted
area in the zoom navigator, and then drag the magnifying glass icon.



Gradually zoom in or out - In the lower right of the Editor panel, click the Zoom In or Zoom Out
button.

Zoom out full (all tracks) - You can zoom out all tracks to a consistent height so that together they fill the full height of the Multitrack Editor panel. Minimized tracks will remain at their minimum height.
To zoom out full, choose View > Zoom Out Full (All Tracks).

Zoom with the mouse wheel or Mac trackpad - Place the pointer over the zoom navigator or ruler, and
either roll the wheel or drag up or down with two fingers. (In the Waveform Editor, this zoom method
also works when the pointer is over the waveform.)

Navigate through time


Navigate by scrolling -
• In the zoom navigator, drag left or right.
• To scroll through audio frequencies in the spectral display, drag up or down in the vertical ruler.

Navigate with the Selection/View panel - The Selection/View panel shows the start and end of the
current selection and view in the Editor panel. The panel displays this information in the current time
format, such as Decimal or Bars And Beats.
1. To display the Selection/View panel, choose Window > Selection/View Controls.
2. (Optional) Enter new values into the Begin, End, or Duration boxes to change the selection or
view.

Auto-scroll navigation - You can use auto-scroll to navigate on the waveform and multitrack editor. To
choose the scroll type, open Preferences > Playback. Use the radio buttons to choose the type of scroll
individually for both the editors.
• Pagewise scroll: The playhead moves from left to right and jumps to the next frame when it reaches the right edge.
• Centered scroll: The playhead is positioned at the center and the track beneath it moves.
Therefore, the current time of audio being played is always in the middle.

Waveform Editor
If you have multichannel audio or video files, you can edit each audio channel separately in the
Waveform Editor by following these steps.
1. Select File > Open. The opened file appears in the Files Panel.
2. Expand the drop-down to see each of the channels within the file.

Multitrack Editor
To use multichannel audio or video files within a session, you can bring each of the channels of the
file into the Multitrack Editor as one single multichannel clip (default behaviour). You can also
automatically split each channel or groups of channels into different clips by holding Alt (Windows) or
Option (Mac) while dragging.



• Select and drag the channels you want to import to the Multitrack panel. You can also select
multiple channels and drag them to the same track or different tracks.
• To add channels to different tracks, hold Alt (Windows) or Option (Mac).

Create, open, or import files in Adobe Audition


Create a new blank audio file
A new blank audio file is perfect for recording new audio or combining pasted audio.
1. Choose File > New > Audio File.
To quickly create a file from selected audio in an open file, choose Edit > Copy To New.
2. Enter a filename, and set the following options:
• Sample Rate - Determines the frequency range of the file. To reproduce a given frequency, the sample rate must be at least twice that frequency.
• Channels - Determines whether the waveform is mono, stereo, or 5.1 surround. Audition saves the last five custom audio channel layouts that you have used for quick access.
• Bit Depth - Determines the amplitude range of the file. The 32-bit level provides maximum processing flexibility in Adobe Audition. For compatibility with common applications, however, convert to a lower bit depth when editing is complete.
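
As a quick illustration of the Sample Rate rule above (the Nyquist criterion), here is a minimal sketch; the function and example values are mine, not anything drawn from Audition:

def min_sample_rate(max_frequency_hz: float) -> float:
    """Lowest sample rate that can represent max_frequency_hz."""
    # The sample rate must be at least twice the highest frequency
    # you want to reproduce.
    return 2.0 * max_frequency_hz

# 20 kHz is the usual upper limit of human hearing, which is why
# the common 44.1 kHz and 48 kHz rates leave headroom above 40 kHz.
print(min_sample_rate(20_000))  # -> 40000.0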

Navigate time and playing audio in Adobe Audition


Monitoring time
In the Editor panel, the following features help you monitor time:
1. In the timeline near the top of the panel, the current-time indicator lets you start playback or
recording at a specific point.
2. In the lower left of the panel, the time display shows the current time in numerical format. The
default time format is Decimal. The same format is used by the timeline.



Features that help you monitor time
A. Current-time indicator B. Timeline C. Time display

Position the current-time indicator


In the Editor panel, do any of the following:
1. In the timeline, drag the indicator or click a specific time point.
2. In the time display at lower left, drag across the numbers, or click to enter a specific time.
3. At the bottom of the panel, click one of the following buttons:
• Pause - Temporarily stops the current-time indicator. Click the Pause button again to resume playback or recording.
• Move CTI to Previous - Places the current-time indicator at the previous marker. If there are no markers, the current-time indicator moves to the beginning of the waveform or session.
• Rewind - Shuttles the current-time indicator backward in time.
Note: Right-click the Rewind button to set the rate at which the cursor moves.
• Fast Forward - Shuttles the current-time indicator forward in time.
Note: Right-click the Fast Forward button to set the rate at which the cursor moves.
• Move CTI to Next - Moves the current-time indicator to the next marker. If there are no markers, the current-time indicator moves to the end of the waveform or session.

Record audio in the Waveform Editor


You can record audio from a microphone or any device you can plug into the Line In port of a sound
card. Before recording, you have to adjust the input signal to optimize signal-to-noise levels.
1. Set audio inputs.
2. Do one of the following:
• Create a file.
• Open an existing file to overwrite or add new audio, and place the current-time indicator
where you want to start recording.
3. At the bottom of the Editor panel, click the Record button to start and stop recording.

Direct-to-file recording in the Multitrack Editor



In the Multitrack Editor, Adobe Audition automatically saves each recorded clip directly to a WAV
file. Direct-to-file recording lets you quickly record and save multiple clips, providing tremendous
flexibility.

Inside the session folder, you find each recorded clip in the [session name]_Recorded folder. Clip
filenames begin with the track name, followed by the take number (for example, Track 1_003.wav).

After recording, you can edit takes to produce a polished final mix. For example, if you create multiple
takes of a guitar solo, you can combine the best sections of each solo. You can also use one version of
the solo for a video soundtrack, and another version for an audio CD.

Monitoring recording and playback levels


Level meters overview
To monitor the amplitude of incoming and outgoing signals during recording and playback, you use
level meters. The Waveform Editor provides these meters only in the Levels panel. The Multitrack
Editor provides them in both the Levels panel, which shows the amplitude of the Master output, and
track meters, which show the amplitude of individual tracks.

You can dock the Levels panel horizontally or vertically. When the panel is docked horizontally, the
upper meter represents the left channel, and the lower meter represents the right channel.
Note: To show or hide the panel, choose Window > Level Meters.

Levels panel, docked horizontally


A. Left channel B. Right channel C. Peak indicators D. Clip indicators

The meters show signal levels in dBFS (decibels relative to full scale), where a level of 0 dB is the maximum amplitude possible before clipping occurs. Yellow peak indicators remain lit for 1.5 seconds so you can easily determine peak amplitude.

If amplitude is too low, sound quality is reduced; if amplitude is too high, clipping occurs and
produces distortion. The red clip-indicator to the right of the meters lights up when levels exceed the
maximum of 0 dB.
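
As an aside on how a meter reading relates to sample values, here is a small sketch of the dBFS conversion; it is the generic formula, not Audition's internal code, and the function name is mine:

import math

def to_dbfs(sample: float, full_scale: float = 1.0) -> float:
    """Convert a linear sample value to dBFS (negative below full scale)."""
    if sample == 0:
        return float("-inf")  # digital silence has no finite level
    return 20.0 * math.log10(abs(sample) / full_scale)

print(to_dbfs(1.0))    # 0.0 dBFS  -> at the clip point
print(to_dbfs(0.5))    # about -6 dBFS
print(to_dbfs(0.707))  # about -3 dBFS, the suggested peak zone for recording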

Adjust recording levels for standard sound cards


Adjust levels if recordings are too quiet (causing unwanted noise) or too loud (causing distortion). To get the best-sounding results, record audio as loud as possible without clipping. When setting recording levels, watch the meters, and try to keep the loudest peaks in the yellow range, below -3 dB.

Adobe Audition doesn’t directly control a sound card’s recording levels. For a professional sound
card, you adjust these levels with the mixer application provided with the card (see the card’s
documentation for instructions). For a standard sound card, you use the mixer provided by Windows or
Mac OS.

Adjust sound card levels in Windows 7 and Vista:


1. Right-click the speaker icon in the taskbar, and choose Recording Devices.
2. Double-click the input source you want to use.



3. Click the Levels tab, and adjust the slider as needed.

Adjust sound card levels in Windows XP:


1. Double-click the speaker icon in the taskbar.
2. Choose Options > Properties.
3. Select Recording, and then click OK.
4. Select the input source you want to use, and adjust the Volume slider as needed.

Adjust sound card levels in Mac OS:


1. Choose System Preferences from the Apple menu.
2. Click Sound, and then click the Input tab.
3. Select the device you want to use, and adjust the Input Volume slider as needed.

Applying effects in the Multitrack Editor


Properties in the Effects Rack

Controls shared by the Waveform and Multitrack editors


A. Rack Preset controls B. Effect slots C. Level controls D. Main Power button

Insert, bypass, reorder, or remove effects in racks


In the Effects Rack, you manage groups of effects by using individual effect slots.



Reordering and inserting effects in racks:
A. Reorder by dragging B. Insert with the slot menu

• To insert an effect, choose it from a slot’s pop-up menu. Then adjust effect settings as desired. To later re-access effect settings, double-click the effect name in the rack.
• To bypass an effect, click its Power button.
• To bypass all effects, click the main Power button in the lower left corner of a rack. You can also click the fx power button in the Editor panel or Mixer.
• To bypass a selected group of effects, choose Toggle Power State of Selected Effects from the panel menu.
Bypass effects to quickly compare processed and unprocessed audio.
• To remove a single effect, choose Remove Effect from a slot’s pop-up menu. Or select the slot, and press Delete.
• To remove all effects, choose Remove All Effects from the panel menu.
• To reorder effects, drag them to different slots.
Reordering effects produces different sonic results. (For an example, place Reverb before Phaser, or the other way around.)

Apply individual effects in the Waveform Editor


1. From any submenu in the Effects menu, choose an effect.
2. Click the Preview button, and then edit settings as needed.
As you edit settings, watch the Levels panel to optimize amplitude.
3. To compare original audio to processed audio, select and deselect the Power button.
4. To apply the changes to the audio data, click Apply.

Apply effects to clips or tracks


In the Multitrack Editor, you can apply up to 16 effects to each clip, track, and bus and adjust them
while a mix plays. (Apply clip effects if a track contains multiple clips that you want to process
independently.)

You can insert, reorder, and remove effects in the Editor, Mixer, or Effects Rack panel. Only in the
Effects Rack, however, can you save favorite settings as presets, which you can apply to multiple
tracks.



In the Multitrack Editor, effects are nondestructive, so you can change them at any time. To readapt a
session for different projects, for example, simply reopen it and change effects to create new sonic
textures.

Below is the process of applying effects:


1. Do any of the following:
• Select a clip, and click Clip Effects at the top of the Effects Rack.
• Select a track, and click Track Effects at the top of the Effects Rack.
• Display the fx section of the Editor or Mixer. (In the Editor panel, click the button in the upper-
left corner.)
2. Choose effects for up to 16 slots in the list. (See Insert, bypass, reorder, or remove effects in racks.)
3. Press the spacebar to play the session, and then edit, reorder, or remove effects as needed.

Fade and Gain Envelope effects (Waveform Editor only)


The Amplitude and Compression > Fade and Gain Envelope effects function similarly but modify
audio differently:
• Choose Fade Envelope to reduce amplitude by varying amounts over time.
• Choose Gain Envelope to boost or reduce amplitude over time.
In the Editor panel, click the yellow envelope line to add keyframes, and drag them up or down to change amplitude. You can also select, reposition, or delete multiple keyframes together.
Note: Select the Spline Curves option to create smoother, curved transitions between keyframes, rather than
linear transitions.
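
To make the envelope idea concrete, here is a minimal numpy sketch of what an amplitude envelope does to samples; the keyframe values are hypothetical, and this illustrates linear transitions between keyframes, not Audition's implementation:

import numpy as np

sample_rate = 48_000
t = np.arange(sample_rate) / sample_rate   # one second of time stamps
audio = np.sin(2 * np.pi * 440 * t)        # a 440 Hz test tone

# Hypothetical keyframes: unity gain, dip to 25%, recover to 80%.
key_times = np.array([0.0, 0.5, 1.0])      # seconds
key_gains = np.array([1.0, 0.25, 0.8])     # linear gain values

gain = np.interp(t, key_times, key_gains)  # linear ramps between keyframes
shaped = audio * gain                      # apply the envelope to the samples

The Spline Curves option mentioned above corresponds to replacing the linear interpolation with a smooth curve through the same keyframes.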

Doppler Shifter effect (Waveform Editor only)


The Special > Doppler Shifter effect creates the increase and decrease in pitch we notice when an
object approaches and then passes us, such as when a police car passes with its siren on. When the car
comes toward you, the sound reaches your ears as a higher frequency because each sound wave is
compressed by the car moving forward. The opposite happens as the car passes by; the waves are
stretched out, resulting in a lower-pitched sound.
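
The pitch change being simulated follows the standard Doppler formula for a moving source and a stationary listener. The sketch below is a generic physics illustration, not Audition's algorithm; the siren frequency and velocities are hypothetical:

SPEED_OF_SOUND = 343.0  # metres per second, in air at about 20 degrees C

def observed_frequency(source_hz: float, source_velocity: float) -> float:
    """Positive velocity = source approaching; negative = receding."""
    return source_hz * SPEED_OF_SOUND / (SPEED_OF_SOUND - source_velocity)

siren = 700.0  # Hz, a hypothetical siren tone
print(observed_frequency(siren, +30.0))  # approaching at 30 m/s -> higher pitch
print(observed_frequency(siren, -30.0))  # receding at 30 m/s  -> lower pitch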

Note: Unlike many graphs in Adobe Audition effects, the Doppler Shifter graph is noninteractive: You can’t
directly manipulate the graph. Instead, the graph changes as you adjust the effect’s parameters.
Path Type - Defines which path the sound source appears to take. Depending on the path type, a different set of options is available.

Straight Line options:


• Starting Distance Away sets the virtual starting point (in meters) of the effect.
• Velocity defines the virtual speed (in meters per second) at which the effect moves.
• Coming From sets the virtual direction (in degrees) from where the effect appears to come.
• Passes In Front By specifies how far (in meters) the effect seems to pass in front of the
listener.
• Passes On Right By specifies how far (in meters) the effect seems to pass to the right of the
listener.
Circular options:
• Radius sets the circular dimensions (in meters) of the effect.
• Velocity defines the virtual speed (in meters per second) at which the effect moves.
• Starting Angle sets the beginning virtual angle (in degrees) of the effect.
• Center In Front By specifies how far (in meters) the sound source is from the front of the listener.
• Center On Right By specifies how far (in meters) the sound source is from the right of the listener.

Adjust Volume Based on Distance or Direction - Automatically adjusts the effect’s volume based on
the values specified.

Apply amplitude and compression effects to audio.


Amplify effect
The Amplitude And Compression > Amplify effect boosts or attenuates an audio signal. Because the
effect operates in real time, you can combine it with other effects in the Effects Rack.

Gain sliders - Boost or attenuate individual audio channels.


Link Sliders - Moves the channel sliders together.
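
Under the hood, amplification is simply a per-channel multiplication by a factor derived from the
slider's dB value. A minimal numpy sketch (an illustration of the math, not Audition's code):

import numpy as np

def amplify(samples, gain_db):
    """Boost (positive dB) or attenuate (negative dB) one channel."""
    return samples * 10 ** (gain_db / 20)

stereo = np.zeros((2, 48000))       # placeholder left/right channels
left_db = right_db = 3.0            # linked sliders move together
out = np.vstack([amplify(stereo[0], left_db),
                 amplify(stereo[1], right_db)])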

Channel Mixer effect


The Amplitude and Compression > Channel Mixer effect alters the balance of stereo or surround
channels. You can change the apparent position of sounds, correct mismatched levels, or address
phasing issues.

 Channel tabs - Select the output channel.


 Input channel sliders - Determine the percentage of the current channels to mix into the output
channel. For a stereo file, for example, an L value of 50 and an R value of 50 results in an
output channel that contains equal audio from the current left and right channels.
 Invert - Inverts a channel’s phase. (To understand this key audio concept, see How sound
waves interact.) Inverting all channels causes no perceived difference in sound. Inverting only
one channel, however, can greatly change the sound.
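
For a stereo file, the Channel Mixer effectively builds each output channel as a weighted sum of the
input channels, with Invert flipping a channel's sign. A hedged numpy sketch using the 50/50 example
above:

import numpy as np

def mix_channels(left, right, l_pct, r_pct, invert_l=False, invert_r=False):
    """Build one output channel from percentages of the input channels."""
    l = -left if invert_l else left
    r = -right if invert_r else right
    return (l_pct / 100.0) * l + (r_pct / 100.0) * r

left = np.random.randn(1000)    # placeholder audio
right = np.random.randn(1000)

# L = 50, R = 50: equal parts of both inputs in the output channel
center = mix_channels(left, right, 50, 50)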

DeEsser effect
The Amplitude and Compression > DeEsser effect removes sibilance, “ess” sounds heard in speech
and singing that can distort high frequencies.

The graph reveals the processed frequencies. To see how much audio content exists in the processed
range, click the Preview button.

 Mode - Choose Broadband to uniformly compress all frequencies, or Multiband to compress only
the sibilance range. Multiband is best for most audio content but slightly increases
processing time.
 Threshold - Sets the amplitude above which compression occurs.
 Center Frequency - Specifies the frequency at which sibilance is most intense. To verify, adjust
this setting while playing audio.
 Bandwidth - Determines the frequency range that triggers the compressor.
Note: To visually adjust Center Frequency and Bandwidth, drag the edges of the selection in
the graph.
 Output Sibilance Only - Lets you hear detected sibilance. Start playback, and fine-tune settings
above.
 Gain Reduction - Shows the compression level of the processed frequencies.
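
Conceptually, Multiband mode measures the level inside the Center Frequency/Bandwidth region and
compresses just that band once it crosses the Threshold. The rough numpy sketch below processes one
block; the default values are invented, and a real de-esser would add proper filters and envelope
smoothing:

import numpy as np

def deess_block(block, sr, center=7000.0, bandwidth=3000.0,
                threshold_db=-30.0, ratio=4.0):
    """Compress only the sibilance band of one audio block (crude sketch)."""
    spectrum = np.fft.rfft(block)
    freqs = np.fft.rfftfreq(len(block), 1 / sr)
    band = np.abs(freqs - center) <= bandwidth / 2

    # Crude band level in dB (normalized by block length, guarding log(0))
    band_rms = np.sqrt(np.mean(np.abs(spectrum[band]) ** 2)) / len(block)
    level_db = 20 * np.log10(band_rms + 1e-12)

    if level_db > threshold_db:                  # over threshold: turn the band down
        reduction_db = (level_db - threshold_db) * (1 - 1 / ratio)
        spectrum[band] *= 10 ** (-reduction_db / 20)

    return np.fft.irfft(spectrum, len(block))

out = deess_block(np.random.randn(1024) * 0.1, sr=48000)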



Dynamics Processing effect
The Amplitude And Compression > Dynamics Processing effect can be used as a compressor, limiter,
or expander. As a compressor and limiter, this effect reduces dynamic range, producing consistent
volume levels. As an expander, it increases dynamic range by reducing the level of low-level signals.
(With extreme expander settings, you can create a noise gate that totally eliminates noise below a
specific amplitude threshold.)

The Dynamics Processing effect can produce subtle changes that you notice only after repeated
listening. When applying this effect in the Waveform Editor, use a copy of the original file so you can
return to the original audio if necessary.

In the Dynamics Processing effect, you can view the Level Meter and the Gain Reduction Meter. The
Level Meter shows the input level of the audio, and the Gain Reduction Meter shows how audio signals
are compressed or expanded. These meters are visible on the right side of the graph as shown below.
Note: Use the Broadcast Limiter preset to simulate the processed sound of a contemporary radio station.

Level Meter and Gain Reduction Meter
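
All three behaviours come from one static gain curve: a compressor scales level changes above the
threshold down by the ratio, while an expander pushes low-level signals further down. A minimal gain
computer in Python (illustrative only; Audition's graph lets you draw arbitrary curves):

def output_level_db(in_db, threshold_db, ratio, mode="compress"):
    """Static curve: compress above the threshold, or expand below it."""
    if mode == "compress":
        if in_db <= threshold_db:
            return in_db                      # below threshold: unchanged
        return threshold_db + (in_db - threshold_db) / ratio
    # "expand"
    if in_db >= threshold_db:
        return in_db
    return threshold_db + (in_db - threshold_db) * ratio

# A 2:1 compressor at -20 dB turns a -8 dB peak into -14 dB:
print(output_level_db(-8, -20, 2))                    # -14.0
# An extreme expander acts as a noise gate on quiet signals:
print(output_level_db(-60, -40, 10, mode="expand"))   # -240.0, effectively silence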

Fade Envelope effect


To reduce amplitude by varying amounts over time, choose Fade Envelope (Effects > Amplitude and
Compression).

In the Waveform Editor panel, click the yellow envelope line to add keyframes, and drag them up or
down to change amplitude.

Gain Envelope effect


To boost or reduce amplitude over time, choose Gain Envelope (Effects > Amplitude and
Compression).

In the Waveform Editor panel, click the yellow envelope line to add keyframes, and drag them up or
down to change amplitude.

Volume Envelope effect



The Amplitude And Compression > Volume Envelope effect lets you change volume over time with
boosts and fades. In the Waveform Editor panel, simply drag the yellow line. The top of the panel
represents 100% (normal) amplification; the bottom represents 100% attenuation (silence).
Note: Though the Volume Envelope effect isn’t available in the Multitrack Editor, you can use automation lanes
to accomplish the same task.

Dragging an anchor point in the Editor panel

Diagnostics effects (Waveform Editor only)


Diagnostics are available either via the Effects menu or directly from the Diagnostics panel (Window
> Diagnostics). These tools let you quickly remove clicks, distortion, or silence from audio, as well as
add markers where silence occurs.
Note: For maximum audio restoration control, use diagnostics together with Spectral Display tools and Noise
Reduction effects.

Diagnose and repair, delete, or mark audio - Unlike conventional noise reduction effects, which
process all selected audio, diagnostics scan for problematic or silent areas, and then let you choose
which to address.
1. In the Diagnostics panel, choose an option from the Effect menu.
2. Click Scan.
3. At the bottom of the panel, do any of the following:
• Select one or more detected items in the list, and click Repair, Delete, or Mark. (The available
options depend upon the chosen diagnostic effect.)
Note: To mark detected clicks or clipping, right-click selected items in the list, and choose Create
Markers from the pop-up menu.
• Click Repair All, Delete All, or Mark All to address all detected items.
• Click the magnifying glass to zoom in on a selected problem in the Editor panel. Click the
icon again to zoom out.
• Click Clear Repaired, Deleted, or Marked to remove previously addressed items from the list.

DeClipper options
The Diagnostics > DeClipper effect repairs clipped waveforms by filling in clipped sections with new
audio data. Clipping occurs when audio amplitude exceeds the maximum level for the current bit
depth. Commonly, clipping results from recording levels that are too high. You can monitor clipping
during recording or playback by watching the Level Meters; when clipping occurs, the boxes on the far
right of the meters turn red.

Visually, clipped audio appears as broad flat areas at the top of a waveform. Sonically, clipped audio is
a static-like distortion.
Note: If you need to adjust the DC offset of clipped audio, first use the DeClipper effect. If you instead adjust
DC offset first, the DeClipper won’t identify clipped areas that fall below 0 dBFS.

In the Diagnostics panel, click Settings to access these options:


 Gain - Specifies the amount of attenuation that occurs before processing. Click Auto to base
the gain setting on average input amplitude.
 Tolerance - Specifies the amplitude variation in clipped regions. A value of 0% detects clipping
only in perfectly horizontal lines at maximum amplitude; 1% detects clipping beginning at 1%
below maximum amplitude, and so on. (A value of 1% detects most clipping.)
 Min. Clip Size - Specifies the length of the shortest run of clipped samples to repair. Lower
values repair a higher percentage of clipped samples; higher values repair clipped samples only
if they're preceded or followed by other clipped samples.
 Interpolation - The Cubic option uses spline curves to re-create the frequency content of
clipped audio. This approach is faster for most situations but can introduce spurious new
frequencies. The FFT option uses Fast Fourier transforms to re-create clipped audio. This
approach is typically slower but best for severe clipping. From the FFT Size menu, choose the
number of frequency bands to evaluate and replace. (More bands result in greater accuracy but
longer processing.)
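
The Tolerance and Min. Clip Size settings map naturally onto a scan for runs of samples stuck near
full scale. A hedged numpy sketch of the detection stage only (the repair stage, interpolating new
audio across each run, is separate):

import numpy as np

def find_clipped_runs(samples, tolerance_pct=1.0, min_clip_size=3):
    """Return (start, end) index pairs of runs of near-full-scale samples."""
    limit = 1.0 - tolerance_pct / 100.0       # e.g. 0.99 for 1% tolerance
    clipped = np.abs(samples) >= limit

    runs, start = [], None
    for i, flag in enumerate(clipped):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_clip_size:
                runs.append((start, i))
            start = None
    if start is not None and len(samples) - start >= min_clip_size:
        runs.append((start, len(samples)))
    return runs

x = np.concatenate([np.linspace(0, 1, 50), np.ones(20), np.linspace(1, 0, 50)])
print(find_clipped_runs(x))   # one run covering the flat-topped region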

Noise Reduction effect (Waveform Editor only)


The Noise Reduction/Restoration > Noise Reduction effect dramatically reduces background and
broadband noise with a minimal reduction in signal quality. This effect can remove a combination of
noise, including tape hiss, microphone background noise, power-line hum, or any noise that is constant
throughout a waveform.

The proper amount of noise reduction depends upon the type of background noise and the acceptable
loss in quality for the remaining signal. In general, you can increase the signal-to-noise ratio by 5 to 20
dB and retain high audio quality.

To achieve the best results with the Noise Reduction effect, apply it to audio with no DC offset. With a
DC offset, this effect may introduce clicks in quiet passages. (To remove a DC offset, choose Favorites
> Repair DC Offset.)



Evaluating and adjusting noise with the Noise Reduction graph:
A. Drag control points to vary reduction in different frequency ranges B. Low-amplitude noise
C. High-amplitude noise D. Threshold below which noise reduction occurs

Apply the Noise Reduction effect:


1. In the Waveform Editor, select a range that contains only noise and is at least half a second long.
To select noise in a specific frequency range, use the Marquee Selection tool.
2. Choose Effects > Noise Reduction/Restoration > Capture Noise Print.
3. In the Editor panel, select the range from which you want to remove noise.
4. Choose Effects > Noise Reduction/Restoration > Noise Reduction.
5. Set the desired options.
Note: When recording in noisy environments, record a few seconds of representative background noise that can
be used as a noise print later on.
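
Audition's algorithm is proprietary, but the noise-print idea can be illustrated with basic spectral
subtraction: average the spectrum of a noise-only selection, then subtract it from each block of the
recording. A rough numpy sketch (real noise reducers add overlap-add windowing and smoothing):

import numpy as np

def capture_noise_print(noise, block=1024):
    """Average magnitude spectrum of a noise-only selection."""
    frames = [noise[i:i + block] for i in range(0, len(noise) - block, block)]
    return np.mean([np.abs(np.fft.rfft(f)) for f in frames], axis=0)

def reduce_noise(audio, noise_print, block=1024, amount=1.0):
    """Subtract the noise spectrum from each block (crude, no overlap-add)."""
    out = np.copy(audio)
    for i in range(0, len(audio) - block, block):
        spec = np.fft.rfft(audio[i:i + block])
        mag = np.maximum(np.abs(spec) - amount * noise_print, 0.0)
        out[i:i + block] = np.fft.irfft(mag * np.exp(1j * np.angle(spec)), block)
    return out

noise_only = np.random.randn(48000) * 0.01   # stand-in for a captured noise print
recording = np.random.randn(96000) * 0.01    # stand-in for the noisy audio
cleaned = reduce_noise(recording, capture_noise_print(noise_only))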

Vocal Enhancer effect


The Special > Vocal Enhancer effect quickly improves the quality of voice-over recordings. The Male
and Female modes automatically reduce sibilance and plosives, as well as microphone handling noise
such as low rumbles. Those modes also apply microphone modeling and compression to give vocals a
characteristic radio sound. The Music mode optimizes soundtracks so they better complement a voice-
over.

Male - Optimizes audio for a man’s voice.


Female - Optimizes audio for a woman’s voice.
Music - Applies compression and equalization to music or background audio.

Mixing multitrack sessions


Editing multitrack sessions in the Editor Panel and Mixer
In the Multitrack Editor, the Editor panel provides several elements that help you mix and edit
sessions. In the track controls on the left, you adjust track-specific settings, such as volume and pan. In
the timeline on the right, you edit the clips and automation envelopes in each track.



Editor panel in Multitrack Editor
A. Track controls B. Zoom navigator C. Vertical scroll bar D. Track

The Mixer (Window > Mixer) provides an alternative view of a session, revealing many more tracks
and controls simultaneously, without showing clips. The Mixer is ideal for mixing large sessions with
many tracks.

Controls in the Mixer:


A. Inputs B. Effects C. Sends D. Equalization E. Volume F. Outputs
Select ranges in the Multitrack Editor

Simultaneously selecting a range and clips in the Editor panel

To select a time range:


1. In the toolbar, select the Time Selection tool.
2. In the Editor panel, do one of the following:
• To select only a range, click an empty area of the track display, and drag left or right.
• To select a range and clips, click the center of a clip, and drag a marquee.

Add or delete tracks


In the Editor panel or Mixer, do either of the following:
• To add a track, select the track you want to precede it, and then choose Multitrack > Track >
Add [type of] Track.
• To delete a track, select it, and choose Multitrack > Track > Delete Selected Track.
Note: A multitrack session supports only one video track, which Adobe Audition always inserts at the top of the
Editor panel.

Name or move tracks


You can name tracks to better identify them, or move them to display related tracks together.
• In the Editor panel or Mixer, type in the name text box.

Name text box in the Editor panel

Vertically Zoom Tracks


When you use the vertical Zoom options in the lower right of the Editor panel, all tracks zoom
simultaneously. If a session contains many tracks, you may prefer to zoom them individually:
• In the track controls, drag the top or bottom border of the track up or down.
Note: To quickly zoom all tracks, roll the mouse wheel over the track controls. To horizontally
resize all track controls, drag the right border.

Mute and solo tracks


You can solo tracks to hear them separately from the rest of a mix. Conversely, you can mute
tracks to silence them in a mix.
• To mute a track, click its Mute button in the Editor panel or Mixer.
• To solo a track, click its Solo button in the Editor panel or Mixer. To automatically
remove other tracks from Solo mode, Ctrl-click (Windows) or Command-click (Mac OS).
Tip: To remove other tracks from Solo mode by default, select Track Solo: Exclusive in the Multitrack section of
the Preferences dialog box. (Regardless of this setting, when you solo a bus, assigned tracks are always placed
in Solo mode.)

Apply an identical setting to all tracks


To increase your efficiency, you can quickly apply several settings to an entire session.
- Hold down Ctrl+Shift (Windows) or Command+Shift (Mac OS). Then select an Input, Output,
Mute, Solo, Arm For Record, or Monitor Input setting for any track.

Duplicate tracks
To perfectly copy all clips, effects, equalization, and envelopes in a track, duplicate it. Duplicate tracks
provide a great starting point for new adjustments, helping you compare different processing and
automation settings.
1. In the Editor panel or Mixer, select a track.
2. Choose Multitrack > Track > Duplicate Selected Track.

Arrange and edit multitrack clips with Audition


When you insert an audio file in the Multitrack Editor, the file becomes a clip on the selected track.
You can easily move clips to different tracks or timeline positions. You can also edit clips
nondestructively, trimming their start and end points, crossfading them with other clips, and more.

To arrange clips in the Editor panel, you use the Move or Time Selection tools.

Select and move clips


Do any of the following:
• To select an individual clip, click it in the Editor panel.
• To select all clips in selected tracks, choose Edit > Select > All Clips In Selected Track.
• To select all clips in a session, choose Edit > Select > Select All.
• To move selected clips, select the Move tool in the toolbar, and then drag the clips. Or choose
Clip > Nudge Right or Nudge Left to move clips one pixel at a time. (If you zoom in to see
individual samples, nudging moves clips one sample at a time.)
To move clips with the Time Selection tool, right-click and drag (similar to the Hybrid tool technique in
previous versions). You can also drag the clip header with any tool.

Snap to clip endpoints


Snapping lets you quickly align clips with other clips. If snapping is enabled, both dragged clips and
the current-time indicator snap to selected items. While you drag a clip, a white line appears in the
Editor panel when snapping points meet.
1. To enable snapping for selected items, click the Toggle Snapping icon at the top of the Editor
panel.
2. Choose Edit > Snapping > Snap To Clips.

Overlapping Clips



When clips overlap without crossfading, only the top-most clip plays.
You can change the stacking order of clips using either of the following methods:
• Choose Bring Clip to Front or Send Clip to Back from the Clip menu to rearrange the
selected clip.
• Choose Bring Clip to Front or Send Clip to Back from the clip's context menu. If a clip
overlaps several others, the overlapped clips are listed in the Bring Clip to Front
submenu, sorted by start time, so you can bring a hidden clip to the front.

Copy a clip
You can create two types of copied audio clips: reference copies that share source files and unique
copies that have independent source files. The type of copy you choose depends upon the amount of
available disk space and the nature of destructive editing you plan to perform in the Waveform Editor.

Reference copies consume no additional disk space, letting you simultaneously edit all instances by
editing the original source file. (For example, you can add the Flanger effect to a source file in the
Waveform Editor and automatically apply the effect to all 30 referenced copies in a session.)

Unique copies have a separate audio file on disk, allowing for separate editing of each version in the
Waveform Editor. (For example, you can add destructive effects to the version in an introduction while
leaving the version in a verse dry.)
To quickly copy a reference, press Ctrl + C (Windows) or Cmd + C (Mac OS). Alternatively, Alt-drag
(Windows) or Option-drag (Mac OS) the clip header.
1. Click the Move tool in the toolbar. Then right-click and drag the clip.
To copy with the Time Selection tool, right-click and drag the clip header (similar to the Hybrid tool technique
in previous versions).
2. Release the mouse button, and choose one of the following from the pop-up menu:
• Copy Here (to copy a reference)
• Copy Unique Here

Trimming and extending clips


You can trim or extend audio clips to suit the needs of a mix. Because the Multitrack Editor is
nondestructive, clip edits are impermanent; you can return to the original, unedited clip at any time. If
you want to permanently edit an audio clip, however, you can quickly open the source file in the
Waveform Editor.

Remove a selected range from clips


1. In the toolbar, click the Time Selection tool.
2. Drag across one or more clips to select them and a range.
3. Do one of the following:
• To remove the range from clips and leave a gap in the timeline, choose Edit > Delete.
• To remove the range and collapse the gap in the timeline, choose Edit > Ripple Delete, and
select one of the following options:
- Selected Clips - Removes selected clips, shifting remaining clips on the same tracks.
- Time Selection in Selected Clips - Removes the range from selected clips, splitting
them if necessary.
- Time Selection in All Tracks - Removes the range from all clips in the session.
- Time Selection in Selected Track - Removes the range only from the currently
highlighted track in the Editor panel.



Collapse a gap between clips on a track
Right-click the empty area between the clips, and choose Ripple Delete > Gap.

Trim or extend clips


1. If you want to repeat a clip, right-click it and select Loop.
2. In the Editor panel, position the cursor over the left or right edge of the clip. The edge-dragging icon
appears.
3. Drag clip edges.

The Clip > Trim menu offers three options.


 Trim to time selection - If you want to use a portion of the clip, select the portion and choose
Trim to time selection.
 Trim start to playhead - Place the playhead where you want the clip to start, and choose this
option to trim the clip.
 Trim end to playhead - Place the playhead where you want the clip to end, and choose this
option to trim the clip.

Shift the contents of a trimmed or looped clip


You can slip edit a trimmed or looped clip to shift its contents within clip edges.

Shift clip contents within clip edges

1. In the toolbar, click the Slip tool.


2. Drag across the clip.

Split clips
Split audio clips to break them into separate clips that you can independently move or edit.

Split clips with the Razor tool


1. In the toolbar, hold down the Razor tool, and choose one of the following from the pop-up
menu:
 Razor Selected Clips - Splits only clips you click.
 Razor All Clips - Splits all clips at the time point you click.
Tip: To switch between these modes in the Editor panel, press Shift.

2. In the Editor panel, click where you want the split to occur.

Split all clips at the current-time indicator


1. Position the current-time indicator where one or more audio clips exist.
2. Choose Clip > Split.
How to match, fade, and mix clip volume with Audition
Match multitrack clip volume
If multitrack clips have very different volume, making mixing difficult, you can match their volumes.
Because the Multitrack Editor is nondestructive, this adjustment is completely reversible.
1. Using the Move or Time Selection tool, Ctrl-click (Windows) or Command-click (Mac OS) to select
multiple clips.
2. Choose Clip > Match Clip Volume.
3. From the pop-up menu, choose one of the following options:
 Loudness - Matches an average amplitude you specify.
 Perceived Loudness - Matches a perceived amplitude you specify, accounting for middle
frequencies that the ear is most sensitive to. This option works well unless frequency emphasis
varies greatly (for example, midrange frequencies are pronounced in a short passage, but bass
frequencies are elsewhere).
 Peak Volume - Matches a maximum amplitude you specify, normalizing the clips. Because this
option retains dynamic range, it’s a good choice for clips you plan to process further, or for
highly dynamic audio like classical music.
 Total RMS Amplitude - Matches an overall root-mean-square amplitude you specify. For
example, if the bulk of each file sits around -50 dBFS, their total RMS values would reflect
that, even if one file contains more loud passages.
4. Enter a Target Volume.
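
Peak and RMS matching both reduce to measuring a statistic and applying whatever gain moves it to
the target. A numpy sketch of those two cases (the Loudness options add perceptual weighting that is
omitted here):

import numpy as np

def gain_to_match_peak(samples, target_dbfs):
    """Linear gain that brings the clip's peak to the target level."""
    peak_db = 20 * np.log10(np.max(np.abs(samples)) + 1e-12)
    return 10 ** ((target_dbfs - peak_db) / 20)

def gain_to_match_rms(samples, target_dbfs):
    """Linear gain that brings the clip's RMS to the target level."""
    rms_db = 20 * np.log10(np.sqrt(np.mean(samples ** 2)) + 1e-12)
    return 10 ** ((target_dbfs - rms_db) / 20)

clip = 0.25 * np.random.randn(48000)               # placeholder clip
matched = clip * gain_to_match_peak(clip, -3.0)    # peak normalized to -3 dBFS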

Fade or crossfade multitrack clips


On-clip fade and crossfade controls let you visually adjust fade curves and duration. Controls for fade
ins and fade outs always appear in the upper-left and upper-right corners of clips. Controls for
crossfades appear only when you overlap clips.

On-clip controls
A. Drag controls in clip corners to fade in and out B. Overlap clips to crossfade

Fade a clip in or out


In the upper-left or upper-right corner of the clip, drag the fade icon inward to determine fade
length, and drag up or down to adjust the fade curve.

Crossfade overlapping clips


When you crossfade clips on the same track, you overlap them to determine the size of the transition
region (the larger the overlapping area, the longer the transition).
1. Place two clips on the same track, and move them so they overlap.
2. At the top of the overlapping area, drag the left or right fade icon up or down to adjust the fade
curves.

Fade options
To access the following fade options, select a clip, and then either right-click a fade icon in the Editor
panel, or choose Clip > Fade In or Fade Out.
 No Fade - Deletes the fade or crossfade.
 Fade In, Fade Out, or Crossfade - If clips overlap, lets you choose the fade type.
 Symmetrical or Asymmetrical (crossfades only) - Determines how the left and right fade curves
interact when you drag them up and down. Symmetrical adjusts both fades identically, while
Asymmetrical lets you adjust fades independently.
 Linear or Cosine - Applies either an even, linear fade or an S-shaped fade that starts slowly,
then rapidly changes amplitude, and ends slowly.
Tip: To switch between Linear and Cosine modes while dragging fade icons, hold down Ctrl (Windows)
or Command (Mac OS).
 Automatic Crossfades Enabled - Crossfades overlapping clips. Deselect this option if automatic
crossfades are undesirable or interfere with other tasks, such as trimming clips.
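
The two curve shapes are easiest to see as gain ramps: Linear rises evenly from 0 to 1, while Cosine
follows an S-curve that starts and ends gently. A small numpy sketch of a one-second fade-in in both
modes:

import numpy as np

n = 48000                                     # one second at 48 kHz, assumed
x = np.linspace(0.0, 1.0, n)

linear_fade = x                               # even ramp from silence to full level
cosine_fade = 0.5 - 0.5 * np.cos(np.pi * x)   # S-curve: slow, fast, slow

# A crossfade applies complementary ramps to the two overlapping clips:
# out = clip_a * (1 - cosine_fade) + clip_b * cosine_fade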

Create a single audio clip from multiple clips


You can combine the contents of multiple clips in the same time range, creating a single clip that you
can quickly edit in either the Multitrack or Waveform Editor.

Creating single clip from multiple clips in Multitrack Editor

1. In the Editor panel, do any of the following:


• Select a specific time range.
• Select specific clips if bouncing to a new track.
• Select nothing to mix down an entire session.
2. To combine the contents of the original clips, do either of the following:
• To create a track and clip in the Multitrack Editor, choose Multitrack > Bounce To New
Track.
• To create a file in the Waveform Editor, choose Multitrack > Mixdown To New File.



Automating mixes with envelopes
By automating mixes, you can change mix settings over time. For example, you can automatically
increase volume during a critical musical passage and later reduce the volume in a gradual fade out.

Automation envelopes visually indicate settings at specific points in time, and you can edit them by
dragging keyframes on envelope lines. Envelopes are nondestructive, so they don’t change audio files
in any way. If you open a file in the Waveform Editor, for example, you don’t hear the effect of any
envelopes applied in the Multitrack Editor.

Clip and track envelopes in the Editor panel


A. Clip envelope B. Track envelope

Automating clip settings


With clip envelopes, you can automate clip volume, pan, and effect settings.

On stereo tracks, clip volume and pan envelopes appear by default; you can identify them by color and
initial position. Volume envelopes are yellow lines initially placed across the upper half of clips. Pan
envelopes are blue lines initially placed in the center. (With pan envelopes, the top of a clip represents
full left, while the bottom represents full right.)
Note: On mono and 5.1 surround tracks, clips lack pan envelopes.

Two clip envelopes


A. Pan envelope B. Volume envelope

Automating track settings


With track envelopes, you can change volume, pan, and effect settings over time. Adobe Audition
displays track envelopes in an automation lane below each track. Each automated parameter has its
own envelope, which you edit just like clip envelopes.



Automating track settings in the Editor panel
A. Automation lane B. Envelope for parameter

Create track envelopes


Track envelopes let you precisely change track settings at specific points in time.

Showing automation lanes in Editor panel.

1. In the Editor panel, click the triangle to the left of the Track Automation Mode menu for the track
you want to automate. (The menu is set to Read by default.)
2. From the Show Envelopes menu, select a parameter to automate.
3. On the envelope line, click and drag to add and adjust keyframes.

Adjust automation with keyframes


Keyframes on envelope lines change clip and track parameters over time. Adobe Audition
automatically calculates, or interpolates, all the intermediate values between keyframes using one of
two transition methods:

• Hold transitions create an abrupt change in value at each new keyframe.


• Linear transitions create a gradual, even change between keyframes.

You can also apply spline curves to an entire envelope, overriding the keyframe-specific setting above
to create natural-sounding transitions that change in speed near keyframes.



Transitions between keyframes
A. Hold B. Linear (the default) C. Spline curves
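
The difference between the transition modes is just the interpolation rule used between keyframes. A
small Python sketch of Hold and Linear (Spline would fit a smooth curve through the same points):

def value_at(t, keyframes, hold=False):
    """Interpolate a parameter at time t from (time, value) keyframes."""
    keyframes = sorted(keyframes)
    for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            if hold:
                return v0                  # value jumps only at the next keyframe
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)   # even, gradual change

    # Before the first or after the last keyframe, hold the nearest value
    return keyframes[-1][1] if t > keyframes[-1][0] else keyframes[0][1]

volume = [(0.0, -12.0), (2.0, 0.0), (5.0, -60.0)]   # hypothetical envelope
print(value_at(1.0, volume))              # -6.0: halfway through the first ramp
print(value_at(1.0, volume, hold=True))   # -12.0: value holds until 2.0 s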

Add a keyframe
Do either of the following:
• Position the pointer over an envelope line. When a plus sign appears, click.
• Position the playhead where you’d like a track parameter to change. Then click the Add
Keyframe icon in the track controls.

Navigate between track keyframes


1. In the Editor panel, choose a parameter from the Select menu near the bottom of the track controls.
2. Click the Previous Keyframe or Next Keyframe icon.

Select multiple keyframes for a parameter


• Right-click any keyframe, and choose Select All Keyframes.
• Hold down Ctrl (Windows) or Command (Mac OS), and click specific keyframes.
• Hold down Shift, and click to select a series of keyframes.

Reposition keyframes or the envelope line


• To reposition selected keyframes, drag them. (To maintain time position or parameter value,
hold down Shift and drag.)
• To reposition a segment of an envelope without creating a keyframe, hold down Ctrl
(Windows) or Command (Mac OS), and drag.

Change the transition between two keyframes


Right-click the first keyframe, and select Hold Keyframe to abruptly change values, or deselect it to
gradually transition from one value to the next.

Apply spline curves to an entire envelope


Right-click an envelope line, and choose Spline Curves.

Delete keyframes
Right-click an envelope line, and choose Delete Selected Keyframes. Or, drag an individual keyframe
off a clip or track.

Disable clip keyframe editing


To avoid inadvertently creating or moving keyframes, disable keyframe editing.
- From the Multitrack menu, deselect Enable Clip Keyframe Editing.

Export a multitrack mix to Premiere Pro



1. Choose Multitrack > Export to Adobe Premiere Pro.
2. Specify a name and location for the exported session folder, and set the following options:
 Sample Rate - By default, reflects the sample rate of the original sequence. Select another rate
to resample the file for different output mediums.
 Export each track or bus as a stem - Converts the full timeline duration of each track into a
single clip, combining multiple clips if necessary. Select this option to extend and align clips
with sequence start and end points.
 Mixdown Session To - Exports the session to a single mono, stereo, or 5.1 file.
 Open in Adobe Premiere Pro - Automatically opens the sequence in Premiere Pro. Deselect this
option if you plan to edit the sequence later or transfer it to a different machine.
3. Click Export.
4. When Premiere Pro opens the exported XML file (either automatically or via the File > Import
command), the Copy Adobe Audition Tracks dialog box appears.

From the Copy to Active Sequence menu, choose where the exported Audition tracks begin. Any new
tracks are added below existing ones.

Saving and exporting audio files


Save audio files
In the Waveform Editor, you can save audio files in a variety of common formats. The format you
choose depends on how you plan to use the file. Keep in mind that each format stores unique
information that might be discarded if you save a file in a different format.

1. In the Waveform Editor, do one of the following:


• To save changes in the current file, choose File > Save.
• To save changes under a different filename, choose File > Save As. Or choose File > Export >
File to keep the current file open.
• To save currently selected audio as a new file, choose File > Save Selection As.
• To save all open files in their current formats, choose File > Save All.
2. Specify a filename and location, and choose a file format.
3. Set the following options:
 Sample Type - Indicates the sample rate and bit depth. To adjust these options, click Change.
 Format Settings - Indicates data compression and storage modes; to adjust these, click Change.
 Include Markers and Other Metadata - Includes audio markers and information from the
Metadata panel in saved files.

Save multitrack sessions


A multitrack session file is a small, non-audio file. It merely stores information about locations of
related audio files on your hard drive, the duration of each audio file within the session, the envelopes
and effects applied to various tracks, and so forth. You can reopen a saved session file later to make
further changes to the mix.

If you create multitrack mixes entirely in Adobe Audition, save session files in the native SESX
format.

1. In the Multitrack Editor, do one of the following:


• To save changes to the current session file, choose File > Save.
• To save changes under a different filename, choose File > Save As. Or choose File > Export >
Session to keep the current session open.
• To save the session file and all the audio files it contains, choose File > Save All.
2. Specify a filename and location.
3. To include audio markers and information from the Metadata panel, select Include Markers And
Other Metadata.
