Sound and The Microphone. CFT 402 – Sound in Production. Lecturer: Shapaya
Sound
Besides the world of vision in film, there is the world of sound, another dimension, another aspect of
reality, the most significant thing about it perhaps being that it comes to us through a different sense
organ, which determines the nature of our experience of it. Differences in its artistic use in the cinema
will stem from differences between the two senses of sight and hearing, but the general picture of the
artist expressing his experience through a medium in which he cannot accurately reproduce physical
reality, and which thus offers opportunities for the exercise of his art, will still hold good.
Sound is a form of energy which, depending on its loudness, tone and concentration, can have
tremendous power. It can start an avalanche, shatter glass, shake buildings and damage human
eardrums. This is because sound energy literally moves air in wave-shaped patterns, which creates
pressure on any surface it strikes.
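As a rough quantitative illustration, the pressure a sound exerts is usually expressed as a sound pressure level (SPL) in decibels relative to the threshold of hearing (20 micropascals): SPL = 20 log10(p / p0). A minimal Python sketch; the example pressure values are assumed for illustration, not taken from these notes:

    import math

    P_REF = 20e-6  # reference pressure: 20 micropascals, nominal threshold of hearing

    def spl_db(pressure_pa):
        # Convert an RMS sound pressure in pascals to dB SPL.
        return 20 * math.log10(pressure_pa / P_REF)

    print(round(spl_db(20e-6)))  # 0 dB   - threshold of hearing
    print(round(spl_db(0.02)))   # 60 dB  - roughly ordinary conversation (assumed value)
    print(round(spl_db(20.0)))   # 120 dB - loud enough to damage hearing (assumed value)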
Next to sight, hearing is the richest and most complex of our senses. Sound is the basis of one of the
greatest of the arts, music. As speech it forms a medium for thought, and is the most important means
of communication among human beings. As one can imagine, those who hoped the cinema would
create a total illusion of reality were not likely to be satisfied with sight alone, and from the very
beginning of the cinema every effort was made to incorporate sound.
The mechanical reproduction of sound was developed as early as the first motion pictures, but the
problems of amplifying sound sufficiently for an audience and synchronising it with the film image
were not solved until the late 1920s. Although sound attracted crowds to the cinema to hear the new miracle,
the artistic levels of the best silent pictures were not reached immediately. The new ‘talkies’ were
mostly poor imitations of theatrical plays, with dialogue and sounds used indiscriminately.
The incorporation of sound in films in the late 1920s used a system of 'optical sound'. The principle
involved was that variations in sound waves are recorded as variations in light and shade on a separate
sound track carried on the same strip of film; as the film is projected, the projector turns these optical
variations back into sound waves, which the audience hears through loudspeakers at the same time as
they see the pictures on the screen.
Sound brought with it disadvantages as well as advantages. Just as, in its beginnings, the cinema had
copied the scenery and static viewpoint of the theatre, so, when it was able to combine speech with its
pictures, it copied the continuous dialogue of the theatre. The trouble with this is that stage dialogue,
even with naturalistic acting, must be spoken clearly and loudly enough to be heard, even if it means
talking in a 'stage whisper.'
To offset its drawbacks, sound brought important advantages. The silent cinema of the 20s was silent
only in name, for it had from the start been accompanied by music. In fact, to watch a silent film
altogether in silence is a curious, incomplete experience. Often the accompaniments were bad,
inappropriate or hackneyed.
An advantage of sound to film was that it freed the image to be itself; in other words, it relieved the
image of the need to try to express sound in visual terms. In the silent film there was something
particularly strained about shots of a factory siren blowing or women singing when the audience could
not hear the actual sound of the subjects.
The coming of the sound film also enabled the artist to use silence in a film with a positive effect. On
the stage the effect of silence cannot be drawn out or made to last as it can in the cinema. In a film the
effect can be extremely vivid and varied and a silent glance can speak volumes.
Importance of sound
1. Attracts attention
2. Increases emotional impact
3. Enhances understanding
4. Communicates
5. Increases recall of a visual message
The Microphone
A microphone is a device that transforms sound/acoustical energy into electrical energy. Though
relatively simple, it forms the basis of all voice communication, enabling sound to be transmitted and
recorded by electronic means.
It is frequently assumed that by sticking a microphone into the scene at the last minute we have taken
care of the audio requirements, but good audio needs at least as much preparation and attention as the
video portion. Audio, like any other production element, should not be ‘added’ but integrated into the
production planning from the very beginning.
Because no camcorder’s built-in ‘squeaker’ can record more than general background sound – which
often includes a lot of out-of-shot and unwanted material – additional ‘specialised’ mics are often
essential.
The pickup of live sound is done through a variety of microphones. How good or bad a particular
microphone is depends not only on how it is built but, especially, on how it is used.
Since the invention of the microphone by Alexander Graham Bell, many types and designs of
microphones have been developed over the years, each having served well until its replacement.
Microphones may be classified according to their physical design, such as carbon, capacitor, ribbon-
velocity, moving-coil, semiconductor, crystal and ceramic. They also may be classified according to
their polar patterns as omnidirectional, bi-directional, directional, superdirectional, and cardioid.
Nomenclature given to microphones designed for special use, such as wireless, dual-stereophonic, in-
line, and high-intensity, might be considered yet another category. However, whatever classification is
used, the design of the microphones will vary according to the manufacturer.
1. Sound-generating element
All microphones transduce (convert) sound waves into electronic energy, which is amplified and
reconverted into sound waves by the loudspeaker. This initial conversion is accomplished by the
generating element of the microphone. There are three major sound converting systems that can also
be used to classify microphones: dynamic, condenser, and ribbon. Two earlier designs, the carbon and
the crystal microphone, illustrate the same principle of transduction and are described first.
The carbon microphone consists of a metallic cup filled with carbon granules; a movable metallic diaphragm
mounted in contact with the granules covers the open end of the cup. Wires attached to the cup and
diaphragm are connected to an electrical circuit so that a current flows through the carbon granules.
Sound waves vibrate the diaphragm, varying the pressure on the carbon granules. The electrical
resistance of the carbon granules changes with the varying pressure, causing the current in the circuit
to change according to the vibrations of the diaphragm.
One of the principal disadvantages of the carbon microphone is that it has continuous high-frequency
hiss caused by the changing contact resistance between the carbon granules. In addition, the frequency
response is limited and the distortion is rather high.
The crystal microphone depends for its action on the piezoelectric effect of certain crystals, most commonly Rochelle salt.
The term ‘piezoelectric’ refers to the fact that when pressure (in this instance - sound waves) is applied
to the crystals in the proper direction, a proportionally varying voltage is produced between opposite
faces of the crystal.
The basic concept of this microphone is that a voltage develops between two faces of the crystal when
pressure is applied to it. Sound waves vibrate a diaphragm, which in turn varies the pressure on a
piezoelectric crystal. This generates a small voltage, which is then amplified.
The advantages of the crystal microphone are its relatively high output voltage, acceptable sound
quality and low cost.
Dynamic microphone is a term that incorporates the moving-coil, ribbon and velocity microphones.
The dynamic microphone in principle closely resembles the dynamic loudspeaker, found in all radios
and television receivers. In fact, a two- or three-inch dynamic speaker will make a satisfactory
microphone for such limited-quality uses as intercommunication systems, and is frequently so
employed.
The dynamic microphone consists of a number of turns of wire wound in what is called a voice coil
and rigidly attached to a diaphragm. This coil is suspended between the poles of a permanent magnet.
Sound causes the diaphragm to vibrate, moving the coil back and forth between the poles and
producing an alternating voltage proportional to the applied sound.
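The size of that voltage can be estimated from the standard induction relation for a conductor moving in a magnetic field, e = B × l × v. The following Python sketch uses assumed, order-of-magnitude values (not figures from these notes) to show why a dynamic mic's output is so small:

    # Induced EMF for a conductor moving in a magnetic field: e = B * l * v
    B = 1.0      # flux density in tesla (assumed, typical of a strong mic magnet)
    l = 5.0      # total length of voice-coil wire in the gap, in metres (assumed)
    v = 0.0002   # peak diaphragm velocity in metres/second (assumed, quiet speech)

    e = B * l * v
    print(f"peak output: {e * 1000:.1f} mV")  # ~1 mV: hence the heavy amplification needed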
In ribbon microphones, a thin metallic ribbon, which itself serves as the diaphragm, is suspended in a
magnetic field. When sound waves vibrate the ribbon, a small voltage is generated in the ribbon by
electromagnetic induction.
The velocity microphone, a high-quality device widely used in commercial broadcasting and
recording, is similar in principle. A coil of light wire is suspended between the poles of a permanent
magnet. When vibrated by sound, the coil of wire cuts the lines of force between the poles in
alternating directions, generating an alternating voltage across the length of the wire.
Some modern microphones, designed to pick up sound from one direction only, combine both ribbon
and coil elements for a much richer sound pickup.
Since the voltage generated by the dynamic microphone is very small, much greater amplification is
necessary for practical use than in the case of carbon and crystal microphones.
Generally, the dynamic mic is the most rugged. Dynamic mics can tolerate reasonably well the rough
handling microphones usually receive, and they can withstand extremely high sound levels without
damage to the microphone or excessive distortion of the incoming sound (input overload).
Condenser microphones, by contrast, usually need a small battery to power their built-in preamplifier.
Because of this built-in preamplifier they are more sensitive and produce a stronger output than other
types of microphones. Condenser microphones have a wide frequency response, low distortion, and
little internal noise.
The electret condenser is a development of the condenser with a ferroelectric material that has been
permanently electrically charged - polarized. These are the most common types of microphones in the
industry because of their low production costs and ease of manufacture. Nearly all lavalier, headset
and cell-phone microphones use electret condenser technology.
However, the condenser microphone differs from the dynamic microphone in that it is more sensitive
to physical shock, temperature changes and input overload.
Condenser mics have also been noted to have a lower signal-to-noise ratio than dynamic mics. It is,
though, only really obtrusive when recording quiet sounds in a quiet indoor environment (Richardson,
1992, 67).
2. Sound-pickup pattern
Like our ears, any type of microphone can hear from all directions as long as the sounds are within its
hearing range. But whereas some microphones hear sounds from all directions equally well, others
hear better in a specific direction. The territory within which a microphone can hear well is called its
pickup pattern.
In film production, there are omni-directional and unidirectional microphones. The omni-directional
microphone hears sounds from all directions equally well. The unidirectional microphone hears better
in one direction – the front of the microphone. Because the polar patterns of unidirectional
microphones are roughly heart-shaped, they are called cardioid. The super-cardioid, hyper-cardioid,
and ultra-cardioid have progressively narrower pickup patterns, which means that their hearing is more
and more concentrated on what is happening in front rather than to the side.
Which type you use depends primarily on the production situation and the sound quality required. If
you require the actual scenery sounds for authenticity while your actors are performing, then an omni-
directional mic will be more appropriate. If, on the other hand, you are in a studio trying to pick up the
low-key, intimate conversation between two people, you need a unidirectional mic. For example, an
ultra-cardioid mic, or shotgun mic, will give you a good pickup of their conversation, even if the mic
has to be relatively far away from the people so as to be out of the picture. Unlike the omni-directional
mic, the shotgun mic ignores most of the other sounds present, such as the inevitable noises of an
active studio – people and cameras moving about, the humming of lights or the rumble of the air
conditioner.
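All of these patterns can be approximated by one first-order formula, response(θ) = A + (1 − A)·cos θ, where A = 1 gives omnidirectional pickup, A = 0.5 the cardioid, and smaller values of A progressively narrower patterns. A minimal Python sketch (the coefficient values are conventional approximations):

    import math

    # First-order polar response: A + (1 - A) * cos(theta)
    PATTERNS = {"omnidirectional": 1.0, "cardioid": 0.5, "supercardioid": 0.37}

    def response(pattern, angle_deg):
        a = PATTERNS[pattern]
        return abs(a + (1 - a) * math.cos(math.radians(angle_deg)))

    for name in PATTERNS:
        # Relative sensitivity at the front, side and rear of the mic
        front, side, rear = (response(name, d) for d in (0, 90, 180))
        print(f"{name:16s} front={front:.2f} side={side:.2f} rear={rear:.2f}")

Note how the cardioid's rear sensitivity is zero while its side sensitivity is halved, matching the heart-shaped pattern described above.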
Recording Sound
Picking the right Microphone
Different microphones are used for different purposes. The choice of which mic to use depends on
many factors, such as:
1. Frequency range of the sound source.
2. The nature of the sound – The sound might be a percussive sound; a large, vibrating single source,
such as a piano; a quiet temple bell; a screaming guitar; a string section.
3. The level of background noises in the recording area.
4. The acoustics – for example, the good acoustics that are part of the recording of an orchestra, or the
bad acoustics of a jazz club.
5. The approach to the recording – multi-mic, simple stereo, news gathering in which the sound is on
the move and intelligibility is key, a lecture that includes questions from the audience.
6. Purpose of the recording – film, music, podcast.
7. Planned output – mono, stereo, 5.1.
Music/Sound Track
Alberto Cavalcanti, in his paper Sound in Films, declares with finality that the silent film never existed,
simply because music had always been integrated (or at least attempts were made to integrate the two
arts) with the motion picture from the turn of the century.
It is important to note that the art of recording sound was already present at the time of the invention of
the moving images; the only obstacle present was how to fuse the images to the sound. As soon as
films were invented, and long before there were such things as picture palaces, filmmakers and
showmen began to employ devices to provide a sound accompaniment and complete the illusion. First
they used the phonograph, but not for long, because of its fragility.
The next device to which showmen turned was the ‘barker.’ In those early times, the bulk of film
distribution was in fairgrounds, where barkers were easy to find.
When the film started to move into special premises, called cinemas, the use of the barker in turn
ceased to be practical. A man’s voice could not be heard easily in a large hall. Besides, a running
commentary was monotonous in a full-length show.
As the barker went out, the inter-title came in to explain the action and comment upon it. Ambitious
filmmakers raided novels and stage successes for film subjects, without giving any thought to real
filmic possibilities, and indeed without any real conception of film itself.
Once films were shown in houses built for the purpose, the moving picture rose from the status of the
pedlar to a more bourgeois standard, to which the greater refinement of a musical accompaniment was
appropriate.
At the beginning music was used for two very different purposes at once: (a) to drown the noise of the
projectors; (b) to give emotional atmosphere.
As cinema developed commercially, the music became more elaborate and played a larger and larger
part in the show as a whole. Cinema owners vied with each other to attract the public: the piano
became the trio, the trio became a salon orchestra, the salon orchestra became a symphony orchestra.
Sometimes, for an important film, special music would be chosen or even composed for the occasion,
but, although many large cinemas had orchestras, these would mostly play music of their own
choosing. Many large cinemas had a special organ installed which was used to accompany all films
regardless. Most of the music was trite and unimaginative; sometimes it was excruciating. The music
was not intended to increase the reality of the mute image by adding sound; however, it did increase
the audience's emotional receptiveness. Its main function was 'to affirm and legitimate the silence'.
Thus, provided it was not actively offensive, the music could to a great extent be disregarded; it was
the visuals that mattered.
So far as the type of music is concerned, one can say generally that the best music is that which is
composed by a sensitive artist to fit the film. All kinds of concert music have been used for films, from
Byrd to boogie-woogie, but, however good the music itself, it may be inappropriate for the film
concerned. Several ‘new wave’ French directors have used Vivaldi, Bach and Diabelli, but such formal
music as this, with so strong a shape and rhythm of its own, tends to take too prominent a place.
Film music is necessarily ‘canned’ music and as such cannot attain the delicacy and richness of the
actual concert hall.
Soon after the sound film was introduced, the ‘musical’ film came into being. This was at first an exact
analogy of the photographed stage play – only instead of a play, a big Broadway musical show was
photographed. So great were the opportunities for spectacle and mass effect, that this kind of film had
a big momentum at the beginning and for some years such spectacles continued to be produced. But
there was always something wrong with them – something that the public gradually recognised and
rejected. They were not films at all, in the pure sense of the word. Scenes stayed on the screen too
long. ‘Numbers’ dragged out their length on the track. The story was slight, and contained nothing
exciting.
The musicals took a different turn with the advent of the musical melodrama. The story was
strengthened and plot took shape. A great musical is an excellent (creative) combination between film
(image) and music.
However, it is important to note that there are two main categories of musicals – the backstage musical
and the straight musical.
The backstage musical focuses on musicians and their lives in relation to the music. It takes place in a
show-business kind of set-up, i.e. organised singers who train to rise above the obstacles, or the single
musician who has to fight to get acceptance in the music industry.
The straight musical has people singing and dancing without any prior preparation. It is argued that
straight musicals are often in romantic set-ups. They have also been used a great deal in children's
stories, with films ranging from The Wizard of Oz to The Lion King employing music as a main
technique in their production.
By the 1980s and 1990s the live-action musical had become rare and animated films had largely taken
over the function of providing tales interspersed with musical numbers.
Silence as a device
Silence is also an acoustic effect, but only where sound can be heard. The presentation of silence is one
of the most specific dramatic effects of the sound film. No other art can reproduce silence as film does.
Even on the stage silence appears only rarely as a dramatic effect, and then only for short moments.
Radio plays cannot make us feel the depth of silence at all, because when no sound comes from our
set, the whole performance has ceased, as we cannot see any silent continuation of the action. The sole
material of the wireless play being sound, the result of the cessation of sound is not silence but just
nothing.
On the stage, a silence which is the reverse of speech may have a dramaturgical function, as for
instance if a noisy company suddenly falls silent when a new character appears; but such a silence
cannot last longer than a few seconds, otherwise it curdles as it were and seems to stop the
performance. On the stage, the effect of silence cannot be drawn out or made to last.
In film, silence can be extremely vivid and varied, for although it has no voice, it has very many
expressions and gestures. A silent glance can speak volumes; its soundlessness makes it more
expressive because the facial movements of a silent figure may explain the reason for the silence, its
tension. In film, silence does not halt action even for an instant, and such silent action gives the silence
a living face.
On the screen, stage gestures and attitudes appear far too striking. By a long process, a technique of
film acting was built up in which the skilful actor employed restrained gestures, attitudes, and
expressions which, magnified and emphasized on the screen, got him the effect he wanted. At the
beginning of the sound period, when actors from the theatre poured into the studios, these lessons had
to be learned all over again.
Further, a simple analogy might have been drawn, which would have indicated at the outset that just as
the screen required restraint in gesture, it also required restraint in the delivery of speech. The microphone
is a very searching instrument. The round-mouthed oratory of stage delivery becomes intolerable
affectation when it is amplified by loudspeakers in the cinema (unless the content justifies rhetoric).
Film dialogue, it was discovered, was most effective and dramatic when it was uttered clearly, rapidly,
and evenly, almost thrown away (Weis, 1985, 103). Emphasis and emotional effect must of necessity
be left to the care of visuals.
But the difference between stage and screen goes far beyond such externals as the technique of
miming and speaking. It is an organic difference. A play is all speech. When the early talkie directors
put whole plays on the screen, they were forgetting the lessons which the barker had taught them – that
the continuous utterance of words in the cinema is monotonous. More important, the preponderance of
the speech element in the resulting film crushed out the other elements – visual interest, noise and music.
Moreover, films must move, or they become intolerable. Long stretches of dialogue inevitably cancel
movement and visual variety.
Film producers have learned that the use of speech must be economical, and be balanced with the other
elements in the film, that the style employed by dialogue writers must be literal, conversational, and
non-literary.
Good dialogue is achieved only when the scriptwriter thinks in character and understands each
character's motives and background in every scene.
Monologue
A monologue is a moment in a play, film, or novel where a character speaks without being interrupted
by any other characters. These speeches can be addressed to someone else, or spoken to the character's
self or to the audience, in which case they are called soliloquies. Another type of this speech, especially in
novels, is the interior monologue, where a character has a long bout of thinking personal thoughts,
which aren’t interrupted by speech or actions. This technique may also be used in film where a
voiceover provides the inner thoughts of the character.
Uninterrupted spoken parts are commonplace in most plays, movies and teleplays. In auditions, actors
must find monologues that are usually no more than two minutes in length, and they may be asked to
perform two. Most seasoned actors, especially in the theater world, develop several pieces they
particularly like, and that most represent their dramatic range or their abilities to play very different
types of characters.
The student of drama may start learning how to act by first learning how to perform a monologue.
There are some common mistakes along the way, such as performing monologues that have been
“done to death.” It’s also usually important not to take a monologue out of context. Reading a play and
digging deep to understand why a character is saying what he/she is saying, and how the person might
deliver a two minute speech is very valuable.
Noise
Noise basically includes background dialogue and actual environmental sounds. This element of
sound makes the scene and the whole film generally believable (realistic). Such sounds assist in
building up the world of make-believe and absorbing the audience in the action.
It has been accepted that sound cannot be isolated from its acoustic environment; hence dialogue
cannot occur in a vacuum. For a camera shot, what is not within the frame cannot be seen by us, even
if it is immediately beside the things that are. In sound, things are different. An acoustic environment
inevitably encroaches on the close-up shot, and what we hear in this case is not a shadow or a beam of
light but sounds themselves, which can always be heard throughout the whole space of the picture,
however small a section of that space is included in the close-up. Sound should not be blocked out.
Music played in a restaurant cannot be cut out if a special close-up of, say, two people softly talking
together in a corner is to be shown.
Sound Effects
These are sounds generated to produce the illusion of reality – what is happening off-stage. However,
all sounds other than speech, music, and the natural sounds generated by the actors in synchronous
filming are considered sound effects, whether intended to be noticed by the audience or not. Big
studios and large independent services maintain vast libraries of effects ranging from unsurpassed
stand-bys to general sound effects. There are sound designers who specialize in electronic or
mechanical effects that often serve the illusion better than the actual thing.
War films, science-fiction films, and disaster films may use numerous effects tracks: gunshots on one,
explosions on another, sirens on a third. Complex sounds may be built up from several tracks, the
actual sound augmented or sweetened with library effects.
If there is any single trend that can be observed in sound effects practice it is towards a much more
detailed background sound ambience.
Professionals use proper (and expensive) wind baskets, in which the mike is held in an anti-shock
mount suspended centrally in a free air-space inside a porous basket.
The object is to prevent vibration-transmitted sound from reaching the mike, and to channel the wind
through numerous perforations and baffles until its velocity, upon reaching the mike, is reduced to
zero.
It is important to cover the entire mike – not just the recording end. Any part left exposed will simply
transmit the wind roar through vibration.
Semi-professional microphones, which usually have low sensitivity, typically demand higher
amplification, which also results in the amplification of noise.
A sound engineer must also ensure that the impedance of the microphone matches that of the cable to
avoid high resistance resulting in noise. It is also a rule that high-impedance microphones should not
be used with long cables as the cables will exert a damping effect on the high frequency signals. If it
does become necessary to use high-impedance microphones with long cable runs, impedance-
converting transformers will be needed.
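The high-frequency damping on long runs comes from the cable's capacitance forming a low-pass filter with the microphone's source impedance, with cutoff frequency f = 1 / (2πRC). A Python sketch with assumed, typical values (illustrative figures, not from these notes):

    import math

    CABLE_CAPACITANCE = 100e-12  # farads per metre of cable (assumed typical value)

    def cutoff_hz(source_impedance_ohms, cable_length_m):
        # -3 dB point of the RC low-pass formed by mic impedance and cable capacitance
        c = CABLE_CAPACITANCE * cable_length_m
        return 1 / (2 * math.pi * source_impedance_ohms * c)

    print(f"high-Z mic (50 kilohm), 20 m cable: {cutoff_hz(50_000, 20):,.0f} Hz")  # ~1,600 Hz
    print(f"low-Z mic (200 ohm), 20 m cable: {cutoff_hz(200, 20):,.0f} Hz")        # ~398,000 Hz

With a high-impedance microphone the treble starts rolling off well inside the audible band, which is exactly the damping effect the rule above warns against.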
All professional microphones and microphone cables have a three-pin connector – the XLR connector.
The dynamic microphone also lacks discrimination against low-frequency sounds arriving from random
directions. Sounds which originate at the rear of the microphone will be bent around the microphone
housing and actuate the diaphragm as if they arrived from the front. At the higher frequencies, for
sounds originating at its rear, the frequency response will drop off due to diffraction.
Electrical hum
Low-frequency noise, which can vary from a smooth, deep rumble to a spiky buzz, can be caused by a
number of factors. The commonest are:
1. Unbalanced, unscreened and damaged cables
2. Dirty, corroded and broken connectors causing high resistance and imbalance
3. Moisture and condensation in the connectors, causing spurious leakage currents
4. Poorly screened microphones, causing static interference clicks and spurious hum leakages when
handled or placed in a certain position.
5. Microphones which are susceptible to interference from static and magnetic fields
6. Amplifying equipment linked incorrectly to the electricity mains, causing multi-path earth currents
Hum, radio break-through and various forms of static induction interference are minimised by using
the correct cable. The cable designed for the job is the microphone cable – a tough plastic-covered,
twin-conductor cable with an antistatic screen – and this should be used for all microphone circuits
irrespective of the distance involved.
Hiss
Recordings need not be hissy. Noise-reduction systems are quite common in cassettes (otherwise
inclined to be rather hissy). However, the problem of hiss is complex. It is mostly due to thermal
movement of electrons in the circuit resistance of amplifiers; the more one amplifies, the higher the
hiss level. In other words, if one begins with an amplifier of good design – one in which the self-
generated noise is very low – the main source of noise generation is the very first resistance in the
circuit: the microphone. It is worthwhile to note that every microphone produces a certain amount of
noise along with the desired signal.
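That noise floor can be estimated with the Johnson-Nyquist formula for the thermal noise of a resistance, v = sqrt(4·k·T·R·Δf). A Python sketch with assumed values:

    import math

    K_BOLTZMANN = 1.38e-23  # Boltzmann's constant, joules per kelvin

    def thermal_noise_volts(resistance_ohms, bandwidth_hz, temp_k=293):
        # RMS Johnson noise voltage of a resistance over a given bandwidth
        return math.sqrt(4 * K_BOLTZMANN * temp_k * resistance_ohms * bandwidth_hz)

    # A 200-ohm microphone (assumed) across the 20 Hz - 20 kHz audio band:
    vn = thermal_noise_volts(200, 20_000 - 20)
    print(f"{vn * 1e9:.0f} nV RMS")  # ~254 nV: the floor below which no signal survives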
Acoustic noise
This is noise from the environment. A problem which may arise when recording in a cove (or similar
environment) is that noises from things beyond the horizon (aircraft or ships) can be picked up by a
microphone placed at the focal point of the cove by a recordist who is unaware of its acoustic
idiosyncrasies.
Editing Sound
Workspaces in Audition
Adobe video and audio applications provide a consistent, customizable workspace. Although each
application has its own set of panels (such as Project, Metadata, and Timeline), you move and group
panels in the same way across products.
The main window of a program is the application window. The default workspace contains groups of
panels as well as panels that stand alone. You customize a workspace by arranging panels in the layout
that best suits your working style. As you rearrange panels, the other panels resize automatically to fit
the window.
The Waveform and Multitrack editors use different editing methods, and each has unique advantages.
The Waveform Editor uses a destructive method, which changes audio data, permanently altering
saved files. Such permanent changes are preferable when converting sample rate and bit depth,
mastering, or batch processing. The Multitrack Editor uses a nondestructive method, which is
impermanent and instantaneous, requiring more processing power, but increasing flexibility. This
flexibility is preferable when gradually building and reevaluating a multilayered musical composition
or video soundtrack.
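The distinction can be pictured in a few lines of code: destructive editing rewrites the stored samples themselves, while nondestructive editing only records instructions that are evaluated at playback. This is a conceptual Python illustration, not Audition's actual implementation:

    # A stand-in for audio data saved on disk
    samples = [0.1, 0.4, -0.3, 0.8]

    # Destructive (Waveform Editor style): the data itself is changed;
    # once the file is saved, the original values are gone.
    for i in range(len(samples)):
        samples[i] *= 0.5  # permanently halve the amplitude

    # Nondestructive (Multitrack Editor style): the source stays untouched
    # and an edit list is evaluated every time the mix is played.
    source = [0.1, 0.4, -0.3, 0.8]
    edits = [("gain", 0.5)]  # instructions, not new audio

    def render(src, edit_list):
        out = list(src)
        for op, value in edit_list:
            if op == "gain":
                out = [s * value for s in out]
        return out  # computed on demand; 'source' itself is never modified

    print(render(source, edits))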
You can combine destructive and nondestructive editing to suit the needs of a project. If a multitrack
clip requires destructive editing, for example, simply double-click it to enter the Waveform Editor.
Likewise, if an edited waveform contains recent changes that you dislike, use the Undo command to
revert to previous states—destructive edits aren’t applied until you save a file.
Switch editors
Do one of the following to switch editors:
• From the View menu, choose Waveform or Multitrack Editor.
• In the toolbar, click the Waveform or Multitrack Editor button.
• In the Multitrack Editor, double-click an audio clip to open it in the Waveform Editor.
Alternatively, double-click a file in the Files panel.
• In the Waveform Editor, choose Edit > Edit Original to open the multitrack session that created
a mixdown file.
Zoom into a specific frequency range - In the vertical ruler for the spectral display, right-click and
drag.
Extend or shorten the displayed range - Place the pointer over the left or right edge of the highlighted
area in the zoom navigator, and then drag the magnifying glass icon.
Zoom out full (all tracks) - You can zoom out all tracks so that, together, they fill the vertical space.
The view resizes all tracks to a consistent height that takes up the full height of the Multitrack Editor
panel. Minimized tracks remain at their minimum height.
To Zoom out full, choose View > Zoom Out Full (All Tracks).
Zoom with the mouse wheel or Mac trackpad - Place the pointer over the zoom navigator or ruler, and
either roll the wheel or drag up or down with two fingers. (In the Waveform Editor, this zoom method
also works when the pointer is over the waveform.)
Navigate with the Selection/View panel - The Selection/View panel shows the start and end of the
current selection and view in the Editor panel. The panel displays this information in the current time
format, such as Decimal or Bars And Beats.
1. To display the Selection/View panel, choose Window > Selection/View Controls.
2. (Optional) Enter new values into the Begin, End, or Duration boxes to change the selection or
view.
Auto-scroll navigation - You can use auto-scroll to navigate in the Waveform and Multitrack editors. To
choose the scroll type, open Preferences > Playback. Use the radio buttons to choose the type of scroll
individually for each editor.
• Pagewise scroll: The playhead moves from left to right and jumps to the next frame when it
hits the right corner.
• Centered scroll: The playhead is positioned at the center and the track beneath it moves.
Therefore, the current time of audio being played is always in the middle.
Waveform Editor
If you have multichannel audio or video files, you can edit each audio channel separately in the
Waveform Editor by following these steps.
1. Select File > Open. The opened file appears in the Files Panel.
2. Expand the drop-down to see each of the channels within the file.
Multitrack Editor
To use multichannel audio or video files within a session, you can bring each of the channels of the
file into the Multitrack Editor as one single multichannel clip (default behaviour). You can also
automatically split each channel or groups of channels into different clips by holding Alt (Windows) or
Option (Mac) while dragging.
Inside the session folder, you find each recorded clip in the [session name]_Recorded folder. Clip
filenames begin with the track name, followed by the take number (for example, Track 1_003.wav).
After recording, you can edit takes to produce a polished final mix. For example, if you create multiple
takes of a guitar solo, you can combine the best sections of each solo. You can also use one version of
the solo for a video soundtrack, and another version for an audio CD.
You can dock the Levels panel horizontally or vertically. When the panel is docked horizontally, the
upper meter represents the left channel, and the lower meter represents the right channel.
Note: To show or hide the panel, choose Window > Level Meters.
The meters show signal levels in dBFS (decibels below full scale), where a level of 0 dB is the
maximum amplitude possible before clipping occurs. Yellow peak indicators remain for 1.5 seconds so
you can easily determine peak amplitude.
If amplitude is too low, sound quality is reduced; if amplitude is too high, clipping occurs and
produces distortion. The red clip-indicator to the right of the meters lights up when levels exceed the
maximum of 0 dB.
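The dBFS scale follows directly from a sample's amplitude relative to full scale: level = 20·log10(|sample| / full scale). A minimal Python sketch, assuming floating-point samples where full scale is 1.0:

    import math

    def to_dbfs(sample, full_scale=1.0):
        # Level of one sample in dB relative to full scale (0 dBFS = clipping point)
        if sample == 0:
            return float("-inf")
        return 20 * math.log10(abs(sample) / full_scale)

    print(f"{to_dbfs(1.0):.1f} dBFS")  #   0.0 - maximum amplitude before clipping
    print(f"{to_dbfs(0.5):.1f} dBFS")  #  -6.0 - half of full scale
    print(f"{to_dbfs(0.1):.1f} dBFS")  # -20.0 - a comfortably safe recording level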
Adobe Audition doesn’t directly control a sound card’s recording levels. For a professional sound
card, you adjust these levels with the mixer application provided with the card (see the card’s
documentation for instructions). For a standard sound card, you use the mixer provided by Windows or
Mac OS.
To insert an effect, choose it from a slot’s pop-up menu. Then adjust effect settings as desired.
To later re-access effect settings, double-click the effect name in the rack.
To bypass an effect, click its Power button.
To bypass all effects, click the main Power button in the lower left corner of a rack. You can
also click the fx power button in the Editor panel or Mixer.
To bypass a selected group of effects, choose Toggle Power State of Selected Effects from the
panel menu.
Bypass effects to quickly compare processed and unprocessed audio.
To remove a single effect, choose Remove Effect from a slot’s pop-up menu. Or select the slot,
and press Delete.
To remove all effects, choose Remove All Effects from the panel menu.
To reorder effects, drag them to different slots.
Reordering effects produces different sonic results. (For an example, place Reverb before
Phaser, or the other way around.)
You can insert, reorder, and remove effects in the Editor, Mixer, or Effects Rack panel. Only in the
Effects Rack, however, can you save favorite settings as presets, which you can apply to multiple
tracks.
Note: Unlike many graphs in Adobe Audition effects, the Doppler Shifter graph is noninteractive: You can’t
directly manipulate the graph. Instead, the graph changes as you adjust the effect’s parameters.
Note: Path Type - Defines which path the sound source appears to take. Depending on the path type, a different
set of options is available.
Adjust Volume Based on Distance or Direction - Automatically adjusts the effect’s volume based on
the values specified.
DeEsser effect
The Amplitude and Compression > DeEsser effect removes sibilance, “ess” sounds heard in speech
and singing that can distort high frequencies.
The graph reveals the processed frequencies. To see how much audio content exists in the processed
range, click the Preview button.
The Dynamics Processing effect can produce subtle changes that you notice only after repeated
listening. When applying this effect in the Waveform Editor, use a copy of the original file so you can
return to the original audio if necessary.
In the Dynamics Processing effect, you can view the Level Meter and the Gain Reduction Meter. The
Level Meter shows the input level of the audio, and the Gain Reduction Meter shows how audio
signals are compressed or expanded. These meters are visible on the right side of the graph.
Note: Use the Broadcast Limiter preset to simulate the processed sound of a contemporary radio station.
In the Waveform Editor panel, click the yellow envelope line to add keyframes, and drag them up or
down to change amplitude.
Diagnose and repair, delete, or mark audio - Unlike conventional noise reduction effects, which
process all selected audio, diagnostics scan for problematic or silent areas, and then let you choose
which to address.
1. In the Diagnostics panel, choose an option from the Effect menu.
2. Click Scan.
3. At the bottom of the panel, do any of the following:
• Select one or more detected items in the list, and click Repair, Delete, or Mark. (The available
options depend upon the chosen diagnostic effect.)
Note: To mark detected clicks or clipping, right-click selected items in the list, and choose Create
Markers from the pop-up menu.
• Click Repair All, Delete All, or Mark All to address all detected items.
• Click the magnifying glass to zoom in on a selected problem in the Editor panel. Click the
icon again to zoom out.
• Click Clear Repaired, Deleted, or Marked to remove previously addressed items from the list.
DeClipper options
The Diagnostics > DeClipper effect repairs clipped waveforms by filling in clipped sections with new
audio data. Clipping occurs when audio amplitude exceeds the maximum level for the current bit
depth. Commonly, clipping results from recording levels that are too high. You can monitor clipping
with the Level Meters. Visually, clipped audio appears as broad flat areas at the top of a waveform;
sonically, clipped audio is a static-like distortion.
Note: If you need to adjust the DC offset of clipped audio, first use the DeClipper effect. If you instead adjust
DC offset first, the DeClipper won’t identify clipped areas that fall below 0 dBFS.
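Clipped regions can be located by scanning for runs of consecutive samples stuck at or very near full scale, which is essentially the detection step any declipper must perform before filling in new audio. A simplified Python sketch (not Audition's actual algorithm):

    def find_clipped_runs(samples, threshold=0.999, min_run=3):
        # Return (start, end) index pairs for runs of samples at/above the threshold
        runs, start = [], None
        for i, s in enumerate(samples):
            if abs(s) >= threshold:
                if start is None:
                    start = i
            elif start is not None:
                if i - start >= min_run:
                    runs.append((start, i))
                start = None
        if start is not None and len(samples) - start >= min_run:
            runs.append((start, len(samples)))
        return runs

    # The flat-topped stretch in the middle is a clipped peak:
    wave = [0.2, 0.7, 1.0, 1.0, 1.0, 1.0, 0.6, 0.1]
    print(find_clipped_runs(wave))  # [(2, 6)]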
The proper amount of noise reduction depends upon the type of background noise and the acceptable
loss in quality for the remaining signal. In general, you can increase the signal-to-noise ratio by 5 to 20
dB and retain high audio quality.
To achieve the best results with the Noise Reduction effect, apply it to audio with no DC offset. With a
DC offset, this effect may introduce clicks in quiet passages. (To remove a DC offset, choose Favorites
> Repair DC Offset.)
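A DC offset is simply a constant shift of the whole waveform away from the zero line, so removing it amounts to subtracting the mean of the samples. A minimal Python sketch of the idea:

    def remove_dc_offset(samples):
        # Centre a waveform on zero by subtracting its mean (the DC component)
        offset = sum(samples) / len(samples)
        return [s - offset for s in samples]

    # A signal riding on a +0.2 offset (mean = 0.2):
    shifted = [0.2, 0.7, 0.2, -0.3]
    print(remove_dc_offset(shifted))  # approximately [0.0, 0.5, 0.0, -0.5]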
The Mixer (Window > Mixer) provides an alternative view of a session, revealing many more tracks
and controls simultaneously, without showing clips. The Mixer is ideal for mixing large sessions with
many tracks.
Duplicate tracks
To perfectly copy all clips, effects, equalization, and envelopes in a track, duplicate it. Duplicate tracks
provide a great starting point for new adjustments, helping you compare different processing and
automation settings.
1. In the Editor panel or Mixer, select a track.
2. Choose Multitrack > Track > Duplicate Selected Track.
To arrange clips in the Editor panel, you use the Move or Time Selection tools.
Copy a clip
You can create two types of copied audio clips: reference copies that share source files and unique
copies that have independent source files. The type of copy you choose depends upon the amount of
available disk space and the nature of destructive editing you plan to perform in the Waveform Editor.
Reference copies consume no additional disk space, letting you simultaneously edit all instances by
editing the original source file. (For example, you can add the Flanger effect to a source file in the
Waveform Editor and automatically apply the effect to all 30 referenced copies in a session.)
Unique copies have a separate audio file on disk, allowing for separate editing of each version in the
Waveform Editor. (For example, you can add destructive effects to the version in an introduction while
leaving the version in a verse dry.)
To quickly copy a reference, press Ctrl + C (Windows) or Cmd + C (Mac OS). Alternatively, Alt-drag
(Windows) or Option-drag (Mac OS) the clip header.
1. Click the Move tool in the toolbar. Then right-click and drag the clip.
To copy with the Time Selection tool, right-click and drag the clip header (similar to the Hybrid tool technique
in previous versions).
2. Release the mouse button, and choose one of the following from the pop-up menu:
• Copy Here (to copy a reference)
• Copy Unique Here
Split clips
Split audio clips to break them into separate clips that you can independently move or edit. In the
Editor panel, click where you want the split to occur.
On-clip controls
[Figure: on-clip controls – A. Drag controls in clip corners to fade in and out; B. Overlap clips to crossfade.]
Fade options
To access the following fade options, select a clip, and then either right-click a fade icon in the Editor
panel, or choose Clip > Fade In or Fade Out.
No Fade - Deletes the fade or crossfade.
Fade In, Fade Out, or Crossfade - If clips overlap, lets you choose the fade type.
Symmetrical or Asymmetrical (crossfades only) - Determines how the left and right fade curves
interact when you drag them up and down. Symmetrical adjusts both fades identically, while
Asymmetrical lets you adjust fades independently.
Linear or Cosine - Applies either an even, linear fade or an S-shaped fade that starts slowly,
then rapidly changes amplitude, and ends slowly (see the sketch after these options).
Tip: To switch between Linear and Cosine modes while dragging fade icons, hold down Ctrl (Windows)
or Command (Mac OS).
Automatic Crossfades Enabled - Crossfades overlapping clips. Deselect this option if automatic
crossfades are undesirable or interfere with other tasks, such as trimming clips.
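The two fade shapes can be generated directly: a linear fade ramps the gain evenly from 0 to 1, while a cosine fade follows an S-curve that starts and ends gently. A Python sketch of both (illustrative; not necessarily Audition's exact curves):

    import math

    def linear_fade_in(n):
        # Gain ramps evenly from 0 to 1 over n samples
        return [i / (n - 1) for i in range(n)]

    def cosine_fade_in(n):
        # S-shaped gain curve: slow start, fast middle, slow finish
        return [(1 - math.cos(math.pi * i / (n - 1))) / 2 for i in range(n)]

    n = 5
    print([round(g, 2) for g in linear_fade_in(n)])  # [0.0, 0.25, 0.5, 0.75, 1.0]
    print([round(g, 2) for g in cosine_fade_in(n)])  # [0.0, 0.15, 0.5, 0.85, 1.0]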
Automation envelopes visually indicate settings at specific points in time, and you can edit them by
dragging keyframes on envelope lines. Envelopes are nondestructive, so they don’t change audio files
in any way. If you open a file in the Waveform Editor, for example, you don’t hear the effect of any
envelopes applied in the Multitrack Editor.
On stereo tracks, clip volume and pan envelopes appear by default; you can identify them by color and
initial position. Volume envelopes are yellow lines initially placed across the upper half of clips. Pan
envelopes are blue lines initially placed in the center. (With pan envelopes, the top of a clip represents
full left, while the bottom represents full right.)
Note: On mono and 5.1 surround tracks, clips lack pan envelopes.
1. In the Editor panel, click the triangle to the left of the Track Automation Mode menu for the track
you want to automate. (The menu is set to Read by default.)
2. From the Show Envelopes menu, select a parameter to automate.
3. On the envelope line, click and drag to add and adjust keyframes.
You can also apply spline curves to an entire envelope, overriding the keyframe-specific setting above
to create natural-sounding transitions that change in speed near keyframes.
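Between keyframes the envelope's value at any moment is interpolated; with linear segments this is just a weighted average of the two neighbouring keyframes. A conceptual Python sketch of how a volume envelope could be evaluated (an illustration, not Audition's internals):

    # Keyframes as (time_seconds, gain) pairs, sorted by time
    keyframes = [(0.0, 1.0), (2.0, 0.2), (4.0, 0.8)]

    def envelope_value(kfs, t):
        # Linearly interpolate the envelope's gain at time t
        if t <= kfs[0][0]:
            return kfs[0][1]
        if t >= kfs[-1][0]:
            return kfs[-1][1]
        for (t0, v0), (t1, v1) in zip(kfs, kfs[1:]):
            if t0 <= t <= t1:
                w = (t - t0) / (t1 - t0)
                return v0 + w * (v1 - v0)

    print(envelope_value(keyframes, 1.0))  # 0.6 - halfway down the first segment
    print(envelope_value(keyframes, 3.0))  # 0.5 - halfway up the second segment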
Add a keyframe
Do either of the following:
• Position the pointer over an envelope line. When a plus sign appears, click.
• Position the playhead where you’d like a track parameter to change. Then click the Add
Keyframe icon in the track controls.
Delete keyframes
Right-click an envelope line, and choose Delete Selected Keyframes. Or, drag an individual keyframe
off a clip or track.
From the Copy to Active Sequence menu, choose where the exported Audition tracks begin. Any new
tracks are added below existing ones.
If you create multitrack mixes entirely in Adobe Audition, save session files in the native SESX
format.