Human Sensing 02

Audio signal processing

From Wikipedia, the free encyclopedia
"Audio processor" redirects here. For audio processing chips, see Sound chip.


Audio signal processing is a subfield of signal processing that is concerned with the electronic manipulation of audio signals. Audio signals are electronic representations of sound waves: longitudinal waves that travel through air, consisting of compressions and rarefactions. The energy contained in audio signals, or sound power level, is typically measured in decibels. As audio signals may be represented in either digital or analog format, processing may occur in either domain. Analog processors operate directly on the electrical signal, while digital processors operate mathematically on its digital representation.
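
For example, a level in decibels is computed from a ratio of powers (or, equivalently, of amplitudes); the following minimal Python sketch, with purely illustrative values, shows the conversion.

import math

def power_db(power, reference_power):
    # Level in decibels of a power relative to a reference power.
    return 10.0 * math.log10(power / reference_power)

def amplitude_db(amplitude, reference_amplitude):
    # Amplitude (e.g. voltage or sound pressure) level in decibels;
    # power is proportional to amplitude squared, hence the factor of 20.
    return 20.0 * math.log10(amplitude / reference_amplitude)

print(power_db(2.0, 1.0))      # doubling the power adds about 3 dB
print(amplitude_db(2.0, 1.0))  # doubling the amplitude adds about 6 dB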

History

The motivation for audio signal processing began at the beginning of the 20th
century with inventions like the telephone, phonograph, and radio that allowed for the
transmission and storage of audio signals. Audio processing was necessary for
early radio broadcasting, as there were many problems with studio-to-transmitter
links.[1] The theory of signal processing and its application to audio was largely
developed at Bell Labs in the mid 20th century. Claude Shannon and Harry Nyquist's
early work on communication theory, sampling theory and pulse-code
modulation (PCM) laid the foundations for the field. In 1957, Max Mathews became
the first person to synthesize audio from a computer, giving birth to computer music.

Major developments in digital audio coding and audio data compression include differential pulse-code modulation (DPCM) by C. Chapin
Cutler at Bell Labs in 1950,[2] linear predictive coding (LPC) by Fumitada
Itakura (Nagoya University) and Shuzo Saito (Nippon Telegraph and Telephone) in
1966,[3] adaptive DPCM (ADPCM) by P. Cummiskey, Nikil S. Jayant and James L.
Flanagan at Bell Labs in 1973,[4][5] discrete cosine transform (DCT) coding by Nasir
Ahmed, T. Natarajan and K. R. Rao in 1974,[6] and modified discrete cosine
transform (MDCT) coding by J. P. Princen, A. W. Johnson and A. B. Bradley at
the University of Surrey in 1987.[7] LPC is the basis for perceptual coding and is
widely used in speech coding,[8] while MDCT coding is widely used in modern audio
coding formats such as MP3[9] and Advanced Audio Coding (AAC).[10]

Types

Analog
Further information: Analog signal processing

An analog audio signal is a continuous signal represented by an electrical voltage or current that is analogous to the sound waves in the air. Analog signal processing then involves physically altering the continuous signal by changing its voltage, current, or charge via electrical circuits.

Historically, before the advent of widespread digital technology, analog was the only
method by which to manipulate a signal. Since that time, as computers and software
have become more capable and affordable, digital signal processing has become the
method of choice. However, in music applications, analog technology is often still
desirable as it often produces nonlinear responses that are difficult to replicate with
digital filters.

Digital
Further information: Digital signal processing

A digital representation expresses the audio waveform as a sequence of symbols, usually binary numbers. This permits signal processing using digital circuits such as digital signal processors, microprocessors and general-purpose computers. Most modern audio systems use a digital approach as the techniques of digital signal processing are much more powerful and efficient than analog domain signal processing.[11]
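
As a minimal illustration of operating mathematically on a digital representation, the Python sketch below samples a tone and applies a simple first-order low-pass filter; the sampling rate, frequency and filter coefficient are arbitrary choices for the example.

import math

SAMPLE_RATE = 8000  # samples per second, chosen only for the sketch

# A digitally represented signal: the waveform sampled at discrete times
# and stored as a sequence of numbers (one second of a 440 Hz sine).
signal = [math.sin(2 * math.pi * 440 * n / SAMPLE_RATE) for n in range(SAMPLE_RATE)]

def low_pass(samples, alpha=0.1):
    # First-order (exponential smoothing) low-pass filter:
    # y[n] = y[n-1] + alpha * (x[n] - y[n-1])
    out = []
    prev = 0.0
    for x in samples:
        prev = prev + alpha * (x - prev)
        out.append(prev)
    return out

filtered = low_pass(signal)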

Applications

Processing methods and application areas include storage, data compression, music
information retrieval, speech processing, localization, acoustic
detection, transmission, noise cancellation, acoustic fingerprinting, sound
recognition, synthesis, and enhancement (e.g. equalization, filtering, level
compression, echo and reverb removal or addition, etc.).

Audio broadcasting
See also: Broadcasting
Audio signal processing is used when broadcasting audio signals in order to
enhance their fidelity or optimize for bandwidth or latency. In this domain, the most
important audio processing takes place just before the transmitter. The audio
processor here must prevent or minimize overmodulation, compensate for non-linear
transmitters (a potential issue with medium wave and shortwave broadcasting), and
adjust overall loudness to the desired level.
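
A highly simplified Python sketch of this idea follows, assuming a plain gain stage followed by a hard peak limiter; real broadcast processors use multiband compression and look-ahead limiting, and the values here are illustrative only.

def apply_gain(samples, gain=1.5):
    # Raise overall loudness toward a target level before limiting.
    return [gain * x for x in samples]

def limit_peaks(samples, threshold=0.9):
    # Clamp samples to +/- threshold so the modulator is never overdriven.
    return [max(-threshold, min(threshold, x)) for x in samples]

processed = limit_peaks(apply_gain([0.2, -0.7, 0.95, -1.2]))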

Active noise control
Active noise control is a technique designed to reduce unwanted sound. By creating a signal that is identical to the unwanted noise but with the opposite polarity, the two signals cancel out due to destructive interference.
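
A minimal Python sketch of the principle, assuming the anti-noise is a perfectly time-aligned, inverted copy of the noise (practical systems must estimate the noise adaptively and compensate for acoustic delay):

noise      = [0.30, -0.12, 0.50, -0.41, 0.08]  # unwanted sound, as samples
anti_noise = [-x for x in noise]               # same signal, opposite polarity

# Where the two overlap they sum to zero: destructive interference.
residual = [n + a for n, a in zip(noise, anti_noise)]
print(residual)  # [0.0, 0.0, 0.0, 0.0, 0.0]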

Audio synthesis
See also: Synthesizer

Audio synthesis is the electronic generation of audio signals. A musical instrument that accomplishes this is called a synthesizer. Synthesizers can either imitate sounds or generate new ones. Audio synthesis is also used to generate human speech using speech synthesis.
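
A minimal Python sketch of direct synthesis, generating a pure sine tone from its mathematical description rather than from a recording; the frequency, amplitude and duration are arbitrary example values.

import math

SAMPLE_RATE = 44100  # CD-quality sampling rate, chosen for the example

def sine_tone(frequency_hz, duration_s, amplitude=0.5):
    # Synthesize a pure sine tone as a list of samples.
    n_samples = int(SAMPLE_RATE * duration_s)
    return [amplitude * math.sin(2 * math.pi * frequency_hz * n / SAMPLE_RATE)
            for n in range(n_samples)]

tone = sine_tone(440.0, 1.0)  # one second of A4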

Audio effects
Main article: Effects unit

Audio effects alter the sound of a musical instrument or other audio source. Common
effects include distortion, often used with electric guitar in electric blues and rock
music; dynamic effects such as volume pedals and compressors, which affect
loudness; filters such as wah-wah pedals and graphic equalizers, which modify
frequency ranges; modulation effects, such
as chorus, flangers and phasers; pitch effects such as pitch shifters; and time
effects, such as reverb and delay, which create echoing sounds and emulate the
sound of different spaces.
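
As an illustration of a time effect, the minimal Python sketch below implements a single-tap delay (echo); the delay time and feedback amount are example values only.

def delay_effect(samples, sample_rate=44100, delay_s=0.25, feedback=0.4):
    # Mix each sample with a delayed, attenuated copy of the output,
    # producing a repeating echo.
    delay_samples = int(sample_rate * delay_s)
    out = []
    for i, x in enumerate(samples):
        echo = feedback * out[i - delay_samples] if i >= delay_samples else 0.0
        out.append(x + echo)
    return out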

Musicians, audio engineers and record producers use effects units during live
performances or in the studio, typically with electric guitar, bass guitar, electronic
keyboard or electric piano. While effects are most frequently used
with electric or electronic instruments, they can be used with any audio source, such
as acoustic instruments, drums, and vocals.[12][13]
