Human Sensing 02
From Wikipedia, the free encyclopedia
"Audio processor" redirects here. For audio processing chips, see Sound chip.
History
The motivation for audio signal processing began at the beginning of the 20th
century with inventions like the telephone, phonograph, and radio that allowed for the
transmission and storage of audio signals. Audio processing was necessary for
early radio broadcasting, as there were many problems with studio-to-transmitter
links.[1] The theory of signal processing and its application to audio was largely
developed at Bell Labs in the mid-20th century. Claude Shannon and Harry Nyquist's
early work on communication theory, sampling theory and pulse-code
modulation (PCM) laid the foundations for the field. In 1957, Max Mathews became
the first person to synthesize audio from a computer, giving birth to computer music.
Types
Analog
Further information: Analog signal processing
Historically, before the advent of widespread digital technology, analog circuitry was
the only means of manipulating an audio signal. Since then, as computers and software
have become more capable and affordable, digital signal processing has become the
method of choice. However, in music applications analog technology is often still
desirable, as it tends to produce nonlinear responses that are difficult to replicate with
digital filters.
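One common way to approximate such analog-style nonlinearity digitally is waveshaping, for example with a tanh soft-clipping curve. The following is a minimal sketch assuming NumPy; the function name and the drive parameter are illustrative, not from any standard library.

```python
import numpy as np

def soft_clip(signal: np.ndarray, drive: float = 4.0) -> np.ndarray:
    """Approximate analog-style saturation with a tanh waveshaper.

    `signal` is expected to be a float array scaled to roughly [-1, 1];
    `drive` (illustrative parameter) controls how hard the curve bends.
    """
    return np.tanh(drive * signal) / np.tanh(drive)

# Example: a 440 Hz sine at 44.1 kHz driven into gentle saturation.
sr = 44100
t = np.arange(sr) / sr
clean = 0.8 * np.sin(2 * np.pi * 440 * t)
saturated = soft_clip(clean)
```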
Digital
Further information: Digital signal processing
Applications
Processing methods and application areas include storage, data compression, music
information retrieval, speech processing, localization, acoustic
detection, transmission, noise cancellation, acoustic fingerprinting, sound
recognition, synthesis, and enhancement (e.g. equalization, filtering, level
compression, echo and reverb removal or addition, etc.).
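As one concrete illustration of the enhancement category, the sketch below applies a simple low-pass equalization filter with SciPy; the cutoff value, sample rate, and variable names are illustrative assumptions rather than anything specified in the article.

```python
import numpy as np
from scipy.signal import butter, lfilter

def low_pass(signal: np.ndarray, cutoff_hz: float, sample_rate: int) -> np.ndarray:
    """Attenuate content above `cutoff_hz` with a 2nd-order Butterworth filter."""
    b, a = butter(2, cutoff_hz / (sample_rate / 2), btype="low")
    return lfilter(b, a, signal)

# Example: suppress hiss-like content above 5 kHz in a noisy tone.
sr = 44100
t = np.arange(sr) / sr
noisy = np.sin(2 * np.pi * 440 * t) + 0.1 * np.random.randn(sr)
filtered = low_pass(noisy, cutoff_hz=5000, sample_rate=sr)
```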
Audio broadcasting
See also: Broadcasting
Audio signal processing is used when broadcasting audio signals in order to
enhance their fidelity or optimize for bandwidth or latency. In this domain, the most
important audio processing takes place just before the transmitter. The audio
processor here must prevent or minimize overmodulation, compensate for non-linear
transmitters (a potential issue with medium wave and shortwave broadcasting), and
adjust overall loudness to the desired level.
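A highly simplified sketch of the last two tasks, loudness adjustment and overmodulation protection, might look like the following; the target levels and function name are illustrative assumptions, and a real broadcast processor is far more elaborate (multiband compression, standards-based loudness measurement, pre-emphasis-aware limiting).

```python
import numpy as np

def broadcast_process(signal: np.ndarray,
                      target_rms: float = 0.1,
                      peak_ceiling: float = 0.95) -> np.ndarray:
    """Toy pre-transmitter chain: normalize average loudness, then clip peaks.

    `target_rms` and `peak_ceiling` are illustrative values, not broadcast
    standards; real processors measure loudness per standards such as BS.1770.
    """
    rms = np.sqrt(np.mean(signal ** 2))
    if rms > 0:
        signal = signal * (target_rms / rms)              # adjust overall loudness
    return np.clip(signal, -peak_ceiling, peak_ceiling)   # prevent overmodulation
```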
Audio synthesis
See also: Synthesizer
Audio effects
Main article: Effects unit
Audio effects alter the sound of a musical instrument or other audio source. Common
effects include distortion, often used with electric guitar in electric blues and rock
music; dynamic effects such as volume pedals and compressors, which affect
loudness; filters such as wah-wah pedals and graphic equalizers, which modify
frequency ranges; modulation effects, such
as chorus, flangers and phasers; pitch effects such as pitch shifters; and time
effects, such as reverb and delay, which create echoing sounds and emulate the
sound of different spaces.
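As an example of the time-based category, a feedback delay (echo) can be sketched in a few lines; the parameter names and values here are illustrative assumptions, not a reference implementation of any particular effects unit.

```python
import numpy as np

def feedback_delay(signal: np.ndarray, sample_rate: int,
                   delay_s: float = 0.3, feedback: float = 0.4,
                   mix: float = 0.5) -> np.ndarray:
    """Simple echo: each output sample is fed back into the line `delay_s` later."""
    d = int(delay_s * sample_rate)          # delay length in samples
    out = np.copy(signal).astype(float)
    for n in range(d, len(out)):
        out[n] += feedback * out[n - d]     # recirculating (feedback) echo
    return (1 - mix) * signal + mix * out   # blend dry and wet signals
```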
Musicians, audio engineers and record producers use effects units during live
performances or in the studio, typically with electric guitar, bass guitar, electronic
keyboard or electric piano. While effects are most frequently used
with electric or electronic instruments, they can be used with any audio source, such
as acoustic instruments, drums, and vocals.[12][13]