Signals and Systems Tutorial
This Signals and Systems tutorial covers signal analysis, signal types, convolution, sampling, and the operations performed on signals. It also describes various types of systems.
AUDIENCE
This tutorial is designed for students and all enthusiastic learners who wish to learn signals and systems in simple and easy steps. It will give you a deep understanding of Signals and Systems concepts. After completing this tutorial, you will be at an intermediate level of expertise, from which you can advance to a higher level of expertise.
PREREQUISITES
Before proceeding with this tutorial, you must have a basic understanding of differential and integral calculus and limits, along with adequate general knowledge of mathematics.
SIGNALS AND SYSTEMS OVERVIEW
http://www.tutorialspoint.com/signals_and_systems/signals_and_systems_overview.htm Copyright © tutorialspoint.com
What is a Signal?
A signal is a time-varying physical phenomenon that is intended to convey information.
OR
A signal is a function of one or more independent variables that contains some information.
Note: Noise is also a signal, but the information it conveys is unwanted, hence it is considered undesirable.
What is a System?
A system is a device, or a combination of devices, that operates on signals and produces a corresponding response. The input to a system is called excitation and the output from it is called response.
For one or more inputs, the system can have one or more outputs.
Unit Step Function
The unit step function is denoted by u(t). It is defined as
u(t) = 1 for t ≥ 0; 0 for t < 0
Unit Impulse Function
The impulse function is denoted by δ(t), and it is defined as
δ(t) = 1 for t = 0; 0 for t ≠ 0
∫_{−∞}^{t} δ(τ) dτ = u(t)
δ(t) = du(t)/dt
Ramp Signal
The ramp signal is denoted by r(t), and it is defined as
r(t) = t for t ≥ 0; 0 for t < 0
∫ u(t) dt = ∫ 1 dt = t = r(t)
u(t) = dr(t)/dt
The slope of the unit ramp is unity.
Parabolic Signal
The parabolic signal can be defined as
x(t) = t²/2 for t ≥ 0; 0 for t < 0
∬ u(t) dt dt = ∫ r(t) dt = ∫ t dt = t²/2 = parabolic signal
⇒ u(t) = d²x(t)/dt²
⇒ r(t) = dx(t)/dt
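The chain u(t) → r(t) → parabolic signal can be checked numerically by running integration (cumulative sums). A minimal NumPy sketch, assuming a uniform grid of our own choosing:

```python
import numpy as np

dt = 1e-3
t = np.arange(0.0, 5.0, dt)

u = np.ones_like(t)          # unit step u(t) = 1 for t >= 0
r = np.cumsum(u) * dt        # running integral of u(t) -> ramp r(t) = t
x = np.cumsum(r) * dt        # running integral of r(t) -> parabolic signal t^2/2

print(r[-1])   # ≈ 5.0   (t at the end of the grid)
print(x[-1])   # ≈ 12.5  (t^2/2 at t = 5)
```

Differentiating in the opposite direction (np.diff) recovers the relations r(t) = dx(t)/dt and u(t) = dr(t)/dt.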
Signum Function
The signum function is denoted as sgn(t). It is defined as
sgn(t) = 1 for t > 0; 0 for t = 0; −1 for t < 0
sgn(t) = 2u(t) − 1
Exponential Signal
The exponential signal is of the form x(t) = e^{αt}.
Case i: if α = 0, then x(t) = e⁰ = 1.
Case ii: if α < 0 (negative), then x(t) = e^{αt} decays with time. The shape is called a decaying exponential.
Case iii: if α > 0 (positive), then x(t) = e^{αt} grows with time. The shape is called a rising exponential.
Rectangular Signal
Let it be denoted as x(t); it is a pulse of amplitude A over the interval −T/2 < t < T/2, written x(t) = A rect(t/T).
Triangular Signal
Let it be denoted as x(t); it is defined as x(t) = A(1 − |t|/T) for |t| ≤ T, and 0 otherwise.
Sinusoidal Signal
A sinusoidal signal is of the form x(t) = A sin(ω₀t),
where T₀ = 2π/ω₀ is the period of the signal.
Sinc Function
It is denoted as sinc(t) and it is defined as
sinc(t) = sin(πt)/(πt)
= 0 for t = ±1, ±2, ±3, …
Sampling Function
It is denoted as sa(t) and it is defined as
sa(t) = sin(t)/t
= 0 for t = ±π, ±2π, ±3π, …
SIGNALS CLASSIFICATION
Example 1: Let x(t) = t²
x(−t) = (−t)² = t² = x(t)
∴ t² is an even function.
Example 2: As shown in the following diagram, the rectangle function satisfies x(t) = x(−t), so it is also an even function.
Let x(t) = sin t. Then x(−t) = sin(−t) = −sin t = −x(t). ∴ sin t is an odd function.
Any function f(t) can be expressed as the sum of its even and odd parts:
f(t) = f_e(t) + f_o(t)
where
f_e(t) = ½[f(t) + f(−t)] and f_o(t) = ½[f(t) − f(−t)]
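The even–odd decomposition can be verified numerically; for instance e^t splits into cosh t (even) plus sinh t (odd). A small NumPy sketch, with a grid and test function of our own choosing:

```python
import numpy as np

t = np.linspace(-2.0, 2.0, 401)     # symmetric grid, so f(-t) is just a flip
f = np.exp(t)                       # test function

fe = 0.5 * (f + f[::-1])            # even part (f(t) + f(-t))/2 -> cosh(t)
fo = 0.5 * (f - f[::-1])            # odd part  (f(t) - f(-t))/2 -> sinh(t)

print(np.allclose(fe + fo, f))      # True: the two parts reconstruct f
print(np.allclose(fe, np.cosh(t)))  # True
print(np.allclose(fo, np.sinh(t)))  # True
```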
The above signal will repeat for every time interval T₀; hence it is periodic with period T₀.
Power P = lim_{T→∞} (1/2T) ∫_{−T}^{T} x²(t) dt
NOTE: A signal cannot be both an energy signal and a power signal simultaneously. Also, a signal may be neither an energy nor a power signal.
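The power definition can be checked numerically: for x(t) = A sin(ω₀t), the average power is A²/2 regardless of ω₀. A sketch, where a finite T approximates the limit (values ours):

```python
import numpy as np

A = 3.0
w0 = 2 * np.pi                      # period = 1 s
dt = 1e-3
T = 100.0
t = np.arange(-T, T, dt)
x = A * np.sin(w0 * t)

P = np.sum(x**2) * dt / (2 * T)     # (1/2T) ∫ x^2 dt as a Riemann sum
print(P)   # ≈ 4.5 = A^2 / 2
```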
The basic operations performed on signals fall into two categories:
1. Amplitude operations
2. Time operations
Amplitude Scaling
C·x(t) is an amplitude-scaled version of x(t), whose amplitude is scaled by a factor C.
Addition
Addition of two signals is nothing but addition of their corresponding amplitudes. This can be best
explained by using the following example:
Multiplication
Multiplication of two signals is nothing but multiplication of their corresponding amplitudes. This
can be best explained by the following example:
Time Shifting
x(t $\pm$ t0 ) is time shifted version of the signal xt .
x (t + t0 ) → negative shift
x (t - t0 ) → positive shift
Time Scaling
x(At) is a time-scaled version of the signal x(t), where A is always positive.
Note: u(at) = u(t); time scaling is not applicable to the unit step function.
Time Reversal
x(−t) is the time reversal of the signal x(t).
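Shifting, scaling and reversal are easy to see on a sampled signal. A NumPy sketch (the test signal and names are ours):

```python
import numpy as np

t = np.linspace(-5.0, 5.0, 1001)

def x(t):
    """Test signal: a ramp that is switched on only for 0 <= t < 2."""
    return np.where((t >= 0) & (t < 2), t, 0.0)

shifted = x(t - 1.0)        # x(t - t0): delayed (shifted right) by t0 = 1
scaled = x(2.0 * t)         # x(At) with A = 2: compressed by a factor of 2
reversed_sig = x(-t)        # x(-t): mirrored about t = 0

print(t[np.argmax(x(t))])      # ≈ 2, peak just before switch-off
print(t[np.argmax(shifted)])   # ≈ 3, the peak moves right by 1
```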
SYSTEMS CLASSIFICATION
T[a₁ x₁(t) + a₂ x₂(t)] = a₁ T[x₁(t)] + a₂ T[x₂(t)]
Example:
y(t) = x²(t)
Solution:
y₁(t) = T[x₁(t)] = x₁²(t)
y₂(t) = T[x₂(t)] = x₂²(t)
T[a₁ x₁(t) + a₂ x₂(t)] = [a₁ x₁(t) + a₂ x₂(t)]²
which is not equal to a₁ x₁²(t) + a₂ x₂²(t). Hence the system is said to be non-linear.
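The failure of superposition for y(t) = x²(t) can be confirmed numerically (a sketch; the inputs and weights are ours):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 100)
x1, x2 = np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)
a1, a2 = 2.0, 3.0

T = lambda x: x**2                  # the system y(t) = x^2(t)

lhs = T(a1 * x1 + a2 * x2)          # response to the weighted sum of inputs
rhs = a1 * T(x1) + a2 * T(x2)       # weighted sum of the individual responses

print(np.allclose(lhs, rhs))        # False -> superposition fails: non-linear
```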
The condition for a time-invariant system is:
y(n, t) = y(n − t)
The condition for a time-variant system is:
y(n, t) ≠ y(n − t)
where y(n, t) = T[x(n − t)] = input change
y(n − t) = output change
Example:
y(n) = x(−n)
y(n, t) = T[x(n − t)] = x(−n − t)
y(n − t) = x(−(n − t)) = x(−n + t)
∴ y(n, t) ≠ y(n − t). Hence, the system is time variant.
Linear Time Variant (LTV) and Linear Time Invariant (LTI) Systems
If a system is both linear and time variant, then it is called a linear time variant (LTV) system.
If a system is both linear and time invariant, then it is called a linear time invariant (LTI) system.
Example 1: y(t) = 2x(t)
For the present value t = 0, the system output is y(0) = 2x(0). Here, the output depends only upon the present input. Hence the system is memoryless or static.
Example 2: y(t) = 2x(t) + 3x(t − 3)
For the present value t = 0, the system output is y(0) = 2x(0) + 3x(−3). Here x(−3) is a past value, and the system requires memory to produce this output. Hence, the system is a dynamic system.
For a non-causal system, the output depends upon future inputs also.
Example 1: y(t) = 2x(t) + 3x(t − 3)
Here, the system output only depends upon present and past inputs. Hence, the system is causal.
Example 2: y(t) = 2x(t) + 3x(t − 3) + 6x(t + 3)
For the present value t = 1, the system output is y(1) = 2x(1) + 3x(−2) + 6x(4). Here, the system output depends upon a future input. Hence the system is non-causal.
Y(S) = X(S) H₁(S) H₂(S)
= X(S) H₁(S) · 1/H₁(S)    [since H₂(S) = 1/H₁(S)]
∴ Y(S) = X(S)
→ y(t) = x(t)
Hence, the system is invertible.
Example 1: y(t) = x²(t)
Let the input be u(t) (unit step, a bounded input); then the output y(t) = u²(t) = u(t) is a bounded output. Hence, the system is stable.
Example 2: y(t) = ∫ x(t) dt
Let the input be u(t) (unit step, a bounded input); then the output y(t) = ∫ u(t) dt is a ramp signal, which is unbounded because the amplitude of the ramp is not finite: it goes to infinity as t → ∞. Hence, the system is unstable.
Vector
A vector contains magnitude and direction. The name of a vector is denoted by bold face type and its magnitude is denoted by light face type.
Example: V is a vector with magnitude V. Consider two vectors V₁ and V₂ as shown in the following diagram. Let the component of V₁ along V₂ be given by C₁₂V₂. The component of a vector V₁ along the vector V₂ can be obtained by taking a perpendicular from the end of V₁ to the vector V₂, as shown in the diagram:
V 1 = C12 V 2 + V e
But this is not the only way of expressing vector V₁ in terms of V₂. The alternate possibilities are:
V₁ = C₁V₂ + V_e1
V₁ = C₂V₂ + V_e2
The error signal is minimum when the component is taken along the perpendicular projection. If C₁₂ = 0, then the two signals are said to be orthogonal.
The dot product of two vectors is defined as
V₁ · V₂ = V₁ V₂ cos θ,  where θ is the angle between V₁ and V₂
V₁ · V₂ = V₂ · V₁
The component of V₁ along V₂ is V₁ cos θ = (V₁ · V₂)/V₂.
From the diagram, this component is also C₁₂V₂:
C₁₂V₂ = (V₁ · V₂)/V₂
⇒ C₁₂ = (V₁ · V₂)/V₂²
Signal
The concept of orthogonality can be applied to signals. Let us consider two signals f₁(t) and f₂(t). Similar to vectors, you can approximate f₁(t) in terms of f₂(t) as
f₁(t) ≈ C₁₂ f₂(t) for (t₁ < t < t₂)
with error f_e(t) = f₁(t) − C₁₂ f₂(t).
One possible measure of error is the average of f_e(t) over the interval:
ε = (1/(t₂ − t₁)) ∫_{t₁}^{t₂} f_e(t) dt
= (1/(t₂ − t₁)) ∫_{t₁}^{t₂} [f₁(t) − C₁₂ f₂(t)] dt
However, this measure is not satisfactory, because positive and negative values of f_e(t) can cancel and make the average small even when the error itself is large. This can be corrected by taking the mean of the square of the error function.
ε = (1/(t₂ − t₁)) ∫_{t₁}^{t₂} [f_e(t)]² dt
= (1/(t₂ − t₁)) ∫_{t₁}^{t₂} [f₁(t) − C₁₂ f₂(t)]² dt
where ε is the mean square value of the error signal. To find the value of C₁₂ which minimizes the error, set dε/dC₁₂ = 0:
⇒ d/dC₁₂ [ (1/(t₂ − t₁)) ∫_{t₁}^{t₂} [f₁(t) − C₁₂ f₂(t)]² dt ] = 0
⇒ (1/(t₂ − t₁)) ∫_{t₁}^{t₂} [ d/dC₁₂ f₁²(t) − d/dC₁₂ 2f₁(t)C₁₂f₂(t) + d/dC₁₂ C₁₂²f₂²(t) ] dt = 0
The derivative of the terms which do not contain C₁₂ is zero:
⇒ −2 ∫_{t₁}^{t₂} f₁(t)f₂(t) dt + 2C₁₂ ∫_{t₁}^{t₂} f₂²(t) dt = 0
⇒ C₁₂ = ∫_{t₁}^{t₂} f₁(t)f₂(t) dt / ∫_{t₁}^{t₂} f₂²(t) dt
If C₁₂ = 0, then
∫_{t₁}^{t₂} f₁(t)f₂(t) dt = 0
and the two signals f₁(t) and f₂(t) are said to be orthogonal.
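As a numerical illustration of C₁₂: sin t and cos t are orthogonal over one full period, so the best-fit component of one along the other is zero. A sketch (the grid is ours):

```python
import numpy as np

t1, t2 = 0.0, 2 * np.pi
N = 100_000
dt = (t2 - t1) / N
t = np.arange(t1, t2, dt)

f1 = np.sin(t)
f2 = np.cos(t)

# C12 = ∫ f1 f2 dt / ∫ f2^2 dt, evaluated as Riemann sums on a uniform grid
C12 = np.sum(f1 * f2) / np.sum(f2**2)
print(C12)   # ≈ 0 -> sin and cos are orthogonal over a full period
```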
Consider three mutually orthogonal unit vectors V_X, V_Y, V_Z:
V_X · V_X = V_Y · V_Y = V_Z · V_Z = 1
V_X · V_Y = V_Y · V_Z = V_Z · V_X = 0
You can write the above conditions as
V_a · V_b = 1 for a = b; 0 for a ≠ b
The vector A can be represented in terms of its components and unit vectors as
A = X₁V_X + Y₁V_Y + Z₁V_Z ................ (1)
Any vector in this three-dimensional space can be represented in terms of these three unit vectors only.
If you consider an n-dimensional space, then any vector A in that space can be represented as
A = X₁V_X + Y₁V_Y + Z₁V_Z + ... + N₁V_N ..... (2)
As the magnitude of the unit vectors is unity, the component of A along any unit vector V_G is simply
G₁ = A · V_G ............... (3)
Substitute equation 2 in equation 3.
Consider a function f(t). It can be approximated in this orthogonal signal space by adding the components along mutually orthogonal signals, i.e.
f(t) ≈ C₁x₁(t) + C₂x₂(t) + ... + Cₙxₙ(t)
with error f_e(t) = f(t) − Σ_{r=1}^{n} C_r x_r(t).
The components which minimize the mean square error can be found from
dε/dC₁ = dε/dC₂ = ... = dε/dC_k = 0
Let us consider dε/dC_k = 0:
d/dC_k [ (1/(t₂ − t₁)) ∫_{t₁}^{t₂} [f(t) − Σ_{r=1}^{n} C_r x_r(t)]² dt ] = 0
All terms that do not contain C_k are zero; i.e. in the summation, the r = k term remains and all other terms vanish:
⇒ −2 ∫_{t₁}^{t₂} f(t) x_k(t) dt + 2C_k ∫_{t₁}^{t₂} [x_k²(t)] dt = 0
⇒ C_k = ∫_{t₁}^{t₂} f(t) x_k(t) dt / ∫_{t₁}^{t₂} x_k²(t) dt
ε = (1/(t₂ − t₁)) ∫_{t₁}^{t₂} [f_e(t)]² dt
= (1/(t₂ − t₁)) ∫_{t₁}^{t₂} [f(t) − Σ_{r=1}^{n} C_r x_r(t)]² dt
= (1/(t₂ − t₁)) [ ∫_{t₁}^{t₂} f²(t) dt + Σ_{r=1}^{n} C_r² ∫_{t₁}^{t₂} x_r²(t) dt − 2 Σ_{r=1}^{n} C_r ∫_{t₁}^{t₂} x_r(t) f(t) dt ]
You know that C_r² ∫_{t₁}^{t₂} x_r²(t) dt = C_r ∫_{t₁}^{t₂} x_r(t) f(t) dt = C_r² K_r, where K_r = ∫_{t₁}^{t₂} x_r²(t) dt.
ε = (1/(t₂ − t₁)) [ ∫_{t₁}^{t₂} f²(t) dt + Σ_{r=1}^{n} C_r² K_r − 2 Σ_{r=1}^{n} C_r² K_r ]
= (1/(t₂ − t₁)) [ ∫_{t₁}^{t₂} f²(t) dt − Σ_{r=1}^{n} C_r² K_r ]
∴ ε = (1/(t₂ − t₁)) [ ∫_{t₁}^{t₂} f²(t) dt − (C₁²K₁ + C₂²K₂ + ... + Cₙ²Kₙ) ]
The above equation is used to evaluate the mean square error.
f(t) can be approximated with this orthogonal set by adding the components along mutually orthogonal signals. For complex signals, the coefficient of f₁(t) along f₂(t) becomes
C₁₂ = ∫_{t₁}^{t₂} f₁(t) f₂*(t) dt / ∫_{t₁}^{t₂} |f₂(t)|² dt
Jean Baptiste Joseph Fourier, a French mathematician and physicist, was born in Auxerre, France. He originated the Fourier series and the Fourier transform and applied them to problems of heat transfer and vibrations. The Fourier series, the Fourier transform, and Fourier's Law are named in his honour.
φ_k(t) = {e^{jkω₀t}} = {e^{jk(2π/T)t}}   where k = 0, ±1, ±2, ... n ..... (1)
All these signals are periodic with period T.
x(t) = Σ_{k=−∞}^{∞} a_k e^{jkω₀t} ..... (2)
The term k = ±1 has fundamental frequency ω₀ and is called the 1st harmonic; the term k = ±2 has frequency 2ω₀ and is called the 2nd harmonic, and so on...
Multiply both sides of equation 2 by e^{−jnω₀t}:
x(t) e^{−jnω₀t} = Σ_{k=−∞}^{∞} a_k e^{jkω₀t} · e^{−jnω₀t}
Integrate over one period:
∫₀ᵀ x(t) e^{−jnω₀t} dt = ∫₀ᵀ Σ_{k=−∞}^{∞} a_k e^{jkω₀t} · e^{−jnω₀t} dt
= ∫₀ᵀ Σ_{k=−∞}^{∞} a_k e^{j(k−n)ω₀t} dt
∫₀ᵀ x(t) e^{−jnω₀t} dt = Σ_{k=−∞}^{∞} a_k ∫₀ᵀ e^{j(k−n)ω₀t} dt ...... (2)
By Euler's formula,
∫₀ᵀ e^{j(k−n)ω₀t} dt = ∫₀ᵀ cos((k−n)ω₀t) dt + j ∫₀ᵀ sin((k−n)ω₀t) dt
∫₀ᵀ e^{j(k−n)ω₀t} dt = T for k = n; 0 for k ≠ n
Hence in equation 2, the integral is zero for all values of k except at k = n. Put k = n in equation 2:
⇒ ∫₀ᵀ x(t) e^{−jnω₀t} dt = a_n T
⇒ a_n = (1/T) ∫₀ᵀ x(t) e^{−jnω₀t} dt
Replace n by k:
⇒ a_k = (1/T) ∫₀ᵀ x(t) e^{−jkω₀t} dt
∴ x(t) = Σ_{k=−∞}^{∞} a_k e^{jkω₀t}
where a_k = (1/T) ∫₀ᵀ x(t) e^{−jkω₀t} dt
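The coefficient formula can be evaluated numerically; for a ±1 square wave, for example, the magnitudes come out as 2/(kπ) for odd k and 0 for even k. A sketch (the discretisation is ours):

```python
import numpy as np

N = 2000
T = 2 * np.pi
w0 = 2 * np.pi / T                     # = 1
dt = T / N
t = np.arange(N) * dt
x = np.where(t < T / 2, 1.0, -1.0)     # one period of a +1/-1 square wave

def a(k):
    """a_k = (1/T) ∫_0^T x(t) e^{-jk w0 t} dt, as a Riemann sum."""
    return np.sum(x * np.exp(-1j * k * w0 * t)) * dt / T

print(abs(a(1)))   # ≈ 2/pi ≈ 0.6366 (odd harmonics are present)
print(abs(a(2)))   # ≈ 0              (even harmonics vanish)
```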
FOURIER SERIES PROPERTIES
Linearity Property
If x(t) ⟷ f_xn & y(t) ⟷ f_yn (Fourier series coefficients), then the linearity property states that
a x(t) + b y(t) ⟷ a f_xn + b f_yn
Conjugate Symmetry Property
The conjugate symmetry property for a real-valued time signal states that
f*_xn = f_x(−n)
and the conjugate symmetry property for a purely imaginary-valued time signal states that
f*_xn = −f_x(−n)
FOURIER SERIES TYPES
x(t) = a₀ cos 0ω₀t + a₁ cos 1ω₀t + a₂ cos 2ω₀t + ... + a_n cos nω₀t + ...
+ b₀ sin 0ω₀t + b₁ sin 1ω₀t + ... + b_n sin nω₀t + ...
= a₀ + a₁ cos 1ω₀t + a₂ cos 2ω₀t + ... + a_n cos nω₀t + ...
+ b₁ sin 1ω₀t + ... + b_n sin nω₀t + ...
∴ x(t) = a₀ + Σ_{n=1}^{∞} (a_n cos nω₀t + b_n sin nω₀t)   (t₀ < t < t₀ + T)
where
a₀ = (1/T) ∫_{t₀}^{t₀+T} x(t) dt
a_n = (2/T) ∫_{t₀}^{t₀+T} x(t) · cos nω₀t dt
b_n = (2/T) ∫_{t₀}^{t₀+T} x(t) · sin nω₀t dt
∴ f(t) = Σ_{n=−∞}^{∞} F_n e^{jnω₀t}   (t₀ < t < t₀ + T) ....... (1)
Equation 1 represents the exponential Fourier series representation of a signal f(t) over the interval (t₀, t₀ + T). The Fourier coefficient is given as
F_n = ∫_{t₀}^{t₀+T} f(t) (e^{jnω₀t})* dt / ∫_{t₀}^{t₀+T} e^{jnω₀t} (e^{jnω₀t})* dt
= ∫_{t₀}^{t₀+T} f(t) e^{−jnω₀t} dt / ∫_{t₀}^{t₀+T} e^{−jnω₀t} e^{jnω₀t} dt
= ∫_{t₀}^{t₀+T} f(t) e^{−jnω₀t} dt / ∫_{t₀}^{t₀+T} 1 dt = (1/T) ∫_{t₀}^{t₀+T} f(t) e^{−jnω₀t} dt
∴ F_n = (1/T) ∫_{t₀}^{t₀+T} f(t) e^{−jnω₀t} dt
Consider the trigonometric and exponential Fourier series of a periodic signal:
x(t) = a₀ + Σ_{n=1}^{∞} (a_n cos nω₀t + b_n sin nω₀t) ...... (1)
x(t) = Σ_{n=−∞}^{∞} F_n e^{jnω₀t} ...... (2)
Comparing the two representations:
a₀ = F₀
a_n = F_n + F_−n
b_n = j(F_n − F_−n)
Similarly,
F₀ = a₀
F_n = (1/2)(a_n − j b_n)
F_−n = (1/2)(a_n + j b_n)
The main drawback of the Fourier series is that it is only applicable to periodic signals. There are naturally produced signals that are non-periodic (aperiodic), which we cannot represent using a Fourier series. To overcome this shortcoming, Fourier developed a mathematical model to transform signals between the time (or spatial) domain and the frequency domain and vice versa, which is called the 'Fourier transform'.
Fourier transform has many applications in physics and engineering such as analysis of LTI
systems, RADAR, astronomy, signal processing etc.
Consider a periodic signal f(t) with period T₀. Its complex exponential Fourier series representation is
f(t) = Σ_{k=−∞}^{∞} a_k e^{jkω₀t}
= Σ_{k=−∞}^{∞} a_k e^{j(2π/T₀)kt} ...... (1)
Let Δf = 1/T₀; then equation 1 becomes
f(t) = Σ_{k=−∞}^{∞} a_k e^{j2πkΔft} ...... (2)
but you know that
a_k = (1/T₀) ∫_{t₀}^{t₀+T} f(t) e^{−jkω₀t} dt
Substitute in equation 2:
(2) ⇒ f(t) = Σ_{k=−∞}^{∞} [ (1/T₀) ∫_{t₀}^{t₀+T} f(t) e^{−jkω₀t} dt ] e^{j2πkΔft}
Let t₀ = −T/2:
= Σ_{k=−∞}^{∞} [ ∫_{−T/2}^{T/2} f(t) e^{−j2πkΔft} dt ] e^{j2πkΔft} · Δf
In the limit as T → ∞, Δf approaches the differential df, kΔf becomes a continuous variable f, and the summation becomes integration:
f(t) = lim_{T→∞} { Σ_{k=−∞}^{∞} [ ∫_{−T/2}^{T/2} f(t) e^{−j2πkΔft} dt ] e^{j2πkΔft} · Δf }
= ∫_{−∞}^{∞} [ ∫_{−∞}^{∞} f(t) e^{−j2πft} dt ] e^{j2πft} df
With ω = 2πf, this is the Fourier transform pair:
F(ω) = ∫_{−∞}^{∞} f(t) e^{−jωt} dt
f(t) = (1/2π) ∫_{−∞}^{∞} F(ω) e^{jωt} dω
FT of GATE Function
F(ω) = AT·Sa(ωT/2)
FT of Impulse Function
FT[δ(t)] = ∫_{−∞}^{∞} δ(t) e^{−jωt} dt
= e^{−jωt} |_{t=0}
= e⁰ = 1
∴ FT[δ(t)] = 1
FT of a Complex Exponential
e^{jω₀t} ⟷ 2πδ(ω − ω₀)
FT of Signum Function
sgn(t) ⟷ 2/(jω)
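The gate-function pair above can be spot-checked numerically with F(ω) = ∫ f(t) e^{−jωt} dt. A sketch, where the amplitude A, width Tw and grid are our choices; note that NumPy's np.sinc(x) is sin(πx)/(πx), so Sa(ωTw/2) = np.sinc(ωTw/(2π)):

```python
import numpy as np

A, Tw = 2.0, 1.0                              # gate amplitude and width
dt = 1e-4
t = np.arange(-5.0, 5.0, dt)
f = np.where(np.abs(t) <= Tw / 2, A, 0.0)     # gate (rectangular) pulse

def F(w):
    """Numerical F(w) = ∫ f(t) e^{-jwt} dt as a Riemann sum."""
    return np.sum(f * np.exp(-1j * w * t)) * dt

w = 3.0
expected = A * Tw * np.sinc(w * Tw / (2 * np.pi))   # A·Tw·Sa(w·Tw/2)
print(abs(F(w) - expected))   # ≈ 0
```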
There must be a finite number of discontinuities in the signal f(t) in the given interval of time.
An absolutely summable sequence always has finite energy, but a finite-energy sequence is not necessarily absolutely summable.
FOURIER TRANSFORMS PROPERTIES
Linearity Property
If x(t) ⟷ X(ω) & y(t) ⟷ Y(ω), then the linearity property states that
a x(t) + b y(t) ⟷ a X(ω) + b Y(ω)
Time Shifting Property
x(t − t₀) ⟷ e^{−jωt₀} X(ω)
Frequency Shifting Property
e^{jω₀t} · x(t) ⟷ X(ω − ω₀)
Time Reversal Property
x(−t) ⟷ X(−ω)
Time Scaling Property
x(at) ⟷ (1/|a|) X(ω/a)
Differentiation and Integration Properties
dⁿx(t)/dtⁿ ⟷ (jω)ⁿ · X(ω)
and the integration property states that
∫ x(t) dt ⟷ (1/jω) X(ω)
∭ ... ∫ x(t) dt ⟷ (1/(jω)ⁿ) X(ω)
Multiplication and Convolution Properties
x(t) · y(t) ⟷ (1/2π) X(ω) * Y(ω)
and the convolution property states that
x(t) * y(t) ⟷ X(ω) · Y(ω)
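The convolution property can be verified with the DFT: zero-pad two sequences to the full output length so that circular convolution equals linear convolution, multiply their spectra, and invert. A sketch (the sequences are ours):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 0.0, 0.0])   # zero-padded to length 3 + 3 - 1 = 5
y = np.array([4.0, 5.0, 6.0, 0.0, 0.0])

direct = np.convolve(x[:3], y[:3])                             # time domain
via_fft = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(y)))  # frequency domain

print(direct)    # [ 4. 13. 28. 27. 18.]
print(via_fft)   # same values, up to rounding
```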
DISTORTIONLESS TRANSMISSION
Transmission is said to be distortionless if the input and output have identical wave shapes, i.e., in distortionless transmission, the input x(t) and output y(t) satisfy the condition:
y(t) = K x(t − t_d)
where t_d = delay time and K = a constant.
Taking the Fourier transform of both sides:
FT[y(t)] = FT[K x(t − t_d)]
= K FT[x(t − t_d)]
= K X(ω) e^{−jωt_d}
∴ Y(ω) = K X(ω) e^{−jωt_d}
Thus, distortionless transmission of a signal x(t) through a system with impulse response h(t) is achieved when
|H(ω)| = K   (constant amplitude response)
∠H(ω) = −ωt_d   (phase linear in ω)
A physical transmission system may have amplitude and phase responses as shown below:
HILBERT TRANSFORM
The Hilbert transform of a signal x(t) is defined as the transform in which the phase angle of all components of the signal is shifted by ±90°.
The Hilbert transform of x(t) is represented by x̂(t) and given by
x̂(t) = (1/π) ∫_{−∞}^{∞} x(k)/(t − k) dk
The inverse Hilbert transform is given by
x(t) = −(1/π) ∫_{−∞}^{∞} x̂(k)/(t − k) dk
x(t), x̂(t) is called a Hilbert transform pair.
Properties of the Hilbert transform:
The energy spectral density is the same for both x(t) and x̂(t).
x(t) and x̂(t) are orthogonal.
The Hilbert transform of x̂(t) is −x(t).
If the Fourier transform exists, then the Hilbert transform also exists for energy and power signals.
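The ±90° phase shift is easy to see numerically: in the frequency domain the Hilbert transformer multiplies the spectrum by −j·sgn(ω), and it turns a cosine into a sine. An FFT-based sketch (our own grid, not a library implementation):

```python
import numpy as np

N = 1024
t = np.arange(N) / N                  # one period of a unit-frequency cosine
x = np.cos(2 * np.pi * t)

# Hilbert transform in the frequency domain: X_hat(w) = -j * sgn(w) * X(w)
X = np.fft.fft(x)
Xh = -1j * np.sign(np.fft.fftfreq(N)) * X
xh = np.real(np.fft.ifft(Xh))

print(np.allclose(xh, np.sin(2 * np.pi * t), atol=1e-9))  # True: cos -> sin
print(abs(np.dot(x, xh)) < 1e-9)                          # True: x and x_hat are orthogonal
```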
CONVOLUTION AND CORRELATION
Convolution
Convolution is a mathematical operation used to express the relation between the input and output of an LTI system. It relates the input, output and impulse response of an LTI system as
y(t) = x(t) * h(t)
where y(t) = output of the LTI system, x(t) = input of the LTI system, h(t) = impulse response of the LTI system.
There are two types of convolution:
Continuous convolution
Discrete convolution
Continuous Convolution
y(t) = x(t) * h(t) = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ
Discrete Convolution
y[n] = x[n] * h[n] = Σ_{k=−∞}^{∞} x[k] h[n − k]
Deconvolution
Deconvolution is the reverse process of convolution; it is widely used in signal and image processing.
Properties of Convolution
Commutative Property
x1 (t) ∗ x2 (t) = x2 (t) ∗ x1 (t)
Distributive Property
x1 (t) ∗ [x2 (t) + x3 (t)] = [x1 (t) ∗ x2 (t)] + [x1 (t) ∗ x3 (t)]
Associative Property
x₁(t) * [x₂(t) * x₃(t)] = [x₁(t) * x₂(t)] * x₃(t)
Shifting Property
If x₁(t) * x₂(t) = y(t), then x₁(t) * x₂(t − t₀) = y(t − t₀)
Scaling Property
If x(t) * h(t) = y(t), then x(at) * h(at) = (1/|a|) y(at)
Differentiation of Output
dy(t)/dt = x(t) * dh(t)/dt
Here, we have two rectangles of unequal length to convolve, which results in a trapezium.
The range of the resulting signal runs from the sum of the lower limits to the sum of the upper limits:
−1 + (−2) < t < 2 + 2
−3 < t < 4
Hence the result is a trapezium of duration 7.
∴ A_y = A_x · A_h — the area of the convolved signal is the product of the areas of the two signals.
DC Component
The DC component of any signal is given by
DC component = area of the signal / duration of the signal
Ex: What is the DC component of the resultant convolved signal given below?
Here, area of the convolved signal = A_x · A_h = 3 × 4 = 12
Duration of the convolved signal = sum of lower limits < t < sum of upper limits
= −3 < t < 4
Duration = 7
∴ DC component of the convolved signal = area / duration = 12/7
Discrete Convolution
Let us see how to calculate discrete convolution:
Note: if two sequences have m and n samples respectively, then the resulting convolved sequence will have [m + n − 1] samples.
Ex: Convolve two sequences x[n] = {1, 2, 3} and h[n] = {−1, 2, 2}.
y[n] = x[n] * h[n] = [−1, 0, 3, 10, 6]
Here x[n] contains 3 samples and h[n] also has 3 samples, so the resulting sequence has 3 + 3 − 1 = 5 samples.
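The same result drops out of np.convolve, which implements the discrete convolution sum directly:

```python
import numpy as np

x = [1, 2, 3]
h = [-1, 2, 2]

y = np.convolve(x, h)   # linear (aperiodic) convolution
print(y)                # [-1  0  3 10  6]
print(len(y))           # 5 = 3 + 3 - 1
```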
Periodic convolution is valid for the discrete Fourier transform. To calculate periodic convolution, all the samples must be real. Periodic or circular convolution is also called fast convolution.
If two sequences of lengths m and n respectively are convolved using circular convolution, then the resulting sequence has max[m, n] samples.
Ex: Convolve two sequences x[n] = {1, 2, 3} & h[n] = {−1, 2, 2} using circular convolution.
Normal (linear) convolution gives [−1, 0, 3, 10, 6].
Here x[n] contains 3 samples and h[n] also has 3 samples. Hence the resulting sequence obtained by circular convolution must have max[3, 3] = 3 samples.
To get the periodic convolution result, the first 3 samples (as the period is 3) of the normal convolution are kept, and the remaining two samples are added to the first samples:
[−1 + 10, 0 + 6, 3] = [9, 6, 3]
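That folding step, and the equivalent route through the DFT (the circular convolution theorem), can be sketched as:

```python
import numpy as np

x = [1, 2, 3]
h = [-1, 2, 2]

linear = np.convolve(x, h)                 # [-1, 0, 3, 10, 6]

# Fold the tail back onto the first N samples (N = max(len(x), len(h)) = 3)
N = 3
circular = linear[:N].copy()
circular[:len(linear) - N] += linear[N:]
print(circular)      # [9 6 3]

# Same result via the DFT: N-point circular convolution <-> product of N-point DFTs
via_fft = np.real(np.fft.ifft(np.fft.fft(x, N) * np.fft.fft(h, N)))
print(via_fft)       # [9. 6. 3.]
```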
Correlation is of two types:
Auto correlation
Cross correlation
Consider a signal x(t). The auto correlation function of x(t) with its time-delayed version is given by
R₁₁(τ) = R(τ) = ∫_{−∞}^{∞} x(t) x(t − τ) dt   [+ve shift]
= ∫_{−∞}^{∞} x(t) x(t + τ) dt   [−ve shift]
The auto correlation function and the energy spectral density are Fourier transform pairs, i.e.
F.T[R(τ)] = Ψ(ω)
Ψ(ω) = ∫_{−∞}^{∞} R(τ) e^{−jωτ} dτ
R(τ) = x(τ) * x(−τ)
The auto correlation of a power signal exhibits conjugate symmetry, i.e. R(τ) = R*(−τ).
The auto correlation function of a power signal at τ = 0 (at the origin) is equal to the total power of that signal, i.e.
R(0) = ρ
The auto correlation function of a power signal is maximum at the origin, i.e.
|R(τ)| ≤ R(0) ∀ τ
The auto correlation function and the power spectral density are Fourier transform pairs, i.e.
F.T[R(τ)] = s(ω)
s(ω) = ∫_{−∞}^{∞} R(τ) e^{−jωτ} dτ
R(τ) = x(τ) * x(−τ)
Density Spectrum
Let us see the density spectra:
Consider two signals x₁(t) and x₂(t). The cross correlation of these two signals, R₁₂(τ), is given by
R₁₂(τ) = ∫_{−∞}^{∞} x₁(t) x₂(t − τ) dt   [+ve shift]
= ∫_{−∞}^{∞} x₁(t + τ) x₂(t) dt   [−ve shift]
For power signals, if lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} x₁(t) x₂*(t) dt = 0, then the two signals are said to be orthogonal.
The cross correlation function corresponds to the multiplication of the spectrum of one signal by the complex conjugate of the spectrum of the other signal, i.e.
R₁₂(τ) ⟷ X₁(ω) X₂*(ω)
Parseval's Theorem
Parseval's theorem for energy signals states that the total energy in a signal can be obtained from the spectrum of the signal as
E = (1/2π) ∫_{−∞}^{∞} |X(ω)|² dω
Note: If a signal has energy E, then the time-scaled version of that signal, x(at), has energy E/a.
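Parseval's theorem can be spot-checked for x(t) = e^{−t}u(t), whose energy is 1/2 and whose spectrum is X(ω) = 1/(1 + jω). A sketch with truncated numerical integrals (the grids are ours):

```python
import numpy as np

# Time domain: E = ∫ x^2 dt for x(t) = e^{-t} u(t) -> 1/2
dt = 1e-3
t = np.arange(0.0, 50.0, dt)
E_time = np.sum(np.exp(-t)**2) * dt

# Frequency domain: (1/2π) ∫ |X(w)|^2 dw with X(w) = 1/(1 + jw)
dw = 1e-3
w = np.arange(-500.0, 500.0, dw)
E_freq = np.sum(np.abs(1.0 / (1.0 + 1j * w))**2) * dw / (2 * np.pi)

print(E_time)   # ≈ 0.5
print(E_freq)   # ≈ 0.5
```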
SIGNALS SAMPLING THEOREM
Statement: A continuous-time signal can be represented in its samples and recovered back when the sampling frequency fs is greater than or equal to twice the highest frequency component of the message signal, i.e.
fs ≥ 2fm
Proof: Consider a continuous-time signal x(t). The spectrum of x(t) is band-limited to fm Hz, i.e. the spectrum of x(t) is zero for |ω| > ωm.
Sampling of the input signal x(t) can be obtained by multiplying x(t) with an impulse train δ(t) of period Ts. The output of the multiplier is a discrete signal called the sampled signal, which is represented by y(t) in the following diagrams:
Here, you can observe that the sampled signal takes on the period of the impulse train. The process of sampling can be explained by the following mathematical expression:
y(t) = x(t) · δ(t) ...... (1)
The trigonometric Fourier series representation of δ(t) is given by
δ(t) = a₀ + Σ_{n=1}^{∞} (a_n cos nω_s t + b_n sin nω_s t) ...... (2)
where
a₀ = (1/Ts) ∫_{−Ts/2}^{Ts/2} δ(t) dt = (1/Ts) δ(0) = 1/Ts
a_n = (2/Ts) ∫_{−Ts/2}^{Ts/2} δ(t) cos nω_s t dt = (2/Ts) δ(0) cos(nω_s · 0) = 2/Ts
b_n = (2/Ts) ∫_{−Ts/2}^{Ts/2} δ(t) sin nω_s t dt = (2/Ts) δ(0) sin(nω_s · 0) = 0
∴ δ(t) = 1/Ts + Σ_{n=1}^{∞} ((2/Ts) cos nω_s t + 0)
Substitute δ(t) in equation 1:
y(t) = x(t) · δ(t)
= x(t) [1/Ts + Σ_{n=1}^{∞} (2/Ts) cos nω_s t]
= (1/Ts) [x(t) + 2 Σ_{n=1}^{∞} (cos nω_s t) x(t)]
y(t) = (1/Ts) [x(t) + 2 cos ω_s t · x(t) + 2 cos 2ω_s t · x(t) + 2 cos 3ω_s t · x(t) ......]
Taking the Fourier transform on both sides:
∴ Y(ω) = (1/Ts) Σ_{n=−∞}^{∞} X(ω − nω_s)   where n = 0, ±1, ±2, ...
To reconstruct x(t), you must recover the input signal spectrum X(ω) from the sampled signal spectrum Y(ω), which is possible when there is no overlap between the cycles of Y(ω).
Possibility of sampled frequency spectrum with different conditions is given by the following
diagrams:
Aliasing Effect
The overlapped region, in the case of under-sampling, represents the aliasing effect, which can be removed by
considering fs > 2fm
using anti-aliasing filters.
Sampling Techniques
There are three types of sampling techniques:
Impulse sampling
Natural sampling
Flat-top sampling
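Aliasing can be demonstrated in a few lines: a 3 Hz sine sampled at fs = 10 Hz (> 2fm) shows its true frequency, but at fs = 4 Hz (< 2fm) it masquerades as fs − fm = 1 Hz. A sketch (the function name and values are ours):

```python
import numpy as np

def dominant_freq(fm, fs, n=400):
    """Sample an fm-Hz sine at fs Hz and return the FFT peak frequency."""
    t = np.arange(n) / fs
    x = np.sin(2 * np.pi * fm * t)
    spectrum = np.abs(np.fft.rfft(x))
    return np.fft.rfftfreq(n, d=1 / fs)[np.argmax(spectrum)]

print(dominant_freq(fm=3.0, fs=10.0))   # 3.0 -> fs > 2 fm, no aliasing
print(dominant_freq(fm=3.0, fs=4.0))    # 1.0 -> under-sampled: 3 Hz aliases to 1 Hz
```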
Impulse Sampling
Impulse sampling can be performed by multiplying the input signal x(t) with an impulse train Σ_{n=−∞}^{∞} δ(t − nT) of period T. Here, the amplitude of each impulse changes with respect to the amplitude of the input signal x(t). The output of the sampler is given by
y(t) = x(t) × Σ_{n=−∞}^{∞} δ(t − nT)
y(t) = y_δ(t) = Σ_{n=−∞}^{∞} x(nT) δ(t − nT) ...... (1)
To get the spectrum of the sampled signal, take the Fourier transform of equation 1 on both sides:
Y(ω) = (1/T) Σ_{n=−∞}^{∞} X(ω − nω_s)
This is called ideal sampling or impulse sampling. You cannot use it practically because the pulse width cannot be zero and the generation of an impulse train is not practically possible.
Natural Sampling
Natural sampling is similar to impulse sampling, except that the impulse train is replaced by a pulse train of period T, i.e. you multiply the input signal x(t) by the pulse train Σ_{n=−∞}^{∞} p(t − nT) as shown below.
The output of the sampler is
y(t) = x(t) × p(t) = x(t) × Σ_{n=−∞}^{∞} p(t − nT) ...... (1)
The exponential Fourier series representation of p(t) can be given as
p(t) = Σ_{n=−∞}^{∞} F_n e^{jnω_s t} ...... (2)
= Σ_{n=−∞}^{∞} F_n e^{j2πnf_s t}
where F_n = (1/T) ∫_{−T/2}^{T/2} p(t) e^{−jnω_s t} dt
= (1/T) P(nω_s)
Substitute the value of F_n in equation 2:
∴ p(t) = Σ_{n=−∞}^{∞} (1/T) P(nω_s) e^{jnω_s t}
= (1/T) Σ_{n=−∞}^{∞} P(nω_s) e^{jnω_s t}
Substitute p(t) in equation 1:
y(t) = x(t) × p(t)
= (1/T) Σ_{n=−∞}^{∞} P(nω_s) x(t) e^{jnω_s t}
To get the spectrum of the sampled signal, take the Fourier transform on both sides:
F.T[y(t)] = F.T[(1/T) Σ_{n=−∞}^{∞} P(nω_s) x(t) e^{jnω_s t}]
= (1/T) Σ_{n=−∞}^{∞} P(nω_s) F.T[x(t) e^{jnω_s t}]
According to the frequency shifting property, F.T[x(t) e^{jnω_s t}] = X(ω − nω_s).
∴ Y(ω) = (1/T) Σ_{n=−∞}^{∞} P(nω_s) X(ω − nω_s)
Theoretically, the sampled signal can be obtained by convolution of a rectangular pulse p(t) with the ideally sampled signal, say y_δ(t), as shown in the diagram:
To get the sampled spectrum, consider Fourier transform on both sides for equation 1
Nyquist Rate
It is the minimum sampling rate at which signal can be converted into samples and can be
recovered back without distortion.
Nyquist rate f_N = 2fm Hz
Nyquist interval = 1/f_N = 1/(2fm) seconds
The sampling rate 2f₂ is large in proportion to f₂. This has practical limitations.
To overcome this, the band pass theorem states that the input signal x(t) can be converted into its samples and recovered back without distortion when the sampling frequency fs < 2f₂.
Also,
fs = 1/T = 2f₂/m
where m is the largest integer < f₂/B
and B is the bandwidth of the signal. If f₂ = KB, then
fs = 1/T = 2KB/m
For band pass signals of bandwidth 2fm and the minimum sampling rate fs = 2B = 4fm,
the spectrum of the sampled signal is given by
Y[ω] = (1/T) Σ_{n=−∞}^{∞} X[ω − 2nB]
LAPLACE TRANSFORMS LT
The complex Fourier transform is also called the bilateral Laplace transform. It is used to solve differential equations. Consider an LTI system excited by a complex exponential signal of the form x(t) = Ge^{st},
where s = any complex number = σ + jω,
σ = the real part of s, and
ω = the imaginary part of s.
The response of the LTI system can be obtained by the convolution of the input with its impulse response, i.e.
y(t) = x(t) × h(t) = ∫_{−∞}^{∞} h(τ) x(t − τ) dτ
= ∫_{−∞}^{∞} h(τ) G e^{s(t−τ)} dτ
= G e^{st} · ∫_{−∞}^{∞} h(τ) e^{−sτ} dτ
∴ y(t) = G e^{st} · H(S)   where H(S) = ∫_{−∞}^{∞} h(τ) e^{−sτ} dτ
The Laplace transform of x(t) and its inverse are given by
X(s) = ∫_{−∞}^{∞} x(t) e^{−st} dt
∴ x(t) = (1/2πj) ∫_{σ−j∞}^{σ+j∞} X(s) e^{st} ds ...... (4)
There must be a finite number of discontinuities in the signal f(t) in the given interval of time.
Linearity Property
If x(t) ⟷ X(s) & y(t) ⟷ Y(s), then the linearity property states that
a x(t) + b y(t) ⟷ a X(s) + b Y(s)
Time Shifting Property
x(t − t₀) ⟷ e^{−st₀} X(s)
Frequency Shifting Property
e^{s₀t} · x(t) ⟷ X(s − s₀)
Time Reversal Property
x(−t) ⟷ X(−s)
Time Scaling Property
x(at) ⟷ (1/|a|) X(s/a)
Integration Property
∫ x(t) dt ⟷ (1/s) X(s)
∭ ... ∫ x(t) dt ⟷ (1/sⁿ) X(s)
Multiplication and Convolution Properties
If x(t) ⟷ X(s) and y(t) ⟷ Y(s),
then the multiplication property states that
x(t) · y(t) ⟷ (1/2πj) X(s) * Y(s)
and the convolution property states that
x(t) * y(t) ⟷ X(s) · Y(s)
REGION OF CONVERGENCE ROC
The range of variation of σ for which the Laplace transform converges is called the region of convergence.
Example 1: Find the Laplace transform and ROC of x(t) = e^{−at} u(t).
L.T[x(t)] = L.T[e^{−at} u(t)] = 1/(s + a)
This converges when Re{s} > −a.
ROC: Re{s} > −a
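This pair can be sanity-checked by numerically evaluating ∫₀^∞ e^{−at} e^{−st} dt at a point inside the ROC (a truncated Riemann sum; the values are ours):

```python
import numpy as np

a = 2.0
s = 1.0 + 3.0j                         # Re{s} = 1 > -a = -2, inside the ROC

dt = 1e-4
t = np.arange(0.0, 60.0, dt)
L = np.sum(np.exp(-(s + a) * t)) * dt  # ∫ e^{-at} u(t) e^{-st} dt, truncated

print(L)             # ≈ 1/(s + a) = 1/(3 + 3j) ≈ 0.1667 - 0.1667j
print(1 / (s + a))
```

Repeating this with Re{s} < −a makes the integrand blow up, which is exactly why those s values lie outside the ROC.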
Example 2: Find the Laplace transform and ROC of x(t) = e^{at} u(−t).
L.T[x(t)] = L.T[e^{at} u(−t)] = −1/(s − a)
This converges when Re{s} < a.
ROC: Re{s} < a
Example 3: Find the Laplace transform and ROC of x(t) = e^{−at} u(t) + e^{at} u(−t).
L.T[x(t)] = L.T[e^{−at} u(t) + e^{at} u(−t)] = 1/(s + a) − 1/(s − a)
For 1/(s + a), the ROC is Re{s} > −a.
For −1/(s − a), the ROC is Re{s} < a.
The combined ROC is the strip −a < Re{s} < a.
A system is said to be stable when all the poles of its transfer function lie in the left half of the s-plane.
A system is said to be unstable when at least one pole of its transfer function is shifted to the right half of the s-plane.
A system is said to be marginally stable when at least one pole of its transfer function lies on the jω axis of the s-plane.
f(t)                     F(s)                      ROC
u(t)                     1/s                       Re{s} > 0
t u(t)                   1/s²                      Re{s} > 0
tⁿ u(t)                  n!/s^{n+1}                Re{s} > 0
e^{at} u(t)              1/(s − a)                 Re{s} > a
e^{−at} u(t)             1/(s + a)                 Re{s} > −a
e^{at} u(−t)             −1/(s − a)                Re{s} < a
e^{−at} u(−t)            −1/(s + a)                Re{s} < −a
t e^{at} u(t)            1/(s − a)²                Re{s} > a
tⁿ e^{at} u(t)           n!/(s − a)^{n+1}          Re{s} > a
t e^{−at} u(t)           1/(s + a)²                Re{s} > −a
tⁿ e^{−at} u(t)          n!/(s + a)^{n+1}          Re{s} > −a
t e^{at} u(−t)           −1/(s − a)²               Re{s} < a
tⁿ e^{at} u(−t)          −n!/(s − a)^{n+1}         Re{s} < a
t e^{−at} u(−t)          −1/(s + a)²               Re{s} < −a
tⁿ e^{−at} u(−t)         −n!/(s + a)^{n+1}         Re{s} < −a
e^{−at} cos bt           (s + a)/((s + a)² + b²)   Re{s} > −a
e^{−at} sin bt           b/((s + a)² + b²)         Re{s} > −a
+ b2