KTU-EC-404 Mod 2 - Cec Notes
College of Engineering Cherthala

MODULE 2
DIGITAL TV & MODULATION
• Digital TV: Digitized Video, Source coding of Digitized Video, Compression of Frames, DCT based JPEG, Compression of Moving Pictures (MPEG), Basic blocks of MPEG-2 and MPEG-4, Digital Video Broadcasting (DVB)
• Modulation: QAM (DVB-S, DVB-C), OFDM for Terrestrial Digital TV (DVB-T), Reception of Digital TV Signals (Cable, Satellite and Terrestrial), Digital TV over IP, Digital terrestrial TV for mobile
• Display Technologies: Basic working of Plasma, LCD and LED Displays
• Refer: Herve Benoit, Digital Television: Satellite, Cable, Terrestrial, IPTV, Mobile TV in the DVB Framework, 3/e, Focal Press, Elsevier, 2008
Chapter 1
Digital TV & Compression Standards
Introduction
• 1980s
  • The possibility of broadcasting fully digital pictures to the consumer was considered.
  • It was not technically or economically realistic.
  • It was still seen as a faraway prospect.
• The main reason for this was the very high bit-rate required for the transmission of digitized 525- or 625-line live video pictures.
• Another reason was that it seemed more urgent and important to improve the quality of the TV picture.
• As a result, huge amounts of money were invested by Japan, Europe, and the U.S.A. in order to develop Improved Definition Television (IDTV) and High Definition Television (HDTV) systems.
• The beginning of the 1990s saw very quick development of efficient compression algorithms.
• The first digital TV broadcasting for the consumer started in mid-1994 with the “DirecTV” project.
• By the end of 1991 the Europeans had stopped working on analog HDTV (HD-MAC) and created the European Launching Group (ELG) in order to define and standardize a digital TV broadcasting system.
• This gave birth in 1993 to the DVB project (Digital Video Broadcasting), based on the “main profile at main level” (MP@ML) of the international MPEG-2 compression standard.
• Three variants of DVB for the various transmission media appeared between 1994 and 1996:
  • Satellite (DVB-S)
  • Cable (DVB-C)
  • Terrestrial (DVB-T)
Introduction
• Other forms of digital television appeared in the last years of the twentieth century, including:
  • Digital cable television
  • Digital terrestrial television
  • Digital television via the telephone subscriber line (IPTV over ADSL)
• The rapid price decrease of large flat-screen TVs (LCD or Plasma) with a resolution compatible with HDTV requirements now makes them accessible to a relatively large public.
• This price drop coincides with the availability of more effective compression standards such as MPEG-4.
Monochrome TV basics
• All current TV standards in use today are derived from the “black and white” TV standards started in the 1940s and 1950s, which defined their framework.
• The first attempts at electromechanical television began at the end of the 1920s, using the Nipkow disk for analysis and reproduction of the scene to be televised.
• This had a definition of 30 lines and 12.5 images per second, and a video bandwidth of less than 10 kHz.
• These pictures were broadcast on an ordinary AM/MW or LW transmitter.
• The resolution soon improved, reaching 240 lines by around 1935.
• Progressive scanning was used, which means that all lines of the picture were scanned sequentially in one frame.
• The cathode ray tube (CRT) started to be used for display at the receiver side.
• Refresh rates of 25 or 30 pictures/s were used.
• The bandwidth required was of the order of 1 MHz, which implied the use of VHF frequencies (40–50 MHz) for transmission.
• The spatial resolution of these first TV pictures was still insufficient, and they were affected by a very annoying flicker because their refresh rate was too low.
Monochrome TV basics
• Interlaced scanning was invented in 1927.
• This ingenious method consists of scanning a first field made of the odd lines of the frame and then a second field made of the even lines.
• This allowed the picture refresh rate to be doubled to 50 or 60 Hz for a given vertical resolution, without increasing the broadcast bandwidth required.
• The need to maintain a link between picture rate and mains frequency led to different standards in different parts of the world.
• However, these systems shared the following common features:
  • A unique composite picture signal combining video, blanking, and synchronization information, known as the Video Baseband Signal (VBS)
  • Interlaced scanning (order 2), recognized as the best compromise between flicker and the required bandwidth
Monochrome TV basics
• The following characteristics were finally chosen in 1941 for the U.S. monochrome system, which later became NTSC (National Television Standards Committee) when it was upgraded to color in 1952:
  • 525 lines, interlaced scanning (two fields of 262.5 lines)
  • Field frequency, 60 Hz
  • Line frequency, 15,750 Hz
  • Video bandwidth, 4.2 MHz
  • Negative video modulation
  • FM sound with carrier 4.5 MHz above the picture carrier
• After World War II, from 1949 onward, most European countries (except France and Great Britain) adopted the German GERBER standard, also known as CCIR, with the following characteristics:
  • 625 lines, interlaced scanning (two fields of 312.5 lines)
  • Field frequency, 50 Hz
  • Line frequency, 15,625 Hz (50 × 312.5)
Monochrome TV to Color TV
• Extensive preliminary studies on color perception and a great deal of ingenuity were required to define these standards.
• The triple red/green/blue (RGB) signals delivered by the TV camera had to be transformed into a signal which could be displayed without major artifacts on existing black and white receivers, and which could be transmitted in the bandwidth of an existing TV channel.
• The basic idea was to transform, by a linear combination, the three (R, G, B) signals into three other equivalent components, Y, Cb, Cr (or Y, U, V):
  • Y = 0.587G + 0.299R + 0.114B is called the luminance signal
  • Cb = 0.564(B − Y) or U = 0.493(B − Y) is called the blue chrominance or color difference
  • Cr = 0.713(R − Y) or V = 0.877(R − Y) is called the red chrominance or color difference
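The linear combination above can be sketched in a few lines. This is a minimal illustration using the Y, U, V coefficients quoted in the notes, with R, G, B normalized to the 0..1 range (the function name is ours, not part of any standard API):

```python
# Luma/chroma conversion using the coefficients from the notes:
# Y = 0.299 R + 0.587 G + 0.114 B; U = 0.493 (B - Y); V = 0.877 (R - Y).
# R, G, B are assumed normalized to the 0..1 range.

def rgb_to_yuv(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance
    u = 0.493 * (b - y)                     # blue color difference
    v = 0.877 * (r - y)                     # red color difference
    return y, u, v

# White (1, 1, 1) carries full luminance and (almost exactly) no chrominance:
print(rgb_to_yuv(1.0, 1.0, 1.0))
```

Note how a pure gray input (R = G = B) always yields zero chrominance, which is exactly what lets a black and white receiver ignore U and V.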
• A subcarrier was added within the video spectrum, modulated by the reduced-bandwidth chrominance signals.
• This gave rise to a new composite signal called the CVBS (Color Video Baseband Signal).
• This carrier had to be placed in the highest part of the video spectrum, and had to stay within the limits of the existing video bandwidth in order not to disturb the luminance and the black and white receivers.
• The differences between the three main standards, NTSC, PAL, and SECAM, mainly concern the way of modulating this subcarrier and its frequency.
Monochrome TV to Color TV
• MAC (Multiplexed Analog Components)
• During the 1980s, a common standard for satellite broadcasts was defined with the goal of improving picture and sound quality, by eliminating the drawbacks of composite systems and by using digital sound.
• This resulted in the MAC systems, with a compatible extension toward HDTV (called HD-MAC).
• D2-MAC is the most well-known of these hybrid systems.
• It replaces the frequency division multiplexing of luminance, chrominance, and sound of the composite standards by a time division multiplexing; in other words, bandwidth sharing is replaced by time sharing.
• It is designed to be compatible with normal (4:3) and wide-screen (16:9) formats.
• Video professionals at television studios have been using various digital formats, such as D1 (components) and D2 (composite), for recording and editing video signals.
• Standardized conditions for the digitization and interfacing of digital video signals have been established by the CCIR, for ease of interoperability of equipment and international program exchange.
• The main advantages of these digital formats are that:
  • They allow multiple copies to be made without any degradation in quality
  • They allow the creation of special effects not otherwise possible in analog format
  • They simplify editing of all kinds
  • They permit international exchange independent of the broadcast standard used for diffusion (NTSC, PAL, SECAM, D2-MAC, MPEG)
• According to the Shannon sampling theorem, to digitize an analog signal of bandwidth Fmax it is necessary to sample its value with a sampling frequency Fs of at least twice the maximum frequency of this signal, in order to keep its integrity and to avoid aliasing.
• In effect, sampling a signal creates two parasitic sidebands above and below the sampling frequency, which range from Fs − Fmax to Fs + Fmax, as well as around harmonics of the sampling frequency.
• In order to avoid mixing the input signal spectrum with the lower part of the first parasitic sideband, the necessary and sufficient condition is that Fs − Fmax > Fmax, which is realized if Fs > 2Fmax.
• This means that the signal to be digitized needs to be efficiently filtered before sampling, in order to limit its bandwidth to less than Fs/2 (an anti-aliasing filter).
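The Fs > 2Fmax condition above is easy to check numerically. A minimal sketch, assuming a studio luminance bandwidth of about 5.75 MHz (a commonly quoted figure for CCIR-601 luminance, not stated in the notes themselves):

```python
# Checking the sampling condition Fs > 2*Fmax from the Shannon theorem:
# the lower edge (Fs - Fmax) of the first parasitic sideband must stay
# above the signal band Fmax.

def min_sampling_rate(f_max_hz):
    """Lower bound on Fs: any Fs strictly above 2*Fmax avoids aliasing."""
    return 2 * f_max_hz

def aliasing_free(fs_hz, f_max_hz):
    # Equivalent to fs_hz > 2 * f_max_hz.
    return fs_hz - f_max_hz > f_max_hz

# CCIR-601 luminance sampling at 13.5 MHz comfortably exceeds 2 x 5.75 MHz:
print(aliasing_free(13.5e6, 5.75e6))   # → True
print(aliasing_free(10.0e6, 5.75e6))   # → False
```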
Digitization of video signals
Digitization Formats
The different formats – example illustration 4:4:4
• Let us look at a small part of a frame – just a 4×4 matrix of pixels in an image.
• Here every pixel has a Y value, a Cb value, and a Cr value: in each row of 4 pixels there are 4 values of Y, 4 values of U, and 4 values of V. We would say that this is a 4:4:4 image.
• The different formats – example illustration 4:2:2
Here is what that 4×4 matrix would look like for 4:2:2:
• The different formats – example illustration 4:2:0
Here is what that 4×4 matrix would look like for 4:2:0:
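The counting behind the three illustrations above can be sketched as follows. This is a minimal sketch (the function name is ours): it returns how many Cb (equivalently Cr) samples survive for a block of pixels under each scheme, 4:2:2 halving the chroma horizontally and 4:2:0 halving it both horizontally and vertically:

```python
# Chroma sample count for an rows x cols block of pixels under the three
# subsampling formats illustrated above.

def chroma_samples(rows, cols, fmt):
    if fmt == "4:4:4":
        return rows * cols                 # one Cb (and one Cr) per pixel
    if fmt == "4:2:2":
        return rows * (cols // 2)          # shared by horizontal pixel pairs
    if fmt == "4:2:0":
        return (rows // 2) * (cols // 2)   # shared by 2x2 pixel groups
    raise ValueError(fmt)

for fmt in ("4:4:4", "4:2:2", "4:2:0"):
    print(fmt, chroma_samples(4, 4, fmt))  # 16, 8, 4 Cb values for 16 Y values
```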
Digitization of video signals
Digitization Formats
The 4:2:2 format
• In this scheme, the digitization parameters for video signals in component form have been defined, based on a Y, Cb, Cr signal in 4:2:2 format (four Y samples for every two Cb samples and two Cr samples), with 8 bits per sample (and a provision for extension to 10 bits per sample).
• The sampling frequency is 13.5 MHz for luminance and 6.75 MHz for each chrominance, regardless of the standard of the input signal.
• This results in 720 active video samples per line for luminance, and 360 active samples per line for each chrominance.
• The position of the chrominance samples corresponds to the odd samples of the luminance.
• Since the chrominance signals Cr and Cb are simultaneously available at every line, the vertical resolution for chrominance is the same as for luminance.
Digitization Formats
The 4:2:2 format
The position of samples in the 4:2:2 format
The 4:2:2 format
• The total bit-rate resulting from this process is 13.5 × 8 + 2 × 6.75 × 8 = 216 Mb/s.
• With a quantization of 10 bits, the bit-rate becomes 270 Mb/s!
• If the redundancy involved in digitizing the inactive part of the video signal (horizontal and vertical blanking periods) is taken into account, the useful bit-rate goes down to 166 Mb/s with 8 bits per sample.
• These horizontal and vertical blanking periods can be filled with other useful data, such as digital sound, sync, and other information.
• Standardized electrical interfacing conditions for 4:2:2 digitized signals have been defined by the CCIR. This is the format used for interfacing D1 digital video recorders, and it is therefore sometimes referred to as the D1 format.
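The bit-rate figures above follow directly from the sampling frequencies; a quick sketch to reproduce them (helper name is ours):

```python
# Reproducing the 4:2:2 bit-rate figures: total = Fs_Y*bits + 2*Fs_C*bits,
# with sampling frequencies in MHz so results come out in Mb/s.

def bitrate_422_mbps(bits_per_sample):
    fs_y, fs_c = 13.5, 6.75  # MHz, luminance and each chrominance
    return fs_y * bits_per_sample + 2 * fs_c * bits_per_sample

print(bitrate_422_mbps(8))   # → 216.0
print(bitrate_422_mbps(10))  # → 270.0
```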
4:2:0, SIF, CIF, and QCIF formats
These are formats that have been defined for applications that are less demanding in terms of resolution, with a view to bit-rate reduction. They are byproducts of the 4:2:2 format.
The 4:2:0 format
• This format is obtained from the 4:2:2 format by using the same chroma samples for two successive lines, in order to reduce the amount of memory required in processing circuitry, while at the same time giving a vertical chrominance resolution of the same order as the horizontal one.
• Luminance and horizontal chrominance resolutions are the same as for the 4:2:2 format, and thus:
  • luminance resolution: 720×576 (625 lines)
  • chrominance resolution: 360×288 (625 lines)
• The 4:2:0 format is of special importance, as it is the input format used for D2-MAC and MPEG-2 (MP@ML) coding.
Digitization Formats
The 4:2:0 format
The position of samples in the 4:2:0 format
The SIF (source intermediate format)
• This format is obtained by halving the spatial resolution in both directions, as well as the temporal resolution.
• The temporal resolution becomes 25 Hz for 625-line systems and 29.97 Hz for 525-line systems.
• Depending on the originating standard, the spatial resolutions are then:
  • luminance resolution: 360×288 (625 lines) or 360×240 (525 lines)
  • chrominance resolution: 180×144 (625 lines) or 180×120 (525 lines)
• Horizontal resolution is obtained by filtering and subsampling the input signal.
• The reduction in temporal and vertical resolution is normally obtained by interpolating samples of the odd and even fields, but is sometimes achieved by simply dropping one field out of two.
• The resolution obtained is the basis for MPEG-1 encoding, resulting in a so-called “VHS-like” quality in terms of resolution.
Digitization Formats
The SIF format
The position of samples in the SIF format
The CIF (common intermediate format)
• This is a compromise between the European and American SIF formats:
  • Spatial resolution is taken from the 625-line SIF (360×288) and temporal resolution from the 525-line SIF (29.97 Hz).
• It is the basis used for video conferencing.

The QCIF (quarter CIF)
• Once again, this reduces the spatial resolution by 4 (2 in each direction) and the temporal resolution by 2 or 4 (15 or 7.5 Hz).
• It is the input format used for ISDN video telephony using the H.261 compression algorithm.
High definition formats: 720p, 1080i
• Two standard picture formats have been retained for broadcast HDTV applications, each existing in two variants (59.94 Hz or 50 Hz, depending on the continent).
The 720p format:
• This is a progressive scan format with a horizontal resolution of 1280 pixels and a vertical resolution of 720 lines (or pixels).
The 1080i format:
• This interlaced format offers a horizontal resolution of 1920 pixels and a vertical resolution of 1080 lines (or pixels).
• For these two formats, the horizontal and vertical resolutions are equivalent (square pixels), because they have the same ratio as the aspect ratio of the picture (16:9).
• Digitization of these two HD formats in 4:4:4 form gives bit-rates on the order of 1 to 1.5 Gb/s depending on the frame rate and resolution, which is 4 to 5 times greater than for standard-definition interlaced video.
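The 1 to 1.5 Gb/s figure above can be sanity-checked with a raw pixel-rate calculation. A sketch, assuming 8 bits per sample so that 4:4:4 means 24 bits per pixel (the helper name is ours):

```python
# Order-of-magnitude check of the raw HD 4:4:4 bit-rates:
# 4:4:4 with 8 bits per sample = 3 x 8 = 24 bits per pixel.

def raw_bitrate_gbps(width, height, frames_per_s, bits_per_pixel=24):
    return width * height * frames_per_s * bits_per_pixel / 1e9

print(round(raw_bitrate_gbps(1280, 720, 50), 2))   # 720p50
print(round(raw_bitrate_gbps(1920, 1080, 25), 2))  # 1080i50 = 25 full frames/s
```

Both results land in the quoted 1 to 1.5 Gb/s range; the 59.94 Hz / 29.97 Hz variants come out slightly higher but still below 1.5 Gb/s.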
Source coding and Channel coding
• A bit-rate of the order of 200 Mb/s, as required by the 4:2:2 format, cannot be used for direct broadcast to the end user, as it would occupy a bandwidth of the order of 40 MHz for cable, or 135 MHz for satellite.
• This would represent 5–6 times the bandwidth required for the transmission of an analog PAL or SECAM signal.
• The essential conditions required to start digital television broadcast services were the development of technically and economically viable solutions to problems which can be classified into two main categories:
  • Source coding. This is the technical term for compression. It encompasses all the video and audio compression techniques used to reduce as much as possible the bit-rate (in terms of Mb/s required to transmit moving pictures of a given resolution and the associated sound), with the lowest perceptible degradation in quality.
  • Channel coding. This encompasses the error-correction techniques, associated with the most spectrally efficient modulation techniques (in terms of Mb/s per MHz), taking into account the available bandwidth and the foreseeable imperfections of the transmission channel.
Source coding and Channel coding
• Sequence of operations on the transmitter side
Source coding: Compression of video
• Compression algorithms are an absolute must in order to be able to broadcast TV pictures in a channel of acceptable width.
• Together with the necessary error-correction algorithms and modulation schemes, they can reduce this bit-rate to about 30 Mb/s, for a spectrum bandwidth comparable to conventional analog broadcasts.
• The principles and various steps of video compression which allow these bit-rates to be achieved, and which are currently used in the various video compression standards, are explained here.
• These compression methods use general data compression algorithms applicable to any kind of data.
• They exploit the following features:
  • spatial redundancy (correlation of neighboring points within an image)
  • specificities of visual perception (lack of sensitivity of the eye to fine details) for fixed pictures
  • very high temporal redundancy between successive images in the case of moving pictures
General data compression principles
Run length coding (RLC)
• This is the simplest method of compression.
• The general idea behind this method is to replace consecutive repeating occurrences of a symbol by one occurrence of the symbol, followed by the number of occurrences.
• When an information source emits successive message elements which can deliver relatively long series of identical elements (for example, DCT coefficients after thresholding and quantization), it is advantageous to transmit the code of this element and the number of successive occurrences, rather than to repeat the code of the element.
• This gives a variable compression factor (the longer the series, the bigger the compression factor).
• This type of coding, which does not lose any information, is defined as reversible.
• This method is commonly employed for file compression related to disk storage or transmission by computers (zip, etc.). It is also the method used in fax machines.
Run length coding (RLC)
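The symbol-plus-count idea above can be sketched as a tiny codec. A minimal, illustrative implementation (function names are ours), encoding each run as a (symbol, count) pair and decoding it back losslessly:

```python
# A minimal run-length codec: each run of a repeated symbol is replaced by
# one (symbol, count) pair. The coding is reversible (lossless).

def rlc_encode(data):
    runs = []
    for sym in data:
        if runs and runs[-1][0] == sym:
            runs[-1] = (sym, runs[-1][1] + 1)   # extend the current run
        else:
            runs.append((sym, 1))               # start a new run
    return runs

def rlc_decode(runs):
    return [sym for sym, count in runs for _ in range(count)]

coeffs = [5, 0, 0, 0, 0, 3, 0, 0]       # e.g. DCT output after quantization
encoded = rlc_encode(coeffs)
print(encoded)                           # → [(5, 1), (0, 4), (3, 1), (0, 2)]
assert rlc_decode(encoded) == coeffs     # reversible
```

The longer the runs of zeros, the fewer pairs are emitted, which is exactly the variable compression factor described above.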
Variable length coding (VLC) or entropy coding
• The probability of occurrence of an element generated by a source and coded on n bits is often not the same for all elements among the 2^n different possibilities.
• This bit-rate reduction method, used to reduce the bit-rate required to transmit the sequences generated by the source, is based on this fact.
• The principle behind this method is to encode the most frequent elements with less than n bits and the less frequent elements with more bits, resulting in an average length that is less than the fixed length of n bits.
• To perform this in real time, it is necessary to know the probability of occurrence of each possible element generated by the source.
• This method can be applied to text compression, and to video compression by means of the DCT.
Variable length coding (VLC) or entropy coding
• The information quantity Q transmitted by an element is equal to the logarithm (base 2) of the inverse of its probability of appearance p:

  Q = log2(1/p) = −log2(p)

• The sum of the information quantities of all elements generated by a source, each multiplied by its probability of appearance, is called the entropy H of the source:

  H = Σi pi · log2(1/pi)

• The most well-known method for variable length coding is Huffman coding, which assumes that the probability of occurrence of each element is known.
• Huffman coding assigns shorter codes to symbols that occur more frequently, and longer codes to those that occur less frequently.
Huffman Coding
• It works in the following way:
  • Each element is classified in order of decreasing probability, forming an “occurrence table”.
  • The two elements of lowest probability are grouped into one element, the probability of which is the sum of the two probabilities. Bit 0 is attributed to the element of lowest probability and 1 to the other element.
  • The new element is then grouped in the same way with the element having the next highest probability.
  • 0 and 1 are attributed in the same way as above, and the process is continued until all the elements have been coded (sum of the probabilities of the last two elements = 100%).
Huffman Coding – Example
• Assume that the frequency of the characters is as shown in the table below.
• With a fixed-length code: #bits = 3 bits × (17 + 12 + 12 + 27 + 32) = 300 bits.
Huffman Coding
• A character’s code is found by starting at the root and following the branches that lead to that character.
• The code itself is the bit value of each branch on the path, taken in sequence.
• With the Huffman codes: #bits = 2 bits × (17 + 27 + 32) + 3 bits × (12 + 12) = 224 bits.
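The grouping procedure above can be sketched with a priority queue. This is a minimal implementation that only tracks code lengths (enough to verify the 224-bit total); the symbol names s1..s5 are placeholders, since the notes' character table itself is not reproduced here:

```python
import heapq

# Huffman code-length construction for the frequency table used above
# (five symbols with frequencies 17, 12, 12, 27, 32).

def huffman_code_lengths(freqs):
    # heap entries: (frequency, tiebreak counter, {symbol: depth})
    heap = [(f, i, {sym: 0}) for i, (sym, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)   # lowest probability
        f2, _, c2 = heapq.heappop(heap)   # next lowest
        merged = {s: d + 1 for s, d in {**c1, **c2}.items()}  # one level deeper
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]                     # symbol -> code length in bits

freqs = {"s1": 17, "s2": 12, "s3": 12, "s4": 27, "s5": 32}
lengths = huffman_code_lengths(freqs)
total = sum(freqs[s] * lengths[s] for s in freqs)
print(sorted(lengths.values()), total)    # → [2, 2, 2, 3, 3] 224
```

The two symbols with frequency 12 end up with 3-bit codes and the rest with 2-bit codes, matching the 224-bit count computed above (versus 300 bits fixed-length).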
Huffman Coding
• This type of coding is reversible, since it does not lose information.
• This method can be applied to video signals as a complement to other methods which generate elements of non-uniform probability (e.g., DCT followed by quantization).
• The overall gain can then be much more significant.
Image compression:
The discrete cosine transform (DCT)
Image compression:
The discrete cosine transform (DCT)
• The discrete cosine transform is a particular case of the Fourier transform applied to discrete (sampled) signals, which decomposes a periodic signal into a series of sine and cosine harmonic functions.
• The signal can then be represented by a series of coefficients of each of these functions.
• Under certain conditions, the DCT decomposes the signal into only one series of harmonic cosine functions in phase with the signal, which reduces by half the number of coefficients necessary to describe the signal compared to a Fourier transform.
• In the case of pictures, the original signal is a sampled two-dimensional signal.
• Hence the DCT will also be two-dimensional, in the horizontal and vertical directions.
• This will transform the luminance (or chrominance) discrete values of a block of N×N pixels into another block (or matrix) of N×N coefficients representing the amplitude of each of the cosine harmonic functions.
The discrete cosine transform (DCT)
• In the transformed block, coefficients on the horizontal axis represent increasing horizontal frequencies from left to right.
• On the vertical axis, they represent increasing vertical frequencies from top to bottom.
• The first coefficient in the top left corner (coordinates 0, 0) represents null horizontal and vertical frequencies, and is therefore called the DC coefficient.
• The bottom right coefficient represents the highest spatial frequency component in the two directions.
• In order to reduce the complexity of the circuitry and the processing time required, the block size chosen is generally 8×8 pixels, to which the DCT is applied.
The discrete cosine transform (DCT)
• Depending on the number of details contained in the original block, the high-frequency coefficients will be bigger or smaller, but generally the amplitude decreases rather quickly with frequency, due to the smaller energy of high spatial frequencies in most “natural” images.
• The DCT thus has the remarkable property of concentrating the energy of the block on a relatively low number of coefficients situated in the top left corner of the matrix.
• In addition, these coefficients are decorrelated from each other.
• These two properties will be used to advantage in the next steps of the compression process.
• Up to this point, the process is reversible.
The discrete cosine transform (DCT)
• Taking advantage of the reduced sensitivity of human vision to high spatial frequencies, it is possible, without perceptible degradation of the picture quality, to eliminate the values below a certain threshold that is a function of the frequency.
• The eliminated values are replaced by 0; this operation is known as thresholding.
• This part of the process is obviously not reversible, as some data are thrown away.
• The remaining coefficients are then quantized with an accuracy decreasing with increasing spatial frequency, which once again reduces the quantity of information to be transmitted.
• Here again the process is not reversible, but it has little effect on the perceived picture quality.
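The two-dimensional DCT described above can be sketched directly from its definition. This is a deliberately slow, textbook implementation of the 8×8 DCT-II with orthonormal scaling (a real encoder would use a fast factorized version); the ramp test block is our own example:

```python
import math

# Direct 8x8 two-dimensional DCT-II, to illustrate energy compaction:
# for a smooth block, most energy lands in the top-left coefficients.

N = 8

def dct2(block):
    def c(k):  # orthonormal normalization factor
        return math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            out[u][v] = c(u) * c(v) * s
    return out

# A gentle horizontal luminance ramp (each row is 0, 1, ..., 7).
ramp = [[float(y) for y in range(N)] for _ in range(N)]
coeffs = dct2(ramp)
print(round(coeffs[0][0], 1))   # DC coefficient = N x block mean → 28.0
```

Since the ramp has no vertical variation, all coefficients with a non-zero vertical frequency index come out (numerically) zero, illustrating how the energy concentrates in the first row of the matrix.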
Compression of fixed pictures
JPEG Standard
• In the 1980s, work started on reducing the amount of information required for coding fixed pictures.
• The primary objective was a significant reduction in the size of graphics files and photographs, with a view to storing or transmitting them.
• In 1990, the ISO (International Organization for Standardization) created an international working group called JPEG (Joint Photographic Experts Group), which had the task of elaborating an international compression standard for fixed pictures of various resolutions in Y, Cr, Cb, or RGB format.
• The resulting international standard (widely known as JPEG) was published in 1993 under the reference ISO/IEC 10918, and it can be considered a toolbox for fixed-picture compression.
1. Decomposition of the picture into blocks.
The picture, generally in Y, Cb, Cr format, is divided into elementary blocks of 8×8 pixels. For a 4:2:2 CCIR-601 picture this means a total of 6480 luminance (Y) blocks and 3240 blocks for each of the Cr and Cb components. Each block is made up of 64 numbers.
2. Discrete cosine transform (DCT).
Applied to each block, the DCT generates a new 8×8 matrix made up of the coefficients of increasing spatial frequency as one moves away from the origin (the upper left corner), which contains the DC component representing the average luminance or chrominance of the block. The value of these coefficients decreases quickly when going away from the origin of the matrix, and the final values are generally a series of small numbers or even zeros.
3. Thresholding and quantization.
The eye does not distinguish fine details below a certain luminance level. The coefficients below a predetermined threshold are therefore zeroed out. The remaining ones are quantized with decreasing accuracy as the frequency increases. The DC coefficient is DPCM coded (differential pulse code modulation) relative to the DC coefficient of the previous block, which allows more accurate coding with a given number of bits; the eye, although not very sensitive to fine details, is very sensitive to small luminance differences over uniform areas.
4. Zig-zag scan.
Except for the DC coefficient, which is treated separately, the 63 AC coefficients are read using a zig-zag scan, in order to transform the matrix into a flow of data best suited for the next coding steps (RLC/VLC).
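The zig-zag reading order above can be sketched by walking the anti-diagonals of the matrix and alternating direction, so low-frequency coefficients come first and the long runs of zeros group at the end. A minimal sketch (function names are ours):

```python
# Zig-zag scan order for an n x n coefficient matrix, as used by JPEG:
# walk the anti-diagonals (u + v = d), alternating direction on each one.

def zigzag_order(n=8):
    order = []
    for d in range(2 * n - 1):
        cells = [(d - v, v) for v in range(max(0, d - n + 1), min(d, n - 1) + 1)]
        order.extend(reversed(cells) if d % 2 else cells)
    return order

def zigzag_scan(matrix):
    return [matrix[u][v] for u, v in zigzag_order(len(matrix))]

# First few positions: DC, then (0,1), (1,0), (2,0), (1,1), (0,2), ...
print(zigzag_order()[:6])  # → [(0, 0), (0, 1), (1, 0), (2, 0), (1, 1), (0, 2)]
```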
JPEG Standard
5. Run length coding (RLC).
In order to make the best possible use of the long series of zeros produced by the quantization and the zig-zag scan, the number of occurrences of zero is coded, followed by the next non-zero value, which reduces the amount of information to transmit.

6. Variable length coding (Huffman coding).
This last step uses a conversion table in order to encode the most frequently occurring values with a short length, and the less frequent values with a longer one. These last two steps (RLC and VLC) alone ensure a compression factor of the order of 2 to 3.
JPEG Standard
The simplified principle of a JPEG decoder can be seen in the block diagram in the figure below.
Compression of moving pictures
MPEG Standard
• The need to store and reproduce moving pictures and the associated sound in digital format for multimedia applications on various platforms was realized by the year 1990.
• This requirement led the ISO to form an expert group along the same lines as JPEG, with members coming from the numerous branches involved, namely:
  • Computer industry
  • Telecom
  • Consumer electronics
  • Semiconductors
  • Broadcasters
  • Universities, etc.
• This group was called the MPEG (Moving Picture Experts Group).
MPEG Standards
MPEG-1
The first outcome of its work was the International Standard ISO/IEC 11172, widely known as MPEG-1. The main goal was to allow the storage on CD-ROM or CD-I (single speed at that time) of live video and stereo sound, which implied a maximum bit-rate of 1.5 Mb/s.
The MPEG-1 standard consists of three distinct parts:
• MPEG-1 system (ISO/IEC 11172-1): defines the MPEG-1 multiplex structure
• MPEG-1 video (ISO/IEC 11172-2): defines MPEG-1 video coding
• MPEG-1 audio (ISO/IEC 11172-3): defines MPEG-1 audio coding
The picture quality of MPEG-1 was not suitable for broadcast applications, since it did not take into account the coding of interlaced pictures, and its resolution was limited (SIF).
MPEG-2
The MPEG-2 standard consists of three distinct parts:
• MPEG-2 system (ISO/IEC 13818-1): defines the MPEG-2 streams
• MPEG-2 video (ISO/IEC 13818-2): defines MPEG-2 video coding
• MPEG-2 audio (ISO/IEC 13818-3): defines MPEG-2 audio coding

MPEG-2 is the source coding standard used by the European DVB (Digital Video Broadcasting) TV broadcasting system, which is the result of the work started in 1991 by the ELG (European Launching Group), later to become the DVB committee.
Principles of MPEG-2 video coding
• The video coding uses the same principles as lossy JPEG, to which new techniques are added to form the MPEG “toolbox”.
• These techniques exploit the strong correlation between successive pictures.
• This considerably reduces the amount of information required to transmit or store the pictures.
• These techniques are known as “prediction with movement compensation”.
• They consist of deducing most of the pictures of a sequence from preceding and even subsequent pictures, with a minimum of additional information representing the differences between pictures.
• This function is carried out by a movement estimator in the MPEG encoder.
• This is the most complex function, and it greatly determines the encoder’s performance.
• However, this function is not required in the decoder.
Principles of MPEG-2 video coding
The practical realization of the encoder depends on the following factors:
• Speed
• Compression rate
• Complexity
• Picture quality
• Synchronization time
• Random access time to a sequence within an acceptable limit
• Since moving pictures are concerned, decoding has to be accomplished in real time
la
They are coded without reference to other pictures, in a very similar
a
th
manner to JPEG. This means that they contain all the information necessary for
er
their reconstruction by the decoder. They are the essential entry point for access
Ch
to a video sequence. The compression rate of I pictures is relatively low, and is
comparable to a JPEG coded picture of a similar resolution.
g
in
• P (predicted) pictures
They are coded from the preceding I or P picture, using the techniques of motion-compensated prediction. P pictures can be used as the basis for next predicted pictures, but since motion compensation is not perfect, it is not possible to extend the number of P pictures between two I pictures a great deal. The compression rate of P pictures is significantly higher than that for I pictures.
• N is the distance between two successive I pictures, defining a "group of pictures" (GOP).
Principles of MPEG-2 video coding
• Re-ordering of the pictures
The sequence of the pictures after decoding has to be in the same order as the original sequence before encoding.
• MPEG hierarchy of layers within a video sequence
Each of these six layers has specific function(s) in the MPEG process. Starting from the top level, the successive layers are:
• Sequence: Highest layer, which defines basic video parameters
• Group of Pictures: Layer for random access to the sequence
• Picture: The elementary display unit layer (I, P or B)
• Slice: Layer for intra-frame addressing and (re)synchronization
• Macroblock: Layer used for movement estimation and compensation (16×16 pixels)
• Block: Layer where the DCT takes place (the picture is divided into blocks of 8×8 pixels)
Principles of MPEG-2 video coding
Prediction, motion estimation, and compensation
• P and B pictures are “predicted” from preceding and/or subsequent pictures.
• Consider a sequence of moving pictures.
• There are differences between corresponding zones of consecutive pictures due to the presence of moving objects, so there is no obvious correlation between these two zones.
• Motion estimation consists of defining a motion vector which ensures the correlation between an arrival zone on the second picture and a departure zone on the first picture, using a technique known as block matching.
• This is done at the macroblock level (16×16 pixels) by moving a macroblock of the current picture within a small search window from the previous picture, and comparing it to all possible macroblocks of the window in order to find the one that matches it best.
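Block matching can be sketched as an exhaustive search minimizing the sum of absolute differences (SAD). This is an illustrative toy, not from the source: it uses 8×8 frames and 4×4 blocks instead of 16×16 macroblocks, and all names are hypothetical.

```python
# Toy block-matching motion estimation: slide a block of the current
# picture over a small search window in the previous picture and keep
# the offset with the lowest sum of absolute differences (SAD).

def sad(block_a, block_b):
    return sum(abs(a - b) for ra, rb in zip(block_a, block_b)
                          for a, b in zip(ra, rb))

def best_motion_vector(prev, cur, bx, by, bs=4, search=2):
    """Find (dx, dy) minimizing SAD within +/- search pixels."""
    cur_blk = [row[bx:bx + bs] for row in cur[by:by + bs]]
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            x, y = bx + dx, by + dy
            if 0 <= x <= len(prev[0]) - bs and 0 <= y <= len(prev) - bs:
                ref_blk = [row[x:x + bs] for row in prev[y:y + bs]]
                cost = sad(cur_blk, ref_blk)
                if best is None or cost < best[0]:
                    best = (cost, (dx, dy))
    return best[1]

# A bright 4x4 square moved one pixel to the right between frames:
prev = [[0] * 8 for _ in range(8)]
cur = [[0] * 8 for _ in range(8)]
for y in range(2, 6):
    for x in range(2, 6):
        prev[y][x] = 200
        cur[y][x + 1] = 200

print(best_motion_vector(prev, cur, 3, 2))  # (-1, 0): block came from one pixel left
```

A real encoder searches a much larger window and often uses fast (non-exhaustive) search strategies, since this step dominates encoder complexity.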
• Due to the temporal distance between these pictures (three pictures in the case of M=3, N=12), block matching will generally not be perfect.
• Hence motion vectors can be of relatively high amplitude.
• The difference (or prediction error) between the actual block to be encoded and the matching block is calculated and encoded in a similar way to the blocks of the I pictures (DCT, quantization, RLC/VLC). This process is called motion compensation.
Principles of MPEG-2 video coding
Formation of motion vector
• For B pictures, motion vectors are calculated by temporal interpolation of the vectors of the next P picture in three different ways: forward, backward, and bi-directional.
• The result giving the smallest prediction error is retained.
• The error is encoded in the same way as for P pictures.
• Only the macroblocks differing from the picture(s) used for prediction need to be encoded.
• This substantially reduces the amount of information required for coding B and P pictures.
• The size of the moving objects is generally bigger than a macroblock, hence there is a strong correlation between the motion vectors of consecutive blocks.
• A differential coding method (DPCM) is used to encode the vectors, thus reducing the number of bits required.
• When the prediction does not give a usable result (for instance, in the case of a moving camera where completely new zones appear in the picture), the corresponding parts of the picture are "intra" coded, in the same way as for I pictures.
Principles of MPEG-2 video coding
Output bit-rate control
• The bit stream generated by the video (or audio) encoder is called the elementary stream (ES).
• The bit-rate of this elementary stream must generally be kept constant.
• In order to control the bit-rate at the output of the encoder, the encoder output is equipped with a FIFO buffer.
• The amount of information held in this buffer is monitored and maintained within predetermined limits by means of a feedback loop modifying the quantization parameters.
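The feedback loop can be sketched as follows (a minimal illustration, not from the source: the function and thresholds are hypothetical, and a quantizer scale in the MPEG-2 range 1 to 31 is assumed):

```python
# Sketch of output bit-rate control: when the FIFO buffer fills, the
# quantizer step is raised (coarser coding, fewer bits); when it
# drains, the step is lowered (finer coding, more bits), keeping the
# elementary stream bit-rate near constant.

def update_quantizer(q, buffer_fill, high=0.8, low=0.2):
    """Adjust the quantization scale from FIFO occupancy (0.0-1.0)."""
    if buffer_fill > high:      # buffer nearly full: coarser quantization
        q = min(q + 2, 31)
    elif buffer_fill < low:     # buffer nearly empty: finer quantization
        q = max(q - 2, 1)
    return q

q = 16
for fill in (0.85, 0.9, 0.5, 0.1):
    q = update_quantizer(q, fill)
    print(fill, q)
```

Real rate-control schemes are more elaborate (they budget bits per GOP and per picture type), but the buffer-driven feedback principle is the same.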
Block diagram of MPEG
Principles of MPEG-2 video coding
MPEG-2 levels and profiles
The MPEG-2 standard has:
• Four levels which define the resolution of the picture
• Five profiles which determine the set of compression tools used
The four levels are:
• Low level corresponds to the SIF in MPEG-1 (up to 360×288)
• Main level corresponds to 4:2:0 format (up to 720×576)
• High-1440 level aimed at HDTV (up to 1440×1152)
• High level aimed at wide-screen HDTV (up to 1920×1152)
The five profiles are:
• The simple profile defined in order to simplify the encoder and the decoder.
It does not use bi-directional prediction (B pictures).
• The main profile that uses all three image types (I, P, B) but leads to a more complex encoder and decoder.
• The scalable profile that is intended for future use. This profile includes the spatially scalable profile and the SNR scalable profile.
• The high profile that is intended for HDTV broadcast applications in 4:2:0 or
4:2:2 format.
The MPEG-4 video compression standard (H.264/AVC)
• The standard is also referred to as H.264/AVC (Advanced Video
Coding).
• This standard is the result of the efforts of a Joint Video Team (JVT) that includes members of the Video Coding Experts Group (VCEG) and of the Moving Pictures Experts Group (MPEG), which is the reason for its double naming.
• This standard provides a considerable increase in compression efficiency over MPEG-2 of at least 50% and is important in HDTV applications.
The MPEG-4 video compression standard (H.264/AVC)
MPEG-4 video encoding creates the video stream from the video input stored in YUV format.
• The first step is to decide the frame type (I or P).
• In the case of P frames the motion vectors for macroblocks must be computed. This step requires the presence of the reference frame, which needs to be created in the same way as in the decoder.
• The motion compensation step calculates the differences between the corresponding region of the reference frame and the current frame.
• The DCT step transforms these values into the frequency domain, and the quantization step cuts the less significant coefficients by reducing the storage precision. This cutting step determines the compression of the encoding of the texture data.
• The quantized values are scanned in a special (zig-zag) order and then coded with variable length coding (VLC).
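The zig-zag scan mentioned above can be sketched as follows (shown on a 4×4 block for brevity, whereas MPEG uses 8×8 blocks; the function name and the sample block are illustrative, not from the source):

```python
# Sketch of the zig-zag scan applied after quantization: scanning the
# block by anti-diagonals groups the low-frequency coefficients first
# and leaves long runs of zeros at the end, which run-length and
# variable-length coding then exploit.

def zigzag(block):
    n = len(block)
    out = []
    for s in range(2 * n - 1):            # each anti-diagonal
        coords = [(i, s - i) for i in range(n) if 0 <= s - i < n]
        if s % 2 == 0:
            coords.reverse()              # go up the even diagonals
        out.extend(block[i][j] for i, j in coords)
    return out

block = [[9, 8, 5, 0],
         [7, 6, 0, 0],
         [4, 0, 0, 0],
         [0, 0, 0, 0]]
print(zigzag(block))  # [9, 8, 7, 4, 6, 5, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
```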
• MPEG-4 Simple Profile decoding consists of bit stream parsing, Variable Length Decoding (VLD), inverse DC and AC prediction, inverse scanning, Inverse Quantization (IQ), Inverse Discrete Cosine Transformation (IDCT), Motion Compensation (MC) and Video Object Plane (VOP) reconstruction.
• The video stream is parsed into motion and texture sub-streams for macroblocks.
• The motion sub-stream describes the motion vector of the macroblock, and the decoding process copies the corresponding portion of the reference frame to the actual frame.
• The texture sub-stream holds the data that describes the macroblock.
Chapter 2
Modulation
Processes after compression
• Packetization
• Scrambling
• Channel Coding
• Modulation
• Up-conversion
• Amplification
• Transmission
Processes after compression
Packetization
• The bit stream generated by the video (or audio) encoder is called the elementary stream (ES).
• The elementary streams are the constituents of the so-called compression layer.
• Each elementary stream carries access units (AU), which are the coded representations of presentation units (PU), i.e., decoded pictures or audio frames.
• These bit streams, as well as other streams carrying private data, have to be combined in an organized manner. This requires:
• packetization and combination of multiple streams into one single bit stream,
• addition of time stamps on elementary streams for synchronization at playback,
• initialization and management of the buffers required to decode the elementary streams.
• Each elementary stream is cut into packets to form a packetized elementary stream (PES); a packet starts with a packet header followed by the packet data (payload).
• The length of the MPEG-2 transport packet has been fixed at 188 bytes for the transmission of TV programs via satellite, cable, or terrestrial transmitters following the European DVB standard.
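The fixed 188-byte packet structure can be made concrete with a small parser. This is a sketch based on the MPEG-2 Systems header layout (sync byte 0x47, 13-bit PID, 4-bit continuity counter); the sample packet bytes are invented for illustration.

```python
# Parsing the 4-byte header of a 188-byte MPEG-2 transport packet
# (field layout per ISO/IEC 13818-1).

def parse_ts_header(packet):
    assert len(packet) == 188 and packet[0] == 0x47, "bad sync byte"
    b1, b2, b3 = packet[1], packet[2], packet[3]
    return {
        "transport_error": bool(b1 & 0x80),
        "payload_unit_start": bool(b1 & 0x40),
        "pid": ((b1 & 0x1F) << 8) | b2,          # 13-bit packet identifier
        "scrambling": (b3 >> 6) & 0x03,          # 00 = not scrambled
        "adaptation_field": (b3 >> 4) & 0x03,
        "continuity_counter": b3 & 0x0F,
    }

pkt = bytes([0x47, 0x40, 0x11, 0x17]) + bytes(184)  # PID 0x0011, CC 7
print(parse_ts_header(pkt))
```

The demultiplexer in the receiver uses exactly these header fields (chiefly the PID) to route packets to the right decoder.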
Processes after compression
Scrambling
• The next step after packetization is encryption of the video stream, which is called scrambling. This is carried out using an algorithm known as the Common Scrambling Algorithm (CSA). It is done to ensure that piracy can be resisted for an appropriate length of time. Each operator wants to guard their own system for commercial and security reasons. The scrambling algorithm is designed to resist attacks from hackers for as long as possible and consists of a cipher with two layers.
Processes after compression
Channel Coding
• Once the source coding operations have been performed, a transport stream made of fixed-length byte packets is available for transmission to the end users via a radio frequency link (satellite, cable, terrestrial network).
• These transmission channels are not error-free, but rather error-prone due to many disturbances which can combine with the useful signal.
• A digital TV signal, once almost all its redundancy has been removed, requires a very low bit error rate (BER) for good performance.
• It is therefore necessary to take preventive measures before modulation in order to allow detection and, as far as possible, correction in the receiver of most errors introduced by the physical transmission channel.
• These measures are grouped under the terms forward error correction (FEC) or channel coding.
Processes after compression
Channel Coding
• The figure below illustrates the successive steps of the forward error correction encoding process in the DVB standard.
Ch
g
in
er
ne
gi
En
of
ge
lle
Co
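To make one step of the channel-coding chain concrete, here is a hedged sketch (function names are mine, not from the source) of the energy dispersal stage used by DVB: data bytes are XORed with a pseudo-random binary sequence generated by the polynomial 1 + x^14 + x^15 with initialization word 100101010000000; applying the same XOR again at the receiver restores the data.

```python
# Sketch of DVB energy dispersal (randomization): XOR the data with a
# PRBS so that long runs of identical bits are broken up before
# modulation. Descrambling is the same XOR with the same sequence.

def prbs_bytes(count, init="100101010000000"):
    reg = [int(b) for b in init]              # 15-bit shift register
    out = []
    for _ in range(count):
        byte = 0
        for _ in range(8):
            fb = reg[13] ^ reg[14]            # taps for 1 + x^14 + x^15
            byte = (byte << 1) | fb
            reg = [fb] + reg[:-1]
        out.append(byte)
    return bytes(out)

data = b"DVB transport packet payload"
seq = prbs_bytes(len(data))
scrambled = bytes(d ^ s for d, s in zip(data, seq))
restored = bytes(d ^ s for d, s in zip(scrambled, seq))
print(restored == data)  # True: descrambling is the same XOR
```

The other FEC stages (Reed-Solomon outer coding, interleaving, convolutional inner coding) follow this step in the DVB chain.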
Modulation
• Once the source coding operations and the channel coding have been carried out, we have a data stream ready to be used for modulation of a carrier for transmission to the end users.
• Depending on the medium (satellite, cable, terrestrial network), the bandwidth available for transmission depends on technical and administrative considerations.
• Hence the modulation techniques adopted have to be different for the three media:
• For satellite reception, the carrier-to-noise ratio (CNR) can be very small (10 dB or less), but the signal does not suffer from echoes.
• For cable reception, the SNR is quite high (generally more than 30 dB), but the signal can suffer from echoes.
• In simple digital modulation methods, the carrier is directly modulated by the bit stream representing the information to be transmitted, either in amplitude or in frequency.
• However, the low spectral efficiency of these modulations makes them inappropriate for the transmission of high bit-rates on channels with a bandwidth which is as small as possible.
• In order to increase the spectral efficiency of the modulation process, different kinds of quadrature amplitude modulation (QAM) are used.
lle
la
a
th
er
Ch
g
in
er
ne
gi
En
of
ge
lle
Co
Quadrature Amplitude Modulation
• Input symbols coded on n bits are converted into two signals I (in-phase) and Q (quadrature), each coded on n/2 bits, corresponding to 2^(n/2) states for each of the two signals.
• After digital-to-analog conversion (DAC), the I signal modulates an output of the local oscillator and the Q signal modulates another output in quadrature with the first (out of phase by π/2).
• The result of this process can be represented as a constellation of points in the I-Q space, which represents the various values that I and Q can take.
• The table below gives the main characteristics and denomination of some quadrature modulation schemes.
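As a hypothetical illustration of how n input bits become a constellation point, the sketch below maps 4-bit symbols to a 16-QAM constellation: the two MSBs select the I level and the two LSBs the Q level, each axis having 2^(n/2) = 4 levels. The Gray-coded level table is an assumption, not taken from the source.

```python
# Mapping 4-bit symbols (n = 4) to I/Q levels for 16-QAM.

LEVELS = {0b00: -3, 0b01: -1, 0b11: +1, 0b10: +3}   # Gray-coded axis levels

def map_16qam(symbol):
    """symbol: integer 0..15 -> (I, Q) constellation point."""
    i_bits = (symbol >> 2) & 0b11                   # two MSBs select I
    q_bits = symbol & 0b11                          # two LSBs select Q
    return LEVELS[i_bits], LEVELS[q_bits]

constellation = {s: map_16qam(s) for s in range(16)}
print(constellation[0b0111])  # (-1, 1)
```

Gray coding of adjacent levels means that a small noise-induced decision error usually corrupts only one bit of the symbol.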
Quadrature Amplitude Modulation
• The constellation of QPSK or 4-QAM is shown below. It represents the situation at the
output of the modulator.
Quadrature Amplitude Modulation
• The constellation of 64-QAM is shown below. It represents the situation at the output of the modulator.
Modulation Characteristics for cable
and satellite digital TV broadcasting
(DVB-C and DVB-S)
• A number of theoretical studies and practical tests have been performed for cable as well as for satellite to make the best possible choice of modulation.
• It is observed that, in terms of the theoretical bit error rate (BER) as a function of the signal-to-noise ratio (SNR) in ideal conditions, QPSK has an advantage over 64-QAM of up to 12 dB.
• Taking into account the signal-to-noise ratio obtained on the receiving side, 2 bits/symbol (QPSK modulation) has been found to be the practical maximum, and therefore the best spectral efficiency, for satellite transmissions.
• In the case of cable, the signal-to-noise ratio is much higher, and a 64-QAM modulation (6 bits/symbol), roughly three times more efficient in spectral terms, can be used.
• The effect of noise at the output of a QPSK receiver for satellite reception is more pronounced.
• In the case of cable reception of a 64-QAM signal with a low signal-to-noise ratio, above a certain noise level the demodulator will be unable to distinguish, with certainty, a point in the constellation from its neighbors.
Modulation Characteristics for cable
and satellite digital TV broadcasting
(DVB-C and DVB-S)
• Cable reception is generally characterized by a high signal-to-noise ratio, but it suffers from echoes.
• The effect that these echoes produce on the constellation is that of a very high inter-symbol interference, where the different points cannot be distinguished.
• This problem necessitates the use of an appropriate echo equalizer in the receiver for the recovery of an almost perfect constellation.
• Other types of transmission disturbances, or imperfections in the transmitting and receiving devices, reinforce the need for error correction systems.
• Another problem which the receiver has to cope with in the case of digital QAM modulations is that it does not have an absolute phase reference to demodulate the constellation. For this reason, there is a phase ambiguity of 90°, which will prevent the receiver from synchronizing itself as long as the demodulation phase is incorrect.
Modulation Characteristics for cable and satellite digital TV broadcasting
(DVB-C and DVB-S)
• In the case of the QAM modulation used for cable, this problem is avoided by using a differential modulation for the two MSBs of the symbol: the state of the MSBs of I and Q corresponds to a phase change and not to an absolute phase state, which allows the receiver to operate in any of the four possible lock conditions.
• In the case of the (non-differential) QPSK modulation used for satellite, the "out of synchronization" information can be used to modify (up to three times) the phase relation between the recovered carrier and the received signal until synchronization is obtained.
• Taking into account all the above-mentioned considerations, the main characteristics retained for DVB compliant digital TV transmissions are detailed in the table below.
Modulation Characteristics of DVB- C and DVB-S
QAM (DVB-C)                                        | QPSK (DVB-S)
BER for a given SNR is lower                       | BER for a given SNR is higher, by up to 12 dB
6 bits/symbol gives the best spectral efficiency   | 2 bits/symbol gives the best spectral efficiency
Effect of noise is lower in comparison             | Effect of noise is more pronounced
There is a phase ambiguity of 90° which will       | "Out of synchronization" information can be used to
prevent the receiver from synchronizing itself     | modify the phase relation between the recovered
                                                   | carrier and the received signal
OFDM Modulation for terrestrial digital TV (DVB-T)
• The European Digital Video Broadcasting – Terrestrial system (DVB-T) defined by the DVB is based on 2K/8K OFDM.
• OFDM stands for Orthogonal Frequency Division Multiplexing.
• The principle behind this type of modulation involves the distribution of a high-rate bit stream over a high number of orthogonal carriers, each carrying a low bit-rate.
• Its main advantage is its excellent behavior in the case of multipath reception, which is common in terrestrial mobile or portable reception.
• In this case the delay of the indirect paths becomes much smaller than the symbol period.
• N carriers, with a spacing of 1/Ts between two consecutive carriers, are modulated with symbols of duration Ts. This determines the condition of orthogonality between the carriers. N is a very high number.
• OFDM is a digital multi-carrier modulation scheme that uses multiple subcarriers within the same single channel. Rather than transmit a high-rate stream of data with a single subcarrier, OFDM makes use of a large number of closely spaced orthogonal subcarriers that are transmitted in parallel. Each subcarrier is modulated with a conventional digital modulation scheme (such as QPSK, 16-QAM, etc.) at a low symbol rate. However, the combination of many subcarriers enables data rates similar to conventional single-carrier modulation schemes within equivalent bandwidths.
OFDM Modulation for terrestrial digital TV (DVB-T)
• The subcarriers are orthogonal to each other. Each transmitted subcarrier results in a sinc-function spectrum with side lobes that produce overlapping spectra between subcarriers. At orthogonal frequencies, the individual peaks of subcarriers all line up with the nulls of the other subcarriers. Orthogonality prevents interference between overlapping carriers.
• The relationship between the frequency f0 of the lowest carrier and that of carrier k (0 ≤ k ≤ N − 1), fk, is given by fk = f0 + k/Ts.
• The frequency spectrum of such a set of carriers shows secondary parasitic lobes of width 1/Ts.
• In real terrestrial receiving conditions, signals coming from multiple indirect paths added to the direct path mean that the condition of orthogonality between carriers is no longer fulfilled, which results in inter-symbol interference.
• This problem can be circumvented by adding a guard interval of duration Δ before the symbol period Ts in order to obtain a new symbol period Ts′ = Δ + Ts. This guard interval is generally equal to or less than Ts/4; it is occupied by a copy of the end of the useful symbol.
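The guard-interval mechanism can be sketched as follows. This is a simplified illustration, not DVB-T code: a naive inverse DFT stands in for the IFFT, and the tiny 8-carrier symbol is invented.

```python
# Sketch of OFDM symbol construction with a guard interval: modulate N
# carriers with an inverse DFT, then prepend a copy of the last N/4
# samples (guard interval <= Ts/4) so that echoes shorter than the
# guard do not destroy orthogonality.

import cmath

def inverse_dft(freq_bins):
    n = len(freq_bins)
    return [sum(freq_bins[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)) / n for t in range(n)]

def ofdm_symbol(qam_points, guard_div=4):
    time_samples = inverse_dft(qam_points)
    guard = time_samples[-len(qam_points) // guard_div:]  # copy of symbol's end
    return guard + time_samples                           # guard + useful symbol

carriers = [1, -1, 1j, -1j, 1, 1, -1, -1]     # 8 carriers, e.g. QPSK points
sym = ofdm_symbol(carriers)
print(len(sym))  # 10 samples: 8 useful + 2 guard
```

Because the guard is a cyclic copy of the symbol's tail, the receiver can place its FFT window anywhere inside the guard-extended symbol without inter-symbol interference from short echoes.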
Residential digital terrestrial television (DVB-T)
• For terrestrial digital television, the DVB technical module has adopted OFDM modulation with 2048 carriers (2K) or 8192 carriers (8K).
• This spectrum can be considered as virtually rectangular, as shown below.
Residential digital terrestrial television (DVB-T)
• The DVB-T standard has been designed to be compatible with all existing TV channelization systems in the world (channel widths of 6, 7 or 8 MHz). It is, however, mainly used with channel widths of 7 or 8 MHz.
• It has also been designed in order to be able to coexist with analog television transmissions, with good protection against adjacent channel interference (ACI) and against interference from within the channel itself (co-channel interference or CCI).
• After channel coding, data follow a complex two-step process of interleaving, which consists of:
• a first interleaving process at the "bit" level, which forms matrices of 126 words of 2, 4, or 6 bits;
• a grouping of these matrices by 12 (2K mode) or 48 (8K mode) in order to form OFDM symbols of 1512×2 bits up to 6048×6 bits.
• The carriers are then modulated by an inverse Fast Fourier Transform (IFFT) on 2K or 8K points.
Block diagram of the complete DVB Transmission/reception
Description of the DVB Transmission/reception
Transmission side:
The steps in this section, which delivers a multiplex of MPEG-2 programs on one RF channel, are as described below:
1. The MPEG-2 encoder encodes the video and audio signals of the programs that are to be broadcast. These video and audio PES (packetized elementary streams) are delivered to the multiplexer (about four to eight programs per RF channel).
2. These PESs are used by the multiplexer to form 188-byte transport packets, which are then scrambled, together with service data such as the electronic program guide (EPG).
3. Error correction methods like R-S coding or convolutional coding increase the packet length. Formatting of the data (symbol mapping) is performed, followed by filtering and D/A conversion to produce the I and Q analog signals.
4. An IF carrier (typically 70 MHz) is modulated by the I and Q signals in QPSK (satellite) or QAM (cable).
5. The modulated signal is up-converted, amplified and transmitted; this completes the transmission side.
Receiving side:
6. Down-conversion from VHF/UHF to IF takes place in a single step for cable and terrestrial media. For satellite reception, an initial down-conversion takes place in the antenna head, followed by a second down-conversion after RF channel selection to an intermediate frequency (usually 480 MHz) at the input of the set-top box or Integrated Receiver Decoder (IRD).
7. The IF is coherently demodulated to deliver the I and Q analog signals.
8. The I and Q signals are A/D converted, filtered and reformatted, after which the forward error correction recovers the transport packets of 188 bytes.
9. The PES corresponding to the program chosen by the user is selected by the demultiplexer; it may previously have been descrambled with the help of the user key (smart card).
10. The MPEG-2 decoder reconstructs the video and audio of the desired program.
Block Diagram of
the DVB satellite receiver
Description of the DVB satellite receiver
• The signals received from the satellite are amplified and down-converted into the 950–2150 MHz range by the low noise converter (LNC) located at the antenna focus, and applied to the IRD's input.
• Tuner: The tuner, or front end, controlled by an I2C bus, selects the required RF channel in the 950–2150 MHz range. This is converted into a 480 MHz IF, and the required selection is achieved by means of a surface acoustic wave (SAW) filter. The signal is amplified and coherently demodulated along the 0° and 90° axes to obtain the analog I and Q signals. Recovery of the carrier phase required for demodulation is carried out in combination with the next stages of the receiver, which lock the phase and the frequency of the local oscillator by means of a carrier recovery loop.
• Analog-to-digital converter (ADC): The I and Q signals are digitized and then filtered in a manner complementary to the filtering applied on the transmitter side to the I and Q signals.
• Forward error correction (FEC): The FEC block achieves the complete error correction in the following order: Viterbi decoding of the convolutional code, de-interleaving, Reed–Solomon decoding and energy dispersal de-randomizing. The output data are 188-byte transport packets, which are generally delivered in parallel form.
• The three blocks (ADC, QPSK, and FEC) now form a single integrated circuit (single-chip satellite channel decoder).
• Descrambler (DESCR): The DESCR block receives the transport packets and communicates with the main processor by a parallel bus to allow quick data transfers. It selects and descrambles the packets of the required program under control of the conditional access device.
Description of the DVB satellite receiver
• Demultiplexer (DEMUX): The DEMUX selects, by means of programmable filters, the PES packets corresponding to the program chosen by the user.
• MPEG decoder: The audio and video PES outputs from the demultiplexer are applied to the input of the MPEG block, which generally combines the MPEG audio and video functions and the graphics controller functions required, among other things, for the electronic program guide (EPG).
• Digital video encoder (DENC): Video signals reconstructed by the MPEG-2 decoder are applied to a digital video encoder (DENC), which ensures their conversion into analog RGB + sync for the best possible quality of display on a TV set.
• Digital-to-analog converter (DAC): Decompressed digital audio signals in I2S format are converted into analog form by the audio DAC.
• Microprocessor (μP): The main processor controls the whole receiver; it receives commands from the remote control, and manages the smart card reader(s) and the communication interfaces which are generally available.
• The four blocks (DEMUX, MPEG, DENC, and μP) are now integrated in a single IC, often referred to as a single-chip source decoder.
• Smart card readers: The conditional access device generally includes one or two of these (one might be for a banking card, for instance).
• Communication ports: The IRD can communicate with the external world by means of one or more communication ports, e.g., a USB port, as well as a telephone line interface (via an integrated modem).
Block Diagram of the DVB cable receiver
This receiver differs from its satellite counterpart only in respect of the tuning, demodulation, and channel
decoding parts which are suited to the cable frequency bands (UHF/VHF) and the QAM modulation used.
Description of the DVB cable receiver
• Tuner: The tuner selects the desired channel in the cable band, converts it into an IF frequency centered on 36.15 MHz, and achieves the appropriate selection by means of an SAW filter. After amplification, the IF signal is down-converted to the symbol frequency by means of a mixer oscillator whose frequency and phase are controlled by a carrier recovery loop from the following QAM demodulator.
• Analog-to-digital converter (ADC): The transposed QAM signal is applied to an analog-to-digital converter (ADC) working at a sampling frequency generally equal to four times the symbol frequency. The sampling frequency is locked to the symbol frequency by means of a clock recovery loop coming from the next block (QAM).
• QAM demodulator: This is the key element in the channel decoding process. It demodulates the QAM signal and, after error correction, the output data are the 188-byte transport packets in parallel form.
• The three blocks (ADC, QAM, and FEC) now form a single integrated circuit (single-chip cable channel decoder).
• Other functions: The processor, conditional access, descrambling, demultiplexing, MPEG-2 audio/video decoding and all other "secondary" functions are, in principle, identical to those described for the satellite receiver with the same level of functionality.
Block Diagram of the DVB terrestrial TV receiver
The block diagram differs from its cable and satellite counterparts only by its front-end parts, which have to be suited to the UHF and VHF frequency bands already used by analog television, and to the COFDM modulation prescribed by the DVB-T standard.
Description of the DVB terrestrial TV receiver
• Tuner: The tuner and IF part are functionally similar to the cable parts, but they have to satisfy more stringent requirements regarding the phase noise of the tuner's PLL and also the selectivity. The tuner selects the desired channel in the VHF/UHF bands, transposes it into an intermediate frequency centered on 36.15 MHz and realizes the required selectivity by means of two SAW filters. The AGC-amplified IF signal is directly applied to a COFDM demodulator.
• ADC: The COFDM signal is applied to an analog-to-digital converter with a resolution of 9 or 10 bits, and digitized with a sampling frequency of 20 MHz.
• COFDM demodulation: This block is the main element of the demodulation process. I and Q signals are reconstituted from the digitized IF signal, and the OFDM signal is demodulated by means of a fast Fourier transform (FFT) on 2K or 8K points.
• Channel estimation and correction complete the demodulation.
• Channel decoding: This block performs frequency de-interleaving and demapping of the COFDM symbols. It is followed by error correction, which includes Viterbi decoding, Forney de-interleaving, RS decoding and energy dispersal removal. The output transport stream data is in the form of 188-byte transport packets in parallel format.
Digital TV over IP
• A new way of transmitting digital TV to subscribers became a commercial reality at the beginning of the twenty-first century: over the phone line by ADSL (Asymmetric Digital Subscriber Line), with a particular transport protocol known as IP (Internet Protocol).
• This new technology made it possible to use the phone line as a new way of transmitting TV broadcasts to consumers. It is called TV over IP (Television over Internet Protocol), or more simply IPTV (Internet Protocol Television). It is sometimes referred to as Broadband TV.
• At the subscriber level, TV over IP differs from other transmission media (terrestrial, cable, satellite) in that, due to the limited bandwidth of the telephone line, all accessible channels are not transmitted simultaneously to the subscriber; only one channel is carried at a time.
• Only one TV program (the one which the subscriber has selected) is transmitted at any given moment on the subscriber's line, the selection being performed at the DSLAM (Digital Subscriber Line Access Multiplexer).
• TV over IP also permits the delivery of true video on demand (VOD), since it is possible to send one specific video stream to only one subscriber at the time he or she has requested it, without occupying the whole network with this stream.
Chapter 3
Display technologies
Liquid Crystal Displays (LCD)
A Liquid Crystal Display is a thin flat panel display device used for
electronically displaying information such as text, images and moving
picture.
The material "liquid crystal" was discovered accidentally by the botanist Friedrich Reinitzer as early as 1888. However, commercially usable liquid crystals were not developed until the late 1960s.
Ch
Liquid Crystal Displays (LCDs) are used in digital devices like
• Computers and notebooks
g
in
• digital watches
er
• DVD and CD players
ne
• Optoelectronic displays
gi
• Flat panel displays
En
• They have taken a giant leap in the screen industry by replacing the use of Cathode Ray Tubes
of
(CRT) ge
• LCD’s have made displays thinner than CRT’s.
lle
• The power consumption is lesser it works on the basic principle of blocking light rather than
Co
dissipating.
• It is a passive device which does not produce any light of its own.
• It simply alters the light travelling through it.
• It works on the polarization property of light.
Merits of LCDs
Liquid Crystal Displays are advantageous due to the following factors:
• Smaller size: they occupy 60% less space than CRT displays.
• Low power consumption: they consume about half the power of CRTs and emit less heat.
• Lighter weight: around 70% lighter in weight than CRTs.
• No electromagnetic fields: they are not susceptible to e.m. fields and do not emit e.m. fields.
• Longer life: longer useful life than CRTs.
Basic structure
• A liquid crystal cell consists of a thin layer (about 10 μm) of a liquid crystal sandwiched between two glass sheets with transparent electrodes deposited on their inside faces. When both glass sheets are transparent, the cell is known as a transmittive type cell. When one glass is transparent and the other has a reflective coating, the cell is called a reflective type.
Principles of Liquid Crystal Displays
• Liquid crystals are liquid chemicals in a state that has properties in between those of conventional liquids and solid crystals.
• A liquid crystal may flow like a liquid, but its molecules may be oriented in a crystal-like way.
• Liquid crystal molecules can be aligned precisely when subjected to electric fields, much as metal filings line up in the field of a magnet.
• When properly aligned, the liquid crystals allow light to pass through.
Principles of Liquid Crystal Displays
• The orientation of molecules in the three types of matter is shown below.
• A solid has an orientational as well as a positional order for its molecules.
• An LC has an orientational order only.
• A liquid phase is isotropic, with neither positional nor orientational order.
Principles of Liquid Crystal Displays
Two types of liquid crystal materials important in display technology are
• Nematic phase liquid crystals and
• Smectic phase liquid crystals
The most popular liquid crystal structure is the nematic liquid crystal (NLC). When they are in a nematic phase, liquid crystals are a bit like a liquid: their molecules can move around and shuffle past one another, but they all point in broadly the same direction. The liquid is normally transparent, but if it is subjected to a strong electric field, ions move through it and disrupt the well-ordered crystal structure, causing the liquid to polarize and hence turn opaque. When the applied field is removed, the crystal structure re-forms and the material regains its transparency.
Principles of Liquid Crystal Displays
• Liquid crystals can adopt a twisted-up structure, and when we apply electricity to them, they straighten out again. This is the key to how LCD displays turn pixels on and off. The polarization property of light is used in an LCD screen to switch its colored pixels on or off. At the back of the screen there is a bright light that shines out towards the viewer. In front of this there are millions of pixels, each one made up of smaller areas called sub-pixels that are colored red, green, or blue.
• Each pixel has a polarizing glass filter behind it and another in front of it at 90 degrees, so on its own this pair of filters would block all light. In between the two polarizing filters there is a tiny twisted nematic liquid crystal that can be switched on or off electronically. With no voltage applied, the twisted crystal rotates the light passing through it by 90 degrees, allowing it through the second filter, so the pixel looks bright. When a voltage is applied, the crystal untwists, the light is no longer rotated and is blocked by the second filter, so the pixel looks dark. Each pixel is controlled by a separate transistor that can switch it on or off many times each second.
LCD Working
An LCD screen consists of a reflective mirror setup at the back. An electrode plane made of indium-tin oxide is kept on top, and a glass with a polarizing film is added on the bottom side. The entire area of the LCD has to be covered by a common electrode, and above it sits the liquid crystal substance. Next comes another piece of glass with an electrode in the shape of a rectangle on the bottom and, on top, another polarizing film. It must be noted that the two polarizing films are kept at right angles. When there is no current, light passing through the front of the LCD is reflected by the mirror and bounced back. When the electrode is connected to a battery, the current causes the liquid crystals between the common-plane electrode and the rectangle-shaped electrode to untwist, so light is blocked from passing through, and that particular rectangular area appears blank.
With no voltage applied across the pixel, the LC molecules twist to align with the rubbing of the glass plates. Light entering the first polarizer is twisted and can exit the second polarizer - pixel is ON.
With a voltage applied across the pixel, the LC molecules untwist to align with the electric field. Light entering the first polarizer cannot exit the second polarizer - pixel is OFF.
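The ON/OFF logic above can be checked numerically with Jones calculus (a standard optics formalism, not taken from these notes). This minimal pure-Python sketch treats the voltage-off cell as a 90-degree polarization rotator sitting between crossed polarizers:

```python
import math

def mat_vec(m, v):
    # Multiply a 2x2 Jones matrix by a 2-component field vector.
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

POL_X = [[1.0, 0.0], [0.0, 0.0]]   # rear polarizer, passes x-polarized light
POL_Y = [[0.0, 0.0], [0.0, 1.0]]   # front polarizer, crossed at 90 degrees

def rotator(theta):
    # Jones matrix rotating the polarization plane by theta radians.
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def pixel_transmission(voltage_on):
    """Fraction of backlight intensity leaving a twisted-nematic pixel.

    Voltage off: the 90-degree LC twist rotates the light -> pixel bright.
    Voltage on:  the LC untwists, no rotation -> crossed polarizer blocks it.
    """
    light = [1 / math.sqrt(2), 1 / math.sqrt(2)]   # 45-degree incident light
    twist = rotator(0.0 if voltage_on else math.pi / 2)
    out = mat_vec(POL_Y, mat_vec(twist, mat_vec(POL_X, light)))
    return out[0] ** 2 + out[1] ** 2               # transmitted intensity

print(pixel_transmission(False))  # ~0.5 : no voltage, pixel ON (bright)
print(pixel_transmission(True))   # 0.0  : voltage applied, pixel OFF (dark)
```

The factor of 0.5 in the bright state reflects the half of the unpolarized backlight absorbed by the rear polarizer, which is why real LCD backlights must be considerably brighter than the image they produce.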
Types of LCD
• PASSIVE MATRIX DISPLAY
Uses a grid of vertical and horizontal conductors made of Indium Tin Oxide (ITO) to create an image. There is no switching device at the pixel; pixels are addressed one at a time through the row and column matrix. Only used in low-resolution displays (such as watches and calculators). Slow response time and poor contrast.
• ACTIVE MATRIX DISPLAY
Based on Thin Film Transistor (TFT) technology. A switching element is present at each pixel, so individual pixels are isolated from each other. Thin Film Transistors are most commonly used. Each row line is activated in turn while the pixel data for that row is applied on the column lines.
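Active-matrix addressing can be pictured with a small toy model (illustrative only; the `refresh` helper and its names are hypothetical, not a real panel-driver interface): rows are enabled one at a time, and every pixel in the enabled row latches the value currently driven on its column line.

```python
# Toy row-scan model of active-matrix addressing: the per-pixel TFT acts
# as a switch, and the pixel's storage capacitor holds the latched value
# until the row is scanned again on the next frame.

def refresh(frame, panel):
    """Write one frame into the panel storage, one row per scan step."""
    rows, cols = len(frame), len(frame[0])
    for r in range(rows):               # activate row line r
        for c in range(cols):           # column drivers set all of row r at once
            panel[r][c] = frame[r][c]   # TFT latches the column value
    return panel

target = [[0, 1], [1, 0]]               # tiny 2x2 frame to display
panel = [[0, 0], [0, 0]]                # per-pixel storage (one capacitor each)
print(refresh(target, panel))           # [[0, 1], [1, 0]]
```

A passive matrix has no per-pixel latch, so a pixel is only driven during the instant its row and column lines intersect, which is why passive panels suffer from slow response and poor contrast.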
Plasma Displays
• A plasma display is one in which each pixel on the screen is illuminated by a tiny bit of plasma or charged gas, somewhat like a tiny neon light.
• Plasma displays are thinner than cathode ray tube (CRT) displays and brighter than liquid crystal displays (LCD).
• A plasma display panel (PDP) is a type of flat panel display common for large TV displays of 30 inches (76 cm) or larger.
• They are called "plasma" displays because the technology utilizes small cells containing electrically charged ionized gases; in essence, these chambers are miniature fluorescent lamps.
• In plasma display panels the light of each picture element is emitted from a plasma created by an electrical discharge in the gas of its cell.
Basic structure and working principle
• Millions of tiny cells, filled with gases like xenon and neon, are sandwiched between two plates of glass.
• Electrodes are placed inside the glass plates so that they are positioned in front of and behind each cell.
• The rear glass plate carries the address electrodes, positioned behind the cells.
• The front glass plate has the transparent display electrodes, which are surrounded on all sides by a magnesium oxide layer and a dielectric material; they are kept in front of the cells.
• Both sets of electrodes extend across the entire screen. The display electrodes are arranged in horizontal rows along the screen and the address electrodes are arranged in vertical columns.
• When a voltage is applied, the electrodes become charged and ionize the gas, producing a plasma; collisions between the ions and electrons result in the emission of photons. The electric current flowing through the gas in a cell creates a rapid flow of charged particles, which stimulates the gas atoms to release ultraviolet photons. The released ultraviolet photons strike the phosphor material coated on the inside wall of the cell. When an ultraviolet photon hits a phosphor atom, one of the phosphor's electrons jumps to a higher energy level and the atom heats up. When the electron falls back to its normal level, it releases energy in the form of a visible light photon.
• Each pixel has three coloured sub-pixels. When these are mixed in the right proportions, the correct colour is obtained.
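The UV-to-visible conversion in the last step can be checked with a one-line energy calculation. The 147 nm xenon resonance line and a 550 nm green output are textbook values assumed here, not figures from these notes:

```python
# Photon energy E = h*c / wavelength, expressed in electron-volts.
h = 6.626e-34    # Planck constant, J*s
c = 3.0e8        # speed of light, m/s
eV = 1.602e-19   # joules per electron-volt

def photon_energy_ev(wavelength_m):
    """Energy of a photon of the given wavelength, in eV."""
    return h * c / wavelength_m / eV

uv = photon_energy_ev(147e-9)      # xenon discharge UV line, about 8.4 eV
green = photon_energy_ev(550e-9)   # phosphor's visible output, about 2.3 eV
print(round(uv, 2), round(green, 2))
# The phosphor absorbs a high-energy UV photon and re-emits a lower-energy
# visible photon; the leftover energy ends up as heat, matching the
# "atom heats up" remark in the notes.
```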
Characteristics of Plasma Displays
• Plasma displays can be made up to large sizes, like 150 inches diagonal.
• Very low-luminance "dark-room" black level.
• Very high contrast.
• The plasma display panel itself is about 2.5 inches thick, which keeps the total set thickness under about 4 inches.
• For a 50 inch display, power consumption varies from about 50 W to 400 W depending on picture content, with brighter images drawing more power.
• Has a lifetime of almost 100,000 hours, after which the brightness of the display drops to half.
ADVANTAGES
• Picture quality - capable of producing deeper blacks, allowing for a superior contrast ratio.
• Wider viewing angles than those of LCD; images do not suffer from degradation at high angles like LCDs.
• Less visible motion blur.
• Very high refresh rates.
• Faster response time, contributing to superior performance when displaying fast-moving content.

DISADVANTAGES
• Use more electrical power, on average, than an LCD TV.
• Do not work well at high altitudes above 2 km due to pressure differences between the gases inside the screen and the air pressure at altitude.
• May cause a buzzing noise.
• For those who wish to listen to AM radio, or are amateur radio operators (hams) or shortwave listeners (SWL), the radio-frequency interference from a plasma display can be a problem.
With the necessary figures, explain the working principles of LED Displays.
Submit on 11.03.2019.