Television Broadcasting - For Engineering Assistants (Induction Course) - Fifth Edition - 2014
BROADCASTING
FOR
ENGINEERING ASSISTANTS
(INDUCTION COURSE)
No part of this publication may be reproduced in any form or published in
electronic form on a website without the written permission of this Academy.
1
FUNDAMENTALS OF
MONOCHROME AND COLOUR
TV SYSTEM
INTRODUCTION
A picture can be considered to be made up of a large number of dots. Each dot is a small
elementary area of variable light or shade, called a PICTURE ELEMENT. Together these
elements carry the visual image of the scene brightness. In a TV camera the scene is focused
on the photosensitive surface of the pick-up device and an optical image is formed. The
photoelectric properties of the pick-up device convert the optical image into an electric charge
image depending on the light and shade of the scene (picture elements). This information must
now be picked up and transmitted, and for this purpose scanning is employed. The electron
beam scans the image, element by element, line by line and then field by field in the time
domain, to provide signal variations in successive order, called the Colour Composite Video
Signal (CCVS).
OBJECTIVES
PICTURE FORMATION
In the case of a TV camera, the scene is focused on a photosensitive surface of pickup device
and an optical image is formed. The photoelectric properties of the pickup device convert the
optical image into an electric charge image depending on the light and shade of the scene
(picture elements). This electrical charge image is then scanned by an electron beam, from left
to right, line by line and from top to bottom, field by field, in successive order to provide the
signal variations contained in the original scene. The scanning is done both in horizontal and
vertical direction simultaneously. This scanned image is then transmitted in electrical form for
reproduction at the receiving end, where the electrical image is converted back to the original
optical image. The frame is divided into two fields. Odd lines are scanned first and then the even
lines. The odd and even lines are interlaced.
SCANNING SYSTEM
There are various standards for scanning pictures for generating video signals in electrical form.
The scanning of any picture for converting it to a video signal has to take place in horizontal and
vertical direction simultaneously. These scanning parameters may vary from a system to
system. India has adopted a system called PAL B of CCIR international standard, with the
horizontal and vertical scanning frequency of 15,625 Hertz and 50 Hz respectively. The frame is
divided into two fields. Odd lines are scanned first and then the even lines. The odd and even
lines are interlaced. Thus the frame is divided into 2 fields to reduce the flicker. The field rate is
50 Hertz. The frame rate is 25 Hertz (Field rate is the same as power supply frequency). In
progressive scanning there is no interlacing and each line is scanned in a sequence. This result
in 50 frames instead of 25 frames per second as compared to interlaced system. The bandwidth
(BW) requirement for transmission will also get doubled. Progressive scanning is very common
in computer monitor.
A higher number of TV lines means a larger bandwidth for video and hence requires a larger RF
channel width. With a larger RF channel width, the number of channels in the RF spectrum is
reduced. However, with more TV lines on the screen the clarity of the picture, i.e. the resolution,
improves. A compromise between quality and conservation of the RF spectrum led to the
selection of 625 lines in the CCIR PAL-B system. An odd number is preferred for ease of sync
pulse generator (SPG) circuitry, to enable interlacing of fields.
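The scanning arithmetic above can be sketched numerically; all values come from the text (625 lines, 25 frames per second, 2:1 interlace):

```python
# Sketch of the CCIR / PAL-B scanning arithmetic described above.
TOTAL_LINES = 625          # lines per frame (odd, to ease interlacing)
FRAME_RATE = 25            # frames per second
FIELDS_PER_FRAME = 2       # interlaced: odd lines first, then even lines

field_rate = FRAME_RATE * FIELDS_PER_FRAME    # 50 Hz, matches mains frequency
line_rate = TOTAL_LINES * FRAME_RATE          # horizontal scanning frequency
line_period_us = 1e6 / line_rate              # duration of one line

print(field_rate)       # 50
print(line_rate)        # 15625
print(line_period_us)   # 64.0
```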
[Fig. 1: Interlaced scanning — Field 1 and Field 2, showing horizontal retrace and vertical retrace]
RESOLUTION
As shown in Fig. 1(a), the scanning spot (beam) scans from left to right. The beam starts at the
left-hand edge of the screen and moves to the right-hand edge along a slightly slanted path, as
the beam is progressively pulled down by the vertical deflection (top-to-bottom scanning takes
place simultaneously). When the beam reaches the right-hand edge of the screen, its direction
is reversed and it returns at a faster rate to the left-hand edge (below the line just scanned).
Once again the beam direction is reversed and scanning of the next line starts. This goes on
until the beam completes 312½ lines, reaching the bottom of the screen. At this moment the
beam flies back to the top and starts scanning from a half line to complete the next 312½ lines
of the frame. The 625 lines of a frame are scanned at the rate of 25 frames per second, so the
number of lines scanned per second becomes 625 multiplied by 25, which is 15,625 lines per
second. So the horizontal frequency is 15,625 Hz. Also, to avoid distortions in the picture
whenever the beam changes direction, the beam is blanked out for a certain duration, called the
blanking period.
Since the number of lines scanned per second is 15,625, one line takes 64 microseconds. Out
of this period, the horizontal blanking period is 12 microseconds, so the active period of a line
becomes 64 − 12 = 52 microseconds.
Similarly, there is a vertical blanking period, and 25 TV lines are blanked out during this period
after every field. So in one frame 50 TV lines are blanked out, and the effective number of lines
is 575 (625 − 50).
The vertical resolution depends on the number of scanning lines and a resolution factor based
on human eye response known as the Kell factor. Assuming a reasonable value of the Kell
factor as 0.69, the vertical resolution becomes nearly 400 lines (575 × 0.69).
The capability of the system to resolve the maximum number of picture elements along the
scanning lines determines the horizontal resolution, i.e. how many alternate black and white
elements can be there in a line. We have seen that the vertical resolution is limited by the
number of active lines, which is 575. So, for the same resolution in both the vertical and
horizontal directions, the number of alternate black and white elements per line can be 575
multiplied by the Kell factor and the aspect ratio. Therefore, the number of alternate black and
white dots on a line can be 575 × 0.69 × 4/3, which is equal to 528. This means there are 264
cycles per line (528 divided by 2), occurring during the 52-microsecond active line period.
Hence the highest frequency is

f_highest = 264 / (52 × 10⁻⁶ s) ≈ 5 MHz

Therefore the horizontal resolution of the system is 5 MHz. One can also conclude that the
video bandwidth of the 625-line system is 5 MHz.
GREY SCALE
In a black and white (monochrome) TV system all colours appear as grey on a 10-step grey
scale chart. TV white corresponds to a reflectance of 60% and TV black to 3%, giving a contrast
ratio of 20:1 (film can handle more than 20:1, and the eye's capability is much greater).
In black and white TV the grey scale concept is applied to studio properties, costumes, scenery,
etc. while designing the sets. If the foreground and background are identical on the grey scale,
they may merge and the separation may not be noticed clearly on the screen.
BRIGHTNESS
Brightness is the average illumination of the reproduced image on the TV screen. The
brightness control in a TV set adjusts the voltage between the grid and cathode of the picture
tube (bias voltage).
CONTRAST
Contrast is the relative difference between the black and white parts of the reproduced picture.
In a TV set the contrast control adjusts the level of the video signal fed to the picture tube. The
brightness and contrast controls of a TV set are adjusted to reproduce as many grey scale
steps as faithfully as possible. Ultimately the adjustment depends on individual viewing habits.
VIEWING DISTANCE
The optimum viewing distance from a TV set is about 4 to 8 times the width of the TV screen,
with no direct light falling on the screen.
TV signals have varying frequency content. The lowest frequency is zero (when a white window
is transmitted for the entire active period of 52 microseconds, the frequency is zero). In CCIR
system B the highest frequency that can be transmitted is 5 MHz, even though the TV signal
can contain much higher frequency components. (In film the reproduction of high frequencies
extends well beyond 5 MHz and hence the clarity is superior to the TV system.) Long shots
carry much higher frequency components than mid close-ups. Hence in TV productions long
shots are kept to a minimum; in fact standard TV is a medium of close-ups and mid close-ups.
[Fig. 2: CCVS waveform for one TV line — peak white 1.0 V; video 0.7 V; sync 0.3 V below blanking; sync tip 4.7 μs; back porch 5.8 μs carrying the colour burst; H blanking 12 μs; active period 52 μs; H period 64 μs. Also shown: a sync separator splitting the CCVS signal into H pulses (via H.P.F.) and V pulses (via L.P.F.)]
A TV signal has a varying amplitude which depends on the amount of light incident on the
picture elements. Hence the video signal has an average value, i.e. a DC component,
corresponding to the average brightness of the scene. Examining Fig. 3, the blanking level is
the reference black level, which is also assigned the value zero. The DC component of a video
signal represents the scene brightness (mean value), and the AC component carries the
information regarding the scene contrast. For correct reproduction, both the AC and DC
components should be present at the input of the picture tube.
In Fig. 3 the DC levels vary in all three cases. When such signals pass through AC-coupled
amplifiers, the DC component is lost. If such signals are fed to the picture tube of a TV set, the
picture will not be faithful and the original scene brightness is lost. We know that the blanking
level is always the same irrespective of the average brightness of the scene. The method of
restoring the DC level (with respect to black level) is called DC restoration. DC restoration
circuits are also called clamping circuits. Clamping of the signal at the back porch level is called
back porch clamping: at the end of each TV line the level is always brought to the reference
black level before the next line starts. In this way the DC is restored. TV receivers as well as TV
monitors employ back porch clamping. Clamping at sync tip level is also possible.
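The back porch clamping idea can be sketched as follows: the level sampled during each line's back porch is subtracted from the whole line, so every line starts from the same reference black level. The function name and sample values below are hypothetical illustrations, not broadcast data:

```python
# Illustrative back porch clamping: each scan line is shifted so that the
# level measured during its back porch becomes the reference (blanking) level.
def clamp_line(samples, porch_slice, reference=0.0):
    """Shift a line so its back porch average sits at the reference level."""
    porch = samples[porch_slice]
    offset = sum(porch) / len(porch) - reference
    return [s - offset for s in samples]

# A line whose DC level has drifted by +0.2 after AC coupling
# (first three samples represent the back porch):
line = [0.2, 0.2, 0.2, 0.5, 0.9, 0.7]
clamped = clamp_line(line, slice(0, 3))
print(clamped)   # porch now sits at 0.0; video levels shifted down by 0.2
```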
[Fig. 3: Video waveforms of three lines with different average brightness, between −0.3 V and 1 V — the dotted line shows the DC level; with clamping the brightness is faithfully reproduced]
GAMMA CORRECTION
At two places, i.e. in the camera and in the TV receiver, conversion between optical and
electrical signals takes place, and we would like this transfer to be linear. For the complete
system, the transfer characteristic is the combination of the individual transfer characteristics of
the camera and of the picture tube of the TV receiver. This transfer characteristic is called
Gamma (γ).
If Gamma is less than unity, whites are compressed (crushed) and blacks are expanded
(stretched). If Gamma is more than unity, whites are stretched and blacks are crushed. A
Gamma of slightly more than unity is preferred, to compensate for the loss of contrast in the
system due to optical flare etc.
For example, if a scene with a contrast of 10:1 is transmitted through a system whose overall
Gamma is 2, the displayed image will have a contrast ratio of 100:1 (10 raised to the power of 2
is 100). This is too much, as there will be intolerable white stretching and black compression.
An overall Gamma of around 1.2 is preferred.
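The gamma transfer and the contrast-expansion example above can be sketched as follows (signal levels normalised to the range 0..1):

```python
# Gamma transfer sketch: output = input ** gamma.
# gamma < 1 stretches blacks / compresses whites; gamma > 1 does the opposite.
def apply_gamma(level, gamma):
    return level ** gamma

# Contrast expansion example from the text: a 10:1 scene contrast through an
# overall gamma of 2 becomes 100:1 (10 raised to the power of 2).
scene_contrast = 10
displayed_contrast = scene_contrast ** 2
print(displayed_contrast)   # 100
```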
[Fig. 4: Transfer characteristic — reproduced brightness (output) versus scene brightness (input), from black upwards, for γ > 1 and γ = 1]
Overall System Gamma: We know that the Gamma of the picture tube is around 2.8. If the
Gamma of the pick-up device is unity, then the Gamma correction required for an overall
system Gamma of 1.2 can be calculated as follows:

G = 1.2 / 2.8 ≈ 0.43

So the value of the required Gamma correction G becomes 0.43. This correction is applied by
video circuits before transmission, usually as part of the camera processing.
INITIAL TV STANDARD
NTSC (National Television System Committee)
Lines/Field: 525/59.94 (monochrome: 525/60)
Horizontal Frequency: 15.734 kHz (monochrome: 15.750 kHz)
Vertical Frequency: 59.94 Hz (monochrome: 60 Hz)
Colour Subcarrier Frequency: 3.579545 MHz
PAL (Phase Alternating Line)
SECAM (Séquentiel Couleur à Mémoire, or Sequential Colour with Memory)
SYSTEM: SECAM B,G,H | SECAM D,K,K1,L
Lines/Field: 625/50 | 625/50
Horizontal Frequency: 15.625 kHz | 15.625 kHz
Vertical Frequency: 50 Hz | 50 Hz
Video Bandwidth: 5.0 MHz | 6.0 MHz
Sound Carrier: 5.5 MHz | 6.5 MHz
All these systems are a compromise and many efforts have been made over the years to
address the shortcomings in each of the systems.
ACTIVITIES
Prepare a chart indicating the different TV standards used in different countries of the world.
RECAP
The image formed by the camera optics on the faceplate of the camera is made up of many
small picture elements. This spatial information, in the desired aspect ratio, needs to be
converted into electrical form and read out in time sequence as an electrical signal. This
process of reading the information is called scanning of the image. The scanning parameters,
timings and levels are specified by the system adopted by the user. The addition of
synchronizing pulses and the necessary corrections, such as Gamma correction and DC
restoration, facilitate the faithful reproduction of the signal on a TV receiver.
FURTHER READINGS
1. Dhake, A.M. (1999), Television and Video Engineering, Tata McGraw-Hill, New Delhi.
2. Gulati, R.R. (2002), Modern Television Practice: Principles, Technology and Servicing,
New Age, New Delhi.
3. Lakshmi, A. Veera (2010), Television and Video Engineering, Ane Books, New Delhi.
4. Jack, K. (2007), Video Demystified, Elsevier, New York.
5. Fischer, W. (2004), Digital Television, Springer, Berlin.
6. Gulati, R.R. (2001), Colour Television, New Age, New Delhi.
******
2
THE PAL COLOUR TELEVISION
SYSTEM
INTRODUCTION
Elementary physics says that it is possible to obtain any desired colour by mixing three primary
colours in suitable proportions. We therefore need to convert the optical information of these
three colours into their respective electrical signals, transmit them, and decode them in the TV
receiver, where they can be converted back to an optical image on the display device.
Phosphors for all three colours, i.e. R, G and B, are readily available to the manufacturers of
picture tubes. So the pickup from the camera and the output to the picture tube should consist
of three signals, i.e. R, G and B. It is only between the camera and the picture tube of the
receiver that we need a system to transmit this information.
OBJECTIVES
[Fig. 1: The R, G and B signals from the TV studio are processed by the TV transmitter and sent as a modulated RF carrier to the TV receiver]
COLOUR TELEVISION
Colour television was required by design to be compatible and reverse compatible with the
monochrome television system, which makes it slightly complicated. Compatibility means that
the colour transmission can also be received by B&W TV sets. This is achieved by sending Y
as the monochrome information along with the chroma signal. Y is obtained by mixing R, G and
B as per the well-known equation:

Y = 0.3 R + 0.59 G + 0.11 B

Reverse compatibility means that a black & white TV transmission can also be received on
colour TV sets. The above equation connects four variables, so if we transmit any three, say R,
B and Y, the fourth can always be generated electronically:

G = (Y − 0.3 R − 0.11 B) / 0.59

For a colour TV set, when we transmit a black and white signal, G becomes equal to 1.7 Y in
the case of all black and 1.0 in the case of all white. The net result is that black & white pictures
appear as green pictures on a colour TV screen, so reverse compatibility is not achieved.
To achieve reverse compatibility, we do not want any colour component to contain Y. We
therefore transmit Y, R−Y and B−Y instead of Y, R and B. (We do not select G−Y because it
has a much lower level than R−Y and B−Y and hence would need more amplification,
introducing noise into the system; it is better to derive G−Y electronically in the TV receiver.)
Colour difference signals fulfil both compatibility and reverse compatibility, because for white or
any shade of grey the colour difference signals become zero, whereas Y carries the entire
luminance information.
Note that while the R, G, B signals always have positive values, the R−Y, B−Y and G−Y
signals can be positive, negative or even zero.
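The luminance and colour difference relations above can be sketched directly with the standard coefficients:

```python
# Luminance and colour difference signals from R, G, B,
# using the coefficients quoted in the text (Y = 0.3R + 0.59G + 0.11B).
def y_from_rgb(r, g, b):
    return 0.30 * r + 0.59 * g + 0.11 * b

def colour_difference(r, g, b):
    y = y_from_rgb(r, g, b)
    return r - y, b - y   # R-Y and B-Y; G-Y is derived in the receiver

# For white or any shade of grey, R = G = B, so both differences vanish:
print(colour_difference(0.5, 0.5, 0.5))   # approximately (0.0, 0.0)
```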
We have already seen that compatibility calls for utilizing the same bandwidth as the existing
monochrome system; in the system we follow it is 5 MHz for video. Restricting the luminance
bandwidth would result in poor resolution. How, then, can the same 5 MHz bandwidth be
shared between Y and the colour difference signals R−Y and B−Y? A way has to be found to
accommodate the colour difference signals within the luminance bandwidth without causing
any significant interference, while the luminance signal is transmitted in the same way so that a
monochrome receiver can still receive it. Hence a method of interleaving was adopted to
achieve compatibility.
Spectral analysis of the luminance signal shows that, due to the periodic scanning, the various
frequency components occur at multiples of the line (H) frequency. The space between these
energy concentrations is utilized to accommodate the chrominance signal within the luminance
spectrum.
If an oscillator output is connected to the TV picture tube input, patterns appear on the screen.
When the oscillator frequency is a multiple of the TV line frequency (H frequency) the patterns
become stable. As the oscillator frequency rises through the luminance band the pattern
becomes finer, eventually becoming a series of dots. If the oscillator frequency is an odd
multiple of half the line frequency, the dot pattern of one field lies exactly between the dots
produced two fields later, and persistence of vision causes the dot pattern visibility to go to a
minimum. This led to the selection of a colour sub-carrier frequency, modulated by the colour
difference signals, close to the high-frequency edge of the band.
As we know, the video spectrum is occupied only at multiples of the line frequency and in their
vicinity, and exhibits gaps in between these frequency groups. If the chrominance spectrum is
placed in these gaps, the interference will be negligible.
From the above it is clear that the colour sub-carrier frequency should be near the upper edge
of the video band (i.e. as high as possible).
The 4.43361875 MHz frequency of the colour sub-carrier is the result of 283.75 sub-carrier
cycles per line (4.43361875 MHz × 64 μs) plus a 25 Hz offset to avoid interference. Since the
line frequency (number of lines per second) is 15,625 Hz, the colour sub-carrier frequency is
calculated as follows:

4.43361875 MHz = 283.75 × 15,625 Hz + 25 Hz
It is the process of modulating the sub-carrier that differs between the NTSC, PAL and SECAM systems.
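The sub-carrier arithmetic quoted above can be verified numerically:

```python
# The PAL colour sub-carrier frequency as given in the text:
# 283.75 sub-carrier cycles per line plus a 25 Hz offset.
line_freq = 15625                  # Hz, the line (H) frequency
f_sc = 283.75 * line_freq + 25     # Hz
print(f_sc)                        # 4433618.75, i.e. 4.43361875 MHz
```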
Luckily the bandwidth requirement of the chrominance signal is less, because of the nature of
the human eye. The capacity of the human eye to distinguish between hues depends on the
size of the objects, the lighting conditions and the distance. In a very badly lit room you cannot
distinguish the colour of objects that are small and at a distance; however, you can still notice
the objects by their luminance value. They give rise to a luminance signal but not to a
chrominance signal.
Even in good lighting conditions we cannot notice hue until we go near the objects: their
brightness is noticed first, and only when we go still nearer do we see colour. This shows that
the bandwidth requirement of the chrominance signal is much less. In the PAL system the
chrominance bandwidth is restricted to 1.3 MHz and the sub-carrier frequency is 4.43 MHz.
But this sub-carrier is a single carrier, and we need two separate carriers for R−Y and B−Y so
as to modulate them independently. This is achieved by having two sub-carriers of the same
frequency but differing in phase by 90 degrees: quadrature modulation of the sub-carrier by the
colour difference signals. The type of modulation used is amplitude modulation. One carrier is
amplitude modulated with R−Y and the other with B−Y, and in both cases the carrier is
suppressed. The two modulated signals, at 90 degrees to each other, produce the resultant
chrominance signal. When this chrominance signal is added to the luminance signal, it forms
the Composite Colour Video Signal (CCVS).
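The quadrature modulation described above can be sketched as two suppressed-carrier AM signals on sine and cosine versions of the same sub-carrier; the function names here are illustrative, and U/V stand for the weighted B−Y and R−Y components:

```python
# Quadrature modulation sketch: two suppressed-carrier AM signals on the same
# sub-carrier frequency, 90 degrees apart, carrying the colour difference
# components U (from B-Y) and V (from R-Y).
import math

F_SC = 4.43361875e6   # PAL colour sub-carrier, Hz

def chroma_sample(u, v, t):
    """Resultant chrominance at time t for colour difference values u, v."""
    return (u * math.sin(2 * math.pi * F_SC * t)
            + v * math.cos(2 * math.pi * F_SC * t))

def chroma_vector(u, v):
    """Resultant vector: amplitude relates to saturation, phase to hue."""
    amplitude = math.hypot(u, v)
    phase_deg = math.degrees(math.atan2(v, u))
    return amplitude, phase_deg
```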
The R−Y and B−Y chrominance signals are recovered in the television receiver by
demodulation. But the sub-carrier generated by the local oscillator in the receiver must have
the same phase and frequency as the transmitted sub-carrier. This is achieved by transmitting
10 cycles of the sub-carrier frequency on the back porch of the transmitted H synchronizing
pulse. This signal of 10 cycles of sub-carrier, known as the BURST or colour BURST, is gated
and recovered in the TV receiver for demodulation.
The chroma signal is the resultant of the two vectors of the modulated colour components.
When added to Y to form the CCVS signal, it causes peak excursions of up to 1.79 in the case
of saturated yellow, and in some other cases it may even go below the black level. This
resultant amplitude is considered too great for transmission over equipment also used for
monochrome. Hence the chrominance information is reduced in amplitude so that the excursion
is limited to 1.33. This requires reduction of the colour difference signals by suitable weighting
factors, which are as below:

U = 0.493 (B − Y)
V = 0.877 (R − Y)
When a CCVS signal passes through a long chain or network, the chrominance signal may
suffer a phase change with respect to the burst, resulting in a wrong hue. This is because the
relationship between the burst phase and that of the instantaneous modulated sub-carrier
superimposed on the luminance signal determines the HUE of a colour. If any serious change
occurs in this relationship in the transmission path, wrong hues will result.
Phase changes of the order of ±5 degrees and above will produce noticeable changes of hue,
although this depends to a great extent on the picture content. (A phase error of 10 degrees is
quite noticeable; a phase error of 5 degrees is just detectable.)
In NTSC, because of the phase errors induced by the system, the resultant chroma vector at
the receiver varies, causing noticeable impairment. In the PAL system, on the other hand, any
phase variation in one line is compensated, or cancelled, by reversing the phase of the V
component in the subsequent line. The phase errors in any line and the subsequent line cancel
each other, restoring the original phase (i.e. the original hue). Since the phase of the V vector is
changed by 180 degrees after every line, the system is called Phase Alternating Line (PAL).
As mentioned earlier, the phase of the sub-carrier to the R−Y modulator is reversed (by 180
degrees) on each alternate line. This means that the phase of R−Y in a particular line is 180
degrees opposite to that in the preceding line as well as the succeeding line, while the phase of
the sub-carrier for the B−Y signal remains the same in every line.
Let us assume a phase error of α (alpha) at the receiver for the resultant chrominance signal of
the nth line with respect to the transmitted signal. In the successive line there is going to be the
same phase error α, but with the polarity of the resultant chrominance reversed. So when we
combine the chrominance of the nth line and the (n+1)th line at the receiver, the net result is a
chrominance signal with the original phase. This is a major improvement of the PAL system
over the NTSC system.
Even if we do not combine the chrominance outputs of lines n and n+1 electronically by using a
1 H delay line, our eyes can average the outputs of lines n and n+1, providing the original
colour or hue. Such TV receivers are called PAL-S (PAL-Simple) receivers.
Please note that the adjacent lines referred to are lines laid down in time sequence, not the
lines that appear adjacent on the TV screen when the fields are interlaced.
In the case of the PAL-S receiver, the ability of the eye to combine the hues on adjacent lines
is utilized. However, the resultant picture is less satisfactory for phase errors exceeding 15
degrees. The PAL-D demodulator instead uses a 1 H delay line to combine the chroma
information of a line with that of the previous line, by adding the chroma information of the two
adjacent lines as per the block diagram below. The separated U and V correspond to the
average hue and saturation of the present and previous lines received.

[Fig.: PAL-D delay-line decoder — the chroma signal and its 1 H delayed version are combined: addition yields 2U (the modulated B−Y component for the decoder), and combination via the alternating +/− V inverter yields 2V (the modulated R−Y component for the decoder)]
You may note that, in view of the line-by-line phase alternation, a given hue is represented on a
vector diagram at two alternating positions, symmetrically displaced above and below the B−Y
axis on alternate lines. This is why you may have noticed two colour vectors for each colour on
a vectorscope display.
PAL ENCODER
The design of the PAL encoder may vary from manufacturer to manufacturer. In some PAL
encoders, instead of reversing the phase of the V component on every alternate line, it is found
easier to change the phase of the carrier modulating the R−Y component by 180 degrees every
alternate line. This switching is controlled by the H/2 PAL Ident pulse of 7.8125 kHz (H/2
because of the PAL alternation). To enable the TV receiver to decode which line has a +V
component and which line has a −V component, additional information is sent by modifying the
burst: the burst preceding a line carries this information, with a burst phase of 135 degrees for a
+V line and 225 degrees for a −V line. It is known as the swinging burst.
The block diagram of the PAL encoder explains a system having the following steps:
Also please note that the burst preceding a line indicates whether the V component is +ve or
−ve, and that it contains equal components of U and V.
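The line-by-line switching described above can be sketched as follows; which parity of line carries +V is a convention, so the choice of even lines here is arbitrary:

```python
# Sketch of PAL line switching: the V (R-Y) component alternates in phase
# each line, and the swinging burst (135 / 225 degrees) tells the receiver
# which lines carry +V. The "+V on even lines" convention is illustrative.
def v_sign(line_number):
    """+1 on lines transmitted with +V, -1 on lines with -V."""
    return 1 if line_number % 2 == 0 else -1

def burst_phase(line_number):
    """Swinging burst: 135 degrees on +V lines, 225 degrees on -V lines."""
    return 135 if v_sign(line_number) > 0 else 225

print([burst_phase(n) for n in range(4)])   # [135, 225, 135, 225]
```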
PAL - DECODER
PAL decoding is the reverse of the encoding process. The objective of recovering R, G and B
from the received signal is achieved in the following steps:
1) Y and sync (S) are recovered by decoding the video, using the LPF and the sync
separator circuit of the receiver.
2) Chroma is separated using a BPF (centred at 4.43 MHz).
3) The burst, i.e. the SC reference, is keyed or gated out of the chroma using the K pulse.
4) The 4.43 MHz local oscillator is phase locked to the recovered burst, making it of the
same phase as the transmitted sub-carrier.
5) The 4.43 MHz SC is processed further to derive a version shifted by 90 degrees.
6) The modulated chroma is demodulated by these two SCs, with phases of 0 and 90
degrees, retrieving the U and V components.
7) The phase of the V component is restored to normal using the ident information carried
by the transmitted burst.
8) U and V are scaled back (de-weighted) to R−Y and B−Y.
9) Y, R−Y and B−Y are matrixed to retrieve R, G and B, which control the three grids of the
picture tube.
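Step 6 above, the synchronous demodulation, can be sketched as a numeric round trip: modulate U and V onto quadrature carriers, then multiply by the two reference carriers and average over whole cycles (an idealised low-pass filter). The names and the averaging scheme are illustrative, not a receiver design:

```python
# Round-trip sketch of quadrature modulation and synchronous demodulation:
# each product with a reference carrier averages to half the component.
import math

F_SC = 4.43361875e6   # PAL colour sub-carrier, Hz
N = 1000              # samples per averaging window

def modulate(u, v, t):
    return (u * math.sin(2 * math.pi * F_SC * t)
            + v * math.cos(2 * math.pi * F_SC * t))

def roundtrip(u, v):
    """Modulate, then demodulate by averaging over four whole SC cycles."""
    T = 4 / F_SC              # four sub-carrier cycles
    us = vs = 0.0
    for i in range(N):
        t = i * T / N
        c = modulate(u, v, t)
        us += c * math.sin(2 * math.pi * F_SC * t)
        vs += c * math.cos(2 * math.pi * F_SC * t)
    return 2 * us / N, 2 * vs / N   # recovered U and V

print(roundtrip(0.2, -0.4))   # approximately (0.2, -0.4)
```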
ACTIVITIES
1. Draw the block diagram of a Colour TV receiver highlighting the decoder portion.
2. Study the drawing for a TV studio monitor and compare it with a TV Set.
RECAP
The PAL colour TV system was designed for compatibility and reverse compatibility with the
existing monochrome TV system. A colour signal includes both luminance and chrominance
signals. A parking slot for the chroma, with 1.3 MHz of BW centred around 4.43 MHz, was
created within the BW of monochrome TV. The modulation used for the colour difference
signals is suppressed-carrier amplitude modulation. The modulated chroma information is then
added to the luminance signal to form the CCVS. The modulated chroma includes a phase
reversal of the modulated R−Y component on every alternate line to remove phase errors at
the receiver; this is called PAL encoding. The addition of the associated colour burst, along with
the sync pulses, in the CCVS enables the TV receiver to recover R, G and B from the CCVS;
this is called the PAL decoding process.
FURTHER READINGS
1. Dhake, A.M. (1999), Television and Video Engineering, Tata McGraw-Hill, New Delhi.
2. Gulati, R.R. (2002), Modern Television Practice: Principles, Technology and Servicing,
New Age, New Delhi.
3. Grob, B. and Herndon, C.E. (1999), Basic Television and Video Systems, McGraw-Hill,
New York.
4. Lakshmi, A. Veera (2010), Television and Video Engineering, Ane Books, New Delhi.
5. Sharma, S.P. (2008), Basic Radio & TV, 2nd Edition, TMH, New Delhi.
6. Tozer, E.P.J. (2004), Broadcast Engineer's Reference Book, Focal Press, London.
******
3
DIGITAL VIDEO SIGNAL
AND HDTV STANDARDS
INTRODUCTION
The migration from analog to digital in TV production and transmission began in 1995.
Presently digital video is used worldwide. Digital video is distributed within the studio in the form
of the Serial Digital Interface (SDI). SDI can carry Standard Definition TV, designated SD-SDI,
and High Definition TV, designated HD-SDI. SDI video has a constant data rate: for SD-SDI it is
270 Mbit/s and for HD-SDI it is 1.485 Gbit/s. These high-speed signals are distributed through
high-quality coaxial cable and optical fibre; for HDTV, fibre is preferred over long distances. SDI
carries raw (uncompressed) digital video, and its bandwidth requirement is so high that it
cannot be supported by any transmission network. Therefore raw video is compressed using
encoding standards such as MPEG-2 and MPEG-4. Compressed video is transported using
another interface called the Asynchronous Serial Interface (ASI).
OBJECTIVES
DIGITAL VIDEO
Uncompressed digital video signals have been used for some time in television studios. Based
on the original CCIR standard CCIR 601, designated ITU-R BT.601 today, this data signal is
obtained as follows:
To start with, the video camera (Fig. 1) supplies the analog Red, Green and Blue (R, G, B)
signals. These signals are mixed in a matrix in the camera to form the luminance (Y) and
chrominance (colour difference CB and CR) signals.
[Fig. 1: R, G and B are matrixed to Y, CB and CR; the colour difference signals are low-pass filtered at 2.75 MHz; 8/10-bit A/D converters (chrominance sampling frequency 6.75 MHz) produce the 270 Mbit/s ITU-R BT.601 ("CCIR 601") signal]
The luminance bandwidth is then limited to 5.75 MHz using a low-pass filter. The two color
difference signals are limited to 2.75 MHz, i.e. the color resolution is clearly reduced compared
with the brightness resolution. In analog television (NTSC, PAL, SECAM), too, the color
resolution is reduced to about 1.3 MHz. The low-pass filtered Y, CB and CR signals are then
sampled and digitized by means of analog/digital converters. The A/D converter in the
luminance branch operates at a sampling frequency of 13.5 MHz and the two CB and CR color
difference signals are sampled at 6.75 MHz each. This meets the requirements of the sampling
theorem: there are no more signal components above half the sampling frequency.

[Fig. 2: The digitized Y, CB and CR samples are multiplexed into a single 4:2:2 data stream; SAV = start of active video, EAV = end of active video; chrominance sampling frequency 6.75 MHz]

The three
A/D converters can all have a resolution of 8 or 10 bits. With a resolution of 10 bits, this will
result in a gross data rate of 270 Mbit/sec which is suitable for distribution in the studio but too
high for TV transmission via existing channels (terrestrial, satellite or cable). The samples of all
three A/D converters are multiplexed in the following order: CB Y CR Y CB Y. In this digital video
signal (Fig. 2), the luminance value thus alternates with a CB or a CR value and there are twice
as many Y values as compared to CB or CR. This is called a 4:2:2 resolution, compared with the
resolution immediately after the matrix, which was the same for all components, namely 4:4:4.
This digital signal can be provided in parallel form at a 25-pin sub-D connector or serially at a 75-ohm BNC socket. The serial interface is called SDI, which stands for serial digital interface, and has become the most widely used interface because a conventional 75-ohm BNC cable can be used.
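The arithmetic behind the 270 Mbit/s figure can be sketched in a few lines (a quick check, not part of any standard):

```python
# Gross data rate of a 10-bit ITU-R BT.601 4:2:2 signal.
f_y = 13.5e6        # luminance sampling frequency, Hz
f_c = 6.75e6        # sampling frequency of each colour-difference signal, Hz
bits_per_sample = 10

samples_per_second = f_y + 2 * f_c        # Y + CB + CR
gross_rate = samples_per_second * bits_per_sample

print(gross_rate / 1e6)   # → 270.0 (Mbit/s)
print(f_y / f_c)          # → 2.0 (twice as many Y samples: hence "4:2:2")
```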
Within the data stream (Fig. 3), the start and the end of the active video signal are marked by special code words called, naturally enough, SAV (start of active video) and EAV (end of active video). Between EAV and SAV lies the horizontal blanking interval, which does not contain any information related to the video signal; i.e. the digital signal does not carry the sync pulse. In the horizontal blanking interval, supplementary information can be transmitted, such as audio signals or error protection information for the digital signal.
Fig. 3: Line structure of the data stream: EAV code word, TRS-ID and ancillary data in the blanking interval, SAV code word, then the active CB Y CR Y samples.
The SAV and EAV sequences consist of four 8- or 10-bit code words each. SAV and EAV begin with one code word in which all bits are set to one, followed by two words in which all bits are set to zero. The fourth code word contains information about the respective field and the vertical blanking interval; it is used for detecting the start of a frame, field and active picture area in the vertical direction. The most significant bit of the fourth code word is always 1. The next bit (bit 8 in a 10-bit transmission or bit 6 in an 8-bit transmission) flags the field: if this bit is set to zero, it is a line of the first field, and if it is set to one, it is a line of the second field. The next bit (bit 7 in a 10-bit transmission or bit 5 in an 8-bit transmission) flags the active video area in the vertical direction: if this bit is set to zero, the line lies in the visible active video area, and if not, it lies in the vertical blanking interval. Bit 6 (10-bit) or bit 4 (8-bit) indicates whether the present code word is an SAV or an EAV: it is SAV if this bit is set to zero and EAV if it is not. Bits 5...2 (10-bit) or 3...0 (8-bit) are used for error protection of the SAV and EAV code words. This fourth code word of the timing reference signal (TRS) thus carries the field, vertical-blanking and SAV/EAV flags together with their protection bits.
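The bit layout of this fourth code word can be illustrated with a small decoder (a sketch for the 10-bit case; the function name and the example value are ours):

```python
def decode_trs_flag_word(word: int) -> dict:
    """Decode the fourth (flag) word of a 10-bit SAV/EAV sequence.
    Bit 9 is always 1; bit 8 flags the field, bit 7 the vertical
    blanking interval, bit 6 is 0 for SAV and 1 for EAV; bits 5..2
    protect the flags against transmission errors."""
    assert (word >> 9) & 1 == 1, "MSB of the flag word must be 1"
    return {
        "field": 2 if (word >> 8) & 1 else 1,          # F bit
        "vertical_blanking": bool((word >> 7) & 1),    # V bit
        "is_eav": bool((word >> 6) & 1),               # H bit
    }

# 0x274: a line of the first field, active picture area, EAV
print(decode_trs_flag_word(0x274))
```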
Neither the luminance signal (Y) nor the colour difference signals (CB, CR) use the full dynamic range (255 for 8 bits, 1023 for 10 bits; Fig. 4). There is a prohibited range which is reserved as headroom on the one hand and, on the other hand, allows SAV and EAV to be easily identified. A Y signal ranges between 16 and 235 decimal (8 bits) or 64 and 940 decimal (10 bits).
Fig. 4: Level diagram for Y and CB/CR. Black (0 mV) of the Y signal sits at code 16 (8 bits) or 64 (10 bits); the 0 mV level of CB/CR sits at mid-scale, code 128 or 512, with negative excursions down to code 16 or 64.
The dynamic range of CB and CR is 16 to 240 decimal (8 bits) or 64 to 960 decimal (10 bits).
The area outside this range is used as headroom and for sync identification purposes.
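These nominal ranges can be captured in a small helper (a sketch; the table and function names are ours, not any standard API):

```python
# Nominal quantization ranges of ITU-R BT.601 components.
RANGES = {
    #             8-bit         10-bit
    "Y": {8: (16, 235), 10: (64, 940)},
    "C": {8: (16, 240), 10: (64, 960)},   # CB and CR
}

def quantize(level: float, component: str, bits: int) -> int:
    """Map a normalized level (0..1 for Y, -0.5..+0.5 for CB/CR)
    onto the legal code range, leaving headroom outside it."""
    lo, hi = RANGES[component][bits]
    if component == "Y":
        return round(lo + level * (hi - lo))
    # colour difference: a zero level maps to mid-scale (128 or 512)
    mid = 1 << (bits - 1)
    return round(mid + level * (hi - lo))

print(quantize(0.0, "Y", 8))    # black → 16
print(quantize(1.0, "Y", 10))   # peak white → 940
print(quantize(0.0, "C", 10))   # zero colour difference → 512
```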
This video signal conforming to ITU-R BT.601, which is normally available as an SDI signal, forms the input signal to an MPEG encoder.
The sampling frequency of 13.5 MHz is a common multiple of the NTSC and PAL line frequencies:

13.5 MHz = 858 × f_h,NTSC = 864 × f_h,PAL

where 858 and 864 are the total numbers of samples in the horizontal duration of the NTSC and PAL signal respectively, and f_h,NTSC and f_h,PAL are the horizontal scanning frequencies for NTSC and PAL.
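This relationship can be verified exactly with rational arithmetic (line-frequency values are the standard ones):

```python
from fractions import Fraction

# 13.5 MHz is an integer multiple of both line frequencies, which is
# why one sampling clock serves 525-line and 625-line systems alike.
fh_ntsc = Fraction(4_500_000, 286)   # NTSC line frequency ≈ 15734.27 Hz
fh_pal = Fraction(15_625)            # PAL line frequency, Hz

print(858 * fh_ntsc)   # → 13500000
print(864 * fh_pal)    # → 13500000
```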
Uncompressed SDTV video signals have a data rate of 270 Mbit/s. They are distributed either as parallel signals via twisted-pair lines or serially via 75-ohm coaxial cables. In most cases, however, only the serial CCIR 601 interface is used today. It is called the serial digital interface (SDI) and uses a waveform that is symmetrical about ground with an initial amplitude of 800 mVpp across a 75-ohm load. This signal can be fed down 75-ohm coaxial cable fitted with BNC connectors. Unlike analog video, serial digital receivers contain correct termination that is permanently present, and passive loop-through is not available. In permanent installations, no attempt should be made to drive more than one load using T-connectors, as this will result in signal reflections that seriously compromise data integrity.
Fig. 5(a): SDI serializer. Parallel data (MSB to LSB) is loaded into a shift register clocked at ten times the word rate, NRZ encoded and passed through a scrambler to produce the serial SDI data stream; the same arrangement serves SD and HD.
Fig. 5(a) shows the process of serializing the parallel data into a single data stream. The 10-bit data is converted to non-return-to-zero (NRZ) form and then scrambled; the same scrambling technique is used for SD and HD. Scrambling randomizes the data and prevents long strings of 1s or 0s: since the clock is embedded within the data stream, a large number of transitions is required to recover it successfully.
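The principle of a self-synchronizing scrambler can be sketched as follows (an illustration using the x^9 + x^4 + 1 polynomial associated with SDI; not a bit-exact model of the interface, which additionally applies NRZI coding):

```python
def scramble(bits, state=0):
    """Self-synchronizing scrambler: each output bit is the input bit
    XORed with taps of the last nine output bits (x^9 + x^4 + 1)."""
    out = []
    for b in bits:
        s = b ^ ((state >> 8) & 1) ^ ((state >> 3) & 1)  # feedback taps
        state = ((state << 1) | s) & 0x1FF               # 9-bit history
        out.append(s)
    return out

def descramble(bits, state=0):
    """Inverse operation: the receiver rebuilds its history from the
    line itself, so it resynchronizes automatically after errors."""
    out = []
    for s in bits:
        out.append(s ^ ((state >> 8) & 1) ^ ((state >> 3) & 1))
        state = ((state << 1) | s) & 0x1FF
    return out

data = [1] + [0] * 15             # a long run of zeros...
line = scramble(data)             # ...gains transitions on the line
print(descramble(line) == data)   # → True
```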
The essential parts of a serial link are shown in Fig. 5 (b). Parallel data having a word length of
up to ten bits forms the input. These are fed to a ten-bit shift register which is clocked at ten
times the input word rate.
Fig. 5(b): SDI receiver. The incoming signal passes through a cable equalizer; a PLL regenerates the clock, and the descrambler, shift register and latch register deliver the 10-bit parallel data out.
In component SDI there is provision for ancillary data packets to be sent during blanking, with capacity for up to 16 audio channels sent in four groups. The data content of the AES/EBU digital audio sub-frame consists of validity (V), user (U) and channel status (C) bits, a 20-bit sample and four auxiliary bits which may optionally be appended to the main sample to produce a 24-bit sample. The AES recommends sampling rates of 48, 44.1 and 32 kHz, but the interface permits variable sampling rates.
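The fixed rate of one such AES/EBU channel pair is easy to derive, and shows why all 16 channels fit comfortably into the ancillary data space (a sketch; the parity bit completing the 32-bit sub-frame is part of AES3 but not listed above):

```python
# Data rate of one AES/EBU (AES3) channel pair at 48 kHz.
bits_per_subframe = 32    # preamble + 4 aux + 20-bit sample + V, U, C, parity
subframes_per_frame = 2   # one sub-frame per audio channel
fs = 48_000               # sampling rate, Hz

pair_rate = bits_per_subframe * subframes_per_frame * fs
print(pair_rate)              # → 3072000 (bit/s per channel pair)
print(8 * pair_rate / 1e6)    # → 24.576 (Mbit/s for all 16 channels)
```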
SDI equipment is designed to run at a closely defined bit rate of 270 Mbit/s and has phase-locked loops in receiving and repeating devices which are intended to remove jitter. These will lose lock if the channel bit rate changes. Transport streams are fundamentally variable in bit rate, so to retain compatibility with SDI routing equipment ASI uses stuffing to keep the transmitted bit rate constant.

The use of an 8/10 code means that although the channel bit rate is 270 Mbit/s, the data bit rate is only 80% of that, i.e. 216 Mbit/s. A small amount of this is lost to overheads.
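The 8/10 relationship is a one-liner to check:

```python
# ASI payload rate: 8 data bits are carried in every 10 channel bits.
channel_rate = 270e6                   # bit/s, SDI-compatible line rate
payload_rate = channel_rate * 8 / 10

print(payload_rate / 1e6)   # → 216.0 (Mbit/s, before further overheads)
```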
HD-SDI has an interface very similar to SD-SDI; the corresponding ITU Recommendation is ITU-R BT.1120. The data rate is 1.485 Gbit/s. The HD-SDI format is shown in Fig. 6. The high-speed signal is generally distributed through optical fibre; for short distances, coaxial cable can be used. The HD-SDI signal is compressed in a similar way to the SD-SDI signal.
Fig. 6: HD-SDI line format: EAV code word, TRS-ID and ancillary data in the blanking interval, SAV code word, then the active CB Y CR Y samples, analogous to the SD-SDI structure.

Some common HD recording formats are described below.
DV100 (DVCPRO HD, D-12). This format uses 4:2:2 sampling and is based on the DV compression developed by Panasonic.

HDV format. It records at a constant data rate and stores data on DV tape with 4:2:0 sampling. The video compression is based on MPEG-2 and the audio on MPEG-1 Layer 2. It has resolutions of 1280x720 and 1440x1080 with 8-bit quantization. The data rate is 25 Mbps.
HDCam format. It records on a cassette similar to Digi Beta with 3:1:1 sampling. The video compression is MPEG-2 (intra). Audio has 4 uncompressed channels at 20 bit/48 kHz. The resolution is 1280x720 or 1440x1080, with 8-bit quantization. The data rate is 112 Mbps to 142 Mbps. Frame rate options are 1080/23.98, 1080/24PsF, 1080/59.94i, 50i, 29.97PsF and 25PsF.
HDCam SR. It also records on a cassette similar to that used for Digi Beta, with 4:2:2 or 4:4:4 sampling and compression based on the MPEG-4 Studio Profile (not H.264). Audio has 12 uncompressed 24 bit/48 kHz channels. The resolution is 1280x720 or 1920x1080 with 10-bit quantization. The data rate is 440 Mbps. Frame rate options are 60P, 59.94P, 50P, 60i, 59.94i, 50i, 30PsF, 29.97PsF, 25PsF, 24PsF, 24P and 23.98PsF.
DVCProHD (or DVCPro 100). This has 4:2:2 sampling with DV compression and 8 channels of uncompressed 16 bit/48 kHz audio. The resolution is 960x720 or 1440x1080. Quantization is 8 bits per sample, for a data rate of 100 Mbps.
D5 HD uses the D5 format to record HD with 4:2:2 sampling and about 4.5:1 intra-frame compression. Audio is uncompressed, 20/24 bit/48 kHz. The resolution is 1280x720 or 1920x1080 with 10-bit quantization. The data rate is 235 Mbps.
XDCAM is a tapeless professional video system introduced by Sony in 2003. XDCAM HD (or XDCAM HD420) supports multiple quality modes. The HQ mode records at up to 35 Mbit/s using variable bit rate (VBR) MPEG-2 long-GOP compression. It also provides optional 18 Mbit/s (VBR) and 25 Mbit/s (CBR) modes for increased recording time.
XDCAM EX. The codec of this format offers either 25 Mbit/s CBR for SP mode (1440x1080) or 35 Mbit/s VBR for HQ mode (1920x1080). The recorded video is carried in an MP4 file wrapper, versus XDCAM HD's MXF file wrapper.
XDCAM HD422 (or MPEG HD422). This third-generation XDCAM uses the 4:2:2 profile of the MPEG-2 codec, which has double the chroma resolution of the previous generations. To accommodate the extra chroma detail, the maximum video bit rate has been increased to 50 Mbit/s.
P2 format (tapeless). P2 (Professional Plug-in) is a professional digital recording format with solid-state memory storage, introduced by Panasonic in 2004 and specially tailored for ENG applications. It supports recording of DV, DVCPRO, DVCPRO25, DVCPRO50, DVCPRO-HD and AVC-Intra.
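The data rates quoted above translate directly into storage per hour of recording. A rough sketch (video only, ignoring audio and container overhead; 140 Mbps is taken as representative of HDCAM's 112 to 142 Mbps range):

```python
# Approximate storage per hour at the video data rates quoted above.
rates_mbps = {
    "HDV": 25,
    "DVCPRO HD": 100,
    "HDCAM": 140,       # representative of the 112-142 Mbps range
    "D5 HD": 235,
    "HDCAM SR": 440,
}

for name, rate in rates_mbps.items():
    gb_per_hour = rate * 1e6 * 3600 / 8 / 1e9   # bit/s x s → bytes → GB
    print(f"{name}: {gb_per_hour:.0f} GB/hour")
```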
ACTIVITIES
Trace the SDI signal from the studio to the Master Switching Room and VTR room using a digital waveform monitor. If there is any deterioration in the signal, find the cause and improve the quality.
RECAP
This chapter has described the conversion of component analog video to digital form. The digitized colour components are multiplexed to form the SDI format. The data rate is 270 Mbps for SDTV and 1.485 Gbps for HDTV. Uncompressed audio can be embedded into the horizontal blanking of the SDI stream (embedded SDI). SDI is an uncompressed data stream; it is further compressed using MPEG-2 or MPEG-4 encoders, and after compression the signal is carried as an ASI stream for final transmission.
FURTHER READINGS:
******
4
TELEVISION STUDIO
INTRODUCTION
Television production is the art of creating stories from visuals: live action from multiple cameras in the studio, pre-recorded TV programmes on videotapes or servers, graphics, text and film-based material. TV production brings all these sources together with the required tools, placing them on the desired timeline to create a story.
OBJECTIVES
Whether productions originate in the studio or in the field, the system works on the same basic principle. The television camera converts optical images into electrical signals (called video signals) and the microphone converts sounds into electrical signals (called audio signals). These signals are then processed further for transmission / recording.
TV STUDIO FLOOR
Area and volume: A well-designed television studio floor provides the proper place and environment for cameras, lighting, sound, scenery, and the action area for the performers (Figure 1). Studios are normally rectangular, with varying amounts of floor space. Larger studios of about 20 x 20 metres or more are required for elaborate productions such as music, drama, or audience-participation shows. Besides floor space, they also need a height of about 7 metres or more to accommodate lighting and sets. If the ceiling is too low, handling of lights and the AC ducts becomes difficult and there may not be enough room for the heat to dissipate; low-hung lights and the boom microphone will also encroach into the scene. Medium-sized or small studios are more efficient to manage and are cost-effective. These are used for programme presentation, discussions, news etc.
Floor- The studio floor is evenly levelled so that cameras can move smoothly and freely. This
also facilitates other studio operations concerning set erecting, scenery, and handling of other
properties used as part of the set. Most studios have concrete floors that are polished or covered
with linoleum, tiles, or hard plastic sheet.
Acoustic Treatment- The studio ceiling and walls are usually treated with acoustic material that prevents sound from bouncing indiscriminately around the studio. For a TV studio this is done keeping in mind the multiple types of production in the same studio. This means a compromise, unlike radio, where dedicated studios with different acoustic treatment are provided for talk, music and drama. A TV studio, with a large operational crew and both stationary and moving equipment present during recording, requires special care in designing the sound pick-up. In effect, TV studios are multi-purpose studios with a compromise on acoustic treatment. It is for this reason that many professional artists bring their own audio track, recorded in a professional radio studio, for their TV performance.
Air-Conditioning- The huge volume of a TV studio also requires adequate air conditioning, of the order of about 200 tons for a bigger studio. Incandescent studio lights, equipment, artists, invited audience and crew generate a large amount of heat. Normally studios are cooled by a central air-conditioning plant maintaining a very low noise level. AC accounts for the maximum power consumption in the studio set-up.
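A back-of-envelope check shows how a figure of this order arises (all load figures below are illustrative assumptions, not survey data):

```python
# Rough conversion of studio heat load to tons of refrigeration.
lighting_kw = 500        # incandescent lighting load (assumed)
equipment_kw = 100       # cameras, monitors, racks (assumed)
people = 150             # crew plus audience (assumed)
heat_per_person_kw = 0.12

heat_kw = lighting_kw + equipment_kw + people * heat_per_person_kw
tons = heat_kw / 3.517   # 1 ton of refrigeration = 3.517 kW of cooling
print(round(tons))       # of the order of the ~200 tons quoted above
```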
Doors- Studios need heavy, soundproof doors that are large enough to accommodate scenery, furniture, and even vehicles. Studios have two doors: a bigger one for the sets and a smaller one for staff and performing artists. Both doors also have a sound-lock area, thus requiring a double door at each entry to the floor.
Inter communication system- The intercommunication system allows voice contact between all production and engineering crew who are actively engaged in a production. The director in the control room has to rely on the intercom to communicate cues and instructions to the production team. Each member of the production team wears a headset with an earphone and a small microphone for talkback.
Studio monitors- A large video monitor on the studio floor displays the video feed from the production control room (PCR), usually the programme output (PGM out). It is an important production aid for both crew and performing artists: it helps the crew monitor the studio output and conduct their operations as required for different shots, and it helps the presenter see whether the various tape or live inserts are actually appearing as per the script.
Program Speakers- The PCR can feed any required programme sound into the studio for monitoring or performance. Normally the speaker is muted during a live performance to avoid echoes. However, for certain programme requirements an audio feed is provided, with special care taken to avoid echoes during the live performance. This facility is called fold-back sound.
Wall Outlets- The outlets for camera and microphone cables, intercoms, and power sockets
are distributed along the four studio walls for easy access.
VIDEO CAMERAS
Studio cameras are heavy and usually have better specifications than smaller portable cameras. Such cameras require heavy-duty stands along with other accessories. The viewfinder is a small television monitor mounted on the camera; it shows the camera picture and helps the camera operator compose shots. Most professional camera viewfinders are black-and-white, though some are colour; black-and-white viewfinders are preferred for easier focusing. The most common studio camera mount is the pneumatic pedestal (Fig. 2). This pedestal allows raising and lowering the camera
height and moving it smoothly across the studio floor while it is live or on the air. Some news
studios use robotic cameras that are remotely controlled via computer by a single operator in the
studio control room. Robotic cameras are relatively small and light. A typical studio may have 3
to 5 cameras.
All these cameras have additional built-in communication for the camera persons, production desk and camera control unit. The viewfinder and the camera front are also equipped with a red light, called a tally light, which gives the camera person and the studio crew a visual indication of which camera is selected for final output on the production panel.
STUDIO LIGHTING
Studio lighting is achieved using two basic types of light: directional lights and diffused lights. A directional light has a sharp beam that produces harsh shadows; a diffused light has a wide beam that illuminates a large area with soft shadows. These light fittings are usually suspended from the studio ceiling (Fig. 1). The various kinds of suspension systems available in TV studios are fixed horizontal and vertical bars, catwalks with walkways along the grid for adjusting lights, or motorised hoists in bigger studios.
Most studios have a dimmer control board to switch and adjust the relative intensity of the studio lights. The lighting engineer has monitoring facilities for the studio cameras and programme output to facilitate light adjustment, and the lighting control operator is connected to the director via intercom.
PRODUCTION CONTROL ROOM (PCR)
The programme control room is the live production room for television transmission or recording. The PCR is preferably located adjacent to the studio floor so that all production activities can be efficiently co-ordinated (Fig. 3). The producer/director and the production crew have monitoring access to all the sources needed for a production as per the script. The director, as team leader of the production crew, has to co-ordinate and knit the programme together, and the crew needs to select and organize the various video and audio inputs as required. The programme control room is equipped with (1) video monitors, (2) audio monitors, (3) intercom systems, (4) a vision mixer, (5) an audio console, and (6) a clock and an off-the-air television set to receive the broadcast signal.
VIDEO SWITCHING
Video switching refers to the mixing and switching operations required for a production. Referring to the diagram in Fig. 6, cameras 1 and 2 deliver their pictures first to the CCUs (Camera Control Units) and then to preview monitors, while the videotape monitor shows the pre-recorded videotape source. These three video signals are fed to the switcher as inputs. Each source (camera 1, camera 2, and VTR) can be selected for final output; this switcher output (line-out) is what goes on the air or is recorded on videotape. Any switcher, simple or complex, can perform three basic functions: selecting a video source, making transitions between sources, and creating effects.
As an example, let us assume that we are required to produce a small TV story for news. In this
story a new computer lab has been opened by a school and the principal of this school arranges
a visit of students and their parents to this facility. The event has already taken place and the
recording is available. The TV script for this in short will be as under:
Execution- Cameras 1 and 2 are focused on the two news anchors: camera 1 provides a close-up of one anchor, and camera 2 a close-up of the co-anchor. The CCUs enhance and match the pictures of the two cameras. The pictures from both cameras are fed to preview monitors so that the director can see what they look like; a third preview monitor shows the videotape of the principal. These three video signals are simultaneously fed into the switcher, which allows any of the three feeds to be selected and switched to the line-out. Pressing camera 1 selects the close-up of one anchor on the line monitor, which becomes the studio PGM output; pressing camera 2 selects the close-up of the co-anchor. After this, the videotape insert puts the principal on the line monitor. Simultaneously, the audio signals from the news anchors' microphones or from the tape playback are switched by the audio operator, who can select the voice of the person on screen, match the volume of the three sound sources (anchor, co-anchor, and principal), or keep one lower than the others on the audio console.
AUDIO CONTROL
The audio console is used to control the sounds associated with the production of a TV programme. The PCR monitors are shared by the audio operators, who sit in a separate enclosure with a glass window to avoid noise and disturbance. The console has the following functions:
(1) Select a specific microphone or other sound input as per the cue from the director,
(2) Process the signal,
(3) Control the quality of the sound and mix two or more incoming sound sources.
Operations: Recall the example of the news anchor inserting a videotape of the principal and the visitors at the new computer lab. While the principal is busy escorting the visitors into the room, one of the news anchors talks over the initial part of the videotape insert. To convey a sense of actuality, the background sound can be mixed in as an effect, at low level, along with the anchor's narration; the excited voices of the parents or the occasional laughter of the students adds a lot to the programme. Finally, when the principal begins to speak, the anchors' microphones can be switched off. The audio console also makes it possible to add pre-recorded sound, such as music, from various digital storage devices, DAT tape, or compact discs (CDs).
Camera control allows the video operator to match all the camera pictures by optimising the CCUs (Camera Control Units) of the different cameras involved in a studio production. The CCU engineer has to work in close co-ordination with the lighting engineer. Matching all the camera pictures helps the viewer see a continuous video flow without noticing the switching from one camera to another. Fig. 5 shows a CCU panel with controls for each camera in a three-camera set-up.
Besides routine TV programmes, bigger television shows are also pre-recorded. Videotape recorders record the video and audio signals on a single videotape in a cassette, in the form of magnetic signals, which are later recovered as electrical TV signals for display.
Fig. 6: Studio audio-video chain. Cameras 1 and 2 feed their CCUs and preview monitors, videotape playback feeds a videotape monitor, and all three video sources enter the switcher; microphones and playback audio feed the audio console. The selected line-out, watched on preview (PVM) and programme (PGM) monitors, is recorded together with the mixed audio on the video recorder and fed to the TV transmitter.
Master Switching Room signal flow (figure): incoming sources (base stations via amplifiers, OB and external feeds) pass through stabilizers and video distribution amplifiers to the switcher; the PGM output is converted to SDI with embedded audio and routed to various destinations, with remote CCU control, monitoring of video, blanking and sync/burst levels, an SPG reference, and an audio chain with its own distribution amplifiers and conditioned power supplies.
Master control is the nerve centre of a television station. Its main job is to route all source signals to their different destinations. Programmes generated by the Kendra and all outside feeds, including OB (Outdoor Broadcasting) signals, are routed through the master control room to the required destinations. Master control is also responsible for the technical quality of the programmes and has to check all programme material against the required technical standards. In non-broadcast production centres, master control refers to a room that houses the camera control units (CCUs), video-recording equipment, special effects devices etc.
ACTIVITIES
RECAP
A TV studio has the following main areas: (1) cameras, (2) studio lighting, (3) audio control, (4) video switching, (5) videotape recording / tapeless systems, (6) camera control and (7) MSR operations. These areas are knitted together in an audio-video chain as shown in Fig. 6.
FURTHER READINGS
1. Digital Video and Audio Broadcasting Technology, Fischer, W (2008), Berlin: Springer.
2. Digital Television, Benoit, H (2008), London: Focal Press.
3. Broadcast Engineer’s Reference Book, Tozer, EPJ (2004), London; Focal Press.
4. Television and Video Engineering, 2nd Edition, Dhake, AM (1999), New Delhi TMH.
******
5
TELEVISION FIELD
PRODUCTIONS
INTRODUCTION
Nothing demonstrates the magic of television better than the live broadcasts of an event. Such
events are usually covered by using TV cameras and microphones attached to a mobile van with
associated television equipment in the van. These vans are called Outdoor Broadcast Vans (or
OB Vans). These OB Vans are well equipped and capable of working even in extreme weather
conditions from sub-zero to 50 deg. Celsius. The biggest requirement for any live event is speed
and accuracy in the production process.
OBJECTIVES
ELECTRONIC FIELD PRODUCTION (EFP)
This is similar to the ENG application, with additional facilities. It is used to acquire visuals for the production of telefilms, drama, interviews, documentaries etc. For such EFP productions, the raw footage requires extensive post-production work, especially when shooting with a single camera on location. It requires a lot of research, planning and a well-documented script. Some field productions may also require a portable video mixer with a two-camera set-up along with the necessary cables. To make things easier, one may use a portable audio mixer, preview monitors, portable lighting and power generators, along with sufficient batteries. Sound pick-up with additional mikes on location is always preferred in TV, to save time, instead of the compulsory dubbing used in the film industry; film cameras are very noisy compared to video cameras.
Multi-camera outside production or a live telecast can be of any magnitude and complexity. An OB production may require the following equipment and facilities:
Multiple cameras (at least three)
A mobile control room with operational crew
Production control room
Staging for the presentation area
Transmission and monitoring equipment
Events commonly covered by OB units include sports, concerts, ceremonies, and shows with invited audiences. Activities inside the OB van are normally divided into five areas:
1) The first row is the video production area, handled by a television producer, a technical director, a character generator (CG) operator and a production assistant. This row sits in front of a wall of video monitors showing all the available video sources: computer graphics, cameras, videotape recorders (VTRs), servers and slow-motion replay machines. The monitor wall also has PVW and PGM monitors.
2) The second row is usually meant for an audio engineer, who shares the monitors in front for audio operations. Audio arrangements for a typical OB are similar to those described in Chapter 7 of Sound Broadcasting (Volume II).
3) A desk with dedicated monitors in the second row is for VTR operations. It is essential
that all the operational crew is in communication with each other during the coverage of
events, so that replays and slow-motion shots can be selected and aired.
4) The fourth activity area is the video control area, which takes care of technical quality along with the camera control units (CCUs) that control the video cameras.
5) The fifth area is transmission, from where the final output signal is monitored and transmitted to the required destinations.
STUDIO CONNECTIVITY
a) Using a Microwave Link
The baseband audio/video signal is carried over a multicore cable to the up-converter fixed at the back of the transmitting dish, which is installed at the maximum possible height to achieve maximum line-of-sight range to the receiving dish.
After the up-converter and solid-state power amplifier (SSPA), the signal is fed to the PDA for transmission. At the receiving end the RF is down-converted, demodulated and de-multiplexed to recover the audio/video baseband signal. Microwave links use two kinds of antennae: 1) parabolic and 2) horn. These antennae are highly directional. The parabolic dish antenna is widely used, with a waveguide feed placed at its focus. Parabolic antennae used for TV links have diameters varying from 1 to 3.6 m; portable links use smaller dishes.
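The benefit of mounting the dish as high as possible can be estimated with the standard 4/3-earth radio-horizon approximation, d ≈ 4.12·√h (d in km, h in metres); the mast heights below are illustrative:

```python
# Line-of-sight range estimate for a microwave link.
def radio_horizon_km(height_m: float) -> float:
    """Distance to the radio horizon for an antenna at height_m metres,
    using the 4/3-earth approximation d ≈ 4.12 * sqrt(h)."""
    return 4.12 * height_m ** 0.5

# Combined range for a 30 m transmit mast and a 10 m receive dish:
d = radio_horizon_km(30) + radio_horizon_km(10)
print(round(d))   # roughly 36 km
```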
b) Using Optical Fibre (OFC)
TV productions on location can also be sent back to the studio as a feed for the control room by using OFC.
c) Using DSNG
Live telecast from a location can also be sent by up-linking with a standalone fly-away DSNG set-up or a mobile DSNG van.
DESIGN OF OB VEHICLE
a) Designing aspects
An OB vehicle has to carry a large amount of sophisticated and costly equipment; accordingly, utmost care is taken while selecting the chassis and designing the body of the van. Vehicle customization covers structural analysis, system design, coach building, proper heat insulation, equipment installation, field testing etc. Effective provision of shock absorbers is an integral part of the design of the OB van, to avoid any damage to equipment while the vehicle is moving. The salient features of the design are:
I) Chassis normally not to exceed an overall length of 12 meters for easy manoeuvrability.
II) Equipped with hydraulic stabilization jacks.
III) Load-bearing capacity to withstand the weight of all the video and audio equipment, A/C units, cable drums, camera stands, packaging boxes, UPS, power distribution panels, furniture, equipment racks, personnel and the body itself.
MONITORING WALL
ARRANGEMENT OF VIDEO MONITORS
OB van layout (40 feet / 12 m, figure): the control end houses the production and audio control rows facing the monitor wall; behind them sit the 2 x 6-channel slow-motion server, VTRs, 32-channel audio console and 16-32 input vision mixing console; the entrance end carries the UPS, power distribution and supply, and drums of camera cable, with access through a back door and two side doors.
IV) The interior and exterior of the OB van are ergonomically and aesthetically designed to suit the physical and environmental conditions and to provide comfortable working for personnel.
V) Suitable storage area is provided for cameras, lenses, cable drums and other miscellaneous equipment.
VI) Portable fire-fighting extinguishers of suitable types are provided, along with a first-aid kit.
VII) Technical furniture for operating and production staff.
I) The van is designed with a conditioned power supply having a UPS of the desired rating, depending on the load.
II) AC mains and DG power supply input sockets are made available at appropriate locations, considering safety and exposure to rain water.
III) A power distribution panel with an on-load change-over switch for the DG, digital metering, protective devices and monitoring panels for the equipment racks, internal lighting and air-conditioning etc.
IV) Technical and power earthing for video, audio and power equipment is properly done as per the standard.
V) Internal lighting suitable for production, operation and maintenance is provided inside the van.
VI) Air conditioners of adequate tonnage for the equipment racks and operators are also provided.
The OB van should also have storage for essential accessories to meet any emergency, such as:
I) Motorized cable drums for cameras, multi pair audio, video and data cables
II) Flexible mains copper cables of suitable rating of 100 meters or so.
III) Termination panels for external in & out connections for video (HD-SDI, SD-SDI and Composite), audio (analogue and AES/EBU) and power supply.
IV) Input power switchboard and outlets for external equipment such as DSNG, microwave
links etc.
Present-day OB vans are equipped with a state-of-the-art production setup inside a customised van. The pictures below give an idea of the various installed equipment panels.
i) Production panel and monitor wall ii) Slow motion server and VTR panel
1) Cameras
The OB van is normally designed, wired and equipped for 4 to 10 cameras along with accessories. The cameras are usually equipped with the following lenses:
1) 22X or better, with 2X built-in zoom extender, servo zoom and servo focus
2) 86X or better, with 2X built-in extender, servo focus and servo zoom
3) Wide-angle lens with 2X built-in extender and quick zoom facility
4) Heavy duty camera support systems
5) Suitable carrying cases and rain cover for cameras and lenses and tripods.
Lenses with quick zoom facility, built-in optical image stabilizer and suitable large lens adapters
are always desirable. Various types of cameras interfaced with OB vans are described below:-
For much sports coverage we may require portable cabled cameras. These are utilized at start/finish areas for interviews and close-ups of the players. Portable cameras require cable assistants to support the cameraman and to keep cables from causing obstruction, given the restrictions imposed by organizers.
The OB van may also include the required number of COFDM-based MW (microwave) cameras with dockable MW links. These cameras usually have a collinear omnidirectional antenna of 6 dBi gain and operate in the frequency range 2.4 GHz to 2.4835 GHz with an output RF power of less than 1 watt. Such links let the camera move freely without any cable and go near the event for better coverage. The signals are received back in the control room of the van with a MW receiver. These cameras are extremely helpful in the coverage of different kinds of racing and other sport events.
Continuous start-to-finish coverage of an event along a route, such as a marathon, is highly dependent on reliable motorbike- or helicopter-mounted MW cameras, with helicopter turnaround of signals to the production control room.
These are normal digital TV cameras with image stabilizers to ensure shake-free shots. Their signals are fed to a battery-operated COFDM microwave link (typically 2.5 GHz) with auto-tracking directional transmit and receive antennae. Cameras on helicopters generally operate at altitudes between 500 and 1000 feet. It is a boon if these units are equipped with a return audio channel that can be used for communication with the cameraman and the producer at the production control room.
Motorbike-mounted MW cameras are typically hand-held units operated by a pillion rider on the bike. These bikes follow the runners along the entire route and hence are the most important for continuous TV coverage. The bike camera, with a low-power COFDM microwave link, transmits via an omnidirectional antenna and is received at the helicopter flying right above (around 500 feet). The helicopter then turns this camera signal around, using a high-power transmit link, to the production control room. The helicopter may also have another camera on board for aerial shots if required.
The camera positions are determined by the director of TV coverage. The cameras are typically mounted on specially designed platforms whose height is determined by the positional and operational situation.
2) Communication
Good voice communication between the various persons deployed on coverage and the programme producer at the production panel is vital for a good production, and no compromise on its quality is to be made. The communication system should be extensive and reliable, with the capability of interfacing UHF duplex radio-telephone equipment for motorbike and helicopter camera operators as and when required.
The slow-motion server is an integral part of any sports OB van. The server used is usually a 4-channel multi-camera live slow-motion video server, with sufficient storage for slow-motion applications.
VCRs are used for the playback of video inserts and for recording the programme output of the events. OB vans accommodate at least 2 VCRs/decks.
The van is designed to accommodate at least one character generator (CG) with 2D & 3D graphics capabilities. The character generator normally has dual channels, so that all its features can be edited on the preview channel while the programme output is on air.
The OB vans are normally equipped with a 16-input HD-SDI multi-format digital production switcher. The switcher may have HD/SD up/down/cross converters on some of the inputs. The OB van is also supported by at least a 16 x 16 HD-SDI/SD-SDI digital routing switcher.
ACTIVITIES
Observe multi-camera live coverage of an event with an OB van, right from the preparation and rehearsals.
RECAP
Outdoor coverage of any kind is an important part of the production activities of any DD Kendra. In this chapter we have studied the activities relating to the coverage of different types of field events using ENG/EFP equipment and OB vans. Various kinds of cameras for different requirements, and other production tools along with their operation, were also introduced. The importance of studio links and communication equipment, and issues relating to power supply and its distribution, have also been highlighted.
FURTHER READINGS
******
6
VIDEO PROCESSING AND
AUXILIARY EQUIPMENT
INTRODUCTION
Studio centres of Doordarshan are required to generate TV programmes in desired standards. This requires a great deal of conversion and quality control for video, which may arrive in different formats. It is achieved by signal processing using auxiliary equipment in addition to the main equipment.
Auxiliary equipment includes scan converters, standard converters, frame synchronizers, multi
viewers, routing switchers, distribution amplifiers, monitoring devices, audio processors and
delay units.
OBJECTIVES
To provide a bi-level sync signal for analog equipment. This reference signal is commonly known as black or black burst (BB).
In some high-definition (HDTV) applications a tri-level sync signal is required, which is virtually identical in role to the synchronization signal used in a component analogue video or SDTV setup. So the SPG is required to provide this reference signal as well.
It must have provision for a standard colour bar signal in SD/HD format.
HDTV has brought a number of new concepts and technologies with it. The concept of tri-level sync solves some traditional problems found with bi-level sync, as described below:
The negative-going leading edge of the bi-level sync pulse (Fig. 1) is used to trigger the synchronization process for the regeneration of sync, which is frequently required in video processing. As one can see, it is a pulse having two voltage levels (a high and a low level). Its DC offset disturbs the DC level of the composite video signal and can affect the brightness level of the picture. To avoid this shift in DC level, analog video needs DC clamping every time it passes through a coupling capacitor. Figure 2 shows the video with black-level clamping at 0 volts to avoid the effects of DC offset on the video signal.
Fig. 1: Bi-level sync, showing the leading edge and the DC offset that requires clamping at black level    Fig. 2: Video signal with black-level clamping
TRI-LEVEL SYNC
Fig. 3 shows a graphic representation of a tri-level sync signal. The pulse starts at zero volts (the specified black level); the first transition is negative, to -300 mV. After a specified period it changes to +300 mV, holds for a specified period, and then returns to zero (black level). The display system "looks" for the zero crossing of the sync pulse. This symmetry of design results in a net DC value of zero volts, which is one major advantage of tri-level sync: it solves the problem of a bi-level signal introducing a DC component into the video signal. The elimination of the DC offset makes signal processing easier, and the zero crossing provides a more robust way to identify the arrival of synchronization in the signal chain. In order to provide more precise synchronization and relative timing of the three component video signals, HDTV component video has sync present on all three channels. Fig. 4 shows the relationship of a tri-level sync signal to a properly timed bi-level sync signal.
Fig. 3: Tri-level sync    Fig. 4: Sync trigger for bi-level & tri-level sync
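The zero-net-DC property described above is easy to check numerically. The sketch below uses assumed, simplified sample counts (roughly one sample per microsecond of a 64 µs line), not broadcast-accurate timing:

```python
# A quick numeric check of the DC content of the two sync shapes.
# Sample counts are assumed for illustration, not broadcast-accurate timing.

def mean_mv(samples):
    """Average (DC) level of a pulse train in millivolts."""
    return sum(samples) / len(samples)

# Bi-level sync: a short excursion to -300 mV within the line.
bi_level = [-300] * 5 + [0] * 59

# Tri-level sync: equal negative and positive excursions around 0 mV.
tri_level = [-300] * 3 + [300] * 3 + [0] * 58

print(round(mean_mv(bi_level), 1))   # negative: a net DC offset remains
print(round(mean_mv(tri_level), 1))  # 0.0: the symmetry cancels the DC
```

The symmetric excursions sum to zero regardless of where the pulse sits in the line, which is exactly why tri-level sync leaves the video's DC level undisturbed.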
Because of its importance, two such SPG units are installed in any studio setup, with automatic changeover in case of failure of one unit. The SPG provides the following outputs/pulses:
Line drive (H)
Field drive (V)
Mixed blanking (A)
Mixed sync (S)
Colour subcarrier (SC)
Burst insertion pulse (BG or K)
PAL phase ident pulses (P)
Black burst (BB)
Colour bar signal (CB)
Black burst (BB) is a combination of all the pulses described above except CB. It is usually fed to all video-generating equipment to achieve synchronism. The sync pulse generator can work as a stand-alone unit, and can be locked to an external reference input if required.
(Fig. 5: Sync pulse generator block diagram: the genlock reference input (REF IN) feeds word clock, sync and framing extractors; a sync manager drives the black burst generator, tri-level sync generator, word clock output and timecode generator, with an LTC input.)
Sync pulse generators also provide test signals, both for traditional analog studios and for mixed digital and analog facilities. This includes AES/EBU digital audio test signals; the frequency and level of these AES/EBU unbalanced outputs are adjustable.
(Fig. 6: Use of different pulses in a PAL encoder in a video camera for generating a colour composite video signal (CCVS): the R, G, B outputs from the camera are matrixed to Y, R-Y and B-Y; the colour-difference signals modulate the subcarrier (SC) under PAL-ident control, the burst is inserted at +135°/+225° using the BG pulse, and an adder combines Y, the modulated chroma and burst with H & V sync to form the CCVS.)
(Figure: One line of the composite video signal: video 0.7 V above blanking, sync tip 0.3 V below; sync width 4.7 µs, colour burst on the 5.8 µs back porch, H blanking 12 µs, active period 52 µs, total H period 64 µs.)
GENLOCK
For any TV production one may require switching or mixing between different picture sources. This is possible only if the sources are properly synchronized with respect to a studio reference generated by the installed sync pulse generator (SPG). Every source needs a timing accuracy of 50 ns to 200 ns in H phase, and of 1.5 to 5 degrees in SC phase, with respect to the studio sync. Since outside feeds come from different places with their own timings, they are not in sync with the studio; we need to process them so as to synchronize them with the reference. Sources whose sync is coincident with the station sync are called synchronous, while others, having their own independent sync, are called non-synchronous. Often in a production it becomes necessary to mix two sources whose waveforms are not synchronized. This is not possible until the local SPG has been synchronized with the external source, so that the locally produced signals arrive at the mixer in synchronization with it. When this occurs, captions and credits produced locally can be superimposed on external sources such as outstation feeds and OBs. For non-synchronous sources mixing is not possible, and the signal can only be cut to another source.
Such a cut causes a visible disturbance in the outgoing sync pulses, producing frame rolls on monitors and servo disturbance in any VCR machine being used for recording.
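To get a feel for how tight the SC-phase tolerance quoted above is, the sketch below converts it into time, assuming the PAL subcarrier frequency of 4.43361875 MHz (the H-phase tolerance is already stated in nanoseconds):

```python
# Converting an SC-phase tolerance into a timing tolerance, assuming the
# PAL subcarrier frequency of 4.43361875 MHz.

SC_FREQ_HZ = 4.43361875e6
SC_PERIOD_NS = 1e9 / SC_FREQ_HZ            # one subcarrier cycle, ~225.5 ns

def sc_phase_to_ns(degrees):
    """Timing error corresponding to a subcarrier phase error."""
    return SC_PERIOD_NS * degrees / 360.0

print(round(sc_phase_to_ns(1.5), 2))       # ~0.94 ns
print(round(sc_phase_to_ns(5.0), 2))       # ~3.13 ns
```

A few degrees of SC phase therefore corresponds to only a nanosecond or so of timing error, far tighter than the 50-200 ns H-phase requirement.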
(Figure: Synchronizing two video sources to the studio reference: each source passes through an SC phase shifter and H delay, adjusted against the SPG, before the vision mixer produces the video output.)
(Fig. 9: Frame synchronizer block diagram: each equalized SDI input feeds a video synchronizer, proc amp, pattern & caption inserter, delay, blanking and an auxiliary multiplexer towards the SDI outputs; the 16-channel embedded audio is de-multiplexed, processed, sample-rate converted, re-aligned and re-multiplexed in pairs, with Dolby/non-PCM handling, channel status insertion and a tone generator; an adjustable bi- and tri-level reference input and a test pattern generator are also provided.)
To overcome this problem, the SPG provides a genlock facility, which allows the master oscillator to lock to the incoming video source and synchronize itself with the external signal. A genlock facility, available as an option in the video sources at a TV studio, likewise provides proper adjustment, with variable delays in H and SC timing, to synchronize them with respect to the reference.
A digital unit called a frame synchronizer (FS) can also be used for this purpose, especially for equipment which doesn't have a built-in genlock facility.
These units are now frequently used, especially for outstation/OB feeds. Present-day frame synchronizers accept video inputs in different formats, such as 3 Gbps HD-SDI, 1.5 Gbps HD-SDI or 270 Mbps SD-SDI with embedded audio, and process them. These units provide many adjustment possibilities while processing video and audio in the digital domain, including variable audio delays, levels, gains and timings with respect to the reference (Fig. 9).
A TO D AND D TO A CONVERTERS
An ADC is defined by its bandwidth and its signal to noise ratio. The actual frequency range of
an ADC is characterized by its sampling rate. The dynamic range of an ADC is influenced by
many factors, including the number of output levels it can quantize to, linearity and accuracy. If
an ADC operates at a sampling rate greater than twice the bandwidth of the signal, then perfect
reconstruction is possible neglecting the quantization error. The inverse operation is performed
by a digital-to-analog converter (DAC).
A digital-to-analog converter (DAC, D/A, DA or D-to-A) converts a digital data signal into an analog signal. Signals are preferably processed in digital form because they can be easily transmitted, manipulated and stored without degradation. Finally, a DAC is needed to convert the digital signal back to analog, in order to drive audio monitors in the case of audio and video monitors in the case of video signals.
These two applications use DACs at opposite ends of the speed/resolution trade-off: the audio DAC is a low-speed, high-resolution type, while the video DAC is a high-speed, low-to-medium resolution type.
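The dynamic-range point above is often summarised by the ideal quantization SNR of an N-bit converter, approximately 6.02N + 1.76 dB for a full-scale sine wave. A quick sketch:

```python
# The ideal quantization SNR of an N-bit converter for a full-scale sine
# wave, SNR ~ 6.02*N + 1.76 dB: each extra bit buys about 6 dB of range.

def ideal_snr_db(bits):
    """Theoretical best-case SNR of an ideal N-bit quantizer, in dB."""
    return 6.02 * bits + 1.76

for bits in (8, 10, 12):
    print(bits, "bits ->", round(ideal_snr_db(bits), 2), "dB")
```

This is the theoretical ceiling; real converters fall short of it because of linearity and accuracy limitations, which is why those factors are listed above alongside the number of output levels.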
One may require a particular video source at several locations in a TV studio, for monitoring, recording, alignment etc. This is achieved by a unit called a video distribution amplifier (VDA). It takes one video source as input and provides multiple outputs. The outputs have low impedance to avoid loading. Most VDAs also include built-in equalization and hum suppression. Some VDAs with dual inputs are also available; they provide multiple outputs for each input separately. The number of outputs may vary between 3 and 8, for distribution of 3 Gbps or 1.5 Gbps HD-SDI, or 270 Mbps SD-SDI, signals in a single package.
(Figure: SDI video distribution amplifier: the digital video input, 270 Mbps SD-SDI or 1.5/3 Gbps HD-SDI, is equalized and re-clocked (with a bypass and selectable re-clocking rate) before distribution to multiple SDI outputs.)
A VDA usually provides inbuilt cable equalisation of about 80 m for 3 Gbps HD and 180 m for 1.5 Gbps HD inputs. These units also have provision for re-clocking the input to clean it up before distribution. This makes a VDA ideal for distribution applications.
The application of digital audio distribution amplifiers (ADAs) is similar to that of the VDA described above. Digital ADAs can receive and handle digital audio over AES cable up to 150 m (balanced inputs) or 500 m (unbalanced inputs). This is possible because of built-in corrections and re-clocking. Detection of the 32, 44.1, 48 and 96 kHz sampling rates in use can be automatic during processing. They are also configurable for multiple re-clocked outputs. Balanced and unbalanced I/Os are usually available simultaneously, along with channel status monitoring. Important specifications for a digital ADA are given below:
NOISE REDUCTION
Noise reduction utilizes the power of digital processing of video signals. Advanced noise-reduction techniques are now available to clean up images and make them virtually error-free. These include temporal recursive, median and adaptive horizontal filtering for a clean output. In some equipment this facility is available as a built-in option, for example standard converters with built-in noise reducers.
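As a minimal illustration of temporal recursive filtering, one of the techniques named above, each output pixel can be a weighted mix of the current frame and the previous filtered frame. The weight `alpha` is an assumed strength setting, not a value from the text:

```python
# A minimal sketch of temporal recursive filtering: each output pixel is a
# weighted mix of the current frame and the previous filtered frame.
# The weight alpha is an assumed setting for illustration.

def temporal_recursive(frames, alpha=0.5):
    """Filter a sequence of frames (each frame a list of pixel values)."""
    prev = frames[0]
    out = [prev]
    for frame in frames[1:]:
        prev = [alpha * p + (1 - alpha) * q for p, q in zip(frame, prev)]
        out.append(prev)
    return out

# A static scene with noisy pixel values: the filtered output settles
# toward the true level instead of flickering frame to frame.
noisy = [[100], [110], [90], [105]]
print([round(f[0], 1) for f in temporal_recursive(noisy)])
```

Real noise reducers make this motion-adaptive, reducing `alpha`'s smoothing where the picture is moving so that moving detail does not smear.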
LOGO INSERTER
These units provide logo generation and insertion in HD/SD-SDI video streams. Logo inserters are capable of adding multiple (HD/SD) static colour logos into the SDI stream at any point within the active picture.
User-defined logos loaded directly from a PC network, with support for TGA, TIFF or BMP files.
In-built test pattern generator.
Input loss detection.
Input pass-through.
Built-in multiple user and logo memories.
(Figure: Logo inserter: the SD/HD-SDI input is equalized and its standard detected; the logo keyer inserts the stored logo before audio embedding and CRC insertion at the SDI outputs, with logo store, sequencing, GPI and network control.)
KEYERS
Stand-alone keyers enable images or logos to be added to the stream prior to transmission, for both HD-SDI and SD-SDI signals. They provide both linear and luminance keying with automatic fade up/down capability. The unit also provides a dedicated program output along with selectable preview/program outputs, which include a clean feed option. Keyers are used to key scrolling text, a clock or any other source as a layer on video.
UP/DOWN/CROSS CONVERTERS
These units convert from any SD/HD-SDI standard to any other SD/HD-SDI standard. This may include 1080 and 720 with 23/24/25/29p frame rates. They can also create simultaneous HD and SD outputs from one input source. One can also select frame synchronization or a bypass on the primary SDI inputs. Powerful picture-enhancement tools, such as a noise reducer and enhancer, may also be available inside the unit as options.
(Figure: Up/down/cross converter: the SDI input is de-embedded, the video and audio are processed separately under conversion control, and the converted video with re-embedded audio appears at the SDI output.)
The conversion method mostly used is motion-adaptive, of broadcast quality. Such units can handle multiple inputs, such as SD and HD digital video and even analog video, with analog and AES digital audio. Some units also have built-in frame synchronization, audio channel routing, delay and level controls etc.
AUDIO EMBEDDER
In component SDI, there is provision for ancillary data packets which can be sent during
horizontal blanking. There is capacity for up to 16 audio channels sent in four groups.
The data content of the AES/EBU digital audio subframe consists of the V, U & C (validity, user & channel status) bits, a 20-bit sample and four auxiliary bits which may optionally be appended to the main sample to produce a 24-bit sample. The AES recommends sampling rates of 48, 44.1 and 32 kHz, but the interface permits variable sampling rates.
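The 20-bit-plus-auxiliary structure can be sketched as a simple bit-packing exercise. The field order below is an assumption for illustration, not the exact AES/EBU subframe bit layout:

```python
# Illustrative packing of the 24-bit audio word described above: a 20-bit
# main sample with 4 auxiliary bits appended. The field order here is an
# assumption, not the exact AES/EBU subframe bit layout.

def pack_sample(main_20bit, aux_4bit):
    """Combine a 20-bit sample and 4 auxiliary bits into a 24-bit word."""
    assert 0 <= main_20bit < (1 << 20) and 0 <= aux_4bit < (1 << 4)
    return (main_20bit << 4) | aux_4bit

word = pack_sample(0x81234, 0x5)
print(hex(word))  # 0x812345 (24 bits: 20-bit sample plus 4 aux bits)
```

When the auxiliary bits are used to extend the sample, the result behaves as a single 24-bit value, which is why the interface can carry either 20-bit or 24-bit audio.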
Television production and post-production still prefer separate audio alongside SDI video. This offers a lot of operational convenience, especially when mixing sound from various sources in various formats. Only once the production is finalized is the sound embedded in the
SDI stream. This is achieved by a unit called an audio embedder. This unit has the following features:
The aspect ratio of a video is the ratio between the width and the height of the image. The most widely used video aspect ratios are 4:3 and 16:9. Aspect ratio converters (ARCs) are used especially when a 16:9 TV production needs to use inserts of file footage from video with a 4:3 aspect ratio.
An ARC converts the aspect ratio to match or suit the required output. As an example, many modern switchers can work with multi-format inputs; for an HDTV production one can match the aspect ratios of the video clips in multiple formats to be used in that production. An ARC with 270 Mbps SD-SDI input and output signals can provide 4:3, 14:9, 16:9, letterbox, pillar box or full-frame images as per requirement within the same format.
(Fig. 14: Aspect Ratio Converter (ARC): the SDI input, 270 Mbps SD-SDI or 1.5/3 Gbps HD-SDI, passes through an input amplifier, proc amp, interpolator & enhancer, formatter and multiplexer to the SDI outputs, with EDH check, active window processing, a monitoring output, GPI control and a CPU.)
To change the aspect ratio, the picture has to be reframed (with loss of data when the sides are cut, using a pan control etc.), stretched or compressed in either or both of the horizontal and vertical planes. To stretch or compress the picture, data has to be interpolated from adjacent pixels.
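The letterbox/pillar-box arithmetic can be sketched as follows, under a square-pixel assumption (real SD formats use non-square pixels, so actual ARCs scale slightly differently):

```python
# Fitting one aspect ratio inside another, as an ARC does for pillar box
# and letterbox output. Square pixels are assumed for illustration.

def fit(src_w, src_h, out_w, out_h):
    """Fit a source aspect in an output frame; return active size and bars."""
    scale = min(out_w / src_w, out_h / src_h)
    active_w, active_h = round(src_w * scale), round(src_h * scale)
    return active_w, active_h, (out_w - active_w) // 2, (out_h - active_h) // 2

# 4:3 material in a 1920x1080 frame -> pillar box (bars at the sides):
print(fit(4, 3, 1920, 1080))   # (1440, 1080, 240, 0)
# 16:9 material in a 1440x1080 (4:3) frame -> letterbox (bars top/bottom):
print(fit(16, 9, 1440, 1080))  # (1440, 810, 0, 135)
```

The bar widths fall directly out of the scale factor, which is why an ARC can offer fixed presets such as 14:9 as compromises between full letterbox and side-cut.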
ROUTING SWITCHERS
Routing switchers allow multiple signal sources to be routed to different destinations without moving input and output cables. They can handle a variety of different video and audio formats. Routers are normally described by their number of inputs and outputs, e.g. 2x1 or 256x256. They are available in different sizes: a size of 32x32 means the unit can be wired for 32 video sources as inputs and 32 destinations as video outputs. These switchers are quite useful and have almost entirely replaced the operationally inconvenient patch panels which were used earlier for routing signals in TV studios. The signal format that the router transports can be anything from analogue composite video to SD-SDI or HD-SDI. Some routers have the ability to internally convert digital to analog and analog to digital.
Because any of the sources can be routed to any destination, the router is internally arranged as a matrix of crosspoints which can be activated to pass the corresponding source signal to the desired destination. Many types of broadcast automation system can control a video router via IP or serial communications, such as the RS-422 / 9-pin protocol. Routers can also operate as standalone switchers with local or remote control.
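The crosspoint idea can be sketched as a small data structure. The class and method names here are illustrative, not a real router control protocol:

```python
# A small sketch of a router's crosspoint map: any input can feed any
# output, and one source can feed many outputs at once. Sizes and method
# names are illustrative, not a real router control protocol.

class Router:
    def __init__(self, inputs, outputs):
        self.inputs, self.outputs = inputs, outputs
        self.crosspoint = {}              # destination -> routed source

    def take(self, src, dst):
        """Activate the crosspoint routing input src to output dst."""
        if not (0 <= src < self.inputs and 0 <= dst < self.outputs):
            raise ValueError("source or destination out of range")
        self.crosspoint[dst] = src

    def status(self, dst):
        """Which source currently feeds this destination (None if none)."""
        return self.crosspoint.get(dst)

router = Router(32, 32)                    # a 32x32 router
router.take(5, 0)                          # input 5 to output 0
router.take(5, 7)                          # the same source to output 7
print(router.status(0), router.status(7))  # 5 5
```

Note the asymmetry: each output carries exactly one source at a time, but one source may appear on any number of outputs, which is what distinguishes a router from a simple patch.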
AUDIO PROCESSORS
Audio processors are introduced to control the audio levels of multiple inputs, say between the playback of commercial spots and programme output from different studios, just before transmission or uplinking to a satellite. Audio processors also facilitate noise removal, the insertion of required delays for audio synchronization, and the filters and equalizers desired for improving sound clarity. The main reason for inserting audio delay is the different processing times required for video and audio: video processing usually takes more time while passing through video equipment such as frame synchronizers, which causes the audio to run ahead of the video. A difference of more
than three frames can be noticed. Hence the audio may require a delay, usually specified in milliseconds or frames, for proper synchronization with the video. One frame of delay is equivalent to 1/25 second. Note that embedded audio has to be retrieved, corrected and then embedded again into the SDI stream before uplinking or transmission.
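The frames-to-milliseconds arithmetic above can be sketched as:

```python
# Audio delay needed to re-time audio against video that has been delayed
# by a few frames of processing, in a 25 frames-per-second system as stated.

FRAME_MS = 1000 / 25                 # one frame = 40 ms at 25 fps

def audio_delay_ms(video_frames_late):
    """Delay to apply to audio when video lags by this many frames."""
    return video_frames_late * FRAME_MS

print(audio_delay_ms(3))             # 120.0 ms for a three-frame offset
```

A three-frame offset is therefore 120 ms, comfortably above the threshold at which lip-sync errors become visible, which is why such delays must be compensated before transmission.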
MULTI-VIEWER
This unit helps to view multiple video sources on a single LCD display for operational convenience and monitoring. Audio level bar graphs and source identification for each source are also included in the display matrix, which can be configured as per requirement.
ACTIVITIES
Visit a TV studio of a Doordarshan Kendra and observe the working of auxiliary equipment
mentioned in this chapter.
Use a video test signal generator and pass its signal of a particular format through a standard converter as well as an ARC connected to a video monitor. Observe the outputs and record the differences between the outputs of these two units, i.e. standard converter and ARC, with different settings.
RECAP
Programme output from a TV studio involves several feeds of pictures in different formats and of different quality. Hence, to maintain uniform quality standards, a studio may need a lot of processing and auxiliary equipment in addition to the main equipment and production tools. This includes multi-format SPGs, distribution devices (VDAs and ADAs), noise reducers, scan converters, standard converters, frame synchronizers, audio embedders, keyers, logo inserters etc. Working in the digital domain makes it possible to process and manipulate such signals efficiently and offers many options.
FURTHER READINGS:
1. The Essential Guide to Video Processing, Bovik, A. (2009), New York: Elsevier.
2. Digital Signal Processing: World Class Designs, Williston, Kenton (2009), Newnes.
3. Digital Signal Processing, Proakis, J.G. (2006), Singapore: Pearson.
4. Video Processing and Communications, Wang, Y. (2002), New Delhi.
******
7
CAMERA OPTICS
INTRODUCTION
Camera optics creates the optical image for presentation to a video camera. Its important parts are the camera lens and the optical assembly, which focus the brightness variations in the image faithfully on the faceplate of the camera sensors to obtain images of good quality.
OBJECTIVES
A video camera can be broadly divided into the following three sections:
The lens used in a video camera depends on the size of its image sensor, i.e. the pickup device. Video cameras usually use image sensors of size 1 inch, 2/3 inch, 1/2 inch or 1/3 inch. Lenses meant for a particular device size can be used only with cameras having that size of device. Fig. 1 below lists the main parts of a typical video camera lens assembly. The optical section of a video camera contains the following sections:
1. Focus section
2. Zoom section
(Fig. 1: A typical video camera zoom lens assembly, showing among its main parts the back focus and macro focus controls.)
1. Focus section
Elementary lens theory states that when the object is at an infinite distance from the focal plane of the lens, its focused image is formed at a distance equal to the focal length of the lens from that focal plane. Infinity, for all practical purposes, is considered to be about ten times the focal length. Focal length is measured in millimetres (mm) and is marked on the lens itself; a camera lens is generally known by its focal length. Fig. 2 suggests that every focal length has an angle of view associated with it. Thus different compositions are possible from a fixed camera position by changing the viewing angle, i.e. the focal length.
2. Zoom ratio
Since we need to have different operational requirements from a video camera, we would prefer
lens with a variable focal length. This lens is called as Zoom lens. A typical ENG/EFP camera
may have a variable focal length varying say between 9 mm to 108 mm. The zoom ratio then
becomes,
Zoom lenses are known by this ratio; in this example the zoom lens will be called an X12 lens. Long shots (wide angle) can be composed by using a shorter focal length, rotating the zoom ring, and close-up shots (narrow angle) by using a longer focal length. One may also note that the viewing angle of the lens is determined by the focal length and the size of the image sensor. It can be represented by:

Angle of view = 2 tan⁻¹ (d / 2f), where d is the relevant dimension of the image sensor and f is the focal length.
Fig. 2: Focal Length and angle of view (Both Horizontal & Vertical)
We have different lenses for different sizes of image sensor for a particular angle of coverage. The table below gives an idea of why a smaller studio should prefer a smaller pickup device for wider angles and close-ups, whereas a larger image sensor is preferred in larger studios for shooting from larger distances.
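The two relations above can be checked numerically. The 9.6 mm sensor width used here is an assumed value for a 16:9 2/3-inch sensor, and the angle formula is the standard thin-lens approximation:

```python
# Numeric sketch of zoom ratio and angle of view. The 9.6 mm sensor width
# is an assumed value for a 16:9 2/3-inch sensor; the angle formula is the
# standard thin-lens approximation, angle = 2 * atan(d / 2f).
import math

def zoom_ratio(f_min_mm, f_max_mm):
    """Ratio of the longest to the shortest focal length."""
    return f_max_mm / f_min_mm

def angle_of_view_deg(sensor_mm, focal_mm):
    """Angle of view for a sensor dimension d and focal length f."""
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_mm)))

print(zoom_ratio(9, 108))                     # 12.0, i.e. an X12 lens
print(round(angle_of_view_deg(9.6, 9), 1))    # wide angle at 9 mm
print(round(angle_of_view_deg(9.6, 108), 1))  # narrow angle at 108 mm
```

The same focal length thus gives a wider angle on a smaller sensor and a narrower angle on a larger one, which is the trade-off the preceding paragraph describes.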
A zoom servo drive is a small motor controlled by a lever (Fig. 1). The pressure with which this lever is pressed determines the speed of the zoom, typically from 2.5 to 20 seconds. The lever is also called a T/W switch: T stands for telephoto and W for wide angle. Zoom operation changes the picture composition from close-up to wide-angle shots, or vice versa, while the camera is on shot. This movement should be steady and smooth, especially during slow zooms. The zoom control is easy to operate and allows simultaneous auto-focusing during zoom operation. The zoom servo also allows the camera person to operate the manual focus and aperture controls as and when required. Although relatively quiet, some zoom servo motors emit a humming noise that is picked up by the camera-mounted microphone. Additionally, the servo drive assembly and associated motors draw power from the camera battery.
(Fig. 3: Lens aperture: the focal length f and the diameter D of the lens opening.)
4. Aperture control
The light entering the camera needs to be controlled as per the lighting conditions on location. The mechanism which provides this control is called the aperture control, or iris. The opening of the lens through which light enters is controlled by collapsible fins inside the lens, which change the diameter D of the lens opening (Fig. 3). This control can be either manual or automatic. Since the cameraman has to control focus and zoom with his two hands, the third variable, the iris, is preferred in auto mode most of the time.
As we know, reflected light from the object forms the image on the sensor, and its intensity is inversely proportional to the focal length: a shorter focal length gathers more light, from a wider region of view. Also, the light entering the camera is proportional to the area of the lens opening through the
fins. So a larger opening area of the lens, i.e. a higher value of D, will also cause more light to enter the camera.
Hence,

Exposure of light to the camera image sensor ∝ (area of lens opening through fins) / (focal length)

The f-stop number indicates the stopping of light and is the inverse of the exposure. It is given by:

f-stop number = (focal length) / (diameter of lens opening through fins)
We are interested in the exposure of light to the camera image sensor, which depends on the diameter of the lens opening as well as on the focal length (and thereby on the distance of the object to be focused). Hence the f-stop is the real measure of light falling on the image sensor. Please note: the higher the f-stop number, the smaller the lens opening. The lowest f-stop number indicates maximum exposure and is also called the speed of the lens (LS). It is usually a number which does not fit the f-stop series marked on the lens aperture ring:

LS, 2, 2.8, 4, 5.6, 8, 11, 16, 22
Here LS is the lens speed; its typical values are 1.4 or 1.7 etc. One may note that the next number in this series is found by multiplying the previous number by √2, so every two steps multiply the number by 2. A change of one f-stop in either direction gives a change in exposure by a factor of two. This changes the video level in steps of 6 dB, i.e. the video level either doubles or halves.
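The series can be generated by repeatedly multiplying by √2; note how every second step doubles the number exactly, and that marked rings round the in-between values further:

```python
# Generating the f-stop series by repeated multiplication by sqrt(2):
# every step halves the light, and every two steps exactly double the number.
import math

def f_stop_series(lens_speed, stops=8):
    """f-stop numbers starting from the lens speed (maximum opening)."""
    series = [lens_speed]
    for _ in range(stops):
        series.append(series[-1] * math.sqrt(2))
    return [round(n, 1) for n in series]

# Marked aperture rings round these further (7.9 -> 8, 11.2 -> 11, etc.):
print(f_stop_series(1.4))  # [1.4, 2.0, 2.8, 4.0, 5.6, 7.9, 11.2, 15.8, 22.4]
```

The √2 factor comes from the exposure depending on the area of the opening: doubling the area (one stop) changes the diameter, and hence the f-number, by only √2.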
6. Macro focus
As shown in Fig. 1, adjacent to the back focus adjusting screw is another ring, generally kept locked, called the macro control. It is unlocked and used for focusing when working with very small objects by shooting close to them (small insects, postage stamps etc.). This control helps to get such tiny images into focus. One has to be at extreme wide angle, and the zoom action becomes ineffective during macro shooting. One must return the macro ring to its normal marking after the macro assignment is complete.
ZOOM OPERATIONS
While performing a zoom operation, though f is varying, the f-stop remains constant because the diameter of the lens opening is readjusted automatically. Also, when focused in the close-up position, the zoom operation
65
Induction Course (Television)
should not affect focusing when it is moved to long shot. If it happens then it requires a back
focus adjustment on the lens. This adjustment to focus on long shot is to be done by the back
focus ring by unlocking it, (Fig 1) adjusting it and then locking it again. Front focus is not touched
for this adjustment on long shot. The back focus should be adjusted repeatedly by loosening the
screw of back focus assembly (otherwise kept tight) and adjusting back focal length in zoom out
condition. Main focus once done during zoom in condition need not be disturbed during the
entire zoom operation.
DEPTH OF FIELD
Depth of field in a picture is the distance between the nearest and farthest objects in focus. A
large depth of field is preferred in most pictures. It can be increased by using a higher f-stop
number (smaller aperture), a shorter focal length or a greater distance from the subject.
While working in news or presentation studios we may not require much depth of field and can
afford to work at f/2.8 or so. But the same is not true for large studios with bigger sets and
moving artists, which may demand working at a higher f-stop, and thus more lighting.
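The effect of the f-stop on depth of field can be sketched with the standard hyperfocal-distance approximation. The function below is illustrative (the name and the 0.03 mm circle of confusion are our assumptions, not figures from the text):

```python
def depth_of_field(f_mm, f_stop, subject_mm, coc_mm=0.03):
    """Near/far limits of acceptable focus (hyperfocal approximation)."""
    h = f_mm ** 2 / (f_stop * coc_mm) + f_mm          # hyperfocal distance
    near = h * subject_mm / (h + (subject_mm - f_mm))
    far = h * subject_mm / (h - (subject_mm - f_mm))
    return near, far

# 50 mm lens focused at 3 m: stopping down from f/2.8 to f/8
# clearly widens the zone of acceptable focus.
n1, f1 = depth_of_field(50, 2.8, 3000)
n2, f2 = depth_of_field(50, 8, 3000)
print(round(f1 - n1), "mm vs", round(f2 - n2), "mm")
```

This matches the text: a news studio can afford f/2.8's shallow zone, while a big set with moving artists needs a higher f-stop (and hence more light) to keep everyone in focus.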
CHROMATIC ABERRATION
Chromatic aberration is caused by the failure of the lens to focus different colours to the same
spot. These aberrations arise because the refractive index of glass varies with wavelength. Since
light occupies the visible range of the electromagnetic spectrum, each colour (in a camera: red,
green and blue) has a different wavelength. Light is therefore refracted differently for each colour,
resulting in a different image plane for each colour. This phenomenon is more noticeable in lenses
with longer focal lengths, and results in deterioration of the edges of the image.
Recent advances have made it possible to reduce chromatic aberration by combining a series of
lens elements with converging and diverging refraction characteristics, and by using special
coatings on the lenses, to offset the aberration and focus the image accordingly.
FLARE
Flare is a phenomenon that is likely to occur when strong light passes through the camera lens.
Flare is caused by numerous diffused reflections of the incoming light inside the lens. This
results in the black level of each of the red, green and blue channels being raised, and/or
inaccurate colour balance between the three channels. On a video monitor, flare causes the
picture to appear as a misty image, sometimes with a colour shade. In order to minimize the
effects of flare,
professional video cameras are provided with a flare adjustment function, which optimizes the
pedestal level and corrects the balance between the three channels electronically.
The filters on the camera filter wheel may vary slightly from camera to camera. The proper
selection of a filter from the choices available on the wheel depends on the lighting conditions on
location. The selected filter then facilitates white-balance adjustment of the video camera. Fine
adjustment of white balance is done by means of white-balance operations, with the help of the
automatics in the camera electronics.
2) Lens mount
Lens mount is an arrangement to connect the lens to the camera. It is usually of two types:
a) Bayonet type
Bayonet mounts generally have a number of tabs (often three) around the base of the
lens, which fit into appropriately sized recesses in the lens mounting plate on the front
of the camera. The tabs are often "keyed" in some way to ensure that the lens is only
inserted in one orientation, often by making one tab a different size. Once inserted the
lens is fastened by turning it slightly. It is then locked in place by a spring-loaded pin,
which can be operated to remove the lens.
b) C – Type
3) Dichroic/optical unit
This block is also called beam splitter. It splits the incoming light into three beams i.e. red, green
and blue. When incoming light reaches the first dichroic mirror DM-1, only blue is reflected while
the green and red wavelengths pass through. Similarly, DM-2 reflects red and passes green,
which is collected by the G-channel image sensor. The reflected red and blue beams are passed
on to their respective image sensors via fully reflecting mirrors.
Many camcorders include some form of image stabilization technology. Image stabilization is
especially important in camcorders that have long optical zoom lenses. When a lens is zoomed
in to its maximum magnification, it becomes extremely sensitive to even the slightest jerks and
motion. There are two major forms of image stabilization, optical and digital.
Optical
Camera lenses with optical image stabilization typically feature tiny gyro-sensors inside the
lens that quickly shift elements of the lens glass to offset your motion. An image
stabilization technology is considered "optical" if it features a moving element inside the
lens. Some manufacturers provide an on/off control for the optical image stabilization feature.
Digital
Unlike optical systems, digital image stabilization uses software technology to reduce the
impact of shaky hands on your video. Depending on the model, this can be accomplished
in several ways. Some units will calculate the impact of camera movement and use that
data to adjust which pixels on the camcorder's image sensor are to be used.
A wide angle lens is a powerful tool for exaggerating depth and relative size in a photo.
However, it's also one of the most difficult types of lenses to learn how to use.
If we shoot very close to an object with a wide-angle lens, objects nearer to the lens appear much
bigger than objects farther away. A lens is generally considered to be "wide angle" when its focal
length is less than around 35 mm.
a) Object very near to the wide angle lens b) Resulting picture, edge looking much bigger than the rest
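The perspective exaggeration described above can be sketched with a simple pinhole-camera model (an illustrative approximation; the function name and the example figures are our own):

```python
def image_height_mm(f_mm, object_height_m, distance_m):
    # Pinhole model: image size is proportional to focal length
    # and inversely proportional to subject distance.
    return f_mm * object_height_m / distance_m

# Two equally tall objects shot with a 24 mm wide-angle lens:
near = image_height_mm(24, 1.0, 0.5)   # 0.5 m from the lens
far = image_height_mm(24, 1.0, 5.0)    # 5 m from the lens
print(near / far)   # the near object appears about 10x bigger
```

The size ratio depends only on the ratio of distances; the wide angle matters because it lets the camera get very close to the near object while still keeping the background in frame.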
LENS CLEANING
Lenses and optical blocks are expensive items of video cameras and one of their worst enemies
is dust. Care must be taken to ensure that they are kept free from dust by capping the lens
when not in use. Lens or optical block surfaces should never be touched by hand or cleaned
with a handkerchief. Use the following procedure for cleaning the lenses:
Always
Brush off the loose dust with a soft lens brush and an air blower.
For removing grease, finger marks, etc., use pure alcohol. To test its purity, put a drop on
a glass surface: pure alcohol will evaporate quickly without leaving any residue. The
alcohol should be applied with a suitable lint-free cloth.
To ensure that the alcohol does not dissolve grease on your hands and re-deposit it on the
glass surface, wash your hands or wear clean rubber gloves while cleaning lenses.
Never
ACTIVITIES
i) Remove and reaffix the lens assembly of different types of camera available in a television
studio.
ii) Compose the various shots in a TV camera at different focal length and study the
relationship of focal length with composition.
iii) Do the back-focus adjustment on a TV camera, then try the entire zoom operation and
check for continuous focusing throughout the zoom shot.
iv) Try to shoot objects very near to the camera by using macro focus.
RECAP
All video cameras use zoom lenses with variable focal length. Aperture and focusing are
the important operational controls of the lens. These controls help obtain the desired
compositions and a sharp image on the camera sensors.
The size of the optical block decides the resolution of the camera; professional television
cameras normally use 1/2" or 2/3" image sensors.
Digital image stabilization is usually less effective than optical stabilization.
The lenses of these high-performance video cameras warrant careful handling. Special
precautions are required during routine maintenance to avoid damage to the lens.
FURTHER READINGS
******
8
VIDEO CAMERA &
SUPPORT SYSTEM
INTRODUCTION
Video camera is a basic source of picture for television. Visible spectrum of white light radiation
is split into three primary colours Red, Green & Blue in a video camera. These primary colours
R, G & B are then converted into electrical signals and processed to get a composite video
signal in electrical form. In this chapter we will further study the basics of video camera and its
operation procedures.
OBJECTIVES
PICK - UP DEVICES
Fig. 1a: Camera prism assy.
Pick-up devices are made of the following materials:
d) CMOS devices
In a CMOS sensor each pixel performs its own charge-to-voltage conversion, and the sensor
often also includes amplifiers, noise-correction and digitization circuits.
Both CCD and CMOS types of sensor accomplish the same task of capturing light and
converting it into electrical signals. Each cell of a CCD image sensor is an analog device. When
light strikes the chip it is held as a small electrical charge in each photo sensor. The charges are
converted to voltage, one pixel at a time as they are read from the chip. Additional circuitry in the
camera converts the voltage into digital information.
CMOS sensors can potentially be implemented with fewer components, use less power, and/or
provide faster readout than CCD sensors. CCD is a more mature technology though CMOS
sensors are less expensive to manufacture than CCD sensors.
CCD BASICS
In order to understand the working of a CCD chip consider its construction as shown in Fig. 2.
Initially with V=0, there will be an even distribution of holes (majority carriers) in the substrate
doped with P type of impurities.
Fig. 2: Basic CCD cell: an electrode at potential V above an oxide (insulating) layer on the p-type substrate.
If V is now increased to 10 volts, free holes are repelled deeper into the substrate and a
depletion layer is formed below the electrode (Fig. 3a). The potential within this depletion layer
is highest at the surface, and decreases with depth.
Fig. 3: (a) Depletion layer formed under the electrode at +10 V; (b) photo-generated electrons form an inversion layer and the depletion layer is reduced.
When light falls on such a device, electron-hole pairs are formed in the substrate. The number
of electron-hole pairs thus generated is proportional to the amount of light, i.e. to the optical
image. Since the substrate is p-type, the holes formed by the light merge with the majority
carriers already present and can therefore be ignored. But the photo-generated electrons are
attracted by the positive potential, forming an inversion layer dominated by electrons just below
the electrode. Recombination of electron-hole pairs cannot occur as there are no free holes in
the inversion and depletion layers. Also, the negative charge of the electrons causes the
potential at the semiconductor surface to drop. This in turn reduces the depth of the depletion
layer (Fig. 3b). You may also note that this mechanism sets a theoretical limit on the storage
capacity of a CCD, since the depletion layer is necessary to prevent recombination.
Charge coupling
Now it is important to convert the charge packets into an output voltage. The process by which
charge packets are moved through the device and eventually delivered to the amplifier is known
as charge coupling (sometimes charge transfer). Fig. 4 shows this process in three stages.
When V2 goes positive a depletion layer is formed in the usual way, so that as V2 becomes
more positive than V1, the charge packet will move to the new site without encountering any free
holes in the process.
This process of coupling the charge between adjacent electrodes continues until the whole
charge image is routed to the output. It would be impractical to control every electrode
individually as this would require far too many connections to the device. One solution is to
connect every 3rd electrode together and drive them with a 3-phase clock signal, as shown in
Fig. 5.
Fig. 4: Charge coupling: as V2 is raised above V1, the charge packet moves from under electrode V1 to under electrode V2.
One of the clock phases is held high and the other two are held low during the charge collection
and storage period. In this way charge image is built up under the ‘on’ electrodes. Switching on
the three phase clocks will then shift the charge packets through the device towards the on chip
amplifier. It is important to notice that with the 3-phase clock system, 3 electrodes are required
for every picture element.
Fig. 5: Every third electrode connected together and driven by one phase of the 3-phase clock.
Charge detection
In order to convert charge image to a more convenient form, the charge packets are passed on
to an on-chip capacitor. Using the relationship V = Q/C gives the output voltage corresponding
to the optical image, where V is the voltage, Q the charge and C the capacitance. Fig. 6 shows
the basic principle of charge detection.
In order to understand its working, one can note that between charge packets, capacitor C is
charged to VRESET. The next charge packet is then dumped onto C, partially discharging it and
resulting in a signal voltage. C must be very small (approx. 0.1 pF) so that a reasonable
signal voltage is developed (approx. 100 mV). The buffer stage is essential to screen C from
external capacitive loading. The output voltage thus developed across the capacitor is
proportional to the optical image or light falling on it.
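The V = Q/C relationship can be checked numerically. The capacitor value comes from the text; the electron count is an illustrative choice of our own:

```python
E_CHARGE = 1.602e-19   # charge of one electron, coulombs
C_SENSE = 0.1e-12      # sense capacitor, approx. 0.1 pF (from the text)

def output_voltage(n_electrons):
    return n_electrons * E_CHARGE / C_SENSE   # V = Q / C

# A packet of roughly 62,000 electrons develops about the 100 mV
# signal level mentioned above.
print(round(output_voltage(62_000) * 1e3, 1), "mV")
```

This also shows why C must be so small: with a larger capacitor the same packet would produce a proportionally smaller, harder-to-detect voltage step.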
There are three different types of CCD chips used as pick-up devices in CCD cameras, namely
interline transfer (IT), frame transfer (FT) and frame interline transfer (FIT). The only difference
between these types is the way the charge is collected or transferred. CCD cameras with good
resolution offer about 400,000 pixels or CCD elements.
Interline transfer (IT) type CCD
The IT type CCD consists of a light-receiving device (a kind of photodiode), a vertical transfer
CCD and a horizontal transfer CCD. The light-receiving device converts light into electrical
signals. Thus the photosensitive and the storage sections are interleaved in this type. For the
transfer of charge, during the vertical blanking period the charges are first transferred to the
vertical transfer CCD (1) and during horizontal blanking they are transferred to the horizontal
transfer CCD (2) for each scanning line (1H) in sequence. The charges transferred to the
horizontal transfer CCD are transferred at horizontal scanning speed to the signal detector
where they are converted into a voltage
Frame transfer (FT) type CCD
These chips are bigger, almost twice the size, to accommodate an additional storage area. The
upper section of the chip is the image section and the lower the storage section, with a storage
time of 20 ms, or half a field. The storage section is masked and is not exposed to light. FT
devices use a shutter during the transfer of data from the light-receiving section to the storage
section at a fast rate to reduce smear. The shutter is synchronized with the vertical blanking
period. This type is not used in most present-day cameras because of the limitation of its larger
size, problems relating to the use of the shutter, and improvements in IT cameras that reduce
smear. Smear is caused when bright light enters the CCD and is seen as a comet-like effect
above and below the light source.
Frame interline transfer (FIT) type CCD
Although both the IT and FT type CCDs have excellent performance in their own way, they
cannot completely suppress the smear which is inherent in the CCD. The FIT type CCD consists
of a light-receiving CCD, a vertical transfer CCD, a storage CCD and a horizontal transfer CCD.
For the transfer of charge, during vertical blanking the charge (the light image converted to a
charge image by the photodiode, i.e. the CCD pixel) is transferred to the vertical transfer
CCD (2). This takes place after the residual charge in this CCD, the cause of smear, has been
swept out (1) via a drain. The charges are then transferred to the storage CCD at high speed (3).
It is this high speed of charge transfer that is the major factor in reducing smear due to light. Of
these types, IT and FIT cameras are preferred. The most popular chip sizes are 1/2" and 1/3" for
outdoor use and 2/3" for studios.
CCD sensor design has improved through successive generations:
the hole accumulated diode (HAD) sensor, which enabled up to 750 pixels/line with
increased sensitivity and a reduction in vertical smear;
the hyper HAD sensor, which included a micro-lens on each pixel to collect the light
more efficiently (this gave a one stop increase in sensitivity over the HAD sensor);
the power HAD sensor with improved signal-to- noise ratio which has resulted in at
least half an ƒ-stop gain in sensitivity; in some cases a full ƒ-stop of extra sensitivity
has been realized.
Fig. 10: Vertical line (smear) produced by an IT CCD camera when shooting bright lights
Fig. 11: Camera RGB signal-processing chain: the CCD output passes through a preamplifier, clamp, flare and shading correction, AGC, gamma, aperture correction, white clip and blanking to the PA stage of the RGB automatics.
a) Studio cameras
The studio camera is usually too large and heavy to be used in the field. Because of its size, a
studio camera may be placed on a three-legged stand, called a tripod, for support. To allow
smooth camera movement, the feet of the tripod are placed in a three-wheeled cart called a
dolly (Fig. 12). A studio pedestal is a camera support with a large single column on wheels
whose height is pneumatically or hydraulically controlled (Fig. 13).
In a multi-camera production the CCU operator will usually be responsible for more than one
camera (2-3 cameras are common). CCU controllers are embedded into the desk in front of the
CCU operator. Video monitors showing the pictures from each camera enable the operator to
adjust the iris, shutter speed, black level, gain, colour balance and a wide range of other
technical parameters.
Professional camcorders are lightweight, portable cameras (fig. 15). The professional
camcorder is a television camera and recorder in one unit and is relatively simple to take into the
field. While in use, it is placed either on the operator’s right shoulder or on a field tripod. A
remote camera package configuration usually includes a 1″ viewfinder. The operator is likely to
have the camera on his shoulder with his right eye pressed against the eyecup of the viewfinder,
so a larger viewfinder is not necessary. The studio package of the same camera includes a CCU
and a larger viewfinder (a small television monitor) of at least 5″ diagonally.
The camera head is the main part of the equipment (Fig. 15). It contains all the electronics
needed to convert the reflection of light from the subject into an electronic signal. Three chip
CCDs are available in 1/3", 1/2" or 2/3" size of CCDs. Professional studio cameras generally
have 1/2” or larger CCDs. Lens mounts are standardized and matched to the corresponding
CCD size. A viewfinder is a small video monitor mounted on the camera head that allows the
camera operator to view the images in the shot.
While not in use, both studio cameras and camcorders should be stored in a protected and
temperature-controlled location. All the related cables should be coiled and stored properly along
with the camera or camcorder.
Studio camera
Lock the pedestal and camera mounting head to prevent movement while not in use.
Close the iris and attach the lens cap.
Move the camera to a safe location within the studio after covering it.
Camcorder
ACTIVITIES
1. Practice mounting a camera on the camera stand.
2. Practice mounting a battery on the camera.
3. Practice assembling, fixing and levelling the camera stand.
4. Study the different controls available on a camcorder.
5. Study the different input/output ports available on a camcorder.
6. Study the type of media used in a camcorder and its relative advantages.
7. Study the file format used in the Sony XDCAM camcorder and its relative advantages.
RECAP
In television production different types of video cameras are used as per requirement. The
studio camera is too heavy to be used for field production. A camera control unit (CCU) is used
in multi-camera setups, both in studios and outdoors. It enables the CCU operator to match the
pictures from all the cameras and maintain their quality. The professional camcorder is a
television camera and recorder in one unit and is relatively lightweight, portable and simple to
take into the field.
Television cameras are based on CCD or CMOS technology. CMOS is the emerging
technology. Doordarshan presently uses CCD cameras of different sizes for indoor and outdoor
TV productions. The three types of CCD chips are FT, IT and FIT; of these, IT and FIT cameras
are preferred. The most popular chip sizes are 1/2" and 1/3" for outdoor use and 2/3" for
studios. Outdoor applications with lighter cameras use shoulder or tripod mounting, whereas TV
studios with heavier cameras use dollies and studio pedestal support systems.
FURTHER READINGS
******
9
TV LIGHTING
INTRODUCTION
Lighting television sets in studios and at outdoor locations for TV productions is creative work.
It involves both art and science. High-quality lighting plays a major role in creating extraordinary
scenes in television and movie productions. Professional illumination provides much more than
just the right level of brightness. It creates exciting moods and supports the compositions and
concepts of the lighting designers.
OBJECTIVES
GENERAL PRINCIPLES
Lighting for TV scenes in a television production is done by using a suitable combination of
directional and diffused lights. Such combination can create two different kinds of illumination
required for indoor and outdoor locations.
a. Directional light illuminates only a relatively small area with a distinct beam. It produces a
well-defined shadow and produces fast falloff. To achieve directional light one has to use
spotlights.
b. Diffused light illuminates a relatively large area with a wide indistinct beam. It produces
soft, undefined shadows and causes slow falloff. The lighting sources used to emit
diffused light are called floodlights.
c. Outdoor illumination is primarily accomplished by the most reliable source, the Sun. But
the sun does not always emit the same type of light. On a cloudless day, the sun emits a
highly directional light, like a spotlight. On an overcast day, the clouds act as diffusers
and change the sun into a diffused light source, like a floodlight. This light is non-
directional (diffused) and has a slow falloff. Although we use special light sources and
reflectors to adjust the lighting while working outdoors, we generally have little control
over outdoor illumination.
d. Indoor illumination will almost always require the use of lighting instruments. If the room
is partially illuminated by a light coming through a window, the job of matching outdoor
source of light with the indoor one is more challenging. The amount and types of lighting
instruments used varies from a handheld light to complete lighting grids that allow total
control over the light.
CHARACTERISTICS OF LIGHT
Quantity: Light intensity refers to the quantity of light falling on any particular area.
Intensity of light is measured in foot-candles (ft-c) or lux. Studio lights are usually
rated in foot-candles.
Contrast: Contrast refers to the difference between the brightest and the darkest spots
in a video picture.
Attached shadows seem affixed on the subject. Hold an object next to a lamp.
The shadow of the object opposite the light source (lamp) is the attached
shadow. It gives depth to an object; without attached shadows the object would appear flat.
Cast shadows can be seen independent of the object. Cast shadows are what
we see on bright sunny days. Shadows of street lights, people, cars and trees
are examples. Cast shadows help us to see where an object is located relative to its
surroundings, and sometimes help us relate to time: longer shadows mean late evening,
compared to short shadows during the afternoon.
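Two of the quantities above lend themselves to quick arithmetic: converting foot-candles to lux (1 ft = 0.3048 m, so 1 ft-c ≈ 10.76 lux) and relating cast-shadow length to the elevation of the sun. Both helpers below are illustrative sketches of our own:

```python
import math

def fc_to_lux(foot_candles):
    # 1 foot-candle = 1 lumen/ft^2; 1 ft = 0.3048 m
    return foot_candles / 0.3048 ** 2

def shadow_length(object_height, sun_elevation_deg):
    # A cast shadow lengthens as the sun drops toward the horizon.
    return object_height / math.tan(math.radians(sun_elevation_deg))

print(round(fc_to_lux(100)))           # 100 ft-c is about 1076 lux
print(round(shadow_length(2, 60), 2))  # afternoon sun: short shadow
print(round(shadow_length(2, 15), 2))  # evening sun: long shadow
```

The shadow calculation makes the text's observation quantitative: the same 2 m object casts a shadow several times its height in the evening but much shorter than itself at a high afternoon sun.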
If we heat a black body from a lower to a higher temperature, it changes colour from black to red
and then, on further heating, to blue. This is how the concept of relating temperature to colour
originated. Colour temperature (CT) is used in television to measure the relative reddishness or
bluishness of a "white" light. It is expressed in kelvin. The table below gives an idea of the kind
of radiation from different kinds of lamps in terms of colour temperature.
A single filter combining full CT orange and a neutral density filter is also available:
The HMI light source has a CT of about 6000 K and can be used with exterior day light without
the need for a CTC filter.
Example: Suppose a CTC filter used on a light source of 3000 K changed its colour
temperature to 4000 K. Then:
Mired value of light source = 10^6 / 3000 = 333 Mired
Mired value of light source + filter = 10^6 / 4000 = 250 Mired
So the Mired shift produced by the filter is -83 Mired.
If the same filter were used on a light source of colour temperature 5000 K, we can
estimate the new colour temperature as follows:
Mired value of light source = 10^6 / 5000 = 200 Mired
Mired value of light source + filter = 200 - 83 = 117 Mired, i.e. a colour temperature of
about 10^6 / 117 ≈ 8500 K.
The blue filter used in the above example increased the colour temperature but decreased the
Mired value.
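The Mired arithmetic above can be replicated directly; `mired` and `apply_filter` are illustrative names of our own:

```python
def mired(kelvin):
    return 1e6 / kelvin

def apply_filter(kelvin, shift_mired):
    # A filter adds its Mired shift to the source's Mired value.
    return 1e6 / (mired(kelvin) + shift_mired)

shift = mired(4000) - mired(3000)          # the filter from the example
print(round(shift))                        # -83 Mired
print(round(apply_filter(5000, shift)))    # about 8571 K on a 5000 K source
```

Working in Mired rather than kelvin is what makes the filter's effect a constant: the same -83 Mired shift raises 3000 K to 4000 K but 5000 K to well over 8000 K.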
Consider a moonlit night exterior with natural light sources inside. The difference
between the two will be about 140 Mired. Luminaires used to simulate moonlight are
filtered blue (-140 Mired), and objects illuminated with this light will appear deep
blue while objects under the natural sources appear under white light. If an orange
filter of about +70 Mired is used on the camera, the white light of the natural
sources will appear slightly orange; the moonlight will still appear blue, but less so.
Colour television monitors in a studio have a colour temperature of about 6500 K. If they
are photographed by another camera, their screens should be covered by an orange gel
of about +100 Mired.
LIGHT SOURCES
The most common form of light source used in television studios is the incandescent (hot
filament) lamp. It is simple in operation and construction. It can easily be dimmed, and it does
not require auxiliary equipment like chokes etc. The disadvantages are that most of the power
input gets dissipated as heat and the lamps have limited life.
b) Tungsten as filament
Tungsten wire has high resistance and is capable of dissipating power in the form of heat. If
sufficient electrical energy is supplied to raise the filament temperature above approximately
500 °C, light is emitted. Oxygen has to be excluded to avoid combustion, so the filament is
enclosed in an evacuated glass bulb. The tungsten filament has light-emitting characteristics
similar to those of a black-body radiator, where a higher temperature gives greater efficiency and
a higher colour temperature. But such a lamp usually fails because of evaporation of the filament:
the higher the filament temperature, the higher the rate of evaporation. Evaporation causes the
filament to become thinner, reducing the light output and colour temperature. The inside of the
bulb is darkened by the deposition of evaporated filament, and this deposit absorbs some of the
light, further reducing its intensity. The evaporation can be reduced by introducing a suitable gas
(one which does not attack the filament) into the bulb. However, the gas molecules conduct heat
away from the filament, and in order to conserve heat the filament is wound as a tight coil. This
gas filling enables such lamps to be run at temperatures several hundred degrees higher than
those of vacuum lamps.
Evaporation and bulb blackening still take place in the gas-filled lamp. To reduce this effect a
large surface for condensation is required; hence these studio lamps have very large bulbs.
Even then, towards the end of the filament life, the deposited layer reduces the light output and
colour temperature to unacceptable levels. Increasing the pressure of the gas filling may
suppress the rate of evaporation, but bulbs of the sizes used are not capable of withstanding
more pressure. Smaller bulbs in harder glasses or silica permit the use of higher pressures, but
the advantage gained is lost by virtue of the smaller surface area for condensation.
Tungsten Halogen
The tungsten halogen lamp is a major breakthrough in lamp design. By the chemical removal of
the deposited tungsten from the bulb, the light output is increased by 100%. The useful life is at
least double that of normal tungsten filament lamps, and the lamp is physically smaller.
Halogen is a general term for a family of very reactive elements: fluorine, chlorine, bromine
and iodine. These elements combine with tungsten in a reversible reaction controlled by
temperature. Colourless halogen vapour, at bulb temperatures between 250 and 800 °C,
combines with the deposited tungsten to form tungsten halide in vapour form. At temperatures
above 1250 °C, encountered in the region of the filament, the tungsten halide dissociates: the
tungsten is re-deposited on the filament and the halogen is released to repeat the cycle. It would
appear that we now have a lamp with a bulb which never blackens and an everlasting filament.
The tungsten, however, does not deposit itself back where it came from but on the cooler parts
of the filament. Because of the high bulb temperatures involved and the small size, the glass has
to be strong, so it is made of fused silica (quartz). This also permits a high gas pressure, which
can be 4 times or more that of normal lamps. This reduces filament evaporation and extends
filament life.
Quartz bulbs should not be touched by hand since minute quantities of oils etc., deposited on
bulb surface will damage the glass when it is heated. This lamp has an efficiency of 20 Lumens
per watt.
The CSI lamp is a gas-discharge light source. It produces white light with higher efficiency than
halogens. The lamp is small and bright. It consists of a glass envelope containing a suitable gas
vapour and two electrodes. When a suitable voltage is applied between the electrodes, the gas
is ionized. This results in a discharge current, which must be limited by an external inductor for
AC operation.
The warm-up period of these lamps is about 30 seconds, during which time the colour of the
light changes from the purple-blue mercury radiation to white radiation, indicating the evaporation
and dissociation of the metal iodides. If the lamp is switched off even for a few seconds,
de-ionization will occur and it will then be impossible to re-ionize the gas until the lamp has
cooled for several minutes. For these reasons it is not used in TV studios.
The improvement in the discharge-light spectrum came with the development of the metal halide
lamp. In these lamps, rare-earth elements are added in their halide form to produce a more
continuous visible spectrum and a "whiter" light source.
HMI Lamp: H stands for mercury (Hg), M for medium arc and I for iodide. The CSI and HMI
lamps operate on the same principle. They use mercury vapour as the basic gas, but the
spectrum is largely determined by the rare-earth metals that are added in their halide form. The
principal metals used are thallium, dysprosium and holmium, the latter two being the metals
used in current HMI bulbs.
Wattage rating of HMI bulbs: HMI bulbs are available in a range of sizes: 200 W, 1200 W,
2500 W and 4000 W. They have a luminous efficacy of about 90 lumens/watt and are therefore
much brighter than an equally rated tungsten bulb.
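The brightness advantage follows directly from the efficacy figures: about 90 lm/W for HMI against roughly 25 lm/W quoted elsewhere in this chapter for tungsten halogen. A quick sketch:

```python
HMI_LM_PER_W = 90        # luminous efficacy of HMI (from text)
HALOGEN_LM_PER_W = 25    # tungsten halogen (from text)

def luminous_flux(watts, lm_per_w):
    return watts * lm_per_w

# A 1200 W HMI against a 1200 W tungsten halogen lamp:
ratio = luminous_flux(1200, HMI_LM_PER_W) / luminous_flux(1200, HALOGEN_LM_PER_W)
print(ratio)   # the HMI delivers 3.6x the light for the same power
```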
Operation of HMI bulbs: A striking voltage of 24 kV is used with HMI to ensure that the bulb
can be instantly restarted when the lamp is hot. This voltage is necessary to ensure the restrike
with the high gas pressure inside the hot bulb. This removes the major problem of CSI lamps.
Like all other discharge source a choke is used to limit the current when the lamp is running.
HMI has a colour temperature of 6000 K. It takes several minutes after first striking for the lamp
to reach this colour temperature. The initial colour is blue due to the mercury vapour, but
gradually the light "whitens" as the metal halides dissociate to add more red and green content
to the spectrum. When viewed with the naked eye or a colour camera, the lamp could possibly
appear more magenta than daylight. In this case a White Flame Green filter will assist in
removing the magenta appearance of the lamp. Filters may also be necessary to equalize the
colour temperature of HMI bulbs. With an orange filter the HMI bulb can be made to approximate
to Tungsten light, but the main use of a high colour temperature source is to mix it with daylight.
e) Cool Lights
Cool lights are basically high-frequency fluorescent light sources that emit 70-100 lumens/watt,
as compared to tungsten halogen lamps, which emit 25 lumens/watt. Tungsten halogen has
very poor efficiency and converts a lot of the input power into heat in the studio, whereas these
lights convert 33% of the input power into visible light as compared to the 10 to 15% of tungsten
halogen. The range of fixtures available is from 17 watt to 832 watt, with colour temperatures
of 2700 K, 3000 K, 3500 K and 4100 K.
Each fixture incorporates an integral dimmer and control electronics. The control electronics
produce an approximately 33 kHz drive for the lamps. Dimming from 100% down to 20% is possible
without reduction in colour temperature. These lights are free from radiation and flicker.
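The efficiency comparison above is simple arithmetic; a sketch using the quoted figures (the 100 W rating is chosen purely for illustration):

```python
# Rough comparison of light output for equal electrical power, using the
# efficiency figures quoted above: cool light about 70 lm/W vs tungsten
# halogen about 25 lm/W. Wattage chosen for illustration only.
def light_output_lm(watts, lumens_per_watt):
    return watts * lumens_per_watt

cool_lm    = light_output_lm(100, 70)   # 7000 lm from a 100 W cool light
halogen_lm = light_output_lm(100, 25)   # 2500 lm from a 100 W halogen
```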
Light sources are usually referred to as being “hard” or “soft” depending on the type of shadow
they produce.
TV Lighting
Hard Sources
Soft Sources
LIGHTING EQUIPMENT
a) Sun/Moon Projector
b) Water roller
c) Strobe flicker light
d) Cloud Projector
e) Running water Projector
f) Sequential chaser with manual/music control
g) Par lights with parabolic glass reflector for confined beam.
h) Multi ten/Multi twenty: an open-face light with a broad beam that can be varied
by moving the lamp.
i) Follow spot with a circular concentrated beam
Lighting consoles
In a television production, each scene will require its own lighting plan to give the desired effect.
This is achieved by a console that provides:-
Modern lighting consoles also provide files and memories to store and recall the appropriate
luminaires used for a particular lighting plot. These consoles also provide mimic display panels
to show which channels are in use and which memories or files have been recalled.
TYPES OF LUMINAIRES
This is a focused light source in which the position of the lamp relative to the lens can be
adjusted to 'SPOT' or 'FLOOD' the beam. The lens used is a Fresnel lens. Barn doors are
fitted to these lights to restrict illumination to specific areas.
This is a focused light source with a circular aperture in the light housing. The aperture is
projected by a lens system to give a disc of light with a hard edge to it. Alternatively, patterns
cut from metal can be projected by inserting them near the circular aperture.
This usually has a large area of diffused light, producing as few shadows as possible. These
light sources are not usually focused, so they are often fitted with louvers to reduce the
sideways spill of light.
Lighting subjects
In television production, most of the lighting time is spent lighting people. Classic studio lighting
uses three lights on the subject and usually one or more on the background; this is often called
three-point lighting. Thus the four lights used in lighting a basic subject to camera are:
Key Lights
The key light provides the main illumination, typically mimicking an actual light source. It is often
placed higher than the subject's face (Fig. 10). The key light is typically a spotlight, so the hard-
edged beam is often softened with a sheet of spun glass clipped to the barn doors. Even so, it
throws distinct shadows on the subject's cheek, upper lip and neck. Hence its basic features
are:
Fill Lights
The fill light literally fills in the shadows created by the key light (Fig. 11). Placed opposite the
key light, the fill is often farther to the side than, and not as high as, the key, which helps reduce
the cheek, lip and neck shadows. Fill light moderates these shadows depending on the setting
and mood of the scene. In a cheerful interior the shadows might be slight; for a night scene they
might be deep enough to obscure details. In any case, the fill light should not be bright enough to
make the subject lose the "modelling" that creates the illusion of depth. So its basic features are:
Usually a soft source of light, used to control the density of the shadows produced by
the key light.
Usually positioned on the opposite side of the camera to the key light, i.e. on the
'shadow' side of the subject.
If positioned 90° from the key light, it will enable the key light and filler light
intensities to be balanced separately.
Should be at a low elevation to ensure that light reaches into the eyes.
Back Lights
Back light (also called rim light) is typically behind the subject and placed quite high (Fig. 12).
Back light is frequently mounted overhead on clamps or on stands. The brightness of the back
light depends mainly on the lighting style: pronounced for pictorial realism and moderate for
realism. For naturalism, the back light is just bright enough to visually separate the subject from
the background. In some instances it is omitted entirely.
The brightness of key and fill lights is adjusted by moving the lights toward or away from the
subject. Rim light, however, may be controlled by a dimmer, since the warming effect of dimming
a light is usually acceptable in this application. Hence its basic features are:
Background Lights
Like the key light, the background light is usually "motivated", that is, it mimics light that would
naturally fall on the walls or other background, like a wall lamp, a window light, or spill from a
room light (Fig. 13). When working with just a few lights, it is usually possible to achieve
background lighting by directing spill from the key and/or fill lights. Background light intensity
needs to be adjusted so that subject and background seem lit by the same environment, but with
the subject slightly brighter; two or more background lights may be needed to do the job.
Background lights often produce less intense effects because the lighting instruments must be
placed well away from the background to keep them out of the frame.
With the four lights in place, we can build a complete lighting setup (Fig. 14). Hence its basic
features are:
Thus, though in reality studio lighting is a complicated process, the same basic scheme can be
used with any of the four major lighting styles. The basic lighting setup demonstrated here uses
four lights and only covers a space about the size of a single action area. At large shooting
locations the lighting can involve many more instruments, but they tend to be deployed in
multiples of these basic layouts. In three-point lighting, a ratio of 3/2/1 (Back/Key/Fill) for
monochrome and 3/2/2 for colour provides good portrait lighting.
Position of artists
The artist should be about 4', preferably 6', away from the background to avoid the artist's
shadow on the background and to ensure that the back light angle will not be too steep. To
ensure control of background lighting, whenever possible use barn doors to keep the key light off
the background. Similarly, avoid the background light catching the artist. In colour, plain
backgrounds are often used; care must be taken to ensure that on the monochrome picture there
is a difference in tonal value between the face and the background.
Lighting Balancing
The ‘Key’ or ‘Mood’ of the picture is determined by the ratio of the relative intensities of Key and
Filler. Types of moods frequently used in different lighting situations are given below:
Low Key: A ratio of 5:1 will give mostly dark tones and large areas of shadow, causing
a dramatic effect.
Medium Key: The ratio here will provide a facial contrast of approximately 3:1.
High Key: Here the ratio will give mostly light tones and small areas of ‘thin’ shadow, with
a contrast ratio of less than 3:1.
The intensity of the backlight should be roughly the same as the key light, but it will also depend
on the subject.
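The key-to-fill ratios above can be sketched as a small classifier; the thresholds are taken from the figures quoted, and the function name is illustrative:

```python
# Classify the lighting 'key' (mood) from the key-to-fill contrast ratio,
# following the ratios given above. Boundaries are illustrative.
def lighting_key(contrast_ratio):
    if contrast_ratio >= 5.0:
        return 'low key'      # dark tones, large shadow areas, dramatic
    elif contrast_ratio >= 3.0:
        return 'medium key'   # facial contrast around 3:1
    else:
        return 'high key'     # light tones, thin shadows

mood = lighting_key(5.0)      # a 5:1 ratio reads as low key
```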
Lighting Techniques
Eye light: low-intensity light on the camera to get an extra sparkle in the actor's eyes.
Rim light: to highlight the actor's outline, an extra back light on the entire body at camera level.
Kicker light: extra light on the shadow side of the face, at an angle behind and to the side
of the actor.
Limbo lighting: only the object is visible, with no background light.
Silhouette lighting: no light on the subject; the background is highly lit.
Drama lighting
Openings such as windows within a set should be highlighted without overstating them. Walls
having such features should be lit to reveal them, but care must be taken to ensure that there is
only one shadow. The top of the set should be darkened off by using the barn doors; this puts a
"ceiling" on the set by giving the feeling of a roof. If more than the top of the set is darkened, it
gives an enclosed feeling.
If there is a choice in the direction of the ‘sun’ (Key) take the shortest route inside the set to a
wall, and if possible throw the shadow of window bars onto a door – it usually is in shot.
A patch of light on the floor inside the set, backlit from outside using a soft source at a steep
elevation, adds realism.
When a set does not have a window, a window pattern can be projected onto a wall to
produce a suitable window effect.
Roof and ceiling pieces: if they make lighting impossible, check if they can be removed at
the planning stage. Light any ceiling pieces from outside, using a soft source at ground level.
The outside of the window should be dark, except for a possible dim skyline if the room is
well above adjacent streets, or lit by an outside practical lamp i.e. street lighting.
Practical lamps, i.e. a standard lamp or table lamp, should be placed such that they are in
shot. Where possible, make the light appear to come from this practical source.
In general, for night effects it is not a good plan simply to dim the set lighting when
changing from day to night. This is because of the excessive change in colour temperature
of the light source and the apparent increase in saturation of surfaces at low luminance.
ACTIVITIES
Use three-point lighting setups and experiment with how lighting affects the mood of the
scene.
Use colour gels, diffusers and other filters in both indoor and outdoor lighting situations
and explore their impact.
Make a lighting plan and then shoot an interview using window or outdoor lighting and
internal studio lights.
The same experiment may be done on an outdoor shoot with natural light on the subject,
experimenting with reflectors, colour gels, diffusers and other filters.
Experiment and find out how mobile lighting is ideal for documentaries and situations
like walk-and-talk kinds of programme.
Shoot an action scene using mobile lighting.
RECAP
In television production, lighting plays a very important role. It highlights the camera work.
Properties of light such as Intensity, Contrast, Shadow, Color Temperature etc. are
manipulated to give a scene the desired look.
Dimmers are simple light-intensity reducers and are a very important part of lighting in the
television studio.
The natural light sources, Sun & Moon are hard light sources. Both these natural light
sources are effectively used in Television Production.
There are various incandescent light sources, each of which has its own advantages as well
as disadvantages.
Classic studio lighting uses three lights on a subject and usually one or more on the
background.
Background light intensity should be adjusted so that subject and background seem lit
by the same environment, but the subject is slightly brighter.
Techniques for lighting indoor night scenes include using a low-key mode, establishing
table-top demonstrations, controlling window light, etc.
Hence the two types of light, i.e. directional and diffused, are used by the TV lighting staff
to create two different types of illumination for outdoor and indoor locations as per the
programme requirement.
FURTHER READINGS
1. Art Studio Lighting Design (how to avoid being kept in the dark) by WILL KEMP
2. Television Production, Fourteenth Edition by GERALD MILLERSON & JIM OWENS
3. Motion Picture and Video Lighting, Second Edition by BLAIN BROWN
4. Lighting for Video & Television, Jakshan, J (2010), London; Focal.
5. Colour Temperature Correction & Natural density filters in TV lighting, Bermingham, A
(1989) Middx, SOTLD
******
10
VISION MIXING
INTRODUCTION
A production switcher is a device used to select or edit the different video sources available
at its inputs. It is also called a video switcher, video mixer or vision mixer. The
selected output then becomes the PGM (Program) output for further distribution as the studio
output for recording or transmission. Besides selecting a particular source as PGM, such a
switcher also allows different kinds of transitions while the video sources are switched. This is
similar to the concept of live editing using different sources.
OBJECTIVES
Vision mixing involves basically three types of switching with transitions between various
sources. These are mixing, wiping and keying. These transitions can also be accompanied by
special effects in some of the vision mixers. Fig. 1 below illustrates the various transitions in a
switcher. The basic transitions mentioned in this figure are -
Mixing
In this case two input video sources are mixed in proportion in a summing amplifier, as decided
by the position of the control fader. The two extreme positions of the fader give either of the
sources at the output; the middle of the fader gives a mixed output of the two sources. Control
of the summing amplifier is derived from the position of the fader.
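The fader-controlled summing described above can be sketched in a few lines, assuming 8-bit pixel values and a fader position running from 0.0 (fully source A) to 1.0 (fully source B); the sample pixel values are hypothetical:

```python
# Sketch of a mix (dissolve): the output is a weighted sum of sources A
# and B, with weights set by the fader position (summing-amplifier model).
def mix(pixel_a, pixel_b, fader):
    """fader = 0.0 gives pure A, 1.0 gives pure B, 0.5 an equal mix."""
    return round((1.0 - fader) * pixel_a + fader * pixel_b)

line_a = [200, 200, 200]    # hypothetical pixels from source A
line_b = [16, 16, 16]       # hypothetical pixels from source B

full_a = [mix(a, b, 0.0) for a, b in zip(line_a, line_b)]   # fader at A end
midway = [mix(a, b, 0.5) for a, b in zip(line_a, line_b)]   # fader mid-travel
```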
Wipe
In this case the control for the two input sources is generated by the wipe pattern generator
(WPG); the shape and wipe direction can be selected and are derived from a sawtooth or
parabola wave shape at H, V or both H & V rates. Unlike a MIX, during a WIPE one source is
present on one side of the wipe and the second source on the other side. Very simple to very
complex wipe patterns can be generated by the WPG.
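A minimal sketch of a horizontal wipe, assuming a sawtooth ramp at line (H) rate compared against the fader level; the 8-pixel line is purely illustrative:

```python
# Horizontal wipe sketch: a sawtooth ramp across the active line is
# compared with the fader level. Pixels where the ramp is below the level
# come from source A, the rest from source B.
def wipe_line(line_a, line_b, level):
    n = len(line_a)
    out = []
    for i in range(n):
        ramp = i / n                 # sawtooth at line rate, 0.0 -> 1.0
        out.append(line_a[i] if ramp < level else line_b[i])
    return out

A = ['A'] * 8
B = ['B'] * 8
half_wipe = wipe_line(A, B, 0.5)     # A on the left half, B on the right
```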
Key
While keying between two sources, i.e. foreground (FG) and background (BG), the control for
switching can be derived from one of the video sources itself (called overlay), or from a third
video source (called external key). This keying signal can be defined and generated from a
predetermined selected value of luminance, hue or chrominance of the source input. The keying
area is determined by the key signal and is then filled with the same or an external source by
automatic switching, as decided by the key signal. It could also be filled by a switcher-generated
video signal called Matte, i.e. a plain BG video generated internally by the switcher, with a
choice of colours from the vision mixer.
Vision mixers may operate on either composite or component video signals.
Special effects
Special effects as transitions have become very common and convenient, especially while using
digital video signals. Commonly used special effects include page turning with a selected
source in different directions, various 2D or 3D effects, borders on edges, matte, and other
effects with several windows or a turning cube with different selected sources on its faces.
Modern switchers even provide storage for logos, promotional clips, program bumpers or any
other short program sequence (called a macro), designed and stored by the operator in the
switcher itself for repeated use.
How it is done
In the case of simple cutting between sources, the switching takes place during the vertical
blanking period to avoid visible noise on screen. Though the operational switch might have
been pressed earlier by the operator, the actual switching waits for this vertical blanking period.
That is why such switchers are also called vertical interval switchers.
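The deferred switching described above can be sketched as follows; the class and source names are hypothetical:

```python
# Vertical-interval switching sketch: the operator's cut request is
# latched and acted upon only when the next vertical blanking period
# arrives, so the switch never happens in the middle of a picture.
class VerticalIntervalSwitcher:
    def __init__(self, source):
        self.on_air = source
        self.pending = None

    def request_cut(self, source):
        self.pending = source           # button pressed; do not switch yet

    def vertical_blanking(self):
        if self.pending is not None:    # switch only during blanking
            self.on_air = self.pending
            self.pending = None

sw = VerticalIntervalSwitcher('CAM 1')
sw.request_cut('VTR 1')                 # still on air with CAM 1...
on_air_before = sw.on_air
sw.vertical_blanking()                  # ...until blanking arrives
on_air_after = sw.on_air
```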
Fig. 2: An electronic switch fed by inputs A and B. A switching pulse from the electronic pattern
generator selects input A for x µs and input B for (52 - x) µs of each 52 µs active line to build the
output picture.
Fig. 3 (a), (b) & (c): Various combinations of VM operations, including (b) a key with mix/wipe
between BG1 & BG2 and (c) a mix or wipe from a keyed picture to the next item.
In the case of a wipe as a transition (shown in Fig. 2), the switch is controlled by the switcher
electronics in conjunction with the operating fader. It switches to input video A for, say, x µs,
then to input B for (52 - x) µs for line no. 1 of the output picture. This amounts to a complete 52
µs active line built from both the A and B signals. A control signal based on the fader position
on the switcher decides the instantaneous value of x. This process repeats until we get the
entire transition for the full frame under the fader control. The switch control, shown as a dotted
line, gets its drive from the EPG (Electronic Pattern Generator) board of the switcher. Further
possibilities for different kinds of transitions are described in Fig. 3 (a, b & c).
WHAT IS AN M/E?
Mix Effect is also referred to as M/E or ME. Here all the functions that a switcher performs are
executed on a Mix Effect bus. It consists of several rows of buttons; each button on the
bus is also called a cross point. So if there are two rows of buttons with eight buttons each,
it is probably an eight-input switcher.
However, this isn't always true. Due to space constraints, some manufacturers provide
'Shift' buttons to allow more physical inputs on the switcher chassis than there are
physical buttons. The Shift button gives access to twice as many sources on a second bank of
shifted inputs. The operational potential of a switcher is often judged by the number of MEs it
has. Multiple layers with effects may demand more MEs. As an example, every chroma key will
occupy one ME, and for operating two such keys in a single frame we may require an additional
ME. Switchers with more MEs become expensive; two- to three-ME switchers are considered
adequate for reasonable studio operations with multiple layers. Software-based video mixers
now available may not have such limitations on the maximum number of layers.
TRANSITION AREA
The transition area is the next aspect of the Mix Effect. Program and Preview are selected by
the operator in the cross point area and then taken to air by the transition area. The transition
controls include the 'Take' button, which cuts between program and preview. The T-Bar fader
can be used to dissolve between Program and Preview. The 'Auto-Trans' button will
automatically dissolve between program and preview at a pre-defined transition rate (for
example, 15 frames as the duration of the transition period).
The hallmark of a true M/E is a robust T-Bar operational control for the operator. The T-Bar
operation will generally provide a transition between the Program bus and the Preview bus.
KEYING
In this effect we create a kind of hole in the background (BG) picture and then insert another
picture, the foreground (FG), into that hole. We can also say that a selected portion of the BG is
replaced with FG. The geometrical dimensions of the hole in the BG decide the FG portion to be
inserted. These geometrical dimensions are in turn determined from a keying signal. The
keying signal is derived in different ways depending on the type of keying.
CHROMA-KEYING
Chroma keying in the TV studio is mostly designed for replacing a plain BG with a computer-
generated set or graphics. Anchors or newsreaders are made to perform in front of blue or
green curtains. In this case, the keying signal is generated from a predefined colour of the real
studio BG, either blue or green. The keying signal is derived by locating the blue or green in the
BG picture, which contains the anchor in front of the blue/green curtain. Blue and green curtains
are preferred as both these colours do not contain any skin tone of the presenter and are safe to
remove. You may notice the anchor presenting the weather report standing in front of a blue or
green screen which is 'keyed out' and replaced with video of a state map or any other graphics.
Basic principles
Generation of the keying signal needs proper adjustment of the levels of hue (sub-carrier
phase), saturation and luminance. Once a combination has been selected, any area of the
source picture which approximates this hue, with equal or greater saturation and luminance
values, will produce a keying signal. The keying signal is a black-and-white signal.
Operations
i) Choice of keying colour: the keying colour should not be visible on the foreground
to be overlaid.
ii) Uniform lighting: the background lighting should be as uniform as possible, with no
dark patches or shadows present.
iii) Better edge: the artist should have a well-defined "edge"; if possible avoid wispy
hairstyles. Two backlights will generally give a better edge to the artist and help to
derive a good keying waveform.
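A much-simplified sketch of the chroma-keying idea, treating a pixel as green curtain when its green value dominates red and blue by a margin; the RGB values and threshold are illustrative, not broadcast practice:

```python
# Simplified chroma-key sketch: the key signal is 1 where the pixel looks
# like the green curtain (green dominating red and blue by a margin), and
# those pixels are replaced by the graphics fill. Thresholds hypothetical.
def key_signal(rgb, margin=60):
    r, g, b = rgb
    return 1 if (g - max(r, b)) > margin else 0   # 1 = replace with graphics

def chroma_key(fg_pixels, graphics_pixels):
    return [gfx if key_signal(fg) else fg
            for fg, gfx in zip(fg_pixels, graphics_pixels)]

anchor  = (180, 140, 120)   # skin tone: key signal stays 0, anchor kept
curtain = (20, 200, 30)     # green curtain: keyed out
weather = (90, 90, 200)     # graphics fill (e.g. a weather map pixel)
composited = chroma_key([anchor, curtain], [weather, weather])
```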
Luminance Keying
Here the keying signal is derived from the level of luminance in the video signal. For
example, with a character generator (CG) as source, white text can generate a key
signal wherever the video level corresponds to that white. This key signal will cut a hole
in the shape of the text in the BG, which can be filled with any other source or a matte
colour from the switcher. This is called luminance keying. It can be further classified as
below:
Internal keying
When the keying signal is generated from one of the selected sources (INPUT A), it is
called internal keying.
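A minimal sketch of the luminance keying described above, with illustrative video levels and clip threshold:

```python
# Luminance-key sketch: wherever the key source (e.g. white CG text)
# exceeds a clip level, a hole is cut in the background and filled with a
# matte value. All levels are illustrative 8-bit figures.
def luminance_key(bg, key_source, fill, clip=180):
    return [fill if k >= clip else b for b, k in zip(bg, key_source)]

background = [50, 50, 50, 50]
cg_text    = [0, 235, 235, 0]    # white text on the middle two pixels
matte      = 255                  # switcher-generated fill value
keyed = luminance_key(background, cg_text, matte)
```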
SYNCHRONOUS SOURCES
As vision mixers combine various video signals from VTRs, cameras, outdoor OB feeds, etc., it
is very important that all these sources are in proper synchronization with one another. This
means their sync timing should match with respect to the Sync Pulse Generator (SPG)
installed in the studio. Synchronous video sources are those sources which are in synchronism.
Such synchronised video sources should have their H timing within ±50 nanoseconds
and their sub-carrier phase within ±1.5 degrees with respect to a fixed reference signal from the
SPG. Only these sources can be mixed with each other in video switchers. Video switchers
usually have much wider tolerance limits so far as these timing errors are concerned, but at the
time of installation these errors are kept within ±50 nanoseconds and ±1.5 degrees as
mentioned above. This is done to avoid visible switching noise and roll in the picture on
switching. Also, mixing, wipes, keying, etc. are possible only with synchronous sources.
All video sources that do not meet the above specifications regarding timing errors are called
non-synchronous sources. We can only CUT such sources; mixing and special-effect operations
are not possible. Such sources, when switched, will cause a roll in the picture at the time of
switching. If all inputs to a vision mixer are synchronous, there is no visible disturbance
when cutting, mixing or wiping between them.
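The installation tolerances above can be expressed as a simple check; the function and argument names are illustrative:

```python
# A source counts as synchronous when its H timing is within +/-50 ns and
# its sub-carrier phase within +/-1.5 degrees of the SPG reference, as
# stated above. Sample error values are hypothetical.
def is_synchronous(h_error_ns, sc_phase_error_deg,
                   h_tol_ns=50.0, sc_tol_deg=1.5):
    return abs(h_error_ns) <= h_tol_ns and abs(sc_phase_error_deg) <= sc_tol_deg

studio_cam_ok = is_synchronous(h_error_ns=12.0, sc_phase_error_deg=0.4)
ob_feed_ok    = is_synchronous(h_error_ns=900.0, sc_phase_error_deg=30.0)
```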
GENLOCK
Genlocking a source requires an arrangement to adjust the timing of each incoming video
signal at the switcher (both the H phase and SC phase of the source) with respect to the SPG-
generated black burst. This is done to convert such non-synchronous sources into synchronous
ones. Equipment that does not have this kind of arrangement may have to use a frame
synchronizer for this purpose.
SWITCHER OPERATION
The main concept of vision mixer operation involves an understanding of the bus. A bus is
basically a row of buttons, with each button representing a video source. Pressing such a button
will select that particular video on that bus. Older video mixers had two equivalent buses
(called the A and B buses, known as an A/B mixer); one of these buses was used for selecting
the main out (or program). Most modern mixers, however, have one bus that is always the
program bus, with the second main bus as the preview bus (sometimes called preset). These
mixers are called flip-flop mixers, since the selected sources of the preview and program buses
can be exchanged. Both the preview and program buses usually have their own respective
video monitors.
Another important feature of a vision mixer is the transition lever, also called a T-bar or fader
bar. This lever, similar to an audio fader, creates a smooth transition between two buses. Note
that in a flip-flop mixer the position of the transition lever does not indicate which bus is active,
since the program bus is always the active or 'hot' bus. Instead of moving the lever by hand, a
button (commonly labeled "mix", "auto" or "auto trans") can be used, which performs the
transition over a user-defined period of time. Another button, usually labeled "cut" or "take",
directly swaps the buses without any transition. The type of transition can be selected in the
transition section.
The third bus on a vision mixer is the key bus. A mixer can actually have many more buses,
depending on the ME rating of the switcher. On a key bus, a signal can be selected for
keying into the program. The image that will be seen in the program is called the fill, while the
mask is used to create the keying signal. Note that instead of the key bus, other video sources
can also be selected for the fill signal. Usually, a key is turned on and off in the same way as a
transition. For this, the transition section can be switched from program (or background) mode to
key mode. Often, the transition section allows background video and one or more keyers to be
transitioned separately or in any combination with one push of the "auto" button.
Another keying stage is called the downstream keyer (DSK). It is mostly used for keying logos,
text or graphics, and has its own "Cut" and "Mix" buttons. The signal before the downstream
keyer is called the clean feed. After the downstream keyer is one last stage that overrides the
signal with black, usually called FTB or Fade To Black.
Mixers are often equipped with effects memory registers, which can store a snapshot of any part
of a complex mixer configuration and then recall the setup with one button press.
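The snapshot-and-recall idea can be sketched as below; the register contents (program source, keyer, wipe pattern) are purely illustrative:

```python
# Effects-memory sketch: a snapshot of part of the mixer setup is stored
# under a register number and recalled with one call. The setup fields
# are illustrative, not any particular switcher's parameters.
class EffectsMemory:
    def __init__(self):
        self.registers = {}

    def store(self, number, setup):
        self.registers[number] = dict(setup)   # snapshot, not a live reference

    def recall(self, number):
        return dict(self.registers[number])

memory = EffectsMemory()
memory.store(1, {'pgm': 'CAM 2', 'key1': 'CG', 'wipe': 'circle'})
recalled = memory.recall(1)
```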
It is recommended to go through the operational manual of such mixers installed at a Kendra for
their special features and operational details.
SETUP
Vision mixers often separate the control panel from the actual circuitry because of noise,
temperature and cable-length considerations. The control panel is located in the production
control room, while the main unit, to which all cables are connected, is located in an MCR
alongside the other hardware. The remote control panel is usually connected to the main
electronics via an Ethernet cable with RJ45 connectors (Fig. 7).
DD has now replaced all analog switchers with digital ones because of their several advantages:
Much easier for complex signal processing and manipulation for efficient operations
Can provide wider luminance and chrominance bandwidth, thus providing better keying
facilities and other built-in special effects. These switchers can also handle multiple
formats and can have an optional built in frame synchronizer for every bus.
A change from analog to a digital format in Doordarshan has brought a major improvement in
picture quality.
ACTIVITIES
1. Find out the entire requirement to synchronize multiple cameras to a video switcher.
2. Study the preparation required for chroma keying setup.
3. Study the different features available in the video switcher installed on your kendra.
4. Study one AV switcher, i.e. a switcher that can handle audio as well as video. Find out
the application of such AV switchers in our network.
5. Study the difference in switchers used for web streaming, archiving, studio recording, live
telecast and field productions.
RECAP
FURTHER READINGS
******
11
VIDEO RECORDING FORMATS
INTRODUCTION
The format of a video tape recorder defines the arrangement of magnetic information on the
tape. It specifies:
All machines conforming to one format have similar parameters to enable compatibility or
interchange, i.e. a tape recorded on one machine is faithfully reproduced on another. There
are a number of formats in video tape recording, and the number gets further multiplied due to
the different TV standards prevailing in various countries.
OBJECTIVES
Faraday’s laws of magnetism provided the rules for the conversion of electrical signals into
magnetic fields. Certain materials, when brought into a magnetic field, get magnetized and
retain the magnetism permanently until altered. This forms the basis of magnetic recording,
and the materials in question are called ferromagnetic materials. The high permeability (µ) of
these materials helps to enhance this conversion. The property of ferromagnetic materials of
retaining magnetism even after the current (or H) is removed is called retentivity and is used for
recording electrical signals in magnetic form on a magnetic tape.
One may note that ferromagnetic materials with a broad BH curve (hard materials with high
retentivity) are most suited for the magnetic coating on tapes, while materials with a narrow BH
curve (soft materials with low retentivity) are required for magnetic heads. This is because the
heads are not required to retain information.
You may recall from the basic theory of magnetism that for a round coil of length L and N turns,
with an electric current I through the coil, the magnetising field strength is H = NI / L
(ampere-turns per metre).
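The relation being recalled here is the solenoid field strength H = NI/L; as a worked example, with purely illustrative coil values:

```python
# Worked example of H = N * I / L in ampere-turns per metre.
# The coil values below are illustrative.
def field_strength(turns, current_a, length_m):
    return turns * current_a / length_m

H = field_strength(turns=500, current_a=0.2, length_m=0.05)   # 2000 A/m
```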
RECORDING PROCESS
When a tape is passed over the magnetic flux bubble at the gap, the electric signal in
the head coil causes the magnetic lines of force from the head gap to pass through the
magnetic material of the tape, producing small magnets whose strength depends upon the
strength of the current. The polarity of the magnetic field which forms these bar magnets
depends on the change of current: a decreasing current will cause an N-S magnet and vice
versa. The strength of these magnets follows the BH curve. Thus the magnetic flux aligns the
unarranged magnetic particles as per the signal strength, and they stay in that condition after
the tape has passed the magnetic head. The length of the magnet thus formed is directly
proportional to the writing speed v of the head and inversely proportional to the frequency f of
the signal to be recorded; each recorded magnet is half a wavelength long, where the recorded
wavelength is λ = v / f.
It may be noted that when the gap becomes equal to λ, two adjoining bar magnets produce
opposite currents during playback, causing the output to become zero. The same thing happens
when the gap equals 2, 3 … n times λ. The first extinction frequency occurs when the gap
becomes equal to λ. For maximum output, the head gap has to be one half of the wavelength.
The frequency at which zero output occurs is called the extinction frequency. Thus the maximum
usable frequency (MUF) becomes half of the extinction frequency. These parameters are related by:
MUF = fext / 2 = v / 2g    (substituting λ = v / f, or f = v / λ, with the first extinction at gap g = λ)
So recording higher video frequencies will need either a high writing speed or a reduction in
head gap. Unfortunately, the reduction of the head gap is limited by mechanical
considerations; the head gap size has to be near ½ λ as recorded on tape.
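A worked example of these relations, using an illustrative head gap and writing speed:

```python
# MUF = v / (2 * g): half the extinction frequency, which occurs when the
# head gap g equals the recorded wavelength (f_ext = v / g).
def muf_hz(writing_speed_mps, gap_m):
    return writing_speed_mps / (2.0 * gap_m)

gap = 0.635e-6                       # 0.025 mil head gap, in metres
muf = muf_hz(writing_speed_mps=6.35, gap_m=gap)   # about 5 MHz
```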
So recording the video signal using stationary video heads, with a very high tape speed of
about 9 m/s, was very difficult to manage. The tape transport at such a high speed was
extremely difficult to control, besides causing very high tape consumption.
Two revolutionary ideas laid the foundation of present-day VCRs: the rotating video head and frequency modulation of the video signal.
Considering the ratio of video to audio frequencies as about 300, one must either increase the writing speed or reduce the gap by a factor of about 300 to satisfy the above equation connecting head gap with wavelength. Since the gap cannot be reduced that far, a tape speed of about 60 mph would be required to cope with the higher video frequencies, which is not practical. Even with a minimum possible gap of 0.025 mil, a writing speed of about 15 metres/sec would still be required for video recording. This again was not possible with a stationary-head recorder.
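The speed/gap trade-off above is easy to check numerically. A minimal sketch, assuming MUF = v / 2g (the maximum usable frequency is half the extinction frequency v / g) and an assumed 5 MHz video bandwidth:

```python
# Minimum writing speed for a given head gap, assuming MUF = v / (2g).
MIL_TO_M = 25.4e-6            # 1 mil = 0.001 inch = 25.4 micrometres

def min_writing_speed(f_max_hz, gap_m):
    """Writing speed v (m/s) needed so that f_max equals the MUF for gap g."""
    return 2.0 * gap_m * f_max_hz

gap = 0.025 * MIL_TO_M        # the 0.025 mil minimum gap quoted in the text
v = min_writing_speed(5e6, gap)   # assumed 5 MHz video bandwidth
# v works out to about 6.4 m/s, the same order as the 9-15 m/s figures quoted
```

The exact speed depends on the bandwidth assumed; the point is that any realistic figure is far beyond what a stationary-head transport can provide.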
[Figure: Bar magnets recorded along the tape. The head gap is kept less than λ/2; the length of each magnet corresponds to half a period of the signal, the polarity alternating N-S/S-N as the current decreases and increases.]
Let us now find out how a rotating head solved this problem. With reference to Fig. 3, along with the corresponding tape path, a video head mounted on a rotating head wheel now writes in the transverse direction on the tape, as compared to the straight or linear writing of fixed heads. The video head, while moving across the tape, lays a track whose length depends not only on the speed of the tape but also on the rotating speed of the head. A single head on a drum of diameter D, rotating at r revolutions per second with a full omega wrap (i.e. 360 degrees of tape contact), will have a very high writing speed of πDr (minus or plus the linear tape speed, which is negligible compared to the rotational component). This is the same as two heads in a half-omega wrap (i.e. a little over 180 degrees) laying two tracks in one revolution. The half-omega wrap is commonly used in most VCRs. This avoids the requirement of miles of tape for a few minutes of recording, as with the stationary-head recorders tried earlier.
The dynamic range of the magnetic medium is limited to about 10 octaves. During playback, when the recorded tape is passed over the head gap at the same speed at which it was recorded, the flux lines emerging from the tape induce a voltage in the head coil proportional to the rate of change of flux, i.e. dΦ/dt. This in turn depends on the frequency of the recorded signal: an increase in frequency by two times causes the output
[Fig. 3: Helical scan and tracks — the magnetic tape carries long diagonal tracks laid by heads A and B; the two heads record two tracks, one each, in every revolution of the head drum.]
[Fig. 4: Tape path with half wrap — linear tape velocity V is provided by the capstan shaft and pinch roller; a full wrap uses one head with 360° of tape-to-head contact, a half wrap uses two heads with 180° of contact.]
The recorded track length for a head drum having two heads in a half-omega wrap will be πD/2, and the writing speed WS = πDr. This provides two variables, D and r, to obtain any desired WS.
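A sketch of these relations with illustrative drum figures (the 74.5 mm diameter and 25 rev/s used below are assumed, Betacam-like values, not taken from the text):

```python
import math

def writing_speed(drum_diameter_m, rps, linear_speed_m_s=0.0):
    # Head-to-tape speed for a rotating drum; the linear tape speed is
    # negligible next to the rotational term and may be added or subtracted.
    return math.pi * drum_diameter_m * rps + linear_speed_m_s

def track_length_half_wrap(drum_diameter_m):
    # In a half-omega wrap each head writes for half a drum revolution.
    return math.pi * drum_diameter_m / 2

ws = writing_speed(0.0745, 25)       # ~5.85 m/s head-to-tape speed
tl = track_length_half_wrap(0.0745)  # ~0.117 m written per track
```

With only a few centimetres per second of linear tape motion, the drum alone delivers a writing speed measured in metres per second.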
voltage to increase by 6 dB, as per the well-known 6 dB/octave playback characteristic of the recording medium. This holds good only up to the dynamic limit; beyond it, losses mount and noise increases faster than the signal. Hence the system can no longer be used for recording/reproduction beyond this dynamic range of about 60 dB.
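The ~10-octave limit is why raw video cannot be recorded directly, and why frequency modulation (next section) is used: counting octaves for an assumed 25 Hz to 5 MHz video band shows the problem at a glance.

```python
import math

def octaves(f_low_hz, f_high_hz):
    # Number of octaves (doublings of frequency) between two frequencies.
    return math.log2(f_high_hz / f_low_hz)

# Assumed video band: 25 Hz up to 5 MHz.
span = octaves(25, 5e6)   # ~17.6 octaves - far beyond the ~10 the medium allows
```

FM shifts the video onto a carrier so the recorded spectrum spans only a few octaves, bringing it back inside the medium's dynamic range.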
FREQUENCY MODULATION
Timing accuracy is very important for VCRs, as our eyes are far more sensitive to such errors than our ears, which may not detect them in audio tape recorders. To reduce these timing errors it is important to recreate, at playback, the same conditions for the capstan and drum motors of the video tape recorder that existed at the time of recording. To achieve this, the status of these motors during recording is written on the tape itself along with the signal, as the control track (CTL), and is used during playback as one of the inputs to the servo system. Servo systems control the various motors to ensure constant tape tension and to minimize timing errors.
Most video tape recorders provide electronics-to-electronics monitoring (EE mode) at the time of recording: the video signal is monitored after routing it through all the signal-system electronics of the recorder, excluding the video heads and preamplifiers. Some recorders also provide simultaneous playback for off-tape monitoring by using additional heads during recording, called confidence heads. These features are very helpful for broadcast technical operations.
FORMATS
A) Analog Tape formats
During the earlier years of television, professional broadcasters used video tape recorders (VTRs) with composite analog video. These were the Quadruplex format (2" tape) and the B and C formats (1" tape). All these formats used open reels that required manual threading of the tape on the VTR. These machines were then replaced by U-matic cassette recorders (VCRs) using ¾" tape, followed by the best-quality component analog format, with separate luminance and chrominance recording on ½" tape, called Betacam SP from SONY. Video cassettes with smaller tape sizes made possible the much-needed portable recorders, with automatic threading for easier operation.
Modern television post-production requires multi-generation playback and transparent recordings from video cassette players/recorders, which can be met only by digital formats, without loss of picture quality. In order to have a standard sampling rate for component video in either the 525- or 625-line system, the CCIR-601 (ITU-R BT.601) standard has
combined the two systems to arrive at a sampling rate based on a carefully chosen frequency of 3.375 MHz. Four times this frequency, i.e. 13.5 MHz, gives 864 samples in one line of 625-line video (similarly, 858 samples in the 525-line system). As the PAL colour components R-Y and B-Y require less bandwidth, a sampling rate of 6.75 MHz has been considered sufficient for each colour component. Thus the CCIR-601 4:2:2 digital component system recommends 13.5 MHz for luminance (Y) and 6.75 MHz for each colour-difference component.
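The samples-per-line figures follow directly from dividing the common 13.5 MHz sampling rate by each system's line frequency; a quick check (line frequencies are the standard 625- and 525-system values):

```python
F_S = 13.5e6                     # CCIR-601 luminance sampling rate, Hz

line_freq_625 = 15625.0          # 625 lines x 25 frames/s
line_freq_525 = 15750.0 / 1.001  # 525-line (NTSC) line frequency

samples_625 = F_S / line_freq_625    # 864 samples per total line
samples_525 = F_S / line_freq_525    # 858 samples per total line
```

13.5 MHz was chosen precisely because it is an integer multiple of both line frequencies, so both systems get a whole number of samples per line.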
For the digital composite signal, to keep the system less complex, the sampling rate is fixed at 4 × fsc. This permits digital filters with a gradual cut-off for decoding the composite PAL signal.
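The 4 × fsc rates themselves are easy to compute from the standard subcarrier frequencies:

```python
FSC_PAL = 4.43361875e6    # PAL colour subcarrier, Hz
FSC_NTSC = 3.579545e6     # NTSC colour subcarrier, Hz

rate_pal = 4 * FSC_PAL    # 17.734475 MHz - composite sampling for PAL (D-2/D-3)
rate_ntsc = 4 * FSC_NTSC  # 14.31818 MHz for NTSC
```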
The D-series tape formats are a series of broadcast digital formats introduced from the late 1980s onwards.
D-1 – is a Society of Motion Picture and Television Engineers (SMPTE) digital recording video
standard, introduced in 1986 through efforts by SMPTE engineering committees. It was the first
major professional digital video format.
D-2 – is a professional digital videocassette format created by Ampex. It was introduced at the
1988 NAB (National Association of Broadcasters) convention as a composite video alternative to
the component video D-1 format.
D-5 is a professional digital video format introduced by Panasonic in 1994. Like Sony's D-1, it is an uncompressed digital component system, but with 10-bit sampling instead of 8-bit, and it uses the same half-inch tapes as Panasonic's digital composite D-3 format.
D-9 or Digital-S as it was originally known, is a professional digital videocassette format created
by JVC in 1995.
The table below compares the principal digital tape formats:

Format           Sampling      Tape size   Bits/   Audio bits     Compression   Total tape   Video Mbps     Video Mbps   Audio
                               (type)      sample  (48 kHz SF)    ratio/type    rate Mbps    uncompressed   compressed   tracks
D-1              4:2:2         19 mm       8       16/20          nil           225          172            172          4
D-2              4fsc          19 mm       8       16/20          nil           127          94             94           4
D-3              4fsc          1/2"        8       16/20          nil           125          94             94           4
D-5              4:2:2         1/2"        10      16/20          nil           300          220            220          4
Digi-Beta        4:2:2         1/2" MP     10      16/20          2:1 DCT       128          219            95           4
DVCPRO 25        4:1:1         1/4" MP     8       16             5:1 DV        41.8         125            25           2
DVCPRO 50 (D7)   4:2:2         1/4" MP     8       16             3.3:1 DV      100          168            50           4
DVCPRO 100       4:2:2         1/4" MP     8       16             1.7:1 DV      165          168            100          4
DV-CAM           4:1:1/4:2:0   1/4" ME     8       16             5:1 DV        35.5         125            25           2
IEEE 1394 (FireWire) is the interface standard for DV; DVCAM and DVCPRO both follow the DV and IEEE 1394 standards.
ii) DVC PRO 25 is an improvement over the DV format and was introduced specially for news gathering. DVCPRO offers digital component recording based on 4:1:1 sampling of the ITU-R 601 standard, with intra-frame compression of 5:1 and digital-quality CD sound. After 5:1 intra-frame compression, the data rate reduces to 25 Mbps. DVCPRO offers video quality between D-1 and D-5, as good as Betacam SP. DVCPRO is based on 1/4" (6.3 mm) metal particle tape in two cassette sizes, offering up to 2 hours of recording. The track pitch of DVCPRO is almost double that of the DV system: it uses wider tracks of 18 µm, which reduces recording density and tape dropouts while easing mechanical tolerances. Time code is also recorded on the helical tracks and hence can be read at any speed, unlike Betacam SP. DVCPRO can also play back DV tapes. The compressed data rate of 25 Mbps at 4:1:1 is a balance between many factors: 4:1:1 is an acceptable compromise which keeps full luminance resolution with half the chroma resolution of a 4:2:2 configuration. This is acceptable, as the eye cannot resolve more than about half the chroma resolution, i.e. 1.5 MHz.
The compression used is mild and simple. Each frame is compressed separately and does not rely on the previous or following frame. Because of this, simple frame-based editing is possible in this format without external and complex hardware. Also, because the compression is frame-based, motion artefacts are non-existent.
iii) DVC PRO 50: Doordarshan uses both DVCPRO 25 and DVCPRO 50. DVCPRO 50 is a component system with a data rate of 50 Mbps using a 4:2:2 sampling structure, and it is compatible with 25 Mbps DVCPRO. This makes it almost equivalent to the superior half-inch digital formats. The compression used is intra-frame, 3.3:1. It also has a switchable 4:3/16:9 aspect ratio and 4 PCM audio channels.
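The 25 and 50 Mbps figures can be reproduced from active-picture arithmetic. A sketch, assuming the standard 625-system active raster (720 luminance samples, 576 active lines, 25 frames/s, 8-bit samples):

```python
def active_video_rate_mbps(y_samples, c_samples_each, lines=576, fps=25, bits=8):
    # Active-picture data rate for one sampling structure (blanking excluded).
    total_samples_per_line = y_samples + 2 * c_samples_each
    return total_samples_per_line * lines * fps * bits / 1e6

rate_411 = active_video_rate_mbps(720, 180)   # 4:1:1 -> ~124 Mbps uncompressed
rate_422 = active_video_rate_mbps(720, 360)   # 4:2:2 -> ~166 Mbps uncompressed

dvcpro25 = rate_411 / 5      # 5:1 intra-frame   -> ~25 Mbps
dvcpro50 = rate_422 / 3.3    # 3.3:1 intra-frame -> ~50 Mbps
```

The results match the uncompressed/compressed columns of the format table to within rounding.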
iv) DV CAM: This format is Sony's answer to DVCPRO. It is also based on the DV standard, but uses 1/4" ME (metal evaporated) tape, unlike DVCPRO which uses metal particle (MP) tape. It records 4:2:0 digital video with 5:1 compression, together with digital audio, control track and time code information, on slant tracks like the DV format, at a data rate of 25 Mbps. There is no linear control track, and the drum speed is
150 rps (9000 rpm). The track width is 15 µm, against 18 µm for DVCPRO; wider tracks provide easier tracking, giving a wider window for mechanical adjustment tolerances.
The mechanical parameters compare as follows (D-1 shown as an example):

Format   Total   Linear speed   Writing speed   Drum size   Drum speed   Segments/   Tracks/   Pitch
         heads   cm/s           m/s             mm          rps          frame       rev.      µm
D-1      4       28.6           30.8            75          150          6           12        45
Note: Data for compiling this table has been collected from various sources for the
purpose of relative study. Exact figures may vary slightly in some cases.
Optical Blu-ray Disc (BD) is a digital optical disc data storage format that can store high-resolution SD/HD video. Blu-ray Discs hold 25 GB per layer, and dual-layer discs (50 GB) are now the industry standard for feature-length video discs. Triple-layer (100 GB) and quadruple-layer (128 GB) discs are also available. The name refers to the blue-violet laser used to read the disc, which allows information to be stored at a greater density than with the longer-wavelength red laser used for DVDs. The disc allows storage of video and audio at higher definition than DVD: high-definition video may be stored on Blu-ray Discs at up to 1080p resolution (1920 × 1080 pixels).
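The density advantage of the shorter wavelength can be estimated from the diffraction-limited spot size, which scales with wavelength divided by the numerical aperture (NA) of the pickup lens. A rough sketch, using the published DVD and BD optics figures (650 nm/NA 0.60 and 405 nm/NA 0.85):

```python
def relative_density(wavelength_nm, numerical_aperture):
    # Areal density scales roughly as (NA / wavelength) squared, since the
    # focused spot diameter is proportional to wavelength / NA.
    return (numerical_aperture / wavelength_nm) ** 2

gain = relative_density(405, 0.85) / relative_density(650, 0.60)
# ~5.2x - consistent with 25 GB per Blu-ray layer versus 4.7 GB per DVD layer
```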
Sony's XDCAM format is based on Blu-ray disc technology and was developed for digital recording on random-access media. The different models available are XDCAM SD, XDCAM HD, XDCAM EX and XDCAM HD422. They differ in the type of encoder used, frame size, container type and recording media; the different formats within the XDCAM family have been designed to meet different applications and budget constraints. XDCAM HD422 has recently been adopted by Doordarshan. The XDCAM format uses multiple video compression schemes with media container formats: video can be recorded with DV, MPEG-2 or MPEG-4 compression. DV is used for standard-definition video, MPEG-2 for both standard- and high-definition video, and MPEG-4 for proxy video. Audio is recorded in uncompressed PCM form for all formats except proxy video.
Recording options
MPEG HD: It is used in all products except for XDCAM SD. This format supports multiple
frame sizes, frame rates, scanning types and quality modes.
XDCAM and XDCAM HD store video and audio content on disc within an MXF container, whereas XDCAM EX stores the digital content within an MP4 container. MXF and MP4 files can store video and audio data in almost any frame rate and codec, along with metadata, similar to QuickTime movie files. The metadata includes information about the content, such as the date of recording, GPS positioning data, and so on.
The Professional Disc, as a medium for non-linear video acquisition, is similar to the Blu-ray Disc and holds 23 GB (PFD23, single-layer, rewritable), 50 GB (PFD50, dual-layer, rewritable), 100 GB (triple-layer, rewritable) or 128 GB (PFD128QLW, quad-layer, write-once). The disc is reliable, robust and suitable for field work, and the cost of the media is comparable to existing professional formats. The disc supports a transfer speed of 72 Mbps (or 144 Mbps with dual heads), whereas a consumer Blu-ray disc has a maximum rate of 36 Mbps.
SxS memory card
Other readily available memory cards from local vendors can also be used in Sony XDCAM EX camcorders with the help of the MEAD-MS01 adapter; these are also called Secure Digital memory cards. Available JVC camcorders can also record in the XDCAM EX format on these Secure Digital memory cards natively.
MXF is an open file-transfer format; it is not compression-scheme specific and simplifies the integration of systems using MPEG and DV.
TRANSCODING FILES
Transcoded files are those that have been converted to another, intermediate file format for use within an NLE. Either the editing software can transcode through its Import or Log-and-Transfer commands, or dedicated transcoding applications can be used, which offer special features such as background processing and broader format support. Many workflows rely on transcoded footage as it reduces the burden on the processor, operating system and graphics card; a transcoded workflow means a less powerful computer can edit HD video.
The project file created by nonlinear editing is incredibly valuable: it contains the instructions for every clip, cut, audio clip and transition. While the project file itself will generally be proprietary to the NLE program that created it, it is often possible to export a project from one program, such as Adobe Premiere Pro, to another, such as Final Cut Pro, via an XML data file. It is very important to preserve the project file, as this helps to reconnect to the archived camera media and completely reconstruct a project in minimal time if required.
Choosing the best file format for delivering digital video files requires good communication and collaboration with the persons receiving the files. Before deciding, one may have to consider:
Operating system: Both the manufacturer and the version of an operating system can place major constraints on file formats. For example, while most websites have standardized on Flash or H.264 delivery via HTML5, many others may still be working with older, incompatible systems.
Delivery method: If the file needs to be downloaded after delivery then file size is a
concern. The size requirements for a mobile phone are much lower than those for a
desktop computer connected to a high-speed network.
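The file-size arithmetic behind such delivery decisions is simple; a sketch with assumed, illustrative bitrates:

```python
def file_size_mb(duration_s, bitrate_mbps):
    # Size in megabytes: duration x bitrate, converted from bits to bytes.
    return duration_s * bitrate_mbps * 1e6 / 8 / 1e6

phone = file_size_mb(300, 1.5)   # 5 minutes at 1.5 Mbps -> ~56 MB
desk = file_size_mb(300, 8.0)    # 5 minutes at 8 Mbps   -> 300 MB
```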
Personal preference: Preferences also matter; for example, the creation of Windows Media files is very difficult on the Mac platform due to the lack of a free encoder, and support from Microsoft is still required for WMV files.
FLV/F4V: Flash Video has become a standard way of presenting video content on the
web. In almost every Adobe Creative Suite application you have the option of exporting
files to be encoded in Flash. An F4V file is the most up-to-date Flash video file that
supports H.264 video and expanded metadata. Flash video does not play on portable
Apple devices.
MPEG-2: DVDs typically use MPEG 2 video. The MPEG-2 format is old, but still a
standard that is broadly compatible with both computers and DVD players.
MPEG-4/H.264: MPEG-4 is one of the most popular digital file formats on the market today, especially with the H.264 codec. These files are used by everything from YouTube to iPods; even high-quality Blu-ray discs utilize H.264 video.
QuickTime: Apple QuickTime files are not widely used for distribution to devices or the web (having fallen out of favour to the more compatible MPEG-4/H.264 formats). Note that not all QuickTime files are compressed: you can output a project to match the quality of the original sequence settings to create a higher-resolution backup. QuickTime files are often encoded with the ProRes codec, which is the standard for Final Cut Pro.
Windows Media: Windows Media Video (WMV) is a video format developed by Microsoft. It is widely used in corporate environments due to its broad compatibility with the Windows operating system. Windows Media has lost some of its foothold, however, as Microsoft has shifted to its newer media player, Silverlight, which supports H.264.
ACTIVITIES
Study the old head drum assemblies of the different formats used in Doordarshan and
analyze the comparison table given in this chapter.
Prepare a chart for different types of storage devices available for recording video and
compare their cost per GB.
Study the construction of different types of cassettes and the associated VCR deck, note
down the various parts coming in contact with the tape.
RECAP
Recording is an important activity of any broadcast system. Analog and digital signals are recorded on different tape formats, and the formats used for each have been discussed in this chapter. Doordarshan adopted Betacam as its analog format and DVCPRO as its digital format for video recording. The compression used by the various analog and digital video formats has been tabulated for ready reference. Realizing the complexity of tape transfer, production houses are adopting tapeless production for its easy handling and noise-free, seamless operation. Reliable, robust and high-capacity (up to 128 GB) professional optical discs have also been developed for non-linear video acquisition. Realizing the advantages of tapeless working, Doordarshan has adopted the SONY XDCAM format. Sony has also developed digital memory cards for different video camcorders; the SxS memory card and Memory Stick are examples.
FURTHER READINGS
******
12
PROFESSIONAL DIGITAL VCRS
INTRODUCTION
Magnetic tape is still the cheapest method of storing analog as well as digital information. However, digital video recording on disk and other memory devices is also possible, and magnetic tape is gradually losing its importance because of its linear, time-consuming method of data retrieval; retrieval is much faster on the available nonlinear devices for storing digital video data. But magnetic tape will coexist for some more time because of the archival material available on tape.
OBJECTIVES
Any analog or digital VCR has many sections or systems: a mechanical deck, video system, servo system, system control, and audio system, along with a power supply system. These are usually built on different PCBs in modular form and then connected together via a motherboard so that this complex machine works in synchronism.
DIGITAL VCRS
Analog machines could not achieve the required excellence beyond a certain point after the
most advanced and successful Betacam SP Format. Time base errors, deterioration in multiple
generation copies, error corrections etc., were some of the main areas where the digital
recorders could score much better than analog machines. Compression technology in digital
video also offered a great advantage in the development of portable and professional VCRs for
television production and post-production.
As we know, uncompressed video data for standard-definition TV sampled at 10 bits amounts to 270 Mbps, which is quite large. Compression is a digital process by which this data rate can be greatly reduced.
A) Spatial Compression
An image can be divided into low, medium and high spatial frequencies. If an image has constant intensity, it represents zero spatial frequency; a gradual change in the intensity distribution represents low-to-medium frequencies, while abrupt changes or sharp boundaries indicate high-frequency content. The Human Visual System (HVS) is highly sensitive to low-frequency content and least sensitive to high-frequency content, so the high frequencies are perceived as redundant and treated as spatial redundancy. The main task of separating low, medium and high frequencies is achieved by the two-dimensional discrete cosine transform (2D-DCT) before the compression is applied.
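The 2D-DCT's separation of frequencies can be seen on a small example. A minimal sketch (a naive, unoptimized 8×8 DCT-II, illustrative only): applied to a smooth intensity ramp, nearly all the energy lands in a few low-frequency coefficients.

```python
import math

N = 8

def dct2d(block):
    # Naive 8x8 two-dimensional DCT-II, as used ahead of quantization.
    def c(k):
        return math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            out[u][v] = c(u) * c(v) * s
    return out

# A smooth horizontal ramp: intensity rises gradually across the block.
ramp = [[y for y in range(N)] for _ in range(N)]
coeffs = dct2d(ramp)
# coeffs[0][0] carries the mean; only the first row holds significant energy,
# and the high-frequency coefficients there decay towards zero.
```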
B) Temporal Compression
Video can be thought of as a sequence of images representing a moving scene. A very important property of video frames is that they are highly correlated and carry almost similar information. The similarity between successive frames is called temporal redundancy, and the removal of such redundancy is known as temporal compression. Temporal compression provides a massive reduction in redundant data through motion estimation with respect to the current frame. The motion compensator predicts a frame P with the help of the available motion vectors and the previous frame; if the prediction is fairly accurate, the predicted frame P will be nearly identical to the current frame. The predicted frame is then compared with the current frame, and this comparison results in a prediction-error or difference frame, which is further subjected to spatial compression with DCT, quantization and VLC for inter-frame encoding. It is quite obvious that a higher degree of temporal compression is achieved by increasing the prediction accuracy: the more accurate the prediction, the more similar the predicted frame is to the incoming frame, and the smaller the difference information becomes, resulting in higher compression. The prediction accuracy can be enhanced by incorporating a number of frames, called a group of pictures (GOP).
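The idea can be sketched in a few lines: predict the current frame from the previous one and code only the (mostly zero) residual. This toy example uses the previous frame itself as the prediction, i.e. zero motion vectors:

```python
# Toy temporal prediction: an 8x8 "frame" of constant grey, with one pixel
# changed in the current frame to represent motion.
prev = [[10] * 8 for _ in range(8)]
curr = [row[:] for row in prev]
curr[3][4] = 200

# Difference (prediction-error) frame: current minus predicted.
residual = [[c - p for c, p in zip(cr, pr)] for cr, pr in zip(curr, prev)]
nonzero = sum(1 for row in residual for v in row if v != 0)
# 63 of the 64 residual values are zero, so the difference frame compresses
# to almost nothing after DCT, quantization and VLC.
```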
2. Video System: This system contains the electronics relating to video processing
including codec for compression and decompression of the base band signal.
3. Audio and Power supply System: This section deals with the audio processing and
conditioning of power supply for the VCR.
4. Servo System: This system ensures and maintains the required read and write speed.
To achieve this requirement, the system is supported by Drum Servo System, Capstan
servo and Reel Servo System.
130
Professional Digital VCRS
Out of the several digital VCRs of different makes available to broadcasters, DVCPRO and XDCAM are used in Doordarshan.
[Fig. 1: Macro block formation using six 8×8 blocks for compression — four Y-signal DCT blocks (DCT 0 to DCT 3), one R-Y block (DCT 4) and one B-Y block (DCT 5), each of 8 × 8 pixels.]
The data rate is reduced further to 124 Mbps by removing the horizontal and vertical blanking data from the video data stream. Each block of samples represents an actual area of the viewable picture on the screen. Four Y blocks (2 blocks vertically × 2 blocks horizontally) are combined with one (R-Y) and one (B-Y) sample block, representing the same area of the screen, to form a macro block. Since the (R-Y) and (B-Y) sampling rate is precisely one quarter that of Y, only one block each of (R-Y) and (B-Y) samples is available for that picture area.
[Block diagram: analog component video input and digital component video (SMPTE 259M-C / EBU Tech. 3267-E) feed the compression stage, which produces the video output.]
Data shuffling
Shuffling of the resulting data blocks is needed before compression, to average out the compression rate across each frame of video by mixing high-frequency picture information with low-frequency information and performing group compression on the result. This helps to send a fixed data rate to the tape.
In order to get the vast amount of video data onto the tape at a fixed data rate it is necessary to
carry out data compression. If all video pictures were the same (i.e. every Macro Block
presented the same degree of difficulty to the compression circuit) then a fixed compression rate
could be used. In reality, some Macro blocks are more difficult than others.
For instance, Macro blocks representing clear blue sky on a sunny day would be relatively easy
to process as they contain very little detail (virtually dc) with constant brightness. However,
another Macro block in the same picture representing a brightly coloured and highly detailed
flowerbed scene would be much more difficult (containing lots of high frequency components in
the Y and colour difference domain) to process.
Under these circumstances, the fixed rate compression circuit would find it very difficult to
produce a fixed output rate while attempting to retain the large amount of detailed information
contained in each Macro block, producing a visual impairment on the picture screen known as
‘Blocking’ where the square Macro block shapes become easily visible.
One way around this is to process several macro blocks together in one go, having first selected them from different areas of the picture frame in a pseudo-random fashion. When this group of macro blocks (called a 'video segment') is assembled, ready for DCT and compression, some will be easy and some difficult to process.
Shuffling
A super-matrix of 5 × 12 is constructed for PAL. Each of the five horizontal sections is called a 'super block', and each super block consists of 27 macro blocks. A pseudo-random selection of five macro blocks (one from each of five different super blocks) is then made to produce the video segment ready for processing. In this way the degree of difficulty in processing the entire video frame is averaged out. To further reduce the incidence of 'blocky' images, the compression rate is adjusted by 'adaptive quantization'.
Finally, after compression, these video segments are de-shuffled by placing the constituent macro blocks back into their original positions within the video frame. By doing this, large areas of 'blocking' in the output picture are avoided. Another reason for this de-shuffling is to allow playback pictures during forward and rewind shuttle: if the video information were recorded as a shuffled frame, extensive and very fast de-shuffling circuits would be necessary.
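The selection step can be sketched as follows. The seeding scheme below is an assumption, chosen only so that the pick is repeatable and the de-shuffler can invert it; the real format defines its own fixed shuffle pattern.

```python
import random

SUPER_BLOCKS = 5
MB_PER_SUPER = 27   # macro blocks per super block (figures from the text)

def make_video_segment(frame_no, seg_no):
    # Deterministic pseudo-random pick of one macro block from each of the
    # five super blocks, forming one video segment of five macro blocks.
    rng = random.Random(frame_no * 1000 + seg_no)   # assumed seeding scheme
    return [(sb, rng.randrange(MB_PER_SUPER)) for sb in range(SUPER_BLOCKS)]

segment = make_video_segment(0, 0)   # five (super_block, macro_block) pairs
```

Because the selection is a deterministic function of the frame and segment numbers, de-shuffling is just the inverse mapping.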
The 64 quantization tables are organized into 4 groups of 16 tables each. Group 1 contains
quantization tables optimized for very high spatial detail within the picture. Group 4 is optimized
for very low spatial details, and the other two groups are for medium levels of picture detail.
The compression process first selects one of four groups of quantization tables based on the
measured power of the DCT coefficients of video blocks. The power of the coefficients is an
indication of the amount of spatial detail in the picture.
The DV compression process then selects the final quantization table to be used from the 16
remaining choices. Each of the remaining 16 quantization tables is applied to a virtual
compression of the DCT coded video signal (selection of proper weighing factor and VLC), and
for each of these tables the resulting compressed bytes per segment are counted. A selection is
then made using these results so that the final byte count at the output of the compression is
closest to, but not exceeding, 385 bytes. The selected table is then actually used to perform the
compression. This analysis of the picture by pre calculating the number of compressed bytes,
guarantees a constant byte count per frame, which is needed for video tape recording.
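The final selection step reduces to "pick the table whose trial byte count is closest to, but not exceeding, the budget". A sketch with hypothetical trial counts (the 16 values below are invented for illustration; only the 385-byte budget comes from the text):

```python
BYTE_BUDGET = 385   # compressed bytes allowed per 5-macro-block video segment

def pick_table(trial_byte_counts):
    # trial_byte_counts[i] = bytes produced by a virtual compression using
    # quantization table i (coarser tables give smaller counts). Return the
    # index whose count is largest while staying within the budget.
    best = None
    for i, count in enumerate(trial_byte_counts):
        if count <= BYTE_BUDGET and (best is None or count > trial_byte_counts[best]):
            best = i
    return best

# Hypothetical trial results for the 16 candidate tables of one group:
trials = [612, 540, 497, 455, 430, 401, 383, 371,
          350, 322, 300, 281, 264, 250, 239, 221]
chosen = pick_table(trials)   # table 6: 383 bytes, closest to but under 385
```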
After shuffling of video data, the data in blocks passes through Intra-frame/Inter-field module
consisting of DCT (Discrete Cosine Transform) and then VLC (Variable Length Coding). The
process is purely for a single frame only and there is no link with other frames. The technique is
called DV based compression technique. The strength of DV compression is that it enables
analysis of DCT coded blocks of video prior to the actual compression process. The goal is to
fully optimize the actual compression process. In this, pre-analysis process is performed
separately for each uncompressed 1,920 bytes video segment (5 Macro Blocks).
Error Coding is used to correct playback data errors. All tape formats will suffer dropouts on
replay, usually in the form of small ‘random’ errors. Occasionally, bigger ‘burst’ errors may occur
which can cause severe picture disturbance. Missing or damaged data can usually be detected
and corrected by the error correction systems. Error Correction Code (ECC) is added to the
video, audio and sub code data after reshuffling. For the video and audio data, the processor
uses a form of Reed Solomon product code.
Within the DV (Digital Video) format, audio can be recorded in 2-channel (1 left, 1 right) or 4-channel mode (2 stereo pairs); the audio data is recorded uncompressed, using a 48 kHz sampling frequency and a 16-bit linear quantizer. Since, in PAL, one frame of video occupies 12 tracks on the tape, these provide 40 milliseconds for the corresponding audio data (per channel) in a frame period.
At a 48 kHz sampling frequency, 1920 samples are generated in 40 ms (40 × 48 = 1920), which have to be accommodated in 6 tracks per channel. The DVCPRO format fixes the first six tracks for data belonging to the channel-1 input and the second six tracks for channel 2. Each track contains 1920/6 = 320 samples (320 × 16 bits = 640 bytes), which can be stored in a 9 × 72 matrix in the processing module.
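The arithmetic above can be verified directly:

```python
FS = 48_000          # audio sampling frequency, Hz
FRAME = 0.040        # one PAL frame period, s
TRACKS_PER_CH = 6    # half of the 12 helical tracks per frame

samples_per_frame = int(FS * FRAME)                      # 1920 samples
samples_per_track = samples_per_frame // TRACKS_PER_CH   # 320 samples
bytes_per_track = samples_per_track * 16 // 8            # 640 bytes (16-bit)
```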
As with video, auxiliary audio data indicating the type of recording made (e.g. stereo or mono, with or without analog pre-emphasis, sampling frequency (48 kHz or 32 kHz), edit start/end points for cut or fade, etc.) is also added to the original digitized audio data, adding 5 × 9 more bytes to the audio data. A similar Error Correction Coding system to that discussed in the video processing section is used for the audio data.
The discussion above concentrated on the processing and recording of the main signals, video and audio. But there are other associated data requirements for proper tracking: data sectors such as ITI, edit gaps and sub-code, alongside the audio and video data sectors. Each data sector has a unique purpose in the proper functioning of the tape deck.
The DVCPRO format now offers a production standard from field and studio acquisition to editing, playback-to-air and archiving. The basic DVCPRO, recording at 25 Mbit/s, is considered optimal for event videos. DVCPRO50 (which SMPTE calls D-7) lays down its images at 50 Mbit/s with better resolution. The newer DVCPRO HD, introduced as DVCPRO100, records 16:9 HD images at 100 Mbit/s. DVCPRO VTRs are backward-compatible: a DVCPRO50 VTR can also play 25 Mbit/s DVCPRO videotapes, and DVCPRO HD can play back both DVCPRO50 and standard DVCPRO tapes along with its own 100 Mbit/s high-definition recordings.
The playback chain is just the opposite of the recording chain. The following diagram shows the
stepwise processes during playback of the DVCPRO cassette.
Flash Memory
Flash memory can record both HD and SD. The main advantage is direct transfer from such memory to the nonlinear edit machine, and the small size of the card allows compact recorders. As such machines have no moving parts, they require less maintenance. Cards such as SD and P2 are now available up to 64 GB and are especially useful for acquisition of material.
HDD cameras record directly on a hard drive built into the camera. About 4 GB is needed for each hour of professional video recording, and some of these cameras have about 160 GB of storage space. Stand-alone recorders based on HDDs are also available now.
[Block diagram: DVCPRO record and playback chains. Record: analog video is digitized (270 Mbps), processed from 4:2:2 to 4:1:1 (124 Mbps), shuffled and compressed to 25 Mbps, error-correction coded (41.85 Mbps) and channel-coded to the heads, with sub-code and AES audio handled alongside. Playback: the reverse chain — error correction, de-shuffle, 25-to-124 Mbps decompression, 4:1:1-to-4:2:2 mapping, and D/A or SDI output.]
Induction Course (Television)
XDCAM DISC
The XDCAM line of disc-based camera systems and studio recorders utilizes blue-violet laser
technology to achieve extremely high data transfer rates. This professional camera system can
record up to 4 hours of HD on a dual-layer disc that has a large storage capacity of 50 GB. The
discs are rewritable and, as per Sony, can handle about 1000 writes and rewrites. Professional
Blu-ray discs are available in single-layer 23 GB, dual-layer 50 GB and quad-layer 128 GB
capacities. XDCAM is a tapeless professional video system introduced by Sony in 2003. The
first two generations, XDCAM and XDCAM HD, use the Professional Disc of 23 GB (single
layer) or 50 GB (dual layer) as the recording medium. The XDCAM HD format as a medium can
use any of several compression methods.
IMX (MPEG IMX): IMX allows recording in SD, using MPEG-2 encoding at data rates of 30, 40 or
50 Mbit/s. At 50 Mbit/s it offers visual quality comparable to Digital Betacam.
XDCAM HD (XDCAM HD420): This supports multiple quality modes. The HQ mode records at up
to 35 Mbit/s, using variable bit rate (VBR) MPEG-2 long-GOP compression. The
optional 18 Mbit/s (VBR) and 25 Mbit/s (CBR) modes offer increased recording time.
XDCAM HD422 (MPEG HD422): The third-generation XDCAM uses the 4:2:2 profile of the MPEG-2
codec, which has double the chroma resolution of the previous generations. To accommodate
the chroma detail, the maximum video bit rate has been increased to 50 Mbit/s. Doordarshan
has adopted this format.
The recording time at 50 Mbit/s for single-layer and dual-layer discs is as per the table below:
[Table: recording time at 50 Mbit/s for single-layer and dual-layer discs]
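Since the table is not reproduced here, the recording times can be estimated from disc capacity and bit rate. This rough sketch assumes the whole capacity carries programme data; actual times are somewhat lower because audio, proxies and file-system data share the disc.

```python
# Estimated recording time for a Professional Disc at a given video bit
# rate, assuming (for illustration) that the whole capacity carries
# programme data and that GB is decimal. Real times are lower: audio,
# proxy files and file-system data also share the disc.

def recording_minutes(capacity_gb: float, video_mbps: float) -> float:
    bits = capacity_gb * 8 * 1000**3        # GB -> bits
    return bits / (video_mbps * 1e6) / 60   # seconds -> minutes

for label, gb in (("single layer", 23), ("dual layer", 50)):
    print(f"{label}: about {recording_minutes(gb, 50):.0f} min at 50 Mbit/s")
```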
ACTIVITIES
Study the tape path across the various guides of the mechanical deck of a DVCR
List the various heads coming in contact with the tape
Identify the various sensors
Find out the location of the different PCBs for the different sections of a DVCR
Find out the latest available capacity of flash memories and HDDs for recording
SD/HD video
RECAP
The two kinds of video compression for DVCRs are based on spatial and temporal
redundancies, within a frame and between frames respectively.
The different sections of a DVCR are the mechanical deck, video system, servo system,
system control, audio system and power supply.
The DVCPRO format now offers a production standard from field and studio acquisition to
editing, playback-to-air, and archiving. The basic DVCPRO, recording at 25 Mbit/s, is
considered optimal for news events. DVCPRO50 (which SMPTE calls D7) lays down
its images at 50 Mbit/s with better resolution. The newer DVCPRO HD, introduced as
DVCPRO100, records 16:9 HD images at 100 Mbit/s.
DVCPRO VTRs are backward-compatible, which means that a DVCPRO50 VTR can
also play 25 Mbit/s DVCPRO videotapes, and DVCPRO HD can play back both
DVCPRO50 and standard DVCPRO tapes along with its own 100 Mbit/s high-
definition recordings.
High-capacity flash memories, hard disk drives and Blu-ray discs are among the
latest tapeless storage devices.
FURTHER READINGS
1. Richardson, I.E.G. (2003). H.264 and MPEG-4 Video Compression: Video Coding for
Next-generation Multimedia. New York: John Wiley.
2. Watkinson, John (2008). The Art of Digital Video. Amsterdam: Focal Press.
3. Zink, Michael (2008). Programming HD DVD and Blu-ray Disc. New York: McGraw-Hill.
4. Millerson, Gerald (2012). Television Production. Oxford: Focal Press.
5. Watkinson, J. (1994). The Digital Video Tape Recorder.
6. Hanzo, Lajos et al. (2007). Video Compression and Communications. New Delhi: Wiley
Eastern.
7. Symes, P. (2004). Digital Video Compression. New York: McGraw-Hill.
******
13
TV STUDIO AUTOMATION
INTRODUCTION
Over the years, the number of TV channels has been increasing, giving viewers a lot of
options. This competition among multiple channels for TV transmission demands a system
which can create and deliver TV programmes quickly and efficiently to capture audiences. The
choice multiplies across TV, radio, web, and mobile outlets. Users now demand and
prefer the best quality in all respects. For a quick, seamless and efficient workflow, the emerging
trend is to look for an automation system which supports integration with social media,
production equipment, transmission play-out and archival systems.
OBJECTIVES
WHY AUTOMATION
No multiple tasks: work on the material is limited to the one task that the individual is
performing.
No tool for integration: to use an asset for any task, tapes have to be
physically retrieved and taken to the location for use.
It is not possible to monitor the overall process.
The above issues reflect poor production, with delays and mistakes on air.
AUTOMATION SYSTEM
It is like a distributed control system wherein the functionality of any equipment in the chain is
controlled from any remote workstation. It is an open-standards-based system where the entire
program workflow i.e. planning, acquisition, production, post-production, play-out and
transmission is integrated. The automation process is based on time clock known as time-code
(TC). All the events on control time line are marked and executed as per the TC.
The activities pertaining to play-out mean that all the equipment in the PCR is controlled by the
automation server. It is designed to automate the play-out of commercial spots, syndicated
programmes, station IDs, and so on. This type of automation is time-based automation, which
simply means that it performs tasks based upon the time schedule. It generally consists of three
operations: ingesting, scheduling and transmission.
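The control-timeline idea described above can be sketched as a toy time-based schedule, with events marked by time code and executed in time-code order. The class and event names are illustrative, not a real automation API; a real server drives actual PCR equipment against house reference time.

```python
# Toy time-based play-out schedule: events are marked with a time code
# on the control timeline and executed in time-code order. Names are
# illustrative only, not a real automation API.

from dataclasses import dataclass

@dataclass
class Event:
    timecode: str  # "HH:MM:SS:FF" at 25 frames per second
    action: str    # e.g. roll a commercial spot or a station ID

def tc_to_frames(tc: str, fps: int = 25) -> int:
    """Convert 'HH:MM:SS:FF' to an absolute frame count."""
    h, m, s, f = (int(x) for x in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * fps + f

schedule = [
    Event("10:00:10:00", "roll commercial spot"),
    Event("10:00:00:00", "roll station ID"),
]

# Execute in time order, as the control timeline would.
for ev in sorted(schedule, key=lambda e: tc_to_frames(e.timecode)):
    print(ev.timecode, ev.action)
```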
This system operates through a central automation server in mirror mode (1+1), which controls
network resources including third-party equipment. Client workstations are physically distributed
and each node is connected to the system. Functionality is controlled by each node, which
increases system reliability.
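The mirror-mode (1+1) idea can be sketched as follows: two servers hold the same schedule, and control fails over to the mirror when the main server stops responding. All class and method names here are illustrative, not a real automation API.

```python
# Sketch of mirror-mode (1+1) operation: control fails over to the
# mirror server when the main server stops responding. Illustrative
# names only; real servers mirror state continuously.

class AutomationServer:
    def __init__(self, name: str):
        self.name = name
        self.alive = True

    def execute(self, event: str) -> str:
        if not self.alive:
            raise RuntimeError(f"{self.name} is down")
        return f"{self.name}: {event}"

def run_event(main: AutomationServer, mirror: AutomationServer, event: str) -> str:
    """Try the main server first; fall back to its mirror on failure."""
    try:
        return main.execute(event)
    except RuntimeError:
        return mirror.execute(event)

main, mirror = AutomationServer("main"), AutomationServer("mirror")
print(run_event(main, mirror, "roll spot"))   # served by main
main.alive = False
print(run_event(main, mirror, "roll spot"))   # mirror takes over
```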
[Figure: automation system architecture – mirrored (1+1) automation servers with application engines, client software on distributed workstations, and third-party equipment connected to the system.]
News Automation
The term newsroom system includes all of those activities that are expected to be available on
the journalist's desktop. The list of functions expected from the newsroom system includes
ingest, encoding/trans-coding, cataloguing with metadata creation, search engine, low-resolution
proxy browsing, shot-listing for editing, voiceover recording, high-resolution conforming and
finishing. This may also include video server management, scriptwriting, running order creation,
graphics creation, prompting, captioning, subtitling, on-air play out automation, web and DVD
authoring with publishing and archiving of media.
i) News script automation: Doordarshan in India was the first channel to induct a "News
Script Automation" system, in the year 1995. The Newsroom Computer System (NRCS)
provides:
ii) Integrated news automation system: This is true integrated automation at the
highest level because of simultaneous newsroom automation and play-out
automation. All the activities are 100% integrated into a single application to perform,
track and coordinate. This provides:
Figs. 2 and 3 below show the technical set-up in brief. You may note that the low-resolution and
high-resolution data are networked by a Gigabit Ethernet switch (with CAT 6 cable) and an optical fibre
switch on OFC respectively. The Fibre Channel switch handles the higher data
rate of high-resolution video, whereas Gigabit Ethernet is used for control and preview work on the
respective workstations. This networked block of equipment facilitates the flow of the desired data for
equipment control and programme output. The master output of higher resolution quality is
transferred from the transmission server via an SDI router switcher to its final destination, the TV
Master Presentation Switcher. It provides flexibility in operations for distribution. DD News
has adopted an integrated automation system equipped with AP's ENPS for script preparation and a
Quantel enterprise solution for the required video workflow. Video, files and metadata are
handled seamlessly in a totally integrated workflow. ENG coverage and live feeds are ingested
into the SAN storage, and the Quantel workflow facilitates converting raw visuals into finished
products ready for telecast. As an option, one may also interface a programme archive, including an
engine-based retrieval system, with the automated system to provide flexibility in
operations.
[Fig. 2: Integrated automation set-up – camera, VCR and TV signals enter the ingest station over SDI; a low-res storage/server and the NRCS main server sit on the network alongside high-res storage/server and the transmission server on an FC switch; craft editing workstations access the stores, and the transmission server feeds an SDI router to the Master Presentation Switcher.]
[Fig. 3: News workflow – (1) assignments with clip naming, (2) ingest of field tapes, (3) desktop viewing and edit selection, desktop script writing and craft edit, rundown creation, (6) editorial control, and (7) play-out for transmission.]
NEW INITIATIVES
To give journalists more freedom to contribute from anywhere, news channels are now
integrating cloud computing with their automation systems. This lets journalists and reporters
log in to the newsroom systems through a web browser, with proper authentication, from any
remote location. They can preview content and clips from anywhere at any time. Cloud
computing gives the broadcaster virtually unlimited scalability for meeting peak
demands in processing the growing volume of digital content.
ACTIVITIES
Log in to different social media sites, Twitter and DD websites and study the associated
features/architectures.
RECAP
An automated system from ingest to delivery of TV programmes has become a necessity for
efficient multichannel broadcasting. The entire studio equipment for production and post-
production is integrated with the automation server and controlled according to different
operational needs in a well-designed and customised complete workflow.
FURTHER READINGS
******
14
NON LINEAR EDITING
& 3-D GRAPHICS
INTRODUCTION
Editing is a process where one places the desired audio/video clips in an appropriate
sequence on a timeline. This is the main objective of any post-production set-up. Linear
editing is tape based and sequential in nature. It has various problems: long hours spent
rewinding tapes in search of material, potential risk of damage to original footage, difficulty
in inserting a new shot into an edit, difficulty in experimenting with variations, greater quality loss,
limited compositing effects, and limited repair capability, including colour correction.
Non-linear editing (NLE) is video editing in digital form using standard computer-based
technology. It is similar to word processing, with cut, paste and drag tools. Computer
technology provides random access, computational and manipulation capability, multiple copies
without generation loss, multiple versions, intelligent search, sophisticated project and media
management tools, standard interfaces and powerful display.
OBJECTIVES
ADVANTAGES OF NLE
A useful way to feel the difference between linear and non-linear editing systems is to compare
them to typewriters and word processors. Using a typewriter is like linear editing – you start typing
at the beginning of the essay and then keep going in a linear fashion until you reach the end.
Mistakes along the way can be messy to cover up. Even worse, subsequent revisions are
almost impossible to undertake cleanly. Imagine that you want to insert a new sentence into the
middle of the essay; you will have to retype the rest of the essay from that page on.
Like a word processor, non-linear editing permits you to make changes easily. Sequences can
be moved around in the same way that you can cut sentences and paragraphs and paste them in
a new location. Shots can be added and deleted simply. With a word processor, it is easy to
save different versions of your essay. With non-linear editing you can create different versions
of your video project without having to redo the project from scratch each time.
HARDWARE REQUIREMENTS
BREAKOUT BOX
Various video sources like VTRs, CD players, cameras and other playback/recording devices are
connected to the NLE machine through the breakout box. The NLE machine takes input from various
video sources for editing and gives output for monitoring and recording through the breakout box.
ANALOG INPUTS
There are three analog inputs: (1) Component video, (2) S-video, (3) Composite video.
To capture synchronized audio with your video, you must connect the audio out from the VTR or
other playback device to the audio inputs. You can also connect audio-only devices for sound.
The DPS Velocity board (NLE hardware) has three analog audio input options: balanced, unbalanced
and aux.
Time code is a kind of digital address of each and every frame of a clip. It is written on the media
of the clip itself. It has 8 digits, two each for hours, minutes, seconds and frames. The least count
is a frame, which is 1/25th of a second. This is based on the scanning system used in our
adopted TV system. Time code is used as an edit decision mark in order to come to a desired
frame for executing an edit. The two versions of time code that are available with the DPS Reality
board are:
a) Linear Time Code (LTC): LTC is placed on one of the linear audio tracks of the video
tape.
b) Vertical Interval Time Code (VITC): It is recorded within the video picture, during the
vertical blanking interval.
If you have an RS-422 cable connected to your computer, you may also acquire time
code through that interface.
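The frame-address arithmetic described above can be sketched as a pair of conversion helpers, assuming the 25 frames-per-second system stated in the text. The helper names are illustrative.

```python
# Time code as an 8-digit frame address: two digits each for hours,
# minutes, seconds and frames, at 25 frames per second (least count:
# one frame = 1/25 s = 40 ms). Helper names are illustrative.

FPS = 25

def tc_to_frames(tc: str) -> int:
    """'HH:MM:SS:FF' -> absolute frame count since 00:00:00:00."""
    h, m, s, f = (int(x) for x in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * FPS + f

def frames_to_tc(n: int) -> str:
    """Absolute frame count -> 'HH:MM:SS:FF'."""
    s, f = divmod(n, FPS)
    m, s = divmod(s, 60)
    h, m = divmod(m, 60)
    return f"{h:02d}:{m:02d}:{s:02d}:{f:02d}"

# An edit point one frame later than 01:02:03:04:
print(frames_to_tc(tc_to_frames("01:02:03:04") + 1))  # 01:02:03:05
```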
ANALOG OUTPUTS
Component analog video (CAV) has three BNC connectors, labelled Y, B-Y, R-Y. These three
colour components have to be used simultaneously, using three cables. As an option, the fourth
one is a composite analog video output, which can be distributed using a VDA for monitoring
or recording. Interfacing using component video should always be preferred as a first
choice, as it is free of the subcarrier (SC) frequency and the colour components are at base band.
As shown in Fig. 7, choose the type of audio/video output based on the required VTR input
interfaces, or for that matter any other video and audio equipment. Both balanced and
unbalanced audio options are available in NLEs. The audio output can be connected for audio
monitoring and recording on a VTR or any other audio recording device.
DIGITAL INPUT/OUTPUT
NLEs have digital audio/video interface connectors besides the analog I/O described above.
These include three digital video SDI BNC connectors, a DV option, and the IEEE-1394 interface,
either through a serial port knockout or through an unused PCI slot.
AES/EBU and S/PDIF connectors on the breakout box provide the audio interface. There are
three digital audio options:
VTRs are remotely controlled from NLEs through this RS-422 cable with 9-pin D connectors.
3-D GRAPHICS
Softimage 3D Extreme provides different modules that correspond to different phases of the
workflow process to create animation. Each of the modules replaces some of the menu cells on
the left and right, while leaving other menu cells that are applicable in all modules. The modules
are listed along the top right corner of the screen: Model, Motion, Actor, Matter, and Tools. You
can enter these modules either by clicking the text labels in the top right corner or by pressing
the supra keys that represent them: F1 for Model, F2 for Motion, F3 for Actor, F4 for Matter, and
F5 for Tools.
The first four modules (Model, Motion, Actor and Matter) are the core components of the
Softimage workflow. The last one, Tools, is useful occasionally when importing images,
converting files, looking at your work, and sending your finished frames to a disk recorder or film
recorder for finished output. The first four modules share most of the menu cells of the right
menu column and the top seven menu cells of the left menu column. Tools replaces them all
completely, sharing only the Exit menu cell.
Model
You start your workflow in the Model module, where you construct all your scene elements.
Model’s tools enable you to create objects from primitive shapes, draw curves, and develop
surfaces from those curves.
Motion
You then move to animate some parts of your scene, using the animation tools found in the
Motion module. The Motion module allows you to set animation key frames for objects, assign
objects to paths, and to see and edit the resulting animation on screen. After you have refined
your animation using the F curve tools, you move to the next module, Actor.
Actor
The Actor module contains the special Softimage tools for setting up virtual actors, assigning
inverse kinematic skeletons, assigning skin, adjusting skeleton deformations, and weighting the
skin to the IK skeletons. Actor also contains the controls for physics-based animation –
Dynamics, Collisions, and Qstretch, an automatic squash-and-stretch feature.
Matter
When your modelling, animation, and acting are complete, you move to the fourth module:
Matter. In the Matter module, you assign color and material values to the objects in your scene,
determining how they will look in the final render. At any time in the first four modules, you can
create lights and adjust their effect on the scene. The Matter module is also where you perform
the last step in the workflow process, rendering.
Tools
Tools contains a variety of utility programs for viewing, editing and exporting your work. You may
view individual images, sequences of images, and line tests. You may bring in images created
in other programs as image maps or import objects created in other programs as geometry.
You can composite sequences of images together, reduce colours in sequences of images for
reduced-colour game systems, and move your finished work to video disk recorders and film
recorders.
ACTIVITIES
Interface two VCRs, an AV monitor and a microphone with an NLE using the different interface
options, for both recording and ingest purposes.
Ingest a few video clips into an NLE and produce continuous tracks with different transitions
between the various clips.
RECAP
The linear system of editing, used earlier with tapes, was time consuming. It involved a lot of
tape shuffling to come to a desired clip on tape for editing purposes. The non-linear editing system
has many advantages over the linear system. It is fast and accurate, and allows multiple generations
of transparent copies for special transitions and effects in the post-production set-up. Time code is a
kind of digital address of each and every frame of a clip. It is written on the media of the clip
itself. It has 8 digits: two each for hours, minutes, seconds and frames. The least count is a
frame, which is 1/25 of a second. This is based on the scanning system used in our adopted TV
system. Time code is used as an edit decision mark in order to come to a desired frame for
executing an edit, in both the linear and non-linear systems of editing. There are two types of time
code, called LTC and VITC. LTC is placed on one of the linear audio tracks of the videotape,
whereas VITC is recorded during the vertical blanking interval. The 3-D graphics system has
several tools to facilitate the generation of graphics and several other special effects.
FURTHER READINGS
******
15
VIDEO SERVERS
INTRODUCTION
Servers are rapidly becoming the backbone of television workflows for studio productions as well
as TV newsrooms. A video server supports file-based production with faster delivery speed and
reliability, and is economical at every step of the process. It does not matter how the material is
captured in the field. A video server is a computer-based device (also called a 'host') dedicated
to delivering video. Though similar to a PC, a multi-application device, a video server is designed
for the single purpose of providing video to broadcasters. Current format-transparent servers also
make it possible to add HD to an existing SD environment without having to reinvest in infrastructure.
Thus video servers are evolving into media platforms that improve workflow efficiency. Cell
phones also display streaming video for news broadcasts and other programmes through
servers, including on-air services. The introduction of information technology (IT) into the
traditional broadcast facility has allowed television's associated workflows to become more
integrated and efficient.
OBJECTIVES
VIDEO SERVER
A professional-grade video server records, stores, and plays back multiple streams of video
without any degradation in video quality. Broadcast-quality video servers also store hundreds
of hours of compressed digital audio and video in different codecs. They can provide play-out in
multiple, synchronised streams of video, with quality interfaces such as SDI for video and
AES/EBU for audio, along with time code. Video servers are evolving into media platforms that can
provide complete integration and improve workflow efficiency. There are two basic
configurations used for their operation; these are:
i) Stand-alone servers
[Figure: stand-alone video server – ingestion in, play-out out.]
A stand-alone server is a computer-based device used to ingest, store and provide play-out
by making a schedule of the dumped material.
Networked servers are wired up as part of the network, as described below:
[Figure: networked server system – a Gigabit Ethernet switch links file servers, SAN storage, a video database, a backup/archive server, email, web-services and application servers, and A/V processors on the Ethernet LAN; a router/VPN with firewall/IPS connects the LAN over a WAN to the Internet.]
The important features of networked systems associated with video servers are:
• The application clients, shown towards the left in the drawing, perform A/V I/O, video editing
and compositing, browsing, and other media-related functions as standard enterprise
applications.
• The router connects the facility to the outside using IP routing.
• IP switches and SDI A/V routers are different devices.
• Firewall: a computer firewall protects the servers and the private network from outside access.
• File server: stores/retrieves A/V files for access by clients and delivers files over
the network in non-real time.
• A/V processor: this networked resource processes A/V. Typical functions are
compressing/decompressing and file-format conversion using various codecs.
CLASSIFICATION OF SERVERS
CLASS 1 MEDIA CLIENT SERVERS: This class has all-analog I/O (composite, component video)
and is stand-alone with no network connection. It is the legacy class of A/V client and includes VTRs,
analog amplifiers, linear switchers for edit suites, master control stations, special effects devices, and
so on. This class has almost gone out of use now, as it does not take advantage of digital or
networked systems.
CLASS 2 MEDIA CLIENT: A class 2 media client includes SDI as I/O for A/V, plus LAN
connectivity.
[Figure: class 2 media client – digital audio and digital video I/O with a control interface.]
The LAN is used for file exchange and A/V streaming, but there is no access to networkable
storage; only internal storage is supported. Audio I/O includes the AES/EBU format. Examples
are the Sony IMX e-VTR, with SDI I/O and a LAN for MXF file export, and the Panasonic P2
camera, with a LAN for exporting stored clips.
CLASS 3 MEDIA CLIENT: Class 3 is a fully networked device. It has SDI I/O and real-time
access to external networkable storage. This includes LAN access for file exchange and
streaming. Access to real-time external storage is the major differentiating characteristic of
this class compared to class 2. Storage connectivity can be one or more of the following
methods with current top data rates:
[Figure: class 3 media client – digital audio and digital video I/O with a control interface and networked storage access.]
CLASS 4 MEDIA CLIENT: Class 4 is a class 3 or class 2 device without any digital A/V I/O. Since
these devices are fully networked, they do not require dedicated I/O for ingesting or playing out
A/V material. Some class 4 stations may use NRT file transfer for importing/exporting files or may
access an RT storage pool, while others may support streaming.
[Figure: class 4 media client – control and network interfaces only, no digital A/V I/O.]
Examples are browsers (normally low-res), reduced-functionality NLE stations, graphics
authoring (Adobe Illustrator and Photoshop), QA stations, asset managers, file gateways, file-
format converters, some storage, DRM authoring stations, file-distribution schedulers, A/V
processors, and displays.
Data Plane: This plane is alternatively called the data or user plane. It handles A/V data moving
across links in RT or NRT. The data types may be audio, video, metadata, and general
data. The term 'data' describes the data aspects of the plane, whereas the 'user' handle
denotes the applications-related aspects.
Control Plane — This is the control aspect of a video system and may include
automation protocols for devices (A/V clients, VTRs, data servers, etc.), live status,
configuration settings, and other control aspects.
SEVEN DOMAINS
DOMAINS 1 AND 2: Domains have several dimensions, including file and streaming protocols
and file formats. FTP has become the de facto standard for file transfer, but there are other choices
as well. Streaming using IT means (LAN/WAN) is not well established for professional A/V systems,
despite its common use on the web.
DOMAIN 3: The control interface. The control interface is vital for automated operations.
Some, but not all, elements have a control interface point.
DOMAIN 4: Management Interface- Management interfaces are required for monitoring device
health, for diagnostics, and configurations.
DOMAIN 5: Storage Subsystem Interface. Clients can connect to storage using several
mature protocols. There are two classes of storage interfaces: first, DAS (direct attached storage)
using SCSI, Fibre Channel, USB 2.0, IEEE 1394, or other interfaces; and second, SAN and NAS
technologies.
DOMAINS 6 AND 7: Wrapper and essence formats. A wrapper format (interface 6) is not the
same as a compression format. A wrapper is often a content-neutral format that carries various
lower-level A/V data structures. The simplest wrapper may only provide for multiplexing of A/V.
The ubiquitous AVI file type is a wrapper and can carry MPEG, DV, Motion-JPEG, audio, and
other formats.
MAIN PARTS OF VIDEO SERVER- Servers have taken a technological leap. External
applications and processes that were once discrete have been integrated into server platforms.
Today, next-generation servers are moving well beyond the realm of video storage to become
true media platforms, capable of integrating many of a television station's crucial on-air systems,
including channel branding, multi-image processing and media management. Broadcast
servers are broadly segmented into video servers, play-out servers and broadband servers. A
generic server will have the following sections/parts: a) high-end CPU, b) I/O cards for video, c)
RAID storage, d) RAID controller and e) Gigabit card.
Up to 4 bi-directional channels
Ethernet transfer of programme content via optional FTP server/client software
RAID-3 and redundant power supplies are standard
SpotBase™ and PlayList™ applications are standard
Over 65 hours of storage @ 8 Mbit/s is standard
Offers MPEG-2, DVCAM, DVCPRO, all as standard
Compression support for MPEG: selectable profile, bit rate and GOP
Bi-directional channels (all configurations)
Play-only channels (all configurations)
MPEG-2 4:2:2 profile @ main level, 4:2:2 sampling
Bit rates: 10, 12, 14, 15, 18, 22, 26, 30, 34, 38, 42, 46, 50 Mbit/s
Variable GOP structures from 1 to 16; I, IB, IP, IBP encoding
I/O flexibility for different formats in real time
Exceptional storage and I/O redundancy with dual hot-swappable power supplies
Built-in scalability with a simple upgrade path from two channels to hundreds of channels
Integrated media applications and support for a range of broadcast applications
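The quoted figure of "over 65 hours of storage @ 8 Mb/s" can be cross-checked with simple arithmetic: required storage is bit rate times time. This is a rough sketch; a real server also stores audio, metadata and RAID parity overhead.

```python
# Cross-check of "over 65 hours of storage @ 8 Mbit/s": storage equals
# bit rate times time. A sketch only; real servers also carry audio,
# metadata and RAID parity overhead.

def storage_gb(hours: float, mbps: float) -> float:
    """Decimal gigabytes needed to hold `hours` of video at `mbps`."""
    return hours * 3600 * mbps / 8 / 1000

print(storage_gb(65, 8))  # -> 234.0 GB of usable capacity
```

So the claimed 65 hours at 8 Mbit/s corresponds to roughly 234 GB of usable capacity.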
ACTIVITIES
Study the configuration and features of a video server installed at your Kendra.
RECAP
A video server is a high-end computer with additional hardware for video and audio
input and output.
A professional-grade video server records, stores, and plays back multiple streams of
video without any degradation in the video signal.
Broadcast-quality video servers often store hundreds of hours of MPEG-2 compressed
video, play out three or more simultaneous streams of video, and offer quality
interfaces such as SDI for digital video and XLR for balanced audio.
FURTHER READINGS
16
FUNDAMENTALS OF TV
TRANSMISSION
INTRODUCTION
The bandwidth of the video signal extends from 0 to 5 MHz, while the audio signal is band-limited to 20
kHz. These signals cannot be transmitted directly; the baseband audio and video signals modulate
carrier waves at standard RF frequencies. Though frequency modulation has certain
advantages over amplitude modulation, its use for picture transmission is ruled out by its
large bandwidth requirement, which cannot be met in the very limited channel space available
in the VHF/UHF bands. Secondly, as the power of the carrier and sideband components goes on
varying with modulation in the case of FM, a frequency-modulated signal reflected
from nearby structures at the receiving end would cause variable multiple ghosts, which would be very
disturbing. Hence FM is not used for terrestrial transmission of the picture signal.
OBJECTIVES
i) Positive modulation, wherein an increase in video level causes an increase in the
amplitude of the modulation envelope, and
ii) Negative modulation, wherein an increase in video level causes a reduction in carrier
amplitude, i.e. the carrier amplitude is maximum corresponding to the sync tip and
minimum corresponding to peak white.
Impulse noise peaks appear only in the black region with negative modulation. This black
noise is less objectionable than noise in the white picture region.
Best linearity can be maintained for the picture region, and any non-linearity affects only
sync, which can be corrected easily.
The efficiency of the transmitter is better, as the peak power is radiated only during the sync
duration (which is about 12% of the total line duration).
The peak level representing the blanking or sync level may be maintained constant,
thereby providing a reference for AGC in the receivers.
In negative modulation, the peak power is radiated during the sync tip. As such, even
in fringe-area reception, picture locking is ensured, and derivation of the inter-
carrier is also ensured.
Another feature of TV transmission is vestigial sideband transmission. It was not considered
feasible to suppress one complete sideband in the case of the TV signal, as most of the energy is
contained in the lower frequencies. If these frequencies were removed, it would cause objectionable
phase distortion at these frequencies, which would affect picture quality. Thus, as a compromise, only
part of the lower sideband is suppressed. The radiated signal thus contains the full upper sideband
together with the carrier and the vestige (remaining part) of the partially suppressed LSB. The lower
sideband contains frequencies up to 0.75 MHz, with a slope of 0.5 MHz, so that the final cut-off is
at 1.25 MHz. The transmitted vestigial sideband spectrum in the VHF band is shown in Fig. 2.
[Fig. 2: Transmitted VSB spectrum (amplitude vs. frequency) – picture carrier P at 0 MHz with a vestigial lower sideband (flat to 0.75 MHz, final cut-off at 1.25 MHz) and an upper picture sideband to 5 MHz; colour subcarrier C at 4.433 MHz; sound carrier S at 5.5 MHz with 150 kHz sidebands; channel edge at 5.75 MHz.]
When the radiated vestigial sideband signal is demodulated with an idealized detector, the
amplitude-vs-frequency response is not flat. The resulting signal amplitude over the double-
sideband portion of the VSB signal is exactly twice the amplitude over the SSB portion, as
shown by the response in Fig. 3. In order to equalize the amplitude, the receiver response is
designed to have attenuation characteristics over the double-sideband region appropriate to
compensate for the two-to-one relationship.
TV receivers have a Nyquist characteristic for reception, which introduces group delay (GD) errors in the low frequency region. Notch filters used in receivers as aural traps in the vision IF and video amplifier stages introduce GD errors in the high frequency region of the video band. These GD errors are pre-corrected in the TV transmitter (using a receiver pre-corrector) so that an economical receiver filter design is possible. The group delays of the receiver and transmitter with pre-correction are shown in Fig. 5.
DEPTH OF MODULATION
Care must be taken to avoid over-modulation at peak-luminance signal values, which causes picture distortions and interruptions of the vision carrier. Over-modulated peak white levels tend to reduce the vision carrier power or even cause momentary interruptions of the vision carrier. These periodic interruptions due to accidental over-modulation interrupt the sound carrier in inter-carrier receiver systems, producing an undesired buzz in the receiver output.
Therefore, to prevent this effect, the maximum depth of modulation of the visual carrier by peak white signal values is specified as 87.5%. The 12.5% residual carrier (white level) is required because of the inter-carrier sound method used in TV receivers (refer Fig. 6).
Fig. 6: Modulated vision carrier envelope — peak carrier (100%) at sync, peak white (A) at the 12.5% residual carrier white level (B), with the video envelope lying between zero carrier and peak carrier.
The depth of modulation is set by using a ramp signal or step signal as given in the manual. It
should be 87.5% for 100% modulation (i.e. m = 1).
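As a numerical illustration of these levels, the sketch below maps composite video voltage linearly onto carrier amplitude, assuming a 1 Vp-p signal with sync tip at −0.3 V and peak white at +0.7 V; the function name and the purely linear mapping are our illustrative assumptions, not taken from a standard:

```python
def carrier_level_pct(video_v: float) -> float:
    """Carrier amplitude (% of peak) for a composite video voltage under
    negative modulation: sync tip (-0.3 V) -> 100 %, peak white (+0.7 V)
    -> 12.5 % residual carrier, linearly interpolated in between."""
    return 100.0 + (video_v + 0.3) * (12.5 - 100.0) / 1.0
```

At blanking level (0 V) this gives 73.75%, close to the roughly 75% blanking level conventionally quoted for negative-modulation systems.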
INTER CARRIER
TV receivers incorporate the inter-carrier principle. In our system the inter-carrier, i.e. the difference between the vision transmitter frequency and the sound transmitter frequency, is 5.5 MHz. Hence it must be ensured that even when the modulating video signal is at peak white, 12.5% of residual carrier is left, so that the sound can be extracted even at peak white, where the carrier power is minimum.
POWER OUTPUT
The peak power radiated during sync is designated as the vision transmitter power. This power is measured using a thruline power meter after isolating the aural carrier. The power read on the thruline meter is multiplied by a factor of 1.68 to obtain the radiated peak (vision) power. As the transmitter output is connected to an antenna having a finite gain, the effective radiated power (ERP) is obtained by multiplying the peak power by the antenna gain (w.r.t. a half-wave dipole). Hence a 100 W LPT using a transmitting antenna with a gain of 3 dB w.r.t. a half-wave dipole will have an ERP of 200 W, i.e. 53 dBm or 23 dBW.
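The ERP arithmetic above can be sketched as follows; the helper names are ours, and the result is a minimal illustration rather than a measurement procedure:

```python
import math

def erp_watts(tx_power_w: float, antenna_gain_db: float) -> float:
    """ERP: transmitter peak power times antenna gain (gain in dB
    w.r.t. a half-wave dipole, converted to a linear ratio)."""
    return tx_power_w * 10 ** (antenna_gain_db / 10)

def watts_to_dbm(p_w: float) -> float:
    """Power in dB relative to 1 milliwatt."""
    return 10 * math.log10(p_w * 1000)

def watts_to_dbw(p_w: float) -> float:
    """Power in dB relative to 1 watt."""
    return 10 * math.log10(p_w)

erp = erp_watts(100, 3)  # 100 W LPT, 3 dB antenna gain -> ~200 W
```

Note that 3 dB is strictly a factor of 1.995, so the "200 W" of the worked example is a convenient rounding.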
Fundamentals of TV Transmission
In TV broadcasting, the sound signal is transmitted by frequency modulating the sound carrier in accordance with the standards. The sound carrier is 5.5 MHz above the associated vision carrier. The maximum frequency deviation is ±50 kHz, which is defined as 100 per cent modulation in the PAL-B system.
Frequencies below Band I are not used for TV transmission as they are highly affected by atmospheric noise and man-made noise from electrical equipment. For this reason, the 41–47 MHz channel is also not used for TV transmission. Frequencies above Band V are also not preferred, because atmospheric attenuation increases with transmission frequency.
CHANNEL BANDWIDTH
Channel bandwidth in the VHF band is 7 MHz, while in the UHF band it is 8 MHz. In UHF, a band gap of 1 MHz is provided between adjacent channels to prevent mutual interference.
The frequency range of a particular channel can be determined easily. For example, suppose we want the frequency range of channel 7. Channel 7 lies in Band III, which starts at 174 MHz with channel 5, and the channel bandwidth in the VHF range is 7 MHz. So the start frequency of channel 7 is 174 + (7 − 5) × 7 = 188 MHz, and the stop frequency is 188 + 7 = 195 MHz. Thus channel 7 extends from 188 MHz to 195 MHz.
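The channel 7 calculation generalizes to any Band III channel. The helper below is a sketch assuming only the facts stated above (Band III starting at 174 MHz with channel 5, 7 MHz raster); the function name is ours:

```python
def band3_channel_range(channel: int) -> tuple:
    """Frequency range (MHz) of a VHF Band III channel: Band III starts
    at 174 MHz with channel 5, and each channel is 7 MHz wide."""
    start = 174 + (channel - 5) * 7
    return start, start + 7
```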
Picture and sound carrier frequencies of a particular channel ranging from ‘a’ to ‘b’ MHz are determined as under: the picture carrier lies 1.25 MHz above the lower channel edge, at (a + 1.25) MHz, and the sound carrier lies 5.5 MHz above the picture carrier, at (a + 6.75) MHz. A small offset is usually applied to the picture carrier to reduce co-channel interference caused by:
• unintended long-range reception from a far-off transmitter operating at the same frequency, under certain weather conditions;
• spurious radiations from nearby transmitters.
In the PAL B/G system, a 2/3rd line offset (p = 8) has been chosen. The picture carrier frequency of channel 7 after offset becomes:
For channel 7(+): 189250000 + 15625 × 8/12 = 189250000 + 10416.66 = 189260416.66 Hz = 189.260416 MHz, and
For channel 7(−): 189250000 − 10416.66 = 189239583.33 Hz = 189.239583 MHz.
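The offset arithmetic can be checked with a one-line helper (an illustrative sketch: 15 625 Hz is the PAL-B/G line frequency, and the offset is p/12 of it):

```python
LINE_FREQ_HZ = 15625  # PAL-B/G line frequency

def offset_carrier_hz(nominal_hz: float, p: int, sign: int) -> float:
    """Picture carrier with a p/12 line-frequency offset applied;
    sign is +1 or -1 for the (+) and (-) offset variants."""
    return nominal_hz + sign * LINE_FREQ_HZ * p / 12
```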
TV STANDARDS
Key parameters of the PAL B/G standard adopted in India, as discussed in this chapter, are:
• Channel bandwidth: 7 MHz (VHF, System B) / 8 MHz (UHF, System G)
• Vision modulation: negative amplitude modulation with vestigial side band (vestige up to 0.75 MHz, final cut-off at 1.25 MHz)
• Maximum depth of vision modulation: 87.5% (12.5% residual carrier at peak white)
• Sound modulation: FM, maximum deviation ±50 kHz
• Inter-carrier (vision-to-sound) spacing: 5.5 MHz
• Colour subcarrier: 4.433 MHz
ACTIVITIES
• Check the IF modulated signal on an oscilloscope at the output of the video modulator, with a colour-bar input signal to the TV transmitter, and learn to adjust the modulation depth.
• Check the RF signal spectrum at the output of the TV transmitter with the help of a spectrum analyser.
RECAP
In TV broadcasting, both video and sound signals have to be conveyed to the viewer. The modulating video signal being high in frequency (0–5 MHz), normal amplitude modulation of the picture would require at least 10 MHz. To accommodate the picture and sound r-f signals in the same 7/8 MHz VHF/UHF channel, the vestigial side band technique is adopted. The difference between the two carriers (picture and sound), called the inter-carrier separation, is maintained at 5.5 MHz, and special circuits are used to receive the VSB signal. Negative amplitude modulation is used for the video, while frequency modulation with a maximum deviation of ±50 kHz is used for the sound. The maximum modulation depth of the visual carrier is kept at 87.5%, the 12.5% residual carrier being required for sound demodulation in the TV receiver. The standard adopted for TV transmission in India (PAL B/G) has also been tabulated in this chapter.
FURTHER READINGS
******
17
TELEVISION TRANSMITTERS
INTRODUCTION
A transmitter is electronic equipment that generates a carrier wave, modulates it with meaningful signals and radiates the resulting r-f signal through an antenna. It is the backbone of terrestrial TV transmission. These transmitters are classified according to their rated power capacity as under:
• High Power Transmitters (HPTs): sync peak vision power of 1 kW or more
• Low Power Transmitters (LPTs): 100 W or more but less than 1 kW
• Very Low Power Transmitters (VLPTs): less than 100 W
This chapter briefly describes the distinction between the above categories of transmitters, the various modules of transmitters and their functions.
OBJECTIVE
HIGH POWER TRANSMITTERS
High Power TV transmitters have a rated capacity of 1 kW or more (sync peak vision power). Doordarshan has 1 kW, 5 kW, 10 kW, 20 kW and 30 kW transmitters under this category, procured from several manufacturers such as BEL, NEC, Thomcast, R&S and Harris.
The block diagram of a High Power TV transmitter, shown in figure 1, in general consists of the following units.
Audio and video are modulated in two exciters in 1+1 mode. Selection between the two exciters is automatic or manual. The RF aural and vision outputs of the selected exciter are then routed to the respective power amplifier chains. An amplification chain may consist of one or more solid-state amplifiers for both the aural and vision outputs, depending upon the rated capacity of the transmitter. When more than one amplifier is used, the outputs of the different amplifiers are suitably combined to attain the rated capacity. The vision and aural signals are finally combined in a V/A combiner (called a diplexer). The Transmitter Control Unit (TCU) is used to give commands to the different transmitter units, monitor their status and parameters, and protect the transmitter against abnormal operation. HPTs have either forced-air cooling or a liquid cooling system. A detailed description of each of these units is as under:
Fig. 1: Block diagram of a high power TV transmitter — video (V) and audio (A) feed exciter-1 and exciter-2 through exciter switching; the selected exciter drives separate visual and aural power amplifier chains, whose outputs are combined in the V/A combiner and fed to the antenna, supported by the mains supply, power supply regulation, transmitter control system and cooling system.
EXCITER
A block diagram indicating the basic functions of an exciter is shown in Fig. 2. A video signal of 1 Vp-p (75 ohm) is fed to the exciter. The audio input can be either balanced audio across 600 ohm or unbalanced audio across 75 ohm. The exciter conditions the incoming video and audio signals to make them suitable for modulation and transmission. For example, the input video signal is subjected to peak white limiting, DC clamping and pre-correction of the group delay caused by the receiver and diplexer, before modulation. The balanced audio signal is converted to unbalanced audio and subjected to pre-emphasis and limiting before modulation. This baseband conditioning is done in the vision and aural modulators.
Fig. 2: Basic functions of an exciter — vision modulator (with AGC and monitor points) producing the vision IF output, and a synthesizer generating the V-local, A-local and reference outputs from a 10 MHz reference input.
The synthesizer module generates the required IF frequency of 38.9 MHz for modulation of the video signal, a reference frequency for the aural modulator, and a local frequency equal to the sum of the required channel frequency and the vision IF frequency (38.9 MHz). Note that A-local = V-local in the above block diagram. All three frequencies, viz. the vision IF (38.9 MHz), the aural IF (33.4 MHz) and the local frequency, are locked to a single reference frequency, either generated by a temperature compensated crystal oscillator (TCXO) or fed from an external source, so that the relative differences between these frequencies are maintained. The principle on which the synthesizer works is explained in Fig. 3 below.
Fig. 3: Synthesizer principle — the 10 MHz reference source and the VCXO output are each divided down (divide-by-N) and compared in a PLL that controls the VCXO.
Any variation in Voltage Controlled Crystal Oscillator (VCXO) output frequency will be corrected
by PLL.
The vision modulator receives the 38.9 MHz vision IF from the synthesizer and amplitude modulates it with the incoming video signal. A double diode ring modulator is used for amplitude modulation. The amplitude modulated output is then passed through a vestigial side band filter to obtain the required spectrum. The output of the vision modulator is routed to the IF corrector, which pre-distorts the IF signal to compensate for amplitude and phase non-linearity in the power amplifiers.
The aural modulator has a 33.4 MHz varactor diode oscillator locked to the reference frequency from the synthesizer. The audio signal, after pre-emphasis and amplitude limiting, is applied to a varactor diode to obtain frequency modulation: the frequency of the oscillator changes according to the amplitude of the audio signal.
The output of the IF corrector is fed to the vision mixer, and the output of the aural modulator to the aural mixer. The vision and aural mixers also receive V-local and A-local respectively from the synthesizer. The operation of a mixer is based on the non-linearity of active devices such as diodes. The double balanced diode mixer, shown in figure 4, is very frequently used for this purpose. The output of the mixer contains the sum and difference frequencies.
Fig. 4: Double balanced diode mixer — diodes D1–D4 in a ring between transformers T1 and T2, with LO, RF and IF ports.
The signal at the difference frequency is separated by the channel filter and fed to the power amplifiers to achieve the required power level.
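The up-conversion can be sketched numerically. Since V-local = channel frequency + 38.9 MHz, the difference product of the mixer lands on the wanted channel (the function name is illustrative):

```python
def mixer_products(lo_hz: float, if_hz: float) -> tuple:
    """Ideal mixer products: sum and difference of the local oscillator
    and IF frequencies.  The difference is the wanted channel frequency
    because the local is generated as channel + IF."""
    return lo_hz + if_hz, lo_hz - if_hz

# Channel 7 picture carrier 189.25 MHz -> V-local = 189.25 + 38.9 MHz
total, diff = mixer_products(228.15e6, 38.9e6)
```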
Transmitters supplied by Rohde & Schwarz employ an altogether different mechanism for generating the vestigial side band: an I-Q modulator generates the VSB signal directly at the channel frequency, eliminating the need for a VSB filter and up-conversion. I stands for the in-phase and Q for the quadrature path. Both paths carry the same video signal level but differ in phase by 90 degrees. Any imbalance in amplitude or phase gives rise to undesirable RF components. Diagrams of an I-Q modulator and demodulator are shown in figure 5.
Power Amplifier
Power amplifiers are broadband, operating over 174–230 MHz for VHF transmitters and 470–862 MHz for UHF transmitters. When more than one PA is used, they are interchangeable and can be used at any position within the specified band of operation. The power amplifier circuit can broadly be divided as under:
I. Amplification circuit
II. Monitoring circuits
III. A power supply distribution board
The amplification circuitry contains the pre-driver, driver and final PA pallets. The pre-driver is generally a class A linear hybrid amplifier, to reduce distortion in successive stages. The driver operates in class AB mode. The final amplification stage uses a push-pull configuration in class AB mode for better power efficiency and freedom from crossover distortion. The block diagram of a typical power amplifier is shown in Fig. 6.
Power amplifiers in new generation transmitters use Laterally Diffused Metal Oxide Semiconductor (LDMOS) field effect transistors. These transistors provide higher gain, better linearity and greater reliability due to their negative temperature coefficient. LDMOS also allows replacement of the toxic BeO (beryllium oxide) packages used with bipolar transistors by environment-friendly ceramic or plastic packages.
Monitoring and protection circuitry in the power amplifier monitors temperature, device currents, forward power, reflected power, PA power output, etc. In case of a temperature fault, the power amplifier is turned off. For other faults, such as forward power, reflected power or device currents higher than the defined thresholds, the input to the power amplifier is reduced through an attenuator at its input.
Automatic gain control is provided so that the output power of the amplifiers is maintained constant for input power level variations within ±3 dB. The sync peak or pedestal level of the RF signal is taken as the reference for generating the AGC voltage. RF signal from the transmitter output is used to generate the AGC voltage, which is then applied to both exciters; the applied AGC voltage controls the exciter output driving the power amplifiers. A better method is to sample the output of each amplifier and develop a corresponding AGC voltage for each; the maximum of these AGC voltages is then selected and applied to the exciter. This prevents overdrive of the power amplifiers in case one or more PAs fail.
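The max-select scheme can be sketched as below; the simple proportional control law and all names are illustrative assumptions, not any vendor's implementation:

```python
def exciter_agc(sync_peak_levels: list, reference: float, gain: float = 1.0) -> float:
    """Max-select AGC sketch: develop one control voltage per power
    amplifier (proportional to how far its sampled sync-peak level is
    above the reference) and apply the largest to the exciter, so the
    surviving PAs cannot be overdriven when one PA fails."""
    return max(gain * (level - reference) for level in sync_peak_levels)
```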
Though the power amplifiers are broadband within their band of operation, they need to be aligned in phase with the existing power amplifiers if they are to be used in other transmitters operating in the same band. Any phase mismatch will cause combining errors and lead to reflections into the power amplifiers.
Transmitter Control Unit (TCU) is the heart of a TV transmitter. It performs the following basic
functions:
The Transmitter Control Unit facilitates control and operation of the transmitter in local or remote mode; an additional maintenance mode is also provided in some transmitters. In local mode, commands are given to the transmitter by pushing switches on the operation panel. Remote mode is used when the transmitter is to be controlled from a remote site through computers. Maintenance mode prohibits automatic changeover of exciters and blowers (or pumps, for liquid cooling), to ensure safety when maintenance is being carried out on them.
Fig. 6: Typical power amplifier — a 3-way splitter feeding 2×300 W amplifier modules with their power supply, the outputs recombined through 3 dB couplers, with thermistor temperature sensing.
The Transmitter Control Unit ensures the correct switching sequence for the transmitter. It switches on the transmitter only when certain conditions necessary for its safe operation are satisfied; these conditions are called interlocks. In all solid state transmitters, interlocks are provided for proper termination of the transmitter output into either the antenna or a dummy load (called the external interlock) and for proper cooling of the transmitter (air pressure for air cooling, liquid pressure for liquid cooling).
When the external interlock is through, the Transmitter Control Unit switches on the blower (or pump, in the case of liquid cooling). If the air/liquid flow is not proper, that blower (pump) is switched off and the second blower or pump is switched on. The transmitter (power supplies and power amplifiers) is switched on only when it is connected to the antenna or dummy load and the air/liquid pressure is sufficient. The Transmitter Control Unit also performs automatic changeover to the standby exciter or blower in case the main exciter/blower fails during operation.
The Transmitter Control Unit also performs monitoring functions. The monitoring circuit allows detection of the vision output signal, aural output signal, reflected power, absorbed power, etc. It also monitors the performance parameters of the various sub-modules, i.e. exciter, power amplifiers, power supplies, etc. These parameters are shown on the display screen of the Transmitter Control Unit; if any parameter is abnormal, it is indicated by blinking that parameter on the screen.
The monitoring section also generates fault conditions when parameters are abnormal. Depending upon the nature of the fault, the Transmitter Control Unit takes appropriate action. For example, if the temperature of a particular power amplifier or the output reflected power exceeds the defined limit, the transmitter is switched off immediately. For some other faults the transmitter does not stop immediately but continues to work by virtue of its redundancy; e.g. if the fault relates to one of the exciters, an automatic changeover to the healthy exciter is made.
A diplexer (two-channel combiner) is a device which combines two RF signals so that the same antenna may be used for both. This is economical, as erecting two antenna systems with separate feeder cables would be very expensive. A simple block diagram of the Constant Impedance Band Pass Diplexer (CIBD) is shown in Fig. 7.
1. The signal of the aural transmitter applied at terminal (1) of the 3 dB coupler H1 appears at terminals (2) and (4) with the same amplitude and with a phase difference of 90° [terminal (2) is 90° ahead in phase of terminal (4)]. Because of the nature of the 3 dB coupler, no output appears at terminal (3).
2. The signals appearing at terminals (2) and (4) pass through the respective aural band pass filters and reach terminals (2)’ and (3)’ of the other 3 dB coupler H2, still with a 90° phase difference.
3. The signals arriving at terminals (2)’ and (3)’ of H2 with a 90° phase difference combine at terminal (4)’, since the signal at terminal (2)’ leads that at terminal (3)’ by 90°; no output appears at terminal (1)’.
4. The visual transmitter output is connected to terminal (1)’ of the 3 dB coupler H2. The visual signal entering this terminal does not appear at terminal (4)’ but at terminals (2)’ and (3)’, with the same amplitude and a phase difference of 90°.
5. The visual signals are reflected at points A and B and return to terminals (2)’ and (3)’ of H2. Since the electrical length from point A to terminal (2)’ equals that from point B to terminal (3)’, the two returning visual signals combine, because of the nature of the 3 dB coupler, and the combined visual signal appears at terminal (4)’.
6. The CIB diplexer presents a constant input impedance as viewed from the visual and aural inputs, and also provides sufficient isolation between the visual and aural signals. Accordingly, it can feed the antenna without any mutual interference. The absorbing resistor absorbs the aural signal components reflected by the filters and the visual signal components passing through the filters.
7. Spurious frequencies of the visual carrier are attenuated by the OPF, while the visual carrier itself is not attenuated, as the OPF is tuned to it.
8. Signals at frequencies higher than the visual and aural carrier frequencies are attenuated by the harmonic filter; the visual and aural carriers themselves are not attenuated, as the harmonic filter passes these frequencies.
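The 90° combining behaviour of the hybrids can be verified with ideal-coupler arithmetic. The sketch below assumes an ideal lossless 3 dB quadrature coupler with the coupled path shifted by +90° (one common sign convention); cascading two such couplers delivers all the power to a single port, as in steps 1–3 above:

```python
import math

def coupler_outputs(a: complex, b: complex):
    """Ideal 3 dB quadrature coupler: each input splits equally between
    the two outputs, the coupled path picking up a +90 degree shift."""
    s = 1 / math.sqrt(2)
    return s * (a + 1j * b), s * (1j * a + b)

# One input splits into two equal outputs 90 degrees apart...
p2, p4 = coupler_outputs(1.0, 0.0)
# ...and a second identical coupler recombines them into a single port.
q1, q4 = coupler_outputs(p2, p4)
```

All the power emerges at the second coupler's (4)' port, with nothing at (1)', mirroring the diplexer description.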
COOLING SYSTEM
A simplified block diagram of a typical liquid cooling system in a transmitter is shown in figure 8. The transmitter uses a 50/50 glycol and demineralized water solution to remove the majority of the heat, with cabinet flushing fans to remove the residual heat. Glycol is an anti-freezing agent: it lowers the freezing point and raises the boiling point of water, so a 50/50 glycol solution freezes at about −37°C and boils at 106°C. The cooling system essentially consists of the pump modules (with expansion tank, air purge and air vent), the heat exchanger and the cooling system control panel.
The cooling system is a closed loop, pressurized system. Trapped air must be removed through the vent before operating the cooling system. Expansion tanks in the pump module absorb the expanding fluid (due to heating) and limit the pressure within the cooling system; the expansion tank uses compressed air to maintain the system pressure by accepting and expelling the changing volume of water as it heats and cools. The water used initially to fill a water cooling system contains dissolved air, and make-up water added subsequently will similarly have a high air content. Heating this water releases the air, which must be vented. An air purge is provided to continuously separate and collect any air from the circulating water, so that it may be vented automatically by the air vent without the need for frequent manual venting.
The cooling system control panel controls the operation of the pump modules and heat exchanger, and sends fault and status information to the Transmitter Control Unit. The pump module and heat exchanger can be operated in “Local” or “Remote” mode; the selection is made through a System Control switch on the control panel. When System Control is in Remote mode, the transmitter is responsible for control of the cooling system, including ON/OFF and automatic pump switching in case of a failure. Placing the control panel in Local mode allows manual switching of the pumps.
The heat exchanger is a two-fan unit. The fans are controlled electronically and are enabled whenever the pump module is activated. The fan turn-ON and turn-OFF set points have a hysteresis window. For example, the Fan 1 set point may be 32°C with a 5°C hysteresis window, meaning Fan 1 turns ON at 34.5°C and shuts off at 29.5°C; the Fan 2 set point may be 37.5°C, also with a 5°C window, so Fan 2 turns ON at 40°C and shuts off at 35°C.
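The hysteresis behaviour described for the fans can be modelled directly. This is a sketch: the thresholds are the set point ± half the window, matching the figures in the text, and the function name is ours:

```python
def fan_state(temp_c: float, set_point_c: float, window_c: float,
              currently_on: bool) -> bool:
    """Hysteresis control: turn ON at set point + window/2, turn OFF at
    set point - window/2, and hold the current state in between."""
    if temp_c >= set_point_c + window_c / 2:
        return True
    if temp_c <= set_point_c - window_c / 2:
        return False
    return currently_on
```

The dead band prevents the fans from chattering on and off around a single threshold temperature.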
LOW POWER TRANSMITTERS
TV transmitters with sync peak power equal to 100 W or more but less than 1 kW are called Low Power Transmitters (LPTs). Low power transmitters in Doordarshan have 100 W, 300 W or 500 W sync peak power, acquired from several manufacturers such as BEL, GCEL, WEBEL and IMP Telecom. New generation LPTs can work in auto mode in a 1+1 configuration.
Fig. 9: (1+1) TV transmitter schematic — transmitters A and B, each with video and audio inputs, feeding a 2-way RF switch and a band pass filter to a common antenna. A further diagram shows the VHF exciter: vision modulator with 38.9 MHz oscillator, IF combiner, linearity corrector and IF/RF converter with OCXO reference, supervised by a microcontroller.
In an LPT, the video and audio signals are modulated at IF, combined, and then up-converted to the required channel frequency in the exciter. The combined signal passes through a common amplification chain, is filtered in a band pass filter at the output of the chain, and is fed to the antenna. An LPT therefore differs from an HPT in that, in an HPT, the aural and vision signals are generally not combined in the exciter but amplified through separate amplification chains to avoid intermodulation.
The block diagram of the latest 500 W (1+1) VHF low power TV transmitter commissioned in the Doordarshan network is shown in figure 9. The system consists of two 500 W solid state air cooled transmitters: one remains in circuit while the other is on passive standby. In case of breakdown of the working transmitter, the Automatic Switching Unit switches to the second transmitter to ensure continuity of service.
DESCRIPTION OF OPERATION
MODULATOR VISION: The vision modulator amplitude modulates the IF vision carrier with the video signal. It consists of the following subunits: video input level control (auto or manual), sync pulse regeneration, white level limiter, group delay pre-correction, DC black level restoration with modulation depth setting, amplitude modulator, 38.9 MHz oscillator with external frequency reference, SAW (Surface Acoustic Wave) filter, and IF amplifier with level control. The subunits are controlled by the microcontroller unit, where the basic settings and measurements are available.
MODULATOR SOUND: It frequency modulates the IF sound carrier with the sound signal. The sound modulator works in combination with the vision modulator; the generated frequencies are synchronized with the vision carrier frequency (38.9 MHz).
The balanced sound input is fed to this module. The audio input impedance is set to 600 ohms or 12 kilohms. The gain of the input signal, and thus the resulting frequency deviation, is set on the MCU unit. The audio input signal is then fed to the pre-emphasis amplifier and low pass filter; the pre-emphasis circuitry and the sound carrier can be switched ON/OFF. The output frequency of the sound carrier is PLL-synchronized with the oscillator frequency of the vision modulator module, and its amplitude is stabilized by an automatic gain control (AGC) circuit. The operation of the module is supervised and controlled by the MCU.
RF AMPLIFIER: This is a broadband pre-driver amplifier. In the VHF band it consists of a linear class AB pallet mounted on a heat sink. It amplifies relatively low level input signals; the amplified RF signal is fed to the next amplifying stage, which has built-in over-temperature protection.
The ALC signal is obtained from a directional coupler provided at the output of the RF amplifier.
POWER AMPLIFIER 500 W VHF: The final RF power amplifier stage is solid state and broadband. The RF signal is divided by a Wilkinson divider into four outputs, which are amplified by four class AB power amplifiers and then recombined in a Wilkinson 4-way combiner. In case of overheating, two electronic protection circuits with temperature sensors switch off the voltage of the SMPS AC/DC converter. The amplifier module incorporates a microcontroller PCB which controls the electrical and thermal parameters of the module and communicates with the exciter module via RS-485. The unit also incorporates test circuitry for metering the output and reflected power, as well as protective circuitry against excess reflected power: if the reflected power exceeds 100 W (20%), the corresponding circuitry cuts the power supply to the stage. If the output power falls below a set value (−4 dB), automatic switchover to the standby transmitter takes place.
The output of the power amplifier is fed to a band pass filter to attenuate harmonic frequencies. To obtain the required attenuation of the (vision + sound) and (vision − sound) intermodulation products, two suction (notch) circuits are also added, which may be adjusted as necessary.
VERY LOW POWER TRANSMITTERS
Transmitters with a rated capacity of less than 100 W are called Very Low Power Transmitters (VLPTs). These transmitters are installed in areas where the coverage for a given power is restricted by terrain, or at remote, inaccessible locations. They are unmanned and are maintained by Doordarshan maintenance centres. A schematic of a VLPT is shown in figure 11.
Each transmitter consists of two complete transmission chains (exciter + power amplifier + filter) along with an automatic switching unit (ASU). Any combination of exciter and power amplifier can be turned ON by the ASU, which can be put in either manual or auto mode. In auto mode, exciter 1 is selected first and its status is monitored by the switching unit; if its health is not OK, exciter 2 is selected. Audio and video signals are routed to the selected exciter through a changeover unit, which also receives commands from the ASU. Video and audio signals are modulated at IF, combined, and then up-converted to the required channel frequency in the exciter. Similarly, PA1 is turned ON first, and if its health is not OK, PA2 is turned ON. Both transmission chains are connected to the antenna through a coaxial switch: the selected chain is connected to the antenna and the other to a dummy load.
The transmitter operates on 24 V DC. A mains power supply operating on 230 V AC and giving 24 V DC is also provided; this supply, however, has poor regulation and is used only in an emergency. The main power source is a battery bank: two banks of 800 Ah or 1600 Ah batteries are normally used to power the transmitter. The battery bank is charged by an array of solar panels through a charge controller, which has the following functions.
Fig. 11: VLPT schematic — two chains (Exciter I + 50 W PA I, Exciter II + 50 W PA II) fed through an audio/video change-over unit, a C/O switch routing the selected chain to the antenna and the other to a dummy load, with control from the switching unit and a 24 V DC supply from the battery bank.
iv. It isolates the battery bank from the solar panel. The solar panel voltage may go as high as 50 V while the nominal battery voltage is 24 V; hence the solar panel voltage cannot be applied directly to the battery terminals.
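As a rough feel for the battery sizing, stored energy divided by load gives the autonomy. The sketch below ignores depth-of-discharge limits and converter losses, so real autonomy will be lower:

```python
def battery_autonomy_hours(capacity_ah: float, voltage_v: float,
                           load_w: float) -> float:
    """Idealized battery autonomy: stored energy (Ah * V) divided by the
    load power.  Ignores depth-of-discharge and conversion losses."""
    return capacity_ah * voltage_v / load_w
```

An 800 Ah bank at 24 V can thus, in principle, run a sub-100 W VLPT for on the order of a week.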
There is another variant of the very low power TV transmitter called a “transposer”. A transposer caters to areas which fall in the shadow region of a nearby transmitter; it is installed at a location where a strong signal from that transmitter is available. The signal is received off-air, converted to an IF signal, converted again to some other channel frequency, amplified and retransmitted. A transposer thus converts a signal from one channel frequency to another.
ACTIVITIES
1. Visit a high power TV transmitter installation & study the signal path from input rack to
antenna feeder panel.
2. Note down the signal power at different stages of the transmitter.
3. Study the cooling system of the transmitter.
4. Study the AC power supply distribution in transmitter hall.
RECAP
TV transmitters play an important role in taking TV programmes to the masses. New generation TV transmitters are all solid state, with high reliability and efficiency. Depending on geographical conditions and coverage area, Doordarshan has installed high power, low power and very low power TV transmitters. The block diagram of a TV transmitter has been described in this chapter. To avoid overheating of TV transmitter equipment, different types of cooling systems are used, such as air and liquid cooling; liquid cooling, a very efficient and noise-free system, has been described thoroughly with a line diagram.
FURTHER READINGS
******
18
DIGITAL VIDEO BROADCASTING
INTRODUCTION
Multichannel television transmission in digital form within a given bandwidth is achieved through digital terrestrial transmission. Digital transmission offers many advantages over analog transmission. Digital television also enables various interactive services, such as an on-air electronic programme guide (EPG), TV shopping and on-air games, to name a few.
OBJECTIVE
The ATSC standard uses single-carrier 8-VSB modulation: the digital data stream first generates an 8-level ASK signal, which is then IQ modulated to generate a vestigial side band signal. ATSC is not suitable for reception in a mobile environment. DVB-T/T2, ISDB-T and DTMB use COFDM.
DVB-T is an abbreviation for "Digital Video Broadcasting — Terrestrial". It is the European DVB consortium standard for the broadcast transmission of digital terrestrial television, first published in 1997 and first used for TV broadcasting in the UK in 1998. This system transmits compressed digital audio, digital video and other data in an MPEG transport stream.
DVB-T2
DVB-T2 is a second generation digital terrestrial video broadcasting standard. It was mainly pushed by the BBC for HDTV terrestrial coverage with MPEG-4 compression. The new system was targeted to provide a minimum 30% increase in payload under channel conditions similar to those already used for DVB-T. DVB-T2 is a completely new standard and not an upgraded version of DVB-T, so it has no backward compatibility with DVB-T. DVB-T2 achieves a 30% to 50% higher net data rate as compared to DVB-T. The maximum data rate achievable with DVB-T2 is 50.32 Mbit/s in an 8 MHz channel. It is more suitable for mobile reception than DVB-T and is able to manage even with very narrow channels; the lowest bandwidth defined in DVB-T2 is 1.7 MHz.
The main configurable parameters of DVB-T2 are:
a) Modulation
b) Modes (FFT size)
c) Guard Interval
d) Forward Error Correction (FEC)
MODULATION
DVB-T2 provides the following options: QPSK, 16QAM, 64QAM and 256QAM. The higher the QAM order chosen, the better the data rate. But a higher QAM order means less distance between the constellation points for a given transmitter power, and thus the signal becomes more susceptible to noise. Thus, coverage area decreases with higher QAM.
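This trade-off can be illustrated numerically. The sketch below (a Python illustration, assuming ideal square QAM constellations normalised to unit average power) computes the minimum inter-point distance for each constellation order:

```python
import math

def min_distance(m: int) -> float:
    """Minimum distance between points of a square m-QAM constellation
    normalised to unit average power. An unnormalised square QAM uses
    levels +/-1, +/-3, ... with average energy 2(m - 1)/3 and a minimum
    inter-point distance of 2."""
    return 2.0 / math.sqrt(2.0 * (m - 1) / 3.0)

for m in (4, 16, 64, 256):
    bits = int(math.log2(m))
    print(f"{m:>3}-QAM: {bits} bits/carrier, d_min = {min_distance(m):.3f}")
```

The distance shrinks roughly by half for each step up in QAM order, which is why the required signal-to-noise ratio rises and the coverage area shrinks.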
MODE
Mode means the number of subcarriers used in COFDM. For example, in the 8K mode the number of subcarriers used is 8 × 1024 = 8192. The mode in turn defines the size N in the FFT and IFFT. The available modes in DVB-T are 2K and 8K, and in DVB-T2 are 1K, 2K, 4K, 8K, 16K and 32K. The number of COFDM carriers is always chosen as a power of 2 because this allows the discrete Fourier transform to be computed using the fast Fourier transform method, which involves far fewer calculations and is faster.
Lower order modes are suitable for mobile reception of the signals. With lower order modes, the frequency spacing between carriers within a given bandwidth is much larger than in higher order modes, so they are less susceptible to spreading in the frequency domain caused by the Doppler effect due to mobile reception and multiple echoes. A particular mode is selected after taking into account the Doppler effect and also multipath propagation.
Higher order modes are chosen where mitigation of the effects of reflections and echoes is required. They are suitable for implementation of a single frequency network because higher order modes increase the symbol duration. If direct and reflected signals are received within the same symbol duration, then the reflected signals do not cause any problem; in fact, they reinforce the transmitted symbol.
The data rate does not depend on the order of the mode chosen. A higher order mode means more samples (more bits) per symbol, but at the same time the symbol duration is also increased. Thus the increase in the number of bits per symbol is offset by the increase in symbol duration.
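This offset can be checked numerically (a Python sketch; the 64/7 MHz sampling rate is the standard DVB-T/T2 value for 8 MHz channels, all carriers are assumed active, and 64-QAM is chosen arbitrarily):

```python
# With a fixed sampling rate, raising the FFT size raises both the bits
# carried per COFDM symbol and the symbol duration by the same factor,
# so the raw bit rate is the same for every mode (guard interval and
# pilots are ignored in this sketch).
FS = 64e6 / 7            # DVB-T/T2 sampling rate for 8 MHz channels (Hz)
BITS_PER_CARRIER = 6     # 64-QAM, chosen arbitrarily for the illustration

for n_fft in (2048, 8192, 32768):        # 2K, 8K and 32K modes
    symbol_duration = n_fft / FS         # useful symbol duration in seconds
    bits_per_symbol = n_fft * BITS_PER_CARRIER   # all carriers assumed active
    rate = bits_per_symbol / symbol_duration     # bits per second
    print(f"{n_fft:>5}-point FFT: symbol {symbol_duration * 1e6:6.1f} us, "
          f"raw rate {rate / 1e6:.2f} Mbit/s")
```

Every mode prints the same raw rate, confirming that the extra bits per symbol are exactly offset by the longer symbol.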
Not all the carriers available in a particular mode are used for transmission of payload. For example, the 2K mode has 2048 carriers but the actual number of payload carriers is 1512. The edge carriers at the upper and lower edges are set to zero, i.e. they are inactive and do not carry any modulation. This is done to provide a sufficient guard band between adjacent channels.
Continual pilots are used in the receiver as a phase reference and for automatic frequency control, i.e. for locking the receiver frequency to the transmitter frequency. They are located on the real axis, i.e. the I (in-phase) axis; thus these carriers have either zero or 180 degree phase. Continual pilots are boosted by 3 dB with respect to the average signal power. These carriers occupy the same position in the spectrum, i.e. their position does not change with time. Scattered pilots constitute virtually a sweep signal for channel estimation; the position of a scattered pilot changes from symbol to symbol. Scattered pilots are also on the I axis at 0 or 180 degrees. The scattered pilots have several selectable patterns. A less dense pilot pattern means more payload carriers, resulting in a higher net data rate. A denser pilot pattern allows better channel estimation, especially under difficult reception conditions such as multipath and mobile reception. Not all pilot patterns are available in all modes and guard interval configurations. P2 pilots carry information about transmitter parameters to the receiver, for example the mode, code rate and guard interval being used at the transmitter.
GUARD INTERVAL
Digital data is very sensitive to echoes, reflections and propagation delay. These effects can be minimized by making the symbol duration longer. Subcarriers in COFDM are orthogonal over the symbol duration; if the reflected signals are received within the same symbol period, they will not cause inter-symbol interference, due to the orthogonality of the signal. This has already been explained in chapter 9 of "Fundamentals of Broadcast Technology" while discussing COFDM. The symbol duration can be increased firstly by choosing a higher order mode. Once a particular mode is selected, the symbol duration can be further increased by choosing an appropriate guard interval. Thus, the guard interval provides immunity against multipath, echoes etc. The longer the guard interval, the better the protection, but the lower the useful data rate.
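This trade-off can be quantified as the fraction of air time that carries payload (a Python sketch using the guard interval fractions defined in DVB-T2):

```python
# Fraction of transmitted air time that carries useful data for each of
# the guard interval fractions defined in DVB-T2. The guard interval T_g
# is a fraction of the useful symbol duration T_u, so the efficiency
# T_u / (T_u + T_g) depends only on that fraction.
GUARD_FRACTIONS = (1 / 128, 1 / 32, 1 / 16, 19 / 256, 1 / 8, 19 / 128, 1 / 4)

for gi in GUARD_FRACTIONS:
    efficiency = 1.0 / (1.0 + gi)
    print(f"guard interval {gi:8.5f} of T_u -> efficiency {efficiency:.3f}")
```

A guard interval of 1/4 leaves only 80% of the air time for useful data, while 1/128 leaves over 99%, which is why the guard interval is chosen no longer than the echo environment demands.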
A long symbol duration, and hence a long guard interval, is desirable in a Single Frequency Network (SFN) as this will minimize the inter-symbol interference from large, distant transmitters.
Forward error correction in DVB-T2 is a concatenation of an outer BCH code and an inner Low Density Parity Check (LDPC) code. Low density parity check codes are highly efficient and help to achieve performance close to the Shannon limit in a noisy environment.
The available code rates are 1/2, 3/5, 2/3, 3/4, 4/5 and 5/6.
A digital TV transmission is received until it falls off the cliff; that means go or no-go. All interference causes more or fewer bit errors. If the bit error ratio is too high, the FEC in the receiver will fail: no-go, or fall-off-the-cliff, if there are too many bit errors. The transition from go to no-go takes place within only a few tenths of a dB of signal to noise ratio. Therefore, the choice of a proper code rate is important.
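The effect of the code rate on the usable payload can be seen directly (a Python sketch; the 40 Mbit/s gross pre-FEC rate is purely illustrative, and BCH and signalling overheads are ignored):

```python
# Net payload rate scales directly with the chosen LDPC code rate:
# a lower code rate spends more bits on protection and fewer on payload.
GROSS_MBIT = 40.0   # illustrative gross (pre-FEC) rate, Mbit/s

for num, den in ((1, 2), (3, 5), (2, 3), (3, 4), (4, 5), (5, 6)):
    net = GROSS_MBIT * num / den
    print(f"code rate {num}/{den}: net = {net:5.2f} Mbit/s")
```

Choosing 5/6 instead of 1/2 raises the net rate by two thirds, but at the cost of far less protection and therefore an earlier fall off the cliff.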
[Fig.: Picture quality versus signal strength — gradual analog roll-off compared with the abrupt digital cliff]
The DVB-T standard is confined to the MPEG-2 or MPEG-4 transport stream format of input data. Only one transport stream can be fed to the modulator, except in the case of the hierarchical modulation mode of operation, where a second low bit rate data stream can also be supplied to the modulator. However, in DVB-T2, up to 255 transport or generic streams can be fed to the modulator and transmitted. Moreover, each stream can be assigned a different modulation and code rate. This is called variable coding and modulation.
ROTATED CONSTELLATIONS
Rotated constellations were introduced in the DVB-T2 standard. This method is also known as Signal Space Diversity (SSD) because its final purpose is to increase the diversity order, that is, to achieve redundancy in the information bits of the coded modulation, to improve receiver performance in severe propagation scenarios.
Principle
The non-rotated constellation diagram for 4 QAM and 16 QAM is shown in Fig. 2.
It is clear from the diagram that each constellation point is uniquely specified by two coordinates, viz. the in-phase component and the quadrature component. The receiver needs information about both coordinates to identify the particular constellation point transmitted; one component gives no information about the other. For example, in 16 QAM there are four points projected on the Q axis and four on the I axis, corresponding to the 16 points, and each point is uniquely defined by its I and Q values. If either value is lost in channel fading, the point is completely lost. Accordingly, the receiver may make an error when one or both of the components are lost.
Now consider constellation diagram shown in the Fig. 3. The constellation points in blue have
been obtained by rotating constellation diagram with green constellation points.
It is clear from the constellation diagram that any of the blue constellation points can be uniquely determined by either the in-phase component or the quadrature component alone. So if the receiver identifies one component correctly, it will make a correct decision about the constellation point being received.
To overcome the problem of fading of both the components which occurs because both the
components are transmitted together, I and Q of a constellation point are transmitted separately
in different carriers and even in different time slots. Thus, if one of the components is destroyed
or affected by a deep selective fading of the channel, the other component can be used to
recover the information. This process of separately sending coupled information is called interleaving. Due to this interleaving, the in-phase and quadrature components of a transmitted symbol are affected by independent fading, so a receiver is able to make a correct estimate of the transmitted constellation point even if only one component is received correctly. The result of this technique is to increase the robustness of the receiver in propagation scenarios with deep fades and/or erasure events.
The performance gain obtained when using rotated constellations depends on the choice of the rotation angle. The optimum rotation angle depends on the chosen modulation and channel type. The angle values for the different constellations are given below:
QPSK: 29.0°
16-QAM: 16.8°
64-QAM: 8.6°
256-QAM: atan(1/16) ≈ 3.57°
These angles have been chosen for each constellation size independently of the channel type. Although these angles are only optimum for a particular channel type, they always present a performance improvement with respect to non-rotated constellations in fading channels with or without erasures.
A cell in the above figure is defined as the result of mapping a carrier. It is clear that only the I component of a particular constellation point is transmitted in its own cell; the Q component is delayed and sent in the next cell. So cell k carries its own in-phase component but the Q component of cell k−1, and the Q component of cell k is transmitted in cell k+1. Hence the Q components are delayed by one cell. This is why this technique is called the rotated, Q-delayed constellation technique.
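A minimal Python sketch of the rotate-and-delay idea (the 29° angle is the DVB-T2 QPSK value quoted above; the simple cyclic one-cell Q delay is a simplification of the standard, which combines the delay with cell interleaving):

```python
import cmath

ROTATION_DEG = 29.0   # DVB-T2 rotation angle for QPSK

def rotate(points, degrees):
    """Rotate every constellation point by the given angle."""
    r = cmath.exp(1j * cmath.pi * degrees / 180.0)
    return [p * r for p in points]

def q_delay(cells):
    """Cyclically delay the Q component by one cell: cell k keeps its own
    I component but carries the Q component of cell k-1."""
    q = [c.imag for c in cells]
    q = q[-1:] + q[:-1]                       # one-cell cyclic delay
    return [complex(c.real, qq) for c, qq in zip(cells, q)]

qpsk = [complex(i, q) for i in (1, -1) for q in (1, -1)]
rotated = rotate(qpsk, ROTATION_DEG)

# After rotation every point has a unique I projection (and a unique Q
# projection), so either component alone identifies the point.
i_proj = [round(p.real, 9) for p in rotated]
print("unique I projections:", len(set(i_proj)) == len(i_proj))

transmitted = q_delay(rotated)
print("cell 1 carries Q of cell 0:", transmitted[1].imag == rotated[0].imag)
```

Both checks print True: rotation makes each projection unique, and the delay separates the I and Q of one point into different cells so that a deep fade on one cell cannot destroy both.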
The peak-to-average power ratio (PAPR) is a related measure, defined as the peak amplitude squared (giving the peak power) divided by the RMS value squared (giving the average power):

PAPR = Upeak² / Urms²
Crest factor and PAPR are therefore dimensionless quantities. While the crest factor is most simply expressed by a positive rational number, in commercial products it is also commonly stated as the ratio of two whole numbers, e.g. 2:1. The PAPR is mostly used in signal processing applications. As it is a power ratio, it is normally expressed in decibels (dB). In terms of voltage, the crest factor in dB is
Cf = 20 × log10(Upeak / Urms) dB
The minimum possible crest factor is 1, 1:1 or 0 dB.
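A tiny Python check of the crest factor formula above:

```python
import math

def crest_factor_db(u_peak: float, u_rms: float) -> float:
    """Crest factor in dB: Cf = 20 * log10(Upeak / Urms)."""
    return 20.0 * math.log10(u_peak / u_rms)

# A sine wave has Urms = Upeak / sqrt(2), giving the familiar 3.01 dB;
# a DC level (Upeak == Urms) gives the minimum crest factor of 0 dB.
print(f"sine wave: {crest_factor_db(1.0, 1.0 / math.sqrt(2)):.2f} dB")
print(f"DC level:  {crest_factor_db(1.0, 1.0):.2f} dB")
```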
PAPR IN COFDM
A COFDM signal contains a large number of subcarriers whose amplitudes and phases vary independently. It is likely that all these carriers may interfere constructively, giving a high instantaneous peak value. This may result in a high peak-to-average power ratio. The PAPR in a COFDM signal is given by

PAPR_COFDM = 10 × log10(2N) dB

where N is the number of subcarriers used in the COFDM signal.
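Applying this 10·log(2N) rule (a Python sketch using the 2K and 8K active carrier counts given later in this chapter):

```python
import math

def papr_cofdm_db(n_subcarriers: int) -> float:
    """Worst-case COFDM PAPR per the 10*log10(2N) rule used in the text."""
    return 10.0 * math.log10(2 * n_subcarriers)

# Active carrier counts for the DVB-T 2K and 8K modes.
for n in (1705, 6817):
    print(f"N = {n:>4}: worst-case PAPR = {papr_cofdm_db(n):.1f} dB")
```

These worst-case figures (about 35 dB and 41 dB) occur only when all carriers align in phase, which is extremely rare; practical systems budget for far lower values, as discussed below.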
The PAPR in a COFDM system may rise to a very high value, say 20 dB. A high peak-to-average power ratio can cause the following problems:
It increases the complexity of the analog-to-digital and digital-to-analog converters.
It reduces the efficiency of the RF power amplifier.
The PAPR puts a stringent requirement on the power amplifier and reduces its efficiency, in the sense that a higher input back-off is needed before the peaks in the signal experience significant distortion in the power amplifier.
In practice, PAPR is kept up to about 15 dB, clipped at about 12-13 dB in power amplifiers. Two methods are commonly employed for reduction of the crest factor:
1. Active Constellation Extension (ACE)
2. Tone Reservation (TR)
Outer constellation points can be shifted outward within certain limits, without increasing the BER, in order to decrease the crest factor. This is shown in Fig. 5. However, this method cannot be used in DVB-T2 when the rotated constellation diagram is in use.
In this technique, certain carriers are not used for payload transmission but are kept reserved. The reserved tones (carriers) can be switched on or off and modified in amplitude and phase to decrease the crest factor. If TR is in use, the net data rate is reduced by 0.4…0.8 Mbit/s, depending on the transmission parameters (IFFT mode, guard interval, code rate).
The symbol length is the reciprocal of the frequency spacing between the carriers. For the 2K mode in an 8 MHz channel, the carrier spacing is 4.464285714 kHz, so the symbol length = 1/4464.285714 Hz = 224 µs. For the 8K mode in 8 MHz channels, the symbol length is 896 µs.
In DVB-T/T2 the edge carriers are set to zero. The actual number of carriers used in the 2K mode is 1705 and in the 8K mode 6817. The total bandwidth occupied by the carriers = frequency spacing between carriers × number of carriers used. Thus, the total bandwidth required for the 2K mode is 7.612 MHz and for the 8K mode 7.608 MHz.
So the COFDM signal occupies a bandwidth of about 7.6 MHz, leaving a space of about 200 kHz between the selected channel and the adjacent channel on either side.
COFDM carriers can be counted either from 0 to 2047 in the 2K mode and 0 to 8191 in the 8K mode, in accordance with the IFFT carriers; alternatively, counting can begin with the actual payload carriers used, i.e. 0 to 1704 in the 2K mode and 0 to 6816 in the 8K mode.
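These figures can be reproduced with a short Python sketch (assuming the standard 64/7 MHz DVB-T sampling rate for 8 MHz channels):

```python
# Carrier spacing, useful symbol length and occupied bandwidth for the
# DVB-T 2K and 8K modes in an 8 MHz channel.
FS = 64e6 / 7  # standard DVB-T sampling rate for 8 MHz channels (Hz)

for n_fft, used_carriers in ((2048, 1705), (8192, 6817)):
    spacing = FS / n_fft                 # Hz between adjacent carriers
    symbol_length = 1.0 / spacing        # useful symbol duration, seconds
    occupied = spacing * used_carriers   # spacing x carriers, as in the text
    print(f"{n_fft // 1024}K: spacing {spacing / 1e3:8.3f} kHz, "
          f"symbol {symbol_length * 1e6:5.0f} us, "
          f"occupied {occupied / 1e6:.3f} MHz")
```

Both modes come out to roughly 7.61 MHz of occupied bandwidth, matching the 224 µs and 896 µs symbol lengths quoted above.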
EXTENDED MODES
In the 8K and higher modes, DVB-T2 supports a wider spectrum: fewer edge carriers are set to zero and thus the number of payload carriers is increased. Payload carriers in the normal and extended modes of DVB-T2 are listed below.
Use of a larger number of carriers increases the net data rate in DVB-T2. However, use of the extended mode reduces the guard band between the selected channel and the adjacent channel.
The spectrum of the DVB-T2 signal shown above is the ideal spectrum. The spectrum in the wanted channel should be as flat as possible and should not contain any ripple or tilt. It should roll off to zero smoothly towards the edges and there should be no signal component outside the wanted band. In practice, however, there are signal components outside the wanted band; these are called shoulders. The shoulders arise partly from the superposition of the sin(x)/x tails of the modulated carriers; nonlinearities also cause shoulders. These unwanted components should be at least 40 dB down from the centre of the spectrum to avoid adjacent channel interference. The permissible shoulder attenuation is called the tolerance mask. The spectrum of the DVB-T2 signal in normal and extended modes is shown in Fig. 7.
It is seen that the shoulder attenuation does not increase when the extended mode is switched on in DVB-T2. Moreover, the shoulder attenuation is greater in the 2K mode than in the 32K mode.
The critical mask is to be used for the lowest and highest channel in the allocated band to
protect neighbouring services; the uncritical mask is to be used inside the allocated band.
DVB-T tolerance mask is defined as under
Required masking of spectrum is obtained by using mask filter at the output of the transmitter.
ACTIVITY
Study the main differences in digital TV transmitters using DVB-T and DVB-T2 standards.
RECAP
The concepts of DVB-T2 show that it has certain advantages over DVB-T. The rotated constellation technique reduces the bit error rate in DVB-T2 transmission. Choosing a higher order mode in DVB-T2 increases the symbol duration and thereby helps in eliminating the effects of reflections and echoes. DVB-T2 is also more reliable for mobile reception.
FURTHER READINGS
******
19
TELEVISION ANTENNA SYSTEM
INTRODUCTION
The TV antenna system is that part of the broadcasting network which accepts RF energy from the transmitter and radiates electromagnetic waves into space. The polarization of the radiation as adopted by Doordarshan is linear horizontal. The system is installed on a tower of appropriate height to achieve the desired coverage. Different types of antennas are used depending upon the power and carrier frequencies of the transmitters.
This chapter describes the panel type antenna and the super turnstile antenna used for high power transmitters, and the pole mounted V-shaped dipole antenna and slot antenna used for low power TV transmitters.
OBJECTIVE
After reading this chapter the reader will be able to:
a) It should have required gain and provide desired field strength at the point of reception.
b) It should have desired horizontal radiation pattern and directivity for serving the planned
area of interest. The radiation pattern should be Omni directional if the location of the
transmitting station is at the center of the service area and directional one, if the location
is otherwise.
c) It should offer proper impedance to the main feeder cable and thereby to the transmitter so that optimum RF energy is transferred into space. Impedance mismatch results in reflection of RF energy and a high VSWR.
The Band III antenna panel, as shown in fig. 3, consists of a reflector, four dipole elements, two baluns, a set of parallel feeders, two variable capacitors, a power divider and two branch feeder cables of 72 ohm impedance each. The variable capacitors are shunted across the parallel feeders to tune out the reactive impedance of the dipole elements. The power divider, located on the rear of the reflector, has two 72 ohm output points to which one end of each branch feeder cable is connected. Each branch feeder cable feeds two dipole elements through a balun and parallel feeder. The input port of the power divider has 50 ohm impedance and is connected to one of the ports of the junction box through a 50 ohm branch feeder cable.
JUNCTION BOX
Two junction boxes made from coaxial elements are located at suitable positions on the tower. The junction box has one input terminal and a number of output terminals; the number of the latter depends upon the number of antenna panels in each bay. The input and output ports of the junction box for Band III have an impedance of 50 ohms.
Two sets of branch feeder cables connect the antenna panels. One set has length L and the other set's length is L plus a quarter wavelength. The number of such cables in each set is half the total number of antenna panels. This applies when an equal number of panels is mounted on each face of the tower. The impedance of the branch feeder cable is 50 ohms for Band III.
FEEDING ARRANGEMENT
BEAM TILT
As explained in the previous paragraph, the upper bay antenna panels may be called antenna system No. 1 and those of the lower bay antenna system No. 2. If the electrical lengths of the two main feeders feeding RF energy to the two junction boxes are equal, antenna systems 1 and 2 are excited by currents of equal amplitude and phase, and the resultant main beam is directed at right angles to the line of the arrayed elements. But in order to have better signal strength in the fringe areas, considering the curvature of the earth, it is necessary to make this main beam tangent to the earth; for this, the main beam is required to be tilted slightly below the horizontal direction. This method is called beam tilt. It is realized by exciting the lower bay elements with a current which lags in phase behind the current feeding the upper bay. The relation between the phase difference and the tilt obtained is given by
ψ = (2 × 22 × d × sin θ) / (7 × λ) radians

where d = distance between the centre points of the upper and lower bays of the antenna (metres), θ = beam tilt angle, ψ = phase difference and λ = wavelength (22/7 approximates π).

The required phase difference is obtained by increasing the length of the lower feeder by L, where

L = (ψ × 7 × λ) / (2 × 22) metres
The optimum degree of beam tilt depends on the antenna height.
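A worked example of the two relations above (a Python sketch; the wavelength, bay spacing and tilt values are purely illustrative, not taken from the text):

```python
import math

# Illustrative values only.
WAVELENGTH = 1.5   # metres, roughly Band III around 200 MHz
D = 6.0            # metres between upper and lower bay centres
TILT_DEG = 1.0     # desired beam tilt below the horizontal

tilt = math.radians(TILT_DEG)
# Phase difference: psi = 2*pi*d*sin(theta)/lambda (22/7 approximates pi).
psi = 2.0 * math.pi * D * math.sin(tilt) / WAVELENGTH
# Extra electrical length of the lower feeder: L = psi*lambda/(2*pi).
extra_length = psi * WAVELENGTH / (2.0 * math.pi)

print(f"phase difference = {math.degrees(psi):.1f} degrees")
print(f"extra feeder length L = {extra_length * 100:.1f} cm")
```

Note that the two formulas combine to L = d sin θ, so for these values a tilt of 1° needs the lower feeder to be only about 10 cm longer.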
The horizontal and vertical radiation patterns are shown in figs. 5 and 6. The total gain depends upon the type of antenna panel and the number of stacks.
[Fig.: Feeding arrangement of antenna panels A1–A4, B1–B4, C1–C4 and D1–D4 on tower faces A to D (top view), showing −90° phasing sections and U-links]
The Sira broadband UHF antenna system has an aperture of six bays with four panels in each bay. This system is an array of horizontally polarized panels suitably designed for use around square type towers. The panel is composed of four full wave dipoles mounted on a solid reflecting screen and fed by a two-way splitter.
The panel is protected by an epoxy glass radome and is pressurizable up to the dipole feed point, in order to avoid water formation due to condensation of the humidity content in the air.
The panels are fed in parallel in order to obtain the desired radiation pattern and a low VSWR. To feed the individual panels, a system of power distribution transformers, power splitters and coaxial cables is used. The power transformation ratios of the various power splitters and the electrical lengths of the feed lines are selected to vary the amplitude and phase fed to the individual dipole panels, thereby achieving the desired gain on the vertical and horizontal patterns.
The antenna is completely mounted on a self-supporting mast. The splitting system is fitted at the bottom of the supplied mast. The supporting mast is fitted with steps (welded inside the mast) to allow access to the components.
The panels of bays 2 and 4 are mounted mechanically advanced by 40 mm, and those in bays 3 and 5 by 80 mm and 140 mm respectively, as shown in figure 8 above.
A Super turnstile antenna is a type of radio antenna named for its distinctive shape which
resembles a bat wing or bow tie. Stacked arrays of batwing antennas are often used for the
broadcast of television signals due to their omnidirectional characteristics. Batwing antennas
typically generate a horizontally polarized signal.
Batwing antennas are essentially a type of crossed dipole antenna, a variant of the turnstile
antenna. The typical arrangement consists of four elements offset at right angles that are
mounted vertically around a common mast. Element “wings” on opposite sides are electrically
connected to each other through the pole and work as a pair. To generate an omnidirectional
pattern, the two element pairs are fed so the first is 90° out of phase with the second. This
creates primarily horizontal polarization in the horizontal plane with an increasing vertical
component as the elevation angle increases. Each group of four elements at a single level is
typically referred to as a bay.
In broadcast applications, multiple bays fed in phase are stacked vertically with a spacing of
approximately one wavelength, to create a collinear array. This generates an omnidirectional
radiation pattern with increased horizontal directivity (more of the energy radiated in horizontal
directions and less into the sky or down at the earth), suitable for terrestrial broadcasting.
The most notable characteristic of a batwing antenna is its wide bandwidth, about 35% at a VSWR of 1.1:1. This makes the antenna design suitable for broadcasters who wish to use a single antenna to transmit multiple television signals.
In the Band III antenna, the transmitter power is first taken to a power divider through a low loss feeder cable. Branch feeder cables from the divider feed the antenna panels. The BEL 'V' antenna consists of four quadrant dipoles arranged vertically in two stacks. A stack contains two quadrant dipoles spaced at half wavelength on a common balanced feeder line made of aluminium tubes. The tubes are extended a quarter wavelength beyond each stack and short circuited, resulting in a quarter wave stub appearing as a high impedance at the feed points. Both stacks are fed with currents of equal amplitude and phase by connecting the branch feeder cable at the centre of the stack, as shown in the schematic fig. 10. The resultant radiation pattern in the horizontal plane is almost omnidirectional, as claimed by the manufacturer. The feed arrangement, including the branch feeder cables, is entirely concealed to prevent entry of moisture.
Slot antenna elements, cut in metal sheets, are used as radiators of electromagnetic waves at UHF frequencies. A half wavelength slot cut in a flat metal sheet and fed at the centre is shown in Fig. 11. The long sides of the slot carry currents of opposite phase and their fields cancel out. The short ends carry currents in phase and radiate efficiently, because the currents are not confined to the edges of the slot but spread out over the sheet. Power is radiated equally from both sides of the sheet if the slot is horizontal, as depicted in Fig. 11.
[Fig. 11: Centre-fed half-wavelength slot in a metal sheet, λ/4 on either side of the feed]
The radiation is normal to the sheet and vertically polarized. The slot antenna can easily be excited with a coaxial transmission line by connecting the outer conductor to the sheet and the inner conductor to the centre of the slot. The feed point impedance of such an antenna element is of the order of 50 ohms.
In the UHF LPT antenna, slot windows are cut in a cylindrical, heavy gauge aluminium pipe and covered with durable laminated plastic. The total length of the aluminium pipe is 24 ft to 30 ft, and it is mounted on a mast of 30 metre height as shown in fig. 12.
A symmetrical parallel feed system, completely housed within the centre of the antenna, is employed for feeding the slots. The radiation pattern in the horizontal plane is offset omnidirectional, as shown in fig. 13; maximum radiation occurs in the direction facing the slot area.
Fig. 12: UHF LPT slotted antenna Fig. 13: Horizontal radiation pattern, slot antenna
FEEDER CABLE
Feeder cables are used for transferring RF energy from the output of the transmitter to the antenna for radiation into free space. Feeder cables of size 1⅝” for 1 kW, 4” for 10 kW and 5” for 20 kW transmitters are used. The larger the size of the feeder cable, the greater its power handling capacity and the lower its attenuation.
Air dielectric feeder cables are used for high power transmitters. An air-dielectric coaxial cable has an inner conductor supported by a dielectric spacer, with the remaining volume filled with air. Air dielectric coaxial cable offers lower attenuation and a higher average power rating than foam filled cable, but requires pressurization. An air dielectric feeder cable is shown in figure 14.
A coaxial cable carries current in both the inner and outer conductors. These currents are equal and opposite, and as a result all the fields are confined within the cable; it neither radiates nor picks up signals. This means the coaxial cable operates by propagating an electromagnetic wave through the dielectric region inside the cable, between the centre conductor and the outer conductor. The electromagnetic fields are entirely contained inside the cable, in contrast to signals sent down a pair of wires, and the outer conductor protects the signal from external interference.
Feeder cables have a connector with a gas-through valve at the transmitter end. Compressed dry air is pumped into the feeder cable by a dehydrator through this valve to prevent ingress of moisture into the cable. Moisture degrades the performance of the feeder cable through corrosion, voltage arcing and an increase in VSWR.
Characteristic impedance is one of the most important electrical characteristics of feeder cables. Any deformation of the feeder cable, or ingress of moisture, causes the characteristic impedance to change from the specified value of 50 ohms. This results in reflection of RF energy and a reduction in the energy carried to the antenna.
If two bays of antenna are used, the output of the transmitter is split into two parts using a quarter wave T-transformer, and two separate feeder cables are used for feeding the upper and lower bays of the antenna. In such an arrangement it must be ensured that the electrical lengths of the two feeder cables are correct; otherwise, cancellation of radiation from the two bays may take place at some locations within the coverage area, depending upon the difference in their electrical lengths.
Semi-flexible main cables feeding RF energy to junction boxes and branch feeder cables are required to handle large power. For stability, and to prevent change of characteristic impedance due to moisture absorption, they are pressurized at suitable pressure with dry air from the dehydrator installed in the transmitter building. The operation of the dehydrator is automatic. Connectors are tightened properly and then sealed with a sealing agent to avoid poor contact and prevent seepage of moisture. Poor contact and moisture would cause reflections, resulting in ghosting and high VSWR; they may even lead to RF sparking, damaging the cable and connectors.
ACTIVITY
Visit the RF lab and study the structure and feed system of all the TV antennae available in the lab.
RECAP
FURTHER READINGS
******
20
VIDEO MEASUREMENTS
INTRODUCTION
A Colour Composite Video Signal (CCVS), while passing through a chain of equipment, may get amplitude or phase distorted. This may happen because of improper functioning or tuning of equipment in the chain. Engineers who operate and maintain the equipment are expected to monitor and make regular checks and corrections at periodic intervals so that the received signal is distortion free. For evaluation of the video signal we normally use the following two methods:
1. Objective evaluation
2. Subjective evaluation
Each evaluation technique has its own merits and demerits, and combined together they form an effective tool to judge the quality of the picture.
Objective evaluation is based on the waveform comparison method and the frequency sweep method. In the waveform comparison method, a standard waveform is sent through the system and the received waveform is compared for any deviation from the original. The sweep frequency method is used for evaluating the overall frequency response of the transmitter.
OBJECTIVE
TYPES OF DISTORTION
Nonlinear distortion arises due to a nonlinear relationship between input and output. Such distortions are level dependent.
Linear distortion may arise even if the system is linear. It is due to the dependence of the transfer function of the system (gain and phase) on the frequency of the input signal. Such distortions are called linear distortions.
i) Termination
The termination used should be of high quality for accurate measurement of small distortions. Incorrect termination impedance may cause amplitude as well as frequency response problems. Terminations with 0.5% or better tolerance should be used in measurements.
ii) Warm up
CRT based equipment has a specified warm up period. Turn the equipment on and allow it to warm up for the time specified in the manual before checking calibration and making measurements.
iii) APL of the preceding line
It has been observed that the APL of the line preceding the VITS line (line 16 and line 329) can affect the measurements made on the test signal and can give a wrong picture of the quality of the video. To minimize errors, it is recommended that the content of lines 16 and 329 be maintained at a value of 50%. This can be achieved either by filling lines 16 and 329 with data pulses having a mean value of 50% or by inserting a 50% line bar during both of these lines, not only for in-service measurements but also for out-of-service testing.
iv) Calibration
Equipment which needs to be calibrated externally must be calibrated, following the procedure specified in the manual, before measurements. Processor based equipment like the VM 700 calibrates automatically when turned on and continues to do so periodically during operation.
VERTICAL INTERVAL TEST (VIT) SIGNALS
VIT signals are a group of test signals inserted during the blanking period of the composite video
signal. These signals are used to evaluate the transmission characteristics of a transmission
system between the test generator and the output of the demodulator. For the PAL 625-line system
these signals are inserted during lines 17, 18, 330 and 331. CCIR test signals are shown below in
figs. 1, 2, 3 and 4.
Video Measurements
In addition to these, lines 22 and 335 are used for S/N ratio measurement.
BAR
For a long time, the square wave has been used to find the response of equipment, as it is very
easy to generate and convenient to use; television is no exception. The rising edge of the
pulse represents the high-frequency components while the baseline represents the low frequencies.
The figures below give a fairly good account of the effect of these deteriorations on the
shape of the bar.
[Figures: bar shape (black to white) under HF loss, HF boost, LF loss and LF boost]
The duration of the bar signal is 25 µs (one half of the active line period) in full-field signals
and 10 µs in the Inserted Test Signal (ITS).
This signal indicates the irregularities occurring from about 15 kHz to 1 MHz in the video signal.
Parameters measured using the luminance bar are:
PAL composite video signals are nominally 1 volt peak-to-peak. The bar amplitude
measurement technique is used to verify that the signal conforms to this nominal value and to
make appropriate adjustments. The amplitude measurement is made at the mid-point of the bar.
The Bar K rating is quantified as the maximum absolute departure of the bar-top level from the
level at the bar centre, expressed as a percentage, ignoring the first and the last
microsecond.
How to measure?
1. Leave 1 microsecond from each edge on the top of the bar so that very high-frequency
errors do not reduce the accuracy of the measurement.
2. Measure the level at the centre point of the top of the bar and take it as "A".
3. Moving from the top centre towards the left or right edge (leaving one microsecond as
indicated in step 1), measure the level at the point of maximum departure and mark it as "B".
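The computation in steps 1-3 can be sketched in Python; the voltage values below are hypothetical examples, not figures from the text:

```python
def bar_k_rating(a_centre, b_extreme):
    """Bar K rating: maximum absolute departure of the bar-top level from
    the level at the bar centre, expressed as a percentage of the centre
    level (the operator ignores the first and last microsecond when
    choosing the worst point b_extreme)."""
    return abs(a_centre - b_extreme) / a_centre * 100.0

# hypothetical example: bar centre "A" = 700 mV, worst bar-top point "B" = 686 mV
print(round(bar_k_rating(0.700, 0.686), 1))  # -> 2.0
```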
Base-line distortion is defined as the difference between the level of the signal at point b7 in
Fig. 1, located 400 ns after the mid-point of the trailing edge of bar element B2, and the
level at a reference point b1 located before the beginning of the staircase. The base-line
distortion is expressed as a percentage of the luminance bar amplitude. The sign of the
difference is positive if the signal level at point b7 is higher than the level at reference
point b1.
The sine-squared pulse is obtained by squaring a sine wave. The 2T pulse has a half-amplitude
duration of 200 ns and an amplitude of 700 mV, and the base of the pulse is perfectly
horizontal. The 2T pulse is mainly used to find the response to high-frequency video signals.
A summary of parameter distortions measured with 2T pulses is given in the following table:
2T Pulse measurements
a) Pulse/bar ratio error: The 2T sine-squared pulse/bar measurement is defined as the
difference between the amplitudes of the 2T pulse and the luminance bar. The sign of
the error is positive if the 2T pulse amplitude is greater and negative if the 2T pulse
amplitude is less than the bar.
How to measure?
1. Measure the amplitude of the bar at the top centre and let it be "B" volts.
2. Measure the height of the 2T pulse and let it be "P" volts.
3. Then, % Pulse/Bar ratio = (P / B) × 100
4. % Pulse/Bar K rating: Kpb = (1/4) × |P - B| / P × 100
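As a hedged sketch of these two formulas (using the reconstruction Kpb = (1/4) × |P - B| / P × 100; the voltages below are hypothetical examples):

```python
def pulse_bar_ratio(p, b):
    """% pulse/bar ratio: 2T pulse amplitude P relative to bar amplitude B."""
    return p / b * 100.0

def kpb(p, b):
    """Pulse/bar K rating: Kpb = (1/4) * |P - B| / P * 100 (per cent)."""
    return abs(p - b) / (4.0 * p) * 100.0

# hypothetical example: bar "B" = 700 mV, 2T pulse "P" = 630 mV
print(round(pulse_bar_ratio(0.630, 0.700), 1))  # -> 90.0
print(round(kpb(0.630, 0.700), 2))              # -> 2.78
```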
Fig. 8(a): Kpb 4% PAL Graticule; Fig. 8(b): Kpb or Pulse/Bar Error Ratio
b) 2T K-factor: As indicated earlier, the base of the 2T pulse should be horizontal. Any
deviation from this indicates group delay and reflections in the picture at that
frequency.
How to measure?
The pulse and bar were more suitable for black-and-white transmissions, as the effective
indication of the 2T response extends only to about 4 MHz (even though it reaches 5 MHz). The
advent of colour transmission required one more test waveform covering the high
frequencies, especially the colour sub-carrier frequency. The 20T pulse fulfils this
requirement completely.
The 20T pulse is not a simple pulse but is modulated with the colour sub-carrier frequency
(4.43 MHz), thus indicating the response from 3.93 to 4.93 MHz.
When high-frequency boost is present, the 20T pulse height will be greater than the top centre of
the bar and the baseline will curve outwards. When high-frequency loss is present, the reverse
takes place: the height of the pulse will be less than the top centre of the bar and the
baseline will curve inwards.
The measurements done on this waveform are gain inequality and delay inequality. The gain
inequality as its name indicates is the difference between the amplification factors of
chrominance and luminance channels.
A brief summary of distortions measured with the 20T pulse is given in the following table:
How to measure?
1. Measure the height of the 20T pulse from the baseline and let it be "A" volts.
2. Measure the height of the baseline curve from the baseline and let it be "B" volts.
3. The gain inequality = (B × 100)/A (in %).
This is noticeable only when amplitude distortion is present. When phase distortion is also
present the situation becomes more complex. The following steps will explain the procedure.
(a): 20T pulse with both gain and delay inequalities; (b): 20T pulse without gain and delay inequalities
1. Measure the height of the pulse from the baseline and let it be "A" volts.
2. Measure the inward and outward baseline curves and let them be "P" and "Q" volts.
3. Then the gain inequality = 2(P - Q) × 100/A.
4. The delay inequality = 12.8(P + Q) × 100/A in nanoseconds.
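Steps 3 and 4 above can be sketched as follows; the P, Q and A values are hypothetical examples, not measurements from the text:

```python
def gain_inequality(p, q, a):
    """Gain inequality (%) from the 20T baseline bulges P (one side) and Q
    (other side) and pulse height A: 2*(P - Q)*100/A, per the text."""
    return 2.0 * (p - q) * 100.0 / a

def delay_inequality_ns(p, q, a):
    """Delay inequality (ns): 12.8*(P + Q)*100/A, per the text."""
    return 12.8 * (p + q) * 100.0 / a

# hypothetical example: A = 700 mV, P = 14 mV, Q = 7 mV
print(round(gain_inequality(0.014, 0.007, 0.700), 1))      # -> 2.0
print(round(delay_inequality_ns(0.014, 0.007, 0.700), 1))  # -> 38.4
```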
5. The chart for the gain and delay inequalities is also given.
MULTIBURST SIGNAL
The amplitude of 1 MHz frequency burst packet is adjusted to 0.7 V (by keeping the vertical gain
control of WFM/Oscilloscope in uncalibrated position and adjusting the vertical gain and position
controls) and the average peak to peak amplitudes of the remaining burst packets are measured
and recorded. The maximum and minimum amplitude deviations are then expressed as a dB
ratio (or %) of the reference 1MHz burst amplitude.
Note: Since the channel bandwidth is only 5 MHz, the last burst (5.8 MHz) is not taken into
account in measurement.
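The deviation of each burst packet from the 1 MHz reference can be expressed in dB as sketched below; the packet amplitudes are hypothetical measurements, not values from the text:

```python
import math

def multiburst_deviation_db(burst_pp_mv, ref_pp_mv=700.0):
    """Express each burst packet's p-p amplitude as dB relative to the
    1 MHz reference packet (adjusted to 700 mV, i.e. 0 dB)."""
    return [20.0 * math.log10(b / ref_pp_mv) for b in burst_pp_mv]

# hypothetical packets at 0.5, 1, 2, 4 and 4.8 MHz (mV p-p)
devs = multiburst_deviation_db([700, 700, 660, 620, 560])
print([round(d, 2) for d in devs])  # -> [0.0, 0.0, -0.51, -1.05, -1.94]
```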
Differential gain: This is one of the distortions that occur in chrominance. It can be defined
as the change in chrominance vector amplitude caused by an increase in luminance level.
The effect on the picture is a change in saturation, most noticeable in the yellow-to-red
region of the spectrum.
Differential phase: This is also a distortion that occurs in chrominance. It can be defined as
the change in phase of the chrominance component of the video signal caused by a change in the
amplitude of the associated luminance component. The effect on the picture is a change in
hue (colour). In the PAL system the DP effect is normally transformed into DG. The
above-mentioned distortions tend to occur because of a shift in the operating point into the
nonlinear region of the transfer curve. Most of the time the fault can be corrected to a great
extent by slightly adjusting the bias and so shifting the operating point back to the linear region.
Smear or Blurring
When high frequency loss is more pronounced, the monochrome picture develops a distortion
known as smear or blurring. The picture will appear fuzzy with loss of fine details.
Streaking
Streaking is a particularly unpleasant form of picture impairment resulting from deviations
from linearity of the amplitude and group-delay responses at frequencies comparable with the line
frequency. The name streaking has arisen because any sharp edge defining the end of a fairly
large picture area does not terminate cleanly but decays slowly, carrying a streak of that
luminance level towards the right-hand side of the picture. A practical example: while
superimposing characters you can see the extension of the characters (alphabets) on the
right-hand side without clean termination.
Ringing or overshooting
This manifests as sharp contour lines at the edges of the picture, as shown in Fig. 14. The
effect is observed when the signal passes through a band-pass filter having sharp cut-off
characteristics. Phase distortion results in overshoots. This distortion is removed by using
phase correctors.
K-factor
The human eye is not very sensitive to distortions at very low and very high frequencies. The
K-factor takes this into account: distortions occurring at very low and very high
frequencies are divided by a weighting factor that depends on the sensitivity of the eye at
that particular frequency (for example, the weighting factor for 0 to 10 kHz is 2 and the
weighting factor for 500 kHz to 5 MHz is 4).
To make an automatic measurement of differential gain with the VM700T, select DGDP in the
MEASURE mode. Both differential phase and differential gain are shown on the same display
(the upper graph is differential gain). Measurement results are also available in the AUTO
mode.
CHROMINANCE-LUMINANCE INTERMODULATION
This is one of the distortions occurring in luminance due to an increase in the level of
chrominance. It can be compared to the DG occurring in chrominance because of increased
luminance levels. Chrominance-luminance intermodulation occurs because of increased
chrominance levels. The visual effect on the picture is the same as DG, i.e. a change in
saturation. The signal used for the measurement of this distortion is the three-level
chrominance signal, or VITS line 331 can be used. A brief summary of parameter
distortions is given in the following table:
1. Sub-carrier amplitude level errors. Cause: frequency response at the colour sub-carrier.
Effect: colour saturation errors. Signal to use: Line 331 (3-level chrominance signal).
2. Intermodulation between luminance and chrominance. Cause: non-linearities, saturation
effects. Effect: change in luminance depending on colour saturation. Signal to use: -do-.
3. Chrominance non-linear gain. Cause: non-linearities, saturation effects. Effect: colour
gradation errors, i.e. unequal steps in colour saturation. Signal to use: -do-.
4. Chrominance non-linear phase. Cause: -do-. Effect: colour hue errors depending on colour
saturation. Signal to use: -do-.
LUMINANCE NON-LINEARITY
This is a non-linear relationship between the input and output signals in the luminance channel only.
The eye is very tolerant of this type of distortion, and it is not easily detected until it reaches
quite large values. Extreme values can result in crushing of the blacks and clipping of the whites.
Moreover, its presence may be accompanied by other annoying types of non-linearity.
A five- or ten-step staircase waveform (Fig. 1) can be used for the measurement of this type of
distortion. In the case of VITS, line 17 can be used for this measurement as it contains the
required waveform.
HOW TO MEASURE?
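The text does not reproduce the procedure here. A commonly used definition, assumed for this sketch rather than taken from the text, compares the largest and smallest step heights of the staircase; the step levels below are hypothetical:

```python
def luminance_nonlinearity(step_levels_mv):
    """Given the successive level of each riser of a 5- or 10-step staircase,
    compute non-linearity as (max step - min step) / max step * 100.
    (Assumed common definition, not quoted from the text.)"""
    steps = [b - a for a, b in zip(step_levels_mv, step_levels_mv[1:])]
    return (max(steps) - min(steps)) / max(steps) * 100.0

# hypothetical 5-step staircase levels in mV (ideal steps would all be 140 mV)
print(round(luminance_nonlinearity([0, 140, 278, 414, 548, 680]), 1))  # -> 5.7
```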
GROUP DELAY
This term is much talked about but little understood in television. As you know, the TV
waveform is complex and has a wide bandwidth. Consider two different frequencies F1 and F2
within the pass band of the video spectrum. If, after passing through a filter or an amplifier,
the two frequencies arrive at the same time, there is no distortion. But if F1 arrives at time
t and F2 at time t + p, there is a delay. If this delay is different for different frequencies,
phase distortion is said to exist. The slope of the phase-shift versus frequency curve at any
particular frequency is defined as the differential time delay at that frequency.
This can be measured by Group delay measuring set or envelope delay measuring set with 2
MHz frequency taken as reference.
S/N RATIO
Noise can be defined here as a generic name for all the various forms of unwanted voltages
which modify the signal during the course of transmission from one point to another.
Whenever we say S/N ratio normally we mean the continuous random noise measurement only.
The random noise takes the form of quasi infinite series of short pulses whose amplitudes and
instants of occurrences are random.
As you might be aware, higher luminance-frequency components are subjectively less visible. So
it was felt that one should simulate the response of the eye to get a true picture, by
introducing filters to remove noise voltages at frequencies beyond 8 or 10 MHz. A weighting
filter which simulates the response of the eye is therefore introduced. The result is known as
the weighted S/N ratio, which is normally higher than the unweighted S/N ratio.
S/N (unweighted) = 20 log10(0.7 / y) + 16 dB
Note: The S/N ratio is expressed as the ratio of the p-p value of the signal to the r.m.s. value
of the noise, whereas the noise 'y' measured above is a p-p value. Hence, to express it in the
standard form of p-p signal to r.m.s. noise, 15.5 dB (rounded to 16 dB) is added. This then
gives the required value of S/N.
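A minimal sketch of this formula, with y taken as the measured peak-to-peak noise in volts (the example value is hypothetical):

```python
import math

def sn_unweighted_db(noise_pp_volts):
    """Unweighted S/N = 20*log10(0.7 / y) + 16 dB, where y is the measured
    p-p noise and 16 dB (15.5 rounded) converts p-p noise to r.m.s."""
    return 20.0 * math.log10(0.7 / noise_pp_volts) + 16.0

# hypothetical measured noise of 7 mV p-p
print(round(sn_unweighted_db(0.007), 1))  # -> 56.0
```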
ACTIVITIES
RECAP
Measurements reveal the performance of any equipment. The test signals used for video
measurements, such as line 17, line 18, line 330 and line 331, have been thoroughly
described. Procedures for measuring the different parameters have also been explained. Both
linear and non-linear distortions have been taken up in detail.
FURTHER READINGS
1. Weaver, L.E. (1978), Video Measurement and the Correction of Video Circuits, EBU, Belgium.
******
21
DIGITAL EARTH STATION & DSNG
INTRODUCTION
Analog video broadcasting occupies a full 36 MHz transponder bandwidth for one video
service. Doordarshan adopted Digital Video Broadcasting - Satellite (DVB-S) in simulcast mode
in the late nineties. In simulcast mode, analog and digital services are transmitted
simultaneously in the same transponder bandwidth: 27 MHz of bandwidth was assigned to the
analog service and 9 MHz to the digital service. To broadcast live news or cover events in
remote areas, Digital Satellite News Gathering (DSNG) is used in the Doordarshan network. DSNG
can uplink programmes in the C/Ku band at short notice with a small dish. DSNG is discussed in
detail in the later part of this chapter.
OBJECTIVES
After going through this chapter, you will be able to:
There are two types of FEC used in the DVB-T and DVB-S systems. Each system uses a
combination of block codes (Reed-Solomon (RS) codes) and convolutional codes (Viterbi
Codes). Since these codes are arranged in a cascaded or series configuration, they are said to
be concatenated. Various broadcasting standards specify different digital modulation and FEC
techniques.
DVB-S gives a choice of selectable FEC rates: 1/2, 2/3, 3/4, 5/6, 7/8. Table 1 shows the
information data and parity data for various FEC rates. A code rate of 1/2 means that there are
two encoded output bits for every information bit; stated differently, the data stream contains
50% redundancy. If the information data at the input of the encoder is 10 Mbps then, for FEC 1/2,
the parity data would be another 10 Mbps, so the total data rate (output of the encoder) would
be 20 Mbps. By puncturing, the code rate can be increased (2/3, 3/4, 5/6, 6/7, 7/8), which of
course reduces error-correction capability, making transmission more error-prone. A higher FEC
rate increases the information bits and reduces the parity bits on one hand, and decreases the
error-correcting capability on the other. The reduced error-correcting capability demands more
transmitting power, i.e. more Eb/No, to maintain the desired BER. So the choice of FEC rate
depends on the information data rate and the available transmitting power.
Table 1
Code Rate                1/2    2/3     3/4    5/6     6/7      7/8
Information Data (Mbps)  10     13.33   15.0   16.66   17.142   17.5
Parity Data (Mbps)       10     6.66    5.0    3.33    2.857    2.5
Total Data (Mbps)        20     20      20     20      20       20
Required Eb/No (dB)      4.0    4.5     5.0    5.5     6.0      6.4
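The information/parity rows of Table 1 follow from the fixed 20 Mbps coded rate; a small sketch reproducing the split (values agree with the table within rounding):

```python
def fec_split(code_rate, total_mbps=20.0):
    """For a fixed channel (coded) rate, split the stream into information
    and parity data for a given code rate k/n."""
    info = total_mbps * code_rate
    return round(info, 3), round(total_mbps - info, 3)

for k, n in [(1, 2), (2, 3), (3, 4), (5, 6), (6, 7), (7, 8)]:
    print(f"{k}/{n}:", fec_split(k / n))  # e.g. "1/2: (10.0, 10.0)"
```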
Example: For a channel of 9 MHz bandwidth (useful bandwidth 8 MHz and a 1 MHz guard band
assigned to each digital earth station), with roll-off factor α = 0.28 and m = 2 for QPSK, the
symbol rate and bit rate come out to be 6.25 MSps and 12.50 Mbps respectively, taking only the
useful bandwidth of 8 MHz. The bit rate at the output of the multiplexer is applied at the input
of the FEC encoder; FEC encoding is generally part of the digital modulator. The information bit
rate, or useful bit rate, Rb can be calculated as
Rb = Rc × rRS × rconv
where rRS = 188/204 is the code rate of the Reed-Solomon code specified in DVB-S, and rconv is
the code rate of the convolutional code, generally chosen to be 3/4.
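The whole calculation of this example can be sketched as follows (symbol and parameter names are illustrative):

```python
def useful_bit_rate(bw_mhz, rolloff, bits_per_symbol,
                    r_rs=188 / 204, r_conv=3 / 4):
    """Useful (information) bit rate for a DVB-S carrier:
    symbol rate Rs = BW / (1 + rolloff); channel rate Rc = m * Rs;
    information rate Rb = Rc * r_RS * r_conv."""
    rs = bw_mhz / (1.0 + rolloff)   # symbol rate, MSps
    rc = bits_per_symbol * rs       # channel bit rate, Mbps
    return rc * r_rs * r_conv       # useful bit rate, Mbps

# 8 MHz useful bandwidth, alpha = 0.28, QPSK (m = 2), conv. rate 3/4
print(round(useful_bit_rate(8.0, 0.28, 2), 4))  # -> 8.6397
```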
The information rate comes out to be 8.639705 Mbps. From this information rate it is obvious
that two video programs of MPEG-2 quality (4.0-5.0 Mbps) can be multiplexed into a single
Transport Stream (TS). The encoders are used in variable bit rate (VBR) mode, where minimum and
maximum bit rates are set in the encoders. For example, the bit-rate range can be set from
3.5 Mbps to 8.0 Mbps in encoder 1 (4:2:0 mode) for the regional service, and from 0.5 Mbps to
5.0 Mbps in encoder 2 (4:2:2 mode) for the news service. The multiplexed output bit rate as
calculated above remains constant. The multiplexing used here is basically statistical
multiplexing, in which the multiplexer continuously monitors the bit-rate demand of each
program. A program which needs more bits is allotted the required bits, sparing them from the
less demanding programs. Statistical multiplexing is a way of efficiently utilizing the
available bandwidth, and it is found very efficient where a large number of programs are to be
multiplexed, as in DTH (about 10 programs per transponder bandwidth). In the Doordarshan
setup, the 36 MHz transponder bandwidth is shared by different earth stations. The assignment of
bandwidth and frequency is done by the Doordarshan Directorate in the following manner:
6 MHz for Single Channel Per Carrier (SCPC) - This is used by DSNG and few of the earth
stations which have provision for only one channel for up-linking.
9 MHz for Multiple Channel Per Carrier (MCPC) - This is used by all the Regional Language
Satellite Stations (RLSS) for up-linking two channels. Here the main program is uplinked in
4:2:2 mode whereas the news feed is uplinked in 4:2:0 mode.
18 MHz for Multiple Channel Per Carrier (MCPC) - These earth stations carry 4 or more
channels.
36 MHz for Multiple Channel Per Carrier (MCPC) - These earth stations carry 10 to 12
channels. The earth stations at DDK Delhi, DTH Todapur and HPT Pitampura are a few which
uplink multiple channels.
[Fig. 1: Up-link chain: SDI video and AES audio are processed and fed to the encoder; the ASI
streams from this and other channels are multiplexed into a TS; the TS feeds the QPSK modulator
(70 MHz IF), followed by L-band and C-band up-converters; a power splitter and waveguide switch
select TWTA 1 or TWTA 2 (with dummy load) ahead of the antenna.]
SDI video along with AES/EBU audio is applied to the MPEG-2 encoder. The output, in
Asynchronous Serial Interface (ASI) format, from each encoder is fed to the MUX, which
multiplexes the two ASI signals from the encoders.
The output in Transport Stream (TS) format from the MUX is fed to the QPSK modulator, which
outputs a 70 MHz IF. The 70 MHz IF is up-converted first to an L-band IF and finally to C-band.
The C-band signal is fed to a TWTA (or SSPA) and finally to the up-linking parabolic dish
antenna (PDA) through a waveguide. 1+1 redundancy is provided in the RF chain, where one
complete chain (modulator, up-converter, TWTA) is in hot standby.
DOWN-LINK CHAIN
Fig.2 depicts the down-link receiving chain, where the down-link signal is received by Low Noise
Amplifier (LNA) through Trans-Reject Filter (TRF). The RF signal is divided using 4-way divider
for monitoring and measurements purposes.
[Fig. 2: Down-link receiving chain: TRF, LNA (H/V), 4-way power divider, outdoor C-band to
L-band down-converter, power divider, IRD; outputs for monitoring on a power meter and
spectrum analyser; ACU for antenna control.]
Fig. 3(a): Parabolic reflector (vertex, focus, diameter, focal length, subtended angle);
Fig. 3(b): Incident rays converging at the focal point
The parabolic surface is made either of metal or of wire mesh supported by a solid fibre
surface. When a parallel EM wave falls on the metallic surface, it is reflected, and all the
reflected waves are directed towards the focal point due to the parabolic shape of the antenna,
as shown in Fig. 3b. The energy of the EM wave is increased (or amplified) by an amount equal
to the gain of the antenna. If an EM radiator (horn antenna) is placed at the focal point
(centre-fed antenna), EM waves emanating from the feed horn fall on the reflector and are then
reflected back into space, as shown in Fig. 4a.
Fig. 4(a): Centre-fed antenna; Fig. 4(b): Basic geometry of a Cassegrain antenna (paraboloidal
main reflector, hyperboloidal sub-reflector, feed)
If the feed is placed at the focal point of a secondary reflector (a convex sub-reflector) as
shown in Fig. 4b, the antenna is known as a Cassegrain antenna. If the sub-reflector is
concave, the antenna is known as a Gregorian type.
[Fig. 5: Cassegrain antenna showing feed, sub-reflector and main reflector]
The feed is basically a horn antenna which can transmit and receive electromagnetic waves
simultaneously. Fig. 6 shows various views of an uplink feed. The back view shows four
waveguide ports: two transmit ports (vertical and horizontal polarization) and two receive
ports (vertical and horizontal polarization).
Fig. 6(a): Side view; Fig. 6(b): Back view; Fig. 6(c): Front view of feed (waveguide feed
arrangement and conical horn)
If one transmit port is used, the second port should be terminated. The LNA is connected to the
receive ports; if the LNA is connected to one receive port, the second receive port is
terminated. If the RF signal is uplinked in vertical polarization, the downlink will be in
horizontal polarization; this is the prevalent practice in satellite broadcasting. The feed can
transmit and receive signals simultaneously. Transmit and receive signals are isolated by the
trans-reject filter (TRF), which is a part of the feed.
[Fig. 7: LNBC block diagram: low-noise amplifier, band-pass filter, mixer with local
oscillator, band-pass filter, L-band amplifier; output to cable, 950-2150 MHz.]
In satellite broadcasting, received signals are extremely weak due to the high path loss, which
is of the order of 200 dB for geostationary satellites. Ordinary amplifiers generally have a
higher noise figure (or noise temperature) and cannot be used to receive such weak signals. The simple
reason is that an amplifier with a high noise figure generates noise itself; if the noise
generated is higher than the received signal, the signal will be buried in the noise and
cannot be recovered. Therefore an LNA with high gain (about 80 dB) and a low noise figure
(about 0.22 dB) is needed to amplify weak signals. The high gain is achieved by cascading five
to six amplifying stages, and MOSFETs are used to achieve low-noise amplification. In satellite
receiving systems, the Low Noise Block down-converter (LNBC) is a combination of low-noise
amplifier, frequency mixer, local oscillator and IF amplifier, as shown in Fig. 7. It receives
the microwave signal from the satellite collected by the dish, amplifies it, and down-converts
the block of frequencies to a lower block of intermediate frequencies (IF 950-2150 MHz). This
down-conversion allows the signal to be carried to the indoor satellite TV receiver using
relatively cheap coaxial cable; if the signal remained at its original microwave frequency it
would require an expensive and impractical waveguide line. A Low Noise Block down-converter
with integrated feed-horn (LNBF) is used to receive Ku-band DTH signals; in an LNBF the
feed-horn is integrated with the LNB.
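The cascading of stages mentioned above follows the standard Friis formula, in which the first stage dominates the overall noise figure. The per-stage gains and noise figures below are assumed for illustration, not taken from the text:

```python
import math

def cascade_noise_figure_db(stages):
    """Friis formula for cascaded amplifier stages:
    F = F1 + (F2 - 1)/G1 + (F3 - 1)/(G1*G2) + ...
    stages: list of (gain_dB, noise_figure_dB) tuples, first stage first."""
    f_total = 0.0
    g_before = 1.0  # linear gain of all preceding stages
    for i, (g_db, nf_db) in enumerate(stages):
        f = 10 ** (nf_db / 10.0)
        f_total = f if i == 0 else f_total + (f - 1.0) / g_before
        g_before *= 10 ** (g_db / 10.0)
    return 10.0 * math.log10(f_total)

# hypothetical 5-stage LNA: first stage 15 dB gain / 0.2 dB NF,
# later stages 16 dB gain / 1.0 dB NF each
nf = cascade_noise_figure_db([(15, 0.2)] + [(16, 1.0)] * 4)
print(round(nf, 2))  # overall NF stays close to the first stage's 0.2 dB
```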
Satellite news gathering (SNG) is the use of mobile satellite communications equipment for the
purpose of broadcasting live news or coverage of various events from any remote area. Mobile
units are usually vans equipped with advanced two-way audio and video uplink and downlink
system using parabolic dish antenna that can be aimed at geostationary satellites.
The earliest SNG equipment was analog. Doordarshan used Transportable Remote Area
Communication Terminal (TRACT) vans for live coverage from remote parts of India until the
late 1990s. These TRACT vans were huge and caused great difficulty in movement on (Indian)
roads.
With the migration from analog to digital, these TRACT vans were replaced by DSNG vans, which
are compact and smaller in size, as shown in Fig. 8. There are two types of DSNG systems,
fly-away type and van type, which operate in C-band and Ku-band. The DSNG system was inducted
in Doordarshan in the late nineties. Now Doordarshan operates 31 DSNG vans in various parts of
India.
DSNG vans are now widely utilized for OB coverage and live news reporting. They are used in two
ways. Firstly, a DSNG van is used with an OB van for coverage of big events using a
multi-camera set-up. Secondly, it is used for news reporting with a single camera.
EQUIPMENT CHAIN
As shown in Fig. 9, analog audio/video (CCVS) or digital audio (AES/EBU)/video (SDI) signals
from the camera are fed to the encoder, which compresses audio and video separately and
multiplexes them into a single stream in Asynchronous Serial Interface (ASI) format. The ASI
output from the encoder goes to the QPSK modulator, which performs Forward Error Correction
(FEC) coding and QPSK modulation at a 70 MHz IF. The 70 MHz IF signal is fed to the
up-converter, which converts it first to L-band and then to the C/Ku-band uplink frequency. The
output of the up-converter is fed to the HPA for the desired power amplification, and the
output of the HPA is finally fed to a highly directional parabolic dish antenna, which radiates
the RF power towards the satellite. The satellite receives the up-link RF signal, converts it
to the down-link frequency, amplifies it and transmits the down-converted RF signal back to the
earth. The down-link signal can be received anywhere within the coverage area of the satellite.
[Fig. 9: DSNG equipment chain: digital audio/video feeds main and redundant encoders; each
encoder feeds a DVB-S/DVB-S2 modulator, up-converter and HPA, with an RF switch selecting the
main or redundant chain; the down-link is monitored via an IRD and a spectrum analyzer.]
The following measurements are usually required in any earth station set-up:
The quality of any communication link is assessed by its bit error ratio (BER). If a
communication channel has a BER of 10⁻⁶, there will be one bit error in every 10⁶ bits
transmitted. The BER is defined as the number of bits received in error divided by the number
of bits transmitted, which equals the error count in a measurement period divided by the
product of the bit rate and the measurement period.
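This definition can be sketched directly; the error count and bit rate below are hypothetical examples:

```python
def ber(error_count, bit_rate_bps, period_s):
    """BER = errors / (bit rate * measurement period)."""
    return error_count / (bit_rate_bps * period_s)

# hypothetical example: 9 errors in 10 s on an 8.64 Mbps stream
print(f"{ber(9, 8.64e6, 10):.2e}")  # -> 1.04e-07
```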
In general, BER measurements tend to be pass/fail in nature, and convey very little information
about a failure. Moreover, some additional tests are usually required on components/channel to
ensure that they will meet the desired BER when they are installed in the system. For these
reasons, it is desirable to perform a number of parametric measurements on the transmitted
waveforms in time domain. Typically, an oscilloscope or an eye pattern/diagram analyzer is
added to the BER measurements.
An eye pattern is obtained when a high-speed circuit/system outputs a long pseudo-random bit
sequence (PRBS). A sampling oscilloscope is used to observe the output: the received sequence
is applied to the vertical deflection plates while the clock used for triggering drives the
horizontal plates of the CRO. The scope is triggered on every fourth or eighth clock cycle and
every sample point is plotted on the screen; since the signal is a PRBS, the picture one
obtains is a superposition of the 1 and 0 outputs. The figure so obtained looks like an open
eye, hence the name eye diagram.
Fig. 10 shows the composite eye diagram resulting from noise, bandwidth limitation, cable loss
and inter-symbol interference in the system/channel. Jitter is caused by variations in the
timing of the data transitions relative to the clock edge. Ideally, the high-to-low and
low-to-high transitions should cross midway between the high and low levels, forming a
perfect-looking X. In practice, some transitions are delayed as a result of following a long
sequence of bit intervals with no change. In other cases, bandwidth limitations of the
system/device-under-test (DUT) may attenuate the pulses and reduce the eye opening. The eye
diagram can provide rise/fall time, jitter, amplitude, noise level and eye-opening-ratio
measurements, which can be carried out using a digital waveform monitor.
For an SDI signal the eye amplitude is 800 mV. This amplitude should remain within a +/-10%
limit (720 mV to 880 mV) for error-free transmission from studio to earth station. Jitter
should remain within 20% of a Unit Interval (UI) p-p, and overshoot on the rising/falling edges
should be less than 10%.
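A small sketch that checks a measurement against the limits quoted above (the function name and example readings are illustrative):

```python
def sdi_eye_ok(amplitude_mv, jitter_ui_pp, overshoot_pct):
    """Check the SDI eye-pattern limits quoted in the text:
    amplitude 800 mV +/-10% (720-880 mV), jitter < 0.2 UI p-p,
    overshoot < 10%."""
    return (720.0 <= amplitude_mv <= 880.0
            and jitter_ui_pp < 0.2
            and overshoot_pct < 10.0)

print(sdi_eye_ok(800, 0.1, 5))  # -> True
print(sdi_eye_ok(700, 0.1, 5))  # -> False (amplitude out of limits)
```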
The measurement of the C/No ratio is very important, as it gives the quality of the up-link and
down-link chains. It is made by up-linking a pure (un-modulated) carrier and observing it with
a spectrum analyzer. One can also measure the C/N ratio for a modulated carrier, but better
accuracy is obtained with the C/No measurement. For this measurement a pure RF carrier is
up-linked to the transponder of interest (disconnect the transport stream and any other
energy-dispersal signals). The transponder downlink frequency is entered in the spectrum
analyzer in case an LNA is used. The spectrum analyzer is set to 10 kHz RBW and 0 dB input
attenuation to get a smooth spectrum. The main marker is placed on the carrier peak and the
delta marker on the noise floor. The level difference between the two, displayed at the top of
the screen, is the measured C/No, as shown in Fig. 11.
Once C/No is measured, C/N can be computed from the following relation
C/N = C/No - 10 log(BW) dB
C/N can also be measured using the spectrum analyzer. Modulated carrier is up-linked.
Spectrum of down link signal is seen on the screen of the spectrum analyzer as shown in the
Fig.12. The measurement procedure is same as for the C/No measurement.
Eb/No can also be calculated as
Eb/No = C/N + 10 log(BW / Rb) dB
With FEC: Eb/No = C/No - 10 log(Rb) + Gc dB
Gc is the coding gain achieved by adopting the forward error correction mechanism and is
controlled by the FEC rate. A 2/3 FEC rate can offer 7-8 dB of coding gain for QPSK modulation.
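The two Eb/No relations can be sketched as follows; the C/No, bit-rate and coding-gain values are assumed for illustration, not taken from the text:

```python
import math

def ebno_db(cn_db, bw_hz, rb_bps):
    """Eb/No = C/N + 10*log10(BW / Rb) dB."""
    return cn_db + 10.0 * math.log10(bw_hz / rb_bps)

def ebno_with_fec_db(cno_dbhz, rb_bps, coding_gain_db):
    """With FEC: Eb/No = C/No - 10*log10(Rb) + Gc dB."""
    return cno_dbhz - 10.0 * math.log10(rb_bps) + coding_gain_db

# hypothetical example: C/No = 78 dBHz, Rb = 8.64 Mbps, Gc = 5.2 dB
print(round(ebno_with_fec_db(78.0, 8.64e6, 5.2), 2))  # -> 13.83
```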
The BER is a very important parameter. For good-quality reception, DVB specifies a BER of
10⁻⁶.
[Fig. 13: Tx cross-polarisation test configuration: SHF CW source and HPA at the antenna under
test; LNA and receive subsystem at NOCC.]
This test procedure requires the Network Operation and Control Center (NOCC) to receive an
unmodulated carrier transmitted from the Antenna Under Test (AUT) and to measure the
transmit cross-polarisation isolation. Figure 13 illustrates the Tx cross-polarisation test
configuration.
Cross-polarization measurement is carried out at NOCC. NOCC assigns a carrier frequency to an
uplink station for the cross-polarization measurement. The earth station then uplinks on the
assigned frequency and at the same time remains online with NOCC. NOCC receives the uplink
frequency and connects it to a spectrum analyzer. The feed of the AUT is then rotated as per
online instructions from NOCC. NOCC measures the signal levels of both the co-pol and
cross-pol signals; the level difference in dB gives the cross-pol isolation, as shown in Fig. 14.
ACTIVITIES
1. Calculate the useful bit rate for a given bandwidth and symbol rate. Set the bit rate in the encoder for an SCPC system. If an MCPC system is used, find out the bit rate at the output of the multiplexer and set the bit rate in the MUX. Uplink the video signal, receive it at the earth station and observe BER & Eb/No in the IRD.
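As a rough aid for step 1, the useful bit rate of a DVB-S carrier can be sketched as below. QPSK (2 bits per symbol) and the standard 188/204 Reed-Solomon overhead are assumed; the 27.5 Msps and 3/4 FEC values are example settings only.

```python
def useful_bit_rate(symbol_rate_baud, fec_rate, bits_per_symbol=2,
                    rs_rate=188 / 204):
    """DVB-S useful bit rate: Rb = Rs * m * FEC * RS(188/204).
    bits_per_symbol = 2 for QPSK."""
    return symbol_rate_baud * bits_per_symbol * fec_rate * rs_rate

# Example: 27.5 Msps symbol rate with 3/4 FEC
print(round(useful_bit_rate(27.5e6, 3 / 4) / 1e6, 2))  # -> 38.01 Mbps
```

The result is the payload rate to be set in the encoder (SCPC) or at the multiplexer output (MCPC).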
RECAP
Earth station up-linking and down-linking is digital and adopts the DVB-S standard. The advantages of DVB-S are explained. FEC is a very important technique, which is explained in brief. An example of bit rate calculation is provided so that settings can be done in the encoder, MUX & modulator. A complete earth station uplink chain, including the PDA and its feed used in the DD network, is explained in detail. The receiving chain is also depicted. The function and utility of the low noise amplifier & LNBC are discussed in brief.
DSNG, which is a mobile and compact earth station, uses the same technology as a permanent digital earth station. Deployment of DSNG is now very convenient, mainly for news & sports events. Special features of DSNG have also been taken up in this chapter.
FURTHER READINGS
1. Satellite Communications; Dennis Roddy, New Jersey, Prentice Hall
2. Specifications documents of DG: Doordarshan
3. Satellite Communication by Timothy Pratt and Charles W. Bostian
4. Manuals of Digital Earth Station and DSNG
******
22
DIRECT-TO-HOME SATELLITE
BROADCASTING
INTRODUCTION
Terrestrial broadcasting has the major disadvantage of being localized, requiring a large number of transmitters to cover a big country like India. It is a gigantic task and an expensive affair to run and maintain such a large number of transmitters. Satellite broadcasting, which came into existence in the mid-sixties, was conceived to provide nearly one-third global coverage simply by up-link and down-link set-ups. In the beginning of satellite broadcasting, up-linking stations (or earth stations) and satellite receiving centers could only be afforded by government organizations. The main physical constraint was the enormous size of the transmitting and receiving parabolic dish antennae (PDA).
OBJECTIVES
DTH broadcasting is basically satellite broadcasting in Ku-band (14/12 GHz). The main advantage of Ku-band satellite broadcasting is that it requires a smaller, physically manageable dish antenna compared to that of C-band satellite broadcasting. C-band broadcasting requires about a 3.6m PDA (41 dB gain at 4 GHz) while Ku-band requires a 0.6m PDA (35 dB gain at 12 GHz). This 6 dB shortfall is compensated using Forward Error Correction (FEC), which can offer 8 to 9 dB coding gain in digital broadcasting. The transmitter power requirement (about 25 to 50 Watts) is also less than that of analog C-band broadcasting. The major drawback of Ku-band transmission is that the RF signals typically suffer 8 to 9 dB rain attenuation under heavy rainfall, while rain attenuation is very low at C-band. Fading due to rain can hamper the connectivity to the satellite, and therefore a rain margin has to be kept for reliable connectivity. The rain margin is provided by operating the transmitter at higher power and by using a larger dish antenna (7.2m PDA) for up-linking.
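The gain figures quoted above can be cross-checked with the standard parabolic-antenna gain formula. An aperture efficiency of about 60% is assumed here, which is a typical value rather than one stated in the text.

```python
import math

def dish_gain_db(diameter_m, freq_ghz, efficiency=0.6):
    """Parabolic dish gain: G = 10*log10(eta * (pi*D/lambda)^2)."""
    wavelength_m = 0.3 / freq_ghz  # c ~ 3e8 m/s, so lambda = 0.3/f(GHz) metres
    return 10 * math.log10(efficiency * (math.pi * diameter_m / wavelength_m) ** 2)

print(round(dish_gain_db(3.6, 4.0), 1))   # 3.6m C-band dish at 4 GHz  -> ~41 dB
print(round(dish_gain_db(0.6, 12.0), 1))  # 0.6m Ku-band dish at 12 GHz -> ~35 dB
```

Both results land within a fraction of a dB of the 41 dB and 35 dB figures in the text, confirming the quoted 6 dB shortfall.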
Since DTH is designed to carry a large number of channels, all the incoming feeds in different bands are received as sources via satellite downlink or by dedicated OFC connectivity from the feeding station. All these sources go to the Quad encoders of the DTH installation (both MPEG-2 & MPEG-4) via an SDI router. Each Quad encoder takes 4 SDI signals and gives a multiplexed output in IP format.
IP streams from the encoders are fed to an IP switch. The output of the IP switch is then fed to an IP encapsulator-cum-stat. MUX, which first converts IP into ASI format and then statistically multiplexes it into ASI streams. Each ASI stream contains approximately 16 channels.
There are 6 ASI streams (TS-1 to TS-4 in MPEG-2, TS-5 & TS-6 in MPEG-4), which are fed to modulators in (1+1) mode via an ASI router; (1+1) means one unit is the main and the other is a reserve or standby. The output of each modulator is fed to an up-converter through an IF switch, and the up-converter outputs are finally fed to HPAs in (6+1) mode. There are 3 PDAs working in (2+1) mode: one PDA is kept spare, while two PDAs are simultaneously used for RF up-linking.
In the new DTH set-up, DD has planned to uplink 97 programmes using 6 Ku-band transponders of INSAT-4B with uplink/downlink frequencies of 14040/10990 MHz, 14120/11070 MHz, 14160/11110 MHz, 14200/11150 MHz, 14290/11490 MHz and 14370/11570 MHz.
DOWN-LINK CHAIN
The down-link or receiving chain of the DTH signal is depicted in Fig.2. There are mainly three sizes of receiving antenna: 0.6m, 0.9m and 1.2m. Any of these can easily be mounted on the rooftop of a building or house. RF signals from the satellite are picked up by a feed and down-converted to an L-band (950-1450 MHz) signal. The feed and LNBC are now combined in a single unit called an LNBF. The L-band signal is fed to the indoor unit, consisting of a set-top box and a television monitor. The set-top box or Integrated Receiver Decoder (IRD) down-converts the L-band signal into a 70 MHz IF, performs digital demodulation, de-multiplexing and decoding, and finally provides the audio/video outputs.
Fig. 1(a): Block diagram of DD - DTH setup (baseband & i-f chain). A 176 x 176 SDI router (fed from IRDs and OFC links) feeds six (4+1) encoder groups, one per transport stream TS-1 to TS-6 (MPEG-2 and MPEG-4). Each group feeds a 48-port IP switch into main and reserve IP encapsulator/stat. MUX units, then a 32 x 32 ASI router into (1+1) modulator pairs MOD 1 to MOD 6, whose main and reserve IF outputs pass through IF switches towards the RF chain at 'A'.
RF chain (continued from 'A' in Fig. 1(a)): the IF signals feed up-converters U/C 1 to U/C 6 plus a reserve U/C R; their outputs drive Ku-band HPA 1 to HPA 6 plus a reserve HPA R through a ganged switch, and the HPA outputs reach the PDAs (with one spare) via directional couplers and diplexers, with return-loss monitoring.
Fig. 2: One of the receiving chains for the source signal of DTH. An LNBF on a 0.6m rooftop-mounted PDA delivers the IF signal (950-1450 MHz) through coaxial cable to the indoor unit, where the set-top box (IRD) provides the audio and video outputs to the TV.
Table 1 and Table 2 depict the uplink and downlink parameters respectively, calculated for DTH for the Delhi region. The uplink and downlink frequencies are taken to be 13.891 GHz and 12.647 GHz respectively. The C/No for uplink and downlink is calculated using the following relations:
The subscripts ‘U’, ‘D’, ‘S’ and ‘ES’ stand for Uplink, Downlink, Satellite and Earth Station
respectively. Effective Isotropic Radiated Power, EIRP is defined as
L_other losses includes various losses such as feeder loss, branching loss, pointing error loss, rain loss, back-off loss etc. The main contribution to the losses is due to rain, and it has to be taken into account. Here the rain loss is typically taken to be 8 dB (from the ITU site); it is not uniform and varies from location to location.
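Since the link-budget relations themselves appear only in the tables, a generic sketch of the uplink C/No computation is given below. All the numbers here (EIRP, G/T, slant range, loss figures) are illustrative assumptions, not the values from Table 1; only the 13.891 GHz uplink frequency comes from the text.

```python
import math

def free_space_loss_db(freq_ghz, dist_km):
    """Free-space path loss: FSL = 92.45 + 20*log10(f_GHz) + 20*log10(d_km)."""
    return 92.45 + 20 * math.log10(freq_ghz) + 20 * math.log10(dist_km)

def c_no_dbhz(eirp_dbw, freq_ghz, dist_km, g_t_dbk, other_losses_db=0.0):
    """C/No = EIRP - FSL - L_other + G/T + 228.6 dBHz.
    228.6 = -10*log10(Boltzmann's constant)."""
    return (eirp_dbw - free_space_loss_db(freq_ghz, dist_km)
            - other_losses_db + g_t_dbk + 228.6)

# Illustrative uplink: 65 dBW earth-station EIRP, 13.891 GHz,
# GEO slant range ~38000 km, satellite G/T = +5 dB/K,
# 8 dB rain loss plus 2 dB other losses
print(round(c_no_dbhz(65.0, 13.891, 38000, 5.0, 10.0), 1))  # -> 81.7 dBHz
```

The downlink C/No follows the same form with the downlink frequency, satellite EIRP and the receive-station G/T.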
The quality of digital broadcasting is mainly specified by the bit error ratio (BER), which should be about 10^-6 for good quality video broadcasting. BER primarily depends on two parameters: the Forward Error Correction (FEC) rate and Eb/No (energy per bit / noise power density). Eb/No is directly related to (C/No)_T and is given for a particular bit rate Rb as follows:
Gc is the coding gain achieved by adopting the forward error correction technique and can be controlled by the FEC rate. A 2/3 FEC rate can offer 7-8 dB coding gain for QPSK modulation. An overall C/No of 77.15 dB results in 22 dB Eb/No for 7 dB coding gain and a symbol rate of 27.5 Msps; 22 dB Eb/No would give quasi error free transmission (BER < 10^-11).
ACTIVITIES
1. Visit DTH installation of Doordarshan and draw the line diagram of uplink and down link
chain.
2. Get all the down-link carrier signals on a spectrum analyser and measure their frequency and level.
RECAP
Direct-to-home (DTH) has proved a very useful service for receiving video signals directly from the satellite. Mainly Ku-band is used for the DTH service. It enables the viewer to receive DTH programmes using a 60 cm dish. To compensate for the RF loss at Ku-band in the rainy season, Ku-band uplink power and downlink transponder powers are kept around 10 dB higher than in a C-band uplink system. Uplink and downlink RF power is calculated by exercising the link budget. Initially Doordarshan started its free-to-air DTH service with 33 programmes on its platform. In the next step these were increased to 59 programmes. Presently Doordarshan is going to upgrade its DTH service from 59 to 97 programmes. The new setup has been described in this chapter with the help of block diagrams.
FURTHER READINGS
1. Satellite Communications; Dennis Roddy, New Jersey, Prentice Hall
2. Specifications documents of DG: Doordarshan
3. Satellite Communication by Timothy Pratt and Charles W. Bostian
4. Manuals of Digital Earth Station and DSNG
******
ACKNOWLEDGEMENTS
NABM hereby acknowledges the contribution made by the following faculty members and subject experts in providing the content for Volume III of the induction training material for Engineering Assistants. It also acknowledges the support of the staff in preparing the drawings, and in typing and editing the volume in electronic form.
Supported by