
Fundamental Concepts in Video

Chapter 4

VIDEO
- a recording, reproducing, or broadcasting of moving visual images made digitally or on videotape

Terminologies:
● Frames: a single image in a video; can be described in pixels but can also be divided into scan lines
● Fps (frames per second)

Visual Resolutions
- for computer monitors and most digital displays, visual resolutions are in pixels (e.g. 1280x720, 1920x1080, etc.)
- for analog television, they are in a parameter called television lines (e.g. 525 TV scan lines for NTSC broadcast TV)

Types of Video
a. Analog
b. Digital

ANALOG VIDEO
- up until the 2000s, most TV programs were sent and received as an analog signal
- a video signal that is recorded, transmitted, and stored as a continuously varying voltage, rather than as a stream of bits (old video tape recorders such as VHS, Betamax, and U-matic all store analog signals)

a. Interlaced Video
- divides the frame into 2 fields of lines: odd lines and even lines
- the 2 fields of lines are displayed alternately; generally not noticeable except when fast action is taking place on screen, when blurring may occur
- here, the odd-numbered lines are traced first, then the even-numbered lines

b. Progressive Video
- progressive scanning displays the whole image per frame

NOTE: in videos nowadays, the p in 480p, 1080p, etc. means progressive; the i in 480i, 1080i, etc. means interlaced

* De-interlacing
- a method used to turn interlaced scans/video images into progressive images (full frames)
- since it is sometimes necessary to change the frame rate, resize, or even produce stills from an interlaced source video, various schemes are used to de-interlace it
NOTE:
➔ Combing Effect: happens at times in de-interlacing, because the 2 fields (odd and even) displayed in a video are not the same
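Two common de-interlacing schemes are often called "weave" (recombine the two fields) and "bob" (line-double one field). A minimal NumPy sketch, with an illustrative frame rather than real video data:

```python
import numpy as np

# Illustrative 8x8 grayscale frame; an interlaced source delivers it as two fields.
frame = np.arange(64, dtype=np.uint8).reshape(8, 8)
top_field = frame[0::2]      # odd-numbered lines, traced first
bottom_field = frame[1::2]   # even-numbered lines, traced second

# "Weave": interleave the two fields back into one full (progressive) frame.
woven = np.empty_like(frame)
woven[0::2] = top_field
woven[1::2] = bottom_field

# "Bob": keep a single field and line-double it; avoids combing on motion
# at the cost of half the vertical resolution.
bobbed = np.repeat(top_field, 2, axis=0)
```

For a static scene, weaving restores the frame exactly; the combing effect appears only because the two fields of a real broadcast are captured at different instants.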

* Broadcasting Analog Video
- both analog and digital videos can be divided into lines
- analog television used CRTs, or cathode ray tubes, to project the signals onto the screen
NOTE:
➔ a cathode ray tube contains electron guns that fire beams onto a phosphor screen to create an image
➔ the electron beam moves in a "raster scan" pattern across and down the screen
➔ phosphor is any material that, when exposed to radiation, emits visible light; the radiation might be ultraviolet light or a beam of electrons

* Determining black and white
- white has a peak value of 0.714 V
- black is slightly above zero at 0.055 V
- blank is at zero volts
- the higher the voltage, the brighter the point on the screen

Progressive vs Interlaced Scanning
"Painting" the image/frame
- a picture/frame is "drawn" on a television or computer display screen by sweeping an electrical signal horizontally across the display one line at a time
- analog videos often use interlaced scan

Analog TV Broadcast Standards
a. NTSC (National Television System Committee)
- North America, Japan
- 4:3 aspect ratio
- 525 scan lines per frame at 30 (or 29.97) frames per second
- interlaced

b. PAL (Phase Alternating Line)
- developed by German scientists
- 625 scan lines at 25 fps
- widely used in Western Europe, China, India, and many other parts of the world
- because it has higher resolution than NTSC (625 versus 525 scan lines), the visual quality of its pictures is generally better

c. SECAM (Système Électronique Couleur Avec Mémoire)
- invented by the French
- the third major broadcast TV standard
- 625 scan lines per frame at 25 frames per second
- 4:3 aspect ratio and interlaced fields

Color Spaces (used by video broadcasts)
● YUV: analog; used by PAL & SECAM broadcasts
● YIQ: analog; used by NTSC broadcasts
● YCbCr: digital
- Y stands for luma (also called luminance)
- the rest stand for chroma (also called chrominance), pertaining to the hue or color values
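The luma/chroma split can be made concrete with the standard BT.601 luma weights. A minimal sketch, assuming normalized RGB values in [0, 1] (the function name is my own):

```python
def rgb_to_yuv(r, g, b):
    """Normalized RGB (0..1) to YUV using BT.601 luma weights."""
    y = 0.299 * r + 0.587 * g + 0.114 * b  # luma: weighted brightness
    u = 0.492 * (b - y)                    # chroma: blue-difference
    v = 0.877 * (r - y)                    # chroma: red-difference
    return y, u, v

# For any gray input (r == g == b), both chroma components vanish,
# which is what makes such signals downward compatible with b&w receivers.
```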
DIGITAL VIDEO
- unlike analog, digital video comprises binary data

Advantages of Digital Video
a. ready to be processed
- storing video on digital devices or in memory leaves it ready to be processed (noise removal, cut and paste, and so on) and integrated into various multimedia applications
b. direct access
- direct access makes nonlinear video editing simple
c. repeated recording
- repeated recording without degradation of image quality
d. better quality than analog
e. to be used nowadays, analog videos must be converted to digital
f. no quality lost
- no quality is lost in digital; unlike analog, digital videos do not degrade with each copy

Color Space for Digital
- in earlier Sony or Panasonic recorders, digital video was in the form of composite video
- modern digital video generally uses component video, although RGB signals are first converted into a certain type of color opponent space
- the usual color space is YCbCr

Chroma Subsampling
- reduces the color information of an image
- focuses more on the luminance data, or luma, when compressing an image
- reduces bandwidth without significantly affecting picture quality
- since humans see color with much less spatial resolution than black and white, it makes sense to decimate the chrominance signal

* Chroma Subsampling Schemes
- common schemes include 4:4:4, 4:2:2, 4:1:1, and 4:2:0

HIGH-DEFINITION TV (HDTV)
● wide-screen
● 16:9 aspect ratio
● the main thrust of high-definition TV (HDTV) is not to increase the "definition" in each unit area, but rather to increase the visual field, especially its width
● 1280 × 720 resolution and above
● advanced digital TV formats are supported by ATSC

a. Standard-definition TV (SDTV): the NTSC TV or higher
b. Enhanced-definition TV (EDTV): 480 active lines or higher
c. High-definition TV (HDTV): 720 active lines or higher

* Popular choices include
- 720P (1,280 × 720, progressive scan, 30 fps)
- 1080I (1,920 × 1,080, interlaced, 30 fps)
- 1080P (1,920 × 1,080, progressive scan, 30 or 60 fps)

History
a. Late 1970s: the first-generation HDTV was based on an analog technology developed by Sony and NHK in Japan
b. 1984: HDTV successfully broadcast the 1984 Los Angeles Olympic Games in Japan
c. Multiple sub-Nyquist sampling encoding (MUSE) was an improved NHK HDTV with hybrid analog/digital technologies that was put in use in the 1990s
d. MUSE has 1,125 scan lines, interlaced (60 fields per second), and a 16:9 aspect ratio; it uses satellites to broadcast, quite appropriate for Japan, which can be covered with one or two satellites; the direct broadcast satellite (DBS) channels used have a bandwidth of 24 MHz
e. In 1987, the FCC decided that HDTV standards must be compatible with the existing NTSC standard and must be confined to the existing very high-frequency (VHF) and ultra-high-frequency (UHF) bands; this prompted a number of proposals in North America by the end of 1988, all of them analog or mixed analog/digital
f. In 1990, the FCC announced a different initiative: its preference for full-resolution HDTV; it decided that HDTV would be simultaneously broadcast with existing NTSC TV and eventually replace it; the development of digital HDTV immediately took off in North America
g. Witnessing a boom of proposals for digital HDTV, the FCC made a key decision to go all digital in 1993
h. This eventually led to the formation of the Advanced Television Systems Committee (ATSC), which was responsible for the standard for TV broadcasting of HDTV
i. In 1995, the U.S. FCC Advisory Committee on Advanced Television Service recommended that the ATSC digital television standard be adopted

NOTE: in general, terrestrial broadcast, satellite broadcast, cable, and broadband networks are all feasible means for transmitting HDTV as well as conventional TV; since uncompressed HDTV easily demands more than 20 MHz of bandwidth, which will not fit in the current 6 or 8 MHz channels, various compression techniques are being investigated

VIDEO DISPLAY INTERFACES
I. Analog Display Interface
- analog video signals are often transmitted in one of three different interfaces: component video, composite video, and S-video
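The decimation idea behind the chroma subsampling discussed earlier in this chapter (e.g. the 4:2:0 pattern, which keeps one chroma sample per 2×2 block of pixels) can be sketched with NumPy; the sample values below are made up:

```python
import numpy as np

# A toy 4x4 chroma (Cb or Cr) plane; luma would be kept at full resolution.
chroma = np.array([[10., 12., 20., 22.],
                   [14., 16., 24., 26.],
                   [30., 32., 40., 42.],
                   [34., 36., 44., 46.]])

h, w = chroma.shape
# 4:2:0-style decimation: average each 2x2 block, quartering the chroma data.
subsampled = chroma.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
```

Here the chroma plane shrinks from 16 samples to 4 while luma is untouched, which is why the bandwidth saving barely affects perceived picture quality.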
Component
- higher-end video systems, such as those for studios, make use of three separate video signals for the red, green, and blue image planes
- this is referred to as component video; this kind of system has three wires (and connectors) connecting the camera or other devices to a TV or monitor
- color signals are not restricted to always being RGB separations; instead, we can form three signals via a luminance–chrominance transformation of the RGB signal, for example YIQ or YUV
- for any color separation scheme, component video gives the best color reproduction, since there is no "crosstalk" between the three different channels, unlike composite video or S-video; component video, however, requires more bandwidth and good synchronization of the three components

Composite
- in composite video, color (chrominance) and intensity (luminance) signals are mixed into a single carrier wave
- chrominance: a composite of two color components (I and Q, or U and V)
- downward compatible with b&w TV
- when connecting to TVs or VCRs, composite video uses only one wire (and hence one connector, such as a BNC connector at each end of a coaxial cable or an RCA plug at each end of an ordinary wire), and video color signals are mixed, not sent separately
- since color information is mixed and both color and intensity are wrapped into the same signal, some interference between the luminance and chrominance signals is inevitable

S-video
- separated video, or super video (e.g. S-VHS)
- uses two wires: luminance and a composite chrominance signal
- less crosstalk between the color information and the crucial grayscale information
- the reason for placing luminance into its own part of the signal is that black-and-white information is most important for visual perception
- humans are able to differentiate spatial resolution in the grayscale ("black-and-white") part much better than in the color part of an RGB image
- the color information transmitted can therefore be much less accurate than the intensity information; we can see only fairly large blobs of color, so it makes sense to send less color detail

NOTE:
➔ VGA video signals are based on analog component RGBHV (red, green, blue, horizontal sync, vertical sync)
➔ since the video signals are analog, they will suffer from interference, particularly when the cable is long

II. Digital Display Interface

DVI (Digital Visual Interface)
- developed by the Digital Display Working Group (DDWG) for transferring digital video signals, particularly from a computer's video card to a monitor
- carries uncompressed digital video and can be configured to support multiple modes, including DVI-D (digital only), DVI-A (analog only), or DVI-I (digital and analog)
- bulky

HDMI (High-Definition Multimedia Interface)
- a newer digital audio/video interface developed to be backward compatible with DVI
- doesn't carry analog signals and hence is not compatible with VGA
- supports digital audio in addition to digital video
- DVI is limited to the RGB color range (0–255); HDMI supports both RGB and YCbCr 4:4:4 or 4:2:2; the latter are more common in application fields other than computer graphics
- HDMI 1.0 is 165 MHz, which is sufficient to support 1080P and WUXGA (1,920 × 1,200) at 60 Hz
- HDMI 1.3 increases that to 340 MHz, which allows for higher resolutions (such as WQXGA, 2,560 × 1,600) over a single digital link
- HDMI 2.1, released in 2017, supports higher resolutions and higher refresh rates
- 2.1 also uses a new HDMI cable category called Ultra High Speed (up to 48 Gbit/s bandwidth), which is enough for 8K resolution at approximately 50 Hz; using display stream compression with a compression ratio of up to 3:1, formats up to 8K (7,680 × 4,320) at 120 Hz or 10K (10,240 × 4,320) at 100 Hz are possible

DisplayPort
- can be used to transmit audio and video simultaneously, or either of them
- the video signal path can have 6–16 bits per color channel, and the audio path can have up to 8 channels of 24-bit 192 kHz uncompressed PCM audio, or carry compressed audio
- 2019: VESA released the DisplayPort 2.0 standard, which incorporates a number of advanced features, such as beyond-8K resolutions, higher refresh rates and high dynamic range (HDR) support at higher resolutions, improved support for multiple display configurations, and improved user experience with augmented/virtual reality (AR/VR) displays
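The pixel-clock figures quoted for HDMI above can be sanity-checked by computing the active-pixel rate of a video mode. This is only a rough sketch: blanking intervals, which add real overhead on the wire, are ignored here.

```python
# Active pixels per second for a mode, in millions (blanking ignored).
def active_pixel_rate_mhz(width, height, refresh_hz):
    return width * height * refresh_hz / 1e6

wuxga = active_pixel_rate_mhz(1920, 1200, 60)  # WUXGA at 60 Hz
wqxga = active_pixel_rate_mhz(2560, 1600, 60)  # WQXGA at 60 Hz

# WUXGA needs roughly 138 Mpixels/s, comfortably under HDMI 1.0's 165 MHz;
# WQXGA needs roughly 246 Mpixels/s, under HDMI 1.3's 340 MHz.
```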

VGA (Video Graphics Array)
- a video display interface that was first introduced by IBM in 1987, along with its PS/2 personal computers
- initial VGA resolution: 640 × 480, using the 15-pin D-subminiature VGA connector
- later extensions can carry resolutions ranging from 640 × 400 pixels at 70 Hz to 1,280 × 1,024 pixels (SXGA) at 85 Hz, and up to 2,048 × 1,536 (QXGA) at 85 Hz

Other types of Video

360-degree Video
- aka omnidirectional video, spherical video, or immersive video
- can span 360° horizontally and 180° vertically
- usually captured by an omnidirectional camera or a collection of cameras with a wide field of view
- can be monoscopic or stereoscopic
- became very popular due to the development of Virtual Reality (VR)
- most 360° videos have a limited range of views vertically (i.e. they may not include the top and bottom of the sphere)
- video stitching is performed to generate the merged footage, which is like stitching for panoramic images
- since the color and contrast of the shots often vary significantly, their calibration/adjustment is a challenging task

Equirectangular Projection (ERP)
- a type of projection for mapping a portion of the surface of a sphere to a flat image
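A minimal sketch of the equirectangular mapping: longitude and latitude on the sphere map linearly onto x and y of the flat image (the function and parameter names here are my own):

```python
import math

def sphere_to_erp(lon, lat, width, height):
    """Map a direction (lon in [-pi, pi], lat in [-pi/2, pi/2]) to ERP pixels."""
    x = (lon + math.pi) / (2 * math.pi) * width   # 360 deg span -> full width
    y = (math.pi / 2 - lat) / math.pi * height    # 180 deg span -> full height
    return x, y

# The forward direction (lon = 0, lat = 0) lands at the image center, while
# each pole is stretched across an entire row: the characteristic ERP distortion.
```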

Cubemap Projection (CMP)
- instead of projecting onto a single plane as in ERP, projections are made onto the six faces of a cube, i.e., left, front, right, behind, top, and bottom
- as a result, the distortion caused by the projection is much reduced
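Which of the six cube faces a view direction projects onto is decided by its dominant axis. A small sketch (the face labels follow the list above; the function name and axis convention are assumptions):

```python
def cubemap_face(x, y, z):
    """Return the cube face a direction vector (x, y, z) points at."""
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:          # x dominates: left/right faces
        return "right" if x > 0 else "left"
    if ay >= ax and ay >= az:          # y dominates: top/bottom faces
        return "top" if y > 0 else "bottom"
    return "front" if z > 0 else "behind"  # z dominates: front/behind faces
```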

Other projection methods

Equi-angular cubemap (EAC)
- a variation of CMP adopted by Google for streaming VR videos
- in the conventional CMP, samples that are equally spaced on a cube face are not equally spaced in viewing angle, so sampling density varies across each face; EAC adjusts the mapping so that samples correspond to equal angular steps

Pyramid format
- advocated by Facebook
- in the so-called pyramid projection, the base of the pyramid provides the full-resolution image of the front view, while the sides of the pyramid provide the side-view images with gradually reduced resolution
- the main advantage of this multiresolution approach is to reduce the bitrate required for video transmission, i.e., to use the higher resolutions only when needed
