Lecture 2
Dr. Hiba Hassan Sayed
Department of Electrical and Electronic Engineering
University of Khartoum
Multimedia Presentation
• To make colors readable on a screen, it is customary to use the principal complementary color as the background for text.
• For color values in the range 0–1, if the text color is (R, G, B), the background color is usually given by
(R, G, B) ⇒ (1 − R, 1 − G, 1 − B)
(for values in the range 0–255, each component is subtracted from 255 instead).
• That is, the background color is “opposite” to the text in terms of contrast, brightness, etc.
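• A minimal sketch of this rule in Python (assuming values normalized to 0–1; for 0–255 values one would subtract from 255 instead):

# Sketch: complementary background color for readable text.
def complement(r, g, b):
    """Return the complementary color for values in the range 0-1."""
    return (1.0 - r, 1.0 - g, 1.0 - b)

# Example: bright yellow text (1, 1, 0) -> blue background (0, 0, 1).
print(complement(1.0, 1.0, 0.0))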
Color Wheel
• The colors are represented on a color wheel.
• The color wheel used for graphics design is different from the color wheel used by artists.
• In the color wheel for graphics design, the opposite colors are calculated by the equation mentioned earlier:
(R, G, B) ⇒ (1 − R, 1 − G, 1 − B)
• The artist’s color wheel, in contrast, is based on feel rather than on an algorithm.
• The following diagrams show the two color wheels.
Color Models
• These components are considered in color models.
• Hence, an RGB color can also be represented in terms of:
• Hue, Saturation and Value (HSV):
the hue (the tone of the color) and saturation (the purity of the color) together resemble various shades of brightly colored paint, and the value resembles the mixture of those paints with varying amounts of black (value closer to 0) or white (value approaching 255) paint.
• Hue, Saturation and Lightness (HSL):
places fully saturated colors around a circle at a lightness value of 1/2.
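• A short illustration of the HSV interpretation using Python's standard colorsys module (values normalized to 0–1; not part of the original slides):

import colorsys

# Pure red: hue 0, full saturation, full value.
print(colorsys.rgb_to_hsv(1.0, 0.0, 0.0))   # (0.0, 1.0, 1.0)

# Darker red: same hue and saturation, lower value (mixed with black).
print(colorsys.rgb_to_hsv(0.5, 0.0, 0.0))   # (0.0, 1.0, 0.5)

# Desaturated (pastel) red: same hue, lower saturation (mixed with white).
print(colorsys.rgb_to_hsv(1.0, 0.5, 0.5))   # (0.0, 0.5, 1.0)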
Image/Video Processing
• In component video, for example, the resulting color space is referred to as YUV or YCbCr, where Y encodes luminance, U or Cb the difference between the blue primary and luminance, and V or Cr the difference between the red primary and luminance.
• Because the eye is less sensitive to these color differences than to luminance, computing them instead of the direct colors allows the amount of data processed to be reduced considerably.
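• As an illustration, the widely used BT.601 full-range conversion from 8-bit RGB to YCbCr looks roughly like this (a sketch; exact coefficients vary between standards):

def rgb_to_ycbcr(r, g, b):
    """Approximate BT.601 full-range conversion for 8-bit RGB values."""
    y  = 0.299 * r + 0.587 * g + 0.114 * b                # luminance
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b      # blue difference
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b      # red difference
    return y, cb, cr

print(rgb_to_ycbcr(255, 0, 0))   # pure red: Y ~ 76, Cb ~ 85, Cr ~ 255 (clipped in practice)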
1-bit images
• Images consist of pixels.
• A 1-bit image consists of on and off bits only and
thus is the simplest type of image.
• Each pixel is stored as a single bit (0 or 1).
Hence, such an image is also referred to as a
binary image.
• It is also called a 1-bit monochrome image,
since it contains no color.
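• A minimal sketch of producing a 1-bit (binary) image by thresholding a greyscale one (assuming NumPy; the threshold of 128 is an arbitrary choice, not from the slides):

import numpy as np

def to_binary(gray, threshold=128):
    """Map an 8-bit greyscale array to a 1-bit (0/1) image."""
    return (gray >= threshold).astype(np.uint8)

gray = np.array([[0, 100], [200, 255]], dtype=np.uint8)
print(to_binary(gray))   # [[0 0] [1 1]]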
8-bit (cont.)
• The pixel value is calculated as follows:
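• A plausible form of this calculation (assuming the standard bit-plane decomposition, where b7 is the most significant bit and b0 the least significant):
p(x,y) = b7(x,y)·2^7 + b6(x,y)·2^6 + … + b1(x,y)·2^1 + b0(x,y)·2^0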
8-bit (cont.)
• The first (most significant) bit plane makes the largest contribution to the image.
Palette Animation
• A simple animation effect can be achieved by changing the color table; this process is called color cycling or palette animation.
• The color cycling technique was used in early
computer games.
• Storing one image and changing its palette required
less memory and processor power than storing the
animation as several frames (images).
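• A minimal sketch of color cycling (assuming NumPy): only the palette is rotated each frame, while the stored pixel indices never change.

import numpy as np

indices = np.array([[0, 1, 2, 3]], dtype=np.uint8)         # indexed image, stored once
palette = np.array([[255, 0, 0], [0, 255, 0],
                    [0, 0, 255], [255, 255, 0]], dtype=np.uint8)

for frame in range(4):
    rotated = np.roll(palette, frame, axis=0)   # "animate" by shifting the palette rows
    rgb_frame = rotated[indices]                # look up colors; shape (1, 4, 3)
    print(frame, rgb_frame[0, 0])               # the first pixel changes color each frame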
Image Resolution
• There are 3 forms of resolution in images:
1. Intensity resolution
Each pixel has “Depth” bits for colors/intensities.
2. Spatial resolution
Image has “Width” x “Height” pixels.
3. Temporal resolution
Monitor refreshes images at specific rates
(measured in Hz).
Error Sources
• There are 3 main error sources in digital image
processing:
• Intensity quantization;
Not enough intensity resolution
• Spatial aliasing;
Not enough spatial resolution
• Temporal aliasing;
Not enough temporal resolution
Quantization
• Quantization corresponds to a discretization of the
intensity values.
• Hence, for an image with intensity i(x,y), the intensities are represented by a fixed number of discrete LEVELS.
• For uniform quantization, the quantized value p(x,y) is defined by
p(x,y) = trunc(LEVELS * i(x,y))
• And the error is given by
e(x,y) = p(x,y)/(LEVELS-1) - i(x,y).
• The total error for the image is taken as the mean squared error over all pixels.
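• A sketch of the formulas above (assuming NumPy and intensities i(x,y) normalized to [0, 1)):

import numpy as np

LEVELS = 4
i = np.random.rand(4, 4)           # continuous intensities in [0, 1)

p = np.trunc(LEVELS * i)           # quantized levels 0 .. LEVELS-1
e = p / (LEVELS - 1) - i           # per-pixel error
mse = np.mean(e ** 2)              # total image error (mean squared error)
print(p, mse)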
Uniform Quantization
Example
Applying Quantization
• Notice the color jumps, known as contouring.
Dithering
• A possible solution for the contouring effect is
Dithering.
• Dithering distributes quantization errors amongst pixels.
Dithering Techniques
• There are several dithering techniques.
• The following will be explored in this course:
1) Random dither
2) Ordered dither
3) Error diffusion dither
Random Dither
• The quantization errors are randomized: noise is added to each pixel’s intensity before it is quantized.
• This keeps the overall intensity of the image close to the input intensity while breaking the visible contours up into unstructured noise.
P(x,y) = trunc(I(x,y) + noise(x,y))
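• A rough sketch (assuming NumPy, intensities in [0, 1), and the scaling by LEVELS from the earlier uniform-quantization formula):

import numpy as np

LEVELS = 4
i = np.random.rand(64, 64)                           # input intensities in [0, 1)

# Noise of roughly +/- half a quantization step, added before quantizing.
noise = (np.random.rand(*i.shape) - 0.5) / LEVELS
p = np.trunc(np.clip(LEVELS * (i + noise), 0, LEVELS - 1))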
Ordered Dithering
• Dithering is often used when converting greyscale images to bit-mapped ones, e.g. for printing.
• Each pixel value is represented by a larger pattern of dots (e.g. a 2×2 or 4×4 matrix).
• This results in a number of printed dots that approximates the varying-sized disks of ink used in halftone printing.
• Halftone printing is an analog process used in
newspaper printing. It uses smaller or larger filled
circles of black ink to represent shading.
Halftone Patterns
• The intensities in a 3×3 cluster are given by:
Dithering (cont.)
• Replace each pixel by a 4×4 block of dots (binary pixels): if the remapped intensity is greater than the dither matrix entry, put a dot at that position (set it to 1); otherwise set it to 0.
• To keep the image the same size, an ordered dither instead produces one output pixel per input pixel, with value 1 iff the remapped intensity at that pixel position is greater than the corresponding matrix entry, as sketched below.
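• A sketch of the size-preserving ordered dither (assuming NumPy, 8-bit greyscale input, and the standard 4×4 Bayer index matrix, which is an assumption rather than the matrix shown in the slides):

import numpy as np

BAYER4 = np.array([[ 0,  8,  2, 10],
                   [12,  4, 14,  6],
                   [ 3, 11,  1,  9],
                   [15,  7, 13,  5]])

def ordered_dither(gray):
    """Output 1 iff the remapped intensity exceeds the tiled matrix entry."""
    h, w = gray.shape
    remapped = gray.astype(np.float64) * 16 / 256               # remap 0-255 to 0-16
    tiled = np.tile(BAYER4, (h // 4 + 1, w // 4 + 1))[:h, :w]   # tile matrix over image
    return (remapped > tiled).astype(np.uint8)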
• Where α + β + γ + δ = 1
• The most famous such technique is Floyd–Steinberg
dithering.
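• A compact sketch of Floyd–Steinberg error diffusion to a binary image (assuming NumPy and 8-bit greyscale input; the 7/16, 3/16, 5/16, 1/16 weights are the standard ones):

import numpy as np

def floyd_steinberg(gray):
    """Binarize an 8-bit greyscale image, diffusing the error to neighbors."""
    img = gray.astype(np.float64).copy()
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 255.0 if old >= 128 else 0.0
            out[y, x] = 1 if new > 0 else 0
            err = old - new
            if x + 1 < w:               img[y,     x + 1] += err * 7 / 16
            if y + 1 < h and x > 0:     img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:               img[y + 1, x    ] += err * 5 / 16
            if y + 1 < h and x + 1 < w: img[y + 1, x + 1] += err * 1 / 16
    return out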
Color Quantization
• Reasonably accurate color images can be obtained by quantizing the color information, collapsing the full color range into a smaller set of representative colors.
• Many systems can make use of only 8 bits of color information (i.e., 256 colors) in producing a screen image.
• Even if a system has the electronics to use 24-bit
information, backward compatibility demands that it
can understand 8-bit color image files.
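• For illustration, a 24-bit image can be reduced to a 256-color indexed image with Pillow's quantize method (Pillow and the file name photo.png are assumptions, not part of the slides):

from PIL import Image

img = Image.open("photo.png").convert("RGB")   # 24-bit truecolor input
indexed = img.quantize(colors=256)             # 8-bit palette (indexed color) image
indexed.save("photo_8bit.png")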
JPEG
• It is a standard for photographic image
compression created by the Joint Photographic
Experts Group.
• It takes advantage of limitations in the human
vision system to achieve high rates of
compression.
• The color information in JPEG is decimated
(partially dropped, or averaged).
• JPEG is a lossy compression standard that allows the user to set the desired level of quality/compression.
Cont.
• In the JPEG image compression standard, the amount of compression is controlled by a value Q in the range 0–100 (see Sect. 9.1 for details). The “quality” of the resulting image is best for Q = 100 and worst for Q = 0, as illustrated below.
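• For illustration, Pillow exposes a comparable quality setting when saving JPEG files (Pillow and the file names are assumptions):

from PIL import Image

img = Image.open("photo.png").convert("RGB")
img.save("high_quality.jpg", quality=95)   # larger file, fewer visible artifacts
img.save("low_quality.jpg",  quality=10)   # much smaller file, visible blocking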
PNG
• Portable Network Graphics (PNG) is meant as a system-independent image format.
• PNG was intended to supersede the GIF standard.
• Some features of PNG:
1. Supports up to 48 bits per pixel (more accurate colors).
2. Supports gamma-correction information and an alpha channel for controls such as transparency.
3. Supports lossless compression with better performance than GIF.
• PNG is widely supported by various web browsers
and imaging software.
Postscript (PS)
• A typesetting language which includes text as well as
vector/structured graphics and bit-mapped images.
• It is the output format of several popular graphics programs (e.g. Illustrator, FreeHand).
• The PostScript page description language does not provide compression; files are stored as ASCII text.
• Consequently, files are often large.
• It is common for PS files to be made available only
after compression by some Unix utility, such as
compress or gzip.
• Many high-end printers have a PostScript interpreter
built into them.
Bitmap (BMP)
• It is a major system standard image file format for
Microsoft Windows.
• It uses raster graphics (grid or dot matrix data
structure).
• BMP supports many pixel formats, including
indexed color (up to 8 bits per pixel), and 16, 24,
and 32-bit color images.
• It can make use of Run-Length Encoding (RLE) compression.
• BMP images can also be stored uncompressed.
Video Standards
• Experts from the two main standardization organizations (ITU-T VCEG and ISO/IEC MPEG) have collaborated several times to produce joint standards. The most significant joint standards so far are:
1. H.262/MPEG-2 Part 2, developed as a result of a joint partnership, in 1996.
2. Advanced Video Coding (AVC), H.264/MPEG-4 Part 10, developed by the Joint Video Team (JVT) in 2003.
3. High Efficiency Video Coding (HEVC), H.265/MPEG-H Part 2, developed by the Joint Collaborative Team on Video Coding (JCT-VC) in 2013.
Latest Standards
• HEVC is restricted by patents, and the licensing fees are far higher than those for H.264/AVC; this has discouraged most major tech companies from deploying it.
• Hence, some of the largest companies (Google, Mozilla, Intel, Microsoft, Netflix, Amazon, and Cisco) founded a group called the Alliance for Open Media (AOMedia) in 2015.
• AOMedia is working on a standard called AOMedia Video 1 (AV1); its first working version was released in January 2019. It targets real-time applications.
Compression
• There are 2 types of compression: lossless and lossy.
• Lossless: the original data is recovered exactly (e.g. zip, Unix compress); the achievable ratios are usually not good enough for multimedia data.
• Lossy: throw away nonessential (perceptually less relevant) parts of the data stream, i.e. FILTER the data.
• Examples: MP3, JPEG, MPEG Video/Audio
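• A tiny illustration of the lossless case using Python's zlib module (an arbitrary choice for illustration, not one of the codecs named above):

import zlib

data = b"multimedia " * 1000
packed = zlib.compress(data)
assert zlib.decompress(packed) == data          # lossless: original recovered exactly
print(len(data), "->", len(packed), "bytes")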
Compression (cont.)
• Compression via Synthesis :
• Encoding how to make (synthesize) the data can, in certain cases, be done in far fewer bits.
• Examples: Vector Graphics (Flash), MPEG Video,
MP4 (Audio), MIDI