Difference Between Lossless Compression and Lossy Compression

Lossy Compression:
2. Data that has been compressed using this technique cannot be recovered and reconstructed exactly.
3. Used for applications that can tolerate a difference between the original and reconstructed data.
5. Sound and image compression use lossy compression.
8. E.g. (i) telephone system, (ii) Video CD.

Lossless Compression:
2. If data has been losslessly compressed, the original data can be recovered from the compressed data.
3. Used for applications that cannot tolerate any difference between the original and reconstructed data.
5. Text compression uses lossless compression.
8. E.g. (i) fax machine, (ii) radiological imaging.
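The distinction can be sketched in a few lines of Python (the lossless half uses the standard zlib module; the "lossy" half is just coarse quantization, not a real codec):

```python
import zlib

# Lossless: the compressed data can be decompressed back to an exact copy.
text = b"compression compression compression"
packed = zlib.compress(text)
assert zlib.decompress(packed) == text  # bit-for-bit identical

# Lossy (sketch): quantizing samples discards information permanently.
samples = [12, 13, 14, 200, 201, 202]
quantized = [round(s / 10) * 10 for s in samples]  # keep only coarse levels
# quantized == [10, 10, 10, 200, 200, 200]; the original values cannot be recovered
```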
Compression has two types, i.e. lossy and lossless techniques. A typical image compression system comprises two main blocks: an Encoder (Compressor) and a Decoder (Decompressor). The image f(x,y) is fed to the encoder, which encodes the image so as to make it suitable for transmission. The decoder receives this transmitted signal and reconstructs the output image f̂(x,y). If the system is an error-free one, f̂(x,y) will be a replica of f(x,y).
The encoder and the decoder are made up of two blocks each. The encoder consists of a source encoder and a channel encoder. The source encoder removes the input redundancies, while the channel encoder increases the noise immunity of the source encoder's output. The decoder consists of a channel decoder and a source decoder. The function of the channel decoder is to ensure that the system is immune to noise. Hence, if the channel between the encoder and the decoder is noise free, the channel encoder and the channel decoder are omitted.
The three basic types of redundancy in an image are inter-pixel redundancy, coding redundancy and psychovisual redundancy. Run-length coding is used to eliminate or reduce inter-pixel redundancy, Huffman encoding is used to eliminate or reduce coding redundancy, while IGS (Improved Gray Scale) quantization is used to eliminate or reduce psychovisual redundancy. The job of the source decoder is to get back the original signal. Run-length coding, Huffman encoding and IGS coding are examples of techniques used in source encoders and decoders.
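Run-length coding as mentioned above can be sketched as follows (a minimal illustration; the function names are my own):

```python
def rle_encode(pixels):
    """Collapse runs of identical pixel values into (value, count) pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1
        else:
            runs.append([p, 1])
    return [tuple(r) for r in runs]

def rle_decode(runs):
    """Expand (value, count) pairs back into the original pixel sequence."""
    return [v for v, n in runs for _ in range(n)]

row = [255] * 6 + [0] * 3 + [255] * 4   # a row with long uniform runs
encoded = rle_encode(row)               # [(255, 6), (0, 3), (255, 4)]
assert rle_decode(encoded) == row       # lossless: the row is recovered exactly
```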
The input image is passed through a mapper. The mapper reduces the inter-pixel redundancies. The mapping stage is a lossless technique and hence is a reversible operation. The output of the mapper is passed through a quantizer block. The quantizer block reduces the psychovisual redundancies. It compresses the data by eliminating some information and hence is an irreversible operation. The quantizer block is used in lossy compression schemes such as JPEG; hence, in the case of lossless compression, the quantizer block is eliminated. The final block of the source encoder is the symbol encoder. This block creates a variable-length code to represent the output of the quantizer. The Huffman code is a typical example of a symbol encoder. The symbol encoder reduces coding redundancies.
The source decoder block performs exactly the reverse operations of the symbol encoder and the mapper blocks. It is important to note that the source decoder has only two blocks. Since quantization is irreversible, an inverse quantizer block does not exist. If the channel is noise free, the channel encoder and channel decoder can be ignored.
The channel encoder is used to make the system immune to transmission noise. Since the output of the source encoder has very little redundancy, it is highly susceptible to noise. The channel encoder inserts a controlled form of redundancy into the source encoder output, making it more noise resistant.
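A minimal illustration of controlled redundancy is a (3,1) repetition code; it is a simple stand-in for the channel codes actually used (real systems use more efficient codes such as Hamming or convolutional codes):

```python
def channel_encode(bits):
    """(3,1) repetition code: transmit each bit three times."""
    return [bit for bit in bits for _ in range(3)]

def channel_decode(received):
    """Majority vote over each group of three corrects any single flipped bit."""
    return [1 if sum(received[i:i + 3]) >= 2 else 0
            for i in range(0, len(received), 3)]

msg = [1, 0, 1, 1]
sent = channel_encode(msg)
sent[4] ^= 1                       # channel noise flips one transmitted bit
assert channel_decode(sent) == msg # the added redundancy absorbs the error
```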
Symbol   Probability   Code   Length
S1       0.25          01     2
S2       0.15          001    3
S3       0.06          1010   4
S4       0.08          0000   4
S5       0.21          11     2
S6       0.14          100    3
S7       0.07          0001   4
S8       0.04          1011   4
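The code lengths in the table can be reproduced with a small Huffman construction (a sketch; the symbols and probabilities are taken from the table above, and a different tie-breaking order could produce different codes with the same lengths):

```python
import heapq

probs = {"S1": 0.25, "S2": 0.15, "S3": 0.06, "S4": 0.08,
         "S5": 0.21, "S6": 0.14, "S7": 0.07, "S8": 0.04}

# Build the Huffman tree bottom-up: repeatedly merge the two least probable
# nodes; each merge adds one bit to every symbol under the merged node.
heap = [(p, i, [s]) for i, (s, p) in enumerate(probs.items())]
heapq.heapify(heap)
lengths = {s: 0 for s in probs}
counter = len(heap)                # tie-breaker so equal probabilities compare
while len(heap) > 1:
    p1, _, syms1 = heapq.heappop(heap)
    p2, _, syms2 = heapq.heappop(heap)
    for s in syms1 + syms2:
        lengths[s] += 1
    heapq.heappush(heap, (p1 + p2, counter, syms1 + syms2))
    counter += 1

avg = sum(probs[s] * lengths[s] for s in probs)
# lengths match the table (2, 3, 4, 4, 2, 3, 4, 4); avg is about 2.79 bits/symbol,
# versus 3 bits/symbol for a fixed-length code over 8 symbols.
```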
What are different types of redundancies in digital image?
Explain in detail.
(i) Redundancy can be broadly classified into Statistical redundancy and Psycho visual
redundancy.
(ii) Statistical redundancy can be classified into inter-pixel redundancy and coding redundancy.
(iii) Inter-pixel can be further classified into spatial redundancy and temporal redundancy.
(iv) Spatial redundancy is the correlation between neighboring pixel values.
(v) Spectral redundancy is the correlation between different color planes or spectral bands.
(vi) Temporal redundancy is the correlation between adjacent frames in a sequence of images in
video applications.
(vii) Image compression research aims at reducing the number of bits needed to represent an
image by removing the spatial and spectral redundancies as much as possible.
(viii) In digital image compression, three basic data redundancies can be identified and exploited:
Coding redundancy, Inter-pixel redundancy and Psychovisual redundancy.
Coding Redundancy:
o Coding redundancy is associated with the representation of information.
o The information is represented in the form of codes.
o If the gray levels of an image are coded in a way that uses more code symbols
than absolutely necessary to represent each gray level then the resulting image is
said to contain coding redundancy.
Inter-pixel Spatial Redundancy:
o Inter-pixel redundancy is due to the correlation between the neighboring pixels in
an image.
o That means neighboring pixels are not statistically independent. The gray levels
are not equally probable.
o The value of any given pixel can be predicted from the values of its neighbors;
that is, they are highly correlated.
o The information carried by individual pixel is relatively small. To reduce the
inter-pixel redundancy the difference between adjacent pixels can be used to
represent an image.
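The difference-based representation mentioned above can be sketched as:

```python
pixels = [100, 102, 101, 103, 150, 151, 150]

# Differential coding: store the first pixel plus successive differences.
diffs = [pixels[0]] + [b - a for a, b in zip(pixels, pixels[1:])]
# diffs == [100, 2, -1, 2, 47, 1, -1] -- small values dominate, so a
# variable-length code can spend fewer bits on them.

# Reconstruction is exact: a running sum of the differences.
restored = []
total = 0
for d in diffs:
    total += d
    restored.append(total)
assert restored == pixels
```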
Inter-pixel Temporal Redundancy:
o Inter-pixel temporal redundancy is the statistical correlation between pixels from
successive frames in video sequence.
o Temporal redundancy is also called inter frame redundancy. Temporal
redundancy can be exploited using motion compensated predictive coding.
o Removing a large amount of redundancy leads to efficient video compression.
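The simplest form of temporal prediction is plain frame differencing; motion compensation, which real codecs add on top, is omitted in this sketch:

```python
# Two consecutive frames that differ in only a few pixels (a "moving" bright spot).
frame1 = [[10, 10, 10], [10, 80, 10], [10, 10, 10]]
frame2 = [[10, 10, 10], [10, 10, 80], [10, 10, 10]]

# Temporal prediction: transmit only the residual frame2 - frame1.
residual = [[b - a for a, b in zip(r1, r2)] for r1, r2 in zip(frame1, frame2)]
# Most residual entries are zero, so the residual compresses very well.

# The decoder rebuilds frame2 from the previous frame plus the residual.
rebuilt = [[a + d for a, d in zip(r1, rd)] for r1, rd in zip(frame1, residual)]
assert rebuilt == frame2
```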
Psychovisual Redundancy:
o The Psychovisual redundancies exist because human perception does not involve
quantitative analysis of every pixel or luminance value in the image.
o Its elimination removes real visual information; this is acceptable only because
the information itself is not essential for normal visual processing.
State Objective and Subjective Fidelity Criteria of Image
evaluation.
Obtain the Huffman code for the word COMMITTEE.
Explain all the steps in JPEG image compression standard.
1. JPEG 2000 standard for the compression of still images is based on the Discrete Wavelet
Transform (DWT). This transform decomposes the image using functions called
wavelets.
2. The basic idea is to have a more localized analysis of the information which is not
possible using cosine functions whose temporal or spatial supports are identical to the
data.
3. Better image quality than JPEG at the same file size, or alternatively 25-35 % smaller file
sizes at the same quality.
4. Good image quality at low bit rates (even with compression ratios over 80:1).
5. Low complexity option for devices with limited resources.
6. Scalable image files – no decompression needed for reformatting. With JPEG 2000, the
image that best matches the target device can be extracted from a single compressed file
on a server. Options include:
a) Image sizes from thumbnail to full size.
b) Grayscale to full 3 channel color.
c) Low quality image to lossless (identical to original image)
7. JPEG 2000 is more suitable for web graphics than baseline JPEG because it supports an
alpha channel (transparency component).
8. Region of Interest (ROI): One can define some more interesting parts of image, which are
coded with more bits than surrounding areas.
A. Tiling:
It accepts color or grayscale images and checks the tone of the image based on the bpp
value. It generates variable-length segments of the image called 'Tiles'; because of this
improved functionality, JPEG 2000 can accept non-standard image sizes.
It divides the complete image into blocks of size 8 × 8 pixels. If the image dimensions
are not exact multiples of 8 pixels, extra zeros are added; this process is called zero
padding.
B. Level Shifter:
It converts the unsigned values of an image into signed values. This process increases
the efficiency of the discrete cosine/wavelet transform.
If the bpp value is 8 bits, the number of possible levels of the image is 2^8 = 256
(range 0 to 255). This unsigned range is converted into the signed range -128 to 127.
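The level shift for 8 bpp can be sketched as:

```python
# Level shifting: map the unsigned range 0..255 to the signed range -128..127
# by subtracting 2**(bpp - 1); the decoder adds the offset back after the
# inverse transform.
bpp = 8
offset = 2 ** (bpp - 1)                  # 128 for 8-bit images

row = [0, 100, 128, 255]
shifted = [p - offset for p in row]      # [-128, -28, 0, 127]
restored = [s + offset for s in shifted]
assert restored == row                   # the shift is exactly reversible
```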
C. DWT Process
D. Quantizer
The quantizer rounds off the frequency domain image with respect to both the amplitude
and the coordinates of the image; the amount of rounding is described by a fixed
rounding table. Rounding the fractional value of a frequency domain coefficient to the
nearest possible integer is called amplitude rounding. In coordinate rounding,
unnecessary high-frequency components of the image are rounded off.
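Amplitude rounding can be sketched as follows (the coefficient values and quantizer steps here are illustrative, not taken from an actual standard table):

```python
# Each frequency coefficient is divided by its entry in a fixed quantization
# table and rounded to the nearest integer. Large table entries at the
# high-frequency positions drive those coefficients to zero.
coeffs = [235.4, 31.7, -12.2, 4.1, 1.3, -0.8]   # sample transform coefficients
qtable = [16, 11, 10, 16, 24, 40]               # sample quantizer step sizes

quantized = [round(c / q) for c, q in zip(coeffs, qtable)]
# quantized == [15, 3, -1, 0, 0, 0] -- the trailing zeros are what the later
# run-length stage exploits.

# Dequantization is only approximate: the discarded fractions are gone for good.
dequantized = [v * q for v, q in zip(quantized, qtable)]
```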
E. Zigzag Encoder
It converts the two-dimensional frequency domain image into a one-dimensional matrix
by using a scanning process. The zigzag encoder does not follow natural row-by-row
scanning; it follows diagonal scanning, as shown in the figure below:
Because of this diagonal scanning, the elements of the one-dimensional matrix are also
arranged on a frequency basis, from low to high frequency. The size of the one-
dimensional array for each block is 1 × 64.
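The diagonal scan order can be generated programmatically (a sketch; the direction convention below matches the standard JPEG zigzag):

```python
def zigzag_indices(n=8):
    """Return the (row, col) visiting order of the diagonal zigzag scan."""
    order = []
    for d in range(2 * n - 1):                      # d indexes the anti-diagonals
        cells = [(i, d - i) for i in range(n) if 0 <= d - i < n]
        # Alternate direction on each diagonal to get the zigzag pattern.
        order.extend(cells if d % 2 else reversed(cells))
    return order

scan = zigzag_indices(8)
# scan begins (0,0), (0,1), (1,0), (2,0), (1,1), (0,2), ... and visits all
# 64 positions of the 8 x 8 block exactly once.
assert len(scan) == 64
```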
F. Huffman Encoder
It converts fixed-length frequency domain pixels into variable-length frequency domain
pixels, depending on the probability of each frequency domain pixel.
The largest amount of compression in JPEG is achieved in the Huffman encoder block.
The blocks before the Huffman encoder function in such a way that they improve the
compression ratio provided by the Huffman encoder.
G. RLE
The compression ratio provided by the Huffman encoder is increased by run-length
encoding, because the output of the Huffman encoder contains runs of zeros and ones.
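Zero-run coding of a zigzagged coefficient stream can be sketched as follows (a simplified illustration with my own function name; in baseline JPEG the runs are formed on the zigzagged coefficients before entropy coding):

```python
def run_length_ac(coeffs):
    """Encode a 1-D coefficient stream as (zero_run, value) pairs, ending
    with an end-of-block (EOB) marker once only zeros remain."""
    # Drop the trailing zeros -- they are summarized by the EOB marker.
    last = len(coeffs)
    while last > 0 and coeffs[last - 1] == 0:
        last -= 1
    pairs, run = [], 0
    for c in coeffs[:last]:
        if c == 0:
            run += 1                 # count zeros preceding the next value
        else:
            pairs.append((run, c))
            run = 0
    pairs.append("EOB")
    return pairs

zigzagged = [15, 3, 0, 0, -1, 0, 2] + [0] * 57
encoded = run_length_ac(zigzagged)
# encoded == [(0, 15), (0, 3), (2, -1), (1, 2), 'EOB'] -- 57 trailing zeros
# collapse into the single EOB symbol.
```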
Short note: JPEG
JPEG is a widely used image compression technique. It is used in image processing systems such
as copiers, scanners and digital cameras. These devices often require high-speed image
compression techniques.
JPEG Encoder:
In the figure, DCT stands for the Discrete Cosine Transform and IDCT stands for the Inverse
Discrete Cosine Transform. The input image is partitioned into 8x8 sub-blocks. The DCT is
computed on each of the 8x8 blocks of pixels. The coefficient with zero frequency in both
directions is called the 'DC coefficient' and the remaining 63 coefficients are called the 'AC
coefficients'. The DCT processing step lays the foundation for achieving data compression by
concentrating most of the signal in the lower spatial frequencies. The 64 DCT coefficients are
scalar quantized using uniform quantization tables based upon psychovisual experiments.
After the DCT coefficients are quantized, the coefficients are ordered according to the zigzag
scan as shown in the figure below. The zigzag scanning is based upon the
observation that most of the high-frequency coefficients are zero after quantization.
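The energy-concentration property of the DCT can be illustrated with a naive (unoptimized) 2-D DCT-II; for a flat block, all of the signal lands in the DC coefficient:

```python
import math

def dct2(block):
    """Naive 2-D DCT-II of an 8x8 block (the transform used by baseline JPEG)."""
    n = 8
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            cu = math.sqrt(1 / n) if u == 0 else math.sqrt(2 / n)
            cv = math.sqrt(1 / n) if v == 0 else math.sqrt(2 / n)
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                    * math.cos((2 * y + 1) * v * math.pi / (2 * n))
                    for x in range(n) for y in range(n))
            out[u][v] = cu * cv * s
    return out

flat = [[50] * 8 for _ in range(8)]      # a constant (flat) 8x8 block
coeffs = dct2(flat)
# The DC coefficient is 8 * 50 = 400; every AC coefficient is essentially zero,
# which is why smooth regions compress so well after quantization.
```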
JPEG Decoder:
The JPEG standard is a broad standard encompassing several compression and transmission
modes. In order to facilitate future expansion to other modes, this implementation has a very
modular construction. A single thread of control code handles all individual routines (kernels),
which are called multiple times as required by the application. This control code will always be
in 'C' to facilitate changes in the control architecture. The encoder quantizes and Huffman-encodes
the DC coefficients obtained from the DCT module. In JPEG the DC component is differentially
encoded, i.e. the difference between the present and the preceding DC component is computed, and
this difference is quantized and encoded. Quantization involves an inherent division operation
with an element from the quantizer table. In this implementation a reciprocal quantizer table, pre-
computed from the quantizer table, is used.