Data Compression (1)
Data compression refers to the process of reducing the amount of data required to
represent a given quantity of information. One characteristic common to most images
is that neighbouring pixels are correlated with each other, so their representation is
likely to contain redundant information.
Types of redundancy:
Coding Redundancy:
o Coding redundancy is associated with the representation of information.
o The information is represented in the form of codes.
o If the gray levels of an image are coded in a way that uses more code
symbols than absolutely necessary to represent each gray level then the
resulting image is said to contain coding redundancy.
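As a sketch of this idea, the toy example below (with made-up pixel data and a hand-built prefix code, not a full Huffman coder) shows how assigning shorter codewords to more frequent gray levels uses fewer bits than a fixed-length code:

```python
# Hypothetical image with 4 gray levels, as a flat list of pixels.
pixels = [0, 0, 0, 0, 1, 1, 2, 3]

# Fixed-length coding: 2 bits per symbol for 4 gray levels.
fixed_bits = len(pixels) * 2

# A variable-length prefix code assigning short codewords to frequent
# levels (hand-built here; a real coder would derive it with Huffman's
# algorithm from the gray-level histogram).
code = {0: "0", 1: "10", 2: "110", 3: "111"}
variable_bits = sum(len(code[p]) for p in pixels)

print(fixed_bits, variable_bits)  # 16 14
```

The fixed-length code spends bits on rare levels; the variable-length code removes that coding redundancy.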
Inter-pixel Spatial Redundancy:
o Interpixel redundancy is due to the correlation between the neighboring
pixels in an image.
o That means neighboring pixels are not statistically independent, and the gray
levels are not equally probable.
o The value of any given pixel can be predicted from the values of its neighbors;
that is, they are highly correlated.
o The information carried by individual pixel is relatively small. To reduce the
interpixel redundancy the difference between adjacent pixels can be used to
represent an image.
Inter-pixel Temporal Redundancy:
o Interpixel temporal redundancy is the statistical correlation between pixels
from successive frames in video sequence.
o Temporal redundancy is also called interframe redundancy. Temporal
redundancy can be exploited using motion compensated predictive coding.
o Removing a large amount of redundancy leads to efficient video
compression.
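A minimal sketch of interframe prediction (frame differencing without motion estimation, using made-up 2x3 frames) illustrates why temporal redundancy compresses well:

```python
# Two hypothetical 2x3 frames from a video; most pixels are unchanged.
frame1 = [[10, 10, 12], [11, 11, 13]]
frame2 = [[10, 10, 12], [11, 12, 13]]

# Interframe (temporal) prediction: encode only the residual from the
# previous frame, which is mostly zeros for static content. A real
# codec would first apply motion compensation to shrink the residual.
residual = [[b - a for a, b in zip(r1, r2)]
            for r1, r2 in zip(frame1, frame2)]

# The decoder adds the residual back to the reference frame.
recon = [[a + d for a, d in zip(r1, rd)]
         for r1, rd in zip(frame1, residual)]

print(residual)          # [[0, 0, 0], [0, 1, 0]]
print(recon == frame2)   # True
```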
Psychovisual Redundancy:
o The Psychovisual redundancies exist because human perception does not
involve quantitative analysis of every pixel or luminance value in the image.
o Its elimination of real visual information is possible only because the
information itself is not essential for normal visual processing.
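One common way to exploit psychovisual redundancy is quantization. The sketch below (with an assumed 8-bit input and an illustrative choice of 16 output levels) discards fine intensity detail the eye barely notices; note that this step is lossy:

```python
# Quantize 8-bit gray levels down to 16 levels (4 bits), reconstructing
# each pixel at the midpoint of its quantization interval.
def quantize(pixel, levels=16):
    step = 256 // levels
    return (pixel // step) * step + step // 2

pixels = [3, 100, 101, 200, 255]
print([quantize(p) for p in pixels])  # [8, 104, 104, 200, 248]
# 100 and 101 map to the same value: the distinction is lost, but the
# eye would not have perceived it anyway.
```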
Image Compression Models
Two distinct structures:
Encoder: An input image f(x,y) is fed into the encoder, which creates a set of symbols from
the input data.
Decoder: The encoded information is fed into the decoder, where a reconstructed output
image f̂(x,y) is generated.
Source Decoder
It contains two components:
• A symbol decoder.
• An inverse mapper.
These blocks perform, in reverse order, the inverse operations of the source encoder's
symbol encoder and mapper blocks. Because quantization is irreversible, no inverse
quantizer block is included.
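As a toy encoder/decoder pair (an illustrative sketch, not a full source coding model), run-length coding can play the role of the mapper, with the decoder performing the exact inverse operation; with no quantizer, reconstruction is error-free:

```python
# Encoder "mapper" stage: run-length coding of a pixel sequence.
def encode(pixels):
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([p, 1])       # start a new run
    return runs

# Decoder "inverse mapper" stage: expand runs back into pixels.
def decode(runs):
    out = []
    for value, count in runs:
        out.extend([value] * count)
    return out

data = [5, 5, 5, 7, 7, 5]
coded = encode(data)
print(coded)                  # [[5, 3], [7, 2], [5, 1]]
print(decode(coded) == data)  # True
```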
Metrics:
1. Compression ratio:
The compression ratio is CR = n1 / n2, where n1 and n2 are the number of
bits needed to represent the original and the compressed image, respectively.
2. Entropy
Entropy is a measure of the information content in an image, quantifying
the average number of bits needed to represent each pixel based on the
probability of each pixel value.
Key Points:
Definition: Entropy H is given by:
H = −Σ p(x) log₂ p(x)
where p(x) is the probability of occurrence of pixel value x across the
entire image.
3. Mean Square Error (MSE)
Mean Square Error (MSE) quantifies the difference between the original
and compressed images, specifically focusing on the pixel-by-pixel
difference. It is commonly used in evaluating lossy compression
techniques, where some data is discarded.
Key Points:
Definition: MSE is defined as:
MSE = (1 / (m·n)) Σᵢ Σⱼ [I(i,j) − K(i,j)]²
where:
I(i,j) is the pixel value in the original image at position (i,j),
K(i,j) is the pixel value in the compressed image at position (i,j),
m and n are the dimensions of the image.
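The entropy and MSE metrics above can be computed directly from their definitions; this sketch uses a made-up 2x2 original image I and "compressed" image K:

```python
import math

# Hypothetical 2x2 original and reconstructed images (lists of rows).
I = [[10, 10], [20, 30]]
K = [[10, 12], [20, 28]]

# Entropy: H = -sum p(x) log2 p(x) over the pixel-value histogram of I.
flat = [p for row in I for p in row]
probs = [flat.count(v) / len(flat) for v in set(flat)]
H = -sum(p * math.log2(p) for p in probs)

# MSE: mean of the squared pixel-by-pixel differences.
m, n = len(I), len(I[0])
mse = sum((I[i][j] - K[i][j]) ** 2
          for i in range(m) for j in range(n)) / (m * n)

print(H)    # 1.5 bits per pixel
print(mse)  # 2.0
```

Here the value 10 occurs with probability 0.5 and the values 20 and 30 with probability 0.25 each, giving H = 0.5·1 + 0.25·2 + 0.25·2 = 1.5 bits/pixel.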