
Data Compression

Data compression refers to the process of reducing the amount of data required to
represent a given quantity of information. A common characteristic of images is that
neighbouring pixels are correlated with one another, so an image usually contains
redundant information that compression can exploit.
Types of redundancy:
 Coding Redundancy:
o Coding redundancy is associated with the representation of information.
o The information is represented in the form of codes.
o If the gray levels of an image are coded in a way that uses more code
symbols than absolutely necessary to represent each gray level, the
resulting image is said to contain coding redundancy.
 Inter-pixel Spatial Redundancy:
o Interpixel redundancy is due to the correlation between the neighboring
pixels in an image.
o That means neighboring pixels are not statistically independent, and the
gray levels are not equally probable.
o The value of any given pixel can be predicted from the values of its neighbors;
that is, they are highly correlated.
o The information carried by an individual pixel is therefore relatively small. To
reduce interpixel redundancy, the differences between adjacent pixels can be
used to represent an image (a sketch of this difference coding follows this list).
 Inter-pixel Temporal Redundancy:
o Interpixel temporal redundancy is the statistical correlation between pixels
in successive frames of a video sequence.
o Temporal redundancy is also called interframe redundancy. It can be
exploited using motion-compensated predictive coding.
o Removing a large amount of redundancy leads to efficient video
compression.
 Psychovisual Redundancy:
o Psychovisual redundancy exists because human perception does not
involve a quantitative analysis of every pixel or luminance value in the image.
o Its elimination is possible only because the information itself is not
essential for normal visual processing.
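
To make interpixel (spatial) redundancy concrete, here is a minimal Python sketch of lossless difference coding, assuming an 8-bit row of pixels stored in a numpy array (the sample values are made up):

```python
import numpy as np

# A made-up row of 8-bit pixel values; neighbours are highly correlated.
row = np.array([100, 102, 103, 103, 105, 110, 112, 111], dtype=np.int16)

# Forward mapping: prepend a zero so the first "difference" is the first
# pixel itself, then store only neighbour-to-neighbour differences.
# The differences cluster near zero, so they need fewer bits on average.
diffs = np.diff(row, prepend=0)        # [100, 2, 1, 0, 2, 5, 2, -1]

# Inverse mapping: a cumulative sum reconstructs the row exactly (lossless).
reconstructed = np.cumsum(diffs)
assert np.array_equal(reconstructed, row)
```

The mapping saves nothing by itself; the gain comes when a later symbol encoder assigns short codewords to the frequent near-zero differences.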
Image Compression Models
Two distinct structures:
Encoder: An input image f(x,y) is fed into the encoder, which creates a set of
symbols from the input data.
Decoder: The encoded information is fed into the decoder, where the reconstructed
output image f̂(x,y) is generated.

The Source Encoder


The encoding process has three stages:
1. Mapper:
• Transforms the input data into a format designed to reduce interpixel
redundancies in the input image.
• A reversible operation.
• May or may not directly reduce the amount of data required to represent the image.
2. Quantizer:
• Reduces the accuracy of the mapper's output.
• Reduces the psychovisual redundancies of the input image.
• Not a reversible operation.
• Must be omitted when error-free (lossless) compression is desired.
3. Symbol encoder:
• Creates a fixed- or variable-length code to represent the quantizer's output.
• Maps the output in accordance with the code.
• In most cases a variable-length code is used (see the sketch after this list).
• A reversible operation.
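
As a rough illustration of the symbol-encoder stage, the following Python sketch builds a Huffman code, one common variable-length code, from symbol frequencies; the heap-based construction and the toy symbol string are illustrative assumptions, not any particular standard's method:

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a variable-length (Huffman) code table from a symbol sequence."""
    freq = Counter(symbols)
    # Heap entries: (subtree frequency, tie-breaker, {symbol: codeword-so-far}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                        # degenerate one-symbol input
        return {s: "0" for s in heap[0][2]}
    tie = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)       # two least-frequent subtrees
        f2, _, t2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

# Frequent symbols get shorter codewords than rare ones.
table = huffman_code("aaaabbbccd")
encoded = "".join(table[s] for s in "aaaabbbccd")
print(table, "->", len(encoded), "bits")  # 19 bits vs 20 for fixed 2-bit codes
```

Because frequent symbols receive shorter codewords, the total bit count drops below that of a fixed-length code whenever the symbol distribution is skewed.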

Source Decoder
It contains two components:
• A symbol decoder.
• An inverse mapper.
These perform, in reverse order, the inverse operations of the source encoder's
symbol encoder and mapper blocks. An inverse of the quantizer is left out,
because quantization is not reversible.
Metrics:
1. Compression ratio:
CR = n1 / n2
where n1 is the number of bits needed to represent the original image and n2 is
the number of bits in the compressed representation. For example, a 1 MB image
compressed to 250 KB gives CR = 4, often written 4:1.
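A minimal sketch of computing this ratio from file sizes (the file names are hypothetical):

```python
import os

# Hypothetical paths: an uncompressed original and its compressed version.
n1 = os.path.getsize("image_raw.bmp") * 8   # bits in the original
n2 = os.path.getsize("image.jpg") * 8       # bits after compression

cr = n1 / n2                                # CR = n1 / n2
print(f"Compression ratio: {cr:.2f}:1")
```
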
2. Entropy
Entropy is a measure of the information content in an image, quantifying
the average number of bits needed to represent each pixel based on the
probability of each pixel value.
Key Points:
Definition: Entropy H is given by:
H = -Σ p(x) log2 p(x)
where p(x) is the probability of occurrence of pixel value x across the
entire image and the sum runs over all possible pixel values.
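A minimal numpy sketch of this computation, assuming an 8-bit grayscale image (the random image here is a made-up stand-in):

```python
import numpy as np

def entropy(image):
    """Average bits per pixel: H = -sum over x of p(x) * log2(p(x))."""
    counts = np.bincount(image.ravel(), minlength=256)  # value histogram
    p = counts / counts.sum()                           # probabilities p(x)
    p = p[p > 0]                                        # treat 0*log2(0) as 0
    return -np.sum(p * np.log2(p))

# A made-up uniform-random 8-bit image is nearly incompressible, so its
# entropy approaches the 8 bits/pixel maximum.
img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
print(f"Entropy: {entropy(img):.2f} bits/pixel")
```
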
3. Mean Square Error (MSE)
Mean Square Error (MSE) quantifies the difference between the original
and compressed images, specifically focusing on the pixel-by-pixel
difference. It is commonly used in evaluating lossy compression
techniques, where some data is discarded.
Key Points:
 Definition: MSE is defined as:
MSE = (1 / (m n)) Σi Σj [I(i,j) - K(i,j)]^2
where:
 I(i,j) is the pixel value in the original image at position (i,j),
 K(i,j) is the pixel value in the compressed image at position (i,j),
 m and n are the dimensions of the image.
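
A minimal numpy sketch of this formula (both arrays here are made-up examples of the same shape):

```python
import numpy as np

def mse(original, compressed):
    """Mean square error: the average of squared pixel-wise differences."""
    diff = original.astype(np.float64) - compressed.astype(np.float64)
    return np.mean(diff ** 2)

# Made-up example: compare an image against a slightly noisy version.
original = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
noisy = np.clip(original + np.random.randint(-5, 6, size=(64, 64)), 0, 255)
print(f"MSE: {mse(original, noisy):.2f}")
```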

Lossy Compression is a data compression method that reduces file size by
eliminating less important information. In contrast to lossless compression,
which preserves the data exactly, lossy compression achieves higher
compression ratios by discarding data that the human eye or ear might not
notice. This makes it highly useful for multimedia applications, such as
images, audio, and video, where some data can be sacrificed for space
efficiency without significantly affecting perceived quality.

JPEG is based on the Discrete Cosine Transform (DCT), which converts
spatial pixel values into frequency components. Here is a step-by-step JPEG
compression process:
1. Divide the image into 8x8 blocks: Each 8x8 block is transformed and
quantized independently, which keeps the computation local and tractable.
2. Apply DCT: This transform converts the pixel values into frequency
values, isolating high-frequency details (edges, fine texture) from low-
frequency information (overall shape, color).
3. Quantization: JPEG compression uses a quantization table to reduce the
precision of DCT coefficients, especially for high frequencies, as they are
less perceptible to the human eye.
4. Entropy Encoding: The quantized coefficients are then compressed
using entropy coding techniques like Huffman or arithmetic coding.
JPEG Quality Trade-off: By adjusting the quantization table, JPEG
compression can provide a range of compression ratios, allowing users to
balance image quality with file size.
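
As a hedged sketch of steps 1-3 on a single block (using scipy's DCT routines; the quantization table below is a made-up stand-in that merely grows with frequency, not the standard JPEG luminance table, and a real encoder adds zigzag ordering plus the entropy coding of step 4):

```python
import numpy as np
from scipy.fft import dctn, idctn

# A made-up 8x8 block of pixel values (0-255), level-shifted to centre on 0
# as JPEG does before the transform.
block = np.random.randint(0, 256, size=(8, 8)).astype(np.float64) - 128

# Step 2: the 2-D DCT concentrates energy into low-frequency coefficients.
coeffs = dctn(block, norm="ortho")

# Step 3: quantization with a hypothetical table whose step size grows with
# frequency (the real JPEG tables are psychovisually tuned).
i, j = np.indices((8, 8))
qtable = 8 + 4 * (i + j)
quantized = np.round(coeffs / qtable).astype(np.int32)

# Many high-frequency coefficients become zero; that sparsity is what the
# entropy coder in step 4 exploits.
print("Nonzero coefficients:", np.count_nonzero(quantized), "of 64")

# Decoder side: dequantize and inverse-transform to approximate the block.
approx = idctn(quantized * qtable, norm="ortho") + 128
```

Larger quantization steps zero out more coefficients, which is exactly the quality-versus-size trade-off the quantization table controls.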
