Unit 4

The document discusses the need for image compression, highlighting benefits such as reduced storage requirements, faster transmission, and efficient processing. It explains concepts like spatial redundancy and objective fidelity, and outlines the image compression model with steps including preprocessing, transformation, quantization, and encoding. Additionally, it details various image formats, compression algorithms like Huffman and LZW, and the JPEG standard for continuous-tone image compression.

Need for Image Compression (3):

1. Reduced Storage Requirements:

o High-resolution images can occupy a large amount of memory or storage space. Image compression
reduces the file size, allowing you to store more images in the same storage capacity. This is
particularly important for applications that generate a large number of images, such as digital
photography or satellite imaging.

2. Faster Transmission:

o Compressed images require less bandwidth for transmission over networks, which is crucial for tasks
like sending images over the internet, streaming media, or in video conferencing. Smaller image sizes
result in faster upload and download times, improving user experience and reducing transmission
costs.

3. Efficient Processing:

o Smaller images are easier and faster to process since they require less computational power and
memory. This is important for real-time applications such as image processing, computer vision tasks,
and machine learning algorithms, where high-speed processing is necessary. Additionally, it helps in
reducing the overall resource usage, making the system more efficient.

Spatial Redundancy (3):

Spatial redundancy refers to the repetition of similar or identical pixel values in an image, especially in regions that
do not undergo significant changes in intensity or color. This redundancy can be exploited during compression to
reduce the amount of data required to represent the image. In other words, spatial redundancy occurs when
neighboring pixels have similar values, allowing for more efficient storage by encoding these patterns rather than
storing each pixel individually.

For example, large areas of solid color or smooth gradients often exhibit high spatial redundancy, which makes them
prime candidates for compression techniques like run-length encoding or transform coding (e.g., JPEG).

Objective Fidelity (2):

Objective fidelity is a quantitative measure of how accurately a compressed image represents the original image,
independent of subjective human judgment. It evaluates the quality of the compressed image using factors such
as:

• Compression Ratio: The ratio of the original image size to the compressed size.

• Error Metrics: Objective fidelity can be assessed using error metrics like Peak Signal-to-Noise Ratio (PSNR) or
Mean Squared Error (MSE), which quantify the difference between the original and compressed images.

Objective fidelity is important because it helps to evaluate the effectiveness of an image compression technique in
preserving the quality of the image after compression.
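
As a minimal sketch (assuming 8-bit grayscale images stored as NumPy arrays; the function names here are
illustrative, not from any specific library), MSE and PSNR can be computed as follows:

import numpy as np

def mse(original, compressed):
    # Mean squared error: average of the squared pixel-wise differences.
    diff = original.astype(np.float64) - compressed.astype(np.float64)
    return np.mean(diff ** 2)

def psnr(original, compressed, max_value=255.0):
    # Peak signal-to-noise ratio in decibels; higher means closer to the original.
    e = mse(original, compressed)
    if e == 0:
        return float("inf")  # the images are identical
    return 10 * np.log10((max_value ** 2) / e)

A PSNR of roughly 30-40 dB or more is commonly treated as good reconstruction quality for 8-bit images.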

Image Compression Model with Block Diagram (5):

The image compression model involves several stages to reduce the size of the image while maintaining as much
visual quality as possible. Below is a simplified block diagram and the steps involved in the process:

Block Diagram:

Input Image → Preprocessing → Transformation → Quantization → Encoding (Entropy Coding) → Compressed Image

Steps Involved:

1. Preprocessing: The image is first preprocessed, which may include operations like converting the image to
grayscale, reducing noise, or normalizing the pixel values.

2. Transformation: In this step, the image is transformed into a different domain (e.g., from the spatial domain
to the frequency domain). A common transformation used is the Discrete Cosine Transform (DCT) in JPEG
compression. The purpose is to represent the image in a form that is easier to compress.

3. Quantization: After transformation, the image coefficients are quantized to reduce the precision of the
values, which leads to a reduction in data size. The coarser the quantization, the greater the
compression, but also the greater the loss in image quality.

4. Encoding (Entropy Coding): This step involves encoding the quantized values using efficient methods like
Huffman coding or Arithmetic coding. These methods assign shorter codes to frequently occurring values and
longer codes to less frequent values, reducing the overall data size.

5. Compressed Image: The final compressed image is obtained, which can be stored or transmitted.
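
A minimal sketch of these stages (assuming an 8-bit grayscale image as a NumPy array; the uniform quantization
step and the whole-image DCT are illustrative simplifications, not a complete codec):

import numpy as np
from scipy.fftpack import dct

def compress(image, step=20):
    # 1. Preprocessing: shift 8-bit samples so they are centered around zero.
    x = image.astype(np.float64) - 128.0
    # 2. Transformation: 2-D DCT, applied along rows and then columns.
    coeffs = dct(dct(x, norm='ortho', axis=0), norm='ortho', axis=1)
    # 3. Quantization: a uniform step; a larger step means more compression and more loss.
    q = np.round(coeffs / step).astype(np.int32)
    # 4. Entropy coding of q (e.g., Huffman coding) would follow here; the count of
    #    surviving nonzero coefficients hints at the achievable savings.
    print("nonzero coefficients:", np.count_nonzero(q), "of", q.size)
    return q

Decompression reverses the stages: decode, multiply by the step size, and apply the inverse transform.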

Four Image Formats (2):

1. JPEG (Joint Photographic Experts Group): A commonly used format for lossy image compression, especially
for photographs.

2. PNG (Portable Network Graphics): A lossless image format that supports transparency and is widely used for
web images.

3. GIF (Graphics Interchange Format): A lossless format for images with limited color depth (up to 256 colors),
often used for simple graphics or animations.

4. TIFF (Tagged Image File Format): A versatile image format that supports lossless compression and high-
quality images, used in professional photography and scanning.

Expand JPEG, MPEG (2):

1. JPEG: Joint Photographic Experts Group

2. MPEG: Moving Picture Experts Group

JPEG Image Standard for Continuous Still Image Compression (5):

The JPEG (Joint Photographic Experts Group) standard is widely used for compressing continuous-tone still images,
such as photographs, by applying a lossy compression method. Here's a brief explanation of how it works (a short
code sketch of steps 1-3 follows the list):

1. Color Space Transformation:


o The first step in JPEG compression is converting the image from the RGB color space (which is used
for display) to the YCbCr color space, where:

▪ Y represents the luminance (brightness) of the image.

▪ Cb and Cr represent the chrominance (color) components.

o This step is beneficial because the human eye is more sensitive to brightness details than to color
details, allowing more aggressive compression for the chrominance channels (Cb and Cr) without a
significant loss in perceived quality.

2. Block-based Transformation (DCT):

o The image is divided into small blocks, typically 8x8 pixels. Each block is then processed individually.

o A Discrete Cosine Transform (DCT) is applied to each block to convert the spatial domain into the
frequency domain. The DCT separates the image data into a set of frequencies: low frequencies
(representing major structures) and high frequencies (representing fine details or noise).

o The low-frequency components are most important for image quality, while high-frequency
components can often be discarded without significant visual loss.

3. Quantization:

o The DCT coefficients are quantized to reduce their precision, which is a major step in compressing
the image. A quantization table is used to reduce high-frequency components more aggressively,
taking advantage of human visual perception (since the eye is less sensitive to small variations in
high-frequency details).

o This quantization introduces some loss of image quality, but it significantly reduces the amount of
data needed to represent the image.

4. Entropy Coding (Huffman Coding):

o After quantization, the DCT coefficients are encoded using Huffman coding or Arithmetic coding to
further compress the data.

o These techniques assign shorter codes to more frequent values and longer codes to less frequent
values, reducing the total size of the data.

5. Compression Output:

o The final compressed image consists of the encoded DCT coefficients, along with other data such as
header information, which specifies the image dimensions, quantization tables, and other
parameters necessary for decoding.

o The result is a compressed image file, typically with a significant reduction in size compared to the
original image, with some loss of detail due to quantization.

JPEG compression is widely used in applications where some loss of quality is acceptable in exchange for reduced file
size, such as in web images, digital photography, and image archives.
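
As a sketch of steps 1-3 (the color transform uses the JFIF/BT.601 coefficients and the quantization table is the
standard luminance table from the JPEG specification; block splitting and entropy coding are omitted):

import numpy as np
from scipy.fftpack import dct

# Standard JPEG luminance quantization table.
Q_LUMA = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99]])

def rgb_to_ycbcr(rgb):
    # Step 1: color space transformation; rgb is an (H, W, 3) float array in [0, 255].
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128.0
    cr = 0.5 * r - 0.418688 * g - 0.081312 * b + 128.0
    return y, cb, cr

def quantize_block(block):
    # Steps 2-3: level-shift one 8x8 block, apply the 2-D DCT, then quantize.
    coeffs = dct(dct(block - 128.0, norm='ortho', axis=0), norm='ortho', axis=1)
    return np.round(coeffs / Q_LUMA).astype(np.int32)

After quantization, most high-frequency entries of a block are zero, which is what makes the subsequent entropy
coding so effective.
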
1. Huffman Coding Algorithm (5):

Huffman coding is a lossless data compression algorithm that assigns variable-length codes to input characters, with
shorter codes assigned to more frequent characters.

Algorithm:

1. Step 1: Calculate Frequency of Each Character

o Analyze the input string and calculate the frequency of each character.

2. Step 2: Create a Priority Queue

o Create a min-heap (priority queue) where each node contains a character and its frequency.

o Add each character and its frequency to the heap.

3. Step 3: Build the Huffman Tree

o While there is more than one node in the heap:

▪ Remove the two nodes with the lowest frequencies.

▪ Create a new internal node with these two nodes as children, and assign it a frequency equal
to the sum of the frequencies of the two nodes.

▪ Insert the new node back into the heap.

4. Step 4: Assign Binary Codes

o Starting from the root of the Huffman tree, assign a binary code to each character by traversing the
tree.

o Assign ‘0’ for the left branch and ‘1’ for the right branch.

5. Step 5: Generate Encoded Output

o Replace each character in the original input with its corresponding Huffman code to get the
compressed output.
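
A minimal sketch of these steps (building a code table from a string; the tie-breaking counter and the tuple-based
tree are illustrative implementation choices):

import heapq
from collections import Counter

def huffman_codes(text):
    # Steps 1-2: count character frequencies and build a min-heap of (freq, tiebreak, node).
    freq = Counter(text)
    heap = [(f, i, sym) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    # Step 3: repeatedly merge the two lowest-frequency nodes into an internal node.
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, count, (left, right)))
        count += 1
    # Step 4: walk the tree, appending '0' for the left branch and '1' for the right.
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):          # internal node
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:                                # leaf: an original character
            codes[node] = prefix or "0"      # handle single-symbol inputs
    walk(heap[0][2], "")
    return codes

# Step 5: replace each character with its code.
codes = huffman_codes("abracadabra")
encoded = "".join(codes[c] for c in "abracadabra")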

2. Golomb Coding Algorithm (5):

Golomb coding is a lossless data compression algorithm, particularly useful for encoding sequences of non-negative
integers with a known or estimated geometric distribution.

Algorithm:

1. Step 1: Calculate the Golomb Parameter (m)

o Golomb coding uses a parameter m, which is typically chosen based on the distribution of the
data. For most practical uses, m can be approximated as the mean of the data or another value
that minimizes the average code length.

2. Step 2: Divide Input into Quotient and Remainder

o For each integer x, divide it by m to get the quotient q and remainder r.

o x = q·m + r, where q is the quotient, and r is the remainder.

3. Step 3: Encode the Quotient

o The quotient q is encoded in unary form (i.e., q ones followed by a 0).

4. Step 4: Encode the Remainder


o The remainder r is encoded in binary form using ⌈log₂ m⌉ bits (the number of bits needed to
represent the possible remainders 0 through m − 1).

5. Step 5: Concatenate the Codes

o Concatenate the unary code for the quotient and the binary code for the remainder to generate the
Golomb code for each integer.
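
A minimal sketch of these steps (following the fixed-width remainder described above; practical Golomb coders
use a truncated binary code for the remainder when m is not a power of two):

import math

def golomb_encode(x, m):
    # Step 2: split x into quotient and remainder, x = q*m + r.
    q, r = divmod(x, m)
    # Step 3: unary code for the quotient: q ones followed by a zero.
    unary = "1" * q + "0"
    # Step 4: fixed-width binary code for the remainder.
    bits = math.ceil(math.log2(m)) if m > 1 else 0
    binary = format(r, "0{}b".format(bits)) if bits else ""
    # Step 5: concatenate the two parts.
    return unary + binary

# Example: x = 9, m = 4 gives q = 2, r = 1, so "110" + "01" = "11001".
print(golomb_encode(9, 4))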

3. LZW Coding Algorithm (5):

LZW (Lempel-Ziv-Welch) is a lossless data compression algorithm that replaces strings of characters with shorter
codes.

Algorithm:

1. Step 1: Initialize the Dictionary

o Create a dictionary with all individual characters of the input data and assign them a unique code.

2. Step 2: Start Reading the Input

o Read the input data and start with the first character.

3. Step 3: Find the Longest Matching Substring

o Look for the longest substring that already exists in the dictionary.

4. Step 4: Output the Code

o Output the dictionary code for the longest substring found.

5. Step 5: Add New Substring to Dictionary

o Add the next character to the substring and add this new substring to the dictionary with a new
code.

6. Step 6: Repeat Until All Data is Processed

o Continue reading characters and adding new substrings to the dictionary until the entire input is
processed.
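
A minimal sketch of these steps (emitting a list of integer codes; the initial dictionary covers the 256 single-byte
characters):

def lzw_compress(data):
    # Step 1: dictionary of all single characters, codes 0-255.
    dictionary = {chr(i): i for i in range(256)}
    next_code = 256
    current = ""
    output = []
    # Steps 2-6: extend the current match one character at a time.
    for ch in data:
        candidate = current + ch
        if candidate in dictionary:
            current = candidate                 # still in the dictionary: keep extending
        else:
            output.append(dictionary[current])  # Step 4: emit the code for the longest match
            dictionary[candidate] = next_code   # Step 5: register the new substring
            next_code += 1
            current = ch
    if current:
        output.append(dictionary[current])      # flush the final match
    return output

print(lzw_compress("TOBEORNOTTOBEORTOBEORNOT"))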

4. Arithmetic Coding Algorithm (5):

Arithmetic coding is a form of entropy encoding that represents an entire message as a single number, a fraction
between 0 and 1, which is then encoded into binary.

Algorithm:

1. Step 1: Calculate Probabilities

o Calculate the probability of each symbol in the input message (or use a predefined model if
necessary).

2. Step 2: Define the Range

o Define an initial range [low, high], which is set to [0, 1).

3. Step 3: Update the Range for Each Symbol

o For each symbol in the input message, update the range according to the probability of the symbol.
This divides the range into sub-ranges corresponding to the symbols.
o Update the low and high values by narrowing down the range based on the probability of the
symbol.

4. Step 4: Repeat Until All Symbols Are Processed

o Continue this process for every symbol in the message.

5. Step 5: Output the Final Range

o Once all symbols are processed, the final range [low, high] will represent the entire message. Any
number in this range can be used as the compressed code.
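
A minimal floating-point sketch of these steps (real coders use integer arithmetic with renormalization to avoid
running out of precision; the probabilities here are taken from the message itself):

from collections import Counter

def arithmetic_encode(message):
    # Step 1: symbol probabilities and their cumulative sub-ranges of [0, 1).
    freq = Counter(message)
    total = len(message)
    intervals, cumulative = {}, 0.0
    for sym in sorted(freq):
        p = freq[sym] / total
        intervals[sym] = (cumulative, cumulative + p)
        cumulative += p
    # Steps 2-4: start with [0, 1) and narrow the range for every symbol.
    low, high = 0.0, 1.0
    for sym in message:
        s_low, s_high = intervals[sym]
        width = high - low
        high = low + width * s_high
        low = low + width * s_low
    # Step 5: any number inside [low, high) identifies the whole message.
    return (low + high) / 2

print(arithmetic_encode("ABBA"))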

5. Run-Length Coding Algorithm (5):

Run-Length Encoding (RLE) is a simple form of lossless data compression that encodes sequences of identical data
elements into a single data value and count.

Algorithm:

1. Step 1: Initialize

o Start from the beginning of the input data.

2. Step 2: Identify Runs

o Traverse the data and identify runs of consecutive identical symbols (characters or pixels).

3. Step 3: Output the Run

o For each run, output the symbol and the length of the run. For example, a run of "AAAA" would be
encoded as "A 4".

4. Step 4: Continue for All Runs

o Continue processing the input data, identifying runs of identical characters or symbols, and
outputting them until the entire input is processed.

5. Step 5: Return the Encoded Data

o The output will be a sequence of pairs representing the symbol and the count for each run.
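
A minimal sketch of these steps (emitting (symbol, count) pairs; a real coder would also fix a byte layout for the
pairs and a strategy for runs of length one):

def rle_encode(data):
    # Steps 1-4: scan the input and count each run of identical symbols.
    runs = []
    i = 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i]:
            j += 1                      # extend the current run
        runs.append((data[i], j - i))   # Step 3: record (symbol, run length)
        i = j
    return runs                         # Step 5: the encoded sequence of pairs

print(rle_encode("AAAABBBCCD"))  # [('A', 4), ('B', 3), ('C', 2), ('D', 1)]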
