MMC Unit-3
Instructional Objectives: At the end of the unit, the students should be able to:
1. Understand the differing concepts of compression techniques: lossless and lossy compression.
2. Learn the source-encoding and destination-decoding techniques.
3. Learn and differentiate the various methods of text compression.
4. Evaluate the compression of different types of images.
1. INTRODUCTION:
Compression is used in all major applications. The images on the web are compressed, typically in the JPEG or GIF formats; most modems use compression; HDTV is compressed using MPEG-2; and several file systems automatically compress files when they are stored, while the rest of us do it by hand. The neat thing about compression is that the algorithms used in the real world make heavy use of a wide set of algorithmic tools, including sorting, hash tables, tries and FFTs. Furthermore, algorithms with strong theoretical foundations play a critical role in real-world applications.

In this chapter we use the generic term message for the objects we want to compress, which could be either files or messages. The task of compression consists of two components: an encoding algorithm that takes a message and generates a compressed representation (hopefully with fewer bits), and a decoding algorithm that reconstructs the original message, or some approximation of it, from the compressed representation. These two components are typically intricately tied together, since both have to understand the shared compressed representation. Lossless algorithms can reconstruct the original message exactly from the compressed message; lossy algorithms can only reconstruct an approximation of the original message. Lossless algorithms are typically used for text, and lossy algorithms for images and sound, where a little loss in resolution is often undetectable, or at least acceptable.

Lossy is used in an abstract sense, however: it does not mean randomly lost pixels, but rather the loss of some quantity, such as a frequency component, or perhaps the loss of noise. For example, one might think that lossy text compression would be unacceptable, imagining missing or switched characters. Consider instead a system that reworded sentences into a more standard form, or replaced words with synonyms, so that the file could be better compressed. Technically the compression would be lossy, since the text has changed, but the meaning and clarity of the message might be fully maintained, or even improved.

Because one cannot hope to compress everything, all compression algorithms must assume that there is some bias on the input messages, so that some inputs are more likely than others, i.e. that there is some unbalanced probability distribution over the possible messages. Most compression algorithms base this bias on the structure of the messages, e.g. an assumption that repeated characters are more likely than random characters, or that large white patches occur in typical images. Compression is therefore all about probability.

When discussing compression algorithms it is important to make a distinction between two components: the model and the coder. The model component somehow captures the probability distribution of the messages by knowing or discovering something about the structure of the input. The coder component then takes advantage of the probability biases generated by the model to generate codes. It does this by effectively lengthening low-probability messages and shortening high-probability messages. A model, for example, might have a generic understanding of human faces, knowing that some faces are more likely than others (e.g. a teapot would not be a very likely face). The coder would then be able to send shorter messages for objects that look like faces. This could work well for compressing teleconference calls.
The models in most current real-world compression algorithms, however, are not so sophisticated, and use more mundane measures such as repeated patterns in text. Although there are many different ways to design the model component of compression algorithms, and a huge range of levels of sophistication, the coder components tend to be quite generic: in current algorithms they are almost exclusively based on either Huffman or arithmetic codes. Lest we try to make too fine a distinction here, it should be pointed out that the line between the model and coder components of an algorithm is not always well defined.
It turns out that information theory is the glue that ties the model and coder components together. In particular, it gives a very nice theory about how probabilities are related to information content and code length. As we will see, this theory matches practice almost perfectly, and we can achieve code lengths almost identical to what the theory predicts.

Another question about compression algorithms is how one judges the quality of one versus another. In the case of lossless compression there are several criteria: the time to compress, the time to reconstruct, the size of the compressed messages, and the generality, i.e. does it only work on Shakespeare or does it do Byron too. In the case of lossy compression the judgement is further complicated, since we also have to worry about how good the lossy approximation is. There are typically trade-offs among the amount of compression, the runtime, and the quality of the reconstruction; depending on the application, one may be more important than another, and one would want to pick the algorithm appropriately. Perhaps the best attempt to systematically compare lossless compression algorithms is the Archive Comparison Test (ACT) by Jeff Gilchrist. It reports times and compression ratios for hundreds of compression algorithms over many databases. It also gives a score based on a weighted average of runtime and the compression ratio.
2. COMPRESSION PRINCIPLES
Compression reduces the volume of information to be transmitted; as a result, a reduced bandwidth can be used. Applying the compression algorithm is the main function carried out by the source encoder, and applying the decompression algorithm is the main function carried out by the destination decoder. Compression algorithms can be classified as either lossless (the amount of source information to be transmitted is reduced with no loss of information, e.g. the transfer of a text file over a network) or lossy (a version is reproduced that the recipient perceives as a true copy, e.g. digitized images, audio and video streams).
2.1 Run-Length Encoding
Run-length encoding is applicable when the source information comprises long substrings of the same character or binary digit.
Instead of transmitting the source string directly, a set of codewords is transmitted that indicates not only the character (or bit) but also the number of times it repeats in the substring. Provided the destination knows the set of codewords being used, it simply interprets each codeword received and outputs the appropriate number of characters or bits.
E.g. the output from the scanner in a fax machine, 000000011111111110000011, would be represented as 0,7 1,10 0,5 1,2.
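A minimal sketch of this scheme for a binary string follows; the (bit, count) output format is chosen for readability and is not the actual fax codeword format.

```python
def rle_encode(bits: str) -> list[tuple[str, int]]:
    """Encode a binary string as (bit, run-length) pairs."""
    runs = []
    i = 0
    while i < len(bits):
        j = i
        while j < len(bits) and bits[j] == bits[i]:
            j += 1                     # extend the current run
        runs.append((bits[i], j - i))
        i = j
    return runs

print(rle_encode("000000011111111110000011"))
# [('0', 7), ('1', 10), ('0', 5), ('1', 2)]
```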
2.2 Entropy Encoding: Statistical Encoding
A set of ASCII codewords is often used for the transmission of strings of characters. However, the symbols, and hence the codewords, in the source information do not occur with the same frequency; e.g. A may occur more frequently than P, which may occur more frequently than Q. Statistical encoding exploits this property by using a set of variable-length codewords, the shortest codewords representing the most frequently occurring symbols.
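As a small worked illustration of the saving, the sketch below compares a fixed-length code against a hand-picked variable-length prefix code for three symbols; the probabilities and code lengths are illustrative assumptions.

```python
# Illustrative symbol probabilities and prefix-code lengths (assumed values).
symbols = {"A": 0.50, "P": 0.30, "Q": 0.20}
code_len = {"A": 1, "P": 2, "Q": 2}   # e.g. A=0, P=10, Q=11 (a prefix code)

fixed = 2  # 2 bits per symbol suffice for 3 symbols with a fixed-length code
avg = sum(p * code_len[s] for s, p in symbols.items())
print(f"fixed-length: {fixed} bits/symbol, variable-length: {avg:.2f} bits/symbol")
# fixed-length: 2 bits/symbol, variable-length: 1.50 bits/symbol
```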
Differential Encoding
Differential encoding uses smaller codewords to represent the difference between successive signal values, and can be lossy or lossless. It is used where the amplitude of a signal covers a large range but the difference between successive values is small. Instead of using large codewords, a set of smaller codewords is used, each representing only the difference in amplitude between successive values. For example, if digitization of the analog signal requires 12 bits but the difference signal requires only 3 bits, there is a saving of 75% in transmission bandwidth.
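A minimal sketch of the idea, assuming 12-bit samples whose successive differences always fit in 3 signed bits (range -4 to 3); the sample values are made up for illustration.

```python
def diff_encode(samples: list[int]) -> tuple[int, list[int]]:
    """Send the first sample in full, then only successive differences."""
    diffs = [b - a for a, b in zip(samples, samples[1:])]
    assert all(-4 <= d <= 3 for d in diffs), "difference exceeds 3 signed bits"
    return samples[0], diffs

def diff_decode(first: int, diffs: list[int]) -> list[int]:
    out = [first]
    for d in diffs:
        out.append(out[-1] + d)        # rebuild each sample from the previous one
    return out

first, diffs = diff_encode([2000, 2003, 2001, 2002])  # values in the 12-bit range
assert diff_decode(first, diffs) == [2000, 2003, 2001, 2002]
```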
Transform Encoding
Transform encoding involves transforming the source information from one form into another, where the new form lends itself more readily to the application of compression.
Lempel-Ziv (LZ) Coding
The LZ algorithm uses strings of characters instead of single characters. For text transfer, for example, a table containing all possible character strings is held by both the encoder and the decoder. As each word appears, instead of sending its ASCII codes, the encoder sends only the index of the word in the table. The decoder uses this index value to reconstruct the text in its original form. This approach is also known as dictionary-based compression.
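A minimal sketch of the shared-table idea, with a tiny illustrative dictionary standing in for the full table of possible words:

```python
# Shared by encoder and decoder in advance (illustrative contents).
dictionary = ["this", "is", "a", "test"]

def lz_encode(words: list[str]) -> list[int]:
    """Send the table index of each word instead of its characters."""
    return [dictionary.index(w) for w in words]

def lz_decode(indices: list[int]) -> list[str]:
    return [dictionary[i] for i in indices]

assert lz_decode(lz_encode(["this", "is", "a", "test"])) == ["this", "is", "a", "test"]
```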
Lempel-Ziv-Welsh (LZW) Coding
The principle of the Lempel-Ziv-Welsh coding algorithm is for the encoder and decoder to build the contents of the dictionary dynamically as the text is being transferred. Initially both hold only the basic character set, e.g. ASCII; the remaining entries in the dictionary are built dynamically by the encoder and decoder. For example, when transferring text beginning with the word THIS, the encoder first sends the indices of the four characters T, H, I and S, and then sends the space character, which is detected as a non-alphanumeric character. The space is transmitted using its index as before, but in addition it is interpreted as terminating the first word, and that word is stored in the next free location in the dictionary. A similar procedure is followed by both the encoder and the decoder. In an application that starts with 128 characters, the dictionary initially uses 8-bit indices and 256 entries: 128 for the characters and the remaining 128 for words.
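The sketch below is a generic character-based LZW encoder over a 7-bit ASCII alphabet, rather than the word-based variant described above; the decoder rebuilds exactly the same dictionary from the stream of indices it receives.

```python
def lzw_encode(text: str) -> list[int]:
    # Dictionary initially holds only the single-character set (ASCII here).
    dictionary = {chr(i): i for i in range(128)}
    next_code = 128
    w = ""
    out = []
    for c in text:
        if w + c in dictionary:
            w += c                      # keep extending the current match
        else:
            out.append(dictionary[w])   # emit index of longest known string
            dictionary[w + c] = next_code   # new entry learned on the fly
            next_code += 1
            w = c
    if w:
        out.append(dictionary[w])
    return out

print(lzw_encode("THIS IS THIS"))
# Repeated substrings ("IS", "TH") are sent as single dictionary indices.
```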
4.1 Image Compression: The Graphics Interchange Format (GIF)
The graphics interchange format (GIF) is used extensively on the Internet for the representation and compression of graphical images. Although colour images comprising 24-bit pixels are supported, GIF reduces the number of possible colours by choosing the 256 entries from the original set of 2^24 colours that most closely match the original image. Hence, instead of sending each pixel as a 24-bit colour value, only the 8-bit index of the table entry containing the closest match to the original is sent. This results in a 3:1 compression ratio. The contents of the table are sent in addition to the screen size and aspect-ratio information. The image can also be transferred over the network using an interlaced mode.
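A rough sketch of the colour-table idea: each 24-bit pixel is replaced by the 8-bit index of the closest entry in the table. The tiny palette here is an illustrative assumption, and how the 256 entries are chosen in the first place (e.g. by median-cut) is not shown.

```python
def nearest_index(pixel, palette):
    """Index of the palette entry closest to a 24-bit (r, g, b) pixel."""
    r, g, b = pixel
    return min(range(len(palette)),
               key=lambda i: (palette[i][0] - r) ** 2
                           + (palette[i][1] - g) ** 2
                           + (palette[i][2] - b) ** 2)

palette = [(0, 0, 0), (255, 255, 255), (255, 0, 0), (0, 128, 255)]  # up to 256 entries
image = [(250, 10, 5), (20, 20, 20)]
indices = [nearest_index(p, palette) for p in image]   # 8-bit indices are sent
print(indices)  # [2, 0]
```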
GIF Compression: Dynamic Mode Using LZW Coding
GIF also allows the LZW coding algorithm to be used in a dynamic mode, in which, as with text, the dictionary extensions are built up dynamically by both the encoder and the decoder as the compressed image data is transferred.
Interlaced mode
GIF also allows an image to be stored and subsequently transferred over the network in an interlaced mode. This is useful over low bit-rate channels, or over the Internet, which provides a variable transmission rate.
The compressed image data is organized so that the decompressed image is built up in a progressive way as the data arrives.
4.3 Digitized Documents
Since fax machines are used with public carrier networks, the ITU-T has produced standards relating to them: T2 (Group 1), T3 (Group 2), T4 (Group 3, for the PSTN) and T6 (Group 4, for the ISDN). Group 3 and Group 4 both achieve compression ratios in the region of 10:1. The codewords used are grouped into a termination-codes table (white or black run-lengths from 0 to 63 pels in steps of 1) and a make-up codes table (white or black run-lengths in multiples of 64 pels). Since the scheme uses two sets of codewords it is known as modified Huffman coding.
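For run-lengths longer than 63 pels, a make-up codeword (representing a multiple of 64) is sent followed by a termination codeword (representing the remainder, 0 to 63). The sketch below shows just this decomposition step; the actual Huffman codewords come from the termination-codes and make-up codes tables and are not reproduced here.

```python
def split_run(run_length: int) -> tuple[int, int]:
    """Split a run into (make-up multiple of 64, termination 0..63)."""
    makeup = (run_length // 64) * 64
    termination = run_length % 64
    return makeup, termination

print(split_run(200))  # (192, 8): make-up codeword for 192, termination codeword for 8
```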
ITU-T Group 3 and 4 facsimile conversion codes: termination-codes table.
ITU-T Group 3 and 4 facsimile conversion codes: make-up codes table (run-lengths in multiples of 64 pels).
Each scanned line is terminated with an end-of-line (EOL) code. In this way, if the receiver fails to decode a codeword, it starts to search for the next EOL pattern; if it fails to find an EOL after a preset number of lines, it aborts the reception process and informs the sending machine. Six consecutive EOLs indicate the end of each page. T4 coding is known as one-dimensional coding.
Modified-modified relative element address designate (MMR) coding exploits the fact that most scanned lines differ from the previous line by only a few pels. For example, if a line contains a black run, the next line will normally contain the same run, plus or minus up to 3 pels. In MMR the run-lengths associated with a line are identified by comparing the line contents, known as the coding line (CL), with those of the immediately preceding line, known as the reference line (RL).
The run lengths associated with a coding line are classified into three groups relative to the reference line.
4.6 Image Compression: Run-Length Possibilities
Pass mode
This is the case when the run-length in the reference line (b1b2) is to the left of the next run-length in the coding line (a1a2), that is, b2 is to the left of a1.
Vertical mode
This is the case when the run-length in the reference line (b1b2) overlaps the next run-length in the coding line (a1a2) by a maximum of plus or minus 3 pels.
Horizontal mode
This is the case when the run-length in the reference line (b1b2) overlaps the next run-length in the coding line (a1a2) by more than plus or minus 3 pels.
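As a rough illustration of the mode decision, the sketch below classifies a coding-line run from the changing-pel positions a1, a2 (coding line) and b1, b2 (reference line), which are assumed to have been located already; it follows the three rules above rather than the full T6 procedure.

```python
def mmr_mode(a1: int, a2: int, b1: int, b2: int) -> str:
    """Classify a coding-line run against the reference line."""
    if b2 < a1:                 # reference run lies wholly to the left
        return "pass"
    if abs(a1 - b1) <= 3:       # runs line up to within +/- 3 pels
        return "vertical"
    return "horizontal"

print(mmr_mode(a1=10, a2=20, b1=2, b2=6))    # pass
print(mmr_mode(a1=10, a2=20, b1=12, b2=22))  # vertical
print(mmr_mode(a1=10, a2=20, b1=30, b2=40))  # horizontal
```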
JPEG
The JPEG (Joint Photographic Experts Group) standard defines the compression of digitized pictures, and it also forms the basis of most video compression algorithms.
4.9 Image Compression: Image/Block Preparation
The source image is made up of one or more 2-D matrices of values.
For a monochrome image, a single 2-D matrix is required to store the set of 8-bit grey-level values that represent the image. For a colour image, if a CLUT is used, then a single matrix of index values is likewise required.
If the Y, Cr, Cb format is used, then the matrix size for each of the two chrominance components is smaller than that of the Y matrix (a reduced representation).
Once the source image format has been selected and prepared (there are four alternative forms of representation), the set of values in each matrix is compressed separately using the discrete cosine transform (DCT). In order to make the transformation more efficient, a preliminary step known as block preparation is carried out first. In block preparation, each global matrix is divided into a set of smaller 8x8 submatrices, known as blocks, which are fed sequentially to the DCT.
Block preparation is necessary since computing the transformed value for each position in a matrix requires the values in all the locations to be processed.
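A minimal sketch of block preparation, assuming the image dimensions are multiples of 8 (real encoders pad the edges when they are not):

```python
def blocks_8x8(matrix: list[list[int]]):
    """Yield 8x8 submatrices of an HxW matrix, row by row."""
    h, w = len(matrix), len(matrix[0])
    for by in range(0, h, 8):
        for bx in range(0, w, 8):
            yield [row[bx:bx + 8] for row in matrix[by:by + 8]]

image = [[(x + y) % 256 for x in range(16)] for y in range(16)]
print(sum(1 for _ in blocks_8x8(image)))  # 4 blocks for a 16x16 image
```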
Each pixel value is quantized using 8 bits, which produces a value in the range 0 to 255 for the R, G, B or Y values, and a value in the range -128 to +127 for the two chrominance values Cb and Cr. If the input matrix is P[x,y] and the transformed matrix is F[i,j], then the DCT of each 8x8 block is computed using the expression:
F[i,j] = (1/4) C(i) C(j) Σ(x=0..7) Σ(y=0..7) P[x,y] cos((2x+1)iπ/16) cos((2y+1)jπ/16), where C(z) = 1/√2 for z = 0 and C(z) = 1 otherwise.
All 64 values in the input matrix P[x,y] contribute to each entry in the transformed matrix F[i,j]. For i = j = 0 the two cosine terms are both 1, and hence the value in location F[0,0] of the transformed matrix is simply a function of the summation of all the values in the input matrix; it is proportional to the mean of all 64 values in the matrix and is known as the DC coefficient. Since the values in all the other locations of the transformed matrix have a frequency coefficient associated with them, they are known as AC coefficients. For j = 0, only horizontal frequency coefficients are present; for i = 0, only vertical frequency coefficients are present; for all the other locations, both horizontal and vertical frequency coefficients are present.
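The sketch below is a direct, unoptimized transcription of this expression for a single 8x8 block; practical codecs use fast DCT algorithms, and the input is assumed to have already been shifted to be centred on zero, as described below.

```python
import math

def dct_8x8(P):
    """2-D DCT of an 8x8 block, straight from the formula above."""
    C = lambda z: 1 / math.sqrt(2) if z == 0 else 1.0
    F = [[0.0] * 8 for _ in range(8)]
    for i in range(8):
        for j in range(8):
            s = sum(P[x][y]
                    * math.cos((2 * x + 1) * i * math.pi / 16)
                    * math.cos((2 * y + 1) * j * math.pi / 16)
                    for x in range(8) for y in range(8))
            F[i][j] = 0.25 * C(i) * C(j) * s
    return F

block = [[52 - 128] * 8 for _ in range(8)]   # a flat block, already centred on zero
F = dct_8x8(block)
print(round(F[0][0], 1))  # DC coefficient; all AC coefficients are ~0 for a flat block
```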
The values are first centred around zero by subtracting 128 from each intensity/luminance value. There is very little loss of information during the DCT phase itself; the losses that do occur are due to the use of fixed-point arithmetic. The main source of information loss occurs during the quantization and entropy encoding stages, where the compression actually takes place.

The human eye responds primarily to the DC coefficient and the lower-frequency coefficients; higher-frequency coefficients whose amplitude is below a certain threshold will not be detected by the human eye. This property is exploited by dropping those spatial frequency coefficients in the transformed matrix (dropped coefficients cannot be retrieved during decoding). In addition to classifying the spatial frequency components, the quantization process aims to reduce the size of the DC and AC coefficients so that less bandwidth is required for their transmission; this is achieved by dividing each coefficient by a divisor.

The sensitivity of the eye varies with spatial frequency, and hence the amplitude threshold below which the eye will detect a particular frequency also varies. The threshold values therefore vary for each of the 64 DCT coefficients, and they are held in a 2-D matrix known as the quantization table, with the threshold value to be used for a particular DCT coefficient held in the corresponding position in the matrix. The choice of threshold values is a compromise between the level of compression required and the resulting amount of information loss that is acceptable.
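A sketch of the quantization step itself: each DCT coefficient is divided by the corresponding quantization-table entry and the quotient is rounded to the nearest integer. The flat table used here is an illustrative assumption; JPEG's default tables hold larger divisors at higher spatial frequencies.

```python
def quantize(F, Q):
    """Divide each DCT coefficient by its table entry and round."""
    return [[round(F[i][j] / Q[i][j]) for j in range(8)] for i in range(8)]

Q = [[16] * 8 for _ in range(8)]            # illustrative flat quantization table
F = [[-608.0 if (i, j) == (0, 0) else 3.0 for j in range(8)] for i in range(8)]
print(quantize(F, Q)[0][:4])  # [-38, 0, 0, 0]: small AC coefficients vanish
```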
The JPEG standard includes two quantization tables, one for the luminance coefficients and one for the chrominance coefficients. However, customized tables are allowed, and these can be sent with the compressed image.
From the quantization table and the DCT and quantization coefficients, a number of observations can be made:
- The computation of the quantized coefficients involves rounding the quotients to the nearest integer value.
- The threshold values used increase in magnitude with increasing spatial frequency.
- The DC coefficient in the transformed matrix is the largest.
- Many of the higher-frequency coefficients are zero.
4.13 Image Compression: Entropy Encoding
Vectoring
Entropy encoding operates on a one-dimensional string of values, i.e. a vector. However, the output of the quantization stage is a 2-D matrix, and hence this must first be represented in a 1-D form; this is known as vectoring.
In order to exploit the presence of the large number of zeros in the quantized matrix, a zig-zag scan of the matrix is used to form the vector.
Differential encoding
In this phase, only the difference in magnitude between the DC coefficient of the current quantized block and that of the preceding block is encoded. This reduces the number of bits required to encode the relatively large DC magnitudes. The difference values are then encoded in the form (SSS, value), where SSS indicates the number of bits needed to encode the value, and value is the actual bits that represent it. E.g. if the sequence of DC coefficients in consecutive quantized blocks was 12, 13, 11, 11, 10, ..., the corresponding difference values would be 12, 1, -2, 0, -1.
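A sketch of the two steps just described: a zig-zag scan of an 8x8 matrix followed by differential encoding of the DC coefficients (the (SSS, value) bit-level packing is omitted). The zig-zag order is generated by walking the anti-diagonals of the matrix, alternating direction.

```python
def zigzag(M):
    """Flatten an 8x8 matrix in zig-zag order (DC first, then low to high frequency)."""
    out = []
    for d in range(15):                      # anti-diagonals where i + j = d
        cells = [(i, d - i) for i in range(8) if 0 <= d - i < 8]
        if d % 2 == 0:
            cells.reverse()                  # alternate the scan direction
        out.extend(M[i][j] for i, j in cells)
    return out

def dc_differences(dc_values):
    """First DC coefficient in full, then differences between neighbours."""
    return [dc_values[0]] + [b - a for a, b in zip(dc_values, dc_values[1:])]

print(dc_differences([12, 13, 11, 11, 10]))  # [12, 1, -2, 0, -1]
```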
Run-length encoding
The remaining 63 values in the vector are the AC coefficients. Because of the large number of zeros among the AC coefficients, they are encoded as a string of pairs of values; each pair is made up of (skip, value), where skip is the number of zeros in the run and value is the next non-zero coefficient.
The sequence above would be encoded as (0,6) (0,7) (0,3) (0,3) (0,3) (0,2) (0,2) (0,2) (0,2) (0,0); the final pair (0,0) indicates the end of the string for this block.
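A sketch of this step over the 63 AC coefficients of a vector, with (0,0) emitted as the end-of-block marker as described above; the leading coefficients are chosen to reproduce the example pairs.

```python
def rle_ac(ac: list[int]) -> list[tuple[int, int]]:
    """Encode AC coefficients as (skip, value) pairs, terminated by (0, 0)."""
    pairs, skip = [], 0
    for v in ac:
        if v == 0:
            skip += 1                        # count zeros in the current run
        else:
            pairs.append((skip, v))
            skip = 0
    pairs.append((0, 0))                     # end-of-block marker
    return pairs

ac = [6, 7, 3, 3, 3, 2, 2, 2, 2] + [0] * 54  # 63 AC coefficients
print(rle_ac(ac))
# [(0, 6), (0, 7), (0, 3), (0, 3), (0, 3), (0, 2), (0, 2), (0, 2), (0, 2), (0, 0)]
```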
Huffman encoding
Significant levels of compression can be obtained by replacing long strings of binary digits with a string of much shorter codewords, the length of each codeword being a function of its relative frequency of occurrence. Normally, a table of codewords is used, with the set of codewords precomputed using the Huffman coding algorithm.
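A compact sketch of how such a table of codewords can be precomputed from symbol frequencies, using a heap-based merge; the frequencies below are illustrative assumptions.

```python
import heapq

def huffman_codes(freq: dict[str, int]) -> dict[str, str]:
    """Build a prefix code: repeatedly merge the two least-frequent subtrees."""
    heap = [(f, [(sym, "")]) for sym, f in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f0, left = heapq.heappop(heap)
        f1, right = heapq.heappop(heap)
        merged = [(s, "0" + c) for s, c in left] + [(s, "1" + c) for s, c in right]
        heapq.heappush(heap, (f0 + f1, merged))
    return dict(heap[0][1])

print(huffman_codes({"A": 45, "B": 25, "C": 20, "D": 10}))
# {'A': '0', 'B': '10', 'D': '110', 'C': '111'}: rarer symbols get longer codewords
```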
Frame building
In order for the remote computer to interpret all the different fields and tables that make up the bitstream, it is necessary to delimit each field and set of table values in a defined way. The JPEG standard includes a definition of the structure of the total bitstream relating to a particular image/picture; this is known as a frame. The role of the frame builder is to encapsulate all the information relating to an encoded image/picture.
At the top level, the complete frame-plus-header is encapsulated between a start-of-frame and an end-of-frame delimiter, which allows the receiver to determine the start and end of all the information relating to a complete image.
The frame header contains a number of fields, including:
- the overall width and height of the image in pixels;
- the number and type of components (CLUT, R/G/B, Y/Cb/Cr);
- the digitization format used (4:2:2, 4:2:0, etc.).
At the next level, a frame consists of a number of components, each of which is known as a scan. The level-two header contains fields that include:
- the identity of the components;
- the number of bits used to digitize each component;
- the quantization table of values that have been used to encode each component.
Each scan comprises one or more segments, each of which can contain a group of 8x8 blocks preceded by a header. This segment header contains the set of Huffman codewords for each block.
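The three-level hierarchy can be pictured as nested records, as in the sketch below; the field names are illustrative and are not the names used by the JPEG markers themselves.

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    huffman_tables: dict            # codewords used by the blocks that follow
    blocks: list                    # encoded 8x8 blocks

@dataclass
class Scan:
    component_id: str               # e.g. "Y", "Cb" or "Cr"
    bits_per_component: int
    quantization_table: list
    segments: list = field(default_factory=list)

@dataclass
class Frame:
    width: int
    height: int
    digitization_format: str        # e.g. "4:2:0"
    scans: list = field(default_factory=list)

frame = Frame(640, 480, "4:2:0")    # frame-header fields; scans are appended below it
```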
JPEG decoding
The JPEG decoder is made up of a number of stages that are simply the corresponding decoder sections of those used in the encoder. The frame decoder first identifies the encoded bitstream and its associated control information and tables within the various headers. It then loads the contents of each table into the related table and passes the control information to the image builder. The Huffman decoder then carries out the decompression operation using either the preloaded or the default tables of codewords. The two decompressed streams, containing the DC and AC coefficients of each block, are passed to the differential and run-length decoders respectively. The resulting matrix of values is then dequantized using either the default or the preloaded values in the quantization table. Each resulting block of 8x8 spatial frequency coefficients is passed in turn to the inverse DCT, which transforms it back to its spatial form.
JPEG SUMMARY
Although complex, JPEG can achieve compression ratios of 20:1 while still retaining a good-quality image; this level (20:1) applies to images with relatively few colour transitions. For more complicated images, compression ratios of 10:1 are more common. As with GIF images, it is possible to encode and rebuild the image in a progressive manner, and this can be achieved in two different ways: progressive mode and hierarchical mode. In progressive mode, the DC and low-frequency coefficients of each block are sent first, followed by the higher-frequency coefficients. In hierarchical mode, the total image is first sent at a low resolution, e.g. 320 x 240, and then at a higher resolution, e.g. 640 x 480.