
Digital Image Processing

Spring 2012
dml@dgu
Image Compression
Why Compression?

(figure) raw video of 512x512 x 30 (fps) x 3 (bpp) requires 25.6 Mbits over a super-fast network; a 2-hour movie is nearly 170 GB (15.7 min using 180 Mbps for 1/8 of the movie), versus about 1.3 GB for MPEG-1 (7 sec using 187 kbps). The chain runs A/D, encoding, decoding, D/A, with coders such as Huffman, LZW, PCM, CALP, JPEG, JBIG, MPEG, H.264.
Use of compression varies according to media and application:

    Standard            Bit Rate           Application
    G.721               32 kbps            Telephone
    G.728               16 kbps            Telephone
    G.722               48 ~ 64 kbps       Teleconferencing
    MPEG-1 (audio)      128 ~ 384 kbps     2-Channel Audio
    MPEG-2 (audio)      320 kbps           5-Channel Audio
    JBIG                0.05 ~ 1.0 bpp     Binary Images
    JPEG                0.25 ~ 8.0 bpp     Still Images
    MPEG-1,2 (video)    1 ~ 8 Mbps         Video
    H.261 (px64)        64 ~ 1,536 kbps    Videoconference
    HDTV                17 Mbps            Advanced TV
(ex) A digitized video with frame pattern IBBPBBPBBPBB(I...) is to be compressed using MPEG-1, with average compression ratios of 10:1 (I), 20:1 (P), and 50:1 (B). What is the average bit rate generated by the encoder?

Average compression ratio = (1 x 0.1 + 3 x 0.05 + 8 x 0.02) / 12 = 0.0342, i.e., 29.24:1
Without compression: 352 x 240 x 8 + 2 x (176 x 120 x 8) = 1.0137 Mbits/frame
With compression: 1.0137 x 1/29.24 = 34.670 kbits/frame
Bit rate @ 30 fps = 1.040 Mbps
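A short Python check of this arithmetic (the frame layout, one 352x240 luma plane plus two 176x120 chroma planes, is taken from the example above):

    # Sketch: verify the MPEG-1 GOP bit-rate example.
    frame_bits = 352 * 240 * 8 + 2 * (176 * 120 * 8)      # luma + 2 chroma planes
    # 12-frame GOP: 1 I-frame (10:1), 3 P-frames (20:1), 8 B-frames (50:1)
    avg_fraction = (1 * 1/10 + 3 * 1/20 + 8 * 1/50) / 12  # compressed size fraction
    print(f"average compression ratio = {1/avg_fraction:.2f}:1")            # ~29.27:1
    print(f"raw frame = {frame_bits/1e6:.4f} Mbits")                        # 1.0138
    print(f"bit rate @ 30 fps = {frame_bits*avg_fraction*30/1e6:.3f} Mbps") # ~1.039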
Bandwidth vs. Compression
- considerable progress has been made in both compression and networking
- with the greater availability of broader bandwidth, the need for more compression might seem to decrease
- but as more users access multimedia data, compression is still needed despite the increase in bandwidth
Compression is applied to the source prior to storage or transmission:
- relatively slow storage devices do not allow playing multimedia data (especially video) in real time
- to reduce the volume of information: multimedia data have large storage requirements
- to reduce the bandwidth required: present network bandwidths do not allow real-time video transmission

Criteria for selecting a scheme: application, compression ratio, complexity (cost), availability
Fundamentals

Data compression: the process of reducing the amount of data required to represent information
- data are the means by which information is conveyed
- removing data redundancy is the central issue in digital image compression
- three basic types: coding, interpixel & psychovisual redundancy

Relative data redundancy R_D of a representation using n_1 bits, relative to one using n_2 bits:

    C_R = n_1 / n_2,    R_D = 1 - 1/C_R

(ex) (a) My 3-month-old dog got fleas from a cat next door. (b) My dog has fleas.
(both sentences convey the same essential information; (a) carries redundant data)
Coding Redundancy

(ex) Code 1: L_1 (average number of bits for Code 1) = 3 bits
Code 2: L_2 = 2(0.19) + 2(0.25) + 2(0.21) + 3(0.16) + 4(0.08) + 5(0.06) + 6(0.03) + 6(0.02) = 2.7 bits
Compression ratio C_R = 3/2.7 = 1.11, R_D = 1 - 1/1.11 = 0.099
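A small Python sketch (symbol probabilities and Code 2 lengths from the example above) reproduces the computation:

    # Average code length, compression ratio, relative redundancy
    probs    = [0.19, 0.25, 0.21, 0.16, 0.08, 0.06, 0.03, 0.02]
    lengths2 = [2, 2, 2, 3, 4, 5, 6, 6]      # Code 2 (variable-length)
    L1 = 3                                    # Code 1 (fixed 3-bit code)
    L2 = sum(p * l for p, l in zip(probs, lengths2))
    C_R = L1 / L2
    print(f"L2 = {L2:.2f} bits, C_R = {C_R:.2f}, R_D = {1 - 1/C_R:.2f}")
    # L2 = 2.70 bits, C_R = 1.11, R_D = 0.10 (0.099 when C_R is first rounded to 1.11)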
- coding redundancy is present when the codes assigned to a set of gray levels have not been selected to take full advantage of the probabilities of those levels
  (ex) fixed-length coding vs. variable-length coding
- compression is achieved by assigning the shortest code to the most frequently occurring gray level
Fidelity Criteria
- objective criteria: the level of information loss is expressed as a function of the original and the output image
  (ex) compression ratio, bpp, RMS error, PSNR
- subjective criteria: the level of information loss is evaluated by human viewers (visual perception)
Objective criteria:

    C_r = (original data size) / (compressed data size)
    N_b = (encoded number of bits) / (number of pixels)
    RMS = sqrt( (1/n) * sum_{i=1..n} (X_i - X̂_i)^2 )

(ex) N_b = 98 / 64 = 1.53 bits, C_r = 5.22, PSNR = 38.5 dB
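A minimal Python sketch of the RMS and PSNR measures (using NumPy; assuming 8-bit images, so the PSNR peak value is 255):

    import numpy as np

    def rms_error(original, decoded):
        # root-mean-square error between original and reconstructed image
        diff = original.astype(np.float64) - decoded.astype(np.float64)
        return float(np.sqrt(np.mean(diff ** 2)))

    def psnr(original, decoded, peak=255.0):
        # peak signal-to-noise ratio in dB (peak = 255 for 8-bit images)
        mse = np.mean((original.astype(np.float64) - decoded.astype(np.float64)) ** 2)
        return float(10.0 * np.log10(peak ** 2 / mse))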
(figure) original image vs. 10:1 compression (0.8 bpp, 38 dB) vs. 45:1 compression (0.18 bpp, 21 dB)

    N_b [bits/pixel]   Picture quality
    0.25 ~ 0.5         Moderate to good quality
    0.5 ~ 0.75         Good to very good quality
    0.75 ~ 1.0         Excellent quality
    1.5 ~ 2.0          Usually indistinguishable from the original
Image Compression Models

Source Encoder & Decoder
- the encoder consists of two relatively independent sub-blocks: the source encoder removes input redundancies, and the channel encoder increases noise immunity
- the decoder likewise consists of two sub-blocks: a channel decoder followed by a source decoder
The source encoder reduces any coding, interpixel, or psychovisual redundancies of the input image:
- mapper: transforms the input into a format designed to reduce interpixel redundancies; reversible; outputs e.g. run lengths or transform coefficients
- quantizer: reduces the accuracy of the mapper's output and thereby the psychovisual redundancies; irreversible; omitted when lossless compression is desired
- symbol encoder: creates a (typically variable-length) code to represent the quantizer's output; reduces coding redundancy; reversible
Note that the three operations are not necessarily used in every compression scheme:
- when error-free compression is required, the quantizer must be omitted
- in predictive compression, the mapper & quantizer are often represented by a single block
Channel Encoder & Decoder
- the channel codec reduces the impact of channel noise by inserting controlled redundancy: cyclic codes, block codes, convolutional coding, block interleaving
- Hamming code: append enough bits to ensure that some minimum number of bits must change between valid code words
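As a concrete illustration of that idea, a minimal Hamming(7,4) encoder sketch in Python (this particular code is my example, not one mandated by any standard above; its three parity bits guarantee that valid codewords differ in at least 3 positions, enough to correct single-bit errors):

    # Hamming(7,4): 4 data bits -> 7-bit codeword with parity bits p1, p2, p3
    def hamming74_encode(d):                   # d = [d1, d2, d3, d4], bits 0/1
        p1 = d[0] ^ d[1] ^ d[3]
        p2 = d[0] ^ d[2] ^ d[3]
        p3 = d[1] ^ d[2] ^ d[3]
        return [p1, p2, d[0], p3, d[1], d[2], d[3]]

    print(hamming74_encode([1, 0, 1, 1]))      # [0, 1, 1, 0, 0, 1, 1]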
Information Theory

Mathematical review: is there a minimum amount of data that is sufficient to describe information completely, without any loss?

Suppose there is an event A, which is a set of outcomes of some random experiment; the self-information associated with A is given by

    i(A) = log_b (1 / P(A)) = -log_b P(A)

Recall that log(1) = 0, and that -log(x) increases as x decreases from one to zero.
Therefore, if the probability of an event is low, the amount of self-information associated with it is high; if the probability of an event is high, the information associated with it is low.

Suppose A and B are two independent events. The self-information associated with the occurrence of both A and B is

    i(AB) = log_b (1 / P(AB))

As A and B are independent, P(AB) = P(A) P(B), so

    i(AB) = log_b (1 / (P(A) P(B))) = log_b (1 / P(A)) + log_b (1 / P(B)) = i(A) + i(B)

(ex) Is there any self-information in the barking of a dog during a burglary?
The unit of information depends on the base of the log: with log base 2 the unit is bits; with log base e, nats; with log base 10, hartleys.

Note that to calculate information in bits, we need the logarithm base 2 of the probabilities. Recall that log_b x = a means b^a = x. So if we want the log base 2 of x, we set a = log_2 x, i.e., 2^a = x; taking the natural log (or log base 10) of both sides gives a ln 2 = ln x, hence

    a = ln x / ln 2
(ex) Let H and T be the outcomes of flipping a coin. If the coin is fair, then P(H) = P(T) = 0.5 and i(H) = i(T) = 1 bit.

(ex) If the coin is not fair, we would expect the information associated with each outcome to be different. Suppose P(H) = 1/8 and P(T) = 7/8; then i(H) = 3 bits and i(T) = 0.193 bits.
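A couple of lines of Python (base-2 logs, per the definition above) confirm these values:

    import math

    def self_information(p):
        return -math.log2(p)          # i(A) = -log2 P(A), in bits

    print(self_information(0.5))      # 1.0 bit  (fair coin)
    print(self_information(1/8))      # 3.0 bits (the rarer outcome is more informative)
    print(self_information(7/8))      # 0.193 bits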
If we have a set of independent events A_i, which are sets of outcomes of some experiment S such that the A_i together make up the sample space S, then the average self-information associated with the random experiment is given by

    H = sum_i P(A_i) i(A_i) = -sum_i P(A_i) log_b P(A_i)

This quantity is called the entropy associated with the experiment: the theoretical value (in bits/symbol) required to transmit the source. The average number of bits per codeword of an actual code is

    L = sum_k P(r_k) l(r_k)

where l(r_k) is the length of the codeword assigned to symbol r_k.
(ex) Coding redundancy revisited: Code 1 has L = 3 bits; Code 2 has L = 2(0.19) + 2(0.25) + 2(0.21) + 3(0.16) + 4(0.08) + 5(0.06) + 6(0.03) + 6(0.02) = 2.7 bits; compression ratio = 3/2.7 = 1.11 and R_D = 1 - 1/1.11 = 0.099, i.e., about 10% of the data in Code 1 is redundant.
(ex) Consider the sequence 1 2 3 2 3 4 5 4 5 6 7 8 9 8 9 10, with symbol probabilities

    P(1) = P(6) = P(7) = P(10) = 1/16,
    P(2) = P(3) = P(4) = P(5) = P(8) = P(9) = 2/16.

Assuming the sequence is iid, the entropy for this sequence can be calculated as 3.25 bits. This means that the best scheme we could find for coding this sequence could only code it at 3.25 bits/symbol.
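The same first-order entropy estimate in Python:

    import math
    from collections import Counter

    def entropy(seq):
        # first-order (iid) entropy estimate in bits/symbol
        n = len(seq)
        return -sum(c/n * math.log2(c/n) for c in Counter(seq).values())

    seq = [1, 2, 3, 2, 3, 4, 5, 4, 5, 6, 7, 8, 9, 8, 9, 10]
    print(entropy(seq))   # 3.25 bits/symbol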
Fundamental Coding Theorems

Shannon's first (noiseless coding) theorem, for a statistically independent source, with L_ave the average codeword length when blocks of n source symbols are coded together:

    H(X) <= L_ave / n < H(X) + 1/n

(ex) Huffman coding using sequences of two symbols: extend the alphabet {1, 2, 3} to the pairs {11, 12, 13, 21, 22, 23, 31, 32, 33}; then

    H(X) <= L_ave / 2 < H(X) + 1/2
(ex) 1st extension: entropy = 0.92 bits/symbol, L = 1 bit/symbol
2nd extension: entropy = 1.83 bits/symbol, L = 1.89 bits/symbol
Coding efficiency for the 2nd extension is improved: the average number of code bits per source symbol is 0.94 (1.89/2) bits/symbol
Using Information Theory

(ex) Estimate the information content (entropy) of the 8-bit image:

    21 21 21 95 169 243 243 243
    21 21 21 95 169 243 243 243
    21 21 21 95 169 243 243 243
    21 21 21 95 169 243 243 243

- a simple way is to assume a particular source model and compute the entropy based on that model
- an alternative is to construct a source model based on the relative frequencies of the image under consideration
1. We can assume that the image was produced by an 8-bit gray-level source: the source symbols are gray levels, and the source alphabet is composed of 256 possible symbols. If the symbol probabilities are known, the entropy of each pixel can easily be computed; assuming a uniform pdf, the source is characterized by an entropy of 8 bits/pixel (the average information per pixel is 8 bits), and the total entropy of the 4 x 8 image is (4 x 8) x 8 = 256 bits.
2. We can instead regard the image as a sample of the behavior of the gray-level source, and model the source symbol probabilities using the gray-level histogram of the sample image:

    Gray Level   Count   Probability
    21           12      3/8
    95            4      1/8
    169           4      1/8
    243          12      3/8

    H = -(3/8 log2(3/8) + 1/8 log2(1/8) + 1/8 log2(1/8) + 3/8 log2(3/8))
      = 1.81 bits/pixel  (first-order estimate), 58 bits total
3. A better estimate of the entropy can be computed by examining the relative frequency of pixel blocks, where a block is a grouping of adjacent pixels:

    Gray-level Pair   Count   Probability
    (21, 21)           8      1/4
    (21, 95)           4      1/8
    (95, 169)          4      1/8
    (169, 243)         4      1/8
    (243, 243)         8      1/4
    (243, 21)          4      1/8

    entropy = 2.5 / 2 = 1.25 bits/pixel  (second-order estimate), 40 bits total

As the block size goes to infinity, the estimate approaches the source's true entropy, but convergence is slow and the computational cost is high.
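Both histogram-based estimates in Python (second-order pairs are horizontal neighbors wrapping around each row, which reproduces the pair counts in the table above):

    import math
    from collections import Counter

    row = [21, 21, 21, 95, 169, 243, 243, 243]
    img = [row] * 4                              # the 4 x 8 sample image

    def entropy_bits(counts, total):
        return -sum(c/total * math.log2(c/total) for c in counts.values())

    pixels = [p for r in img for p in r]
    H1 = entropy_bits(Counter(pixels), len(pixels))
    print(f"first-order:  {H1:.2f} bits/pixel")        # 1.81

    pairs = [(r[i], r[(i+1) % len(r)]) for r in img for i in range(len(r))]
    H2 = entropy_bits(Counter(pairs), len(pairs))
    print(f"second-order: {H2/2:.2f} bits/pixel")      # 2.5/2 = 1.25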
Error-Free Compression

Lossless compression removes redundancy in such a way that there is no loss of information (reversible)
- applications: archival of medical or business documents, processing of satellite imagery, digital radiography

(figure) family of lossless methods: variable-length coding (Huffman, adaptive Huffman, arithmetic), dictionary coding (LZ77, LZ78, LZW), bit-plane coding (bit-plane decomposition, run-length coding), and predictive coding
Variable-Length Coding
- the simplest approach to reducing coding redundancy
- coding redundancy is normally present in the natural binary encoding of the gray levels of an image
- construct variable-length codes that assign the shortest possible codewords to the most probable gray levels
Huffman Coding
- attempts to create codes that minimize the average number of bits per character by assigning short codes to the most frequent data
- produces prefix codes that are optimum for a given set of statistics:
  1. symbols that occur more frequently have shorter codewords (reducing the average number of bits)
  2. the two symbols that occur least frequently have codewords of the same length
  3. the two lowest-probability symbols differ only in the last bit
- prefix code: a code in which no codeword is a prefix of another codeword; a simple way to check whether a code is a prefix code is to draw its rooted binary tree
Encoding Algorithm
<step 1> all characters are initially free nodes
<step 2> the two free nodes with the lowest frequencies are assigned to a new parent node whose weight equals the sum of the two child nodes
<step 3> repeat <step 2> until only one free node (the root) is left
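A compact sketch of this algorithm in Python using heapq (the helper name and the tie-breaking counter are my own; exact bit patterns depend on how ties are broken, but the codeword lengths match the worked example below):

    import heapq
    from itertools import count

    def huffman_codes(freqs):
        # freqs: {symbol: frequency}; returns {symbol: bit string}
        tick = count()                           # tie-breaker for heap comparisons
        heap = [(f, next(tick), {s: ""}) for s, f in freqs.items()]
        heapq.heapify(heap)
        while len(heap) > 1:
            f0, _, c0 = heapq.heappop(heap)      # two lowest-frequency free nodes
            f1, _, c1 = heapq.heappop(heap)
            merged = {s: "0" + code for s, code in c0.items()}
            merged.update({s: "1" + code for s, code in c1.items()})
            heapq.heappush(heap, (f0 + f1, next(tick), merged))   # new parent node
        return heap[0][2]

    print(huffman_codes({"A": 19, "B": 17, "C": 16, "D": 5, "E": 4, "F": 2, "G": 1}))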
(ex) A=19, B=17, C=16, D=5, E=4, F=2, G=1

Repeatedly merge the two lowest-frequency free nodes:
G(1) + F(2) -> 3;  3 + E(4) -> 7;  D(5) + 7 -> 12;  12 + C(16) -> 28;  B(17) + A(19) -> 36;  28 + 36 -> 64 (root)

Labeling the edges of the resulting tree with 0s and 1s gives:
A=11, B=10, C=01, D=000, E=0011, F=00101, G=00100
(ex) With A=19, B=17, C=16, D=5, E=4, F=2, G=1 (64 symbols total) and the codes A=11, B=10, C=01, D=000, E=0011, F=00101, G=00100:

Average length of codeword = 0.29 x 2 + 0.27 x 2 + 0.25 x 2 + 0.078 x 3 + 0.063 x 4 + 0.031 x 5 + 0.016 x 5 ≈ 2.34 bits/symbol
- uses a binary tree, which ensures the prefix property (one code cannot be a prefix of another code)
- requires two passes over the data: one to accumulate the symbol frequencies, one to compress
- the decoder must use the same binary tree

Applications
- image compression: the set of integers from 0 to 255 for a monochrome image; spatial correlation between pixels is high
- audio compression: CD-quality audio data (44.1 kHz, 16 bits); 16-bit audio takes on 65,536 distinct values, which would require 65,536 distinct variable-length codewords, but the variance of sample differences is low
(ex) Average length of code:
L = (0.4)(1) + (0.3)(2) + (0.1)(3) + (0.1)(4) + (0.06)(5) + (0.04)(5) = 2.2 bits/symbol
entropy = 2.14 bits/symbol (so the code's efficiency is 2.14/2.2 = 0.973)
Adaptive Huffman Coding
- used when knowledge of the source statistics is unavailable: the probabilities of each symbol are collected during the encoding process
- the binary tree is modified as symbols arrive, maintaining for each node:
  weight - for an external node, the number of times the symbol corresponding to the leaf has occurred; for an internal node, the sum of its children
  node number - uniquely assigned to each node
- the tree satisfies the sibling property:
  the weight of a node increases as the node number decreases
  the node number of a parent is greater than those of its children
  the node number of the right child is greater than that of the left child
- both TX & RX start with the same tree structure, consisting of a single node (NYT, not yet transmitted)
- the encoding and decoding processes stay synchronized, since the updating procedure used is identical
- the tree at the encoder/decoder is updated after each symbol is encoded/decoded
(ex) Encode the message [a a r d v a r k] using the adaptive Huffman coding scheme.
Arithmetic Coding
- another popular method of generating variable-length codes
- useful when the source has a small alphabet (e.g., a binary source) with highly skewed probabilities; the size of the codebook may grow exponentially when Huffman coding is applied to long blocks of symbols
- a unique arithmetic code can be generated for a sequence of length m without generating codewords for all sequences of length m
- generates a unique tag that represents the sequence: symbols are mapped to sub-intervals of the unit interval using the cumulative distribution function (cdf)
Encoding Algorithm

    LOW = 0.0
    HIGH = 1.0
    WHILE not end of input stream
        get next CHARACTER
        RANGE = HIGH - LOW
        HIGH = LOW + RANGE * high range of CHARACTER
        LOW = LOW + RANGE * low range of CHARACTER
    END WHILE
    output LOW
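A runnable Python version of this loop (the interval table matches the B/A/S/I/C example that follows; plain floats limit this toy encoder to short messages, real coders rescale with integer arithmetic):

    # symbol -> (low, high) sub-interval of [0, 1), from cumulative probabilities
    RANGES = {"A": (0.0, 0.3), "S": (0.3, 0.6), "I": (0.6, 0.8),
              "B": (0.8, 0.9), "C": (0.9, 1.0)}

    def arith_encode(message):
        low, high = 0.0, 1.0
        for ch in message:
            rng = high - low
            c_low, c_high = RANGES[ch]
            high = low + rng * c_high      # narrow the working interval...
            low = low + rng * c_low        # ...to the symbol's sub-interval
        return low                         # any number in [low, high) would do

    print(round(arith_encode("BASIC"), 5))   # 0.81602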
(ex) Arithmetic coding generates one specific codeword, a number between 0 and 1, for the whole message.

B=0.1, A=0.3, S=0.3, I=0.2, C=0.1, with cumulative intervals A [0, 0.3), S [0.3, 0.6), I [0.6, 0.8), B [0.8, 0.9), C [0.9, 1.0).

Encoding BASIC, the working interval narrows step by step (each row subdivides the previous symbol's interval in the same proportions):

    start:    0        0.3      0.6      0.8      0.9      1.0
    after B:  0.8      0.83     0.86     0.88     0.89     0.90
    after A:  0.8      0.809    0.818    0.824    0.827    0.83
    after S:  0.809    0.8117   0.8144   0.8162   0.8171   0.818
    after I:  0.8144   0.81494  0.81548  0.81584  0.81602  0.8162

The final symbol C selects [0.81602, 0.8162); output LOW = 0.81602.
Decoding Algorithm

    get NUMBER
    DO
        find CHARACTER whose interval satisfies LOW <= NUMBER < HIGH
        set HIGH and LOW corresponding to CHARACTER
        output CHARACTER
        RANGE = HIGH - LOW
        NUMBER = NUMBER - LOW
        NUMBER = NUMBER / RANGE
    UNTIL no more CHARACTERs
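The matching toy decoder in Python (same RANGES table as the encoder sketch above; a real coder would transmit the message length or an end-of-stream symbol so the loop knows when to stop):

    def arith_decode(number, n_symbols):
        out = []
        for _ in range(n_symbols):
            for ch, (c_low, c_high) in RANGES.items():
                if c_low <= number < c_high:
                    out.append(ch)
                    number = (number - c_low) / (c_high - c_low)
                    break
        return "".join(out)

    print(arith_decode(0.81602, 5))   # BASIC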
(ex) Number = 0.81602, intervals A [0, 0.3), S [0.3, 0.6), I [0.6, 0.8), B [0.8, 0.9), C [0.9, 1.0):

    Number     Symbol   Range   Low   High
    0.81602    B        0.1     0.8   0.9
    0.1602     A        0.3     0.0   0.3
    0.534      S        0.3     0.3   0.6
    0.78       I        0.2     0.6   0.8
    0.9        C        0.1     0.9   1.0

    Number = (Number - Low) / Range at each step:
    (0.81602 - 0.8) / 0.1 = 0.1602
    (0.1602 - 0.0) / 0.3 = 0.534
    (0.534 - 0.3) / 0.3 = 0.78
    (0.78 - 0.6) / 0.2 = 0.9

decoded message: BASIC
- arithmetic coding is a two-pass algorithm: the first pass computes character frequencies & generates the table, the second does the actual compression
- slightly higher compression ratio than Huffman coding, but computationally expensive
- application: JBIG (Joint Bi-level Image experts Group), a joint experts group of ISO, IEC, and CCITT (ITU)

(ex) B=0.1, A=0.3, S=0.3, I=0.2, C=0.1; output = 0.81602
compression ratio = (8 x 5) / (4 x 5) = 2 (five 8-bit input characters vs. five 4-bit digits of the tag)
Run-length Coding
- the simplest data compression takes advantage of repeated data, called runs
- it is optimal when the source comprises long sub-strings of the same character

(ex) AAAABBBBBCCCCCCCCDEEEE -> 4A5B8C1D4E, compression ratio = 22/10 = 2.2
(ex) MYDOGHASFLEAS -> 1M1Y1D1O1G1H1A1S1F1L1E1A1S, compression ratio = 13/26 = 0.5 (expansion)
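A two-line run-length encoder in Python (itertools.groupby splits the string into runs):

    from itertools import groupby

    def rle_encode(s):
        return "".join(f"{len(list(g))}{ch}" for ch, g in groupby(s))

    print(rle_encode("AAAABBBBBCCCCCCCCDEEEE"))  # 4A5B8C1D4E
    print(rle_encode("MYDOGHASFLEAS"))           # expands: every run has length 1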
- a refinement: keep unique strings of data as they are and run-length encode only the repetitive parts, using a prefix character to flag runs
- this works well for images with solid backgrounds, like cartoons, but not for natural images

(ex) ABCDDDDDDDDEEEEEEEEE -> ABC+8D+9E, compression ratio = 20/9 = 2.2
(ex) Two 8 x 8 blocks of transform coefficients; the second, mostly-zero block is zig-zag scanned so that the zeros group into long runs suitable for run-length coding:

    185 -17  14  -8  23  -9 -13  18        61 -3  2  0  2  0  0 -1
     20 -34  26  -9 -10  10  13   6         4 -4  2  0  0  0  0  0
    -10 -23  -1   6 -18   3 -20   0        -1 -2  0  0 -1  0 -1  0
     -8  -5  14 -14  -8  -2  -3   8         0  0  1  0  0  0  0  0
     -3   9   7   1 -11  17  18  15         0  0  0  0  0  0  0  0
      3  -2  18   8   8  -3   0  -6         0  0 -1  0  0  0  0  0
      8   0  -2   3  -1  -7  -1  -1         0  0  0  0  0  0  0  0
      0  -7  -2   1   1   4  -6   0         0  0  0  0  0  0  0  0

Zig-zag scanned (second block):

    61,-3,4,-1,-4,2,0,2,-2,0,0,0,0,0,2,0,0,0,1,0,0,0,0,0,0,-1,0,0,-1,0,0,
    0,0,-1,0,0,0,0,0,0,0,-1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
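A sketch of the zig-zag scan in Python (the alternating anti-diagonal traversal here follows the JPEG convention and reproduces the sequence above when applied to the second block):

    def zigzag_indices(n=8):
        # (row, col) indices in JPEG zig-zag order
        order = []
        for s in range(2 * n - 1):                       # s indexes anti-diagonals
            diag = [(i, s - i) for i in range(n) if 0 <= s - i < n]
            order.extend(diag if s % 2 else diag[::-1])  # alternate direction
        return order

    def zigzag_scan(block):
        return [block[i][j] for i, j in zigzag_indices(len(block))]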
LZW Coding
- Abraham Lempel & Jacob Ziv (LZ77, LZ78); Lempel-Ziv & Terry Welch (LZW, 1984)
- no need to include the code table with the compressed data
- basis for GIF and the UNIX compress command
- seeks to replace 8-bit characters with 12-bit codewords
- one of the dictionary techniques

Dictionary techniques increase compression by exploiting recurring patterns in the source.

(ex) four-character words: 3 letters from the 26-letter English alphabet followed by a punctuation mark (, . : ; ! ?). If a fraction p of the patterns appear in a 256-entry dictionary (sent as a 1-bit flag plus an 8-bit index, 9 bits) and the rest are sent verbatim (a flag plus four 5-bit characters, 21 bits), the average rate is

    R = 9p + 21(1 - p) = 21 - 12p bits

- one must have a good idea about the structure of the source:
  static approach - sufficient prior knowledge available
  adaptive approach - prior knowledge not sufficient (how sufficient is sufficient?)
Static dictionary
- appropriate when considerable prior knowledge about the source is available (ex: student records at a university)
- digram coding: the dictionary consists of all the letters of the source alphabet followed by as many pairs of letters (digrams) as will fit

(ex) source alphabet = {a, b, c, d, r}:

    Code  Entry     Code  Entry
    000   a         100   r
    001   b         101   ab
    010   c         110   ac
    011   d         111   ad

    abracadabra >> 101 100 110 111 101 100 000
Adaptive dictionary: LZ77
- the encoder examines the input sequence through a sliding window with two parts: a search buffer containing a portion of the recently encoded sequence, and a look-ahead buffer containing the next portion to be encoded
- encodes triples <o, l, c> (ex: <7, 4, d>): o = offset of the match in the search buffer, l = length of the match, c = codeword corresponding to the symbol following the match
- requires no prior knowledge of the source; the recent past of the sequence serves as the dictionary for encoding
- a scheme equipped with full knowledge of the source statistics could perform better

(figure) sliding window over ... c b b a c d a b a e a c d a d d a d c e ..., with a match pointer in the search buffer and the current pointer at the start of the look-ahead buffer
(ex) Use LZ77 to encode the tail of the sequence ... c a b r a c a d a b r a r r a r r a d (length of window = 13, look-ahead buffer = 6):

encoded triples: <0, 0, C(d)> (no match for d), then <7, 4, C(r)> (a b r a found 7 symbols back), then <3, 5, C(d)> (the match r a r r a runs into the look-ahead buffer)
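A sketch of an LZ77 encoder in Python (greedy longest match; buffer sizes are parameters, and as in the example the match may legitimately extend into the look-ahead region):

    def lz77_encode(data, search_size=7, lookahead_size=6):
        # emit <offset, match length, next symbol> triples
        triples, pos = [], 0
        while pos < len(data):
            best_off, best_len = 0, 0
            start = max(0, pos - search_size)
            for off in range(1, pos - start + 1):
                length = 0
                while (length < lookahead_size - 1 and pos + length < len(data) - 1
                       and data[pos - off + length] == data[pos + length]):
                    length += 1          # comparison may overlap the look-ahead
                if length > best_len:
                    best_off, best_len = off, length
            triples.append((best_off, best_len, data[pos + best_len]))
            pos += best_len + 1
        return triples

    print(lz77_encode("cabracadabrarrarrad"))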
Adaptive dictionary: LZ78
- drops the reliance on a search buffer and keeps an explicit dictionary, built identically at both the encoder & decoder
- inputs are encoded as doubles <i, c>: i = index of the dictionary entry with the longest match (index 0 when there is no match), c = code for the input character following the matched portion
- each double becomes the newest entry in the dictionary
- able to capture patterns and hold them indefinitely, but the dictionary keeps growing without bound

(ex) a b c d e f a b c d e f a b c d e f a b ...
(ex) Use LZ78 to encode: wabba_wabba_wabba_wabba_woo_woo_woo

    Index  Entry  Output       Index  Entry  Output
    1      w      <0, c(w)>     9     wab    <6, c(b)>
    2      a      <0, c(a)>    10     ba_    <4, c(_)>
    3      b      <0, c(b)>    11     wabb   <9, c(b)>
    4      ba     <3, c(a)>    12     a_w    <8, c(w)>
    5      _      <0, c(_)>    13     o      <0, c(o)>
    6      wa     <1, c(a)>    14     o_     <13, c(_)>
    7      bb     <3, c(b)>    15     wo     <1, c(o)>
    8      a_     <2, c(_)>    16     o_w    <14, c(w)>
                               17     oo     <13, c(o)>
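A sketch of the LZ78 encoder in Python (it reproduces the doubles in the table above; the final double carries an empty character only if the input ends inside a known phrase):

    def lz78_encode(data):
        dictionary, output, i = {}, [], 0      # phrase -> index
        while i < len(data):
            phrase = ""
            while i < len(data) and phrase + data[i] in dictionary:
                phrase += data[i]              # extend the longest known match
                i += 1
            if i < len(data):
                output.append((dictionary.get(phrase, 0), data[i]))
                dictionary[phrase + data[i]] = len(dictionary) + 1
                i += 1
            else:
                output.append((dictionary[phrase], ""))
        return output

    print(lz78_encode("wabba_wabba_wabba_wabba_woo_woo_woo"))
    # [(0, 'w'), (0, 'a'), (0, 'b'), (3, 'a'), (0, '_'), (1, 'a'), (3, 'b'), ...]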
Adaptive dictionary: LZW
- a modification of LZ78: the encoder sends only an index into the dictionary
- the dictionary has to be primed with all the letters of the source alphabet
(ex) Use LZW to encode: wabba_wabba_wabba_wabba_woo_woo_woo

    Index  Entry    Index  Entry    Index  Entry
    1      _        10     a_       19     ba_w
    2      a        11     _w       20     wo
    3      b        12     wab      21     oo
    4      o        13     bba      22     o_
    5      w        14     a_w      23     _wo
    6      wa       15     wabb     24     oo_
    7      ab       16     ba_      25     _woo
    8      bb       17     _wa
    9      ba       18     abb

encoded symbols: 5 2 3 3 2 1 6 8 10 12 9 11 7 16 5 4 4 11 21 23 4
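A sketch of the LZW encoder in Python (priming the dictionary with the sorted source alphabet gives _ a b o w the indices 1-5, so the output matches the example exactly):

    def lzw_encode(data):
        dictionary = {ch: i + 1 for i, ch in enumerate(sorted(set(data)))}
        output, phrase = [], ""
        for ch in data:
            if phrase + ch in dictionary:
                phrase += ch                        # keep extending the match
            else:
                output.append(dictionary[phrase])   # send only the index
                dictionary[phrase + ch] = len(dictionary) + 1
                phrase = ch
        output.append(dictionary[phrase])
        return output

    print(lzw_encode("wabba_wabba_wabba_wabba_woo_woo_woo"))
    # [5, 2, 3, 3, 2, 1, 6, 8, 10, 12, 9, 11, 7, 16, 5, 4, 4, 11, 21, 23, 4]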
(ex) Decode the following symbols using LZW: 5 2 3 3 2 1 6 8 10 12 9 11 7 16 5 4 4 11 21 23 4

Starting from the same primed dictionary (indices 1-5), the decoder rebuilds exactly the dictionary shown above as it processes the indices.

decoded sequence: wabba_wabba_wabba_wabba_woo_woo_woo
Bit-Plane Coding
- an effective way to reduce an image's interpixel redundancy
    Gray Level   Binary Code   Gray Code
     0           0000          0000
     1           0001          0001
     2           0010          0011
     3           0011          0010
     4           0100          0110
     5           0101          0111
     6           0110          0101
     7           0111          0100
     8           1000          1100
     9           1001          1101
    10           1010          1111
    11           1011          1110
    12           1100          1010
    13           1101          1011
    14           1110          1001
    15           1111          1000
Bit-plane decomposition
- the gray levels of an m-bit image can be represented in the form of a base-2 polynomial:

    a_{m-1} 2^{m-1} + a_{m-2} 2^{m-2} + ... + a_1 2^1 + a_0 2^0

- an alternative decomposition is to represent the image by an m-bit Gray code g_{m-1} ... g_1 g_0, computed from the binary coefficients a_i as:

    g_i = a_i XOR a_{i+1},  0 <= i <= m-2
    g_{m-1} = a_{m-1}

- successive Gray codewords differ in only one bit position, so small changes in gray level are less likely to affect all m bit planes
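A small Python sketch of the conversion and of bit-plane extraction:

    def binary_to_gray(x):
        return x ^ (x >> 1)     # g_i = a_i XOR a_(i+1); the top bit is unchanged

    def bit_plane(values, i):
        # i-th bit plane (one 0/1 value per pixel) of a list of gray levels
        return [(v >> i) & 1 for v in values]

    print([format(binary_to_gray(v), "04b") for v in range(4)])
    # ['0000', '0001', '0011', '0010'] -- matches the Gray-code table above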
(ex) Would there be any difference between bit-plane coding and Gray coding if the LSB were modified in each case?
Lossless Predictive Coding
- eliminates interpixel redundancies by extracting & coding only the new information in each pixel: the difference between the actual and the predicted value
- a typical predictor forms the prediction from a linear combination of m previous samples, rounded to an integer:

    f̂_n = round( sum_{i=1..m} alpha_i f_{n-i} )

and the prediction error e_n = f_n - f̂_n is what the symbol coder actually compresses
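A minimal sketch of a first-order lossless predictive coder in Python (prediction coefficient alpha_1 = 1, i.e. previous-pixel prediction; the residual stream is what a symbol coder such as Huffman would then compress):

    def predictive_encode(samples):
        prev, residuals = 0, []
        for x in samples:
            residuals.append(x - prev)   # e_n = f_n - f^_n, with f^_n = f_(n-1)
            prev = x
        return residuals

    def predictive_decode(residuals):
        prev, out = 0, []
        for e in residuals:
            prev += e                    # f_n = e_n + f^_n
            out.append(prev)
        return out

    row = [21, 21, 21, 95, 169, 243, 243, 243]
    print(predictive_encode(row))        # [21, 0, 0, 74, 74, 74, 0, 0]
    print(predictive_decode(predictive_encode(row)) == row)   # True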