Data Representation in
Computer Systems
Chapter 2 Objectives
2.1 Introduction
2.2 Positional Numbering Systems
947.47₁₀ = 9 × 10² + 4 × 10¹ + 7 × 10⁰ + 4 × 10⁻¹ + 7 × 10⁻²
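As a quick check, the positional expansion can be evaluated directly (a minimal sketch, assuming the example value 947.47):

```python
# Evaluate 947.47 positionally: each digit times 10 raised to its position.
digits = [(9, 2), (4, 1), (7, 0), (4, -1), (7, -2)]
value = sum(d * 10 ** p for d, p in digits)
print(round(value, 2))  # 947.47
```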
2.3 Converting Between Bases
• Converting 190 to base 3:

  3 | 190
  3 |  63   1
  3 |  21   0
  3 |   7   0
  3 |   2   1
        0   2

  – Continue in this way until the quotient is zero.
  – In the final calculation, we note that 3 divides 2 zero times with a remainder of 2.
  – Our result, reading from bottom to top, is: 190₁₀ = 21001₃
2.3 Converting Between Bases
• Converting 190 to base 2:

  2 | 190
  2 |  95   0
  2 |  47   1
  2 |  23   1
  2 |  11   1
  2 |   5   1
  2 |   2   1
  2 |   1   0
        0   1

  – Reading the remainders from bottom to top: 190₁₀ = 10111110₂
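The repeated-division procedure used in both examples translates directly into code (a sketch; `to_base` is an assumed helper name):

```python
def to_base(n, base):
    """Convert a non-negative integer to a digit string by repeated
    division, collecting remainders and reading them bottom to top."""
    if n == 0:
        return "0"
    digits = ""
    while n:
        n, remainder = divmod(n, base)
        digits = str(remainder) + digits  # newest remainder goes on the left
    return digits

print(to_base(190, 3))  # 21001
print(to_base(190, 2))  # 10111110
```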
2.3 Converting Between Bases
• The binary numbering system is the most important radix system for digital computers.
• However, it is difficult to read long strings of binary numbers, and even a modestly sized decimal number becomes a very long binary number.
  – For example: 11010100011011₂ = 13595₁₀
• For compactness and ease of reading, binary values are usually expressed using the hexadecimal, or base-16, numbering system.
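Because each hexadecimal digit covers exactly four bits, the conversion is just a matter of grouping; a quick sketch using Python's built-ins:

```python
# 11010100011011 groups (right to left) as 11 0101 0001 1011,
# i.e. the hex digits 3, 5, 1, B.
n = 0b11010100011011
print(n)       # 13595
print(hex(n))  # 0x351b
```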
Decimal, Binary, Hexadecimal, Octal

Decimal   Binary   Hexadecimal   Octal
   0       0000         0           0
   1       0001         1           1
   2       0010         2           2
   3       0011         3           3
   4       0100         4           4
   5       0101         5           5
   6       0110         6           6
   7       0111         7           7
   8       1000         8          10
   9       1001         9          11
  10       1010         A          12
  11       1011         B          13
  12       1100         C          14
  13       1101         D          15
  14       1110         E          16
  15       1111         F          17
2.4 Signed Integer Representation
• Example:
– Using signed magnitude
binary arithmetic, find the
sum of 75 and 46.
• First, convert 75 and 46 to
binary, and arrange as a sum,
but separate the (positive)
sign bits from the magnitude
bits.
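The steps above can be sketched as follows (assuming the 8-bit layout of one sign bit plus seven magnitude bits):

```python
# Both operands are positive, so the sign bits are 0 and only the
# 7-bit magnitudes are added.
a, b = 75, 46
print(format(a, "07b"))      # 1001011
print(format(b, "07b"))      # 0101110
print(format(a + b, "07b"))  # 1111001  (121 still fits in 7 bits)
word = "0" + format(a + b, "07b")  # prepend the positive sign bit
```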
2.4 Signed Integer Representation
• Although the “end-around carry” adds some complexity, one’s complement is simpler to implement than signed magnitude.
• But it still has the disadvantage of having two different representations for zero: positive zero and negative zero.
• Two’s complement solves this problem.
• Two’s complement is the radix complement of the binary numbering system; the radix complement of a non-zero number N in base r with d digits is r^d – N.
2.4 Signed Integer Representation
• To express a value in two’s complement
representation:
– If the number is positive, just convert it to binary and
you’re done.
– If the number is negative, find the one’s complement of
the number and then add 1.
• Example:
– In 8-bit binary, 3 is: 00000011
– -3 using one’s complement representation is:
11111100
– Adding 1 gives us -3 in two’s complement form:
11111101.
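The convert-then-add-one recipe can be checked in code: masking with 2^bits − 1 computes the radix complement for negative inputs (a minimal sketch; `twos_complement` is an assumed helper name):

```python
def twos_complement(value, bits=8):
    """Bit pattern of `value` in `bits`-bit two's complement.
    For negative inputs, value & (2**bits - 1) equals 2**bits - |value|."""
    return format(value & (2 ** bits - 1), f"0{bits}b")

print(twos_complement(3))   # 00000011
print(twos_complement(-3))  # 11111101
```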
2.4 Signed Integer Representation
• With two’s complement arithmetic, all we do is add our two binary numbers. Just discard any carries emitting from the high-order bit.
  – Example: Using two’s complement binary arithmetic, find the sum of 48 and -19.

      00110000₂  (48)
    + 11101101₂  (-19)
    -------------
     100011101 → discarding the carry gives 00011101₂ = 29
2.4 Signed Integer Representation
• Example:
– Using two’s complement binary
arithmetic, find the sum of 107
and 46.
• We see that the nonzero carry
from the seventh bit overflows into
the sign bit, giving us the
erroneous result: 107 + 46 = -103.
2.4 Signed Integer Representation
• Example:
– Using two’s complement binary
arithmetic, find the sum of 23 and
-9.
– We see that there is carry into the
sign bit and carry out. The final
result is correct: 23 + (-9) = 14.
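Both examples can be replayed with a small sketch that models 8-bit two’s-complement addition and the overflow rule (names are assumptions, not a fixed API):

```python
def add_signed8(a, b):
    """Add two 8-bit two's-complement values. Overflow occurs when the
    operands share a sign but the result does not; a carry out of the
    high-order bit alone is simply discarded by the & 0xFF mask."""
    raw = (a + b) & 0xFF
    result = raw - 256 if raw >= 128 else raw
    overflow = (a >= 0) == (b >= 0) and (result >= 0) != (a >= 0)
    return result, overflow

print(add_signed8(107, 46))  # (-103, True): carry overflows into the sign bit
print(add_signed8(23, -9))   # (14, False): carry both into and out of the sign bit
```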
2.5 Floating-Point Representation
• Computers use a form of scientific notation for floating-point representation.
• Numbers written in scientific notation have three components: the sign, the significand (mantissa), and the exponent.
2.5 Floating-Point Representation
• Computer representation of a floating-point number consists of three fixed-size fields: the sign, the exponent, and the significand.
2.5 Floating-Point Representation
• Example:
  – Express 32₁₀ in the simplified 14-bit floating-point model.
• We know that 32 is 2⁵. So in (binary) scientific notation, 32 = 1.0 × 2⁵ = 0.1 × 2⁶.
  – In a moment, we’ll explain why we prefer the second notation versus the first.
• Using this information, we put 110₂ (= 6₁₀) in the exponent field and 1 in the significand, as shown.
2.5 Floating-Point Representation
• Example: Express -3.75 as a floating-point number using IEEE single precision.
• First, let’s normalize according to IEEE rules:
  – -3.75 = -11.11₂ = -1.111 × 2¹
  – The bias is 127, so we add 127 + 1 = 128 (this is our exponent).
  – The first 1 in the significand is implied, so we store only the bits 111 followed by zeros.
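The resulting bit fields can be confirmed with Python’s `struct` module, which produces the IEEE 754 single-precision encoding:

```python
import struct

# Pack -3.75 as IEEE 754 single precision and pull the fields apart.
bits = int.from_bytes(struct.pack(">f", -3.75), "big")
sign = bits >> 31                # 1: negative
exponent = (bits >> 23) & 0xFF   # 128 = 127 (bias) + 1
fraction = bits & 0x7FFFFF       # 111 followed by 20 zeros; leading 1 implied
```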
2.6 Character Codes
• Calculations aren’t useful until their results can
be displayed in a manner that is meaningful to
people.
• We also need to store the results of calculations,
and provide a means for data input.
• Thus, human-understandable characters must be
converted to computer-understandable bit
patterns using some sort of character encoding
scheme.
2.6 Character Codes
• The Unicode codespace allocation is shown at the right.
• The lowest-numbered
Unicode characters
comprise the ASCII
code.
• The highest provide for
user-defined codes.
2.7.2 Error Detection and Correction
• Hamming codes are code words formed by adding
redundant check bits, or parity bits, to a data word.
• The Hamming distance between two code words is
the number of bits in which two code words differ.
For example, a pair of bytes differing in three bit positions has a Hamming distance of 3.
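A sketch of the distance computation (the byte pair below is an illustrative assumption, chosen to differ in exactly three positions):

```python
def hamming_distance(a, b):
    """Count the bit positions in which two equal-length words differ."""
    return bin(a ^ b).count("1")

print(hamming_distance(0b10001001, 0b10110001))  # 3
```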
2.7.2 Error Detection and Correction
• Suppose we have a set of n-bit code words consisting of m data bits and r (redundant) parity bits.
• Suppose also that we wish to detect and correct single-bit errors only.
• An error could occur in any of the n bits, so each code word can be associated with n invalid code words at a Hamming distance of 1.
• Therefore, we have n + 1 bit patterns for each code word: one valid code word and n invalid code words.
2.7.2 Error Detection and Correction
• Using n bits, we have 2^n possible bit patterns. We have 2^m valid code words with r check bits (where n = m + r).
• For each valid code word, we have (n + 1) bit patterns (1 legal and n illegal).
• This gives us the inequality:
  (n + 1) × 2^m ≤ 2^n
• Because n = m + r, we can rewrite the inequality as:
  (m + r + 1) × 2^m ≤ 2^(m + r), or (m + r + 1) ≤ 2^r
  – This inequality gives us a lower limit on the number of check bits that we need in our code words.
2.7.2 Error Detection and Correction
• Suppose we have data words of length m = 4.
Then:
(4 + r + 1) ≤ 2^r
implies that r must be greater than or equal to 3.
– We should always use the smallest value of r that makes
the inequality true.
• This means to build a code with 4-bit data words
that will correct single-bit errors, we must add 3
check bits.
• Finding the number of check bits is the hard part.
The rest is easy.
2.7.2 Error Detection and Correction
• Suppose we have data words of length m = 8.
Then:
(8 + r + 1) ≤ 2^r
implies that r must be greater than or equal to 4.
• This means to build a code with 8-bit data words
that will correct single-bit errors, we must add 4
check bits, creating code words of length 12.
• So how do we assign values to these check
bits?
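The lower bound from the inequality can be found by trying successive values of r (a sketch; `min_check_bits` is an assumed name):

```python
def min_check_bits(m):
    """Smallest r satisfying (m + r + 1) <= 2**r for m data bits."""
    r = 1
    while m + r + 1 > 2 ** r:
        r += 1
    return r

print(min_check_bits(4))  # 3
print(min_check_bits(8))  # 4
```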
2.7.2 Error Detection and Correction
• With code words of length 12, we observe that each of the bit positions, numbered 1 through 12, can be expressed as sums of powers of 2. Thus:
  1 = 2⁰          5 = 2² + 2⁰         9 = 2³ + 2⁰
  2 = 2¹          6 = 2² + 2¹        10 = 2³ + 2¹
  3 = 2¹ + 2⁰     7 = 2² + 2¹ + 2⁰   11 = 2³ + 2¹ + 2⁰
  4 = 2²          8 = 2³             12 = 2³ + 2²
  – 1 (= 2⁰) contributes to all of the odd-numbered positions.
  – 2 (= 2¹) contributes to positions 2, 3, 6, 7, 10, and 11.
  – . . . And so forth . . .
• We can use this idea in the creation of our check bits.
2.7.2 Error Detection and Correction
• Using our code words of length 12, number each
bit position starting with 1 in the low-order bit.
• Each bit position corresponding to a power of 2
will be occupied by a check bit.
• Each check bit holds the parity of all the bit positions whose position numbers include that check bit’s position in their power-of-2 sums.
2.7.2 Error Detection and Correction
• Since 1 (= 2⁰) contributes to positions 1, 3, 5, 7, 9, and 11, bit 1 will check parity over bits in these positions.
• Since 2 (= 2¹) contributes to positions 2, 3, 6, 7, 10, and 11, bit 2 will check parity over these bits.
• For the data word 11010110, assuming even parity, we have a value of 1 for check bit 1, and a value of 0 for check bit 2.
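The whole construction can be sketched as a short encoder: data bits fill the non-power-of-2 positions from high to low, and each power-of-2 position gets the even parity of the positions it contributes to (the function name and bit ordering are assumptions consistent with the example above):

```python
def hamming_encode(data_bits):
    """Encode `data_bits` (MSB first) into an even-parity Hamming code
    word, returned as a list indexed by position - 1 (position 1 first)."""
    m = len(data_bits)
    r = 1
    while m + r + 1 > 2 ** r:          # minimum number of check bits
        r += 1
    n = m + r
    word = [0] * (n + 1)               # index 0 unused; positions 1..n
    data_positions = [p for p in range(n, 0, -1) if p & (p - 1)]
    for pos, bit in zip(data_positions, data_bits):
        word[pos] = bit                # data goes in non-power-of-2 slots
    for i in range(r):
        c = 1 << i                     # check-bit positions 1, 2, 4, 8, ...
        word[c] = sum(word[p] for p in range(1, n + 1)
                      if p != c and p & c) % 2
    return word[1:]

w = hamming_encode([1, 1, 0, 1, 0, 1, 1, 0])
print(w[0], w[1])  # 1 0  (check bits 1 and 2, matching the example)
```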
Chapter 2 Conclusion
End of Chapter 2