
COURSE CODE: IFT211

COURSE TITLE: DIGITAL LOGIC DESIGN

1.0 Information Representation


Computers of today do much more than perform calculations on numbers. The computer’s
ability to work with other types of information is dependent on representing the information in
terms of numbers. Text, sound, graphics, and motion video are all translated into numbers before
the computer can process the information. These types of information also require a large
number of bits when they are converted to binary; therefore, to make it easier to express the
number of bits, a prefix factor such as kilo or mega is used. The prefix kilo represents either
10³ (1000) or 2¹⁰ (1024); the two numbers are close to each other, and each is convenient in a
different context. The first factor is based on base 10 and the second on base 2. One factor is
easier to use in terms of base 10 and the other is easier to use in terms of base 2. Therefore,
both definitions of the prefix factor are commonly used, and both approaches approximately
describe the same amount of information (2¹⁰ ≈ 10³). The following table shows the
prefix factors and their amounts. Table: Commonly used order-of-magnitude prefixes.

Another unit of bit measure is the byte. A byte is equivalent to 8 bits. This measure is commonly
used because characters were initially represented using sets of 8 bits, and this led to character
information being measured in terms of bytes instead of bits. Now it is common to measure
information in terms of both bits and bytes. To distinguish between bits and bytes a capital B is
used to represent bytes and a lowercase b is used to represent bits. For example, a 1.44 MB
floppy disk can hold 1.44 megabytes of information, or 1.44 × 8 = 11.52 megabits of information.
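The two interpretations of the prefixes, and the bytes-to-bits arithmetic above, can be checked directly; a minimal Python sketch (the variable names are illustrative):

```python
# The prefix "kilo" has two common interpretations: the base-10 factor
# 10**3 and the base-2 factor 2**10; the two are close but not equal.
KILO_SI, KILO_BIN = 10**3, 2**10      # 1000 vs 1024
MEGA_SI, MEGA_BIN = 10**6, 2**20      # 1000000 vs 1048576
print(KILO_BIN - KILO_SI)             # 24: the gap between the definitions

# A 1.44 MB floppy disk expressed in megabits (1 byte = 8 bits):
print(1.44 * 8)                       # 11.52
```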

(i) Characters
The task of representing characters is done by assigning a number to each character. This
concept is similar to the Morse code which was used in the past to send messages using a system
of dots and dashes, or short beeps and long beeps. In computing, several different methods are
used to represent characters in binary, but the most popular standard is defined by the American
Standard Code for Information Interchange (ASCII). Standard ASCII defines 7-bit codes, but each
character is commonly stored in 8 bits (1 byte); with 8 bits, a total of 2⁸ = 256 characters can be
represented using this system.

For example, the letter A is assigned the number 65 or 01000001₂ and the letter B the number 66
or 01000010₂. The computer handles the text using the assigned numbers in binary. This idea is
also used in the storage of information within the computer. The word “computer” will take up
approximately 8 × 8 = 64 bits of space within a storage system. The ASCII code is found in the
appendix, and it is worth noting that the letter lowercase “a” and uppercase “A” are considered
two separate characters. Essentially all characters and symbols (letters, numbers, punctuation
marks, etc.) used within the computer need to be distinguished separately, or the computer will
not be able to process the information. Some ASCII codes are used as control codes. These
codes are used to format and organize the text for output. For example, decimal 9 on the ASCII
code is used for the tab function.
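Python's built-in ord() and chr() expose the same character-to-number assignment described above, so the examples can be verified directly; a short sketch:

```python
# ord() returns the code assigned to a character; chr() is the inverse.
print(ord('A'))                 # 65
print(format(ord('A'), '08b'))  # 01000001
print(format(ord('B'), '08b'))  # 01000010
print(ord('a') == ord('A'))     # False: 'a' and 'A' are separate characters
print(repr(chr(9)))             # '\t' -- decimal 9 is the tab control code
```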

In the ASCII code table, the binary assignment for the number 7 is 00110111. In this case, the
binary number converted to decimal will not give 7. Here the bit pattern is simply used to
represent the symbol 7. From this example, it is evident that a decimal number that is coded in
binary and a decimal number that is converted to binary will not necessarily produce the same
result. A code that represents a decimal number in terms of its binary equivalent is called a
Binary Coded Decimal (BCD). It is interesting to note that the last four bits of the ASCII
representation do follow the BCD method.
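The distinction between the ASCII code for the symbol 7 and the binary value seven can be demonstrated directly; a small Python sketch:

```python
# The ASCII code for the symbol '7' differs from the binary value seven.
code = ord('7')
print(code)                      # 55
print(format(code, '08b'))       # 00110111 -- the ASCII bit pattern
print(format(7, '04b'))          # 0111     -- seven converted to binary

# The last four bits of the ASCII code follow the BCD pattern:
print(format(code & 0b1111, '04b'))   # 0111
```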

Like the ASCII code, other codes can also be used to represent information in a computer. The
Extended Binary Coded Decimal Interchange Code (EBCDIC) was developed by IBM and used
on IBM equipment. The concept is the same but done differently with the EBCDIC code. This
system also follows the BCD idea.

The number of bits used to represent the characters is of interest because it will affect the
processing time and storage requirements for the information. Therefore, the number of bits used
is kept to a bare minimum. For example, if each character is represented with 20 bits using code
A and with 10 bits using code B, then the storage requirement for code A will turn out to be
twice that of code B, but more characters can be represented using code A than code B.

(ii) Graphics
The representation of graphics in the computer is done using the same kind of idea as character
representation. A graphic is created, stored, and processed by considering it in terms of small
dots called pixels.

The pixels of the image are enlarged and shown.

Each graphic has several pixels that are used to define the image. The information for each pixel
like the colour and location is maintained in terms of binary. The method used to maintain the
information can vary, as seen in the different file types (gif, jpg, bmp, etc.) that are used to store
graphical information. The graphic in Figure 2.2 shows a magnified view of a graphic created by
a computer on any output device. Notice that it has been created using small boxes (pixels). If
the number of pixels used to represent a graphic is increased, then the quality of the graphic
becomes better as the pixel size drops. This idea is referred to as resolution. The higher the
resolution, the smaller the pixel size, and the better the quality. Better quality would mean an
increase in the number of pixels per unit area, which would mean that the computer would have
to keep track of more information. This idea is easily seen by comparing two graphic files stored
on a disk. The file with the better quality will take up more space (bits) than the file with the
lower quality. Another measure of resolution is in terms of the number of dots or pixels per unit
area. In this measure, the pixel density is used to describe the quality. This is measured in dots
per square inch or dpi. In this system, a higher dpi value means a better graphic quality. This
measure is commonly used to describe the graphic quality of monitors and printers.

In a colour graphic, the colour information of each pixel needs to be tracked as well. This is
referred to as the number of bits of colour. For example, a graphic with 8-bit colour would have one
of 256 possible colours to choose from for each pixel, and a graphic with a 4-bit colour would
have one of 16 colours to choose from for each pixel. When pixel colours are chosen from a
larger selection, then the colour quality of the image becomes better; however, the larger
selection would mean a larger amount of information for the computer to process. The number
of colours available for a graphic is called the colour palette, and as the number of bits for the
colour increases so does the number of colours within the colour palette.
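The storage implications of resolution and colour depth can be sketched with a small calculation; the function name and the sample dimensions below are illustrative, not from the text:

```python
# Uncompressed storage for a graphic = number of pixels x bits of colour.
def image_bits(width_px, height_px, colour_bits):
    """Raw size in bits for a width x height image at the given colour depth."""
    return width_px * height_px * colour_bits

# With 8-bit colour each pixel picks one of 2**8 = 256 palette colours.
low  = image_bits(100, 100, 8)    # 80000 bits
high = image_bits(200, 200, 8)    # 320000 bits: doubling the resolution
print(low, high)                  # quadruples the storage requirement
```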

Other Information
Representing information using binary is the key to processing the information using the
computer. Both in character processing and graphic processing, the information is represented in
binary using a scheme before it is handled by the computer. Therefore, the first step in
processing any kind of general information is representing it in binary using a scheme. The
methods used can vary, but the idea or concept is the same. For example, to represent sound or
audio information, the volume and the pitch of the sound at every instant should be kept track of.
If the number of bits used to keep track of the information is increased then the resolution or the
quality of the information goes up. Consider temperature measurement by a computer as another
example. If the range of measurements is from 1° to 64° and the number of bits used is 2, then
the range can be divided into 4 segments with each segment represented using a two-bit binary
number as shown in the following chart.

In this case, the resolution is low because the level of detail is low. Temperature measurements
of 7° and 12° will both be recorded as the same value.

Now consider a 3-bit system of representation with 8 divisions of the range of measurement. In
the 3-bit system of measurement, 7° and 12° will be recorded by the computer as being
different. Now imagine an 8-bit system; in this case, the range will be divided into 256 divisions
leading to greater resolution or level of detail. In the 8-bit scheme, a 0.25° change will be
recorded compared to an 8° change for the 3-bit system.

Regardless of what kind of information is being processed by the computer, the concept of
binary representation using a fixed number of bits is the same. The criterion for deciding on the
number of bits to use for a type of information depends on the resolution needed for the
application and the processing capability of the hardware. Improved hardware performance leads
to improved quality of the information. This relationship can be represented as:
Resolution = Range / 2ⁿ, where n is the number of bits used.

Example: 6 bits are available to measure temperatures from 20 °C to 60 °C. Determine the
resolution of the temperature measurement.

Solution:
Resolution = Range / 2ⁿ = (60 − 20) / 2⁶ = 40 / 64 = 0.625 °C
This means that a change of 0.625 °C can be measured with such a system. Any change less than
0.625 °C will not be detected.
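The worked example, and the earlier 2-bit and 8-bit temperature charts, follow the same resolution formula; a minimal Python sketch (here the 1° to 64° chart is treated as a 64-degree span):

```python
# Resolution = measurement range / number of divisions (2**bits).
def resolution(low, high, bits):
    """Smallest change distinguishable when [low, high] is split into 2**bits steps."""
    return (high - low) / 2**bits

print(resolution(20, 60, 6))   # 0.625, as in the worked example
print(resolution(0, 64, 2))    # 16.0  -- a 2-bit system: 4 coarse segments
print(resolution(0, 64, 8))    # 0.25  -- an 8-bit system over the same span
```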

(iii) Encryption
Encryption is the process by which information is scrambled in a particular manner to prevent
unauthorized access to it. Those with access to the scrambling method will have access to the
information. For example, the letters in a word could be switched around by reflecting it to make
access to the information difficult when the scrambling method is not known. The word
COMPUTER would be represented as RETUPMOC. If every word in a given text is scrambled
using this method, then it makes it difficult for someone to get the information out without the
scrambling method. In this example, the encryption method is relatively simple and therefore
can be easily broken; conversely, a complex encryption method is relatively hard to break. The complexity or the
difficulty of breaking a scrambling method is called the strength of the encryption. Strong
encryption is difficult to unscramble without the key, and weak encryption is easier to
unscramble without the key. The following is an example of a more complex encryption method.
Try unscrambling it.

Fodszqujpo!jt!jefbm!gps!qspwjejoh!jogpsnbujpo!tfdvsjuz

This is a bit harder to unscramble or decrypt than the previous example because of the
complexity of the encryption. Compare your attempts at accessing the information to the actual
information found at the end of this section.

Encryption ideas can also be applied directly to a binary sequence. In this case, a simple bit-by-
bit replacement method would not work, because it would simply switch the 1's and 0's around.
Therefore other ideas need to be used. For example, the bits could be grouped into sets, and
these sets of bits could be replaced. Consider the bit string: 100100110111011. If the bits are
grouped into sets of 3 bits then the string becomes 100 100 110 111 011. These sets could
now be replaced using different methods. For example, each set of three could be replaced with the
next binary number, i.e. 100 becomes 101 and 001 becomes 010. In this approach, 111 would
wrap around to become 000. Using this scheme the original bit string would be encrypted as 101 101
111 000 100, producing 101101111000100. Similarly, other replacement methods could be
employed using different operations.
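The set-of-three replacement described above can be sketched in Python; the function name is illustrative:

```python
# Replace each 3-bit group with the "next" 3-bit value (111 wraps to 000),
# as in the example: 100 100 110 111 011 -> 101 101 111 000 100.
def encrypt_groups(bits, group=3):
    out = []
    for i in range(0, len(bits), group):
        chunk = bits[i:i + group]
        nxt = (int(chunk, 2) + 1) % (2**group)   # wrap 111 -> 000
        out.append(format(nxt, '0{}b'.format(group)))
    return ''.join(out)

print(encrypt_groups('100100110111011'))   # 101101111000100
```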

If access to information can be controlled effectively by using a strong encryption method, then
it becomes a useful tool for providing information security. Information security is needed to
provide private communications over public communication systems like computer networks
and the Internet. The ability to maintain privacy is essential in many areas, for example, financial
transactions, which form the basis for conducting business over the Internet. For example, to
have credit card information transmitted over the internet, the information needs to be in a secure
format before transmission can take place. Encryption can also create problems because it can be
used as a tool for illegal activity, therefore there are laws controlling the use of encryption in
many countries.

Now going back to the encryption challenge presented earlier, it should be evident that
decryption without knowing the encryption method is difficult. The solution to the encrypted
text is shown below.

Encryption is ideal for providing information security.


Now that you know the answer, compare the letters and determine the method of encryption that
was used in this case. In this case, the method is a replacement method, where each letter is
replaced with another letter. Simple replacement methods can be easily cracked using statistical
analysis. The frequency of each letter is determined within the text, and the most frequent letters
are most likely to be common letters such as the vowels. This statistical information can then be used to determine the encryption
method. It is interesting to note that computers are usually used to crack encryption systems by
analysing mathematical patterns. Therefore, most strong encryption software will use encryption
systems with complex mathematical patterns that are difficult to break. One such method is
called RSA encryption which is based on prime numbers.
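The earlier challenge text was produced by replacing each character (including spaces) with the character whose code is one higher, so decryption simply shifts each code back down; a short sketch:

```python
# The sample was encrypted by shifting every character code up by one;
# shifting each code back down by one recovers the original text.
def decrypt_shift(text, shift=1):
    return ''.join(chr(ord(c) - shift) for c in text)

cipher = 'Fodszqujpo!jt!jefbm!gps!qspwjejoh!jogpsnbujpo!tfdvsjuz'
print(decrypt_shift(cipher))
# Encryption is ideal for providing information security
```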
(iv) Data Compression
Data compression is used to represent data using fewer bits than would otherwise be needed.
There are two types of data compression; one is called lossless data compression and the other is
called lossy data compression. In lossless data compression, the idea is to represent the data with
fewer bits without losing the accuracy of the original data. For example, the binary data
10000000000000000000000000011011 is 32 bits long. Therefore, if it is represented as it is, it
would take 32 bits of space in a storage system to store the information. This 32-bit number is
interestingly mostly 0's with a few 1's, therefore, if it can be represented using fewer bits, then
the storage space required will be reduced. In this case, if the numbers are reassigned based on
the following scheme then the space requirement will be reduced.
00 as 0

01 as 10

10 as 110

11 as 1110

Using this scheme, the bits in the original data are grouped into sets of 2 bits and then
represented using the scheme as shown below.

10 00 00 00 00 00 00 00 00 00 00 00 00 01 10 11 - 32 bits (original)

110 0 0 0 0 0 0 0 0 0 0 0 0 10 110 1110 - 24 bits (new)

In this case, the number of bits needed to represent the data has been reduced by a significant
amount. This system works well, but it appears to cause a problem when there are an odd
number of data bits to work with. This problem can be dealt with by using a single bit at the
beginning to specify whether the number of bits in the original data is odd or even. For example, if the
first bit is 0 then the number of bits in the original data is even, and if the first bit is 1 then the
number of bits in the original data is odd. If the number of bits in the data is odd it can be made
even by adding a 0 at the end as shown next.

10 00 00 00 00 01 0 - Original data 13-bit

10 00 00 00 00 01 00 - 0 is added to make the number of bits even

1 110 0 0 0 0 10 0 - Leading 1 shows an odd number of data bits.

11100000100 - 11 bits.

When using data compression methods, it must always be possible to recreate the original data
from the compressed bit pattern, otherwise the compression method is useless. Let us now see
how the original data can be recovered from the compressed bit pattern.

11100000100 - Compressed data

1 110000100 - The first bit shows an odd number of bits in the original data.

1 110 0 0 0 0 10 0 - The 0's are used to identify the bit groupings.

10 00 00 00 00 01 00 - Original bits are recovered with the 0 at the end.

10 00 00 00 00 01 0 - Since the 1st bit is 1, the last 0 is removed

10 00 00 00 00 01 - The original bit pattern has been recovered.
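The complete scheme, padding, flag bit, and the 00→0, 01→10, 10→110, 11→1110 replacement, can be sketched as a pair of Python functions (names illustrative):

```python
# Lossless scheme from the text: pad odd-length data with a trailing 0,
# prefix a flag bit (1 = original length was odd), then replace each
# 2-bit group using the prefix-free code 00->0, 01->10, 10->110, 11->1110.
CODE = {'00': '0', '01': '10', '10': '110', '11': '1110'}

def compress(bits):
    odd = len(bits) % 2 == 1
    if odd:
        bits += '0'                      # pad to an even number of bits
    body = ''.join(CODE[bits[i:i+2]] for i in range(0, len(bits), 2))
    return ('1' if odd else '0') + body  # leading flag bit

def decompress(packed):
    odd, body = packed[0] == '1', packed[1:]
    groups, run = [], 0
    for b in body:
        if b == '1':
            run += 1                           # count 1's in the code word
        else:
            groups.append(format(run, '02b'))  # a 0 closes each code word
            run = 0
    bits = ''.join(groups)
    return bits[:-1] if odd else bits          # drop the padding bit if flagged

print(compress('1000000000010'))         # 11100000100
print(decompress('11100000100'))         # 1000000000010
```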

From the analysis, it can be seen that data representations can be made more efficient or compact
by using compression schemes. This particular scheme has a limitation: if the incidence of 1's
and 0's in the data is equal then there will be little or no compression. In this case, this scheme
works best when there are lots of 0s in the data. If the data is mainly 1's, then this method will
increase the data size. Therefore, it must be noted that there are limitations to data compression.
Lossless data compression schemes are commonly used in data compression software like
WinZip.

In the lossy data compression method the original data cannot be recovered from the compressed
data, but a close representation is recreated. This approach works best in situations where
the decompressed data does not have to be identical to the original data. In such cases, much
higher levels of compression can be achieved. For example, still graphics, motion video, and
sound information can be compressed to a much higher level by giving up accuracy on
decompression. JPG and MPEG files are examples of lossy data compression. Under these
lossy compression schemes, graphics quality is sacrificed in favour of reduced file sizes. The
reduced file sizes become a significant advantage when working with the internet because
smaller file sizes mean faster transfer of the files.

Computers do not understand human language; they work with data in a prescribed binary form.
Data representation is the method used to represent and encode data in a computer system. Generally,
a user inputs numbers, text, images, audio, video and other types of data for processing, but the
computer converts this data to machine language first and then processes it.

Some common data representation methods include number systems, bits and bytes, and text codes.

Data representation plays a vital role in storing, processing, and communicating data. A correct and
effective data representation method impacts data processing performance and system
compatibility.

2.0 Computers Represent Data in the Following Forms


2.1 Number System
A computer system treats numbers as data; this includes integers, decimals, and complex
numbers. All inputted numbers are represented in binary format, i.e. using 0s and 1s. A number
system is categorized into four types:

(i) Decimal Number System (Base 10 Number System)


A number system with base value 10 is termed the Decimal number system. It uses 10 digits, i.e. 0-9,
for the creation of numbers. Here, each digit in the number is at a specific place, with a place value
that is a power of 10. The place values are termed from right to left as units, tens, hundreds,
thousands, etc. Here, units has the place value 10⁰, tens has the place value 10¹, hundreds 10²,
thousands 10³, and so on.

The decimal number system has a base of 10 because it uses ten digits from 0 to 9. In the
decimal number system, the positions successive to the left of the decimal point represent units,
tens, hundreds, thousands and so on. This system is expressed in decimal numbers. Every
position shows a particular power of the base (10).

For example: 10285 has place values as

(1 × 10⁴) + (0 × 10³) + (2 × 10²) + (8 × 10¹) + (5 × 10⁰)
= 1 × 10000 + 0 × 1000 + 2 × 100 + 8 × 10 + 5 × 1
= 10000 + 0 + 200 + 80 + 5
= 10285

Example of Decimal Number System


The decimal number 1457 consists of the digit 7 in the units position, 5 in the tens place, 4 in the
hundreds position, and 1 in the thousands place whose value can be written as:

(1×10³) + (4×10²) + (5×10¹) + (7×10⁰)

= (1×1000) + (4×100) + (5×10) + (7×1)
= 1000 + 400 + 50 + 7
= 1457

 In 734, the value of 7 is 7 hundreds or 700 or 7 × 100 or 7 × 10²
 In 971, the value of 7 is 7 tens or 70 or 7 × 10 or 7 × 10¹
 In 207, the value of 7 is 7 units or 7 or 7 × 1 or 7 × 10⁰

The weightage of each position can be represented as follows −

In digital systems, instructions are given through electric signals, and values are distinguished by
varying the voltage of the signal. Maintaining 10 different voltage levels to implement the decimal
number system in digital equipment is difficult. So, number systems that are easier to implement
digitally have been developed. Let’s look at them in detail.

(ii) Binary Number System (Base 2 Number System)


The base 2 number system is also known as the binary number system, wherein only two binary
digits exist, i.e., 0 and 1; its radix is 2. The figures described under this system are known as
binary numbers, which are combinations of 0 and 1. For example, 110101 is a binary number. We
can convert any system into binary and vice versa.

The binary number system is very useful in electronic devices and computer systems because it can
be easily implemented using just two states, OFF and ON, i.e. 0 and 1. The decimal numbers 0-9 are
represented in binary as: 0, 1, 10, 11, 100, 101, 110, 111, 1000, and 1001.

Example
Write (14)₁₀ as a binary number.
Solution:
14 ÷ 2 = 7, remainder 0
7 ÷ 2 = 3, remainder 1
3 ÷ 2 = 1, remainder 1
1 ÷ 2 = 0, remainder 1

Reading the remainders from bottom to top:
∴ (14)₁₀ = (1110)₂

Examples:

14 can be written as 1110

19 can be written as 10011

50 can be written as 110010

Each binary digit is also called a bit. The binary number system is also a positional value system,
where each digit has a value expressed in powers of 2, as displayed here.

In any binary number, the rightmost digit is called the least significant bit (LSB) and the leftmost
digit is called the most significant bit (MSB).

The decimal equivalent of a binary number is the sum of the products of each digit with its
positional value.

11010₂ = 1×2⁴ + 1×2³ + 0×2² + 1×2¹ + 0×2⁰

= 16 + 8 + 0 + 2 + 0

= 26₁₀
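The digit-times-positional-value rule can be sketched in a few lines of Python; the function name is illustrative:

```python
# Decimal value of a binary string = sum of (digit x positional value 2**i),
# counting positions from the rightmost digit (the LSB).
def binary_to_decimal(bits):
    return sum(int(b) * 2**i for i, b in enumerate(reversed(bits)))

print(binary_to_decimal('11010'))   # 26
# Python's built-in int() performs the same conversion:
print(int('11010', 2))              # 26
```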

Computer memory is measured in terms of how many bits it can store. The following are memory
capacity conversions.

 1 byte (B) = 8 bits


 1 Kilobyte (KB) = 1024 bytes
 1 Megabyte (MB) = 1024 KB
 1 Gigabyte (GB) = 1024 MB
 1 Terabyte (TB) = 1024 GB
 1 Petabyte (PB) = 1024 TB
 1 Exabyte (EB) = 1024 PB

 1 Zettabyte (ZB) = 1024 EB
 1 Yottabyte (YB) = 1024 ZB

(iii) Octal Number System (Base 8 Number System)


In the octal number system, the base is 8 and it uses the digits 0 to 7 to represent numbers.
Octal numbers are commonly used in computer applications. Converting an octal number to
decimal is done using place values, as explained below using an example.

Example: Convert 215₈ into decimal.

Solution:
215₈ = 2 × 8² + 1 × 8¹ + 5 × 8⁰
= 2 × 64 + 1 × 8 + 5 × 1
= 128 + 8 + 5
= 141₁₀

The octal number system is also a positional value system, where each digit has its value
expressed in powers of 8, as shown here −

The decimal equivalent of any octal number is the sum of the products of each digit with its positional value.

726₈ = 7×8² + 2×8¹ + 6×8⁰

= 448 + 16 + 6

= 470₁₀

(iv) Hexadecimal Number System (Base 16 Number System)


In the hexadecimal system, numbers are written or represented with base 16. In the hexadecimal
system, the numbers are first represented just like in the decimal system, i.e. from 0 to 9. Then,
the numbers are represented using the alphabet from A to F. The below-given table shows the
representation of numbers in the hexadecimal number system.

Hexadecimal 0 1 2 3 4 5 6 7 8 9 A B C D E F
Decimal 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

Hexadecimal Number: The hexadecimal number system represents values using 16 digits. It consists

of the digits 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9, and then includes the alphabets A, B, C, D, E, and F; so
its base is 16. Here A represents 10, B represents 11, C represents 12, D represents 13, E represents 14
and F represents 15.

Examples of Number System Conversion

Example 1:
Convert (1056)₁₆ to an octal number.

Solution:
Given, (1056)₁₆ is a hex number.
First we need to convert the given hexadecimal number into a decimal number.
(1056)₁₆
= 1 × 16³ + 0 × 16² + 5 × 16¹ + 6 × 16⁰
= 4096 + 0 + 80 + 6
= (4182)₁₀

Convert this decimal number to the required octal number by repetitively dividing by 8.

8 4182 Remainder
8 522 6
8 65 2
8 8 1
8 1 0
0 1

Therefore, taking the value of the remainder from bottom to top, we get;

(4182)₁₀ = (10126)₈

Therefore,

(1056)₁₆ = (10126)₈
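Example 1's two steps, hexadecimal to decimal, then repeated division by 8, can be sketched in Python (the function name is illustrative):

```python
# Hex -> octal via decimal, mirroring Example 1: convert the hex digits to
# a decimal value, then repeatedly divide by 8 and collect the remainders.
def hex_to_octal(hex_str):
    value = int(hex_str, 16)        # step 1: hexadecimal -> decimal
    if value == 0:
        return '0'
    digits = []
    while value:                    # step 2: repeated division by 8
        digits.append(str(value % 8))
        value //= 8
    return ''.join(reversed(digits))   # remainders read bottom to top

print(hex_to_octal('1056'))   # 10126
```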

Example 2:

Convert (1001001100)₂ to a decimal number.

Solution:

(1001001100)₂

= 1 × 2⁹ + 0 × 2⁸ + 0 × 2⁷ + 1 × 2⁶ + 0 × 2⁵ + 0 × 2⁴ + 1 × 2³ + 1 × 2² + 0 × 2¹ + 0 × 2⁰

= 512 + 64 + 8 + 4

= (588)₁₀

Example 3:
Convert (10101)₂ into an octal number.
Solution:
Given, (10101)₂ is the binary number.
Write the given binary number in groups of three bits:
010 101
In the octal number system,
010 → 2
101 → 5
Therefore, the required octal number is (25)₈

Example 4:

Convert hexadecimal 2C to a decimal number.

Solution:

Convert (2C)₁₆ into binary first.

2C → 00101100

Now convert (00101100)₂ into a decimal number.

101100 = 1 × 2⁵ + 0 × 2⁴ + 1 × 2³ + 1 × 2² + 0 × 2¹ + 0 × 2⁰

= 32 + 8 + 4

= 44

2.2 Bits and Bytes

Bits
A bit is the smallest data unit that a computer uses in computation; all the computation tasks
done by the computer systems are based on bits. A bit represents a binary digit in terms of 0 or 1.
The computer usually uses bits in groups. It's the basic unit of information storage and
communication in digital computing.

Bytes
A group of eight bits is called a byte, and half of a byte (a group of four bits) is called a
nibble. A byte is a fundamental addressable unit of computer memory and
storage. It can represent a single character, such as a letter, number, or symbol, using encoding
methods such as ASCII and Unicode.
Bytes are used to determine file sizes, storage capacity, and available memory space. A kilobyte
(KB) is equal to 1,024 bytes, a megabyte (MB) is equal to 1,024 KB, and a gigabyte (GB) is
equal to 1,024 MB. File size is roughly measured in KBs and availability of memory space in
MBs and GBs.

The following table shows the conversion of Bits and Bytes −


Byte Value Bit Value
1 Byte 8 Bits
1024 Bytes 1 Kilobyte
1024 Kilobytes 1 Megabyte
1024 Megabytes 1 Gigabyte
1024 Gigabytes 1 Terabyte
1024 Terabytes 1 Petabyte
1024 Petabytes 1 Exabyte
1024 Exabytes 1 Zettabyte
1024 Zettabytes 1 Yottabyte

1024 Yottabytes 1 Brontobyte
1024 Brontobytes 1 Geopbyte

2.3 Text Code


A text code is a scheme that assigns a numeric code to each character so that text can be stored
and processed in binary. It covers alphabets, punctuation marks and other symbols. Some of the most
commonly used text code systems are:

(a) EBCDIC
(b) ASCII
(c) Extended ASCII
(d) Unicode

(a) EBCDIC
EBCDIC stands for Extended Binary Coded Decimal Interchange Code. IBM developed
EBCDIC in the early 1960s and used it in their mainframe systems like System/360 and its
successors. To meet commercial and data processing demands, it supports letters, numbers,
punctuation marks, and special symbols. EBCDIC assigns different character codes than other
character encoding methods like ASCII, so data encoded in EBCDIC is not directly compatible
with ASCII-based computers; conversion is needed to move data between the two systems.
EBCDIC encodes each character as an 8-bit binary code and defines 256 symbols.
The table depicts different characters along with their EBCDIC code.

(b) ASCII
ASCII stands for American Standard Code for Information Interchange. It is a 7-bit code that
specifies character values from 0 to 127, usually stored one character per byte. ASCII is a
character-encoding standard that assigns numerical values to represent characters, such as
letters, numerals, punctuation marks and control characters, used in computers and
communication equipment that handle text data.

ASCII originally defined 128 characters, encoded with 7 bits, allowing for 2^7 (128) potential
characters. The ASCII standard specifies characters for the English alphabet (uppercase and
lowercase), numerals from 0 to 9, punctuation marks, and control characters for formatting and
control tasks such as line feed, carriage return, and tab.

ASCII Tabular column


ASCII Decimal Value Character
Code
0000 0000 0 Null prompt
0000 0001 1 Start of heading
0000 0010 2 Start of text
0000 0011 3 End of text
0000 0100 4 End of transmit
0000 0101 5 Enquiry
0000 0110 6 Acknowledge
0000 0111 7 Audible bell
0000 1000 8 Backspace
0000 1001 9 Horizontal tab
0000 1010 10 Line Feed

(c) Extended ASCII


Extended American Standard Code for Information Interchange is an 8-bit code that adds
character values from 128 to 255. Extended ASCII encompasses the normal ASCII character set,
consisting of 128 characters encoded in 7 bits, plus additional characters that utilise the full 8
bits of a byte, for a total of 256 potential characters.

Different extended ASCII variants exist, each introducing more characters beyond the conventional
ASCII set. These additional characters may encompass symbols, letters, and special characters
specific to a particular language or location.

Extended ASCII Tabular column

(d) Unicode
It is a worldwide character standard that uses 8 to 32 bits per character (depending on the
encoding form) to represent letters, numbers and symbols. Unicode is a standard character
encoding which is specifically designed to provide a consistent way to represent text in nearly
all of the world's writing systems. Every character is assigned a unique numeric code (its code
point), regardless of platform, program, or language. Unicode offers a wide variety of
characters, including alphabets, ideographs, symbols, and emojis.

Unicode Tabular Column
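Unicode code points and their variable-width UTF-8 storage can be inspected directly in Python; a short sketch:

```python
# Every character has a unique Unicode code point; ord() returns it,
# and UTF-8 stores it in 1 to 4 bytes depending on the character.
for ch in ('A', 'é', '€'):
    print(ch, ord(ch), len(ch.encode('utf-8')))
# A 65 1, é 233 2, € 8364 3
```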

3.0 Boolean Algebra

Boolean Algebra is a branch of algebra that deals with boolean values true and false. It is
fundamental to digital logic design and computer science, providing a mathematical framework
for describing logical operations and expressions.

3.1 Boolean Algebra Operations


Various operations are used in Boolean algebra, but the basic operations that form the base of
Boolean Algebra are:
(i) Negation or NOT Operation
(ii) Conjunction or AND Operation
(iii) Disjunction or OR Operation

3.2 Basics of Boolean Algebra in Digital Electronics


These operations have their own symbols and precedence and the table added below shows the
symbol and the precedence of these operators.

Operator Symbol Precedence

NOT ‘ (or) ¬ First

AND . (or) ∧ Second

OR + (or) ∨ Third

Consider two Boolean variables A and B, each of which can take either of the two values 0 or 1,
i.e. each can be either OFF or ON. These operations are then explained as,
(i) Negation or NOT Operation
The NOT operation reverses the value of a Boolean variable from 0 to 1 or vice-versa.
This can be understood as:
 If A = 1, then using NOT operation we have (A)’ = 0

 If A = 0, then using the NOT operation we have (A)’ = 1

 We also represent the negation operation as ~A, i.e if A = 1, ~A = 0

(ii) Conjunction or AND Operation
The AND operation returns true only if the values of both individual variables are true; if either
value is false, the operation gives a false result. This can be understood as,

 If A = True, B = True, then A . B = True

 If A = True, B = False, Or A = false, B = True, then A . B = False

 If A = False, B = False, then A . B = False

(iii) Disjunction (OR) Operation


The OR operation returns true if the value of either individual variable is true; it only gives a
false result if both values are false. This can be understood as,
 If A = True, B = True, then A + B = True
 If A = True, B = False, Or A = false, B = True, then A + B = True
 If A = False, B = False, then A + B = False
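As an illustrative aside (not part of the original notes), the three basic operations can be modelled in a few lines of Python, using `True`/`False` in place of 1/0:

```python
# Minimal models of the three basic Boolean operations.
def NOT(a):
    return not a          # negation: reverses the value

def AND(a, b):
    return a and b        # conjunction: true only if both are true

def OR(a, b):
    return a or b         # disjunction: false only if both are false

# The cases listed above:
assert NOT(True) == False and NOT(False) == True
assert AND(True, True) == True and AND(True, False) == False
assert OR(True, False) == True and OR(False, False) == False
```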

3.3 Boolean Algebra Table

Given below is a summary of the Boolean Algebra operations.

Operation Symbol Definition


AND Operation ⋅ or ∧ Returns true only if both inputs are true.
OR Operation + or ∨ Returns true if at least one input is true.
NOT Operation ¬ or ∼ Reverses the input.
XOR Operation ⊕ Returns true if exactly one input is true.
NAND Operation ↑ Returns false only if both inputs are true.
NOR Operation ↓ Returns false if at least one input is true.
XNOR Operation ↔ Returns true if both inputs are equal.
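The derived operations in the table can be sketched the same way; this is an illustrative Python snippet, not part of the original notes:

```python
# Derived Boolean operations, matching the definitions in the table.
def XOR(a, b):
    return a != b              # true if exactly one input is true

def NAND(a, b):
    return not (a and b)       # false only if both inputs are true

def NOR(a, b):
    return not (a or b)        # false if at least one input is true

def XNOR(a, b):
    return a == b              # true if both inputs are equal

assert XOR(True, False) == True and XOR(True, True) == False
assert NAND(True, True) == False and NAND(False, False) == True
assert NOR(False, False) == True and NOR(True, False) == False
assert XNOR(True, True) == True and XNOR(True, False) == False
```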

3.4 Boolean Expression and Variables


A Boolean expression is an expression that produces a Boolean value when evaluated, i.e. it
produces either a true value or a false value, whereas Boolean variables are variables that store
Boolean values.

P + Q = R is a Boolean expression in which P, Q, and R are Boolean variables that can only store
two values: 0 and 1. In Boolean Algebra, 0 and 1 are synonyms for False and True; sometimes we
also use “Yes” in place of True and “No” in place of False. Thus, we can
say that statements using Boolean variables and operating on Boolean operations are Boolean
Expressions. Some examples of Boolean expressions are,

 A + B = True

 A.B = True

 (A)’ = False

3.5 Terminologies in Boolean Algebra


There are various terminologies related to Boolean Algebra, which are used to explain the concepts below.

Boolean Variables
Variables used in Boolean algebra that store the logical values 0 and 1 are called Boolean
variables. They are used to store either true or false values. Boolean variables are fundamental in
representing logical states or propositions in Boolean expressions and functions.

Boolean Function
A function of the Boolean Algebra that is formed by the use of Boolean variables and Boolean
operators is called the Boolean function. It is formed by combining Boolean variables and
logical expressions such as AND, OR, and NOT. It is used to model logical relationships,
conditions, or operations.
Literal
A variable or the complement of the variable in Boolean Algebra is called the Literal. Literals
are the basic building blocks of the boolean expressions and functions. They represent the
operands in logical operations.

Complement
The inverse of the Boolean variable is called the complement of the variable. The complement of
0 is 1 and the complement of 1 is 0. It is represented by ‘ or (¬) over the variable. Complements
are used to represent logical negations in Boolean expressions and functions.

Truth Table
Table containing all the possible values of the logical variables and the combination of the
variable along with the given operation is called the truth table. The number of rows in the truth
table depends on the total number of Boolean variables used in that function: a function of n
variables requires 2^n rows (for example, 2 variables give 2^2 = 4 rows).

3.6 Truth Tables in Boolean Algebra


A truth table represents all the combinations of input values and outputs in a tabular manner. All
the possibilities of the input and output are shown in it and hence the name truth table. In logic
problems, truth tables are commonly used to represent various cases. T or 1 denotes ‘True’ & F
or 0 denotes ‘False’ in the truth table.

Example:
Draw the truth table of the conditions A + B and A.B where A and B are Boolean variables.
Solution:
The required Truth Table is,
A B X=A+B Y = A.B
T T T T
T F T F
F T T F
F F F F
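The table above can also be generated mechanically; the following sketch (an illustration, not from the original notes) enumerates every input combination with `itertools.product`:

```python
from itertools import product

# Build the truth table for X = A + B (OR) and Y = A . B (AND).
rows = [(A, B, A or B, A and B)
        for A, B in product([True, False], repeat=2)]

for A, B, X, Y in rows:
    print(f"A={A!s:5} B={B!s:5} | X={X!s:5} Y={Y!s:5}")

# Two variables give 2**2 = 4 rows, as in the table above.
assert len(rows) == 4
```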

3.7 Rules of Boolean Algebra


In Boolean Algebra there are different fundamental rules for logical expression.

(i) Binary Representation


In Boolean Algebra the variables can have only two values, either 0 or 1, where 0 represents Low
and 1 represents High. These variables represent logical states of the system.

(ii) Complement Representation

The complement of a variable is represented by (¬) or (‘) over the variable. This indicates
logical negation or inversion of the variable’s value. So the complement of variable A can be
represented by A′ (or Ā); if the value of A = 0 then its complement is 1.

(iii) OR Operation
The OR operation is represented by (+) between the Variables. OR operation returns true if at
least one of the operands is true. For example, for three variables A, B and C, the OR
operation can be represented as A + B + C.

(iv) AND Operation


The AND Operation is denoted by (.) between the Variables. AND operation returns true only if
all the operands are true. For example, for three variables A, B and C, the AND operation can
be represented as A.B.C or ABC.

3.8 Laws for Boolean Algebra


The basic laws of the Boolean Algebra are added in the table added below,
Law OR form AND form

Identity Law P+0=P P.1 = P

Idempotent Law P+P=P P.P = P

Commutative Law P+Q=Q+P P.Q = Q.P

Associative Law P + (Q + R) = (P + Q) + R P.(Q.R) = (P.Q).R

Distributive Law P + QR = (P + Q).(P + R) P.(Q + R) = P.Q + P.R

Inversion Law (A’)’ = A (A’)’ = A

De Morgan’s Law (P + Q)’ = (P)’.(Q)’ (P.Q)’ = (P)’ + (Q)’
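Because each variable takes only the values 0 and 1, every law in the table can be verified exhaustively. The sketch below is illustrative (not part of the original notes) and uses Python's bitwise `&` and `|` as AND and OR on 0/1 values, with `1 - x` acting as NOT:

```python
from itertools import product

# Check the laws in the table for all assignments of P, Q, R in {0, 1}.
for P, Q, R in product([0, 1], repeat=3):
    assert (P | 0) == P and (P & 1) == P                 # Identity Law
    assert (P | P) == P and (P & P) == P                 # Idempotent Law
    assert (P | Q) == (Q | P) and (P & Q) == (Q & P)     # Commutative Law
    assert (P | (Q | R)) == ((P | Q) | R)                # Associative Law (OR)
    assert (P & (Q & R)) == ((P & Q) & R)                # Associative Law (AND)
    assert (P | (Q & R)) == ((P | Q) & (P | R))          # Distributive Law (OR form)
    assert (P & (Q | R)) == ((P & Q) | (P & R))          # Distributive Law (AND form)
    assert (1 - (1 - P)) == P                            # Inversion Law

print("all laws verified")
```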

(i) Identity Law


In Boolean Algebra, we have identity elements for both the AND (.) and OR (+) operations. The
identity law states that operating a variable with its identity element (0 for OR, 1 for AND)
returns the variable unchanged, i.e.

 A+0=A

 A.1 = A

(ii) Commutative Law
Binary variables in Boolean Algebra follow the commutative law. This law states that the order
of the operands does not matter: operating on Boolean variables A and B gives the same result as
operating on B and A. That is,
 A. B = B. A
 A+B=B+A

(iii)Associative Law
The associative law states that the grouping of the variables when applying a Boolean operator
does not matter, as the result is always the same. This can be understood as,

 (A.B).C=A.(B.C)

 ( A + B ) + C = A + ( B + C)

(iv) Distributive Law


Boolean Variables also follow the distributive law and the expression for Distributive law is
given as:

 A . ( B + C) = (A . B) + (A . C)

(v) Inversion Law


The inversion law is a unique law of Boolean algebra; it states that the complement of the
complement of any variable is the variable itself.
 (A’)’ = A
Apart from these other laws are mentioned below:

(vi) AND Law


The AND law of Boolean algebra uses the AND operator, and the AND law is,

 A.0=0

 A.1=A

 A.A=A

(vii) OR Law
The OR law of Boolean algebra uses the OR operator, and the OR law is,
 A+0=A
 A+1=1
 A+A=A

De Morgan’s Laws are also called De Morgan’s Theorems. They are the most important laws in
Boolean Algebra and these are added below under the heading Boolean Algebra Theorems.

3.9 Boolean Algebra Theorems


There are two basic theorems of great importance in Boolean Algebra: De Morgan’s First Law
and De Morgan’s Second Law. These are also called De Morgan’s Theorems. Now
let’s learn about both in detail.

De Morgan’s First Law


De Morgan’s Law states that the complement of the product (AND) of two Boolean variables
(or expressions) is equal to the sum (OR) of the complement of each Boolean variable (or
expression).

(P.Q)’ = (P)’ + (Q)’

The truth table for the same is given below:

P Q (P)’ (Q)’ (P.Q)’ (P)’ + (Q)’

T T F F F F

T F F T T T

F T T F T T

F F T T T T

We can clearly see that truth values for (P.Q)’ are equal to truth values for (P)’ + (Q)’,
corresponding to the same input. Thus, De Morgan’s First Law is true.

De Morgan’s Second Law


Statement: The Complement of the sum (OR) of two Boolean variables (or expressions) is
equal to the product(AND) of the complement of each Boolean variable (or expression).

(P + Q)’ = (P)’.(Q)’

Proof:
The truth table for the same is given below:
P Q (P)’ (Q)’ (P + Q)’ (P)’.(Q)’

T T F F F F

T F F T F F

F T T F F F

F F T T T T

We can clearly see that truth values for (P + Q)’ are equal to truth values for (P)’.(Q)’,
corresponding to the same input. Thus, De Morgan’s Second Law is true.

Examples on Boolean Algebra


Draw Truth Table for P + P.Q = P

Solution:
The truth table for P + P.Q = P
P Q P.Q P + P.Q

T T T T

T F F T

F T F F

F F F F

In the truth table, we can see that the truth values for P + P.Q are exactly the same as for P.

Draw Truth Table for P.Q + P + Q


Solution:
The truth table for P.Q + P + Q
P Q P.Q P.Q + P + Q

T T T T

T F F T

F T F T

F F F F

Solve (A + B.C)’
Solution:
Using De Morgan’s Law
(A + B.C)’ = A’.(B.C)’ = A’.(B’ + C’)
Using Distributive Law
A’.(B’ + C’) = A’.B’ + A’.C’
So, the simplified expression for the given equation is A’.B’ + A’.C’
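As an illustrative check (not part of the original notes, and assuming the expression being simplified is (A + B.C)’), brute force over all eight input combinations confirms the result:

```python
from itertools import product

def NOT(x):
    return 1 - x   # complement of a 0/1 value

# Verify that (A + B.C)' equals A'.B' + A'.C' for every input.
for A, B, C in product([0, 1], repeat=3):
    original = NOT(A | (B & C))
    simplified = (NOT(A) & NOT(B)) | (NOT(A) & NOT(C))
    assert original == simplified

print("simplification verified for all 8 cases")
```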

3.10 Application Areas of Boolean Algebra

Boolean Algebra finds applications in many other fields of science related to digital logic design,
computer science, telecommunications, etc. It will equip you with the basics of designing and

analyzing digital circuits; therefore, this is an introduction to the backbone of modern digital
electronics. Boolean Algebra also forms a framework of logical expressions essential in
simplification and optimization while programming and designing algorithms.

(i) Digital Logic Design


Boolean Algebra acts as the backbone of digital logic design, being the most important element
in the creation and analysis of digital circuits used in computers, smartphones, and all other
electronic devices. It helps simplify the logic gates and circuits so that in the design of digital
systems, they can be effectively designed and optimized.

(ii) Computer Science


In computer science, Boolean Algebra is utilized in the design and study of algorithms,
particularly in fields that require decision-making processes. It’s vital in database query
optimization, where Boolean logic is utilized to filter and obtain specific data based on
circumstances.

(iii) Telecommunications
Boolean Algebra finds application in the design and analysis of communication systems in
telecommunication. More specifically, it is used in error detection and correction mechanisms. It
is also used in the modulation and encoding of signals so that data is efficiently and accurately
transmitted over networks.

(iv) Artificial Intelligence (AI)


Boolean Algebra is vital in AI, notably in the construction of decision-making algorithms and
neural networks. It’s used to model logical thinking and decision trees, which are crucial in
machine learning and expert systems.

(v) Electrical Engineering:


In electrical engineering, Boolean Algebra is employed to analyze and design switching circuits,
which are important in the operation of electrical networks and systems. It aids in the
optimization of these circuits, ensuring minimal energy loss and effective functioning.

3.11 Advantages and Disadvantages of Boolean Algebra
Advantages
(i) Simplifies the design and analysis of digital circuits.
(ii) Reduces the complexity of logical expressions and functions.
(iii) Enhances efficiency in digital logic design and computer programming.

Disadvantages
(i) Limited to binary values, which may not always represent real-world complexities.
(ii) Requires a strong understanding of logical operators and rules.

4.0 Switching Theory


Switching circuit theory is the mathematical study of the properties of networks of idealized
switches. Such networks may be strictly combinational logic, in which their output state is only a
function of the present state of their inputs; or may also contain sequential elements, whose
output depends on both the present inputs and past states; in that sense, sequential circuits are
said to include "memory" of past states. An important class of sequential circuits are state
machines. Switching circuit theory is applicable to the design of telephone systems, computers,
and similar systems. Switching circuit theory provided the mathematical foundations and tools
for digital system design in almost all areas of modern technology. It is the theory of circuits made
up of ideal digital devices, including their structure, behaviour, and design. It incorporates Boolean
logic (see Boolean algebra), a basic component of modern digital switching systems.

Switching Theory is about using switches to implement Boolean expressions and logic gates for
the logic design of digital circuits. Switching Theory allows us to understand the operation
and relationship between Boolean Algebra and two-level logic functions with regards to Digital
Logic Gates. Switching theory can be used to further develop the theoretical knowledge and

concepts of digital circuits when viewed as an interconnection of input elements producing an
output state or condition.

Digital logic gates whose inputs and output can switch between two distinct logical values of 0
and 1, can be defined mathematically simply by using Boolean Algebra. But we can also
represent the two digital logic states of HIGH or LOW, “1” or “0”, “ON” or “OFF”, as well as
TRUE or FALSE.
These logical states can be presented using electromechanical contacts in the form of switches or
relays as a logic circuit element. The implementation of switching functions in digital logic
circuits is nothing new, but it can give us a better understanding of how a single digital logic
gate works.
Digital logic gates are the basic building blocks from which all digital electronic circuits and
microprocessor based systems are made. They can be interconnected together to form either
combinational logic circuits, which are fully dependent on the external input signals applied to
them, or sequential logic circuits, which are dependent on their present stable state, feedback of
their output, as well as any external input signals that may trigger a switching event.

The Switching Theory of a Switch


You may think that a switch is, well, a switch that can be used to turn a lighting load “ON” or
“OFF”. But a switch can also be a complex mechanical or electromechanical element used to
control the flow of a signal through it in either direction, making it a bilateral device. Consider
the circuit shown.

4.1 Switching Theory of a Normally-open Switch

Here in this simple example, the lamp (L) is connected to the battery supply, VS via the
normally-open switch, S1. Thus if switch S1 is not-pressed and therefore open, no current (I)
flows so the lamp will be “OFF” and not illuminated. Likewise, if switch S1 is pressed closing it,

then current flows around the circuit and the lamp (L) will be “ON” and illuminated. Under
normal steady state conditions the switch is permanently “open” so the lamp is “OFF”.

We can use switching algebra to describe the operation of the circuit containing the switch, S1.
For example, if we label the normally-open switch as a variable with the letter “ A“, then when
the switch is open, that is “A” is not-pressed, we can define the value of “A” as being “0”.
Again, when the switch is closed, that is “A” is pressed, we can define the value of “A” as being
“1”. This switching algebra theory is true for ALL normally-open switch configurations.

4.2 Switching Truth Table

We can develop this switching theory idea further by saying that when the lamp is “ON”
(illuminated), its switching algebra variable will be “1”, and when the lamp is “OFF” (not
illuminated), its switching algebra variable will be “0”.

Thus, when the switch is pressed (activated) the lamp is “ON”, so “A” = 1 and “L” = 1, and
when the switch is not-pressed (unactivated) the lamp is “OFF”, so “A” = 0 and “L” = 0.
Therefore we can correctly say that for the switching theory of the lamp, L = A as shown in the
truth table.

The type of switch used in the above example is called a normally-open, make-contact switch, as
you have to physically make the contact for the switch to be considered closed (A = 1). But there
is another type of switch arrangement which is the exact opposite in operation of the switch
above, called a normally-closed, break-contact switch, which is constantly closed.

4.3 Switching Theory of Series Switches


We have seen that the lamp (L) circuit above can be controlled using a single switch, S1 and
when S1 is closed (pressed) current flows around the circuit and the lamp is “ON”. But what if
we added a second switch in series with S1? How would that affect the switching function of the
circuit and the illumination of the lamp?

4.4 The Switching Theory of Series Switches

The switching circuit consists of two switches in series with a voltage source, VS and the lamp. To
distinguish the operation of each individual switch, we shall label switch, S1 with the letter “A“, and label
switch, S2 with the letter “B“. Thus when either switch is open, that is not-pressed, we can define the
value of “A” as being “0” and “B” as also being “0”.

Likewise, when either switch is closed or pressed, we can define the value of “A” as being “1”
or “B” as being “1”. That is the logical level “1” corresponds to the supply voltage value, and
will be positive. Whereas the logic level “0” corresponds to the voltage value of zero voltage, or
ground.

As there are two switches, S1 and S2, or “A” and “B”, then we can see that there are four possible
combinations of the Boolean variables “A” and “B” to illuminate the lamp. For example, “A” is
open and “B” is closed, or “A” is closed and “B” is open, or both “A” and “B” are open or
closed at the same time. Then we can define these operations in the following switching theory
truth table.

4.5 Series Switch Truth Table

The truth table shows that the lamp will only be “ON” and illuminated when BOTH switch A
AND switch B are pressed and closed, as pressing only one switch on its own will not cause
current to flow.

This proves that when two switches S1 and S2 are connected in series, the only condition that will
allow current (I) to flow and make the lamp illuminate is when both switches are closed giving
the boolean expression of: L = A and B.

In Boolean Algebra terms, this expression is that of the AND function which is denoted by a
single dot or full stop symbol, (.) between the variables giving us the Boolean expression of: L =
A.B.

Thus when switches are connected together in series their switching theory and operation is the
same as for the digital logic “AND” gate because if both inputs are “1”, then the output is “1”,
otherwise the output is “0” as shown.

5.0 Digital Logic AND Gate

Thus if input “A” is AND’ed with input “B” it produces output “Q”. In switching terms, the
AND function is referred to as the Boolean Algebra multiplication function.

5.1 The Switching Theory of Parallel Switches


If we now connect switches S1 and S2 together in parallel as shown, how would this arrangement
affect the switching function of the circuit and the illumination of the lamp?

5.2 The Switching Theory of Parallel Switches

The switching circuit now consists of the two switches in parallel with the voltage source, VS
and the lamp. As before, when either switch is open, that is not-pressed, we can define the value
of “A” as being “0” and “B” as also being “0”. Likewise, when either switch is closed or
pressed, we can define the value of “A” as being “1” or “B” as being “1”.

As before, with two switches, S1 and S2, or “A” and “B”, there are four possible combinations of
the Boolean variables “A” and “B” required to illuminate the lamp. The corresponding states
are: “A” is open and “B” is closed, or “A” is closed and “B” is open, both “A” and “B” are open,
or both closed at the same time. Then we can define these switching operations in the following
switching theory truth table.

5.3 Parallel Switch Truth Table

The truth table shows that the lamp will be “ON” and illuminated when EITHER switch A OR
switch B is pressed and closed, as pressing either switch will cause current to flow because there
will always be a conducting path available for the lamp through whichever switch is closed.

This therefore proves that when two switches S1 and S2 are connected together in parallel, the
switching condition that allows current (I) to flow and make the lamp illuminate is when any one
of the switches, or both are closed. This gives the boolean expression of: L = A or B.

In Boolean Algebra terms, this expression is that of the OR function, which is denoted by an
addition or plus sign (+) between the variables, giving us the Boolean expression: L = A+B.
Thus when switches are connected together in parallel their switching theory and operation is the

same as for the digital logic “OR” gate because if both inputs are “0”, then the output is “0”,
otherwise the output is “1” as shown.

5.4 Digital Logic OR Gate

Thus if input “A” is OR’ed with input “B” it produces output “Q”, and in switching terms, the
OR function is referred to as the Boolean Algebra logical addition function.

5.5 Switching Theory of a Boolean Function


Switching Theory can be used to implement Boolean expressions as well as digital logic gates.
As we have seen above, in switch contact terms, a Boolean expression using a dot (.) is
interpreted as a series connection for Boolean multiplication, while a plus sign (+) is interpreted
as a pair of parallel branches for Boolean addition.

5.6 Switching Theory

Example
Implement the following Boolean function of Q = A(B+C) using switches to illuminate a lamp
(or LED). Also show the equivalent digital logic circuit.

5.6.1 Switch Implementation

5.6.2 Gate Implementation
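The example function can be checked by tabulating it: Q = A(B + C) is switch A in series with the parallel pair B and C. An illustrative Python sketch (not part of the original notes):

```python
from itertools import product

# Q = A . (B + C): switch A in series with the parallel pair B, C.
def Q(A, B, C):
    return A and (B or C)

for A, B, C in product([True, False], repeat=3):
    print(f"A={A!s:5} B={B!s:5} C={C!s:5} -> Q={Q(A, B, C)!s}")

# The lamp lights only when A is closed AND at least one of B, C is closed.
assert Q(True, True, False) == True
assert Q(True, False, False) == False
assert Q(False, True, True) == False
```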

5.7 Idempotent Law of Switches


Thus far we have seen how to connect two switches together either in series or parallel to
illuminate a lamp. But what if the two switches representing a Boolean AND function or an OR
function (operations of multiplication and sum) are of the same single Boolean variable, A? In
Boolean Algebra there are various laws and theorems which can be used to define the
mathematics of logic circuits. One such theorem is known by the name of the idempotent law.
The idempotent law, used in switching theory, states that AND-ing or OR-ing a variable with itself
will produce the original variable. For example, variable “A” AND’ed with “A” will give “A”;
likewise, variable “A” OR’ed with “A” will give “A”, allowing us to simplify our switching
circuits, as we demonstrate below.

5.7.1 Idempotent Law of AND Function

5.7.2 Idempotent Law of OR Function

The representation of “AND” and “OR” functions using normally-open switches is easy to
construct and easy to understand, and these representations form the basic building blocks for
most combinational logic circuits. Thus, given any Boolean expression or logic function, it is
possible to use switching theory to implement it; after all, logic design is about using switches or
electromechanical devices such as relays.
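The idempotent law itself takes only two cases to confirm; an illustrative sketch on 0/1 values:

```python
# Idempotent law: duplicating a switch on the same variable is redundant.
for A in (0, 1):
    assert (A & A) == A   # A . A = A : two series copies act as one switch
    assert (A | A) == A   # A + A = A : two parallel copies act as one switch

print("idempotent law verified")
```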
