
CHAPTER 2: COMPUTER ORGANIZATION AND ARCHITECTURE

2.1. INTRODUCTION
Just as a tall building can be described at different levels of detail, namely the number of floors, the size of rooms, and the placement of doors and windows, each computer has a visible structure, which is referred to as its "architecture". One can look at a computer's architecture at similar levels of hardware elements, which in turn depend on the type of computer (personal computer, supercomputer, and so on) required. Therefore, when we talk about architecture in terms of computers, it is defined as the science of selecting and interconnecting hardware components to create computers that meet functional, performance, and cost goals.

Extending the concept of architecture, making these hardware components work in a harmonized manner to achieve a common objective in an environment is known as computer organization. The study of computer organization focuses more on the collective contribution of the hardware peripherals than on individual electronic components.

2.2. CENTRAL PROCESSING UNIT (CPU)


The central processing unit (CPU) is referred to as the "brain" of a computer system; it converts data (input) into meaningful information (output). It is a highly complex, extensive set of electronic circuitry, which executes stored program instructions. A CPU controls all internal and external devices, performs arithmetic and logic operations, and operates only on binary data, that is, data composed of 1s and 0s. In addition, it controls the use of main memory to store data and instructions, and controls the sequence of operations.

The central processing unit consists of three main subsystems: the Arithmetic/Logic Unit (ALU), the Control Unit (CU), and the Registers. The three subsystems work together to provide operational capability to the computer.

2.2.1. Arithmetic/Logic Unit (ALU)


The arithmetic/logic unit (ALU) contains the electronic circuitry that executes all arithmetic and logical operations on the data made available to it. The data required to perform these operations are input from the designated registers. The ALU comprises two units: the arithmetic unit and the logic unit.

• Arithmetic unit: The arithmetic unit contains the circuitry that is responsible for
performing the actual computing and carrying out the arithmetic calculations, such as
addition, subtraction, multiplication, and division. It can perform these operations at a very
high speed.
• Logic Unit: The logic unit enables the CPU to perform logical operations based on the
instructions provided to it. These operations are logical comparisons between data items. The
unit can compare numbers, letters, or special characters and can then take action based on the
result of the comparison. The logical operations of the logic unit test for three conditions:
• Equal-to Condition: In a test for this condition, the arithmetic/logic unit compares two
values to determine if they are equal. For example, if the number of tickets sold equals the
number of seats in the auditorium, then the concert is declared sold out.
• Less-than Condition: To test this condition, the ALU compares values to determine if one is
less than another. For example, if the number of speeding tickets on a driver's record is less
than three, then insurance rates are Rs.425/-; otherwise, the rates are Rs.500/-.
• Greater-than Condition: In this type of comparison, the computer determines if one value
is greater than another. For example, if the number of hours a person works in a week is
greater than 40, then every extra hour is paid at 1.5 times the usual hourly wage to
compute overtime pay.
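The three comparison tests above can be sketched in code, reusing the chapter's own examples. This is a minimal illustration; all the values and function names are assumptions, not part of any real instruction set.

```python
# Illustrative sketch of the logic unit's three comparison tests.

def equal_to(tickets_sold, seats):
    """Equal-to: the concert is sold out when tickets sold equals seats."""
    return tickets_sold == seats

def insurance_rate(speeding_tickets):
    """Less-than: Rs.425 if fewer than three tickets, else Rs.500."""
    return 425 if speeding_tickets < 3 else 500

def weekly_pay(hours, hourly_wage):
    """Greater-than: hours beyond 40 are paid at 1.5x the usual wage."""
    if hours > 40:
        return 40 * hourly_wage + (hours - 40) * 1.5 * hourly_wage
    return hours * hourly_wage

print(equal_to(500, 500))    # True: sold out
print(insurance_rate(2))     # 425
print(weekly_pay(45, 100))   # 4000 regular + 750 overtime = 4750.0
```

Each function takes one of the three branches the text describes, which is exactly what real logic-unit comparisons feed into: a conditional action based on the comparison result.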

2.2.2. Registers
Registers are special purpose, high-speed
temporary memory units. These are temporary
storage areas for holding various types of
information such as data, instructions, addresses,
and the intermediate results of calculations.
Essentially, they hold the information that the
CPU is currently working on. Registers can be
thought of as the CPU's working memory, a special
additional storage location that offers the
advantage of speed. Registers work under the
direction of the control unit to accept, hold, and
transfer instructions or data and perform
arithmetic or logical comparisons at high speed.
The control unit uses a data storage register the
way a store owner uses a cash register: as a
temporary, convenient place to store the
transactions. As soon as a particular instruction or
piece of data is processed, the next instruction
immediately replaces it, and the information that
results from the processing is returned to main memory. The figure above shows the types of registers
present inside a CPU.

Instruction addresses are normally stored in consecutive memory locations and are executed sequentially. The
control unit reads an instruction from the memory at the specific address held in a register and executes it.
The next instruction in the sequence is then fetched and executed, and so on. This type of
instruction sequencing is possible only if there is a counter that keeps track of the address of the next
instruction to be executed. This counter is one of the registers; other registers store intermediate data
used during the execution of instructions after they are read from memory. The table below lists some of
the important registers used in the CPU.

2 / 29
Chapter 2. Computer Organization & Architecture
Register Name                    Function
Program Counter (PC)             Keeps track of the next instruction to be executed.
Instruction Register (IR)        Holds the instruction to be decoded by the control unit.
Memory Address Register (MAR)    Holds the address of the next location in memory to be accessed.
Memory Buffer Register (MBR)     Stores data either coming to the CPU or being transferred by the CPU.
Accumulator (ACC)                A general-purpose register for storing temporary results produced by the arithmetic logic unit.
Data Register (DR)               Stores the operands and other data.

The size or the length of each register is determined by its function. For example, the memory
address register, which holds the address of the next location in memory to be accessed, must have
the same number of bits as the memory address. Instruction register holds the next instruction to be
executed and, therefore, should be of the same number of bits as the instruction.

2.2.3 Control Unit (CU)


The control unit of the CPU contains circuitry that uses electrical signals to direct the entire
computer system to carry out, or execute, stored program instructions. It resembles an orchestra
leader, who does not play a musical instrument himself but directs other people to play their
instruments in a harmonized manner. Likewise, the control unit does not execute program instructions;
rather, it directs other parts of the system to do so by communicating with both the arithmetic/logic unit
and the memory unit.

The control unit controls the I/O devices and the transfer of data to and from the primary
storage. The control unit itself is controlled by the individual instructions in programs located
in primary storage. Instructions are retrieved from the primary storage, one at a time. For
this, the control unit uses the instruction register for holding the current instruction, and
an instruction pointer to hold the address of the next instruction. Each instruction is interpreted
(decoded) so that it can be executed; based on the instructions, the control unit controls how
other parts of the CPU and, in turn, the rest of the computer system should work in order that the
instructions are executed in a correct manner. An analogy can be drawn between the control
unit and the traffic police: the control unit decides which action will occur, just as the
traffic police decides which lanes of traffic will move or stop.

The figure above illustrates how the control unit instructs the other parts of the CPU (the ALU, the registers, and the I/O
devices) on what to do and when to do it. It also determines what data are needed, where they are stored,
and where to store the results of the operation, and it sends the control signals to the devices involved in
the execution of the instructions. It administers the movement of the large amounts of instructions and
data used by the computer. In order to maintain the proper sequence of events required for any
processing task, the control unit uses clock inputs.

2.2.4. System Bus


A bus is a set of connections between two or more components/devices, designed to
transfer several or all bits of a word from a specific source to a destination. It is a shared medium of
information transfer. A bus consists of multiple paths, also termed lines; each line is
capable of transferring one bit at a time. Thus, to transmit 8 bits simultaneously over a bus, 8 lines
are required to transfer the data, plus some additional lines for controlling the transfer.
A bus can be unidirectional (transmission of data in only one direction) or bi-directional
(transmission of data in both directions). On a shared bus, only one source can transmit data at
a time, while one or more components can receive that signal. A bus that connects all three
components (CPU, memory, I/O components) is called a system bus. A system bus consists of 50-100
separate lines, which are broadly categorized into three functional groups.

• Data Lines: Data lines provide a path for moving data between the system modules; they
are collectively known as the data bus. Normally a data bus consists of 8, 16, or 32
separate lines. The number of lines present in the data bus is called the width of the data bus. The
data bus width limits the maximum number of bits that can be transferred simultaneously
between two modules, and thus helps determine the overall performance of
a computer system.
• Address Lines: Address lines are used to designate the source or destination of the data on
the data bus. As memory is organized as a linear array of bytes or words, for reading or writing
any information to memory the CPU needs to specify the address of a particular location. This
address is supplied by the address bus (the address lines are collectively called the address bus). Thus,
the width of the address bus specifies the maximum possible memory supported by a system. For
example, if a system has a 16-bit wide address bus, then it can have a memory size of up to 2^16 =
65,536 bytes.
• Control Lines: Control lines are used to control access to the data and address buses; this is
required because the bus is a shared medium. The control lines are collectively called the control bus.
These lines are used for the transmission of commands and timing signals (which validate data
and address) between the system modules. Timing signals indicate whether data and address
information is valid, whereas command signals specify which operations are to be
performed. Some of the control lines are required for providing clock signals to
synchronize operations, and for reset signals to initialize the modules. Control lines are
also required for reading from and writing to I/O devices or memory. A control line used as a bus
request indicates that a module needs to gain control of the bus; the bus grant control line is
used to indicate that a requesting module has been granted control of the bus.
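The relationship between address bus width and maximum memory, described above for the 16-bit case, can be checked with a one-line calculation. The sketch below is illustrative only; the function name is an assumption.

```python
# With n address lines, a system can address 2^n distinct byte locations.

def addressable_bytes(address_lines):
    """Maximum memory supported by a bus with the given number of address lines."""
    return 2 ** address_lines

print(addressable_bytes(16))   # 65536 bytes, i.e. 64 KB (the chapter's example)
print(addressable_bytes(20))   # 1048576 bytes, i.e. 1 MB
print(addressable_bytes(32))   # 4294967296 bytes, i.e. 4 GB
```

Doubling the address bus width squares the addressable range, which is why address bus width, not data bus width, sets the memory ceiling of a system.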

Physically, a bus is a number of parallel electrical conductors, normally imprinted
on printed circuit boards. The bus normally extends across most of the system components, which
can tap into the bus lines.

2.2.5. Main Memory Unit


Memory is that part of the computer that holds data and instructions for processing. Logically it is an
integral component of the CPU but physically it is a separate part placed on the computer’s
motherboard. Memory stores program instructions or data for only as long as the program they
pertain to is in operation. The CPU accesses the main memory in a random manner, that is, the CPU
can access any location of this memory to either read information from it or store information in it.
The primary memory is implemented by two types of memory technologies. The first is called
Random Access Memory (RAM) and the other is Read Only Memory (ROM).

RAM directly provides the required information to the processor. It can be defined as a block of
sequential memory locations, each of which has a unique address determining its location, and each
location contains a data element. Storage locations in main memory are addressed directly by the
CPU's instructions. RAM is volatile in nature, which means the information stored in it remains only as long
as the power is switched on. As soon as the power is switched off, the information contained in it is lost.

ROM stores the initial start-up instructions and routines in the BIOS (Basic Input/Output System),
which the CPU can only read, each time the computer is switched on. The contents of ROM are not lost
even in the case of a sudden power failure, making it non-volatile in nature. The instructions in
ROM are built into the electronic circuits of the chip and are called firmware. ROM is also random access
in nature, which means the CPU can randomly access any location within it. Improvements in
technology have produced more flexible types of ROM, namely, PROM (Programmable
Read Only Memory), EPROM (Erasable Programmable Read Only Memory), and EEPROM
(Electrically Erasable Programmable Read Only Memory).

2.2.6. Cache Memory


The cache is a very high speed, expensive piece of memory, which is used to speed up the memory
retrieval process. Due to its higher cost, the CPU comes with a relatively small amount of cache
compared with the main memory. Without cache memory, every time the CPU requested data,
it would send a request to the main memory, and the data would then be sent back across the system bus to
the CPU. This is a slow process in computing terms. The idea of introducing a cache is that this
extremely fast memory stores data that is frequently accessed and, if possible, the data
around it, to achieve the quickest possible response time for the CPU.

The computer uses logic to determine which data is the most frequently accessed and keeps it in
the cache. A cache is a piece of very fast memory, made from high-speed static RAM, that reduces
the access time of the data. It is very expensive and generally incorporated in the processor, where
valuable data and program segments are kept. Cache memory can be categorized
into three levels: L1 cache, L2 cache, and
L3 cache.

L1 Cache: This cache is closest to the
processor and hence is termed the primary
or L1 cache. Each time the processor
requests information from memory, the
cache controller on the chip uses special
circuitry to first check if the data
is already in the cache. If it is present, then
the system is spared a time-consuming
access to the main memory. In a typical
CPU, the primary cache ranges in size from 8
to 64 KB, with larger amounts on newer processors. This type of cache memory is very fast
because it runs at the speed of the processor, since it is integrated into it. There are two different ways
that a processor can organize its primary cache: some processors have a single cache to handle
both command instructions and program data, called a unified cache, while others have separate data
and instruction caches, called a split cache. However, the overall performance difference between
unified and split primary caches is not significant.

L2 Cache: The L2 cache is larger but slower than the L1 cache. It is used to store recent accesses
that are not captured by the L1 cache and is usually 64 KB to 2 MB in size. An L2 cache is also found on the
CPU. If the L1 and L2 caches are used together, then information missing from the L1
cache can be retrieved quickly from the L2 cache.

L3 Cache: The L3 cache is an enhanced form of memory present on the motherboard of the
computer. It is an extra cache built between the processor and main memory to
speed up processing operations. It reduces the time gap between the request for and retrieval of
data and instructions, serving them much more quickly than main memory can. L3 caches are now
commonly used with processors, with more than 3 MB of storage.

2.3. COMMUNICATION AMONG VARIOUS UNITS


All units in a computer system work in conjunction with each other to form a functional
computer system. To have proper co-ordination among these units (CPU, cache, memory, I/O), a
reliable and robust means of communication is required. One of the most important functions in the
computer system is the communication between these units, which takes two forms:
• Processor to Memory Communication.
• Processor to I/O Devices Communication.

2.3.1. Processor To Memory Communication


The whole process of communication between the processor and memory can be divided into two
steps, namely, information transfer from memory to processor and writing information in memory.
The following sequence of events takes place when information is transferred from memory to the
processor:
1. The processor places the address in Memory Address Register through the address bus.
2. The processor issues a READ command through the control bus.
3. The memory places retrieved data on the data bus, which is then transferred to the processor.

Based on the read time of the memory, a specific number of processor clock intervals are allotted
for completion of this operation. During this interval, the processor is forced to wait.
Similarly, the following sequence of events takes place when
information is written into the memory:
1. The processor places the address in memory address
register through the address bus.
2. The processor transmits the data to be written in memory
using the data bus.
3. The processor issues a WRITE command to memory by
the control bus.
4. The data is written in memory at address specified in
memory address register.
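The read and write sequences above can be sketched as a toy model in code. MAR and MBR here are plain variables standing in for the real registers, and a dictionary stands in for byte-addressable memory; everything is illustrative, not a real bus protocol.

```python
# Toy model of processor-memory READ and WRITE via MAR/MBR.

memory = {0x10: 0, 0x20: 99}
MAR = 0      # memory address register
MBR = 0      # memory buffer register

def mem_read(address):
    """READ: address -> MAR; memory[MAR] placed on data bus -> MBR -> processor."""
    global MAR, MBR
    MAR = address          # 1. processor places address in MAR via address bus
    MBR = memory[MAR]      # 2-3. READ command issued; retrieved data reaches MBR
    return MBR

def mem_write(address, data):
    """WRITE: address -> MAR; data -> MBR; WRITE command stores MBR at memory[MAR]."""
    global MAR, MBR
    MAR = address          # 1. address via address bus
    MBR = data             # 2. data via data bus
    memory[MAR] = MBR      # 3-4. WRITE command; data written at the MAR address

mem_write(0x10, 42)
print(mem_read(0x10))   # 42
print(mem_read(0x20))   # 99
```

The comments map each statement back to the numbered steps in the text, showing that both directions of transfer pass through the same two registers.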

The main concern in processor-memory communication is the speed mismatch between the memory
and the processor: memory access time is generally much slower than the processor's cycle time. This speed
mismatch is reduced by using a small, fast memory as an intermediate buffer between processor and memory,
called the cache.

2.3.2. Processor To I/O Devices Communication


I/O units are connected to the computer system through the system bus. Each I/O device in a computer
system is interfaced through a controller, called the DMA (Direct Memory Access) controller, which controls
the operation of that device. The controller is connected to the buses to perform a sequence of data
transfers on behalf of the CPU. It is capable of taking over control of the system bus from the CPU,
which is required to transfer data to and from memory over the system bus. A DMA controller can
directly access memory and is used to transfer data from one memory location to another, or from an
I/O device to memory and vice versa. The DMA controller can use the system bus only when the
CPU does not require it; otherwise it must suspend the operations currently being processed by the CPU.

With DMA, a dedicated data transfer device reads incoming data from a
device and stores it in a system memory buffer
for later retrieval by the
CPU. DMA allows
peripheral devices to
access the memory for
both read and write
operations without
affecting the state of the
computer’s central processor. As a result, the data transfer rate is significantly increased, improving
system efficiency. When a large amount of data is to be transferred from the CPU, a DMA controller
can be used. DMA allows I/O unit to exchange data directly with memory without going through
CPU except at the beginning (to issue the command) and at the end (to clean up after the command
is processed). While the I/O is being performed by the DMA, the CPU can start execution of some
other part of the same program or can start executing some other program. Thus, the DMA increases
the speed of I/O operations by taking over buses and thus eliminating CPU’s intervention.
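The DMA idea above can be sketched as a toy model: the controller moves a whole block from a device buffer into memory, and the "CPU" is involved only at the start (issuing the command) and at the end (a completion callback standing in for the interrupt). The class and names are illustrative assumptions, not a real DMA interface.

```python
# Toy sketch of a DMA block transfer: one command in, one completion out.

memory = [0] * 16

class DMAController:
    def transfer(self, device_buffer, dest, on_done):
        # The controller takes over the bus and moves the whole block
        # without per-byte CPU involvement.
        for i, byte in enumerate(device_buffer):
            memory[dest + i] = byte
        on_done(len(device_buffer))   # "interrupt" the CPU when finished

completions = []
dma = DMAController()
dma.transfer([10, 20, 30], dest=4, on_done=completions.append)

print(memory[4:7])    # [10, 20, 30]
print(completions)    # [3] - the CPU sees one completion, not three byte moves
```

The point of the sketch is the shape of the interaction: three bytes moved, but only a single notification reaches the CPU, which is why DMA frees the CPU to run other code during the transfer.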

2.4. INSTRUCTION FORMAT


An instruction consists of an opcode and
one or more operands, which may be
addressed implicitly or explicitly. An
instruction format defines the layout of
the bits allocated to these elements of the
instruction. The
instruction format also indicates (implicitly or explicitly) the addressing mode used for each operand
in that instruction. Note that most instruction sets use more than one instruction format.
Over the years a variety of instruction formats have been used, and the design of an instruction
format involves many complex issues. Some of the key issues are discussed below:
1. Instruction Length: The core design issue in an instruction format is the instruction
length, which determines the flexibility of the machine. The
decision on the length of an instruction depends on the memory size, memory organization, and
memory transfer length. There exists a trade-off between the desire for a powerful
instruction repertoire and the need to save space.
2. Allocation of Bits: For a given instruction length, there is a trade-off between the number
of opcodes and the power of the addressing capability. More opcodes mean more bits in the
opcode field, which, for an instruction format of a given length, reduces the number of bits
available for addressing.
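The bit-allocation trade-off above can be made concrete with a hypothetical 16-bit instruction word. The 4-bit opcode / 12-bit address split below is an assumption for illustration, not any real instruction set: giving the opcode more bits would directly shrink the addressable range.

```python
# Hypothetical 16-bit instruction: 4 opcode bits + 12 address bits.

WORD = 16
OPCODE_BITS = 4                    # up to 2^4 = 16 distinct opcodes
ADDR_BITS = WORD - OPCODE_BITS     # leaves 2^12 = 4096 addressable words

def encode(opcode, address):
    """Pack opcode and address into one instruction word."""
    assert opcode < 2 ** OPCODE_BITS and address < 2 ** ADDR_BITS
    return (opcode << ADDR_BITS) | address

def decode(instruction):
    """Split an instruction word back into (opcode, address)."""
    return instruction >> ADDR_BITS, instruction & (2 ** ADDR_BITS - 1)

word = encode(opcode=3, address=0x1FF)
print(decode(word))   # (3, 511)
```

Changing OPCODE_BITS to 6 would allow 64 opcodes but cut the address field to 10 bits (1024 words), which is exactly the trade-off the text describes.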

2.5. INSTRUCTION CYCLE
The basic function performed by a CPU is the execution of a program. The program to be executed
is a set of instructions stored in memory. The CPU executes the instructions of the
program to complete a given task. The CPU fetches an instruction stored in the memory and then
executes it within the CPU before proceeding to fetch the next instruction
from memory. This process continues until it is instructed to stop. The instruction execution takes
place in the CPU registers, which are used as temporary storage areas and have limited storage
space. These CPU registers have been discussed earlier.

In the simplest form, instruction processing consists of two cycles, the fetch cycle and the execute
cycle, as shown in the figure alongside. Here, the CPU fetches (reads) instructions from the memory,
one at a time, and performs the operation specified by each instruction. The instruction fetch involves
reading an instruction from a memory location into the CPU; the execution of this instruction may
involve several operations depending on the nature of the instruction.

The processing needed for a single instruction (fetch and execution) is referred to as the instruction
cycle. The instruction cycle thus consists of the fetch cycle and the execute cycle.

Fetch Cycle:
In the beginning, the address,
which is stored in the program
counter (PC), is transferred to
the memory address register
(MAR). The CPU then
transfers the instruction located
at the address stored in the
MAR to the memory buffer
register (MBR) through the
data lines connecting the CPU
to memory. This transfer from
memory to CPU is coordinated
by the control unit. To finish
the cycle, the newly fetched instruction is transferred to the instruction register (IR) and, unless
instructed otherwise, the CU increments the PC to point to the next address location.

The figure above illustrates the fetch cycle; it can be summarized in the following steps:
1. PC → MAR
2. MAR → memory → MBR
3. MBR → IR
4. CU → PC (the PC is incremented)

After the CPU has finished fetching an instruction, the CU checks the contents of the IR and determines
which type of execution is to be carried out next. This process is known as the decoding phase. The
instruction is now ready for the execution cycle.

Execute Cycle:
Once an instruction has been loaded into the IR, and the control unit has examined and decoded the
fetched instruction and determined the required course of action to take, the execution cycle can
commence. Unlike the fetch cycle and the interrupt cycle, both of which have a set instruction
sequence, the execute cycle can contain some complex operations. The actions within the execution
cycle can be categorized into the following four groups:

1. CPU - Memory: Data may be transferred from memory to CPU or from CPU to memory.
2. CPU - I/O: Data may be transferred from an I/O module to the CPU and vice versa.
3. Data Processing: The CPU may perform some arithmetic or logic operation on data via the
arithmetic-logic unit (ALU).
4. Control: An instruction may specify that the sequence of operation may be altered. For
example, the program counter may be updated with a new memory address to reflect that the
next instruction fetched should be read from this new location.

For simplicity, the following example, LOAD ACC, memory (illustrated in the figure alongside), deals
with one operation that can occur. The example [LOAD ACC, memory] can be classified as a memory
reference instruction. Instructions that can be executed without leaving the CPU are referred to as
non-memory reference instructions.

This operation loads the accumulator (ACC) with data that is stored in the memory location
specified in the instruction. The operation starts by transferring the address portion of the instruction
from IR to the memory address register (MAR). The CPU then transfers the instruction located at the
address stored in the MAR to the memory buffer register (MBR) via the data lines connecting the
CPU to memory. This transfer from memory to CPU is coordinated by the CU. To finish the cycle,
the newly fetched data is transferred to ACC. The illustrated LOAD operation (above Figure) can be
summarized in the following points:

1. IR [address portion] → MAR
2. MAR → memory → MBR
3. MBR → ACC

After the execution cycle completes, the next instruction is fetched and the process starts again.
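The three LOAD steps above can be continued in the same toy style as the fetch sketch: the address portion of the instruction is already sitting in the IR, and execution moves the addressed data into the accumulator. The values and the dictionary-based IR are illustrative assumptions.

```python
# Toy execute cycle for LOAD ACC, memory: IR[address] -> MAR -> MBR -> ACC.

memory = {0x2A: 7}
IR = {"opcode": "LOAD", "address": 0x2A}   # decoded instruction
MAR, MBR, ACC = 0, None, None

def execute_load():
    global MAR, MBR, ACC
    MAR = IR["address"]   # 1. IR [address portion] -> MAR
    MBR = memory[MAR]     # 2. MAR -> memory -> MBR
    ACC = MBR             # 3. MBR -> ACC

execute_load()
print(ACC)   # 7 - the accumulator now holds the addressed memory word
```

Note that steps 1-2 reuse exactly the MAR/MBR path of the fetch cycle; only the final destination differs (ACC instead of IR), which is why the text treats fetch and execute as two phases of one instruction cycle.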

2.6. INSTRUCTION SET


Processors are built with the ability to execute a limited set of basic operations. The collection of
these operations is known as the processor's instruction set. An instruction set is necessary so that a
user can create machine language programs to perform any logical and/or mathematical operations.
The instruction set is hardwired (embedded) in the processor, and it determines the machine
language for the processor. The more complex the instruction set, the slower the processor works.

Processors differ from one another by their instruction set. If the same program can run on two
different processors, they are said to be compatible. For example, programs written for IBM
computers may not run on Apple computers because these two architectures (different processors)
are not compatible. Since each processor has its unique instruction set, machine language programs
written for one processor will normally not run on a different processor. Therefore, all operating
systems and software programs are constructed within the boundaries of the processor’s instruction
set. Thus, the design of the instruction set for the processor becomes an important aspect of
computer architecture. Based upon the instruction sets, there are two common types of architectures,
Complex Instruction Set Computer (CISC) and Reduced Instruction Set Computer (RISC).

2.6.1. CISC Architecture:


Earlier, programming was done in low-level languages such as machine language and assembly
language. These languages are executed very quickly on computers, but are not easy for
programmers to understand and code. To overcome these shortcomings, and make programming
more accessible to the masses, high-level languages were developed. These languages closely
resembled the English language and were user-friendly. However, instructions in high-level
languages still need to be converted into their equivalent low-level languages before the processor
can execute them. This conversion process is performed by the compiler. With the development in
high-level languages, they became more powerful and provided more features (for example,
complex mathematical functions). Writing compilers for such high-level languages became
increasingly difficult. Compilers had to translate complex sub-routines into long sequences of
machine instructions. The development of a compiler was a tricky, error-prone, and time-consuming
process.

To make compiler development easier, CISC was developed. The sole motive of manufacturers of
CISC-based processors was to build processors with a more extensive and complex instruction
set, shifting most of the burden of generating machine instructions to the processor. For example,
instead of making a compiler write long sequences of machine instructions to calculate a square root, a
CISC processor would incorporate hardwired circuitry for performing the square root in a single
step. Writing instructions for a CISC processor is comparatively easy because a single instruction is
sufficient to utilize the built-in ability. In fact, the first PC microprocessors were CISC processors,
because all the instructions that the processor could execute were built into the processors. As
memory was expensive in the early days of computers, CISC processors saved memory because
their instructions could be fed directly into the processor. Most of the PCs today include a CISC
processor.

Advantages of CISC Architecture:


• At the time of their initial development, CISC machines used available technologies to
optimize computer performance.
• CISC architecture uses general-purpose hardware to carry out commands. Therefore, new
commands can be added to the chip without changing the structure of the instruction set.
• Microprogramming is as easy as assembly language to implement, and much less expensive
than hardwiring a control unit.
• As each instruction became more capable, fewer instructions could be used to implement a
given task. This makes efficient use of the relatively slow main memory.
• As micro-program instruction sets can be written to match the constructs of high-level
languages, the compiler does not have to be very complex.

Disadvantages of CISC Architecture:


• The processors of early generations of computers were contained as a subset in succeeding
versions, so the instruction set and chip hardware became more complex with each
generation of computers.
• Different instructions take different amounts of clock time to execute, which slows down the
overall performance of the machine.
• CISC architecture requires continuous reprogramming of on-chip hardware.
• CISC design includes the complexity of the hardware needed to perform many functions, and the
complexity of the on-chip software needed to make the hardware do the right thing.

10 / 29
Chapter 2. Computer Organization & Architecture
2.6.2. RISC Architecture:
Reduced Instruction Set Computer (RISC) is a processor architecture that utilizes a small, highly
optimized set of instructions. The concept behind RISC architecture is that a small set of simple
instructions executes faster than a single long, complex instruction. To implement this, RISC
architecture simplifies the instruction set of the processor, which helps in reducing the execution
time. Optimization of each instruction in the processor is done through a technique known as
pipelining. Pipelining allows the processor to work on different steps of an instruction at the same
time; using this technique, more instructions can be executed in a shorter time. This is achieved by
overlapping the fetch, decode, and execute cycles of two or more instructions. To reduce
interactions with memory and to cut access time, the RISC design incorporates a larger number of
registers.

As each instruction is executed directly by the processor, no hardwired circuitry (of the kind used
for complex instructions) is required. This allows RISC processors to be smaller, consume less
power, and run cooler than CISC processors. Due to these advantages, RISC processors are ideal for
embedded applications, such as mobile phones, PDAs, and digital cameras. In addition, the simple
design of a RISC processor reduces its development time compared to a CISC processor.
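As a rough, hypothetical illustration of why pipelining shortens total execution time (idealized single-cycle stages with no stalls, not modeled on any specific processor), the cycle counts with and without overlap can be compared:

```python
# Idealized cycle counts for executing N instructions on an S-stage
# pipeline (e.g. fetch, decode, execute), assuming one cycle per
# stage and no stalls -- a simplification for illustration only.

def sequential_cycles(n_instructions: int, n_stages: int) -> int:
    # Without pipelining, each instruction passes through all of its
    # stages before the next instruction starts.
    return n_instructions * n_stages

def pipelined_cycles(n_instructions: int, n_stages: int) -> int:
    # With pipelining, once the pipeline is full (after n_stages
    # cycles), one instruction completes on every subsequent cycle.
    return n_stages + (n_instructions - 1)

print(sequential_cycles(100, 3))  # 300 cycles without overlap
print(pipelined_cycles(100, 3))   # 102 cycles with overlap
```

With a three-stage pipeline, 100 instructions finish in roughly a third of the time, which matches the two-to-four-times figure quoted for RISC designs below.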

Advantage of RISC Architecture:


• A simplified instruction set allows for a pipelined, superscalar design; RISC processors
often achieve two to four times the performance of CISC processors using comparable
semiconductor technology and the same clock rates.
• As the instruction set of a RISC processor is simple, it uses less chip space. Extra functions,
such as memory management units or floating point arithmetic units, can also be placed on
the same chip. Smaller chips allow a semiconductor manufacturer to place more parts on a
single silicon wafer, which can lower the per-chip cost significantly.
• Since RISC architectures are simpler than CISC architectures, they can be designed more
quickly and can take advantage of other technological developments sooner than
corresponding CISC designs, leading to greater leaps in performance between generations.

Disadvantage of RISC Architecture:


• The performance of a RISC processor depends largely on the code that it is executing. If the
compiler does a poor job of instruction scheduling, the processor can spend time waiting for
the result of one instruction before it can proceed with the subsequent instruction.
• Instruction scheduling makes the debugging process difficult. If scheduling (and other
optimizations) is turned off, the machine-language instructions show a clear connection with
their corresponding lines of source. However, once instruction scheduling is turned on, the
machine language instructions for one line of source may appear in the middle of the
instructions for another line of source code.
• RISC machines require very fast memory systems to feed instructions. RISC-based systems
typically contain large memory caches, usually on the chip itself.

2.6.3. Comparing CISC And RISC:


The CISC processor came with complex instruction sets, where decoding and executing such
instructions was a complicated and time-consuming task. Moreover, with the development of high-
level languages, using the instruction set posed problems in compiler design. Because CISC
processors were less memory-intensive (memory being very expensive earlier), they grew rapidly in
popularity. With time, memory prices dropped drastically, but CISC processors could not optimally
use this availability of cheap memory. Manufacturers started working towards processors that could
run faster by using extra memory. This idea gave birth to the RISC architecture, whose instructions
were small and highly optimized but more memory-intensive.

The difference between the RISC approach and the CISC approach is best explained by an
example showing how each design carries out a task of five multiplications. The general steps
required to perform the multiplications are:
11 / 29
Chapter 2. Computer Organization & Architecture
1. Read the first number out of memory.
2. Read the second number out of memory.
3. Multiply the two numbers.
4. Write the result back to memory.
5. Repeat Steps 1-4 for each of the four remaining multiplications.
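The five steps above can be sketched in code. This is a toy model: a Python dict stands in for memory, and the operand names (a0, b0, result0, ...) are invented for illustration:

```python
# Toy model of the five-multiplication task: a dict stands in for
# memory, and each loop iteration performs the four steps in order.

memory = {"a0": 2, "b0": 3, "a1": 4, "b1": 5, "a2": 6, "b2": 7,
          "a3": 8, "b3": 9, "a4": 10, "b4": 11}

for i in range(5):                  # Step 5: repeat for each pair
    reg1 = memory[f"a{i}"]          # Step 1: read the first number
    reg2 = memory[f"b{i}"]          # Step 2: read the second number
    product = reg1 * reg2           # Step 3: multiply the two numbers
    memory[f"result{i}"] = product  # Step 4: write the result back

print([memory[f"result{i}"] for i in range(5)])  # [6, 20, 42, 72, 110]
```

The CISC and RISC designs described next differ not in these logical steps but in how the hardware overlaps them.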

On a simple CISC-based CPU, the CPU is first configured to get (read) the numbers, then the
numbers are read. Next, the CPU is configured to multiply the numbers, and then the numbers are
multiplied. Next, the CPU is configured to write the result to memory, and finally the result is
written to memory. To multiply five sets of numbers, the whole process must be repeated five
times.

On a simple RISC CPU, the process is slightly different. A piece of hardware on the CPU is
dedicated to reading the first number. When this operation is complete, another piece of hardware
reads the second number. After completion of this operation, another unit performs the
multiplication and, when it is complete, yet another writes the result to memory. If this operation
happens five times in a row, the hardware dedicated to obtaining the first number from memory
fetches the first number for the second operation while the second number for the first operation is
still being read. At the same time, the second number for the second operation is retrieved from
memory while the first number for the third operation is obtained. As the first result is written
back to memory, the second multiplication is performed, while the second number for the third
operation is read from memory and the first number for the fourth operation is read from memory,
and so on.

2.7. INSIDE A COMPUTER


Computing machines are complex devices, made
from numerous electronic components. Many of
these components are small, sensitive, and expensive, and they operate together with other
components to deliver better performance. Therefore, to ensure better performance and increase the
life of these components, they are placed inside a metal enclosure called the system case or cabinet.
A system case is a metal-and-plastic box that houses the main components of the computer. It
protects the electronic hardware against heat, light, and other hazards, and it serves important roles
in the functioning of a properly designed and well-built computer. Several areas where the system
case plays an important role are:

• Structure: The system case provides a rigid structural framework for the components, which
ensures that everything fits together and works in a well-organized manner.
• Protection: The system case protects the inside of the system from physical damage and
electrical interference.
• Cooling: The case provides a cooling system for the vital components. Components that run
at cool temperatures last longer and are less troublesome.
• Organization and Expandability: The system case is key to the physical organization of the
system. If a system case is poorly designed, upgrading or expanding peripherals is
limited.
• Status Display: The system case carries lights (LEDs) that convey information from inside the
box to the user.

The system case encloses all the components essential to running the computer system. These
components include the motherboard, processor, memory, power supply, expansion slots, cables,
removable drives, and many others.

2.7.1. Power Supply (SMPS)


A power supply or SMPS (Switched Mode Power Supply) is
a transformer and voltage-control device in a computer that
furnishes power to all the electronic components by converting
the incoming mains supply into a low-voltage DC (direct current)
supply. When the computer is turned on, the power supply allows
the converted electricity to travel to the other components inside
the computer. Modern-day power supplies provide protection
against surges and spikes in the power, which could damage vital
components of the computer. Nowadays, a PC power supply is
capable of providing several different voltages, at different
strengths, and manages additional signals for the motherboard.
The power supply plays an important role in the following areas
of the computer system:

• Stability: A high quality power supply with sufficient capacity to meet the demands of the
computer provides years of stable power for the PC.
• Cooling: The power supply contains the main fan that controls the flow of air through the
system case. This fan is a major component in PC cooling system.
• Expandability: The capacity of the power supply determines the ability to add new drives to
the system or upgrade to a more powerful motherboard or processor.

2.7.2. Motherboard:
Motherboard, also known as system board, is a large multi-layered printed circuit board inside a
computer. The motherboard contains the CPU, the BIOS ROM chip, and the CMOS Setup
information. It has expansion slots for installing different adapter cards like video card, sound card,
network interface card, and modem. This circuit board provides a connector for the keyboard as well
as housing to the keyboard controller chip. It possesses RAM slots for the system’s random access
memory chips and provides the system’s chipset, controllers, and underlying circuitry (bus system)
to tie everything together. In a typical motherboard, the circuitry is imprinted on the surface of a
firm planar board and is usually manufactured in a single piece. The most common motherboard
design in today's desktop computers is the ATX design. In ATX designs, the components included
are the processor, coprocessors (optionally), memory, BIOS, expansion slots, and interconnecting
circuitry. Additional components can be added to a motherboard through its expansion slots.
Nowadays, motherboards are designed with peripherals integrated as chips directly on the board.
Initially this was confined to audio and video chips, but in recent times the peripherals integrated in
this way include SCSI, LAN, and RAID controllers. While there are cost benefits to this approach,
the biggest downside is the restriction of future upgrade options. The figure below provides a
detailed look at the various components on motherboards.

BIOS:
BIOS (Basic Input/Output System) comprises a set of routines and start-up instructions held inside
a ROM (Read Only Memory). This gives two advantages to the computer: firstly, the code and data
in the ROM BIOS need not be reloaded each time the computer is started; secondly, they cannot be
corrupted by wayward applications that accidentally write into the wrong part of memory. The
first part runs as soon as the machine is switched on. It inspects the computer to determine what
hardware is fitted and then conducts a simple test, the POST (Power-On Self Test), for normal
functionality. If all the tests are passed, the ROM then determines the drive from which to boot the
machine. Most PCs have the BIOS set to check for the presence of an operating system on the
primary hard disk drive. Once the machine is booted, the BIOS serves a different purpose by
presenting DOS with a standardized API (Application Program Interface) for the PC hardware.

CMOS:
The motherboard includes a separate block of very low-power memory called the CMOS
(complementary metal oxide semiconductor) chip. This chip is kept alive by a battery even when
the PC's power is off. The function of the CMOS chip is to store basic information about the PC's
configuration: the number and type of hard and floppy drives, the memory capacity, and so on. The
other important data kept in CMOS memory are the system time and date. The clock, CMOS chip,
and battery are usually all integrated into a single chip.

2.7.3. Ports And Interfaces:
Ports and interfaces are generic names for the various “holes” (and their associated electronics)
found at the back of the computer, through which external devices are connected to the computer’s
motherboard. Different interfaces and ports run at varying speeds and work best with specific types
of devices.
• PS/2 Ports: A standard serial-port connector used to plug a computer mouse or keyboard
into a personal computer. It is a small, round socket with six pins.
• Serial Ports: A general-purpose communications port through which data is passed
serially, that is, one bit at a time. These ports are used for transmitting data over long
distances. In the past, most digital cameras were connected to a computer’s serial port in
order to transfer images to the computer. However, because of their slow speed, these ports
are now used mainly with the computer mouse and modem.
• Parallel Port: An interface on a computer that supports transmission of multiple bits of
data (usually 8 bits) at the same time. This port transmits data faster than a serial port and
is used mainly for connecting peripherals such as printers and CD-ROM drives.
• SCSI Port: These ports are used for transmitting data to up to seven devices in a “daisy
chain” fashion and at a speed faster than serial and parallel ports (usually 32 bits at a time).
In a daisy chain, several devices are connected in series to each other, so that data for the
seventh device must pass through the other six devices first. The port is a hardware
interface that includes an expansion board, called a SCSI host adapter or SCSI controller,
that plugs into the computer. Devices that can be connected to SCSI ports include hard-disk
drives and network adapters.
• USB Port: The USB (Universal Serial Bus) port is a plug-and-play hardware interface for
connecting peripherals such as the keyboard, mouse, joystick, scanner, printer, and modem. It
supports a maximum bandwidth of 12 Mbps and has the capability to connect up to 127
devices. With a USB port, a new device can be added to the computer without adding an
adapter card. These ports are a replacement for parallel and serial ports.

2.7.4. Expansion Cards:
An expansion card, also called an adapter card, is a circuit board that provides additional capabilities
to the computer system. Adapter cards have large-scale integrated circuit components installed on
them. The cards are plugged into the expansion sockets present on the computer’s motherboard to
give the computer added functionality. Commonly available expansion cards connect monitors (for
enhanced graphics) and microphones (for sound), each serving a special purpose. However,
nowadays most of the adapters come built into the motherboard, and no expansion card is required
unless high performance is needed.

• Sound Cards: An expansion card that allows the computer to output sound through
connected speakers, to record sounds from a microphone input, and to manipulate sounds
stored on the computer is called a sound card. It contains special circuits for operating the
computer’s sound and allows playback and recording of sound from CD-ROM.
• Video Cards: A video card, also called a display adapter, is used for enhancing the graphics
images seen on the computer’s monitor. The card converts the images created in the
computer into the electronic signals required by the monitor. Generally, a good card with a
graphics accelerator is preferred for editing digital video. Different video cards come with
varying capabilities related to the size of the monitor and the total number of displayable
colors.
• Network Interface Card: Network Interface Card is a computer circuit board that is
installed in a computer so that it can be connected to other computers in a network. Personal
computers and workstations on a local area network (LAN) contain a network interface card
specifically designed for transmitting data across LAN. Network interface cards provide a
dedicated, full time connection to a network.
• Modem: A modem is an expansion card that allows two computers to communicate over
ordinary phone lines. It converts digital data from the computer into analog signals for
transmission over the telephone lines, and converts incoming analog signals back into
digital data for the receiving computer. Modems do not provide high bandwidth for data
communication; as a result, they do not support high-speed Internet access, as current
modems run at up to 56 Kbps.
• PC Card: PC Card is a removable device, approximately the size of a credit card, which is
designed to plug into a PCMCIA slot. It is a standard, formulated by the Personal Computer
Memory Card International Association (PCMCIA) for providing expansion capabilities to
computers. The PCMCIA standard supports input-output devices, memory, fax/modem,
SCSI, and networking products. The card fits into a notebook or laptop computer.

2.7.5. Ribbon Cables:


Ribbon cables are wide, flat, insulated cables, which are
flexible enough to fit into areas with little space. These
cables are made up of numerous tiny wires (traces and
electronic pathways) grouped into bunches: one bunch
carries data and information around to different components
on the motherboard, while another connects these
components to the various devices attached to the
computer. These cables connect the hard drive, floppy
drive, and CD-ROM drive to the connectors on the
motherboard and control the drives by getting and sending data from and to them. These cables
connect different external devices, peripherals, expansion slots, I/O ports, and drive connections to
the rest of the computer.

2.7.6. Memory Chips:


Memory is the place where the computer holds the programs and data that are currently in use.
System memory on the motherboard is arranged in groups called memory banks. The number of
memory banks and their configurations vary from computer to computer, because they are
determined by the CPU and the way it receives information. The speed of the CPU determines the
number of memory sockets required in a bank. For main memory, one of two types of memory
modules is used: SIMM (Single In-Line Memory Module) or DIMM (Dual In-Line Memory Module).

SIMM:
Single In-Line Memory Modules are small circuit
boards designed to accommodate surface-mount
memory chips. A typical SIMM comprises a
memory chips. A typical SIMM chip comprises a
number of RAM chips on a PCB (printed circuit
board), which fits into a SIMM socket on a
computer’s motherboard. These chips are packed into
small plastic or ceramic dual inline packages (DIPs),
which are assembled into a memory module. A
typical motherboard offers four SIMM sockets
capable of taking either single-sided or double-sided
SIMMs with module sizes of 4, 8, 16, 32, or even 64 MB. When 32-bit SIMMs are used with
processors, they have to be installed in pairs, with each pair of modules making up a memory bank.
These chips support 32-bit data paths and were originally used with 32-bit CPUs. The CPU then
communicates with the memory bank as one logical unit. SIMM chips usually come in two formats:
• A 30-pin SIMM, used in older system boards, which delivers one byte of data in every
memory request.
• A larger 72-pin SIMM, used in modern PCs, which delivers four bytes of data (plus parity) in
every memory request.

DIMM:
With the increase in speed and bandwidth capability, a new standard for memory was adopted called
dual in-line memory module (DIMM). These modules have 168 pins in two (dual) rows of contacts,
one on each side of the card. With the additional pins, a CPU retrieves information from a DIMM
in 64-bit chunks, compared with the 32- or 16-bit transfers of SIMMs. Some of the physical
differences between 168-pin DIMMs and 72-pin SIMMs include the length of the module, the
number of notches on the module,
and the way the module is installed. The main difference between the two is that on a SIMM,
opposing pins on either side of the board are tied together to form one electrical contact; while on a
DIMM, opposing pins remain electrically isolated to form two separate contacts. DIMMs are often
used in computer configurations that support a 64-bit or wider memory bus (like Intel’s Pentium IV).

2.7.7. Storage Devices:


Disk drives are important components present inside the system case. These drives are used to
read and write information. The three most common disk drives located inside a system case are
the hard drive, the floppy disk drive, and the CD-ROM drive. These drives are high-capacity storage
devices, enabling the user to store large amounts of data without much concern about limitations in
size. Of these drives, the hard disk drive provides the largest storage space. All the vital
applications, ranging from the operating system to the word processor, are stored on the hard disk
drive. The hard disk drive is costly and not robust enough to transfer data physically; therefore,
CD-ROMs and floppy disks are used as an alternative means of transferring data physically.

2.7.8. Processors:
The processor, often called the CPU, is the central component of the computer. It is referred to as
the brain of the computer, responsible for carrying out operations in an efficient and effective
manner. The processor holds the key to all processing and computational work; everything the user
does on the computer is performed either directly or indirectly by the processor. The following
factors should be considered while choosing a processor for a computer system:
• Performance: The processor’s capabilities dictate maximum performance of a system. It is
the most important single determinant of system performance (in terms of speed and
accuracy) in the computer.
• Speed: The speed of a processor defines how fast it can perform operations. There are many
ways to indicate speed, but the most obvious measure is the internal clock speed of the
CPU. The faster the internal clock of the processor, the faster the CPU will work, and the
more expensive the hardware will be.
• Software Support: New and faster processors support resource-consuming software in a
better manner. For example, new processors such as the Pentium IV enable the use of
specialized software that was not supported on earlier machines.
• Reliability and Stability: The reliability of the computer system directly depends on the
type and quality of the processor.
• Energy Consumption and Cooling: Although processors consume relatively little power
compared to other system devices, newer processors consume a great deal of power,
which has an impact on everything from the selection of cooling methods to overall
system reliability.
• Motherboard Support: The type of processor used in the system is a major determining
factor of chipset used on the motherboards. The motherboard, in turn, dictates many facets of
the system’s capabilities and performance.

2.8. DATA REPRESENTATION IN COMPUTER


Since the early days of human civilization, people have been using their fingers, sticks, etc. for
counting things. The need for counting probably originated when man started to use animals for
domestic purposes and practice animal breeding for fulfilling his needs and requirements. As daily
activities became more complex, numbers became more important in trade, time, distance, and in all
other spheres of human life. Ever since people discovered that it was necessary to count objects, they
have been looking for easier ways of counting. To count large numbers, man soon started to count in
groups, and various number systems were formed.

As manual counting played only a limited role in simple computing tasks, more complex
computation made humans depend on machines to perform computing tasks efficiently and
accurately. With the advancement of machines, different number systems were formed to make the
task simple, accurate, and fast. These number systems worked on the principle of digital logic design
present in the modern day computer system and opened a gateway to overcome complex
computation barriers. In a precise manner, a number system defines a set of values used to represent
‘quantity’. Generally, one talks about a number of people attending class, or a number of modules
taken by each student, and use numbers to represent grades achieved by students in tests.
Quantifying values and items in relation to each other is helpful for us to make sense of our
environment. The number system can be categorized into two broad categories:
• Non-Positional Number Systems: In ancient times, people used to count with their fingers.
When fingers became insufficient for counting, stones and pebbles were used to indicate the
values. This method of counting is called the non-positional number system. It was very
difficult to perform arithmetic operations with such a number system, as it had no symbol for
zero. The most common non-positional number system is the Roman number system. These
systems are often clumsy and it is very difficult to do calculations for large numbers.
• Positional Number Systems: A positional number system is any system that requires a finite
number of symbols/digits of the system to represent arbitrarily large numbers. When using
these systems the execution of numerical calculations becomes simplified, because a finite
set of digits are used. The value of each digit in a number is defined not only by the symbol,

but also by the symbol’s position. The most popular positional number system being used
today is the decimal number system.

Base (or Radix) of System:


The word base (or radix) means the quantity of admissible marks used in a given number system.
The admissible marks are characters such as Arabic numerals, Latin letters, or other recognizable
marks used to present the numerical magnitude of a ‘quantity’. The decimal number system
originated in India. The base of a number is indicated by a subscript that follows the value of the
number. For example, (5148)10 represents a number in the base-10 (decimal) system and (214)8
represents a number in the base-8 (octal) system.

For a computer, everything is in the digital form (binary form) whether it is number, alphabet,
punctuation mark, instruction, etc. Let us illustrate with the help of an example. Consider the word
‘PANDU’ that appears on the computer screen as a series of alphabetic characters. However, for the
computer, it is a combination of numbers. To the computer it appears as:
Character Representation:  P        A        N        D        U
Binary Representation:     01010000 01000001 01001110 01000100 01010101
Decimal Representation:    80       65       78       68       85
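This character-to-number mapping can be reproduced with Python's built-in ord() function, which returns a character's code, and format(), which renders that code in binary (standard ASCII values):

```python
# Print each character of 'PANDU' with its 8-bit binary and
# decimal ASCII representation, as built-in to Python.
for ch in "PANDU":
    print(ch, format(ord(ch), "08b"), ord(ch))
```

Running this prints one line per character, e.g. `P 01010000 80`.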

Types of Number Systems:


Number System Radix Value Set of Digits Example
Decimal R = 10 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9 (5148)10
Binary R=2 0, and 1 (1011001)2
Octal R=8 0, 1, 2, 3, 4, 5, 6, and 7 (2751)8
Hexadecimal R = 16 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, and F (3C8F)16

Generally, a user is hardly aware of the fact that the actual operations in a computer are done with a
binary number system. Traditionally, the two possible states of a binary system are represented by
the digits 0 and 1. Long before the introduction of octal and hexadecimal numbers, programmers
used a convenient method of handling large binary numbers in either 3-bit or 4-bit groupings. Later,
the actual machine code for computer instructions was replaced by mnemonics, which comprised
three or four letters of the assembly language for a particular CPU. It was also possible to use more
than one base of numeration for writing data in these assembly languages, so programmers made
sure their assemblers could understand octal, hexadecimal, and binary numbers. Since the computer
only understands binary, and octal and hexadecimal are much more compact than binary, octal and
hexadecimal numbers are convenient for humans and are used in computational tasks. In addition,
octal and hexadecimal numbers avoid the unwieldy strings that result from writing numbers in
binary. For example, the three-digit decimal number 513 requires ten digits in pure binary
(1000000001) but only three (201) in hexadecimal.
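The compactness claim is easy to verify, for instance with Python's format() base specifiers:

```python
# Decimal 513 rendered in binary and hexadecimal.
n = 513
print(format(n, "b"))  # 1000000001 -> ten binary digits
print(format(n, "x"))  # 201        -> three hexadecimal digits
```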

Binary Number System:


The digital computer provides accurate solutions to problems by performing arithmetic
computations. Numbers are not expressed as decimal numbers within the computer, because
decimal is not suitable for machine processes. Computers are not only powered by electricity, they
compute with electricity, shifting voltage pulses around internally. When numbers are represented
in a computer’s memory by means of small electrical circuits, a number system with only two
symbols is used. These symbols are the ON and OFF states of the circuit. This system of
representing numbers is known as the binary number system. Circuits allow electricity to flow or to
be blocked depending on the type of circuit. Computer circuits are made of transistors, which have
only two states, ON and OFF. ON is interpreted as 1, while OFF is interpreted as 0. Similar to the
decimal system, the position of a digit in a number indicates its value. Instead of ones, tens,
hundreds, thousands, etc., as in the decimal system, the columns in the binary system contain ones,
twos, fours, eights, etc. Each additional column to the left represents the next power of 2;
specifically, each place in the number represents two times the value of the place to its right. The
table below represents the first 10 binary numbers.
Decimal Numbers    Binary Numbers (2^4 2^3 2^2 2^1 2^0 = 16 8 4 2 1)
0                  0
1                  1
2                  1 0
3                  1 1
4                  1 0 0
5                  1 0 1
6                  1 1 0
7                  1 1 1
8                  1 0 0 0
9                  1 0 0 1

Octal Number System:


The octal number system is a base-8 system, having eight admissible marks: 0, 1, 2, 3, 4, 5, 6, and
7, with no 8’s or 9’s in the system. This system is a positional notation number system. The octal
system uses powers of 8 to determine the value of a digit at a given position.
Decimal Number Binary Number Octal Number
0 000 0
1 001 1
2 010 2
3 011 3
4 100 4
5 101 5
6 110 6
7 111 7

Hexadecimal Number System:


The hexadecimal system is similar to the decimal, binary, and octal number systems, except that its
base is 16. Each digit position in a hexadecimal number represents a power of 16. This system uses
the digits 0 to 9 and the characters A to F to represent 10 to 15, respectively. The largest
hexadecimal digit, F, is equivalent to binary 1111.
Decimal Numbers Binary Numbers Octal Numbers Hexadecimal Numbers
0 0000 00 0
1 0001 01 1
2 0010 02 2
3 0011 03 3
4 0100 04 4
5 0101 05 5
6 0110 06 6
7 0111 07 7
8 1000 10 8
9 1001 11 9
10 1010 12 A
11 1011 13 B
12 1100 14 C
13 1101 15 D
14 1110 16 E
15 1111 17 F
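The table above can be regenerated with Python's base-conversion format specifiers (`b` for binary, `o` for octal, `X` for uppercase hexadecimal):

```python
# Print decimal 0-15 alongside their binary, octal, and
# hexadecimal representations, one row per number.
for n in range(16):
    print(f"{n:2d}  {n:04b}  {n:02o}  {n:X}")
```

For example, the row for 10 reads `10  1010  12  A`, matching the table.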


Table Showing x^n:

n     2^n        8^n                  16^n
0     1          1                    1
1     2          8                    16
2     4          64                   256
3     8          512                  4096
4     16         4096                 65536
5     32         32768                1048576
6     64         262144               16777216
7     128        2097152              268435456
8     256        16777216             4294967296
9     512        134217728            68719476736
10    1024       1073741824           1099511627776
11    2048       8589934592           17592186044416
12    4096       68719476736          281474976710656
13    8192       549755813888         4503599627370496
14    16384      4398046511104        72057594037927936
15    32768      35184372088832       1152921504606846976
16    65536      281474976710656      18446744073709551616

2.8.2. Conversion Between Number Systems:

Computers and other digital systems process information as their primary function. Therefore, it is necessary to have methods for representing information in forms that can be manipulated and stored using electronic hardware. As discussed earlier, internally a computer uses binary numbers for data representation, whereas externally it uses decimal numbers. However, any number in one number system can be represented in another number system. The various techniques used to convert numbers from one base to another are described below.
Converting Decimal To Binary, Octal, And Hexadecimal:

Conversion of a decimal number into another number system is usually done using the 'remainder' method. This method involves the following steps:
1. Divide the decimal number by the base of the target number system. That is, to convert
decimal to binary, divide the decimal number by 2 (the base of the binary number system),
by 8 for octal, and by 16 for hexadecimal.
2. Note the remainder separately as the first digit from the right. In case of hexadecimal, if the
remainder exceeds 9, convert the remainder into equivalent hexadecimal form. For example,
if the remainder is 10 then note the remainder as A.
3. Continually repeat the process of dividing until the quotient is zero and keep writing the
remainders after each step of division.
4. Finally, when no more division can occur, write down the remainders in reverse order.
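The four steps above can be sketched in Python; `decimal_to_base` is a hypothetical helper name, not something defined in this chapter:

```python
def decimal_to_base(n, base):
    """Convert a non-negative decimal integer to the given base
    using the repeated-division 'remainder' method."""
    symbols = "0123456789ABCDEF"  # digit symbols for bases up to 16
    if n == 0:
        return "0"
    result = ""
    while n > 0:
        n, r = divmod(n, base)        # quotient and remainder (steps 1-3)
        result = symbols[r] + result  # remainders read in reverse order (step 4)
    return result

print(decimal_to_base(15407, 2))   # 11110000101111
print(decimal_to_base(15407, 8))   # 36057
print(decimal_to_base(15407, 16))  # 3C2F
```

The same loop handles all three target bases; only the divisor changes.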
Example 1: Determine the binary equivalent of (15407)10:
             Remainder
2 | 15407  →  1   (Least Significant Bit)
2 | 7703   →  1
2 | 3851   →  1
2 | 1925   →  1
2 | 962    →  0
2 | 481    →  1
2 | 240    →  0
2 | 120    →  0
2 | 60     →  0
2 | 30     →  0
2 | 15     →  1
2 | 7      →  1
2 | 3      →  1
    1      →  1   (Most Significant Bit)

Taking remainders in reverse order, we have 11110000101111. Thus, the binary equivalent of
(15407)10 is (11110000101111)2.
Example 2: Determine the octal equivalent of (15407)10:

             Remainder
8 | 15407  →  7   (Least Significant Bit)
8 | 1925   →  5
8 | 240    →  0
8 | 30     →  6
8 | 3      →  3   (Most Significant Bit)

Taking remainders in reverse order, we have 36057. Thus, the octal equivalent of (15407)10 is
(36057)8.
Example 3: Determine the Hexadecimal equivalent of (15407)10:

              Remainder
16 | 15407  →  15 = F   (Least Significant Bit)
16 | 962    →  2
16 | 60     →  12 = C
16 | 3      →  3   (Most Significant Bit)
Taking remainders in reverse order, we have 3C2F. Thus, the hexadecimal equivalent of (15407)10 is
(3C2F)16.
Converting Binary, Octal, And Hexadecimal To Decimal:

To convert a binary, octal, or hexadecimal number to decimal, multiply each digit by the weight of its position, and then add all the weighted values together to get the decimal number.
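As a sketch, the weighted-position method can be written in Python (the helper name `base_to_decimal` is illustrative only):

```python
def base_to_decimal(digits, base):
    """Sum digit x (base ** position) over all digits -- the
    weighted-position method described above."""
    symbols = "0123456789ABCDEF"
    value = 0
    # enumerate from the rightmost digit, whose position (weight) is 0
    for position, ch in enumerate(reversed(digits)):
        value += symbols.index(ch.upper()) * (base ** position)
    return value

print(base_to_decimal("11110000101111", 2))  # 15407
print(base_to_decimal("36057", 8))           # 15407
print(base_to_decimal("3C2F", 16))           # 15407
```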
Example 1: Determine the decimal equivalent of (11110000101111)2.

Binary Number:        1      1      1      1      0     0     0     0    1    0    1   1   1   1
Weight of Each Bit:   2^13   2^12   2^11   2^10   2^9   2^8   2^7   2^6  2^5  2^4  2^3 2^2 2^1 2^0
Weighted Value:       8192x1 4096x1 2048x1 1024x1 512x0 256x0 128x0 64x0 32x1 16x0 8x1 4x1 2x1 1x1
Solved Multiplication: 8192  4096   2048   1024   0     0     0     0    32   0    8   4   2   1

Sum of weights of all bits = 8192 + 4096 + 2048 + 1024 + 0 + 0 + 0 + 0 + 32 + 0 + 8 + 4 + 2 + 1 = 15407.
Thus, the decimal equivalent of (11110000101111)2 is (15407)10.

Example 2: Determine the decimal equivalent of (36057)8.


Octal Number 3 6 0 5 7
Weight of Each Bit 4 3 2 1 0
4 3 2 1 0
8 x3 8 x6 8 x0 8 x5 8 x7
Weighted Value
4096 x 3 512 x 6 64 x 0 8x5 1x7
Solved Multiplication 12288 3072 0 40 7
Sum of weight of all bits = 12288 + 3272 + 0 + 40 + 7 = 15407
Thus, the decimal equivalent of (36057)8 is (15407) 10.

Example 3: Determine the decimal equivalent of (3C2F)16.


C F
Hexadecimal Number 3 2
12 15
Weight of Each Bit 3 2 1 0
3 2 1 0
16 x 3 16 x 12 16 x 2 16 x 15
Weighted Value
4096 x 3 256 x 12 16 x 2 1 x 15
Solved Multiplication 12288 3072 32 15
Sum of weight of all bits = 12288 + 3072 + 32 + 15 = 15407
Thus, the decimal equivalent of (3C2F)16 is (15407) 10.
Converting Among Binary, Octal, And Hexadecimal:

Converting among binary, octal, and hexadecimal can be accomplished easily without converting to decimal first, as the bases of all three systems (2, 8, and 16) are powers of 2. Any octal digit can be written as a group of three binary digits, while any hexadecimal digit corresponds to a group of four binary digits.
Example 1: Determine the octal equivalent of (11110000101111)2.

Binary Number:   011  110  000  101  111
Octal Number:     3    6    0    5    7

The octal equivalent of (11110000101111)2 is (36057)8.
Example 2: Determine the hexadecimal equivalent of (11110000101111)2.

Binary Number:        0011    1100    0010    1111
Hexadecimal Number:     3    12 = C     2    15 = F

The hexadecimal equivalent of (11110000101111)2 is (3C2F)16.
Example 3: Determine the binary equivalent of (36057)8.

Octal Number:     3    6    0    5    7
Binary Number:   011  110  000  101  111

The binary equivalent of (36057)8 is (011110000101111)2.
Example 4: Determine the binary equivalent of (3C2F)16.

Hexadecimal Number:     3    C (12)    2    F (15)
Binary Number:        0011    1100   0010   1111

The binary equivalent of (3C2F)16 is (0011110000101111)2.
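Because the grouping in the examples above is purely mechanical, it is easy to automate. A possible Python sketch (the helper name is made up for illustration):

```python
def binary_to_base(bits, group):
    """Convert a binary string to octal (group=3) or hexadecimal (group=4)
    by grouping bits from the right, as in the examples above."""
    symbols = "0123456789ABCDEF"
    # pad on the left so the length is an exact multiple of the group size
    width = (len(bits) + group - 1) // group * group
    bits = bits.zfill(width)
    out = ""
    for i in range(0, len(bits), group):
        out += symbols[int(bits[i:i + group], 2)]  # each group maps to one digit
    return out

print(binary_to_base("11110000101111", 3))  # 36057  (octal)
print(binary_to_base("11110000101111", 4))  # 3C2F   (hexadecimal)
```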
Converting Between Octal And Hexadecimal:

Conversion of an octal number to a hexadecimal number is accomplished by the following steps:
1. Convert each octal digit to its 3-bit binary form.
2. Combine all the 3-bit binary groups into a single binary number.
3. Segregate the binary digits into 4-bit groups, starting from the rightmost bit (LSB) and
moving towards the leftmost bit (MSB).
4. Finally, convert these 4-bit groups into their respective hexadecimal symbols.
Example 1: Determine the hexadecimal equivalent of (36057)8.

Octal Number:                     3    6    0    5    7
Binary Number:                   011  110  000  101  111
Combination of Binary Numbers:   011110000101111
Segregating into 4-Bit Groups:   0011    1100   0010   1111
Hexadecimal Number:                3   12 = C     2   15 = F

Thus, the hexadecimal equivalent of (36057)8 is (3C2F)16.
The method for converting a hexadecimal number to an octal number is the same as the octal-to-hexadecimal conversion, except that each hexadecimal digit is converted into its 4-bit binary form; after grouping all the 4-bit binary blocks, the result is segregated into 3-bit groups. Finally, these 3-bit groups are converted into octal symbols.
Example 2: Determine the octal equivalent of (3C2F)16.

Hexadecimal Number:                3    C (12)    2    F (15)
Binary Number:                   0011    1100   0010   1111
Combination of Binary Numbers:   0011110000101111
Segregating into 3-Bit Groups:   (0) 011  110  000  101  111
Octal Number:                          3    6    0    5    7

Thus, the octal equivalent of (3C2F)16 is (36057)8.
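The octal-to-hexadecimal steps can likewise be chained through binary. A rough Python sketch, with an illustrative function name:

```python
def octal_to_hex(octal):
    """Octal -> binary (3-bit groups) -> regroup into 4 bits -> hex,
    following the steps listed above."""
    # step 1 and 2: each octal digit becomes a 3-bit group, then concatenate
    bits = "".join(format(int(d, 8), "03b") for d in octal)
    # step 3: pad on the left so the bits split evenly into 4-bit groups
    bits = bits.zfill((len(bits) + 3) // 4 * 4)
    # step 4: each 4-bit group becomes one hexadecimal symbol
    hex_digits = "".join(format(int(bits[i:i + 4], 2), "X")
                         for i in range(0, len(bits), 4))
    return hex_digits.lstrip("0") or "0"

print(octal_to_hex("36057"))  # 3C2F
```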
Converting Fractional Section of Decimal To Binary:

The fractional part of a decimal number is converted into other number systems using the multiplication method. This method involves the following steps:
1. Repeatedly multiply the fractional decimal number by the base of the target number system.
That is, to convert decimal to binary, multiply by 2 (the base of the binary number system),
by 8 for octal, and by 16 for hexadecimal.
2. Note the integer (whole number) part separately as the next digit from the left. In case of
hexadecimal, if the integer part exceeds 9, convert it into its equivalent hexadecimal
symbol. For example, if the integer part is 10, note it as A.
3. Continually repeat the process of multiplication until the fractional part is zero or until we
have enough digits to satisfy our representational requirements.
4. Finally, read the noted digits from top to bottom to form the result.
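A minimal Python sketch of the multiplication method for fractions (the function name is illustrative; `max_bits` caps the loop as allowed by step 3):

```python
def fraction_to_binary(frac, max_bits=12):
    """Convert the fractional part of a decimal number to binary using
    the repeated-multiplication method described above."""
    bits = ""
    while frac > 0 and len(bits) < max_bits:
        frac *= 2            # step 1: multiply by the target base
        if frac >= 1:
            bits += "1"      # step 2: note the integer part
            frac -= 1
        else:
            bits += "0"
    return "0." + (bits or "0")

print(fraction_to_binary(0.375))  # 0.011
```

Note that many decimal fractions (for example 0.1) never reach a fractional part of exactly zero, which is why the loop is capped.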
Example 1: Determine the binary equivalent of (0.375)10:

                           Integer Part
0.375 x 2  =  0.750   →   0   (Most Significant Bit)
0.750 x 2  =  1.500   →   1
0.500 x 2  =  1.000   →   1   (Least Significant Bit)

Taking the integer parts in order from top to bottom, we have 011. Thus, the binary equivalent of (0.375)10 is (0.011)2.
Converting Fractional Binary, Octal, And Hexadecimal To Decimal:

To convert the fractional part of a binary, octal, or hexadecimal number to decimal, multiply each digit by the (negative-power) weight of its position, and then add the weighted values together to get the decimal number.
Example 1: Determine the decimal equivalent of (0.011)2.

Binary Number:           0           1           1
Weight of Each Bit:      2^-1        2^-2        2^-3
Weighted Value:          1/2 x 0     1/4 x 1     1/8 x 1
                         0.500 x 0   0.250 x 1   0.125 x 1
Solved Multiplication:   0.000       0.250       0.125

Sum of weights of all bits = 0.000 + 0.250 + 0.125 = 0.375.
Thus, the decimal equivalent of (0.011)2 is (0.375)10.
2.9. BINARY ARITHMETIC:
Binary Addition:
Rules:
0+0=0
0+1=1
1+0=1
1 + 1 = 10, here 0 is kept and 1 is carried forward to the next higher bit.
Example: 10101101 + 110010 = ?

   Carry:    1
           10101101        173
         +   110010      +  50
         ----------      -----
           11011111        223
         ----------      -----
Binary Subtraction:
Rules:
0 – 0 = 0
0 – 1 = 1, here 1 is borrowed from the next higher bit.
1 – 0 = 1
1 – 1 = 0
Example: 10101101 - 110010 = ?

   Borrow:  0111 1
           10101101        173
         -   110010      -  50
         ----------      -----
            1111011        123
         ----------      -----
Binary Multiplication:
Rules:
0x0=0
0x1=0
1x0=0
1x1=1
Example: 10101101 x 1101 = ?

              10101101            173
         x        1101         x   13
         -------------         ------
              10101101            519
             00000000          + 1730
            10101101           ------
         + 10101101              2249
         -------------         ------
          100011001001
         -------------
Binary Division:
Rules:
0÷1=0
1÷1=1
Example: 1111101 ÷ 101 = ?

         _______
   101 ) 1111101 ( 11001
         101
         ----
          101
          101
          ----
             101
             101
             ----
               0
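All four operations above can be cross-checked in Python, which parses binary strings with `int(x, 2)` and prints binary with `bin()`:

```python
a = int("10101101", 2)   # 173
b = int("110010", 2)     # 50

print(bin(a + b))                               # addition:       0b11011111 (223)
print(bin(a - b))                               # subtraction:    0b1111011  (123)
print(bin(a * int("1101", 2)))                  # multiplication: 0b100011001001 (2249)
print(bin(int("1111101", 2) // int("101", 2)))  # division:       0b11001 (25)
```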
1's Complement of Binary Numbers:

In the 1's complement of a binary number, all the 1's become 0's and all the 0's become 1's.
For example, the 1's complement of (101100110)2 is (010011001)2.

Given Binary Number  →  101100110
------------------------------------------------------------------------
1's Complement       →  010011001
------------------------------------------------------------------------
2's Complement of Binary Numbers:

To find the 2's complement of a binary number, first find its 1's complement and then add 1 to the result.
For example, the 2's complement of (101100110)2 is its 1's complement 010011001 + 1 =
(010011010)2.

Given Binary Number  →  101100110
------------------------------------------------------------------------
1's Complement       →  010011001
Add 1 to it              +      1
------------------------------------------------------------------------
2's Complement       →  010011010
------------------------------------------------------------------------
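Both complements can be computed with short string helpers in Python (hypothetical names, fixed to the bit-width of the input):

```python
def ones_complement(bits):
    """Flip every bit: the 1's complement."""
    return "".join("1" if b == "0" else "0" for b in bits)

def twos_complement(bits):
    """1's complement plus one, kept to the same bit-width."""
    width = len(bits)
    value = int(ones_complement(bits), 2) + 1
    return format(value % (2 ** width), "0{}b".format(width))

print(ones_complement("101100110"))  # 010011001
print(twos_complement("101100110"))  # 010011010
```

The modulo in `twos_complement` discards the carry out of the top bit, which matches how fixed-width hardware registers behave.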
2.10. CODING SCHEMES:
In today’s technology, the binary number system is used by the computer to represent data in a computer-understandable format. Numeric data (0, 1, 2, ..., 9) is not the only form of data handled by a computer. Alphanumeric data (strings of symbols from the letters A, B, C, ..., Z and the digits 0, 1, 2, ..., 9) and special characters such as =, -, +, *, /, (, ), etc. are also processed. There are many ways to represent numeric, alphabetic, and special characters in a computer’s internal storage area. In computers, a code is made up of fixed-size groups of binary positions. Each binary position in a group is assigned a specific value; for example 8, 4, 2, or 1. In this way, every character can be represented by a unique combination of bits. Moreover, data can be arranged in a way that is simple and easy to decode, or transmitted with varying degrees of redundancy for error detection and correction. Although there are many coding schemes available for representing characters, the most commonly used coding systems are the American Standard Code for Information Interchange (ASCII) code and Unicode.
Binary Coded Decimal:

The Binary Coded Decimal (BCD) code is one of the early computer codes. It is based on the idea of converting each digit of a decimal number into its binary equivalent, rather than converting the entire decimal value into a pure binary form. This makes the conversion process easier.
The BCD equivalent of each decimal digit from 0 to 9 requires 4 bits (1 nibble), hence every decimal digit is represented by 4 bits. For example, (42)10 can be represented with 4 as 0100 and 2 as 0010, producing 01000010 in BCD.
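The digit-by-digit encoding can be sketched in a line of Python (`decimal_to_bcd` is an illustrative name):

```python
def decimal_to_bcd(n):
    """Encode each decimal digit of n as its own 4-bit group (BCD)."""
    return "".join(format(int(d), "04b") for d in str(n))

print(decimal_to_bcd(42))  # 01000010
```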
The 4-bit BCD coding system can be used to represent only decimal numbers, because 4 bits are insufficient to represent the various characters used by a computer. Hence, instead of using 4 bits with only 16 possible characters, computer designers commonly use 6 bits to represent characters in BCD code. In the 6-bit BCD code, the four BCD numeric place positions are retained, but two additional zone positions are added. With 6 bits, it is possible to represent 64 (2^6) different characters. This is a sufficient number to code the 10 decimal digits, 26 alphabetic letters, and 28 other special characters.
   BCD Code                      BCD Code
Zone  Digit   Character      Zone  Digit   Character
 11   0001       A            01   0001       S
 11   0010       B            01   0010       T
 11   0011       C            01   0011       U
 11   0100       D            01   0100       V
 11   0101       E            01   0101       W
 11   0110       F            01   0110       X
 11   0111       G            01   0111       Y
 11   1000       H            01   1000       Z
 11   1001       I            00   0001       1
 10   0001       J            00   0010       2
 10   0010       K            00   0011       3
 10   0011       L            00   0100       4
 10   0100       M            00   0101       5
 10   0101       N            00   0110       6
 10   0110       O            00   0111       7
 10   0111       P            00   1000       8
 10   1000       Q            00   1001       9
 10   1001       R            00   1010       0
Extended Binary Coded Decimal Interchange Code:

The major problem with the BCD code is that only 64 different characters can be represented in it. This is not sufficient to provide for the 10 decimal digits, 26 alphabetic letters, and the large number of other special characters.

Hence, the BCD code was extended from a 6 bits code to an 8 bits code. The added 2 bits are used as
additional zone bits, expanding the zone to 4 bits. The resulting code is called the Extended Binary
Coded Decimal Interchange Code (EBCDIC). In this code, it is possible to represent 256 (28)
different characters, instead of 64 (26). In addition to the various character requirements mentioned,
this also allows a large varity of printable characters and several nonprintable control characters. The
control characters are used to control such activities as printer vertical spacing, movement of cursor
on the terminal screen, etc. All of the 256 bits combinations have not yet been assigned characters.
Hence, the code can still grow, as new requirements develop.
American Standard Code for Information Interchange Code:

The standard binary code for alphanumeric characters is ASCII. This code was originally designed as a 7-bit code. Several computer manufacturers cooperated to develop this code for transmitting and processing data. A later extension made use of all eight bits, providing 256 symbols. Nevertheless, IBM did not change the original set of 128 codes, so that the original instructions and data could still work with the new character set. ASCII is commonly used in the transmission of data through data communication and is used almost exclusively to represent data internally in microcomputers. In ASCII, upper case letters are assigned codes beginning with hexadecimal value 41 and continuing sequentially through hexadecimal value 5A, and lower case letters are assigned hexadecimal values 61 through 7A. The decimal digits 0 to 9 are assigned the zone code 0011 in ASCII. The ASCII coding chart shows upper case and lower case alphabetic characters and the numeric digits 0 to 9. The standard ASCII code defines 128 character codes (0 to 127), of which the first 32 are control codes (non-printable) and the other 96 are representable (printable) characters.
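The hexadecimal ranges quoted above can be verified with Python's built-in `ord()` and `format()`:

```python
# Upper case letters run from hex 41 to 5A, lower case from 61 to 7A.
print(format(ord("A"), "X"), format(ord("Z"), "X"))  # 41 5A
print(format(ord("a"), "X"), format(ord("z"), "X"))  # 61 7A

# A decimal digit's 8-bit ASCII pattern: zone 0011 followed by the digit's BCD value.
print(format(ord("5"), "08b"))  # 00110101
```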
Unicode:
Before the invention of Unicode, hundreds of different encoding systems for assigning numbers to characters were in use. No single encoding system could contain enough characters, which made the task very difficult. Even for a single language like English, no single encoding was adequate for all the letters, punctuation, and technical symbols in common use. Moreover, these encoding systems also conflicted with one another. Therefore, to overcome these issues, the Unicode encoding system was developed.
Unicode is a universal character-encoding standard for the representation of text for computer processing. It offers a consistent way of encoding multilingual plain text. The standard provides the capacity to encode all the characters used in the different languages of the world. To keep character coding simple and efficient, the Unicode standard allocates a unique numeric value and name to each character. The original objective behind Unicode was to use a single 16-bit encoding that provides code points for more than 65,000 characters and to support the characters of the major languages of the world. The Unicode standard incorporates punctuation marks, mathematical symbols, technical symbols, arrows, and many more characters.
Even though a character may be used in more than one language, it is defined only once in Unicode.
For example, the Latin capital letter “A” is mapped once, even though it is used in English, German,
and in Japanese. On the other hand, it was decided that Cyrillic capital letter “A” is a different
character from Latin capital letter “A”, even though the two letters look a lot like each other. The
reasoning behind decisions like these is interesting to linguists, but usually not important to
programmers.
UTF Formats: Unicode characters can be encoded in two basic transformation formats, namely,
UTF-8 and UTF-16. UTF-8 (Unicode Transformation Format-8) is a lossless encoding of
Unicode characters. This format encodes each Unicode character as a variable number of 1 to 4
octets (bytes), where the number of octets depends on the character's code point value.

In the UTF-16 encoding, characters are represented using either one or two unsigned 16-bit
integers, depending on the character value, for storage or transmission through data networks.
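The variable lengths of both formats are easy to observe with Python's built-in codecs; the sample characters are chosen only for illustration:

```python
# Byte counts grow with the code point: ASCII needs 1 UTF-8 byte, while a
# character outside the Basic Multilingual Plane (such as U+1D11E) needs
# 4 bytes in UTF-8 and two 16-bit units (a surrogate pair) in UTF-16.
for ch in ("A", "\u00e9", "\u20ac", "\U0001d11e"):   # A, é, €, musical G clef
    utf8 = ch.encode("utf-8")
    utf16 = ch.encode("utf-16-be")
    print(ch, len(utf8), "UTF-8 byte(s),", len(utf16) // 2, "UTF-16 unit(s)")
```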