Chapter 2 - Computer Organization & Architecture
2.1. INTRODUCTION
Just as a tall building can be described at different levels of detail, namely the number of floors, the size of rooms, and the placement of doors and windows, each computer has a visible structure, which is referred to as its “architecture”. One can look at a computer’s architecture at similar levels of hardware elements, which in turn depend on the type of computer (personal computer, supercomputer, and so on) required. Therefore, when we talk about architecture in terms of computers, it is defined as the science of selecting and interconnecting hardware components to create computers that meet functional, performance, and cost goals.
Extending the concept of architecture and making these hardware components to work in a
harmonized manner in order to achieve a common objective in an environment is known as
computer organization. The study of computer organization focuses more on the collective
contribution from the hardware peripherals than individual electronic components.
The central processing unit consists of three main subsystems: the Arithmetic/Logic Unit (ALU), the Control Unit (CU), and the Registers. The three subsystems work together to provide the operational capabilities of the computer.
• Arithmetic unit: The arithmetic unit contains the circuitry that is responsible for
performing the actual computing and carrying out the arithmetic calculations, such as
addition, subtraction, multiplication, and division. It can perform these operations at a very
high speed.
• Logic Unit: The logic unit enables the CPU to perform logical operations based on the instructions provided to it. These operations are logical comparisons between data items. The unit can compare numbers, letters, or special characters and can then take action based on the result of the comparison. The logical operations of the logic unit test for three conditions:
• Equal-to Condition: In a test for this condition, the arithmetic/logic unit compares two
values to determine if they are equal. For example, if the number of tickets sold equals the
number of seats in the auditorium, then the concert is declared sold out.
• Less-than Condition: To test this condition, the ALU compares values to determine if one is
less than another. For example, if the number of speeding tickets on a driver’s record is less
than three, then insurance rates are Rs.425/-; otherwise, the rates are Rs.500/-.
• Greater-than Condition: In this type of comparison, the computer determines if one value is greater than another. For example, if the number of hours a person works in a week is greater than 40, then every extra hour is paid at 1.5 times the usual hourly wage to compute overtime pay.
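The three comparison conditions can be sketched directly in code. The thresholds and rates below come from the chapter's own examples (sold-out concert, Rs.425/Rs.500 insurance rates, 1.5x overtime); the function names are our own:

```python
# Equal-to condition: tickets sold compared with seats available.
def is_sold_out(tickets_sold, seats):
    return tickets_sold == seats

# Less-than condition: Rs.425 if fewer than three tickets, else Rs.500.
def insurance_rate(speeding_tickets):
    return 425 if speeding_tickets < 3 else 500

# Greater-than condition: hours beyond 40 earn 1.5x the usual wage.
def weekly_pay(hours, hourly_wage):
    if hours > 40:
        return 40 * hourly_wage + (hours - 40) * 1.5 * hourly_wage
    return hours * hourly_wage

print(is_sold_out(500, 500))  # True
print(insurance_rate(2))      # 425
print(weekly_pay(45, 100))    # 4750.0
```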
2.2.2. Registers
Registers are special-purpose, high-speed temporary memory units. These are temporary storage areas for holding various types of information such as data, instructions, addresses, and the intermediate results of calculations. Essentially, they hold the information that the CPU is currently working on. Registers can be thought of as the CPU’s working memory, a special additional storage location that offers the advantage of speed. Registers work under the direction of the control unit to accept, hold, and transfer instructions or data and perform arithmetic or logical comparisons at high speed. The control unit uses a data storage register the way a store owner uses a cash register, as a temporary, convenient place to store the transactions. As soon as a particular instruction or piece of data is processed, the next instruction immediately replaces it, and the information that results from the processing is returned to main memory. The figure above shows the types of registers present inside a CPU.
Instruction addresses are normally stored in consecutive memory locations, and the instructions are executed sequentially. The control unit reads an instruction from memory at the address held in a specific register and executes it. The next instruction in the sequence is then fetched and executed, and so on. This type of instruction sequencing is possible only if there is a counter that holds the address of the next instruction to be executed. This counter is one of the registers, the program counter. The table below lists some of the important registers used in the CPU.
Program Counter (PC): A program counter keeps track of the next instruction to be executed.
Instruction Register (IR): An instruction register holds the instruction to be decoded by the control unit.
Memory Address Register (MAR): A memory address register holds the address of the next location in memory to be accessed.
Memory Buffer Register (MBR): A memory buffer register is used for storing data either coming to the CPU or being transferred by the CPU.
Accumulator (ACC): An accumulator is a general-purpose register used for storing temporary results produced by the arithmetic logic unit.
Data Register (DR): A data register is used for storing the operands and other data.
The size or the length of each register is determined by its function. For example, the memory
address register, which holds the address of the next location in memory to be accessed, must have
the same number of bits as the memory address. The instruction register holds the instruction to be executed and, therefore, should be of the same number of bits as the instruction.
The figure above illustrates how the control unit instructs the other parts of the CPU (the ALU, the registers, and the I/O devices) on what to do and when to do it. It also determines what data is needed, where it is stored, and where to store the results of the operation, and sends the control signals to the devices involved in executing the instructions. It administers the movement of the large amounts of instructions and
data used by the computer. In order to maintain the proper sequence of events required for any
processing task, the control unit uses clock inputs.
Only one component can transmit over a bus at one time, while one or more than one can receive that signal. A bus that connects all three components (CPU, memory, and I/O components) is called a system bus. A system bus consists of 50 to 100 separate lines, broadly categorized into three functional groups.
• Data Lines: Data lines provide a path for moving data between the system modules. The data lines are collectively known as the data bus. Normally, a data bus consists of 8, 16, or 32 separate lines. The number of lines present in the data bus is called the width of the data bus. The data bus width limits the maximum number of bits that can be transferred simultaneously between two modules, and it therefore helps determine the overall performance of a computer system.
• Address Lines: Address lines are used to designate the source or destination of the data on the data bus. As memory is arranged as a linear array of bytes or words, the CPU needs to specify the address of a particular location for reading or writing any information to memory. This address is supplied by the address bus (address lines are collectively called the address bus). Thus, the width of the address bus specifies the maximum possible memory supported by a system. For example, if a system has a 16-bit wide address bus, then it can have a memory size of up to 2^16 = 65,536 bytes.
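The relationship between address-bus width and maximum memory size can be checked directly; an n-bit address bus can select 2^n distinct locations:

```python
# Maximum memory addressable by an n-bit address bus: 2**n locations.
def max_memory_bytes(address_lines):
    return 2 ** address_lines

print(max_memory_bytes(16))  # 65536 bytes (64 KB)
print(max_memory_bytes(20))  # 1048576 bytes (1 MB)
print(max_memory_bytes(32))  # 4294967296 bytes (4 GB)
```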
• Control Lines: Control lines are used to control the access to data and address bus; this is
required as bus is a shared medium. The control lines are collectively called control bus.
These lines are used for transmission of commands and timing signals (which validate data
and address) between the system modules. Timing signals indicate whether data and address
information is valid or not whereas command signals specify which operations are to be
performed. Some of the control lines of bus are required for providing clock signals to
synchronize operations, and for resetting signals to initialize the modules. Control lines are
also required for reading/writing to I/O devices or memory. A control line used as a bus request indicates that a module needs to gain control of the bus; the bus grant control line indicates that a requesting module has been granted control of the bus.
Physically, a bus is a number of parallel electrical conductors. These circuits are normally imprinted
on printed circuit boards. The bus normally extends across most of the system components, which
can be tapped into the bus lines.
RAM directly provides the required information to the processor. It can be defined as a block of sequential memory locations, each of which has a unique address that determines its location and each of which contains a data element. Storage locations in main memory are addressed directly by the CPU’s instructions. RAM is volatile in nature, which means the information stored in it remains only as long as the power is switched on. As soon as the power is switched off, the information contained in it is lost.
ROM stores the initial start-up instructions and routines in the BIOS (Basic Input/Output System), which can only be read by the CPU each time it is switched on. The contents of ROM are not lost even in case of a sudden power failure, making it non-volatile in nature. The instructions in ROM are built into the electronic circuits of the chip and are called firmware. ROM is also random access in nature, which means the CPU can randomly access any location within it. Improvements in construction technology have produced more flexible kinds of ROM, namely, PROM (Programmable Read Only Memory), EPROM (Erasable Programmable Read Only Memory), and EEPROM (Electrically Erasable Programmable Read Only Memory).
The computer uses logic to determine which data is the most frequently accessed and keeps it in the cache. A cache is a piece of very fast memory, made from high-speed static RAM, that reduces the access time of the data. It is very expensive and generally incorporated in the processor, where valuable data and program segments are kept. Cache memory can be categorized into three levels: L1 cache, L2 cache, and L3 cache.
L2 Cache: The L2 cache is larger but slower than the L1 cache. It is used to hold recent accesses that are not captured by the L1 cache and is usually 64 KB to 2 MB in size. An L2 cache is also found on the CPU. If the L1 and L2 caches are used together, then information missing from the L1 cache can be retrieved quickly from the L2 cache.
L3 Cache: L3 cache memory is an enhanced form of memory present on the motherboard of the computer. It is an extra cache built in between the processor and main memory to speed up processing operations. It reduces the time gap between a request and the retrieval of data and instructions, serving them much more quickly than main memory. L3 caches are used with modern processors and can hold more than 3 MB of storage.
The main concern in processor-memory communication is the speed mismatch between the memory and the processor: memory access time is generally much slower than the processor’s speed. This speed mismatch is reduced by using a small, fast memory as an intermediate buffer between processor and memory, called the cache.
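The caching principle can be sketched in a few lines: a small, fast store in front of a large, slow one. The FIFO eviction policy and the sizes below are illustrative choices, not how real hardware caches are built:

```python
# A minimal sketch of the cache idea described above. Names are our own.
slow_memory = {addr: addr * 2 for addr in range(1024)}  # stand-in for RAM
cache = {}
CACHE_SIZE = 4

def read(addr):
    if addr in cache:                 # cache hit: served at cache speed
        return cache[addr]
    value = slow_memory[addr]         # cache miss: go to slow main memory
    if len(cache) >= CACHE_SIZE:
        cache.pop(next(iter(cache)))  # evict the oldest entry (FIFO)
    cache[addr] = value
    return value

read(7)
print(7 in cache)  # True: a repeat read of address 7 is now a cache hit
```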
2.5. INSTRUCTION CYCLE
The basic function performed by a CPU is the execution of a program. The program to be executed
is a set of instructions, which are stored in memory. The CPU executes the instructions of the
program to complete a given task. The CPU fetches an instruction stored in the memory and then
executes the fetched instruction within the CPU before it can proceed to fetch the next instruction from memory. This process continues until the program specifies a stop. The instruction execution takes
place in the CPU registers, which are used as temporary storage areas and have limited storage
space. These CPU registers have been discussed earlier.
The processing needed for a single instruction (fetch and execution) is referred to as instruction
cycle. This instruction cycle consists of the fetch cycle and execute cycle.
Fetch Cycle:
In the beginning, the address stored in the program counter (PC) is transferred to the memory address register (MAR). The CPU then transfers the instruction located at the address stored in the MAR to the memory buffer register (MBR) through the data lines connecting the CPU to memory. This transfer from memory to CPU is coordinated by the control unit. To finish the cycle, the newly fetched instruction is transferred to the instruction register (IR) and, unless instructed otherwise, the CU increments the PC to point to the next address location.
The figure above illustrates the fetch cycle, which can be summarized in the following points:
1. PC → MAR
2. MAR → memory → MBR
3. MBR → IR
4. CU increments PC
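The fetch cycle described above can be simulated as a toy program, with Python variables standing in for the registers. The instruction strings and memory contents are invented for illustration:

```python
# Toy fetch-cycle walk-through: PC -> MAR -> memory -> MBR -> IR, then
# the PC is incremented to point to the next instruction.
memory = {0: "LOAD 10", 1: "ADD 11", 2: "STORE 12"}

pc = 0
fetched = []
while pc in memory:
    mar = pc           # 1. PC -> MAR
    mbr = memory[mar]  # 2. memory[MAR] -> MBR
    ir = mbr           # 3. MBR -> IR
    pc += 1            # 4. CU increments PC
    fetched.append(ir)

print(fetched)  # ['LOAD 10', 'ADD 11', 'STORE 12']
```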
After the CPU has finished fetching an instruction, the CU checks contents of the IR and determines
which type of execution is to be carried out next. This process is known as the decoding phase. The
instruction is now ready for the execution cycle.
Execute Cycle:
Once an instruction has been loaded into the IR, and the control unit has examined and decoded the
fetched instruction and determined the required course of action to take, the execution cycle can
commence. Unlike the fetch cycle and the interrupt cycle, both of which have a set instruction
sequence, the execute cycle can contain some complex operations. The actions within the execution
cycle can be categorized into the following four groups:
1. CPU - Memory: Data may be transferred from memory to CPU or from CPU to memory.
2. CPU - I/O: Data may be transferred from an I/O module to the CPU and vice versa.
3. Data Processing: The CPU may perform some arithmetic or logic operation on data via the
arithmetic-logic unit (ALU).
4. Control: An instruction may specify that the sequence of operation may be altered. For
example, the program counter may be updated with a new memory address to reflect that the
next instruction fetched should be read from this new location.
Consider, for example, a LOAD operation. This operation loads the accumulator (ACC) with data that is stored in the memory location specified in the instruction. The operation starts by transferring the address portion of the instruction from the IR to the memory address register (MAR). The CPU then transfers the data located at the address stored in the MAR to the memory buffer register (MBR) via the data lines connecting the CPU to memory. This transfer from memory to CPU is coordinated by the CU. To finish the cycle, the newly fetched data is transferred to the ACC. The illustrated LOAD operation (figure above) can be summarized in the following points:
1. IR(address) → MAR
2. MAR → memory → MBR
3. MBR → ACC
After the execution cycle completes, the next instruction is fetched and the process starts again.
Processors differ from one another by their instruction set. If the same program can run on two
different processors, they are said to be compatible. For example, programs written for IBM
computers may not run on Apple computers because these two architectures (different processors)
are not compatible. Since each processor has its unique instruction set, machine language programs
written for one processor will normally not run on a different processor. Therefore, all operating
systems and software programs are constructed within the boundaries of the processor’s instruction
set. Thus, the design of the instruction set for the processor becomes an important aspect of
computer architecture. Based upon the instruction sets, there are two common types of architectures,
Complex Instruction Set Computer (CISC) and Reduced Instruction Set Computer (RISC).
2.6.1. CISC Architecture:
CISC was developed to make compiler development easier. The sole motive of manufacturers of CISC-based processors was to build processors with a more extensive and complex instruction set, shifting most of the burden of generating machine instructions to the processor. For example, instead of requiring a compiler to write a long sequence of machine instructions to calculate a square root, a CISC processor incorporates hardwired circuitry for performing the square root in a single step. Writing instructions for a CISC processor is comparatively easy because a single instruction is
sufficient to utilize the built-in ability. In fact, the first PC microprocessors were CISC processors,
because all the instructions that the processor could execute were built into the processors. As
memory was expensive in the early days of computers, CISC processors saved memory because
their instructions could be fed directly into the processor. Most of the PCs today include a CISC
processor.
2.6.2. RISC Architecture:
Reduced Instruction Set Computer is a processor architecture that utilizes a small, highly optimized
set of instructions. The concept behind the RISC architecture is that a set of small, simple instructions executes faster than a single long, complex instruction. To implement this, the RISC architecture simplifies the instruction set of the processor, which helps in reducing the execution time.
Optimization of each instruction in the processor is done through a technique known as pipelining.
Pipelining allows the processor to work on different steps of the instruction at the same time; using
this technique, more instructions can be executed in a shorter time. This is achieved by overlapping
the fetch, decode, and execute cycles of two or more instructions. To reduce interactions with memory and the access time, the RISC design incorporates a larger number of registers.
As each instruction is executed directly by the processor, no hardwired circuitry (used for complex instructions) is required. This allows RISC processors to be smaller, consume less power, and run cooler than CISC processors. Due to these advantages, RISC processors are ideal for embedded applications, such as mobile phones, PDAs, and digital cameras. In addition, the simple design of a RISC processor reduces its development time compared to a CISC processor.
The difference between the RISC approach and the CISC approach can best be explained by an example that shows how each design carries out a task of five multiplications. The general steps required to perform the multiplications are:
1. Read the first number out of memory.
2. Read the second number out of memory.
3. Multiply the two numbers.
4. Write the result back to memory.
5. Repeat Steps 1 to 4 for each of the four remaining multiplications.
On a simple CISC-based CPU, the CPU is first configured to get (read) the numbers, then the numbers are read. Next, the CPU is configured to multiply the numbers, and then the numbers are multiplied. Next, the CPU is configured to write the result to memory, and finally the result is written to memory. To multiply five sets of numbers, the whole process must be repeated five times.
On a simple RISC CPU, the process is slightly different. A piece of hardware on the CPU is dedicated to reading the first number. When this operation is complete, another piece of hardware reads the second number. After completion of this operation, another piece of hardware performs the multiplication, and when it is completed, yet another writes the result to memory. If this operation happens five times in a row, these pieces of hardware overlap their work: while the hardware dedicated to obtaining the first number from memory fetches the first number for the second operation, the second number for the second operation is retrieved from memory, and the first number for the third operation is obtained. As the first result is written back to memory, the second multiplication is performed, while the second number for the third operation is read from memory and the first number for the fourth operation is read from memory, and so on.
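The overlapping described above is pipelining. A sketch with a three-stage pipeline (fetch, decode, execute) shows how successive instructions occupy different stages in the same clock cycle; the stage names and instruction labels are illustrative:

```python
# Sketch of a three-stage pipeline: each cycle, every stage works on a
# different instruction, so five instructions finish in 7 cycles, not 15.
stages = ["fetch", "decode", "execute"]
instructions = ["I1", "I2", "I3", "I4", "I5"]

pipelined = len(instructions) + len(stages) - 1  # overlapped execution
sequential = len(instructions) * len(stages)     # one instruction at a time

for cycle in range(pipelined):
    active = [f"{instructions[cycle - s]}:{stage}"
              for s, stage in enumerate(stages)
              if 0 <= cycle - s < len(instructions)]
    print(f"cycle {cycle + 1}: " + ", ".join(active))

print(f"{pipelined} cycles pipelined vs {sequential} sequential")
```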
• Structure: The system case provides a rigid structural framework for the components, ensuring that everything fits together and works in a well-organized manner.
• Protection: The system case protects the inside of the system from physical damage and electrical interference.
• Cooling: The case provides a cooling system for the vital components. Components that run at cool temperatures last longer and are less troublesome.
• Organization and Expandability: The system case is key to the physical organization of the system. If a system case is poorly designed, upgrading or expanding peripherals is limited.
• Status Display: The system case carries lights or LEDs that convey information from inside the box to the user.
System case encloses all the components, which are essential in running the computer system. These
components include motherboard, processors, memory, power supply, expansion slots, cables,
removable drives, and many others.
• Stability: A high quality power supply with sufficient capacity to meet the demands of the
computer provides years of stable power for the PC.
• Cooling: The power supply contains the main fan that controls the flow of air through the
system case. This fan is a major component in PC cooling system.
• Expandability: The capacity of the power supply determines the ability to add new drives to
the system or upgrade to a more powerful motherboard or processor.
2.7.2. Motherboard:
Motherboard, also known as system board, is a large multi-layered printed circuit board inside a
computer. The motherboard contains the CPU, the BIOS ROM chip, and the CMOS Setup
information. It has expansion slots for installing different adapter cards like video card, sound card,
network interface card, and modem. This circuit board provides a connector for the keyboard as well
as housing to the keyboard controller chip. It possesses RAM slots for the system’s random access
memory chips and provides the system’s chipset, controllers, and underlying circuitry (bus system)
to tie everything together. In a typical motherboard, the circuitry is imprinted on the surface of a firm planar board and is usually manufactured in a single piece. The most common motherboard design in today’s desktop computers is the ATX design. In ATX designs, the components included are: processor, coprocessors (optional), memory, BIOS, expansion slots, and interconnecting circuitry. Additional components can be added to a motherboard through its expansion slots. Nowadays, motherboards are designed with peripherals integrated as chips directly onto the board. Initially this was confined to audio and video chips, but in recent times the peripherals integrated in this way include SCSI, LAN, and RAID controllers. While there are cost benefits to this approach, the biggest downside is the restriction of future upgrade options. The figure below provides a
detailed look at the various components on motherboards.
BIOS:
BIOS (Basic Input/Output System) comprises a set of routines and start-up instructions inside a ROM (Read Only Memory). This gives two advantages to the computer: firstly, the code and data in the ROM BIOS need not be reloaded each time the computer is started; secondly, they cannot be corrupted by wayward applications that accidentally write into the wrong part of memory. The first part runs as soon as the machine is switched on. It inspects the computer to determine what hardware is fitted and then conducts a simple test, the POST (Power-On Self Test), for normal functionality. If all the tests are passed, the ROM then determines the drive from which to boot the machine. Most PCs have the BIOS set to check for the presence of an operating system on the primary hard disk drive. Once the machine is booted, the BIOS serves a different purpose by presenting DOS with a standardized API (Application Program Interface) for the PC hardware.
CMOS:
The motherboard includes a separate block of memory with very low power consumption, called the CMOS (complementary metal oxide semiconductor) chip. This chip is kept alive by a battery even when the PC’s power is off. The function of the CMOS chip is to store basic information about the PC’s configuration: the number and type of hard and floppy drives, the memory capacity, and so on. The other important data kept in CMOS memory are the system time and date. The clock, the CMOS chip, and the battery are usually all integrated into a single chip.
2.7.3. Ports And Interfaces:
“Ports and interfaces” is a generic name for the various “holes” (and their associated electronics) found at the back of the computer, through which external devices are connected to the computer’s motherboard. Different interfaces and ports run at varying speeds and work best with specific types of devices.
• PS/2 Ports: A PS/2 port is a standard serial port connector used to plug a computer mouse or keyboard into a personal computer. It is a small, round socket with 6 pins.
• Serial Ports: A serial port is a general-purpose communications port through which data is passed serially, that is, one bit at a time. These ports are used for transmitting data over long distances. In the past, most digital cameras were connected to a computer’s serial port in order to transfer images to the computer. However, because of their slow speed, these ports are now used mainly with devices such as the mouse and the modem.
• Parallel Port: A parallel port is an interface on a computer that supports transmission of multiple bits of data (usually 8 bits) at the same time. This port transmits data faster than a serial port and is commonly used for connecting peripherals such as printers and CD-ROM drives.
• SCSI Port: SCSI ports are used to transmit data to up to seven devices in a “daisy chain” fashion, at a speed faster than serial and parallel ports (usually 32 bits at a time). In a daisy chain, several devices are connected in series to each other, so that data for the seventh device needs to go through the other six devices first. These ports are a hardware interface that includes an expansion board plugged into the computer, called a SCSI host adapter or SCSI controller. Devices that can be connected to SCSI ports include hard-disk drives and network adapters.
• USB Port: A USB (Universal Serial Bus) port is a plug-and-play hardware interface for connecting peripherals such as the keyboard, mouse, joystick, scanner, printer, and modem. It supports a maximum bandwidth of 12 Mbps and has the capability to connect up to 127 devices. With a USB port, a new device can be added to the computer without adding an adapter card. These ports are a replacement for parallel and serial ports.
2.7.4. Expansion Cards:
An expansion card, also called an adapter card, is a circuit board that provides additional capabilities to the computer system. Adapter cards are made up of large-scale integrated circuit components installed on the board. The cards are plugged into the expansion sockets present on the computer’s motherboard to give the computer added functionality. Commonly available expansion cards connect monitors (for enhanced graphics) and microphones (for sound), each having a special purpose to perform. However, nowadays most of these adapters come built into the motherboard, and no expansion card is needed unless high performance is required.
SIMM:
Single In-Line Memory Modules (SIMMs) are small circuit boards designed to accommodate surface-mount memory chips. A typical SIMM comprises a number of RAM chips on a PCB (printed circuit board), which fits into a SIMM socket on a computer’s motherboard. These chips are packed into small plastic or ceramic dual in-line packages (DIPs), which are assembled into a memory module. A typical motherboard offers four SIMM sockets capable of taking either single-sided or double-sided SIMMs with module sizes of 4, 8, 16, 32, or even 64 MB. When 32-bit SIMMs are used with processors having a wider data bus, they have to be installed in pairs, with each pair of modules making up a memory bank. These chips support 32-bit data paths and were originally used with 32-bit CPUs. The CPU then communicates with a memory bank as one logical unit. SIMM chips usually come in two formats:
• A 30-pin SIMM, used in older system boards, which delivers one byte of data.
• A larger 72-pin SIMM, used in modern PCs, which delivers four bytes of data (plus parity) in every memory request.
DIMM:
With the increase in speed and bandwidth capability, a new standard for memory was adopted, called the dual in-line memory module (DIMM). These modules have 168 pins in two (dual) rows of contacts, one on each side of the card. With the additional pins, a CPU retrieves information from a DIMM 64 bits at a time, as compared to the 32-bit or 16-bit transfers of SIMMs. Some of the physical differences between 168-pin DIMMs and 72-pin SIMMs include the length of the module, the number of notches on the module, and the way the module is installed. The main difference between the two is that on a SIMM, opposing pins on either side of the board are tied together to form one electrical contact, while on a DIMM, opposing pins remain electrically isolated to form two separate contacts. DIMMs are often used in computer configurations that support a 64-bit or wider memory bus (like Intel’s Pentium IV).
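The practical effect of the wider module can be shown with a little arithmetic. The data-path widths come from the text above; the 64-byte transfer size is an assumed example:

```python
# Bytes delivered per memory request, from the module data-path widths.
simm_72pin_bytes = 32 // 8  # 72-pin SIMM: 32-bit path -> 4 bytes
dimm_bytes = 64 // 8        # DIMM: 64-bit path -> 8 bytes

block = 64  # an assumed 64-byte transfer, for illustration only
print(block // simm_72pin_bytes)  # 16 requests with a 72-pin SIMM
print(block // dimm_bytes)        # 8 requests with a DIMM
```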
2.7.8. Processors:
The processor, often called the CPU, is the central component of the computer. It is referred to as the brain of the computer, responsible for carrying out operations in an efficient and effective manner. The processor holds the key to all the processing and computational work. Every task the user performs on the computer is carried out either directly or indirectly by the processor. The following
factors should be considered while choosing a processor of a computer system:
• Performance: The processor’s capabilities dictate maximum performance of a system. It is
the most important single determinant of system performance (in terms of speed and
accuracy) in the computer.
• Speed: The speed of a processor defines how fast it can perform operations. There are many ways to indicate speed, but the most obvious measure is the internal clock speed of the CPU. The faster the internal clock of the processor, the faster the CPU will work and, generally, the more expensive the hardware will be.
• Software Support: New and faster processors support resource-consuming software in a
better manner. For example, new processors such as the Pentium IV, enable the use of
specialized software, which were not supported on earlier machines.
• Reliability and Stability: The reliability of the computer system directly depends on the
type and quality of the processor.
• Energy Consumption and Cooling: Although processors consume relatively little power compared to other system devices, newer processors consume a great deal of power, which has an impact on everything from the selection of cooling methods to overall system reliability.
• Motherboard Support: The type of processor used in the system is a major determining
factor of chipset used on the motherboards. The motherboard, in turn, dictates many facets of
the system’s capabilities and performance.
Manual counting had only a limited role, sufficing for simple computing tasks; computation that was more complex made humans depend on machines to perform the computing task efficiently and accurately. With the advancement of machines, different number systems were formed to make the task simple, accurate, and fast. These number systems underlie the digital logic design present in the modern-day computer system and opened a gateway to overcoming complex computation barriers. Precisely speaking, a number system defines a set of values used to represent
‘quantity’. Generally, one talks about a number of people attending class, or a number of modules
taken by each student, and use numbers to represent grades achieved by students in tests.
Quantifying values and items in relation to each other is helpful for us to make sense of our
environment. The number system can be categorized into two broad categories:
• Non-Positional Number Systems: In ancient times, people used to count with their fingers.
When fingers became insufficient for counting, stones and pebbles were used to indicate the
values. This method of counting is called the non-positional number system. It was very
difficult to perform arithmetic operations with such a number system, as it had no symbol for
zero. The most common non-positional number system is the Roman number system. Such systems
are clumsy, and calculations with large numbers are very difficult.
• Positional Number Systems: A positional number system is any system that uses a finite
set of symbols/digits to represent arbitrarily large numbers. These systems simplify
numerical calculation because only a finite set of digits is used. The value of each digit
in a number is defined not only by the symbol, but also by the symbol’s position. The most
popular positional number system in use today is the decimal number system.
For a computer, everything is in the digital form (binary form) whether it is number, alphabet,
punctuation mark, instruction, etc. Let us illustrate with the help of an example. Consider the word
‘PANDU’ that appears on the computer screen as a series of alphabetic characters. However, for the
computer, it is a combination of numbers. To the computer it appears as:
Character Representation P A N D U
Binary Representation 01010000 01000001 01001110 01000100 01010101
Decimal Representation 80 65 78 68 85
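The mapping above can be reproduced in a few lines of Python using the standard ASCII character codes (a small sketch for illustration, not part of the original text):

```python
# Print each character of the word with its binary and decimal ASCII code.
word = "PANDU"
for ch in word:
    code = ord(ch)                        # decimal character code
    print(ch, format(code, "08b"), code)  # e.g. P 01010000 80
```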
Generally, a user is hardly aware that the actual operations in a computer are carried out in
the binary number system. Traditionally, the two possible states of a binary system are
represented by the digits 0 and 1. Long before the introduction of octal and hexadecimal
numbers, programmers handled large binary numbers by grouping bits into convenient 3-bit or
4-bit groups. Later, the actual machine code for computer instructions was replaced by
mnemonics, comprising three or four letters, in the assembly language for a particular CPU. It
was also possible to use more than one base of numeration for writing data in these assembly
languages, so programmers made sure their assemblers could understand octal, hexadecimal, and
binary numbers. Although the computer itself understands only binary, octal and hexadecimal
numbers are convenient for humans and are widely used in computational tasks because they are
much more compact than binary. In addition, octal and hexadecimal notation avoids the unwieldy
strings that result from writing numbers in binary. For example, the three-digit decimal number
513 requires ten digits in pure binary (1000000001) but only three (201) in hexadecimal.
1. Divide the decimal number by the base of the target number system. That is, to convert
decimal to binary, divide the decimal number by 2 (the base of the binary number system), by 8
for octal, and by 16 for hexadecimal.
2. Note the remainder separately as the first digit from the right. In the case of hexadecimal,
if the remainder exceeds 9, convert the remainder into its equivalent hexadecimal form. For
example, if the remainder is 10, note it as A.
3. Repeat the process of division until the quotient is zero, writing down the remainder after
each step.
4. Finally, when no more division can occur, write down the remainders in reverse order.
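The four steps above can be sketched as a short Python function; `to_base` is a hypothetical helper name chosen for illustration:

```python
def to_base(n, base):
    """Convert a non-negative decimal integer to base 2, 8, or 16
    by repeated division, collecting the remainders in reverse order."""
    digits = "0123456789ABCDEF"    # remainders above 9 become A-F
    if n == 0:
        return "0"
    out = []
    while n > 0:
        n, r = divmod(n, base)     # quotient and remainder in one step
        out.append(digits[r])      # note the remainder
    return "".join(reversed(out))  # read the remainders bottom-up

print(to_base(15407, 2))   # 11110000101111
print(to_base(15407, 8))   # 36057
print(to_base(15407, 16))  # 3C2F
```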
Example 1: Determine the binary equivalent of (15407)10:
Division Quotient Remainder
15407 ÷ 2 7703 1 (Least Significant Bit)
7703 ÷ 2 3851 1
3851 ÷ 2 1925 1
1925 ÷ 2 962 1
962 ÷ 2 481 0
481 ÷ 2 240 1
240 ÷ 2 120 0
120 ÷ 2 60 0
60 ÷ 2 30 0
30 ÷ 2 15 0
15 ÷ 2 7 1
7 ÷ 2 3 1
3 ÷ 2 1 1
1 ÷ 2 0 1 (Most Significant Bit)
Taking remainders in reverse order, we have 11110000101111. Thus, the binary equivalent of
(15407)10 is (11110000101111)2.
Taking remainders in reverse order, we have 36057. Thus, the octal equivalent of (15407)10 is
(36057)8.
Taking remainders in reverse order, we have 3C2F. Thus, the hexadecimal equivalent of (15407)10 is
(3C2F)16.
Example 1: Determine the decimal equivalent of (11110000101111)2.
Bit Position Weight (2^position) Bit Product
13 8192 1 8192
12 4096 1 4096
11 2048 1 2048
10 1024 1 1024
9 512 0 0
8 256 0 0
7 128 0 0
6 64 0 0
5 32 1 32
4 16 0 0
3 8 1 8
2 4 1 4
1 2 1 2
0 1 1 1
Sum of the products of all bits = 8192 + 4096 + 2048 + 1024 + 0 + 0 + 0 + 0 + 32 + 0 + 8 + 4 + 2 + 1 = 15407.
Thus, the decimal equivalent of (11110000101111)2 is (15407)10.
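The weighted-sum calculation generalizes directly; this sketch (with a hypothetical helper name) repeats it in Python:

```python
def binary_to_decimal(bits):
    """Sum each bit times its positional weight 2**position,
    counting positions from 0 at the rightmost bit."""
    total = 0
    for position, bit in enumerate(reversed(bits)):
        total += int(bit) * 2 ** position
    return total

print(binary_to_decimal("11110000101111"))  # 15407
```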
The method used for converting a hexadecimal number to an octal number mirrors the octal-to-
hexadecimal conversion, except that each hexadecimal digit is converted into its 4-bit binary
form; after all the 4-bit blocks are concatenated, the combined bit string is regrouped into
3-bit blocks. Finally, each 3-bit block is converted into an octal symbol.
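That grouping procedure can be sketched as follows; `hex_to_octal` is a hypothetical helper written for illustration:

```python
def hex_to_octal(hex_str):
    """Hexadecimal -> octal via binary: expand each hex digit to 4 bits,
    then regroup the whole bit string into 3-bit blocks from the right."""
    bits = "".join(format(int(d, 16), "04b") for d in hex_str)
    bits = bits.lstrip("0") or "0"
    bits = "0" * ((-len(bits)) % 3) + bits   # left-pad to a multiple of 3
    return "".join(str(int(bits[i:i + 3], 2))
                   for i in range(0, len(bits), 3))

print(hex_to_octal("3C2F"))  # 36057
```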
1. Repeatedly multiply the decimal fraction by the base of the target number system. That is,
to convert a decimal fraction to binary, multiply it by 2 (the base of the binary number
system), by 8 for octal, and by 16 for hexadecimal.
2. Note the integer (whole-number) part separately as the first digit after the radix point. In
the case of hexadecimal, if the integer part exceeds 9, convert it into its equivalent
hexadecimal form. For example, if the integer part is 10, note it as A.
3. Repeat the process of multiplication on the remaining fractional part until the fractional
part is zero or until we have enough digits to satisfy our representational requirements.
4. Finally, read the digits from the top down to form the result.
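These steps translate to the following sketch (`fraction_to_binary` is a name chosen for illustration):

```python
def fraction_to_binary(frac, max_digits=16):
    """Convert a decimal fraction to binary by repeated multiplication
    by 2, taking the integer part at each step (read top to bottom)."""
    bits = []
    while frac > 0 and len(bits) < max_digits:
        frac *= 2
        whole = int(frac)          # integer / whole-number part
        bits.append(str(whole))
        frac -= whole              # continue with the fractional part
    return "0." + ("".join(bits) or "0")

print(fraction_to_binary(0.375))  # 0.011
```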
Example 1: Determine the binary equivalent of (0.375)10:
Multiplication Integer Part
0.375 × 2 = 0.750 0 (Most Significant Bit)
0.750 × 2 = 1.500 1
0.500 × 2 = 1.000 1 (Least Significant Bit)
Taking the integer parts in order from top to bottom, we have 011. Thus, the binary equivalent
of (0.375)10 is (0.011)2.
Binary Addition:
Rules:
0+0=0
0+1=1
1+0=1
1+1=10, that is, 0 with a carry of 1 to the next higher bit.
Carry 1
10101101 173
+ 110010 +50
----------------------------------- ----------------------------
11011111 223
----------------------------------- ----------------------------
Binary Subtraction:
Rules:
0–0=0
0–1=1 here 1 is borrowed from the next higher bit.
1–0=1
1–1=0
Borrow 0111 1
10101101 173
- 110010 -50
----------------------------------- ----------------------------
1111011 123
----------------------------------- ----------------------------
Binary Multiplication:
Rules:
0x0=0
0x1=0
1x0=0
1x1=1
    10101101            173
  ×     1101          ×  13
----------------------------------- ----------------------------
    10101101            519
+  00000000          + 1730
+ 10101101           ----------------------------
+10101101              2249
----------------------------------- ----------------------------
100011001001           2249
----------------------------------- ----------------------------
Binary Division:
Rules:
0÷1=0
1÷1=1
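The worked addition, subtraction, and multiplication examples above can be checked with Python's built-in base-2 conversion (a quick sanity check, not part of the original text):

```python
# Parse the binary operands and verify the results of the worked examples.
a = int("10101101", 2)                  # 173
b = int("110010", 2)                    # 50
print(format(a + b, "b"))               # 11011111     (223)
print(format(a - b, "b"))               # 1111011      (123)
print(format(a * int("1101", 2), "b"))  # 100011001001 (2249)
```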
The 4-bit BCD coding system can be used to represent only decimal numbers, because 4 bits are
insufficient to represent the various characters used by a computer. Hence, instead of using 4
bits with only 16 possible characters, computer designers commonly use 6 bits to represent
characters in BCD code. In the 6-bit BCD code, the four BCD numeric place positions are
retained, but two additional zone positions are added. With 6 bits, it is possible to represent
64 (26) different characters. This is sufficient to code the 10 decimal digits, 26 alphabetic
letters, and 28 other special characters.
Hence, the BCD code was extended from a 6-bit code to an 8-bit code. The added 2 bits are used
as additional zone bits, expanding the zone to 4 bits. The resulting code is called the Extended
Binary Coded Decimal Interchange Code (EBCDIC). In this code, it is possible to represent 256
(28) different characters, instead of 64 (26). In addition to meeting the various character
requirements mentioned, this also allows a large variety of printable characters and several
non-printable control characters. The control characters are used to control activities such as
printer vertical spacing and the movement of the cursor on the terminal screen. Not all of the
256 bit combinations have yet been assigned characters, so the code can still grow as new
requirements develop.
Unicode:
Before the invention of Unicode, hundreds of different systems for encoding characters as
numbers were in use. No single encoding system contained enough characters to cover all needs;
even for a single language like English, no single encoding was adequate for all the letters,
punctuation, and technical symbols in common use. Moreover, these encoding systems conflicted
with one another. The Unicode encoding system was developed to overcome these issues.
Even though a character may be used in more than one language, it is defined only once in Unicode.
For example, the Latin capital letter “A” is mapped once, even though it is used in English, German,
and in Japanese. On the other hand, the Cyrillic capital letter “A” was defined as a different
character from the Latin capital letter “A”, even though the two letters look alike. The
reasoning behind such decisions is interesting to linguists, but usually not important to
programmers.
UTF Formats: Unicode characters are most commonly encoded in two transformation formats,
namely, UTF-8 and UTF-16. UTF-8 (Unicode Transformation Format-8) is a lossless encoding of
Unicode characters. This format encodes each Unicode character as a variable-length sequence of
one to four octets (bytes).
In the UTF-16 encoding, characters are represented using either one or two unsigned 16-bit
integers, depending on the character value, for storage or transmission through data networks.
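The variable-length nature of both formats can be observed in Python, whose `str.encode` method performs these transformations (the sample characters are arbitrary):

```python
# Bytes needed per character in UTF-8 vs. UTF-16 (without a byte-order mark).
for ch in ["A", "é", "अ", "😀"]:
    utf8_len = len(ch.encode("utf-8"))
    utf16_len = len(ch.encode("utf-16-le"))
    print(ch, utf8_len, utf16_len)
```

Note that plain ASCII characters stay one byte in UTF-8 but always occupy two bytes in UTF-16, while characters outside the Basic Multilingual Plane need four bytes in either format.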