COA Based on William Stallings

This document provides a comprehensive overview of computer architecture, detailing its historical development from early mechanical computers to modern microprocessors, including key milestones like the Von Neumann architecture and the invention of the transistor. It contrasts RISC and CISC instruction sets, explains pipelining in CPU architecture, and discusses I/O communication methods such as interrupt-driven I/O and DMA. Additionally, the document compares various memory types and highlights the principle of locality and the role of cache memory in enhancing performance.

Introduction

Computer architecture is the backbone of all digital systems. From early calculators to today’s
smartphones and laptops, the structure of computers has evolved significantly. This assignment
explores the historical development of computer architecture, compares instruction set types,
explains how modern CPUs work, discusses how devices communicate, and describes the role of
different memory types.

1. Research the historical development of computer architecture, from early mechanical computers to modern microprocessors. Highlight key milestones such as the Von Neumann architecture, the invention of the transistor, and the rise of RISC and CISC.

Historical Development of Computer Architecture

The history of computer architecture is the story of how computers have improved over time —
from basic machines that could only do simple tasks to today’s powerful and fast
microprocessors. Here are some key stages in that journey:

1. Early Mechanical Computers

The first computers were mechanical. One famous example is Charles Babbage’s Analytical
Engine, designed in the 1830s. It was never fully built, but it had important ideas like using
memory, input/output, and a control unit — features we see in modern computers.

Another early machine was the Harvard Mark I (1944), an electromechanical computer that
was large and slow but could perform automatic calculations.

2. The Von Neumann Architecture (1945)

A major breakthrough came with John Von Neumann’s architecture design. In this model, the
program and data are stored in the same memory. This made it easier to write and run
programs. The CPU (Central Processing Unit) could fetch instructions one by one and carry
them out, which became the standard model for most computers.

Key features of Von Neumann architecture:

• A single memory for data and instructions
• Instructions processed step-by-step
• Input and output handled separately

This model is still the base of most modern computers.

3. The Invention of the Transistor (1947)

Before transistors, computers used vacuum tubes, which were large, hot, and unreliable. In 1947,
the transistor was invented at Bell Labs. Transistors were much smaller, cooler, and more
reliable. This invention led to the creation of smaller and faster computers in the 1950s and
1960s.

Later, many transistors were placed on a single chip to create integrated circuits (ICs). This
made computers even more compact and powerful.

4. Microprocessors (1970s and beyond)

In the 1970s, the first microprocessors were introduced. A microprocessor is a small chip that
acts as a complete CPU. The Intel 4004 (1971) was the first microprocessor. Later, the Intel
8086 (1978) became very popular and is considered the foundation of modern PCs.

Microprocessors made computers cheaper and smaller, leading to the personal computer (PC)
revolution in the 1980s and 1990s.

5. RISC and CISC Architectures

As computers developed, engineers found two main ways to design instruction sets for CPUs:

• CISC (Complex Instruction Set Computing): This design provides many complex instructions, where a single instruction can do a lot of work. It is used in processor families such as Intel x86. The goal was to make programming easier.
• RISC (Reduced Instruction Set Computing): This design uses fewer and simpler instructions, which can be executed faster. RISC focuses on speed and efficiency. It became popular in the 1980s with processors like IBM's POWER and ARM chips, which are used in most smartphones today.

Modern processors often combine ideas from both RISC and CISC to get the best performance.

2. Differentiate between RISC and CISC instruction sets. Provide real-world examples (e.g., ARM vs. Intel). Include example instructions and their formats.

Difference Between RISC and CISC Instruction Sets

RISC (Reduced Instruction Set Computer) and CISC (Complex Instruction Set Computer) are
two types of CPU instruction set designs that describe how a processor handles commands.

RISC (Reduced Instruction Set Computer)

RISC is designed to use a small number of simple instructions. Each instruction is executed in
one clock cycle, making RISC fast and efficient. The idea is to keep the hardware simple and let
the software (compiler) handle complex tasks by combining many simple instructions.

• Instructions are all the same length (usually 32 bits).
• Uses load/store architecture: only specific instructions can access memory.
• Easier to pipeline, leading to faster performance.
• Example: ARM processors (used in smartphones, tablets, and IoT devices).

Example RISC instruction (ARM):

ADD R1, R2, R3

This means: R1 = R2 + R3 (Add values in R2 and R3, store result in R1)

Each operand is in a register, and only registers are used for operations.

CISC (Complex Instruction Set Computer)

CISC uses a larger set of complex instructions. Some instructions can do multiple operations
(like loading from memory and adding) in one command. This reduces the number of
instructions in a program but increases complexity in hardware.

• Instructions vary in length (on x86, from 1 to 15 bytes).
• Can access memory directly in many instructions.
• More powerful individual instructions, but they may take multiple cycles.
• Example: Intel x86 processors (used in most laptops and desktop PCs).

Example CISC instruction (Intel x86):

ADD AX, [BX]

This means: Add the value in memory pointed to by BX to register AX.

This one instruction does memory access and addition in a single command.

Summary of Differences

 RISC: Simple, fast, fixed-size instructions, easier for pipelining. Example: ARM.
 CISC: Complex, powerful instructions, variable size, more cycles per instruction.
Example: Intel x86.

Modern CPUs often combine both approaches. For example, Intel processors present a CISC instruction set externally but translate instructions internally into RISC-like micro-operations for speed.

3. Explain the concept of pipelining in CPU architecture. Discuss its advantages and the common hazards (data, structural, control).

Pipelining in CPU Architecture

Pipelining is a technique used in CPU architecture to improve the overall performance of instruction execution. The basic idea of pipelining is similar to an assembly line in a factory, where different stages of a task are carried out simultaneously. In a non-pipelined CPU, each instruction is completed before the next one begins. This causes delays and underutilizes CPU resources. With pipelining, multiple instructions are processed at different stages at the same time.

A typical instruction in a CPU goes through several stages, such as fetch, decode, execute,
memory access, and write back. In a pipelined CPU, while one instruction is being decoded, the
next instruction can be fetched, and another one can be executed. This overlap of instruction
stages increases the throughput, meaning the CPU can complete more instructions in less time.
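The throughput gain from this overlap can be sketched numerically. The following is a rough illustration (not taken from the assignment itself), assuming an ideal five-stage pipeline with one cycle per stage and no hazards:

```python
# Sketch comparing total cycles for n instructions on a non-pipelined CPU
# versus an ideal 5-stage pipeline (fetch, decode, execute, memory, write back).
# Assumes one cycle per stage and no stalls; real pipelines lose some cycles
# to the hazards discussed below.

STAGES = 5

def non_pipelined_cycles(n_instructions):
    # Each instruction finishes all 5 stages before the next one begins.
    return n_instructions * STAGES

def pipelined_cycles(n_instructions):
    # The first instruction takes STAGES cycles to fill the pipeline;
    # after that, one instruction completes every cycle.
    return STAGES + (n_instructions - 1)

print(non_pipelined_cycles(100))  # 500 cycles
print(pipelined_cycles(100))      # 104 cycles, nearly a 5x speedup
```

As the instruction count grows, the speedup approaches the number of pipeline stages, which is why deeper pipelines were long seen as a route to higher throughput.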

Advantages of Pipelining

One of the main advantages of pipelining is increased instruction throughput, which means
more instructions are completed in a given time. It makes better use of CPU resources and
improves the overall system performance. Pipelining also allows the CPU to operate at a higher
clock speed, as each stage in the pipeline can be optimized to perform a small portion of the
instruction quickly.

Another advantage is a higher rate of instruction completion once the pipeline is full. Although the time to complete a single instruction (its latency) does not decrease, the time between completing successive instructions is significantly reduced.

Common Hazards in Pipelining

Despite its benefits, pipelining also introduces some challenges known as hazards. These are
situations that prevent the next instruction from executing in the next cycle, causing delays in the
pipeline. The three main types of hazards are:

1. Data Hazards

Data hazards occur when an instruction depends on the result of a previous instruction that has
not yet completed. For example, if one instruction is writing to a register and the next instruction
needs that value before it's written, this causes a data hazard. Techniques like forwarding or
stalling are used to solve this problem.
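The read-after-write dependency described above can be detected mechanically. Below is a minimal sketch; the tuple format and register names are illustrative assumptions, not a real pipeline model:

```python
# Detect a read-after-write (RAW) data hazard between two adjacent
# instructions, each described as (destination_register, source_registers).

def has_data_hazard(first, second):
    dest, _ = first
    _, sources = second
    # Hazard if the second instruction reads the register the first writes,
    # before the result has been written back.
    return dest in sources

# ADD R1, R2, R3 writes R1; SUB R4, R1, R5 reads R1 -> hazard
print(has_data_hazard(("R1", ["R2", "R3"]), ("R4", ["R1", "R5"])))  # True
print(has_data_hazard(("R1", ["R2", "R3"]), ("R4", ["R5", "R6"])))  # False
```

When such a hazard is detected, the hardware can forward the ALU result directly to the dependent instruction or, failing that, insert a stall cycle.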

2. Structural Hazards

Structural hazards happen when two or more instructions need the same hardware resource at the
same time. For example, if both the fetch and memory access stages need to use the main
memory in the same cycle, a conflict occurs. This can be solved by duplicating hardware or
delaying one of the instructions.

3. Control Hazards

Control hazards are caused by branch instructions (like if-else or loops). Since the CPU may
not know the result of a branch until a later stage, it cannot be sure which instruction to fetch
next. This uncertainty causes delays. Techniques such as branch prediction or pipeline
flushing are used to minimize the impact of control hazards.
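One widely used branch-prediction technique is a 2-bit saturating counter. The sketch below is a generic illustration of the idea, not tied to any particular CPU:

```python
# 2-bit saturating branch predictor: counter values 0-1 predict "not taken",
# 2-3 predict "taken". One mispredicted branch does not immediately flip
# the prediction, which suits loops that branch the same way many times.

class TwoBitPredictor:
    def __init__(self):
        self.counter = 2  # start at weakly "taken"

    def predict(self):
        return self.counter >= 2

    def update(self, taken):
        if taken:
            self.counter = min(3, self.counter + 1)
        else:
            self.counter = max(0, self.counter - 1)

p = TwoBitPredictor()
correct = 0
outcomes = [True] * 9 + [False]   # a loop branch taken 9 times, then it exits
for taken in outcomes:
    correct += (p.predict() == taken)
    p.update(taken)
print(correct)  # 9 of 10 predictions correct
```

Because the predictor is right on most iterations, the pipeline rarely needs to be flushed, so the control-hazard penalty is paid only occasionally.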

4. Explain how I/O devices communicate with the CPU. Describe interrupt-driven I/O and Direct Memory Access (DMA).

How I/O Devices Communicate with the CPU


I/O (Input/Output) devices such as the keyboard, mouse, and printer need to send data to or receive data from the CPU. There are three main ways this communication happens: Programmed I/O (polling), Interrupt-Driven I/O, and Direct Memory Access (DMA). Let's break these down:

1. Programmed I/O (Polling)

• Programmed I/O is the simplest form of I/O communication.
• The CPU continuously checks the status of the I/O device to see if it is ready for data transfer.
• This method is not very efficient because the CPU waits (or "polls") for the device to be ready, consuming valuable CPU time.
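The busy-wait loop can be sketched as follows. The FakeDevice class stands in for a real device status register and is purely illustrative; an actual driver would poll a memory-mapped register instead:

```python
# Programmed I/O (polling) sketch: the CPU loops, checking a device's
# "ready" flag, and only then reads the data. The device here is fake and
# simply becomes ready on its third status check.

class FakeDevice:
    def __init__(self):
        self.checks = 0

    @property
    def ready(self):
        self.checks += 1
        return self.checks >= 3  # pretend the device needs time

    def read(self):
        return "data"

def programmed_io(device):
    while not device.ready:   # CPU busy-waits ("polls")
        pass                  # these cycles do no useful work
    return device.read()

print(programmed_io(FakeDevice()))  # data
```

Every iteration of that `while` loop is a CPU cycle spent doing nothing productive, which is exactly the inefficiency that interrupt-driven I/O removes.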

2. Interrupt-Driven I/O

 Instead of constantly checking the device, the CPU sends a request to the I/O device to
begin the operation.
 When the device finishes the task (e.g., reading data), it sends an interrupt signal to the
CPU.
 Upon receiving the interrupt, the CPU pauses its current task, handles the I/O operation
using an interrupt service routine, and then returns to its original task.

Benefits:

 The CPU does not waste time polling, allowing it to perform other tasks until it’s needed.

Example:

 A keyboard sends an interrupt to the CPU when a key is pressed.

3. Direct Memory Access (DMA)

 DMA is a more efficient method used for large data transfers (like moving files
between memory and I/O devices).
 Instead of the CPU managing every byte of data, a special hardware unit called the DMA
controller handles the data transfer.
 The CPU only sets up the DMA by providing instructions such as the source, destination,
and the amount of data.
 The DMA controller then takes control of the memory and the I/O device to move the
data directly, without CPU involvement.
 Once the transfer is complete, the DMA sends one interrupt to notify the CPU that the
task is done.
Benefits:

 The CPU is free to perform other tasks while DMA handles large data transfers.

Example:

 Transferring data from a hard disk to RAM.
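The set-up-then-interrupt flow of DMA can be sketched like this. The function name and the `on_complete` callback standing in for the completion interrupt are assumptions made for illustration:

```python
# DMA sketch: the CPU only programs the source, destination, and length;
# the "controller" (here, one slice assignment) moves the whole block
# without per-byte CPU involvement, then raises a single completion signal.

def dma_transfer(src, dst, dst_offset, length, on_complete):
    # The DMA controller copies the block directly between devices/memory.
    dst[dst_offset:dst_offset + length] = src[:length]
    on_complete()  # one interrupt when the entire block is done

disk = bytearray(b"hello world")   # pretend source device buffer
ram = bytearray(16)                # pretend main memory
interrupts = []

dma_transfer(disk, ram, 0, 5, lambda: interrupts.append("done"))
print(bytes(ram[:5]))   # b'hello'
print(interrupts)       # ['done'] - exactly one interrupt for the whole block
```

Contrast this with interrupt-driven I/O, where the CPU would be interrupted once per byte or word; with DMA the CPU handles only the one completion interrupt.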

Summary of I/O Methods

Method               | CPU Involvement                     | Efficiency | Ideal Use Case
Programmed I/O       | High (CPU polls device)             | Low        | Small, simple tasks
Interrupt-Driven I/O | Medium (CPU responds to interrupts) | Medium     | Keyboard, mouse, etc.
DMA                  | Low (CPU sets up DMA)               | High       | Large data transfers (disk, audio)

5. Compare different types of memory (RAM, ROM, Cache, Registers, HDD, SSD). Discuss the principle of locality and how cache memory improves performance.

Comparison of Different Types of Memory

In computer systems, memory is organized into several types based on speed, size, cost, and
function. The fastest and smallest type of memory is the register, which is located directly inside
the CPU. Registers are used to temporarily store data being processed, such as operands and
intermediate results. They operate at the speed of the processor but are very limited in size.

The next level of memory is the cache memory, which sits between the CPU and the main
memory (RAM). Cache stores copies of frequently accessed data from the main memory so that
the CPU can access them quickly. Although larger than registers, cache is still small and
expensive. It plays a key role in speeding up program execution by reducing the time needed to
access data from RAM.

RAM (Random Access Memory) is the main memory used during program execution. It
temporarily stores data and instructions that the CPU is currently using. RAM is volatile,
meaning it loses its contents when the power is turned off. It is larger in size compared to cache
and registers, but slower and more affordable.

ROM (Read-Only Memory) is a non-volatile memory, meaning it retains its contents even
when the power is off. ROM is mainly used to store firmware — the permanent instructions
needed for booting the computer. Unlike RAM, the contents of ROM are mostly fixed and
cannot be modified easily.

For long-term storage, computers use secondary storage devices like hard disk drives (HDDs)
and solid-state drives (SSDs). HDDs are traditional storage devices with mechanical moving
parts. They offer large storage capacities at low cost but are relatively slow. In contrast, SSDs
use flash memory and have no moving parts, making them faster, more reliable, and more
expensive than HDDs. SSDs are commonly used in modern systems for installing the operating
system and frequently used applications.

Principle of Locality

The principle of locality is a key concept in memory organization that explains how programs
tend to use memory. There are two types of locality: temporal locality and spatial locality.
Temporal locality means that if a program accesses a certain memory location, it is likely to
access the same location again soon. For example, when a loop runs repeatedly, it uses the same
set of instructions and variables. Spatial locality means that if a program accesses one memory
location, it is likely to access nearby locations soon. This happens when data is stored in arrays
or structures, where accessing one element leads to accessing the next.

How Cache Memory Improves Performance

Cache memory improves system performance by taking advantage of the principle of locality.
When the CPU needs data, it first checks whether the data is available in the cache. If the data is
found in the cache, it is called a cache hit, and the CPU can access it very quickly. If the data is
not in the cache, it is a cache miss, and the data must be fetched from RAM or even slower
storage, which takes more time.
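A tiny direct-mapped cache simulation makes the hit/miss behavior concrete. The cache size and the modulo address mapping are illustrative assumptions, not a model of any specific processor:

```python
# Direct-mapped cache sketch: each address maps to exactly one cache block
# (address mod number_of_blocks). A loop that reuses the same few addresses
# (temporal locality) misses only on the first pass, then hits every time.

class SimpleCache:
    def __init__(self, n_blocks=4):
        self.blocks = [None] * n_blocks  # which address each block holds
        self.hits = 0
        self.misses = 0

    def access(self, address):
        index = address % len(self.blocks)
        if self.blocks[index] == address:
            self.hits += 1            # cache hit: fast access
        else:
            self.misses += 1          # cache miss: fetch from RAM
            self.blocks[index] = address

cache = SimpleCache()
for _ in range(10):                   # a loop body touching the same data
    for addr in (0, 1, 2, 3):
        cache.access(addr)

print(cache.hits, cache.misses)       # 36 4
```

Only the first pass through the loop misses; the other nine passes hit entirely in the cache, which is the principle of locality paying off.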

Because most programs exhibit locality, the data the CPU needs next is often already in the
cache. This greatly reduces the average time it takes to access memory. As a result, cache
memory helps the CPU run faster and more efficiently by reducing delays caused by slower
memory access.

Conclusion

This assignment has offered an in-depth exploration of computer architecture fundamentals, tracing their historical evolution and emphasizing modern design paradigms. Through comparative analysis of instruction set architectures (RISC vs. CISC), examination of pipelining techniques, and evaluation of memory and I/O strategies, the principles underlying efficient system design have been illustrated. Mastery of these concepts forms the bedrock for understanding contemporary computing systems and architecting future innovations.

References

1. Tanenbaum, A. S., & Austin, T. (2012). Structured Computer Organization.
2. Stallings, W. (2018). Computer Organization and Architecture.
3. ARM Holdings. https://www.arm.com
4. Intel Corporation. https://www.intel.com
5. Patterson, D. A., & Hennessy, J. L. (2014). Computer Organization and Design.
