CIT 204 Lecture Note
Complex instruction set computer (CISC) and reduced instruction set computer (RISC)
are the two predominant approaches to processor design that influence how computer
processors function.
CISC processors have a single processing unit, auxiliary memory, and a small
register set, and they support hundreds of distinct instructions. Such a
processor can execute a task with a single instruction, which simplifies a
programmer's work since fewer lines of code are required to complete an
operation. This approach uses less program memory but may need more time to
execute each instruction.
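The trade-off above can be sketched in Python. The instruction names (MULT, LOAD, MUL, STORE) and the tiny interpreter below are illustrative assumptions, not a real ISA: one CISC-style instruction does the whole memory-to-memory multiply, while the RISC-style version needs four simpler steps.

```python
# Hypothetical sketch: the same multiply as one CISC-style instruction
# versus an equivalent RISC-style sequence. Instruction names are made up.

def run(program, memory):
    """Execute a tiny instruction list against a memory dict, using registers r0/r1."""
    regs = {}
    for op, *args in program:
        if op == "LOAD":          # RISC: move a memory word into a register
            regs[args[0]] = memory[args[1]]
        elif op == "STORE":       # RISC: move a register value back to memory
            memory[args[0]] = regs[args[1]]
        elif op == "MUL":         # RISC: register-to-register multiply
            regs[args[0]] = regs[args[1]] * regs[args[2]]
        elif op == "MULT":        # CISC: one instruction loads, multiplies, stores
            memory[args[0]] = memory[args[1]] * memory[args[2]]
    return memory

cisc = [("MULT", "c", "a", "b")]                          # 1 instruction
risc = [("LOAD", "r0", "a"), ("LOAD", "r1", "b"),
        ("MUL", "r0", "r0", "r1"), ("STORE", "c", "r0")]  # 4 instructions

print(run(cisc, {"a": 6, "b": 7})["c"])   # 42
print(run(risc, {"a": 6, "b": 7})["c"])   # 42
```

Both programs compute the same result; the CISC version is shorter to write, while each RISC step is simpler to execute.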
1. Input unit and associated peripherals
The input unit provides external data sources to the computer system. Therefore,
it connects the external environment to the computer. It receives information from
input devices, translates it into machine language, and then feeds it into the
computer system. The keyboard and mouse are the most commonly used input
devices; each has a corresponding hardware driver that allows it to work in
sync with the rest of the computer architecture.
2. Output unit and associated peripherals
The output unit delivers the computer process’s results to the user. A majority of
the output data comprises music, graphics, or video. A computer architecture’s
output devices encompass the display, printing unit, speakers, headphones, etc.
To play an MP3 file, for instance, the system reads a number array from the disc
and into memory. The computer architecture manipulates these numbers to
convert compressed audio data to uncompressed audio data and then outputs
the resulting set of numbers (uncompressed audio file) to the audio chips. The
chip then makes it user-ready through the output unit and associated peripherals.
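The playback pipeline described above can be sketched conceptually in Python. Real MP3 decoding is far more complex; here, run-length encoding stands in for the compressed format purely to show the data flow from disk, through the CPU, to the output stage. The function names are illustrative.

```python
# Conceptual sketch of the playback pipeline: compressed numbers are read
# into memory, expanded by the CPU, and handed to the output stage.

def decompress(compressed):
    """Expand (sample, repeat_count) pairs into a flat list of samples."""
    samples = []
    for sample, count in compressed:
        samples.extend([sample] * count)
    return samples

def play(samples):
    """Stand-in for handing uncompressed samples to the audio chip."""
    return len(samples)   # number of samples sent to the output unit

compressed_audio = [(0, 3), (127, 2), (-128, 1)]   # read from disc into memory
pcm = decompress(compressed_audio)                  # CPU expands the data
print(pcm)          # [0, 0, 0, 127, 127, -128]
print(play(pcm))    # 6 samples delivered to the output stage
```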
3. Storage unit/memory
The storage unit contains numerous computer parts that are employed to store
data. It is typically separated into primary storage and secondary storage.
Primary storage unit
This component of the computer architecture is also referred to as the main
memory, as the CPU has direct access to it. Primary memory is utilized for
storing information and instructions during program execution. Random access
memory (RAM) and read-only memory (ROM) are the two kinds of primary memory:
• RAM supplies the necessary information straight to the CPU. It is a temporary
memory that stores data and instructions intermittently.
• ROM is a memory type that contains pre-installed instructions, including
firmware. This memory’s content is persistent and cannot be modified. ROM
is utilized to boot the machine upon initial startup. At that moment, the
computer is aware of nothing beyond the ROM: the chip instructs it on how to
set up the computer architecture, conduct a power-on self-test (POST), and
finally locate the hard drive so that the operating system can be launched.
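The RAM/ROM distinction can be sketched as follows. The boot-step strings and the ROM class are placeholders, not real firmware: the point is that ROM contents are fixed once written and reject modification, while RAM is freely readable and writable.

```python
# Sketch of the RAM/ROM distinction: ROM contents are fixed at "manufacture"
# time and reject writes, while RAM can be read and written freely.

class ROM:
    def __init__(self, contents):
        self._contents = tuple(contents)   # immutable once created
    def read(self, addr):
        return self._contents[addr]
    def write(self, addr, value):
        raise PermissionError("ROM is read-only")

ram = [0] * 4                                  # volatile, writable
rom = ROM(["run POST", "init devices", "locate disc", "load OS"])

print(rom.read(0))        # 'run POST' -- first step after power-on
ram[0] = rom.read(3)      # RAM accepts writes without complaint
try:
    rom.write(0, "overwrite")
except PermissionError as e:
    print(e)              # ROM is read-only
```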
4. Central processing unit (CPU)
The central processing unit includes registers, an arithmetic logic unit (ALU), and control
circuits, which interpret and execute assembly language instructions. The CPU interacts
with all the other parts of the computer architecture to make sense of the data and
deliver the necessary output.
I. Registers
These are high-speed and purpose-built temporary memory devices. Rather than
being referred to by their address, they are accessed and modified directly by the
CPU throughout execution. Essentially, they contain data that the CPU is
presently processing. Registers contain information, commands, addresses, and
intermediate processing results.
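A minimal sketch of the role registers play follows. The register names (pc, acc, r1) are illustrative assumptions; real CPUs define their registers in the ISA. The key point is that registers are accessed directly by name, not by memory address, and hold the operands and intermediate result the CPU is currently working on.

```python
# Sketch of registers holding the data the CPU is presently processing.
# Register names are illustrative, not taken from a real ISA.

registers = {"pc": 0, "acc": 0, "r1": 0}   # accessed by name, not by address

memory = [10, 32]                          # data lives in slower main memory

registers["acc"] = memory[0]               # fetch first operand into a register
registers["r1"] = memory[1]                # fetch second operand
registers["acc"] += registers["r1"]        # ALU adds; intermediate result stays in acc
registers["pc"] += 3                       # control unit advances past 3 instructions

print(registers["acc"])   # 42
```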
II. Control unit
The control unit collaborates with the computer’s input and output devices. It instructs
the computer to execute stored program instructions via communication with the ALU
and registers. The control unit aims to arrange data and instruction processing.
The microprocessor is the primary hardware component that implements the CPU.
Microprocessors are utilized in virtually all electronic systems, including
desktops, calculators, and internet of things (IoT) devices. The Intel 4004
was the first microprocessor with all CPU components on a single chip.
In addition to these four core components, a computer architecture also has supporting
elements that make it easier to function, such as:
5. Bootloader
The firmware contains the bootloader, a specific program executed by the processor
that retrieves the operating system from the disc (or non-volatile memory or network
interface, as deemed applicable) and loads it into the memory so that the processor can
execute it. The bootloader is found on desktop and workstation computers and
embedded devices. It is essential for all computer architectures.
6. Operating system
The operating system governs the computer’s functionality just above firmware. It
manages memory usage and regulates devices such as the keyboard, mouse, display,
and disc drives. The OS also provides the user with an interface, allowing them to
launch apps and access data on the drive.
Typically, the operating system offers a set of tools for programs, allowing them to
access the screen, disc drives, and other elements of the computer’s architecture.
7. Buses
A bus is a physical collection of signal lines with a related purpose; a good example is
the universal serial bus (USB). Buses enable the flow of electrical impulses between
various components of a computer’s design, transferring information from one system to
another. The size of a bus is the count of information-transferring signal lines. A bus
with a size of 8 bits, for instance, transports 8 data bits in a parallel formation.
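The relationship between bus width and transfer count can be sketched as follows; the helper function is a made-up illustration, not a model of real bus timing.

```python
# Sketch of bus width: an 8-bit bus moves 8 bits per transfer, so wider
# payloads must be broken into bus-sized chunks sent one after another.

def transfers_needed(payload_bits, bus_width_bits):
    """Number of parallel transfers required to move a payload across the bus."""
    return -(-payload_bits // bus_width_bits)   # ceiling division

print(transfers_needed(8, 8))    # 1 transfer on an 8-bit bus
print(transfers_needed(64, 8))   # 8 transfers for a 64-bit word
print(transfers_needed(64, 32))  # 2 transfers on a wider 32-bit bus
```

Doubling the bus width halves the number of transfers for the same payload, which is one reason wider buses improve throughput.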
8. Interrupts
Interrupts, also known as traps or exceptions in certain processors, are a method for
redirecting the processor from the running of the current program so that it can handle
an occurrence. Such an event might be a malfunction from a peripheral or just the fact
that an I/O device has completed its previous duty and is presently ready for another
one. Every time you press a key or click a mouse button, your system generates an
interrupt.
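Interrupt dispatch can be sketched as a lookup in a vector table: the processor is diverted to the handler registered for the interrupt number, services the event, and resumes. The interrupt numbers and handler names below are illustrative assumptions.

```python
# Sketch of interrupt dispatch via a vector table. Interrupt numbers and
# handler names are made up for illustration.

handled = []

def keyboard_handler():
    handled.append("key press serviced")

def disk_handler():
    handled.append("I/O completion serviced")

vector_table = {1: keyboard_handler, 2: disk_handler}   # interrupt -> handler

def raise_interrupt(irq):
    """Divert the processor to the handler for this event, then resume."""
    vector_table[irq]()          # run the interrupt service routine

raise_interrupt(1)               # user pressed a key
raise_interrupt(2)               # an I/O device finished its previous duty
print(handled)   # ['key press serviced', 'I/O completion serviced']
```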
1. Instruction set architecture (ISA)
Instruction set architecture (ISA) is a bridge between the software and hardware of a
computer. It functions as a programmer’s viewpoint on a machine. Computers can only
comprehend binary language (0 and 1), but humans can comprehend high-level
language (if-else, while, conditions, and the like). Consequently, ISA plays a crucial role
in user-computer communications by translating high-level language into binary
language.
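The bridging role of the ISA can be sketched as a lowering step: a high-level statement becomes ISA-level mnemonics, which an assembler-like step encodes as binary. The opcodes below are invented for illustration; a real ISA fixes its encodings in a specification.

```python
# Sketch of the ISA as a bridge: mnemonics are encoded as binary opcodes.
# The opcode table is made up for illustration.

OPCODES = {"LOAD": "0001", "ADD": "0010", "STORE": "0011"}

def assemble(mnemonics):
    """Encode each mnemonic as its binary opcode string."""
    return [OPCODES[m] for m in mnemonics]

# High-level "c = a + b" lowered to ISA-level mnemonics:
program = ["LOAD", "LOAD", "ADD", "STORE"]
print(assemble(program))   # ['0001', '0001', '0010', '0011']
```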
These different instruction set architectures arise from varying design philosophies and
trade-offs. CISC architectures aim to provide a rich set of instructions to reduce program
size, while RISC architectures focus on simplicity and efficiency in terms of instruction
execution. VLIW and EPIC architectures target parallelism and often rely on compilers
for efficient scheduling of instructions to make use of multiple functional units. The
choice between these architectures depends on the specific goals and constraints of the
system being designed.
2. Microarchitecture
3. Client-server architecture
In a client-server system, multiple clients (remote processors) may request and
receive services from a single, centralized server (host computer). Client
computers allow users to request services from the server and receive its
reply; servers receive and react to client inquiries.
A server should provide clients with a standardized, transparent interface so that they
are unaware of the system’s features (software and hardware components) that are
used to provide the service.
Clients are often located on desktops or laptops, while servers are typically located
somewhere else on the network, on more powerful hardware. This computer
architecture is most efficient when the clients and the servers frequently perform
pre-specified responsibilities.
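The request/reply pattern can be sketched with a local TCP socket: the client sends an inquiry and the server answers through a standardized interface (here, plain bytes), hiding how the service is implemented. Binding to port 0 lets the operating system pick a free port; the message contents are illustrative.

```python
# Minimal client-server sketch over a local TCP socket.

import socket
import threading

def serve_once(server_sock):
    """Accept one client, answer its inquiry, and close the connection."""
    conn, _ = server_sock.accept()
    request = conn.recv(1024)
    conn.sendall(b"reply to: " + request)   # the server reacts to the inquiry
    conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))               # port 0: let the OS pick a free port
server.listen(1)
threading.Thread(target=serve_once, args=(server,), daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server.getsockname())        # the client requests the service
client.sendall(b"time please")
print(client.recv(1024).decode())           # reply to: time please
client.close()
server.close()
```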
Single instruction, multiple data (SIMD) computer systems can process multiple data
points concurrently. This cleared the path for supercomputers and other devices with
incredible performance capabilities. In this form of design, all processors receive an
identical command from the control unit yet operate on distinct data packets. The
shared memory unit requires numerous modules to interact with all CPUs concurrently.
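The SIMD idea can be sketched as one broadcast instruction applied to every lane's data. Real SIMD hardware runs the lanes in parallel; the list comprehension below only models the lockstep behaviour, and the helper name is an illustrative assumption.

```python
# Sketch of SIMD: the control unit broadcasts one identical instruction and
# every processing lane applies it to its own data element.

def simd_apply(instruction, data_lanes):
    """Apply the same instruction to each lane's data (conceptually in parallel)."""
    return [instruction(x) for x in data_lanes]

lanes = [1, 2, 3, 4]                        # distinct data, one per processor
doubled = simd_apply(lambda x: x * 2, lanes)
print(doubled)   # [2, 4, 6, 8]
```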
4. Multicore architecture
Two notable examples of computer architecture have paved the way for recent
advancements in computing. These are ‘Von Neumann architecture’ and ‘Harvard
architecture.’ Most other architectural designs are proprietary and are therefore not
revealed in the public domain beyond a basic abstraction.
Von Neumann Architecture refers to a design model for computers where the
processing unit, memory, and input-output devices are interconnected through a single,
central system bus. This architecture was first proposed by John von Neumann, a
Hungarian-American mathematician and physicist, in the mid-20th century.
Before the invention of Von Neumann Architecture, computers followed other designs,
such as the Harvard Architecture, where memory and processing units were separated.
The development of Von Neumann Architecture enabled a more efficient way to store
and execute instructions, which significantly improved the overall performance of
computers.
The core concept of this architecture is that it treats both instructions and data
uniformly. This means that the same memory and processing resources are used to
store and manipulate both program instructions and the data being processed. This
design greatly simplifies the structure and operations of a computer, making it easier to
understand and implement.
• Central Processing Unit (CPU): The part of a computer that carries out
instructions and performs arithmetic, logical, and control operations.
• Memory: A place where the computer stores and retrieves data and instructions.
Memory is divided into two types: primary memory, such as Random Access
Memory (RAM), and secondary memory, like hard disk drives and solid-state
drives.
• Input-Output (I/O) devices: Components responsible for interfacing the computer
with the external world. Examples of I/O devices include keyboards, mice,
printers, and monitors.
• System Bus: A communication pathway that connects the CPU, memory, and I/O
devices, enabling data and control signals to flow between these components.
The smooth interaction of these four components contributes towards the efficient
functioning of a computer system built on the principles of Von Neumann Architecture.
Despite its widespread use, the Von Neumann architecture is not without limitations.
One notable limitation is the von Neumann bottleneck, which refers to the sequential
and serial nature of the architecture that can lead to a slowdown in processing speed
due to the shared memory bus for both instructions and data. However, modern
computer systems have employed various techniques to mitigate these limitations, such
as caching and parallel processing.
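How caching eases the bottleneck can be sketched as follows: repeated reads of the same address are served from a fast cache instead of crossing the shared memory bus again. The counts are illustrative of the idea, not of real timings.

```python
# Sketch of a cache reducing traffic over the shared memory bus.

bus_accesses = 0
main_memory = {0: "ADD", 1: "A", 2: "B"}
cache = {}

def read(addr):
    """Serve from the cache when possible; otherwise pay one bus access."""
    global bus_accesses
    if addr not in cache:
        bus_accesses += 1              # slow trip over the shared bus
        cache[addr] = main_memory[addr]
    return cache[addr]

for _ in range(3):                     # a loop re-fetching the same instruction
    read(0)
read(1)
read(2)
print(bus_accesses)   # 3 bus trips instead of 5 reads
```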
Example: Consider a simple program that calculates the sum of two numbers, 'A' and
'B'. The program instructions and the variables 'A' and 'B' would all be stored in the
memory. The CPU retrieves and processes these instructions, and the result would be
stored back into the memory, which could then be accessed by the I/O devices for
display.
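The example above can be sketched as a tiny stored-program machine in which instructions and data share one memory, and the CPU fetches both from that same memory. The three-opcode ISA and the memory layout are invented for illustration.

```python
# Sketch of the A + B example on a von Neumann machine: instructions and
# data live side by side in a single memory.

memory = [
    ("LOAD", 4),     # addr 0: fetch A into the accumulator
    ("ADD", 5),      # addr 1: add B to the accumulator
    ("STORE", 6),    # addr 2: write the result back to memory
    ("HALT", None),  # addr 3: stop
    7,               # addr 4: data A
    35,              # addr 5: data B
    0,               # addr 6: result slot, later read by I/O for display
]

pc, acc = 0, 0
while True:
    op, addr = memory[pc]      # fetch from the same memory that holds the data
    pc += 1
    if op == "LOAD":
        acc = memory[addr]
    elif op == "ADD":
        acc += memory[addr]
    elif op == "STORE":
        memory[addr] = acc
    elif op == "HALT":
        break

print(memory[6])   # 42
```

Because every fetch, whether of an instruction or an operand, goes through the same memory, this sketch also makes the von Neumann bottleneck easy to see.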
Finally, the I/O devices function as the bridge between the computer and the outside
world. They take input from users and provide output for them to interact with. These
devices are connected to the rest of the system through the System Bus, enabling the
exchange of data and control signals between them and other components.
The Von Neumann Architecture is characterized by its simplicity and unified approach to
handling instructions and data. This design principle has a significant influence on the
overall structure and operation of the computer system. Key features of the architecture
include:
• Unified memory structure: Both instructions and data are stored together in the
same memory.
• Sequential instruction processing: Program instructions are executed one after
another in a linear sequence.
• Shared system bus: Components are interconnected through a central
communication pathway, allowing for efficient communication and coordination.
• Modularity: The architecture is suitable for a wide range of computer systems,
from simple microcontrollers to complex supercomputers, by scaling memory and
processing capabilities.
The Role of Memory and Input/Output Devices in the Von Neumann Architecture
In Von Neumann Architecture, memory and input/output devices play critical roles in
ensuring the efficient flow of data and instructions throughout the computer system.
Understanding their specific functions can help illustrate the overall operation of the
architecture.
• Primary memory (RAM): This is volatile memory that stores program instructions,
data, and intermediate results during the execution of a program. It allows for
rapid access by the CPU and is essential for the normal operation of a computer.
• Secondary memory: This refers to non-volatile storage devices such as hard disk
drives and solid-state drives, which store data and instructions even when the
computer is powered off. These storage devices provide long-term storage for
programs and files.
The unified memory design in Von Neumann Architecture offers several advantages,
such as improved memory efficiency, greater flexibility in how programs and data are
stored, and the ability to dynamically allocate memory as needed. However, it also
contributes to the Von Neumann bottleneck, as a single system bus can limit the speed
at which data and instructions are transferred between components.
These I/O devices are crucial for enabling efficient interactions between users, software,
and hardware. They also rely on a proper system bus configuration to ensure a smooth
flow of data and control signals between them and other components, such as the CPU
and memory.
Von Neumann Architecture has been widely adopted in various computer systems and
applications due to its simplicity, flexibility, and compatibility. Some common
applications include:
1. Personal computers (PCs) and laptops: The majority of modern PCs and laptops
use the Von Neumann Architecture for their central processing and memory
management. This architecture is well-suited for general-purpose computing,
with its modular design and unified memory structure allowing for efficient
resource utilization and easy software development.
2. Microcontrollers: These small computers are embedded in a wide range of
electronic devices, such as home appliances, automotive systems, and industrial
automation equipment. The simplicity and scalability of the Von Neumann
Architecture make it ideal for microcontroller implementation, as it can be easily
adapted to fit the specific requirements of each application.
3. Embedded systems: Similar to microcontrollers, embedded systems are
computer systems designed for specific tasks and are often integrated into larger
devices or systems. Such systems typically have constrained resources and
require efficient use of memory and processing capabilities, which is facilitated by
the Von Neumann Architecture.
4. Supercomputers and high-performance computing clusters: While the bottleneck
issues associated with Von Neumann Architecture can limit parallelism and
performance in some cases, many supercomputers and high-performance
computing clusters still employ the principles of this architecture in their design.
Modifications, such as the use of multiple processors and advanced memory
management strategies, help to mitigate the inherent limitations and provide the
necessary performance for computationally intensive tasks.
Over the years, numerous computer systems have been built using the Von Neumann
Architecture. Some notable real-world examples include:
Overall, the Von Neumann Architecture has played a crucial role in the advancement of
computing technology. Its timeless design principles have facilitated the development of
computer systems of various scales and complexity, enabling computing breakthroughs
and empowering the digital world we live in today.
The Von Neumann Architecture has undergone significant transformations since its
conception in the mid-20th century. As computer hardware and software technologies
advanced, modifications to the architecture have been made to accommodate new
capabilities, improve performance, and address limitations. Here are some notable
milestones in the evolution of Von Neumann Architecture:
Although these improvements have mitigated some of the inherent limitations of Von
Neumann Architecture, challenges and performance bottlenecks still exist, driving
research into alternative computing models and architectures.
As technology advances and the demand for more powerful and energy-efficient
computing systems grows, the future of Von Neumann Architecture is subject to
adaptation and potential replacement by alternative architectures. Several directions are
being explored, including: