System Integration and Architecture II
Computer architecture determines how a computer’s components exchange electronic signals to enable input, processing,
and output.
Computer architecture refers to the end-to-end structure of a computer system that determines how its components interact with each other in helping to execute the machine's purpose (i.e., processing data), often avoiding any reference to the actual technical implementation.
Complex instruction set computer (CISC) and reduced instruction set computer (RISC) are the two predominant approaches to processor architecture.
CISC processors have one processing unit, auxiliary memory, and a small register set, and support hundreds of unique instructions. These processors can execute a task with a single complex instruction, making a programmer's work simpler since fewer lines of code are required to complete the operation. This method utilizes less memory but may need more time to execute instructions.
A reassessment led to the creation of high-performance computers based on the RISC architecture. The hardware is
designed to be as simple and fast as possible, and sophisticated operations are carried out using sequences of simpler instructions.
Computer architecture allows a computer to compute, retain, and retrieve information. This data can be digits in a
spreadsheet, lines of text in a file, dots of color in an image, sound patterns, or the status of a system such as a flash drive.
Purpose of computer architecture: Everything a system performs, from online surfing to printing,
involves the transmission and processing of numbers. A computer's architecture is merely a mathematical system for performing these calculations. When learning to code and analyzing sophisticated algorithms and data structures, it is easy to forget this.
Manipulating data: The computer manages information using numerical operations. It is possible to
display an image on a screen by transferring a matrix of digits to the video memory, with every number representing the color of a pixel.
Multifaceted functions: The components of a computer architecture include both software and hardware.
The processor — hardware that executes computer programs — is the primary part of any computer.
Booting up: At the most elementary level of a computer design, programs are executed by the processor
whenever the computer is switched on. These programs configure the computer’s proper functioning and
initialize the different hardware sub-components to a known state. This software is known as the firmware.
Support for temporary storage: Memory is also a vital component of computer architecture, with several
types often present in a single system. The memory is used to hold programs (applications) while they are
being executed by the processor and the data being processed by the programs.
Support for permanent storage: There can also be tools for storing data or sending information to the
external world as part of the computer system. These provide text input through the keyboard, the
presentation of information on a monitor, and the transfer of programs and data from or to a disk drive.
User-facing functionality: Software governs the operation and functioning of a computer. Several
software ‘layers’ exist in computer architecture. Typically, a layer would only interface with layers below
or above it.
The working of a computer architecture begins with the bootup process. Once the firmware is loaded, it can initialize the rest of the computer architecture and ensure that it works seamlessly, i.e., helping the user retrieve, consume, and work on data.
It is possible to set up and configure the above architectural components in numerous ways. This gives rise to the different types of computer architecture.
1. Instruction set architecture (ISA)
Instruction set architecture (ISA) is a bridge between the software and hardware of a computer. It functions as a
programmer’s viewpoint on a machine. Computers can only comprehend binary language (0 and 1), but humans can
comprehend high-level constructs (if-else, while, conditions, and the like). Consequently, ISA plays a crucial role in user-computer communication.
In addition, ISA outlines the architecture of a computer in terms of the fundamental operations it must support. It is not concerned with implementation-specific computer features. Instruction set architecture dictates that the computer must support, among others:
Data transfer instructions: These instructions move data between the memory and the processor registers
Branch and jump instructions: These instructions are essential to interrupt the logical sequence of instructions and jump to a different location in the program
2. Microarchitecture
Microarchitecture, unlike ISA, focuses on how instructions are actually executed at a lower level.
Microarchitecture is the way in which an instruction set architecture is implemented in a particular processor. Engineers and hardware scientists implement the same ISA with various microarchitectures that vary according to the development of new technologies. Therefore, processors may be physically designed differently to execute a certain instruction set without modifying the
ISA.
Simply put, microarchitecture is the purpose-built logical arrangement of the microprocessor's electrical components and data paths.
3. Client-server architecture
Multiple clients (remote processors) may request and receive services from a single, centralized server (host computer) in a client-server system. Client computers allow users to request services from the server and receive the server's reply.
A server should provide clients with a standardized, transparent interface so that they are unaware of the system’s features
(software and hardware components) that are used to provide the service.
Clients are often located on desktops or laptops, while servers are typically located somewhere else on the network, on
more powerful hardware. This computer architecture is most efficient when the clients and the servers frequently perform
pre-specified responsibilities.
4. Single instruction, multiple data (SIMD) architecture
Single instruction, multiple data (SIMD) computer systems can process multiple data points concurrently. This cleared the path for supercomputers and other devices with incredible performance capabilities. In this form of design, all processors receive an identical command from the control unit yet operate on distinct data items. The shared memory unit requires multiple modules so that it can communicate with all the processors simultaneously.
5. Multicore architecture
Multicore is a framework wherein a single physical processor contains the logic of multiple processors. A multicore architecture integrates numerous processing cores onto a single integrated circuit. The goal is to develop a system that can complete more tasks at the same time, thereby improving overall performance.
Two notable examples of computer architecture have paved the way for recent advancements in computing. These are
‘Von Neumann architecture’ and ‘Harvard architecture.’ Most other architectural designs are proprietary and are therefore not disclosed publicly.
Here’s a description of what these two examples of computer architecture are all about.
1. Von Neumann architecture
The von Neumann architecture, often referred to as the Princeton architecture, is a computer architecture that was
established in a 1945 presentation by John von Neumann and his collaborators in the First Draft of a Report on the
EDVAC (electronic discrete variable automatic computer). This example of computer architecture proposes five
components:
A processing unit containing an arithmetic and logic unit (ALU) and processor registers
A control unit containing an instruction register and a program counter
Memory capable of storing data as well as instructions and communicating via buses
External mass storage
Input and output mechanisms
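To make the shared-memory idea concrete, here is a minimal sketch of a toy stored-program machine. The opcodes (LOAD, ADD, STORE, JNZ, HALT) and the memory layout are invented for illustration; real von Neumann machines are far more elaborate but rest on the same principle of fetching both instructions and data from a single memory.

```python
# Toy stored-program machine: instructions and data share a single memory,
# as in the von Neumann design. Opcodes are invented for illustration.
def run(memory):
    acc = 0      # accumulator register
    pc = 0       # program counter
    while True:
        op, arg = memory[pc]          # fetch from the shared memory
        pc += 1
        if op == "LOAD":              # data transfer: memory -> register
            acc = memory[arg]
        elif op == "ADD":             # arithmetic on the accumulator
            acc += memory[arg]
        elif op == "STORE":           # data transfer: register -> memory
            memory[arg] = acc
        elif op == "JNZ":             # branch: jump if accumulator is non-zero
            if acc != 0:
                pc = arg
        elif op == "HALT":
            return memory

# Program (cells 0-4) and data (cells 5-7) live in the same memory.
memory = [
    ("LOAD", 5),   # acc = memory[5]
    ("ADD", 6),    # acc += memory[6]
    ("STORE", 7),  # memory[7] = acc
    ("HALT", 0),
    ("HALT", 0),
    2, 3, 0,       # data cells
]
print(run(memory)[7])  # prints 5
```

Because program and data share one memory, a program could in principle even modify its own instructions, which is a defining property of this design.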
2. Harvard architecture
The Harvard architecture refers to a computer architecture with distinct data and instruction storage and signal pathways.
In contrast to the von Neumann architecture, in which program instructions and data use the very same memory and
pathways, this design separates the two. In practice, a modified Harvard architecture with two distinct caches is
employed (one for data and one for instructions); x86 and Advanced RISC Machine (ARM) systems frequently employ this design.
Takeaway
Computer architecture is one of the key concepts that define modern computing. Depending on the architecture, you can
build tiny machines such as the Raspberry Pi or incredibly powerful systems such as supercomputers. It determines how electrical signals move across the different pathways in a computing system to achieve an optimal outcome.
NUMBER SYSTEM
The binary numbering system was refined in the 17th century by Gottfried Leibniz. In mathematics and in computing
systems, a binary digit, or bit, is the smallest unit of data. Each bit has a single value of either 1 or 0, which means it can't
take on any other value.
Computers can represent numbers using binary code in the form of digital 1s and 0s inside the central processing unit
(CPU) and RAM. These digital numbers are electrical signals that are either on or off inside the CPU or RAM.
The binary system is the primary language of computing systems. Inside these systems, bits are commonly grouped into series of eight; such a series is known as a byte. In the binary scheme, the position of each digit determines its decimal value. Thus, by understanding the position of each bit, a binary number can be converted into a decimal number.
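The positional rule described above can be sketched in Python: each bit is weighted by a power of 2, starting from the rightmost position.

```python
# Convert a binary string to decimal by weighting each bit by its position.
# The rightmost bit has weight 2**0, the next 2**1, and so on.
def binary_to_decimal(bits):
    value = 0
    for i, bit in enumerate(reversed(bits)):
        value += int(bit) * (2 ** i)
    return value

print(binary_to_decimal("1011"))      # 1*8 + 0*4 + 1*2 + 1*1 = 11
print(binary_to_decimal("11111111"))  # one full byte: 255
print(int("1011", 2))                 # Python's built-in int() agrees: 11
```

The built-in `int()` with an explicit base performs the same conversion, which is a convenient cross-check.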
In the decimal number system, numbers are represented with base 10. Denoting numbers with base 10 is also termed decimal notation. This number system is widely used in computer applications. It is also called the base-10 number system and consists of 10 digits: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9. Each digit in the decimal system has a position, and every position is ten times more significant than the position to its right.
A number system with base 8 is called an octal number system. The position of every digit has a value which is a power
of 8. A number in the octal number system is represented with the number 8 at the base, like 512₈, 56₈, etc.
Hexadecimal -- also known as base 16 or hex -- is one of the four most common numbering systems. The other three are decimal (base 10), binary (base 2), and octal (base 8).
Here's what the decimal and hexadecimal systems look like for digits 0 to 15.
Decimal 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
Hexadecimal 0 1 2 3 4 5 6 7 8 9 A B C D E F
The hexadecimal system uses 16 sequential values as base units, including 0. The first ten values (0 to 9) are the same digits commonly used in the decimal system. The next six values (decimal 10 to 15) are represented by the letters A through F. This is how the hex system uses the digits 0 to 9 and the capital letters A to F to represent the equivalent decimal number.
In this numbering system, each digit's position is 16 times more significant than the position to its right. A hex number starts with the least significant digit on the right-hand side. The numeric value of this number is calculated by
multiplying each digit by the value of its position and then adding the products. This is why hexadecimal is
a positional or weighted number system.
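The multiply-and-add rule described above can be sketched in Python; again, the built-in `int()` with base 16 performs the same conversion.

```python
# Hexadecimal to decimal by the positional (weighted) rule: each digit is
# weighted by a power of 16, starting from the rightmost position.
DIGITS = "0123456789ABCDEF"

def hex_to_decimal(s):
    value = 0
    for ch in s.upper():
        value = value * 16 + DIGITS.index(ch)  # shift left one hex place, add digit
    return value

print(hex_to_decimal("2F"))   # 2*16 + 15 = 47
print(hex_to_decimal("1A3"))  # 1*256 + 10*16 + 3 = 419
print(int("1A3", 16))         # built-in check: 419
```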
COMPUTER SOFTWARE
Application software is software that allows users to do user-oriented tasks such as create text documents, play or develop
games, create presentations, listen to music, draw pictures, or browse the web.
An operating system is the most important software that runs on a computer. It manages the computer's memory and
processes, as well as all of its software and hardware. It also allows you to communicate with the computer without
knowing how to speak the computer's language.
CPU SCHEDULING
CPU scheduling refers to the switching between processes that are being executed. It forms the basis of multiprogrammed systems. This switching ensures that CPU utilization is maximized so that the computer is more productive.
There are two main types of CPU scheduling, preemptive and non-preemptive. Preemptive scheduling takes place when a process transitions from the running state to the ready state or from the waiting state to the ready state. Non-preemptive scheduling is employed when a process terminates or transitions from the running state to the waiting state.
This article will focus on two different types of non-preemptive CPU scheduling algorithms:
First Come First Serve (FCFS) and Shortest Job First (SJF).
As the name suggests, the First Come First Serve (FCFS) algorithm assigns the CPU to the process that arrives first in the
ready queue. This means that the process that requests the CPU for its execution first will get the CPU allocated first. This
is managed through a FIFO queue. The earlier a process arrives in the ready queue, the sooner it is allocated the CPU.
Due to the non-preemptive nature of the algorithm, short processes at the end of the queue have to wait for long
processes at the front of the queue to finish. A long process at the front can thus hold up every process behind it (the convoy effect), leading to a high average waiting time.
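A minimal Python sketch of the FCFS waiting-time calculation; the process names, arrival times, and burst times below are invented for illustration.

```python
# FCFS: processes run in arrival order; each waits for all earlier arrivals
# to finish before it gets the CPU.
def fcfs(processes):
    """processes: list of (name, arrival_time, burst_time) tuples."""
    clock = 0
    waits = {}
    for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
        clock = max(clock, arrival)      # CPU may sit idle until the process arrives
        waits[name] = clock - arrival    # time spent in the ready queue
        clock += burst                   # run to completion (non-preemptive)
    return waits

procs = [("P1", 0, 24), ("P2", 1, 3), ("P3", 2, 3)]
waits = fcfs(procs)
print(waits)                             # {'P1': 0, 'P2': 23, 'P3': 25}
print(sum(waits.values()) / len(waits))  # average waiting time: 16.0
```

With a 24-unit process at the front, the two short processes wait 23 and 25 units, so the average waiting time climbs to 16.0, which is exactly the effect of long jobs holding up short ones.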
In the Shortest Job First (SJF) algorithm, the scheduler selects the process with the minimum burst time for its execution.
This algorithm has two versions: preemptive and non-preemptive.
The algorithm helps reduce the average waiting time of processes that are in line for execution.
Process throughput is improved as processes with the minimum burst time are executed first.
The turnaround time is significantly less.
The SJF algorithm cannot be readily implemented for short-term scheduling, as the length of the upcoming CPU burst cannot be known in advance; in practice it must be estimated.
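A minimal Python sketch of the non-preemptive version of SJF; the process data are invented for illustration. For the same four processes, FCFS in arrival order would average a 7.5-unit wait, while SJF averages 3.75.

```python
# Non-preemptive SJF: whenever the CPU is free, pick the ready process with
# the shortest burst time and run it to completion.
def sjf(processes):
    """processes: list of (name, arrival_time, burst_time) tuples."""
    remaining = list(processes)
    clock = 0
    waits = {}
    while remaining:
        ready = [p for p in remaining if p[1] <= clock]
        if not ready:                     # CPU idle: jump to the next arrival
            clock = min(p[1] for p in remaining)
            continue
        name, arrival, burst = min(ready, key=lambda p: p[2])  # shortest job
        waits[name] = clock - arrival
        clock += burst
        remaining.remove((name, arrival, burst))
    return waits

procs = [("P1", 0, 7), ("P2", 0, 4), ("P3", 0, 1), ("P4", 0, 4)]
waits = sjf(procs)
print(waits)                             # {'P3': 0, 'P2': 1, 'P4': 5, 'P1': 9}
print(sum(waits.values()) / len(waits))  # average waiting time: 3.75
```

Running the shortest jobs first minimizes the average waiting time, but the sketch also shows the hazard mentioned above: it selects on burst time, which a real scheduler can only estimate, and a steady stream of short jobs could starve the long process P1.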