Unit-1-CA-Part 1

The document outlines the history of computers, detailing the evolution from the first-generation IAS computer built under John von Neumann's design to the introduction of transistors and integrated circuits in subsequent generations. It highlights key advancements such as the development of high-level programming languages and the emergence of personal computers, as well as the differences between RISC and CISC architectures. Additionally, it discusses performance considerations for CPUs, including speed and execution time.

ERAS IN HISTORY OF COMPUTERS

IAS COMPUTER
FIRST GENERATION
• The IAS machine was the first
electronic computer built at the Institute for
Advanced Study (IAS) in Princeton, New
Jersey. It is sometimes called the von Neumann
machine, since the paper describing its design
was written by John von Neumann, a mathematics
professor.
• The computer was built under his direction,
starting in 1946 and finished in 1951. The general
organization is called von Neumann
architecture. The computer is in the collection
of the Smithsonian National Museum of
American History but is not currently on display.
IAS COMPUTER

• The IAS machine was a binary computer with a 40-bit word, storing two 20-bit
instructions in each word. The memory was 1,024 words (5 kilobytes in modern
terminology). Negative numbers were represented in two's complement format.
• It had two general-purpose registers available: the Accumulator (AC) and
Multiplier/Quotient (MQ). It used 1,700 vacuum tubes. The memory was originally
designed for about 2,300 RCA Selectron vacuum tubes. It weighed about 1,000 pounds
(450 kg).
• It was an asynchronous machine, meaning that there was no central clock regulating the
timing of the instructions. One instruction started executing when the previous one
finished. The addition time was 62 microseconds and the multiplication time was 713
microseconds.
IAS COMPUTER

• In 1947 von Neumann began to design a new stored-program electronic computer,
now referred to as the IAS computer, at the Institute for Advanced Study in
Princeton.
• The IAS machine was designed to process all bits of a binary number simultaneously
or in parallel.
• One of the two main parts of the CPU is responsible for fetching instructions from
main memory and interpreting them; this part is variously known as the program
control unit (PCU) or the I-unit (instruction unit). The second major part of the
CPU is responsible for executing instructions and is known as the data processing
unit (DPU), the datapath, or the E-unit (execution unit).
• The major components of the PCU are the instruction register IR, which stores the
opcode that is currently being executed, and the program counter PC, which
automatically stores and keeps track of the address of the next instruction to be
fetched.
IAS COMPUTER

• The IAS has two general-purpose 40-bit data registers: AC
(accumulator) and DR (data register). It also has a third, special-purpose
data register MQ (multiplier-quotient) intended for use by multiply and
divide instructions.
• The IAS machine had around 30 types of instructions. The group of
instructions called program-control or branch instructions determine
the sequence in which instructions are executed. Program counter PC
specifies the address of the next instruction to be executed. Instructions
are normally executed in a fixed order determined by incrementing the
program counter PC.
ORGANIZATION OF THE CPU AND MAIN MEMORY OF THE IAS COMPUTER.
IAS COMPUTER
• The IAS design was widely copied, resulting in the construction of several derivative
computers referred to as "IAS machines", although they were not software compatible.
• Some of these "IAS machines" were:
• AVIDAC (Argonne National Laboratory)
• BESK (Stockholm)
• BESM (Moscow)
• Circle Computer (Hogan Laboratories, Inc.), 1954
• CYCLONE (Iowa State University)
• DASK (Regnecentralen, Copenhagen, 1958)
• GEORGE (Argonne National Laboratory)
• IBM 701 (19 installations)
• ILLIAC I (University of Illinois at Urbana–Champaign)
• JOHNNIAC (RAND)
• MANIAC I (Los Alamos National Laboratory)
• MISTIC (Michigan State University)
• MUSASINO-1 (Musashino, Tokyo, Japan)
• ORACLE (Oak Ridge National Laboratory)
• ORDVAC (Aberdeen Proving Ground)
• PERM (Munich)
• SARA (SAAB)
• SEAC (Washington, D.C.)
• SILLIAC (University of Sydney)
• SMIL (Lund University)
• TIFRAC (Tata Institute of Fundamental Research)
• WEIZAC (Weizmann Institute)
IAS COMPUTER -- SAMPLE INSTRUCTIONS
THE SECOND GENERATION
1959 - 1965 ---- TRANSISTOR

• Computer hardware and software evolved rapidly after the


introduction of the first commercial computers around 1950. The
vacuum tube quickly gave way to the transistor, which
was invented at Bell Laboratories in 1947, and a second generation of
computers based on transistors superseded the first generation of
vacuum tube-based machines.
• Like a vacuum tube, a transistor serves as a high-speed
electronic switch for binary signals, but it is smaller, cheaper
and requires much less power than a vacuum tube.
THE SECOND GENERATION
1959 - 1965 ---- TRANSISTOR
• "Scientific" computers of the second generation, such as the IBM
7094 which appeared in 1962, introduced floating-point number
formats and supporting instructions to facilitate numerical
processing. Floating point is a type of scientific notation where a
number such as 0.0000000709.
IBM 7094 - ARCHITECTURE
THE SECOND GENERATION
1959 - 1965 ---- TRANSISTOR

• High-level languages were also developed in the 1950s for business applications.
These are characterized by instructions that resemble English statements and operate
on textual as well as numerical data.
• One of the earliest such languages was Common Business Oriented Language
(COBOL), which was defined in 1959 by a group representing computer users and
manufacturers and sponsored by the U.S. Department of Defense. Like FORTRAN,
COBOL has continued (in various revised forms) to be among the most widely used
programming languages. FORTRAN and COBOL are the forerunners of other
important high-level languages, including Basic, Pascal, C, and Java, the latter dating from
the mid-1990s.
A NONSTANDARD ARCHITECTURE: STACK COMPUTERS.

• PUSH
• Step 1: Increment SP.
• Step 2: Write the data to the memory location pointed to by SP.
• POP
• Step 1: Read the data from the memory location pointed to by SP.
• Step 2: Decrement SP.
A NONSTANDARD ARCHITECTURE: STACK COMPUTERS.

• z := w + 3 × (x - y)
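As a sketch, the assignment above can be compiled for a hypothetical zero-address stack machine (the PUSH/POP/SUB/MUL/ADD mnemonics here are illustrative, not from any specific instruction set). Rewritten in postfix (reverse Polish) form the expression becomes w x y - 3 × +, which is evaluated as:

PUSH w   -- stack: w
PUSH x   -- stack: w, x
PUSH y   -- stack: w, x, y
SUB      -- pop y and x, push x - y
PUSH 3   -- stack: w, x-y, 3
MUL      -- pop, push 3 × (x - y)
ADD      -- pop, push w + 3 × (x - y)
POP z    -- store the result in z

Note that the operands appear in the postfix string in exactly the order they are pushed, and every operator works on the top of the stack, so no explicit register or memory addresses are needed.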
POLISH NOTATION

• Polish notation was invented in 1924 by Polish logician Jan Łukasiewicz.


THE SECOND GENERATION
1959 - 1965 ---- TRANSISTOR

• An operating system is a system program designed to manage a computer's resources
efficiently and provide a set of common services to its users.
• Multiprogramming exploits the fact that a typical program alternates between program
execution, when it requires use of the CPU, and IO operations, when it requires use of
an IOP. Multiprogramming is accomplished by the CPU temporarily suspending execution
of its current program, beginning execution of a second program, and returning to the
first program later.
THE THIRD GENERATION
1965 - 1971 ---- INTEGRATED CIRCUITS
• This generation is traditionally associated with the introduction of integrated circuits (ICs),
which first appeared commercially in 1961, to replace the discrete electronic circuits used in
second-generation computers. The transistor continued as the basic switching device, but ICs
allowed large numbers of transistors and associated components to be combined on a tiny
piece of semiconductor material, usually silicon. IC technology initiated a long-term
trend in computer design toward smaller size, higher speed, and lower hardware
cost. Perhaps the most significant event of the third-generation period (which began around
1965) was recognition of the need to standardize computers in order to allow software
to be developed and used more efficiently.
• The cost of writing and maintaining programs for a particular computer (the software cost)
began to exceed the cost of the computer's hardware. At the same time many big users of
computers, such as banks and insurance companies, were creating huge amounts of
application software on which their business operations were becoming very
dependent.
IBM 360 ARCHITECTURE -- SOFTWARE COMPATIBLE
THE FOURTH GENERATION
1971 - 1980 ---- VLSI ERA - MICROPROCESSORS

• IC technology has evolved steadily from ICs containing just a few transistors (SSI, MSI) to
those containing thousands (LSI) or millions of transistors; the latter case is termed very
large-scale integration or VLSI. The impact of VLSI on computer design and application has
been profound. VLSI allows manufacturers to fabricate a CPU, main memory, or even all
the electronic circuits of a computer on a single IC that can be mass-produced at very
low cost.
EVOLUTION OF THE DENSITY OF COMMERCIAL ICS.
MICROPROCESSOR

• The first microprocessor, Intel's 4004, which was introduced in 1971, was designed to
process 4-bit words.
• Intel successfully marketed the 4004 as a programmable controller to replace standard,
non-programmable logic circuits. As IC technology improved and chip density increased,
the complexity and performance of one-chip microprocessors increased steadily, as
reflected in the increase in CPU word size to 8 and then 16 bits by the mid-1980s. By
1990 manufacturers could fabricate the entire CPU of a System/360-class computer,
along with part of its main memory, on a single IC. The combination of a CPU, memory,
and IO circuits in one IC (or a small number of ICs) is called a microcomputer.
INTEL CORPORATION
IC FAMILIES

• Two of the most important of these technologies are bipolar and unipolar (MOS). Both
bipolar and MOS circuits have transistors as their basic elements.
• Bipolar circuits use both negative carriers (electrons) and positive carriers (holes); their
basic element is the bipolar junction transistor (BJT). MOS circuits, on the other hand,
use only one type of charge carrier: positive in the case of P-type MOS (PMOS) and
negative in the case of N-type MOS (NMOS).
• An MOS family that efficiently combines PMOS and NMOS transistors in the same IC is
complementary MOS or CMOS. This technology came into widespread use in the 1980s
and has been the technology of choice for microprocessors and other VLSI ICs since
then because of its combination of high density, high speed, and very low power
consumption.
ZERO DETECTOR CIRCUIT
CMOS – ZERO DETECTOR CIRCUIT
WHEN ALL INPUTS ARE ZERO, OUTPUT IS ONE
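Since the circuit diagram is not reproduced here, the following behavioral VHDL sketch shows a zero detector (the four-bit input width is an assumption for illustration):

entity zero_detector is
  port(a: in bit_vector(3 downto 0);  -- input width assumed for illustration
       z: out bit);
end zero_detector;

architecture behavior of zero_detector is
begin
  -- z is 1 only when every input bit is 0 (a 4-input NOR)
  z <= not (a(3) or a(2) or a(1) or a(0));
end behavior;

In static CMOS this NOR function is realized with the NMOS transistors connected in parallel in the pull-down network and the PMOS transistors connected in series in the pull-up network, so the output is driven high only when all inputs are zero.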
PROCESSOR ARCHITECTURE
• By 1980 computers were classified into three main
types: mainframe computers, minicomputers, and
microcomputers.
• The term mainframe was applied to the traditional
"large" computer system, often containing thousands of
ICs and costing millions of dollars. It typically served as
the central computing facility for an organization such as
a university, a factory, or a bank. Mainframes were then
room-sized machines placed in special computer centers
and not directly accessible to the average user. EX: IBM
zSeries, System z9.
PROCESSOR ARCHITECTURE

• The minicomputer was a smaller (desk size) and slower


version of the mainframe, but its relatively low cost
(hundreds of thousands of dollars) made it suitable as a
"departmental" computer to be shared by a group of
users—in a small business, for example.
EX: Web Server, Database Server.
• The microcomputer was even smaller, slower, and
cheaper (a few thousand dollars), packing all the
electronics of a computer into a handful of ICs,
including microprocessor (CPU), memory, and IO chips.
EX : Laptop
PROCESSOR ARCHITECTURE

• Microcomputer technology gave rise to a new class of general-


purpose machines called personal computers (PCs), which are
intended for a single user. These small, inexpensive computers
are designed to sit on an office desk or fold into a compact
form to be carried. The more powerful desktop computers
intended for scientific computing are referred to as
workstations.
• Personal computers were introduced in the mid-1970s by a
small electronics kit maker, MITS Inc. [Augarten 1984]. The
MITS Altair computer was built around the Intel 8080, an early
8-bit microprocessor, and cost only $395 in kit form. The most
successful personal computer family was the IBM PC series
introduced in 1981.
PROCESSOR ARCHITECTURE

• A new factor also aided the standardization process—namely,


IBM's decision to give the PC what came to be called an open
architecture, by making its design specifications available to other
manufacturers of computer hardware and software.
• As a result, the IBM PC became very popular, and many versions
of it—the so-called PC clones — were produced by others,
including startup companies that made the manufacture of low-
cost PC clones their main business.
PERSONAL COMPUTER
INTEL COMPUTER FAMILY
PROPERTIES OF RISC
(REDUCED INSTRUCTION SET COMPUTER)

• RISC
• Store/load are the only memory accesses
• Data manipulation instructions are register-to-
register
• Simple addressing modes
• Instruction formats are all the same length
• Instructions perform elementary operations
• One instruction per cycle (simple instruction)
• Examples: SUN’s SPARC, PowerPC, Microchip PIC
CPUs, and RISC-V
PROPERTIES OF CISC
(COMPLEX INSTRUCTION SET COMPUTER)

• CISC
• Memory access is available to most types of instruction
• Many addressing modes
• Instruction formats are of different lengths
• Instructions perform both elementary and complex
operations
• Multiple cycles for executing one instruction (complex
instruction)
• Examples: VAX, AMD, Intel x86, and the System/360
• MSUB Wd, Wn, Wm, Wa
Multiply-Subtract multiplies two register values (Wn: multiplicand,
Wm: multiplier), subtracts the product from a third register value (Wa:
minuend), and writes the result to the destination register (Wd).
RISC VS. CISC (EX…)

RISC:
LD R4, (R1)
LD R5, (R2)
ADD R6, R4, R5
ST (R3), R6

CISC:
ADD (R3), (R2), (R1)

• Addition of two operands from memory, with the result written to
memory, in RISC and CISC architectures
• Having an operation broken into small instructions (RISC) allows the
compiler to optimize the code
• i.e. between the two LD instructions (memory is slow) the compiler can
add some instructions that don't need memory
• The CISC instruction has no option but to wait for its operands to
come from the memory, potentially delaying other instructions
PERFORMANCE CONSIDERATIONS - CPU

1. CPU speed
2. Program execution time - ET.
3. Millions of Instructions executed Per Second
– MIPS
4. Performance measure wrt ET
5. Speedup techniques.
CPU speed

• A rough indication of CPU speed is the number
of "basic" operations that it can perform per
unit of time.
• The speed of the clock is its frequency f,
measured in millions of ticks per second; the
unit for this is the megahertz (MHz).
• Each tick of the clock triggers a basic operation;
hence the time required to execute the
operation is 1/f microseconds.
• This value is called the clock cycle or clock
period (T clock). For example, a computer
clocked at 250 MHz can perform one basic
operation in the clock period T clock = 1/250 =
0.004 microseconds.
• Program execution time ET.

The CPU's processing of an instruction involves
several steps, each of which requires at least one
clock cycle:
1. Fetch the instruction from main memory M.
2. Decode the instruction's opcode.
3. Load (read) from M any operands needed unless
they are already in CPU registers.
4. Execute the instruction via a register-to-register
operation using an appropriate functional unit of the
CPU, such as a fixed-point adder.
5. Store (write) the results in M unless they are to be
retained in CPU registers.
Program execution time -- ET.

• N is the actual number of instructions executed,
including repeated executions of the same
instruction; it is not the number of instructions
appearing in the program Q.
• As far as the typical computer user is concerned, the
key performance goal is to minimize the total
program execution time ET. ET depends on two basic
parameters of the computer's architecture and
implementation:
• IPS - Instructions executed per second.
• CPI - Cycles per instruction to execute Q.
• Program execution time ET.

Execution time (ET) = Instruction count (N) × Cycles per instruction (CPI) × Clock cycle time (T clock)

Ques 1: Which program runs faster? (ALSU: Arithmetic Logic Shift Unit; IC: instruction count)
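The figure accompanying Ques 1 is not reproduced here, but the comparison works as a sketch with assumed numbers. Suppose a CISC machine runs the program in N = 10^9 instructions at CPI = 4, while a RISC machine needs N = 2 × 10^9 instructions at CPI = 1.5, both clocked at 500 MHz (T clock = 2 ns). Then:

ET(CISC) = 10^9 × 4 × 2 ns = 8 s
ET(RISC) = (2 × 10^9) × 1.5 × 2 ns = 6 s

So the RISC machine finishes first even though it executes twice as many instructions, because its much lower CPI more than compensates.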
• Computer's performance wrt ET

The ET equation indicates how three separate factors, software, architecture,
and hardware technology, jointly determine a computer's performance.

Execution time (ET) = Instruction count (N) × Cycles per instruction (CPI) × Clock cycle time (T)

1. Software: The efficiency with which the programs are written and compiled into
object code influences N, the number of instructions executed. Other factors
being equal, reducing N tends to reduce the overall execution time ET.
2. Architecture: The efficiency with which individual instructions are processed
directly affects CPI, the number of cycles per instruction executed. Reducing
CPI also tends to reduce ET.
3. Hardware: The raw speed of the processor circuits determines the clock frequency f.
Increasing f tends to reduce ET.
In general, the complex instruction sets of CISC processors aim to reduce N at the
expense of CPI, whereas RISC processors aim to reduce CPI at the expense of N.
• SPEEDUP TECHNIQUES

1. CACHE
2. PIPELINE ARCHITECTURE
3. SUPERSCALAR PROCESSING
1. CACHE

Cache is a memory unit placed between the CPU and main memory M and
used to store instructions, data, or both. It has much smaller storage capacity
than M, but it can be accessed (read from or written into) more rapidly and is
often placed (at least partly) on the same chip as the CPU.
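A standard way to quantify the benefit (the formula and numbers below are illustrative, not taken from this text) is the average memory access time with cache hit ratio h:

t(avg) = h × t(cache) + (1 - h) × t(main)

For example, with h = 0.95, t(cache) = 1 cycle, and t(main) = 20 cycles, t(avg) = 0.95 × 1 + 0.05 × 20 = 1.95 cycles, close to the cache's speed even though most of the storage capacity is slow main memory.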
• NON-PIPELINED AND PIPELINED
• PIPELINED AND SUPER-PIPELINED
• THE POWERPC
MICROPROCESSOR SERIES
In the early 1990s Apple, IBM, and Motorola jointly
developed the PowerPC. It is a family of single-chip
microprocessors, including the 601, 603, and other
models, which share a common architecture
derived from the POWER architecture used in IBM's
earlier RISC System/6000 workstations. These are
RISC-style designs:
1. Instructions have a fixed length (32 bits or one
word) and employ just a few opcode formats and
addressing modes.
2. Only load and store instructions can access main
memory; all other instructions must have their
operands in CPU registers. This load/store
architecture reduces the time devoted to accessing
memory. This time is further reduced by the use of
one or more levels of cache memory.
• THE POWERPC
MICROPROCESSOR SERIES
3. Instruction processing is heavily pipelined. For
example, the PowerPC has an E-unit for integer
(fixed-point) operations that has the four pipeline
stages: fetch, decode, execute, and write results.
Hence if an E-unit's pipeline can be kept full, a new
result emerges from it every clock cycle, thus
achieving the ideal performance level of one fully
executed instruction per clock cycle.
4. The CPU contains several E-units—the number
depends on the model—which allow it to issue
several instructions simultaneously and puts the
PowerPC in the superscalar category.
• POWERPC
•Early PowerPC models, such as the 601 and
603, have three E-units: an integer
execution unit, a floating-point unit, and a
branch processing unit, allowing up to three
instructions to be issued in the same clock cycle.
The integer unit executes all fixed-point numerical
and logic operations, including those associated
with load-store instructions.
•Although part of the CPU's program control unit,
the branch processing unit is considered an E-unit
for branch instructions. Each PowerPC chip also
contains a cache memory, whose size and
organization vary with the model.
OVERVIEW OF COMPUTER SYSTEM
•The computer's main hardware components continue to be a CPU, a
main memory, and an input/output subsystem, which communicate with
one another over a system bus. Its main software component is an
operating system that performs most system management functions. The
key hardware element is a single-chip microprocessor, embodying a
modern version of the von Neumann architecture. The microprocessor
serves as the computer's CPU and is responsible for fetching, decoding,
and executing instructions.
•Data and instructions are typically composed of 32-bit words, which
constitute the basic information units processed by the computer. The
CPU is characterized by an instruction set containing up to 200 or so
instruction types, which perform data transfer, data processing, and
program control operations .

•The CPU may be augmented by on-chip or off-chip coprocessors that


implement such specialized functions as managing the graphical user
interface (GUI).
OVERVIEW OF COMPUTER SYSTEM

The role of the computer's main or primary memory M is


to store programs and data as they are being processed by
the CPU. M is a random-access memory (RAM)
comprising a linear store of items (usually 8-bit bytes),
each of which is assigned a unique address that permits
the CPU to read or write its contents via load or store
instructions, respectively.
The CPU usually takes much longer to access a word
stored in the input/output system than to access a word
stored in M—most input/output operations are quite
slow.
OVERVIEW OF COMPUTER SYSTEM

•In the PowerPC, an intermediate memory called a cache is
built in between the CPU and M. The purpose of the
input/output system is to enable a user to communicate with
the computer.
•Input/output devices are attached to the host computer
by means of input/output ports, whose function is to
control data transfers between input/output devices and
main memory.
•Special software, such as the Windows interface
found in personal computers, supports GUIs. Audio
interfaces for speech generation and recognition
extend the computer into a multimedia system.
Microcontrollers
•Microcontrollers were designed for tasks previously handled by
special-purpose control circuits, for example, controlling a home
washing machine or the ignition system of a car. Programs stored in a
read-only memory (ROM) that forms a part of the main memory
tailor a microcontroller to a particular application. The
microcontroller is built into, or embedded in, the controlled
device, often in a way that is invisible to the end user.
•The microcontroller has a conventional computer organization
built around a system bus to which are attached a
microprocessor (the CPU), one or more ROM chips for program
storage, and one or more RAM chips for data and working
storage. All input/output devices are also connected to the system
bus using IO ports with standard interfaces.
Computer networks.

The linking of computers to form networks of various


types has become an increasingly important feature of
modern computing.
local-area network (LAN)

•A computer in an office or industrial environment is
typically linked to other computers in the same
organization via communication links that can be
thought of as an extension to the system bus. The linked
computers then form a small, closed computer
network known as a local-area network (LAN) or
intranet.
• Several LANs can be linked together by various means
including the telephone networks, which increasingly
are designed to accommodate digital data transmission,
including video data, as well as the traditional (digitized)
voice communication.
INTERNET

The internet's development began in the 1960s.


Joseph Carl Robnett Licklider: Created the first practical
schematics for the internet in the early 1960s.
Advanced Research Projects Agency Network
(ARPANET): Created in the late 1960s, this network was funded by
the U.S. Department of Defense.
Bob Kahn and Vinton Gray Cerf : Vinton Gray Cerf is an American
Internet pioneer and is recognized as one of "the fathers of the Internet",
sharing this title with TCP/IP co-developer Bob Kahn.
Paul Mockapetris and Jon Postel: Invented the Domain Name
System (DNS) in 1983. DNS converts human-readable names into IP addresses.
Vinton Gray Cerf and Bob Kahn are American computer scientists
who are known as the "Fathers of the Internet". They designed the
Transmission Control Protocol/Internet Protocol (TCP/IP), which is
the set of rules that govern how data is sent over the internet.
What they did:
•Designed the basic architecture of the internet
•Created the Transmission Control Protocol (TCP) in 1974
•Developed the rules for how data is sent over the internet
•Allowed computers with different hardware to communicate with
each other
ARPANET

•The Internet had its origins in a computer


network called the ARPANET sponsored by the
Advanced Research Projects Agency of
the U.S. Department of Defense around 1970.
•This experimental network was originally
designed to connect research institutions in the
United States ;
•The ARPANET at an early stage in its evolution
(1972), linked 26 research organizations in the
United States.
ARPANET
•The communication software designed for the ARPANET known
as TCP/IP (Transmission Control Protocol/Internet Protocol)
defines the communication standards for the Internet.
•The ARPANET pioneered an information- transmission
technique called packet switching, which divides both long
and short messages into packets of fixed length that can be
transmitted independently from source to destination via variable
numbers of intermediate nodes.
•Each node contains a server that is responsible for sorting the
packets from the various messages and forwarding them to the
appropriate next destinations. Different packets can be sent by
different routes determined by the network traffic conditions. At
the final destination, a message is reassembled from its
constituent packets.
Two characters – ‘L’ and ‘O’ – typed into a computer terminal at
UCLA(University of California, Los Angeles) were successfully
transmitted to a computer at the Stanford Research Institute (SRI),
some 352 miles (566 km) away, before the connection was lost.
OCT 29,1969
Internet - 1990

In the early 1990s Internet emerged,


….which because of its huge size and
global reach—an estimated 16 million
server sites, in 180 countries with
72 million users in 1997—has had a
profound impact on the way people
compute and communicate.
Internet - 1990
In the early years, the Internet was used almost
exclusively to transfer text files such as
electronic mail (e-mail) messages. This situation
changed fundamentally in 1989 when scientists at
CERN (Conseil Europeen pour la Recherche
Nucleaire) in Geneva overlaid on TCP/IP a
new, high-level protocol called http
(hypertext transport protocol) and an
associated programming language html
(hypertext markup language) to permit the
linking of diverse file types (text, still pictures,
movies, sound, etc.) in a simple way.
Cray-1 supercomputer
The Cray-1 was the first commercially successful
supercomputer. It was designed by Seymour Cray and
manufactured by Cray Research. The Cray-1 was the world's
fastest supercomputer from 1976 to 1982.
Speed: The Cray-1 could perform 240 million calculations per
second.
Size: The Cray-1 was smaller than other supercomputers of its
time.
Design: The Cray-1 had a round tower shape to minimize wire
lengths, a bench to hide power supplies, and integrated circuits
that were densely packed.
Memory: The Cray-1 had a 1 million-word semiconductor
memory.
Registers: The Cray-1 had three primary register sets, including
eight 24-bit address registers, eight 64-bit scalar registers, and
eight vector registers.
Cray-1 supercomputer
Tightly coupled & Loosely coupled systems

Tightly coupled systems have components that are highly


dependent on each other, while loosely coupled systems have
components that are more independent.
Tightly coupled systems
•Components are dependent: Changes to one component
can affect other components and the entire system.
•Inflexible: Modifications to one component may require
changes to others.
•Difficult to scale and maintain: Changes can make it hard
to scale and maintain the system.
Loosely coupled systems
•Components are independent: Changes to one
component do not affect other components.
•Flexible and easier to maintain: Changes are less likely to
impact the entire system.
•Easier to develop and test: Components can be developed,
modified, and tested independently.
Tightly coupled & Loosely coupled systems
Multiprocessors

•Many large-scale scientific computations permit a task
to be partitioned into subtasks but require frequent and
rapid exchange of results between the subtasks. The
time required for such exchanges (they are essentially
slow IO transfers) limits the usefulness of a computer
network as a supercomputer.
•To address the interprocessor communication problem,
computers have been built that employ n separate
CPUs that are tightly coupled, both physically and
logically. Processors in these machines can access one
another's data rapidly and are called multiprocessors.
Types of multiprocessors
Two types of multiprocessors are shared-memory
and distributed-memory machines.
• In shared-memory machines all the processors
have access to a common main memory through
which they communicate to share programs and
data.
•In distributed-memory machines each processor
has only a private or local main memory and
communicates with other processors by sending
them messages through an I/O subsystem linking the
processors.
•In each case a key issue is to design processor-to-
memory or processor-to-processor interconnection
networks that are of high speed and reasonable
cost. For small multiprocessors containing up to 30 or
so processors, a fast bus can serve as an
interconnection network.
Types of multiprocessors
(Figure: shared-memory and distributed-memory organizations)
SYSTEM REPRESENTATION

• System ---collection of objects called components


• Function of a system---Function of its components and
how components are connected.
• Example ---Graph
SYSTEM DESIGN

• System representation
• Design process
NODES AND EDGES
CENTRAL PROPERTIES
OF SYSTEM

• Structure – the abstract graph consisting of its block diagram
• Behavior – enables us to determine, for any input signal 'a'
to the system, the corresponding output f(a).
IDENTIFY THE CIRCUIT …..
SYSTEM DESIGN—HALF ADDER
(Figure: the half adder is designed in three steps; its outputs are C = f(a, b) and D = f(a, b).)
HDL- HARDWARE DESCRIPTION
LANGUAGE
• VHDL -- VHSIC Hardware Description Language
• VHSIC -- Very High Speed Integrated Circuits
VHDL

• Hardware description languages describe a system


• Systems can be described from many different points of view

• Behavior: what does it do?


• Structure: what is it composed of?
• Functional properties: how do I interface to it?
• Physical properties: how fast is it?
VHDL – TWO DESCRIPTION PARTS

• Entity declaration: interface to outside


world; defines input and output signals
• Architecture: describes the entity, contains
process, components operating concurrently
VHDL FOR HALF ADDER
entity half_adder is
  port(a, b: in bit;
       c, d: out bit);
end half_adder;

architecture myadd of half_adder is
begin
  c <= a xor b;  -- c is the sum bit
  d <= a and b;  -- d is the carry bit
end myadd;
COMPONENT AND PORT MAP IN VHDL
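The corresponding figure is not reproduced here; as a sketch of the component and port map constructs (my own illustration, not from the original slides), the following structural description builds a full adder from two instances of the half_adder entity declared above plus an OR gate:

entity full_adder is
  port(a, b, cin: in bit;
       sum, cout: out bit);
end full_adder;

architecture structural of full_adder is
  -- declare half_adder as a reusable component
  component half_adder
    port(a, b: in bit;
         c, d: out bit);
  end component;
  signal s1, c1, c2: bit;
begin
  -- first half adder: adds a and b
  ha1: half_adder port map(a => a, b => b, c => s1, d => c1);
  -- second half adder: adds the partial sum and the carry-in
  ha2: half_adder port map(a => s1, b => cin, c => sum, d => c2);
  -- a carry out of either stage produces the final carry
  cout <= c1 or c2;
end structural;

Each port map associates the component's formal ports (a, b, c, d) with the actual signals of the enclosing design, which is how structural VHDL wires instances together.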
DESIGN PROCESS

• Given a system's structure, the task of determining its function or behavior is termed analysis.
The converse problem of determining a system structure that exhibits a given behavior is design
or synthesis.
DESIGN PROCESS

• CAD editors or translators convert design data into


forms such as HDL descriptions or schematic diagrams,
which humans, computers, or both can efficiently process.
• Simulators create computer models of a new design, which
can mimic the design's behavior and help designers
determine how well the design meets various performance
and cost goals.
• Synthesizers automate the design process itself by deriving
structures that implement all or part of some design step.
Synthesis refers to the process of automatically generating a hardware design or
circuit description from a hardware description language (HDL)
such as VHDL or SystemVerilog.
DESIGN PROCESS

 Design rule checking (DRC)


 LVS (layout versus schematic)
 ERC (electrical rule check)
HIERARCHY OF COMPUTER ARCHITECTURE

(Figure: layered view of computer architecture, from application software down to hardware)
Software: Application Programs; Operating System; High-Level Language Programs; Assembly Language; Machine Language; Compiler; Firmware
Software/Hardware Boundary: Instruction Set Architecture (Instruction Set Processor, I/O system)
Hardware: Datapath & Control; Digital Design (Microprogram, Register Transfer Notation (RTN), Logic Diagrams); Circuit Design (Circuit Diagrams); Layout
DESIGN PROCESS FOR DIGITAL
SYSTEMS

• Processor level
• Register level
• Gate level
DESIGN LEVELS.
FULL ADDER
FOUR BIT RIPPLE CARRY ADDER
RIPPLE CARRY ADDER

• It is usual to supply a sequential circuit with a precisely


controlled clock signal that determines the times at which the
flip-flops change state; the resulting circuit is said to be clocked
or synchronous. Each tick (cycle or period) of the clock
permits a single change in the circuit's state Y.
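Since the four-bit ripple-carry adder figure is not reproduced, here is a structural VHDL sketch (my own illustration) that chains four full_adder stages, with each stage's carry-out feeding the next stage's carry-in:

entity ripple_adder4 is
  port(x, y: in bit_vector(3 downto 0);
       cin: in bit;
       s: out bit_vector(3 downto 0);
       cout: out bit);
end ripple_adder4;

architecture structural of ripple_adder4 is
  component full_adder
    port(a, b, cin: in bit;
         sum, cout: out bit);
  end component;
  signal c: bit_vector(4 downto 0);  -- internal carry chain
begin
  c(0) <= cin;
  -- instantiate one full adder per bit position
  stage: for i in 0 to 3 generate
    fa: full_adder port map(a => x(i), b => y(i), cin => c(i),
                            sum => s(i), cout => c(i + 1));
  end generate;
  cout <= c(4);
end structural;

The worst-case delay grows with the word length, because a carry generated in stage 0 must ripple through every stage before the final sum is valid.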
REGISTER LEVEL
MULTIPLEXERS.

• A multiplexer is a device intended to route data from one of several
sources to a common destination; the source is specified by applying
appropriate control (select) signals to the multiplexer. If the
maximum number of data sources is k and each input data line carries
m bits, the multiplexer is referred to as a k-input (or k-way), m-bit
multiplexer. It is convenient to make k = 2^p, so that data source
selection is determined by an encoded pattern or address of p bits.
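As a behavioral VHDL sketch (my own illustration) of the k = 4, m = 1 case, with p = 2 select bits choosing among four one-bit sources:

entity mux4 is
  port(d0, d1, d2, d3: in bit;          -- k = 4 data sources
       sel: in bit_vector(1 downto 0);  -- p = 2 select (address) bits
       z: out bit);                     -- common destination
end mux4;

architecture behavior of mux4 is
begin
  -- route the selected source to the output
  with sel select
    z <= d0 when "00",
         d1 when "01",
         d2 when "10",
         d3 when others;
end behavior;

An m-bit multiplexer simply replicates this one-bit selection m times, sharing the same select lines.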
MULTIPLEXERS
2:4 DECODER
DESIGN OF MAGNITUDE COMPARATOR USING 4 BIT
BINARY ADDER
X X’ Y Y’ Z1 = X<Y Z2 = Z3 = X>Y
X’+Y= (X=Y) X+Y’=
Cout Cout
0011 1100 1011 0100 1 0 0

0111 1000 1100 0011 1 0 0

1011 0100 0011 1100 0 0 1

1100 0011 0111 1000 0 0 1

1100 0011 1100 0011 0 1 0
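A structural VHDL sketch of this scheme (my own illustration, reusing the ripple_adder4 component from earlier): the carry-out of X' + Y flags X < Y, the carry-out of X + Y' flags X > Y, and X = Y holds when neither carry occurs.

entity comparator4 is
  port(x, y: in bit_vector(3 downto 0);
       lt, eq, gt: out bit);  -- X<Y, X=Y, X>Y
end comparator4;

architecture structural of comparator4 is
  component ripple_adder4
    port(x, y: in bit_vector(3 downto 0); cin: in bit;
         s: out bit_vector(3 downto 0); cout: out bit);
  end component;
  signal xn, yn, s1, s2: bit_vector(3 downto 0);
  signal c_lt, c_gt: bit;
begin
  xn <= not x;  -- X'
  yn <= not y;  -- Y'
  -- carry out of X' + Y means (15 - X) + Y >= 16, i.e. X < Y
  u1: ripple_adder4 port map(x => xn, y => y, cin => '0', s => s1, cout => c_lt);
  -- carry out of X + Y' means X + (15 - Y) >= 16, i.e. X > Y
  u2: ripple_adder4 port map(x => x, y => yn, cin => '0', s => s2, cout => c_gt);
  lt <= c_lt;
  gt <= c_gt;
  eq <= not (c_lt or c_gt);  -- neither carry: X = Y
end structural;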


PROGRAMMABLE LOGIC DEVICES

• A class of components called programmable logic devices or PLDs, a term applied to ICs
containing many gates or other general-purpose cells whose interconnections can be
configured or "programmed" to implement any desired combinational or
sequential function [Alford 1989].
• PLDs are relatively easy to design and inexpensive to manufacture. They constitute a key
technology for building application-specific integrated circuits (ASICs).
• Two techniques are used to program PLDs: mask programming, which requires a few
special steps in the IC chip-manufacturing process, and field programming, which is
done by designers or end users "in the field" via small, low-cost programming units. Some
field-programmable PLDs are erasable, implying that the same IC can be reprogrammed
many times. This technology is especially convenient when developing and debugging a
prototype design for a new product.
PROGRAMMABLE ARRAYS.
• The connections leading to and from logic elements in a PLD contain transistor switches
that can be programmed to be permanently switched on or switched off. These switches
are laid out in two-dimensional arrays so that large gates can be implemented
with minimum IC area.
• The programmable logic gates of a PLD array are represented abstractly in Figure 2.32b,
with x denoting a programmable connection or cross point in a gate's input line.
The absence of an x means that the corresponding connection has been programmed to
the off (disconnected) state.
• The gate structures of Figure 2.32b can be combined in various ways to implement logic
functions. The programmable logic array (PLA) shown in Figure 2.33 is intended to realize
a set of combinational logic functions in minimal SOP form. It consists of an array of
AND gates (the AND plane), which realize a set of product terms (prime implicants),
and a set of OR gates (the OR plane), which form various logical sums of the product
terms.
PROGRAMMABLE ARRAYS.
• The inputs to the AND gates are programmable and include all the input variables
and their complements. Hence it is possible to program any desired product term
into any row of the PLA.
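As a hypothetical illustration (not from the text): a PLA with inputs x1, x2, x3 could have its AND plane programmed with the product terms P1 = x1·x2 and P2 = x2'·x3, and its OR plane programmed to form the sums f1 = P1 + P2 and f2 = P2, realizing both functions in two-level sum-of-products form while sharing the product term P2 between the two outputs.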
PROGRAMMABLE ARRAYS.
• Closely related to a PLA is a read-only memory (ROM) that generates all 2^n
possible n -variable product terms (minterms) in its AND plane. This
enables each output column of the OR plane to realize any desired function
of n or fewer variables in sum-of-minterms form.
• In ROM, the AND plane is fixed; the programming that determines the
functions generated by a ROM is confined to the OR plane. A small ROM
with three input variables, 2^3 = 8 rows, and two output columns is shown in
Figure 2.34b. It has been programmed to realize the full-adder function defined by
Figure 2.34a—compare the multiplexer realizations of the full adder appearing in
Figure 2.22. Note the use of dots to denote the fixed connections in the AND
plane.
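The programmed OR plane behaves like a lookup table. A behavioral VHDL sketch of the three-input, two-output full-adder ROM described above (the port names and bit ordering are my own assumptions, since Figure 2.34 is not reproduced):

entity rom_full_adder is
  port(addr: in bit_vector(2 downto 0);    -- address bits: x, y, carry-in
       data: out bit_vector(1 downto 0));  -- data(1) = sum, data(0) = carry-out
end rom_full_adder;

architecture behavior of rom_full_adder is
begin
  -- the eight stored words of the OR plane, one per minterm (address)
  with addr select
    data <= "00" when "000",
            "10" when "001",
            "10" when "010",
            "01" when "011",
            "10" when "100",
            "01" when "101",
            "01" when "110",
            "11" when others;  -- address "111": sum = 1, carry-out = 1
end behavior;

Every input combination simply selects one stored word, which is exactly the sense in which the AND plane acts as a 1-out-of-2^n address decoder.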
PROGRAMMABLE ARRAYS.

• Field-programmable ROMs are known as PROMs (programmable ROMs). PLAs


and ROMs are universal function generators capable of realizing a set of
logic functions that depend on some maximum number of variables.
• They are two level logic circuits in which the lines can have large fan-out and the
gates (especially the output gates) can have large fan-in. High fan-in and fan-
out tend to make these circuits' propagation delays quite high, however.
• A ROM is a memory device only in the sense that its OR plane "stores" the 2^n
data words that have been programmed into it. A stored word is read out
each time the ROM receives a new input combination or address. The
AND plane therefore serves as a 1-out-of-2^n address decoder.
FIELD-PROGRAMMABLE GATE ARRAY (FPGA)

• Field-programmable gate arrays. This important class of PLDs was introduced


in the mid-1980s. A field-programmable gate array (FPGA) is a two-
dimensional array of general-purpose logic circuits, called cells or logic
blocks, whose functions are programmable; the cells are linked to one
another by programmable buses.
• The cell types are not restricted to gates. They are small multifunction
circuits capable of realizing all Boolean functions of a few variables; a
cell may also contain one or two flip-flops. Like all field-programmable devices,
FPGAs are suitable for implementing prototype designs and for small-scale
manufacture. FPGAs can store the program that determines the circuit to be
implemented in a RAM or PROM on the FPGA chip.
FIELD-PROGRAMMABLE GATE ARRAY (FPGA)

• The pattern of the data in this configuration memory (CM) determines the cells'
functions and their interconnection wiring. Each bit of CM controls a transistor
switch in the target circuit that can select some cell function or make (break) some
connection.
• By replacing the contents of CM, designers can make design changes or correct design
errors. This type of FPGA can be reprogrammed repeatedly, which significantly reduces
development and manufacturing costs. Some FPGAs employ fuses or antifuses as
switches, which means that each FPGA IC can be programmed only once. These
one-time programmable FPGAs have other advantages, however, such as higher density,
and smaller or more predictable delays.
Field-Programmable Gate Array (FPGA)

• Two types of logic cells found in FPGAs are
those based on
1. Multiplexers
2. PROM table-lookup memories.
MULTIPLEXER-BASED FPGA
GATE ???
D FF
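The D flip-flop figure is not reproduced; a behavioral VHDL sketch (my own illustration) of a positive-edge-triggered D flip-flop:

entity d_ff is
  port(clk, d: in bit;
       q: out bit);
end d_ff;

architecture behavior of d_ff is
begin
  process(clk)
  begin
    -- capture d on each rising clock edge
    if clk'event and clk = '1' then
      q <= d;
    end if;
  end process;
end behavior;

In the serial adder named below, one such flip-flop stores the carry produced at each bit position so it can be added into the next, more significant bit on the following clock cycle.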
SERIAL ADDER
2 STAGE PIPELINE DESIGN
Processor LEVEL
I/O processors
 In computer systems, input-output processors (IOPs) are specialized components
or units that manage the flow of data between the computer's central
processing unit (CPU) and external devices (such as storage devices, printers,
keyboards, etc.). The primary function of IOPs is to offload input/output (I/O)
operations from the main processor, enabling faster and more efficient data
transfers.
 Key functions of an Input-Output Processor:
 Data Transfer: They facilitate the transfer of data between the CPU and
peripheral devices (like hard drives, keyboards, printers, etc.).
 Data Conversion: Some IOPs also handle conversion between data formats to
ensure compatibility between the CPU and different devices (for instance,
converting digital data to analog signals for audio).
 Interrupt Handling: IOPs manage interrupts generated by external devices, which
prompt the CPU to process certain I/O operations.
 Buffering: IOPs often use buffers to temporarily store data during transmission,
preventing the CPU from being slowed down by slow devices.
CPU ORGANIZATION
Fundamentals
 The primary function of the CPU and other instruction-set processors is to
execute sequences of instructions, that is, programs, which are stored in
an external main memory.
 Program execution is therefore carried out as follows:
 1. The CPU transfers instructions and, when necessary, their input data
(operands) from main memory to registers in the CPU.
 2. The CPU executes the instructions in their stored sequence except
when the execution sequence is explicitly altered by a branch
instruction.
 3. When necessary, the CPU transfers output data (results) from the CPU
registers to main memory.
Cache
 When no cache memory is present, the CPU communicates directly
with the main memory M, which is typically a high-capacity multichip
random-access memory (RAM). The CPU is significantly faster than M:
that is, it can read from or write to the CPU's registers perhaps 5 to 10
times faster than it can read from or write to memory. VLSI technology,
especially the single-chip microprocessor, has tended to increase the
processor/main-memory speed disparity.
 To remedy this situation, many computers have a cache memory CM
positioned between the CPU and main memory. The cache CM is
smaller and faster than main memory and may reside, wholly or in
part, on the same chip as the CPU.
 It typically permits the CPU to perform a memory load or store
operation in a single clock cycle, whereas a memory access that
bypasses the cache and is handled by main memory takes many
clock cycles.
User and supervisor modes.
 The programs executed by a general -purpose computer fall into two broad
groups: user programs and supervisor programs.
 A user or application program handles a specific application, such as word
processing, of interest to the computer's users.
 A supervisor program, on the other hand, manages various routine aspects of the
computer system on behalf of its users; it is typically part of the computer's
operating system. Examples of supervisory functions are controlling a graphics
interface and transferring data between secondary and main memory. In normal
operation the CPU continually switches back and forth between user and
supervisor programs.
 For example, while executing a user program, the need often arises for
information that is available only on some hard disk unit in the computer's IO
system. This condition causes the CPU to temporarily suspend execution of the
user program, execute a routine that initiates the required Input/Output data
transfer operation, and then resume execution of the user program.
