Our required textbook for this course is:

The Essentials of Computer Organization and Architecture, Fifth Edition
by Null and Lobur (2019).

The online/navigate access code is not required for this course; please
don’t purchase one!
Computer Architecture
COSC/ELEN-3443

Chapter 1 - Introduction
You may be wondering…

Why do I need to take computer architecture?

Why can’t I just take the courses I want, like machine learning, web
development, and so on?

The problem with this line of thinking is that you are:

1. Locking yourself out of opportunities
2. Denying yourself knowledge
The material in this course is especially useful in the
following areas:

1. Programming language design and compiler development
2. Device driver development
3. Embedded systems
4. Optimization

and more.
Ch. 1.2 Computer Systems

Principle of equivalence of hardware and software:

“Any task done by software can also be done using hardware, and any
operation performed directly by hardware can be done using software.”

https://visualboyadvance.org/
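The emulator linked above is one example of hardware being reproduced in
software. As another hedged illustration of the principle, here is a minimal
Python sketch (the shift-and-add routine and its name are my own example, not
from the textbook) that multiplies integers purely in software, an operation
most CPUs perform directly in hardware:

```python
# Integer multiplication carried out purely in software using only shifts
# and adds, even though most CPUs perform multiplication directly in
# hardware. (Illustrative example only.)

def multiply_shift_add(a: int, b: int) -> int:
    """Multiply two non-negative integers without using the * operator."""
    result = 0
    while b > 0:
        if b & 1:          # if the lowest bit of b is set,
            result += a    # add the current shifted copy of a
        a <<= 1            # shift a left  (multiply it by 2)
        b >>= 1            # shift b right (divide it by 2)
    return result

print(multiply_shift_add(6, 7))   # 42, the same answer the hardware gives
```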
At their core, computers consist of at least three major parts:

1. a processor to interpret and execute programs
2. some memory to store programs and other data
3. some input/output (I/O) device to communicate with the outside world
We typically consider five categories of computers (from most
powerful to least powerful):

1. Supercomputers
2. Mainframes (a.k.a. servers)
3. Personal computers (desktops, laptops)
4. Mobile devices
5. Embedded systems (sensors)

But these categories are not hard and fast.


Ch. 1.3 Example System
In computer architecture, we have a few typical metrics that we’re
looking at:

• Size in bytes
• Frequency in hertz
• Power draw in watts
• Network speed in bits per second
• And more 
Source: http://en.wikipedia.org/wiki/SI_prefix
Source: http://en.wikipedia.org/wiki/Binary_prefix
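As a hedged sketch of the difference between the decimal (SI) prefixes and
the binary prefixes in the tables referenced above (the drive size used here
is just an example value):

```python
# Decimal (SI) prefixes count in powers of 10, binary prefixes in powers
# of 2, so a drive advertised as "1 TB" holds fewer tebibytes (TiB).

GB  = 10**9    # gigabyte:  1,000,000,000 bytes
GiB = 2**30    # gibibyte:  1,073,741,824 bytes
TB  = 10**12   # terabyte
TiB = 2**40    # tebibyte

print(GB / GiB)        # ~0.931: a gigabyte is about 93% of a gibibyte
print(1 * TB / TiB)    # ~0.909: a "1 TB" drive shows up as about 0.91 TiB
```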
Who’s faster?

Student A: 11,000 millimeters/second
Student B: 0.009 kilometers/second
Student C: 10.7 meters/second

Which SSD is larger?

Option A: 2,500 GB
Option B: 3.1 MB
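Worked out in a quick sketch (the quantities are the ones from the two
questions above), converting everything to a common unit makes both answers
clear:

```python
# Who's faster?  Convert each speed to meters per second.
student_a = 11_000 / 1_000   # 11,000 mm/s -> 11.0 m/s
student_b = 0.009 * 1_000    # 0.009 km/s  ->  9.0 m/s
student_c = 10.7             # already in m/s
print(max(student_a, student_b, student_c))   # 11.0, so Student A wins

# Which SSD is larger?  Convert both capacities to bytes (SI prefixes).
option_a = 2_500 * 10**9     # 2,500 GB
option_b = 3.1 * 10**6       # 3.1 MB
print(option_a > option_b)   # True, so Option A is far larger
```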
• Fast
• Large
• Cheap

At most, we can pick two.
Ch. 1.4 Standards Organizations
We live in a plug-and-play world where:
• Every monitor and graphics card supports HDMI
• Every mouse and PC uses USB
• Your phone can connect to any set of Bluetooth earbuds
• Etc.

This is only possible because of the standards that the electronics
industry follows. We can thank organizations like IEEE (the Institute of
Electrical and Electronics Engineers) for these standards.
The most recent fight over standards has been the charger debate
between Apple and the European Union. Apple’s proprietary Lightning
connector is being phased out due to EU legislation demanding that all
mobile devices use USB-C chargers.

Can you think of one pro to this decision?


[Image: USB-C and Lightning connectors]

Can you think of one con to this decision?

https://www.theguardian.com/technology/2022/oct/26/iphone-usb-c-lightning-connectors-apple-eu-rules
Ch. 1.5 Historical Development
Generation Zero: Mechanical Calculating Machines (1642-1945)

• The Calculating Clock (1592-1635?)
• The Pascaline (1642)
• The Stepped Reckoner (1672)
• The Lightning Portable Adder (1908)
• Addometer (1920)

While all of these machines were useful, none of them can be classified
as computers. They were mostly mechanical calculators.
The Jacquard loom (1801) used punch cards to automate the weaving
process.

https://www.youtube.com/watch?v=K6NgMNvK52A
Charles Babbage designed the Analytical Engine in 1833. This machine,
unlike his earlier Difference Engine, met all of the requirements of
being a computer.

Ada Lovelace worked with Charles Babbage and wrote programs for the
machine.
Several electro-mechanical computers were
invented, but they were always limited by their
moving parts.

The Z1 (1938) by Konrad Zuse, the Bombe (1940) by Alan Turing, and the
Harvard Mark I (1944) by Howard Aiken.
The First Generation: Vacuum Tube Computers (1945-1953)

The introduction of the vacuum tube spelled the end of
electro-mechanical computers.

The first fully electronic computer was the Atanasoff-Berry Computer
(ABC), completed in 1942.

https://www.youtube.com/watch?v=YyxGIbtMS9E
The ENIAC (Electronic Numerical Integrator and Computer), invented
by Mauchly and Eckert in 1945, is considered the first all-electronic,
general-purpose digital computer.

The ENIAC could process 500 FLOPS (FLoating point Operations Per
Second).
The Second Generation: Transistorized Computers
(1954-1965)

The introduction of the transistor spelled the end of vacuum tube
computers.

Transistors can perform the role of the vacuum tube while being much
smaller, cooler, more power efficient, and more reliable.
The first supercomputer, the CDC 6600, was built in 1964. The CDC
6600 could process up to 3 mega FLOPS – compare that to the ENIAC!
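For a quick, hedged sense of scale, here is the comparison using only the two
figures quoted above:

```python
# How much faster was the CDC 6600 than the ENIAC?
eniac_flops   = 500           # ENIAC, 1945
cdc6600_flops = 3_000_000     # CDC 6600, 1964 (3 mega FLOPS)
print(cdc6600_flops // eniac_flops)   # 6000: roughly 6,000x in two decades
```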
The Third Generation: Integrated Circuit Computers
(1965-1980)

Jack Kilby invented the integrated circuit in 1958.

Integrated circuits solved the largest problem in circuit manufacturing:
manually soldering all of the transistors and other components together.

As time went on, IC manufacturing improved and more transistors could
fit on a single chip…

https://ethw.org/Milestones:First_Semiconductor_Integrated_Circuit_(IC),_1958
Seymour Cray, the lead designer of the CDC 6600, went on to build a
new supercomputer called the Cray-1 in 1976.

The Cray-1 could perform 160 mega FLOPS and hold 8 megabytes of
memory.
The Fourth Generation: VLSI Computers (1980-???)

The industry standard for measuring transistor density is as follows:

• SSI (small-scale integration) – 10 to 100 components per chip
• MSI (medium-scale integration) – 100 to 1,000 components per chip
• LSI (large-scale integration) – 1,000 to 10,000 components per chip
• VLSI (very-large-scale integration) – more than 10,000 components per chip

In 1965, Gordon Moore (who would go on to co-found Intel) made the
claim that the component density of integrated circuits would double
every year. This is Moore’s Law. This was later updated to every 18
months.
Intel released the world’s first microprocessor, the 4004, in 1971. This
processor contained 2,300 transistors.

Intel’s “Raptor Lake” processors, released in 2022, contain
approximately 26 billion transistors. This is the power of Moore’s law.

https://forums.tomshardware.com/threads/how-many-transistors-on-raptor-lake-cpus.3791683/
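As a rough, hedged back-of-the-envelope check using only the two transistor
counts quoted above (2,300 in 1971 and roughly 26 billion in 2022), the
sketch below computes the implied doubling period:

```python
import math

# Implied doubling period between the Intel 4004 and "Raptor Lake".
t_4004, year_4004 = 2_300, 1971
t_rpl,  year_rpl  = 26_000_000_000, 2022

doublings = math.log2(t_rpl / t_4004)          # about 23.4 doublings
print((year_rpl - year_4004) / doublings)      # about 2.2 years per doubling
```

That works out to a doubling roughly every couple of years rather than every
18 months, consistent with the slowdown discussed below.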
The fastest supercomputer today is the Hewlett Packard Enterprise
Frontier, or OLCF-5. The Frontier supercomputer can process 1.102 exa
FLOPS and consumes 21 megawatts of power.
While Moore’s law slowed down (transistor density doubling every 18
months instead of every year), transistor counts continued to grow at a
staggering rate.

A new industry standard was considered: ULSI (ultra-large-scale
integration) for chips with more than one billion transistors, but this
threshold was already reached in 2005.

Some people today consider Moore’s law to be dead, but the industry
still has some tricks up its sleeve.
With ICs becoming smaller, home computers were becoming a
possibility.

After several attempts to break into the home computer market, IBM
found success with the IBM Personal Computer in 1981.
The PC was built on an open architecture. This has had a long-standing
effect on the home computer market, even to this day.

The term “PC” has become synonymous with any open-architecture home
computer.
Ch. 1.6 The Computer Level Hierarchy
Most people know how to use simple applications on a computer, and
they’re also aware that there are electronics in the computer that allow
the application to run, but they have no clue what exists in between:
this is called a semantic gap.

We can apply divide-and-conquer to shorten the semantic gap by
separating the computer out into several layers, with the gap between
each layer being relatively small.
Ch. 1.7 Cloud Computing
Skip – covered in more detail in COSC 4451 Distributed Applications.

Ch. 1.8 The Fragility of the Internet
Skip – covered in more detail in COSC 4478 Computer Networks.
Ch. 1.9 The von Neumann Model
One of the largest hurdles that early computer designers had to
overcome was that there was no blueprint to build from, no design they
could easily replicate or use as a starting point.

Mauchly and Eckert developed a proposal for the EDVAC (Electronic
Discrete Variable Automatic Computer), which would include
stored-program memory. John von Neumann published their results, and
we now refer to this proposed architecture as the von Neumann model.
Von Neumann computers…

• consist of a CPU, main memory system, and I/O system,
• can process instructions sequentially,
• have a single line of communication between the CPU and main
memory system – the von Neumann bottleneck.

Von Neumann computers execute programs using the
fetch-decode-execute cycle:

1. The control unit fetches the next instruction from memory,
2. the instruction is decoded into a form the ALU can understand, and
3. the ALU executes the instruction and stores the result back.
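To make the cycle concrete, here is a hedged sketch of a toy von Neumann
machine in Python; the three-instruction set, the encoding, and the memory
contents are invented for this example and are not from the textbook.

```python
# A toy machine with one accumulator, a small memory, and the
# fetch-decode-execute loop described above.

LOAD, ADD, STORE, HALT = range(4)   # invented opcodes

memory = [
    (LOAD,  6),   # 0: load the value at address 6 into the accumulator
    (ADD,   7),   # 1: add the value at address 7 to the accumulator
    (STORE, 8),   # 2: store the accumulator at address 8
    (HALT,  0),   # 3: stop
    None, None,   # 4-5: unused
    2, 3,         # 6-7: data
    0,            # 8: result goes here
]

pc, acc = 0, 0                       # program counter, accumulator
while True:
    opcode, address = memory[pc]     # 1. fetch the next instruction
    pc += 1
    if opcode == HALT:               # 2. decode the opcode...
        break
    elif opcode == LOAD:             # 3. ...then execute it
        acc = memory[address]
    elif opcode == ADD:
        acc += memory[address]
    elif opcode == STORE:
        memory[address] = acc

print(memory[8])                     # prints 5 (2 + 3)
```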
Most computers do not use the “pure” von Neumann model; rather, they
use some modified version of it that fits their needs.

Some popular modifications include:

• The addition of a system bus where all components communicate
through a shared set of wires.
• The addition of a floating-point unit for handling floating-point
numbers.
• The addition of interrupts (or traps) that can disrupt the normal
fetch-decode-execute cycle to handle important events.
• And more.
Ch. 1.10 Non-von Neumann Models
Here are some examples of non-von Neumann models:

• The Harvard architecture uses separate memory modules to store
program instructions and program data.

• Digital signal processors (DSPs) are specialized chips that operate on
streams of data and do not hold to the conventional von Neumann
architecture.

• Parallel computers that have multiple cores (or even multiple
processors) do not hold to the von Neumann model.
Ch. 1.11 Parallel Processors
The simplest idea for parallel processing is to use more than one
chip. The Frontier supercomputer uses 9,472 individual processors.

Another idea is to stick more execution units (a.k.a. cores) on a single
chip, allowing one CPU to perform several tasks in parallel.

https://socs.binus.ac.id/2017/03/27/multi-core-processors/
If you’re multitasking with several independent programs on your
computer, then each program can be given a dedicated core, which
speeds up the system. The alternative is that every program would have
to share one core and wait its turn.

However, if you’re trying to speed up one program, then that program
must be multithreaded to achieve any speedup on a parallel computer.

Multithreading is only possible if the program itself can be parallelized.
Some programs (or portions of a program) cannot be parallelized.

Amdahl’s Law states that “the performance enhancement possible with a
given improvement is limited by the amount that the improved feature is
used.”
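As a concrete, hedged illustration of Amdahl’s Law, the sketch below uses the
common formulation speedup = 1 / ((1 - f) + f / k), where f is the fraction
of execution time that benefits from the improvement and k is the speedup of
that portion; the 90%/8-core numbers are invented for the example.

```python
# Amdahl's Law: overall speedup when a fraction f of a program's runtime
# is improved by a factor k. The example numbers below are invented.

def amdahl_speedup(f: float, k: float) -> float:
    """Overall speedup for parallelizable fraction f and improvement k."""
    return 1.0 / ((1.0 - f) + f / k)

print(amdahl_speedup(0.90, 8))      # about 4.7x on 8 cores, not 8x
print(amdahl_speedup(0.90, 1e9))    # about 10x cap: the serial 10% dominates
```

Even with an unlimited number of cores, the 10% of the program that cannot be
parallelized caps the overall speedup at 10x.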
Summary
In summary, we’ve laid the groundwork for our course by:

• discussing what a computer system is,
• analyzing an example computer system,
• discussing how the standards for computer systems are developed,
• discussing the history of computer systems,
• analyzing the layers of a computer system and what each layer is
responsible for,
• discussing the von Neumann model, and
• briefly touching on parallel processing.
