2.Week

The document provides an overview of advanced computer architecture, focusing on performance issues, including designing for performance, multicore systems, and key laws like Amdahl's and Little's Law. It discusses techniques to enhance microprocessor speed, the importance of cache and memory hierarchy, and the challenges of power and logic density. Additionally, it outlines methods for calculating performance metrics using different means and the significance of benchmarking in evaluating system performance.


(Advanced) Computer Architecture

Prof. Dr. Hasan Hüseyin BALIK


(2nd Week)
Outline
1. Overview
—Basic Concepts and Computer Evolution
—Performance Issues
1.2 Performance Issues
1.2 Outline
• Designing for Performance
• Multicore, MICs, and GPGPUs
• Two Laws that Provide Insight: Amdahl’s Law and Little’s Law
• Basic Measures of Computer Performance
• Calculating the Mean
• Benchmarks and SPEC
Designing for Performance
• The cost of computer systems continues to drop dramatically, while the performance and
capacity of those systems continue to rise equally dramatically
• Today’s laptops have the computing power of an IBM mainframe from 10 or 15 years ago
• Processors are so inexpensive that we now have microprocessors we throw away
• Desktop applications that require the great power of today’s microprocessor-based
systems include:
– Image processing
– Three-dimensional rendering
– Speech recognition
– Videoconferencing
– Multimedia authoring
– Voice and video annotation of files
– Simulation modeling

• Workstation systems now support highly sophisticated engineering and scientific applications and have the capacity to support image and video applications.
• Businesses are relying on increasingly powerful servers to handle transaction and
database processing and to support massive client/server networks that have replaced
the huge mainframe computer centers of yesteryear
• Cloud service providers use massive high-performance banks of servers to satisfy high-
volume, high-transaction-rate applications for a broad spectrum of clients
Microprocessor Speed
Techniques built into contemporary processors include:

• Pipelining – The processor moves data or instructions into a conceptual pipe, with all stages of the pipe processing simultaneously
• Branch prediction – The processor looks ahead in the instruction code fetched from memory and predicts which branches, or groups of instructions, are likely to be processed next
• Superscalar execution – The ability to issue more than one instruction in every processor clock cycle (in effect, multiple parallel pipelines are used)
• Data flow analysis – The processor analyzes which instructions are dependent on each other’s results, or data, to create an optimized schedule of instructions
• Speculative execution – Using branch prediction and data flow analysis, some processors speculatively execute instructions ahead of their actual appearance in the program execution, holding the results in temporary locations, keeping execution engines as busy as possible
Performance Balance
• Adjust the organization and architecture to compensate for the mismatch among the capabilities of the various components
• Architectural examples include:
– Increase the number of bits that are retrieved at one time by making DRAMs “wider” rather than “deeper” and by using wide bus data paths
– Reduce the frequency of memory access by incorporating increasingly complex and efficient cache structures between the processor and main memory
– Increase the interconnect bandwidth between processors and memory by using higher-speed buses and a hierarchy of buses to buffer and structure data flow
– Change the DRAM interface to make it more efficient by including a cache or other buffering scheme on the DRAM chip
[Figure 2.1 Typical I/O Device Data Rates — log-scale chart (10^1 to 10^11 bps) of typical data rates for keyboard, mouse, scanner, laser printer, optical disc, hard disk, Wi-Fi modem (max speed), graphics display, and Ethernet modem (max speed)]


Improvements in Chip Organization and Architecture

• Increase hardware speed of processor
– Fundamentally due to shrinking logic gate size
▪ More gates, packed more tightly, increasing clock rate
▪ Propagation time for signals reduced
• Increase size and speed of caches
– Dedicating part of processor chip
▪ Cache access times drop significantly
• Change processor organization and architecture
– Increase effective speed of instruction execution
– Parallelism
Problems with Clock Speed and Logic Density
• Power
– Power density increases with density of logic and clock speed
– Dissipating heat
• RC delay
– Speed at which electrons flow limited by resistance and capacitance
of metal wires connecting them
– Delay increases as the RC product increases
– As components on the chip decrease in size, the wire interconnects
become thinner, increasing resistance
– Also, the wires are closer together, increasing capacitance
• Memory latency and throughput
– Memory access speed (latency) and transfer speed (throughput) lag
processor speeds
[Figure 2.2 Processor Trends — log-scale plot, 1970–2010, of transistors (thousands), frequency (MHz), power (W), and number of cores]


Multicore
• The use of multiple processors on the same chip provides the potential to increase performance without increasing the clock rate
• Strategy is to use two simpler processors on the chip rather than one more complex processor
• With two processors, larger caches are justified
• As caches became larger, it made performance sense to create two and then three levels of cache on a chip
Many Integrated Core (MIC) and Graphics Processing Unit (GPU)

MIC
• Leap in performance as well as the challenges in developing software to exploit such a large number of cores
• The multicore and MIC strategy involves a homogeneous collection of general-purpose processors on a single chip

GPU
• Core designed to perform parallel operations on graphics data
• Traditionally found on a plug-in graphics card, it is used to encode and render 2D and 3D graphics as well as process video
• Used as vector processors for a variety of applications that require repetitive computations
Amdahl’s Law

• Amdahl’s law was first proposed by Gene Amdahl in 1967
• Deals with the potential speedup of a program using multiple processors compared to a single processor
• Illustrates the problems facing industry in the development of multicore machines
– Software must be adapted to a highly parallel execution environment to exploit the power of parallel processing
• Can be generalized to evaluate and design technical improvement in a computer system
Speedup = (time to execute program on a single processor) / (time to execute program on N parallel processors)
        = T / [T(1 − f) + Tf/N]
        = 1 / [(1 − f) + f/N]

Figure 2.3 Illustration of Amdahl’s Law

where T is the total execution time of the program using a single processor, and f is the fraction of the execution time that involves code that is infinitely parallelizable with no scheduling overhead.
[Figure 2.4 Amdahl’s Law for Multiprocessors — speedup versus number of processors, plotted for f = 0.5, 0.75, 0.90, and 0.95]
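As a minimal sketch (the function name is ours, not from the text), Amdahl's law can be coded directly. It reproduces the flattening seen in Figure 2.4: the serial fraction 1 − f caps the speedup no matter how many processors are added.

```python
def amdahl_speedup(f, n):
    """Speedup when a fraction f of execution time is perfectly
    parallelizable across n processors (Amdahl's law)."""
    return 1.0 / ((1.0 - f) + f / n)

# Even with 1024 processors, a 5% serial fraction limits speedup to ~20x:
for f in (0.5, 0.75, 0.90, 0.95):
    print(f, round(amdahl_speedup(f, 1024), 2))
```

With f = 0.95 the speedup approaches, but never exceeds, 1/(1 − f) = 20 as N grows.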


Little’s Law
• Fundamental and simple relation with broad applications
• Can be applied to almost any system that is statistically in
steady state, and in which there is no leakage
• Queuing system
– If server is idle an item is served immediately, otherwise an arriving
item joins a queue
– There can be a single queue for a single server or for multiple servers,
or multiple queues with one being for each of multiple servers
• Average number of items in a queuing system equals the
average rate at which items arrive multiplied by the time that
an item spends in the system
– Relationship requires very few assumptions
– Because of its simplicity and generality it is extremely useful
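The relationship can be checked empirically with a hedged simulation sketch: a single-server FIFO queue with assumed Poisson arrivals (rate λ = 2/s) and exponential service (rate μ = 5/s). All parameter values here are illustrative assumptions, not figures from the text.

```python
import random

# Single-server FIFO queue; Little's law predicts L = lam * W,
# where W is the mean time an item spends in the system.
random.seed(42)
lam, mu, n = 2.0, 5.0, 200_000   # arrivals/sec, services/sec, items simulated

t = 0.0            # current arrival time
server_free = 0.0  # time at which the server next becomes idle
total_time_in_system = 0.0
for _ in range(n):
    t += random.expovariate(lam)       # next arrival
    start = max(t, server_free)        # wait in queue if server is busy
    server_free = start + random.expovariate(mu)
    total_time_in_system += server_free - t

W = total_time_in_system / n   # observed mean time in system
L = lam * W                    # Little's law: mean items in system
print(round(W, 3), round(L, 3))
```

For this M/M/1 setup, queueing theory gives W = 1/(μ − λ) = 1/3 s and L = 2/3 items, which the simulated values approach.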
Performance Factors and System Attributes

                                Ic    p    m    k    τ
Instruction set architecture     X    X
Compiler technology              X    X    X
Processor implementation              X              X
Cache and memory hierarchy                      X    X

Ic: instruction count
p: the number of processor cycles needed to decode and execute the instruction
m: the number of memory references needed
k: the ratio between memory cycle time and processor cycle time
τ: cycle time (1/f)
CPI: clock cycles per instruction; for this model, CPI = p + (m × k)
T: the time needed to execute a given program, T = Ic × [p + (m × k)] × τ
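The relation above can be sketched numerically; the instruction mix, cycle counts, and 400 MHz clock below are hypothetical numbers chosen for illustration, not values from the text.

```python
# Hypothetical instruction mix: (count, decode/execute cycles p, memory refs m)
mix = [(45_000, 1, 0), (32_000, 2, 1), (15_000, 2, 2), (8_000, 2, 0)]

tau = 1 / 400e6   # cycle time for an assumed 400 MHz clock
k = 4             # assumed memory-cycle / processor-cycle ratio

Ic = sum(n for n, p, m in mix)                    # total instruction count
cycles = sum(n * (p + m * k) for n, p, m in mix)  # total clock cycles
CPI = cycles / Ic                                 # average cycles/instruction
T = Ic * CPI * tau                                # T = Ic x [p + (m x k)] x tau
print(Ic, round(CPI, 2), T)
```

Halving τ (doubling the clock rate) halves T only if CPI is unchanged, which is the balance issue the table is getting at.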
Calculating the Mean

The use of benchmarks to compare systems involves calculating the mean value of a set of data points related to execution time. The three common formulas used for calculating a mean are:

• Arithmetic
• Geometric
• Harmonic
[Chart comparing the median (MD), arithmetic mean (AM), geometric mean (GM), and harmonic mean (HM), each plotted on a 0–11 scale, for seven data sets:]

(a) Constant (11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11)
(b) Clustered around a central value (3, 5, 6, 6, 7, 7, 7, 8, 8, 9, 11)
(c) Uniform distribution (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11)
(d) Large-number bias (1, 4, 4, 7, 7, 9, 9, 10, 10, 11, 11)
(e) Small-number bias (1, 1, 2, 2, 3, 3, 5, 5, 8, 8, 11)
(f) Upper outlier (11, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1)
(g) Lower outlier (1, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11)
Arithmetic Mean
◼ An arithmetic mean (AM) is an appropriate measure if the sum of all the measurements is a meaningful and interesting value
◼ The AM is a good candidate for comparing the execution time performance of several systems

For example, suppose we were interested in using a system for large-scale simulation studies and wanted to evaluate several alternative products. On each system we could run the simulation multiple times with different input values for each run, and then take the average execution time across all runs. The use of multiple runs with different inputs should ensure that the results are not heavily biased by some unusual feature of a given input set. The AM of all the runs is a good measure of the system’s performance on simulations, and a good number to use for system comparison.

◼ The AM used for a time-based variable, such as program execution time, has the important property that it is directly proportional to the total time
◼ If the total time doubles, the mean value doubles
A Comparison of Arithmetic and Harmonic Means for Rates

                                      Time (secs)                 Rate (MFLOPS)
                                  Comp A  Comp B  Comp C    Comp A   Comp B   Comp C
Program 1 (10^8 FP ops)            2.0     1.0     0.75      50       100      133.33
Program 2 (10^8 FP ops)            0.75    2.0     4.0       133.33   50       25
Total execution time               2.75    3.0     4.75      –        –        –
Arithmetic mean of times           1.38    1.5     2.38      –        –        –
Inverse of total execution
time (1/sec)                       0.36    0.33    0.21      –        –        –
Arithmetic mean of rates           –       –       –         91.67    75.00    79.17
Harmonic mean of rates             –       –       –         72.72    66.67    42.11
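The harmonic-mean rows can be reproduced directly. The sketch below recomputes each machine's rates from the execution times in the table and shows the point the table is making: the harmonic mean of the rates (unlike the arithmetic mean) agrees, to rounding, with the true overall rate, total operations divided by total time.

```python
from statistics import harmonic_mean, mean

ops = 1e8  # each program executes 10^8 floating-point operations
times = {"A": [2.0, 0.75], "B": [1.0, 2.0], "C": [0.75, 4.0]}

summary = {}
for name, t in times.items():
    rates = [ops / s / 1e6 for s in t]            # per-program MFLOPS
    am, hm = mean(rates), harmonic_mean(rates)
    true_rate = len(t) * ops / sum(t) / 1e6       # total ops / total time
    summary[name] = (round(am, 2), round(hm, 2), round(true_rate, 2))
    print(name, summary[name])
```

The AM of rates ranks C above B (79.17 vs. 75.00) even though C is slower overall; the HM gets the ranking right.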
A Comparison of Arithmetic and Geometric Means for Normalized Results

(a) Results normalized to Computer A

                       Computer A time   Computer B time   Computer C time
Program 1              2.0 (1.0)         1.0 (0.5)         0.75 (0.38)
Program 2              0.75 (1.0)        2.0 (2.67)        4.0 (5.33)
Total execution time   2.75              3.0               4.75
Arithmetic mean of
normalized times       1.00              1.58              2.85
Geometric mean of
normalized times       1.00              1.15              1.41

(b) Results normalized to Computer B

                       Computer A time   Computer B time   Computer C time
Program 1              2.0 (2.0)         1.0 (1.0)         0.75 (0.75)
Program 2              0.75 (0.38)       2.0 (1.0)         4.0 (2.0)
Total execution time   2.75              3.0               4.75
Arithmetic mean of
normalized times       1.19              1.00              1.38
Geometric mean of
normalized times       0.87              1.00              1.22
Another Comparison of Arithmetic and Geometric Means for Normalized Results

(a) Results normalized to Computer A

                       Computer A time   Computer B time   Computer C time
Program 1              2.0 (1.0)         1.0 (0.5)         0.20 (0.1)
Program 2              0.4 (1.0)         2.0 (5.0)         4.0 (10.0)
Total execution time   2.4               3.00              4.2
Arithmetic mean of
normalized times       1.00              2.75              5.05
Geometric mean of
normalized times       1.00              1.58              1.00

(b) Results normalized to Computer B

                       Computer A time   Computer B time   Computer C time
Program 1              2.0 (2.0)         1.0 (1.0)         0.20 (0.2)
Program 2              0.4 (0.2)         2.0 (1.0)         4.0 (2.0)
Total execution time   2.4               3.0               4.2
Arithmetic mean of
normalized times       1.10              1.00              1.10
Geometric mean of
normalized times       0.63              1.00              0.63
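A short sketch can reproduce the geometric-mean rows of this second example (the helper name geo_mean is ours). It also demonstrates the property the tables illustrate: whichever machine is chosen as the normalization reference, the ratio between any two machines' GM values is unchanged, whereas the AM rows above flip their verdicts.

```python
from math import prod

# Execution times (seconds) for the two programs in the second example
times = {"A": [2.0, 0.4], "B": [1.0, 2.0], "C": [0.20, 4.0]}

def geo_mean(xs):
    """Geometric mean: nth root of the product of n values."""
    return prod(xs) ** (1 / len(xs))

gm = {}
for ref in ("A", "B"):
    # Normalize each machine's times to the reference, then take the GM
    gm[ref] = {m: geo_mean([t / r for t, r in zip(times[m], times[ref])])
               for m in times}
    print(ref, {m: round(v, 2) for m, v in gm[ref].items()})

# The C-to-B ratio is the same under either normalization:
ratio_a = gm["A"]["C"] / gm["A"]["B"]
ratio_b = gm["B"]["C"] / gm["B"]["B"]
```

This consistency is the main argument for the GM with normalized results, even though (as the text's totals show) it can hide large differences in total execution time.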
Benchmark Principles

• Desirable characteristics of a benchmark program:
1. It is written in a high-level language, making it portable across different machines
2. It is representative of a particular kind of programming domain or paradigm, such as systems programming, numerical programming, or commercial programming
3. It can be measured easily
4. It has wide distribution
System Performance Evaluation Corporation (SPEC)

• Benchmark suite
– A collection of programs, defined in a high-level language
– Together attempt to provide a representative test of a computer in a particular application or system programming area
• SPEC
– An industry consortium
– Defines and maintains the best known collection of benchmark suites aimed at evaluating computer systems
– Performance measurements are widely used for comparison and research purposes
SPEC CPU2017
• Best known SPEC benchmark suite
• Industry standard suite for processor intensive applications
• Appropriate for measuring performance for applications that
spend most of their time doing computation rather than I/O
• Consists of 20 integer benchmarks and 23 floating-point
benchmarks written in C, C++, and Fortran
• For all of the integer benchmarks and most of the floating-
point benchmarks, there are both rate and speed benchmark
programs
• The suite contains over 11 million lines of code
SPEC CPU2017 Integer Benchmarks

Rate              Speed             Language  Kloc  Application Area
500.perlbench_r   600.perlbench_s   C         363   Perl interpreter
502.gcc_r         602.gcc_s         C         1304  GNU C compiler
505.mcf_r         605.mcf_s         C         3     Route planning
520.omnetpp_r     620.omnetpp_s     C++       134   Discrete event simulation – computer network
523.xalancbmk_r   623.xalancbmk_s   C++       520   XML to HTML conversion via XSLT
525.x264_r        625.x264_s        C         96    Video compression
531.deepsjeng_r   631.deepsjeng_s   C++       10    AI: alpha-beta tree search (chess)
541.leela_r       641.leela_s       C++       21    AI: Monte Carlo tree search (Go)
548.exchange2_r   648.exchange2_s   Fortran   1     AI: recursive solution generator (Sudoku)
557.xz_r          657.xz_s          C         33    General data compression

Kloc = line count (including comments/whitespace) for source files used in a build, divided by 1000
SPEC CPU2017 Floating-Point Benchmarks

Rate              Speed             Language         Kloc  Application Area
503.bwaves_r      603.bwaves_s      Fortran          1     Explosion modeling
507.cactuBSSN_r   607.cactuBSSN_s   C++, C, Fortran  257   Physics; relativity
508.namd_r        —                 C++, C           8     Molecular dynamics
510.parest_r      —                 C++              427   Biomedical imaging; optical tomography with finite elements
511.povray_r      —                 C++              170   Ray tracing
519.lbm_r         619.lbm_s         C                1     Fluid dynamics
521.wrf_r         621.wrf_s         Fortran, C       991   Weather forecasting
526.blender_r     —                 C++              1577  3D rendering and animation
527.cam4_r        627.cam4_s        Fortran, C       407   Atmosphere modeling
—                 628.pop2_s        Fortran, C       338   Wide-scale ocean modeling (climate level)
538.imagick_r     638.imagick_s     C                259   Image manipulation
544.nab_r         644.nab_s         C                24    Molecular dynamics
549.fotonik3d_r   649.fotonik3d_s   Fortran          14    Computational electromagnetics
554.roms_r        654.roms_s        Fortran          210   Regional ocean modeling

Kloc = line count (including comments/whitespace) for source files used in a build, divided by 1000
SPEC CPU2017 Integer Benchmarks for HP Integrity Superdome X — (a) Rate Result (768 copies)

                   Base                Peak
Benchmark          Seconds   Rate      Seconds   Rate
500.perlbench_r    1141      1070      933       1310
502.gcc_r          1303      835       1276      852
505.mcf_r          1433      866       1378      901
520.omnetpp_r      1664      606       1634      617
523.xalancbmk_r    722       1120      713       1140
525.x264_r         655       2053      661       2030
531.deepsjeng_r    604       1460      597       1470
541.leela_r        892       1410      896       1420
548.exchange2_r    833       2420      770       2610
557.xz_r           870       953       863       961

© 2018 Pearson Education, Inc., Hoboken, NJ. All rights reserved.
SPEC CPU2017 Integer Benchmarks for HP Integrity Superdome X — (b) Speed Result (384 threads)

                   Base                Peak
Benchmark          Seconds   Ratio     Seconds   Ratio
600.perlbench_s    358       4.96      295       6.01
602.gcc_s          546       7.29      535       7.45
605.mcf_s          866       5.45      700       6.75
620.omnetpp_s      276       5.90      247       6.61
623.xalancbmk_s    188       7.52      179       7.91
625.x264_s         283       6.23      271       6.51
631.deepsjeng_s    407       3.52      343       4.18
641.leela_s        469       3.63      439       3.88
648.exchange2_s    329       8.93      299       9.82
657.xz_s           2164      2.86      2119      2.92
Terms Used in SPEC Documentation
• Benchmark
– A program written in a high-level language that can be compiled and executed on any computer that implements the compiler
• System under test
– This is the system to be evaluated
• Reference machine
– This is a system used by SPEC to establish a baseline performance for all benchmarks
▪ Each benchmark is run and measured on this machine to establish a reference time for that benchmark
• Base metric
– These are required for all reported results and have strict guidelines for compilation
• Peak metric
– This enables users to attempt to optimize system performance by optimizing the compiler output
• Speed metric
– This is simply a measurement of the time it takes to execute a compiled benchmark
▪ Used for comparing the ability of a computer to complete single tasks
• Rate metric
– This is a measurement of how many tasks a computer can accomplish in a certain amount of time
▪ This is called a throughput, capacity, or rate measure
▪ Allows the system under test to execute simultaneous tasks to take advantage of multiple processors
Figure 2.7 SPEC Evaluation Flowchart: for each program in the suite, run the program three times and select the median value; compute Ratio(prog) = Tref(prog)/TSUT(prog); when no programs remain, compute the geometric mean of all the ratios.
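The final step of Figure 2.7 can be sketched for the speed results reported earlier: the overall SPECspeed metric is the geometric mean of the per-benchmark ratios. The base ratios below are copied from the speed-result table for the HP Integrity Superdome X.

```python
from math import prod

# Base ratios Tref(prog)/TSUT(prog) for the ten integer speed benchmarks,
# taken from the Superdome X speed-result table above
base_ratios = [4.96, 7.29, 5.45, 5.90, 7.52, 6.23, 3.52, 3.63, 8.93, 2.86]

# Overall metric = geometric mean of the per-benchmark ratios
specspeed = prod(base_ratios) ** (1 / len(base_ratios))
print(round(specspeed, 2))
```

The geometric mean works out to about 5.31, damping the influence of both the best ratio (8.93) and the worst (2.86).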


SPECspeed2017_int_base Benchmark Results for Reference Machine (1 thread)

Benchmark          Seconds   Energy (kJ)   Average Power (W)   Maximum Power (W)
600.perlbench_s    1774      1920          1080                1090
602.gcc_s          3981      4330          1090                1110
605.mcf_s          4721      5150          1090                1120
620.omnetpp_s      1630      1770          1090                1090
623.xalancbmk_s    1417      1540          1090                1090
625.x264_s         1764      1920          1090                1100
631.deepsjeng_s    1432      1560          1090                1130
641.leela_s        1706      1850          1090                1090
648.exchange2_s    2939      3200          1080                1090
657.xz_s           6182      6730          1090                1140
