
Assignment 1.2
CPE 310 - Computer Architecture and Organization

2.1 List and briefly define some of the techniques used in contemporary
processors to increase speed.

- Pipelining - The processor moves instructions through a notional pipe, with all
stages of the pipe active at the same time, so that several instructions are in
different stages of execution simultaneously.

- Branch Prediction - The processor looks ahead in the instruction code fetched
from memory and predicts which branch, or group of instructions, is likely to be
processed next.

- Superscalar Execution - The ability to issue more than one instruction in a
single clock cycle (multiple parallel pipelines are used).

- Data Flow Analysis - To produce an optimized schedule of instructions, the
processor analyzes which instructions depend on the results, or data, of other
instructions.

- Speculative Execution - Using branch prediction and data flow analysis, the
processor speculatively executes instructions ahead of their actual appearance in
the program execution, holding the results in temporary locations.

2.2 Explain the concept of performance balance.

- Adjusting the organization and architecture to compensate for the mismatch
among the capabilities of the various components.

2.3 Explain the differences among multicore systems, MICs, and GPGPUs.

- Multicore Systems & Many Integrated Cores (MICs) are Multiple processors are
combined on a single chip.

- GPGPUs (General Purpose GPUs) take advantage of how GPUs operate and use
them to enable general-purpose processors.

2.4 Briefly characterize Amdahl’s law.

- Deals with the potential speedup of a program when using multiple processors
compared to a single processor.

 Speedup = (time to execute the program on a single processor) / (time to
execute the program on N parallel processors)

 If f is the fraction of the code that can be parallelized with no scheduling
overhead, and N is the number of processors:

 Speedup = 1 / (1 - f(1 - 1/N)) = 1 / ((1 - f) + f/N)

 Shows that software must be adapted to parallel execution to exploit the full
power of parallel processing.

 Adding more cores eventually yields little additional speedup (see the sketch
below).
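
A minimal sketch of this formula in Python; the parallelizable fraction f = 0.9
is an assumed example value, not taken from the text:

def amdahl_speedup(f: float, n: int) -> float:
    """Amdahl's law: speedup = 1 / ((1 - f) + f / n),
    where f is the parallelizable fraction and n is the number of processors."""
    return 1.0 / ((1.0 - f) + f / n)

# Assumed example: 90% of the program can run in parallel.
f = 0.9
for n in (1, 2, 4, 8, 16, 64, 1024):
    print(f"N = {n:5d}  speedup = {amdahl_speedup(f, n):.2f}")

# Even with 1024 cores the speedup only approaches the limit 1 / (1 - f) = 10,
# which is why adding more cores eventually gives diminishing returns.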


2.5 Briefly characterize Little’s law.
- A fundamental and simple relation with a broad range of applications.

 Average number of items in a queuing system = (average rate at which items
arrive) x (average time an item spends in the system); with the usual symbols,
L = λW.

 It can be applied to almost any system that is statistically in steady state
and in which there is no leakage.

 It uses queuing-theory terminology and is applied to queuing systems.

 The server is the central element of a queuing system; it provides service to
the items that require it.

 If the server is idle, an arriving item is served immediately; if the server
is busy, the item waits in a queue.

 The queue arrangement varies with the system: there may be a single queue for
one server or for multiple servers.

 Once an item has been served, it departs the system (a small numeric check of
the relation appears below).
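
A minimal numeric check of the relation in Python, using assumed example values
(8 items/s arrival rate, 0.25 s average time in the system):

# Little's law: L = lambda * W
# (average items in the system = arrival rate * average time an item spends there)

arrival_rate = 8.0      # assumed example: items arriving per second (lambda)
time_in_system = 0.25   # assumed example: average seconds an item spends in the system (W)

avg_items_in_system = arrival_rate * time_in_system  # L
print(f"L = {arrival_rate} items/s * {time_in_system} s = {avg_items_in_system} items")
# -> on average 2 items are in the system, either waiting in the queue or being served.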



2.6 Define MIPS and FLOPS.

- The rate at which instructions are executed, measured in millions of
instructions per second (MIPS), is a common metric of processor performance.

MIPS rate = (instruction count) / (total execution time x 10^6)
          = (constant clock frequency) / (average cycles per instruction x 10^6)

- Another common performance metric, which considers only floating-point
instructions, is millions of floating-point operations per second (MFLOPS).

MFLOPS rate = (number of floating-point operations executed in a program) /
(execution time x 10^6)
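
A minimal sketch of both calculations in Python; the instruction counts and
execution time are assumed example values:

# Assumed example values, not taken from the text.
instruction_count = 2_000_000   # instructions executed by the program
fp_operations = 400_000         # floating-point operations executed
execution_time_s = 0.5          # total execution time in seconds

mips = instruction_count / (execution_time_s * 1e6)
mflops = fp_operations / (execution_time_s * 1e6)

print(f"MIPS rate   = {mips:.2f}")    # 2,000,000 / (0.5 * 10^6) = 4.00
print(f"MFLOPS rate = {mflops:.2f}")  # 400,000   / (0.5 * 10^6) = 0.80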
2.7 List and define three methods for calculating a mean value of a set of
data values.

- Arithmetic mean (AM)
 The AM is an appropriate measure if the sum of all the measurements is a
meaningful and interesting value.

 The AM is a good candidate for comparing the execution times of different
systems.

 When used for a time-based variable such as program execution time, the AM has
the important property of being directly proportional to the total time: if the
total time doubles, the mean value doubles as well.

- Geometric mean (GM)
 When comparing the relative performance of machines, the GM gives consistent
results regardless of which system is chosen as the reference.

- Harmonic mean (HM)
 The HM has the desirable property of being inversely proportional to the total
execution time (a small worked example follows this list).
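
A minimal sketch of the three means in Python, using assumed example execution
times of one benchmark on three systems:

import math

# Assumed example: execution times (seconds) of one benchmark on three systems.
times = [2.0, 4.0, 8.0]

am = sum(times) / len(times)                    # arithmetic mean
gm = math.prod(times) ** (1.0 / len(times))     # geometric mean
hm = len(times) / sum(1.0 / t for t in times)   # harmonic mean

print(f"AM = {am:.3f}  GM = {gm:.3f}  HM = {hm:.3f}")
# AM tracks total execution time, GM gives reference-independent comparisons of
# ratios, and HM is the appropriate mean for rates such as MIPS or MFLOPS.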

2.8 List the desirable characteristics of a benchmark program.


 It is written in a high-level language, making it portable across different
machines.

 It is representative of a particular kind of programming domain or paradigm,
such as systems, numerical, or commercial programming.

 It can be measured easily.

 It has wide distribution.


2.9 What are the SPEC benchmarks?

- The Standard Performance Evaluation Corporation (SPEC) maintains a collection
of benchmark suites: sets of programs, written in a high-level language, that
together attempt to provide a representative test of a computer in a particular
application or system programming area.

2.10 What are the differences among base metric, peak metric, speed
metric, and rate metric?

 Base metric - Required for all reported results; the results must be produced
under stringent compilation guidelines.

 Peak metric - Allows users to attempt to optimize system performance by tuning
the compiler output.

 Speed metric - Simply a measurement of how long it takes to run a compiled
benchmark; it is used to compare a computer's ability to complete individual
tasks.

 Rate metric - A measurement of how many tasks a computer can accomplish in a
given amount of time; it characterizes the throughput or capacity of the machine.
The rate metric allows the system under test to run multiple tasks at the same
time in order to take advantage of multiple processors (a sketch of both metrics
follows).
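
A rough sketch of how speed and rate metrics can be formed from run times,
loosely following SPEC-style conventions; the reference times, measured times,
and copy count are assumed example values, and the exact SPEC rules differ in
detail:

import math

reference_times = [500.0, 800.0]  # reference-machine run times (s), one per benchmark
measured_times = [125.0, 160.0]   # run times (s) on the system under test
copies = 4                        # copies run concurrently for the rate metric

# Speed metric: one copy per benchmark; each ratio is reference time / measured time.
speed_ratios = [ref / t for ref, t in zip(reference_times, measured_times)]

# Rate metric: several copies run at once, so throughput scales with the copy count.
rate_ratios = [copies * ref / t for ref, t in zip(reference_times, measured_times)]

# An overall figure is commonly the geometric mean of the per-benchmark ratios.
speed_metric = math.prod(speed_ratios) ** (1 / len(speed_ratios))
rate_metric = math.prod(rate_ratios) ** (1 / len(rate_ratios))
print(f"speed metric = {speed_metric:.2f}, rate metric = {rate_metric:.2f}")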
