Computer Performance Evaluation
FACULTY OF SCIENCE
DEPARTMENT OF MATHEMATICAL SCIENCES
PROGRAM: COMPUTER SCIENCE
COURSE: CSC 415 (COMPUTER SYSTEM PERFORMANCE AND EVALUATION)
Assignment
By
Umar Ba’abba Goni (17/08/05/018)
Question:
Write notes on the following Computer Performance Measures
i. Productivity
ii. Usage Level
iii. Missionability
iv. Responsiveness
March, 2020
ii. High availability: Such systems are designed for transaction processing
environments (banks, airlines, telephone databases, switching systems, etc.). The
most important measures are responsiveness and dependability; both of these
requirements are more severe than for general-purpose computing systems.
Productivity is also an important factor.
iii. Real-time control: Such systems must respond to both periodic and randomly
occurring events within some (possibly hard) timing constraints. They require high
levels of responsiveness and dependability for most workloads and failure types and
are therefore significantly over-designed. Note that utilization and throughput
play little role in such systems.
iv. Mission oriented: These systems require high levels of reliability over a short
period, called the mission time. Little or no repair/tuning is possible during the
mission. Such systems include fly-by-wire airplanes, battlefield systems, and
spacecraft. Responsiveness is also important, but usually not difficult to achieve.
Such systems may try to achieve high reliability over the short mission period at the
expense of poor reliability beyond it.
v. Long-life: Systems like the ones used in unmanned spaceships need long life without
provision for manual diagnostics and repairs. Thus, in addition to being highly
dependable, they should have considerable intelligence built in to do diagnostics and
repair either automatically or by remote control from a ground station.
Responsiveness is important but not difficult to achieve.
Techniques for Performance Evaluation
i. Measurement: Measurement is the most fundamental technique and is needed even
in analysis and simulation to calibrate the models. Some measurements are best done
in hardware, some in software, and some in a hybrid manner.
ii. Simulation Modeling: Simulation involves constructing a model for the behavior of
the system and driving it with an appropriate abstraction of the workload. The major
advantage of simulation is its generality and flexibility; almost any behavior can be
easily simulated.
Both measurement and simulation involve careful experiment design, data
gathering, and data analysis. These steps could be tedious; moreover, the final results
obtained from the data analysis only characterize the system behavior for the range of input
parameters covered. Although interpolation and extrapolation can be used to obtain
results for nearby parameter values, it is not possible to ask “what if” questions for
arbitrary values.
iii. Analytic Modeling: Analytic modeling involves constructing a mathematical model
of the system behavior (at the desired level of detail) and solving it. The main
difficulty here is that the domain of tractable models is rather limited. Thus, analytic
modeling will fail if the objective is to study the behavior in great detail. However,
for an overall behavior characterization, analytic modeling is an excellent tool.
The major advantages of analytic modeling over the other two techniques are:
- It generates good insight into the workings of the system that is valuable even if the
model is too difficult to solve.
- Simple analytic models can usually be solved easily, yet provide surprisingly accurate
results.
- Results from analysis have better predictive value than those obtained from
measurement or simulation.
iv. Hybrid Modeling: A complex model may consist of several sub-models, each
representing a certain aspect of the system. Only some of these sub-models may be
analytically tractable; the others must be simulated.
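As a concrete (and deliberately simplified) illustration of how the simulation and analytic
approaches above relate, the Python sketch below simulates a single-server queue with
Poisson arrivals and exponential service (an M/M/1 queue, chosen only as an example) and
compares the simulated mean response time with the textbook analytic result
E[T] = 1/(mu - lambda). The arrival rate, service rate, and job count are assumed values,
not taken from any real system.

import random

def simulate_mm1(arrival_rate, service_rate, num_jobs, seed=1):
    """Simulate an M/M/1 queue and return the mean response time.

    Uses the Lindley recursion: a job's waiting time equals the previous
    job's waiting time plus its service time minus the inter-arrival gap,
    floored at zero.
    """
    rng = random.Random(seed)
    wait = 0.0                       # waiting time of the current job
    total_response = 0.0
    for _ in range(num_jobs):
        inter_arrival = rng.expovariate(arrival_rate)    # gap to the next arrival
        service = rng.expovariate(service_rate)
        total_response += wait + service                 # response = wait + service
        wait = max(0.0, wait + service - inter_arrival)  # Lindley recursion
    return total_response / num_jobs

if __name__ == "__main__":
    lam, mu = 0.8, 1.0               # assumed arrival and service rates
    simulated = simulate_mm1(lam, mu, num_jobs=200000)
    analytic = 1.0 / (mu - lam)      # M/M/1 mean response time, E[T] = 1/(mu - lam)
    print(f"simulated mean response time: {simulated:.3f}")
    print(f"analytic mean response time:  {analytic:.3f}")

With the assumed rates the two estimates agree closely; this kind of cross-check between a
simulated and an analytically solved sub-model is exactly what a hybrid study relies on when
only part of the overall model is tractable.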
Applications of Performance Evaluation
System Design
In designing a new system, one typically starts out with certain
performance/reliability objectives and a basic system architecture, and then decides how
to choose various parameters to achieve the objectives. This involves constructing a model
of the system behavior at the appropriate level of detail, and evaluating it to choose the
parameters. At higher levels of design, simple analytic reasoning may be adequate to
eliminate bad choices, but simulation becomes an indispensable tool for making detailed
design decisions and avoiding costly mistakes.
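As a minimal sketch of such design-time reasoning, assuming (purely for illustration) that a
subsystem can be approximated as an M/M/1 queue with a made-up arrival rate and
response-time objective, the calculation below inverts the formula E[T] = 1/(mu - lambda)
to find the smallest service rate that meets the objective.

def min_service_rate(arrival_rate, target_response_time):
    """Smallest M/M/1 service rate mu with 1/(mu - lambda) <= target.

    Rearranging E[T] = 1/(mu - lambda) <= T_target gives
    mu >= lambda + 1/T_target.
    """
    return arrival_rate + 1.0 / target_response_time

lam = 40.0       # assumed arrival rate (jobs per second)
target = 0.05    # assumed response-time objective (seconds)
mu = min_service_rate(lam, target)
print(f"required service rate: {mu:.1f} jobs/second (utilization {lam / mu:.2f})")

Back-of-the-envelope reasoning like this can eliminate clearly inadequate design choices
before a detailed simulation is built.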
System Selection
Here the problem is to select the “best” system from among a group of systems that
are under consideration for reasons of cost, availability, compatibility, etc.
Although direct measurement is the ideal technique to use here, there might be practical
difficulties in doing so (e.g., not being able to use them under realistic workloads, or not
having the system available locally). Therefore, it may be necessary to make projections
based on available data and some simple modeling.
System Upgrade
This involves replacing either the entire system or parts thereof with a newer but
compatible unit. The compatibility and cost considerations may dictate the vendor, so the
only remaining problem is to choose quantity, speed, and the like.
Often, analytic modeling is adequate here; however, in large systems involving complex
interactions between subsystems, simulation modeling may be essential. Note that direct
experimentation would require installing the new unit first and is thus not practical.
System Tuning
The purpose of a tune-up is to optimize performance by appropriately changing
the various resource management policies. It is necessary to decide which parameters to
consider changing and how to change them to get maximum potential benefit.
Direct experimentation is the simplest technique to use here, but may not be feasible
in a production environment. Since the tuning often involves changes to aspects that cannot
be easily represented in analytic models, simulation is indispensable in this application.
System Analysis
Suppose that we find a system to be unacceptably sluggish. The reason could be
either inadequate hardware resources or poor system management. In the former case, we
need a system upgrade; in the latter, a system tune-up. In either case, the first task is to
determine which of the two cases applies. This involves monitoring the system and
examining the behavior of various resource management policies under different loading
conditions.
Experimentation coupled with simple analytic reasoning is usually adequate to
identify the trouble spots; however, in some cases, complex interactions may make a
simulation study essential.
System Workload
The workload of a system is the set of inputs generated by the environment in
which the system is used, e.g., the inter-arrival times and service demands of incoming
jobs; these inputs are usually not under the control of the system designer/administrator.
They can be used to drive the real system (as in measurement) or its simulation model,
and to determine distributions for analytic or simulation modeling.
Workload Characterization
Workload characterization is one of the central issues in performance evaluation
because it is not always clear what aspects of the workload are important, in how much
detail the workload should be recorded, and how the workload should be represented and
used.
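A very simple form of workload characterization is to reduce a measured trace to a few
summary statistics or fitted distribution parameters. In the sketch below the trace values are
invented for illustration; a real study would also have to check how well the assumed
distributions actually fit the data.

from statistics import mean, stdev

# Hypothetical measured trace (in seconds): inter-arrival times and service demands.
inter_arrivals = [0.9, 1.4, 0.3, 2.1, 0.7, 1.1, 0.5, 1.8, 0.6, 1.2]
service_times = [0.4, 0.6, 0.3, 0.8, 0.5, 0.7, 0.2, 0.6, 0.4, 0.5]

# For an exponential fit, the rate is simply the reciprocal of the sample mean.
arrival_rate = 1.0 / mean(inter_arrivals)
service_rate = 1.0 / mean(service_times)
print(f"estimated arrival rate: {arrival_rate:.2f} jobs/second")
print(f"estimated service rate: {service_rate:.2f} jobs/second")

# The coefficient of variation (standard deviation / mean) hints at whether an
# exponential assumption (CV close to 1) is even plausible for these data.
print(f"inter-arrival coefficient of variation: "
      f"{stdev(inter_arrivals) / mean(inter_arrivals):.2f}")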
Workload Model
Workload characterization only builds a model of the real workload, since not every
aspect of the real workload can be captured or is relevant. A workload model may be
executable or non-executable. For example, recording the arrival instants and service
durations of jobs creates an executable model, whereas only determining the distributions
creates a non-executable model.
An executable model need not be a record of inputs; it can also be a program that
generates the inputs.
Executable workloads are useful in direct measurements and trace-driven
simulations, whereas non-executable workloads are useful for analytic modeling and
distribution-driven simulations.
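The distinction can be illustrated with a short Python sketch in which the parameter values
are assumptions rather than measurements: the dictionary below is a non-executable
workload model (just fitted distribution parameters), while the generator function is an
executable model, a program that produces the arrival instants and service durations
themselves and could drive a trace-driven simulation or a measurement run.

import random

# Non-executable workload model: only the fitted distribution parameters.
WORKLOAD_SPEC = {"arrival_rate": 0.8, "mean_service_time": 0.5}  # assumed values

def generate_trace(spec, num_jobs, seed=7):
    """Executable workload model: a program that generates the inputs.

    Yields (arrival_instant, service_duration) pairs with exponential
    inter-arrival and service times drawn from the spec.
    """
    rng = random.Random(seed)
    clock = 0.0
    for _ in range(num_jobs):
        clock += rng.expovariate(spec["arrival_rate"])
        yield clock, rng.expovariate(1.0 / spec["mean_service_time"])

for arrival, service in generate_trace(WORKLOAD_SPEC, num_jobs=5):
    print(f"arrival at {arrival:6.2f} s, service demand {service:4.2f} s")

The same specification can thus back either form: kept as distributions it feeds analytic or
distribution-driven models, and run through the generator it yields an executable trace.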