
UNIVERSITY OF MAIDUGURI

FACULTY OF SCIENCE
DEPARTMENT OF MATHEMATICAL SCIENCES
PROGRAM: COMPUTER SCIENCE

COURSE CODE: CSC 415


COURSE TITLE: Computer System Performance and Evaluation

Assignment

By

Umar Ba’abba Goni


17/08/05/018

Question:
Write notes on the following Computer Performance Measures
i. Productivity
ii. Usage Level
iii. Missionability
iv. Responsiveness

March, 2020
CSC 415: Computer System Performance and Evaluation – Assignment by 17/08/05/018, Umar Ba’abba Goni

PERFORMANCE EVALUATION OF COMPUTER SYSTEMS


PERFORMANCE MEASURES
1. Productivity
These measures indicate how effectively a user can get his or her work accomplished.
Possible measures are:
i. user friendliness,
ii. maintainability, and
iii. understandability.
These measures also reflect the throughput of the given system.
2. Usage Level
These measures are intended to evaluate how well the various components of the
system are being used. Possible measures are throughput and utilization of various
resources. Usage level also indicates the degree to which the system is loaded: for example,
is the system 50 percent loaded or 100 percent saturated?
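As an illustrative sketch (not part of the original assignment), utilization and throughput over an observation interval can be computed directly from measured busy time and job counts; the helper names below are invented for the example.

```python
def utilization(busy_time: float, total_time: float) -> float:
    """Fraction of the observation interval the resource was busy."""
    if total_time <= 0:
        raise ValueError("total_time must be positive")
    return busy_time / total_time


def throughput(jobs_completed: int, total_time: float) -> float:
    """Jobs completed per unit time over the observation interval."""
    return jobs_completed / total_time


# A CPU busy for 30 s of a 60 s window is 50 percent loaded.
print(utilization(30.0, 60.0))  # 0.5
print(throughput(120, 60.0))    # 2.0 jobs per second
```

A utilization near 1.0 corresponds to the saturated case mentioned above.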
3. Missionability
These measures indicate whether the system would remain continuously operational for
the duration of a mission. Possible measures are the distribution of the work accomplished during
the mission time, interval availability (probability that the system will keep performing
satisfactorily throughout the mission time), and the life-time (time when the probability of
unacceptable behavior increases beyond some threshold). It also refers to the system's
ability to perform as it was intended for the duration demanded. For example, a spaceship
must be highly missionable. Dependability is related to the last measure but indicates the
system's ability to resist failure or to stay operational.
4. Responsiveness
These measures are intended to evaluate how quickly a given task can be
accomplished by the system. Possible measures are waiting time, queue length, etc. It
indicates the system's ability to be provided commands and to deliver answers within a
reasonable time period.
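As a hedged illustration, the waiting-time measure mentioned above can be derived from per-job timestamps; the timestamp values below are made up purely for the example.

```python
# Each tuple is (arrival, service_start, departure) for one job;
# the values are invented for illustration.
jobs = [
    (0.0, 0.0, 2.0),
    (1.0, 2.0, 3.5),
    (2.5, 3.5, 4.0),
]

waits = [start - arrival for arrival, start, _ in jobs]
responses = [departure - arrival for arrival, _, departure in jobs]

mean_wait = sum(waits) / len(waits)              # time spent queued
mean_response = sum(responses) / len(responses)  # queueing plus service
print(mean_wait, mean_response)
```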
Application Domains
i. General purpose computing: These systems are designed for general-purpose
problem solving. Relevant measures are responsiveness, usage level, and
productivity. Dependability requirements are modest, especially for benign failures.


ii. High availability: Such systems are designed for transaction processing
environments (banks, airlines, or telephone databases, switching systems, etc.). The
most important measures are responsiveness and dependability. Both of these
requirements are more severe than for general purpose computing systems.
Productivity is also an important factor.
iii. Real-time control: Such systems must respond to both periodic and randomly
occurring events within some (possibly hard) timing constraints. They require high
levels of responsiveness and dependability for most workloads and failure types and
are therefore significantly over-designed. Note that utilization and throughput
play little role in such systems.
iv. Mission oriented: These systems require high levels of reliability over a short
period, called the mission time. Little or no repair / tuning is possible during the
mission. Such systems include fly-by-wire airplanes, battlefield systems, and
spacecraft. Responsiveness is also important, but usually not difficult to achieve.
Such systems may try to achieve high reliability during short term at the expense of
poor reliability beyond mission period.
v. Long-life: Systems like the ones used in unmanned spaceships need long life without
provision for manual diagnostics and repairs. Thus, in addition to being highly
dependable, they should have considerable intelligence built in to do diagnostics and
repair either automatically or by remote control from a ground station.
Responsiveness is important but not difficult to achieve.
Techniques for Performance Evaluation
i. Measurement: Measurement is the most fundamental technique and is needed even
in analysis and simulation to calibrate the models. Some measurements are best done
in hardware, some in software, and some in hybrid manner.
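A minimal software-measurement sketch, assuming Python's standard `time.perf_counter` clock; the `measure` helper is a hypothetical name, and repeating the timing smooths out transient effects.

```python
import time

def measure(fn, *args, repeats: int = 5) -> float:
    """Time a function call, keeping the best of several repeats."""
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - t0)
    return best

elapsed = measure(sorted, list(range(10_000)))
print(f"sorted 10,000 ints in {elapsed:.6f} s")
```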
ii. Simulation Modeling: Simulation involves constructing a model for the behavior of
the system and driving it with an appropriate abstraction of the workload. The major
advantage of simulation is its generality and flexibility; almost any behavior can be
easily simulated.
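To make the idea concrete, here is a minimal sketch of a simulation model, assuming a single-server queue with exponential interarrival and service times (an M/M/1 queue); the function and parameter names are invented for the example.

```python
import random

def simulate_queue(arrival_rate, service_rate, n_jobs, seed=1):
    """Drive a single-server FIFO queue with exponentially
    distributed interarrival and service times."""
    rng = random.Random(seed)
    clock = 0.0        # arrival time of the current job
    server_free = 0.0  # instant the server next becomes idle
    total_response = 0.0
    for _ in range(n_jobs):
        clock += rng.expovariate(arrival_rate)
        start = max(clock, server_free)        # wait if server busy
        server_free = start + rng.expovariate(service_rate)
        total_response += server_free - clock  # queueing plus service
    return total_response / n_jobs

# With arrival rate 0.5 and service rate 1.0, queueing theory
# predicts a mean response time of about 2.0.
print(simulate_queue(0.5, 1.0, n_jobs=100_000))
```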
Both measurement and simulation involve careful experiment design, data
gathering, and data analysis. These steps could be tedious; moreover, the final results
obtained from the data analysis only characterize the system behavior for the range of input
parameters covered. Although extrapolation can be used to obtain results for nearby
parameter values, it is not possible to ask “what if” questions for arbitrary values.
iii. Analytic Modeling: Analytic modeling involves constructing a mathematical model
of the system behavior (at the desired level of detail) and solving it. The main


difficulty here is that the domain of tractable models is rather limited. Thus, analytic
modeling will fail if the objective is to study the behavior in great detail. However,
for an overall behavior characterization, analytic modeling is an excellent tool.
The major advantages of analytic modeling over the other two techniques are:
- It generates good insight into the workings of the system that is valuable even if the
model is too difficult to solve.
- Simple analytic models can usually be solved easily, yet provide surprisingly accurate
results.
- Results from analysis have better predictive value than those obtained from
measurement or simulation.
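As an example of an analytically tractable model, the classic M/M/1 single-server queue has closed-form results; the sketch below encodes the standard formulas (lam is the arrival rate, mu the service rate, with lam < mu for stability).

```python
def mm1_metrics(lam: float, mu: float):
    """Closed-form M/M/1 results: utilization, mean number in
    system, and mean response time."""
    if lam >= mu:
        raise ValueError("queue is unstable when lam >= mu")
    rho = lam / mu                   # server utilization
    mean_jobs = rho / (1 - rho)      # mean number in system
    mean_response = 1 / (mu - lam)   # mean response time
    return rho, mean_jobs, mean_response

print(mm1_metrics(0.5, 1.0))  # (0.5, 1.0, 2.0)
```

A model like this answers "what if" questions instantly for any parameter values, which is exactly the advantage over measurement and simulation noted above.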
iv. Hybrid Modeling: A complex model may consist of several sub-models, each
representing a certain aspect of the system. Only some of these sub-models may be
analytically tractable; the others must be simulated.
Applications of Performance Evaluation
System Design
In designing a new system, one typically starts out with certain
performance/reliability objectives and a basic system architecture, and then decides how
to choose various parameters to achieve the objectives. This involves constructing a model
of the system behavior at the appropriate level of detail, and evaluating it to choose the
parameters. At higher levels of design, simple analytic reasoning may be adequate to
eliminate bad choices, but simulation becomes an indispensable tool for making detailed
design decisions and avoiding costly mistakes.
System Selection
Here the problem is to select the “best” system from among a group of systems that
are under consideration for reasons of cost, availability, compatibility, etc.
Although direct measurement is the ideal technique to use here, there might be practical
difficulties in doing so (e.g., not being able to use them under realistic workloads, or not
having the system available locally). Therefore, it may be necessary to make projections
based on available data and some simple modeling.
System Upgrade
This involves replacing either the entire system or parts thereof with a newer but
compatible unit. The compatibility and cost considerations may dictate the vendor, so the
only remaining problem is to choose quantity, speed, and the like.


Often, analytic modeling is adequate here; however, in large systems involving complex
interactions between subsystems, simulation modeling may be essential. Note that direct
experimentation would require installing the new unit first, and thus is not practical.
System Tuning
The purpose of tune-up is to optimize the performance by appropriately changing
the various resource management policies. It is necessary to decide which parameters to
consider changing and how to change them to get maximum potential benefit.
Direct experimentation is the simplest technique to use here, but may not be feasible
in a production environment. Since the tuning often involves changes to aspects that cannot
be easily represented in analytic models, simulation is indispensable in this application.
System Analysis
Suppose that we find a system to be unacceptably sluggish. The reason could be
either inadequate hardware resources or poor system management. In the former case, we
need system upgrade, and in the latter, a system tune-up. Nevertheless, the first task is to
determine which of the two cases applies. This involves monitoring the system and
examining the behavior of various resource management policies under different loading
conditions.
Experimentation coupled with simple analytic reasoning is usually adequate to
identify the trouble spots; however, in some cases, complex interactions may make a
simulation study essential.
System Workload
The workload of a system refers to a set of inputs generated by the environment in
which the system is used, e.g., the inter-arrival times and service demands of incoming
jobs, and is usually not under the control of the system designer/administrator. These
inputs can be used for driving the real system (as in measurement) or its simulation model,
and for determining distributions for analytic/simulation modeling.
Workload Characterization
Workload characterization is one of the central issues in performance evaluation
because it is not always clear what aspects of the workload are important, in how much
detail the workload should be recorded, and how the workload should be represented and
used.


Workload Model
Workload characterization only builds a model of the real workload, since not every
aspect of the real workload may be captured or is relevant. A workload model may be
executable or non-executable. For example, recording the arrival instants and service
durations of jobs creates an executable model, whereas only determining the distributions
creates a non-executable model.
An executable model need not be a record of inputs; it can also be a program that
generates the inputs.
Executable workloads are useful in direct measurements and trace-driven
simulations, whereas non-executable workloads are useful for analytic modeling and
distribution-driven simulations.
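The executable/non-executable distinction can be sketched as follows; the dictionary layout and function names are invented for illustration.

```python
import random

# Non-executable model: only the fitted distributions are kept.
non_executable = {
    "interarrival": ("exponential", 0.5),  # rate per time unit
    "service": ("exponential", 1.0),
}

# Executable model: a program that generates concrete inputs,
# suitable for driving a real system or a trace-driven simulation.
def generate_trace(n_jobs, rate_in, rate_srv, seed=0):
    rng = random.Random(seed)
    clock = 0.0
    trace = []
    for _ in range(n_jobs):
        clock += rng.expovariate(rate_in)
        trace.append((clock, rng.expovariate(rate_srv)))
    return trace  # list of (arrival_time, service_demand) pairs

trace = generate_trace(5, rate_in=0.5, rate_srv=1.0)
print(trace)
```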

