Understanding Software Risk Management

 A risk is a potential problem – it might happen and it might not.
 Conceptual definition of risk:
 Risk concerns future happenings.
 Risk involves change in mind, opinion, actions, places, etc.
 Two characteristics of risk:
 Uncertainty – the risk may or may not happen; that is, there are no 100% certain risks.
 Loss – if the risk becomes a reality, unwanted consequences or losses occur.
 Reactive Versus Proactive Risk Strategies
 Reactive risk strategies:
 The majority of software teams rely solely on reactive risk strategies.
 "Don't worry, I'll think of something!" – never worrying about problems until they happen.
 The team flies into action in an attempt to correct the problem rapidly. This is often called fire-fighting mode.
 Proactive strategy:
 A proactive strategy begins long before technical work is initiated.
 Potential risks are identified, their probability and impact are assessed, and they are ranked by importance.
 The software team establishes a plan for managing risk.
 Software Risks
 Risk Categorization:
 Project risks threaten the project plan.
 Project risks identify potential budgetary, schedule, personnel (staffing and organization), resource, stakeholder, and requirements problems and their impact on a software project.
 Technical risks threaten the quality and timeliness of the software to be produced.
 Technical risks identify potential design, implementation, interface, verification, and maintenance problems.
 Business risks threaten the viability of the software to be built and often jeopardize the project or the product.
 Sub-categories of business risks:
 Market risk – building an excellent product or system that no one really wants
 Strategic risk – building a product that no longer fits into the overall business strategy for the company
 Sales risk – building a product that the sales force doesn't understand how to sell
 Management risk – losing the support of senior management due to a change in focus or a change in people
 Budget risk – losing budgetary or personnel commitment
 Risk Identification
 Risk identification is a systematic attempt to specify threats to the project plan.
 By identifying known and predictable risks, the project manager takes a first step toward avoiding them when possible and controlling them when necessary.
 Generic risks
 Risks that are a potential threat to every software project
 Product-specific risks
 Risks that can be identified only by those with a clear understanding of the technology, the people, and the environment specific to the software that is to be built.
 This requires examination of the project plan and the statement of scope.
 "What special characteristics of this product may threaten our project plan?"
 One method for identifying risks is to create a risk item checklist.
 The checklist focuses on known and predictable risks in specific subcategories.

 The checklist covers the following generic subcategories:
 Product size – risks associated with the overall size of the software to be built
 Business impact – risks associated with constraints imposed by management or the marketplace
 Customer characteristics – risks associated with the sophistication of the customer and the developer's ability to communicate with the customer in a timely manner
 Process definition – risks associated with the degree to which the software process has been defined and is followed
 Development environment – risks associated with the availability and quality of the tools to be used to build the product
 Technology to be built – risks associated with the complexity of the system to be built and the "newness" of the technology in the system
 Staff size and experience – risks associated with the overall technical and project experience of the software engineers who will do the work

 Questionnaire on Project Risk:
1) Have top software and customer managers formally
committed to support the project?
2) Are end-users actively committed to the project and the
system/product to be built?
3) Are requirements fully understood by the software
engineering team and its customers?
4) Have customers been involved fully in the definition
of requirements?
5) Do end-users have realistic expectations?
6) Is the project scope stable?
7) Does the software engineering team have the right
mix of skills?
8) Are project requirements stable?
9) Does the project team have experience with the
technology to be implemented?
10) Is the number of people on the project team
adequate to do the job?
11) Do all customer/user constituencies agree on the
importance of the project and on the requirements
for the system/product to be built?
 Risk Components and Drivers
 The project manager identifies the risk drivers that affect the following risk components:
 Performance risk – the degree of uncertainty that the product will meet its requirements and be fit for its intended use
 Cost risk – the degree of uncertainty that the project budget will be maintained
 Support risk – the degree of uncertainty that the resultant software will be easy to correct, adapt, and enhance
 Schedule risk – the degree of uncertainty that the project schedule will be maintained and that the product will be delivered on time
 The impact of each risk driver on a risk component is rated at one of four impact levels:
 Negligible, marginal, critical, and catastrophic
 Risk Projection
 Risk projection (or estimation) attempts to rate each risk in two ways:
 The probability that the risk is real
 The consequences of the problems associated with the risk, should it occur
 The project planner, managers, and technical staff perform four risk projection steps:
1) Establish a scale that reflects the perceived likelihood of a risk (e.g., 1-low, 10-high)
2) Define the consequences of the risk
3) Estimate the impact of the risk on the project and product
4) Note the overall accuracy of the risk projection so that there will be no misunderstandings
 The intent of these steps is to consider risks in a manner that leads to prioritization.
 By prioritizing risks, the software team can allocate limited resources where they will have the most impact.

 Developing a Risk Table
 A risk table provides a project manager with a simple technique for risk projection.
 It consists of five columns:
 Risk Summary – short description of the risk
 Risk Category – one of the seven risk categories
 Probability – estimation of risk occurrence based on group input
 Impact – (1) catastrophic (2) critical (3) marginal (4) negligible
 RMMM – pointer to a paragraph in the Risk Mitigation, Monitoring, and Management Plan
 Steps in developing the risk table:
 List all risks in the first column (with the help of the risk item checklists)
 Mark the category of each risk
 Estimate the probability of each risk occurring
 Assess the impact of each risk based on an averaging of the four risk components to determine an overall impact value
 Sort the rows by probability and impact in descending order
 Draw a horizontal cutoff line in the table that indicates the risks that will be given further attention
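The table-building steps above can be sketched in a few lines of code. The risk summaries, probabilities, and cutoff value below are invented for illustration; they are not from these notes.

```python
# Minimal risk-table sketch with invented example risks.
# Impact uses the scale from the notes: 1-catastrophic .. 4-negligible.
risks = [
    # (summary, category, probability %, impact)
    ("Size estimate may be significantly low", "Product size", 60, 2),
    ("Larger number of users than planned", "Business impact", 30, 3),
    ("Staff inexperienced with technology", "Staff size and experience", 40, 2),
    ("Funding will be lost", "Business impact", 40, 1),
]

# Sort by descending probability, then by impact (1 = catastrophic first).
table = sorted(risks, key=lambda r: (-r[2], r[3]))

CUTOFF = 40  # management decides; risks below the line get less attention
for summary, category, prob, impact in table:
    flag = "MANAGE" if prob >= CUTOFF else "watch"
    print(f"{prob:3d}%  impact={impact}  [{flag}] {summary}")
```

Sorting by probability and impact, then drawing the cutoff, reproduces the "horizontal cutoff line" idea: everything above the line gets an RMMM entry.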
 Assessing Risk Impact
 Three factors affect the consequences that are likely if a risk does occur:
 Its nature – the problems that are likely if the risk occurs
 Its scope – combines the severity of the risk (how serious it is) with its overall distribution (how much of the project will be affected)
 Its timing – considers when and for how long the impact will be felt
 The overall risk exposure formula is RE = P x C
 P = the probability of occurrence for the risk
 C = the cost to the project should the risk actually occur
 Example:
 P = 80% probability that 18 of 60 software components will have to be developed
 C = total cost of developing the 18 components is $25,000
 RE = 0.80 x $25,000 = $20,000
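The RE calculation above is simple enough to express as a one-line function, using the notes' own example figures:

```python
# Risk exposure RE = P x C, using the example from the notes:
# 80% probability that 18 of 60 components must be developed,
# at a total cost of $25,000 for those components.
def risk_exposure(probability, cost):
    """RE = P x C: probability of occurrence times cost if it occurs."""
    return probability * cost

re_components = risk_exposure(0.80, 25_000)
print(f"RE = ${re_components:,.0f}")  # RE = $20,000
```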
 Risk Refinement
 As time passes and more is learned about the project and the risk, it may be possible to refine the risk into a more detailed condition-consequence statement:
 Given that <condition> then there is concern that (possibly) <consequence>
 Risk Mitigation, Monitoring, and Management
 An effective strategy for dealing with risk must consider three issues:
 Risk mitigation (i.e., avoidance)
 Risk monitoring
 Risk management and contingency planning
 Risk mitigation (avoidance) is the primary strategy and is achieved through a plan.
 Example: risk of high staff turnover
 Strategy for reducing staff turnover:
 Meet with current staff to determine causes for turnover (e.g., poor working conditions, low pay, competitive job market)
 Mitigate those causes that are under our control before the project starts
 Once the project commences, assume turnover will occur and develop techniques to ensure continuity when people leave
 Organize project teams so that information about each development activity is widely dispersed
 Define documentation standards and establish mechanisms to ensure that documents are developed in a timely manner
 Conduct peer reviews of all work (so that more than one person is "up to speed")
 Assign a backup staff member for every critical technologist
 During risk monitoring, the project manager monitors factors that may provide an indication of whether a risk is becoming more or less likely.
 Risk management and contingency planning assume that mitigation efforts have failed and that the risk has become a reality.
 RMMM steps incur additional project cost; large projects may have 30 – 40 identified risks.
 Risk is not limited to the software project itself:
 Risks can occur after the software has been delivered to the user.
 Software safety and hazard analysis:
 These are software quality assurance activities that focus on the identification and assessment of potential hazards that may affect software negatively and cause an entire system to fail.
 If hazards can be identified early in the software process, software design features can be specified that will either eliminate or control potential hazards.
 The RMMM Plan
 The RMMM plan documents all work performed as part of risk analysis and is used by the project manager as part of the overall project plan.
 Each risk is documented individually using a risk information sheet.
 Once the RMMM has been documented and the project has begun, risk mitigation and monitoring steps commence.
 Risk monitoring has three objectives:
 To assess whether predicted risks do, in fact, occur
 To ensure that risk aversion steps defined for the risk are being properly applied
 To collect information that can be used for future risk analysis
UNIT 5 COST ESTIMATION & MAINTENANCE
Software cost estimation - COCOMO model - Quality management - Quality concepts- SQA -
Software reviews - Formal technical reviews - Formal approaches of SQA and software
reliability - Software maintenance - SCM - Need for SCM - Version control - Introduction to SCM
process - Software configuration items. Re-Engineering - Software reengineering - Reverse
engineering - Restructuring - Forward engineering.

5.1 Software cost estimation

Predicting the resources required for a software development process.

Fundamental estimation questions

a) How much effort is required to complete an activity?

b) How much calendar time is needed to complete an activity?

c) What is the total cost of an activity?

Project estimation and scheduling are interleaved management activities.

The cost in a project is due to:

a. the requirements for software, hardware and human resources

b. the cost of software development is mainly due to the human resources needed

c. most cost estimates are measured in person-months (PM)

Software cost components

▪ Hardware and software costs

▪ Travel and training costs

▪ Effort costs (the dominant factor in most projects)

o salaries of engineers involved in the project

o social and insurance costs

▪ Effort costs must take overheads into account

o costs of building, heating, lighting

o costs of networking and communications

o costs of shared facilities (e.g. library, staff restaurant, etc.)


Costing and pricing

▪ Estimates are made to discover the cost, to the developer, of producing a software system

▪ There is not a simple relationship between the development cost and the price charged to the customer

▪ Broader organisational, economic, political and business considerations influence the price charged

Software pricing factors

Programmer productivity

▪ A measure of the rate at which individual engineers involved in software development produce software and associated documentation.

▪ Not quality-oriented, although quality assurance is a factor in productivity assessment

▪ Essentially, we want to measure useful functionality produced per time unit

Productivity measures

▪ Size-related measures based on some output from the software process. This may be lines of delivered source code, object code instructions, etc.

▪ Function-related measures based on an estimate of the functionality of the delivered software. Function points are the best known measure of this type.

Measurement problems

▪ Estimating the size of the measure

▪ Estimating the total number of programmer-months which have elapsed

▪ Estimating contractor productivity (e.g. documentation team) and incorporating this estimate in the overall estimate

Lines of code

▪ What's a line of code?

o The measure was first proposed when programs were typed on cards, with one line per card

o How does this correspond to statements, as in Java, which can span several lines, or where there can be several statements on one line?

▪ What programs should be counted as part of the system?

▪ LOC-based measures assume a linear relationship between system size and volume of documentation

Productivity comparisons

▪ The lower-level the language, the more productive the programmer appears (when productivity is measured in lines of code)

o The same functionality takes more code to implement in a lower-level language than in a high-level language
High and low level languages

Function points

▪ Based on a combination of program characteristics
o external inputs and outputs
o user interactions
o external interfaces
o files used by the system
▪ A weight is associated with each of these
▪ The function point count is computed by multiplying each raw count by the weight and summing all values
Object points

▪ Object points are an alternative function-related measure to function points
▪ Object points are NOT the same as object classes
▪ The number of object points in a program is a weighted estimate of
o the number of separate screens that are displayed
o the number of reports that are produced by the system
o the number of modules that must be developed
Productivity estimates

▪ Real-time embedded systems: 40-160 LOC/P-month

▪ Systems programs: 150-400 LOC/P-month

▪ Commercial applications: 200-800 LOC/P-month

▪ In object points, productivity has been measured between 4 and 50 object points/month, depending on tool support and developer capability

Quality and productivity

▪ All metrics based on volume/unit time are flawed because they do not take quality into account

▪ Productivity may generally be increased at the cost of quality

▪ It is not clear how productivity/quality metrics are related

▪ If change is constant, then an approach based on counting lines of code is not meaningful

Estimation techniques

▪ There is no simple way to make an accurate estimate of the effort required to develop a
software system
o Initial estimates are based on inadequate information in a user requirements
definition
o The software may run on unfamiliar computers or use new technology
o The people in the project may be unknown
▪ Project cost estimates may be self-fulfilling
o The estimate defines the budget and the product is adjusted to meet the budget
▪ Estimation techniques include:
o Algorithmic cost modelling
o Expert judgement
o Estimation by analogy
o Parkinson's Law
o Pricing to win
Algorithmic cost modelling

A formulaic approach based on historical cost information, generally driven by the size of the software
Expert judgement

▪ One or more experts in both software development and the application domain use
their experience to predict software costs. Process iterates until some consensus is
reached.
▪ Advantages: Relatively cheap estimation method. Can be accurate if experts have
direct experience of similar systems
▪ Disadvantages: Very inaccurate if there are no experts!
Estimation by analogy
▪ The cost of a project is computed by comparing the project to a similar project in the
same application domain
▪ Advantages: Accurate if project data available
▪ Disadvantages: Impossible if no comparable project has been tackled. Needs
systematically maintained cost database
Parkinson's Law

▪ The project costs whatever resources are available

▪ Advantages: No overspend

▪ Disadvantages: System is usually unfinished

▪ Parkinson's Law states that work expands to fill the time available. The cost is determined by available resources rather than by objective assessment.

▪ Example: the project should be delivered in 12 months and 5 people are available, so
Effort = 60 person-months
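The worked example above is just the available time multiplied by the available staff:

```python
# Parkinson's Law costing: effort is whatever the available staff can
# spend in the available time, not an objective estimate.
months, people = 12, 5
effort = months * people  # person-months
print(effort)  # 60
```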
Pricing to win

▪ The project costs whatever the customer has to spend on it

▪ Advantages: You get the contract

▪ Disadvantages: The probability that the customer gets the system he or she wants is
small. Costs do not accurately reflect the work required

Top-down and bottom-up estimation

▪ Any of these approaches may be used top-down or bottom-up.


▪ Top-down
o Start at the system level and assess the overall system functionality and how this
is delivered through sub-systems.
▪ Bottom-up
o Start at the component level and estimate the effort required for each
component. Add these efforts to reach a final estimate.
Top-down estimation

▪ Usable without knowledge of the system architecture and the components that might be
part of the system.
▪ Takes into account costs such as integration, configuration management and
documentation.
▪ Can underestimate the cost of solving difficult low-level technical problems.
Bottom-up estimation

▪ Usable when the architecture of the system is known and components identified.
▪ This can be an accurate method if the system has been designed in detail.
▪ It may underestimate the costs of system level activities such as integration and
documentation.
5.2 The COCOMO model

• The COnstructive COst MOdel (COCOMO) is the most widely used software estimation model in the world.

• The COCOMO model predicts the effort and duration of a project based on inputs relating to the size of the resulting system and a number of "cost drivers" that affect productivity.

▪ An empirical model based on project experience.

▪ Well-documented, 'independent' model which is not tied to a specific software vendor.

▪ Long history from the initial version published in 1981 (COCOMO-81) through various instantiations to COCOMO 2.

▪ COCOMO 2 takes into account different approaches to software development, reuse, etc.
Effort
• Effort Equation
– PM = C * (KDSI)^n (person-months)
• where PM = number of person-months (1 PM = 152 working hours),
• C = a constant,
• KDSI = thousands of "delivered source instructions" (DSI), and
• n = a constant.
Productivity
• Productivity equation
– Productivity = (DSI) / (PM)
• where PM = number of person-months,
• DSI = "delivered source instructions"
Schedule
• Schedule equation
– TDEV = C * (PM)^n (months)
• where TDEV = number of months estimated for software development.
Average Staffing
• Average Staffing Equation
– Average staffing = (PM) / (TDEV) (FSP)
• where FSP means Full-time-equivalent Software Personnel.
COCOMO Models
• COCOMO is defined in terms of three different models:
– the Basic model,
– the Intermediate model, and
– the Detailed model.
• The more complex models account for more factors that influence software projects, and
make more accurate estimates.
The Development Mode
• One of the most important factors contributing to a project's duration and cost is the development mode.
• Organic Mode: The project is developed in a familiar, stable environment, and the product is similar to previously developed products. The product is relatively small, and requires little innovation.
• Semidetached Mode: The project's characteristics are intermediate between Organic and Embedded.
• Embedded Mode: The project is characterized by tight, inflexible constraints and interface requirements. An embedded-mode project will require a great deal of innovation.
Cost Estimation Process
Effort (and hence cost) = Size of the project / Productivity
Project Size – Metrics
1. Number of functional requirements
2. Cumulative number of functional and non-functional requirements
3. Number of Customer Test Cases
4. Number of ‘typical sized’ use cases
5. Number of inquiries
6. Number of files accessed (external, internal, master)
7. Total number of components (subsystems, modules, procedures, routines, classes,
methods)
8. Total number of interfaces
9. Number of System Integration Test Cases
10. Number of input and output parameters (summed over each interface)
11. Number of Designer Unit Test Cases
12. Number of decisions (if, case statements) summed over each routine or method
13. Lines of Code, summed over each routine or method
Availability of Size Estimation Metrics:

Function Points
STEP 1: measure size in terms of the amount of functionality in a system. Function points are
computed by first calculating an unadjusted function point count (UFC). Counts are made for
the following categories
– External inputs – those items provided by the user that describe distinct application-oriented data (such as file names and menu selections)
– External outputs – those items provided to the user that generate distinct application-oriented data (such as reports and messages, rather than the individual components of these)
– External inquiries – interactive inputs requiring a response
– External files – machine-readable interfaces to other systems
– Internal files – logical master files in the system
STEP 2: Multiply each number by a weight factor, according to complexity (simple, average or
complex) of the parameter, associated with that number. The value is given by a table:

STEP 3: Calculate the total UFP (Unadjusted Function Points)


STEP 4: Rate each of the following 14 technical complexity factors with a value between 0 and 5 according to its importance:
Technical Complexity Factors:
1. Data Communication
2. Distributed Data Processing
3. Performance Criteria
4. Heavily Utilized Hardware
5. High Transaction Rates
6. Online Data Entry
7. Online Updating
8. End-user Efficiency
9. Complex Computations
10. Reusability
11. Ease of Installation
12. Ease of Operation
13. Portability
14. Maintainability
STEP 5: Sum the resulting numbers to obtain DI (degree of influence)
STEP 6: TCF (Technical Complexity Factor) is given by the formula
– TCF = 0.65 + 0.01 * DI
STEP 7: Function Points are given by the formula
– FP = UFP * TCF
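The steps above can be sketched as a short script. The notes' weight table did not survive extraction, so the simple/average/complex weights below are the standard Albrecht/IFPUG values, and the counts and ratings are invented for illustration.

```python
# Function-point sketch. Weights are the standard Albrecht/IFPUG values
# (simple, average, complex) -- an assumption, since the notes' table is
# missing. Counts and influence ratings are invented examples.
WEIGHTS = {
    "external_inputs":    (3, 4, 6),
    "external_outputs":   (4, 5, 7),
    "external_inquiries": (3, 4, 6),
    "external_files":     (5, 7, 10),
    "internal_files":     (7, 10, 15),
}
COMPLEXITY = {"simple": 0, "average": 1, "complex": 2}

def ufp(counts):
    """STEPS 2-3: weight each raw count and sum to get unadjusted FP."""
    return sum(n * WEIGHTS[cat][COMPLEXITY[cpx]] for cat, n, cpx in counts)

def function_points(counts, influence_ratings):
    """STEPS 5-7: DI is the sum of the 14 ratings (each 0-5)."""
    di = sum(influence_ratings)
    tcf = 0.65 + 0.01 * di
    return ufp(counts) * tcf

counts = [("external_inputs", 10, "average"),
          ("external_outputs", 5, "simple"),
          ("internal_files", 3, "complex")]
ratings = [3] * 14  # all 14 factors rated as mid-importance
print(round(function_points(counts, ratings), 2))
```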
COCOMO 81

Project complexity | Formula | Description
Simple | PM = 2.4 (KDSI)^1.05 × M | Well-understood applications developed by small teams.
Moderate | PM = 3.0 (KDSI)^1.12 × M | More complex projects where team members may have limited experience of related systems.
Embedded | PM = 3.6 (KDSI)^1.20 × M | Complex projects where the software is part of a strongly coupled complex of hardware, software, regulations and operational procedures.
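The effort formulas in the table can be evaluated directly. The 32-KDSI example project and the multiplier M = 1 (nominal cost drivers) are assumptions for illustration:

```python
# Basic COCOMO 81 effort, using the coefficients from the table above.
# M is the effort-adjustment multiplier; M = 1 assumes nominal cost drivers.
MODES = {
    "simple":   (2.4, 1.05),
    "moderate": (3.0, 1.12),
    "embedded": (3.6, 1.20),
}

def effort_pm(kdsi, mode, m=1.0):
    """PM = C * (KDSI)^n * M, in person-months."""
    c, n = MODES[mode]
    return c * kdsi ** n * m

# An assumed 32-KDSI simple (organic) project:
print(round(effort_pm(32, "simple"), 1))
```

Note how the exponents make the modes diverge as size grows: the same 32 KDSI costs roughly 2.5 times as much effort in embedded mode as in simple mode.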
COCOMO 2

▪ COCOMO 81 was developed with the assumption that a waterfall process would be
used and that all software would be developed from scratch.

▪ Since its formulation, there have been many changes in software engineering
practice and COCOMO 2 is designed to accommodate different approaches to
software development.

▪ COCOMO 2 incorporates a range of sub-models that produce increasingly detailed software estimates.

▪ The sub-models in COCOMO 2 are:

o Application composition model. Used when software is composed from existing parts.

o Early design model. Used when requirements are available but design has not yet started.

o Reuse model. Used to compute the effort of integrating reusable components.

o Post-architecture model. Used once the system architecture has been designed and more information about the system is available.

Application composition model

▪ Supports prototyping projects and projects where there is extensive reuse.
▪ Based on standard estimates of developer productivity in application (object) points/month.
▪ Takes CASE tool use into account.
▪ Formula is
o PM = ( NAP x (1 - %reuse/100) ) / PROD
o PM is the effort in person-months, NAP is the number of application points and PROD is the productivity.
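The formula above is straightforward to apply. The NAP, reuse, and PROD values below are assumed examples; PROD of 13 object points/month is a mid-range figure within the 4-50 range quoted earlier in these notes.

```python
# COCOMO 2 application-composition sketch:
# PM = ( NAP x (1 - %reuse/100) ) / PROD
# PROD = 13 is an assumed mid-range productivity, not a value from the notes.
def app_composition_pm(nap, percent_reuse, prod):
    """Effort in person-months from NAP application points."""
    return (nap * (1 - percent_reuse / 100)) / prod

print(round(app_composition_pm(nap=130, percent_reuse=20, prod=13), 1))
```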
5.3 Quality management
Measuring Quality

• Correctness

• Maintainability

• Integrity

• Usability

Software quality

▪ The degree to which a system, component, or process meets specified requirements.

▪ The degree to which a system, component, or process meets customer or user needs or expectations.

▪ Conformance to explicitly stated functional and performance requirements, explicitly documented development standards, and implicit characteristics that are expected of all professionally developed software.
5.4 Software Quality Assurance (SQA)

• A planned and systematic pattern of all actions necessary to provide adequate confidence that an item or product conforms to established technical requirements.

• A set of activities designed to evaluate the process by which the products are developed or manufactured. Contrast with: quality control.

Quality control

• Ensure that procedures and standards are followed by the software development
team.

• Quality control involves the series of inspections, reviews, and tests used
throughout the software process

SQA Group Activities

• Prepare SQA plan for the project.

• Participate in the development of the project's software process description.

• Review software engineering activities to verify compliance with the defined software
process.

• Audit designated software work products to verify compliance with those defined as part
of the software process.
• Ensure that any deviations in software or work products are documented and handled
according to a documented procedure.

• Record any evidence of noncompliance and report it to management.

5.5 Software Reviews

• Purpose is to find defects (errors) before they are passed on to another software
engineering activity or released to the customer.

• Software engineers (and others) conduct formal technical reviews (FTR) for software
engineers.

• Using formal technical reviews (walkthroughs or inspections) is an effective means for improving software quality.

Review Roles

• Presenter (designer/producer).

• Coordinator (not person who hires/fires).

• Recorder

– records events of meeting

– builds paper trail

• Reviewers

– maintenance oracle

– standards bearer

– user representative

– others

5.6 Formal Technical Reviews

• Involves 3 to 5 people (including reviewers)

• Advance preparation (no more than 2 hours per person) required

• Duration of review meeting should be less than 2 hours

• Focus of review is on a discrete work product

• Review leader organizes the review meeting at the producer's request.

• Reviewers ask questions that enable the producer to discover his or her own error (the
product is under review not the producer)

• Producer of the work product walks the reviewers through the product
• Recorder writes down any significant issues raised during the review

• Reviewers decide to accept or reject the work product and whether to require additional
reviews of product or not.

Need

• To improve quality.

• Catches 80% of all errors if done properly.

• Catches both coding errors and design errors.

• Enforces the spirit of any organizational standards.

• Training

Formality and Timing

• Formal review presentations


– resemble conference presentations.
• Informal presentations
– less detailed, but equally correct.
• Early
– tend to be informal
– may not have enough information
• Later
– tend to be more formal
– Feedback may come too late to avoid rework
• Typical review points:
– Analysis is complete.
– Design is complete.
– After first compilation.
– After first test run.
– After all test runs.
– Any time you complete an activity that produces a complete work product.

Review Guidelines

• Keep it short (< 30 minutes).

• Don’t schedule two in a row.

• Don’t review product fragments.

• Use standards to avoid style disagreements.

• Let the coordinator run the meeting and maintain order.

Formal SQA Approaches

1. Proof of correctness.

2. Statistical quality assurance.

3. Cleanroom process combines items 1 & 2.

Statistical Quality Assurance

• Information about software defects is collected and categorized

• Each defect is traced back to its cause

• Using the Pareto principle (80% of the defects can be traced to 20% of the causes)
isolate the "vital few" defect causes

• Move to correct the problems that caused the defects
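The Pareto step above can be sketched by counting defects per cause and walking the causes in descending order until 80% of the defects are covered. The cause names and counts are invented for illustration.

```python
# Statistical SQA sketch: categorize defects by cause, then apply the
# Pareto principle to find the "vital few" causes. Data is invented.
from collections import Counter

defect_causes = (["incomplete spec"] * 45 + ["logic error"] * 25 +
                 ["interface misuse"] * 15 + ["standards violation"] * 10 +
                 ["documentation"] * 5)

counts = Counter(defect_causes)
total = sum(counts.values())

# Walk causes from most to least frequent until 80% of defects are covered.
vital_few, covered = [], 0
for cause, n in counts.most_common():
    vital_few.append(cause)
    covered += n
    if covered / total >= 0.80:
        break

print(vital_few)  # the small set of causes to attack first
```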

5.7 Software Reliability

• Defined as the probability of failure-free operation of a computer program in a specified environment for a specified time period.

• Can be measured directly and estimated using historical and development data (unlike many other software quality factors).

• Software reliability problems can usually be traced back to errors in design or implementation.

Software Reliability Metrics

• Reliability metrics are units of measure for system reliability

• System reliability is measured by counting the number of operational failures and relating these to demands made on the system at the time of failure

• A long-term measurement program is required to assess the reliability of critical systems

• Probability of Failure on Demand (POFOD)

• POFOD = 0.001 means the service fails on one out of every 1000 requests

• Rate of Fault Occurrence (ROCOF)

• ROCOF = 0.02 means two failures for each 100 operational time units

• Mean Time to Failure (MTTF)

• average time between observed failures (aka MTBF)

• Availability = MTBF / (MTBF + MTTR)

• MTBF = Mean Time Between Failures

• MTTR = Mean Time to Repair

• Reliability = MTBF / (1 + MTBF)
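The metrics above can be computed from observed failure data; the failure and repair times below are invented observation data, not measurements from the notes.

```python
# Reliability-metric sketch using the definitions above.
# Times are invented example data (hours).
times_between_failures = [120, 95, 140, 110, 85]   # hours of operation
repair_times = [2, 3, 2, 4, 1]                     # hours to repair

mtbf = sum(times_between_failures) / len(times_between_failures)
mttr = sum(repair_times) / len(repair_times)

availability = mtbf / (mtbf + mttr)

# POFOD: failures per demand, e.g. 3 failed requests out of 3000.
pofod = 3 / 3000  # 0.001 -> one failure per 1000 requests

print(f"MTBF={mtbf:.0f}h  MTTR={mttr:.1f}h  availability={availability:.4f}")
```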

Software Safety

• SQA activity that focuses on identifying potential hazards that may cause a software
system to fail.

• Early identification of software hazards allows developers to specify design features that can eliminate or at least control the impact of potential hazards.

• Software reliability involves determining the likelihood that a failure will occur without
regard to consequences of failures.

Validation Perspectives

• Reliability validation
– Does measured system reliability meet its specification?
– Is system reliability good enough to satisfy users?
• Safety validation
– Does system operate so that accidents do not occur?
– Are accident consequences minimized?
• Security validation
– Is system secure against external attack?
Validation Techniques

• Static techniques

– design reviews and program inspections

– mathematical arguments and proof

• Dynamic techniques

– statistical testing

– scenario-based testing

– run-time checking

• Process validation

– SE processes should minimize the chances of introducing system defects

SQA Plan

• Management section

– describes the place of SQA in the structure of the organization

• Documentation section

– describes each work product produced as part of the software process

• Standards, practices, and conventions section

– lists all applicable standards/practices applied during the software process and
any metrics to be collected as part of the software engineering work

• Reviews and audits section

– provides an overview of the approach used in the reviews and audits to be
conducted during the project

• Test section

– references the test plan and procedure document and defines test record
keeping requirements

• Problem reporting and corrective action section

– defines procedures for reporting, tracking, and resolving errors or defects, and
identifies organizational responsibilities for these activities

• Other

– tools, SQA methods, change control, record keeping, training, and risk
management
5.8 Software Configuration Management

• Why Software Configuration Management ?


• The problem:
– Multiple people have to work on software that is changing
– More than one version of the software has to be supported:
• Released systems
• Custom configured systems (different functionality)
• System(s) under development
– Software must run on different machines and operating systems
• Hence the need for coordination
• Software Configuration Management
– manages evolving software systems
– controls the costs involved in making changes to a system
• Definition:

– A set of management disciplines within the software engineering process to
develop a baseline.

• Description:

– Software Configuration Management encompasses the disciplines and
techniques of initiating, evaluating, and controlling change to software products
during and after the software engineering process.

SCM Activities

• Configuration item identification


– modeling of the system as a set of evolving components
• Promotion management
– is the creation of versions for other developers
• Release management
– is the creation of versions for the clients and users
• Branch management
– is the management of concurrent development
• Variant management
– is the management of versions intended to coexist
• Change management
– is the handling, approval and tracking of change requests
SCM Roles

• Configuration Manager
– Responsible for identifying configuration items. The configuration manager can
also be responsible for defining the procedures for creating promotions and
releases
• Change control board member
– Responsible for approving or rejecting change requests
• Developer
– Creates promotions triggered by change requests or the normal activities of
development. The developer checks in changes and resolves conflicts
• Auditor
– Responsible for the selection and evaluation of promotions for release and for
ensuring the consistency and completeness of this release

Terminology and Methodology

– Configuration Items

– Baselines

– SCM Directories

– Versions, Revisions and Releases

Configuration Item

“An aggregation of hardware, software, or both, that is designated for configuration
management and treated as a single entity in the configuration management process.”
▪ Software configuration items are not only program code segments but all types of
documents produced during development, e.g.:
▪ all types of code files
▪ drivers for tests
▪ analysis or design documents
▪ user or developer manuals
▪ system configurations (e.g. version of compiler used)
▪ In some systems, not only software but also hardware configuration items (CPUs, bus
speed frequencies) exist!
• Large projects typically produce thousands of entities (files, documents, ...) which must
be uniquely identified.
• But not every entity needs to be configured all the time. Issues:
– What: Selection of CIs (What should be managed?)
– When: When do you start to place an entity under configuration control?
– Starting too early introduces too much bureaucracy
– Starting too late introduces chaos
▪ Some of these entities must be maintained for the lifetime of the software. This also
includes the phase when the software is no longer being developed but is still in use,
perhaps by industrial customers who expect proper support for many years.
▪ An entity naming scheme should be defined
so that related documents have related names.
▪ Selecting the right configuration items is a skill that takes practice
▪ Very similar to object modeling
▪ Use techniques similar to object modeling for finding CIs
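As a sketch of such a naming scheme, related CIs can share a structured, unique identifier. The field structure used here (project/subsystem/type/name-version) is an illustrative assumption, not a standard:

```python
# Hypothetical CI naming scheme: related documents get related names,
# and every identifier is unique within the project.

def ci_name(project, subsystem, ci_type, name, version):
    """Build a structured configuration item identifier."""
    return f"{project}/{subsystem}/{ci_type}/{name}-v{version}"

# The code file and its design document share all fields except the type:
print(ci_name("webshop", "payment", "code", "invoice", "1.2"))
# webshop/payment/code/invoice-v1.2
print(ci_name("webshop", "payment", "design", "invoice", "1.2"))
# webshop/payment/design/invoice-v1.2
```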

Baseline

“A specification or product that has been formally reviewed and agreed to by responsible
management, that thereafter serves as the basis for further development, and can be changed
only through formal change control procedures.”

Examples:

Baseline A: The API of a program is completely defined; the bodies of the methods are empty.
Baseline B: All data access methods are implemented and tested; programming of the GUI can
start.
Baseline C: GUI is implemented, test-phase can start.
• As systems are developed, a series of baselines is developed, usually after a review
(analysis review, design review, code review, system testing, client acceptance, ...)
– Developmental baseline (RAD, SDD, Integration Test, ...)
• Goal: Coordinate engineering activities.
– Functional baseline (first prototype, alpha release, beta release)
• Goal: Get first customer experiences with functional system.
– Product baseline (product)
• Goal: Coordinate sales and customer support.
• Many naming schemes for baselines exist (1.0, 6.01a, ...)

SCM Directories

• Programmer’s Directory (IEEE: Dynamic Library)


– Library for holding newly created or modified software entities. The programmer’s
workspace is controlled by the programmer only.
• Master Directory (IEEE: Controlled Library)
– Manages the current baseline(s) and controls changes made to them. Entry is
controlled, usually after verification. Changes must be authorized.
• Software Repository (IEEE: Static Library)
– Archive for the various baselines released for general use. Copies of these
baselines may be made available to requesting organizations.
Change management

• Change management is the handling of change requests


– A change request leads to the creation of a new release
• General change process
– The change is requested (this can be done by anyone including users and
developers)
– The change request is assessed against project goals
– Following the assessment, the change is accepted or rejected
– If it is accepted, the change is assigned to a developer and implemented
– The implemented change is audited.
• The complexity of the change management process varies with the project. Small
projects can handle change requests informally and quickly, while complex projects
require detailed change request forms and official approval by one or more managers.
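The general change process above can be sketched as a small state machine. The class, state names, and sample data below are illustrative assumptions, not part of any standard:

```python
# Hypothetical sketch of the change process:
# request -> assess -> accept/reject -> implement -> audit.

class ChangeRequest:
    def __init__(self, description, requester):
        self.description = description
        self.requester = requester      # anyone: user or developer
        self.state = "requested"
        self.assignee = None

    def assess(self, aligns_with_project_goals):
        """The change request is assessed against project goals."""
        self.state = "accepted" if aligns_with_project_goals else "rejected"

    def assign(self, developer):
        """An accepted change is assigned to a developer."""
        assert self.state == "accepted"
        self.assignee = developer
        self.state = "in_progress"

    def implement(self):
        self.state = "implemented"

    def audit(self):
        """The implemented change is audited before closing."""
        assert self.state == "implemented"
        self.state = "closed"

cr = ChangeRequest("Fix login timeout", "user@example.com")
cr.assess(aligns_with_project_goals=True)
cr.assign("dev-alice")
cr.implement()
cr.audit()
print(cr.state)  # closed
```

A real change control board would add approval records and traceability links; this sketch only shows the state transitions the notes describe.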
Controlling Changes

• Two types of controlling change:


– Promotion: The internal development state of the software is changed.
– Release: A set of promotions is distributed outside the development organization.
• Approaches for controlling change to libraries (Change Policy)
– Informal (good for research type environments)
– Formal approach (good for externally developed CIs and for releases)
Change Policies

• Whenever a promotion or a release is performed, one or more policies apply. The


purpose of change policies is to guarantee that each version, revision or release
conforms to commonly accepted criteria.
• Examples for change policies:
“No developer is allowed to promote source code which cannot be compiled without errors
and warnings.”
“No baseline can be released without having been beta-tested by at least 500 external
persons.”
5.9 Version vs. Revision vs. Release

• Version:
– An initial release or re-release of a configuration item associated with a complete
compilation or recompilation of the item. Different versions have different
functionality.
• Revision:
– Change to a version that corrects only errors in the design/code, but does not
affect the documented functionality.
• Release:
– The formal distribution of an approved version.
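One way to make the version/revision distinction concrete is a MAJOR.MINOR numbering convention. Both the convention and the helper below are illustrative assumptions, not mandated by the notes:

```python
# Illustrative sketch: a new version changes functionality (bump MAJOR),
# a revision only corrects errors (bump MINOR).

def bump(number, kind):
    major, minor = map(int, number.split("."))
    if kind == "version":      # new functionality: 2.3 -> 3.0
        return f"{major + 1}.0"
    elif kind == "revision":   # error fix only: 2.3 -> 2.4
        return f"{major}.{minor + 1}"
    raise ValueError(f"unknown kind: {kind}")

print(bump("2.3", "version"))   # 3.0
print(bump("2.3", "revision"))  # 2.4
```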
SCM planning

• Software configuration management planning starts during the early phases of a project.
• The outcome of the SCM planning phase is the Software Configuration Management
Plan (SCMP) which might be extended or revised during the rest of the project.
• The SCMP can either follow a public standard like the IEEE 828, or an internal (e.g.
company specific) standard.
The Software Configuration Management Plan

• Defines the types of documents to be managed and a document naming scheme.


• Defines who takes responsibility for the CM procedures and creation of baselines.
• Defines policies for change control and version management.
• Describes the tools which should be used to assist the CM process and any limitations
on their use.
• Defines the configuration management database used to record configuration
information.
5.10 Outline of a Software Configuration Management Plan

1. Introduction
Describes purpose, scope of application, key terms and references
2. Management (WHO?)
Identifies the responsibilities and authorities for accomplishing the planned
configuration management activities
3. Activities (WHAT?)
Identifies the activities to be performed in applying SCM to the project.
4. Schedule (WHEN?)
Establishes the sequence and coordination of the SCM activities with project
milestones.
5. Resources (HOW?)
Identifies tools and techniques required for the implementation of the SCMP
6. Maintenance
Identifies activities and responsibilities on how the SCMP will be kept current during
the life-cycle of the project.
Tools for Software Configuration Management

• Software configuration management is normally supported by tools with different


functionality.
• Examples:
– RCS
• very old but still in use; provides version control only
– CVS
• based on RCS, allows concurrent working without locking
– Perforce
• Repository server; keeps track of developer’s activities
– ClearCase
• Multiple servers, process modeling, policy check mechanisms
5.11 Software Reengineering

• Reengineering = a transformation process
• Goals:
– improve one’s understanding of the system
– improve the system’s maintainability, evolvability, and reusability

• Reorganising and modifying existing software systems to make them more maintainable

Software Reengineering Process Model

• The reengineering process model comprises six activities: inventory analysis,
document restructuring, reverse engineering, code restructuring, data restructuring,
and forward engineering.
Inventory analysis

• Every software organization should have an inventory of all applications.


• The inventory can be nothing more than a spreadsheet model containing information
that provides a detailed description (e.g., size, age, business criticality) of every active
application.
• By sorting this information according to business criticality, longevity, current
maintainability, and other locally important criteria, candidates for reengineering appear
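The sorting step above can be sketched directly. The application records and the ranking criteria chosen here are hypothetical:

```python
# Illustrative inventory "spreadsheet": rank applications so that critical,
# old, hard-to-maintain systems surface first as reengineering candidates.

inventory = [
    {"name": "payroll",  "age": 22, "criticality": 9,  "maintainability": 2},
    {"name": "intranet", "age": 5,  "criticality": 3,  "maintainability": 8},
    {"name": "billing",  "age": 15, "criticality": 10, "maintainability": 4},
]

# Highest criticality first, then oldest, then poorest maintainability.
candidates = sorted(
    inventory,
    key=lambda app: (-app["criticality"], -app["age"], app["maintainability"]),
)

for app in candidates:
    print(app["name"])
# billing, payroll, intranet
```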
Document restructuring

• Weak documentation is the trademark of many legacy systems. But what do we do
about it? What are our options?

• Creating documentation is far too time consuming. If the system works, we’ll live with
what we have.

• Documentation must be updated, but we have limited resources. We’ll use a
“document when touched” approach.

• The system is business critical and must be fully redocumented.

Reverse engineering
• The term reverse engineering has its origins in the hardware world. A company
disassembles a competitive hardware product in an effort to understand its competitor's
design and manufacturing "secrets." These secrets could be easily understood if the
competitor's design and manufacturing specifications were obtained. But these
documents are proprietary and unavailable to the company doing the reverse
engineering.
• Successful reverse engineering derives one or more design and manufacturing
specifications for a product by examining actual specimens of the product.
• Therefore, reverse engineering for software is the process of analyzing a program in
an effort to create a representation of the program at a higher level of abstraction than
source code.

• Reverse engineering is a process of design recovery.

• Reverse engineering tools extract data, architectural, and procedural design information
from an existing program.
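As a minimal stand-in for such a tool, Python's standard ast module can recover a higher-level view (which functions exist and what they call) from source code. The sample source below is invented for illustration:

```python
# Illustrative design recovery: extract a simple call graph from source code.

import ast

source = """
def load(path):
    return open(path).read()

def report(path):
    data = load(path)
    print(data)
"""

call_graph = {}
tree = ast.parse(source)
for node in ast.walk(tree):
    if isinstance(node, ast.FunctionDef):
        # Collect the simple-name calls made inside this function.
        call_graph[node.name] = [
            n.func.id
            for n in ast.walk(node)
            if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)
        ]

print(call_graph)  # {'load': ['open'], 'report': ['load', 'print']}
```

Real reverse engineering tools go much further (data and architectural views, cross-language parsing), but the principle is the same: analyze the program to build a representation above the level of source code.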

Code restructuring

• The most common type of reengineering is code restructuring. Some legacy systems
have a relatively solid program architecture, but individual modules were coded in a way
that makes them difficult to understand, test, and maintain.

• In such cases, the code within the suspect modules can be restructured.

• To accomplish this activity, the source code is analyzed using a restructuring tool.
Violations of structured programming constructs are noted and code is then restructured.
• The resultant restructured code is reviewed and tested to ensure that no anomalies have
been introduced. Internal code documentation is updated.
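A minimal before/after sketch of restructuring (the example functions are invented): behavior is preserved while a flag-controlled loop is replaced by a structured early return.

```python
# Before: a status flag and manual index obscure the intent.
def find_negative_before(values):
    found = False
    i = 0
    result = None
    while i < len(values) and not found:
        if values[i] < 0:
            result = values[i]
            found = True
        i = i + 1
    return result

# After: a structured loop with early return states the intent directly.
def find_negative_after(values):
    for v in values:
        if v < 0:
            return v
    return None

# Both versions return the same result for the same input.
print(find_negative_before([3, 1, -4, 2]))  # -4
print(find_negative_after([3, 1, -4, 2]))   # -4
```

As the notes say, the restructured code must then be reviewed and tested to confirm no anomalies were introduced; here the two versions can be checked against each other.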

Data restructuring

• Data restructuring is a full-scale reengineering activity. In most cases, data restructuring
begins with a reverse engineering activity. The current data architecture is dissected and
the necessary data models are defined. Data objects and attributes are identified, and
existing data structures are reviewed for quality.

Forward engineering

• Forward engineering, also called renovation or reclamation, not only recovers design
information from existing software, but uses this information to alter or reconstitute the
existing system in an effort to improve its overall quality.


