Software Design Midterm

Week 1-2

Introduction to Software Design


SOFTWARE

•Computer software, or just software, is a collection of computer programs
and related data that provides the instructions telling a computer what to
do and how to do it.

Software is a set of:

❑ Programs
❑ Procedures
❑ Algorithms and their documentation
OVERVIEW OF SOFTWARE
HISTORY

•The first theory about software was proposed by Alan Turing in his 1936 essay
On Computable Numbers, with an Application to the Entscheidungsproblem (the
"decision problem"). The term "software" was first used in print by John W.
Tukey in 1958.

•The term is often used to mean application software. In computer science and
software engineering, software is all information processed by computer
systems: programs and data.
SOFTWARE
GENERATIONS
FIRST GENERATION

•During the 1950s the first computers were programmed by changing wires and
setting tens of dials and switches, one for every bit. Sometimes these
settings could be stored on paper tapes that looked like ticker tape from
the telegraph.
SECOND GENERATION

•The first-generation "languages" were regarded as very user-unfriendly, so
people set out to look for something else: faster and easier to understand.
The result was the birth of second-generation languages (2GL) in the
mid-1950s.
THIRD GENERATION

•At the end of the 1950s, 'natural language' interpreters and compilers
were made, but it took some time before the new languages were accepted
by enterprises.

•About the oldest 3GL is FORTRAN (Formula Translation), developed around
1953 by IBM. It is a language primarily intended for technical and
scientific purposes. Standardization of FORTRAN started 10 years later, and a
recommendation was finally published by the International Organization for
Standardization (ISO) in 1968.
FOURTH GENERATION

•A 4GL is an aid with which the end user or programmer can build an
application without using a third-generation programming language.
Strictly speaking, knowledge of a programming language is therefore not
needed.
•In practice, computer systems divide software into three major classes.

❑System software
❑Programming software
❑Application software
SYSTEM SOFTWARE

•System software provides the basic functions for computer usage and helps
run the computer hardware and system.

It includes a combination of the following:

❑ device drivers
❑ operating systems
❑ utilities
❑ window systems
PROGRAMMING SOFTWARE

•Programming software provides tools to assist a programmer in writing
computer programs and other software in different programming languages
in a more convenient way.

The tools include:

❑compilers
❑debuggers
❑interpreters
❑linkers
❑text editors
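The compiler/interpreter distinction in the tool list above can be seen directly in Python, which exposes both stages as built-ins. This is a minimal illustrative sketch, not a description of any particular vendor toolchain:

```python
# Illustrative sketch: Python exposes a compile step and an
# interpret/execute step directly, mirroring two of the tool
# categories listed above.
source = "result = 2 + 3"                      # text written in a text editor
bytecode = compile(source, "<demo>", "exec")   # the "compiler" stage
namespace = {}
exec(bytecode, namespace)                      # the "interpreter" stage
print(namespace["result"])                     # prints 5
```

A real compiler additionally links object files into an executable; the sketch only shows the translate-then-run split.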
APPLICATION SOFTWARE

•Application software is developed to aid in any task that benefits from
computation. It is a broad category that encompasses software of many
kinds, including the internet browser being used to display this page.

This category includes:


❑business software
❑computer-aided design
❑databases
❑decision making software
❑educational software
❑image editing
❑Etc.
WHAT IS A PROGRAMMING LANGUAGE

❖ A tool for instructing machines.


❖ A notation for algorithms.
❖ A means for communication among programmers.
❖ A tool for experimentation.
❖ A means for controlling computerized devices.
❖ A way of expressing relationships among concepts.
❖ A means for expressing high-level designs.
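The "notation for algorithms" point above can be made concrete: a short, well-known algorithm (Euclid's GCD, chosen here as an illustration) written as precise, executable notation rather than informal prose.

```python
# A programming language as a notation for algorithms:
# Euclid's GCD algorithm expressed as executable notation.
def gcd(a: int, b: int) -> int:
    """Greatest common divisor of a and b via Euclid's algorithm."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # prints 6
```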
LANGUAGES
1950s: Fortran, Lisp, COBOL
1960s: Algol60, Simula, Algol68, BCPL, PL/1
1970s: Pascal, Classic C
MODERN PROGRAMMING LANGUAGES
Lisp, Fortran77, Simula67, Smalltalk, Eiffel, Ada, Ada98, Object Pascal,
C89, C++, C++98, C++0x, Java95, Java04, C#, COBOL89, COBOL04,
Python, PHP, Javascript, Visual Basic, PERL
CSCI624
Software Design and
Development
Week 2
Process, Methods and Tools
 Software Engineering is a layered
technology
Process
 Foundation for SE
 Glue that holds the technology layers together
 Enables timely development of software
 Framework for a set of key process areas
(KPAs) developed for effective delivery of
software engineering technology.
Methods
 Provide the technical how-to’s for building
software.
 Include tasks such as:
 Requirements analysis
 Design
 Program construction
 Testing
 Support
Tools
 Provide automated/semi-automated
support for the process and the
methods.
 When tools are integrated, information
created by one tool can be used by
another, creating a system for the
support of software development called
computer-aided software engineering (CASE).
 CASE combines software, hardware, and a
software engineering database to create a
software engineering environment.
A Generic Process Framework
Engineering is the analysis, design, construction, verification, and
management of technical entities.

a) Definition phase:
• Focuses on what: identify the requirements, what is to be
processed, what system behaviour is needed, and what
interface is to be designed; key requirements are identified.
b) Development phase:
• Focuses on how: how data structures are constructed and how
procedures and methods are implemented.
c) Support phase:
• Changes associated with error correction and adaptation to new
software environments.
There are 4 types of changes:
Correction, Adaptation, Enhancement, Prevention
SOFTWARE FRAMEWORK
Framework Activities
There are 5 activities:
 Communication : communicate with customer and
understand objectives
 Planning: A map that helps in the development of
software
 Modeling
 Analysis of requirements
 Design
 Construction
 Code generation
 Testing
 Deployment : Delivery of product
Umbrella Activities
 Software project management :
 Assess project progress against project plan
 Formal technical reviews :
 Remove errors before they migrate to the next activity
 Software quality assurance :
 Define activities to ensure quality
 Software configuration management :
 manage effect of changes in the project
 Work product preparation and production
 Reusability management:
 Define criteria for work product reuse
 Measurement:
 define measures for meeting customer needs
 Risk management :
 Assess risks that may affect the s/w
The Process Model:
Adaptability
 The framework activities will always
be applied on every project ... BUT
 The tasks (and degree of rigor) for
each activity will vary based on:
 The type of project
 Characteristics of the project
 Common sense judgment and
concurrence of the project team
The Primary Goal of Any Software
Process: High Quality

Remember:

High quality = project timeliness

Why?

Less rework!

SOFTWARE PROCESS MODELS
 A process model/ software engineering paradigm is
chosen based on the nature of the project and
application, the methods and tools to be used and the
controls and deliverables that are required.
 The different phases of a problem-solving loop are:
 Status quo - represents the current state of affairs
 Problem definition - identifies the specific problem to be solved
 Technical development - solves the problem through application of
some technology
 Solution integration - delivers the results (documents, programs,
data, etc.)
Prescriptive Process Models
 Prescriptive process models advocate an orderly
approach to software engineering.
 That leads to a few questions …
 If prescriptive process models strive for structure and
order, are they inappropriate for a software world that
thrives on change?
 Yet, if we reject traditional process models (and the order
they imply) and replace them with something less
structured, do we make it impossible to achieve
coordination and coherence in software work?
1. The Linear Sequential Model
 Also called the classic life cycle/waterfall
model.
 Has a systematic sequential approach to
software development that begins at system
level and progresses through analysis,
design, coding, testing and support.
 One of the oldest models.
 Requirements should be well understood
Activities:
 Communication : Requirements gathering
 Planning : Estimate schedule
 Modelling: Design
 Construction : Coding and testing
 Deployment : Delivery of software

OR
 Analysis
 Design
 Implementation
 Verification
 Testing
Analysis
 Focuses mainly on software.
 To understand the nature of the program to
be built, the software engineer must
understand the information domain for the
software, as well as the required function,
behavior, performance, and interface.
 Requirements for the system and software are
documented and reviewed with the
customer.
Design
 Focuses on 4 attributes of a program:
 Data structures
 Software architecture
 Interface representation
 Procedural (algorithmic) detail
 It translates the requirements into a representation of the
software that can be assessed for quality before coding
begins.
 The design is also documented and becomes part of the
software configuration.
Implementation:
 The design must be translated into a machine
readable form and this is done in this step.
Verification:
 Once the code has been generated, program
testing begins.
 It ensures that all the statements have been
tested.
 Tests are also conducted to uncover errors and
ensure that defined inputs produce the expected results.
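The "defined inputs produce expected results" idea above is exactly what a unit test expresses. A minimal sketch, in which the `discount` function and its values are invented purely for illustration:

```python
# Minimal sketch of verification: defined inputs checked against
# expected results. The discount function is a hypothetical example.
def discount(price: float, percent: float) -> float:
    """Apply a percentage discount and round to cents."""
    return round(price * (1 - percent / 100), 2)

# Each check pairs a defined input with its expected result.
assert discount(100.0, 10) == 90.0
assert discount(59.99, 0) == 59.99
print("all verification checks passed")
```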
Maintenance:
 Software will change after delivery,
either due to errors or because the software must be
changed to accommodate changes in its
environment.
Seat Work
 What is the generic process framework?
 Differentiate process, methods, and tools in
software development.
CSCI624
Software Design and
Development
Week 3
1. Waterfall Model
• The Waterfall Model was the first Process
Model to be introduced. It is also referred to
as a linear-sequential life cycle model. It is
very simple to understand and use. In a
waterfall model, each phase must be
completed before the next phase can begin,
and there is no overlapping between phases.
Waterfall Model
Problems Faced Practically
• Real projects rarely follow the sequence
• It is difficult for the customer to state all requirements
explicitly
• The customer must have patience.
• Used only when requirements are fixed
• Note: we should be careful and bear in mind
the above points.
2. The Prototyping Model
• The customer defines a set of general objectives for the software
but does not identify detailed input or output requirements.
• A developer may also be unsure of the efficiency of an
algorithm or the adaptability of an operating system.
• For such cases this model is best.
• It begins with the requirements gathering
• The developer and customer meet to define
overall objectives for the software, identify
whatever requirements are known and outline
areas where further definition is needed.
• A quick design is then made which focuses on
the parts of the software that are visible to the
customer/user.
• This leads to construction of a prototype,
which is shown to the user, who suggests further
changes if needed.
Prototyping
ACTIVITIES
• Communication : define overall objectives
and identify requirements
• Modelling and quick plan : representation of
the aspects that are visible to end users
• Construction of prototype : the prototype is
evaluated by stakeholders, who provide
feedback and thus enhance the product
• Deployment and delivery
Problems
• Excessive development time: When the end user is asked to evaluate a
prototype and provide feedback, there is a risk that they will forever want
to tweak the system, leading to delays in development.
• User confusion: Sometimes features appear in a prototype which are then
removed from the final system. Users can become confused or disappointed
with the final system if it differs greatly from the prototype.
• Increased development time: The end user might start to ask for features
which were never in the original user requirements specification. This can
lead to increased development time and costs.
• Too much focus on one part of the system: When a lot of time is spent on a
prototype of one specific part of the system, other parts of the system
might end up being neglected.
• Expense of prototyping: Building a prototype costs money in terms of
development time and possibly hardware. While the prototype is being
worked on, the real system is on hold.
3. RAD Model
 Rapid application development (RAD) is an
incremental software development process
model that emphasizes an extremely short
development cycle.
 The RAD model is a “high-speed”
adaptation of the linear sequential model in
which rapid development is achieved by
using component-based construction.
 Used when
 Requirements are clearly understood
 Project scope is constrained
 Creating a fully functional system in a short
span of time
 If requirements are well understood and
project scope is constrained, the RAD process
enables a development team to create a "fully
functional system" within very short time
periods (e.g., 60 to 90 days).
 Used primarily for information systems
applications, the RAD approach encompasses
the following phases:
 Business modeling
 Data modeling
 Process modeling
 Application generation
 Testing and turnover
Evolutionary Software Process
Models:
 Linear Sequential Model is designed for straight
line development.
 The Prototyping model is designed to assist the
customer/developer in understanding
requirements.
 Evolutionary models are iterative, i.e. they help
software engineers to develop more complete
versions of the software.
 2 types of evolutionary software process models:
 Incremental Model
 Spiral Model
1. Incremental Model
 Incremental model combines Linear Sequential Model with
prototyping.
 Applies linear sequences and each sequence produces a
deliverable increment of the software.
 E.g. Word processing software developed using incremental
model might deliver:
 1st increment- basic file management, editing and document production
functions
 2nd increment- more sophisticated editing and document production functions
 3rd increment- spelling and grammar checking
 4th increment- advanced page layout capability
Incremental Model
 When the incremental model is used, the 1st
increment is the core product:
basic requirements are addressed, but additional
features are not yet delivered.
 The core product is used by the customer, after
which a plan is developed for the next increment,
which modifies the core product to better meet the
needs of the customer.
 This process is repeated after delivery of each
increment until the complete product is produced.
Spiral Model
 This also combines the Linear Sequential Model with
prototyping.
 It provides the potential for rapid development of incremental
versions of the software.
 Using the spiral model, software is developed in a series of
incremental releases.
 At first, the 1st increment might be a paper model/prototype;
later, more complete versions will be made.
 The spiral model is divided into a number of framework activities
called task regions.
[Spiral model diagram: activities arranged around a spiral, starting from
communication, then planning (estimation, scheduling, risk analysis),
modeling (analysis, design), construction (code, test), and deployment
(delivery, feedback).]
 There are mainly 6 task regions:
1. Customer communication: tasks required to establish effective
communication between developer and customer.
2. Planning: tasks required to define resources, timelines and
other project related information.
3. Risk Analysis: tasks required to assess both technical and
management risks.
4. Engineering: tasks required to build one or more
representations of the application.
5. Construction and release: tasks required to construct, test,
install and provide user support. (E.g. documentation and
training).
6. Customer evaluation: tasks required to obtain customer
feedback based on evaluation of the software representations.
Seat Work
 What are the advantages and disadvantages of
the waterfall model?
 When should we use the waterfall model to
develop software?
 Define the spiral model. What are the six
quadrants (divisions) in a spiral model?

****
CSCI624
Software Design and
Development
Week 4
What Is Agility?
 Agile software development refers
to software development
methodologies centered around the idea of
iterative development, where
requirements and solutions evolve through
collaboration between self-organizing,
cross-functional teams.
 Effective (rapid and adaptive) response to
change (team members, new technology,
requirements).
 Effective communication in structure and
attitude among all team members:
technological and business people, software
engineers and managers.
Why Is "Agility" Important, and What Is It?
 Why? The modern business environment is
fast-paced and ever-changing. Agility
represents a reasonable alternative to
conventional software engineering for
certain classes of software projects. It has
been demonstrated to deliver successful
systems quickly.
 What? It may be termed "software
engineering lite." The basic activities
(communication, planning, modeling,
construction, and deployment) remain, but
they morph into a minimal task set that
pushes the team toward construction and
delivery sooner.
Agile Methodologies
 The most popular and common examples
are Scrum, eXtreme Programming (XP),
Feature Driven Development (FDD),
Dynamic Systems Development Method
(DSDM), Adaptive Software Development
(ASD), Crystal, and Lean Software
Development (LSD)
Agility and the Cost of Change
 Conventional wisdom is that the cost of
change increases nonlinearly as a project
progresses.
 It is relatively easy to accommodate a
change when a team is gathering
requirements early in a project.
 If there are any changes, the costs of
doing this work are minimal
 But if, in the middle of validation testing, a
stakeholder requests a major
functional change, then the change
requires a modification to the architectural
design, construction of new components,
changes to other existing components, new
testing, and so on.
 Costs escalate quickly.
Extreme Programming (XP)
 XP design occurs both before and after
coding, as refactoring is encouraged.
 Follows the KIS principle (keep it simple):
nothing more, nothing less than the story.
 Encourages the use of CRC (class-
responsibility-collaborator) cards in an
object-oriented context. These are the only design
work product of XP. They identify and
organize the classes that are relevant to
the current software.
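A CRC card is normally an index card, but its three fields can be sketched as a plain data record. The class name, responsibilities, and collaborators below are invented for illustration, not taken from any real XP project:

```python
# Hedged sketch of a CRC (Class-Responsibility-Collaborator) card as a
# plain data record; all names here are hypothetical examples.
from dataclasses import dataclass, field

@dataclass
class CRCCard:
    name: str
    responsibilities: list = field(default_factory=list)
    collaborators: list = field(default_factory=list)

order_card = CRCCard(
    name="Order",
    responsibilities=["compute total", "track line items"],
    collaborators=["LineItem", "Customer"],
)
print(order_card.name, order_card.collaborators)
```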
Seat Work
 What is Extreme Programming?
 What is Agile Programming?
Week 1
Introduction to Software Design
SYSTEM DESIGN

•It is a process of planning a new business system or replacing an existing


system by defining its components or modules to satisfy the specific
requirements.

•System Design mainly focuses on:

• Systems
• Processes
• Technology
What is a System?

•The word System is derived from the Greek word Systema, which means an
organized relationship between any set of components to achieve some
common cause or objective.

• A system is “an orderly grouping of interdependent components linked


together according to a plan to achieve a specific goal.”
Constraints of a System

A system must have three basic constraints:

1. A system must have some structure and behavior which is designed to
achieve a predefined objective.

2. Interconnectivity and interdependence must exist among the system
components.

3. The objectives of the organization have a higher priority than the
objectives of its subsystems.
Properties of a System
A system has the following properties:
Organization
•Organization implies structure and order. It is the arrangement of
components that helps to achieve predetermined objectives.

Interaction
•It is defined by the manner in which the components operate with each other.
•For example, in an organization, the purchasing department must interact
with the production department, and payroll with the personnel department.
Properties of a System

Interdependence
•Interdependence means how the components of a system depend on one
another. For proper functioning, the components are coordinated and linked
together according to a specified plan. The output of one subsystem is
required by another subsystem as input.

Integration
•Integration is concerned with how a system's components are connected
together. It means that the parts of the system work together within the
system even if each part performs a unique function.
Properties of a System
Central Objective
•The objective of a system must be central. It may be real or stated. It is
not uncommon for an organization to state one objective and operate to
achieve another.

•The users must know the main objective of a computer application early in
the analysis for a successful design and conversion.
Elements of a System

Outputs and Inputs

• The main aim of a system is to produce an output which is useful for its
user.

• Inputs are the information that enters the system for processing.

• Output is the outcome of processing.
Elements of a System

Processor(s)
•The processor is the element of a system that performs the actual
transformation of input into output.

•It is the operational component of a system. Processors may modify the
input either totally or partially, depending on the output specification.

•As the output specification changes, so does the processing. In some
cases, input is also modified to enable the processor to handle the
transformation.
Elements of a System

Control
•The control element guides the system.

•It is the decision-making subsystem that controls the pattern of activities
governing input, processing, and output.

•The behavior of a computer system is controlled by the operating system
and software. In order to keep the system in balance, what and how much
input is needed is determined by the output specifications.
Elements of a System

Feedback
•Feedback provides the control in a dynamic system.

•Positive feedback is routine in nature and encourages the performance of
the system.

•Negative feedback is informational in nature and provides the controller
with information for action.
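Negative feedback can be sketched as a tiny control loop: the controller measures the output, compares it with a target, and feeds a correction back into the input. The thermostat-style numbers below are invented purely for illustration:

```python
# Sketch of negative feedback in a dynamic system: the error signal
# (information for action) drives a corrective adjustment.
def corrected(temp: float, target: float, gain: float = 0.5) -> float:
    error = target - temp       # feedback: information for action
    return temp + gain * error  # corrective adjustment

temp = 15.0
for _ in range(20):             # repeated measure-and-correct cycles
    temp = corrected(temp, target=21.0)
print(round(temp, 2))           # settles near the 21.0 target
```

Each cycle shrinks the error, so the system converges toward the target rather than drifting: that damping effect is what makes the feedback "negative".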
Elements of a System

Environment
•The environment is the "supersystem" within which an organization operates.

•It is the source of external elements that impinge on the system.

•It determines how a system must function. For example, vendors and
competitors in the organization's environment may impose constraints that
affect the actual performance of the business.
Elements of a System

Boundaries and Interface

•A system should be defined by its boundaries. Boundaries are the limits
that identify its components, processes, and interrelationships when it
interfaces with another system.

•Each system has boundaries that determine its sphere of influence and
control.

•Knowledge of the boundaries of a given system is crucial in determining
the nature of its interface with other systems for successful design.
Types of Systems
•Systems can be divided into the following types:

Physical or Abstract Systems

•Physical systems are tangible entities. We can touch and feel them.

•A physical system may be static or dynamic in nature. For example, desks
and chairs are the physical parts of a computer center which are static. A
programmed computer is a dynamic system in which programs, data, and
applications can change according to the user's needs.
Types of Systems
•Abstract systems are non-physical or conceptual entities, such as
formulas, representations, or models of a real system.
Types of Systems
Open or Closed Systems
•An open system must interact with its environment. It receives inputs from,
and delivers outputs to, the outside of the system. For example, an
information system must adapt to changing environmental conditions.

•A closed system does not interact with its environment. It is isolated from
environmental influences. A completely closed system is rare in reality.
Types of Systems
Adaptive and Non-Adaptive Systems
•An adaptive system responds to changes in the environment in a way that
improves its performance and survival. For example, human beings and
animals.
•A non-adaptive system is one which does not respond to the environment.
For example, machines.
Types of Systems
Permanent or Temporary Systems
•A permanent system persists for a long time. For example, business policies.
•A temporary system is made for a specified time and is then demolished.
For example, a DJ system is set up for a program and disassembled after
the program.
Types of Systems
Natural and Manufactured Systems
•Natural systems are created by nature. For example, the solar system and
seasonal systems.

•A manufactured system is a man-made system. For example, rockets, dams,
and trains.
Types of Systems
Deterministic or Probabilistic Systems
•A deterministic system operates in a predictable manner, and the
interaction between system components is known with certainty. For example,
two molecules of hydrogen and one molecule of oxygen make water.

•A probabilistic system shows uncertain behavior; the exact output is not
known in advance. For example, weather forecasting and mail delivery.
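The two behaviors can be contrasted in a few lines of code. Both functions below are toy examples invented for illustration; the first echoes the hydrogen-and-oxygen reaction (2 H2 + O2 -> 2 H2O), the second a coin-flip "forecast":

```python
import random

# Deterministic: the same inputs always yield the same output.
def water_molecules(h2: int, o2: int) -> int:
    """Water molecules producible from h2 hydrogen and o2 oxygen
    molecules, following 2 H2 + O2 -> 2 H2O."""
    return 2 * min(h2 // 2, o2)

# Probabilistic: the exact output is not known in advance.
def forecast() -> str:
    return random.choice(["sun", "rain"])

print(water_molecules(4, 1))  # always prints 2, every run
```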
Types of Systems
Social, Human-Machine, and Machine Systems
•A social system is made up of people. For example, social clubs and
societies.

•In a human-machine system, both humans and machines are involved in
performing a particular task. For example, computer programming.

•A machine system is one where human interference is negligible. All the
tasks are performed by the machine. For example, an autonomous robot.
Types of Systems
Man-Made Information Systems
•An information system is an interconnected set of information resources
used to manage data for a particular organization, under Direct Management
Control (DMC).

•This system includes the hardware, software, communication, data, and
applications for producing information according to the needs of an
organization.
Types of Systems
•Man-made information systems are divided into three types:
•Formal Information System: based on the flow of information in the
form of memos, instructions, etc., from the top level to lower levels of
management.

•Informal Information System: an employee-based system which solves
day-to-day work-related problems.
Types of Systems
•Computer-Based System: a system directly dependent on the computer for
managing business applications. For example, an automatic library system,
railway reservation system, banking system, etc.
Systems Models
Schematic Models
•A schematic model is a 2-D chart that shows system elements and their
linkages.

•Different arrows are used to show information flow, material flow, and
information feedback.
Systems Models
Flow System Models
•A flow system model shows the orderly flow of the material, energy, and
information that hold the system together.

•Program Evaluation and Review Technique (PERT), for example, is used to


abstract a real world system in model form.
Systems Models
Static System Models
• They represent one pair of relationships such as activity–time or cost–
quantity.

• The Gantt chart, for example, gives a static picture of an activity-time
relationship.
Systems Models
Dynamic System Models
•Business organizations are dynamic systems. A dynamic model approximates
the type of organization or application that analysts deal with.

•It shows an ongoing, constantly changing status of the system. It consists of:
1. Inputs that enter the system
2. The processor through which transformation takes place
3. The program(s) required for processing
4. The output(s) that result from processing
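The four parts listed above can be sketched as a tiny pipeline. All names and data here are invented for illustration, assuming Python 3.9+:

```python
# Minimal sketch of a dynamic system model: inputs enter, the processor
# applies the program, and outputs result from processing.
def program(record: str) -> str:
    """The program: the processing rule applied to each input (hypothetical)."""
    return record.strip().upper()

def processor(inputs: list[str]) -> list[str]:
    """The processor: runs the program over everything that enters."""
    return [program(r) for r in inputs]

outputs = processor(["  alice ", "bob"])   # 1. inputs enter the system
print(outputs)                             # 4. outputs result from processing
```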
Categories of Information
There are three categories of information, related to the managerial levels
and the decisions managers make.

Strategic Information
•This information is required by top management for long-range planning and
policies for the next few years. For example, trends in revenues, financial
investment, human resources, and population growth.

•This type of information is obtained with the aid of Decision Support
Systems (DSS).
Categories of Information
Managerial Information
•This type of information is required by middle management for short- and
intermediate-range planning, in terms of months. For example, sales
analysis, cash flow projections, and annual financial statements.

•It is obtained with the aid of Management Information Systems (MIS).
Categories of Information
Operational Information
•This type of information is required by lower management for daily and
short-term planning to enforce day-to-day operational activities.
For example, keeping employee attendance records, overdue purchase
orders, and current stock available.

•It is obtained with the aid of Data Processing Systems (DPS).
CSCI624
Software Design and
Development
Week 5
Agile Process Models
The Agile model believes that every project
needs to be handled differently and that
existing methods need to be tailored to
best suit the project requirements. In Agile,
tasks are divided into time boxes (small
time frames) to deliver specific features for
a release.
The Agile Process Flow

 Concept - Projects are envisioned and
prioritized
 Inception - Team members are identified,
funding is put in place, and initial
environments and requirements are
discussed
 Iteration/Construction - The
development team works to deliver working
software based on iteration requirements
and feedback
 Release - QA (Quality Assurance) testing,
internal and external training,
documentation development, and final
release of the iteration into production
 Production - Ongoing support of the
software
 Retirement - End-of-life activities,
including customer notification and
migration
Tools for the Agile Process
 Agilean
 First on the list of agile project
management tools is Agilean. It is a SaaS
enterprise workflow automation and project
management software solution created
mainly for small and medium IT enterprises.
Tool Set for the Agile Process
 Wrike
 Another great entry on the list of the best
agile project management tools is Wrike,
which is one of the best in terms of
integrating email with project
management, with the main features built in.
Tool Set for the Agile Process
 Kanbanize
 Kanbanize is Kanban software for Agile
project management that brings full
transparency to both individual team
workflows and the entire organization.

 These are some of the tools commonly used.
Seat Work
 What do you understand by an Agile process?
 List at least 10 tools used for the Agile
process.
 Find the details of the Scrum Agile process.

 For the above questions, discuss your
findings in the class session.

****
Project Management Concepts
Lecture Module Week 6

What is Project Management?
 Project management is the application of
knowledge, skills, tools, and
techniques to project activities to meet project
requirements
 an interrelated group of processes that enables
the project team to achieve a successful project

The 4 P’s
 People — the most important element of a
successful project
 Product — the software to be built
 Process — the set of framework activities and
software engineering tasks to get the job done
 Project — all work required to make the product a
reality

PEOPLE
 The people management maturity model defines the
following key practice areas for software people:
 recruiting,
 selection,
 performance management,
 training,
 compensation,
 career development,
 organization and work design,
 and team/culture development.
 Organizations that achieve high levels of maturity in the
people management area have a higher likelihood of
implementing effective software engineering practices.
PRODUCT
 Before a project can be planned, product objectives and
scope should be established, alternative solutions should
be considered, and technical and management
constraints should be identified.
 Without this information, it is impossible to define
reasonable (and accurate) estimates of the cost, an
effective assessment of risk, a realistic breakdown of
project tasks, or a manageable project schedule that
provides a meaningful indication of progress.
 The software developer and customer must meet to
define product objectives and scope.
 Objectives identify the overall goals for the product (from the customer’s
point of view) without considering how these goals will be achieved.
 Scope identifies the primary data, functions and behaviors that characterize
the product and, more important, attempts to bound these characteristics in a
quantitative manner.
 Once the product objectives and scope are understood, alternative solutions
are considered.
PROCESS
 A software process provides the framework from which a comprehensive plan
for software development can be established.
 A small number of framework activities are applicable to all software projects,
regardless of their size or complexity.
 A number of different task sets—tasks, milestones, work products, and quality
assurance points—enable the framework activities to be adapted to the
characteristics of the software project and the requirements of the project
team.
 Finally, umbrella activities—such as software quality assurance, software
configuration management, and measurement—overlay the process model.
PROJECT
 In order to avoid project failure, a software project manager and the software
engineers who build the product must avoid a set of common warning signs,
understand the critical success factors that lead to good project management,
and develop a commonsense approach for planning, monitoring and
controlling the project.
Software Teams
How to lead?
How to organize?
How to collaborate?

How to motivate? How to create good ideas?

THE PEOPLE
 THE PLAYERS: STAKEHOLDERS
 TEAM LEADERS
 TEAM ORGANIZATION
Stakeholders
 Senior managers who define the business issues that often
have significant influence on the project.
 Project (technical) managers who must plan, motivate,
organize, and control the practitioners who do software work.
 Practitioners who deliver the technical skills that are
necessary to engineer a product or application.
 Customers who specify the requirements for the software to
be engineered and other stakeholders who have a
peripheral interest in the outcome.
 End-users who interact with the software once it is released
for production use.
Team Leader
 The MOI Model
 Motivation. The ability to encourage (by “push or pull”) technical people
to produce to their best ability.
 Organization. The ability to mold existing processes (or invent new ones)
that will enable the initial concept to be translated into a final product.
 Ideas or innovation. The ability to encourage people to create and feel
creative even when they must work within bounds established for a
particular software product or application.
Team Organization

1. Democratic decentralized (DD)
2. Controlled decentralized (CD)
3. Controlled Centralized (CC)
Democratic Decentralized (DD)
 Has no permanent leader
 Task coordinators are appointed for short durations and
then replaced by others who may coordinate different
tasks.
 Decisions on problems and approach are made by group
consensus.
 Communication among team members is horizontal.

Controlled Decentralized (CD)
 Has a defined leader who coordinates specific tasks and
secondary leaders that have responsibility for subtasks.
 Problem solving remains a group activity, but
implementation of solutions is partitioned among
subgroups by the team leader.
 Communication among subgroups and individuals is
horizontal. Vertical communication along the control
hierarchy also occurs.
Controlled Centralized (CC)
 Top-level problem solving and internal team coordination
are managed by a team leader.
 Communication between the leader and team members
is vertical.
Software Teams
The following factors must be considered when selecting a
software project team structure ...
 the difficulty of the problem to be solved
(qualifications)
 the size of the resultant program(s) in LOC or FPs,
 the degree to which the problem can be
modularized
 the required quality and reliability of the system to be
built
 the degree of sociability (communication) required for
the project
 the delivery date.
Seat Work
 Define PPP in software project management.
 List the team organization elements.
 List the 5 questions for a successful team to address.
 What is the MOI model for a team leader to implement?
CS-6209 Software Engineering 1
Week 6: Software Quality Management

Module 005: Software Quality Management

Course Learning Outcomes:


1. Understand the issue of Software Quality and the activities present in a
typical Quality Management process.
2. Understand the advantages and difficulties presented by the use of Quality
standards in Software Engineering
3. Understand the concepts of Software Quality Management

Introduction
Computers and software are ubiquitous. Mostly they are embedded and we don’t even
realize where and how we depend on software. We might accept it or not, but software is
governing our world and society and will continue further on. It is difficult to imagine our
world without software. There would be no running water, food supplies, business or
transportation would disrupt immediately, diseases would spread, and security would be
dramatically reduced – in short, our society would disintegrate rapidly. A key reason our
planet can bear over six billion people is software. Since software is so ubiquitous, we need
to stay in control. We have to make sure that the systems and their software run as we
intend – or better. Only if software has the right quality will we stay in control and not
suddenly realize that things are going awfully wrong. Software quality management is the
discipline that ensures that the software we are using and depending upon is of the right
quality. Only with solid understanding and discipline in software quality management will
we effectively stay in control.

What exactly is software quality management? To address this question we first need to
define the term “quality”. Quality is the ability of a set of inherent characteristics of a
product, service, product component, or process to fulfill requirements of customers [1].
From a management and controlling perspective quality is the degree to which a set of
inherent characteristics fulfills requirements. Quality management is the sum of all planned
systematic activities and processes for creating, controlling and assuring quality [1]. Fig. 1
indicates how quality management relates to the typical product development. We have
used a V-type visualization of the development process to illustrate that different quality
control techniques are applied to each level of abstraction from requirements engineering
to implementation. Quality control questions are mentioned on the right side. They are
addressed by techniques such as reviews or testing. Quality assurance questions are
mentioned in the middle. They are addressed by audits or sample checks. Quality
improvement questions are mentioned on the left side. They are addressed by dedicated
improvement projects and continuous improvement activities.

Course Module

Quality Concepts
The long-term profitability of a company is heavily impacted by the quality
perceived by customers. Customers view achieving the right balance of reliability,
market window of a product and cost as having the greatest effect on their
long-term link to a company. This has been long articulated, and applies in different
economies and circumstances. Even in restricted competitive situations, such as a
market with few dominant players (e.g., the operating system market of today or the
database market of few years ago), the principle applies and has given rise to open
source development. With the competitor being often only a mouse-click away,
today quality has even higher relevance. This applies to Web sites as well as to
commodity goods with either embedded or dedicated software deliveries. And the
principle certainly applies to investment goods, where suppliers are evaluated by a
long list of different quality attributes.

Methodological approaches to guarantee quality products have led to international
guidelines (e.g., ISO 9001) and widely applied methods to assess the development
processes of software providers (e.g., SEI CMMI). In addition, most companies apply
certain techniques of criticality prediction that focus on identifying and reducing
release risk. Unfortunately, many efforts usually concentrate on testing and
reworking instead of proactive quality management.

Yet there is a problem with quality in the software industry. By quality we mean the
bigger picture, such as delivering according to commitments. While solutions
abound, knowing which solutions work is the big question. What are the most
fundamental underlying principles in successful projects? What can be done right
now? What actually is good or better? What is good enough – considering the
immense market pressure and competition across the globe?

A simple – yet difficult to digest and implement – answer to these questions is that
software quality management is not simply a task, but rather a habit. It must be
engrained in the company culture. It is something that is visible in the way people
are working, independent of their role. It certainly means that every single person
in the organization sees quality as her own business, not that of a quality manager or
a testing team. A simple yet effective test to quickly identify the state of practice
with respect to quality management is to ask around what quality means for an
employee and how he delivers according to this meaning. You will identify that
many see it as a bulky and formal approach to be done to achieve necessary
certificates. Few exceptions exist, such as industries with safety and health impacts.
But even there, you will find different approaches to quality, depending on culture.
Those with carrot and stick will not achieve a true quality culture. Quality is a habit.
It is driven by objectives and not based on beliefs. It is primarily achieved when
each person in the organization knows and is aware of her own role to deliver
quality.

Quality management is the responsibility of the entire enterprise. It is strategically
defined, directed and operationally implemented on various organizational levels.
Fig. 1 shows, in a simplified organizational layout with four tiers, the respective
responsibilities to successfully implement quality management. Note that it is not a
top-down approach where management sets unrealistic targets that must be
implemented on the project level. It is even more important that continuous
feedback is provided bottom-up so that decisions can be changed or directions can
be adjusted.

Fig. 1: Quality management within the organization

Quality is implemented along the product life-cycle. Fig. 2 shows some pivotal
quality-related activities mapped to the major life-cycle phases. Note that on the left
side strategic directions are set and a respective management system is
implemented. Towards the right side, quality related processes, such as test or
supplier audits are implemented. During evolution of the product with dedicated
services and customer feedback, the product is further optimized and the
management system is adapted where necessary.

It is important to understand that the management system is not specific to a
project, but drives a multitude of projects or even the entire organization. Scale
effects occur with having standardized processes that are systematically applied.
This not only allows moving people to different projects without a long learning
curve but also assures proven quality at best possible efficiency. A key step along all
these phases is to recognize that all quality requirements can and should be
specified in quantitative terms. This does not mean “counting defects” as it would be
too late and reactive. It means quantifying quality attributes such as security,
portability, adaptability, maintainability, robustness, usability, reliability and
performance as an objective before the design of a product.


Fig. 2: Quality management along the product life-cycle

A small example will illustrate this need. A software system might have strict
reliability constraints. Instead of simply stating that reliability should achieve less
than one failure per month in operation, which would be reactive, related quality
requirements should target underlying product and process needs to achieve such
reliability. During the strategy phase, the market or customer needs for reliability
need to be elicited. Is reliability important as an image or is it rather availability?
What is the perceived value of different failure rates? A next step is to determine
how these needs will be broken down to product features, components and
capabilities. Which architecture will deliver the desired reliability and what are
cost impacts? What component and supplier qualification criteria need to be
established and maintained throughout the product life-cycle? Then the underlying
quality processes need to be determined. This should not be done ad-hoc and for
each single project individually but by tailoring organizational processes, such as
product life-cycle, project reviews, or testing to the specific needs of the product.
What test coverage is necessary and how will it be achieved? Which test equipment
and infrastructure for interoperability of components needs to be applied? What
checklists should be used in preparing for reviews and releases? These processes
need to be carefully applied during development. Quality control will be applied by
each single engineer and quality assurance will be done systematically for selected
processes and work products. Finally the evolution phase of the product needs to
establish criteria for service request management and assuring the right quality
level of follow-on releases and potential defect corrections. A key question to
address across all these phases is how to balance quality needs with necessary effort
and availability of skilled people. Both relate to business, but that is at times
overlooked. We have seen companies that due to cost and time constraints would
reduce requirements engineering or early reviews of specifications and later found
out that follow-on costs were higher than what was cut out. A key understanding to
achieving quality and therefore business performance has once been phrased by
Abraham Lincoln: “If someone gave me eight hours to chop down a tree, I would
spend six hours sharpening the axe.”

Process Maturity and Quality


The quality of a product or service is mostly determined by the processes and
people developing and delivering the product or service. Technology can be bought,
it can be created almost on the spot, and it can be introduced by having good
engineers. What matters to the quality of a product is how they are working and
how they introduce this new technology. Quality is not at a stand-still, it needs to be
continuously questioned and improved. With today’s low entry barriers to software
markets, one thing is sure: There is always a company or entrepreneur just
approaching your home turf and conquering your customers. To continuously
improve and thus stay ahead of competition, organizations need to change in a
deterministic and results-oriented way. They need to know and improve their
process maturity.

The concept of process maturity is not new. Many of the established quality models
in manufacturing use the same concept. This was summarized by Philip Crosby in
his bestselling book “Quality is Free” in 1979. He found from his broad experiences
as a senior manager in different industries that business success depends on quality.
With practical insight and many concrete case studies he could empirically link
process performance to quality. His credo was stated as: “Quality is measured by the
cost of quality which is the expense of nonconformance – the cost of doing things
wrong.”

First organizations must know where they are; they need to assess their processes.
The more detailed the results from such an assessment, the easier and more
straightforward it is to establish a solid improvement plan. That was the basic idea
with the “maturity path” concept proposed by Crosby in the 1970s. He distinguishes
five maturity stages, namely

 Stage 1: Uncertainty
 Stage 2: Awakening
 Stage 3: Enlightening
 Stage 4: Wisdom
 Stage 5: Certainty

Defects – Prediction, Detection, Correction and Prevention


To achieve the right quality level in developing a product it is necessary to understand
what it means not to have sufficient quality. Let us start with the concept of defects. A
defect is defined as an imperfection or deficiency in a system or component where that
component does not meet its requirements or specifications, which could yield a failure.
A causal relationship links the failure to the defect that caused it, which itself is caused
by a human error during the design of the product. Defects are not just information about
something wrong in a software system or about the progress in building up quality.
Defects are information about problems in the process that created this software. The
four questions to address are:

1. How many defects are there and have to be removed?
2. How can the critical and relevant defects be detected most efficiently?
3. How can the critical and relevant defects be removed most effectively and
efficiently?
4. How can the process be changed to avoid the defects from reoccurring?

These four questions relate to the four basic quality management techniques of
prediction, detection, correction and prevention. The first step is to identify how
many defects there are and which of those defects are critical to product performance.
The underlying techniques are statistical methods of defect estimation, reliability
prediction and criticality assessment. These defects have to be detected by quality
control activities, such as inspections, reviews, unit test, etc. Each of these techniques
has their strengths and weaknesses which explains why they ought to be combined to
be most efficient. It is of little value to put large numbers of people on testing when
in-depth requirements reviews would be much faster and cheaper. Once defects are detected and
identified, the third step is to remove them. This sounds easier than it actually is due to
the many ripple effects each correction has to a system. Regression tests and reviews of
corrections are absolutely necessary to assure that quality won’t degrade with changes.
A final step is to embark on preventing these defects from re-occurring. Often engineers
and their management state that this actually should be the first and most relevant step.
We agree, but experience tells that again and again, people stumble across defect
avoidance simply because their processes won’t support it. In order to effectively avoid
defects engineering processes must be defined, systematically applied and
quantitatively managed. This being in place, defect prevention is a very cost-effective
means to boost both customer satisfaction and business performance, as many
high-maturity organizations such as Motorola, Boeing or Wipro show.

Defect removal is not about assigning blame but about building better quality and
improving the processes to ensure quality. Reliability improvement always needs
measurements on effectiveness (i.e., percentage of removed defects for a given activity)
compared to efficiency (i.e., effort spent for detecting and removing a defect in the
respective activity). Such measurement asks for the number of residual defects at a
given point in time or within the development process.
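These two measurements can be sketched as follows; this is a minimal illustration, and the activity figures (defects present, defects removed, effort spent) are invented:

```python
def effectiveness(defects_removed: int, defects_present: int) -> float:
    """Fraction of the defects present that the activity removed."""
    return defects_removed / defects_present

def efficiency(effort_person_hours: float, defects_removed: int) -> float:
    """Effort spent per detected and removed defect (PH/defect)."""
    return effort_person_hours / defects_removed

# Hypothetical code-review activity: 120 defects present at its start,
# 60 of them removed, 90 person-hours spent.
print(effectiveness(60, 120))  # 0.5 -> the review removed 50% of the defects
print(efficiency(90, 60))      # 1.5 -> 1.5 person-hours per defect
```

Tracking both numbers per activity makes it possible to compare, say, a review against a test phase on the same footing.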

But how is the number of defects in a piece of software or in a product estimated? We
will outline the approach we follow for up-front estimation of residual defects in any
software that may be merged from various sources with different degrees of stability.
We distinguish between upfront defect estimation, which is static by nature as it looks
only at the different components of the system and their inherent quality before the
start of validation activities, and reliability models which look more dynamically during
validation activities at residual defects and failure rates.

Only a few studies have been published that typically relate static defect estimation to
the number of already detected defects independently of the activity that resulted in
defects, or the famous error seeding which is well known but is rarely used due to the
belief of most software engineers that it is of no use to add errors to software when
there are still far too many defects in it, and when it is known that defect detection
costs several person-hours per defect.

Defects can be easily estimated based on the stability of the underlying software. All
software in a product can be separated into four parts according to its origin:

 Software that is new or changed. This is the standard case where software had
been designed especially for this project, either internally or from a supplier.
 Software reused but to be tested (i.e., reused from another project that was
never integrated and therefore still contains lots of defects; this includes ported
functionality). This holds for reused software with unclear quality status, such as
internal libraries.
 Software reused from another project that is in testing (almost) at the same
time. This software might be partially tested, and therefore the overlapping of
the two test phases of the parallel projects must be accounted for to estimate
remaining defects. This is a specific segment of software in product lines or any
other parallel usage of the same software without having hardened it so far for
field usage.
 Software completely reused from a stable product. This software is considered
stable and therefore has a rather low number of defects. This holds especially
for commercial off-the-shelf software components and open source software
which is used heavily.

The base of the calculation of new or changed software is the list of modules to be used
in the complete project (i.e., the description of the entire build with all its components).
A defect correction in one of these components typically results in a new version, while
a modification in functionality (in the context of the new project) results in a new
variant. Configuration management tools are used to distinguish the one from the other
while still maintaining a single source.

To statically estimate the number of residual defects in software at the time it is
delivered by the author (i.e., after the author has done all verification activities she can
execute herself), we distinguish four different levels of stability of the software that are
treated independently:

f = a × x + b × y + c × z + d × (w – x – y – z)


with
 x: the number of new or changed KStmt designed and to be tested within this
project. This software was specifically designed for that respective project. All
other parts of the software are reused with varying stability.
 y: the number of KStmt that are reused but are unstable and not yet tested
(based on functionality that was designed in a previous project or release, but
was never externally delivered; this includes ported functionality from other
projects).
 z: the number of KStmt that are tested in parallel in another project. This
software is new or changed for the other project and is entirely reused in the
project under consideration.
 w: the number of KStmt in the total software, i.e., the size of this product in its
totality.

The factors a-d relate defects in software to size. They depend heavily on the
development environment, project size, maintainability degree and so on. Our
starting point for this initial estimation is actually driven by psychology. Any person
makes roughly one (non-editorial) defect in ten written lines of work. This applies
to code as well as a design document or e-mail, as was observed by the personal
software process (PSP) and many other sources [1,16,17]. The estimation of
remaining defects is language independent because defects are introduced per
thinking and editing activity of the programmer, i.e., visible by written statements.

This translates into 100 defects per KStmt. Half of these defects are found by careful
checking by the author, which leaves some 50 defects per KStmt delivered at code
completion. Training, maturity and coding tools can further reduce the number
substantially. We found some 10-50 defects per KStmt depending on the maturity
level of the respective organization. This is based only on new or changed code, not
including any code that is reused or automatically generated.
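Spelled out as a quick calculation, using the figures from the paragraph above:

```python
lines_per_defect = 10                        # roughly one defect per ten written lines
defects_per_kstmt = 1000 / lines_per_defect  # 1 KStmt = 1000 statements -> 100 defects
author_detection_rate = 0.5                  # half are found by the author's own checking

remaining_per_kstmt = defects_per_kstmt * (1 - author_detection_rate)
print(remaining_per_kstmt)  # 50.0 defects per KStmt delivered at code completion
```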

Most of these original defects are detected by the author before the respective work
product is released. Depending on the underlying individual software process, 40-80%
of these defects are removed by the author immediately. We have experienced in
software that around 10-50 defects per KStmt remain. For the following calculation we
will assume that 30 defects/KStmt are remaining (which is a common value [18]).
Thus, the following factors can be used:

 a: 30 defects per KStmt (depending on the engineering methods; should be based
on own data)
 b: 30 × 60% defects per KStmt, if defect detection before the start of testing is
60%


 c: 30 × 60% × (overlapping degree) × 25% defects per KStmt (depending on
overlapping degree of resources)
 d: 30 × 0.1–1% defects per KStmt depending on the number of defects remaining
in a product at the time when it is reused
 The percentages are, of course, related to the specific defect detection
distribution in one’s own historical database (Fig. 3). A careful investigation of
stability of reused software is necessary to better substantiate the assumed
percentages.
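Putting the formula and these factors together, a rough sketch of the static estimate might look as follows. The overlap degree (50%) and the reuse defect rate (0.5%) are assumed values inside the ranges stated above; real factors must be calibrated against one's own historical data:

```python
def residual_defects(x, y, z, w,
                     a=30.0,                   # defects per KStmt of new/changed code
                     detection_before_test=0.60,
                     overlap=0.50,             # assumed overlap of the parallel test phases
                     reuse_defect_rate=0.005): # assumed 0.5%, inside the 0.1-1% range
    """Static estimate f = a*x + b*y + c*z + d*(w - x - y - z).

    x, y, z, w are sizes in KStmt as defined in the text. The factor values
    are illustrative defaults, not calibrated constants."""
    b = a * detection_before_test                   # reused but untested software
    c = a * detection_before_test * overlap * 0.25  # software tested in parallel
    d = a * reuse_defect_rate                       # stable, completely reused software
    return a * x + b * y + c * z + d * (w - x - y - z)

# Hypothetical build: 50 KStmt new, 10 KStmt reused untested,
# 20 KStmt tested in parallel, 200 KStmt in total.
print(residual_defects(x=50, y=10, z=20, w=200))  # 1743.0 estimated residual defects
```

Note how the estimate is dominated by the new or changed code (a × x), which matches the observation that most residual defects are inserted during the coding activity.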

Fig. 3: Typical benchmark effects of detecting defects earlier in the life-cycle

Since defects can never be entirely avoided, different quality control techniques are
used in combination for detecting defects during the product life-cycle. They are listed
in sequence when they are applied throughout the development phase, starting with
requirements and ending with system test:

 Requirements (specification) reviews
 Design (document) reviews
 Compile level checks with tools
 Static code analysis with automatic tools
 Manual code reviews and inspections with checklists based on typical defect
situations or critical areas in the software
 Enhanced reviews and testing of critical areas (in terms of complexity, former
failures, expected defect density, individual change history, customer’s risk and
occurrence probability)
 Unit testing
 Focused testing by tracking the effort spent for analyses, reviews, and
inspections and separating according to requirements to find out areas not
sufficiently covered
 Systematic testing by using test coverage measurements (e.g., C0 and C1
coverage) and improvement
 Operational testing by dynamic execution already during integration testing
 Automatic regression testing of any redelivered code
 System testing by applying operational profiles and usage specifications.

We will further focus on several selected approaches that are applied for improved
defect detection before starting with integration and system test because those
techniques are most cost-effective.

Note that the starting point for effectively reducing defects and improving reliability is
to track all defects that are detected. Defects must be recorded for each defect
detection activity. Counting defects and deriving the reliability (that is failures over
time) is the most widely applied and accepted method used to determine software
quality. Counting defects during the complete project helps to estimate the duration of
distinct activities (e.g., unit testing or subsystem testing) and improves the underlying
processes. Failures reported during system testing or field application must be traced
back to their primary causes and specific defects in the design (e.g., design decisions or
lack of design reviews).

Quality improvement activities must be driven by a careful look into what they mean
for the bottom line of the overall product cost. It means to continuously investigate
what this best level of quality really means, both for the customers and for the
engineering teams who want to deliver it.

One does not build a sustainable customer relationship by delivering bad quality and
ruining one’s reputation just to achieve a specific delivery date. And it is useless to spend
an extra amount on improving quality to a level nobody wants to pay for. The optimum
seemingly is in between. It means to achieve the right level of quality and to deliver in
time. Most important yet is to know from the beginning of the project what is actually
relevant for the customer or market and set up the project accordingly. Objectives will
be met if they are driven from the beginning.
We look primarily at factors such as cost of non-quality to follow through this business
reasoning of quality improvements. For this purpose we measure all cost related to
error detection and removal (i.e., cost of non-quality) and normalize by the size of the
product (i.e., normalize defect costs). We take a conservative approach in only
considering those effects that appear inside our engineering activities, i.e., not
considering opportunistic effects or any penalties for delivering insufficient quality.
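A minimal sketch of this normalization (the effort and size figures are hypothetical):

```python
def cost_of_non_quality_per_kstmt(detection_effort_ph, removal_effort_ph, size_kstmt):
    """All effort related to error detection and removal (cost of non-quality),
    normalized by product size. Opportunistic effects and external penalties are
    deliberately left out, matching the conservative approach described in the text."""
    return (detection_effort_ph + removal_effort_ph) / size_kstmt

# Hypothetical release: 400 PH spent detecting defects, 600 PH removing them,
# in a 50 KStmt product.
print(cost_of_non_quality_per_kstmt(400, 600, 50))  # 20.0 PH of non-quality per KStmt
```

Normalizing by size makes releases of different sizes comparable, so quality improvements can be tracked over time.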

The most cost-effective techniques for defect detection are requirements reviews. For
code, reviews, inspections and unit test are the most cost-effective techniques aside
from static code analysis. Detecting defects in architecture and design documents has
considerable benefit from a cost perspective, because these defects are expensive to
correct at later stages. Assuming good quality specifications, major yields in terms of
reliability, however, can be attributed to better code, for the simple reason that there
are many more defects residing in code that were inserted during the coding activity.
We therefore provide more depth on techniques that help to improve the quality of
code, namely code reviews (i.e., code reviews and formal code inspections) and unit
test (which might include static and dynamic code analysis).

There are six possible paths for combining manual defect detection techniques in the
delivery of a piece of software from code complete until the start of integration test
(Fig. 4). The paths indicate the permutations of performing code reviews alone,
performing code inspections, and applying unit tests. Each path indicated by the arrows
shows which activities are performed on a piece of code. An arrow crossing a box means
that the activity is not applied. The defect detection effectiveness of a code inspection is
much higher than that of a code review. Unit tests find different types of defects than
reviews. However, cost also varies depending on which technique is used, which
explains why these different permutations are used. In our experience, code reviews are
the cheapest detection technique (at ca. 1-2 PH/defect), while manual unit testing is
the most expensive (at ca. 1-5 PH/defect, depending on the degree of automation). Code
inspections lie somewhere in between. Although the best approach from a mere defect
detection perspective is to apply both inspections and unit tests, cost considerations and
the objective to reduce elapsed time and thus improve throughput suggest carefully
evaluating which path to follow in order to most efficiently and effectively detect and
remove defects.
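The trade-off among these six paths can be made concrete with a small model. The effort-per-defect figures follow the ranges quoted above; the effectiveness fractions are illustrative assumptions, since the text only states that inspections find more defects than reviews:

```python
from itertools import product

# Illustrative assumptions: fraction of residual defects each technique
# finds, and person-hours (PH) spent per defect found. Only the relative
# ordering (inspection > review; unit test costlier) follows the text.
TECHNIQUES = {
    "code review":     {"effectiveness": 0.25, "ph_per_defect": 1.5},
    "code inspection": {"effectiveness": 0.45, "ph_per_defect": 2.5},
    "unit test":       {"effectiveness": 0.35, "ph_per_defect": 3.0},
}

def evaluate_path(path, defects=100.0):
    """Apply the techniques in order; return (defects found, effort in PH)."""
    remaining, effort = defects, 0.0
    for name in path:
        t = TECHNIQUES[name]
        found = remaining * t["effectiveness"]
        effort += found * t["ph_per_defect"]
        remaining -= found
    return defects - remaining, effort

# Six paths: {nothing, review, inspection} x {without, with unit test}.
paths = [[step for step in (static, unit) if step]
         for static, unit in product((None, "code review", "code inspection"),
                                     (None, "unit test"))]

for path in paths:
    found, effort = evaluate_path(path)
    label = " + ".join(path) if path else "(no manual detection)"
    print(f"{label:30s} finds {found:5.1f} defects for {effort:6.1f} PH")
```

Plugging in real historical figures for effectiveness and PH/defect turns this into a simple decision aid for choosing a path per module.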
Fig. 4: Six possible paths for modules between end of coding and start of integration
test (figure not reproduced: each module in the set passes through code reviews and/or
formal code inspections and/or unit test before integration test)

Unit tests, however, combined with C0 coverage targets, have the highest effectiveness
for regression testing of existing functionality. Inspections, on the other hand, help in
detecting distinct defect classes that would otherwise be found only under real load (or
even stress) in the field.


Defects are not distributed homogeneously through new or changed code. An analysis
of many projects revealed the applicability of the Pareto rule: 20-30% of the modules
are responsible for 70-80% of the defects of the whole project. These critical
components need to be identified as early as possible, i.e., in the case of legacy systems
at the start of detailed design, and for new software during coding. By concentrating on
these components, the effectiveness of code inspections and unit testing is increased
and fewer defects have to be found during test phases; both effectiveness and efficiency
are improved. Our main approach to identifying defect-prone software modules is a
criticality prediction taking into account several criteria. One criterion is the analysis of
module complexity based on complexity measurements. Other criteria concern the
amount of new or changed code in a module, and the number of field defects a module
had in the preceding project. Code inspections are first applied to heavily changed
modules, in order to optimize the payback of the additional effort that has to be spent
compared to the lower effort for code reading. Formal code reviews are recommended
even for very small changes with a checking time shorter than two hours, in order to
profit from the good efficiency of code reading. The effort for know-how transfer to
another designer can be saved.

It is of great benefit for improved quality management to be able to predict, early on in
the development process, those components of a software system that are likely to
have a high defect rate or to require additional development effort. Criticality
prediction is based on selecting a distinct small share of modules that incorporate sets
of properties that would typically cause defects to be introduced during design more
often than in modules that do not possess such attributes. Criticality prediction is thus
a technique for risk analysis during the design process.

Criticality prediction addresses typical questions often asked in software engineering
projects:
 How can I identify early the relatively small number of critical components that
make a significant contribution to defects identified later in the life-cycle?
 Which modules should be redesigned because their maintainability is bad and
their overall criticality to the project’s success is high?
 Are there structural properties that can be measured early in the code to
predict quality attributes?
 If so, what is the benefit of introducing a measurement program that
investigates structural properties of software?
 Can I use the often heuristic design and test know-how on trouble identification
and risk assessment to build up a knowledge base to identify critical
components early in the development process?

Criticality prediction is a multifaceted approach taking into account several criteria.


Complexity is a key influence on quality and productivity. Having uncontrolled
accidental complexity in the product will definitely decrease productivity (e.g., gold
plating, additional rework, more test effort) and quality (more defects). A key to
keeping accidental complexity from creeping into the project is the measurement and
analysis of complexity throughout the life-cycle. Volume, structure, order, and the
connections of different objects all contribute to complexity. However, do they all
account for it equally? The clear answer is no, because different people with different
skills assign complexity subjectively, according to their experience in the area. Certainly,
criticality must be predicted early in the life-cycle to effectively serve as a managerial
instrument for quality improvement, quality control, effort estimation, and resource
planning as soon as possible in a project. Tracing comparable complexity metrics for
different products throughout the life-cycle is advisable to find out when essential
complexity is overruled by accidental complexity. Care must be taken that the
complexity metrics are comparable, that is, they should measure the same factors of
complexity.

Having identified such overly critical modules, risk management must be applied. The
most critical and most complex modules, for instance the top 5% of the analyzed
modules, are candidates for a redesign. For cost reasons, mitigation is not achieved with
redesign alone: the top 20% should receive a code inspection instead of the usual code
reading, and the top 80% should at least be entirely unit tested (C0 coverage of 100%).
By concentrating on these components, the effectiveness of code inspections and unit
testing is increased and fewer defects have to be found during test phases. To obtain
feedback for improving predictions, the approach is integrated into the development
process end-to-end (requirements, design, code, system test, deployment).
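The tiered policy above maps directly to code. A minimal sketch, assuming the input is simply a list of module names already ranked most-critical first by the criticality prediction:

```python
def triage(ranked_modules):
    """Assign a QA action to each module of a list ranked most-critical first:
    top 5% are redesign candidates, the top 20% get a code inspection, the
    top 80% get full unit testing (100% C0 coverage), the rest code reading."""
    n = len(ranked_modules)
    actions = {}
    for i, module in enumerate(ranked_modules):
        rank = (i + 1) / n  # percentile position, most critical first
        if rank <= 0.05:
            actions[module] = "redesign candidate"
        elif rank <= 0.20:
            actions[module] = "code inspection"
        elif rank <= 0.80:
            actions[module] = "unit test to 100% C0 coverage"
        else:
            actions[module] = "code reading"
    return actions

actions = triage([f"module_{i:03d}" for i in range(100)])
print(actions["module_000"], "|", actions["module_010"], "|", actions["module_090"])
```

The thresholds are the ones stated in the text; in practice they would be tuned to the organization's own defect history.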

It must be emphasized that using criticality prediction techniques does not mean
attempting to detect all defects. Instead, they belong to the set of managerial
instruments that try to optimize resource allocation by focusing it on areas with
many defects that would affect the utility of the delivered product. The trade-off of
applying complexity-based predictive quality models is estimated based on the
following effects:

 limited resources are assigned to high-risk jobs or components
 impact analysis and risk assessment of changes is feasible based on affected or
changed complexity
 gray-box testing strategies are applied to identified high-risk components
 fewer customer-reported failures

Our experience shows that, in accordance with other literature, correction of defects in
early phases is more efficient, because the designer is still familiar with the problem
and the correction delay during testing is reduced.

The effect and business case for applying complexity-based criticality prediction to a
new project can be summarized based on results from our own experience database
(taking a very conservative ratio of only 40% defects in critical components):

 20% of all modules in the project were predicted as most critical (after coding);
 these modules contained over 40% of all defects (up to release time).

Knowing from these and many other projects that 60% of all defects can theoretically
be detected until the end of unit test, and that defect correction during unit test and
code reading costs less than 10% compared to defect correction during system test, it
can be calculated that 24% of all defects can be detected early by investigating 20% of
all modules more intensively, with 10% of the effort of late defect correction during
test, therefore yielding a 20% total cost reduction for defect correction. Additional
costs for providing the statistical analysis are in the range of two person-days per
project. The necessary tools are off the shelf and account for even less per project.
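The arithmetic behind this business case can be checked in a few lines, using only the percentages quoted above:

```python
# Figures quoted in the text above.
defects_in_critical_modules = 0.40  # share of all defects in the predicted 20% of modules
detectable_by_unit_test = 0.60      # share theoretically detectable until end of unit test
early_fix_cost_ratio = 0.10         # early correction cost vs. correction during system test

# Share of all defects that can be detected early by focusing on the 20%:
caught_early = defects_in_critical_modules * detectable_by_unit_test
print(f"defects detected early: {caught_early:.0%}")

# Each early catch costs only 10% of a late fix, so the saving per caught
# defect is 90% of its late-fix cost; relative to fixing everything late:
total_saving = caught_early * (1 - early_fix_cost_ratio)
print(f"defect-correction cost reduction: about {total_saving:.0%}")
```

This reproduces the 24% early-detection figure and a saving of roughly the 20% quoted in the text.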

References and Supplementary Materials


Books and Journals
1. Information Technology Project Management: Kathy Schwalbe Thomson
Publication.
2. Information Technology Project Management providing measurable
organizational value Jack Marchewka Wiley India.
3. Applied software project management Stellman & Greene SPD.
4. Software Engineering Project Management by Richard Thayer, Edward Yourdon
WILEY INDIA.

Online Supplementary Reading Materials

Online Instructional Videos

PROJECT AND PROCESS METRICS

Lecture Module Week 7
Process Metrics
 They are quantitative measures that enable you to gain an insight into the
efficiency of the software process.
 Basic quality and productivity data are collected.
 The data is analyzed, compared against past averages, and assessed to determine
whether productivity and quality have increased.
 Metrics are used to:
 assess the status of an ongoing project
 track potential risks
 uncover problem areas before they go “critical”
 adjust work flow or tasks
 evaluate the project team’s ability to control the quality of software work
products
A Good Manager Measures
(slide figure not reproduced)
Process Metrics
 Measure the process to help update and change the process as needed across
many projects. They are collected across all projects over long periods of time.
 We measure the efficacy of a software process indirectly.
 That is, we derive a set of metrics based on the outcomes of the process.
 Outcomes include:
 measures of errors uncovered before release of the software
 defects delivered to and reported by end-users
 work products delivered (productivity)
 human effort expended
 calendar time expended
Project Metrics
 Measure specific aspects of a single project to improve the decisions made on
that project.
 It helps to:
 assess the status of the ongoing project
 track risks
 find problem areas that can go critical
 adjust work flow
 evaluate the team’s ability to control the work
 Typical project Metrics are:
 Effort/time per software engineering task
 Errors uncovered per review hour
 Scheduled vs. actual milestone dates
 Changes (number) and their characteristics
 Distribution of effort on software engineering tasks

 Typical Size-Oriented Metrics:
 errors per KLOC (thousand lines of code)
 defects per KLOC
 $ per LOC
 pages of documentation per KLOC
 errors per person-month
 errors per review hour
 LOC per person-month
 $ per page of documentation
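Once the raw measures are collected, these ratios are one-liners. The project figures below are invented purely for illustration:

```python
# Hypothetical raw measures for one project (illustrative values only).
project = {
    "loc": 12_500,       # delivered lines of code
    "errors": 134,       # errors found before release
    "defects": 29,       # defects reported after release
    "cost": 168_000,     # total cost in $
    "doc_pages": 365,    # pages of documentation
    "effort_pm": 24,     # person-months expended
}

kloc = project["loc"] / 1000
metrics = {
    "errors per KLOC": project["errors"] / kloc,
    "defects per KLOC": project["defects"] / kloc,
    "$ per LOC": project["cost"] / project["loc"],
    "pages of documentation per KLOC": project["doc_pages"] / kloc,
    "errors per person-month": project["errors"] / project["effort_pm"],
    "LOC per person-month": project["loc"] / project["effort_pm"],
}
for name, value in metrics.items():
    print(f"{name}: {value:.2f}")
```

The value of such metrics comes from comparing them against past project averages, as described above.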
 Typical Function-Oriented Metrics:
 errors per Function Point (FP)
 defects per FP
 $ per FP
 pages of documentation per FP
 FP per person-month
Function Point
 Function points (FP) are a unit measure for software size, developed at IBM in
1979 by Allan Albrecht.
 To determine your number of FPs, you classify a system into five classes:
 Transactions - External Inputs, External Outputs, External Inquiries
 Data storage - Internal Logical Files and External Interface Files
 Each class is then weighted by complexity as low/average/high.
 The weighted total is multiplied by a value adjustment factor (determined by
asking questions based on 14 system characteristics).
Project Scheduling and Tracking
Why Are Projects Late?
 an unrealistic deadline established by someone outside the
software development group
 changing customer requirements that are not reflected in
schedule changes;
 an honest underestimate of the amount of effort and/or the
number of resources that will be required to do the job;
 predictable and/or unpredictable risks that were not considered
when the project commenced;
 technical difficulties that could not have been foreseen in
advance;
 human difficulties that could not have been foreseen in
advance;
 miscommunication among project staff that results in delays;
Scheduling Principles
 compartmentalization—define distinct tasks
 interdependency—indicate task interrelationships
 effort validation—be sure resources are available
 defined responsibilities—people must be assigned
 defined outcomes—each task must have an output
 defined milestones—review for quality
PERT/CPM
 PERT
 Program Evaluation and Review Technique
 Developed to handle uncertain activity times
 CPM
 Critical Path Method
 Developed for industrial projects for which activity times generally were known
 Today’s project management software packages have combined the best features
of both approaches.
PERT/CPM
PERT and CPM have been used to plan, schedule, and control a wide variety of projects:
 R&D of new products and processes
 Construction of buildings and highways
 Maintenance of large and complex equipment
 Design and installation of new systems
PERT/CPM
 Project managers rely on PERT/CPM to help them answer questions such as:
 What is the total time to complete the project?
 What are the scheduled start and finish dates for each specific activity?
 Which activities are critical and must be completed exactly as scheduled to
keep the project on schedule?
 How long can noncritical activities be delayed before they cause an increase
in the project completion time?
CPM - Project Network
 A project network can be constructed to model the precedence of the activities.
 The nodes of the network represent the activities.
 The arcs of the network reflect the precedence relationships of the activities.
 A critical path for the network is a path consisting of activities with zero slack.
Example: Frank’s Fine Floats

Activity   Description           Immediate Predecessors   Completion Time (days)
A          Initial Paperwork     ---                      10
B          Build Body            A                        20
C          Finish Body           B                        5
D          Build Frame           C                        10
F          Finish Paperwork      A                        15
H          Final Paperwork       A                        15
G          Mount Body to Frame   C, F                     5
E          Complete the tasks    D, G, H                  20
Example: Frank’s Fine Floats - Project Network
(network diagram not reproduced)
Draw the network diagram and determine the critical path.

Code   Activity                 Predecessors   Time (t)
A      Gather data              None           4
B      Analyze the problem      None           4
C      Identify activities      None           4
D      Identify dependencies    A              6
E      Estimate resources       A              8
F      Create project charts    B              14
G      Allocate people          B              12
H      Distribute task          D, E           8
I      Program Coding           C              20
J      Program Debugging        G, I           6
K      Project Implementation   F, H, J        8
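One way to check your answer to this exercise is to compute each activity's earliest finish time and walk back along the longest chain. A sketch using the table's data:

```python
# Durations and dependencies transcribed from the exercise table.
durations = {"A": 4, "B": 4, "C": 4, "D": 6, "E": 8, "F": 14,
             "G": 12, "H": 8, "I": 20, "J": 6, "K": 8}
preds = {"A": [], "B": [], "C": [], "D": ["A"], "E": ["A"], "F": ["B"],
         "G": ["B"], "H": ["D", "E"], "I": ["C"], "J": ["G", "I"],
         "K": ["F", "H", "J"]}

def critical_path(durations, preds):
    """Return (path, length) of the longest path through the activity network."""
    memo = {}

    def earliest_finish(a):
        if a not in memo:
            start = max((earliest_finish(p) for p in preds[a]), default=0)
            memo[a] = start + durations[a]
        return memo[a]

    end = max(durations, key=earliest_finish)  # activity finishing last
    path = [end]
    while preds[path[0]]:
        # step back to the predecessor that finishes last (zero slack)
        path.insert(0, max(preds[path[0]], key=earliest_finish))
    return path, memo[end]

path, length = critical_path(durations, preds)
# prints: C -> I -> J -> K (38 days)
print(" -> ".join(path), f"({length} days)")
```

The critical path C-I-J-K gives a project duration of 4 + 20 + 6 + 8 = 38; any delay on those four activities delays the whole project.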
DECISION TREE
Considerations in Decision Making
What are the action choices?
What are the pros and cons of each possible action?
What are the consequences of each choice?
What is the probability of each consequence?
What is the relative importance of each possibility?
What is the identification of the best course of action?
The Make-Buy Decision
The make-buy decision is an important concern these days.
Customers will not have a good feel for when an application may be bought off the
shelf and when it needs to be developed.
The software engineer needs to perform a cost-benefit analysis in order to give the
customer a realistic picture of the true costs of the proposed development options.
The use of a decision tree is a reasonable way to organize this information.
Decision Trees
 excellent tools for helping you to choose between
several courses of action
 provide a highly effective structure within which
you can lay out options and investigate the
possible outcomes of choosing those options
 help you to form a balanced picture of the risks
and rewards associated with each possible
course of action.
Decision Tree Symbols
Box - represents a decision node. Lines from the box denote the decision alternatives
(one line per decision alternative).

Alternative Name 1
Alternative Name 2
Decision Tree Symbols
Circle - represents a chance node. Lines from the circle denote the events that could
occur at the chance node. The name of the chance-driven event goes above the line.
The probability of the event goes below the line.

Event 1 (probability P1)
Event 2 (probability P2)
Decision Tree Symbols
Horizontal rectangle - represents a terminal node. A terminal node represents an
outcome state, so there are no events that occur at a terminal node. The value of the
outcome appears in the rectangle.

Outcome 1
Outcome 2
Creating a Decision Tree
 Start with a decision that you need to make. Draw a small square to represent
this towards the left of a large piece of paper.
 From this box draw out lines towards the right for each possible solution, and
write that solution along the line. Keep the lines as far apart as possible so that
you can expand your thoughts.
 At the end of each line, consider the results. If the result of taking that decision
is uncertain, draw a small circle. If the result is another decision that you need
to make, draw another square.
 Starting from the new decision squares on your diagram, draw out lines
representing the options that you could select. From the circles draw lines
representing possible outcomes. Keep on doing this until you have drawn out as
many of the possible outcomes and decisions as you can see.
(figure: a generic decision tree; the decision node branches into Choice 1 and Choice 2,
each chance node branches with probability P into Outcomes 1a/1b and 2a/2b, ending
in terminal nodes TN 1 to TN 4)
Sample Problem 1
 Develop a decision tree for a software-based system, X. The SE organization can
build the system from scratch or buy an available software product and modify
it to meet local needs.
 If the system is to be built from scratch, there is a 70% probability that the job
will be difficult. Using estimation techniques, the project planner projects that a
difficult development effort will cost 450,000 BHD, while a simple development
effort is estimated to cost 380,000 BHD.
 If the organization buys an available product, the probability that only minor
changes will need to be addressed is 30%, and this is estimated to cost
210,000 BHD. On the other hand, addressing major changes is estimated to cost
400,000 BHD.
Decision tree for System X:
Build: Simple (0.30) = 380,000 BHD; Difficult (0.70) = 450,000 BHD
Buy: Minor Changes (0.30) = 210,000 BHD; Major Changes (0.70) = 400,000 BHD
Evaluating Decision Trees
 Start by assigning a cash value or score to each
possible outcome. Estimate how much you think
it would be worth to you if that outcome came
about.
 Next look at each circle (representing an
uncertainty point) and estimate the probability of
each outcome. If you use percentages, the total
must come to 100% at each circle.
Calculating Decision Tree Values
 Start on the right-hand side of the decision tree, and work back towards the left.
 Where you are calculating the value of uncertain outcomes (circles on the
diagram), do this by multiplying the value of each outcome by its probability.
The total for that node of the tree is the sum of these values.
Decision tree for System X with calculated values:
Build: 0.30 × 380,000 + 0.70 × 450,000 = 429,000 BHD
Buy: 0.30 × 210,000 + 0.70 × 400,000 = 343,000 BHD
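The same roll-back calculation, in code, with the probabilities and cost estimates from the example:

```python
# Decision alternatives for System X: lists of (probability, cost in BHD).
alternatives = {
    "build": [(0.30, 380_000),   # simple development effort
              (0.70, 450_000)],  # difficult development effort
    "buy":   [(0.30, 210_000),   # minor changes to the bought product
              (0.70, 400_000)],  # major changes to the bought product
}

def expected_value(outcomes):
    """Value of a chance node: sum of probability times outcome value."""
    return sum(p * v for p, v in outcomes)

emv = {name: expected_value(outcomes) for name, outcomes in alternatives.items()}
for name, value in emv.items():
    print(f"{name}: {value:,.0f} BHD")

# These values are expected costs, so the better alternative is the lower one.
best = min(emv, key=emv.get)
print("lower expected cost:", best)
```

Since 343,000 BHD is lower than 429,000 BHD, a cost-minimizing decision maker would buy and modify the available product.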
Sample Problem # 2
 A company is trying to determine whether or not to drill an oil well. If it decides
not to drill the well, no money will be made or lost. Therefore, the value of the
decision not to drill can immediately be assigned a sum of zero Bahraini dinars.
 If the decision is to drill, there are several potential outcomes: in method A, a 10
percent chance of getting 300,000 BHD in profits from the oil; in method B, a 20
percent chance of extracting 200,000 BHD in profits; in method C, a 10 percent
chance of wresting 100,000 BHD in profits from the well; and in method D, a 60
percent chance that the well will be dry and post a loss of 100,000 BHD in
drilling costs.
 Show the decision tree for this data.
Point to Note:
 For the purposes of demonstration, suppose that the chance of hitting no oil was
increased from 60 percent to 70 percent, and the chance of gleaning 300,000
BHD in profits was reduced from ten percent to zero. In that case, the expected
value of the decision to drill would fall to -20,000 BHD. A profit-maximizing
decision maker would then elect not to drill the well. The effect of this relatively
small change in the probability calculation underscores decision trees'
dependence on accurate information, which often may not be available.
Project Planning
 When: the need for software has already been established; stakeholders are on
board; coding is ready to begin
 What: project planning spans five major activities—estimation, scheduling, risk
analysis, quality management planning, and change management planning
 Who: software project managers, with information from stakeholders and
engineers
Estimation
 Planning requires estimation early on, even though it is likely this
“commitment” will be proven wrong
 Some degree of uncertainty is unavoidable when predicting into the future
 Solid techniques and concrete procedures help reduce the inaccuracy of
estimates
Problem-based Estimation
 LOC - Lines of Code
 Function Point (FP)

Process-based Estimation
 COCOMO
Steps in Estimation
 Start with a description of the scope of the product
 Decompose into a set of smaller problems
 Estimate the time and cost of each
 Estimate other resources (hardware, software, etc.) required
Estimation requires…
 experience
 access to good historical information
 courage to commit to quantitative predictions
Reliability of estimates is affected by…
 Project complexity
 Project size
 Degree of structural uncertainty
 Availability of historical information
LOC - Lines of Code - Estimation Methods

Cost per LOC = burdened rate / LOC produced per PM
Project Cost = Cost per LOC * LOC(e)
Project Effort = LOC(e) / LOC produced per PM
Example 1:
Assuming that your organization produces 620 LOC/PM with a burdened labor rate of
8,000 BD/PM, estimate the cost and the effort required to build the software using the
LOC-based estimation technique. LOC(e) = 33,200.

Solution - LOC:
Cost per LOC = 8,000 / 620 ≈ 12.90 BD
Project Effort = 33,200 / 620 ≈ 53.5 PM
Project Cost = (8,000 / 620) × 33,200 ≈ 428,387 BD
Example 2:
Assuming that your organization produces 500 LOC/PM with a burdened labor rate of
10,000 BD/PM, estimate the cost and the effort required to build the software using
the LOC-based estimation technique. Assume total LOC(e) = 30,000.
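Both examples use the same two formulas, so a small helper covers them; the figures below are taken from Examples 1 and 2:

```python
def loc_estimate(loc_e, loc_per_pm, burdened_rate):
    """LOC-based estimation: return (project cost, project effort in PM)."""
    cost_per_loc = burdened_rate / loc_per_pm
    effort_pm = loc_e / loc_per_pm
    cost = cost_per_loc * loc_e   # equivalently effort_pm * burdened_rate
    return cost, effort_pm

examples = {
    "Example 1": (33_200, 620, 8_000),    # LOC(e), LOC/PM, BD/PM
    "Example 2": (30_000, 500, 10_000),
}
for name, (loc_e, productivity, rate) in examples.items():
    cost, effort = loc_estimate(loc_e, productivity, rate)
    print(f"{name}: cost = {cost:,.0f} BD, effort = {effort:.1f} PM")
```

For Example 2 this gives 20 BD per LOC, so 600,000 BD of cost and 60 PM of effort.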
Function Point
 Uses a measure of functionality
 Derived directly from 5 information domain characteristics:
 - User Inputs
 - User Outputs
 - User Inquiries
 - Files
 - External Interfaces
Information Domain Value    Count   Simple   Average   Complex   FP Count
External Inputs             15      3        4         6
External Outputs            10      4        5         7
External Inquiries          5       3        4         6
Internal Logical Files      7       7        10        15
External Interface Files    11      5        7         10
TOTAL
Factor                                     Value
Backup and Recovery                        5
Data communications                        4
Distributed processing                     4
Performance critical                       4
Existing operating environment             4
Online data entry                          4
Input transaction over multiple screens    3
ILFs updated online                        3
Information Domain Values Complex          4
Internal Processing Complex                3
Code designed for Reuse                    4
Conversion/installation in design          3
Multiple installations                     3
Application designed for change            3
 Past projects show a burdened labor rate of 9,000 BD/PM
 Average productivity is 8 FP/PM
 Assume simple complexity values

FPE = count_total * (0.65 + 0.01 * ∑(Fi))
Project Effort = FPE / average productivity in FP
Project Cost = Project Effort * burdened rate
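Combining the information domain counts, the simple weights, and the 14 adjustment factors from the tables above:

```python
# Information domain values: (count, simple complexity weight), from the table.
domain = {
    "External Inputs": (15, 3),
    "External Outputs": (10, 4),
    "External Inquiries": (5, 3),
    "Internal Logical Files": (7, 7),
    "External Interface Files": (11, 5),
}
# The 14 value adjustment factors listed above (they sum to 51).
adjustment_factors = [5, 4, 4, 4, 4, 4, 3, 3, 4, 3, 4, 3, 3, 3]

count_total = sum(count * weight for count, weight in domain.values())
fpe = count_total * (0.65 + 0.01 * sum(adjustment_factors))

productivity = 8       # FP per person-month
burdened_rate = 9_000  # BD per person-month
effort_pm = fpe / productivity
cost = effort_pm * burdened_rate

print(f"count total = {count_total}, FPE = {fpe:.2f}")
print(f"effort = {effort_pm:.2f} PM, cost = {cost:,.0f} BD")
```

With these inputs, count_total is 204, FPE is 236.64, effort comes to about 29.6 PM, and cost to about 266,220 BD.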


Information Domain Value    Count   Simple   Average   Complex   FP Count
External Inputs                     3        4         6
External Outputs                    4        5         7
External Inquiries                  3        4         6
Internal Logical Files              7        10        15
External Interface Files            5        7         10
TOTAL

Past projects show a burdened labor rate of P8,000/PM
Average productivity is 9 FP/PM
Assume average complexity values
The total adjustment value is estimated to be 49
COCOMO II
Uses object points, based on counts of the number of
- screens
- reports
- components

Object Type      Complexity Weight
                 Simple   Medium   Difficult
Screen           1        2        3
Report           2        5        8
3GL Component    -        -        10

Developer’s experience                         Very Low   Low   Nominal   High   Very High
Development environment maturity/capability    Very Low   Low   Nominal   High   Very High
PROD                                           4          7     13        25     50
Object Points = S * CW + R * CW + C * CW
(S, R, C = number of screens, reports, and components; CW = the corresponding
complexity weight)
NOP = Object Points * ((100 - %reuse) / 100)

Estimated Effort = NOP / PROD
Estimated Cost = Estimated Effort * Burdened Rate
Seat Work

Use the COCOMO II model to estimate the effort required to build software for a
simple ATM that produces 12 screens, 10 reports, and will require approximately 80
software components.
Past projects show a burdened labor rate of 9,000 BD/PM.
Assume reuse = 10%.
Assume medium complexity and nominal developer/environment maturity.
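A sketch of the seat-work calculation. Medium complexity weights come from the object-type table; a single weight of 10 is assumed for 3GL components (the standard COCOMO II value), and nominal maturity gives PROD = 13:

```python
WEIGHTS_MEDIUM = {"screen": 2, "report": 5, "component": 10}
PROD = {"very low": 4, "low": 7, "nominal": 13, "high": 25, "very high": 50}

def object_point_estimate(screens, reports, components,
                          reuse_pct, prod, burdened_rate,
                          weights=WEIGHTS_MEDIUM):
    """COCOMO II application-composition estimate from object points."""
    object_points = (screens * weights["screen"]
                     + reports * weights["report"]
                     + components * weights["component"])
    nop = object_points * (100 - reuse_pct) / 100   # new object points
    effort_pm = nop / prod
    return object_points, nop, effort_pm, effort_pm * burdened_rate

op, nop, effort, cost = object_point_estimate(
    screens=12, reports=10, components=80,
    reuse_pct=10, prod=PROD["nominal"], burdened_rate=9_000)
print(f"object points = {op}, NOP = {nop:.1f}")
print(f"effort = {effort:.1f} PM, cost = {cost:,.0f} BD")
```

This gives 874 object points, 786.6 NOP after 10% reuse, roughly 60.5 PM of effort, and a cost of about 544,600 BD.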
CS-6209 Software Engineering 1
Week 7: Software Development Method

Module 006: Software Development Method

Course Learning Outcomes:


1. Understand the concepts of software processes and software process
models;
2. Have been introduced to three generic software process models and
when they might be used;
3. Know about the fundamental process activities of software requirements
engineering, software development, testing, and evolution;
4. Understand why processes should be organized to cope with changes in
the software requirements and design;
5. Understand how the Rational Unified Process integrates good software
engineering practice to create adaptable software processes.

Introduction
A software process is a set of related activities that leads to the production of a
software product. These activities may involve the development of software from
scratch in a standard programming language like Java or C. However, business
applications are not necessarily developed in this way. New business software is
now often developed by extending and modifying existing systems or by configuring
and integrating off-the-shelf software or system components.

There are many different software processes but all must include four activities that
are fundamental to software engineering:
 Software specification. The functionality of the software and constraints on
its operation must be defined.
 Software design and implementation. The software to meet the
specification must be produced.
 Software validation. The software must be validated to ensure that it does
what the customer wants.
 Software evolution. The software must evolve to meet changing customer
needs.

In some form, these activities are part of all software processes. In practice, of
course, they are complex activities in themselves and include sub-activities such as
requirements validation, architectural design, unit testing, etc. There are also
supporting process activities such as documentation and software configuration
management.

When we describe and discuss processes, we usually talk about the activities in
these processes such as specifying a data model, designing a user interface, etc., and
the ordering of these activities. However, as well as activities, process descriptions
may also include:

1. Products, which are the outcomes of a process activity. For example, the
outcome of the activity of architectural design may be a model of the software
architecture.
2. Roles, which reflect the responsibilities of the people involved in the process.
Examples of roles are project manager, configuration manager, programmer,
etc.
3. Pre- and post-conditions, which are statements that are true before and after
a process activity has been enacted or a product produced. For example,
before architectural design begins, a pre-condition may be that all
requirements have been approved by the customer; after this activity is
finished, a post-condition might be that the UML models describing the
architecture have been reviewed.

Software processes are complex and, like all intellectual and creative processes, rely
on people making decisions and judgments. There is no ideal process and most
organizations have developed their own software development processes.
Processes have evolved to take advantage of the capabilities of the people in an
organization and the specific characteristics of the systems that are being
developed. For some systems, such as critical systems, a very structured
development process is required. For business systems, with rapidly changing
requirements, a less formal, flexible process is likely to be more effective.

Sometimes, software processes are categorized as either plan-driven or agile
processes. Plan-driven processes are processes where all of the process activities
are planned in advance and progress is measured against this plan. In agile
processes, which I discuss in Chapter 3, planning is incremental and it is easier to
change the process to reflect changing customer requirements. As Boehm and
Turner (2003) discuss, each approach is suitable for different types of software.
Generally, you need to find a balance between plan-driven and agile processes.

Although there is no ‘ideal’ software process, there is scope for improving the
software process in many organizations. Processes may include outdated
techniques or may not take advantage of the best practice in industrial software
engineering. Indeed, many organizations still do not take advantage of software
engineering methods in their software development.


Software process models


A software process model is a simplified representation of a software process. Each
process model represents a process from a particular perspective, and thus provides
only partial information about that process. For example, a process activity model
shows the activities and their sequence but may not show the roles of the people
involved in these activities. In this section, I introduce a number of very general
process models (sometimes called ‘process paradigms’) and present these from an
architectural perspective. That is, we see the framework of the process but not the
details of specific activities.

These generic models are not definitive descriptions of software processes. Rather,
they are abstractions of the process that can be used to explain different approaches
to software development. You can think of them as process frameworks that may be
extended and adapted to create more specific software engineering processes.

The process models that I cover here are:


1. The waterfall model. This takes the fundamental process activities of
specification, development, validation, and evolution and represents them as
separate process phases such as requirements specification, software design,
implementation, testing, and so on.
2. Incremental development. This approach interleaves the activities of
specification, development, and validation. The system is developed as a
series of versions (increments), with each version adding functionality to the
previous version.
3. Reuse-oriented software engineering. This approach is based on the
existence of a significant number of reusable components. The system
development process focuses on integrating these components into a system
rather than developing them from scratch

These models are not mutually exclusive and are often used together, especially for
large systems development. For large systems, it makes sense to combine some of
the best features of the waterfall and the incremental development models. You
need to have information about the essential system requirements to design a
software architecture to support these requirements. You cannot develop this
incrementally. Sub-systems within a larger system may be developed using different
approaches. Parts of the system that are well understood can be specified and
developed using a waterfall-based process. Parts of the system which are difficult to
specify in advance, such as the user interface, should always be developed using an
incremental approach.


The waterfall model


The first published model of the software development process was derived from
more general system engineering processes (Royce, 1970). This model is illustrated
in Figure 6.1. Because of the cascade from one phase to another, this model is known
as the ‘waterfall model’ or software life cycle. The waterfall model is an example of a
plan-driven process—in principle, you must plan and schedule all of the process
activities before starting work on them.

Figure 6.1 The waterfall model

The principal stages of the waterfall model directly reflect the fundamental
development activities:
1. Requirements analysis and definition. The system’s services, constraints,
and goals are established by consultation with system users. They are then
defined in detail and serve as a system specification.
2. System and software design. The systems design process allocates the
requirements to either hardware or software systems by establishing an
overall system architecture. Software design involves identifying and
describing the fundamental software system abstractions and their
relationships.
3. Implementation and unit testing. During this stage, the software design is
realized as a set of programs or program units. Unit testing involves verifying
that each unit meets its specification.
4. Integration and system testing. The individual program units or programs
are integrated and tested as a complete system to ensure that the software


requirements have been met. After testing, the software system is delivered
to the customer.
5. Operation and maintenance. Normally (although not necessarily), this is the
longest life cycle phase. The system is installed and put into practical use.
Maintenance involves correcting errors which were not discovered in earlier
stages of the life cycle, improving the implementation of system units and
enhancing the system’s services as new requirements are discovered.

In principle, the result of each phase is one or more documents that are approved
(‘signed off’). The following phase should not start until the previous phase has
finished. In practice, these stages overlap and feed information to each other. During
design, problems with requirements are identified. During coding, design problems
are found and so on. The software process is not a simple linear model but involves
feedback from one phase to another. Documents produced in each phase may then
have to be modified to reflect the changes made.

Because of the costs of producing and approving documents, iterations can be costly
and involve significant rework. Therefore, after a small number of iterations, it is
normal to freeze parts of the development, such as the specification, and to continue
with the later development stages. Problems are left for later resolution, ignored, or
programmed around. This premature freezing of requirements may mean that the
system won’t do what the user wants. It may also lead to badly structured systems
as design problems are circumvented by implementation tricks.

During the final life cycle phase (operation and maintenance) the software is put
into use. Errors and omissions in the original software requirements are discovered.
Program and design errors emerge and the need for new functionality is identified.
The system must therefore evolve to remain useful. Making these changes (software
maintenance) may involve repeating previous process stages.

Incremental development
Incremental development is based on the idea of developing an initial
implementation, exposing this to user comment and evolving it through several
versions until an adequate system has been developed (Figure 6.2). Specification,
development, and validation activities are interleaved rather than separate, with
rapid feedback across activities.


Figure 6.2 Incremental development

Incremental software development, which is a fundamental part of agile
approaches, is better than a waterfall approach for most business, e-commerce, and
personal systems. Incremental development reflects the way that we solve
problems. We rarely work out a complete problem solution in advance but move
toward a solution in a series of steps, backtracking when we realize that we have
made a mistake. By developing the software incrementally, it is cheaper and easier
to make changes in the software as it is being developed.

Each increment or version of the system incorporates some of the functionality that
is needed by the customer. Generally, the early increments of the system include the
most important or most urgently required functionality. This means that the
customer can evaluate the system at a relatively early stage in the development to
see if it delivers what is required. If not, then only the current increment has to be
changed and, possibly, new functionality defined for later increments.

Incremental development has three important benefits, compared to the waterfall
model:

1. The cost of accommodating changing customer requirements is reduced. The
amount of analysis and documentation that has to be redone is much less
than is required with the waterfall model.
2. It is easier to get customer feedback on the development work that has been
done. Customers can comment on demonstrations of the software and see


how much has been implemented. Customers find it difficult to judge
progress from software design documents.
3. More rapid delivery and deployment of useful software to the customer is
possible, even if all of the functionality has not been included. Customers are
able to use and gain value from the software earlier than is possible with a
waterfall process.

Incremental development in some form is now the most common approach for the
development of application systems. This approach can be either plan-driven, agile,
or, more usually, a mixture of these approaches. In a plan-driven approach, the
system increments are identified in advance; if an agile approach is adopted, the
early increments are identified but the development of later increments depends on
progress and customer priorities.

From a management perspective, the incremental approach has two problems:


1. The process is not visible. Managers need regular deliverables to measure
progress. If systems are developed quickly, it is not cost-effective to produce
documents that reflect every version of the system.
2. System structure tends to degrade as new increments are added. Unless time
and money are spent on refactoring to improve the software, regular change
tends to corrupt its structure. Incorporating further software changes
becomes increasingly difficult and costly.

The problems of incremental development become particularly acute for large,
complex, long-lifetime systems, where different teams develop different parts of the
system. Large systems need a stable framework or architecture and the
responsibilities of the different teams working on parts of the system need to be
clearly defined with respect to that architecture. This has to be planned in advance
rather than developed incrementally.

You can develop a system incrementally and expose it to customers for comment,
without actually delivering it and deploying it in the customer’s environment.
Incremental delivery and deployment means that the software is used in real,
operational processes. This is not always possible as experimenting with new
software can disrupt normal business processes.


Reuse-oriented software engineering


In the majority of software projects, there is some software reuse. This often
happens informally when people working on the project know of designs or
code that is similar to what is required. They look for these, modify them as
needed, and incorporate them into their system.

This informal reuse takes place irrespective of the development process that is
used. However, in the 21st century, software development processes that focus
on the reuse of existing software have become widely used. Reuse-oriented
approaches rely on a large base of reusable software components and an
integrating framework for the composition of these components. Sometimes,
these components are systems in their own right (COTS or commercial off-the-
shelf systems) that may provide specific functionality such as word processing or
a spreadsheet.

Figure 6.3 Reuse-oriented software engineering

A general process model for reuse-based development is shown in Figure 6.3.
Although the initial requirements specification stage and the validation stage are
comparable with other software processes, the intermediate stages in a
reuse-oriented process are different. These stages are:

1. Component analysis. Given the requirements specification, a search is
made for components to implement that specification. Usually, there is no
exact match and the components that may be used only provide some of
the functionality required.
2. Requirements modification. During this stage, the requirements are
analyzed using information about the components that have been
discovered. They are then modified to reflect the available components.
Where modifications are impossible, the component analysis activity may
be re-entered to search for alternative solutions.
3. System design with reuse. During this phase, the framework of the
system is designed or an existing framework is reused. The designers take

into account the components that are reused and organize the framework
to cater for this. Some new software may have to be designed if reusable
components are not available.
4. Development and integration. Software that cannot be externally
procured is developed, and the components and COTS systems are
integrated to create the new system. System integration, in this model,
may be part of the development process rather than a separate activity.

There are three types of software component that may be used in a
reuse-oriented process:
1. Web services that are developed according to service standards and which
are available for remote invocation.
2. Collections of objects that are developed as a package to be integrated
with a component framework such as .NET or J2EE.
3. Stand-alone software systems that are configured for use in a particular
environment.

Reuse-oriented software engineering has the obvious advantage of reducing the
amount of software to be developed and so reducing cost and risks. It usually also
leads to faster delivery of the software. However, requirements compromises are
inevitable and this may lead to a system that does not meet the real needs of
users. Furthermore, some control over the system evolution is lost as new
versions of the reusable components are not under the control of the
organization using them.


References and Supplementary Materials


Books and Journals
1. Schwalbe, Kathy. Information Technology Project Management. Thomson.
2. Marchewka, Jack. Information Technology Project Management: Providing
Measurable Organizational Value. Wiley India.
3. Stellman, Andrew and Greene, Jennifer. Applied Software Project Management. SPD.
4. Thayer, Richard and Yourdon, Edward. Software Engineering Project Management.
Wiley India.

Online Supplementary Reading Materials

Online Instructional Videos

Risk Management
(Mitigation and Monitoring)
• An effective strategy must consider three issues:
– Risk avoidance
– Risk monitoring
– Risk management and planning

Example: high staff turnover, with its impact on project cost and schedule.
MITIGATION
• Project management must develop a strategy for
reducing turnover. Among the possible steps to be
taken are:
MONITOR
• Factors can be monitored:
MANAGEMENT
• Assumes mitigation efforts have failed.
• If mitigation strategies have been followed,
– Backup is available
– Documentation is available
– Knowledge has been dispersed across the team.
• “get up to speed”
• “knowledge transfer mode”
• “commentary documents”
RMMM – Risk Mitigation, Monitoring, and Management

• RMMM increases both cost and schedule.
– Example: providing a "backup"
– A large project may have 30-40 risks
• Apply the Pareto 80-20 rule.
• Risks may also occur during maintenance.
RMMM plan
• Documents all work performed as part of risk analysis.
• Used by project manager as part of overall project plan
• Risk Information Sheet (RIS)
– Maintained using a database system.

• Risk mitigation:
– Achieved by developing a plan.

• Primary objectives of monitoring are:


– To assess whether predicted risks do, in fact, occur.
– To ensure that risk aversion steps defined for the risk are being
properly applied.
– To collect information that can be used for future risk analysis.
• RIS contains the following information
– Risk ID, Date, Probability & Impact
– Description Refinement/context
– Mitigation/monitoring
– Management/Contingency Plan/trigger
– Current Status
– Originator & Assigned (to whom) information
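A Risk Information Sheet record with the fields listed above might be sketched as a simple data structure. This is an illustration only; the field names and sample values below are hypothetical, not a standard schema.

```python
# Hypothetical RIS record mirroring the fields above; names and sample
# values are illustrative only.
from dataclasses import dataclass

@dataclass
class RiskInformationSheet:
    risk_id: str
    date: str
    probability: float              # e.g. 0.8 means 80% likely
    impact: str                     # e.g. "critical", "marginal"
    description: str
    refinement_context: str
    mitigation_monitoring: str
    contingency_plan_trigger: str
    current_status: str
    originator: str
    assigned_to: str

ris = RiskInformationSheet(
    "P02-4-32", "2019-11-04", 0.8, "critical",
    "High staff turnover",
    "Key developers may leave mid-project",
    "Cross-train team members; monitor morale",
    "Enter knowledge-transfer mode on resignation",
    "open", "PM", "Team lead")
print(ris.risk_id)  # P02-4-32
```

In practice, as the text notes, such records would be maintained in a database system rather than in memory.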

Seat Work
• What is risk mitigation?
• What is risk monitoring?
• What is a risk monitoring plan in software engineering?
• Give a risk monitoring plan in software engineering.
CS-6209 Software Engineering 1
Week 8-9: Software Implementation and Documentation

Module 007: Software Implementation and Documentation

Course Learning Outcomes:

1. Implement structured programming in software coding and implementation.
2. Understand different programming styles and coding guidelines.
3. Know the key issues that have to be considered when implementing
software, including software reuse and open-source development.

Software Implementation
In this chapter, we will study programming methods, documentation, and
challenges in software implementation.

Structured Programming
In the process of coding, the lines of code keep multiplying and, thus, the size of the
software increases. Gradually, it becomes next to impossible to remember the flow of
the program. If one forgets how the software and its underlying programs, files, and
procedures are constructed, it becomes very difficult to share, debug, and modify the
program. The solution to this is structured programming. It encourages the
developer to use subroutines and loops instead of simple jumps in the code,
thereby bringing clarity to the code and improving its efficiency. Structured
programming also helps the programmer reduce coding time and organize code
properly.

Structured programming states how the program shall be coded. It uses three main
concepts:
1. Top-down analysis - Software is always made to perform some rational
work. This rational work is known as the problem in software parlance. Thus,
it is very important that we understand how to solve the problem. Under
top-down analysis, the problem is broken down into small pieces, each of
which has some significance. Each piece is individually solved, and the steps
for solving it are clearly stated.

2. Modular programming - While programming, the code is broken down into
smaller groups of instructions. These groups are known as modules,
subprograms, or subroutines. Modular programming is based on the
understanding of top-down analysis. It discourages jumps using 'goto'

statements in the program, which often makes the program flow non-traceable.
Jumps are prohibited and a modular format is encouraged in structured
programming.

3. Structured coding - In reference to top-down analysis, structured coding
sub-divides the modules into further smaller units of code, in the order of
their execution.
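As an illustration only (the module does not prescribe a language), the three concepts above might look like this in Python: a problem is analyzed top-down, split into small modules, and iteration is written with a structured loop rather than jumps. The payroll example and its function names are hypothetical.

```python
# Hypothetical top-down, modular decomposition of a small problem
# (summing payroll); iteration uses a structured loop, not jumps.

def read_hours(records):
    """Step 1: extract hours worked from raw records."""
    return [r["hours"] for r in records]

def compute_pay(hours, rate):
    """Step 2: compute pay for each employee."""
    pay = []
    for h in hours:              # structured loop instead of 'goto'
        pay.append(h * rate)
    return pay

def payroll_total(records, rate):
    """Top-level routine composed from the smaller modules."""
    return sum(compute_pay(read_hours(records), rate))

print(payroll_total([{"hours": 10}, {"hours": 5}], 20))  # 300
```

Each subroutine can now be shared, debugged, and modified on its own, which is the point the section makes.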

Functional Programming
Functional programming is a style of programming in which computation is
expressed through mathematical functions. A function in mathematics should
always produce the same result on receiving the same argument. In procedural
languages, the flow of the program runs through procedures, i.e., the control of the
program is transferred to the called procedure. While control flow is transferring
from one procedure to another, the program changes its state.

In procedural programming, it is possible for a procedure to produce different
results when it is called with the same argument, as the program itself can be in a
different state while calling it. This is a property as well as a drawback of procedural
programming, in which the sequence or timing of the procedure execution becomes
important.

Functional programming provides means of computation as mathematical
functions, which produce results irrespective of program state. This makes it
possible to predict the behavior of the program.

Functional programming uses the following concepts:

 First-class and higher-order functions - These functions have the capability
to accept another function as an argument, or they return other functions as
results.
 Pure functions - These functions do not include destructive updates; that is,
they do not affect any I/O or memory, and if they are not in use, they can
easily be removed without hampering the rest of the program.
 Recursion - Recursion is a programming technique where a function calls
itself and repeats the program code in it unless some pre-defined condition
matches. Recursion is the way of creating loops in functional programming.
 Strict evaluation - It is a method of evaluating the expression passed to a
function as an argument. Functional programming has two types of
evaluation methods, strict (eager) or non-strict (lazy). Strict evaluation
always evaluates the expression before invoking the function. Non-strict
evaluation does not evaluate the expression unless it is needed.


 λ-calculus - Most functional programming languages use λ-calculus as their
type system. λ-expressions are executed by evaluating them as they occur.

Common Lisp, Scala, Haskell, Erlang, and F# are some examples of functional
programming languages.
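Although the languages above are the usual examples, the first three concepts can be sketched in Python as well. The functions below are hypothetical illustrations of a pure function, a higher-order function, and recursion.

```python
# Illustrative sketches of the functional concepts listed above.

def square(x):
    """Pure function: same result for the same argument, no side effects."""
    return x * x

def apply_twice(f, x):
    """Higher-order function: accepts another function as an argument."""
    return f(f(x))

def factorial(n):
    """Recursion: a loop expressed as a function calling itself."""
    return 1 if n <= 1 else n * factorial(n - 1)

print(apply_twice(square, 3))   # 81
print(factorial(5))             # 120
```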

Programming style
Programming style is a set of coding rules followed by all the programmers who
write the code. When multiple programmers work on the same software project,
they frequently need to work with program code written by some other developer.
This becomes tedious, or at times impossible, if the developers do not follow a
standard programming style to code the program.

An appropriate programming style includes using function and variable names
relevant to the intended task, using well-placed indentation, commenting code for
the convenience of the reader, and the overall presentation of the code. This makes
the program code readable and understandable by all, which in turn makes
debugging and error solving easier. Proper coding style also helps ease
documentation and updating.

Coding Guidelines
Practice of coding style varies with organizations, operating systems and language
of coding itself.

The following coding elements may be defined under the coding guidelines of an
organization:
 Naming conventions - This section defines how to name functions,
variables, constants and global variables.
 Indenting - This is the space left at the beginning of a line, usually 2-8
whitespace characters or a single tab.
 Whitespace - It is generally omitted at the end of line.
 Operators - Defines the rules of writing mathematical, assignment and
logical operators. For example, assignment operator ‘=’ should have space
before and after it, as in “x = 2”.
 Control Structures - The rules of writing if-then-else, case-switch, while-
until and for control flow statements solely and in nested fashion.
 Line length and wrapping - Defines how many characters should be there
in one line; mostly a line is 80 characters long. Wrapping defines how a line
should be wrapped if it is too long.
 Functions - This defines how functions should be declared and invoked, with
and without parameters.

 Variables - This mentions how variables of different data types are declared
and defined.
 Comments - This is one of the important coding components, as the
comments included in the code describe what the code actually does, along
with all other associated descriptions. This section also helps in creating help
documentation for other developers.
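A short fragment can illustrate several of the guideline elements above at once. The names and values below are hypothetical; the conventions shown (upper-case constants, 4-space indentation, spaces around `=`, a comment explaining intent) are one common Python style, not a universal rule.

```python
# Hypothetical fragment following the guideline elements listed above:
# naming conventions, indentation, operator spacing, and comments.

MAX_RETRIES = 3  # constant: upper-case naming convention

def compute_average(values):
    """Return the arithmetic mean of a non-empty list of numbers."""
    total = sum(values)      # assignment spaced as in "x = 2"
    return total / len(values)

print(compute_average([2, 4, 6]))  # 4.0
```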

Software Implementation Challenges


There are some challenges faced by the development team while implementing the
software. Some of them are mentioned below:

 Code-reuse - Programming interfaces of present-day languages are very
sophisticated and are equipped with huge libraries of functions. Still, to bring
down the cost of the end product, the organization's management prefers to
re-use code that was created earlier for some other software. Programmers
face huge issues with compatibility checks and in deciding how much code to
re-use.
 Version Management - Every time a new software is issued to the customer,
developers have to maintain version and configuration related
documentation. This documentation needs to be highly accurate and
available on time.
 Target-Host - The software program being developed in the organization
needs to be designed for host machines at the customer's end. But at times,
it is impossible to design software that works on the target machines.

Software Documentation
Software documentation is an important part of the software process. A
well-written document provides a great tool and a means of information
repository necessary to know about the software process. Software documentation
also provides information about how to use the product.

Well-maintained documentation should include the following documents:

 Requirement documentation - This documentation works as a key tool for
the software designer, developer, and test team to carry out their respective
tasks. This document contains all the functional, non-functional, and
behavioral descriptions of the intended software.

Sources of this document can be previously stored data about the software,
already running software at the client's end, client interviews,

questionnaires, and research. Generally, it is stored in the form of a
spreadsheet or word-processing document with the high-end software
management team.

This documentation works as the foundation for the software to be developed
and is mainly used in the verification and validation phases. Most test cases
are built directly from the requirement documentation.

 Software design documentation - This documentation contains all the
necessary information needed to build the software. It contains: (a) high-level
software architecture, (b) software design details, (c) data flow diagrams, and
(d) database design.

These documents work as a repository for developers to implement the
software. Though these documents do not give any details on how to code
the program, they give all the necessary information required for coding
and implementation.

 Technical documentation - This documentation is maintained by the
developers and actual coders. These documents, as a whole, represent
information about the code. While writing the code, the programmers also
mention the objective of the code, who wrote it, where it will be required,
what it does and how it does it, what other resources the code uses, etc.

Technical documentation increases the understanding between various
programmers working on the same code. It enhances the re-use capability of
the code and makes debugging easy and traceable.

There are various automated tools available, and some come with the
programming language itself. For example, Java comes with the JavaDoc tool
to generate technical documentation of code.
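In Python, docstrings play a role similar to JavaDoc comments: tools such as pydoc or Sphinx can extract them to generate technical documentation. The function below is a hypothetical illustration.

```python
# Hypothetical function whose docstring serves as extractable
# technical documentation (analogous to a JavaDoc comment).

def transfer(amount, source, target):
    """Move `amount` from the `source` account to the `target` account.

    Documentation generators can extract this text, so it records what
    the code does and how it is meant to be used.
    """
    target["balance"] = target.get("balance", 0) + amount
    source["balance"] = source.get("balance", 0) - amount

src, dst = {"balance": 100}, {"balance": 0}
transfer(40, src, dst)
print(dst["balance"])  # 40
```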

 User documentation - This documentation is different from all those
explained above. All previous documentation is maintained to provide
information about the software and its development process. But user
documentation explains how the software product should work and how it
should be used to get the desired results.


This documentation may include software installation procedures, how-to
guides, user guides, the uninstallation method, and special references for
getting more information, such as license updates.

References and Supplementary Materials


Books and Journals
1. Sommerville, Ian. Software Engineering, 9th Edition.

Online Supplementary Reading Materials


1. Software Design and Implementation;
https://courses.cs.washington.edu/courses/cse331/10sp/lectures/lectures.html;
November 4, 2019

Online Instructional Videos

Requirements Engineering

Week 10

Requirements Engineering
• Inception— ask a set of questions that establish …
– basic understanding of the problem
– the people who want a solution
– the nature of the solution that is desired, and
– the effectiveness of preliminary communication and
collaboration between the customer and the developer
• Elicitation—elicit requirements from all stakeholders
• Elaboration—create an analysis model that identifies
data, function and behavioral requirements
• Negotiation—agree on a deliverable system that is
realistic for developers and customers
• Specification—can be any one (or more) of the following:
– A written document
– A set of models
– A formal mathematical model
– A collection of user scenarios (use-cases)
– A prototype
• Validation—a review mechanism that looks for
– errors in content or interpretation
– areas where clarification may be required
– missing information
– inconsistencies (a major problem when large products or systems
are engineered)
– conflicting or unrealistic (unachievable) requirements.
• Requirements management
Inception
• Identify stakeholders
– “who else do you think I should talk to?”
– Recognize multiple points of view
– Work toward collaboration
• The first questions
– Who is behind the request for this work?
– Who will use the solution?
– What will be the economic benefit of a successful
solution?
– Is there another source for the solution that you
need?
Eliciting Requirements
• Find out about the application domain, the
services that the system should provide and the
system’s operational constraints.
• May involve end users, managers, engineers
involved in maintenance, domain experts, trade
unions etc. (They are called stakeholders)
• the goal is
– to identify the problem
– propose elements of the solution
– negotiate different approaches, and
– specify a preliminary set of solution requirements
Building an Analysis Model
• Elements of the analysis model
– Scenario-based elements
• Functional—processing narratives for software functions
• Use-case—descriptions of the interaction between an
“actor” and the system
– Class-based elements
• Implied by scenarios
– Behavioral elements
• State diagram
– Flow-oriented elements
• Data flow diagram
Scenario Based Modeling
Use Cases
• A collection of user scenarios that describe the thread of usage of a
system
• Each scenario is described from the point-of-view of an “actor”—a
person or device that interacts with the software in some way
• Each scenario answers the following questions:
– Who is the primary actor, the secondary actor (s)?
– What are the actor’s goals?
– What preconditions should exist before the story begins?
– What main tasks or functions are performed by the actor?
– What extensions might be considered as the story is described?
– What variations in the actor’s interaction are possible?
– What system information will the actor acquire, produce, or change?
– Will the actor have to inform the system about changes in the
external environment?
– What information does the actor desire from the system?
– Does the actor wish to be informed about unexpected changes?
Hospital Management System
Class Based Modeling
Defining Attributes of a Class
• Attributes of a class are those nouns from the
grammatical parse that reasonably belong to a class
• Attributes hold the values that describe the current
properties or state of a class
• In identifying attributes, the following question should be
answered
– What data items (composite and/or elementary) will
fully define a specific class in the context of the
problem at hand?
• Usually an item is not an attribute if more than one of
them is to be associated with a class

Defining Operations of a Class
• Operations define the behavior of an object.
• An operation has knowledge about the state of a class
and the nature of its associations.
• The action performed by an operation is based on the
current values of the attributes of a class.
• Using a grammatical parse again, circle the verbs; then
select the verbs that relate to the problem domain
classes that were previously identified.

Example Class Box

Class name: Component

Attributes:
+ componentID
- telephoneNumber
- componentStatus
- delayTime
- masterPassword
- numberOfTries

Operations:
+ program()
+ display()
+ reset()
+ query()
- modify()
+ call()
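Purely as an illustration (the class box does not dictate a language), the Component class above might be sketched in Python, where '+' (public) members keep their names and '-' (private) members take a leading underscore by convention. The method bodies below are hypothetical placeholders.

```python
# Hypothetical sketch of the Component class box above; '+' members are
# public, '-' members private (leading underscore by Python convention).

class Component:
    def __init__(self, component_id):
        self.component_id = component_id   # + componentID
        self._telephone_number = ""        # - telephoneNumber
        self._component_status = "idle"    # - componentStatus
        self._delay_time = 0               # - delayTime
        self._master_password = ""         # - masterPassword
        self._number_of_tries = 0          # - numberOfTries

    def program(self): ...                 # + program()
    def display(self): ...                 # + display()

    def reset(self):
        """+ reset(): return the component to its idle state."""
        self._component_status = "idle"

    def query(self):
        """+ query(): report the current status."""
        return self._component_status

    def _modify(self): ...                 # - modify() is private
    def call(self): ...                    # + call()

c = Component("C-100")
c.reset()
print(c.query())  # idle
```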

Association, Generalization and Dependency (Ref: Fowler)
• Association
– Represented by a solid line between two classes directed from the
source class to the target class
– Used for representing (i.e., pointing to) object types for attributes
– May also be a part-of relationship (i.e., aggregation), which is
represented by a diamond-arrow
• Generalization
– Portrays inheritance between a super class and a subclass
– Is represented by a line with a triangle at the target end
• Dependency
– A dependency exists between two elements if changes to the
definition of one element (i.e., the source or supplier) may cause
changes to the other element (i.e., the client)
– Examples
• One class calls a method of another class
• One class utilizes another class as a parameter of a method
Example Class Diagram

Figure: an example class diagram with classes such as Accountant, Auditor, Record
Keeper, Input Verifier, Production Manager, Transaction Processor, Report
Generator, Error Log, Input Handler, Account, Account List (1..n), Local File
Handler, Remote File Handler, Accounts Receivable, and Accounts Payable.
Class Diagram
Flow-oriented Modeling
Data Modeling
• Identify the following items
– Data objects (Entities)
– Data attributes
– Relationships
– Cardinality (number of occurrences)
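The four elements above can be sketched in code. The Customer/Order example below is hypothetical: two data objects (entities) with attributes, a relationship between them, and a 1..n cardinality.

```python
# Hypothetical sketch of data modeling elements: two entities, their
# attributes, a relationship, and its cardinality (one Customer
# places many Orders).
from dataclasses import dataclass, field
from typing import List

@dataclass
class Order:                  # data object (entity)
    order_id: int             # attribute
    total: float              # attribute

@dataclass
class Customer:               # data object (entity)
    name: str
    # relationship with cardinality 1..n: one customer, many orders
    orders: List[Order] = field(default_factory=list)

alice = Customer("Alice")
alice.orders.append(Order(1, 99.50))
print(len(alice.orders))  # 1
```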

Data Flow and Control Flow

• Data Flow Diagram
– Depicts how input is transformed into output as data objects
move through a system
• Process Specification
– Describes data flow processing at the lowest level of
refinement in the data flow diagrams
• Control Flow Diagram
– Illustrates how events affect the behavior of a system
through the use of state diagrams

Diagram Layering and Process Refinement

Figure: DFD layering, from the context-level diagram through the Level 1 diagram
down to the process specification.
Elements of the Analysis Model

Object-oriented Analysis:
– Scenario-based modeling: use case text, use case diagrams, activity
diagrams, swim lane diagrams
– Class-based modeling: class diagrams, analysis packages, CRC models,
collaboration diagrams

Structured Analysis:
– Flow-oriented modeling: data structure diagrams, data flow diagrams,
control-flow diagrams, processing narratives
– Behavioral modeling: state diagrams, sequence diagrams
Validating Requirements
• Is each requirement consistent with the overall objective
for the system/product?
• Have all requirements been specified at the proper level
of abstraction? That is, do some requirements provide a
level of technical detail that is inappropriate at this
stage?
• Is the requirement really necessary or does it represent
a feature that may not be essential to the objective of the
system?
• Does each requirement have attribution? That is, is a
source (generally, a specific individual) noted for each
requirement?
• Do any requirements conflict with other requirements?
• Is each requirement achievable in the technical
environment that will house the system or product?
• Is each requirement testable, once implemented?
• Does the requirements model properly reflect the
information, function, and behavior of the system to be
built?
• Are all patterns consistent with customer requirements?
Scenario for collecting medical history in MHC-PMS

Figure: scenario for collecting medical history in the MHC-PMS (Sommerville,
Chapter 4, Requirements Engineering).
