Software Design Midterm
Software is a set of:
❑ Programs
❑ Procedures
❑ Algorithms and their documentation
SOFTWARE’S OVERVIEW
HISTORY
•The first theory about software was proposed by Alan Turing in his 1936 paper
"On Computable Numbers, with an Application to the Entscheidungsproblem" (decision
problem). The term "software" was first used in print by John W. Tukey in 1958.
•The term is often used to mean application software. In computer science and
software engineering, software is all information processed by a computer system,
including programs and data.
SOFTWARE
GENERATIONS
FIRST GENERATION
•During the 1950s the first computers were programmed by changing wires and
setting tens of dials and switches, one for every bit. Sometimes these settings
could be stored on paper tapes that looked like ticker tape from the telegraph.
SECOND GENERATION
•At the end of the 1950s the first 'natural language' interpreters and compilers
were made, but it took some time before the new languages were accepted
by enterprises.
•One of the oldest 3GLs is FORTRAN (Formula Translation), which was developed
around 1953 by IBM. It is a language primarily intended for technical and
scientific purposes. Standardization of FORTRAN started 10 years later, and a
recommendation was finally published by the International Organization for
Standardization (ISO) in 1968.
FOURTH GENERATION
•A 4GL is an aid which the end user or programmer can use to build an
application without using a third-generation programming language.
Strictly speaking, knowledge of a programming language is therefore not
needed.
•In practice, computer systems divide software into 3 major classes:
❑System software
❑Programming software
❑Application software
SYSTEM SOFTWARE
•System software provides the basic functions for computer usage and helps
run the computer hardware and system.
❑ device drivers
❑ operating systems
❑ utilities
❑ window systems
PROGRAMMING SOFTWARE
❑compilers
❑debuggers
❑interpreters
❑linkers
❑text editors
APPLICATION SOFTWARE
[Diagram: genealogy of early programming languages, including Lisp, Algol60, Algol68, Fortran, Pascal, BCPL, Classic C, COBOL, and PL/1.]
MODERN PROGRAMMING LANGUAGE
[Diagram: genealogy of modern programming languages, including Lisp, Python, Smalltalk, PHP, Fortran77, Ada, Ada98, C#, Object Pascal, COBOL89, COBOL04, Javascript, Visual Basic, and PERL.]
CSCI624
Software Design and
Development
Week 2
Process, Methods and Tools
Software Engineering is a layered
technology
Process
Foundation for SE
Glue that holds the technology layers together
Enables timely development of software
Framework for a set of key process areas
(KPAs) developed for effective delivery of software
engineering technology.
Methods
Provide the technical how-to’s for building
software.
Include tasks such as:
Requirements analysis
Design
Program construction
Testing
Support
Tools
Provide automated/semi-automated
support for the process and the
methods.
When tools are integrated, information
created by one tool can be used by
another, creating a system for the
support of software development called
CASE.
CASE combines software, hardware and a software
engineering database to create a software engineering
environment.
A generic Process Framework
Engineering is the analysis, design, construction, verification and
management of technical entities.
a) Definition phase:
• Focus on what: identify the requirements, what is to be
processed, what system behaviour is needed, what
interface is to be designed; key requirements are identified.
b) Development phase:
• Focus on how: how data structures are constructed, and how
procedures and methods are implemented.
c) Support phase:
• Changes associated with error correction and adaptation to new
software environments.
There are 4 types of changes:
Correction, Adaptation, Enhancement, Prevention
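The four change types above can be sketched as a small classification; this is an illustrative Python sketch, and the example change requests are invented, not from the text:

```python
from enum import Enum

class ChangeType(Enum):
    """The four types of change handled in the support phase."""
    CORRECTION = "fix a defect found after delivery"
    ADAPTATION = "adjust to a new environment (OS, hardware, rules)"
    ENHANCEMENT = "extend the software beyond the original requirements"
    PREVENTION = "rework the software to ease future maintenance"

# Hypothetical change requests tagged with their type
requests = [
    ("login crashes on empty password", ChangeType.CORRECTION),
    ("port the build to the new OS release", ChangeType.ADAPTATION),
    ("add CSV export to reports", ChangeType.ENHANCEMENT),
    ("refactor the parser module for maintainability", ChangeType.PREVENTION),
]

for text, kind in requests:
    print(f"{kind.name}: {text}")
```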
Software Framework
Framework Activities
It has five activities:
Communication : communicate with customer and
understand objectives
Planning: A map that helps in the development of
software
Modeling
Analysis of requirements
Design
Construction
Code generation
Testing
Deployment : Delivery of product
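The five framework activities above form an ordered sequence; a minimal sketch in Python (the task wording follows the list above):

```python
# The five generic framework activities, in order, with their main tasks.
FRAMEWORK_ACTIVITIES = [
    ("Communication", ["communicate with the customer", "understand objectives"]),
    ("Planning", ["map out the development of the software"]),
    ("Modeling", ["analysis of requirements", "design"]),
    ("Construction", ["code generation", "testing"]),
    ("Deployment", ["delivery of the product"]),
]

def activity_order():
    """Return the activity names in the order they are performed."""
    return [name for name, _tasks in FRAMEWORK_ACTIVITIES]

print(" -> ".join(activity_order()))
```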
Umbrella Activities
Software project management :
Assess project progress against project plan
Formal technical reviews :
Remove errors before they migrate to the next activity
Software quality assurance :
Define activities to ensure quality
Software configuration management :
manage effect of changes in the project
Work product preparation and production
Reusability management:
Define criteria for work product reuse
Measurement:
define measures for meeting customer needs
Risk management :
Assess risks that may affect the s/w
The Process Model:
Adaptability
The Primary Goal of Any Software
Process: High Quality
Remember:
Why?
Less rework!
SOFTWARE PROCESS MODELS
A process model/ software engineering paradigm is
chosen based on the nature of the project and
application, the methods and tools to be used and the
controls and deliverables that are required.
The different phases of a problem solving loop are:
Problem definition - identifies the specific problem to be solved
Technical development - solves the problem through the application of
some technology
Solution integration - delivers the results (documents, programs,
data, etc.)
Status quo - represents the current state of affairs
Prescriptive Process Models
Prescriptive process models advocate an orderly
approach to software engineering.
That leads to a few questions …
If prescriptive process models strive for structure and
order, are they inappropriate for a software world that
thrives on change?
Yet, if we reject traditional process models (and the order
they imply) and replace them with something less
structured, do we make it impossible to achieve
coordination and coherence in software work?
1. The Linear Sequential Model
Also called the classic life cycle/waterfall
model.
Has a systematic sequential approach to
software development that begins at system
level and progresses through analysis,
design, coding, testing and support.
One of the oldest models.
Requirements should be well understood
Activities:
Communication : Requirements gathering
Planning : Estimate schedule
Modelling: Design
Construction : Coding and testing
Deployment : Delivery of software
OR
Analysis
Design
Implementation
Verification
Testing
[Diagram: process flow from start through communication, modeling (analysis, design), construction (code, test), and deployment (delivery), with feedback.]
There are mainly 6 task regions:
1. Customer communication: tasks required to establish effective
communication between developer and customer.
2. Planning: tasks required to define resources, timelines and
other project related information.
3. Risk Analysis: tasks required to assess both technical and
management risks.
4. Engineering: tasks required to build one or more
representations of the application.
5. Construction and release: tasks required to construct, test,
install and provide user support. (E.g. documentation and
training).
6. Customer evaluation: tasks required to obtain customer
feedback based on evaluation of the software representations.
Seat Work
****
CSCI624
Software Design and
Development
Week 4
What Is Agility
Agile software development refers
to software development
methodologies centered around the idea of
iterative development, where
requirements and solutions evolve through
collaboration between self-organizing,
cross-functional teams.
Effective (rapid and adaptive) response to
change (team members, new technology,
requirements)!
Effective communication in structure and
attitudes among all team members,
technological and business people, software
engineers and managers.
Why Is “Agility” Important, and What Is It?
Why? The modern business environment is
fast-paced and ever-changing. It
represents a reasonable alternative to
conventional software engineering for
certain classes of software projects. It has
been demonstrated to deliver successful
systems quickly.
What? Agility may be termed “software
engineering lite.” The basic activities
(communication, planning, modeling,
construction and deployment) remain, but
they morph into a minimal task set that
pushes the team toward construction and
delivery sooner.
Agile Methodologies
The most popular and common examples
are Scrum, eXtreme Programming (XP),
Feature Driven Development (FDD),
Dynamic Systems Development Method
(DSDM), Adaptive Software Development
(ASD), Crystal, and Lean Software
Development (LSD)
Agility and the Cost of Change
Conventional wisdom is that the cost of
change increases nonlinearly as a project
progresses.
It is relatively easy to accommodate a
change when a team is gathering
requirements early in a project.
If there are any changes, the costs of
doing this work are minimal
But if, in the middle of validation testing, a
stakeholder requests a major
functional change, then the change
requires a modification to the architectural
design, construction of new components,
changes to other existing components, new
testing and so on.
Costs escalate quickly.
Extreme Programming (XP)
XP design occurs both before and after
coding, as refactoring is encouraged.
It follows the KIS principle (keep it simple):
nothing more and nothing less than the story requires.
XP encourages the use of CRC (class-
responsibility-collaborator) cards in an
object-oriented context. CRC cards are the only design
work product of XP; they identify and
organize the classes that are relevant to
the current software.
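A CRC card can be represented as a simple record; a minimal Python sketch, where the card content (an Order class) is a hypothetical example, not from the text:

```python
from dataclasses import dataclass, field

@dataclass
class CRCCard:
    """A class-responsibility-collaborator card for OO design."""
    class_name: str
    responsibilities: list = field(default_factory=list)
    collaborators: list = field(default_factory=list)

# Hypothetical card for an order-processing story
card = CRCCard(
    class_name="Order",
    responsibilities=["compute total price", "track payment status"],
    collaborators=["Customer", "Invoice"],
)
print(card.class_name, card.responsibilities, card.collaborators)
```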
Seat Work
• Systems
• Processes
• Technology
What is a System?
•The word “system” is derived from the Greek word systema, which means an
organized relationship between any set of components to achieve some
common cause or objective.
Interaction
• It is defined by the manner in which the components operate with each other.
•For example, in an organization, the purchasing department must interact with the
production department, and payroll with the personnel department.
Properties of a System
Interdependence
•Interdependence means how the components of a system depend on one
another. For proper functioning, the components are coordinated and linked
together according to a specified plan. The output of one subsystem is
required by another subsystem as input.
Integration
•Integration is concerned with how the components of a system are connected
together. It means that the parts of the system work together within the
system even if each part performs a unique function.
Properties of a System
Central Objective
•The objective of the system must be central. It may be real or stated. It is not
uncommon for an organization to state one objective and operate to achieve
another.
•The users must know the main objective of a computer application early in
the analysis for a successful design and conversion.
Elements of a System (Diagram)
Elements of a System
Processor(s)
•The processor is the element of a system that involves the actual
transformation of input into output.
Control
•The control element guides the system.
Feedback
•Feedback provides the control in a dynamic system.
Environment
•The environment is the “supersystem” within which an organization operates.
•It determines how a system must function. For example, vendors and
competitors in the organization’s environment may provide constraints that
affect the actual performance of the business.
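The interplay of control and feedback in a dynamic system can be sketched as a small loop; this is an illustrative Python sketch, and the setpoint, starting value and gain are arbitrary example numbers:

```python
def run_system(setpoint, output, gain=0.5, steps=20):
    """Drive a system toward a goal: feedback measures the deviation,
    and the control element adjusts the system accordingly."""
    for _ in range(steps):
        error = setpoint - output   # feedback: compare actual output to the goal
        output += gain * error      # control: correct the system's behavior
    return output

# Starting at 50, the system settles very close to the setpoint 70.
print(run_system(setpoint=70, output=50))
```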
Elements of a System
•Each system has boundaries that determine its sphere of influence and
control.
•Physical System may be static or dynamic in nature. For example, desks and
chairs are the physical parts of computer center which are static. A
programmed computer is a dynamic system in which programs, data, and
applications can change according to the user's needs.
Types of System
Physical or Abstract Systems
•Abstract systems are non-physical or conceptual entities that may be
formulas, representations or models of a real system.
Types of System
Open or Closed Systems
•An open system must interact with its environment. It receives inputs from
and delivers outputs to the outside of the system. For example, an information
system which must adapt to the changing environmental conditions.
•A closed system does not interact with its environment. It is isolated from
environmental influences. A completely closed system is rare in reality.
Types of System
Adaptive and Non-Adaptive Systems
•An adaptive system responds to change in the environment in a way that
improves its performance and helps it survive. For example, human beings and
animals.
•A non-adaptive system is one which does not respond to the
environment. For example, machines.
Types of System
Permanent or Temporary System
•A permanent system persists for a long time. For example, business policies.
•A temporary system is made for a specified time and after that it is
demolished. For example, a DJ system is set up for a program and
disassembled after the program.
Types of System
Natural and Manufactured System
•Natural systems are created by nature. For example, the solar system and
seasonal systems.
•A manufactured (machine) system is one in which human interference is
minimal and all tasks are performed by machines. For example, an autonomous robot.
Types of System
Man–Made Information Systems
•It is an interconnected set of information resources to manage data for a
particular organization, under Direct Management Control (DMC).
•Different arrows are used to show information flow, material flow, and
information feedback.
Systems Models
Flow System Models
•A flow system model shows the orderly flow of the material, energy, and
information that hold the system together.
•It shows an ongoing, constantly changing status of the system. It consists of:
1. Inputs that enter the system
2. The processor through which transformation takes place
3. The program(s) required for processing
4. The output(s) that result from processing
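The four parts above compose naturally: inputs enter the system, the processor runs the program, and outputs result. A minimal Python sketch (the doubling program is an arbitrary example):

```python
def flow_system(inputs, program):
    """Processor: transform each input into an output by applying the program."""
    return [program(item) for item in inputs]

# Inputs enter the system; the program doubles them; outputs result.
outputs = flow_system([1, 2, 3], program=lambda x: 2 * x)
print(outputs)  # [2, 4, 6]
```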
Categories of Information
There are three categories of information related to managerial levels and the
decisions managers make.
Strategic Information
•This information is required by top management for long-range
planning policies for the next few years. For example, trends in revenues,
financial investment, human resources, and population growth.
****
Project Management Concepts
Lecture Module Week 6
What is Project Management?
Project management is the application of knowledge,
skills, tools, and
techniques to project activities to meet project
requirements;
an interrelated group of processes that enables
the project team to achieve a successful project.
The 4 P’s
People — the most important element of a
successful project
Product — the software to be built
Process — the set of framework activities and
software engineering tasks to get the job done
Project — all work required to make the product a
reality
PEOPLE
The people management maturity model defines the
following key practice areas for software people:
recruiting,
selection,
performance management,
training,
compensation,
career development,
organization and work design,
and team/culture development.
Organizations that achieve high levels of maturity in the
people management area have a higher likelihood of
implementing effective software engineering practices.
PRODUCT
Before a project can be planned, product objectives and
scope should be established, alternative solutions should
be considered, and technical and management
constraints should be identified.
Without this information, it is impossible to define
reasonable (and accurate) estimates of the cost, an
effective assessment of risk, a realistic breakdown of
project tasks, or a manageable project schedule that
provides a meaningful indication of progress.
The software developer and customer must meet to
define product objectives and scope.
Objectives identify the overall goals for the product
(from the customer’s point of view) without considering
how these goals will be achieved.
Scope identifies the primary data, functions and
behaviors that characterize the product and, more
important, attempts to bound these characteristics in a
quantitative manner.
Once the product objectives and scope are understood,
alternative solutions are considered.
PROCESS
A software process provides the framework from which a
comprehensive plan for software development can be
established.
A small number of framework activities are applicable to
all software projects, regardless of their size or
complexity.
A number of different task sets (tasks, milestones, work
products, and quality assurance points) enable the
framework activities to be adapted to the characteristics
of the software project and the requirements of the
project team.
Finally, umbrella activities, such as software quality
assurance, software configuration management and
measurement, overlay the process model.
PROJECT
In order to avoid project failure, a software project
manager and the software engineers who build the
product must avoid a set of common warning signs,
understand the critical success factors that lead to good
project management, and develop a commonsense
approach for planning, monitoring and controlling the
project.
Software Teams
How to lead?
How to organize?
How to collaborate?
THE PEOPLE
THE PLAYERS: STAKEHOLDERS
TEAM LEADERS
TEAM ORGANIZATION
Stakeholders
Senior managers who define the business issues that often
have significant influence on the project.
Project (technical) managers who must plan, motivate,
organize, and control the practitioners who do software
work.
Practitioners who deliver the technical skills that are
necessary to engineer a product or application.
Customers who specify the requirements for the software to
be engineered and other stakeholders who have a
peripheral interest in the outcome.
End users who interact with the software once it is released
for production use.
Team Leader
The MOI Model
Motivation. The ability to encourage (by “push or
pull”) technical people to produce to their best ability.
Organization. The ability to mold existing processes (or
invent new ones) that will enable the initial concept to be
translated into a final product.
Ideas or innovation. The ability to encourage people
to create and feel creative even when they must work
within bounds established for a particular software
product or application.
Team Organization
Democratic Decentralized (DD)
Has no permanent leader
Task coordinators are appointed for short durations and
then replaced by others who may coordinate different
tasks.
Decisions on problems and approach are made by group
consensus.
Communication among team members is horizontal.
Controlled Decentralized (CD)
Has a defined leader who coordinates specific tasks and
secondary leaders that have responsibility for subtasks.
Problem solving remains a group activity, but
implementation of solutions is partitioned among
subgroups by the team leader.
Communication among subgroups and individuals is
horizontal. Vertical communication along the control
hierarchy also occurs.
Controlled Centralized (CC)
Top-level problem solving and internal team
coordination are managed by a team leader.
Communication between the leader and team members
is vertical.
Software Teams
The following factors must be considered when selecting a
software project team structure:
the difficulty of the problem to be solved
(qualifications)
the size of the resultant program(s) in lines of code (LOC)
or function points (FPs)
the degree to which the problem can be
modularized
the required quality and reliability of the system to be
built
the degree of sociability (communication) required for
the project
the required delivery date
Seat Work
Define the PPP in software project management.
List the team organization elements.
List the 5 questions for a successful team to address.
CS-6209 Software Engineering 1
Week 6: Software Quality Management
Introduction
Computers and software are ubiquitous. Mostly they are embedded and we don’t even
realize where and how we depend on software. We might accept it or not, but software is
governing our world and society and will continue further on. It is difficult to imagine our
world without software. There would be no running water, food supplies, business or
transportation would disrupt immediately, diseases would spread, and security would be
dramatically reduced – in short, our society would disintegrate rapidly. A key reason our
planet can bear over six billion people is software. Since software is so ubiquitous, we need
to stay in control. We have to make sure that the systems and their software run as we
intend – or better. Only if software has the right quality will we stay in control and not
suddenly realize that things are going awfully wrong. Software quality management is the
discipline that ensures that the software we are using and depending upon is of the right
quality. Only with solid understanding and discipline in software quality management
will we effectively stay in control.
What exactly is software quality management? To address this question we first need to
define the term “quality”. Quality is the ability of a set of inherent characteristics of a
product, service, product component, or process to fulfill requirements of customers [1].
From a management and controlling perspective quality is the degree to which a set of
inherent characteristics fulfills requirements. Quality management is the sum of all planned
systematic activities and processes for creating, controlling and assuring quality [1]. Fig. 1
indicates how quality management relates to the typical product development. We have
used a V-type visualization of the development process to illustrate that different quality
control techniques are applied to each level of abstraction from requirements engineering
to implementation. Quality control questions are mentioned on the right side. They are
addressed by techniques such as reviews or testing. Quality assurance questions are
mentioned in the middle. They are addressed by audits or sample checks. Quality
improvement questions are mentioned on the left side. They are addressed by dedicated
improvement projects and continuous improvement activities.
Course Module
Quality Concepts
The long-term profitability of a company is heavily impacted by the quality
perceived by customers. Customers view achieving the right balance of reliability,
market window of a product and cost as having the greatest effect on their
long-term link to a company. This has been long articulated and applies in different
economies and circumstances. Even in restricted competitive situations, such as a
market with few dominant players (e.g., the operating system market of today or the
database market of a few years ago), the principle applies and has given rise to open
source development. With the competitor being often only a mouse-click away,
today quality has even higher relevance. This applies to Web sites as well as to
commodity goods with either embedded or dedicated software deliveries. And the
principle certainly applies to investment goods, where suppliers are evaluated by a
long list of different quality attributes.
Yet there is a problem with quality in the software industry. By quality we mean the
bigger picture, such as delivering according to commitments. While solutions
abound, knowing which solutions work is the big question. What are the most
fundamental underlying principles in successful projects? What can be done right
now? What actually is good or better? What is good enough – considering the
immense market pressure and competition across the globe?
A simple – yet difficult to digest and implement – answer to these questions is that
software quality management is not simply a task, but rather a habit. It must be
engrained in the company culture. It is something that is visible in the way people
work, independent of their role. It certainly means that every single person
in the organization sees quality as her own business, not that of a quality manager or
a testing team. A simple yet effective test to quickly identify the state of practice
with respect to quality management is to ask around what quality means for an
employee and how he delivers according to this meaning. You will find that
many see it as a bulky and formal approach to be done to achieve necessary
certificates. Few exceptions exist, such as industries with safety and health impacts.
But even there, you will find different approaches to quality, depending on culture.
Those with carrot and stick will not achieve a true quality culture. Quality is a habit.
It is driven by objectives and not based on beliefs. It is primarily achieved when
each person in the organization knows and is aware of her own role in delivering
quality.
Quality is implemented along the product life-cycle. Fig. 2 shows some pivotal
quality-related activities mapped to the major life-cycle phases. Note that on the left
side strategic directions are set and a respective management system is
implemented. Towards the right side, quality related processes, such as test or
supplier audits are implemented. During evolution of the product with dedicated
services and customer feedback, the product is further optimized and the
management system is adapted where necessary.
A small example will illustrate this need. A software system might have strict
reliability constraints. Instead of simply stating that reliability should achieve less
than one failure per month in operation, which would be reactive, related quality
requirements should target underlying product and process needs to achieve such
reliability. During the strategy phase, the market or customer needs for reliability
need to be elicited. Is reliability important as an image or is it rather availability?
What is the perceived value of different failure rates? A next step is to determine
how these needs will be broken down to product features, components and
capabilities. Which architecture will deliver the desired reliability and what are the
cost impacts? What component and supplier qualification criteria need to be
established and maintained throughout the product life-cycle? Then the underlying
quality processes need to be determined. This should not be done ad-hoc and for
each single project individually but by tailoring organizational processes, such as
product life-cycle, project reviews, or testing to the specific needs of the product.
What test coverage is necessary and how will it be achieved? Which test equipment
and infrastructure for interoperability of components needs to be applied? What
checklists should be used in preparing for reviews and releases? These processes
need to be carefully applied during development. Quality control will be applied by
each single engineer and quality assurance will be done systematically for selected
processes and work products. Finally the evolution phase of the product needs to
establish criteria for service request management and assuring the right quality
level of follow-on releases and potential defect corrections. A key question to
address across all these phases is how to balance quality needs with necessary effort
and availability of skilled people. Both relate to business, but that is at times
overlooked. We have seen companies that due to cost and time constraints would
The concept of process maturity is not new. Many of the established quality models
in manufacturing use the same concept. This was summarized by Philip Crosby in
his bestselling book “Quality is Free” in 1979. He found from his broad experiences
as a senior manager in different industries that business success depends on quality.
With practical insight and many concrete case studies he could empirically link
process performance to quality. His credo was stated as: “Quality is measured by the
cost of quality which is the expense of nonconformance – the cost of doing things
wrong.”
First organizations must know where they are; they need to assess their processes.
The more detailed the results from such an assessment, the easier and more
straightforward it is to establish a solid improvement plan. That was the basic idea
with the “maturity path” concept proposed by Crosby in the 1970s. He distinguishes
five maturity stages, namely
Stage 1: Uncertainty
Stage 2: Awakening
Stage 3: Enlightening
Stage 4: Wisdom
Stage 5: Certainty
These four questions relate to the four basic quality management techniques of
prediction, detection, correction and prevention. The first step is to identify how
many defects there are and which of those defects are critical to product performance.
The underlying techniques are statistical methods of defect estimation, reliability
prediction and criticality assessment. These defects have to be detected by quality
control activities, such as inspections, reviews, unit test, etc. Each of these techniques
has its strengths and weaknesses, which explains why they ought to be combined to
be most efficient. There is little value in putting loads of people on test when in-depth
requirements reviews would be much faster and cheaper. Once defects are detected and
identified, the third step is to remove them. This sounds easier than it actually is due to
the many ripple effects each correction has on a system. Regression tests and reviews of
corrections are absolutely necessary to assure that quality won’t degrade with changes.
A final step is to embark on preventing these defects from re-occurring. Often engineers
and their management state that this actually should be the first and most relevant step.
We agree, but experience tells that again and again, people stumble across defect
avoidance simply because their processes won’t support it. In order to effectively avoid
defects engineering processes must be defined, systematically applied and
quantitatively managed. This being in place, defect prevention is a very cost-effective
means to boost both customer satisfaction and business performance, as many
high-maturity organizations such as Motorola, Boeing or Wipro show.
Defect removal is not about assigning blame but about building better quality and
improving the processes to ensure quality. Reliability improvement always needs
measurements on effectiveness (i.e., percentage of removed defects for a given activity)
compared to efficiency (i.e., effort spent for detecting and removing a defect in the
respective activity). Such measurement asks for the number of residual defects at a
given point in time or within the development process.
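Effectiveness and efficiency, as defined in parentheses above, can be sketched directly; the defect counts and effort figures in the example are invented for illustration:

```python
def effectiveness(defects_removed, defects_present):
    """Percentage of the defects present that a given activity removed."""
    return 100.0 * defects_removed / defects_present

def efficiency(effort_hours, defects_removed):
    """Average effort spent to detect and remove one defect in an activity."""
    return effort_hours / defects_removed

# Hypothetical review activity: 40 of 50 defects found, for 60 hours of effort.
print(effectiveness(40, 50))  # 80.0 (percent)
print(efficiency(60, 40))     # 1.5 (hours per defect)
```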
Defect estimation distinguishes static models, which look only at the different
components of the system and their inherent quality before the start of validation
activities, and reliability models, which look more dynamically during validation
activities at residual defects and failure rates.
Only a few studies have been published. They typically relate static defect estimation
to the number of already detected defects, independently of the activity that found
them, or to the famous error seeding, which is well known but rarely used: most
software engineers believe it is of no use to add errors to software when
there are still far too many defects in it, and when it is known that defect detection
costs several person-hours per defect.
Defects can be easily estimated based on the stability of the underlying software. All
software in a product can be separated into four parts according to its origin:
Software that is new or changed. This is the standard case where software had
been designed especially for this project, either internally or from a supplier.
Software reused but to be tested (i.e., reused from another project that was
never integrated and therefore still contains lots of defects; this includes ported
functionality). This holds for reused software with unclear quality status, such as
internal libraries.
Software reused from another project that is in testing (almost) at the same
time. This software might be partially tested, and therefore the overlapping of
the two test phases of the parallel projects must be accounted for to estimate
remaining defects. This is a specific segment of software in product lines or any
other parallel usage of the same software without having hardened it so far for
field usage.
Software completely reused from a stable product. This software is considered
stable and therefore has a rather low number of defects. This holds especially
for commercial off-the-shelf software components and open source software
that is heavily used.
The base of the calculation of new or changed software is the list of modules to be used
in the complete project (i.e., the description of the entire build with all its components).
A defect correction in one of these components typically results in a new version, while
a modification in functionality (in the context of the new project) results in a new
variant. Configuration management tools are used to distinguish the one from the other
while still maintaining a single source.
f = a × x + b × y + c × z + d × (w – x – y – z)
Course Module
CS-6209 Software Engineering 1
8
Week 6: Software Quality Management
with
x: the number of new or changed KStmt designed and to be tested within this
project. This software was specifically designed for that respective project. All
other parts of the software are reused with varying stability.
y: the number of KStmt that are reused but are unstable and not yet tested
(based on functionality that was designed in a previous project or release, but
was never externally delivered; this includes ported functionality from other
projects).
z: the number of KStmt that are tested in parallel in another project. This
software is new or changed for the other project and is entirely reused in the
project under consideration.
w: the number of KStmt in the total software – i.e., the size of this product in its
totality.
The factors a-d relate defects in software to size. They depend heavily on the
development environment, project size, maintainability degree and so on. Our
starting point for this initial estimation is actually driven by psychology. Any person
makes roughly one (non-editorial) defect in ten written lines of work. This applies
to code as well as a design document or e-mail, as was observed by the personal
software process (PSP) and many other sources [1,16,17]. The estimation of
remaining defects is language independent because defects are introduced per
thinking and editing activity of the programmer, i.e., visible by written statements.
This translates into 100 defects per KStmt. Half of these defects are found by careful
checking by the author which leaves some 50 defects per KStmt delivered at code
completion. Training, maturity and coding tools can further reduce the number
substantially. We found some 10-50 defects per KStmt depending on the maturity
level of the respective organization. This is based only on new or changed code, not
including any code that is reused or automatically generated.
Most of these original defects are detected by the author before the respective work
product is released. Depending on the underlying individual software process, 40-
80% of these defects are removed by the author immediately. We have
experienced in software that around 10-50 defects per KStmt remain. For the
following calculation we will assume that 30 defects/KStmt remain (which is
a common value [18]). Thus, the following factors can be used:
[Table of factors a-d; only the values 50% and 65% survive from the original layout.]
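As a rough illustration, the residual-defect formula above can be evaluated directly. Of the factors below, only a = 30 defects/KStmt (for new or changed code) comes from the text; b, c, and d are placeholder assumptions:

```python
def estimate_residual_defects(x, y, z, w, a=30.0, b=15.0, c=10.0, d=1.0):
    """Evaluate f = a*x + b*y + c*z + d*(w - x - y - z).

    x, y, z, w are sizes in KStmt as defined above. Factor a follows the
    text's 30 defects/KStmt for new code; b, c, d are illustrative only.
    """
    return a * x + b * y + c * z + d * (w - x - y - z)

# Example: 10 KStmt new code and 20 KStmt unstable reuse in a 100 KStmt product.
print(estimate_residual_defects(x=10, y=20, z=0, w=100))  # 670.0
```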
Since defects can never be entirely avoided, different quality control techniques are
used in combination for detecting defects during the product life-cycle. They are listed
in the sequence in which they are applied throughout the development phases, starting
with requirements and ending with system test:
Risk-based selection of focus areas (considering previously detected
failures, expected defect density, individual change history, customer's risk and
occurrence probability)
Unit testing
Focused testing by tracking the effort spent for analyses, reviews, and
inspections and separating according to requirements to find out areas not
sufficiently covered
Systematic testing by using test coverage measurements (e.g., C0 and C1
coverage) and improvement
Operational testing by dynamic execution already during integration testing
Automatic regression testing of any redelivered code
System testing by applying operational profiles and usage specifications.
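The C0 and C1 coverage measures mentioned above count different things: statements versus branch outcomes. A small sketch (with hand-rolled instrumentation, not a real coverage tool) shows that a single test can reach 100% statement coverage while exercising only half of the branch outcomes:

```python
# Hand-rolled branch instrumentation: record which outcomes of the single
# 'if' have been exercised, to approximate C1 (branch) coverage.
branch_outcomes = set()

def clamp(value, limit):
    taken = value > limit
    branch_outcomes.add(taken)   # record this branch outcome
    result = value
    if taken:
        result = limit
    return result

assert clamp(5, 3) == 3      # this one test executes every statement: 100% C0
print(len(branch_outcomes))  # 1 -> only 1 of 2 branch outcomes, i.e. 50% C1

assert clamp(2, 3) == 2      # a second test exercises the other outcome
print(len(branch_outcomes))  # 2 -> 100% C1
```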
We will further focus on several selected approaches that are applied for improved
defect detection before starting with integration and system test because those
techniques are most cost-effective.
Note that the starting point for effectively reducing defects and improving reliability is
to track all defects that are detected. Defects must be recorded for each defect
detection activity. Counting defects and deriving the reliability (that is, failures over
time) is the most widely applied and accepted method used to determine software
quality. Counting defects during the complete project helps to estimate the duration of
distinct activities (e.g., unit testing or subsystem testing) and improves the underlying
processes. Failures reported during system testing or field application must be traced
back to their primary causes and specific defects in the design (e.g., design decisions or
lack of design reviews).
Quality improvement activities must be driven by a careful look at what they mean
for the bottom line of the overall product cost. This means continuously investigating
what this best level of quality really means, both for the customers and for the
engineering teams who want to deliver it.
One does not build a sustainable customer relationship by delivering bad quality and
ruining one's reputation just to achieve a specific delivery date. And it is useless to spend
an extra amount on improving quality to a level nobody wants to pay for. The optimum
seemingly is in between. It means to achieve the right level of quality and to deliver in
time. Most important yet is to know from the beginning of the project what is actually
relevant for the customer or market and set up the project accordingly. Objectives will
be met if they are driven from the beginning.
We look primarily at factors such as cost of non-quality to follow through this business
reasoning of quality improvements. For this purpose we measure all cost related to
error detection and removal (i.e., cost of non-quality) and normalize by the size of the
product (i.e., normalize defect costs). We take a conservative approach in only
considering those effects that appear inside our engineering activities, i.e., not
considering opportunistic effects or any penalties for delivering insufficient quality.
The most cost-effective techniques for defect detection are requirements reviews. For
code, reviews, inspections and unit test are the most cost-effective techniques, alongside
static code analysis. Detecting defects in architecture and design documents has
considerable benefit from a cost perspective, because these defects are expensive to
correct at later stages. Assuming good quality specifications, major yields in terms of
reliability, however, can be attributed to better code, for the simple reason that there
are many more defects residing in code that were inserted during the coding activity.
We therefore provide more depth on techniques that help to improve the quality of
code, namely code reviews (i.e., code reviews and formal code inspections) and unit
test (which might include static and dynamic code analysis).
There are six possible paths of combining manual defect detection techniques in the
delivery of a piece of software from code complete until the start of integration test
(Fig. 4). The paths indicate the permutations of doing code reviews alone, performing
code inspections and applying unit test. Each path indicated by the arrows shows
which activities are performed on a piece of code. An arrow crossing a box means that
the activity is not applied. Defect detection effectiveness of a code inspection is much
higher than that of a code review. Unit test finds different types of defects than
reviews. However cost also varies depending on which technique is used, which
explains why these different permutations are used. In our experience code reviews are
the cheapest detection technique (with ca. 1-2 PH/defect), while manual unit test is
the most expensive (with ca. 1-5 PH/defect, depending on automation degree). Code
inspections lie somewhere in between. Although the best approach from a mere defect
detection perspective is to apply inspections and unit test, cost considerations and the
objective to reduce elapsed time and thus improve throughput suggest carefully
evaluating which path to follow in order to most efficiently and effectively detect and
remove defects.
Fig. 4: Six possible paths for modules between end of coding and start of
integration test (the entire set of modules may pass through code reviews,
formal code inspections, and unit test on the way to integration test)
Unit tests, however, combined with C0 coverage targets, have the highest effectiveness
for regression testing of existing functionality. Inspections, on the other hand, help in
detecting distinct defect classes that would otherwise only be found under real load (or
even stress) in the field.
Defects are not distributed homogeneously through new or changed code. An analysis
of many projects revealed the applicability of the Pareto rule: 20-30% of the modules
are responsible for 70-80% of the defects of the whole project. These critical
components need to be identified as early as possible, i.e., in the case of legacy systems
at start of detailed design, and for new software during coding. By concentrating on
these components the effectiveness of code inspections and unit testing is increased
and fewer defects have to be found during test phases. By concentrating on defect-prone
modules both effectiveness and efficiency are improved. Our main approach to
identify defect-prone software modules is a criticality prediction taking into account
several criteria. One criterion is the analysis of module complexity based on
complexity measurements. Other criteria concern the amount of new or changed code
in a module, and the number of field defects a module had in the preceding project.
Code inspections are first applied to heavily changed modules, in order to optimize
payback of the additional effort that has to be spent compared to the lower effort for
code reading. Formal code reviews are recommended even for very small changes
with a checking time shorter than two hours in order to profit from a good efficiency of
code reading. The effort for know-how transfer to another designer can be saved.
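A criticality prediction of this kind can be sketched as a weighted ranking over the criteria named above; the weights, field names, and sample data here are illustrative assumptions, not values from the course material:

```python
def rank_critical_modules(modules, top_fraction=0.2,
                          w_complexity=0.5, w_changed=0.3, w_defects=0.2):
    """Return names of the top_fraction most critical modules.

    modules: dicts with 'name', 'complexity', 'changed_kstmt', 'field_defects'
    (assumed pre-normalized to comparable scales). The weights are guesses.
    """
    def score(m):
        return (w_complexity * m["complexity"]
                + w_changed * m["changed_kstmt"]
                + w_defects * m["field_defects"])

    ranked = sorted(modules, key=score, reverse=True)
    cutoff = max(1, round(len(ranked) * top_fraction))
    return [m["name"] for m in ranked[:cutoff]]

mods = [{"name": "parser", "complexity": 9, "changed_kstmt": 4, "field_defects": 7},
        {"name": "ui",     "complexity": 3, "changed_kstmt": 1, "field_defects": 1},
        {"name": "io",     "complexity": 5, "changed_kstmt": 2, "field_defects": 2},
        {"name": "report", "complexity": 2, "changed_kstmt": 1, "field_defects": 0},
        {"name": "auth",   "complexity": 4, "changed_kstmt": 3, "field_defects": 1}]
print(rank_critical_modules(mods))  # ['parser'] -> the top 20% of five modules
```

The top 20% identified this way would then receive code inspections and full (C0 = 100%) unit testing, as described above.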
Accidental complexity in the product will definitely decrease productivity (e.g., gold
plating, additional rework, more test effort) and quality (more defects). A key to
keeping accidental complexity from creeping into the project is the measurement
and analysis of complexity throughout the life-cycle. Volume, structure, order or the
connections of different objects contribute to complexity. However, do they all account
for it equally? The clear answer is no, because different people with different skills
assign complexity subjectively, according to their experience in the area. Certainly
criticality must be predicted early in the life-cycle to effectively serve as a managerial
instrument for quality improvement, quality control effort estimation and resource
planning as soon as possible in a project. Tracing comparable complexity metrics for
different products throughout the life-cycle is advisable to find out when essential
complexity is overruled by accidental complexity. Care must be taken that the
complexity metrics are comparable, that is, they should measure the same factors of
complexity.
Having identified such overly critical modules, risk management must be applied. The
most critical and most complex, for instance, the top 5% of the analyzed modules are
candidates for a redesign. For cost reasons mitigation is not only achieved with
redesign. The top 20% should have a code inspection instead of the usual code
reading, and the top 80% should be at least entirely (C0 coverage of 100%) unit tested.
By concentrating on these components the effectiveness of code inspections and unit
test is increased and fewer defects have to be found during test phases. To achieve
feedback for improving predictions the approach is integrated into the development
process end-to-end (requirements, design, code, system test, deployment).
It must be emphasized that using criticality prediction techniques does not mean
attempting to detect all defects. Instead, they belong to the set of managerial
instruments that try to optimize resource allocation by focusing them on areas with
many defects that would affect the utility of the delivered product. The trade-off of
applying complexity-based predictive quality models must therefore be weighed carefully.
Our experiences show that, in accordance with other literature, correction of defects in
early phases is more efficient, because the designer is still familiar with the problem
and the correction delay during testing is reduced.
The effect and business case for applying complexity-based criticality prediction to a
new project can be summarized based on results from our own experience database
(taking a very conservative ratio of only 40% defects in critical components):
20% of all modules in the project were predicted as most critical (after coding);
these modules contained over 40% of all defects (up to release time).
Knowing from these and many other projects that
60% of all defects can theoretically be detected by the end of unit test, and that
defect correction during unit test and code reading costs less than 10%
compared to defect correction during system test,
it can be calculated that 24% of all defects can be detected early by investigating 20% of
all modules more intensively, with 10% of the effort of late defect correction
during test, therefore yielding a 20% total cost reduction for defect correction.
Additional costs for providing the statistical analysis are in the range of two person
days per project. Necessary tools are off the shelf and account for even less per project.
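The quoted figures follow from a short calculation over the stated assumptions:

```python
# Business-case check using the percentages stated above.
share_defects_in_critical = 0.40   # defects residing in the top 20% of modules
detectable_by_unit_test = 0.60     # share detectable up to the end of unit test

early_detected = share_defects_in_critical * detectable_by_unit_test
print(round(early_detected, 2))    # 0.24 -> 24% of all defects found early

early_cost_ratio = 0.10            # early correction costs <10% of late correction
saving = early_detected * (1 - early_cost_ratio)
print(round(saving, 3))            # 0.216 -> roughly the quoted 20% cost reduction
```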
PROJECT AND PROCESS
METRICS
Process Metrics
They are quantitative measures that enable you to gain
an insight into the efficiency of the software process.
Basic quality and productivity data are collected.
The data is analyzed, compared against past averages,
and assessed to determine whether productivity and
quality have increased.
Metrics are used to :
assess the status of an ongoing project
track potential risks
uncover problem areas before they go “critical,”
adjust work flow or tasks,
evaluate the project team’s ability to control the quality of software
work products.
A Good Manager Measures
Process Metrics
Measure the process to help update and change the
process as needed across many projects. They are
collected across all projects over long periods of time.
We measure the efficacy of a software process
indirectly.
That is, we derive a set of metrics based on the outcomes of the
process
Outcomes include
measures of errors uncovered before release of the software
defects delivered to and reported by end-users
work products delivered (productivity)
human effort expended
calendar time expended
Project Metrics
Measure specific aspects of a single project to improve
the decisions made on that project.
It helps to
Assess the status of ongoing project
Track risks
Find problem areas that can go critical
Adjust work flow
Evaluate the team’s ability to control the work
Typical project Metrics are:
Effort/time per software engineering task
Errors uncovered per review hour
Scheduled vs. actual milestone dates
Changes (number) and their characteristics
Distribution of effort on software engineering tasks
Typical Size-Oriented Metrics:
errors per KLOC (thousand lines of code)
defects per KLOC
$ per LOC
pages of documentation per KLOC
errors per person-month
Errors per review hour
LOC per person-month
$ per page of documentation
Typical Function-Oriented Metrics:
errors per Function Point (FP)
defects per FP
$ per FP
pages of documentation per FP
FP per person-month
Function Point
Function points (FP) are a unit measure for software
size, developed at IBM in 1979 by Allan Albrecht.
To determine your number of FPs, you classify a system
into five classes:
Transactions - External Inputs, External Outputs, External
Inquiries
Data storage - Internal Logical Files and External Interface Files
Each class is then weighted by complexity as
low/average/high
Multiplied by a value adjustment factor (determined by
asking questions based on 14 system characteristics)
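This counting scheme can be sketched in a few lines. The average-complexity weights below match the FP table later in this module, and the standard IFPUG adjustment formula VAF = 0.65 + 0.01 x (sum of the 14 ratings) is assumed:

```python
# Average-complexity weights for the five classes:
# External Inputs (EI), External Outputs (EO), External Inquiries (EQ),
# Internal Logical Files (ILF), External Interface Files (EIF).
AVG_WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}

def function_points(counts, gsc_total):
    """counts: class -> number of elements; gsc_total: sum of the
    14 general system characteristic ratings (each rated 0-5)."""
    ufp = sum(AVG_WEIGHTS[cls] * n for cls, n in counts.items())
    vaf = 0.65 + 0.01 * gsc_total      # value adjustment factor
    return ufp * vaf

# Example counts, matching the worked table in this module.
fp = function_points({"EI": 15, "EO": 10, "EQ": 5, "ILF": 7, "EIF": 11},
                     gsc_total=51)
print(round(fp, 2))  # 321.32
```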
Project Scheduling and
Tracking
Why Are Projects Late?
an unrealistic deadline established by someone outside the
software development group
changing customer requirements that are not reflected in
schedule changes;
an honest underestimate of the amount of effort and/or the
number of resources that will be required to do the job;
predictable and/or unpredictable risks that were not considered
when the project commenced;
technical difficulties that could not have been foreseen in
advance;
human difficulties that could not have been foreseen in
advance;
miscommunication among project staff that results in delays;
Scheduling Principles
PERT/CPM
PERT
Program Evaluation and Review Technique
Developed to handle uncertain activity times
CPM
Critical Path Method
Developed for industrial projects for which
activity times generally were known
Today’s project management software packages
have combined the best features of both
approaches.
PERT/CPM
Project managers rely on PERT/CPM to help them
answer questions such as:
What is the total time to complete the project?
Example: Frank’s Fine Floats
Activity  Description          Immediate Predecessors  Completion Time (days)
A         Initial Paperwork    ---                     10
B         Build Body           A                       20
C         Finish Body          B                       5
D         Build Frame          C                       10
F         Finish Paperwork     A                       15
H         Final Paperwork      A                       15
G         Mount Body to Frame  C, F                    5
E         Complete the tasks   D, G, H                 20
Example: Frank’s Fine Floats
Project Network
Draw the network diagram and determine
the critical path.
Account Code  Activities              Predecessors  Time (t)
A             Gather data             None          4
B             Analyze the problem     None          4
C             Identify activities     None          4
D             Identify dependencies   A             6
E             Estimate resources      A             8
F             Create project charts   B             14
G             Allocate people         B             12
H             Distribute task         D, E          8
I             Program Coding          C             20
J             Program Debugging       G, I          6
K             Project Implementation  F, H, J       8
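A hedged solution sketch for this exercise: a forward pass computes each activity's earliest finish time, and walking back along the binding predecessor at each step recovers the critical path.

```python
# Activity durations and predecessors from the exercise table.
durations = {"A": 4, "B": 4, "C": 4, "D": 6, "E": 8, "F": 14,
             "G": 12, "H": 8, "I": 20, "J": 6, "K": 8}
preds = {"A": [], "B": [], "C": [], "D": ["A"], "E": ["A"], "F": ["B"],
         "G": ["B"], "H": ["D", "E"], "I": ["C"], "J": ["G", "I"],
         "K": ["F", "H", "J"]}

earliest_finish = {}

def ef(act):
    # Forward pass: earliest finish = latest predecessor finish + own duration.
    if act not in earliest_finish:
        start = max((ef(p) for p in preds[act]), default=0)
        earliest_finish[act] = start + durations[act]
    return earliest_finish[act]

project_duration = max(ef(a) for a in durations)
print(project_duration)  # 38

# Backward walk: at each step follow the predecessor that finishes last.
path, node = [], max(durations, key=ef)
while True:
    path.append(node)
    if not preds[node]:
        break
    node = max(preds[node], key=ef)
print(path[::-1])  # ['C', 'I', 'J', 'K'] -> the critical path
```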
DECISION TREE
Considerations in Decision Making
What are the action choices?
What are the pros and cons of each possible action?
What are the consequences of each choice?
What is the probability of each consequence?
What is the relative importance of each possibility?
What is the identification of the best course of action?
The Make-Buy Decision
The make-buy decision is an important concern
these days.
Customers often do not have a good feel for when an
application may be bought off the shelf and when it
needs to be developed.
The software engineer needs to perform a cost
benefit analysis in order to give the customer a
realistic picture of the true costs of the proposed
development options.
The use of a decision tree is a reasonable way to
organize this information.
Decision Trees
excellent tools for helping you to choose between
several courses of action
provide a highly effective structure within which
you can lay out options and investigate the
possible outcomes of choosing those options
help you to form a balanced picture of the risks
and rewards associated with each possible
course of action.
Decision Tree Symbols
Box - represents a decision node.
node. Lines from the box
denote the decision alternatives (one line per decision
alternative).
Alternative Name 1
Alternative Name 2
Decision Tree Symbols
Circle - represents a chance node. Lines from the circle
denote the events that could occur at the chance node. The
name of the chance-driven event goes above the line. The
probability of the event goes below the line.
Event 1
P1
Event 2
P2
Decision Tree Symbols
Triangle (end node) - represents a final outcome, e.g.:
Outcome 1
Outcome 2
Creating a Decision Tree
Start with a decision that you need to make. Draw a small square to
represent this towards the left of a large piece of paper.
From this box draw out lines towards the right for each possible
solution, and write that solution along the line. Keep the lines as far
apart as possible so that you can expand your thoughts.
At the end of each line, consider the results. If the result of taking that
decision is uncertain, draw a small circle. If the result is another
decision that you need to make, draw another square.
Starting from the new decision squares on your diagram, draw out
lines representing the options that you could select. From the circles
draw lines representing possible outcomes. Keep on doing this until
you have drawn out as many of the possible outcomes and decisions
as you can see.
[Diagram: a decision node with Choice 1 and Choice 2; each choice leads to a
chance node whose branches (probability P each) end in Outcomes 1a/1b and
2a/2b at terminal nodes TN 1-TN 4.]
Sample Problem 1
Develop a decision tree for a software-based system, X.
The SE organization can build the system from scratch or buy
an available software product and modify it to meet local
needs.
If the system is to be built from scratch, there is a 70%
probability that the job will be difficult. Using estimation
techniques, the project planner projects that a difficult
development effort will cost 450,000 BHD and a simple
development effort is estimated to cost 380,000 BHD.
If the organization buys an available product, the
probability that only minor changes will be needed is 30%,
and this is estimated to cost 210,000 BHD. On the other hand,
addressing major changes is estimated to cost
400,000 BHD.
[Decision tree: build branch - difficult (0.70) at 450,000 BHD, simple (0.30) at
380,000 BHD; buy branch - minor changes (0.30) at 210,000 BHD, major changes
(0.70) at 400,000 BHD.]
Evaluating Decision Trees
Start by assigning a cash value or score to each
possible outcome. Estimate how much you think
it would be worth to you if that outcome came
about.
Next look at each circle (representing an
uncertainty point) and estimate the probability of
each outcome. If you use percentages, the total
must come to 100% at each circle.
Calculating Decision Tree Values
Start on the right hand side of the decision tree, and
work back towards the left.
Where you are calculating the value of uncertain
outcomes (circles on the diagram), do this by multiplying
the value of the outcomes by their probability. The total
for that node of the tree is the total of these values.
[Evaluated decision tree: the expected cost of the buy branch is 343,000 BHD.]
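The expected values can be reproduced with the probabilities and costs from Sample Problem 1:

```python
# Expected cost of each branch = sum of probability * outcome cost.
build = 0.70 * 450_000 + 0.30 * 380_000   # build from scratch
buy = 0.30 * 210_000 + 0.70 * 400_000     # buy and modify

print(round(build))  # 429000 BHD
print(round(buy))    # 343000 BHD -> buying has the lower expected cost
```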
Sample Problem # 2
A company is trying to determine whether or not to drill an oil
well. If it decides not to drill the well, no money will be made
or lost. Therefore, the value of the decision not to drill can
immediately be assigned a sum of zero Bahraini dinars.
Process-based Estimation
COCOMO
Steps in Estimation
Solution - LOC
Example 2:
Function Point
Uses a measure of functionality
Derived directly from 5 information domain
characteristics:
- User Input
- User Output
- User Inquiries
- Files
- External interfaces
Information Domain Value    Count  Simple  Average  Complex  FP Count
External Inputs             15     3       4        6
External Outputs            10     4       5        7
External Inquiries          5      3       4        6
Internal Logical Files      7      7       10       15
External Interface Files    11     5       7        10
TOTAL
Factor
Backup and Recovery 5
Data communications 4
Distributed processing 4
Performance critical 4
Existing operating environment 4
Online data entry 4
Input transaction over multiple screens 3
ILFs updated online 3
Information Domain Values Complex 4
Internal Processing Complex 3
Code designed for Reuse 4
Conversion/installation in design 3
Multiple installations 3
Application designed for change 3
Past projects show a burdened labor rate of 9,000 BD/PM.
Average productivity is 8 FP/PM.
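Putting these numbers together (a sketch; we assume average complexity weights, the counts from the table above, and a total adjustment value of 51, the sum of the 14 factor ratings listed):

```python
# Unadjusted FP with average weights: EI=4, EO=5, EQ=4, ILF=10, EIF=7.
ufp = 15 * 4 + 10 * 5 + 5 * 4 + 7 * 10 + 11 * 7
print(ufp)                   # 277

vaf = 0.65 + 0.01 * 51       # value adjustment factor from the 14 ratings
fp = ufp * vaf
print(round(fp, 1))          # 321.3 adjusted function points

effort_pm = fp / 8           # person-months at 8 FP/PM productivity
cost_bd = effort_pm * 9000   # at a burdened labor rate of 9,000 BD/PM
print(round(effort_pm, 1))   # 40.2
print(round(cost_bd))        # 361485
```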
Information Domain Value    Simple  Average  Complex
External Outputs            4       5        7
External Inquiries          3       4        6
Internal Logical Files      7       10       15
External Interface Files    5       7        10
TOTAL
Past projects show a burdened labor rate of P8,000/PM.
Average productivity is 9 FP/PM.
Assume average complexity values.
The total adjustment value is estimated to be 49.
COCOMO II
uses object points, using counts of the number of
- screens
- reports
- components
Object Type      Complexity Weight
                 Simple  Medium  Difficult
Screen           1       2       3
Report           2       5       8
3GL Component    -       -       10
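An object-point count can then be sketched as a weighted sum over these elements. The weights follow the standard COCOMO II application-composition table (Screen 1/2/3, Report 2/5/8, 3GL component 10); the counts and reuse percentage below are made-up examples:

```python
# Weights per object type and complexity (COCOMO II application composition).
WEIGHTS = {("screen", "simple"): 1, ("screen", "medium"): 2,
           ("screen", "difficult"): 3,
           ("report", "simple"): 2, ("report", "medium"): 5,
           ("report", "difficult"): 8,
           ("3gl", "difficult"): 10}

def new_object_points(counts, reuse_fraction=0.0):
    """counts: (object type, complexity) -> number of such objects."""
    op = sum(WEIGHTS[key] * n for key, n in counts.items())
    return op * (1 - reuse_fraction)   # discount object points served by reuse

nop = new_object_points({("screen", "medium"): 4,
                         ("report", "simple"): 2,
                         ("3gl", "difficult"): 1},
                        reuse_fraction=0.20)
print(nop)  # (4*2 + 2*2 + 10) * 0.8 = 17.6
```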
Introduction
A software process is a set of related activities that leads to the production of a
software product. These activities may involve the development of software from
scratch in a standard programming language like Java or C. However, business
applications are not necessarily developed in this way. New business software is
now often developed by extending and modifying existing systems or by configuring
and integrating off-the-shelf software or system components.
There are many different software processes but all must include four activities that
are fundamental to software engineering:
Software specification. The functionality of the software and constraints on
its operation must be defined.
Software design and implementation. The software to meet the
specification must be produced.
Software validation. The software must be validated to ensure that it does
what the customer wants.
Software evolution. The software must evolve to meet changing customer
needs.
In some form, these activities are part of all software processes. In practice, of
course, they are complex activities in themselves and include sub-activities such as
requirements validation, architectural design, unit testing, etc. There are also
supporting process activities such as documentation and software configuration
management.
Week 7: Software Development Method
When we describe and discuss processes, we usually talk about the activities in
these processes such as specifying a data model, designing a user interface, etc., and
the ordering of these activities. However, as well as activities, process descriptions
may also include:
1. Products, which are the outcomes of a process activity. For example, the
outcome of the activity of architectural design may be a model of the software
architecture.
2. Roles, which reflect the responsibilities of the people involved in the process.
Examples of roles are project manager, configuration manager, programmer,
etc.
3. Pre- and post-conditions, which are statements that are true before and after
a process activity has been enacted or a product produced. For example,
before architectural design begins, a pre-condition may be that all
requirements have been approved by the customer; after this activity is
finished, a post-condition might be that the UML models describing the
architecture have been reviewed.
Software processes are complex and, like all intellectual and creative processes, rely
on people making decisions and judgments. There is no ideal process and most
organizations have developed their own software development processes.
Processes have evolved to take advantage of the capabilities of the people in an
organization and the specific characteristics of the systems that are being
developed. For some systems, such as critical systems, a very structured
development process is required. For business systems, with rapidly changing
requirements, a less formal, flexible process is likely to be more effective.
Although there is no ‘ideal’ software process, there is scope for improving the
software process in many organizations. Processes may include outdated
techniques or may not take advantage of the best practice in industrial software
engineering. Indeed, many organizations still do not take advantage of software
engineering methods in their software development.
These generic models are not definitive descriptions of software processes. Rather,
they are abstractions of the process that can be used to explain different approaches
to software development. You can think of them as process frameworks that may be
extended and adapted to create more specific software engineering processes.
These models are not mutually exclusive and are often used together, especially for
large systems development. For large systems, it makes sense to combine some of
the best features of the waterfall and the incremental development models. You
need to have information about the essential system requirements to design a
software architecture to support these requirements. You cannot develop this
incrementally. Sub-systems within a larger system may be developed using different
approaches. Parts of the system that are well understood can be specified and
developed using a waterfall-based process. Parts of the system which are difficult to
specify in advance, such as the user interface, should always be developed using an
incremental approach.
The principal stages of the waterfall model directly reflect the fundamental
development activities:
1. Requirements analysis and definition. The system’s services, constraints,
and goals are established by consultation with system users. They are then
defined in detail and serve as a system specification.
2. System and software design. The systems design process allocates the
requirements to either hardware or software systems by establishing an
overall system architecture. Software design involves identifying and
describing the fundamental software system abstractions and their
relationships.
3. Implementation and unit testing. During this stage, the software design is
realized as a set of programs or program units. Unit testing involves verifying
that each unit meets its specification.
4. Integration and system testing. The individual program units or programs
are integrated and tested as a complete system to ensure that the software
requirements have been met. After testing, the software system is delivered
to the customer.
5. Operation and maintenance. Normally (although not necessarily), this is the
longest life cycle phase. The system is installed and put into practical use.
Maintenance involves correcting errors which were not discovered in earlier
stages of the life cycle, improving the implementation of system units and
enhancing the system’s services as new requirements are discovered.
In principle, the result of each phase is one or more documents that are approved
(‘signed off’). The following phase should not start until the previous phase has
finished. In practice, these stages overlap and feed information to each other. During
design, problems with requirements are identified. During coding, design problems
are found and so on. The software process is not a simple linear model but involves
feedback from one phase to another. Documents produced in each phase may then
have to be modified to reflect the changes made.
Because of the costs of producing and approving documents, iterations can be costly
and involve significant rework. Therefore, after a small number of iterations, it is
normal to freeze parts of the development, such as the specification, and to continue
with the later development stages. Problems are left for later resolution, ignored, or
programmed around. This premature freezing of requirements may mean that the
system won’t do what the user wants. It may also lead to badly structured systems
as design problems are circumvented by implementation tricks.
During the final life cycle phase (operation and maintenance) the software is put
into use. Errors and omissions in the original software requirements are discovered.
Program and design errors emerge and the need for new functionality is identified.
The system must therefore evolve to remain useful. Making these changes (software
maintenance) may involve repeating previous process stages.
Incremental development
Incremental development is based on the idea of developing an initial
implementation, exposing this to user comment and evolving it through several
versions until an adequate system has been developed (Figure 6.2). Specification,
development, and validation activities are interleaved rather than separate, with
rapid feedback across activities.
Each increment or version of the system incorporates some of the functionality that
is needed by the customer. Generally, the early increments of the system include the
most important or most urgently required functionality. This means that the
customer can evaluate the system at a relatively early stage in the development to
see if it delivers what is required. If not, then only the current increment has to be
changed and, possibly, new functionality defined for later increments.
Incremental development in some form is now the most common approach for the
development of application systems. This approach can be either plan-driven, agile,
or, more usually, a mixture of these approaches. In a plan-driven approach, the
system increments are identified in advance; if an agile approach is adopted, the
early increments are identified but the development of later increments depends on
progress and customer priorities.
You can develop a system incrementally and expose it to customers for comment,
without actually delivering it and deploying it in the customer’s environment.
Incremental delivery and deployment means that the software is used in real,
operational processes. This is not always possible as experimenting with new
software can disrupt normal business processes.
This informal reuse takes place irrespective of the development process that is
used. However, in the 21st century, software development processes that focus
on the reuse of existing software have become widely used. Reuse-oriented
approaches rely on a large base of reusable software components and an
integrating framework for the composition of these components. Sometimes,
these components are systems in their own right (COTS or commercial off-the-
shelf systems) that may provide specific functionality such as word processing or
a spreadsheet.
3. System design with reuse. During this phase, the framework of the system
is designed, or an existing framework is reused. The designers take into
account the components that are reused and organize the framework to cater
for this. Some new software may have to be designed if reusable components
are not available.
4. Development and integration. Software that cannot be externally
procured is developed, and the components and COTS systems are
integrated to create the new system. System integration, in this model,
may be part of the development process rather than a separate activity.
There are three types of software component that may be used in a reuse-
oriented process:
1. Web services that are developed according to service standards and which
are available for remote invocation.
2. Collections of objects that are developed as a package to be integrated
with a component framework such as .NET or J2EE.
3. Stand-alone software systems that are configured for use in a particular
environment.
Risk Management
(Mitigation and Monitoring)
• An effective strategy must consider three issues:
– Risk avoidance
– Risk monitoring
– Risk management and planning
• Risk mitigation:
– Achieved by developing a plan.
Software Implementation
In this chapter, we will study programming methods, documentation, and
challenges in software implementation.
Structured Programming
In the process of coding, the lines of code keep multiplying and the size of the
software increases. Gradually, it becomes next to impossible to remember the
flow of the program. If one forgets how the software and its underlying
programs, files, and procedures are constructed, it becomes very difficult to
share, debug, and modify the program. The solution to this is structured
programming. It encourages the developer to use subroutines and loops instead
of simple jumps in the code, thereby bringing clarity to the code and improving
its efficiency. Structured programming also helps the programmer reduce coding
time and organize the code properly.
Structured programming states how the program shall be coded. It uses three main
concepts:
1. Top-down analysis - A software system is always made to perform some
rational work. This rational work is known as the problem in software
parlance. Thus it is very important that we understand how to solve the
problem. Under top-down analysis, the problem is broken down into small
pieces, each of which has some significance. Each problem is individually
solved, and the steps to solve it are clearly stated.
2. Modular programming - While programming, the code is broken down into
smaller groups of instructions. These groups are known as modules,
subprograms, or subroutines. Modular programming discourages jumps using
'goto' statements in the program, which often makes the program flow
non-traceable. Jumps are prohibited and a modular format is encouraged in
structured programming.
3. Structured coding - In reference to top-down analysis, structured coding
subdivides the modules into further smaller units of code in the order of
their execution.
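As a minimal sketch of these ideas (the task and function names are illustrative, not taken from this module), a small score-summarizing problem can be broken down top-down into subroutines, with a loop in place of jump-style control flow:

```python
# Structured-programming sketch: the problem is decomposed top-down into
# small subroutines (modules), and iteration uses a loop rather than jumps.

def read_scores(raw):
    """Module 1: parse comma-separated text into a list of integer scores."""
    return [int(part) for part in raw.split(",")]

def average(scores):
    """Module 2: compute the average with a structured loop."""
    total = 0
    for score in scores:  # structured iteration instead of goto-style jumps
        total += score
    return total / len(scores)

def summarize(raw):
    """Top-level routine: composes the smaller modules."""
    scores = read_scores(raw)
    return {"count": len(scores), "average": average(scores)}

print(summarize("70, 85, 90"))
```

Because each module solves one small piece of the problem, it can be read, tested, and modified on its own, which is exactly the sharing and debugging benefit the text describes.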
Functional Programming
Functional programming is a style of programming that uses the concepts
of mathematical functions. A function in mathematics should always produce the
same result on receiving the same argument. In procedural languages, the flow of
the program runs through procedures, i.e. the control of program is transferred to
the called procedure. While control flow is transferring from one procedure to
another, the program changes its state.
Course Module
CS-6209 Software Engineering 1
Week 8-9: Software Implementation and Documentation
Common Lisp, Scala, Haskell, Erlang, and F# are some examples of functional
programming languages.
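The contrast above can be sketched in Python (used here only for illustration; it is not one of the functional languages listed): a pure function always produces the same result for the same argument, while a procedure that reads and changes shared program state does not.

```python
# Pure vs. stateful: a minimal sketch of the functional-programming idea.

def scale(x, factor=2):
    """Pure: the result depends only on the arguments."""
    return x * factor

state = {"factor": 2}

def scale_stateful(x):
    """Impure: reads and mutates shared state between calls."""
    state["factor"] += 1
    return x * state["factor"]

print(scale(10), scale(10))                    # same argument, same result
print(scale_stateful(10), scale_stateful(10))  # 30 then 40: state changed
```

The second pair of calls illustrates the "program changes its state" point: the same argument yields different results because control flow altered shared state in between.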
Programming style
Programming style is a set of coding rules followed by all the programmers to write
the code. When multiple programmers work on the same software project, they
frequently need to work with the program code written by some other developer.
This becomes tedious, or at times impossible, if all developers do not follow
a standard programming style to code the program.
Coding Guidelines
The practice of coding style varies with organizations, operating systems, and
the language of coding itself.
Variables - This mentions how variables of different data types are declared
and defined.
Comments - This is one of the important coding components, as the comments
included in the code describe what the code actually does, along with all
other associated descriptions. Comments also help in creating help
documentation for other developers.
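As a hypothetical illustration of such guidelines (the specific rules shown, such as snake_case variables and UPPER_CASE constants, are invented for this example and are not an organizational standard from this module), style-guided code might look like:

```python
# Illustrative coding-style sketch; the naming and commenting rules here are
# example guidelines only, not a mandated standard.

MAX_LOGIN_ATTEMPTS = 3  # constants: UPPER_CASE, declared at the top

def is_locked_out(failed_attempts):
    """Return True when the user has exhausted the allowed login attempts.

    Docstrings like this describe what the code does, so other developers
    (and help-documentation tools) can follow it.
    """
    # Variables: descriptive snake_case names rather than single letters.
    remaining_attempts = MAX_LOGIN_ATTEMPTS - failed_attempts
    return remaining_attempts <= 0

print(is_locked_out(3), is_locked_out(1))  # True False
```

Whatever the exact rules, the point is consistency: any programmer on the project can read this declaration-and-comment pattern without learning a new personal style.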
Target-Host - The software being developed in the organization needs to be
designed for host machines at the customer's end. But at times, it is
impossible to design software that works on the target machines.
Software Documentation
Software documentation is an important part of the software process. A
well-written document provides a valuable tool and serves as a repository of
the information necessary to know about the software process. Software
documentation also provides information about how to use the product.
Sources for this document can include previously stored data about the
software, software already running at the client's end, client interviews,
and so on.
There are various automated tools available, and some come with the
programming language itself. For example, Java comes with the JavaDoc tool
to generate technical documentation from code.
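Python offers an analogous mechanism: the bundled pydoc tool generates documentation from docstrings, much as JavaDoc reads documentation comments. A minimal sketch (the `deposit` function is illustrative, not from this module):

```python
# Language-supported documentation: the docstring below is attached to the
# function object itself, which is what generators such as pydoc read.

def deposit(balance, amount):
    """Add `amount` to `balance` and return the new balance.

    Raises ValueError if amount is negative.
    """
    if amount < 0:
        raise ValueError("amount must be non-negative")
    return balance + amount

# The first docstring line is the one-line summary a doc generator shows.
print(deposit.__doc__.splitlines()[0])
```

Because the documentation lives next to the code, it is more likely to stay accurate as the implementation changes.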
Requirements Engineering
Week 10
• Inception— ask a set of questions that establish …
– basic understanding of the problem
– the people who want a solution
– the nature of the solution that is desired, and
– the effectiveness of preliminary communication and
collaboration between the customer and the developer
• Elicitation—elicit requirements from all stakeholders
• Elaboration—create an analysis model that identifies
data, function and behavioral requirements
• Negotiation—agree on a deliverable system that is
realistic for developers and customers
• Specification—can be any one (or more) of the following:
– A written document
– A set of models
– A formal mathematical model
– A collection of user scenarios (use-cases)
– A prototype
• Validation—a review mechanism that looks for
– errors in content or interpretation
– areas where clarification may be required
– missing information
– inconsistencies (a major problem when large products or systems
are engineered)
– conflicting or unrealistic (unachievable) requirements.
• Requirements management
Inception
• Identify stakeholders
– “who else do you think I should talk to?”
– Recognize multiple points of view
– Work toward collaboration
• The first questions
– Who is behind the request for this work?
– Who will use the solution?
– What will be the economic benefit of a successful
solution?
– Is there another source for the solution that you
need?
Eliciting Requirements
• Find out about the application domain, the
services that the system should provide and the
system’s operational constraints.
• May involve end users, managers, engineers
involved in maintenance, domain experts, trade
unions etc. (They are called stakeholders)
• the goal is
– to identify the problem
– propose elements of the solution
– negotiate different approaches, and
– specify a preliminary set of solution requirements
Building an Analysis Model
• Elements of the analysis model
– Scenario-based elements
• Functional—processing narratives for software functions
• Use-case—descriptions of the interaction between an
“actor” and the system
– Class-based elements
• Implied by scenarios
– Behavioral elements
• State diagram
– Flow-oriented elements
• Data flow diagram
Scenario Based Modeling
Use Cases
• A collection of user scenarios that describe the thread of usage of a
system
• Each scenario is described from the point-of-view of an “actor”—a
person or device that interacts with the software in some way
• Each scenario answers the following questions:
– Who is the primary actor, the secondary actor(s)?
– What are the actor’s goals?
– What preconditions should exist before the story begins?
– What main tasks or functions are performed by the actor?
– What extensions might be considered as the story is described?
– What variations in the actor’s interaction are possible?
– What system information will the actor acquire, produce, or change?
– Will the actor have to inform the system about changes in the
external environment?
– What information does the actor desire from the system?
– Does the actor wish to be informed about unexpected changes?
Hospital Management System
Class Based Modeling
Defining Attributes of a Class
• Attributes of a class are those nouns from the
grammatical parse that reasonably belong to a class
• Attributes hold the values that describe the current
properties or state of a class
• In identifying attributes, the following question should be
answered
– What data items (composite and/or elementary) will
fully define a specific class in the context of the
problem at hand?
• Usually an item is not an attribute if more than one of
them is to be associated with a class
Defining Operations of a Class
• Operations define the behavior of an object.
• An operation has knowledge about the state of a class
and the nature of its associations.
• The action performed by an operation is based on the
current values of the attributes of a class.
• Using a grammatical parse again, circle the verbs; then
select the verbs that relate to the problem domain
classes that were previously identified.
Example Class Box

Attributes:
+ componentID
- telephoneNumber
- componentStatus
- delayTime
- masterPassword
- numberOfTries

Operations:
+ program()
+ display()
+ reset()
+ query()
- modify()
+ call()
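The class box above can be sketched in code. This is a hypothetical Python rendering (the module gives no implementation, and the class name SystemComponent is illustrative): UML '+' (public) members keep plain names, while '-' (private) members follow Python's leading-underscore convention.

```python
# Sketch of the example class box. '+' members are public; '-' members use
# the leading-underscore convention Python uses to mark them as private.

class SystemComponent:
    def __init__(self, component_id, telephone_number, master_password):
        self.componentID = component_id           # + public attribute
        self._telephoneNumber = telephone_number  # - private attributes
        self._componentStatus = "idle"
        self._delayTime = 0
        self._masterPassword = master_password
        self._numberOfTries = 0

    def program(self):                # + public operations
        self._componentStatus = "programmed"

    def display(self):
        return self._componentStatus

    def reset(self):
        self._numberOfTries = 0

    def query(self):
        return {"id": self.componentID, "status": self._componentStatus}

    def _modify(self, status):        # - private operation
        self._componentStatus = status

    def call(self):
        return self._telephoneNumber

component = SystemComponent("C-100", "555-0100", "secret")
component.program()
print(component.query())
```

The attributes hold the object's state and the operations act on that state, matching the class-based modeling rules stated above.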
Association, Generalization and
Dependency (Ref: Fowler)
• Association
– Represented by a solid line between two classes directed from the
source class to the target class
– Used for representing (i.e., pointing to) object types for attributes
– May also be a part-of relationship (i.e., aggregation), which is
represented by a diamond-arrow
• Generalization
– Portrays inheritance between a super class and a subclass
– Is represented by a line with a triangle at the target end
• Dependency
– A dependency exists between two elements if changes to the
definition of one element (i.e., the source or supplier) may cause
changes to the other element (i.e., the client)
– Examples
• One class calls a method of another class
• One class utilizes another class as a parameter of a method
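A minimal sketch of the three relationships in Python (the class names Person, Employee, Address, and Logger are illustrative, not from this module):

```python
# Association, generalization, and dependency in code form.

class Address:
    def __init__(self, city):
        self.city = city

class Logger:
    def log(self, message):
        print(message)

class Person:
    """Association: a Person holds a reference to an Address object."""
    def __init__(self, name, address):
        self.name = name
        self.address = address  # attribute of an object type (association)

class Employee(Person):
    """Generalization: Employee inherits from (is-a) Person."""
    def pay(self, logger):
        # Dependency: Logger is used as a method parameter; a change to
        # Logger's interface may force a change here.
        logger.log(f"paying {self.name}")

employee = Employee("Ada", Address("London"))
employee.pay(Logger())
```

In a class diagram these would appear, respectively, as a solid directed line, a line with a triangle at the superclass end, and a dashed dependency arrow.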
Example Class Diagram (figure): classes include Accountant, Auditor, Record
Keeper, Report Generator, Error Log, Input Handler, Account, and Account
List (one association is marked 1..n).
Flow-oriented Modeling
Data Modeling
• Identify the following items
– Data objects (Entities)
– Data attributes
– Relationships
– Cardinality (number of occurrences)
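The four items above can be sketched with Python dataclasses; the entities Customer and Order, their attributes, and the 1..n relationship between them are illustrative assumptions, not examples from this module:

```python
# Data-modeling sketch: two entities, their attributes, and a 1..n
# relationship expressed as a list on the "one" side.

from dataclasses import dataclass, field

@dataclass
class Order:          # data object (entity)
    order_id: int     # data attributes
    total: float

@dataclass
class Customer:       # data object (entity)
    name: str
    orders: list = field(default_factory=list)  # relationship, cardinality 1..n

alice = Customer("Alice")
alice.orders.append(Order(1, 19.99))
alice.orders.append(Order(2, 5.00))
print(len(alice.orders))  # cardinality on the "many" side: 2 orders
```

The same structure maps directly onto an entity-relationship diagram: each dataclass is an entity box, each field an attribute, and the list a 1..n relationship line.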
Data Flow and Control Flow
Diagram Layering and Process Refinement
Context-level diagram
Level 1 diagram
Process Specification
Elements of the Analysis Model

Object-oriented Analysis          Structured Analysis
Scenario-based modeling:          Flow-oriented modeling:
  Use case text                     Data structure diagrams
  Use case diagrams                 Data flow diagrams
  Activity diagrams                 Control-flow diagrams
  Swim lane diagrams                Processing narratives
Class-based modeling:             Behavioral modeling:
  Class diagrams                    State diagrams
  Analysis packages                 Sequence diagrams
  CRC models
  Collaboration diagrams
Validating Requirements
• Is each requirement consistent with the overall objective
for the system/product?
• Have all requirements been specified at the proper level
of abstraction? That is, do some requirements provide a
level of technical detail that is inappropriate at this
stage?
• Is the requirement really necessary or does it represent
a feature that may not be essential to the objective of the
system?
• Does each requirement have attribution? That is, is a
source (generally, a specific individual) noted for each
requirement?
• Do any requirements conflict with other requirements?
• Is each requirement achievable in the technical
environment that will house the system or product?
• Is each requirement testable, once implemented?
• Does the requirements model properly reflect the
information, function and behavior of the system to be
built?
• Are all patterns consistent with customer requirements?
Scenario for collecting medical history
in MHC-PMS