Advanced Software Engineering Notes
Figure 1.9: Control flow graphs of the programs of Figures 1.8(a) and (b).
Let us now try to understand why a program having good control flow
structure would be easier to develop and understand.
To understand a program, we may start with the input data and check, by running
through the program, how each statement processes (transforms) the input data
until the output is produced.
For example, for the CFG of Figure 1.9(a), you would have to
understand the execution of the program along the paths 1-2-3-7-8-10,
1-4-5-6-9-10, and 1-4-5-2-3-7-8-10.
A program having a messy control flow (i.e., flow chart) structure
would have a large number of execution paths (see Figure 1.10).
Consequently, it would become extremely difficult to determine all
the execution paths, and tracing the execution sequence along all the
paths trying to understand them can be nightmarish. It is therefore
evident that a program having a messy flow chart representation
would indeed be difficult to understand and debug.
Figure 1.10: CFG of a program having too many GO TO statements.
Are GO TO statements the culprits?
GO TO statements alter the flow of control arbitrarily, resulting in too
many paths. But then, why does the use of too many GO TO statements
make a program hard to understand?
A programmer trying to understand a program would have to mentally
trace and understand the processing that takes place along all the paths
of the program, making program understanding and debugging
extremely complicated.
Soon it became widely accepted that good programs should have very
simple control structures.
The use of flow charts to design good control flow structures of programs
became widespread.
Structured programming—a logical extension
The need to restrict the use of GO TO statements was recognised by
everybody.
However, many programmers were still using assembly languages.
JUMP instructions are frequently used for program branching in
assembly languages.
Bohm and Jacopini proved that only three programming constructs—sequence,
selection, and iteration—were sufficient to express any programming
logic.
An example of a sequence statement is an assignment statement of the
form a=b;.
Examples of selection and iteration statements are the if-then-else
and the do-while statements respectively.
A program is called structured when it uses only the
sequence, selection, and iteration types of
constructs and is modular.
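The following is a minimal sketch (in Python, which these notes do not otherwise use) of a small structured program built only from the three constructs; since Python has no do-while statement, an ordinary while loop stands in for the iteration construct.

```python
# A minimal sketch (illustrative only) of the three structured constructs:
# sequence, selection, and iteration. No GO TO is needed.

def sum_of_positives(numbers):
    total = 0                      # sequence: plain assignment statements
    index = 0
    while index < len(numbers):    # iteration: loop with one entry and one exit
        value = numbers[index]
        if value > 0:              # selection: if-then-else
            total = total + value
        else:
            pass                   # negative values are ignored
        index = index + 1
    return total

print(sum_of_positives([3, -1, 4, -2]))   # prints 7
```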
In the next step, the program design is derived from the data structure.
The functions (also called processes) and the data items that are
exchanged between the different functions are represented in a diagram
known as a data flow diagram (DFD).
Design
The goal of the design phase is to transform the requirements specified
in the SRS document into a structure that is suitable for implementation
in some programming language.
In technical terms, during the design phase the software architecture is
derived from the SRS document.
Two distinctly different design approaches are popularly being used at
present—the procedural and object-oriented design approaches.
Procedural design approach:
– The traditional design approach is in use in many software
development projects at the present time. This traditional
design technique is based on the data flow-oriented design
approach.
– It consists of two important activities: first, structured analysis
of the requirements specification is carried out, in which the
detailed structure of the problem is examined.
– This is followed by a structured design step where the results
of structured analysis are transformed into the software design.
– During structured analysis, the functional requirements
specified in the SRS document are decomposed into
subfunctions and the data-flow among these subfunctions
is analysed and represented diagrammatically in the form
of DFDs.
– Structured design is undertaken once the structured analysis
activity is complete. Structured design consists of two main
activities—architectural design (also called high-level
design ) and detailed design (also called Low-level design ).
– High-level design involves decomposing the system into
modules, and representing the interfaces and the invocation
relationships among the modules. A high-level software design
is sometimes referred to as the software architecture.
– During the detailed design activity, internals of the individual
modules such as the data structures and algorithms of the
modules are designed and documented.
Object-oriented design approach:
– In this technique, various objects that occur in the problem
domain and the solution domain are first identified and the
different relationships that exist among these objects are
identified.
– The object structure is further refined to obtain the detailed
design. The OOD approach is credited with several benefits,
such as lower development time and effort, and better
maintainability of the software.
Coding and unit testing
The purpose of the coding and unit testing phase is to translate a
software design into source code and to ensure that individually each
function is working correctly.
The coding phase is also sometimes called the implementation phase,
since the design is implemented into a workable solution in this phase.
Each component of the design is implemented as a program module.
The end-product of this phase is a set of program modules that have
been individually unit tested. The main objective of unit testing is to
determine the correct working of the individual modules.
The specific activities carried out during unit testing include designing
test cases, testing, debugging to fix problems, and management of test
cases.
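As a minimal sketch (the module and its test cases are made up for illustration), unit testing an individual module amounts to designing test cases for it and checking that it works correctly in isolation:

```python
# A minimal sketch (module and test cases made up for illustration) of
# unit testing a single module in isolation.

def compute_grade(marks):
    """The module under test: maps marks (0-100) to a letter grade."""
    if not 0 <= marks <= 100:
        raise ValueError("marks must be between 0 and 100")
    if marks >= 80:
        return "A"
    if marks >= 60:
        return "B"
    return "C"

# Designed test cases: normal values, boundary values, and an invalid input.
assert compute_grade(95) == "A"
assert compute_grade(80) == "A"      # boundary between B and A
assert compute_grade(60) == "B"      # boundary between C and B
assert compute_grade(10) == "C"
try:
    compute_grade(120)               # invalid input must be rejected
except ValueError:
    pass

print("All unit tests for compute_grade passed.")
```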
Integration and system testing
Integration of different modules is undertaken soon after they have been
coded and unit tested.
During the integration and system testing phase, the different modules
are integrated in a planned manner.
Integration of the various modules is normally carried out incrementally
over a number of steps.
During each integration step, previously planned modules are added to
the partially integrated system and the resultant system is tested.
Finally, after all the modules have been successfully integrated and
tested, the full working system is obtained.
Integration testing is carried out to verify that the interfaces
among different units are working satisfactorily. On the other
hand, the goal of system testing is to ensure that the
developed system conforms to the requirements that have
been laid out in the SRS document.
System testing is carried out on this fully working system.
System testing usually consists of three different kinds of testing
activities:
- α testing: This is the system testing performed by the development
team.
- β testing: This is the system testing performed by a friendly set of
customers.
- Acceptance testing: After the software has been delivered, the
customer performs system testing to determine whether to accept the
delivered software or to reject it.
Maintenance
The total effort spent on maintenance of a typical software during its
operation phase is much more than that required for developing the
software itself.
Many studies carried out in the past confirm this and indicate that the
ratio of relative effort of developing a typical software product and the
total effort spent on its maintenance is roughly 40:60.
Maintenance is required in the following three types of situations:
Corrective maintenance: This type of maintenance is carried out to
correct errors that were not discovered during the product
development phase.
Perfective maintenance: This type of maintenance is carried out to
improve the performance of the system, or to enhance the
functionalities of the system based on customer’s requests.
Adaptive maintenance: Adaptive maintenance is usually required for
porting the software to work in a new environment. For example,
porting may be required to get the software to work on a new computer
platform or with a new operating system.
Shortcomings of the classical waterfall model
Let us identify some of the important shortcomings of the classical
waterfall model:
No feedback paths:
o Just as water in a waterfall after having flowed down cannot flow
back, once a phase is complete, the activities carried out in it and
any artifacts produced in this phase are considered to be final and
are closed for any rework. This requires that all activities during
a phase are flawlessly carried out.
o The classical waterfall model is idealistic in the sense that it
assumes that no error is ever committed by the developers during
any of the life cycle phases, and therefore, incorporates no
mechanism for error correction.
Difficult to accommodate change requests:
o The customers’ requirements usually keep on changing with time.
But, in this model it becomes difficult to accommodate any
requirement change requests made by the customer after the
requirements specification phase is complete, and this often
becomes a source of customer discontent.
Inefficient error corrections:
o This model defers integration of code and testing tasks until it is
very late when the problems are harder to resolve.
No overlapping of phases:
o This model recommends that the phases be carried out
sequentially—new phase can start only after the previous one
completes.
o Consequently, it is safe to say that in a practical software
development scenario, rather than having a precise point in time
at which a phase transition occurs, the different phases need to
overlap for cost and efficiency reasons.
Is the classical waterfall model useful at all?
o It is hard to use the classical waterfall model in real projects.
o In any practical development environment, as the software takes
shape, several iterations through the different waterfall stages
become necessary for correction of errors committed during
various phases. Therefore, the classical waterfall model is hardly
usable for software development.
6. ITERATIVE WATERFALL MODEL
The iterative waterfall model provides feedback paths from every phase to its
preceding phases, so that errors committed during a phase can be corrected
when they are detected in a later phase.
For example, if during the testing phase a design error is identified, then
the feedback path allows the design to be reworked and the changes to
be reflected in the design documents and all other subsequent
documents.
Almost every life cycle model that we discuss is iterative in nature,
except the classical waterfall model and the V-model, which are
sequential in nature.
In a sequential model, once a phase is complete, no work product of that
phase is changed later.
Considering these situations, the effort distribution for different phases with
time would be as shown in Figure 2.4.
Figure 2.4: Distribution of effort for various phases in the iterative waterfall model.
Shortcomings of the iterative waterfall model
The iterative waterfall model is a simple and intuitive software
development model.
It was used satisfactorily during the 1970s and 1980s. Now, not only has software
become very large and complex, but also very few (if any) software
projects are being developed from scratch.
Difficult to accommodate change requests: A major problem with the
waterfall model is that the requirements need to be frozen before the
development starts. Therefore, accommodating even small change
requests after the development activities are underway not only requires
overhauling the plan, but also the artifacts that have already been
developed.
o The prototyping model is especially useful when the exact
technical solutions are unclear to the development team. A
prototype can help them to critically examine the technical
issues associated with product development.
o An important reason for developing a prototype is that it is
impossible to “get it right” the first time. As advocated by Brooks
[1975], one must plan to throw away the software in order to
develop a good software later. Thus, the prototyping model can
be deployed when development of highly optimised and efficient
software is required.
From the above discussions, we can conclude the following:
1. PROJECT PLANNING
Once a project has been found to be feasible, software project managers undertake
project planning.
Project planning is undertaken and completed before any development activity
starts.
For effective project planning, in addition to a thorough knowledge of the various
estimation techniques, past experience is crucial.
During project planning, the project manager performs the following activities.
o Estimation: The following project attributes are estimated.
o Cost: How much is it going to cost to develop the software product?
o Duration: How long is it going to take to develop the product?
o Effort: How much effort would be necessary to develop the product?
o Scheduling: After all the necessary project parameters have
been estimated, the schedules for manpower and other resources are
developed.
o Staffing: Staff organisation and staffing plans are made.
o Risk management: This includes risk identification, analysis,
and abatement planning.
o Miscellaneous plans: This includes making several other plans such as
quality assurance plan, and configuration management plan, etc.
Order in which the planning activities are undertaken.
4. RISK MANAGEMENT
A risk is any anticipated unfavourable event or circumstance that can occur while a
project is underway.
Every project is susceptible to a large number of risks. Without effective
management of the risks, even the most meticulously planned project may go
haywire.
It is necessary for the project manager to anticipate and identify different risks
that a project is susceptible to, so that contingency plans can be prepared
beforehand to contain each risk.
In this context, risk management aims at reducing the chances of a risk becoming real
as well as reducing the impact of a risk that becomes real.
Risk management consists of three essential activities—
o risk identification,
o risk assessment, and
o risk mitigation.
(i) Risk Identification
The project manager needs to predict the risks in a project as early as possible.
As soon as a risk is identified, effective risk management plans are made, so that
the possible impacts of the risks are minimised.
So, early risk identification is important.
For example, the project manager might be worried whether the vendors who
have been asked to develop certain modules would complete their work in time,
whether they would turn in poor-quality work, whether some key
personnel might leave the organisation, etc. All such risks that are likely to affect
a project must be identified and listed.
A project can be subject to a large variety of risks.
In order to be able to systematically identify the important risks which might
affect a project, it is necessary to categorise risks into different classes.
The project manager can then examine which risks from each class are relevant to
the project.
There are three main categories of risks which can affect a software project:
o project risks
o technical risks
o business risks.
Project risks:
o Project risks concern various forms of budgetary, schedule, personnel,
resource, and customer-related problems.
o An important project risk is schedule slippage.
o Since, software is intangible, it is very difficult to monitor and control a
software project.
Technical risks:
o Technical risks concern potential design, implementation, interfacing,
testing, and maintenance problems.
o Technical risks also include ambiguous specification, incomplete
specification, changing specification, technical uncertainty, and technical
obsolescence.
o Most technical risks occur due to the development team’s insufficient
knowledge about the product.
Business risks:
o This type of risk includes the risk of building an excellent product that no
one wants, losing budgetary commitments, etc.
(ii) Risk Assessment
The objective of risk assessment is to rank the risks in terms of their damage
causing potential.
For risk assessment, first each risk should be rated in two ways:
o The likelihood of a risk becoming real (r).
o The consequence of the problems associated with that risk (s).
Based on these two factors, the priority of each risk can be computed as follows:
p = r × s
o where, p is the priority with which the risk must be handled,
o r is the probability of the risk becoming real, and
o s is the severity of damage caused due to the risk becoming real.
If all identified risks are prioritised, then the most likely and damaging risks can
be handled first and more comprehensive risk abatement procedures can be
designed for those risks.
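The following is a minimal sketch (the risk descriptions and numbers are made up for illustration) of how identified risks could be prioritised using p = r × s and handled in decreasing order of priority.

```python
# A minimal sketch (illustrative data only) of risk prioritisation using p = r * s.

risks = [
    # (risk description, r: probability of becoming real, s: severity of damage)
    ("Key developer leaves the project",  0.3, 8),
    ("Vendor delivers module late",       0.6, 5),
    ("Requirements change substantially", 0.5, 9),
]

# Compute the priority p = r * s for each risk and sort, most serious first.
prioritised = sorted(
    ((r * s, description) for description, r, s in risks),
    reverse=True,
)

for priority, description in prioritised:
    print(f"p = {priority:.2f}  {description}")
```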
(iii) Risk Mitigation
After all the identified risks of a project have been assessed, plans are made to
contain the most damaging and the most likely risks first.
Different types of risks require different containment procedures. In fact, most
risks require considerable ingenuity on the part of the project manager in tackling
them.
There are three main strategies for risk containment:
Avoid the risk:
o Risks can be avoided in several ways.
o Risks often arise due to project constraints and can be avoided by suitably
modifying the constraints.
o The different categories of constraints that usually give rise to risks are:
Process-related risks: These risks arise due to an aggressive work
schedule, budget, and resource utilisation.
Product-related risks: These risks arise due to commitment to
challenging product features (e.g. response time of one second, etc.),
quality, reliability etc.
Technology-related risks: These risks arise due to commitment to
use certain technology (e.g., satellite communication).
o A few examples of risk avoidance can be the following: Discussing with the
customer to change the requirements to reduce the scope of the work,
giving incentives to the developers to avoid the risk of manpower turnover,
etc.
Transfer the risk:
o This strategy involves getting the risky components developed by a third
party, buying insurance cover, etc.
Risk reduction:
o This involves planning ways to contain the damage due to a risk.
o For example, if there is risk that some key personnel might leave, new
recruitment may be planned.
o The most important risk reduction technique for technical risks is to build
a prototype that tries out the technology that you are planning to use.
o For example, if you are using a compiler for recognising user commands,
you would have to construct a compiler for a small and very primitive
command language first.
There can be several strategies to cope with a risk.
To choose the most appropriate strategy for handling a risk, the project manager
must consider the cost of handling the risk and the corresponding reduction of
risk.
For this we may compute the risk leverage of the different risks.
Risk leverage is the difference in risk exposure divided by the cost of reducing
the risk. More formally,
risk leverage = (risk exposure before reduction − risk exposure after reduction) / (cost of risk reduction)
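As a rough sketch (the strategy names and numbers are purely illustrative), risk leverage can be used to compare alternative ways of handling the same risk:

```python
# A rough sketch (illustrative numbers only) of computing risk leverage
# to compare alternative strategies for containing the same risk.

def risk_leverage(exposure_before, exposure_after, cost_of_reduction):
    """Risk leverage = (exposure before - exposure after) / cost of reduction."""
    return (exposure_before - exposure_after) / cost_of_reduction

# Two hypothetical strategies for containing the risk of manpower turnover.
strategies = {
    "Give retention incentives": risk_leverage(80.0, 30.0, 10.0),
    "Hire backup personnel":     risk_leverage(80.0, 20.0, 40.0),
}

# The strategy with the higher leverage gives more risk reduction per unit cost.
best = max(strategies, key=strategies.get)
print(strategies, "-> choose:", best)
```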
Even though we identified three broad ways to handle any risk, effective risk
handling cannot be achieved by mechanically following a set procedure, but
requires a lot of ingenuity on the part of the project manager.
The requirements analysis and specification phase starts after the feasibility study
stage is complete and the project has been found to be financially viable and
technically feasible.
The requirements analysis and specification phase ends when the requirements
specification document has been developed and reviewed.
The requirements specification document is usually called the software
requirements specification (SRS) document.
The goal of the requirements analysis and specification phase can be stated as
follows: to clearly understand the customer requirements and to systematically
organise these requirements into a specification document (the SRS document).
We can conceptually divide the requirements gathering and analysis activity into
two separate tasks:
Requirements gathering
Requirements analysis
After the analyst has gathered all the required information regarding the software
to be developed, and has removed all incompleteness, inconsistencies, and
anomalies from the specification, he starts to systematically organise the
requirements in the form of an SRS document.
The SRS document usually contains all the user requirements in a structured,
though informal, form.
Among all the documents produced during a software development life cycle, the SRS
document is probably the most important document and the toughest to write.
(i) Users of SRS Document
Usually a large number of different people need the SRS document for very
different purposes. Some of the important categories of users of the SRS document
and their needs for use are as follows:
Users, customers, and marketing personnel:
o These stakeholders need to refer to the SRS document to ensure that the
system as described in the document will meet their needs.
o Remember that the customer may not be the user of the software, but may
be someone employed or designated by the user.
o For generic products, the marketing personnel need to
understand the requirements that they can explain to the customers.
Software developers:
o The software developers refer to the SRS document to make sure
that they are developing exactly what is required by the customer.
Test engineers:
o The test engineers use the SRS document to understand the functionalities,
and based on this write the test cases to validate its working. They need
the required functionality to be clearly described, and the input
and output data to be identified precisely.
User documentation writers:
o The user documentation writers need to read the SRS document to ensure
that they understand the features of the product well enough to be able to
write the users’ manuals.
Project managers:
o The project managers refer to the SRS document to ensure that they can
estimate the cost of the project easily by referring to the SRS document and
that it contains all the information required to plan the project.
Maintenance engineers:
o The SRS document helps the maintenance engineers to understand the
functionalities supported by the system. A clear knowledge of the
functionalities can help them to understand the design and code.
o Also, a proper understanding of the functionalities supported enables them
to determine what specific modifications to the system’s functionalities
would be needed for a specific purpose.
(ii) Characteristics of a Good SRS Document
The skill of writing a good SRS document usually comes from the experience
gained from writing SRS documents for many projects.
However, the analyst should be aware of the desirable qualities that every good
SRS document should possess.
Some of the identified desirable qualities of an SRS document are the following:
o Concise:
The SRS document should be concise and at the same time
unambiguous, consistent, and complete.
Verbose and irrelevant descriptions reduce readability and also
increase the possibilities of errors in the document.
o Implementation-independent:
The SRS should be free of design and implementation decisions
unless those decisions reflect actual requirements.
It should only specify what the system should do and refrain from
stating how to do these. This means that the SRS document should
specify the externally visible behaviour of the system and not
discuss the implementation issues.
The SRS document should describe the system to be developed as a
black box, and should specify only the externally visible behaviour
of the system. For this reason, the SRS document is also called
the black-box specification of the software being developed.
This view, with which a requirements specification is written, has
been shown in Figure 4.1. Observe that in Figure 4.1, the SRS
document describes the output produced for the different types of
input, along with a description of the processing required to produce the
output from the input (shown in ellipses); the internal working
of the software is not discussed at all.
Figure 4.1: The black-box view of a system as performing a set of
functions.
o Traceable:
It should be possible to trace a specific requirement to the design
elements that implement it and vice versa.
Similarly, it should be possible to trace a requirement to the code
segments that implement it and the test cases that test this
requirement and vice versa.
Traceability is also important to verify the results of a phase with
respect to the previous phase and to analyse the impact of changing
a requirement on the design elements and the code.
o Modifiable:
Customers frequently change the requirements during software
development due to a variety of reasons.
Therefore, in practice the SRS document undergoes several
revisions during software development. Also, an SRS document is
often modified after the project completes to accommodate future
enhancements and evolution.
To cope with the requirement changes, the SRS document
should be easily modifiable.
For this, an SRS document should be well-structured. A well-
structured document is easy to understand and modify.
o Identification of response to undesired events:
The SRS document should discuss the system responses to various
undesired events and exceptional conditions that may arise.
o Verifiable:
All requirements of the system as documented in the SRS document
should be verifiable.
This means that it should be possible to design test cases, based on
the description of the functionality, to check whether or not the
requirements have been met in an implementation.
A requirement such as “the system should be user friendly” is not
verifiable. On the other hand, the requirement—“When the name of
a book is entered, the software should display whether the book is
available for issue or it has been loaned out” is verifiable.
Any feature of the required system that is not verifiable should be
listed separately in the goals of the implementation section of the
SRS document.
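As a rough sketch of what being verifiable means in practice, the book-availability requirement above translates almost directly into a test case. The catalogue class and its methods used here are hypothetical, invented only for illustration:

```python
# A rough sketch of a test case derived from the verifiable requirement above.
# The LibraryCatalogue class and its methods are hypothetical, used only to
# illustrate that the requirement states a checkable outcome.

class LibraryCatalogue:
    def __init__(self):
        self._loaned_out = set()

    def loan_out(self, book_name):
        self._loaned_out.add(book_name)

    def is_available_for_issue(self, book_name):
        return book_name not in self._loaned_out


def test_book_availability_is_reported():
    catalogue = LibraryCatalogue()
    catalogue.loan_out("Software Engineering")

    # The requirement states exactly what must be reported for a given input,
    # so the expected outcome can be checked mechanically.
    assert catalogue.is_available_for_issue("Software Engineering") is False
    assert catalogue.is_available_for_issue("Compilers") is True


test_book_availability_is_reported()
print("The book-availability requirement is verifiable by a test case.")
```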
(iii) Important Categories of Customer Requirements
A good SRS document should properly categorise and organise the requirements
into different sections.
As per the IEEE 830 guidelines, the important categories of user requirements are
the following.
An SRS document should clearly document the following aspects of a software:
Functional requirements
Non-functional requirements
Design and implementation constraints
External interfaces required
Other non-functional requirements
Goals of implementation.
Functional requirements
The functional requirements capture the functionalities required by the users
from the system.
It is convenient to consider a software system as offering a set of functions {fi} to the user.
Each such function can be considered similar to a mathematical function fi : I → O,
meaning that a function transforms an element (ii) in the input domain (I) to a
value (oi) in the output domain (O).
This functional view of a system is shown schematically in Figure.
Each function fi of the system can be considered as reading certain data ii,
and then transforming a set of input data (ii) to the corresponding set of output
data (oi).
The functional requirements of the system, should clearly describe each
functionality that the system would support along with the corresponding input
and output data set.
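The following is a minimal sketch (the function and its data are made up for illustration) of this functional view: a functionality fi reads input data ii and transforms it into the corresponding output data oi.

```python
# A minimal sketch (function and data made up for illustration) of the
# functional view: a functionality f_i maps input data i_i to output data o_i.

def withdraw_cash(account_type, amount, balance):
    """f_i : I -> O, where I = (account type, amount, current balance)
    and O = (dispensed amount, new balance, message)."""
    if amount <= 0 or amount > balance:
        return 0, balance, "Withdrawal refused"
    return amount, balance - amount, "Please collect your cash"

# Input data i_i ...
output = withdraw_cash("savings", 500, 2000)
# ... and the corresponding output data o_i.
print(output)   # (500, 1500, 'Please collect your cash')
```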
Non-functional requirements
The non-functional requirements are non-negotiable obligations that must be
supported by the software.
The non-functional requirements capture those requirements of the customer
that cannot be expressed as functions (i.e., accepting input data and producing
output data).
Non-functional requirements usually address aspects concerning external
interfaces, user interfaces, maintainability, portability, usability, maximum
number of concurrent users, timing, and throughput (transactions per second,
etc.).
The non-functional requirements can be critical in the sense that any failure by
the developed software to achieve some minimum defined level in these
requirements can be considered as a failure and make the software unacceptable
by the customer.
The different categories of non-functional requirements are described under
three different sections:
Design and implementation constraints:
o Design and implementation constraints are an important category of non-
functional requirements that describe any items or issues that will limit the
options available to the developers.
o Some of the example constraints can be—corporate or regulatory policies
that need to be honoured; hardware limitations; interfaces with other
applications; specific technologies, tools, and databases to be used; specific
communications protocols to be used; security considerations; design
conventions or programming standards to be followed, etc.
o Consider an example of a constraint that can be included in this section—
Oracle DBMS needs to be used as this would facilitate easy interfacing with
other applications that are already operational in the organisation.
External interfaces required:
o Examples of external interfaces are— hardware, software and
communication interfaces, user interfaces, report formats, etc.
o To specify the user interfaces, each interface between the software and the
users must be described.
o The description may include sample screen images, any GUI standards or
style guides that are to be followed, screen layout constraints, standard
buttons and functions (e.g., help) that will appear on every screen,
keyboard shortcuts, error message display standards, and so on.
o The details of the user interface design such as screen designs, menu
structure, navigation diagram, etc. should be documented in a separate
user interface specification document.
Other non-functional requirements:
o This section contains a description of non-functional requirements that are
neither design constraints nor external interface requirements.
o An important example is a performance requirement such as the number
of transactions completed per unit time. Besides performance
requirements, the other non-functional requirements to be described in
this section may include reliability issues, accuracy of results, and security
issues.
Goals of implementation
The ‘goals of implementation’ part of the SRS document offers some general
suggestions regarding the software to be developed.
These are not binding on the developers, and they may take these suggestions into
account if possible.
A goal, in contrast to the functional and non-functional requirements, is not
checked by the customer for conformance at the time of acceptance testing.
The goals of implementation section might document issues such as easier
revisions to the system functionalities that may be required in the future, easier
support for new devices to be supported in the future, reusability issues, etc.
These are the items which the developers might keep in their mind during
development so that the developed system may meet some aspects that are not
required immediately.
It is useful to remember that anything that would be tested by the user, and on
whose outcome the acceptance of the system would depend, is usually
considered a requirement to be fulfilled by the system and not a goal, and vice
versa.
Figure 4.2: User and system interactions in high-level functional requirement.
o Typically, there is some initial data input by the user. After accepting this,
the system may display some response (called system action ). Based on
this, the user may input further data, and so on.
o For any given high-level function, there can be different interaction
sequences or scenarios due to users selecting different options or entering
different data items.
o In Figure 4.2, the different scenarios occur depending on the amount
entered for withdrawal. The different scenarios are essentially different
behaviour exhibited by the system for the same high-level function.
o Typically, each user input and the corresponding system action may be
considered as a sub-requirement of a high-level requirement. Thus, each
high-level requirement can consist of several sub-requirements.
Is it possible to determine all input and output data precisely?
o In a requirements specification document, it is desirable to define the
precise data input to the system and the precise data output by the system.
o Sometimes, the exact data items may be very difficult to identify.
o This is especially the case, when no working model of the system to be
developed exists.
o In such cases, the data in a high-level requirement should be described
using high-level terms and it may be very difficult to identify the exact
components of this data accurately.
o Another aspect that must be kept in mind is that the data might be input to
the system in stages at different points in execution.
o For example, consider the withdraw-cash function of an automated teller
machine (ATM) of Figure 4.2. Since during the course of execution of the
withdraw-cash function, the user would have to input the type of account
and the amount to be withdrawn, it is very difficult to form a single high-level
name that would accurately describe both input data items. However, the
input data for the subfunctions can be more accurately described.
Please note that IEEE 830 standard has been intended to serve only as a
guideline for organizing a requirements specification document into sections
and allows the flexibility of tailoring it, as may be required for specific projects.
Depending on the type of project being handled, some sections can be omitted,
introduced, or interchanged as may be considered prudent by the analyst.
Purpose: This section should describe where the software would be deployed
and how the software would be used.
Project scope: This section should briefly describe the overall context within
which the software is being developed. For example, the parts of a problem that
are being automated and the parts that would need to be automated during
future evolution of the software.
Product features: This section should summarize the major ways in which the
software would be used. Details should be provided in Section 3 of the
document. So, only a brief summary should be presented here.
User classes: Various user classes that are expected to use this software are
identified and described here. The different classes of users are identified by the
types of functionalities that they are expected to invoke, or their levels of
expertise in using computers.
User documentation: This section should list out the types of user
documentation, such as user manuals, on-line help, and trouble-shooting
manuals that will be delivered to the customer along with the software.
This section can classify the functionalities either based on the specific
functionalities invoked by different users, or the functionalities that are
available in different modes, etc., depending on what may be appropriate.
1. User class 1
2. User class 2
Hardware interfaces: This section should describe the interface between the
software and the hardware components of the system. This section may include
the description of the supported device types, the nature of the data and control
interactions between the software and the hardware, and the communication
protocols to be used.
This section should describe the non-functional requirements other than the
design and implementation constraints and the external interface requirements
that have been described in Sections 2 and 4 respectively.
Functional requirements
1. Operation mode 1
2. Operation mode 2
The behaviour of a system can be specified using either the finite state
machine (FSM) formalism or any other alternative formalism. FSMs can be
used to specify the possible states (modes) of the system and the transitions
among these states due to the occurrence of events.
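As a minimal sketch (the states and events are made up for illustration), an FSM specification of operation modes can be written down as a table of transitions:

```python
# A minimal sketch (states and events made up for illustration) of specifying
# system modes and the transitions among them with a finite state machine.

# Transition table: (current state, event) -> next state
TRANSITIONS = {
    ("idle", "card_inserted"):        "authenticating",
    ("authenticating", "pin_ok"):     "ready",
    ("authenticating", "pin_bad"):    "idle",
    ("ready", "withdraw"):            "dispensing",
    ("dispensing", "cash_taken"):     "idle",
}

def next_state(state, event):
    """Return the next mode; stay in the same mode for an unspecified event."""
    return TRANSITIONS.get((state, event), state)

state = "idle"
for event in ["card_inserted", "pin_ok", "withdraw", "cash_taken"]:
    state = next_state(state, event)
    print(event, "->", state)
```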
Functional requirements
: Issue book
Description: A friend can be issued a book only if he is registered. The various books
outstanding against him along with the date borrowed are first displayed.
: Display outstanding books
Description: First a friend’s name and the serial number of the book to be issued are
entered. Then the books outstanding against the friend should be displayed.
Output: List of outstanding books along with the date on which each was borrowed.
: Confirm issue book
If the owner confirms, then the book should be issued to him and the relevant records
should be updated.
Input: Owner confirmation for book issue. Output: Confirmation of book issue.
Description: Details of friends who have books outstanding against their name are
displayed.
Input: User selection
Output: The display includes the name, address and telephone numbers of each friend
against whom books are outstanding along with the titles of the outstanding books and
the date on which those were issued.
: Query book
: Return book
Description: Upon return of a book by a friend, the date of return is stored and the book
is removed from the borrowing list of the concerned friend.
Input: Name of the book.
Description: A friend must be registered before he can be issued books. After the
registration data is entered correctly, the data should be stored and a confirmation
message should be displayed.
Input: Friend details including name of the friend, address, land line number and
mobile number.
Output: Confirmation of registration status.
Description: The books borrowed by the user of the personal library are registered.
Input: Title of the book and the date
borrowed.
Description: The data about the books borrowed by the owner are displayed.
4. Manage statistics
: Display book count
Description: The total number of books in the personal library should be displayed.
Input: User selection.
Output: Total number of books issued and total number of books returned.
Non-functional requirements
N.1: Database: A data base management system that is available free of cost in the
public domain should be used.
N.2: Platform: Both Windows and Unix versions of the software need to be developed.
N.3: Web-support: It should be possible to invoke the query book functionality from
any place by using a web browser.
Observation: Since there are many functional requirements, the requirements have
been organised into four sections: Manage own books, manage friends, manage
borrowed books, and manage statistics. Now each section has less than 7 functional
requirements. This would not only enhance the readability of the document, but would
also help in design.
Unit III
OVERVIEW OF THE DESIGN PROCESS
The activities carried out during the design phase (called the design process)
transform the SRS document into the design document.
considerations given to the accuracy of the results, space and time
complexities.
Starting with the SRS document (as shown in Figure 5.1), the design documents
are produced through iterations over a series of steps.
The design documents are reviewed by the members of the development team to
ensure that the design solution conforms to the requirements specification.
When the high-level design is complete, the problem should have been decomposed
into many small functionally independent modules that are cohesive, have low coupling
among themselves, and are arranged in a hierarchy.
Many different types of notations have been used to represent a high-level design.
A notation that is widely being used for procedural development is a tree-like
diagram called the structure chart.
Another popular design representation technique, called UML, is used
to document object-oriented design; it involves developing several types of
diagrams to document the object-oriented design of a system.
Though other notations such as Jackson diagram [1975] or Warnier-Orr [1977,
1981] diagram are available to document a software design, we confine our
attention in this text to structure charts and UML diagrams only.
(b) Detailed Design
Once the high-level design is complete, detailed design is undertaken
The definition of a “good” software design can vary depending on the exact
application being designed.
For example, “memory size used up by a program” may be an important issue to
characterise a good solution for embedded software development—since
embedded applications are often required to work under severely limited
memory sizes due to cost, space, or power consumption considerations.
Every good software design for general applications must possess certain
characteristics, which are listed below:
Correctness:
o A good design should first of all be correct. That is, it should correctly
implement all the functionalities of the system.
Understandability:
o A good design should be easily understandable. Unless a design solution is
easily understandable, it would be difficult to implement and maintain it.
Efficiency:
o A good design solution should adequately address resource, time, and cost
optimisation issues.
Maintainability:
o A good design should be easy to change.
o This is an important requirement, since change requests usually keep
coming from the customer even after product release.
(i) Understandability of a Design: A Major Concern
While performing the design of a certain problem, assume that we have arrived at
a large number of design solutions and need to choose the best one.
Obviously all incorrect designs have to be discarded first.
Out of the correct design solutions, how can we identify the best one?
Given that we are choosing from only correct design solutions, understandability
of a design solution is possibly the most important issue to be considered while
judging the goodness of a design.
A good design solution should be simple and easily understandable. A design that is
easy to understand is also easy to develop and maintain.
A complex design would lead to severely increased life cycle costs.
Unless a design is easily understandable, it would require tremendous effort to
implement, test, debug, and maintain it.
About 60 percent of the total effort in the life cycle of a typical product is spent on
maintenance. If the software is not easy to understand, not only would it lead to
increased development costs, but the effort required to maintain the product would
also increase manifold.
Besides, a design solution that is difficult to understand would lead to a program
that is full of bugs and is unreliable.
Understandability of a design solution can be enhanced through clever
applications of the principles of abstraction and decomposition.
An understandable design is modular and layered
o To be able to compare the understandability of two design solutions, we
should at least have an understanding of the general features that an easily
understandable design should possess.
o A design solution should have the following characteristics to be easily
understandable:
It should assign consistent and meaningful names to various design
components.
It should make use of the principles of decomposition and
abstraction in good measures to simplify the design.
o A design solution is understandable if it is modular
and the modules are arranged in distinct layers.
Modularity
o A modular design is an effective decomposition of a problem.
o It is a basic characteristic of any good design solution.
o A modular design, in simple words, implies that the problem has been
decomposed into a set of modules that have only limited interactions with each
other.
o Decomposition of a problem into modules facilitates taking advantage of
the divide and conquer principle.
o If different modules have either no interactions or little interactions with
each other, then each module can be understood separately. This reduces
the perceived complexity of the design solution greatly.
o How can we compare the modularity of two
alternate design solutions?
o For example, consider two alternate design solutions to a problem that are
represented in Figure 5.2, in which the modules M1 , M2 etc. have been
drawn as rectangles.
o The invocation of a module by another module has been shown as an
arrow. It can easily be seen that the design solution of Figure 5.2(a) would
be easier to understand since the interactions among the different
modules are low.
o But, can we quantitatively measure the modularity of a design solution?
Unless we are able to quantitatively measure the modularity of a design
solution, it will be hard to say which design solution is more modular than
another.
o Unfortunately, there are no quantitative metrics available yet to directly
measure the modularity of a design. However, we can quantitatively
characterise the modularity of a design solution based on the cohesion and
coupling existing in the design.
o A design solution is said to be highly modular, if the
different modules in the solution have high
cohesion and their inter-module couplings are low.
o A software design with high cohesion and low coupling among modules amounts to
an effective problem decomposition. Such a design would lead to
increased productivity during program development by bringing down the
perceived problem complexity.
Layered design
o A layered design is one in which when the call relations among different
modules are represented graphically, it would result in a tree-like diagram
with clear layering.
o In a layered design solution, the modules are arranged in a hierarchy of
layers.
o A module can only invoke functions of the modules in the layer
immediately below it.
o The higher layer modules can be considered to be similar to managers that
invoke (order) the lower layer modules to get certain tasks done.
o A layered design can be considered to be implementing control abstraction,
since a module at a lower layer is unaware of (about how to call) the higher
layer modules.
o A layered design can make the design solution easily understandable, since
to understand the working of a module, one would at most have to
understand how the immediately lower layer modules work, without
having to worry about the functioning of the upper layer modules.
o When a failure is detected while executing a module, it is obvious that the
modules below it can possibly be the source of the error.
o This greatly simplifies debugging since one would need to concentrate only
on a few modules to detect the error.
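As a minimal sketch (the module names are made up for illustration), a layered arrangement means that a module only invokes functions of the layer immediately below it:

```python
# A minimal sketch (module names made up for illustration) of a layered design:
# each layer only invokes functions of the layer immediately below it.

# Layer 3 (lowest): data access
def read_member_record(member_id):
    return {"id": member_id, "name": "A. Member", "books_issued": 1}

# Layer 2: business rules, calls only layer 3
def can_issue_book(member_id):
    record = read_member_record(member_id)
    return record["books_issued"] < 3

# Layer 1 (highest): user-facing operation, calls only layer 2
def issue_book(member_id, book_title):
    if can_issue_book(member_id):
        return f"Issued '{book_title}' to member {member_id}"
    return "Issue refused: limit reached"

print(issue_book(42, "Software Engineering"))
```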
As the name itself implies, SA/SD methodology involves carrying out two distinct
activities:
o Structured analysis (SA)
o Structured design (SD)
The roles of structured analysis (SA) and structured design (SD) have been shown
schematically in Figure 6.1. Observe the following from the figure:
o During structured design, the DFD model is transformed into a structure
chart.
As shown in Figure 6.1, the structured analysis activity transforms the SRS
document into a graphic model called the DFD model.
During structured analysis, functional decomposition of the system is achieved.
That is, each function that the system needs to perform is analysed and
hierarchically decomposed into more detailed functions.
On the other hand, during structured design, all functions identified during
structured analysis are mapped to a module structure.
This module structure is also called the high-level design or the software
architecture for the given problem. This is represented using a structure chart.
The high-level design stage is normally followed by a detailed design stage. During
the detailed design stage, the algorithms and data structures for the individual
modules are designed. The detailed design can directly be implemented as a
working system using a conventional programming language.
The results of structured analysis can, therefore, be easily understood by the user.
In fact, the different functions and data in structured analysis are named using the
user’s terminology. The user can therefore even review the results of the
structured analysis to ensure that it captures all his requirements.
3. STRUCTURED ANALYSIS
o Top-down decomposition approach.
o Application of the divide and conquer principle. Through this, each high-level
function is independently decomposed into detailed functions.
o Graphical representation of the analysis results using Data Flow Diagrams
(DFDs).
A DFD is a hierarchical graphical model of a system that shows the different
processing activities or functions that the system performs and the data
interchange among those functions.
Please note that a DFD model only represents the data flow aspects and does not
show the sequence of execution of the different functions and the conditions based
on which a function may or may not be executed.
In the DFD terminology, each function is called a process or a bubble.
It is useful to consider each function as a processing station (or process) that
consumes some input data and produces some output data.
DFD is an elegant modelling technique that can be used not only to represent the
results of structured analysis of a software problem, but also useful for several
other applications such as showing the flow of documents or items in an
organisation.
(I) DATA FLOW DIAGRAMS (DFDS)
The DFD (also known as the bubble chart) is a simple graphical formalism that can
be used to represent a system in terms of the input data to the system, various
processing carried out on those data, and the output data generated by the system.
It is simple to understand and use.
A DFD model uses a very limited number of primitive symbols (shown in Figure
6.2) to represent the functions performed by a system and the data flow among
these functions.
Starting with a set of high-level functions that a system performs, a DFD model
represents the sub functions performed by the functions using a hierarchy of
diagrams.
The DFD technique is also based on a very simple set of intuitive concepts and
rules.
The different concepts associated with building a DFD model of a system are:
Primitive symbols used for constructing DFDs
o There are essentially five different types of symbols used for constructing
DFDs.
Figure 6.2: Symbols used for designing DFDs.
o Function symbol:
A function is represented using a circle.
This symbol is called a process or a bubble.
Bubbles are annotated with the names of the corresponding
functions.
o External entity symbol:
An external entity such as a librarian, a library member, etc. is
represented by a rectangle.
The external entities are essentially those physical entities external
to the software system which interact with the system by inputting
data to the system or by consuming the data produced by the
system.
In addition to the human users, the external entity symbols can be
used to represent external hardware and software such as another
application software that would interact with the software being
modelled.
o Data flow symbol:
A directed arc (or an arrow) is used as a data flow symbol.
A data flow symbol represents the data flow occurring between two
processes or between an external entity and a process in the
direction of the data flow arrow.
Data flow symbols are usually annotated with the corresponding
data names.
For example, the DFD in Figure 6.3(a) shows three data flows—the
data item number flowing from the process read-number to
validate-number, data-item flowing into read-number, and
valid-number flowing out of validate-number.
o Data store symbol:
A data store is represented using two parallel lines.
It represents a logical file.
That is, a data store symbol can represent either a data structure or
a physical file on disk.
Each data store is connected to a process by means of a data flow
symbol.
The direction of the data flow arrow shows whether data is being
read from or written into a data store.
An arrow flowing in or out of a data store implicitly represents the
entire data of the data store and hence arrows connecting to a data
store need not be annotated with the name of the corresponding
data items.
As an example of a data store, number is a data store in Figure
6.3(b).
o Output symbol:
The output symbol is used when a hard copy is produced.
Important concepts associated with constructing DFD
models
o Synchronous and asynchronous operations
If two bubbles are directly connected by a data flow arrow, then
they are synchronous.
This means that they operate at the same speed.
An example of such an arrangement is shown in Figure 6.3(a).
Here, the validate-number bubble can start processing only after
the read- number bubble has supplied data to it; and the read-
number bubble has to wait until the validate-number bubble has
consumed its data.
However, if two bubbles are connected through a data store, as in
Figure 6.3(b), then the speeds of operation of the bubbles are
independent.
The data produced by a producer bubble gets stored in the data
store. It is therefore possible that the producer bubble stores
several pieces of data items, even before the consumer bubble
consumes any of them.
Figure 6.3: Synchronous and asynchronous data flow.
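A rough analogy in code (made up for illustration, not part of the notes): a data store between two bubbles acts like a buffer, so the producer bubble can run ahead of the consumer bubble.

```python
# A rough analogy (made up for illustration): a data store between two bubbles
# behaves like a buffer, so the producer and consumer work at independent speeds.

from collections import deque

data_store = deque()          # plays the role of the data store "number"

def read_number(raw_values):              # producer bubble
    for value in raw_values:
        data_store.append(value)          # stores items without waiting

def validate_number(limit):               # consumer bubble
    valid = []
    while data_store:
        value = data_store.popleft()      # consumes whenever it gets to run
        if 0 <= value <= limit:
            valid.append(value)
    return valid

read_number([12, 700, 42])    # producer finishes before the consumer even starts
print(validate_number(100))   # [12, 42]
```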
Data dictionary
o Every DFD model of a system must be accompanied by a data dictionary.
o A data dictionary lists all data items that appear in a DFD model.
o The data items listed include all data flows and the contents of all data
stores appearing on all the DFDs in a DFD model.
o Please remember that the DFD model of a system typically consists of
several DFDs, viz., level 0 DFD, level 1 DFD, level 2 DFDs, etc., as shown in
Figure 6.4.
o However, a single data dictionary should capture all the data appearing in
all the DFDs constituting the DFD model of a system.
o A data dictionary lists the purpose of all data items and the definition of all
composite data items in terms of their component data items.
o For example, a data dictionary entry may represent that the data grossPay
consists of the components regularPay and overtimePay.
grossPay = regularPay + overtimePay
o For the smallest units of data items, the data dictionary simply lists their
name and their type.
o Composite data items are expressed in terms of the component data items
using certain operators. The operators using which a composite data item
can be expressed in terms of its component data items are discussed
subsequently.
o The dictionary plays a very important role in any software development
process, especially for the following reasons:
A data dictionary provides a standard terminology for all relevant
data for use by the developers working in a project. A consistent
vocabulary for data items is very important, since in large projects
different developers of the project have a tendency to use different
terms to refer to the same data, which unnecessarily causes
confusion.
The data dictionary helps the developers to determine the
definition of different data structures in terms of their component
elements while implementing the design.
The data dictionary helps to perform impact analysis. That is, it is
possible to determine the effect of some data on various processing
activities and vice versa. Such impact analysis is especially useful
when one wants to check the impact of changing an input value type,
or a bug in some functionality, etc.
o For large systems, the data dictionary can become extremely complex and
voluminous.
o Even moderate-sized projects can have thousands of entries in the data
dictionary. It becomes extremely difficult to maintain a voluminous
dictionary manually.
o Computer-aided software engineering (CASE) tools come handy to
overcome this problem. Most CASE tools usually capture the data items
appearing in a DFD as the DFD is drawn, and automatically generate the
data dictionary. As a result, the designers do not have to spend almost any
effort in creating the data dictionary. These CASE tools also support some
query language facility to query about the definition and usage of data
items. For example, queries may be formulated to determine which data
item affects which processes, or a process affects which data items, or the
definition and usage of specific data items, etc. Query handling is facilitated
by storing the data dictionary in a relational database management system
(RDBMS).
Data definition
o Composite data items can be defined in terms of primitive data items using
the following data definition operators.
+: denotes composition of two data items, e.g. a+b represents data
a and b.
[,,]: represents selection, i.e. any one of the data items listed inside
the square brackets can occur. For example, [a,b] represents that
either a occurs or b occurs.
(): the contents inside the bracket represent optional data which
may or may not appear.
a+(b) represents either a or a+b occurs.
{}: represents iterative data definition, e.g. {name}5 represents five
name data.
{name}* represents zero or more instances of name data.
=: represents equivalence, e.g. a=b+c means that a is a composite
data item comprising both b and c.
/* */: Anything appearing within /* and */ is considered as
comment.
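The following is a minimal sketch (the data items are made up for illustration) of how a data dictionary could be captured and queried, for example to find the primitive components of a composite item during impact analysis:

```python
# A minimal sketch (data items made up for illustration) of a data dictionary:
# composite items are defined by their components, primitive items by their type.

data_dictionary = {
    # composite items: list of component data items
    "grossPay":  ["regularPay", "overtimePay"],
    "payRecord": ["employeeName", "grossPay"],
    # primitive items: their type
    "regularPay":   "number",
    "overtimePay":  "number",
    "employeeName": "string",
}

def components_of(item):
    """Recursively expand a data item into its primitive components."""
    definition = data_dictionary[item]
    if isinstance(definition, str):          # primitive item
        return [item]
    expanded = []
    for component in definition:             # composite item
        expanded.extend(components_of(component))
    return expanded

# Impact-analysis style query: which primitive items make up payRecord?
print(components_of("payRecord"))   # ['employeeName', 'regularPay', 'overtimePay']
```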
At each successive lower level DFDs, more and more details are gradually
introduced.
To develop a higher-level DFD model, processes are decomposed into their
subprocesses and the data flow among these subprocesses are identified.
To develop the data flow model of a system, first the most abstract representation
(highest level) of the problem is to be worked out.
Subsequently, the lower level DFDs are developed.
Level 0 and Level 1 consist of only one DFD each.
Level 2 may contain up to 7 separate DFDs, and level 3 up to 49 DFDs, and so on.
However, there is only a single data dictionary for the entire DFD model.
All the data names appearing in all DFDs are populated in the data dictionary and
the data dictionary contains the definitions of all the data items.
The bubble in the context diagram is annotated with the name of the software
system being developed (usually a noun).
This is the only bubble in a DFD model, where a noun is used for naming the
bubble.
The bubbles at all other levels are annotated with verbs according to the main
function performed by the bubble.
This is expected since the purpose of the context diagram is to capture the context
of the system rather than its functionality.
As an example of a context diagram, consider the context diagram of a software
developed to automate the bookkeeping activities of a supermarket (see Figure
6.10). The context diagram has been labelled as ‘Supermarket software’.
The context diagram shows the different users of the system, the kinds of data they
would be inputting to the system, and the data they would be receiving from the system.
Here, the term users of the system also includes any external systems which
supply data to or receive data from the system.
Construction of context diagram:
o Examine the SRS document to determine:
o Different high-level functions that the system needs to perform.
o Data input to every high-level function.
o Data output from every high-level function.
o Interactions (data flow) among the identified high-level functions.
Represent these aspects of the high-level functions in a diagrammatic form.
This would form the top-level data flow diagram (DFD),
usually called the DFD 0.
5. STRUCTURED DESIGN
The aim of structured design is to transform the results of the structured analysis
(that is, the DFD model) into a structure chart.
A structure chart represents the software architecture. It shows the various
modules making up the system, the module dependency (i.e. which
module calls which other modules), and the parameters that are passed among
the different modules.
The basic building blocks using which structure charts are designed are the
following:
o Rectangular boxes: A rectangular box represents a module. Usually, every
rectangular box is annotated with the name of the module it represents.
o Module invocation arrows: An arrow connecting two modules implies that
during program execution control is passed from one module to the other
in the direction of the connecting arrow.
o Data flow arrows: These are small arrows appearing alongside the
module invocation arrows. The data flow arrows are annotated with the
corresponding data name. Data flow arrows represent the fact that the
named data passes from one module to the other in the direction of the
arrow.
o Library modules: A library module is usually represented by a rectangle
with double edges. Libraries comprise the frequently called modules.
Usually, when a module is invoked by many other modules, it is made into
a library module.
o Selection: The diamond symbol represents the fact that one module of
several modules connected with the diamond symbol is invoked depending
on the outcome of the condition attached with the diamond symbol.
o Repetition: A loop around the control flow arrows denotes that the
respective modules are invoked repeatedly.
In any structure chart, there should be one and only one module at the top, called
the root.
There should be at most one control relationship between any two modules in
the structure chart. This means that if module A invokes module B, module B
cannot invoke module A.
The different modules of a structure chart should be arranged in layers or levels.
The principle of abstraction does not allow lower-level modules to be aware of the
existence of the high-level modules.
However, it is possible for two higher-level modules to invoke the same lower-
level module. An example of a properly layered design and another of a poorly
layered design are shown in Figure 6.18.
A structure chart differs from a flow chart in the following ways:
o It is usually difficult to identify the different modules of a program from its
flow chart representation.
o Data interchange among different modules is not represented in a flow
chart.
o Sequential ordering of tasks that is inherent to a flow chart is
suppressed in a structure chart.
(ii) Transformation of a DFD Model into Structure Chart
Systematic techniques are available to transform the DFD representation of a
problem into a module structure represented as a structure chart.
Structured design provides two strategies to guide transformation of a DFD into a
structure chart:
• Transform analysis
• Transaction analysis
Normally, one would start with the level 1 DFD, transform it into module
representation using either the transform or transaction analysis and then
proceed toward the lower level DFDs.
At each level of transformation, it is important to first determine whether the
transform or the transaction analysis is applicable to a particular DFD.
Whether to apply transform or transaction processing?
• Examine the data input to the diagram.
• The data input to the diagram can be easily spotted because they are
represented by dangling arrows.
• If all the data flow into the diagram are processed in similar ways (i.e. if
all the input data flow arrows are incident on the same bubble in the DFD)
then transform analysis is applicable. Otherwise, transaction analysis is
applicable.
• Normally, transform analysis is applicable only to very simple processing.
• Please recollect that the bubbles are decomposed until each represents a
very simple processing step that can be implemented using only a few lines of
code.
• Therefore, transform analysis is normally applicable at the lower levels
of a DFD model.
• Each different way in which data is processed corresponds to a separate
transaction.
• Each transaction corresponds to a functionality that lets a user perform
a meaningful piece of work using the software.
Transform analysis
• Transform analysis identifies the primary functional components
(modules) and the input and output data for these components.
• The first step in transform analysis is to divide the DFD into three types
of parts:
Input.
Processing.
Output.
The input portion in the DFD includes processes that transform
input data from physical form (e.g., characters from a terminal) to logical
form (e.g., internal tables, lists, etc.). Each input portion is called an
afferent branch.
The output portion of a DFD transforms output data from logical
form to physical form. Each output portion is called an efferent
branch.
The remaining portion of a DFD is called central transform.
• In the next step of transform analysis, the structure chart is derived by
drawing one functional component each for the central transform, the
afferent and efferent branches. These are drawn below a root module,
which would invoke these modules.
Identifying the input and output parts requires experience and skill.
One possible approach is to trace the input data until a bubble is
found whose output data cannot be deduced from its inputs alone.
Processes which validate input are not central transforms.
Processes which sort input or filter data from it are central
transforms.
The first level of structure chart is produced by representing each
input and output unit as a box and each central transform as a single
box.
• In the third step of transform analysis, the structure chart is refined by
adding subfunctions required by each of the high-level functional
components.
Many levels of functional components may be added.
This process of breaking functional components into subcomponents is
called factoring.
Factoring includes adding read and write modules, error-handling
modules, initialisation and termination process, identifying
consumer modules etc. The factoring process is continued until all
bubbles in the DFD are represented in the structure chart.
Example 6.6 Draw the structure chart for the RMS software of Example 6.1.
• By observing the level 1 DFD of Figure 6.8, we can identify validate-input as
the afferent branch and write-output as the efferent branch. The remaining
bubble (i.e., compute-rms) forms the central transform. By applying steps 2
and 3 of transform analysis, we get the structure chart shown in Figure 6.19.
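A plain-text sketch of the kind of structure chart this produces is given below. The
grouping of the afferent modules under a get-good-data module is an assumption made
here for illustration; the exact module names and layout of Figure 6.19 may differ.
    main (root)
        get-good-data        (afferent branch)
            read-input
            validate-input
        compute-rms          (central transform)
        write-output         (efferent branch)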
o In a transaction-driven system, different data items may pass through
different computation paths through the DFD.
o This is in contrast to a transform centered system where each data item
entering the DFD goes through the same processing steps.
o Each different way in which input data is processed is a transaction.
o A simple way to identify a transaction is the following. Check the input data.
The number of bubbles on which the input data to the DFD are incident
defines the number of transactions. However, some transactions may not
require any input data. These transactions can be identified based on the
experience gained from solving a large number of examples.
o For each identified transaction, trace the input data to the output.
o All the traversed bubbles belong to the transaction. These bubbles should
be mapped to the same module on the structure chart.
o In the structure chart, draw a root module and below this module draw
each identified transaction as a module. Every transaction carries a tag
identifying its type.
o Transaction analysis uses this tag to divide the system into transaction
modules and a transaction-center module.
o Since the input data to this DFD are handled in three different ways (accept-order,
accept-indent-request, and handle-query), we have three different
transactions corresponding to these, as shown in Figure 6.22.
6. DETAILED DESIGN
During detailed design the pseudo code description of the processing and the
different data structures are designed for the different modules of the structure
chart.
These are usually described in the form of module specifications (MSPEC).
MSPEC is usually written using structured English.
The MSPEC for the non-leaf modules describes the different conditions under
which the responsibilities are delegated to the lower-level modules.
The MSPEC for the leaf-level modules should describe in algorithmic form how the
primitive processing steps are carried out.
To develop the MSPEC of a module, it is usually necessary to refer to the DFD
model and the SRS document to determine the functionality of the module.
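As an illustration, a structured English MSPEC for a leaf-level module might look like
the following; the module name and processing steps are assumed here purely for
illustration and are not taken from any example in these notes.
    MODULE: compute-average
    INPUT: a list of validated numbers
    OUTPUT: average of the numbers
    PROCESSING:
        Set sum to 0.
        For each number in the input list, add the number to sum.
        Divide sum by the count of numbers to obtain average.
        Return average.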
Unit IV USER INTERFACE DESIGN
1. CHARACTERISTICS OF A GOOD USER INTERFACE
Speed of learning:
A good user interface should be easy to learn.
Speed of learning is hampered by complex syntax and semantics of the
command issue procedures.
A good user interface should not require its users to memorise commands.
Neither should the user be asked to remember information from one
screen to another while performing various tasks using the interface.
Besides, the following three issues are crucial to enhance the speed of
learning:
Use of metaphors and intuitive command names:
The abstractions of real-life objects or concepts used in user
interface design are called metaphors.
If the user interface of a text editor uses concepts similar to
the tools used by a writer for text editing such as cutting lines
and paragraphs and pasting it at other places, users can
immediately relate to it.
Another popular metaphor is a shopping cart. Everyone
knows how a shopping cart is used to make choices while
purchasing items in a supermarket. If a user interface uses
the shopping cart metaphor for designing the interaction
style for a situation where similar types of choices have to be
made, then the users can easily understand and learn to use
the interface. Also, learning is facilitated by intuitive
command names and symbolic command issue procedures.
Consistency:
Once a user learns about a command, he should be able to
use similar commands in different circumstances for
carrying out similar actions.
This makes it easier to learn the interface since the user can
extend his knowledge about one part of the interface to the
other parts. Thus, the different commands supported by an
interface should be consistent.
Component-based interface:
Users can learn an interface faster if the interaction style of
the interface is very similar to the interface of other
applications with which the user is already familiar.
This can be achieved if the interfaces of different
applications are developed using some standard user
interface components.
The speed of learning characteristic of a user interface can be determined
by measuring the training time and practice that users require before they
can effectively use the software.
Speed of use:
Speed of use of a user interface is determined by the time and user effort
necessary to initiate and execute different commands.
This characteristic of the interface is sometimes referred to as the productivity
support of the interface.
It indicates how fast the users can perform their intended tasks.
The time and user effort necessary to initiate and execute different
commands should be minimal. This can be achieved through careful design
of the interface.
For example, an interface that requires users to type in lengthy commands
or involves mouse movements to different areas of the screen that are wide
apart for issuing commands can slow down the operating speed of users.
The most frequently used commands should have the smallest length or be
available at the top of a menu to minimise the mouse movements necessary
to issue commands.
Speed of recall:
Once users learn how to use an interface, the speed with which they can
recall the command issue procedure should be maximised.
This characteristic is very important for intermittent users. Speed of recall
is improved if the interface is based on some metaphors, symbolic
command issue procedures, and intuitive command names.
Error prevention:
A good user interface should minimise the scope of committing errors
while initiating different commands.
The error rate of an interface can be easily determined by monitoring the
errors committed by average users while using the interface.
This monitoring can be automated by instrumenting the user interface
code with monitoring code which can record the frequency and types of
user error and later display the statistics of various kinds of errors
committed by different users.
Consistency of names, issue procedures, and behaviour of similar
commands and the simplicity of the command issue procedures minimise
error possibilities. Also, the interface should prevent the user from
entering wrong values.
Aesthetic and attractive:
A good user interface should be attractive to use. An attractive user
interface catches user attention and fancy. In this respect, graphics-based
user interfaces have a definite advantage over text-based interfaces.
Consistency:
The commands supported by a user interface should be consistent.
The basic purpose of consistency is to allow users to generalise the
knowledge about aspects of the interface from one part to another.
Thus, consistency facilitates speed of learning, speed of recall, and also
helps in reduction of error rate
Feedback:
A good user interface must provide feedback to various user actions.
Especially, if any user request takes more than a few seconds to process, the
user should be informed about the state of the processing of his request.
In the absence of any response from the computer for a long time, a novice
user might even start recovery/shutdown procedures in panic.
If required, the user should be periodically informed about the progress
made in processing his command.
Support for multiple skill levels:
A good user interface should support multiple levels of sophistication of
command issue procedure for different categories of users.
This is necessary because users with different levels of experience in using
an application prefer different types of user interfaces.
Experienced users are more concerned about the efficiency of the
command issue procedure, whereas novice users pay importance to
usability aspects.
Very cryptic and complex commands discourage a novice, whereas
elaborate command sequences make the command issue procedure very
slow and therefore put off experienced users.
When someone uses an application for the first time, his primary concern
is speed of learning. After using an application for extended periods of time,
he becomes familiar with the operation of the software. As a user becomes
more and more familiar with an interface, his focus shifts from usability
aspects to speed of command issue aspects. Experienced users look for
options such as “hot-keys”, “macros”, etc.
Thus, the skill level of users improves as they keep using a software
product and they look for commands to suit their skill levels.
Error recovery (undo facility):
While issuing commands, even the expert users can commit errors.
Therefore, a good user interface should allow a user to undo a mistake
committed by him while using the interface. Users are inconvenienced if
they cannot recover from the errors they commit while using a software. If
the users cannot recover even from very simple types of errors, they feel
irritated, helpless, and out of control.
User guidance and on-line help:
Users seek guidance and on-line help when they either forget a command
or are unaware of some features of the software. Whenever users need
guidance or seek help from the system, they should be provided with
appropriate guidance and help.
2. BASIC CONCEPTS
Window System
Most modern graphical user interfaces are developed using some window system.
A window system can generate displays through a set of windows.
Since a window is the basic entity in such a graphical user interface, we need to
first discuss what exactly a window is.
Window: A window is a rectangular area on the screen. A window can be
considered to be a virtual screen, in the sense that it provides an interface to the
user for carrying out independent activities, e.g., one window can be used
for editing a program and another for drawing pictures, etc.
A window can be divided into two parts—client part, and non-client part.
The client area makes up the whole of the window, except for the borders and
scroll bars. The client area is the area available to a client application for display.
The non-client-part of the window determines the look and feel of the window.
The look and feel defines a basic behaviour for all windows, such as creating,
moving, resizing, iconifying of the windows. The window manager is responsible
for managing and maintaining the non-client area of a window.
A basic window with its different parts is shown in Figure 9.3.
Visual programming is the drag and drop style of program development. In this
style of user interface development, a number of visual objects (icons)
representing the GUI components are provided by the programming environment.
The application programmer can easily develop the user interface by dragging the
required component types (e.g., menu, forms, etc.) from the displayed icons and
placing them wherever required. Thus, visual programming can be considered as
program development through manipulation of several visual objects.
Reuse of program components in the form of visual objects is an important aspect
of this style of programming. Though popular for user interface development, this
style of programming can be used for other applications such as Computer-Aided
Design application (e.g., factory design), simulation, etc. User interface
development using a visual programming language greatly reduces the effort
required to develop the interface.
Examples of popular visual programming languages are Visual Basic, Visual C++,
etc. Visual C++ provides tools for building programs with window-based user
interfaces for Microsoft Windows environments. In Visual C++ you usually design
menu bars, icons, dialog boxes, etc. before adding them to your program.
These objects are called resources. You can design the shape, location, type,
and size of the dialog boxes before writing any C++ code for the application.
5. CODING
The input to the coding phase is the design document produced at the end of the
design phase.
During the coding phase, different modules identified in the design document are
coded according to their respective module specifications.
The objective of the coding phase is to transform the design of a system
into code in a high-level language, and then to unit test this code.
Normally, good software development organisations require their programmers to
adhere to some well-defined and standard style of coding which is called their coding
standard.
The main advantages of adhering to a standard style of coding are the following:
o A coding standard gives a uniform appearance to the codes written by
different engineers.
o It facilitates code understanding and code reuse.
o It promotes good programming practices.
A coding standard lists several rules to be followed during coding, such as the way
variables are to be named, the way the code is to be laid out, the error return
conventions, etc. Besides the coding standards, several coding guidelines are also
prescribed by software companies.
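As an illustration, the C fragment below shows the kind of rules a coding standard
might prescribe; the specific conventions (header comment format, naming style, error
return value) are hypothetical examples assumed for these notes, not rules mandated
by any particular organisation.
    /* Module : accounts                                          */
    /* Purpose: look up the balance of an account                 */
    /* (header comment format fixed by the hypothetical standard) */

    #define ERROR_INVALID_ACCOUNT (-1)   /* error return values defined in one place */

    /* Global variables carry the prefix g_, locals are lower_case. */
    static int g_total_accounts = 0;

    int get_balance(int account_id)
    {
        int balance = 0;                 /* one declaration per line, initialised */

        if (account_id < 0)
            return ERROR_INVALID_ACCOUNT;   /* error return convention */
        /* ... look up the balance ... */
        return balance;
    }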
It is mandatory for the programmers to follow the coding standards.
Compliance of their code to coding standards is verified during code inspection.
Any code that does not conform to the coding standards is rejected during code
review and the code is reworked by the concerned programmer.
In contrast, coding guidelines provide some general suggestions regarding the coding
style to be followed but leave the actual implementation of these guidelines to the
discretion of the individual developers.
6. SOFTWARE DOCUMENTATION
When a software is developed, in addition to the executable files and the source
code, several kinds of documents such as users’ manual, software requirements
specification (SRS) document, design document, test document, installation
manual, etc., are developed as part of the software engineering process.
All these documents are considered a vital part of any good software development
practice.
Good documents are helpful in the following ways:
o Good documents help enhance understandability of code. As a result, the
availability of good documents helps to reduce the effort and time required
for maintenance.
o Documents help the users to understand and effectively use the system.
o Good documents help to effectively tackle the manpower turnover
problem. Even when an engineer leaves the organisation, and a new
engineer comes in, he can build up the required knowledge easily by
referring to the documents.
o Production of good documents helps the manager to effectively track the
progress of the project. The project manager would know that some
measurable progress has been achieved if the results of some pieces of
work have been documented and the same have been reviewed.
Different types of software documents can broadly be classified into the following:
o Internal documentation: These are provided in the source code itself.
o External documentation: These are the supporting
documents such as SRS document, installation document, user manual,
design document, and test document.
o Observe that the fog index is computed as the sum of two different factors.
o The first factor computes the average number of words per sentence (total
number of words in the document divided by the total number of
sentences).
o This factor therefore accounts for the common observation that long
sentences are difficult to understand.
o The second factor measures the percentage of complex words in the
document.
o Note that a syllable is a unit of pronunciation into which a word can be
divided. For example, the word “understand” has three syllables (“un”,
“der”, and “stand”). Words having three or more syllables are complex
words, and the presence of many such words hampers the readability of a
document.
o Example 10.1 Consider the following sentence:
“The Gunning’s fog index is based on the premise that use of short
sentences and simple words makes a document easy to
understand.” Calculate its fog index.
The fog index of the above example sentence is computed below.
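The standard Gunning fog index formula is:
    Fog index = 0.4 × [ (total words / total sentences) + 100 × (complex words / total words) ]
Assuming the example is counted as 23 words in a single sentence, with three words of
three or more syllables (sentences, document, understand) treated as complex, one
plausible computation is:
    Fog index = 0.4 × [ (23 / 1) + 100 × (3 / 23) ]
              = 0.4 × [ 23 + 13.04 ]
              ≈ 14.4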
7. TESTING
Testing a program involves executing the program with a set of test inputs and
observing if the program behaves as expected.
If the program fails to behave as expected, then the input data and the conditions
under which it fails are noted for later debugging and error correction.
A highly simplified view of program testing is schematically shown in Figure 10.1.
Figure 10.1: A simplified view of program testing
The tester has been shown as a stick icon, who inputs several test data to the
system and observes the outputs produced by it to check if the system fails on
some specific inputs.
Unless the conditions under which a software fails are noted down, it becomes
difficult for the developers to reproduce a failure observed by the testers.
For example, a software might fail for a test case only when a network connection
is enabled.
Terminologies
The objectives of both verification and validation techniques are very similar since
both these techniques are designed to help remove errors in a software.
Verification is the process of determining whether the output of one phase of
software development conforms to that of its previous phase; whereas validation
is the process of determining whether a fully developed software conforms
to its requirements specification.
Thus, the objective of verification is to check if the work products produced after
a phase conform to that which was input to the phase. For example, a verification
step can be to check if the design documents produced after the design step
conform to the requirements specification. On the other hand, validation is applied
to the fully developed and integrated software to check if it satisfies the customer’s
requirements.
The primary techniques used for verification include review, simulation, formal
verification, and testing. Review, simulation, and testing are usually considered as
informal verification techniques. Formal verification usually involves use of
theorem proving techniques or use of automated tools such as a model checker.
On the other hand, validation techniques are primarily based on product testing.
Note that we have categorised testing both under program verification and
validation.
The reason being that unit and integration testing can be considered as
verification steps where it is verified whether the code is as per the module and
module interface specifications. On the other hand, system testing can be
considered as a validation step where it is determined whether the fully developed
code is as per its requirements specification.
Verification does not require execution of the software, whereas validation
requires execution of the software.
Verification is carried out during the development process to check if the
development activities are proceeding alright, whereas validation is carried out to
check if the right product, as required by the customer, has been developed.
Verification techniques can be viewed as an attempt to achieve phase containment
of errors. Phase containment of errors has been acknowledged to be a cost-
effective way to eliminate program bugs, and is an important software engineering
principle. The principle of detecting errors as close to their points of commitment
as possible is known as phase containment of errors. Phase containment of errors
can reduce the effort required for correcting bugs. For example, if a design
problem is detected in the design phase itself, then the problem can be taken
care of much more easily than if the error is identified, say, at the end of the
testing phase. In the latter case, it would be necessary not only to rework the
design, but also to appropriately redo the relevant coding as well as the
system testing activities, thereby incurring higher cost.
While verification is concerned with phase containment of errors, the aim of
validation is to check whether the deliverable software is error free.
We can consider the verification and validation techniques to be different types of
bug filters. To achieve high product reliability in a cost-effective manner, a
development team needs to perform both verification and validation activities.
The activities involved in these two types of bug detection techniques together are
called the “V and V” activities.
Based on the above discussions, we can conclude that:
Error detection techniques = Verification techniques + Validation
techniques
8. TESTING ACTIVITIES
As can be seen, the test cases are first designed, and then the test cases are run to
detect failures.
The bugs causing the failure are identified through debugging, and the identified
error is corrected.
Of all the above mentioned testing activities, debugging often turns out to be the
most time-consuming activity.
9. UNIT TESTING
Unit testing is undertaken after a module has been coded and reviewed.
This activity is typically undertaken by the coder of the module himself in the
coding phase.
Before carrying out unit testing, the unit test cases have to be designed and the
test environment for the unit under test has to be developed.
Driver and stub modules
o In order to test a single module, we need a complete environment to
provide all relevant code that is necessary for execution of the module.
o That is, besides the module under test, the following are needed to test the
module:
The procedures belonging to other modules that the module under
test calls.
Non-local data structures that the module accesses.
A procedure to call the functions of the module under test with
appropriate parameters.
o Modules required to provide the necessary environment (which either call
or are called by the module under test) are usually not available until they
too have been unit tested.
o In this context, stubs and drivers are designed to provide the complete
environment for a module so that testing can be carried out.
o Stub:
A stub procedure is a dummy procedure that has the same I/O
parameters as the function called by the unit under test but has a
highly simplified behaviour.
For example, a stub procedure may produce the expected behaviour
using a simple table look up mechanism.
Figure 10.3: Unit testing with the help of driver and stub modules.
o Driver:
A driver module should contain the non-local data structures
accessed by the module under test.
Additionally, it should also have the code to call the different
functions of the unit under test with appropriate parameter values
for testing.
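A minimal sketch in C of a driver and a stub is shown below. The module
compute_interest() and the function get_rate() are hypothetical names invented only
for this illustration; they do not refer to any module discussed in these notes.
    #include <stdio.h>

    /* Belongs to another, not-yet-tested module; declared here so that a stub
       can stand in for it. */
    float get_rate(int account_type);

    /* Unit under test: computes simple interest using get_rate(). */
    float compute_interest(float principal, int years, int account_type) {
        return (principal * get_rate(account_type) * years) / 100.0f;
    }

    /* Stub: same interface as the real get_rate(), but with a highly
       simplified behaviour (a small table look-up). */
    float get_rate(int account_type) {
        return (account_type == 1) ? 8.0f : 6.0f;
    }

    /* Driver: holds the data needed by the unit and calls it with chosen
       parameter values so that its output can be checked. */
    int main(void) {
        float result = compute_interest(1000.0f, 2, 1);
        printf("interest = %.2f (expected 160.00)\n", result);
        return 0;
    }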
In the equivalence class partitioning approach, the domain of input values to the
program under test is partitioned into a set of equivalence classes.
The partitioning is done such that for every input data belonging to the same
equivalence class, the program behaves similarly.
The main idea behind defining equivalence classes of input data is that testing the
code with any one value belonging to an equivalence class is as good as testing the
code with any other value belonging to the same equivalence class.
Equivalence classes for a unit under test can be designed by examining the input
data and output data.
The following are two general guidelines for designing the equivalence classes:
• If the input data values to a system can be specified by a range of values,
then one valid and two invalid equivalence classes need to be defined.
For example, if the equivalence class is the set of integers in the range 1
to 10 (i.e., [1,10]), then the invalid equivalence classes are [−∞,0],
[11,+∞].
• If the input data assumes values from a set of discrete members of some
domain, then one equivalence class for the valid input values and
another equivalence class for the invalid input values should be
defined. For example, if the valid equivalence classes are {A,B,C}, then
the invalid equivalence class is U-{A,B,C}, where U is the universe of
possible input values.
In the following, we illustrate equivalence class partitioning-based test case
generation through four examples.
Example 10.6
• For a software that computes the square root of an input integer that
can assume values in the range of 0 and 5000. Determine the
equivalence classes and the black box test suite.
• Answer:
There are three equivalence classes—The set of negative integers,
the set of integers in the range of 0 and 5000, and the set of integers
larger than 5000. Therefore, the test cases must include
representatives for each of the three equivalence classes. A possible
test suite can be: {–5,500,6000}.
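A small C sketch of how these three representative values might drive the unit under
test is given below; the function name isqrt() and its behaviour of returning -1 for
out-of-range inputs are assumptions made only for this illustration.
    #include <math.h>
    #include <stdio.h>

    /* Hypothetical unit under test: integer square root for inputs in
       [0, 5000]; returns -1 for inputs outside the valid range. */
    int isqrt(int n) {
        if (n < 0 || n > 5000)
            return -1;
        return (int)sqrt((double)n);
    }

    int main(void) {
        /* One representative value from each equivalence class. */
        int tests[] = { -5, 500, 6000 };
        for (int i = 0; i < 3; i++)
            printf("isqrt(%d) = %d\n", tests[i], isqrt(tests[i]));
        return 0;
    }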
White box testing techniques analyze the internal structures of the software: the data
structures used, the internal design, the code structure, and the working of the software,
rather than just the functionality as in black box testing.
It is also called glass box testing or clear box testing or structural testing.
Advantages
• Testing can be commenced at an earlier stage. One need not wait for the
GUI to be available.
• Testing is more thorough, with the possibility of covering most paths.
Disadvantages
• Since tests can be very complex, highly skilled resources are required, with
a thorough knowledge of programming and implementation.
• Test script maintenance can be a burden if the implementation changes too
frequently.
• Since this method of testing is closely tied to the application being tested,
tools to cater to every kind of implementation/platform may not be readily
available.
Statement Coverage is a white box testing technique in which all the executable
statements in the source code are executed at least once.
It is used for calculation of the number of statements in source code which have
been executed.
The main purpose of Statement Coverage is to cover all the possible paths, lines
and statements in source code.
Statement coverage is used to derive test scenarios based upon the structure of the code
under test.
Source Code: (the example program itself is not reproduced in these notes; a
representative sketch is given after Scenario 2)
Scenario 1: If A = 3, B = 9, only the statements on the corresponding branch of the
code are executed.
Number of executed statements = 5, Total number of statements = 7
Statement Coverage = 5/7 = 71%
Scenario 2: If A = -3, B = -9, the statements on the other branch are executed
instead.
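The hypothetical C fragment below is consistent with the figures quoted above: with
A = 3 and B = 9, five of its seven executable statements are executed (about 71%), with
A = -3 and B = -9 the other branch is executed instead, and running both scenarios
together covers all seven statements.
    #include <stdio.h>

    void demo(int a, int b) {
        int result = 0;                      /* 1 */
        if (a > 0 && b > 0) {                /* 2 */
            result = a + b;                  /* 3 */
            printf("Sum = %d\n", result);    /* 4 */
        } else {
            result = a - b;                  /* 5 */
            printf("Diff = %d\n", result);   /* 6 */
        }
        printf("Done\n");                    /* 7 */
    }

    int main(void) {
        demo(3, 9);     /* Scenario 1: executes statements 1,2,3,4,7 -> 5/7 */
        demo(-3, -9);   /* Scenario 2: executes statements 1,2,5,6,7 -> 5/7 */
        return 0;
    }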
Branch Coverage is a white box testing method in which every outcome from a
code module (statement or loop) is tested.
The purpose of branch coverage is to ensure that each decision condition from
every branch is executed at least once.
It helps to measure fractions of independent code segments and to find out
sections having no branches.
If the outcomes are binary, you need to test both True and False outcomes.
The formula to calculate Branch Coverage:
Branch Coverage = Number of executed branches / Total number of branches
Example
Demo(int a) {
    if (a > 5)
        a = a * 3;
    print(a);
}
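For the Demo() example above, the decision a > 5 has two outcomes. A test set such as
{a = 7, a = 2} (values chosen here only for illustration) exercises both the True and the
False branches, giving a branch coverage of 2/2 = 100%, whereas the single test a = 7
alone covers only 1 of the 2 branches (50%).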
Also known as predicate coverage, it involves testing the condition statement for
both True and False values for all the input variables.
EXAMPLE 5.2 If a and b, then
Condition Coverage can be satisfied by two tests for True and False values of a
and b:
a = True, b = False
a = False, b = True
In this testing, both Condition Coverage and Decision Coverage are covered.
Every Condition in a decision in the program takes all possible Boolean values at
least once, and every Decision in the program takes all possible outcomes at least
once.
Both Decision Coverage and Condition Coverage are satisfied as illustrated in
Examples 5.3 and 5.4.
EXAMPLE 5.3
If a and b, then the two tests
a = True, b = True
a = False, b = False
together satisfy both Decision Coverage and Condition Coverage: the decision takes
both outcomes, and each of a and b takes both True and False values.
Multiple condition coverage checks the True or False outcomes of each condition
and requires that all combinations of conditions inside each decision are tested.
EXAMPLE 5.6 If {(a or b) and c}, then
It will require eight tests to satisfy Multiple Condition Coverage, as
there are 3 variables with 8 combinations (listed below):
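Each row below is one test case for the decision {(a or b) and c}:
    a       b       c
    False   False   False
    False   False   True
    False   True    False
    False   True    True
    True    False   False
    True    False   True
    True    True    False
    True    True    True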
A path through a program is any node and edge sequence from the start node to a
terminal node of the control flow graph of a program.
In this type of testing, all paths (each being a sequence of nodes and edges from
the start node to a terminal node) are executed at least once.
This is the strongest criterion.
EXAMPLE 5.8 Consider the CFG shown in Figure 5.2.
The graph in Figure 5.2 shows that for statement coverage, nodes A,B,C,D,E,F are to be
traversed.
• For Path Coverage, the paths ABCD, AEFCD, AED are to be traversed.
Example:
if A == 10 then
if B > C
A = B
else A = C
endif
endif
print A, B, C
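For the above fragment there are three distinct paths from entry to the print
statement: (i) A ≠ 10, (ii) A == 10 and B > C (so A = B), and (iii) A == 10 and B ≤ C
(so A = C). A test set such as (A=5, B=1, C=2), (A=10, B=7, C=3), and (A=10, B=2, C=6),
with values chosen here purely for illustration, exercises all three paths.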
Variable    Defined at line    Used at line
x           1                  2, 3
y           1                  2, 4
a           3, 4               5
(i) Hardware versus Software Reliability
2. SOFTWARE QUALITY
(i) SOFTWARE QUALITY MANAGEMENT SYSTEM
Managerial structure and individual responsibilities
A quality system is the responsibility of the organisation as a whole.
However, every organisation has a separate quality department to perform
several quality system activities.
The quality system of an organisation should have the full support of the top
management. Without support for the quality system at a high level in a
company, few members of staff will take the quality system seriously.
Quality system activities
The quality system activities encompass the following:
o Auditing of projects to check if the processes are being followed.
o Collect process and product metrics and analyse them to check if quality
goals are being met.
o Review of the quality system to make it more effective.
o Development of standards, procedures, and guidelines.
o Produce reports for the top management summarising the
effectiveness of the quality system in the organisation.
A good quality system must be well documented. Without a properly
documented quality system, the application of quality controls and
procedures becomes ad hoc, resulting in large variations in the quality of the
products delivered.
International standards such as ISO 9000 provide guidance on how to
organise a quality system.
Evolution of Quality Systems
Figure 11.3: Evolution of quality system and corresponding shift in the quality
paradigm.
Thus, quality control aims at correcting the causes of errors and not just rejecting the
defective products. The next breakthrough in quality systems was the
development of the quality assurance (QA) principles.
The basic premise of modern quality assurance is that if an organisation’s
processes are good and are followed rigorously, then the products are bound to
be of good quality.
The modern quality assurance paradigm includes guidance for recognising,
defining, analysing, and improving the production process.
Total quality management (TQM) advocates that the process followed by an
organisation must continuously be improved through process measurements.
TQM goes a step further than quality assurance and aims at continuous process
improvement.
TQM goes beyond documenting processes to optimising them through redesign.
A term related to TQM is business process re-engineering (BPR), which aims at re-
engineering the way business is carried out in an organisation, whereas our
focus in this text is re-engineering of the software development process.
From the above discussion, we can say that over the last six decades or so, the
quality paradigm has shifted from product assurance to process assurance (see
Figure 11.3).
ProductMetricsversusProcessMetrics
All modern quality systems lay emphasis on collection of certain product and
process metrics during product development. Let us first understand the basic
differences between product and process metrics.
Product metrics help measure the characteristics of a product being developed,
whereas process metrics help measure how a process is performing.
Examples of product metrics are LOC and function point to measure size, PM
(person-month) to measure the effort required to develop it, months to measure
the time required to develop the product, time complexity of the algorithms, etc.
Examples of process metrics are review effectiveness, average number of defects
found per hour of inspection, average defect correction time, productivity, average
number of failures detected during testing per LOC, number of latent defects per
line of code in the developed product.
3. ISO 9000
(i) What is ISO 9000 Certification?
(ii) ISO 9000 for Software Industry
Even though ISO 9000 is widely being used for setting up an effective quality
system in an organisation, it suffers from several shortcomings.
Some of these shortcomings of the ISO 9000 certification process are the following:
o ISO 9000 requires a software production process to be adhered to, but does not
guarantee the process to be of high quality. It also does not give any
guideline for defining an appropriate process.
o The ISO 9000 certification process is not fool-proof and no international
accreditation agency exists. Therefore, it is likely that variations in the norms
of awarding certificates can exist among the different accreditation agencies and
also among the registrars.
o Organisations getting ISO 9000 certification often tend to downplay
domain expertise and the ingenuity of the developers. These organisations
start to believe that since a good process is in place, the development
results are truly person-independent. That is, any developer is as effective
as any other developer in performing any particular software development
activity. In manufacturing industry there is a clear link between process
quality and product quality. Once a process is calibrated, it can be run again and
again producing quality goods. Many areas of software development are
so specialised that special expertise and experience in these areas
(domain expertise) is required. Also, unlike in the case of general product
manufacturing, ingenuity and effectiveness of personal practices play an
important part in determining the results produced by a developer. In
other words, software development is a creative process and individual
skills and experience are important.
o ISO 9000 does not automatically lead to continuous process improvement. In
other words, it does not automatically lead to TQM.
4. SEI CAPABILITY MATURITY MODEL
SEI capability maturity model (SEI CMM) was proposed by Software Engineering
Institute of the Carnegie Mellon University, USA.
The United States Department of Defence (US DoD) is the largest buyer of software
products. It often faced difficulties in vendor performances, and many times had to
live with low quality products, late delivery, and cost escalations. In this context,
SEI CMM was originally developed to assist the U.S. Department of Defense (DoD) in
software acquisition.
The rationale was to include the likely contractor performance as a factor in
contract awards. Most of the major DoD contractors began CMM-based process
improvement initiatives as they vied for DoD contracts.
It was observed that the SEI CMM model helped organisations to improve the
quality of the software they developed and therefore adoption of the SEI CMM model
had significant business benefits. Gradually many commercial organisations
began to adopt CMM as a framework for their own internal improvement
initiatives.
In simple words, CMM is a reference model for appraising the software process
maturity into different levels. This can be used to predict the most likely outcome to be
expected from the next project that the organisation undertakes.
It must be remembered that SEI CMM can be used in two ways—
capability evaluation and software process assessment.
Capability evaluation and software process assessment differ in motivation,
objective, and the final use of the result.
Capability evaluation provides a way to assess the software process capability of
an organisation. Capability evaluation is administered by the contract awarding
authority, and therefore the results would indicate the likely contractor
performance if the contractor is awarded a work. On the other hand, software
process assessment is used by an organisation with the objective to improve its
own process capability. Thus, the latter type of assessment is for purely internal
use by a company.
The different levels of SEI CMM have been designed so that it is easy for an
organisation to slowly build its quality system starting from scratch.
SEI CMM classifies software development industries into the following five maturity
levels:
Level 1: Initial
o A software development organisation at this level is characterised by
ad hoc activities. Very few or no processes are defined and followed.
o Since software production processes are not defined, different engineers
follow their own process and as a result development efforts become
chaotic. Therefore, it is also called chaotic level.
o The success of projects depends on individual efforts and heroics.
o When a developer leaves the organisation, the successor would have great
difficulty in understanding the process that was followed and the work
completed. Also, no formal project management practices are followed. As a
result, time pressure builds up towards the end of the delivery time, and
short-cuts are tried out, leading to low quality products.
Level 2: Repeatable
o At this level, the basic project management practices such as tracking cost and
schedule are established.
o Configuration management tools are used on items identified for
configuration control.
o Size and cost estimation techniques such as function point analysis, COCOMO,
etc., are used.
o The necessary process discipline is in place to repeat earlier successes on
projects with similar applications.
o Though there is a rough understanding among the developers about the
process being followed, the process is not documented.
o Configuration management practices are used for all project deliverables.
o Please remember that the opportunity to repeat a process exists only when a
company produces a family of products. Since the products are very
similar, the success story on development of one product can be repeated for
another.
o In a non-repeatable software development organisation, a software
product development project becomes successful primarily due to the
initiative, effort, brilliance, or enthusiasm displayed by certain individuals.
o Thus, in a non-repeatable software development organisation, the chances of
successful completion of a software project depend to a great extent
on who the team members are. For this reason, the successful development
of one product by such an organisation does not automatically imply that
the next product development will be successful.
Level 3: Defined
o At this level, the processes for both management and development
activities are defined and documented.
o There is a common organisation-wide understanding of activities, roles, and
responsibilities.
o Though the processes are defined, the process and product qualities are not
measured.
o At this level, the organisation builds up the capabilities of its employees
through periodic training programs. Also, review techniques are
emphasized and documented to achieve phase containment of errors.
o ISO 9000 aims at achieving this level.
Level 4: Managed
o At this level, the focus is on software metrics.
o Both process and product metrics are collected.
o Quantitative quality goals are set for the products, and at the time of
completion of development it is checked whether the quantitative
quality goals for the product are met.
o Various tools like Pareto charts, fishbone diagrams, etc. are used to
measure the product and process quality.
o The process metrics are used to check if a project performed satisfactorily.
Thus, the results of process measurements are used to evaluate project
performance rather than improve the process.
Level 5: Optimising
o At this stage, process and product metrics are collected.
o Process and product measurement data are analysed for continuous
process improvement.
o For example, if from an analysis of the process measurement results, it is
found that the code reviews are not very effective and a large number of
errors are detected only during the unit testing, then the process would be fine
tuned to make the review more effective. Also, the lessons learned from
specific projects are incorporated into the process.
o Continuous process improvement is achieved both by carefully analysing
the quantitative feedback from the process measurements and also from
application of innovative ideas and technologies.
o At CMM level 5, an organisation would identify the best software
engineering practices and innovations (which may be tools, methods, or
processes) and would transfer these organisation-wide.
o Level 5 organisations usually have a department whose sole responsibility is
to assimilate latest tools and technologies and propagate them
organisation-wide. Since the process changes continuously, it becomes
necessary to effectively manage a changing process.
o Therefore, level 5 organisations use configuration management techniques to
manage process changes.
Except for level 1, each maturity level is characterised by several key process areas
(KPAs) that indicate the areas an organisation should focus on to improve its
software process to this level from the previous level.
Each of the focus areas identifies a number of key practices or activities that need to be
implemented.
In other words, KPAs capture the focus areas of a level. The focus of each level and the
corresponding key process areas are shown in Table 11.1:
Table 11.1: Focus areas of CMM levels and Key Process Areas
CMM Level    | Focus                          | Key Process Areas (KPAs)
Initial      | Competent people               | -
Repeatable   | Project management             | Software project planning, Software configuration management
Defined      | Definition of processes        | Process definition, Training program, Peer reviews
Managed      | Product and process quality    | Quantitative process metrics, Software quality management
Optimising   | Continuous process improvement | Defect prevention, Process change management, Technology change management
SEI CMM provides a list of key areas on which to focus to take an organisation from one
level of maturity to the next.
Thus, it provides a way for gradual quality improvement over several stages.
Each stage has been carefully designed such that one stage enhances the capability
already built up. For example, trying to implement a defined process (level 3)
before a repeatable process (level 2) would be counterproductive as it becomes
difficult to follow the defined process due to schedule and budget pressures.
Substantial evidence has now been accumulated which indicates that adopting SEI
CMM has several business benefits. However, the organisations trying out the
CMM frequently face a problem that stems from the characteristic of the CMM
itself.
CMM Shortcomings: CMM does suffer from several shortcomings. The important
among these are the following:
o The most frequent complaint by organisations while trying out the CMM-
based process improvement initiative is that they understand what
needs to be improved, but they need more guidance about how to improve it.
o Another shortcoming (that is common to ISO 9000) is that thicker
documents, more detailed information, and longer meetings are
considered to be better. This is in contrast to the principles of software
economics—reducing complexity and keeping the documentation to the
minimum without sacrificing the relevant details.
o Getting an accurate measure of an organisation's current maturity level is
also an issue. The CMM takes an activity-based approach to measuring
maturity; if you do the prescribed set of activities then you are at a certain level.
There is nothing that characterises or quantifies whether you do these
activities well enough to deliver the intended results.
5. SOFTWARE MAINTENANCE
(i) CHARACTERISTICS OF SOFTWARE MAINTENANCE
When the hardware platform changes, and a software product performs some
low-level functions, maintenance is necessary.
Also, whenever the support environment of a software product changes, the
software product requires rework to cope up with the newer interface. For
instance, a software product may need to be maintained when the operating
system changes. Thus, every software product continues to evolve after its
development through maintenance efforts.
Types of Software Maintenance
There are three types of software maintenance, which are described as follows:
Corrective:
o Corrective maintenance of a software product is necessary to rectify the
bugs observed while the system is in use.
Adaptive:
o A software product might need maintenance when the customers need the
product to run on new platforms, on new operating systems, or when they need
the product to interface with new hardware or software.
Perfective:
o A software product needs maintenance to support the new features that
users want it to support, to change different functionalities of the system
according to customer demands, or to enhance the performance of the
system.
(ii) Characteristics of Software Evolution
Lehman and Belady have studied the characteristics of evolution of several software
products [1980].
They have expressed their observations in the form of laws.
These are generalisations and may not be applicable to specific cases; also,
most of these observations concern large software projects and may not be
appropriate for the maintenance and evolution of very small products.
Lehman's first law:
o A software product must change continually or become progressively less
useful.
o Every software product continues to evolve after its development through
maintenance efforts.
o Larger products stay in operation for longer times because of higher
replacement costs and therefore tend to incur higher maintenance efforts.
o This law clearly shows that every product, irrespective of how well
designed, must undergo maintenance. In fact, when a product does not need any
more maintenance, it is a sign that the product is about to be
retired/discarded. This is in contrast to the common intuition that only
badly designed products need maintenance. In fact, good products are
maintained and bad products are thrown away.
Lehman's second law:
o The structure of a program tends to degrade as more and more
maintenance is carried out on it.
o The reason for the degraded structure is that when you add a function
during maintenance, you build on top of an existing program, often in a way that
the existing program was not intended to support. If you do not redesign
the system, the additions will be more complex than they should be.
o Due to quick-fix solutions, in addition to degradation of structure, the
documentation becomes inconsistent and less helpful as more and
more maintenance is carried out.
Lehman's third law:
o Over a program's lifetime, its rate of development is approximately
constant. The rate of development can be quantified in terms of the lines of
code written or modified. Therefore this law states that the rate at which
code is written or modified is approximately the same during development and
maintenance.
(iii) Special Problems Associated with Software Maintenance
Software maintenance work currently is typically much more expensive than what
it should be and takes more time than required. The reasons for this situation are the
following:
Software maintenance work in organisations is mostly carried out using ad hoc
techniques. The primary reason being that software maintenance is one of the
most neglected areas of software engineering.
Software maintenance has a very poor image in industry. Therefore, an
organisation often cannot employ bright engineers to carry out maintenance
work.
Another problem associated with maintenance work is that the majority of
software products needing maintenance are legacy products. Though the word
legacy implies “aged” software, there is no agreement on what exactly a
legacy system is. It is prudent to define a legacy system as any software system that
is hard to maintain. The typical problems associated with legacy systems are poor
documentation, unstructured code (spaghetti code with ugly control structure), and
lack of personnel knowledgeable in the product. Many of the legacy systems were
developed a long time back. But it is possible that a recently developed system
having poor design and documentation can be considered to be a legacy system.
6. SOFTWARE REVERSE ENGINEERING
Software reverse engineering is the process of recovering the design and the
requirements specification of a product from an analysis of its code.
The purpose of reverse engineering is to facilitate maintenance work by
improving the understandability of a system and to produce the necessary
documents for a legacy system.
Reverse engineering is becoming important, since legacy software products lack
proper documentation, and are highly unstructured. Even well-designed
products become legacy software as their structure degrades through a series of
maintenance efforts.
The first stage of reverse engineering usually focuses on carrying out cosmetic
changes to the code to improve its readability, structure, and understandability,
without changing any of its functionalities.
A way to carry out these cosmetic changes is shown schematically in Figure 13.1.
A program can be reformatted using any of the several available pretty printer
programs which lay out the program neatly.
Many legacy software products are difficult to comprehend, with complex control
structure and unthoughtful variable names. Assigning meaningful variable names is
important because meaningful variable names are the most helpful code
documentation. All variables, data structures, and functions should be assigned
meaningful names wherever possible.
Complex nested conditionals in the program can be replaced by simpler
conditional statements or whenever appropriate by case statements.
Figure 13.1: A process model for reverse engineering.
After the cosmetic changes have been carried out on a legacy software, the process of
extracting the code, design, and the requirements specification can begin.
These activities are schematically shown in Figure 13.2.
In order to extract the design, a full understanding of the code is needed. Some
automatic tools can be used to derive the data flow and control flow diagram from the
code.
The structure chart (module invocation sequence and data interchange among
modules) should also be extracted.
The SRS document can be written once the full code has been thoroughly
understood and the design extracted.
Figure 13.2: Cosmetic changes carried out before reverse engineering.
References
Prepared by
Dr.G.MuthuLakshmi B.E.,M.E.,Ph.D.
Associate Professor
Department of Computer Science & Engineering,
Manonmaniam Sundaranar University,
Abishekapatti, Tirunelveli - 627012,
Tamilnadu, India.