Unit 4
SOFTWARE QUALITY AND SECURITY
Structure
4.0 Introduction
4.1 Objectives
4.2 Software Quality
4.3 Formal Technical Review
4.4 Software Reliability
4.5 Software Quality Standards
4.6 Security Engineering
4.7 Summary
4.8 Solutions/Answers
4.9 Further Readings
4.0 INTRODUCTION
4.1 OBJECTIVES
The purpose of this unit is to give an insight into how the software quality assurance
activity is undertaken in the software engineering process.
Quality software is reasonably bug-free, delivered on time and within budget, meets
requirements and is maintainable. However, as discussed above, quality is a subjective
term. It will depend on who the 'customer' is and their overall influence in the scheme
of things. Each type of 'customer' will have their own slant on 'quality'. The end-user
might define quality as something which is user-friendly and bug-free.
Good quality software satisfies both explicit and implicit requirements. Software
quality is a complex mix of characteristics that varies from application to application
and with the customer who requests it.
Attributes of Quality
Completeness: The degree to which all of the software's required functions and
design constraints are present and fully developed in the requirements specification,
design document and code.
Correctness: The degree to which a system or component is free from faults in its
specification, design, and implementation. The degree to which software,
documentation, or other items meet specified requirements.
Feasibility: The degree to which the requirements, design, or plans for a system or
component can be implemented under existing constraints.
Predictability: The degree to which the functionality and performance of the software
are determinable for a specified set of inputs.
Robustness: The degree to which a system or component can function correctly in the
presence of invalid inputs or stressful environmental conditions.
Structuredness: The degree to which the SDD (System Design Document) and code
possess a definite pattern in their interdependent parts. This implies that the design has
proceeded in an orderly and systematic manner (e.g., top-down, bottom-up). The
modules are cohesive and the software has minimised coupling between modules.
Understandability: The degree to which the meaning of the SRS, SDD, and code is
clear and understandable to the reader.
Verifiability: The degree to which the SRS, SDD, and code have been written to
facilitate verification and testing.
Defect metrics
Maintainability metrics
Consider the graph of Figure 4.1. Each node represents one program segment and
edges represent the control flow. The complexity of the software module represented
by the graph can be given by a simple formula of graph theory as follows:

V(G) = e – n + 2

where e is the number of edges and n is the number of nodes in the flow graph.
Figure 4.1: Flow graph of a program module (nodes such as B and C represent program segments)
Applying the above equation, the complexity V(G) of the graph is found to be 3.

The cyclomatic complexity has been related to programming effort, maintenance
effort and debugging effort. Although cyclomatic complexity measures program
complexity, it fails to measure the complexity of a program without multiple
conditions; for example, a long sequence of straight-line code has the same V(G) as a
trivial one-line program.
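The computation of V(G) is easy to automate once the flow graph is available. The following is a minimal sketch in Python, assuming the flow graph is stored as an adjacency list; the node names and edges below are hypothetical and simply reproduce a graph with two decision nodes, so that V(G) = 3 as in the example above.

def cyclomatic_complexity(graph):
    # V(G) = e - n + 2 for a single connected control flow graph
    n = len(graph)                                              # number of nodes
    e = sum(len(successors) for successors in graph.values())   # number of edges
    return e - n + 2

# Hypothetical flow graph with two decision nodes (A and D).
flow_graph = {
    "A": ["B", "C"],   # decision: two outgoing branches
    "B": ["D"],
    "C": ["D"],
    "D": ["E", "A"],   # decision: exit or loop back to A
    "E": [],           # exit node
}

print(cyclomatic_complexity(flow_graph))   # prints 3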
The information flow within a program can provide a measure for program
complexity.
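The unit does not prescribe a particular information flow metric; one widely cited measure of this kind is the Henry-Kafura metric, which combines a module's length with its fan-in and fan-out. The sketch below is only an illustration of that idea, and the module data used are hypothetical.

def information_flow_complexity(length, fan_in, fan_out):
    # Henry-Kafura style measure: length * (fan_in * fan_out)^2
    return length * (fan_in * fan_out) ** 2

# Hypothetical module: 120 lines, data flowing in from 3 modules and out to 2.
print(information_flow_complexity(length=120, fan_in=3, fan_out=2))   # 4320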
Software quality is assessed against the following:
• Conformance to explicitly stated user requirements: the extent to which the software
satisfies these requirements forms the foundation for measuring software quality.
• Use of specific standards for building the software product. Standards could be the
organisation's own standards or standards referred to in a contractual agreement.
• Implicit requirements which are not stated by the user but are essential for quality
software.
2. Reviews the software development processes and products for software error
prevention and/or controlled change to reduced functionality states; and
3. Defines the process for measuring and analysing defects as well as reliability and
maintainability factors.
Software engineers, project managers, customers and Software Quality Assurance
groups are involved in the software quality assurance activity. The roles of the various
groups in software quality assurance are as follows:
• SQA group: They assist the software engineer in developing a high quality product.
They plan quality assurance activities and report the results of reviews.
Check Your Progress 1
1) What is auditability?
……………………………………………………………………………………
What is a Software Review? A software review can be defined as a filter for the software
engineering process. The purpose of any review is to discover errors in the analysis,
design, coding, testing and implementation phases of the software development life cycle.
The other purpose of a review is to see whether procedures are applied uniformly and in a
manageable manner.
Reviews are basically of two types: informal technical reviews and formal technical
reviews.
Validation: Validation typically involves actual testing and takes place after
verification is completed.
Objectives of Formal Technical Review
• To uncover errors in logic or implementation
• To ensure that the software has been represented according to predefined standards
• To ensure that software under review meets the requirements
• To make the project more manageable.
Each Formal Technical Review (FTR) is conducted as a meeting and requires well-
coordinated planning and commitments.
For the success of a formal technical review, the following are expected:
• The schedule of the meeting and its agenda reach the members well in advance
• The review material is distributed to the members in advance
• The reviewers review the material before the meeting
The meeting should consist of two to five people and should be restricted to not more
than 2 hours (preferably). The aim of the review is to review the product/work, not the
performance of the people. When the product is ready, the producer (developer)
informs the project leader about the completion of the product and requests a review.
The project leader contacts the review leader for the review. The review leader asks
the reviewers to perform an independent review of the product/work before the
scheduled FTR.
Result of FTR
• Meeting decision
o Accept the product/work without any modification
o Accept the product/work with certain changes
o Reject the product/work due to errors
Checklist - Typical activities for review at each phase are described below:
The SQA team is involved in both writing and reviewing the project management plan
in order to assure that the processes, procedures, and standards identified in the plan
are appropriate, clear, specific, and auditable.
During the software requirements phase, review assures that software requirements
are complete, testable, and properly expressed as functional, performance, and
interface requirements. The output of the software requirements analysis phase is the
Software Requirements Specification (SRS). The SRS forms the major input for the review.
Compatibility
• Does the interface enable compatibility with external interfaces?
• Are the specified models, algorithms, and numerical techniques compatible?
Completeness
Consistency
• Are the requirements consistent with each other? Is the SRS free of
contradictions?
• Does the SRS use standard terminology and definitions throughout?
• Has the impact of the operational environment on the software been specified in the
SRS?
Correctness
Traceability
Reviewers should be able to determine whether or not all design features are
consistent with the requirements; the program, as designed, should meet the
requirements. The output of the software design phase is the System Design Document
(SDD), which forms an input for a Formal Technical Review.
Completeness
• Have all the requirements been addressed in the SDD?
• Have the software requirements been properly reflected in software architecture?
• Are algorithms adequate, accurate, and complete?
• Does the design implement the required program behaviour with respect to each
program interface?
• Does the design take into consideration all expected conditions?
• Does the design specify appropriate behaviour in case of an unexpected or
improper input and other abnormal conditions?
Consistency
• Are standard terminology and definitions used throughout the SDD? Is the style of
presentation and the level of detail consistent throughout the document?
• Does the design configuration ensure integrity of changes?
• Is there compatibility of the interfaces?
• Is the test documentation compatible with the test requirements of the SRS?
• Are the models, algorithms, and numerical techniques that are specified
mathematically compatible?
• Are input and output formats consistent to the extent possible?
• Are the designs for similar or related functions consistent?
• Are the accuracies and units of inputs, database elements, and outputs that are
used together in computations or logical decisions compatible?
Correctness
Modifiability
• Are the modules organised such that changes in the requirements only require
changes to a small number of modules?
• Is functionality partitioned into programs so as to maximise the internal cohesion of
programs and to minimise program coupling?
• Is the design structured so that it comprises relatively small, hierarchically related
programs or sets of programs, each performing a particular, unique function?
• Does the design use a logical hierarchical control structure?
Traceability
Verifiability
• Does the SDD describe each function using well-defined notation so that the SDD
can be verified against the SRS?
• Can the code be verified against the SDD?
• Are conditions and constraints identified so that test cases can be designed?
Review of Source Code
The following checklist contains the kinds of questions a reviewer may take up during
a source code review, based on various standards.
Completeness
Consistency
Correctness
Modifiability
Traceability
Understandability
Testing phase
• Whether test plans are consistent and sufficient to test the functionality of the
systems.
• Whether non-conformance reporting and corrective action has been initiated.
• Whether boundary values are being tested.
• Whether all tests are run according to test plans and procedures and any non-
conformances are reported and resolved.
• Are the test reports complete and correct?
• Has it been certified that testing is complete and the software, including
documentation, is ready for delivery?
Installation phase
Completeness
Software Reliability: Unlike the reliability of a hardware device, software reliability
is difficult to measure. In the software engineering environment, software
reliability is defined as the probability that the software will provide failure-free
operation in a fixed environment for a fixed interval of time.
Possibly the greatest problem in the field of software reliability estimation has to do
with the accuracy of operational profiles. Without accurate profiles, estimates will
almost certainly be wrong. An operational profile is the probability density function
(over the entire input space) that best represents how the inputs would be selected
during the life-time of the software. There is nothing fancy about operational profiles;
they are really just “guesses” about what inputs will occur in the future.
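In practice, an operational profile is often approximated by assigning probabilities to classes of inputs and then drawing test inputs with those frequencies. The sketch below assumes a hypothetical profile for an imaginary record-keeping system; the input classes and probabilities are illustrative only.

import random

# Hypothetical operational profile: probability of each input class.
operational_profile = {
    "query_record":  0.60,
    "update_record": 0.25,
    "delete_record": 0.10,
    "bulk_import":   0.05,
}

def sample_test_inputs(profile, k):
    # Draw k input classes with frequencies matching the profile.
    classes = list(profile)
    weights = list(profile.values())
    return random.choices(classes, weights=weights, k=k)

print(sample_test_inputs(operational_profile, 10))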
Definitions of Software Reliability
• IEEE Definition: The probability that software will not cause the failure of a
system for a specified time under specified conditions. The probability is a
function of the inputs to and use of the system, as well as a function of the
existence of faults in the software. The inputs to the system determine whether
existing faults, if any, are encountered.
The following assumptions are commonly made while modelling software reliability:
• The failures are independent of each other, i.e., one failure has no impact on other
failure(s).
• The inputs are random samples.
• Failure intervals are independent and all software failures are observed.
• Time between failures is exponentially distributed.
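Under the assumption that the time between failures is exponentially distributed with a constant failure rate, the reliability over an interval of length t is e^(-failure rate x t). The sketch below illustrates this; the MTBF value used is hypothetical and not taken from the unit.

import math

def reliability(t, failure_rate):
    # R(t) = e^(-failure_rate * t): probability of failure-free operation
    # over an interval of length t, assuming exponentially distributed
    # times between failures.
    return math.exp(-failure_rate * t)

# Hypothetical: mean time between failures of 500 hours.
failure_rate = 1 / 500
print(reliability(100, failure_rate))   # about 0.82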
The following formula gives the cumulative number of defects D(t) observed by time t:

D(t) = Td (1 – e^(–bct))

where Td is the total number of defects that would eventually be observed, and 'b' and
'c' are constants that depend on historical data of similar software for which the model
is being applied. The corresponding mean time to failure is:

MTTF(t) = e^(bct) / (c Td)
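As a numerical illustration of the two formulas above, the sketch below evaluates D(t) and MTTF(t) for hypothetical values of Td, b and c; in practice, b and c must be fitted from historical data of similar software.

import math

def cumulative_defects(t, Td, b, c):
    # D(t) = Td * (1 - e^(-b*c*t)): cumulative defects observed by time t.
    return Td * (1 - math.exp(-b * c * t))

def mttf(t, Td, b, c):
    # MTTF(t) = e^(b*c*t) / (c * Td): mean time to failure at time t.
    return math.exp(b * c * t) / (c * Td)

# Hypothetical constants.
Td, b, c = 120, 0.05, 0.4
print(cumulative_defects(10, Td, b, c))   # about 21.7 defects observed by t = 10
print(mttf(10, Td, b, c))                 # about 0.025 time units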
SEI-CMM Level 4: Metrics are used to track productivity, processes, and products.
Project performance is predictable, and quality is consistently high.
ISO/IEC 14102 Guideline for the Evaluation and Selection of CASE Tools
IEC 60880 Software for Computers in the Safety Systems of Nuclear Power Stations
Check Your Progress 3
1) Success of a project depends on individual effort. At what maturity level can the
organisation's software process be rated?
………………………………………………………………………………………………..
………………………………………………………………………………………………...
2) Why is ISO 9001:2000 called a generic standard?
………………………………………………………………………………………………..
………………………………………………………………………………………………...
3) What is the difference between the SEI CMM standards and the ISO 9000:2000 standards?
………………………………………………………………………………………………..
………………………………………………………………………………………………...
4.7 SUMMARY
4.8 SOLUTIONS/ANSWERS
Check Your Progress 1

2) Requirements.

3) Software quality assurance activity is an umbrella activity comprising activities like
the application of standards, technical methods, reviews, software testing, change control,
measurement, and control & reporting during the software development life cycle.
A high quality product can come from a high quality design specification. Unlike quality
testing, quality assurance cannot be achieved at the end of the product completion phase.
Check Your Progress 2

1) No. The purpose of any review is to discover errors in the analysis, design, coding,
testing and maintenance phases of the software development life cycle.
2) Solution to a problem.
3) Output of the software design phase is a system design document (SDD) and it is
an input for the Formal Technical Review.
Check Your Progress 3
4.9 FURTHER READINGS

Reference websites
http://www.rspa.com
http://www.ieee.org
https://en.wikipedia.org/wiki/Security_engineering