Cpe 591 Lecture Note Engr Omosigho 2021-2022 Session Print
There are several methods used to identify and resolve faults in software engineering, including:
1. Code reviews: A code review is a process in which other developers or team members review
the code written by a developer to identify potential errors or areas for improvement. This can
be done manually or with automated tools.
2. Testing: Testing is the process of evaluating a system or its component(s) with the intent to
find whether it satisfies the specified requirements or not. There are several types of testing,
such as unit testing, integration testing, and acceptance testing, which can help identify faults
in the software.
3. Debugging: Debugging is the process of identifying and resolving faults in the software by
analyzing the program’s source code, data, and execution. Debugging tools, such as
debuggers, can help developers identify the source of a fault and trace it through the code.
4. Monitoring: Monitoring is the ongoing process of tracking and analyzing the performance and
behavior of a system. Monitoring tools, such as log analyzers, can help identify and diagnose
faults in production systems.
5. Root cause analysis: Root cause analysis is a method used to identify the underlying cause of
a fault, rather than just addressing its symptoms. This can help prevent the same fault from
occurring in the future.
6. Fault prevention: Preventing faults in the first place is important to ensure that the software
functions correctly and meets the needs of its users. This can be achieved through good software
design, following best practices, and adhering to industry standards. Additionally, using version
control systems, keeping documentation up to date, and testing regularly are important for
preventing faults in software engineering.
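Method 2 above (testing) can be illustrated with a minimal unit-test sketch. The function and its assertions are hypothetical, invented for this example; each assertion checks the implementation against a piece of its specification, so a faulty implementation makes a test fail and exposes the fault early.

```python
def apply_discount(price, percent):
    """Return the price after deducting a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price - price * percent / 100

def test_apply_discount():
    # Each assertion checks one requirement from the (hypothetical) spec.
    assert apply_discount(200, 10) == 180   # 10% off 200
    assert apply_discount(100, 0) == 100    # no discount leaves price unchanged
    assert apply_discount(50, 100) == 0     # full discount gives zero

test_apply_discount()
print("all tests passed")
```

If, for example, the developer had mistakenly written `price * percent` instead of `price * percent / 100`, the first assertion would fail immediately, pointing directly at the fault.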
Fault : A fault is an incorrect step, process, or data definition in a computer program that is responsible
for the unintended behavior of the program. Faults (bugs) in hardware or software may cause errors. An
error can be defined as a part of the system state that can lead to the failure of the system; an error in a
program is an indication that a failure has occurred or may occur. If the system has multiple components,
errors in the system can lead to component failure. Because the many components in the system interact
with each other, the failure of one component may introduce one or more faults into the system. The
following cycle shows this behavior: a fault is the cause of an error, and an error can cause a failure.
Figure: Fault Behavior
Types of fault : Different types of faults can occur in software products. In order to remove a fault, we
first have to know what type of fault our program is facing. The types of faults are:
1. Algorithm Fault : This type of fault occurs when the component's algorithm or logic does not
produce the proper result for a given input because of wrong processing steps. It can often be
found simply by reading through the program, i.e. desk checking.
2. Computational Fault : This type of fault occurs when the implementation of a formula is wrong
or is not capable of computing the desired result, e.g. combining integer and floating-point
variables may produce an unexpected result.
3. Syntax Fault : This type of fault occurs due to the use of wrong syntax in the program. We have
to use the proper syntax of the programming language we are using.
4. Documentation Fault : The documentation tells what the program actually does. A
documentation fault occurs when the program does not match its documentation.
5. Overload Fault : Programs use data structures such as arrays, queues, and stacks, each with a
given capacity. When such a structure is filled to its capacity and we try to use it beyond that
capacity, an overload fault occurs in our program.
6. Timing Fault : When the system does not respond within the expected time after a failure
occurs in the program, the fault is referred to as a timing fault.
7. Hardware Fault : This type of fault occurs when the hardware specified for the given software
does not work properly, basically because of a problem with the hardware that is not covered
in the specification.
8. Software Fault : This occurs when the specified software is not working properly or does not
support the platform, i.e. the operating system, being used.
9. Omission Fault : This occurs when a key aspect is missing from the program, e.g. when a
variable is used without being initialized.
10. Commission Fault : This occurs when a statement or expression is wrong, e.g. an integer is
initialized with a floating-point value.
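The overload fault (type 5) can be sketched with a small bounded stack. The class and exception names here are illustrative, not from any real library; the point is that a capacity check turns a silent overload into a detectable, reportable fault.

```python
class StackOverflowFault(Exception):
    """Raised when a bounded structure is used beyond its capacity."""
    pass

class BoundedStack:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []

    def push(self, item):
        # Without this guard, pushing past capacity would be an overload fault.
        if len(self.items) >= self.capacity:
            raise StackOverflowFault("stack is full")
        self.items.append(item)

    def pop(self):
        if not self.items:
            raise IndexError("stack is empty")
        return self.items.pop()

stack = BoundedStack(2)
stack.push(1)
stack.push(2)
try:
    stack.push(3)              # exceeds the declared capacity
except StackOverflowFault:
    print("overload fault detected")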
ADVANTAGES AND DISADVANTAGES:
1. Improved software quality: By identifying and resolving faults early in the development
process, software developers can improve the overall quality of the software and ensure that it
meets the needs of its users.
2. Reduced costs: Finding and fixing faults early in the development process can save time and
resources, and prevent costly rework or delays later in the project.
3. Enhanced customer satisfaction: Providing software that is free of faults can lead to increased
customer satisfaction and loyalty.
4. Reduced risk: By identifying and resolving faults early, developers can reduce the risk of
software failures and security vulnerabilities, which can have serious consequences.
However, there are also some disadvantages to identifying and resolving faults in software
engineering:
1. Increased development time: Finding and resolving faults can take additional time, which can
lead to delays in the project schedule and increased costs.
2. Additional resources needed: Identifying and resolving faults can require additional resources,
such as extra personnel or specialized tools, which can also increase costs.
3. Difficulty in identifying all faults: Identifying all faults in a software system can be difficult,
especially in large and complex systems. This can lead to missed faults and software failures.
4. Dependence on testing: Fault identification depends largely on testing, and testing may not be
able to reveal all faults in the software.
Overall, identifying and resolving faults in software engineering is an important aspect of software
quality assurance, and can lead to improved software quality, reduced costs, and enhanced customer
satisfaction. However, it is also important to be aware of the potential disadvantages and to manage the
process effectively to minimize these risks.
Reliability terminology
Human error or mistake: Human behavior that results in the introduction of faults into a system.
System fault: A characteristic of a software system that can lead to a system error.
System error: An erroneous system state that can lead to system behavior that is unexpected by
system users.
System failure: An event that occurs at some point in time when the system does not deliver a
service as expected by its users.
Failures are usually a result of system errors that are derived from faults in the system. However, faults
do not necessarily result in system errors if the erroneous system state is transient and can be 'corrected'
before an error arises. Errors do not necessarily lead to system failures if the error is corrected by built-in
error detection and recovery mechanisms.
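The fault-error-failure chain above can be sketched in a few lines. The functions are hypothetical: the fault (no guard for an empty list) only produces an erroneous state for some inputs, and a built-in check can correct the error before it becomes a visible failure.

```python
def average(values):
    # Fault: for an empty list this divides by zero, an erroneous state
    # that surfaces as a failure (ZeroDivisionError) only when triggered.
    return sum(values) / len(values)

def safe_average(values):
    # Built-in error detection: the erroneous state is trapped and
    # corrected before it can lead to a system failure.
    if not values:
        return 0.0
    return average(values)

print(safe_average([2, 4]))   # normal case
print(safe_average([]))       # the fault never becomes a failure
```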
Fault management strategies to achieve reliability:
Fault avoidance
Development techniques are used that either minimize the possibility of mistakes or trap mistakes
before they result in the introduction of system faults.
Fault detection and removal
Verification and validation techniques that increase the probability of detecting and correcting
errors before the system goes into service are used.
Fault tolerance
Run-time techniques are used to ensure that system faults do not result in system errors and/or
that system errors do not lead to system failures.
FAULT TOLERANCE
In critical situations, software systems must be fault tolerant. Fault tolerance is required where there
are high availability requirements or where system failure costs are very high. Fault tolerance means
that the system can continue in operation in spite of software failure. Even if the system has been proved
to conform to its specification, it must also be fault tolerant as there may be specification errors or the
validation may be incorrect.
Fault-tolerant systems architectures are used in situations where fault tolerance is essential. These
architectures are generally all based on redundancy and diversity. Examples of situations where
dependable architectures are used:
• Flight control systems, where system failure could threaten the safety of passengers;
• Reactor systems where failure of a control system could lead to a chemical or nuclear
emergency;
• Telecommunication systems, where there is a need for 24/7 availability.
Protection system is a specialized system that is associated with some other control system, which can
take emergency action if a failure occurs, e.g. a system to stop a train if it passes a red light, or a system to
shut down a reactor if temperature/pressure are too high. Protection systems independently monitor the
controlled system and the environment. If a problem is detected, it issues commands to take emergency
action to shut down the system and avoid a catastrophe. Protection systems are redundant because they
include monitoring and control capabilities that replicate those in the control software. Protection systems
should be diverse and use different technology from the control software. They are simpler than the
control system so more effort can be expended in validation and dependability assurance. Aim is to ensure
that there is a low probability of failure on demand for the protection system.
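A protection system of this kind can be sketched as a simple independent monitor. The class name, temperature limit, and readings below are all illustrative; the essential point is that the check is deliberately simple and separate from the main control software.

```python
class ProtectionSystem:
    """Independently monitors a controlled process and issues a shutdown
    command when a safe operating limit is exceeded (illustrative sketch)."""

    def __init__(self, max_temperature):
        self.max_temperature = max_temperature
        self.shutdown_issued = False

    def monitor(self, temperature):
        # Deliberately simple logic, kept separate from the control software,
        # so it can be validated to a high level of dependability.
        if temperature > self.max_temperature:
            self.shutdown_issued = True
        return self.shutdown_issued

guard = ProtectionSystem(max_temperature=350)
for reading in [320, 340, 360]:
    if guard.monitor(reading):
        print(f"emergency shutdown at {reading} degrees")
        break
```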
Self-monitoring architecture is a multi-channel architecture in which the system monitors its own
operations and takes action if inconsistencies are detected. The same computation is carried out on each
channel and the results are compared. If the results are identical and are produced at the same time, then it
is assumed that the system is operating correctly. If the results are different, then a failure is assumed and
a failure exception is raised. Hardware in each channel has to be diverse so that common mode hardware
failure will not lead to each channel producing the same results. Software in each channel must also be
diverse, otherwise the same software error would affect each channel. If high-availability is required, you
may use several self-checking systems in parallel. This is the approach used in the Airbus family of
aircraft for their flight control systems.
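The channel-comparison idea can be sketched as follows. Both channel implementations are illustrative stand-ins for diverse hardware/software channels; a mismatch between their results raises a failure exception rather than letting a wrong answer through.

```python
class ChannelMismatch(Exception):
    """Raised when the two channels produce different results."""
    pass

def channel_a(x):
    return x * x        # one implementation of the computation

def channel_b(x):
    return x ** 2       # a (diversely implemented) second channel

def self_checking_square(x):
    a, b = channel_a(x), channel_b(x)
    if a != b:
        # Results differ: assume a failure and raise a failure exception.
        raise ChannelMismatch(f"channels disagree: {a} != {b}")
    return a

print(self_checking_square(6))
```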
N-version programming involves multiple versions of a software system that carry out computations at the
same time. There should be an odd number of computers involved, typically three. The results are compared
using a voting system, and the majority result is taken to be the correct result. The approach is derived from
the notion of triple-modular redundancy, as used in hardware systems.
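A minimal sketch of the voting step, with three hypothetical independently developed versions (the third is deliberately faulty for the demonstration): the majority result is accepted, so the single faulty version is outvoted.

```python
from collections import Counter

def majority_vote(results):
    """Return the result produced by a majority of the versions."""
    value, count = Counter(results).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority among the versions")
    return value

# Three illustrative versions of the same computation.
def version_1(x): return x + 1
def version_2(x): return x + 1
def version_3(x): return x + 2   # deliberate fault for this demonstration

print(majority_vote([version_1(4), version_2(4), version_3(4)]))
```

With input 4, the versions return 5, 5, and 6; the voter accepts 5, masking the faulty version's output.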
Hardware fault tolerance depends on triple-modular redundancy (TMR). There are three replicated
identical components that receive the same input and whose outputs are compared. If one output is
different, it is ignored and component failure is assumed. Based on most faults resulting from component
failures rather than design faults and a low probability of simultaneous component failure.
PROGRAMMING FOR RELIABILITY
Good programming practices can be adopted that help reduce the incidence of program faults. These
programming practices support fault avoidance, detection, and tolerance.
Limit the visibility of information in a program
Program components should only be allowed access to data that they need for their
implementation. This means that accidental corruption of parts of the program state by these
components is impossible. You can control visibility by using abstract data types where the data
representation is private and you only allow access to the data through predefined operations such
as get () and put ().
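A sketch of this idea as an abstract data type (the class name and keys are illustrative): the representation is private by convention, and clients reach the data only through the predefined get() and put() operations, so they cannot accidentally corrupt it.

```python
class Store:
    """Abstract data type with a private representation: clients may only
    use put() and get(), never touch the underlying dictionary directly."""

    def __init__(self):
        self._data = {}          # private representation (by convention)

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)

s = Store()
s.put("threshold", 10)
print(s.get("threshold"))
```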
Check all inputs for validity
All programs take inputs from their environment and make assumptions about these inputs.
However, program specifications rarely define what to do if an input is not consistent with these
assumptions. Consequently, many programs behave unpredictably when presented with unusual
inputs, and sometimes these are threats to the security of the system. You should therefore
always check inputs against the assumptions made about them before processing.
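As a sketch of this practice (the function and the 0-150 range are illustrative assumptions, not from the text), every assumption about the input is checked explicitly before the value is used:

```python
def parse_age(text):
    """Validate that the input is a decimal integer in an assumed 0-150 range
    before processing it further."""
    if not text.strip().isdigit():
        raise ValueError(f"not a number: {text!r}")
    age = int(text)
    if not 0 <= age <= 150:
        raise ValueError(f"out of range: {age}")
    return age

print(parse_age("42"))
```

Rejecting inconsistent inputs at the boundary keeps the rest of the program free to rely on its assumptions.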
Provide a handler for all exceptions
A program exception is an error or some unexpected event such as a power failure. Exception
handling constructs allow for such events to be handled without the need for continual status
checking to detect exceptions. Using normal control constructs to detect exceptions needs many
additional statements to be added to the program. This adds a significant overhead and is
potentially error-prone.
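A sketch of exception handling replacing scattered status checks (the file name and fallback behavior are illustrative): the error cases are handled in one place instead of after every operation.

```python
def read_config(path):
    """Read a configuration file, handling the failure cases in one place
    rather than with status checks after every call."""
    try:
        with open(path) as f:
            return f.read()
    except FileNotFoundError:
        return ""            # fall back to defaults instead of failing
    except OSError as exc:
        # Any other I/O problem is converted into a single, clear error.
        raise RuntimeError(f"cannot read configuration: {exc}")
```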
Minimize the use of error-prone constructs
Program faults are usually a consequence of human error because programmers lose track of the
relationships between the different parts of the system. This is exacerbated by error-prone
constructs in programming languages that are inherently complex or that do not check for mistakes
when they could do so. Therefore, when programming, you should try to avoid, or at least minimize,
the use of these error-prone constructs.
Error-prone constructs include, for example, goto statements, floating-point numbers, pointers,
dynamic memory allocation, parallelism, recursion, and interrupts.
Software failures may be due to bugs, ambiguities, oversights or misinterpretation of the specification that
the software is supposed to satisfy, carelessness or incompetence in writing code, inadequate testing,
incorrect or unexpected usage of the software or other unforeseen problems.
Hardware faults are mostly physical faults; software faults are design faults, which are tough to
visualize, classify, detect, and correct.
Hardware components generally fail due to wear and tear; software components fail due to bugs.
In hardware, design faults may also exist, but physical faults generally dominate. In software, we
can hardly find a strict counterpart for "manufacturing" as in the hardware manufacturing process,
if the simple action of uploading software modules into place does not count; therefore, the quality
of the software does not change once it is uploaded into storage and starts running.
Hardware exhibits the failure features shown in the following (bathtub) curve, whereas software
reliability does not show the same features; a possible software failure-rate curve is shown in the
following figure.
There are two significant differences between the hardware and software curves.
One difference is that in the last phase, software does not have an increasing failure rate as hardware
does. In this phase, the software is approaching obsolescence; there is no motivation for any upgrades or
changes to the software, so the failure rate does not change.
The second difference is that in the useful-life phase, software experiences a sharp increase in failure
rate each time an upgrade is made. The failure rate then levels off gradually, partly because the defects
introduced by the upgrade are found and fixed.
The upgrades in the figure above signify feature upgrades, not reliability upgrades. For feature upgrades,
the complexity of the software is likely to increase, since the functionality of the software is enhanced.
Even error fixes may cause more software failures if the bug fix introduces other defects into the
software. For reliability upgrades, a drop in the software failure rate is likely if the objective of the
upgrade is to enhance software reliability, such as a redesign or reimplementation of some modules using
better engineering approaches, such as the clean-room method.
What is Quality?
Quality of Design: Quality of design refers to the characteristics that designers specify for an item. The
grade of materials, tolerances, and performance specifications all contribute to the quality of design.
Quality of conformance: Quality of conformance is the degree to which the design specifications are
followed during manufacturing. Greater the degree of conformance, the higher is the level of quality of
conformance.
Software Quality: Software quality is defined as conformance to explicitly stated functional and
performance requirements, explicitly documented development standards, and inherent characteristics that
are expected of all professionally developed software.
A software product's quality can also be defined in terms of its fitness for purpose. That is, a quality product
does precisely what the users want it to do. For software products, fitness for purpose is generally explained
in terms of satisfaction of the requirements laid down in the SRS document. Although "fitness for purpose" is
a satisfactory interpretation of quality for many devices such as a car, a table fan, or a grinding machine, for
software products "fitness for purpose" is not a wholly satisfactory definition of quality.
Example: Consider a functionally correct software product, i.e. one that performs all tasks as specified in
the SRS document, but has an almost unusable user interface. Even though it is functionally right, we
cannot consider it a quality product.
The modern view of quality associates several quality attributes with a software product, such as the
following:
Portability: A software device is said to be portable, if it can be freely made to work in various operating
system environments, in multiple machines, with other software products, etc.
Usability: A software product has better usability if various categories of users can easily invoke the
functions of the product.
Reusability: A software product has excellent reusability if different modules of the product can quickly
be reused to develop new products.
Correctness: A software product is correct if various requirements as specified in the SRS document have
been correctly implemented.
Maintainability: A software product is maintainable if bugs can be easily corrected as and when they show
up, new tasks can be easily added to the product, and the functionalities of the product can be easily
modified, etc.
Quality Control: Quality control involves a series of inspections, reviews, and tests used throughout the
software process to ensure each work product meets the requirements placed upon it. Quality control includes
a feedback loop to the process that created the work product.
Quality Assurance: Quality assurance is the preventive set of activities that provide greater confidence
that the project will be completed successfully. Quality assurance focuses on how the engineering and
management activities will be done.
As everyone is interested in the quality of the final product, it should be assured that we are building the
right product. This can be assured only through inspection and review of intermediate products; if there
are any bugs, they are debugged, and the quality is thereby enhanced.
Importance of Quality
We would expect the quality to be a concern of all producers of goods and services. However, the distinctive
characteristics of software and in particular its intangibility and complexity, make special demands.
Increasing criticality of software: The final customer or user is naturally concerned about the general
quality of software, especially its reliability. This is increasing in the case as organizations become more
dependent on their computer systems and software is used more and more in safety-critical areas. For
example, to control aircraft.
The intangibility of software: This makes it challenging to know that a particular task in a project has been
completed satisfactorily. The results of these tasks can be made tangible by demanding that the developers
produce 'deliverables' that can be examined for quality.
SQA ACTIVITIES
Software quality assurance is composed of a variety of functions associated with two different
constituencies: the software engineers who do technical work and an SQA group that has responsibility
for quality assurance planning, record keeping, analysis, and reporting.
1. Prepares an SQA plan for a project: The plan is developed during project planning and is
reviewed by all stakeholders. The plan governs quality assurance activities performed by the
software engineering team and the SQA group. The plan identifies evaluations to be performed,
audits and reviews to be performed, standards that apply to the project, techniques for error reporting
and tracking, documents to be produced by the SQA team, and the amount of feedback provided to the
software project team.
2. Participates in the development of the project's software process description: The software team
selects a process for the work to be performed. The SQA group reviews the process description for
compliance with organizational policy, internal software standards, externally imposed standards
(e.g. ISO-9001), and other parts of the software project plan.
3. Reviews software engineering activities to verify compliance with the defined software
process: The SQA group identifies, reports, and tracks deviations from the process and verifies that
corrections have been made.
4. Audits designated software work products to verify compliance with those defined as a part of
the software process: The SQA group reviews selected work products; identifies, documents, and
tracks deviations; verifies that corrections have been made; and periodically reports the results of its
work to the project manager.
5. Ensures that deviations in software work and work products are documented and handled
according to a documented procedure: Deviations may be encountered in the project method,
process description, applicable standards, or technical work products.
6. Records any noncompliance and reports to senior management: Non- compliance items are
tracked until they are resolved.
This activity is a blend of two sub-activities which are explained below in detail:
(i) Product Evaluation:
This activity confirms that the software product is meeting the requirements that were discovered in the
project management plan. It ensures that the set standards for the project are followed correctly.
By validating the change requests, evaluating the nature of change, and controlling the change effect, it is
ensured that the software quality is maintained during the development and maintenance phases.
After this, the QA team should determine the impact of the change which is brought by this defect fix.
They need to test not only if the change has fixed the defect, but also if the change is compatible with the
whole project.
For this purpose, we use software quality metrics that allow managers and developers to observe the
activities and proposed changes from the beginning till the end of SDLC and initiate corrective action
wherever required.
It also checks whether whatever was reported by the team in the status reports was actually performed or
not. This activity also exposes any non-compliance issues.
We often hear that testers and developers feel superior to each other. This should be avoided, as it
can affect the overall project quality.
QUALITY ASSURANCE V/S QUALITY CONTROL
Quality Assurance (QA) is the set of activities, including facilitation, training, measurement, and
analysis, needed to provide adequate confidence that processes are established and continuously
improved to produce products or services that conform to specifications and are fit for use.
Quality Control (QC) is described as the processes and methods used to compare product quality to
requirements and applicable standards, and the actions taken when a nonconformance is detected.
Software quality assurance is very important for your software product or service to succeed in the market
and survive up to the customer’s expectations.
There are various activities, standards, and techniques that you need to follow to assure that the
deliverable software is of high quality and aligns closely with the business needs.
Monitoring and Controlling are processes needed to track, review, and regulate the progress and
performance of the project. It also identifies any areas where changes to the project management method
are required and initiates the required changes.
The Monitoring & Controlling process group includes eleven processes, which are:
1. Monitor and control project work: The generic step under which all other monitoring and
controlling activities fall under.
2. Perform integrated change control: The functions involved in making changes to the project plan.
When changes to the schedule, cost, or any other area of the project management plan are necessary,
the program is changed and re-approved by the project sponsor.
3. Validate scope: The activities involved with gaining approval of the project's deliverables.
4. Control scope: Ensuring that the scope of the project does not change and that unauthorized
activities are not performed as part of the plan (scope creep).
5. Control schedule: The functions involved with ensuring the project work is performed according to
the schedule, and that project deadlines are met.
6. Control costs: The tasks involved with ensuring the project costs stay within the approved budget.
7. Control quality: Ensuring that the quality of the project's deliverables is to the standard defined in
the project management plan.
8. Control communications: Providing for the communication needs of each project stakeholder.
9. Control Risks: Safeguarding the project from unexpected events that negatively impact the project's
budget, schedule, stakeholder needs, or any other project success criteria.
10. Control procurements: Ensuring the project's subcontractors and vendors meet the project goals.
11. Control stakeholder engagement: The tasks involved with ensuring that all of the project's
stakeholders are left satisfied with the project work.
The plan identifies the SQA responsibilities of a team, and lists the areas that need to be reviewed and
audited. It also identifies the SQA work products.
Software review:
A software review is an effective way of filtering errors out of a software product. Reviews conducted at
each phase of development, i.e., analysis, design, coding, and testing, reveal areas of improvement in the product.
Reviews also indicate those areas that do not need any improvement. We can use software reviews to
achieve consistency and uniformity across products. Reviews also make the task of product creation more
manageable. Some of the most common software review techniques are:
i. Inspection
ii. Walkthrough
iii. Code review
iv. Formal Technical Reviews (FTR)
v. Pair programming
Formal Technical Review (FTR) is a software quality control activity performed by software
engineers.
Objectives of formal technical review (FTR): Some of these are:
• Useful to uncover errors in logic, function, and implementation for any representation of the
software.
• The purpose of FTR is to verify that the software meets the specified requirements.
• To ensure that the software is represented according to predefined standards.
• It helps to achieve uniformity in software that is developed in a uniform manner.
• To make the project more manageable.
In addition, the purpose of FTR is to enable junior engineers to observe the analysis, design, coding, and
testing approach more closely. FTR also serves to promote backup and continuity, because a number of
people become familiar with parts of the software they might not have seen otherwise. Actually, FTR is a
class of reviews that includes walkthroughs, inspections, round-robin reviews, and other small-group
technical assessments of software. Each FTR is conducted as a meeting and is considered successful only
if it is properly planned, controlled, and attended.
Example:
Suppose that during development without FTR, design costs 10 units, coding costs 15 units, and
testing costs 10 units; the total cost so far is 35 units, excluding maintenance. But there was a
quality issue because of a bad design, so to fix it we have to redesign the software, and the rework
can push the final cost to 50 units or more. That is why FTR is so helpful while developing software.
The review meeting: Each review meeting should be held under the following constraints on the
involvement of people:
1. Between three and five people should be involved in the review.
2. Advance preparation should occur, but it should require no more than two hours of
work for each person.
3. The duration of the review meeting should be less than two hours. Given these
constraints, it should be clear that an FTR focuses on a specific (and small) part of the overall
software.
At the end of the review, all attendees of the FTR must decide what to do.
A formal review generally takes place as a piecemeal approach consisting of six essential steps and obeys
a formal process. It is also one of the most important and essential techniques required in static testing.
The six steps are extremely essential because they allow the team of developers to check and ensure
software quality, efficiency, and effectiveness. These steps are given below:
1. Planning :
For a specific review, the review process generally begins with a 'request for review' by the
author to the moderator or inspection leader. Individual participants, according to their
understanding of the document and their role, identify and determine defects, questions, and
comments. The moderator also performs entry checks and considers the exit criteria.
2. Kick-Off :
Getting everybody on the same page regarding the document under review is the main goal of
this meeting, which is basically an optional step. The entry results and exit criteria are also
discussed. The meeting gives the team a better understanding of the relationship between the
document under review and other documents. During kick-off, the document under review,
source documents, and all other related documentation can also be distributed.
3. Preparation :
In the preparation phase, participants work individually on the document under review with the
help of related documents, procedures, rules, and provided checklists. Spelling mistakes are
recorded on the document under review but not mentioned during the meeting. While reviewing
the document, the reviewers identify and check for any defect, issue, or error and offer their
comments, which are later combined and recorded with the assistance of a logging form.
4. Review Meeting :
This phase generally involves three different parts, i.e. logging, discussion, and decision.
Various tasks related to the document under review are performed in this phase.
5. Rework :
The author improves the document under review based on the defects detected and the
improvements suggested in the review meeting. The document needs to be reworked if the total
number of defects found is above the expected level. Changes made to the document must be
easy to identify during follow-up; therefore, the author needs to indicate where changes were made.
6. Follow-Up :
After rework, the moderator must ensure that satisfactory actions have been taken on all logged
defects, improvement suggestions, and change requests. The moderator makes sure that the
author has taken care of all defects. In order to control, handle, and optimize the review process,
the moderator collects a number of measurements at every step of the process. Examples of
measurements include the total number of defects found, the number of defects found per page,
and the overall review effort.
CMMI LEVELS
CMMI stands for Capability Maturity Model Integration. This model originated in software
engineering. It can be employed to direct process improvement throughout a project, department,
or entire organization.
Systems engineering: This covers the development of total systems. Systems engineers concentrate on
converting customer needs into product solutions and support them throughout the product lifecycle.
Software engineering: Software engineers concentrate on the application of systematic, disciplined, and
quantifiable approaches to the development, operation, and maintenance of software.
Integrated Product and Process Development (IPPD): Integrated Product and Process Development
(IPPD) is a systematic approach that achieves a timely collaboration of relevant stakeholders throughout
the life of the product to better satisfy customer needs, expectations, and requirements. This section
mostly concentrates on the integration part of the project for different processes. For instance, your
project may use the services of some third-party component. In such situations the integration
is a big task in itself, and if approached in a systematic manner, it can be handled with ease.
Software acquisition: Many times an organization has to acquire products from other organizations.
Acquisition is itself a big step for any organization, and if it is not handled in a proper manner,
disaster is sure to follow.
Implementation: It is simply performing a task within a process area. A task is performed according to a
process, but the actions performed to complete the process are not ingrained in the organization; that is,
the process is carried out according to the individual's point of view. When an organization starts to
implement any process, it starts at this phase, i.e., implementation, and when the process proves
itself it is raised to the organization level so that it can be implemented across the whole organization.
Institutionalization: Institutionalization is the result of implementing the process again and again. The
difference between implementation and institutionalization is that with implementation, if the person who
implemented the process leaves the company, the process is not followed; but if the process is
institutionalized, then even if that person leaves the organization, the process is still followed.
There are five maturity levels in the staged representation.
Maturity Level 1 (Initial): At this level everything is ad hoc. Development is completely chaotic, with
budgets and schedules often exceeded. In this scenario quality can never be predicted.
Maturity Level 2 (Managed): At the managed level, basic project management is in place, but the basic
project management practices are followed only at the project level.
Maturity Level 3 (Defined): To reach this level the organization should have already achieved level 2. In
the previous level the good practices and process were only done at the project level. But in this level all
these good practices and processes are brought to the organization level. There are set and standard
practices defined at the organization level which every project should follow. Maturity Level 3 moves
ahead with defining a strong, meaningful, organizational approach to developing products. An important
distinction between Maturity Levels 2 and 3 is that at Level 3, processes are described in more detail and
more rigorously than at Level 2 and are at an organization level.
Maturity Level 4 (Quantitatively Managed): To reach this level, the organization should have
already achieved Levels 2 and 3. At this level, more statistics come into the picture. The organization
controls the project by statistical and other quantitative techniques. Product quality, process performance,
and service quality are understood in statistical terms and are managed throughout the life of the
processes. Maturity Level 4 concentrates on using metrics to make decisions and to truly measure whether
progress is happening and the product is becoming better. The main difference between Levels 3 and 4 is
that at Level 3, processes are qualitatively predictable, while at Level 4 they are quantitatively
predictable. Level 4 addresses causes of process variation and takes corrective action.
Maturity Level 5 (Optimizing): The organization has achieved the goals of Maturity Levels 2, 3, and 4. At
this level, processes are continually improved based on an understanding of the common causes of
variation within the processes. This is the final level: everyone on the team is a productive member,
defects are minimized, and products are delivered on time and within budget.
There are two representations in CMMI. The first is "staged," in which maturity levels organize the
process areas.
The second is "continuous," in which capability levels organize the process areas.
An organization is appraised and awarded a maturity level rating (1-5) based on the type of appraisal.
SOFTWARE QUALITY ASSURANCE STANDARDS
ISO (the International Organization for Standardization) is a group or consortium of 63 countries
established to plan and foster standardization. ISO announced its 9000 series of standards in 1987. It
serves as a reference for contracts between independent parties. The ISO 9000 standard sets out
guidelines for maintaining a quality system. The ISO standard mainly addresses operational and
organizational methods, such as responsibilities, reporting, etc. ISO 9000 defines a set of guidelines for
the production process and is not directly concerned with the product itself.
The ISO 9000 series of standards is based on the assumption that if a proper process is followed for
production, then good quality products are bound to follow automatically. The types of industries to
which the various ISO standards apply are as follows.
1. ISO 9001: This standard applies to the organizations engaged in design, development, production,
and servicing of goods. This is the standard that applies to most software development organizations.
2. ISO 9002: This standard applies to organizations which do not design products but are only
involved in production. Examples in this category include steel and car manufacturing
industries that buy product and plant designs from external sources and are engaged only in
manufacturing those products. Therefore, ISO 9002 does not apply to software development
organizations.
3. ISO 9003: This standard applies to organizations that are involved only in the installation and
testing of products. For example, gas companies.
An organization that decides to obtain ISO 9000 certification applies to an ISO registrar's office for
registration. The process consists of the following stages:
1. Application: Once an organization has decided to go for ISO certification, it applies to the registrar
for registration.
2. Pre-Assessment: During this stage, the registrar makes a rough assessment of the organization.
3. Document review and adequacy audit: During this stage, the registrar reviews the documents
submitted by the organization and suggests improvements.
4. Compliance Audit: During this stage, the registrar checks whether the organization has complied
with the suggestions it made during the review.
5. Registration: The Registrar awards the ISO certification after the successful completion of all the
phases.
6. Continued Inspection: The registrar continues to monitor the organization from time to time.
Reliability Models
A reliability growth model is a mathematical model of software reliability which predicts how software
reliability should improve over time as errors are discovered and repaired. These models help the manager
decide how much effort should be devoted to testing. The objective of the project manager is to test
and debug the system until the required level of reliability is reached.
There are several well-known software reliability growth models.
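One widely studied reliability growth model is the Goel-Okumoto model, in which the expected number of failures observed by testing time t is m(t) = a(1 − e^(−bt)), where a is the total expected number of faults and b is the per-fault detection rate. A small sketch (the parameter values below are illustrative, not fitted to real failure data):

```python
import math

def expected_failures(a, b, t):
    """Goel-Okumoto mean value function m(t) = a * (1 - e^(-b*t)):
    expected cumulative number of failures observed by time t."""
    return a * (1.0 - math.exp(-b * t))

def reliability(a, b, t, x):
    """Probability of no failure in the interval (t, t + x) after testing
    up to time t: R(x | t) = exp(-(m(t + x) - m(t)))."""
    return math.exp(-(expected_failures(a, b, t + x) - expected_failures(a, b, t)))

# Illustrative parameters (assumed, not estimated from data):
a, b = 100.0, 0.05
print(expected_failures(a, b, 20))  # faults expected after 20 time units
print(reliability(a, b, 20, 5))     # chance of surviving the next 5 units
```

As testing time t grows, m(t) flattens toward a and the predicted reliability for the next interval rises, which is exactly the "improvement over time" the model captures.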
Software Testing: Verification and Validation
Software testing is a process of examining the functionality and behavior of the software through
verification and validation.
• Verification is a process of determining if the software is designed and developed as per the specified
requirements.
• Validation is the process of checking if the software (end product) has met the client’s true needs and
expectations.
Software testing is incomplete until it undergoes verification and validation. Verification and
validation are the main elements of the software testing workflow.
The software testing industry is estimated to grow from $40 billion in 2020 to $60 billion in 2027.
Given this steady growth, a clear understanding of verification and validation, and of the main
differences between these two processes, is essential.
Verification
As mentioned, verification is the process of determining if the software in question is designed and
developed according to specified requirements. Specifications act as inputs for the software development
process. The code for any software application is written based on the specifications document.
Verification is done to check if the software being developed adheres to these specifications at every
stage of the development life cycle. Verification ensures that the code logic is in line with the
specifications.
Depending on the complexity and scope of the software application, the software testing team uses
different methods of verification, including inspection, code reviews, technical reviews, and
walkthroughs. Software testing teams may also use mathematical models and calculations to make
predictive statements about the software and verify its code logic.
Further, verification checks if the software team is building the product right. Verification is a continuous
process that begins well in advance of validation processes and runs until the software application is
validated and released.
There are three phases in the verification testing of mobile application development:
1. Requirements Verification
2. Design Verification
3. Code Verification
Requirements verification is the process of verifying and confirming that the requirements are complete,
clear, and correct. Before the mobile application goes for design, the testing team verifies business
requirements or customer requirements for their correctness and completeness.
Design verification is a process of checking if the design of the software meets the design specifications
by providing evidence. Here, the testing team checks if layouts, prototypes, navigational charts,
architectural designs, and database logical models of the mobile application meet the functional and non-
functional requirements specifications.
Code verification is a process of checking the code for its completeness, correctness, and consistency.
Here, the testing team checks if construction artifacts such as source code, user interfaces, and database
physical model of the mobile application meet the design specification.
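Parts of code verification can be automated. As a minimal illustration (the source snippet and the rule are invented for this sketch), the script below uses Python's ast module to flag functions that lack a docstring, a simple completeness and consistency check on construction artifacts:

```python
import ast

SOURCE = '''
def book_ticket(user_id, show_id):
    """Reserve a seat for the given user and show."""
    return {"user": user_id, "show": show_id}

def cancel_ticket(booking_id):
    return booking_id is not None
'''

# Toy automated code-verification rule: every function should carry
# a docstring describing its intended behavior.
tree = ast.parse(SOURCE)
undocumented = [node.name for node in ast.walk(tree)
                if isinstance(node, ast.FunctionDef)
                and ast.get_docstring(node) is None]
print(undocumented)
```

Real teams typically delegate such rules to linters and static analysis tools rather than writing them by hand, but the principle is the same: checking the artifact against a stated convention without executing it.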
Validation
Validation is often conducted after the completion of the entire software development process. It checks if
the client gets the product they are expecting. Validation focuses only on the output; it does not concern
itself about the internal processes and technical intricacies of the development process.
Validation helps to determine if the software team has built the right product. Validation is a one-time
process that starts only after verification is completed. Software teams often use a wide range of
validation methods, including White Box Testing (structural/design testing) and Black Box Testing
(functional testing).
White Box Testing is a method that validates the internal structure and logic of the application. Here,
testers use their knowledge of the source code to design inputs that exercise specific code paths,
branches, and conditions, and check that each behaves as the requirements specify.
There are three vital variables in the Black Box Testing method: input values, output values, and
expected output values. This method is used to verify that the actual output of the software meets the
anticipated or expected output.
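The black-box idea can be shown with a tiny sketch: the test cases below exercise a hypothetical fare-calculation function purely through input values and expected output values, with no reference to its internal code. The function, its signature, and the 25% peak surcharge are all invented for the example:

```python
def ticket_price(base_fare, passengers, is_peak):
    """Hypothetical function under test: total fare with a 25% peak surcharge."""
    total = base_fare * passengers
    return total * 1.25 if is_peak else total

# Black-box test: pairs of (input values, expected output values).
cases = [
    ((100.0, 2, False), 200.0),
    ((100.0, 2, True), 250.0),
    ((80.0, 1, True), 100.0),
]
for args, expected in cases:
    actual = ticket_price(*args)
    assert actual == expected, f"{args}: got {actual}, expected {expected}"
print("all black-box cases passed")
```

Note that the test never inspects how the surcharge is computed; it only compares actual output against expected output, which is precisely what distinguishes black-box from white-box testing.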
Validation emphasizes checking the functionality, usability, and performance of the mobile application.
Functionality testing checks if the mobile application is working as expected. For instance, while testing
the functionality of a ticket-booking application, the testing team tries to validate it through:
1. Installing, running, and updating the application from distribution channels like Google Play and the
App Store
2. Booking tickets in the real-time environment (field testing)
3. Interruptions testing
Usability testing checks if the application offers a convenient browsing experience. The user interface
and navigation are validated against various criteria, including satisfaction, efficiency, and
effectiveness.
Performance testing enables testers to validate the application by checking its responsiveness and speed
under a specific workload. Software testing teams often use techniques such as load testing, stress
testing, and volume testing to validate the performance of the mobile application.
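A very small sketch in the spirit of load testing, timing repeated sequential calls to a stand-in request handler (the handler, the request count, and the reported metric are assumptions for illustration; real load tests use dedicated tools and concurrent traffic):

```python
import time

def handle_request():
    """Stand-in for a real request handler; simulates some CPU work."""
    sum(i * i for i in range(10_000))

def measure(n_requests):
    """Issue n_requests sequentially and return average latency in ms."""
    start = time.perf_counter()
    for _ in range(n_requests):
        handle_request()
    elapsed = time.perf_counter() - start
    return (elapsed / n_requests) * 1000.0

avg_ms = measure(200)
print(f"average latency: {avg_ms:.3f} ms over 200 requests")
```

Comparing the measured latency against an agreed threshold, and repeating the run at higher workloads, is the basic loop behind load, stress, and volume testing.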
Verification and validation, while similar, are not the same. There are several notable differences between
these two. Here is a chart that identifies the differences between verification and validation:
Definition
- Verification: It is a process of checking if a product is developed as per the specifications.
- Validation: It is a process of ensuring that the product meets the needs and expectations of stakeholders.
What it tests or checks for
- Verification: It tests the requirements, architecture, design, and code of the software product.
- Validation: It tests the usability, functionalities, and reliability of the end product.
Coding requirement
- Verification: It does not require executing the code.
- Validation: It emphasizes executing the code to test the usability and functionality of the end product.
Types of testing methods
- Verification: A few verification methods are inspection, code review, desk-checking, and walkthroughs.
- Validation: A few widely used validation methods are black box testing, white box testing, integration testing, and acceptance testing.
Verification and validation are an integral part of software engineering. Without rigorous verification and
validation, a software team may not be able to build a product that meets the expectations of stakeholders.
Verification and validation help reduce the chances of product failure and improve the reliability of the
end product.
Different project management and software development methods use verification and validation in
different ways. For instance, both verification and validation happen simultaneously in agile development
methodology due to the need for continuous refinement of the system based on the end-user feedback.