Understanding Software Defects
When testers execute test cases, they may encounter results that contradict the expected results. Such a variation in test results is referred to as a software defect. These defects or variations go by different names in different organizations, such as issues, problems, bugs, or incidents.
Testing is the process of identifying defects, where a defect is any variance between actual and expected results. A common informal distinction runs: "A mistake in coding is called an error; an error found by a tester is called a defect; a defect accepted by the development team is called a bug; and when the product does not meet the requirements, it is a failure."
Defects can be categorized into the following:
Wrong: The requirements have been implemented incorrectly. This defect is a variance from the given specification.
Missing: A requirement of the customer that was not fulfilled. This is a variance from the specification, an indication that a specification was not implemented or that a requirement of the customer was not captured correctly.
Extra: A requirement incorporated into the product that was not given by the end customer. This is always a variance from the specification, but may be an attribute desired by the user of the product. However, it is considered a defect because it is a variance from the existing requirements.
ERROR: An error is a mistake, misconception, or misunderstanding on the part of a software developer. In the category of developer we include software engineers, programmers, analysts, and testers. For example, a developer may misunderstand a design notation, or a programmer might type a variable name incorrectly; either leads to an error. Errors typically arise from wrong logic, a faulty loop, or a syntax mistake, and they alter the functionality of the program.
BUG: A bug is the result of a coding error: an error found in the development environment before the product is shipped to the customer. It is a programming error that causes a program to work poorly, produce incorrect results, or crash; an error in software or hardware that causes a program to malfunction. "Bug" is the terminology of the tester.
FAILURE: A failure is the inability of a software system or component to perform its required functions within specified performance requirements. When a defect reaches the end customer it is called a failure; during development, failures are usually observed by testers.
FAULT: An incorrect step, process, or data definition in a computer program that causes the program to perform in an unintended or unanticipated manner. A fault is introduced into the software as the result of an error: it is an anomaly in the software that may cause it to behave incorrectly, and not according to its specification.
The software industry still cannot agree on the definitions of all the above terms. In essence, if you use a term to mean one specific thing, your audience may not understand it to mean that same thing.
2.2 What is defect leakage?
Defect leakage refers to defects that bypass the testing efforts of the development team and end up in the final product, where users can be impacted.
Defect leakage is the metric used to identify the efficiency of QA testing, i.e., how many defects were missed or slipped through during QA testing.
It is the ratio of the number of defects attributed to a stage but captured only in subsequent stages, to the sum of the defects captured in that stage and the defects attributed to that stage but captured only later. Other characteristics of defect leakage are:
It occurs at the customer or end-user side after the application is delivered.
It is used to determine the percentage of defects leaked to subsequent stages.
It can be calculated at the overall project level, at the stage level, or both.
It is measured as a percentage.
In short, defect leakage is a metric that measures the percentage of defects leaked from the current testing stage to the subsequent stage, and thereby reflects the effectiveness of the testing executed by software testers. However, the testing team's work is only validated when the percentage of defect leakage is minimal or non-existent.
Defect Leakage = (No. of defects found in UAT / No. of defects found in QA testing) x 100
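As a quick illustration, here is a minimal Python sketch of this calculation, assuming hypothetical defect counts from QA and UAT; the function name and numbers are illustrative, not part of any standard tool:

```python
def defect_leakage(defects_in_uat: int, defects_in_qa: int) -> float:
    """Percentage of defects that slipped past QA and surfaced in UAT."""
    if defects_in_qa == 0:
        raise ValueError("QA defect count must be non-zero")
    return (defects_in_uat / defects_in_qa) * 100

# Example: 6 defects escaped to UAT while QA found 80.
print(f"Defect leakage: {defect_leakage(6, 80):.1f}%")  # 7.5%
```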
Defects happen. It is a fact of life, and as software developers we are in a constant war that we will never fully win. We may even introduce defects knowingly, when requirements or timelines force a decision that carries necessary risk. But how can we eliminate the unwanted defects that make our software difficult to use and tarnish our reputation?
Good test logging, regular reporting, customer involvement, and transparency in your product can go a long way toward mitigating defects. You can have the best logging in the world, but it is worth little if you do not act on the defects it records. As a project's importance increases, more time should be dedicated to handling errors from the production site. These reports help everyone on the team understand what problems customers are facing.
Depending on how the team responds best, here are some ways to share feedback with the team:
1. To get a quick sense of the overall health of a mission-critical application, develop a clear, understandable report that can be used in communications with and across the business and company leadership.
2. Automatically log defects from production. This approach can get challenging quickly, so keep a separate place to log defects and then pull in the relevant issues.
3. If your team is large enough, or the project critical enough, create a small SWAT or rapid-response team that can react to critical issues quickly and resolve them.
4. As a general practice, every developer should be aware of what is happening with their software and be actively engaged with, and responsible for, their code, even in production.
Keeping track of defects found and repaired prior to release is an indicator of good software development health and maintains a reasonable defect removal efficiency. Equally important is keeping records of all defects found after release and bringing those back to the product, development, and quality teams so that test cases can be updated and processes adjusted when necessary. Transparency about software defects is just as important as identification and resolution, because your customers want to know that you own the problems and are working to resolve them.
In the Software Testing Life Cycle (STLC) there are numerous testing methodologies and techniques that are proficient at detecting the majority of defects and bugs. However, even the most prominent and effective testing methodologies are unable to detect all the bugs, defects, and errors in a system, as some are hidden at the most internal levels of the software. These bugs and errors are uncovered during the later stages of the STLC and are known as leakage. Therefore, to account for undetected defects and errors, competent software engineers track defect leakage, which helps them estimate the total number of defects present in a software system and validate their testing efforts (a detailed discussion of software testing will be covered in lecture 4).
b) Defect Removal Efficiency: Defect removal efficiency (DRE) provides a measure of the
development team’s ability to remove various defects from the software, prior to its release
or implementation. Calculated during and across test phases, DRE is measured per test
type and indicates the efficiency of the numerous defect removal methods adopted by the
test team. Also, it is an indirect measurement of the quality as well as the performance of
the software. Therefore, the formula for calculating Defect Removal Efficiency is:
DRE = Number of defects resolved by the development team / Total number of defects at the moment of measurement
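A minimal sketch of the DRE calculation as defined above, with invented counts (the helper name is hypothetical):

```python
def defect_removal_efficiency(resolved: int, total_at_measurement: int) -> float:
    """DRE = defects resolved by the development team / total defects known so far."""
    return resolved / total_at_measurement

# Example: 45 of 50 known defects resolved at the time of measurement.
print(f"DRE: {defect_removal_efficiency(45, 50):.2f}")  # 0.90
```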
c) Defect Category: This is a crucial type of metric evaluated during the process of the
software development life cycle (SDLC). Defect category metric offers an insight into the
different quality attributes of the software, such as its usability, performance, functionality,
stability, reliability, and more. In short, the defect category is an attribute of the defects in
relation to the quality attributes of the software product and is measured with the
assistance of the following formula:
Defect Category = Defects belonging to a particular category / Total number of defects.
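For instance, a defect log tagged with quality attributes can be turned into category proportions in a few lines of Python; the tags below are made-up sample data:

```python
from collections import Counter

# Hypothetical defect log: each defect tagged with a quality attribute.
defects = ["usability", "performance", "functionality", "functionality",
           "stability", "functionality", "performance", "reliability"]

counts = Counter(defects)
total = len(defects)
for category, n in counts.most_common():
    print(f"{category}: {n / total:.0%}")   # e.g. functionality: 38%
```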
d) Defect Severity Index: It is the degree of impact a defect has on the development or operation of a component of the software application being tested. Defect severity index
(DSI) offers an insight into the quality of the product under test and helps gauge the quality
of the test team’s efforts. Additionally, with the assistance of this metric, the team can
evaluate the degree of negative impact on the quality as well as the performance of the
software. Following formula is used to measure the defect severity index.
Defect Severity Index (DSI) = Sum of (Number of defects x Severity level) / Total number of defects
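Applying the formula to a hypothetical severity breakdown (severity levels 1-4 and counts invented for illustration):

```python
# Hypothetical counts per severity level (1 = low ... 4 = critical).
severity_counts = {1: 10, 2: 6, 3: 3, 4: 1}

total_defects = sum(severity_counts.values())
weighted_sum = sum(level * count for level, count in severity_counts.items())
dsi = weighted_sum / total_defects

print(f"DSI: {dsi:.2f}")  # (10*1 + 6*2 + 3*3 + 1*4) / 20 = 1.75
```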
e) Review Efficiency: Review efficiency is a metric used to reduce pre-delivery defects in the software. Review defects can be found in documents as well as in code. By implementing this metric, one reduces the cost and effort spent on rectifying or resolving errors. Moreover, it helps decrease the probability of defect leakage into subsequent stages of testing and validates test case effectiveness. The formula for calculating review efficiency is:
Review Efficiency (RE) = Total number of review defects / (Total number of review
defects + Total number of testing defects) x 100
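Plugging sample numbers into the review efficiency formula (counts are illustrative):

```python
def review_efficiency(review_defects: int, testing_defects: int) -> float:
    """RE = review defects / (review defects + testing defects) * 100."""
    return review_defects / (review_defects + testing_defects) * 100

# Example: 30 defects caught in reviews, 70 caught later in testing.
print(f"RE: {review_efficiency(30, 70):.0f}%")  # 30%
```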
f) Test Case Effectiveness: The objective of this metric is to measure the effectiveness of the test cases executed by the team of testers during every testing phase. It helps in determining the quality of the test cases (a combined sketch with the next metric appears below).
Test Case Effectiveness = (Number of defects detected / Number of test cases run) x
100
g) Test Case Productivity: This metric is used to measure and calculate the number of test
cases prepared by the team of testers and the efforts invested by them in the process. It is
used to determine the test case design productivity and is used as an input for future
measurement and estimation. This is usually measured with the assistance of the following
formula:
Test Case Productivity = (Number of Test Cases / Efforts Spent for Test Case
Preparation)
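A combined sketch of the two preceding metrics, test case effectiveness (f) and test case productivity (g); all counts are invented:

```python
defects_detected = 18
test_cases_run = 120
effort_hours = 40           # effort spent preparing the test cases
test_cases_prepared = 60

effectiveness = defects_detected / test_cases_run * 100
productivity = test_cases_prepared / effort_hours

print(f"Test case effectiveness: {effectiveness:.0f}%")          # 15%
print(f"Test case productivity: {productivity:.1f} cases/hour")  # 1.5
```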
h) Test Coverage: Test coverage is another important metric that defines the extent to which
the software product’s complete functionality is covered. It indicates the completion of
testing activities and can be used as criteria for concluding testing. It can be measured by
implementing the following formula:
Test Coverage = Number of detected faults / Number of predicted defects
Another important formula that is used while calculating this metric is:
Requirement Coverage = (Number of requirements covered / Total number of
requirements) x 100
i) Test Design Coverage: Similar to test coverage, test design coverage measures the percentage of test case coverage against the number of requirements. This metric helps evaluate the functional coverage of the test cases designed and improves the test coverage. It is mainly calculated by the team during the test design stage and is measured as a percentage. The formula used for test design coverage is:
Test Design Coverage = (Total number of requirements mapped to test cases / Total
number of requirements) x 100
j) Test Execution Coverage: It helps us get an idea about the total number of test cases
executed as well as the number of test cases left pending. This metric determines the
coverage of testing and is measured during test execution, with the assistance of the
following formula:
Test Execution Coverage = (Total number of executed test cases or scripts / Total
number of test cases or scripts planned to be executed) x 100
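The three coverage metrics above (h, i, j) are all simple ratios; a sketch with hypothetical project numbers:

```python
requirements_total = 50
requirements_covered = 46   # requirements exercised by at least one test
requirements_mapped = 48    # requirements mapped to designed test cases
tests_planned, tests_executed = 200, 170

print(f"Requirement coverage:    {requirements_covered / requirements_total:.0%}")  # 92%
print(f"Test design coverage:    {requirements_mapped / requirements_total:.0%}")   # 96%
print(f"Test execution coverage: {tests_executed / tests_planned:.0%}")             # 85%
```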
k) Test Tracking & Efficiency: Test efficiency is an important component that needs to be
evaluated thoroughly. It is a quality attribute of the testing team that is measured to ensure
all testing activities are carried out in an efficient manner. The various metrics that assist
in test tracking and efficiency are as follows:
i Passed Test Cases Coverage: It measures the percentage of passed test cases.
(Number of passed tests / Total number of tests executed) x 100
ii Failed Test Case Coverage: It measures the percentage of all the failed test cases.
(Number of failed tests / Total number of tests executed) x 100
iii Test Cases Blocked: Determines the percentage of test cases blocked, during
the software testing process.
(Number of blocked tests / Total number of tests executed) x 100
iv Fixed Defects Percentage: With the assistance of this metric, the team is able
to identify the percentage of defects fixed.
(Defect fixed / Total number of defects reported) x 100
v Accepted Defects Percentage: The focus here is to define the total number
of defects accepted by the development team. These are also measured in
percentage.
(Defects accepted as valid / Total defect reported) x 100
vi Defects Rejected Percentage: Another important metric considered under
test track and efficiency is the percentage of defects rejected by the
development team.
(Number of defects rejected by the development team / Total defects reported) x 100
vii Defects Deferred Percentage: It determines the percentage of defects
deferred by the team for future releases.
(Defects deferred for future releases / Total defects reported) x 100
viii Critical Defects Percentage: Measures the percentage of critical defects in the
software.
(Critical defects / Total defects reported) x 100
ix Average Time Taken to Rectify Defects: With the assistance of this formula,
the team members are able to determine the average time taken by the
development and testing team to rectify the defects.
(Total time taken for bug fixes / Number of bugs)
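Since the nine tracking ratios above share the same shape, they can all be produced from one summary record; the field names and figures below are invented for illustration:

```python
# Hypothetical end-of-cycle summary.
run = {"executed": 400, "passed": 340, "failed": 48, "blocked": 12,
       "reported": 60, "accepted": 50, "rejected": 10,
       "fixed": 42, "deferred": 5, "critical": 4,
       "fix_hours": 126}

print(f"Passed:   {run['passed'] / run['executed']:.0%}")
print(f"Failed:   {run['failed'] / run['executed']:.0%}")
print(f"Blocked:  {run['blocked'] / run['executed']:.0%}")
print(f"Fixed:    {run['fixed'] / run['reported']:.0%}")
print(f"Accepted: {run['accepted'] / run['reported']:.0%}")
print(f"Rejected: {run['rejected'] / run['reported']:.0%}")
print(f"Deferred: {run['deferred'] / run['reported']:.0%}")
print(f"Critical: {run['critical'] / run['reported']:.0%}")
print(f"Avg fix time: {run['fix_hours'] / run['fixed']:.1f} h/bug")
```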
l) Test Effort Percentage: An important testing metric, test effort percentage offers an evaluation of what was estimated before the commencement of the testing process versus the actual effort invested by the team of testers. It helps in understanding any variances in the testing and is extremely helpful in estimating similar projects in the future. Like test efficiency, test effort is evaluated with the assistance of various metrics:
Number of Test Run Per Time Period: Here, the team measures the number
of tests executed in a particular time frame.
(Number of test run / Total time)
Test Design Efficiency: The objective of this metric is to evaluate the design efficiency of the executed tests.
(Number of tests designed / Total time)
Bug Find Rate: One of the most important metrics used during the test effort
percentage is bug find rate. It measures the number of defects/bugs found by
the team during the process of testing.
(Total number of defects / Total number of test hours)
Number of Bugs Per Test: As suggested by the name, the focus here is to measure the number of defects found during every testing stage.
(Total number of defects / Total number of tests)
Average Time to Test a Bug Fix: After evaluating the above metrics, the team finally identifies the time taken to test a bug fix.
(Total time between defect fix & retest for all defects / Total number of defects)
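A sketch of the effort metrics just listed, with invented session numbers:

```python
defects_found = 24
tests_run = 300
test_hours = 80.0
retest_gap_hours = 60.0   # total time between each defect fix and its retest

print(f"Tests per hour:  {tests_run / test_hours:.1f}")
print(f"Bug find rate:   {defects_found / test_hours:.2f} defects/hour")
print(f"Bugs per test:   {defects_found / tests_run:.2f}")
print(f"Avg time to test a fix: {retest_gap_hours / defects_found:.1f} h")
```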
m) Test Effectiveness: In contrast to test efficiency, test effectiveness measures the defect-finding ability and quality of a test set, i.e., how well the tests find defects and isolate them from the software product and its deliverables. The test effectiveness metric expresses the defects found by testing as a percentage of all the defects in the software, including those that escaped to later stages. It is mainly calculated with the assistance of the following formula:
Test Effectiveness (TEF) = (Total number of defects found by testing / (Total number of defects found by testing + Total number of defects escaped)) x 100
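A sketch of the effectiveness calculation under the reconstruction above (defects found during testing versus defects that escaped; numbers invented):

```python
def test_effectiveness(found_in_testing: int, escaped: int) -> float:
    """TEF = found / (found + escaped) * 100."""
    return found_in_testing / (found_in_testing + escaped) * 100

# Example: testing caught 44 defects; 20 escaped to later stages.
print(f"TEF: {test_effectiveness(44, 20):.1f}%")  # 68.8%
```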
n) Test Economic Metrics: While testing the software product, various components
contribute to the cost of testing, like people involved, resources, tools, and infrastructure.
Hence, it is vital for the team to evaluate the estimated amount of testing, with the actual
expenditure of money during the process of testing. This is achieved by evaluating the
following aspects:
The total allocated cost of testing.
The actual cost of testing.
Variance from the estimated budget.
Variance from the schedule.
Cost per bug fix.
The cost of not testing.
o) Test Team Metrics: Finally, the test team metrics are defined by the team. This metric is
used to understand if the work allocated to various test team members is distributed
uniformly and to verify if any team member requires more information or clarification
about the test process or the project. This metric is immensely helpful as it promotes
knowledge transfer among team members and allows them to share necessary details
regarding the project, without pointing at or blaming an individual for certain irregularities
and defects. Represented in the form of graphs and charts, this is fulfilled with the
assistance of the following aspects:
Returned defects, distributed per test team member, along with other important details, such as defects reported, accepted, and rejected.
Open defects distributed for retesting, per test team member.
Test cases allocated to each test team member.
The number of test cases executed by each test team member.
If defect communication is done verbally, things soon become very complicated. To control and effectively manage bugs, you need a defect lifecycle.
This topic will guide you on how to apply the defect management process to the Guru99 Bank website project. You can follow the steps below to manage defects.
Discovery
In the discovery phase, the project team has to discover as many defects as possible before the end customer can discover them. A defect is said to be discovered, and its status changes to accepted, when it is acknowledged and accepted by the developers.
In this scenario, the testers discovered 84 defects in the Guru99 website.
Consider the following scenario: your testing team discovered some issues on the Guru99 Bank website. They consider them defects and reported them to the development team, but there is a conflict.
In such a case, as a Test Manager, what would you do?
A) Agree with the test team that it is a defect
B) The Test Manager takes the role of judge to decide whether the problem is a defect or not
In such a case, a resolution process should be applied to solve the conflict: you take the role of judge to decide whether the website problem is a defect or not.
Categorization
Defect categorization helps the software developers prioritize their tasks: this kind of priority helps the developers fix the most crucial defects first.
How good was the test execution? This is a question every Test Manager wants answered. There are two parameters you can consider: the defect rejection ratio (DRR) and the defect leakage ratio (DLR).
In the scenario above, with 20 of the 84 reported defects rejected, the defect rejection ratio (DRR) is 20/84 = 0.238 (23.8%).
As another example, suppose the Guru99 Bank website has a total of 64 defects, but your testing team detects only 44 of them, i.e., they miss 20. The defect leakage ratio (DLR) is therefore 20/64 = 0.312 (31.2%).
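The same arithmetic as a quick Python check, using the numbers from the scenario above:

```python
drr = 20 / 84   # rejected defects / total reported defects
dlr = 20 / 64   # missed defects / total defects in the product

print(f"DRR: {drr:.1%}")  # 23.8%
print(f"DLR: {dlr:.1%}")  # 31.2%
```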
In conclusion, the quality of test execution is evaluated via these two parameters. The smaller the values of DRR and DLR, the better the quality of test execution. What ratio range is acceptable? This range can be defined and agreed based on the project target, or you may refer to the metrics of similar projects.
In this project, the recommended acceptable ratio is 5~10%. Since the calculated DRR (23.8%) and DLR (31.2%) far exceed that range, the quality of test execution is low, and you should find countermeasures to reduce these ratios.