Unit-4 Software Engineering
SOFTWARE TESTING
Software testing has several goals and objectives. The major objectives of software testing are as follows:
Finding defects introduced by the programmer while developing the software.
Gaining confidence in, and providing information about, the level of quality.
Preventing defects.
Making sure that the end result meets the business and user requirements.
Ensuring that the system satisfies the Business Requirement Specification (BRS) and the System Requirement Specification (SRS).
Gaining the confidence of customers by providing them with a quality product.
Software testing helps in validating the software application or product against business and user
requirements. Good test coverage is very important in order to test the software application
completely and to make sure that it performs well and as per the specifications. While determining
the test coverage, the test cases should be designed to maximize the chance of finding errors or
bugs. The effectiveness of the test cases can be measured by the number of defects reported per
test case: the higher the number of defects reported, the more effective the test cases.
Once the delivery is made to the end users or customers, they should be able to operate the product
without any complaints. To make this happen, the tester should know how the customers are going to
use the product, and should write the test scenarios and design the test cases accordingly. This
helps greatly in fulfilling all of the customer's requirements.
Software testing makes sure that testing is done properly and hence that the system is ready for
use. Good coverage means that testing covers areas such as: the functionality of the application;
compatibility of the application with the OS, hardware, and different types of browsers;
performance testing to measure the application's performance; and load testing to make sure that
the system is reliable, does not crash, and has no blocking issues. Testing also confirms that the
application can be deployed to a machine easily and without resistance, so that it is easy to
install, learn, and use.
Unit Testing
Unit testing is a type of software testing that focuses on individual units or components of a software
system. The purpose of unit testing is to validate that each unit of the software works as intended and
meets the requirements. Unit testing is typically performed by developers, and it is performed early in the
development process before the code is integrated and tested as a whole system.
Unit tests are automated and are run each time the code is changed to ensure that new code does not
break existing functionality. Unit tests are designed to validate the smallest possible unit of code, such as
a function or a method, and test it in isolation from the rest of the system. This allows developers to
quickly identify and fix any issues early in the development process, improving the overall quality of the
software and reducing the time required for later testing.
Unit Testing is a software testing technique in which individual units of software, i.e., groups of
computer program modules, usage procedures, and operating procedures, are tested to determine
whether they are suitable for use. Every independent module is tested for functional correctness,
usually by the developer who wrote it. An individual unit may be a single function, method, or
procedure, and unit testing is carried out during the development of the application. In the SDLC
or V-Model, unit testing is the first level of testing, done before integration testing. Although
unit testing is usually performed by developers, quality assurance engineers also carry it out when
developers are reluctant to test their own code.
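As a concrete sketch, the unit test below exercises a single hypothetical function (`apply_discount`, invented for illustration) in isolation, using Python's built-in `unittest` module (the basis of the PyUnit tooling mentioned later in this unit):

```python
import unittest

# Hypothetical unit under test (invented for illustration).
def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if price < 0 or not (0 <= percent <= 100):
        raise ValueError("invalid price or percent")
    return price * (100 - percent) / 100

# Each test validates the unit in isolation from the rest of the system.
class TestApplyDiscount(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200, 25), 150.0)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(80, 0), 80.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100, 150)

if __name__ == "__main__":
    unittest.main(argv=["prog"], exit=False)
```

Because such tests are automated, they can be re-run on every code change to catch regressions early.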
Integration Testing
Integration testing is the process of testing the interface between two software units or modules. It focuses
on determining the correctness of the interface. The purpose of integration testing is to expose faults in the
interaction between integrated units. Once all the modules have been unit tested, integration testing is
performed.
Integration testing is a software testing technique that focuses on verifying the interactions and data
exchange between different components or modules of a software application. The goal of integration
testing is to identify any problems or bugs that arise when different components are combined and interact
with each other. Integration testing is typically performed after unit testing and before system testing. It
helps to identify and resolve integration issues early in the development cycle, reducing the risk of more
severe and costly problems later on.
Integration testing can be done module by module, so that a proper sequence is followed and no
integration scenario is missed. The major focus of integration testing is exposing the defects that
arise at the time of interaction between the integrated units.
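As a minimal sketch (the module and function names are invented for illustration), the test below integrates two small modules and checks the data exchanged across their interface:

```python
# Module 1 (input handling): normalizes raw user input.
def normalize_username(raw):
    return raw.strip().lower()

# Module 2 (storage): a tiny in-memory repository.
class UserRepository:
    def __init__(self):
        self._users = set()

    def add(self, username):
        self._users.add(username)

    def exists(self, username):
        return username in self._users

# Integration point under test: data must flow correctly from
# normalization into storage.
def register(repo, raw_name):
    name = normalize_username(raw_name)
    repo.add(name)
    return name

# Integration test: the normalized value must reach the repository.
repo = UserRepository()
register(repo, "  Alice ")
assert repo.exists("alice")
```

Each module would already have passed its own unit tests; this test targets only their interface.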
Integration test approaches – There are four types of integration testing approaches. Those approaches are
the following:
1. Big-Bang Integration Testing – It is the simplest integration testing approach, where all the modules are
combined and the functionality is verified after the completion of individual module testing. In simple
words, all the modules of the system are simply put together and tested. This approach is practicable only
for very small systems. If an error is found during integration testing, it is very difficult to
localize, as it may potentially belong to any of the modules being integrated. As a result, errors
reported during big-bang integration testing are very expensive to debug and fix.
Big-Bang integration testing is a software testing approach in which all components or modules of a
software application are combined and tested at once. This approach is typically used when the software
components have a low degree of interdependence or when there are constraints in the development
environment that prevent testing individual components. The goal of big-bang integration testing is to verify
the overall functionality of the system and to identify any integration problems that arise when the
components are combined. While big-bang integration testing can be useful in some situations, it can also
be a high-risk approach, as the complexity of the system and the number of interactions between
components can make it difficult to identify and diagnose problems.
Advantages:
It is convenient for small systems.
Simple and straightforward approach.
Can be completed quickly.
Does not require a lot of planning or coordination.
May be suitable for small systems or projects with a low degree of interdependence between components.
Disadvantages:
There will be quite a lot of delay, because you have to wait for all the modules to be integrated.
High-risk critical modules are not isolated and tested on priority, since all modules are tested at once.
Not good for long projects.
High risk of integration problems that are difficult to identify and diagnose, which can result in long and complex debugging and troubleshooting efforts, system downtime, and increased development costs.
May not provide enough visibility into the interactions and data exchange between components, resulting in a lack of confidence in the system's stability and reliability.
Can lead to decreased efficiency and productivity, a lack of confidence in the development team, system failure, and decreased user satisfaction.
2. Bottom-Up Integration Testing – In bottom-up testing, the modules at the lower levels are tested
first, then combined with higher-level modules, until all modules have been tested. The primary
purpose of this integration testing is that each subsystem tests the interfaces among the various
modules making up the subsystem. Bottom-up integration uses test drivers to drive the lower-level
modules and pass appropriate data to them.
Advantages:
In bottom-up testing, no stubs are required.
A principal advantage is that several disjoint subsystems can be tested simultaneously.
It is easy to create the test conditions.
Best for applications that use a bottom-up design approach.
It is easy to observe the test results.
Disadvantages:
Driver modules must be produced.
Complexity arises when the system is made up of a large number of small subsystems.
Until the higher-level modules are integrated, no working model of the system can be demonstrated.
3. Top-Down Integration Testing – In top-down integration testing, testing takes place from top to
bottom: high-level modules are tested first, then low-level modules, and finally the low-level
modules are integrated with the high-level ones to ensure the system works as intended. Stubs are
used to simulate the behaviour of the lower-level modules that are not yet integrated.
Advantages:
Modules are debugged separately.
Few or no drivers are needed.
It is more stable and accurate at the aggregate level.
Interface errors are easier to isolate.
Design defects can be found in the early stages.
Disadvantages:
Needs many stubs.
Modules at lower levels are tested inadequately.
It is difficult to observe the test output.
Stubs are difficult to design.
4. Mixed Integration Testing – Mixed integration testing, also called sandwich integration testing,
follows a combination of the top-down and bottom-up approaches. In the top-down approach, testing
can start only after the top-level modules have been coded and unit tested; in the bottom-up
approach, testing can start only after the bottom-level modules are ready. The sandwich approach
overcomes these shortcomings of the top-down and bottom-up approaches. It is also called hybrid
integration testing. Both stubs and drivers are used in mixed integration testing.
Advantages:
The mixed approach is useful for very large projects having several sub-projects.
It overcomes the shortcomings of the top-down and bottom-up approaches.
Parallel tests can be performed in the top- and bottom-layer tests.
Disadvantages:
Mixed integration testing is very costly, because one part follows a top-down approach while
another part follows a bottom-up approach.
It cannot be used for smaller systems with heavy interdependence between the modules.
Steps for applying integration testing:
Identify the components: Identify the individual components of your application that need to be
integrated. This could include the frontend, backend, database, and any third-party services.
Create a test plan: Develop a test plan that outlines the scenarios and test cases that need to be executed
to validate the integration points between the different components. This could include testing data flow,
communication protocols, and error handling.
Set up test environment: Set up a test environment that mirrors the production environment as closely as
possible. This will help ensure that the results of your integration tests are accurate and reliable.
Execute the tests: Execute the tests outlined in your test plan, starting with the most critical and complex
scenarios. Be sure to log any defects or issues that you encounter during testing.
Analyze the results: Analyze the results of your integration tests to identify any defects or issues that
need to be addressed. This may involve working with developers to fix bugs or make changes to the
application architecture.
Repeat testing: Once defects have been fixed, repeat the integration testing process to ensure that the
changes have been successful and that the application still works as expected.
Acceptance Testing
Acceptance Testing is a method of software testing where a system is tested for acceptability. The major
aim of this test is to evaluate the compliance of the system with the business requirements and assess
whether it is acceptable for delivery. Standard definition of acceptance testing:
It is formal testing conducted according to user needs, requirements, and business processes to
determine whether a system satisfies the acceptance criteria, and to enable the users, customers,
or other authorized entities to decide whether to accept the system.
Acceptance Testing is the last phase of software testing performed after System Testing and before
making the system available for actual use.
Types of Acceptance Testing:
User Acceptance Testing (UAT): User acceptance testing is used to determine whether the product is
working for the user correctly. Specific requirements which are quite often used by the customers are
primarily picked for the testing purpose. This is also termed as End-User Testing.
Business Acceptance Testing (BAT): BAT is used to determine whether the product meets the business
goals and purposes. BAT focuses mainly on business benefits, which can be quite challenging to
assess given changing market conditions and new technologies; the current implementation may have
to be changed, resulting in extra budget.
Contract Acceptance Testing (CAT): CAT is performed against a contract which specifies that once
the product goes live, the acceptance tests must be performed within a predetermined period and
must pass all the acceptance use cases. The contract, termed a Service Level Agreement (SLA),
includes terms under which payment will be made only if the product's services are in line with all
the requirements, meaning the contract is fulfilled. Sometimes this contract is agreed before the
product goes live. There should be a well-defined contract in terms of the period of testing, areas
of testing, conditions on issues encountered at later stages, payments, etc.
Regulations Acceptance Testing (RAT): RAT is used to determine whether the product violates the
rules and regulations defined by the government of the country where it is released. A violation
may be unintentional but will still impact the business negatively. Generally, any product or
application to be released in the market has to undergo RAT, as different countries or regions have
different rules and regulations defined by their governing bodies. If any rules or regulations are
violated for a country or region, the product will not be released in that country or region; and
if the product is released despite a violation, the vendors of the product will be directly
responsible.
Operational Acceptance Testing (OAT): OAT is used to determine the operational readiness of the
product and is non-functional testing. It mainly includes testing of recovery, compatibility, maintainability,
reliability, etc. OAT assures the stability of the product before it is released to production.
Alpha Testing: Alpha testing is used to evaluate the product in the development testing environment
by a specialized team of testers, usually called alpha testers.
Beta Testing: Beta testing is used to assess the product by exposing it to the real end-users, usually called
beta testers in their environment. Feedback is collected from the users and the defects are fixed. Also,
this helps in enhancing the product to give a rich user experience.
Use of Acceptance Testing:
To find the defects missed during the functional testing phase.
To judge how well the product has been developed.
To confirm that the product is what the customers actually need.
To gather feedback that helps improve the product's performance and user experience.
To minimize or eliminate issues arising in production.
Advantages of Acceptance Testing:
This testing helps the project team learn further requirements directly from the users, as it
involves the users in testing.
Test execution can be automated.
It brings confidence and satisfaction to the clients, as they are directly involved in the testing
process.
It is easier for the users to describe their requirements.
It uses the black-box testing process, so the entire functionality of the product is tested.
Disadvantages of Acceptance Testing:
Users should have basic knowledge of the product or application.
Sometimes users don't want to participate in the testing process.
Feedback from testing can take a long time, as it involves many users and opinions may differ from
one user to another.
The development team does not participate in this testing process.
Regression Testing
Regression Testing is the process of testing the modified parts of the code and the parts that might get
affected due to the modifications to ensure that no new errors have been introduced in the software
after the modifications have been made. Regression means return of something and in the software field,
it refers to the return of a bug.
When to do regression testing?
When a new functionality is added to the system and the code has been modified to absorb and integrate
that functionality with the existing code.
When some defect has been identified in the software and the code is debugged to fix it.
When the code is modified to optimize its working.
Process of Regression testing:
Whenever we make changes to the source code for any reason, such as adding new functionality or
optimization, the program may fail on the previously designed test suite when executed. After a
failure, the source code is debugged to identify the bugs in the program, and appropriate
modifications are made. Then, test cases that cover all the modified and affected parts of the
source code are selected from the existing test suite; new test cases are added if required.
Finally, regression testing is performed using the selected test cases.
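The process above can be sketched in Python; `word_count` and its test cases are hypothetical, and the deliberate bug stands in for a code modification that breaks existing behaviour:

```python
# "Modified" version of word_count: it now splits on single spaces
# only -- a subtle bug, since the original split on any whitespace run.
def word_count(text):
    return len(text.split(" "))

# Re-running the existing regression suite after the modification.
def run_regression_suite():
    failures = []
    cases = [("hello", 1), ("a b c", 3), ("a  b", 2)]  # pre-existing cases
    for text, expected in cases:
        if word_count(text) != expected:
            failures.append(text)
    return failures

# The previously passing case "a  b" now fails, flagging the regression.
print(run_regression_suite())
```

The suite is then used again after the bug is fixed, to confirm that the fix itself introduced no new errors.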
Difference between Top Down Integration Testing and Bottom Up Integration Testing :
1. In top-down integration testing, the higher-level modules are tested first, then the lower-level
modules are tested, and the modules are integrated accordingly. In bottom-up integration testing,
the lower-level modules are tested first, then the higher-level modules are tested, and the modules
are integrated accordingly.
2. Top-down testing uses stubs to simulate an invoked submodule that is not yet developed; a stub
works as a momentary replacement. Bottom-up testing uses drivers to simulate the main module when
it is not yet developed; a driver works as a momentary replacement.
3. The top-down approach is beneficial if the significant defects occur toward the top of the
program. The bottom-up approach is beneficial if the crucial flaws occur toward the bottom of the
program.
4. Stubs come into play when the upper levels of the modules are being tested while the lower-level
modules are still under development. Drivers come into play when the lower levels of the modules
are being tested while the upper-level modules are still under development.
5. Stubs are used when lower-level modules are missing or only partially developed and we want to
test the main module. Drivers are used when higher-level modules are missing or only partially
developed and we want to test the lower (sub-)modules.
Many primary elements are required to make product testing lucid and hassle-free. Each element has
a specific utility that helps greatly while testing the software and delivering the expected
functionality as per the SRS document as far as possible. Stubs and drivers are two such elements
that play a very crucial role in testing: they replace modules that haven't been developed yet but
are still needed when testing other modules against the expected functionality and features.
Stubs and Drivers:
Stubs and drivers are elements that stand in for yet-to-be-developed modules: they can replace
modules that are still in development, missing, or not developed at all, so that the need for such
modules can be met. Drivers and stubs simulate the features and functionality that a module would
provide. This avoids useless delay in testing and makes the testing process faster.
Stubs are mainly used in Top-Down integration testing while the Drivers are used in Bottom-up
integration testing, thus increasing the efficiency of testing process.
1. Stubs:
Stubs are developed by software developers to use in place of modules that are not yet developed,
missing, or currently unavailable during top-down testing. A stub simulates the unavailable module,
providing the capabilities its caller needs. Stubs are used when lower-level modules are needed but
are currently unavailable.
Stubs are divided into four basic categories based on what they do :
Shows the traced messages,
Shows the displayed message if any,
Returns the corresponding values that are utilized by modules,
Returns the value of the chosen parameters(arguments) that were used by the testing modules.
2. Drivers:
Drivers serve the same purpose as stubs, but drivers are used in bottom-up integration testing and
are also more complex than stubs. Drivers are used when modules are missing or unavailable at the
time of testing a specific module, for some unavoidable reason, to act in the absence of the
required module. Drivers stand in for the missing higher-level (calling) modules, so that the
lower-level modules under test can be exercised.
Ex: Suppose you are told to test a website whose primary modules, each interdependent on the
others, are as follows:
Module-A: Login page of the website
Module-B: Home page of the website
Module-C: Profile settings
Module-D: Sign-out page
It’s always considered good practice to begin development of all modules parallelly because as soon as
each gets developed they can be integrated and could be tested further as per their corresponding
interdependencies order with a module. But in some cases, if any one of them is in developing stage or
not available in the testing process of a specific module, stubs or drivers could be used instead.
Assume Module-A is developed. As soon as it is developed it undergoes testing, but it requires
Module-B, which isn't developed yet. In this case we can use a stub or driver that simulates the
features and functionality the actual Module-B would provide. So, we can conclude that stubs and
drivers are used to fulfill the necessity of unavailable modules. Similarly, we may use stubs or
drivers in place of Module-C and Module-D if they too are unavailable.
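A minimal sketch of this scenario, using `unittest.mock.Mock` as the stand-in (the `login` and `render` names are invented for illustration):

```python
from unittest.mock import Mock

# Module-A (developed): a login flow that depends on Module-B (home
# page), which is not developed yet, so a stub stands in for it.
def login(username, password, home_page):
    if username == "admin" and password == "secret":
        return home_page.render(username)   # call into Module-B
    return "login failed"

# Stub for Module-B: returns a canned value, as a stub should.
home_stub = Mock()
home_stub.render.return_value = "home page for admin"

# Module-A can now be tested despite Module-B being unavailable.
assert login("admin", "secret", home_stub) == "home page for admin"
assert login("admin", "wrong", home_stub) == "login failed"
```

Once the real Module-B is developed, the stub is replaced and the same tests are re-run against the real integration.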
Do both drivers and stubs serve the same functionality?
Yes, both serve the same purpose: they are used in the absence of a module (M1) that has
interdependencies with another module (M2) that needs to be tested, so we use drivers or stubs to
cover M1's unavailability and serve its functionality.
Structural Software Testing (White Box Testing)
White box testing techniques analyze the internal structures: the data structures used, the
internal design, the code structure, and the working of the software, rather than just its
functionality as in black box testing. It is also called glass box testing, clear box testing,
structural testing, transparent testing, or open box testing.
White box testing is a software testing technique that involves testing the internal structure and workings
of a software application. The tester has access to the source code and uses this knowledge to design test
cases that can verify the correctness of the software at the code level.
White box testing is also known as structural testing or code-based testing, and it is used to test the
software’s internal logic, flow, and structure. The tester creates test cases to examine the code paths and
logic flows to ensure they meet the specified requirements.
Working process of white box testing:
Input: Requirements, Functional specifications, design documents, source code.
Processing: Performing risk analysis to guide the entire process.
Proper test planning: Designing test cases so as to cover the entire code; executing and re-running
them until error-free software is reached; communicating the results.
Output: Preparing final report of the entire testing process.
Testing techniques:
Statement coverage: In this technique, the aim is to traverse all statements at least once, so each
line of code is tested. In the case of a flowchart, every node must be traversed at least once.
Since all lines of code are covered, this helps in pointing out faulty code.
Branch Coverage: In this technique, test cases are designed so that each branch from all decision points is
traversed at least once. In a flowchart, all edges must be traversed at least once.
(Flowchart example, not reproduced here: 4 test cases are required so that all branches of all decisions are covered, i.e., all edges of the flowchart are covered.)
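A minimal illustration of the idea (the `classify` function is invented for this sketch): with one decision point, branch coverage requires both the true and the false edge to be taken at least once:

```python
# Hypothetical function with one decision point (two branches).
def classify(n):
    if n >= 0:
        return "non-negative"
    else:
        return "negative"

# Statement coverage needs both return statements executed; branch
# coverage needs both the true and false edges of the decision taken.
assert classify(5) == "non-negative"   # true branch
assert classify(-3) == "negative"      # false branch
```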
Condition Coverage: In this technique, all individual conditions must be covered as shown in the following
example:
READ X, Y
IF(X == 0 || Y == 0)
PRINT ‘0’
#TC1 – X = 0, Y = 55
#TC2 – X = 5, Y = 0
Multiple Condition Coverage: In this technique, all the possible combinations of the possible outcomes of
conditions are tested at least once. Let’s consider the following example:
READ X, Y
IF(X == 0 || Y == 0)
PRINT ‘0’
#TC1: X = 0, Y = 0
#TC2: X = 0, Y = 5
#TC3: X = 55, Y = 0
#TC4: X = 55, Y = 5
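An executable version of the pseudocode above, with one test per combination of the two condition outcomes:

```python
# Same predicate as the pseudocode above, as a Python function.
def is_zero_product(x, y):
    return "0" if (x == 0 or y == 0) else "non-zero"

# All four combinations of the two conditions (TC1-TC4 above).
assert is_zero_product(0, 0) == "0"          # TC1: both conditions true
assert is_zero_product(0, 5) == "0"          # TC2: first true, second false
assert is_zero_product(55, 0) == "0"         # TC3: first false, second true
assert is_zero_product(55, 5) == "non-zero"  # TC4: both conditions false
```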
Basis Path Testing: In this technique, control flow graphs are made from code or flowchart and then
Cyclomatic complexity is calculated which defines the number of independent paths so that the minimal
number of test cases can be designed for each independent path. Steps:
Make the corresponding control flow graph
Calculate the cyclomatic complexity
Find the independent paths
Design test cases corresponding to each independent path
V(G) = P + 1, where P is the number of predicate nodes in the flow graph
V(G) = E – N + 2, where E is the number of edges and N is the total number of nodes
V(G) = Number of non-overlapping regions in the graph
#P1: 1 – 2 – 4 – 7 – 8
#P2: 1 – 2 – 3 – 5 – 7 – 8
#P3: 1 – 2 – 3 – 6 – 7 – 8
#P4: 1 – 2 – 4 – 7 – 1 – . . . – 7 – 8
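As a sketch, V(G) = E – N + 2 can be computed for the flow graph implied by paths P1–P4 above (the edge list below is reconstructed from those paths and is an assumption, since the original flowchart is not reproduced here):

```python
# Edges reconstructed from the independent paths P1-P4 above (assumed).
edges = {(1, 2), (2, 3), (2, 4), (3, 5), (3, 6),
         (4, 7), (5, 7), (6, 7), (7, 8), (7, 1)}
nodes = {n for edge in edges for n in edge}

# V(G) = E - N + 2
v_g = len(edges) - len(nodes) + 2
print(v_g)  # 10 edges - 8 nodes + 2 = 4, matching the 4 paths P1-P4
```

The same value follows from V(G) = P + 1 with P = 3 predicate nodes (nodes 2, 3, and 7 each have two outgoing edges).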
Loop Testing: Loops are widely used and these are fundamental to many algorithms hence, their testing is
very important. Errors often occur at the beginnings and ends of loops.
Simple loops: For a simple loop of size n, test cases are designed that:
Skip the loop entirely
Make only one pass through the loop
Make 2 passes
Make m passes, where m < n
Make n-1 and n+1 passes
Nested loops: For nested loops, all the loops are set to their minimum count and we start from the
innermost loop. Simple loop tests are conducted for the innermost loop and this is worked outwards till
all the loops have been tested.
Concatenated loops: Independent loops, one after another. Simple loop tests are applied for each. If
they’re not independent, treat them like nesting.
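The simple-loop cases above can be sketched as follows (`sum_first` is a hypothetical bounded loop of size n = 5):

```python
# Hypothetical loop under test: sums the first n elements, bounded by
# the length of the list.
def sum_first(values, n):
    total = 0
    for i in range(min(n, len(values))):
        total += values[i]
    return total

data = [1, 2, 3, 4, 5]           # loop size n = 5
assert sum_first(data, 0) == 0    # skip the loop entirely
assert sum_first(data, 1) == 1    # one pass
assert sum_first(data, 2) == 3    # two passes
assert sum_first(data, 3) == 6    # m passes, m < n
assert sum_first(data, 4) == 10   # n-1 passes
assert sum_first(data, 6) == 15   # n+1 requested; the loop stays bounded at n
```

The n+1 case is what exposes off-by-one errors at the end of the loop, where defects most often occur.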
White box testing is performed in 2 steps:
1. Tester should understand the code well
2. Tester should write some code for test cases and execute them
Tools required for White box Testing:
PyUnit
Sqlmap
Nmap
Parasoft Jtest
Nunit
VeraUnit
CppUnit
Bugzilla
Fiddler
JSUnit.net
OpenGrok
Wireshark
HP Fortify
CSUnit
Advantages:
Testers can create more comprehensive and effective test cases that cover all code paths.
Testers can ensure that the code meets coding standards and is optimized for performance.
However, there are also some disadvantages to white box testing, such as:
Testers need to have programming knowledge and access to the source code to perform tests.
Testers may focus too much on the internal workings of the software and may miss external issues.
Testers may have a biased view of the software since they are familiar with its internal workings.
Overall, white box testing is an important technique in software engineering; it is useful for
identifying defects and ensuring that software applications meet their requirements and
specifications at the code level.
Disadvantages:
It is very expensive.
Redesigning or rewriting code requires the test cases to be written again.
Testers are required to have in-depth knowledge of the code and programming language, as opposed to
black-box testing.
Missing functionalities cannot be detected, as only the code that exists is tested.
It is very complex and at times not realistic.
Higher chances of errors in production.
Functional Testing (Black box testing)
Black box testing is a type of software testing in which the internal structure and implementation
of the software are not known to the tester. The testing is done without internal knowledge of the
product, focusing only on its functionality.
Black box testing can be done in the following ways:
1. Syntax-Driven Testing – This type of testing is applied to systems that can be syntactically
represented by some language, for example compilers, or a language that can be represented by a
context-free grammar. In this, the test cases are generated so that each grammar rule is used at
least once.
2. Equivalence partitioning – It is often seen that many types of inputs work similarly so instead of giving
all of them separately we can group them and test only one input of each group. The idea is to partition the
input domain of the system into several equivalence classes such that each member of the class works
similarly, i.e., if a test case in one class results in some error, other members of the class would also result
in the same error.
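A minimal sketch of equivalence partitioning (the age range 18–60 is an invented example domain): the input domain splits into three classes, and one representative is tested from each:

```python
# Hypothetical input domain: an age field accepting values 18..60.
def is_valid_age(age):
    return 18 <= age <= 60

# Three equivalence classes; one representative tested from each.
assert is_valid_age(10) is False   # class 1: below the valid range
assert is_valid_age(35) is True    # class 2: within the valid range
assert is_valid_age(70) is False   # class 3: above the valid range
```

If any member of a class fails, the other members of that class are expected to fail in the same way, so one representative per class suffices.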
In decision-table-based black box testing, each column of the decision table corresponds to a rule, and each rule becomes a test case; a table with four rules therefore yields four test cases.
5. Requirement-based testing – It includes validating the requirements given in the SRS of a software
system.
6. Compatibility testing – The test case result depends not only on the product but also on the
infrastructure delivering the functionality. When the infrastructure parameters are changed, the
software is still expected to work properly. Some parameters that generally affect the
compatibility of software are:
Processor type (e.g., Pentium 3, Pentium 4) and the number of processors.
Architecture and characteristics of machine (32-bit or 64-bit).
Back-end components such as database servers.
Operating System (Windows, Linux, etc).
Black Box Testing Type
The following are the several categories of black box testing:
Functional Testing
Regression Testing
Nonfunctional Testing (NFT)
Functional Testing: It determines the system’s software functional requirements.
Regression Testing: It ensures that the newly added code is compatible with the existing code. In other
words, a new software update has no impact on the functionality of the software. This is carried out after
a system maintenance operation and upgrades.
Nonfunctional Testing: Nonfunctional testing is also known as NFT. It tests the non-functional
aspects of the software rather than its functions, focusing on the software's performance,
usability, and scalability.
Tools Used for Black Box Testing:
Appium
Selenium
Microsoft Coded UI
Applitools
HP QTP.
Test data in software testing is the input given to a software program during test execution. It
represents data that affects or is affected by software execution during testing. Test data is used
both for positive testing, to verify that functions produce expected results for given inputs, and
for negative testing, to test the software's ability to handle unusual, exceptional, or unexpected
inputs.
Valid test data. It is necessary to verify that the system functions comply with the requirements and
that the system processes and stores the data as intended.
Invalid test data. QA engineers should inspect whether the software correctly processes invalid values,
shows the relevant messages, and notifies the user that the data are improper.
Boundary test data. Help to reveal the defects connected with processing boundary values.
Wrong data. Testers have to check how the system reacts to data entered in an inappropriate format and
whether it shows the correct error messages.
Absent data. It is a good practice to verify how the product handles entering a blank field in the course of
software testing.
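The five categories above can be exercised against a single function. The sketch below uses a hypothetical `validate_age` validator (an assumption for illustration, not from any real system) and labels one check per category:

```python
# Sketch: one hypothetical validator exercised with each category of test data.
def validate_age(value):
    """Accept an age between 18 and 60 inclusive; reject everything else."""
    if value is None or value == "":
        return "error: age is required"           # handles absent data
    try:
        age = int(value)
    except (TypeError, ValueError):
        return "error: age must be a number"      # handles wrong-format data
    if 18 <= age <= 60:
        return "ok"
    return "error: age out of range"              # handles invalid data

assert validate_age(30) == "ok"                             # valid test data
assert validate_age(17) == "error: age out of range"        # invalid test data
assert validate_age(18) == "ok"                             # boundary test data
assert validate_age(60) == "ok"                             # boundary test data
assert validate_age("abc") == "error: age must be a number" # wrong data
assert validate_age("") == "error: age is required"         # absent data
```

Note that the boundary cases (18 and 60) sit exactly on the edges of the valid range, which is where off-by-one defects are most likely to hide.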
Test suite is a container that holds a set of tests and helps testers execute and report the test
execution status. A test suite can take any of three states: Active, In Progress, and Completed.
A test case can be added to multiple test suites and test plans. After creating a test plan, test suites
are created, which in turn can contain any number of tests.
In software development, a test suite, less commonly known as a 'validation suite', is a collection of test
cases that are intended to be used to test a software program to show that it has some specified set of
behaviours.
Alpha testing is performed by testers who are usually internal employees of the organization; beta
testing is performed by clients who are not part of the organization.
Reliability and security testing are not checked in alpha testing; reliability, security, and robustness
are checked during beta testing.
Alpha testing requires a testing environment or a lab; beta testing does not require a testing
environment or lab.
Walkthrough
The walkthrough is a review meeting process, but it differs from the inspection in that it does not
follow any formal process, i.e. it is an informal process. Basically, the walkthrough is started by the
author of the code.
In the walkthrough, the code or document is read by the author, and the others present in the meeting
can note down the important points, write notes on the defects, and give suggestions about them.
The walkthrough is an informal way of testing; no formal authority is involved.
Because the process is informal, there is no need for a moderator while performing a walkthrough. A
walkthrough can be called an open-ended discussion; it does not focus on the documentation.
Defect tracking is one of the challenging tasks in a walkthrough.
Advantages and Objectives of Walkthrough:
Following are some of the objectives of the walkthrough.
To detect defects in developed software products.
To fully understand and learn the development of software products.
To properly explain and discuss the information present in the document.
To verify the validity of the proposed system.
To give suggestions and report them appropriately with new solutions and ideas.
To provide an early “proof of concept”.
Types of Review:
Walkthrough
Technical review
Inspection
Technical Review:
The technical review is a less formal review meeting process. It is performed to give assurance about
software quality. In the technical review process, the review activity is performed by software
engineers and other qualified persons.
Here a work product is inspected and reviewed for defects and other errors by individuals other than
the person who produced it. The technical review is performed as a peer review without any management
involvement. Depending on the context, the technical review varies from informal to quite formal.
Inspection:
Inspection is a more formal review meeting process. In an inspection, the documents are checked
thoroughly and reviewed by the reviewers before the meeting. It basically involves peers examining the
product or software.
Preparation is carried out separately, in which the product is examined properly and scanned for any
defects or errors. The inspection process is led by trained moderators, who also carry out a formal
follow-up. Inspection is done to improve product quality more efficiently.
Objectives of Inspection:
Following are some of the objectives of Inspection.
To efficiently improve the quality of a software or product.
To create a common understanding between the individuals who are involved in the inspection.
To remove defects and errors as early as possible.
To learn and improve from the defects found during inspection.
To help the author improve and increase the quality of the product being developed.
Compliance Testing is performed to maintain and validate the compliant state for the life of the software.
Every industry has a regulatory and compliance board that protects the end users.
Software compliance refers to how well an application obeys the rules in a standard. To achieve software
compliance, you might also have to, for example, produce certain types of documentation or add security
testing at more points in your software development life cycle.
Checklists:
Retain professionals who are knowledgeable and experienced and who understand the compliance requirements.
Perform an internal audit and follow up with an action plan to fix the issues.
Coding Standards
Coding:
The objective of the coding phase is to transform the design of a system into code in a high-level language
and then to unit test this code.
Good software development organizations normally require their programmers to adhere to some well-
defined and standard style of coding called coding standards.
Coding Standards:
A coding standard gives a uniform appearance to the codes written by different engineers.
It avoids idiosyncratic quirks that could complicate understanding and refactoring of the code by the rest of the team.
Coding Guidelines:
Line length
Spacing
Inline comments
Error messages
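The guidelines above can be illustrated with a short sketch. The function below is a hypothetical example, not from any standard; it keeps lines short, spaces operators consistently, uses inline comments that explain intent, and raises descriptive error messages:

```python
# Sketch illustrating the coding guidelines: short lines, consistent
# spacing, inline comments that explain intent, and clear error messages.
MAX_RETRIES = 3  # named constant instead of a "magic number"

def read_positive_int(text):
    """Parse text as a positive integer, raising a descriptive error."""
    try:
        value = int(text)
    except ValueError:
        # Error message states what was expected and what was received.
        raise ValueError(f"expected an integer, got {text!r}")
    if value <= 0:
        raise ValueError(f"expected a positive integer, got {value}")
    return value

print(read_positive_int("42"))
```

Contrast the error messages here with a bare `raise ValueError()`: a reader of the log can tell exactly which input violated which rule, which is the point of the "error messages" guideline.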