
UNIT-4

SOFTWARE TESTING

Software testing has several goals and objectives. The major objectives of software testing are as follows:
To find defects that may have been introduced by the programmer while developing the software.
To gain confidence in, and provide information about, the level of quality.
To prevent defects.
To make sure that the end result meets the business and user requirements.
To ensure that the product satisfies the BRS (Business Requirement Specification) and the SRS (System Requirement Specification).
To gain the confidence of customers by providing them with a quality product.
Software testing helps in finalizing the software application or product against the business and user requirements. It is very important to have good test coverage in order to test the software application completely and to make sure that it performs well and as per the specifications. While determining the test coverage, the test cases should be designed well, with the maximum possibility of finding errors or bugs. The test cases should be very effective. This objective can be measured by the number of defects reported per test case: the higher the number of defects reported, the more effective the test cases are.
Once the delivery is made to the end users or customers, they should be able to operate it without any complaints. In order to make this happen, the tester should know how the customers are going to use the product and should write the test scenarios and design the test cases accordingly. This helps greatly in fulfilling all the customer's requirements.
Software testing makes sure that testing is done properly and hence that the system is ready for use. Good coverage means that testing has covered the various areas: functionality of the application, compatibility of the application with the OS, hardware and different types of browsers, performance testing to measure the performance of the application, and load testing to make sure the system is reliable, does not crash, and has no blocking issues. It also determines whether the application can be deployed to the machine easily and without any resistance, so that the application is easy to install, learn and use.

Unit Testing
Unit testing is a type of software testing that focuses on individual units or components of a software
system. The purpose of unit testing is to validate that each unit of the software works as intended and
meets the requirements. Unit testing is typically performed by developers, and it is performed early in the
development process before the code is integrated and tested as a whole system.
Unit tests are automated and are run each time the code is changed to ensure that new code does not
break existing functionality. Unit tests are designed to validate the smallest possible unit of code, such as
a function or a method, and test it in isolation from the rest of the system. This allows developers to
quickly identify and fix any issues early in the development process, improving the overall quality of the
software and reducing the time required for later testing.
Unit Testing is a software testing technique in which individual units of software, i.e. groups of computer program modules, usage procedures, and operating procedures, are tested to determine whether they are suitable for use. It is a testing method in which every independent module is tested by the developer to determine whether it has any issues, and it is concerned with the functional correctness of the independent modules. Unit Testing is defined as a type of software testing where individual components of the software are tested. Unit Testing of the software product is carried out during the development of an application. An individual component may be an individual function or a procedure. Unit Testing is typically performed by the developer. In the SDLC or the V Model, unit testing is the first level of testing, done before integration testing. Unit testing is usually performed by developers, although, due to the reluctance of some developers to test, quality assurance engineers may also do unit testing.
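As a minimal illustration, the sketch below uses Python's unittest framework (from the same family as the JUnit/NUnit tools listed later in this section) to test a small hypothetical add() function in isolation; the function and test names are assumptions made only for this example.

import unittest

# Hypothetical unit under test: a small, independent function.
def add(a, b):
    """Return the sum of two numbers."""
    return a + b

class TestAdd(unittest.TestCase):
    # Each test validates one behaviour of the unit in isolation.
    def test_adds_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_adds_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

    def test_zero_is_identity(self):
        self.assertEqual(add(7, 0), 7)

if __name__ == "__main__":
    unittest.main()

Such tests are typically run automatically on every code change so that regressions in the unit are caught immediately.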

Objective of Unit Testing:


The objectives of Unit Testing are:
To isolate a section of code.
To verify the correctness of the code.
To test every function and procedure.
To fix bugs early in the development cycle and to save costs.
To help the developers to understand the code base and enable them to make changes quickly.
To help with code reuse.
Types of Unit Testing:
There are 2 types of Unit Testing: Manual and Automated.

Workflow of Unit Testing:

Unit Testing Techniques:
There are 3 types of Unit Testing techniques. They are:
Black Box Testing: This testing technique is used in covering the unit tests for input, user interface, and
output parts.
White Box Testing: This technique is used in testing the functional behavior of the system by giving the
input and checking the functionality output including the internal design structure and code of the
modules.
Gray Box Testing: This technique is used in executing the relevant test cases, test methods, test functions,
and analyzing the code performance for the modules.

Unit Testing Tools:


Here are some commonly used Unit Testing tools:
Jtest
Junit
NUnit
EMMA
PHPUnit
Advantages of Unit Testing:
Unit Testing allows developers to learn what functionality is provided by a unit and how to use it to gain a
basic understanding of the unit API.
Unit testing allows the programmer to refine code and make sure the module works properly.
Unit testing enables testing parts of the project without waiting for others to be completed.
Early Detection of Issues: Unit testing allows developers to detect and fix issues early in the development
process, before they become larger and more difficult to fix.
Improved Code Quality: Unit testing helps to ensure that each unit of code works as intended and meets
the requirements, improving the overall quality of the software.
Increased Confidence: Unit testing provides developers with confidence in their code, as they can validate
that each unit of the software is functioning as expected.
Faster Development: Unit testing enables developers to work faster and more efficiently, as they can
validate changes to the code without having to wait for the full system to be tested.
Better Documentation: Unit testing provides clear and concise documentation of the code and its
behavior, making it easier for other developers to understand and maintain the software.
Facilitation of Refactoring: Unit testing enables developers to safely make changes to the code, as they
can validate that their changes do not break existing functionality.
Reduced Time and Cost: Unit testing can reduce the time and cost required for later testing, as it helps to
identify and fix issues early in the development process.

Disadvantages of Unit Testing:


Writing unit test cases is time-consuming.
Unit testing will not catch every error in a module, because some errors only surface when modules interact and are found during integration testing.
Unit testing is not efficient for checking errors in the UI (User Interface) part of a module.
It requires more maintenance effort when the source code is changed frequently.
It cannot cover non-functional testing parameters such as scalability, system performance, etc.
Time and Effort: Unit testing requires a significant investment of time and effort to create and maintain
the test cases, especially for complex systems.
Dependence on Developers: The success of unit testing depends on the developers, who must write clear,
concise, and comprehensive test cases to validate the code.
Difficulty in Testing Complex Units: Unit testing can be challenging when dealing with complex units, as it
can be difficult to isolate and test individual units in isolation from the rest of the system.
Difficulty in Testing Interactions: Unit testing may not be sufficient for testing interactions between units,
as it only focuses on individual units.
Difficulty in Testing User Interfaces: Unit testing may not be suitable for testing user interfaces, as it
typically focuses on the functionality of individual units.
Over-reliance on Automation: Over-reliance on automated unit tests can lead to a false sense of security,
as automated tests may not uncover all possible issues or bugs.
Maintenance Overhead: Unit testing requires ongoing maintenance and updates, as the code and test
cases must be kept up-to-date with changes to the software.

Integration Testing
Integration testing is the process of testing the interface between two software units or modules. It focuses
on determining the correctness of the interface. The purpose of integration testing is to expose faults in the
interaction between integrated units. Once all the modules have been unit tested, integration testing is
performed.
Integration testing is a software testing technique that focuses on verifying the interactions and data
exchange between different components or modules of a software application. The goal of integration
testing is to identify any problems or bugs that arise when different components are combined and interact
with each other. Integration testing is typically performed after unit testing and before system testing. It
helps to identify and resolve integration issues early in the development cycle, reducing the risk of more
severe and costly problems later on.
Integration testing can be done module by module, so that a proper sequence is followed; following the proper sequence also ensures that no integration scenario is missed. The major focus of integration testing is exposing defects at the time of interaction between the integrated units.
Integration test approaches – There are four types of integration testing approaches. Those approaches are
the following:
1. Big-Bang Integration Testing – It is the simplest integration testing approach, where all the modules are
combined and the functionality is verified after the completion of individual module testing. In simple
words, all the modules of the system are simply put together and tested. This approach is practicable only
for very small systems. If an error is found during integration testing, it is very difficult to localize, as it may potentially belong to any of the modules being integrated. So, errors reported during big-bang integration testing are very expensive to fix.
Big-Bang integration testing is a software testing approach in which all components or modules of a
software application are combined and tested at once. This approach is typically used when the software
components have a low degree of interdependence or when there are constraints in the development
environment that prevent testing individual components. The goal of big-bang integration testing is to verify
the overall functionality of the system and to identify any integration problems that arise when the
components are combined. While big-bang integration testing can be useful in some situations, it can also
be a high-risk approach, as the complexity of the system and the number of interactions between
components can make it difficult to identify and diagnose problems.
Advantages:
It is convenient for small systems.
Simple and straightforward approach.
Can be completed quickly.
Does not require a lot of planning or coordination.
May be suitable for small systems or projects with a low degree of interdependence between components.
Disadvantages:
There will be quite a lot of delay because you would have to wait for all the modules to be integrated.
High-risk critical modules are not isolated and tested on priority since all modules are tested at once.
Not Good for long projects.
High risk of integration problems that are difficult to identify and diagnose.
This can result in long and complex debugging and troubleshooting efforts.
This can lead to system downtime and increased development costs.
May not provide enough visibility into the interactions and data exchange between components.
This can result in a lack of confidence in the system’s stability and reliability.
This can lead to decreased efficiency and productivity.
This may result in a lack of confidence in the development team.
This can lead to system failure and decreased user satisfaction.
2. Bottom-Up Integration Testing – In bottom-up testing, the modules at the lower levels are tested first and then combined with higher-level modules until all modules have been tested. The primary purpose of this integration testing is that each subsystem tests the interfaces among the various modules making up the subsystem. This integration testing uses test drivers to drive and pass appropriate data to the lower-level modules.
Advantages:
In bottom-up testing, no stubs are required.
A principal advantage of this integration testing is that several disjoint subsystems can be tested
simultaneously.
It is easy to create the test conditions.
Best for applications that use a bottom-up design approach.
It is Easy to observe the test results.
Disadvantages:
Driver modules must be produced.
Complexity arises when the system is made up of a large number of small subsystems.
Until the top-level modules have been created, no working model of the system can be demonstrated.
3. Top-Down Integration Testing – In top-down integration testing, testing takes place from top to bottom. High-level modules are tested first, then the low-level modules, and finally the low-level modules are integrated with the high-level ones to ensure the system works as intended. Stubs are used to simulate the behaviour of the lower-level modules that are not yet integrated.
Advantages:
Modules can be debugged separately.
Few or no drivers are needed.
It is more stable and accurate at the aggregate level.
Easier isolation of interface errors.
Design defects can be found in the early stages.
Disadvantages:
Needs many stubs.
Modules at the lower levels are tested inadequately.
It is difficult to observe the test output.
Stub design can be difficult.
4. Mixed Integration Testing – Mixed integration testing is also called sandwich integration testing. It follows a combination of the top-down and bottom-up testing approaches. In the top-down approach, testing can start only after the top-level modules have been coded and unit tested. In the bottom-up approach, testing can start only after the bottom-level modules are ready. The sandwich or mixed approach overcomes this shortcoming of the top-down and bottom-up approaches. It is also called hybrid integration testing. Both stubs and drivers are used in mixed integration testing.
Advantages:
The mixed approach is useful for very large projects having several sub-projects.
The sandwich approach overcomes the shortcomings of the top-down and bottom-up approaches.
Parallel testing can be performed in the top and bottom layers.
Disadvantages:
Mixed integration testing is costly, because one part follows a top-down approach while another part follows a bottom-up approach.
This integration testing is not suitable for smaller systems with large interdependence between the modules.

Applications:
Identify the components: Identify the individual components of your application that need to be
integrated. This could include the frontend, backend, database, and any third-party services.
Create a test plan: Develop a test plan that outlines the scenarios and test cases that need to be executed
to validate the integration points between the different components. This could include testing data flow,
communication protocols, and error handling.
Set up test environment: Set up a test environment that mirrors the production environment as closely as
possible. This will help ensure that the results of your integration tests are accurate and reliable.
Execute the tests: Execute the tests outlined in your test plan, starting with the most critical and complex
scenarios. Be sure to log any defects or issues that you encounter during testing.
Analyze the results: Analyze the results of your integration tests to identify any defects or issues that
need to be addressed. This may involve working with developers to fix bugs or make changes to the
application architecture.
Repeat testing: Once defects have been fixed, repeat the integration testing process to ensure that the
changes have been successful and that the application still works as expected.
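To make these steps concrete, here is a minimal, hypothetical Python sketch of an integration test that exercises the interface and data exchange between two assumed components, a registration service and a user repository; the class and method names are illustrative only and not part of any real system.

import unittest

# Hypothetical modules whose interface is being integrated and tested.
class InMemoryUserRepository:
    def __init__(self):
        self._users = {}

    def save(self, username, email):
        self._users[username] = email

    def find(self, username):
        return self._users.get(username)

class RegistrationService:
    def __init__(self, repository):
        self.repository = repository

    def register(self, username, email):
        if self.repository.find(username) is not None:
            raise ValueError("username already taken")
        self.repository.save(username, email)

class TestRegistrationIntegration(unittest.TestCase):
    # Verifies the data exchange across the service/repository interface.
    def test_registered_user_is_persisted(self):
        repo = InMemoryUserRepository()
        service = RegistrationService(repo)
        service.register("alice", "alice@example.com")
        self.assertEqual(repo.find("alice"), "alice@example.com")

    def test_duplicate_registration_is_rejected(self):
        repo = InMemoryUserRepository()
        service = RegistrationService(repo)
        service.register("bob", "bob@example.com")
        with self.assertRaises(ValueError):
            service.register("bob", "other@example.com")

if __name__ == "__main__":
    unittest.main()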

Acceptance Testing
Acceptance Testing is a method of software testing where a system is tested for acceptability. The major
aim of this test is to evaluate the compliance of the system with the business requirements and assess
whether it is acceptable for delivery or not. Standard Definition of Acceptance Testing:
It is a formal testing according to user needs, requirements and business processes conducted to
determine whether a system satisfies the acceptance criteria or not and to enable the users, customers or
other authorized entities to determine whether to accept the system or not.
Acceptance Testing is the last phase of software testing performed after System Testing and before
making the system available for actual use.
Types of Acceptance Testing:
User Acceptance Testing (UAT): User acceptance testing is used to determine whether the product is
working for the user correctly. Specific requirements which are quite often used by the customers are
primarily picked for the testing purpose. This is also termed as End-User Testing.
Business Acceptance Testing (BAT): BAT is used to determine whether the product meets the business goals and purposes. BAT mainly focuses on business benefits, which can be quite challenging due to changing market conditions and new technologies, so the current implementation may have to be changed, resulting in extra budget.
Contract Acceptance Testing (CAT): CAT is based on a contract that specifies that once the product goes live, the acceptance test must be performed within a predetermined period and must pass all the acceptance use cases. There is a contract, termed a Service Level Agreement (SLA), which includes terms stating that payment will be made only if the product services are in line with all the requirements, meaning the contract is fulfilled. Sometimes this contract is drawn up before the product goes live. There should be a well-defined contract in terms of the period of testing, areas of testing, conditions on issues encountered at later stages, payments, etc.
Regulations Acceptance Testing (RAT): RAT is used to determine whether the product violates the rules and regulations defined by the government of the country where it is being released. This may be unintentional but will impact the business negatively. Generally, a product or application that is to be released in the market has to undergo RAT, as different countries or regions have different rules and regulations defined by their governing bodies. If any rules and regulations are violated for a country or region, the product will not be released in that country or region. If the product is released even though there is a violation, then the vendors of the product will be directly responsible.
Operational Acceptance Testing (OAT): OAT is used to determine the operational readiness of the
product and is non-functional testing. It mainly includes testing of recovery, compatibility, maintainability,
reliability, etc. OAT assures the stability of the product before it is released to production.
Alpha Testing: Alpha testing is used to evaluate the product in the development/testing environment by a specialized team of testers, usually called alpha testers.
Beta Testing: Beta testing is used to assess the product by exposing it to the real end-users, usually called
beta testers in their environment. Feedback is collected from the users and the defects are fixed. Also,
this helps in enhancing the product to give a rich user experience.
Use of Acceptance Testing:
To find the defects missed during the functional testing phase.
To assess how well the product has been developed.
To confirm that the product is what the customers actually need.
Feedback helps in improving the product performance and user experience.
To minimize or eliminate issues arising in production.
Advantages of Acceptance Testing :
This testing helps the project team to know the further requirements from the users directly as it involves
the users for testing.
Automated test execution.
It brings confidence and satisfaction to the clients as they are directly involved in the testing process.
It is easier for the user to describe their requirement.
It covers only the Black-Box testing process and hence the entire functionality of the product will be
tested.
Disadvantages of Acceptance Testing :
Users should have basic knowledge about the product or application.
Sometimes, users don’t want to participate in the testing process.
The feedback from the testing takes a long time, as it involves many users, and opinions may differ from one user to another.
The development team does not participate in this testing process.
Regression Testing
Regression Testing is the process of testing the modified parts of the code and the parts that might get
affected due to the modifications to ensure that no new errors have been introduced in the software
after the modifications have been made. Regression means return of something and in the software field,
it refers to the return of a bug.
When to do regression testing?
When a new functionality is added to the system and the code has been modified to absorb and integrate
that functionality with the existing code.
When some defect has been identified in the software and the code is debugged to fix it.
When the code is modified to optimize its working.
Process of Regression testing:
Firstly, whenever we make changes to the source code for reasons such as adding new functionality or optimization, the program may fail on the previously designed test suite when executed. After the failure, the source code is debugged in order to identify the bugs in the program. After identification of the bugs in the source code, appropriate modifications are made. Then, appropriate test cases are selected from the already existing test suite that cover all the modified and affected parts of the source code. New test cases can be added if required. Finally, regression testing is performed using the selected test cases.

Techniques for the selection of Test cases for Regression Testing:


Select all test cases: In this technique, all the test cases are selected from the already existing test suite. It is the simplest and safest technique, but it is not very efficient.
Select test cases randomly: In this technique, test cases are selected randomly from the existing test suite, but this is only useful if all the test cases are equally good in their fault-detection capability, which is very rare. Hence, it is rarely used.
Select modification-traversing test cases: In this technique, only those test cases are selected which cover and test the modified portions of the source code and the parts affected by these modifications.
Select higher-priority test cases: In this technique, priority codes are assigned to each test case of the test suite based on its bug-detection capability, customer requirements, etc. After assigning the priority codes, the test cases with the highest priorities are selected for regression testing. The test case with the highest priority has the highest rank; for example, a test case with priority code 2 is less important than a test case with priority code 1 (see the sketch below).
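The sketch below illustrates the priority-based selection technique just described; the test-case names and priority codes are hypothetical.

# A minimal sketch of priority-based test case selection for regression testing.
# Priority 1 is the most important, as described above.
test_suite = [
    {"name": "login_with_valid_credentials", "priority": 1},
    {"name": "update_profile_picture", "priority": 3},
    {"name": "checkout_with_saved_card", "priority": 1},
    {"name": "change_ui_theme", "priority": 4},
    {"name": "search_product_by_name", "priority": 2},
]

def select_for_regression(suite, max_priority):
    """Select test cases whose priority code is at or above the cut-off."""
    selected = [tc for tc in suite if tc["priority"] <= max_priority]
    # Run the most important cases first.
    return sorted(selected, key=lambda tc: tc["priority"])

# Select only priority 1 and 2 cases for this regression cycle.
for tc in select_for_regression(test_suite, max_priority=2):
    print(tc["name"])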
Tools for regression testing: In regression testing, we generally select the test cases from the existing test suite itself; hence, we need not compute their expected output, and the process can easily be automated for this reason. Automating regression testing is very effective and time-saving.
Most commonly used tools for regression testing are:
Selenium
WATIR (Web Application Testing In Ruby)
QTP (Quick Test Professional)
RFT (Rational Functional Tester)
Winrunner
Silktest
Advantages of Regression Testing:
It ensures that no new bugs have been introduced after adding new functionality to the system.
Most of the test cases used in regression testing are selected from the existing test suite, and their expected outputs are already known. Hence, it can easily be automated with automation tools.
It helps to maintain the quality of the source code.
Disadvantages of Regression Testing:
It can be time and resource consuming if automated tools are not used.
It is required even after very small changes in the code.
Performance Testing
Performance Testing is a type of software testing that ensures that software applications perform properly under their expected workload. It is a testing technique carried out to determine system performance in terms of sensitivity, reactivity and stability under a particular workload.
Performance testing is a type of software testing that focuses on evaluating the performance and
scalability of a system or application. The goal of performance testing is to identify bottlenecks, measure
system performance under various loads and conditions, and ensure that the system can handle the
expected number of users or transactions.

There are several types of performance testing, including:


Load testing: Load testing simulates a real-world load on the system to see how it performs under stress.
It helps identify bottlenecks and determine the maximum number of users or transactions the system can
handle.
Stress testing: Stress testing is a type of load testing that tests the system’s ability to handle a high load
above normal usage levels. It helps identify the breaking point of the system and any potential issues that
may occur under heavy load conditions.
Spike testing: Spike testing is a type of load testing that tests the system’s ability to handle sudden spikes
in traffic. It helps identify any issues that may occur when the system is suddenly hit with a high number
of requests.
Soak testing: Soak testing is a type of load testing that tests the system’s ability to handle a sustained load
over a prolonged period of time. It helps identify any issues that may occur after prolonged usage of the
system.
Endurance testing: This type of testing is similar to soak testing, but it focuses on the long-term behavior
of the system under a constant load.
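As a rough illustration of the load testing described above, the sketch below fires a number of concurrent requests at an assumed endpoint using only the Python standard library and reports the success count and average response time. The URL, user count and timeout are placeholders; a real load test would normally use a dedicated tool such as JMeter (listed later in this section).

import threading
import time
import urllib.request

# Minimal load-testing sketch: N concurrent simulated users hit one endpoint.
TARGET_URL = "http://localhost:8080/health"   # hypothetical endpoint
CONCURRENT_USERS = 20

results = []
lock = threading.Lock()

def simulated_user():
    start = time.time()
    try:
        with urllib.request.urlopen(TARGET_URL, timeout=5) as response:
            ok = response.status == 200
    except Exception:
        ok = False
    elapsed = time.time() - start
    with lock:
        results.append((ok, elapsed))

threads = [threading.Thread(target=simulated_user) for _ in range(CONCURRENT_USERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

successes = sum(1 for ok, _ in results if ok)
avg_time = sum(t for _, t in results) / len(results)
print(f"{successes}/{CONCURRENT_USERS} requests succeeded, "
      f"average response time {avg_time:.3f}s")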
Performance Testing is the process of analyzing the quality and capability of a product. It is a testing
method performed to determine the system performance in terms of speed, reliability and stability under
varying workload. Performance testing is also known as Perf Testing.
Performance Testing Attributes:
Speed:
It determines whether the software product responds rapidly.
Scalability:
It determines the amount of load the software product can handle at a time.
Stability:
It determines whether the software product remains stable under varying workloads.
Reliability:
It determines whether the software product can perform its functions consistently, without failure, for a specified period under the stated conditions.
Objective of Performance Testing:
The objective of performance testing is to eliminate performance congestion (bottlenecks).
It uncovers what needs to be improved before the product is launched in the market.
The objective of performance testing is to make the software fast.
The objective of performance testing is to make the software stable and reliable.
The objective of performance testing is to evaluate the performance and scalability of a system or
application under various loads and conditions. It helps identify bottlenecks, measure system
performance, and ensure that the system can handle the expected number of users or transactions. It also
helps to ensure that the system is reliable, stable and can handle the expected load in a production
environment.
Types of Performance Testing:
Load testing:
It checks the product’s ability to perform under anticipated user loads. The objective is to identify
performance congestion before the software product is launched in market.
Stress testing:
It involves testing a product under extreme workloads to see whether it handles high traffic or not. The
objective is to identify the breaking point of a software product.
Endurance testing:
It is performed to ensure the software can handle the expected load over a long period of time.
Spike testing:
It tests the product’s reaction to sudden large spikes in the load generated by users.
Volume testing:
In volume testing, a large amount of data is saved in a database and the overall software system's behavior is observed. The objective is to check the product's performance under varying database volumes.
Scalability testing:
In scalability testing, the software application's effectiveness in scaling up to support an increase in user load is determined. It helps in planning capacity additions to your software system.
Performance Testing Process:
Performance Testing Tools:
Jmeter
Open STA
Load Runner
Web Load
Advantages of Performance Testing :
Performance testing ensures the speed, load capability, accuracy and other performance characteristics of the system.
It identifies, monitors and resolves issues if anything occurs.
It ensures good optimization of the software and also allows a large number of users to use it at the same time.
It ensures client as well as end-customer satisfaction. Performance testing has several advantages that make it an important aspect of software testing:
Identifying bottlenecks: Performance testing helps identify bottlenecks in the system such as slow
database queries, insufficient memory, or network congestion. This helps developers optimize the system
and ensure that it can handle the expected number of users or transactions.
Improved scalability: By identifying the system’s maximum capacity, performance testing helps ensure
that the system can handle an increasing number of users or transactions over time. This is particularly
important for web-based systems and applications that are expected to handle a high volume of traffic.
Improved reliability: Performance testing helps identify any potential issues that may occur under heavy
load conditions, such as increased error rates or slow response times. This helps ensure that the system is
reliable and stable when it is deployed to production.
Reduced risk: By identifying potential issues before deployment, performance testing helps reduce the
risk of system failure or poor performance in production.
Cost-effective: Performance testing is more cost-effective than fixing problems that occur in production.
It is much cheaper to identify and fix issues during the testing phase than after deployment.
Improved user experience: By identifying and addressing bottlenecks, performance testing helps ensure
that users have a positive experience when using the system. This can help improve customer satisfaction
and loyalty.
Better Preparation: Performance testing can also help organizations prepare for unexpected traffic
patterns or changes in usage that might occur in the future.
Compliance: Performance testing can help organizations meet regulatory and industry standards.
Better understanding of the system: Performance testing provides a better understanding of how the
system behaves under different conditions, which can help in identifying potential problem areas and
improving the overall design of the system.
Disadvantages of Performance Testing :
Sometimes, users may find performance issues in the real time environment.
Team members who are writing test scripts or test cases in the automation tool should have a high level of knowledge.
Team members should have high proficiency in debugging the test cases or test scripts.
Poor performance in the real environment may lead to the loss of a large number of users.
Performance testing also has some disadvantages, which include:
Resource-intensive: Performance testing can be resource-intensive, requiring significant hardware and
software resources to simulate a large number of users or transactions. This can make performance
testing expensive and time-consuming.
Complexity: Performance testing can be complex, requiring specialized knowledge and expertise to set up
and execute effectively. This can make it difficult for teams with limited resources or experience to
perform performance testing.
Limited testing scope: Performance testing is focused on the performance of the system under stress, and
it may not be able to identify all types of issues or bugs. It’s important to combine performance testing
with other types of testing such as functional testing, regression testing, and acceptance testing.
Inaccurate results: If the performance testing environment is not representative of the production
environment or the performance test scenarios do not accurately simulate real-world usage, the results of
the test may not be accurate.
Difficulty in simulating real-world usage: It’s difficult to simulate real-world usage, and it’s hard to predict
how users will interact with the system. This makes it difficult to know if the system will handle the
expected load.
Complexity in analyzing the results: Performance testing generates a large amount of data, and it can be
difficult to analyze the results and determine the root cause of performance issues.
Functional Testing
Functional Testing is a type of Software Testing in which the system is tested against the functional
requirements and specifications. Functional testing ensures that the requirements or specifications are
properly satisfied by the application. This type of testing is particularly concerned with the result of
processing. It focuses on simulation of actual system usage but does not develop any system structure
assumptions.
It is basically defined as a type of testing which verifies that each function of the software application
works in conformance with the requirement and specification. This testing is not concerned about the
source code of the application. Each functionality of the software application is tested by providing
appropriate test input, expecting the output and comparing the actual output with the expected output.
This testing focuses on checking of user interface, APIs, database, security, client or server application and
functionality of the Application Under Test.
Functional testing can be manual or automated.
Functional Testing Process:
Functional testing involves the following steps:
Identify function that is to be performed.
Create input data based on the specifications of function.
Determine the output based on the specifications of function.
Execute the test case.
Compare the actual and expected output.
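A minimal sketch of these steps, assuming a hypothetical discount-calculation function as the function under test; the inputs and expected outputs are illustrative values derived from an assumed specification.

# Step 1: identify the function to be performed (here, applying a discount).
def apply_discount(price, percent):
    """Function under test: apply a percentage discount to a price."""
    return round(price * (1 - percent / 100), 2)

# Steps 2-3: input data and expected output derived from the specification.
test_cases = [
    {"input": (100.0, 10), "expected": 90.0},
    {"input": (59.99, 0), "expected": 59.99},
    {"input": (20.0, 50), "expected": 10.0},
]

# Steps 4-5: execute each case and compare actual output with expected output.
for case in test_cases:
    actual = apply_discount(*case["input"])
    status = "PASS" if actual == case["expected"] else "FAIL"
    print(f"{case['input']} -> {actual} (expected {case['expected']}): {status}")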
Major Functional Testing Techniques:
Unit Testing
Integration Testing
Smoke Testing
User Acceptance Testing
Interface Testing
Usability Testing
System Testing
Regression Testing
Functional Testing Tools:
1. Selenium
2. QTP
3. JUnit
4. SoapUI
5. Watir

Advantages of Functional Testing:


It helps to deliver a bug-free product.
It helps to deliver a high-quality product.
No assumptions are made about the structure of the system.
This testing is focused on the specifications as per customer usage.
Disadvantages of Functional Testing:
There is a high chance of performing redundant testing.
Logical errors in the product can be missed.
If the requirements are not complete, performing this testing becomes difficult.

Difference between Top Down Integration Testing and Bottom Up Integration Testing :

01. Top Down: Top Down Integration testing is an approach to integration testing in which integration testing takes place from top to bottom, i.e. system integration begins with the top-level modules.
    Bottom Up: Bottom Up Integration testing is an approach to integration testing in which integration testing takes place from bottom to top, i.e. system integration begins with the lowest-level modules.

02. Top Down: The higher-level modules are tested first, then the lower-level modules are tested, and then the modules are integrated accordingly.
    Bottom Up: The lower-level modules are tested first, then the higher-level modules are tested, and then the modules are integrated accordingly.

03. Top Down: Stubs are used to simulate a submodule if the invoked submodule is not developed; a stub works as a momentary replacement.
    Bottom Up: Drivers are used to simulate the main module if the main module is not developed; a driver works as a momentary replacement.

04. Top Down: This approach is beneficial if the significant defects occur toward the top of the program.
    Bottom Up: This approach is beneficial if the crucial flaws occur toward the bottom of the program.

05. Top Down: The main module is designed first, and then the submodules/subroutines are called from it.
    Bottom Up: The different modules are created first and then integrated with the main function.

06. Top Down: It is typically applied to structure/procedure-oriented programming languages.
    Bottom Up: It is typically applied to object-oriented programming languages.

07. Top Down: The complexity of this testing is simple.
    Bottom Up: The complexity of this testing is complex and highly data intensive.

08. Top Down: It works from big to small components.
    Bottom Up: It works from small to big components.

09. Top Down: In this approach, stub modules must be produced.
    Bottom Up: In this approach, driver modules must be produced.

10. Top Down: In terms of cost, Top Down testing is more expensive because it requires the complete system for testing.
    Bottom Up: Bottom Up testing is less expensive compared to Top Down because it allows early identification and resolution of module issues.
Difference between Stubs and Drivers :

1. Stubs: Stubs are used in Top-Down Integration Testing.
   Drivers: Drivers are used in Bottom-Up Integration Testing.

2. Stubs: Stubs are basically known as "called programs" and are used in top-down integration testing.
   Drivers: Drivers are the "calling programs" and are used in bottom-up integration testing.

3. Stubs: Stubs stand in for modules of the software that are still under development.
   Drivers: Drivers are used to invoke the component that needs to be tested.

4. Stubs: Stubs are basically used when low-level modules are unavailable.
   Drivers: Drivers are mainly used in place of high-level modules, and in some situations for low-level modules as well.

5. Stubs: Stubs are used to test the features and functionality of the modules.
   Drivers: Drivers are used if the main module of the software isn't developed for testing.

6. Stubs: Stubs are used when testing of the upper levels of the modules is done and the lower-level modules are still under development.
   Drivers: Drivers are used when testing of the lower levels of the modules is done and the upper-level modules are still under development.

7. Stubs: Stubs are used when lower-level modules are missing or only partially developed, and we want to test the main module.
   Drivers: Drivers are used when higher-level modules are missing or only partially developed, and we want to test the lower (sub) module.

There are many elements that are required to make product testing lucid and hassle-free. Every element has its own specific utility that helps during software testing, so that the expected functionality is delivered as per the SRS document as far as possible. Stubs and drivers are two such elements that play a very crucial role while testing: they replace the modules that haven't been developed yet but are still needed when testing other modules against their expected functionality and features.
Stubs and Drivers:
Stubs and drivers are stand-ins for modules that are still being developed, missing, or not developed yet, so that the need for such modules can still be met during testing. Drivers and stubs simulate the features and functionality that a module would provide. This avoids needless delay and makes the testing process faster.
Stubs are mainly used in top-down integration testing, while drivers are used in bottom-up integration testing, thus increasing the efficiency of the testing process.
1. Stubs :
Stubs are developed by software developers to be used in place of modules that are not yet developed, are missing, or are currently unavailable during top-down testing. A stub simulates the unavailable module and provides just enough of its capabilities for the calling module to be tested. Stubs are used when lower-level modules are needed but are currently unavailable.
Stubs are divided into four basic categories based on what they do:
Display a trace message.
Display the parameter values passed to them, if any.
Return a value that is used by the calling module.
Return a value selected according to the parameters (arguments) passed by the module under test.
2. Drivers :
Drivers serve the same purpose as stubs, but drivers are used in bottom-up integration testing and are also more complex than stubs. Drivers are likewise used when some modules are missing or unavailable at the time of testing a specific module, acting in the absence of the required module. Drivers are used mainly when high-level modules are missing, and can also be used when lower-level modules are missing.
Example: Suppose you are asked to test a website whose primary modules, each interdependent on the others, are as follows:
Module-A : Login page website,
Module-B : Home page of the website
Module-C : Profile setting
Module-D : Sign-out page
It is considered good practice to develop all modules in parallel, because as soon as each is developed it can be integrated and tested further according to its interdependencies with the other modules. But if any one of them is still in the development stage or not available while a specific module is being tested, stubs or drivers can be used instead.
Assume Module-A is developed. As soon as it is developed, it undergoes testing, but it requires Module-B, which isn't developed yet. In this case, we can use a stub or driver that simulates the features and functionality that the actual Module-B would provide. So we can conclude that stubs and drivers are used to fulfil the need for unavailable modules. Similarly, we may also use stubs or drivers in place of Module-C and Module-D if they, too, are not available.
Do both drivers and stubs serve the same functionality?
Yes, both serve the same purpose: they are used in the absence of a module (M1) that has interdependencies with another module (M2) that needs to be tested. We use drivers or stubs to make up for M1's unavailability and to serve its functionality.
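A minimal sketch of a stub and a driver, assuming the hypothetical website modules from the example above (Module-A as the login module, Module-B as the home page); the function names and the password check are placeholders for illustration only.

# Stub: stands in for Module-B (home page), which is not developed yet.
# It returns a fixed, known response so Module-A can be tested top-down.
def home_page_stub(username):
    return f"home page for {username}"   # momentary replacement

# Module-A (the module actually under test) calls the lower-level module.
def login(username, password, load_home_page=home_page_stub):
    if password != "secret":             # hypothetical credential check
        return "login failed"
    return load_home_page(username)

# Driver: a small "calling program" used in bottom-up testing to invoke
# Module-A with test inputs, in place of the not-yet-developed caller.
def login_driver():
    assert login("alice", "secret") == "home page for alice"
    assert login("alice", "wrong") == "login failed"
    print("Module-A passed driver checks")

if __name__ == "__main__":
    login_driver()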
Structural Software Testing (White Box Testing)

White box testing techniques analyze the internal structure of the software: the data structures used, the internal design, the code structure, and the working of the software, rather than just the functionality as in black box testing. It is also called glass box testing, clear box testing or structural testing. White Box Testing is also known as transparent testing or open box testing.
White box testing is a software testing technique that involves testing the internal structure and workings
of a software application. The tester has access to the source code and uses this knowledge to design test
cases that can verify the correctness of the software at the code level.
White box testing is also known as structural testing or code-based testing, and it is used to test the
software’s internal logic, flow, and structure. The tester creates test cases to examine the code paths and
logic flows to ensure they meet the specified requirements.
Working process of white box testing:
Input: Requirements, Functional specifications, design documents, source code.
Processing: Performing risk analysis to guide the entire process.
Proper test planning: Designing test cases so as to cover the entire code. Execute and repeat until error-free software is reached. Also, the results are communicated.
Output: Preparing final report of the entire testing process.

Testing techniques:
Statement coverage: In this technique, the aim is to traverse all statements at least once, so each line of code is tested. In the case of a flowchart, every node must be traversed at least once. Since all lines of code are covered, it helps in pointing out faulty code.

Statement Coverage Example

Branch Coverage: In this technique, test cases are designed so that each branch from all decision points is
traversed at least once. In a flowchart, all edges must be traversed at least once.
4 test cases are required such that all branches of all decisions are covered, i.e, all edges of the flowchart are covered

Condition Coverage: In this technique, all individual conditions must be covered as shown in the following
example:
READ X, Y
IF(X == 0 || Y == 0)
PRINT ‘0’
#TC1 – X = 0, Y = 55
#TC2 – X = 5, Y = 0
Multiple Condition Coverage: In this technique, all the possible combinations of the possible outcomes of
conditions are tested at least once. Let’s consider the following example:
READ X, Y
IF(X == 0 || Y == 0)
PRINT ‘0’
#TC1: X = 0, Y = 0
#TC2: X = 0, Y = 5
#TC3: X = 55, Y = 0
#TC4: X = 55, Y = 5
Basis Path Testing: In this technique, control flow graphs are made from code or flowchart and then
Cyclomatic complexity is calculated which defines the number of independent paths so that the minimal
number of test cases can be designed for each independent path. Steps:
Make the corresponding control flow graph
Calculate the cyclomatic complexity
Find the independent paths
Design test cases corresponding to each independent path
V(G) = P + 1, where P is the number of predicate nodes in the flow graph
V(G) = E – N + 2, where E is the number of edges and N is the total number of nodes
V(G) = Number of non-overlapping regions in the graph
#P1: 1 – 2 – 4 – 7 – 8
#P2: 1 – 2 – 3 – 5 – 7 – 8
#P3: 1 – 2 – 3 – 6 – 7 – 8
#P4: 1 – 2 – 4 – 7 – 1 – . . . – 7 – 8
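As a worked illustration, assuming the flow graph implied by the paths listed above (nodes 1 to 8, ten edges, and three decision nodes), both formulas give the same cyclomatic complexity:

# A small worked sketch of the cyclomatic complexity formulas above.
# The edge, node and predicate counts are assumed from the paths P1-P4.
edges = 10            # E: assumed number of edges in the flow graph
nodes = 8             # N: assumed number of nodes
predicate_nodes = 3   # P: assumed number of decision nodes

v_from_edges = edges - nodes + 2         # V(G) = E - N + 2
v_from_predicates = predicate_nodes + 1  # V(G) = P + 1

print(v_from_edges, v_from_predicates)   # both give 4 -> 4 independent paths,
                                         # so at least 4 test cases are needed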
Loop Testing: Loops are widely used and these are fundamental to many algorithms hence, their testing is
very important. Errors often occur at the beginnings and ends of loops.
Simple loops: For simple loops of size n, test cases are designed that:
Skip the loop entirely
Only one pass through the loop
2 passes
m passes, where m < n
n-1 and n+1 passes
Nested loops: For nested loops, all the loops are set to their minimum count and we start from the
innermost loop. Simple loop tests are conducted for the innermost loop and this is worked outwards till
all the loops have been tested.
Concatenated loops: Independent loops, one after another. Simple loop tests are applied for each. If
they’re not independent, treat them like nesting.
White Box Testing is Performed in 2 Steps:
1. Tester should understand the code well
2. Tester should write some code for test cases and execute them
Tools required for White box Testing:
PyUnit
Sqlmap
Nmap
Parasoft Jtest
Nunit
VeraUnit
CppUnit
Bugzilla
Fiddler
JSUnit.net
OpenGrok
Wireshark
HP Fortify
CSUnit

Features of white box testing:


Code coverage analysis: White box testing helps to analyze the code coverage of an application, which
helps to identify the areas of the code that are not being tested.
Access to the source code: White box testing requires access to the application’s source code, which
makes it possible to test individual functions, methods, and modules.
Knowledge of programming languages: Testers performing white box testing must have knowledge of
programming languages like Java, C++, Python, and PHP to understand the code structure and write tests.
Identifying logical errors: White box testing helps to identify logical errors in the code, such as infinite
loops or incorrect conditional statements.
Integration testing: White box testing is useful for integration testing, as it allows testers to verify that the
different components of an application are working together as expected.
Unit testing: White box testing is also used for unit testing, which involves testing individual units of code
to ensure that they are working correctly.
Optimization of code: White box testing can help to optimize the code by identifying any performance
issues, redundant code, or other areas that can be improved.
Security testing: White box testing can also be used for security testing, as it allows testers to identify any
vulnerabilities in the application’s code.
Advantages:
White box testing is thorough, as the entire code and structure are tested.
It results in optimization of the code, removing errors and helping to remove extra lines of code.
It can start at an earlier stage, as it doesn't require any interface, unlike black box testing.
Easy to automate.
White box testing can be started early in the Software Development Life Cycle.
Easy code optimization.
Some of the advantages of white box testing include:
Testers can identify defects that cannot be detected through other testing techniques.

Testers can create more comprehensive and effective test cases that cover all code paths.

Testers can ensure that the code meets coding standards and is optimized for performance.
However, there are also some disadvantages to white box testing, such as:
Testers need to have programming knowledge and access to the source code to perform tests.

Testers may focus too much on the internal workings of the software and may miss external issues.

Testers may have a biased view of the software since they are familiar with its internal workings.
Overall, white box testing is an important technique in software engineering, and it is useful for identifying defects and ensuring that software applications meet their requirements and specifications at the code level.
Disadvantages:
It is very expensive.
Redesigning code and rewriting code needs test cases to be written again.
Testers are required to have in-depth knowledge of the code and programming language as opposed to
black-box testing.
Missing functionalities cannot be detected, as only the code that exists is tested.
Very complex and at times not realistic.
Higher chances of errors remaining in production if it is relied on alone.
Functional Testing (Black box testing)
Black box testing is a type of software testing in which the internal structure and implementation of the software are not known to the tester; only the functionality is exercised. The testing is done without internal knowledge of the product.
Black box testing can be done in the following ways:
1. Syntax-Driven Testing – This type of testing is applied to systems that can be syntactically represented by some language, for example compilers and languages that can be represented by a context-free grammar. In this, the test cases are generated so that each grammar rule is used at least once.
2. Equivalence partitioning – It is often seen that many types of inputs work similarly so instead of giving
all of them separately we can group them and test only one input of each group. The idea is to partition the
input domain of the system into several equivalence classes such that each member of the class works
similarly, i.e., if a test case in one class results in some error, other members of the class would also result
in the same error.

The technique involves two steps:


Identification of equivalence class – Partition any input domain into a minimum of two sets: valid values
and invalid values. For example, if the valid range is 0 to 100 then select one valid input like 49 and one
invalid like 104.
Generating test cases – (i) To each valid and invalid class of input assign a unique identification number. (ii)
Write a test case covering all valid and invalid test cases considering that no two invalid inputs mask each
other. To calculate the square root of a number, the equivalence classes will be (a) Valid inputs:
The whole number which is a perfect square- output will be an integer.
The whole number which is not a perfect square- output will be a decimal number.
Positive decimals
Negative numbers(integer or decimal).
Characters other than numbers like “a”,”!”,”;”, etc.
3. Boundary value analysis – Boundaries are very good places for errors to occur. Hence, if test cases are designed for boundary values of the input domain, the efficiency of testing improves and the probability of finding errors also increases. For example, if the valid range is 10 to 100, then test 10 and 100 in addition to other valid and invalid inputs (a small sketch follows below).
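A small combined sketch of equivalence partitioning and boundary value analysis for an assumed field whose valid range is 10 to 100; the validate() function and its behaviour are placeholders created only for this illustration.

def validate(value):
    """Accept whole numbers in the inclusive range 10..100 (assumed spec)."""
    return isinstance(value, int) and 10 <= value <= 100

# Equivalence partitioning: one representative per class.
equivalence_cases = {
    "valid value (49)": (49, True),
    "below range (5)": (5, False),
    "above range (104)": (104, False),
    "non-numeric ('a')": ("a", False),
}

# Boundary value analysis: values on and around the boundaries 10 and 100.
boundary_cases = {str(v): (v, 10 <= v <= 100) for v in (9, 10, 11, 99, 100, 101)}

for name, (value, expected) in {**equivalence_cases, **boundary_cases}.items():
    actual = validate(value)
    status = "PASS" if actual == expected else "FAIL"
    print(f"{name}: validate({value!r}) -> {actual} ({status})")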
4. Cause effect Graphing – This technique establishes a relationship between logical input called causes
with corresponding actions called the effect. The causes and effects are represented using Boolean graphs.
The following steps are followed:
Identify inputs (causes) and outputs (effect).
Develop a cause-effect graph.
Transform the graph into a decision table.
Convert decision table rules to test cases.
For example, in the following cause-effect graph:

It can be converted into a decision table like:

Each column corresponds to a rule which will become a test case for testing. So there will be 4 test cases.
5. Requirement-based testing – It includes validating the requirements given in the SRS of a software
system.
6. Compatibility testing – The test case result depends not only on the product but also on the infrastructure that delivers the functionality. When the infrastructure parameters are changed, the product is still expected to work properly. Some parameters that generally affect the compatibility of software are:
Processor (Pentium 3, Pentium 4) and number of processors.
Architecture and characteristics of machine (32-bit or 64-bit).
Back-end components such as database servers.
Operating System (Windows, Linux, etc).
Black Box Testing Type
The following are the several categories of black box testing:
Functional Testing
Regression Testing
Nonfunctional Testing (NFT)
Functional Testing: It determines the system’s software functional requirements.
Regression Testing: It ensures that the newly added code is compatible with the existing code. In other
words, a new software update has no impact on the functionality of the software. This is carried out after
a system maintenance operation and upgrades.
Nonfunctional Testing: Nonfunctional testing is also known as NFT. This testing is not functional testing of
software. It focuses on the software’s performance, usability, and scalability.
Tools Used for Black Box Testing:
Appium
Selenium
Microsoft Coded UI
Applitools
HP QTP.

Features of black box testing:


Independent testing: Black box testing is performed by testers who are not involved in the development of
the application, which helps to ensure that testing is unbiased and impartial.
Testing from a user’s perspective: Black box testing is conducted from the perspective of an end user, which
helps to ensure that the application meets user requirements and is easy to use.
No knowledge of internal code: Testers performing black box testing do not have access to the application’s
internal code, which allows them to focus on testing the application’s external behavior and functionality.
Requirements-based testing: Black box testing is typically based on the application’s requirements, which
helps to ensure that the application meets the required specifications.
Different testing techniques: Black box testing can be performed using various testing techniques, such as
functional testing, usability testing, acceptance testing, and regression testing.
Easy to automate: Black box testing is easy to automate using various automation tools, which helps to
reduce the overall testing time and effort.
Scalability: Black box testing can be scaled up or down depending on the size and complexity of the
application being tested.
Limited knowledge of application: Testers performing black box testing have limited knowledge of the
application being tested, which helps to ensure that testing is more representative of how the end users
will interact with the application.
Advantages of Black Box Testing:
The tester does not need programming skills or detailed knowledge of the implementation to carry out Black Box Testing.
It is efficient for implementing the tests in the larger system.
Tests are executed from the user’s or client’s point of view.
Test cases are easily reproducible.
It is used in finding the ambiguity and contradictions in the functional specifications.
Disadvantages of Black Box Testing:
There is a possibility of repeating the same tests while implementing the testing process.
Without clear functional specifications, test cases are difficult to implement.
It is difficult to execute the test cases because of complex inputs at different stages of testing.
Sometimes, the reason for the test failure cannot be detected.
Some parts of the application may remain untested.
It does not reveal errors in the control structure.
Working with a large sample space of inputs can be exhausting and consumes a lot of time.

Test Data Suite Preparation:

Test Data in Software Testing is the input given to a software program during test execution. It represents
data that affects or is affected by software execution while testing.

Test data is used both for positive testing, to verify that functions produce expected results for given inputs, and for negative testing, to check the software's ability to handle unusual, exceptional or unexpected inputs. The main categories of test data are described below; a short illustrative sketch follows the list.
Valid test data. It is necessary to verify whether the system functions are in compliance with the
requirements, and the system processes and stores the data as intended.

Invalid test data. QA engineers should inspect whether the software correctly processes invalid values,
shows the relevant messages, and notifies the user that the data are improper.

Boundary test data. Help to reveal the defects connected with processing boundary values.

Wrong data. Testers have to check how the system reacts when data in an inappropriate format is entered, and whether it shows the correct error messages.

Absent data. It is a good practice to verify how the product handles entering a blank field in the course of
software testing.
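As a hedged illustration of these categories, the sketch below feeds valid, invalid, boundary, wrong-format and absent data into a hypothetical validate_age function; the function, its accepted range of 18 to 60, and the expected messages are assumptions made for this example only.

import pytest

# Hypothetical validation function: accepts integer ages from 18 to 60.
def validate_age(value):
    if value is None or str(value).strip() == "":
        return "age is required"               # absent data
    try:
        age = int(value)
    except (TypeError, ValueError):
        return "age must be a number"          # wrong (badly formatted) data
    if 18 <= age <= 60:
        return "ok"                            # valid data
    return "age out of range"                  # invalid data

@pytest.mark.parametrize("value, expected", [
    (30, "ok"),                          # valid test data
    (17, "age out of range"),            # invalid test data
    (18, "ok"), (60, "ok"),              # boundary test data
    ("thirty", "age must be a number"),  # wrong (badly formatted) data
    ("", "age is required"),             # absent data
])
def test_validate_age(value, expected):
    assert validate_age(value) == expected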

A test suite is a container that holds a set of tests and helps testers in executing them and reporting the test execution status. A test suite can be in any of three states, namely Active, In Progress, and Completed.

A Test case can be added to multiple test suites and test plans. After creating a test plan, test suites are
created which in turn can have any number of tests.

In software development, a test suite, less commonly known as a 'validation suite', is a collection of test
cases that are intended to be used to test a software program to show that it has some specified set of
behaviours.
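For illustration, here is a minimal sketch using Python's built-in unittest module, in which individual test cases are added to a suite and run together; the LoginTests class and its checks are placeholders invented for this example.

import unittest

class LoginTests(unittest.TestCase):
    # Hypothetical test cases; each one can belong to several suites.
    def test_valid_credentials(self):
        self.assertTrue(True)

    def test_invalid_password(self):
        self.assertFalse(False)

def build_smoke_suite():
    # The suite is just a container: tests are added to it explicitly.
    suite = unittest.TestSuite()
    suite.addTest(LoginTests("test_valid_credentials"))
    suite.addTest(LoginTests("test_invalid_password"))
    return suite

if __name__ == "__main__":
    # Running the suite reports the execution status of its tests.
    unittest.TextTestRunner(verbosity=2).run(build_smoke_suite())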

Difference between Alpha and Beta Testing:


The difference between Alpha and Beta Testing is as follows:

Alpha testing involves both white box and black box testing, whereas beta testing commonly uses black-box testing.
Alpha testing is performed by testers who are usually internal employees of the organization, whereas beta testing is performed by clients who are not part of the organization.
Alpha testing is performed at the developer's site, whereas beta testing is performed at the end user's site.
Reliability and security testing are not checked in alpha testing, whereas reliability, security and robustness are checked during beta testing.
Alpha testing ensures the quality of the product before forwarding it to beta testing, whereas beta testing also concentrates on the quality of the product but collects users' input on the product and ensures that the product is ready for real-time users.
Alpha testing requires a testing environment or a lab, whereas beta testing does not require a testing environment or lab.
Alpha testing may require a long execution cycle, whereas beta testing requires only a few weeks of execution.
Developers can immediately address critical issues or fixes in alpha testing, whereas most of the issues or feedback collected from beta testing will be implemented in future versions of the product.
Multiple test cycles are organized in alpha testing, whereas only one or two test cycles are there in beta testing.

Formal Technical Review (Peer Reviews)


Formal Technical Review (FTR) is a software quality control activity performed by software engineers.
Objectives of formal technical review (FTR): Some of these are:
To uncover errors in logic, function and implementation for any representation of the software.
To verify that the software under review meets its specified requirements.
To ensure that the software is represented according to predefined standards.
To help ensure that the software is developed in a uniform manner.
To make the project more manageable.
In addition, the FTR serves as a training ground, enabling junior engineers to observe the analysis, design, coding and testing approach more closely. The FTR also promotes backup and continuity, because a number of people become familiar with parts of the software that they might not otherwise have seen. In practice, FTR is a class of reviews that includes walkthroughs, inspections, round-robin reviews and other small-group technical assessments of software. Each FTR is conducted as a meeting and is considered successful only if it is properly planned, controlled and attended.
Example:
Suppose that during development without FTR, design costs 10 units, coding costs 15 units and testing costs 10 units, so the cost so far is 35 units, excluding maintenance. If a quality problem caused by a bad design is discovered only at this late stage, the software has to be redesigned, recoded and retested, and the final cost can rise to around 50 units or more. That is why FTR is so helpful while developing software: it catches such defects early, when they are cheap to fix.
The review meeting: Each review meeting should be held considering the following constraints:
Involvement of people: between three and five people should be involved in the review.
Advance preparation should occur, but it should require no more than about two hours of work per person.
The duration of the review meeting should be less than two hours. Given these constraints, it should be clear that an FTR focuses on a specific (and small) part of the overall software.
At the end of the review, all attendees of the FTR must decide whether to:
Accept the product without any modification,
Reject the product due to serious errors (once corrected, another review must be performed), or
Accept the product provisionally (minor errors have been encountered and must be corrected, but no additional review is required).
Once the decision is made, all FTR attendees complete a sign-off, indicating their participation in the review and their agreement with the findings of the review team.
Review reporting and record keeping:
During the FTR, one of the reviewers (the recorder) actively records all issues that have been raised.
At the end of the meeting, all the issues raised are consolidated and a review issues list is prepared.
Finally, a formal technical review summary report is prepared. It answers three questions (a small illustrative sketch follows this list):
What was reviewed?
Who reviewed it?
What were the findings and conclusions?
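Purely as an illustration of this record keeping, and not as any standard format, the sketch below captures the three questions of the summary report in a small data structure; the field names and the sample values are hypothetical.

from dataclasses import dataclass, field
from typing import List

@dataclass
class ReviewSummaryReport:
    # What was reviewed?
    work_product: str
    # Who reviewed it?
    reviewers: List[str]
    # What were the findings and conclusions?
    findings: List[str] = field(default_factory=list)
    decision: str = "accept provisionally"   # accept / reject / accept provisionally

report = ReviewSummaryReport(
    work_product="payment module detailed design v0.3",
    reviewers=["lead designer", "peer engineer", "QA engineer"],
    findings=["error message wording inconsistent", "missing timeout handling"],
)
print(report.decision)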
Review guidelines: Guidelines for conducting formal technical reviews should be established in advance, distributed to all reviewers, agreed upon, and then followed. A review that is uncontrolled can often be worse than no review at all. The following is a minimum set of guidelines for FTR:
Review the product, not the producer.
Take written notes (for record-keeping purposes).
Limit the number of participants and insist upon advance preparation.
Develop a checklist for each product that is likely to be reviewed.
Allocate resources and time schedules for FTRs in order to keep to the schedule.
Conduct meaningful training for all reviewers in order to make reviews effective.
Review earlier reviews, which serve as the basis for the current review being conducted.
Set an agenda and maintain it.
Point out problem areas, but do not attempt to solve every problem noted.
Limit debate and rebuttal.

Walkthrough
The walkthrough is a review meeting process, but it is different from the inspection, as it does not involve any formal process, i.e. it is an informal process. Basically, the walkthrough (review meeting process) is started by the author of the code.
In the walkthrough, the code or document is read by the author, and others who are present in the meeting can note down the important points, write notes on the defects, and give suggestions about them.
The walkthrough is an informal way of testing; no formal authority is involved in this testing.

As the testing involved is informal, there is no need for a moderator while performing a walkthrough. A walkthrough can be called an open-ended discussion; it does not focus on documentation. Defect tracking is one of the challenging tasks in a walkthrough.
Advantages and Objectives of Walkthrough:
Following are some of the objectives of the walkthrough.
To detect defects in developed software products.
To fully understand and learn the development of software products.
To properly explain and discuss the information present in the document.
To verify the validity of the proposed system.
To give suggestions and report them appropriately with new solutions and ideas.
To provide an early “proof of concept”.

Types of Review:
Walkthrough
Technical review
Inspection

Technical Review:
The technical review is a less formal review meeting process. It is a process performed to give assurance about software quality. In the technical review process, the reviewing activity is performed by software engineers and other qualified persons.
Here, a work product is inspected and reviewed for defects and other errors by individuals other than the person who produced it. The technical review is performed as a peer review, without management involvement, and in practice it varies from quite informal to quite formal.

Objectives of Technical Review:


Following are some of the objectives of Technical Review.
To create a more reliable and manageable project.
To find technical errors and defects.
To inform the participants of the technical review about the technical content of the document.
To maintain the consistency of technical concepts.
To ensure that the software fulfils the requirements for which it is built.

Inspection:
Inspection is a more formal way of reviewing in a meeting process. In an inspection, the documents are checked thoroughly and reviewed by the reviewers before the meeting. It basically involves peers examining the product or software.
Preparation is carried out separately, during which the product is examined properly and scanned for any defects or errors. The inspection process is led by trained moderators, who also carry out a formal follow-up. Inspection is done to improve product quality more efficiently.

Objectives of Inspection:
Following are some of the objectives of Inspection.
To efficiently improve the quality of the software or product.
To create a common understanding between the individuals who are involved in the inspection.
To remove defects and errors as early as possible.
To learn and improve from defects found during the inspection.
To help the author improve and increase the quality of the product being developed.

Compliance with Design

Compliance Testing is performed to maintain and validate the compliant state for the life of the software.
Every industry has a regulatory and compliance board that protects the end users.

Software compliance refers to how well an application obeys the rules in a standard. To achieve software
compliance, you might also have to, for example, produce certain types of documentation or add security
testing at more points in your software development life cycle.
Checklists:

Professionals who are knowledgeable and experienced, and who understand the compliance requirements, must be retained.

Understand the risks and impacts of being non-compliant.

Document the processes and follow them.

Perform an internal audit and follow up with an action plan to fix the issues.

Coding Standards

Coding:

The objective of the coding phase is to transform the design of a system into code in a high-level language
and then to unit test this code.

Good software development organizations normally require their programmers to adhere to some well-
defined and standard style of coding called coding standards.

Coding Standards:

A coding standard gives a uniform appearance to the codes written by different engineers.

It enhances code understanding

It encourages good programming practice


Agree upon standards for coding styles

Promotes ease of understanding and uniformity

No idiosyncratic quirks that could complicate understanding and refactoring by the entire team.

Coding Guidelines (a short illustrative sketch follows this list):

Line length should be limited.

Spacing and indentation should be consistent.

Code should be well documented.

The length of a function should not exceed 10 source lines.

Do not use goto statements.

Use inline comments where they aid understanding.

Provide clear and meaningful error messages.
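As a hedged example of these guidelines in practice (the function and its checks are hypothetical), the short Python function below stays well under ten source lines, keeps lines short, documents itself, uses an inline comment, and raises a clear error message instead of failing obscurely.

def average(values):
    """Return the arithmetic mean of a non-empty sequence of numbers."""
    # Clear error message instead of an obscure failure later on.
    if not values:
        raise ValueError("average() requires at least one value")
    total = sum(values)          # accumulate the values once
    return total / len(values)

print(average([10, 20, 30]))     # prints 20.0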
