Unit -5 Software Testing
Software Quality
Quality is defined as the degree to which a product or service is "fit for use and purpose."
Quality is all about meeting the needs and expectations of customers concerning the
functionality, design, reliability, durability, and price of the product.
Software Quality Attributes
1. Functionality
2. Reliability
3. Usability
4. Efficiency
5. Maintainability
6. Portability
1. Pre-project Plan
The pre-project plan ensures that the resources, schedule, and budget required for the project are
clearly defined, and that plans for development and quality assurance have been determined. It includes:
o Development plan
o Schedules
o Risk evaluation
o Quality plan
o Project methodology
2. Project Lifecycle Components
1. Development Stage
In the development stage, the Software Quality Assurance components help to identify design
and programming errors. They are divided into the following sub-classes: reviews,
expert opinions, and software testing.
2. Operation-Maintenance Stage
In the operation-maintenance stage, the Software Quality Assurance components include the
development lifecycle components along with specialized components whose aim is to improve
the maintenance tasks.
3. Infrastructure Components
The aim of these components is to prevent software faults and minimize the rate of errors:
o Configuration Management
o Documentation Control
4. Management Components
This class of components controls development and maintenance activities and establishes
managerial control over software development projects. The management components aim to
prevent the project from going over budget and falling behind schedule.
Various QA tools help with quality assurance, and different tools are required for different
purposes. Comprehensive software quality assurance requires a range of such tools, collectively
known as QA software. Common categories include:
o Infrastructure
o Release Management
o Source Control
o Code Reviews
o Testing
o Test management
o Usability Testing
o Load Testing
o Availability Monitoring
o Business Analytics
o Exception Handling
o Log Monitoring
o Performance Monitoring
The whole process of quality assurance is defined by a cycle called the PDCA cycle:
o Plan
o Do
o Check
o Act
Plan: The organization should plan and establish process-related objectives and determine the
processes required to deliver a high-quality end product.
Do: Develop and test the processes, and make changes to the methods as needed.
Check: Monitor the processes, modify the methods, and check whether they meet the
predetermined objectives.
Act: Implement the actions necessary to achieve improvements in the process.
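The four PDCA stages above can be sketched as an iterative loop. This is only a toy illustration: the defect-rate numbers and the simulated "improvement" are hypothetical, not a real QA model.

```python
# A toy sketch of the PDCA cycle as an iterative improvement loop; the
# numbers and the halving "improvement" are hypothetical placeholders.

def pdca_cycle(target_defect_rate, current_defect_rate, max_cycles=5):
    """Run Plan-Do-Check-Act until the process meets the planned objective."""
    history = []
    for _ in range(max_cycles):
        objective = target_defect_rate                  # Plan: establish the objective
        current_defect_rate *= 0.5                      # Do: apply the changed process (simulated)
        history.append(current_defect_rate)             # record the measured result
        if current_defect_rate <= objective:            # Check: compare against the objective
            break                                       # Act: objective met, standardize and stop
    return history

history = pdca_cycle(target_defect_rate=1.0, current_defect_rate=8.0)
# 8.0 -> 4.0 -> 2.0 -> 1.0, stopping once the target is met
```

Each pass through the loop is one full Plan-Do-Check-Act cycle; the Act step either standardizes the improvement (stop) or feeds the new state into the next Plan.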
An organization must use Quality Assurance to ensure that the product is designed and
implemented with correct procedures. This will help to reduce problems and errors in the final
product.
1. Technology Transfer
2. Validation: A validation master plan is prepared for the entire system, and resource planning
for the execution of the validation plan is done.
3. Documentation
4. The quality assurance function also involves assuring the quality of products.
5. It also involves quality improvement plans.
Objective: QA focuses on providing assurance that the quality requested will be achieved,
while QC focuses on fulfilling the quality requested.
Process/Product-oriented: QA is process oriented, while QC is product oriented.
Methods of SEI CMM
Capability Evaluation: Capability evaluation provides a way to assess the software process
capability of an organization.
ISO/IEC 12207
Process Framework: Defines processes, activities, and tasks that are required for
developing and maintaining software.
Assessment: Evaluates compliance with the defined framework.
3. Assessment Steps
Planning
Defining Objectives: Establish what you want to achieve with the assessment.
Selecting a Framework: Choose an appropriate assessment model (e.g., CMMI,
SPICE).
Identifying Scope: Determine which processes and areas will be assessed.
Data Collection
Evaluation
Gap Analysis: Compare current processes against the chosen model’s criteria.
Strengths and Weaknesses: Identify what is working well and what needs improvement.
Rating: Assign maturity or capability levels based on the findings.
Reporting
Improvement Planning
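The gap-analysis step in the evaluation phase above can be sketched as a simple set comparison. The practice names below are hypothetical illustrations, not items taken from CMMI or SPICE.

```python
# A minimal sketch of gap analysis: compare current practices against a
# checklist-style framework. Practice names are hypothetical examples.

REQUIRED_PRACTICES = {
    "requirements management",
    "project tracking",
    "code reviews",
    "configuration management",
}

def gap_analysis(current_practices):
    """Return which required practices are satisfied and which are gaps."""
    current = set(current_practices)
    return {
        "satisfied": sorted(REQUIRED_PRACTICES & current),
        "gaps": sorted(REQUIRED_PRACTICES - current),
    }

report = gap_analysis(["code reviews", "project tracking"])
# report["gaps"] lists the practices the organization still needs to establish
```

The "gaps" list then feeds directly into the reporting and improvement-planning steps.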
5. Challenges
Level 1: Initial
Ad hoc activities characterize a software development organization at this level. Very few or no
processes are defined and followed. Since software production processes are not defined,
different engineers follow their own processes and, as a result, development efforts become
chaotic. Therefore, this is also called the chaotic level.
Level 2: Repeatable
At this level, the fundamental project management practices like tracking cost and schedule are
established. Size and cost estimation methods, like function point analysis, COCOMO, etc. are
used.
Level 3: Defined
At this level, the methods for both management and development activities are defined and
documented. There is a common organization-wide understanding of operations, roles, and
responsibilities. Though the processes are defined, the process and product qualities are not yet
measured. ISO 9000 aims at achieving this level.
Level 4: Managed
At this level, the focus is on software metrics. Two kinds of metrics are collected:
Product metrics measure the features of the product being developed, such as its size,
reliability, time complexity, understandability, etc.
Process metrics track the effectiveness of the process being used, such as average defect
correction time, productivity, the average number of defects found per hour of inspection, the
average number of failures detected during testing per LOC, etc.
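The two process metrics named above are straightforward to compute. The defect and inspection figures below are hypothetical records used only to show the arithmetic.

```python
# A small sketch of two Level-4 process metrics, computed from hypothetical
# defect and inspection records.

defect_correction_hours = [4.0, 2.5, 6.0, 3.5]   # time taken to fix each defect
defects_found = 18                                # defects found during inspections
inspection_hours = 12.0                           # total inspection effort

# average defect correction time
avg_correction_time = sum(defect_correction_hours) / len(defect_correction_hours)

# average number of defects found per hour of inspection
defects_per_inspection_hour = defects_found / inspection_hours
# avg_correction_time == 4.0 hours; defects_per_inspection_hour == 1.5
```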
Level 5: Optimizing
At this phase, process and product metrics are collected. Process and product measurement data
are evaluated for continuous process improvement.
Except for SEI CMM level 1, each maturity level is characterized by several Key Process Areas
(KPAs) that identify the areas an organization should focus on to improve its software process to
the next level. The focus of each level and the corresponding key process areas are shown in the
fig.
1. Goal Orientation:
o Align assessment activities with organizational goals and business objectives.
o Ensure that the assessment focuses on aspects of the process that directly impact
the organization's ability to meet its goals.
2. Process Focus:
o Concentrate on the processes rather than individual performance.
o Evaluate how well processes are defined, managed, and executed.
3. Objectivity:
o Use objective, measurable criteria to evaluate processes.
o Ensure that the assessment is based on data and factual information rather than
opinions.
4. Systematic Approach:
o Follow a structured and well-defined methodology for conducting the assessment.
o Ensure consistency and repeatability in the assessment process.
5. Stakeholder Involvement:
o Involve all relevant stakeholders, including management, developers, and end-
users.
o Ensure that different perspectives are considered in the assessment.
6. Continuous Improvement:
o View assessment as a continuous process rather than a one-time activity.
o Use assessment findings to drive ongoing process improvement efforts.
7. Tailoring:
o Adapt the assessment approach to the specific context and needs of the
organization.
o Consider the size, complexity, and maturity of the organization when designing
the assessment.
8. Confidentiality and Sensitivity:
o Handle assessment data with confidentiality and sensitivity.
o Ensure that participants feel safe to provide honest and accurate information.
9. Cultural Awareness:
o Be aware of and respect the organizational culture.
o Consider cultural factors that might impact the effectiveness of processes and
their assessment.
10. Focus on Key Process Areas:
o Identify and assess critical process areas that have a significant impact on
software quality and project success.
o Prioritize areas for improvement based on their importance and impact.
11. Benchmarking:
o Compare the organization's processes against industry standards and best
practices.
o Identify gaps and opportunities for improvement based on these benchmarks.
12. Actionable Recommendations:
o Provide clear, practical, and actionable recommendations for process
improvement.
o Ensure that recommendations are feasible and aligned with the organization's
capabilities and resources.
13. Feedback and Communication:
o Ensure regular feedback and communication throughout the assessment process.
o Keep stakeholders informed about findings, progress, and next steps.
2. Data Collection
Document Review: Gather and review existing process documentation, project artifacts,
and other relevant materials.
Surveys and Questionnaires: Distribute surveys to gather input from stakeholders.
Interviews: Conduct interviews with key personnel (developers, managers, QA, etc.) to
gain insights into the processes.
Observation: Observe the processes in action to understand the current practices.
3. Analysis
Gap Analysis: Compare current processes against the chosen framework or standards to
identify gaps.
Root Cause Analysis: Identify the underlying causes of any deficiencies or issues.
SWOT Analysis: Assess strengths, weaknesses, opportunities, and threats related to the
current processes.
4. Assessment Report
Findings: Document the findings from the data collection and analysis phases.
Recommendations: Provide actionable recommendations for process improvement.
Prioritization: Prioritize the recommendations based on impact and feasibility.
Draft Report: Prepare a draft assessment report for review by stakeholders.
5. Review and Validation
6. Action Plan
Develop Action Plan: Create a detailed action plan to implement the recommendations.
Assign Responsibilities: Assign tasks and responsibilities to appropriate team members.
Set Deadlines: Establish timelines and milestones for implementing improvements.
7. Implementation
Execute Action Plan: Implement the process improvements as outlined in the action
plan.
Monitor Progress: Track the progress of implementation activities and make
adjustments as needed.
8. Follow-Up
Maintain Records: Keep detailed records of the assessment process, findings, and
improvements.
Communicate Results: Ensure that all stakeholders are informed about the assessment
outcomes and progress of improvements.
10. Evaluation
Measure Impact: Evaluate the impact of the implemented changes on the software
development process.
Lessons Learned: Document lessons learned and best practices for future assessments.
Implementation Steps
Considerations
1. Test Coverage:
o Ensure comprehensive coverage of all functional and non-functional
requirements.
o Use traceability matrices to map test cases to requirements.
2. Test Data Management:
o Create and manage test data that reflects real-world scenarios.
o Ensure test data is secure and compliant with data protection regulations.
3. Automation:
o Identify repetitive and time-consuming test cases for automation.
o Choose appropriate automation tools and frameworks.
o Maintain and update automated test scripts as the software evolves.
4. Performance Testing:
o Include performance testing to assess the software’s responsiveness, stability, and
scalability.
o Use performance testing tools to simulate user load and measure performance
metrics.
5. Security Testing:
o Conduct security testing to identify vulnerabilities and ensure the software is
secure against potential threats.
o Use tools and techniques such as penetration testing, static code analysis, and
vulnerability scanning.
6. Usability Testing:
o Assess the software’s usability to ensure it meets user expectations for ease of use
and accessibility.
o Gather feedback from real users to identify usability issues.
7. Continuous Integration and Continuous Testing:
o Integrate testing into the continuous integration (CI) pipeline to ensure ongoing
quality.
o Use CI tools to automatically trigger tests with each code commit.
8. Test Environment Management:
o Ensure the test environment is stable, consistent, and reflects the production
environment.
o Use virtualization and containerization to manage test environments effectively.
9. Team Collaboration:
o Foster collaboration between developers, testers, and other stakeholders.
o Use collaboration tools and practices such as daily stand-ups, code reviews, and
pair programming.
10. Compliance and Standards:
o Adhere to industry standards and regulatory requirements relevant to the software
being tested.
o Ensure that testing practices comply with standards such as ISO, IEEE, and
others.
11. Risk-Based Testing:
o Prioritize testing activities based on the risk and impact of potential defects.
o Focus on critical areas of the application that have the highest risk.
12. Documentation:
o Maintain thorough documentation of test plans, test cases, test results, and defect
reports.
o Ensure documentation is clear, concise, and accessible to all stakeholders.
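The traceability-matrix idea from the "Test Coverage" consideration above can be sketched as a simple mapping. The requirement and test-case IDs here are hypothetical, not from any real project.

```python
# A minimal sketch of a requirements-to-test-case traceability matrix;
# the REQ-* and TC-* identifiers are hypothetical examples.

traceability = {
    "REQ-001": ["TC-001", "TC-002"],   # requirement covered by two test cases
    "REQ-002": ["TC-003"],
    "REQ-003": [],                     # requirement with no test case yet
}

# requirements with no mapped test case are coverage gaps
uncovered = [req for req, cases in traceability.items() if not cases]
coverage = (len(traceability) - len(uncovered)) / len(traceability)
# uncovered == ["REQ-003"]; coverage == 2/3
```

Keeping such a matrix up to date makes it easy to spot requirements that no test exercises.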
A Quality Assurance Plan (QAP) is a document or set of documents that outlines the
systematic processes, procedures, and standards for ensuring the quality of a product
or service. It is a key component of quality management and is used in various
industries to establish and maintain a consistent level of quality in deliverables. For
a software product or service, an SQA plan will be used in conjunction with the
typical development, prototyping, design, production, and release cycle. An SQA
plan will include several components, such as purpose, references, configuration and
management, tools, code controls, testing methodology, problem reporting and
remedial measures, and more, for easy documentation and referencing.
Importance of Software Quality Assurance Plan
Quality Standards and Guidelines: The SQA Plan lays out the
standards and guidelines that ensure the software satisfies
predetermined quality requirements.
Risk Management: This is the process of recognizing, evaluating, and
controlling risks in order to reduce the possibility of errors and other
quality problems.
Standardization and Consistency: The strategy guarantees consistent
methods, processes, and procedures, fostering a unified and well-
structured approach to quality assurance.
Customer Satisfaction: The SQA Plan helps to ensure that the finished
product satisfies customer needs, which in turn increases overall customer
satisfaction.
Resource optimization: It is the process of defining roles, responsibilities,
and procedures in order to maximize resource utilization and minimize
needless rework.
Early Issue Detection: SQA Plans help identify problems early on, which
lowers the expense and work involved in fixing them.
Objectives And Goals of Software Quality Assurance Plan
The objectives and goals of a Quality Assurance Plan (QAP) are to ensure that
the products or services meet specified quality standards and requirements.
The plan serves as a roadmap for implementing quality assurance processes
throughout a project or organizational activity.
The specific objectives and goals can vary depending on the nature of the
project or industry, but common elements include.
Considerations of a Software Quality Assurance Plan:
Overhead in Small Projects: The cost of developing and upholding a
detailed SQA plan may be excessive for small projects compared to their
scope and complexity.
Opposition to Change: An SQA strategy may involve changes that teams
accustomed to their current workflows find difficult to accept, which could
result in a time of adjustment and resistance.
Documentation Complexity: SQA plans with high documentation
requirements run the risk of adding complexity and coming off as
bureaucratic to teams, which makes it difficult to keep documentation up
to date.
Reliance on Human Elements: An SQA plan's performance depends on
human elements such as procedure adherence and attention to detail, which
makes human error a possibility.
Verification and Validation Testing
Verification testing
It is also known as static testing, where we ensure that "we are building the product
right." It also checks that the application being developed fulfills all the requirements
specified by the client.
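Because verification is static, it inspects work products without executing them. A small sketch of that idea, using Python's `ast` module to flag functions that lack a docstring (the `source` snippet is a hypothetical piece of code under review):

```python
# A verification-style (static) check: the source code is parsed and
# inspected, but never executed.

import ast

source = """
def add(a, b):
    return a + b
"""

tree = ast.parse(source)
missing_docstrings = [
    node.name
    for node in ast.walk(tree)
    if isinstance(node, ast.FunctionDef) and ast.get_docstring(node) is None
]
# missing_docstrings == ["add"]; the code in `source` never runs
```

Real-world verification uses the same principle at a larger scale: inspections, reviews, walkthroughs, and static-analysis tools all examine artifacts without running them.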
Validation testing
Validation testing is testing where the tester performs functional and non-functional testing.
Here, functional testing includes Unit Testing (UT), Integration Testing (IT), and System
Testing (ST), and non-functional testing includes User Acceptance Testing (UAT).
Validation testing is also known as dynamic testing, where we ensure that "we have
developed the right product." It also checks that the software meets the business needs of
the client.
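In contrast to verification, validation is dynamic: the code under test is actually executed and its behaviour is checked. A minimal sketch using the standard `unittest` module (the `add` function is a hypothetical unit under test):

```python
# A validation-style (dynamic) test: the function runs and its output
# is checked against the expected result.

import unittest

def add(a, b):
    return a + b

class TestAdd(unittest.TestCase):
    def test_adds_two_numbers(self):
        self.assertEqual(add(2, 3), 5)   # the code executes; output is validated

suite = unittest.TestLoader().loadTestsFromTestCase(TestAdd)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Unit, integration, system, and acceptance testing all follow this pattern at different levels of scope.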
Verification vs. Validation
1. In verification, we check whether we are building the product right; in validation, we check
whether we have built the right product.
2. Verification is also known as static testing; validation is also known as dynamic testing.
3. Verification includes methods like inspections, reviews, and walkthroughs; validation
includes testing such as functional testing, system testing, integration testing, and user
acceptance testing.
4. Verification is a process of checking the work products (not the final product) of a
development cycle to decide whether the product meets the specified requirements; validation is
a process of checking the software during or at the end of the development cycle to decide
whether it follows the specified business requirements.
5. Quality assurance comes under verification testing; quality control comes under validation
testing.
6. The execution of code does not happen in verification testing; in validation testing, the code
is executed.
7. In verification testing, we can find bugs early in the development phase of the product; in
validation testing, we find the bugs that were not caught during verification.
8. Verification testing is executed by the quality assurance team to make sure that the product is
developed according to the customers' requirements; validation testing is executed by the
testing team to test the application.
9. Verification is done before validation testing; after verification, validation takes place.
10. In verification, we verify whether the intermediate work products conform to the specified
requirements; in validation, we validate whether the user accepts the product.