Software Verification

The document discusses software quality assurance (QA) techniques for defect prevention, detection, and containment. It covers education and training, formal methods, process conformance, and tools to prevent defects. Defect reduction techniques discussed include inspections, testing, and causal analysis. Inspections involve examining artifacts for defects. Testing executes the software and observes behavior to find failures and locate faults. Together, prevention and reduction techniques aim to develop high-quality software by eliminating sources of defects.


Software Verification & Validation

Content of Lecture

• Quality Assurance (QA)

• Classification Scheme for QA as Dealing with Defects

• Defect Prevention

• Defect Reduction (Defect Detection and Removal)

• Defect Containment

Defect Prevention Overview

• Education and training

• Formal methods

• Process conformance and standards enforcement

• Tools and technologies

Education and training

• Product and domain specific knowledge: If the people involved are not familiar with the
product type or application domain, there is a good chance that wrong solutions will be
implemented. For example, developers unfamiliar with embedded software may design
software without considering its environmental constraints, thus leading to various interface
and interaction problems between software and its physical surroundings.

• General software development knowledge and expertise: Plays an important role in developing
high-quality software products. For example, lack of expertise with requirement analysis and
product specification usually leads to many problems and rework in subsequent design, coding,
and testing activities.

• Knowledge about the specific development methodologies and tools used by the organization
also plays an important role in developing high-quality software products. For example, in an
implementation of Cleanroom technology (Mills et al., 1987b), if the developers are not familiar
with the key components of formal verification or statistical testing, there is little chance of
producing high-quality products.
• Knowledge about the software process used by the organization: If the project personnel do
not have a good understanding of the development process involved, there is little chance that
the process can be implemented correctly. For example, if the people involved in incremental
software development do not know how the individual development efforts for different
increments fit together, the uncoordinated development may lead to many interface or
interaction problems.

Formal Methods
• Formal methods provide a way to eliminate certain error sources and to verify the
absence of related faults
• Include formal specification and formal verification
• In formal specification, an unambiguous set of product specifications is given so that
customer requirements, as well as environmental constraints and design intentions, are
correctly reflected, thus reducing the chances of accidental fault injections
• Formal verification checks the conformance of software design or code against these
formal specifications, thus ensuring that the software is fault-free with respect to its
formal specifications

• Different techniques to determine the correctness of software focus on two issues:

• What is correct behavior?

• How to verify it?
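As a lightweight illustration of these two issues, "what is correct behavior" can be written down as pre- and postconditions, and "how to verify it" becomes checking that the code satisfies them. This sketch only checks the conditions at run time; true formal verification would prove them for all inputs:

```python
def integer_sqrt(n: int) -> int:
    """Specification: for n >= 0, return r such that r*r <= n < (r+1)*(r+1).
    The assert statements encode the pre- and postcondition; a formal proof
    would discharge them for ALL inputs rather than test them one at a time."""
    assert n >= 0, "precondition violated"
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
    assert r * r <= n < (r + 1) * (r + 1), "postcondition violated"
    return r
```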

Other Defect Prevention Techniques


• Software methodologies or technologies can also help reduce the chances of fault
injections. Many of the problems with low-quality “fat software” could be addressed by
disciplined methodologies and a return to essentials for high-quality “lean software”
(Wirth, 1995).
• Lean programming is a paradigm for the development and production of computer
programs that is based on the principles of optimizing efficiency and minimizing waste

 A better managed process can also eliminate many systematic problems. For example, not
having a defined process or not following it for system configuration management may lead
to inconsistencies or interface problems among different software components.
 Ensuring appropriate process definition and conformance helps eliminate some such error
sources. Similarly, enforcement of selected standards for certain types of products and
development activities also reduces fault injections
 Sometimes, specific software tools can also help reduce the chances of fault injections. For
example, a syntax-directed editor that automatically balances out each open parenthesis,
“{”, with a close parenthesis, “}”, can help reduce syntactical problems in programs written
in the C language
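The brace-balancing idea can be sketched as a simple checker (a minimal illustration only, not a real syntax-directed editor; among other things it ignores braces inside string literals and comments):

```python
def braces_balanced(source: str) -> bool:
    """Return True if every '{' in source has a matching '}'."""
    depth = 0
    for ch in source:
        if ch == '{':
            depth += 1
        elif ch == '}':
            depth -= 1
            if depth < 0:          # a '}' appeared before its matching '{'
                return False
    return depth == 0

braces_balanced("int main() { if (x) { y(); } }")   # -> True
braces_balanced("while (1) { {")                    # -> False
```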

Defect Reduction
• Inspection
• Testing
• Other techniques

Defect Reduction
• Software inspections are critical examinations of software artifacts by
human inspectors aimed at discovering and fixing faults in the software
systems.
• Inspection is a well-known QA alternative familiar to most experienced
software quality professionals. The earliest and most influential work in
software inspection is Fagan inspection (Fagan, 1976).
• Various other variations have been proposed and used to effectively
conduct inspection under different environments

Inspection:
• Inspections are critical reading and analysis of software code or other
software artifacts, such as designs, product specifications, test plans, etc.
• Inspections are typically conducted by multiple human inspectors, through
some coordination process. Multiple inspection phases or sessions might be
used.
• Faults are detected directly in inspection by human inspectors, either during
their individual inspections or various types of group sessions.
• Identified faults need to be removed as a result of the inspection process,
and their removal also needs to be verified.
• The inspection processes vary, but typically include some planning and follow-up activities in
addition to the core inspection activity.

• The formality and structure of inspections may vary, ranging from very informal reviews and
walkthroughs to formal inspections.

• Informal reviews:

• self conducted reviews/walkthroughs

• independent reviews (and their coordination)

• Formal inspections:

• Fagan inspection

• Gilb inspection

• Fagan inspection is a structured process of trying to find defects in development documents
such as programming code, specifications, designs and others during various phases of the
software development process. It is named after Michael Fagan, who is credited with being the
inventor of formal software inspections

• Gilb inspection establishes and evaluates a framework for performing software inspections
based on Tom Gilb's inspection method, covering all development phases and work products

• Inspection is most commonly applied to code, but it could also be applied to requirement
specifications, designs, test plans and test cases, user manuals, and other documents or
software artefacts

• Inspection can be used throughout the development process, particularly early in the software
development before anything can be tested

• Consequently, inspection can be an effective and economical QA alternative because of the
much increased cost of fixing late defects as compared to fixing early ones

• Another important potential benefit of inspection is the opportunity to conduct causal analysis
during the inspection process, for example, as an added step in Gilb inspection. These causal
analysis results can be used to guide defect prevention activities by removing identified error
sources or correcting identified missing or incorrect human actions.

Testing
• Testing is one of the most important parts of QA and the most commonly performed QA activity.

• Testing involves the execution of software and the observation of the program behaviour or
outcome
• If a failure is observed, the execution record is then analyzed to locate and fix the fault(s) that
caused the failure

When can a specific testing activity be performed and related faults be detected?

 Testing is an execution-based QA activity, so a prerequisite to actual testing is the existence of
the implemented software units, components, or system to be tested, although preparation
for testing can be carried out in earlier phases of software development
 Testing can be divided into various sub-phases starting from the coding phase up to post-
release product support, including: unit testing, component testing, integration testing,
system testing, acceptance testing, beta testing, etc.
 The observation of failures can be associated with these individual sub-phases, and the
identification and removal of related faults can be associated with corresponding individual
units, components, or the complete system.

What to test, and what kind of faults are found?

• External specifications (black-box)/functional

• When black-box testing is performed, failures related to specific external functions can
be observed, leading to corresponding faults being detected and removed. The
emphasis is on reducing the chances of encountering functional problems by target
customers.

• Internal implementation (white/clear-box)/structural

• When white-box testing is performed, failures related to internal implementations can
be observed, leading to corresponding faults being detected and removed. The emphasis
is on reducing internal faults so that there is less chance for failures later on, no matter
what kind of application environment the software is subjected to
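As a small illustration of the black-box view, a function can be tested purely against its external specification, with no knowledge of its implementation. The spec predicate below is a hypothetical example for an absolute-value function:

```python
def abs_spec_holds(f, x) -> bool:
    """Black-box (functional) check: the output must satisfy the external
    specification (y >= 0, and y equals x or -x), regardless of how f is
    implemented internally."""
    y = f(x)
    return y >= 0 and (y == x or y == -x)

all(abs_spec_holds(abs, x) for x in [-3, 0, 7])   # -> True
```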

When, or at what defect level, to stop testing?

• When to stop testing:

• Coverage information (higher coverage means higher quality or lower levels
of defects)

• Checklists are used to make sure that major functions and usage scenarios are
tested before product release

• Product reliability goals can be used as a more objective criterion to stop testing.

• Use of this criterion requires the testing to be performed under an environment
that resembles actual usage by target customers so that realistic reliability
assessment can be obtained, resulting in the so-called usage-based statistical
testing.
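The coverage criterion above can be made concrete: statement coverage is the fraction of statements exercised by the test suite. This is a deliberately simplified sketch; real coverage tools instrument the program to record which statements executed:

```python
def statement_coverage(executed: set, all_statements: set) -> float:
    """Fraction of the program's statements exercised by the tests."""
    return len(executed & all_statements) / len(all_statements)

def should_stop_testing(executed: set, all_statements: set, goal: float = 0.9) -> bool:
    """One (imperfect) stop-testing rule: stop once coverage meets the goal."""
    return statement_coverage(executed, all_statements) >= goal

statement_coverage({1, 2, 3, 5}, {1, 2, 3, 4, 5})   # -> 0.8
```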

Other Techniques for Defect Reduction


• Static:
• Formal model analysis techniques
• Algorithm analysis, boundary value analysis, finite state machine, control
and data flow analysis, software fault trees etc.
• Dynamic:
• Testing, dynamic, execution-based, techniques also exist for fault detection and
removal. For example, symbolic execution, simulation, and prototyping can help
us detect and remove various defects early in the software development process
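One of the techniques named above, boundary value analysis, selects test inputs just below, on, and just above each boundary of a valid input range, since defects cluster at range boundaries. A minimal sketch of the idea:

```python
def boundary_values(lo: int, hi: int) -> list:
    """Boundary value analysis: for a valid range [lo, hi], test just below,
    on, and just above each bound."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

# For a hypothetical input field accepting values 1..100:
boundary_values(1, 100)   # -> [0, 1, 2, 99, 100, 101]
```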

Defect Containment
• Due to large size and high complexity of most software systems, the defect
reduction activities can only reduce the number of faults to a fairly low
level, but not completely eliminate them.
• For software systems where failure impact is substantial, such as many real-
time control software sub-systems used in medical, nuclear, transportation,
and other embedded systems, this low defect level and failure risk may still
be inadequate.
Fault Tolerance
• Software fault tolerance ideas originate from fault tolerance designs in
traditional hardware systems that require higher levels of reliability and
availability
• The primary software fault tolerance techniques include recovery blocks, N-
version programming (NVP), and their variations
• Recovery blocks use repeated executions (or redundancy over time) as the
basic mechanism for fault tolerance. If dynamic failures in some local areas
are detected, a portion of the latest execution is repeated, in the hope that
this repeated execution will not lead to the same failure. Therefore, local
failures will not propagate to global failures, although some time-delay may
be involved.
• NVP uses parallel redundancy, where N copies, each of a different version,
of programs fulfilling the same functionality are running in parallel. The
decision algorithm in NVP makes sure that local failures in limited number
of these parallel versions will not compromise global execution results
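The NVP decision algorithm can be sketched as a majority vote over the outputs of the N versions (a simplified illustration; real decision algorithms must also handle inexact matches, timing, and tie-breaking):

```python
from collections import Counter

def nvp_vote(results):
    """Majority decision over the outputs of N independently developed
    versions: local failures in a minority of versions do not change
    the global result."""
    value, count = Counter(results).most_common(1)[0]
    if count > len(results) // 2:
        return value
    raise RuntimeError("no majority among versions: global failure")

# Two correct versions outvote one faulty one:
nvp_vote([42, 42, 41])   # -> 42
```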

Safety Assurance and Fault Containment


• For safety critical systems, the primary concern is our ability to prevent accidents from
happening, where an accident is a failure with a severe consequence.
• Even low failure probabilities for software are not tolerable in such systems if these
failures may still lead to accidents.
• Prevent accidents from happening
• accident: failure with severe consequences
• Specific techniques for safety critical systems based on analysis of hazards
• hazard: precondition to accident
Conclusion
• Defect reduction through inspection, testing, and other static analyses or dynamic
activities, to detect and remove faults from software.
• Defect containment through fault tolerance, failure prevention, or failure impact
minimization, to assure software reliability and safety.

Content of Lecture
• Handling Discovered Defect During QA Activities
• QA Activities in Software Processes
• Verification and Validation Perspectives
• Reconciling the Two Views
Introduction
• Quality assurance (QA) interpretation from last lecture
• QA as dealing with defects
• implicitly assumed that all discovered defects will be resolved within the
software development process before product release
• In this lecture we
• Describe defect handling during the execution of specific QA activities
• Examine how different QA activities fit into different software development
processes
• Examine the QA activities from the perspective of verification and validation
(V&V) view
Defect Handling and Related Activities
• QA aims to resolve each discovered defect before product release
• Most important activity associated with defect handling is defect resolution
• Ensures that each discovered defect is corrected or taken care of through
appropriate actions
• Two important activities to support defect resolution:
• Defect logging: recording discovered defects
• Defect tracking: monitoring each defect up to its final resolution
• What to record?
• Various specific information can be recorded and updated through the defect
handling process. Defect type, severity, impact areas, possible cause etc
• Details in later lectures
• To ensure proper collection and usage of defect data, we need to pay special
attention to the following in the defect discovery and resolution activities
• distinguish execution failures, internal faults and human errors
• the specific problems need to be counted and tracked consistently
• timely defect reporting
Defect Handling Process
• Defect handling is carried out in similar ways to configuration management
• A formalized defect handling process defines
• Important activities and associated rules,
• Parties involved (often multiple) and their responsibilities
• Different states associated with individual defect status
• New, Working, Closed
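Such a state model can be sketched as a small transition table. The states and transitions below are illustrative assumptions; real tools such as Bugzilla define richer life cycles:

```python
# Hypothetical minimal defect life cycle.
TRANSITIONS = {
    "New": {"Working"},              # a new defect is assigned and worked on
    "Working": {"Closed", "New"},    # fixed and verified, or sent back
    "Closed": {"New"},               # a closed defect may be reopened
}

def advance(state: str, new_state: str) -> str:
    """Enforce the defect-handling rule that only defined transitions occur."""
    if new_state not in TRANSITIONS.get(state, set()):
        raise ValueError(f"illegal transition: {state} -> {new_state}")
    return new_state
```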
Defect Handling Tools

• We need the support of software for


• implementation of the defect handling process
• enforcement of various related rules
• Commonly referred to as defect tracking tools or defect handling tools
• Examples:
• IBM: CMVC (configuration management and version control), an IBM product
that was used for defect tracking
• Open source projects: Tools such as
• Bugzilla (http://www.bugzilla.org)
• Issuezilla (http://www.issuezilla.org)

Defect Handling Process


• [Figure: Bugzilla bug life cycle]
Defect Handling in Different QA Activities
• Three classes of the QA activities:
• defect detection and removal activities, such as testing and inspection,
• are more closely associated with defect handling
• inspector of a program may make an initial note about a possible
problem in the program code. When it is confirmed during the inspection
meeting, it is formally recorded as a defect
• various defect prevention activities
• do not directly deal with defects
• rather deal with various ways to prevent the injection of faults
• defect containment activities,
• the focus is not on the discovery of underlying faults

QA Activities in Software Processes

• QA activities - integral part of the overall software process


• In the maintenance stage, the focus of QA is on how:
• Each problem reported from field operations is
• logged
• analyzed
• resolved
• Complete tracking record is kept so that we can learn from past problems for
future quality improvement
• Most of the core QA activities (defect prevention & reduction) are performed during
software development instead of during in-field maintenance
• So we focus on how different QA activities fit into software processes
• First, we examine how QA activities fit in the Waterfall Process
QA in the Waterfall Process

• Strictly sequential stages


• Typical sequence includes, in chronological order:
• product planning,
• requirement analysis,
• specification,
• design,
• coding,
• testing,
• release, and
• post-release product support
• Testing, a central part of QA activities, is an integral part of the waterfall process
and forms an important link in the overall development chain
QA in the Waterfall Process
• Other QA activities
• can be carried out throughout other phases and in the transition from one phase
to another
• For example, part of the criteria to move on from each phase to the next is
quality, typically in the form of checking to see if certain quality plans or
standards have been completed or followed
• Through reviews and inspection
• Various defect prevention activities are typically concentrated in the earlier phases –
why?
• Some detection and removal techniques like inspection can also be applied to early
stages
• e.g. inspecting requirement documents, product specs, product design etc.
• But there are practical obstacles to the early fixing of injected defects
• Dynamic problems may only become apparent during execution
• e.g. component dependency issues cannot be detected & fixed till after
implementation & execution
• Defect containment (fault tolerance, safety assurance) in late phases
• However, its planning/preparation is done at early stages

QA in the Waterfall Process

QA in Other Software Processes


• Incremental and iterative processes
• consisting of several increments/iterations
• each of them following more or less the same mini-stages corresponding to
those in the waterfall process
• The new increment has to be incorporated with the existing parts - Integration
testing
• Spiral process
• Are similar to those performed in incremental/iterative processes
• Minor difference is typically in the risk focus adopted in spiral process
• Agile development method
• can be treated as special cases of incremental, iterative, or spiral process models
where many of their elements are used or adapted
• Inspection & testing play an even more important role e.g. test driven
development & inspection is integral part of extreme programming
QA Activities in V&V Context
• V & V → Verification and Validation
• Validation activities
• Related QA activities check whether a function needed and expected by the
customers is present in a software product
• Verification activities
• Related QA activities to confirm the correct or reliable performance of these
specified functions
• Check the conformance of a software system to its specifications
• Examples of QA activities that can be classified as Validation activities:
• System testing focus is the overall set of system functions
• Acceptance & beta testing with focus on the user
• Software fault tolerance for providing continued service even when local problems
exist
• Software safety assurance activities which focus on providing the expected
accident free operations or reducing accident damage when an accident is
unavoidable
• When an expected feature is present
• the activity to determine whether it performs or behaves expectedly is then a
verification activity
• Therefore, all the QA activities we classified as dealing directly with faults, errors,
or error sources can be classified as verification activities
• E.g. proper working of algorithms, data structures, coding standards, process
standards

V & V Based QA Activities in Software Processes
• Validation checks whether the expected functions or features are present
or not
• validation deals directly with users and their requirements
• Verification checks implementation against its specifications to see if it is
implemented correctly
• Verification deals with internal product specifications
• Different processes involve users in different ways
• Therefore validation and verification activities may be distributed in
different processes differently

V & V Based QA Activities in Waterfall Process

• Focus of validation activities


• Direct involvement of users and their requirements at the very beginning & the
very end
• e.g. project planning, market analysis, requirement analysis,
specification, beta testing, acceptance testing, product release, post-
release support and maintenance etc
• Therefore, these are the phases where validation activities may be the focus
• Focus of verification activities
• No direct involvement of users in the middle part
• e.g. inspection of design, formal verification of program correctness, unit
testing, component testing etc
• Therefore, these are the phases where verification activities may be the focus
The V Process Model
• These verification and validation activities can be best illustrated by the V-model
Example
• Switches and buttons on the wall
Examples Cont’d
• There are supposed to be three switches and two buttons on this wall. (verification)
• The switches are supposed to be equi-distant apart and 48 inches off the floor
(verification)
• The buttons are supposed to be on either side of the switches (verification)
• The left button is supposed to be labeled "Left" (verification)
• The right button is supposed to be labeled "Right" (verification)

Examples Cont’d
• If I click the left button on, does it disable the far right switch? (validation)
• If I click the left button off, does it enable the far right switch? (validation)
• If I click the right button on, does it disable the far left switch? (validation)
• If I click the right button off, does it enable the far left switch? (validation)
• If I click both buttons on, does only the middle switch work? (validation)
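The behaviour being validated in these questions can be modelled directly. This is a hypothetical model assumed from the slides; the class and attribute names are invented for illustration:

```python
class WallPanel:
    """Three switches (0 = far left, 1 = middle, 2 = far right), two buttons.
    Assumed behaviour: the left button disables the far-right switch and the
    right button disables the far-left switch."""
    def __init__(self):
        self.left_button = False
        self.right_button = False

    def switch_enabled(self, index: int) -> bool:
        if index == 2 and self.left_button:
            return False
        if index == 0 and self.right_button:
            return False
        return True

# Validation check: with both buttons on, only the middle switch works.
p = WallPanel()
p.left_button = p.right_button = True
[p.switch_enabled(i) for i in range(3)]   # -> [False, True, False]
```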
The V & V process
• Is a whole life-cycle process - V & V must be applied at each stage in the software
process.
• Has two principal objectives
• The discovery of defects in a system;
• The assessment of whether or not the system is useful and useable in an
operational situation.
V & V goals
• Verification and validation should establish confidence that the software is fit for
purpose.
• This does NOT mean completely free of defects.
• Rather, it must be good enough for its intended use and the type of use will determine
the degree of confidence that is needed.
V & V confidence
• Depends on system’s purpose, user expectations and marketing environment
• Software function
• The level of confidence depends on how critical the software is to an
organisation.
• User expectations
• Users may have low expectations of certain kinds of software.
• Marketing environment
• Getting a product to market early may be more important than finding
defects in the program.
Static and dynamic verification
• Software inspections: Concerned with analysis of the static system representation
to discover problems (static verification)
• May be supplemented by tool-based document and code analysis
• Software testing: Concerned with exercising and observing product behaviour
(dynamic verification)
• The system is executed with test data and its operational behaviour is observed
V & V planning
• Careful planning is required to get the most out of testing and inspection processes.
• Planning should start early in the development process.
• The plan should identify the balance between static verification and testing.
• Test planning is about defining standards for the testing process rather than describing
product tests.
The V-model of development

[V-model diagram: requirements specification, system specification, system design, and
detailed design run down the left-hand side; each stage produces a corresponding test
plan (acceptance test plan, system integration test plan, sub-system integration test
plan, and module and unit code and test); the right-hand side runs back up through
sub-system integration test, system integration test, and acceptance test into service]
Difference between V&V

1. Verification is a static practice of verifying documents, design, code and program.
Validation is a dynamic mechanism of validating and testing the actual product.

2. Verification does not involve executing the code. Validation always involves
executing the code.

3. Verification is human-based checking of documents and files. Validation is
computer-based execution of the program.

4. Verification uses methods like inspections, reviews, walkthroughs, and desk-checking.
Validation uses methods like black box (functional) testing, gray box testing, and
white box (structural) testing.

5. Verification is to check whether the software conforms to specifications. Validation
is to check whether the software meets the customer expectations and requirements.

6. Verification can catch errors that validation cannot catch; it is a low-level
exercise. Validation can catch errors that verification cannot catch; it is a
high-level exercise.

7. The target of verification is the requirements specification, application and
software architecture, high-level and complete design, and database design. The
target of validation is the actual product: a unit, a module, a set of integrated
modules, and the effective final product.

8. Verification is done by the QA team to ensure that the software is as per the
specifications in the SRS document. Validation is carried out with the involvement
of the testing team.

9. Verification generally comes first, before validation. Validation generally
follows after verification.
Content of Lecture
• Critical System Validation, Reliability Validation, Safety Assurance, Security
assessment
• Introduction
• Computer Aided Disasters
• Validation of critical systems
• Validation costs
• Reliability validation
• Safety assurance
• Security assessment
Introduction
System failures in ordinary systems cause inconvenience but no serious, long-term damage;
system failures in critical systems result in significant economic losses, physical damage
or threats to human life

Critical systems

Three main types:

• Safety-critical systems: failure may cause injury, loss of life, or serious
environmental damage (e.g. a chemical manufacturing plant)
• Mission-critical systems: failure may cause the failure of some goal-directed
activity (e.g. a navigational system for a spacecraft)
• Business-critical systems: failure may cause very high costs (e.g. a customer
accounting system in a bank)
System Dependability
• For critical systems, it is usually the case that the most important system property is the
dependability of the system.
• Dependability is a non-functional requirement

Critical System
• The most important properties of a critical system are its emergent dependability properties
• Systems that are unreliable, unsafe or insecure are often rejected by their users
• Untrustworthy systems may cause information loss

Computer Aided Disasters


• Therac-25 (1985-87, N. America): radiation therapy machine delivered severe radiation
overdoses (6 incidents)
• London Ambulance Service (1992): 20+ die unnecessarily when the dispatch system fails
• USS Vincennes (1988): shoots down an Iranian airliner after faulty identification
Validation of Critical Systems
• The verification and validation of critical systems involves additional validation
processes and analysis compared with non-critical systems:
• The costs and consequences of failure are high so it is cheaper to find and
remove faults than to pay for system failure;
• You may have to make a formal case to customers or to a regulator that the
system meets its dependability requirements. This dependability case may
require specific V & V activities to be carried out.
Validation Costs
• Because of the additional activities involved, the validation costs for critical systems are
usually significantly higher than for non-critical systems.
• Normally, V & V costs take up more than 50% of the total system development costs
Reliability Validation
• Reliability validation involves exercising the program to assess whether or not it has
reached the required level of reliability.
• This cannot normally be included as part of a normal defect testing process because
data for defect testing is (usually) atypical of actual usage data.
• Reliability measurement therefore requires a specially designed data set that replicates
the pattern of inputs to be processed by the system.

The Reliability Measurement Process

Identify operational profiles → Prepare test data set → Apply tests to system →
Compute observed reliability
Reliability validation activities
• Establish the operational profile for the system.
• Construct test data reflecting the operational profile.
• Test the system and observe the number of failures and the times of these failures.
• Compute the reliability after a statistically significant number of failures have been
observed.
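The final step, computing the reliability, can be illustrated with the simplest possible measure: the observed mean time to failure over the recorded failure times. This is a toy calculation; real reliability assessment uses statistical reliability growth models:

```python
def mean_time_to_failure(failure_times):
    """Observed MTTF: average interval between successive failures recorded
    while testing under the operational profile (testing starts at time 0)."""
    intervals = [t2 - t1
                 for t1, t2 in zip([0] + list(failure_times[:-1]), failure_times)]
    return sum(intervals) / len(intervals)

mean_time_to_failure([10, 25, 45, 70])   # -> 17.5
```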
Statistical Testing
• Used to test for reliability, not fault detection
• Measuring the number of errors allows the reliability of the software to be predicted
• Error seeding is one approach to measuring reliability
• An acceptable level of reliability should be specified before testing begins and the
software should be modified until that level is attained
Estimating Number of Program Errors by Error Seeding
• One member of the test team places a known number of errors in the program while
other members try to find them.
• Assumption: (s/S) = (n/N)
• s = # seeded errors found during testing
• S = # of seeded errors placed in program
• n = # non-seeded (actual) errors found during testing
• N = total # of non-seeded (actual) errors in program
• This can be written as N = (S*n)/s
Error Seeding Example
• Using the error seeding assumptions
• if 75 of 100 seeded errors are found
• we believe that we have found 75% of the actual errors
• If we also found 25 non-seeded (actual) errors, the estimated total number of actual errors is
N = (100 * 25)/75 ≈ 33.3
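The seeding estimate above can be expressed directly as a small Python function (a sketch of the slide's formula N = (S * n) / s, with the function and parameter names chosen for this example):

```python
def estimate_total_errors(seeded_placed, seeded_found, real_found):
    """Estimate the total number of actual errors N from the seeding
    assumption s/S = n/N, i.e. N = (S * n) / s."""
    if seeded_found == 0:
        raise ValueError("no seeded errors found; estimate is undefined")
    return (seeded_placed * real_found) / seeded_found

# The slide's example: 75 of 100 seeded errors found, 25 actual errors found.
n_estimate = estimate_total_errors(seeded_placed=100, seeded_found=75, real_found=25)
print(round(n_estimate, 1))  # 33.3
# Estimated actual errors still undiscovered: about 33.3 - 25 = 8.3
```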
Safety vs. Reliability
• Not the same thing
• Reliability is concerned with conformance to a given specification and delivery of service
• Safety is concerned with ensuring system cannot cause damage irrespective of whether
or not it conforms to its specification
A system may be reliable but not safe – but, usually, if a critical system
is unreliable it is likely to be unsafe …
Safety assurance
• Safety assurance and reliability measurement are quite different:
• Within the limits of measurement error, you know whether or not a
required level of reliability has been achieved;
• However, quantitative measurement of safety is impossible. Safety
assurance is concerned with establishing a confidence level in the
system.
Safety confidence
• Confidence in the safety of a system can vary from very low to very high.
• Confidence is developed through:
• Past experience with the company developing the software;
• The use of dependable processes and process activities geared to safety;
• Extensive V & V including both static and dynamic validation techniques.
Safety arguments
• Safety arguments are intended to show that the system cannot reach an
unsafe state.
• They are generally based on proof by contradiction
• Assume that an unsafe state can be reached;
• Show that this is contradicted by the program code.
Construction of a safety argument
• Establish the safe exit conditions for a component or a program.
• Starting from the END of the code, work backwards until you have
identified all paths that lead to the exit of the code.
• Assume that the exit condition is false.
• Show that, for each path leading to the exit, the assignments made on
that path contradict the assumption of an unsafe exit from the component.
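To make the backwards reasoning concrete, here is a hedged toy example; the dose-controller code and the limit value are invented, not from the lecture. The safe exit condition is dose <= MAX_DOSE, and the comments give the contradiction argument for each path:

```python
MAX_DOSE = 5  # safe exit condition: the returned dose never exceeds MAX_DOSE

def compute_dose(required_dose):
    # Path 1: non-positive requests are clamped to 0, and 0 <= MAX_DOSE,
    # which contradicts the assumed unsafe exit (dose > MAX_DOSE).
    if required_dose <= 0:
        dose = 0
    # Path 2: over-limit requests are clamped, so dose == MAX_DOSE,
    # again contradicting dose > MAX_DOSE.
    elif required_dose > MAX_DOSE:
        dose = MAX_DOSE
    # Path 3: here 0 < required_dose <= MAX_DOSE by the guards above.
    else:
        dose = required_dose
    # Every path to this exit contradicts "dose > MAX_DOSE", so the
    # unsafe exit is unreachable; the assert documents the argument.
    assert dose <= MAX_DOSE
    return dose
```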
Security assessment
• Security assessment has something in common with safety assessment.
• It is intended to demonstrate that the system cannot enter some state (an
unsafe or an insecure state) rather than to demonstrate that the system
can do something.
• However, there are differences
• Safety problems are accidental; security problems are deliberate;
• Security problems are more generic - many systems suffer from the
same problems; Safety problems are mostly related to the
application domain
Security validation
• Experience-based validation: The system is reviewed and analysed against the
types of attack that are known to the validation team.
• Tool-based validation: Various security tools such as password checkers are
used to analyse the system in operation.
• Tiger teams: A team is established whose goal is to breach the security of the
system by simulating attacks on the system.
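As a tiny illustration of tool-based validation, the following sketch flags weak passwords in the style of a simple password checker; the rules and the list of common passwords are invented for this example, not an authoritative policy:

```python
import re

def password_weaknesses(password, min_length=8):
    """Return the reasons a password is considered weak (illustrative rules)."""
    problems = []
    if len(password) < min_length:
        problems.append("too short")
    if not re.search(r"[A-Z]", password):
        problems.append("no uppercase letter")
    if not re.search(r"[0-9]", password):
        problems.append("no digit")
    if password.lower() in {"password", "letmein", "qwerty"}:
        problems.append("common password")
    return problems

print(password_weaknesses("password"))
# ['no uppercase letter', 'no digit', 'common password']
```

A real security tool would run checks like these against the system's stored credentials and configuration while it is in operation.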
Security checklist
Content of Lecture
• Concepts, Issues
• Testing Purpose
• Testing Processes, Activities and Context
• Classification of Testing Techniques
Basic Concept
• Testing can be defined as “A process of analyzing a software item to detect
the differences between existing and required conditions (that is,
defects/errors/bugs) and to evaluate the features of the software item”.
State-of-the-Art
• 30-85 errors are made per 1000 lines of source code
• Extensively tested software contains 0.5-3 errors per 1000 lines of source
code
• If testing is postponed, as a consequence: the later an error is discovered,
the more it costs to fix it.
• Error distribution: 60% design, 40% implementation. 66% of the design
errors are not discovered until the software has become operational.
Lessons
• Many errors are made in the early phases
• These errors are discovered late
• Repairing those errors is costly
=> It pays off to start testing really early
Software Testing
Testing is the process of exercising a program with the specific intent of finding
errors prior to delivery to the end user.
What is a “Good” Test?
• a high probability of finding an error
• not redundant
• neither too simple nor too complex
OBJECTIVE: to uncover errors
CRITERIA: in a complete manner
CONSTRAINT: with a minimum of effort and time
Testing Activities
• Major testing activities:
=> test planning and preparation
=> execution (testing)
=> analysis and follow-up
• Linking the above activities gives a generic process:
- planning - execution - analysis - feedback
- entry criteria
- exit criteria
- some (small) process variations
- but we focus on strategies/techniques.