Cheatsheet ISE

The document discusses software quality, testing fundamentals, measures, metrics and indicators. It defines quality, discusses strategic approaches to testing including verification and validation. It also covers test case design, testing strategies like exhaustive and selective testing, and white-box testing. Measurement principles and objectives of software measurement are presented.

1. Introduction to Software Quality

Quality
Definition: "a characteristic or attribute of something"
A Philosophical View: some things are better than others
A Pragmatic View:
• Transcendental view- something that you immediately recognize, but cannot explicitly define
• User view- the end-user's specific goals; if a product meets those goals, it exhibits quality
• Manufacturer's view- the original specification of the product; if the product conforms to the spec, it exhibits quality
• Product view- the inherent characteristics (functions & features) of a product
• Value-based view- how much a customer is willing to pay for a product

Software Quality
Definition: an effective software process applied in a manner that creates a useful product that provides measurable value for those who produce it and those who use it
a) Quality of design- encompasses requirements, specifications, and the design of the system
b) Quality of conformance- an issue focused primarily on implementation
User satisfaction = compliant product + good quality + delivery within budget and schedule
a) Effective software process- establishes the infrastructure that supports any effort at building a high-quality software product
b) Useful product- delivers the content, functions, and features that the end user desires
c) Adding value- high-quality software provides benefits for both the software organization and the end-user community

2. Software Quality Model
McCall model (1977)
1. Product Operation Factors:
a) Correctness- the extent to which a program satisfies its specification (daily operation) and fulfils its objectives (defined in a list of required system outputs)
b) Reliability- the extent to which a program can be expected to perform its intended function with the required precision (deals with failures to provide service)
c) Efficiency- the amount (the less, the better) of computing resources (hardware) and code required to perform its function
d) Integrity- the extent to which access to software or data by unauthorized persons can be controlled
e) Usability- the (smaller) effort required to learn and operate the program, to prepare its input and output, and to train new users
2. Product Revision Factors:
a) Maintainability- the effort needed by users and maintenance personnel to identify the reasons for software failures, to correct the failures, and to verify the success of the corrections
b) Flexibility- the capabilities and effort (in man-days) required to support adaptive maintenance activities
c) Testability- the ease of testing an information system as well as its operation (helps the tester by providing predefined results)
3. Product Transition Factors:
a) Portability- the adaptation of a software system (using the same basic software) to other environments consisting of different hardware, different operating systems, or both
b) Reusability- the use of software modules originally designed for one project in a new software project currently being developed, and in future projects
c) Interoperability- creating interfaces with other software systems or with other equipment firmware (e.g. the firmware of production machinery and testing equipment interfaces with the production control software)

3. Software Quality Challenges
a) Measuring Quality
• Subjective measures of software quality may be viewed as little more than personal opinion
• General quality dimensions and factors are not adequate for assessing the quality of an application in concrete terms
How to solve:
• Project teams need to develop a set of targeted questions to assess the degree to which each application quality factor has been satisfied
• Software metrics represent indirect measures of some manifestation of quality and attempt to quantify the assessment of software quality
b) The Software Quality Dilemma
• If you produce a software system that has terrible quality, you lose because no one will want to buy it. If, on the other hand, you spend infinite time, extremely large effort, and huge sums of money to build the absolutely perfect piece of software, it will take so long to complete and be so expensive to produce that you will be out of business anyway.
• Either you missed the market window, or you simply exhausted all your resources.
How to solve:
• "Good enough" software: the magical middle ground where the product is good enough not to be rejected right away (e.g. during evaluation), but also not the object of so much perfectionism and so much work that it would take too long or cost too much to complete
c) "Good Enough" Software
• Delivers the high-quality functions and features that end users desire, but at the same time delivers other, more obscure or specialized functions and features that contain known bugs
• Risks: budget-conscious customers may be convinced to buy the first version, but the known defects expose the product to damage and can harm the company's reputation; as a result, the second version may never be delivered because of the product's damage and the company's bad reputation
d) Cost of Quality
Prevention costs: quality planning, formal technical reviews, test equipment, training
Internal failure costs: rework, repair, failure mode analysis
External failure costs: complaint resolution, product return and replacement, help line support, warranty work
How to solve:
• During the coding phase, the costs increase rapidly, so it is better to reduce costs during the requirements and design phases
e) Quality and Risk
• The importance of quality must be top-notch because people bet their jobs, their comforts, their safety, their entertainment, their decisions, and their very lives on computer software
• The effects of failure might include deaths (e.g. hospital systems)
f) Negligence and Liability
• A corporation hires a major software developer to build software to support the company, but by the time the system is delivered its quality is bad: the system is late, fails to deliver desired features and functions, is error-prone, and does not meet with customer approval. Litigation ensues.
g) Low Quality Software
• Increases risks for both developers and end users
• Characteristics of low-quality software: systems are delivered late, fail to deliver functionality, and do not meet customer expectations; litigation ensues
• Effects: the application is easier to hack, security risks increase once it is deployed, and it is liable to contain architectural flaws as well as implementation problems (bugs)
h) Impact of Management Decisions
• Estimation decisions: irrational delivery-date estimates cause teams to take short-cuts that can lead to reduced product quality
• Scheduling decisions: failing to pay attention to task dependencies when creating the project schedule
• Risk-oriented decisions: reacting to each crisis as it arises rather than building in mechanisms to monitor risks may result in products having reduced quality

4. Achieving Software Quality
a) Software engineering methods: good project management and solid engineering practice — understand the problem to be solved, be capable of creating a quality design that conforms to the problem requirements, and eliminate architectural flaws during design
b) Project management: explicit techniques for quality and change management
c) Quality control: a series of inspections, reviews, and tests used to ensure conformance of a work product to its specifications
d) Quality assurance: the auditing and reporting procedures used to provide management with the data needed to make proactive decisions

1. What is the testing process?
• Testing is the process of exercising a program with the specific intent of finding errors prior to delivery to the end user

Strategic Approach
• Effective testing begins with effective technical reviews (many errors will be eliminated before testing commences)
Who tests the software?
• Developer: understands the system, but will test "gently" and is driven by "delivery"
• Independent tester: must learn about the system, but will attempt to break it and is driven by quality
Verification & Validation (V&V)
• Verification: are we building the product right? Validation: are we building the right product?

Strategic Issues
• Specify product requirements in a quantifiable manner long before testing commences (provides a solid foundation for designing effective test cases)
• State testing objectives explicitly (the tester understands the goals)
• Understand the users of the software and develop a profile for each user category (cover the range of user interaction)
• Develop a testing plan that emphasizes "rapid cycle testing" (early detection)
• Build "robust" software that is designed to test itself
• Use effective technical reviews (evaluating the design, code, and other artifacts before testing) as a filter prior to testing (to catch issues early in the development process)
• Conduct technical reviews to assess the test strategy and the test cases themselves
• Develop a continuous-improvement approach for the testing process (regularly assess and evaluate the effectiveness of testing methodologies, tools, and processes)

2. Testing Strategy
• Exhaustive Testing: exercise every possible path and input combination (generally impractical for real programs)
• Selective Testing: exercise only selected, representative paths

3. Test Strategies for Conventional Software
1. Unit Testing
Driver & Stub (a minimal sketch follows below)
• Drivers: "calling" programs — dummy code used when the sub-modules are ready but the main module is not; used in the bottom-up testing approach
• Stubs: programs that act as a temporary replacement for a called module and give the same output as the actual product or software
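A minimal sketch of the idea, assuming a hypothetical unit under test (compute_discount) whose lower-level pricing module does not exist yet; the names here are illustrative, not from the original notes:

    # Stub: temporary replacement for the real pricing module;
    # returns a canned value so the caller can be unit-tested.
    def get_base_price_stub(item_id):
        return 100.0

    # Unit under test: calls the (stubbed) lower-level module.
    def compute_discount(item_id, rate, price_lookup=get_base_price_stub):
        return price_lookup(item_id) * (1.0 - rate)

    # Driver: "calling" program that feeds test inputs to the unit and
    # checks its outputs, standing in for the not-yet-ready main module.
    def driver():
        assert compute_discount("SKU-1", 0.2) == 80.0
        assert compute_discount("SKU-1", 0.0) == 100.0
        print("unit tests passed")

    if __name__ == "__main__":
        driver()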
2. Integration Testing
a) The "big bang" approach- all or most of the modules are integrated simultaneously, and the entire system is tested as a whole (for small projects)
b) An incremental construction strategy- integrating and testing the system in small, manageable increments (for large projects)
• Approaches: Top-Down Integration, Bottom-Up Integration, Sandwich Testing
3. Regression Testing
• Re-execution of some subset of tests that have already been conducted, to ensure that changes have not propagated unintended side effects
• Helps to ensure that changes (due to testing or for other reasons) do not introduce unintended behaviour or additional errors
4. Smoke Testing- to determine whether the software build is stable and ready for more comprehensive testing
• "Daily builds" are created for product software; the integration approach may be top-down or bottom-up
Steps:
• Software components that have been translated into code are integrated into a "build"
• A series of tests is designed to expose errors that will keep the build ("show stoppers") from properly performing its function
• The build is integrated with other builds, and the entire product (in its current form) is smoke tested daily

4. General Testing Criteria
• Interface integrity- internal and external module interfaces are tested as each module or cluster is added to the software
• Functional validity- to uncover functional defects in the software
• Information content- errors in local or global data structures
• Performance- verify that specified performance bounds are met

Object-Oriented Testing
• Begins by evaluating the correctness and consistency of the analysis and design models
• Useful because the same semantic constructs (e.g., classes, attributes, operations, messages) appear at the analysis, design, and code levels
• A problem in the definition of class attributes that is uncovered during analysis circumvents side effects that might occur if the problem were not discovered until design or code (or even the next iteration of analysis)
Three strategies:
• Thread-based testing- integrates the set of classes required to respond to one input or event
• Use-case based testing- integrates the set of classes required to respond to one use case
• Cluster testing- integrates the set of classes required to demonstrate one collaboration
Validation- uses conventional black-box methods; test case design draws on conventional methods, but also encompasses special features

WebApp Testing
• The content model- to uncover errors
• The interface model- to ensure that all use cases can be accommodated
• The design model- to uncover navigation errors
• The user interface- to uncover errors in presentation and/or navigation mechanics

MobileApp Testing
• User experience testing- ensuring the app meets stakeholder usability and accessibility expectations
• Device compatibility testing- testing on multiple devices
• Performance testing- testing non-functional requirements
• Connectivity testing- testing the ability of the app to connect reliably
• Security testing- ensuring the app meets stakeholder security expectations
• Testing-in-the-wild- testing the app on user devices in actual user environments
• Certification testing- the app meets the distribution standards

High Order Testing
• Validation testing- focuses on software requirements
• System testing- system integration
• Alpha/Beta testing- customer usage
• Recovery testing- forces the software to fail in a variety of ways and verifies that recovery is properly performed
• Security testing- verifies protection mechanisms
• Stress testing- executes a system in a manner that demands resources in abnormal quantity, frequency, or volume
• Performance testing- the run-time performance of software

5. Debugging Techniques
Debugging: a diagnostic process
Debugging effort
• Time required to diagnose the symptom and determine the cause
• Time required to correct the error and conduct regression tests
Symptoms & causes
• Symptom- may be geographically separated from its cause, may disappear when another problem is fixed, and may be intermittent
• Cause- may be due to a system or compiler error, or to assumptions that everyone believes
Debugging techniques
• Brute force: let the computer find the error (apply when other techniques fail)
• Backtracking: manually backtrack from the symptom until the cause is found
• Cause elimination: data related to the error is organized to isolate potential causes; a hypothesis is created and tests are conducted to eliminate each potential cause
  - Induction
  - Deduction
• Automatic debugging
Correcting the error
1. Is the cause of the bug reproduced in another part of the program?
• In many situations, a program defect is caused by an erroneous pattern of logic that may be reproduced elsewhere
2. What "next bug" might be introduced by the fix I am about to make?
• Before the correction is made, the source code should be evaluated to assess the coupling of logic and data structures
3. What could we have done to prevent this bug in the first place?
• Correct the process as well as the product: the bug will be removed from the current program and may be eliminated from all future programs
• Think before you act to correct
• Use tools to gain additional insight
• Get help from someone else
• After correcting the bug, use regression testing to uncover any side effects

1. Testing Fundamentals
• Operability- the software operates cleanly (the better it works, the more efficiently it can be tested)
• Observability- the results of each test case are readily observed
• Controllability- testing can be automated and optimized
• Decomposability- testing can be targeted (independent modules allow problems to be isolated, tested independently, and retested)
• Simplicity- reduce complex architecture and logic to simplify tests (functional, structural, code simplicity)
• Stability- few changes are requested during testing
• Understandability- the more information we have, the smarter we test

Good Test
• High probability of finding an error- requires the right mindset (how might the software fail?)
• Not redundant- testing time and resources are limited
• Best of breed- choose the test with the highest likelihood of uncovering an error
• Neither too simple nor too complex- each test case should focus on testing only one criterion

2. Test Case Design
Objective: to uncover errors
Criteria: in a complete manner
Constraint: with a minimum of effort and time

3. White-box Testing
• Goal: to ensure that all statements and conditions have been executed at least once
• Basis path testing works from the program's flow graph (not necessarily a flow chart)
Steps (a small worked example follows these steps):
1. Compute the cyclomatic complexity V(G) (the higher the V(G), the higher the probability of errors)
• Formula 1: V(G) = number of simple decisions (diamonds) + 1, OR
• Formula 2: V(G) = number of enclosed areas + 1
(In the worked example's flow graph, V(G) = 4.)
2. Derive the independent paths: since V(G) = 4, there are 4 independent paths
3. Derive test cases: prepare test cases that will force execution of each path in the basis set
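A small sanity check on the two formulas, using a made-up flow graph (not the figure from the original notes) for a module with three independent if-statements; E - N + 2 is the standard graph-theoretic form of V(G), added here as an assumption for cross-checking:

    # Hypothetical flow graph: nodes are numbered, edges are (from, to) pairs.
    edges = [(1, 2), (1, 3), (2, 3), (3, 4), (3, 5), (4, 5), (5, 6), (5, 7), (6, 7)]
    nodes = {n for e in edges for n in e}

    # Formula 1: predicate (decision) nodes have two outgoing edges; V(G) = decisions + 1
    out_degree = {}
    for src, _ in edges:
        out_degree[src] = out_degree.get(src, 0) + 1
    decisions = sum(1 for d in out_degree.values() if d == 2)
    v_decisions = decisions + 1

    # Graph-theoretic form: V(G) = E - N + 2
    v_edges = len(edges) - len(nodes) + 2

    print(v_decisions, v_edges)  # both print 4 -> a basis set of 4 paths to cover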
Graph Matrices (another white-box technique)
Steps:
1. For each row of the connection matrix, add up the entries (connections) and subtract 1
2. Sum the row totals and add 1: the result is the cyclomatic complexity

Control Structure Testing (another white-box technique)
1. Data Flow Testing
• Selects test paths of a program according to the locations of definitions and uses of variables in the program
Steps (a small sketch follows these steps):
1. Statement numbers: assume each statement in a program is assigned a unique statement number (e.g. S1)
2. Definitions (DEF) and uses (USE):
DEF(S) = {X | statement S contains a definition of X}
USE(S) = {X | statement S contains a use of X}
e.g. DEF(S1) = {a}; if S3 is c = a + b, then USE(S3) = {a, b} and DEF(S3) = {c}
3. Definition-use (DU) chain: a DU chain of variable X has the form [X, S, S'], where S and S' are statement numbers, X is in DEF(S) and in USE(S'), and the definition of X in statement S is live at statement S'
e.g. [a, S1, S3]: the value of a is defined in S1 and used in S3
4. Create the test cases: ensure that the flow of data from definitions to uses is correct
e.g. Test Case 1: verify that the calculation of c (S3) is correct based on the values of a and b
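To make the DEF/USE bookkeeping concrete, here is a minimal sketch; the three-statement program (S1: a = 2, S2: b = 3, S3: c = a + b) is a hypothetical stand-in consistent with the example above, not code from the original notes:

    DEF = {"S1": {"a"}, "S2": {"b"}, "S3": {"c"}}
    USE = {"S1": set(), "S2": set(), "S3": {"a", "b"}}

    # Enumerate DU chains [X, S, S']: X defined in S and used in a later S'.
    # (In a straight-line program every later use is reached by the definition,
    # so the liveness condition holds trivially.)
    order = ["S1", "S2", "S3"]
    du_chains = []
    for i, s in enumerate(order):
        for x in DEF[s]:
            for s2 in order[i + 1:]:
                if x in USE[s2]:
                    du_chains.append((x, s, s2))

    print(du_chains)  # [('a', 'S1', 'S3'), ('b', 'S2', 'S3')]
    # Data flow testing then requires at least one test per chain,
    # e.g. Test Case 1: run the program and check that c == a + b at S3.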
2. Condition Testing
• A test case design method that exercises the logical conditions contained in a program module

3. Loop Testing
1. Simple Loops: test cases (a short sketch appears after the loop-testing list)
• Skip the loop entirely
• Only one pass through the loop
• Two passes through the loop
• m passes through the loop, m < n
• (n-1), n, and (n+1) passes through the loop
where n is the maximum number of allowable passes
2. Nested Loops: test cases
• Start at the innermost loop; set all outer loops to their minimum iteration parameter values
• Test the min+1, typical, max-1 and max values for the innermost loop, while holding the outer loops at their minimum values
• Move out one loop and set it up as in the previous step, holding all other loops at typical values; continue until the outermost loop has been tested
3. Concatenated Loops: test cases
• If the loops are independent of one another, treat each as a simple loop; otherwise (for example, when the final loop-counter value of loop 1 is used to initialize loop 2), treat them as nested loops
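A small sketch of the simple-loop schedule, assuming a hypothetical maximum of n = 5 passes and a trivial unit under test (sum of the first k integers); the chosen values of k mirror the bullet list above:

    def sum_first(k):
        # Trivial unit under test containing a simple loop.
        total = 0
        for i in range(1, k + 1):
            total += i
        return total

    n = 5  # hypothetical maximum number of allowable passes
    # Simple-loop guidance: 0 passes, 1, 2, some m < n, then n-1, n and n+1 passes.
    for k in [0, 1, 2, 3, n - 1, n, n + 1]:
        expected = k * (k + 1) // 2
        assert sum_first(k) == expected, f"failed at {k} passes"
    print("simple-loop test cases passed")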
4. Black-box Testing
Advantages of BBT:
1. More effective on larger units of code than white-box testing.
2. Balanced and unbiased, as testers and developers are independent of each other.
3. Tests can be developed early, when the requirement specifications are ready.
4. Helps identify vagueness and inconsistencies in functional specifications.
Disadvantages of BBT:
1. Ignores important functional properties not in the requirements specifications.
2. May overlook program faults related to design and implementation details.
3. Does not provide a basis for a coverage measure in terms of data states.
4. Logic errors may not be effectively identified.

Black-box Testing Technique - Equivalence Partitioning
• Goal: to reduce the number of test cases to the necessary minimum and to select the right test cases to cover all possible scenarios
• Divides the input domain of a program into classes of data (equivalence classes) that have a similar effect on the function of the program (a representative test case can be derived from each equivalence class)
Equivalence Class
• A set of inputs that the program treats identically when the program is tested
• Represents certain conditions on the input domain
• Input conditions are typically used to partition the input domain into equivalence classes for the purpose of selecting inputs
• During the equivalence partitioning process, both valid (i.e. return a non-error value) and invalid (i.e. return an erroneous value) inputs are considered

Test Case Generation - equivalence partitioning technique
Guidelines (see the sketch after this list):
1. If an input condition specifies a range of values, then identify one valid equivalence class and two invalid equivalence classes.
E.g.: for the range 1..99, three equivalence classes are derived:
• One valid equivalence class: {1, 2, ..., 99}
• Two invalid equivalence classes: {x | x < 1}, {x | x > 99}
2. If an input condition specifies a specific value, then identify one valid equivalence class and two invalid equivalence classes.
E.g.: for the specific value 4055, three equivalence classes are derived:
• One valid equivalence class: {4055}
• Two invalid equivalence classes: {x | x < 4055}, {x | x > 4055}
3. If an input condition specifies a set of values, then identify one valid equivalence class and one invalid equivalence class.
E.g.: for the set M = {a, y, n, z}, two equivalence classes are derived:
• One valid equivalence class: {x | x ∈ M}
• One invalid equivalence class: {x | x ∉ M}
4. If an input condition specifies a Boolean value, then identify one valid equivalence class and one invalid equivalence class.
• One valid equivalence class: True
• One invalid equivalence class: False
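A minimal sketch of guideline 1, assuming a hypothetical validator that accepts ages in the range 1..99; one representative value is drawn from each equivalence class:

    def accept_age(age):
        # Hypothetical unit under test: valid range is 1..99.
        return 1 <= age <= 99

    # One representative test value per equivalence class.
    representatives = {
        "valid {1..99}":   (50, True),
        "invalid {x < 1}": (0, False),
        "invalid {x > 99}": (150, False),
    }
    for name, (value, expected) in representatives.items():
        assert accept_age(value) == expected, name
    print("one test per equivalence class passed")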
Boundary Value Analysis (BVA)
• Complements equivalence partitioning
• Rather than focusing solely on input conditions, BVA derives test cases from the output domain as well
• Requires selection of test cases that exercise bounding values
Partitions (example for a valid range of 1-1000):
• Invalid partition: 0 | Valid partition: 1-1000 | Invalid partition: 1001 and above
Boundary values (a short sketch follows):
• Valid partition: 1, 1000
• Invalid partition: 0, 1001
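Continuing the 1-1000 example, a minimal sketch that checks the four boundary values; accept_quantity is a hypothetical stand-in for the unit under test:

    def accept_quantity(q):
        # Hypothetical unit under test: valid range is 1..1000.
        return 1 <= q <= 1000

    # Boundary values: the edges of the valid partition (1, 1000)
    # and the nearest invalid values (0, 1001).
    for value, expected in [(1, True), (1000, True), (0, False), (1001, False)]:
        assert accept_quantity(value) == expected, value
    print("boundary value tests passed")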
1. Introduction to measures, metrics and indicators
• Measures: provide a quantitative indication of the extent, amount, dimension, capacity, or size of some attribute of a product or process
• Metrics: a quantitative measure of the degree to which a system, component, or process possesses a given attribute
• Indicators: a metric or combination of metrics that provide insight into the software process, a software project, or the product itself

Measurement Principles
• The objectives of measurement should be established before data collection begins
• Each technical metric should be defined in an unambiguous manner
• Metrics should be derived based on a theory that is valid for the domain of application (e.g., metrics for design should draw upon basic design concepts and principles and attempt to provide an indication of the presence of an attribute that is deemed desirable)
• Metrics should be tailored to best accommodate specific products and processes

Objectives of Software Measurement
• Identify that a project is healthy (the status of the project, product, processes and resources)
• Document project trends (record the characteristics of projects)
• Gain control of the software engineering process
• Understand what is happening during development and maintenance
• Improve processes, projects and products in the future

Measurement Process
1. Formulation: derivation of software measures and metrics appropriate for the representation of the software
2. Collection: accumulation of the data required to derive the formulated metrics
3. Analysis: computation of metrics and the application of mathematical tools
4. Interpretation: evaluation of metric results in an effort to gain insight into the quality of the representation
5. Feedback: recommendations derived from the interpretation of product metrics, transmitted to the software team

Goal-oriented Software Measurement - Formulation
Goal/Question/Metric Paradigm (GQM)
1. Explicit measurement goal: specific to the process activity or product
2. Define questions: must be answered in order to achieve the goal
3. Formulate metrics: metrics that help to answer these questions
Goal Definition Template (a filled-in example follows):
• Analyse {the name of the activity or attribute to be measured}
• for the purpose of {the overall objective of the analysis}
• with respect to {the aspect of the activity or attribute that is considered}
• from the viewpoint of {the people who have an interest in the measurement}
• in the context of {the environment in which the measurement takes place}
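As an illustration only (this instantiation is not from the original notes), the template might be filled in for a code-review measurement goal as follows:
• Analyse {the code review process}
• for the purpose of {improving defect detection before testing begins}
• with respect to {the number and severity of defects found per review hour}
• from the viewpoint of {the development team and its quality manager}
• in the context of {a small in-house web development project}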
Metrics Attributes
• Simple and computable: it should be easy to learn how to derive the metric, and its computation should not demand an excessive amount of effort or time
• Empirically and intuitively persuasive: the metric should satisfy the engineer's intuitive notions about the product attribute
• Consistent and objective: the metric should always yield results that are unambiguous
• Consistent in its use of units and dimensions: the mathematical computation of the metric should use measures that do not lead to bizarre combinations of units
• Programming language independent: metrics should be based on the analysis model, the design model, or the structure of the program itself
• Effective mechanism for quality feedback: the metric should provide a software engineer with information that can lead to a higher-quality end product

Collection and Analysis Principles
• Data collection and analysis should be automated whenever possible
• Valid statistical techniques should be applied to establish relationships between internal product attributes and external quality characteristics
• Interpretative guidelines and recommendations should be established for each metric

2. Metrics for Products
1. Metrics for the Requirements Model
Function-Based Metrics
• Function-based metrics: use the function point as a normalizing factor or as a measure of the size of the specification
• Specification metrics: used as an indication of quality by measuring the number of requirements by type
• The function point metric (FP) can be used effectively as a means for measuring the functionality delivered by a system
• Function points are derived using an empirical relationship based on countable (direct) measures of the software's information domain and assessments of software complexity

2. Metrics for Design
a) Architectural design metrics (one common formulation of the complexity functions is sketched after this list)
• Structural complexity = g (fan-out)
• Data complexity = f (input & output variables, fan-out)
• System complexity = h (structural & data complexity)
• HK metric (Henry-Kafura): architectural complexity as a function of fan-in and fan-out
• Morphology metrics: a function of the number of modules and the number of interfaces between modules
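For reference, the structural/data/system complexity functions above are usually attributed to Card and Glass; a commonly cited form (an assumption added here, since the notes only name the functions g, f, and h) is, for module i:
Structural complexity: S(i) = fan_out(i)^2
Data complexity: D(i) = v(i) / (fan_out(i) + 1), where v(i) = number of input and output variables of module i
System complexity: C(i) = S(i) + D(i)
E.g. a module with fan_out = 3 and v = 8 gives S = 9, D = 8/4 = 2, C = 11.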
b) Metrics for object-oriented design
E.g.:
• Depth of the Inheritance Tree (DIT)
• Number of Children (NOC)
• Lack of Cohesion in Methods (LCOM)
• Coupling Between Object Classes (CBO)
Class-Oriented Metrics
• Weighted methods per class
• Depth of the inheritance tree
Operation-Oriented Metrics
• Average operation size
• Operation complexity
• Average number of parameters per operation
Component-Level Design Metrics
• Cohesion metrics: a function of data objects and the locus of their definition
• Coupling metrics: a measure of how many dependencies a single class has (a function of input and output parameters, global variables, and modules called)
Class Coupling Risk Evaluation:
• > 30 (on member level) AND > 80 (on type level): dependencies are critical
• 10 to 30 (on member level) AND 10 to 80 (on type level): dependencies are still okay
• 00 to 09: dependencies are good
• Complexity metrics: hundreds have been proposed (e.g., cyclomatic complexity)
Cyclomatic Complexity Risk Evaluation:
• > 50: very high-risk code, which is untestable
• 21 to 50: risky code that includes more complex logic
• 11 to 20: moderate risk
• 1 to 10: a simple program, without very much risk
c) Interface design metrics
• Layout appropriateness: a function of the layout entities, their geographic position, and the "cost" of making transitions among entities
d) Design metrics for web and mobile apps
• Does the user interface promote usability?
• Are the aesthetics of the app appropriate for the application domain and pleasing to the user?

3. Metrics for coding
• Halstead's Software Science: a comprehensive collection of metrics, all predicated on the number (count and occurrence) of operators and operands within a component or program (the basic definitions are sketched below)
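As a reminder of the core quantities (these are the standard Halstead definitions, added here for convenience rather than taken from the original notes):
n1 = number of distinct operators, n2 = number of distinct operands
N1 = total occurrences of operators, N2 = total occurrences of operands
Program vocabulary: n = n1 + n2
Program length: N = N1 + N2
Volume: V = N * log2(n)
E.g. n1 = 10, n2 = 15 (n = 25), N1 = 40, N2 = 60 (N = 100): V = 100 * log2(25) ≈ 464.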
4. Metrics for testing
• Halstead measures: metrics derived from the Halstead measures can be used to estimate testing effort
• Design metrics that have a direct influence on the "testability" of an OO system:
  - Lack of cohesion in methods (LCOM)
  - Percent public and protected (PAP)
  - Public access to data members (PAD)
  - Number of root classes (NOR)
  - Fan-in (FIN)
  - Number of children (NOC) and depth of the inheritance tree (DIT)

5. Metrics for maintenance
a) Maintenance
• Software maturity index (SMI): provides an indication of the stability of a software product (based on changes that occur for each release of the product)
• As SMI approaches 1.0, the product begins to stabilize
• SMI = [MT - (Fa + Fc + Fd)] / MT (a worked example follows)
• MT = the number of modules in the current release
• Fc = the number of modules in the current release that have been changed
• Fa = the number of modules in the current release that have been added
• Fd = the number of modules from the preceding release that were deleted in the current release
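A quick worked example with made-up numbers: suppose the current release has MT = 120 modules, of which Fc = 8 were changed, Fa = 4 were added, and Fd = 2 modules from the preceding release were deleted:
SMI = [120 - (4 + 8 + 2)] / 120 = 106 / 120 ≈ 0.88
A value this close to 1.0 suggests the product is beginning to stabilize.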
