ACTIVITY-BASED TECHNIQUES
Test Design Techniques
[18 April 2019]
The BBST Courses are created and developed by Cem Kaner, J.D., Ph.D.,
Professor of Software Engineering at Florida Institute of Technology.
Contents
• Last lecture…
• Terminology
• Activity-based Techniques
• Use case testing;
• Scenario-based testing;
• Other activity-based techniques
• Guerilla testing;
• All-pairs testing;
• Random testing;
• Installation testing;
• Regression testing;
• Long sequence testing;
• Dumb monkey testing;
• Load testing;
• Performance testing.
Last Lecture…
• Topics covered in Lecture 05 and Lecture 06:
• Risk-based techniques:
• Part I
• Risk
• Risk Approaches to Software Testing
• Guidewords. HTSM
• Risk Catalogs
• Project-level Risks
• Specific Risk-based Techniques
• Quick-tests
• Part II
• Specific Risk-based Techniques
• [Quick-tests]
• Constraints
• Logical expressions
• Stress testing
• Load testing
• Performance testing
• History-based testing
• Risk-based multivariable testing
• Usability testing
• Configuration / compatibility testing
• Interoperability testing
• Long sequence regression
TDTs Taxonomy
• The main test design techniques are:
• Black-box approach:
• Coverage-based techniques;
• Tester-based techniques;
• Risk-based techniques;
• Activity-based techniques;
• Evaluation-based techniques;
• Desired result techniques;
• White-box approach:
• Glass-box techniques.
Test Case. Attributes
• A test case is
• a question you ask the program. [BBST2010]
• we are more interested in the informational goal, i.e., the information we gain from running it; e.g., whether the program will pass or fail the test.
• Attributes of relevant (good) test cases:
• most of the activity-based techniques can be classified in other ways as well – almost every test design technique requires addressing some activities to some degree.
Activity-based Test Techniques. Focus
Activity-based techniques focus on how to perform testing. This is why these techniques most closely match the classical notion of a technique.
• a technique may be classified depending on what the tester has in mind when he uses it.
• E.g.: long-sequence (automation) testing
• is activity-oriented because the tester thinks first about the activities to perform:
• programming, maintenance and developing diagnostics;
• types of work required to create and run the tests;
• is risk-oriented because the tests are especially suited to hunt for specific types of bugs that will not show up in normal testing.
Activity-based Test Design Techniques
• Activity-based techniques [11 techniques]:
• Use case testing;
• Scenario testing;
• Guerilla testing;
• All-pairs testing;
• Random testing;
• Installation testing;
• Regression testing;
• Long sequence testing;
• Dumb monkey testing;
• Load testing;
• Performance testing.
Use Cases. Definition
• A Use Case specifies
• a sequence of actions, including variants, that the system can perform and that yields an
observable result of value to a particular actor [Jacobson1995];
• a system’s behavior in response to a request from an actor which might be a human or
another system;
• the intended behavior, i.e., how the system should work to achieve a goal, but not the
motivation of the actor or the consequences for the actor if the request fails;
• the actor’s steps and system behavior on a sequence diagram;
• “happy path/flow” is the sequence diagram that shows the simplest set of steps, i.e.,
sequence of actions, that lead to success;
• other paths show complications, some leading to failures;
Use Cases. Details
• Concepts used with use cases [Jacobson1995]:
• An actor is
• a person, process or external system that interacts with your product;
• A goal is
• reaching a desired state of the system, i.e., the observable result of value;
• An action is
• a change of state and is realized by sending a message to an object or modifying a value in an attribute;
• something the actor does as part of the effort to achieve the goal;
• Sequences of actions are
• a specific flow of events through the system;
• many different flows are possible and many of them may be very similar;
• to make a use-case model understandable, similar flows of events are grouped into a single use case;
• A sequence diagram is
• a diagram that shows actions and states of a use case, emphasizing the ordering of the actions in time.
Use Case Testing. Definition
• Use-Case Testing consists of
• modeling sequence diagrams and testing down their paths;
Activity: The tester creates sequence diagrams (behavior models) and runs tests that
trace down the paths of the diagrams.
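To make the activity concrete, here is a minimal pytest sketch (not part of the lecture) that traces a hypothetical "transfer money" use case: one test walks the happy path, one walks a failure path, and the actor's goal serves as the oracle. `Bank` and `InsufficientFunds` are assumed names for the system under test.

```python
import pytest

# Hypothetical system under test: Bank, InsufficientFunds.

def test_transfer_happy_path():
    bank = Bank()
    payer = bank.open_account(balance=100)   # precondition
    payee = bank.open_account(balance=0)
    bank.login(payer)                        # step 1: the actor authenticates
    bank.transfer(payer, payee, amount=40)   # step 2: the actor requests a transfer
    assert bank.balance(payer) == 60         # goal achieved: the observable
    assert bank.balance(payee) == 40         # result of value is the oracle

def test_transfer_extension_insufficient_funds():
    # An alternate path (extension) that should lead to handled failure.
    bank = Bank()
    payer = bank.open_account(balance=10)
    payee = bank.open_account(balance=0)
    bank.login(payer)
    with pytest.raises(InsufficientFunds):
        bank.transfer(payer, payee, amount=40)
    assert bank.balance(payer) == 10         # no partial transfer happened
```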
Use Case Testing. Proper Use Steps
• A working pattern to describe a full set of use cases [Cockburn2001]:
1. brainstorm and list the primary actors;
2. brainstorm and exhaustively list user goals for the system;
3. capture the summary goals (higher-level goals, which include several sub-goals);
• these capture the meaningful benefits offered by the system;
4. select one use case to expand;
• capture stakeholders and interests, preconditions and guarantees;
• write the main success scenario, i.e., a sequence diagram;
• brainstorm and exhaustively list extension conditions:
• alternate sequence diagrams to achieve the same result, or
• sequence diagrams that lead to failure.
5. repeat step 4. for each distinct use case identified.
Use Case Testing. Benefits
• Use-case testing encourages the tester:
• to identify the actors in the system: human and/or other processes or systems;
• to inventory the possible actor goals;
• to identify the benefits of the system by identifying the summary goals;
• to develop some method, e.g., sequence diagrams, outlines, textual descriptions, for describing a
sequence of actions and system responses that ultimately lead to a result;
• to develop variations of a basic sequence, to create meaningful new tests.
• the tester goes beyond features or individual specification-claims ==> identifies meaningful sequences;
• the use case contains its own oracle:
• if the sequence should lead to achievement of some goal, but it does not actually achieve it ==> the
program is broken;
• if the sequence should lead to error handling, but it does not, or the error is not handled well ==> there is a failure.
Use Case Testing. Downsides
• Use-case testing brings some drawbacks:
• the approach abstracts out the human element, i.e., does not consider the human as a relevant factor;
• because the actor may not be human, actors are described in ways that are equally suitable for things
that have no consciousness;
• human goals go beyond a desired program state; they are more complex;
• for humans:
• goals are intimately connected with motivation:
• Why does this person want to achieve this goal? How important is it to them? Why?
• failure to achieve a goal causes consequences, including emotions:
• How upset will the user be if this fails? Why?
• understanding the human element might be irrelevant for sequence diagrams, but it proves to be valuable:
• to prioritize the tests;
• to combine goals in human-meaningful ways;
• to interpret and to explain the results.
Use Case Testing vs Tours and Function Testing
• a use case-based approach to testing provides a good starting point (like tours) for testers if
they don’t know much about the application;
• advantages of using use case testing:
• provides a structure for tracing through the application by building diagram sequences;
• it is as simple as function testing, but it exercises several functions together.
Use Case Testing and Scenarios. RUP Approach
• The Rational Unified Process (RUP) defines scenarios in terms of use cases [Collard1999]:
• a scenario is an instantiation of a use case, i.e., it specifies the values of the use case data to
create one instance of the use case;
• a RUP-scenario traces one of the paths through the use case;
• if the tester actually executes the path, he runs a scenario test;
• thorough use case-based testing involves
• tracing through all (most) of the paths
• through all (most) of the use cases, paying special attention to failure cases.
Even though use case-based testing is useful in its own right, as a basic approach to scenario testing, the RUP view misses the deep value of what we know as scenario analysis.
Scenario. Definition
• A scenario is
• a hypothetical story about how someone uses a program;
Activity: Creating a story (or a related-family of stories) and a test that expresses it.
Scenario-based Testing. Attributes
A scenario is a coherent story: credible, motivating, complex, and easy to evaluate.
• An ideal scenario test has several characteristics:
1. the test is based on a coherent story about how the program is used, including goals and
emotions of the involved people.
2. the story is credible; stakeholders will believe that something like it probably will happen;
3. the story is motivating; a stakeholder with influence will advocate for fixing a program that
failed this test;
4. the story involves complexity: a complex use of the program or a complex environment or
a complex set of data;
5. test results are easy to evaluate; this is important for scenarios because they are complex.
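As a small illustration (not from the lecture), a pytest sketch of a scenario test against a hypothetical job-search application; the comments map the steps to the characteristics above. `JobSearchApp` and its entire API are assumed names.

```python
# Hypothetical system under test: JobSearchApp and its API.

def test_rejected_user_updates_resume_and_reapplies():
    app = JobSearchApp()
    # Coherent, credible, motivating story: Maria was rejected last month,
    # updates her résumé, and re-applies to an ad she saved earlier.
    maria = app.login("maria")                     # a returning user (credible)
    resume = maria.resumes.latest()
    resume.add_position("QA Engineer, 2018-2019")  # complexity: modifies data
    ad = maria.saved_ads.first()                   # complexity: reuses old state
    application = maria.apply(ad, resume)
    # Easy to evaluate: one observable outcome a stakeholder cares about.
    assert application.status == "submitted"
```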
Scenario-based Testing. Benefits (1)
• scenario-based thinking was popular in the 1950s in military planning;
• later it was adopted in the commercial field, where it proved useful for imagining crisis scenarios.
• Benefits of using scenarios [Kahn1967]:
• call attention to the larger range of possibilities that must be considered in the analysis of the future;
• dramatize and illustrate the possibilities;
• force analysts to deal with details and dynamics that they might avoid if they focus on abstract
considerations;
• illuminate interactions of psychological, social, economic, cultural, political, and military factors, including
the influence of individual personalities, in a form that permits the comprehension of many interacting
elements at once;
• consider alternative possible outcomes of certain real past and present events.
Scenarios help us imagine complexity (people, society) and work with it (how entities interact with one another). This complexity is missing from sequence diagrams, i.e., from working with use cases.
Scenario-based Testing. Benefits (2)
• many test techniques tell the tester how the program will behave in the first few days that
someone uses it;
• Scenario-based testing ensures that good scenario tests
• go beyond the simple uses of the program to ask whether the program is delivering the
benefits it should deliver;
• often give the tester insight into frustrations that an experienced user will face – someone
who has used the program for a few months and is now trying to do significant work with the
program.
Scenario-based Testing. Combination Testing Usage
• there are three approaches to combination testing:
• Mechanical (or procedural):
• the tester uses a routine procedure to determine a good set of tests;
• E.g.: random combinations and all-pairs;
• Risk-based:
• the tester combines test values (the values of each variable) based on perceived risks associated with
noteworthy combinations;
• E.g.: quick-tests and stress testing;
• Scenario-based:
• the experienced tester combines meaningful test values on the basis of interesting stories created for
the combinations important to the experienced user;
• E.g.: scenario-based testing.
Scenario Types (2)
4. Interview users about famous challenges and failures of the old system;
• study users or workflows.
Scenario Types (3)
5. Look at the specific transactions that people try to complete;
• E.g.:
• opening a bank account;
• sending a message.
• the tester can design scenarios (one, or probably more) for each transaction, plus scenarios for larger tasks
that are composed of several transactions;
• in a transaction processing system a transaction is an indivisible operation, i.e., the system completes it or
cancels it.
Scenario Types (4)
6. Work with sequences;
• people (or the system) typically do tasks (like Task X) in an order;
• What are the most common orders (sequences) of subtasks in achieving X?
• it might be useful to map Task X with a behavior diagram;
• this is the closest analog to the use-case based scenario;
• a use-case test is transformed into a scenario by adding the human details to it;
• goal and alternate sequences are added;
• motivation and consequences should be considered as well.
Scenario Types (5)
7. Consider disfavored users;
• question to ask: How do they want to abuse your system?
• disfavored users are humans too;
• the tester should analyze their interests, objectives, capabilities, and potential opportunities;
• they are the source of lots of scenarios;
• E.g.: hacking into the web application;
• the system is successful if it blocks bad actions of disfavored users, rather than enabling them.
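A minimal sketch of turning disfavored-user thinking into tests (hypothetical `WebApp` API and exception names): each test passes when the system blocks the abuse.

```python
import pytest

# Hypothetical system under test: WebApp, AuthenticationError, PermissionDenied.
INJECTION = "anything' OR '1'='1"

def test_login_rejects_sql_injection():
    app = WebApp()
    with pytest.raises(AuthenticationError):
        app.login(username=INJECTION, password=INJECTION)

def test_user_cannot_read_another_users_resume():
    app = WebApp()
    attacker = app.login("mallory", "correct-password")
    with pytest.raises(PermissionDenied):
        attacker.fetch_resume(owner="maria")   # Mallory probes Maria's data
```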
Scenario Types (6)
8. What forms do the users work with?
• users work with these forms by performing operations, e.g., read, write, modify, etc.;
• E.g.: for a program that helps people find a job, some forms to fill in would be:
• several standard résumé templates;
• automatically filling in fields in employer-site or recruiter-site forms;
• any form may be the source of various scenarios.
Scenario Types (7)
9. Write life histories for objects in the system.
• questions to ask:
• How was the object created, what happens to it, how is it used or modified, what does it interact with,
when is it destroyed or discarded?
• similar to creating a list of possible users and basing the scenarios on who they are and what they do with the system, the tester can create a list of objects and base his scenarios on what they are, why they are used, and what the system can do with them;
• E.g.: for a program that helps people find a job, the list of objects for which a life history can be built would be:
• Resumes;
• Contacts;
• Downloaded ads;
• Links to network sites;
• Emails.
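For instance, a life-history test for the résumé object might look like the following sketch (hypothetical `JobSearchApp` API): create the object, modify it, make it interact with another object, use it, then discard it.

```python
# Hypothetical system under test: JobSearchApp and its API.

def test_resume_life_history():
    app = JobSearchApp()
    user = app.login("maria")
    resume = user.create_resume(template="standard")   # created
    resume.fill(name="Maria", skills=["testing"])      # modified
    ad = user.download_ad("QA Engineer")               # interacts with an ad
    user.apply(ad, resume)                             # used
    resume.archive()                                   # discarded
    assert resume.status == "archived"
    assert ad.applications.count() == 1                # its effects persist
```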
Scenario Types (8)
10. List system events;
• questions to ask: How does the system handle them?
• a system event is
• any occurrence (anything that can happen) that the system is designed to respond to;
• anything that causes an interrupt that the system has to respond to;
• E.g.: for a program that helps people find a job, some business events would be:
• going to an interview;
• sending a résumé;
• getting called by a prospective employer.
Scenario Types (9)
11. List special events;
• A special event is
• a system event that, considering the contract, is handled in a different way than usual;
• something that does not happen very often, but might cause the system to work differently when it
happens;
• the system might change how it works or do special processing in the context of a special event;
• they are predictable but unusual occurrences;
• E.g.:
• last (first) day of the quarter or of the fiscal or calendar year;
• while installing or upgrading the software;
• holidays.
Scenario Types (10)
12. List benefits and create end-to-end tasks to check them.
• questions to ask:
• What benefits is the system supposed to provide?
• the tester should not rely only on an official list of benefits;
• he should ask stakeholders what they think the benefits of the system are supposed to be;
• look for misunderstandings and conflicts among the stakeholders;
• the same system offers different benefits to different people;
Test Suite Scenario. Attributes (1)
1. Create a Coherent Story
• the story has to run from start to finish in a way that makes sense;
• an arbitrary sequence of actions is not a scenario, as there is no motivation, no goal;
• the expected result of the story is the result the tester expects if the program is working correctly.
Test Suite Scenario. Attributes (2)
2. Create a Credible Story
• ask the scenario questions:
• What would make a story about this be credible?
• When would this come up, or be used?
• Who would use it?
• What would they be trying to achieve?
• Competitor examples?
• Spec/support/history examples?
• Developing a credible story means that people who read the story should believe the program will run into a situation like this. Even if it does not happen very often, it will happen.
Test Suite Scenario. Attributes (3)
3. Create a Motivating Story
• given an item in the list, ask scenario-building questions:
• What is important (motivating) about this?
• Why do people care about it? Who would care about it?
• What does it relate to that modifies its importance?
• What gets impacted if it fails? What does failure look like?
• What are the consequences of failure?
• The story is motivating if someone important (specific stakeholder) thinks the program should pass the test.
• scenarios are powerful tools for building a case that a bug should be fixed;
• the tester should make the problem report meaningful to a powerful stakeholder who should care about
this particular failure;
• inability to develop a strong scenario around a failure may be a signal that the failure is not well understood or
not important.
Test Suite Scenario. Attributes (4)
4. Create a Complex Story
• given an item in the list, ask the scenario questions:
• How to increase complexity?
• What does this naturally combine with?
• What benefits involve this and what collection of things would be required to achieve each?
• Can the tester make it bigger? Do it more? Work with richer data? (What boundaries are involved?)
• Will any effects of the scenario persist, affecting later behavior of the program?
• meaningful complexity of the story/scenario consists of using:
• many related features and actions;
• many data values.
Test Suite Scenario. Attributes (5)
4. Create a Complex Story – Handling Complexity in Scenarios
• Each feature is tested in isolation (or in small mechanical clusters of features) before testing it inside scenarios;
• it allows reaching straightforward failures sooner and more cheaply;
• weak designs are exposed better, and more cheaply, by function tests than by more expensive scenarios;
• combination failures are harder to troubleshoot; simple failures that appear first inside a combination can be
unnecessarily expensive to troubleshoot;
• scenarios are prone to blocking bugs: a broken feature blocks running the rest of the test; once that feature
is fixed, the next broken feature blocks the test.
• Adding complexity to a scenario arbitrarily will not work; the story must still be coherent and credible;
• arbitrary combinations are reasonable for combination tests, not for scenarios.
Test Suite Scenario. Attributes (6)
5. Create an Easy-to-evaluate Test
• given an item in the list, ask the scenario questions:
• How to design an easy-to-evaluate test?
• self-verifying data sets?
• automatable partial oracles?
• known, predicted result?
• Evaluability is important because so many failures have been exposed by a good scenario but missed by the
tester.
• E.g.: summarized IBM data showed that over 30% of the bugs discovered had actually been exposed by tests run by testers, but the bugs were not noticed because it took too much time and attention to check the results closely enough to realize there was a bug;
• When designing complex tests, i.e., scenario-based testing, it is important to design them such that the failures
are obvious.
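One way to build a self-verifying data set is sketched below (the function names are illustrative): each record carries a checksum of its own content, so any test or tool can detect corruption without consulting a separate expected-results file.

```python
import hashlib

def make_record(payload: str) -> dict:
    # Embed a checksum of the content in the record itself.
    return {"payload": payload,
            "checksum": hashlib.sha256(payload.encode()).hexdigest()}

def verify_record(record: dict) -> bool:
    # Re-derive the checksum; a mismatch means the record was corrupted.
    return record["checksum"] == hashlib.sha256(
        record["payload"].encode()).hexdigest()

records = [make_record(f"resume-{i}") for i in range(1000)]
# ... push the records through the system under test ...
assert all(verify_record(r) for r in records), "a record was corrupted"
```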
Scenario-based Testing. Good Practices
• sketch the story, briefly; the tester does not have to write down the details of the setting and
motivation if he understands them;
• a good story needs to be developed in the tester's head first;
• the full story is written down only if a bug report is written;
• some skilled scenario testers add detail early;
• only write down the steps that are essential, i.e., steps the tester might forget or is likely to get wrong;
• the expected result is always the correct program behavior.
Scenario-based Testing and Testing Coverage
• in general, scenario-based testing cannot guarantee high code coverage.
• each line of inquiry is like a tour
• the tester could explore that line thoroughly to achieve a level of coverage;
• E.g.:
• system events;
• objects created by the system;
• required benefits;
• features;
• however, coverage-oriented testing often uses simpler tests;
• some old research results suggest that traditional black-box testing achieves less than 33% coverage of the lines of the program ==> testing misses about two-thirds of the code, mostly error handling.
Regression testing based on scenario tests might be less powerful and less efficient
than regression testing based on other techniques, e.g., function testing.
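To check such coverage claims on a concrete system, a scenario run can be measured with coverage.py; a minimal sketch, assuming a hypothetical `run_scenario` driver:

```python
import coverage

cov = coverage.Coverage(branch=True)   # measure statement and branch coverage
cov.start()
run_scenario()    # hypothetical: drives one scenario end to end
cov.stop()
cov.save()
cov.report()      # prints the per-file coverage the scenario achieved
```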
Scenario-based Testing vs Risk-based Testing (1)
• Scenario testing:
• Tests are complex and coherent stories that capture how the program will be used in real-life situations.
• These are combination tests, whose combinations are credible reflections of real use.
• These tests are highly credible (stakeholders will believe users will do these things) and so failures are likely to be fixed.
• Risk-based testing:
• Tests are derived from ideas about how the program could fail.
• These tests might focus on individual variables or combinations.
• These tests are designed for power and efficiency – find the bug quickly – rather than for credibility. Extreme-value tests that go beyond reasonable use are common.
When a bug is found with risk-based testing, it is recommended to do follow-up testing and to describe a scenario that demonstrates the bug in a more credible and motivating way, i.e., bug advocacy.
Scenario-based Testing vs Risk-based Testing (2)
• in scenario-based testing the 17 lines of inquiry (scenario types) represent distinct test techniques;
• many test techniques tell the tester how the program will behave in the first few days that someone uses it;
• Scenario-based testing teaches testers that:
• good scenario tests go beyond the simple uses of the program to ask whether the program is delivering the
benefits it should deliver;
• good scenarios often give the tester insight into frustrations that an experienced user will face – someone who has used the program for a few months and is now trying to do significant work with the program.
Guerilla Testing. Definition
• Guerilla Testing allows
• the tester to run exploratory tests that are usually time-boxed and done by an experienced explorer;
• goal: to perform a fast and vicious attack on some part of the program.
• E.g.: a senior tester might spend a day testing an area that is seen as low priority and would otherwise be
ignored;
• he tries out his most powerful tests;
• if he finds significant problems, the area will be re-budgeted and the overall test plan might be
affected;
• if he finds no significant problems, the area will thereafter be ignored or only lightly tested.
All-Pairs Testing. Definition
• All-Pairs Testing requires
• a set of tests such that, for each pair of variables, every combination of their values appears in at least one test;
Activity: Following the algorithms (or using tools) to generate tests that meet this criterion.
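As an illustration (not from the lecture), a brute-force greedy sketch of pairwise test generation; real tools (e.g., PICT, allpairspy) do this far more efficiently.

```python
from itertools import combinations, product

def all_pairs(params):
    """Greedy pairwise generation: `params` maps a variable name to its values.
    Returns tests (dicts) such that every pair of values of every two
    variables appears in at least one test. Brute force, so only suitable
    for small parameter sets."""
    names = list(params)
    uncovered = set()                      # every value pair still to cover
    for a, b in combinations(names, 2):
        for va, vb in product(params[a], params[b]):
            uncovered.add(((a, va), (b, vb)))
    tests = []
    while uncovered:
        best, best_gain = None, -1
        for combo in product(*(params[n] for n in names)):
            test = dict(zip(names, combo))
            gain = sum(1 for (a, va), (b, vb) in uncovered
                       if test[a] == va and test[b] == vb)
            if gain > best_gain:
                best, best_gain = test, gain
        tests.append(best)                 # keep the most useful combination
        uncovered = {((a, va), (b, vb)) for (a, va), (b, vb) in uncovered
                     if not (best[a] == va and best[b] == vb)}
    return tests

cases = all_pairs({"os": ["Windows", "macOS"],
                   "browser": ["Chrome", "Firefox", "Safari"],
                   "locale": ["en", "ro"]})
print(len(cases), "tests cover all pairs, versus 12 for the full product")
```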
Random Testing. Definition
• Random Testing allows
• the tester to use a random number generator to determine:
• values to be assigned to some variables, or
• the order in which tests will be run, or
• the selection of features to be included in a test.
• it means the decisions are made by a random number generator rather than by a human as part of a detailed plan, but the testing is performed by the tester.
Activity: Coding and then executing input streams, followed by execution-monitoring activities.
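A minimal sketch, assuming a hypothetical `ShoppingCart` under test: a seeded random number generator, not the tester, decides which feature to exercise and with which value.

```python
import random

rng = random.Random(42)   # fixed seed, so a failing run can be replayed

def run_random_session(cart, steps=100):
    # `cart` is a hypothetical ShoppingCart instance (the system under test).
    for _ in range(steps):
        action = rng.choice(["add_item", "remove_item", "apply_discount"])
        value = rng.randint(-1000, 1000)   # the RNG also picks the input value
        if action == "add_item":
            cart.add_item(value)
        elif action == "remove_item":
            cart.remove_item(value)
        else:
            cart.apply_discount(value)
        # A cheap partial oracle, checked after every random step:
        assert cart.total() >= 0, f"negative total after {action}({value})"
```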
Lecture Summary
• We have discussed:
• Activity-based testing. Focus
• Use case testing;
• Scenario testing;
• Other activity-based techniques:
• Guerilla testing;
• All-pairs testing;
• Random testing;
• Installation testing;
• Regression testing;
• Long sequence testing;
• Dumb monkey testing;
• Load testing;
• Performance testing.
Next Lecture
• Tester-based techniques
• Focus;
• Techniques:
• User testing;
• Alpha testing;
• Beta testing;
• Bug bashes;
• Subject-matter expert testing;
• Paired testing;
• Eat your own dogfood;
• Localization testing.
• Bug reporting
• Bug Types;
• RIMGEN/RIMGEEA investigation and reporting strategy;
• Bugs taxonomy by Claudiu Draghia;
• Examples of bug reports from BBST Bug Advocacy.
References I
• [Kaner2003] Cem Kaner, An introduction to scenario testing, http://www.kaner.com/pdfs/ScenarioIntroVer4.pdf, 2003.
• [BBST2011] BBST – Test Design, Cem Kaner, http://www.testingeducation.org/BBST/testdesign/BBSTTestDesign2011pfinal.pdf.
• [BBST2010] BBST – Fundamentals of Testing, Cem Kaner,
http://www.testingeducation.org/BBST/foundations/BBSTFoundationsNov2010.pdf.
• [KanerBachPettichord2001] Kaner, C., Bach, J., & Pettichord, B. (2001). Lessons Learned in Software Testing: Chapter 3: Test Techniques,
http://media.techtarget.com/searchSoftwareQuality/downloads/Lessons_Learned_in_SW_testingCh3.pdf .
• [Whittaker2002] Whittaker, J.A. (2002). How to Break Software. Addison Wesley.
• [Marick2000] Marick, B. (2000) , Testing for Programmers, http://www.exampler.com/testing-com/writings/half-day-programmer.pdf.
• [Savoia2000] Savoia, A. (2000), The science and art of web site load testing, International Conference on Software Testing Analysis &
Review (STAR East), Orlando. https://www.stickyminds.com/presentation/art-and-science-load-testing-internet-applications
• [McGeeKaner2004] McGee, P. & Kaner, C. (2004), Experiments with high volume test automation, Workshop on Empirical Research in
Software Testing, International Symposium on Software Testing and Analysis, http://www.kaner.com/pdfs/MentsvillePM-CK.pdf
• [Jorgensen2003] Jorgensen, A.A. (2003), Testing with hostile data streams, ACM SIGSOFT Software Engineering Notes, 28(2),
http://cs.fit.edu/media/TechnicalReports/cs-2003-03.pdf
• [Bach1999] Bach, J. (1999), Heuristic risk-based testing, Software Testing & Quality Engineering,
http://www.satisfice.com/articles/hrbt.pdf
References II
• [Agruss2000] Agruss, C. (2000), Software installation testing: How to automate tests for smooth system installation, Software Testing &
Quality Engineering, 2 (4), http://www.stickyminds.com/getfile.asp?ot=XML&id=5001&fn=Smzr1XDD1806filelistfilename1%2Epdf
• [TestInsaneApps] Test Insane Apps, 2019, http://apps.testinsane.com/mindmaps
• [Nyman1998] Nyman, N. (1998), Application testing with dumb monkeys, International Conference on Software Testing Analysis & Review
(STAR West);
• [Meier2007] Meier, J.D., Farre, C., Bansode, P., Barber, S., & Rea, D. (2007), Performance Testing Guidance for Web Applications.
Redmond: Microsoft Press.
• [Jacobson1995] Jacobson, I. (1995), The use-case construct in object-oriented software engineering, In John Carroll (ed.) (1995), Scenario-
Based Design. Wiley.
• [Cockburn2001] Cockburn, A. (2001). Writing Effective Use Cases. Addison-Wesley.
• [Collard1999] Collard, R. (July/August 1999). Test design: Developing test cases from use cases, Software Testing & Quality Engineering,
31-36.
• [Kahn1967] Kahn, H. (1967), The use of scenarios, in Kahn, Herman & Wiener, Anthony (1967), The Year 2000: A Framework for
Speculation on the Next Thirty-Three Years, pp. 262-264. https://www.hudson.org/research/2214-the-use-of-scenarios
• [Carroll1999] Carroll, J.M. (1999), Five reasons for scenario-based design, Proceedings of the 32nd Hawaii International Conference on
System Sciences, http://www.massey.ac.nz/~hryu/157.757/Scenario.pdf