Introduction to Testing

The document provides an introduction and agenda for testing that covers why testing is necessary, what tests are, the fundamental test process, features of high-quality software, tricky phrases in testing, seven testing principles, good testing practices, and what to avoid in testing. It discusses the importance of testing in reducing risks and ensuring software quality. Some key points covered include that testing shows the presence of defects, exhaustive testing is impossible, early testing is important, and the absence of errors fallacy.

Uploaded by Ana Maria

Introduction to testing

Agenda

1. Why is testing necessary?
2. What are tests?
3. The fundamental test process
4. Features of high-quality software
5. Tricky phrases
6. Seven testing principles
7. Good testing practices
8. What to avoid in testing?
9. The psychology of testing
10. Code of ethics
11. Roles related to testing
12. Attributes of a good software tester
13. Professional development of the tester
14. Glossary
15. Bug reporting
Why is testing necessary?

● Context of software systems
● Causes of software defects
● The role of testing in development, maintenance and operations
● Testing and quality
● How much testing is enough?
Why is testing necessary?

Context of software systems

Software systems are present in our daily lives. Each of us has probably been in a
situation where something we used did not work.
Software that works improperly can cause many problems: loss of money, loss of time,
or loss of business reputation. It can even result in loss of health or life.
Why is testing necessary?

Causes of software defects - A human being can make an error (mistake), which produces a defect
(fault, bug) in the program code or in a document. If a defect in code is executed, the system may fail
to do what it should do (or do something it shouldn't), causing a failure. Defects in software, systems
or documents may result in failures, but not all defects do so.

Defects occur because:

● people are fallible
● people work under time pressure
● people work with complex code
● people work with complex infrastructure
● people work with changing technologies
● people work with many system interactions
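The error → defect → failure chain above can be illustrated with a small sketch. The function and its bug are invented for this example; they are not from the source.

```python
# Hypothetical illustration: a human mistake produces a defect in the code,
# and executing that defect causes a visible failure.

def average(values):
    """Intended to return the arithmetic mean of a non-empty list."""
    # Defect: the programmer's mistake was dividing by len(values) - 1
    # instead of len(values).
    return sum(values) / (len(values) - 1)

# The defect sits silently in the code until the line is executed;
# only then does the failure appear as a wrong result:
print(average([2, 2, 2]))  # returns 3.0 instead of the correct 2.0
```

Note that the defect existed before any test ran; the failure is only observed once the faulty line is executed, which is exactly why untested defects can survive into production.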
Why is testing necessary?

The role of testing in software development, maintenance and operations

Rigorous testing of systems and documentation can help to reduce the risk of problems occurring
during operation and can contribute to the quality of the software system, if the defects found are
corrected before the system is released for operational use.
Why is testing necessary?

Testing and Quality - With the help of testing, it is possible to measure the quality of software in
terms of defects found, for both functional and non-functional software requirements and
characteristics.

Testing can give confidence in the quality of the software if it finds few or no defects.

A properly designed test that passes reduces the overall level of risk in a system. When testing does
find defects, the quality of the software system increases when those defects are fixed.

Lessons should be learned from previous projects.

Testing should be integrated as one of the quality assurance activities.


Why is testing necessary?

The impact of errors on the user:

● Wasted time
● Lost trust
● Lost money
● Irritation
● Reluctance to use the product
● Loss of health
● Loss of life
Why is testing necessary?

How much testing is enough?

Deciding how much testing is enough should take account of the level of risk, including technical,
safety, and business risks, and project constraints such as time and budget.

Testing should provide sufficient information to stakeholders to make informed decisions about the
release of the software or system being tested, for the next development step or handover to
customers.
What are tests?

Test execution is only part of testing - testing is a process.

Before and during execution:

● planning and control
● selection of test conditions
● design and execution of test cases

and afterwards:

● checking the results and assessing whether the completion criteria have been fulfilled
● reporting on the testing process and the system under test
● finishing and closing the tests
● reviewing documentation and source code, as well as static analysis
What are tests?

Testing objectives:

● Finding defects
● Gaining confidence about the level of quality
● Providing information for decision-making
● Preventing defects
Fundamental test process

The following activities may overlap or occur in a different order:

● Test planning and control - comparing the progress of testing with the plan and reporting status
● Test analysis and design - translating test objectives into test conditions and test cases
● Test implementation and execution - putting test cases in a specific order, adding any other
information needed to execute them, configuring the environment, and running the tests
● Evaluating exit criteria and reporting - assessing test execution against the adopted testing
objectives
● Test closure activities - collecting data from completed test activities
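The "evaluating exit criteria" step can be sketched in a few lines. This is a minimal, invented example (the function name and the 95% threshold are assumptions, not from the source), showing how test results might be compared against an agreed completion criterion.

```python
# Minimal sketch of evaluating an exit criterion: compare the observed
# pass rate of executed test cases against an agreed threshold.

def exit_criteria_met(results, required_pass_rate=0.95):
    """results: list of booleans, True meaning the test case passed."""
    if not results:
        return False  # no evidence yet, the criterion cannot be met
    pass_rate = sum(results) / len(results)
    return pass_rate >= required_pass_rate

print(exit_criteria_met([True] * 19 + [False]))  # 95% pass rate -> True
```

In practice exit criteria usually combine several measures (coverage, open defect counts, risk), not a single pass rate; the sketch shows only the mechanism.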
Features of high-quality software

● Functionality
● Reliability
● Usability
● Efficiency
● Maintainability
● Portability
Tricky phrases

Fault vs failure
Error (mistake) → defect (fault, bug) → failure (not all defects cause failures)
Tricky phrases

Testing vs debugging
Testing can show failures caused by defects.

Debugging is a development activity (finding, analyzing and removing the causes of failures).
Subsequent confirmation tests (retests) confirm that the defect has been fixed.
Tricky phrases

Tests vs retests
The first run of a test case is simply a test. When we want to confirm that a previously detected
defect no longer exists, we perform a retest.

Testing itself does not improve the quality of the software; only fixing the defects it finds, in good
time, does.
Tricky phrases

Retests vs regression tests

Retests - running again the test cases which detected defects during their last run, in order to check
the correctness of the fix.

Regression tests - re-testing a previously tested program after it has been modified, to ensure that
no new defects have been introduced (and no previously undiscovered ones uncovered) in the
unchanged parts of the software as a result of the changes, e.g. to the software or its working
environment.
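The distinction can be shown with a hypothetical sketch (the function, its earlier bug and all values are invented): the retest re-runs only the case that failed before the fix, while the regression checks re-run cases for the unchanged behavior.

```python
# Hypothetical example: discount() previously returned price * percent
# (a defect); it has since been fixed as below.

def discount(price, percent):
    return price * (1 - percent / 100)

def retest():
    # Retest: re-run exactly the case that previously detected the defect.
    return discount(100, 20) == 80

def regression_suite():
    # Regression: re-run cases for behavior that was NOT supposed to change,
    # to catch side effects of the fix.
    return discount(100, 0) == 100 and discount(0, 50) == 0

print(retest(), regression_suite())
```

A passing retest confirms the fix; a passing regression suite confirms the fix broke nothing else.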
Tricky phrases

Functional tests vs non-functional tests

Functional tests (black box) - What? - test the functions that a system, subsystem or component
performs. The functions may be described in requirement specifications, use cases or functional
specifications, and may also be undocumented. Functions are defined as the actions performed by
the system. Functional tests are based on these functions (as described in the documentation or as
understood by the tester).

Non-functional tests (parameters) - How? - test attributes of a module or system that do not
relate to its functionality, e.g. reliability, efficiency, maintainability and portability.
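The same operation can be checked both ways. In this invented sketch (function, data and the 10 ms budget are assumptions), the functional test asserts *what* is returned and the non-functional test asserts *how* fast it happens.

```python
import time

def lookup(catalog, key):
    return catalog.get(key)

catalog = {"apple": 1.50, "bread": 2.20}

# Functional (black-box) test: correct output for a given input.
assert lookup(catalog, "apple") == 1.50
assert lookup(catalog, "missing") is None

# Non-functional test: an efficiency attribute, here a response-time budget.
start = time.perf_counter()
lookup(catalog, "apple")
elapsed = time.perf_counter() - start
assert elapsed < 0.01  # must complete within 10 ms
print("functional and non-functional checks passed")
```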
Tricky phrases

Verification vs validation
Verification - checking that the product works according to the specification; the control process
verifying that the product has been built according to its assumptions.

Are we building the product right?

Validation - determining the correctness of the products of the software development process in
terms of customer/user expectations.

Are we building the right product?


Seven Testing Principles

1. Testing shows presence of defects
2. Exhaustive testing is impossible
3. Early testing
4. Defect clustering
5. Pesticide paradox
6. Testing is context dependent
7. Absence-of-errors fallacy
Seven Testing Principles

Exhaustive testing is impossible

Only in trivial cases is it possible to check everything (every combination of inputs and
preconditions). It is therefore never safe to claim that the software has no flaws - they may simply
not have been found yet.
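A quick back-of-the-envelope calculation makes the principle concrete. The throughput figure is an assumption chosen to be generous.

```python
# Even a tiny function taking two 32-bit integer inputs has 2**64 possible
# input combinations. At a (generous) one billion tests per second, testing
# them all would take centuries.

combinations = (2 ** 32) ** 2          # 2**64 input pairs
tests_per_second = 10 ** 9             # assumed throughput
seconds_per_year = 60 * 60 * 24 * 365
years_needed = combinations / (tests_per_second * seconds_per_year)
print(f"{combinations} combinations, about {years_needed:.0f} years to test")
```

This is why testing relies on risk analysis and priorities to select test cases, rather than attempting to cover every input.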
Seven Testing Principles

Early testing

Testing activities should start as early as possible in the software development life cycle and should
be focused on defined objectives; the earlier a defect is found, the cheaper it is to fix.
Seven Testing Principles

Defect clustering
Testing effort should be focused proportionally to the expected, and later observed, defect density of
modules. A small number of modules usually contains most of the defects discovered during
pre-release testing, or is responsible for most of the operational failures.

The Pareto principle (also known as the 80/20 rule) states that, for many events, roughly 80% of the
effects come from 20% of the causes.
Seven Testing Principles

● 20% of clients bring 80% of profits
● 20% of drivers cause 80% of accidents
● we wear 20% of our clothes 80% of the time
● 20% of employees generate 80% of the output
● 20% of a text allows you to understand 80% of its content

The principle helps to set priorities and organize time, achieving maximum results in minimum
time.
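Defect clustering can be seen directly in defect counts per module. The module names and counts below are invented for illustration.

```python
# Hypothetical defect counts per module: a small share of modules accounts
# for most of the discovered defects, as the Pareto principle suggests.

defects_per_module = {
    "payments": 42, "auth": 31, "search": 4, "profile": 3, "help": 2,
    "settings": 2, "about": 1, "export": 1, "news": 1, "themes": 1,
}

total = sum(defects_per_module.values())                     # 88 defects
top = sorted(defects_per_module.values(), reverse=True)[:2]  # top 20% of 10 modules
share = sum(top) / total
print(f"Top 20% of modules hold {share:.0%} of the defects")
```

A tester can use exactly this kind of tally to decide where to concentrate further testing effort.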
Seven Testing Principles

Pesticide paradox
If the same tests are repeated over and over again, eventually the same set of test cases will no longer
find any new defects.

To overcome this, test cases need to be regularly reviewed and revised, and new and different tests
need to be written to exercise different parts of the software or system to find potentially more defects.
Seven Testing Principles

Testing is context dependent

Depending on what is being tested - a mobile application for ordering food, or early-warning
software on an airplane - the scope and methods of testing should be chosen accordingly.

Although a large part of software is tested in the same way, following a similar scheme, sometimes
the scope of work must be extended, e.g. to ensure a higher level of security.
Seven Testing Principles

Absence-of-errors fallacy

Finding and fixing defects does not help if the system built is unusable and does not fulfill the users’
needs and expectations.
Good testing practices

● Supplementing the system with test data:
○ Real data
○ Abstract data
○ Ugly data
● One report - one bug
● Test environment - get the first data from the programmers
● Informing about the status of the application
● Understanding the functionality - not making assumptions
● Reporting usability or business errors to the analysts
● If information about the expected behavior is missing, ask the analyst first - do not rely on:
○ what the documentation suggests
○ "probably"
○ "it seems likely to be the case"
● Report (or fix) an error as soon as it is discovered
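The three kinds of test data named in the list above can be sketched against a small validator. The function, field and all values are invented for this example.

```python
# Feeding a hypothetical age validator "real", "abstract" and "ugly" data.

def valid_age(value):
    # Accept only genuine integers (not booleans) in a plausible human range.
    return (isinstance(value, int) and not isinstance(value, bool)
            and 0 <= value <= 130)

real_data = [34, 67]                      # plausible production-like values
abstract_data = [0, 130]                  # boundary values from the (assumed) spec
ugly_data = [-1, 131, "34", None, True]   # malformed or hostile inputs

assert all(valid_age(v) for v in real_data + abstract_data)
assert not any(valid_age(v) for v in ugly_data)
print("real, abstract and ugly data all behaved as expected")
```

"Ugly" data is often where defects hide: inputs nobody intended, such as the string "34" or the boolean True, which Python would otherwise happily treat as an int.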
Good testing practices

Be accurate in what you do - the tester's job is to check whether the application works properly
when it is used as intended, and what happens when the operator starts to perform unacceptable or
unforeseen operations.

Remember that the person designing the system can also be wrong - do not take everything
written on faith. Very often the documentation contains errors, gaps and inconsistencies, which - if left
uncorrected - can negatively affect the quality of both programming and testing. The tester's task is
also to check the documentation - especially the business rules and the correctness of the process flow.
Good testing practices

If you are not sure how a module should work - ask and get an answer. Do not skip testing certain
areas just because you do not understand their operation.

Do not make assumptions - in particular, do not assume that something works because it usually
works or should work. The tester can determine that the application or its module works when it
checks it and the test results confirm it.

If you believe that you have discovered a mistake, or that the application is not working properly
in terms of usability or business, even though the implementation is consistent with the
analytical specification - send your comments to the analysts. Fixing a problem at an early stage
of testing is easier and cheaper than fixing it later.
What to avoid in tests?

1. Lack of curiosity and accuracy
2. Performing tests without understanding how the application works
3. Postponing error reporting until later
4. Reporting an error before making sure it is an error
5. Interpreting specs or test results at your own discretion
6. Reporting errors without a sufficient description
7. Wrong tasks set before testing and a lack of priorities
8. Skipping the testing of documentation
9. Improper management of reported errors
10. The attitude: "the developer is an enemy"
11. Not reading specs
12. Incorrect input data
13. Aborting the standard process
14. Running other operations, or several in parallel
15. Unplanned operations on the database
Psychology of testing

Testers take part in testing to provide an independent view - different from that of the code's author.
Independent testing may be carried out at any level of testing. Several levels of independence can be
defined:

● Tests designed by the person(s) who wrote the software under test (low level of independence)
● Tests designed by another person(s) (e.g., from the development team)
● Tests designed by a person(s) from a different organizational group (e.g., an independent test
team) or test specialists (e.g., usability or performance test specialists)
● Tests designed by a person(s) from a different organization or company (i.e., outsourcing or
certification by an external body)
Psychology of testing

People tend to tailor plans to goals set by management or other stakeholders - that is why it is
important to set testing goals clearly.

Finding errors during testing can be seen as criticism of a product or its author. That is why it
requires appropriate framing and preparation.

Searching for failures in a system requires curiosity, professional pessimism, a critical eye, attention
to detail, good communication with programmers, and experience and knowledge of the product.
Psychology of testing

Constructive notification of defects:

● start from cooperation, not confrontation (the common goal is better quality)
● communicate in a neutral, fact-focused way, without criticizing the author of the product
● put yourself in the other person's position
● make sure you have understood the other party's remarks
Code of Ethics

Testing gives access to confidential and secret information, which is why it is important to be
aware of the responsibility involved and to use such information properly.

This responsibility applies in relation to:

● The public interest
● The customer and employer
● The product (always the highest possible quality of work and product)
● Judgment (honesty and independence)
● Management (promoting an ethical approach)
● The profession (promoting the integrity and reputation of the tester)
● Co-workers (support of and cooperation with those involved in the process)
● Self (certified software testers learn throughout their lives and promote an ethical approach
to professional practice)
Roles related to testing

Test Leader

● Manages a team of testers
● Develops the test strategy and test plan
● Supervises the work of the testers

Test Architect

● Defines the strategic directions for the test organization
● Defines the way tests are created

Test Analyst

● Analyzes the test basis
Roles related to testing

Test Automation Engineer

● An engineer specializing in test automation
● Implements and executes automated tests
● Has programming knowledge
● Tests the technical aspects of the project
● Performance testing
● Technical security testing

Test Administrator

● Takes care of the testers' work products
● Implements, maintains and manages the tools used in testing
● Configures the test environment
● Is responsible for its efficient operation
Roles related to testing

Inspector / Auditor

● Quality control of documentation
● Process compliance with standards

Business analyst

● Knowledge about the project
● Customer representative
● Responsible for requirements
● Performs acceptance tests
Attributes for good software tester

● Interpersonal skills - also known as soft skills; the ease of establishing and maintaining
relationships, motivating others, assertiveness, persuasiveness and fluency in dialogue.
● Management - consists of three factors:
○ Technical skills, i.e. the ability to use tools and methods that support the management
process.
○ Social skills, i.e. the ease of cooperating with other people, empathy and matching
people to particular roles.
○ Cognitive skills, i.e. the ability to perceive the project as a whole, to formulate
goals and to combine them into a coherent whole.
● Contact with the client - not a typical skill but a necessity, because some positions are
closely tied to cooperation with the client, and not everyone is comfortable in such a
situation.
Attributes for good software tester

● Analytical thinking - the ability to analyze and draw conclusions based on collected data.
● The ability to write code - nothing more than the ability to program in a given language.
● Knowledge of technology - a general idea of the technologies that can be or are used in a given
project. It does not have to be specialist knowledge, just a general orientation in the subject.
● Maintaining documentation - projects require that certain events, procedures and processes be
backed by documentation. How extensive that documentation must be depends on the position
and the project-management methodology.
● Creativity - unconventional thinking, going beyond schemas and well-worn paths.
Professional development of the tester

Test Analyst - the person in the team responsible for test design, test scenarios and test plans. Often
during the design process he or she has contact with the client. The analyst determines the main
directions in which a given functionality should be tested. In short, such a person translates the
client's requirements into test scenarios.

Test Automation Engineer - currently a very popular role, combining the skills of a tester and a
programmer. One of the main tasks of such a person is testing the application using a program that
performs the same tasks as were previously carried out manually. The simplest example is logging in
to an online store: instead of doing it manually, we can write a program or script that will do it for us
and report the result. It is important to remember that not every test can be automated.
Professional development of the tester

Test Manager - a person in this position manages the project in terms of quality, plans resources,
heads the team and often contacts the customer on quality matters.

Programmer - a person who is at home in the code. The task of the programmer is to translate a
given functionality into the programming language in which he or she specializes. It is a technical
role that usually has no contact with the client.

UI/UX Designer - at the outset it is worth noting that this role can be divided into two separate
positions: UX Designer and UI Designer.

A UX Designer is a person who focuses on the user's experience and wants to design the system so
that it is as intuitive and functional as possible. People with a sociological or cognitive-science
background often find themselves here.

A UI Designer is someone who is responsible for the graphical design of the user interface. If the
two roles are separated, this position may be filled by a graphic designer.
Professional development of the tester

Business Analyst - an important figure in the software development process, though often
overlooked. This position can be defined as translating the expectations and requirements for the
solution from the client's language into the language of the team. The main task of such a person is to
gather information from the client and to understand what the client actually needs. The analyst often
also has to propose a solution to the client's problem. In fact, analysis is the first step in creating a
system, so an error caught at this stage is the cheapest one to repair.

Project Manager - in effect the "conductor" of the entire project, the one who sets its rhythm.
He or she stays in constant contact with the client and with the teams creating the solution. This is
a creative position, requiring the preparation of alternative plans, often under time pressure.
Such a person must be a good strategist who communicates easily with others.
Professional development of the tester

Consultant - plays a role that combines several others, such as analyst, salesperson, tester, trainer
and programmer. A consultant provides comprehensive customer service, often working with
ready-made solutions, introducing only small changes or integrating modules with one another.
Glossary

Ticket (issue) / Task

A report registered in the system (issue tracker); most often a problem to be solved (bug) or a task
to perform.

Server

The main computer serving other computers connected to the network. Servers typically only
provide certain defined services.

Cloud

"Someone else's computer": storage and computing space provided by a provider over the Internet.
Glossary

Production

The deployed, finished product used by real users.

FTP - File Transfer Protocol

A communication protocol that allows you to connect over the Internet to another computer and
perform certain actions, such as browsing file directories or copying files from one computer to
another.

Cache

A type of very fast memory used for data buffering. The processor cache accelerates access to the
slower main memory, RAM (Random Access Memory), and is used to store data that will be
processed soon. In a web browser, CTRL+F5 forces a reload that bypasses the cache.
Glossary

Backup

A copy of files created on separate storage media to protect data from loss.

Interface

The part of the software responsible for interaction with the user.

API - Application Programming Interface

The way we can use a given library, framework or external system. An API is a simplified
collection of public interfaces that we can use when implementing our application.
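The API entry above in one small example: instead of implementing JSON parsing ourselves, we call the public API of Python's standard `json` library (the payload values are invented).

```python
import json

# json exposes a small public API; we use it without knowing its internals.
payload = json.dumps({"user": "anna", "active": True})  # object -> JSON text
data = json.loads(payload)                              # JSON text -> object
print(data["user"])
```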
Glossary

Framework

A library that provides a large collection of functionalities and on which we base our application.

Bug report

A bug report should have:

● Title - a short description that allows you to quickly locate the problem and gauge its scale.
● Initial conditions - e.g. operator data (permissions, etc.) and the system state necessary to reproduce the
error (e.g. a specific client must exist in the system, the customer must have a sufficient amount of funds in
their bank account).
● Reproduction procedure - the steps taken, one by one, to obtain the error. It is also advisable to provide
test data, because some defects occur only when certain data are used.
● Expected result - what the system should do after completing the above procedure, according to the
specification.
● Actual result - what the system did, and why it is not correct (with a reference to the documentation, a
fragment of the specification, etc.).
● Screenshot
● Priority
● The system version
● Data of the reporting person
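The field list above can be enforced before submission. This is an invented sketch (field names, the example report and the helper are all assumptions): a bug report held as structured data, with a check that the required fields are filled in.

```python
# Required fields for a bug report, mirroring the list above.
REQUIRED_FIELDS = [
    "title", "initial_conditions", "reproduction_steps",
    "expected_result", "actual_result", "priority",
    "system_version", "reporter",
]

def missing_fields(report):
    """Return the names of required fields that are absent or empty."""
    return [f for f in REQUIRED_FIELDS if not report.get(f)]

report = {
    "title": "Transfer form accepts a negative amount",
    "initial_conditions": "Logged-in client with an active account",
    "reproduction_steps": ["Open transfer form", "Enter amount -100", "Submit"],
    "expected_result": "Validation error, per the spec's rules on amounts",
    "actual_result": "Transfer of -100 is accepted",
    "priority": "high",
    "system_version": "2.4.1",
    "reporter": "tester@example.com",
}

print(missing_fields(report))  # an empty list means the report is ready
```

An issue tracker typically performs a check like this on its mandatory fields; doing it on the tester's side avoids bouncing incomplete reports back and forth.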
