TESTING
Testing Concepts
Testing
Testing Methodologies
Black box Testing.
White box Testing.
Gray Box Testing.
Levels of Testing
Unit Testing.
Module Testing.
Integration Testing.
System Testing.
User Acceptance Testing.
Types Of Testing
Smoke Testing.
Sanity Testing.
Regression Testing.
Re-Testing.
Static Testing.
Dynamic Testing.
Alpha-Testing.
Beta-Testing.
Monkey Testing.
Compatibility Testing.
Installation Testing.
Adhoc Testing.
Etc.
TCD (Test Case Documentation)
STLC
Test Planning.
Test Development.
Test Execution.
Result Analysis.
Bug-Tracing.
Reporting.
Microsoft Windows – Standards
Manual Testing
Automation Testing (Tools)
Win Runner.
Test Director.
Software Testing is the process used to help identify the correctness, completeness,
security, and quality of developed computer software. Testing is a process of technical
investigation, performed on behalf of stakeholders, that is intended to reveal quality-related
information about the product with respect to the context in which it is intended to operate.
This includes, but is not limited to, the process of executing a program or application with the
intent of finding errors. Quality is not an absolute; it is value to some person. With that in
mind, testing can never completely establish the correctness of arbitrary computer software;
it instead furnishes a criticism or comparison of the state and behavior of the
product against a specification. An important point is that software testing should be
distinguished from the separate discipline of Software Quality Assurance (SQA), which
encompasses all business process areas, not just testing.
Introduction:
In general, software engineers distinguish software faults from software failures. In the case
of a failure, the software does not do what the user expects. A fault is a programming error
that may or may not actually manifest as a failure; it can also be described as an error in
the semantics of a computer program. A fault becomes a failure only if the
exact computation conditions are met, one of them being that the faulty portion of the
software actually executes on the CPU. A fault can also turn into a failure when the software is ported
to a different hardware platform or a different compiler, or when the software is extended.
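A minimal sketch can make the distinction concrete (the code below is hypothetical, written for illustration only): the fault is present in the source at all times, but a failure is observed only when the faulty code actually executes under the triggering conditions.

# fault_vs_failure.py -- illustrative only
def average(values):
    # Fault: divides by len(values) without guarding against an empty
    # list. The fault is always present in the source code.
    return sum(values) / len(values)

print(average([2, 4, 6]))  # 4.0 -- the fault does not manifest
print(average([]))         # ZeroDivisionError -- the fault becomes a failure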
Software testing is the technical investigation of the product under test, carried out to provide
stakeholders with quality-related information.
Software testing may be viewed as a sub-field of Software Quality Assurance, but it typically
exists independently (and there may be no SQA areas in some companies). In SQA, software
process specialists and auditors take a broader view of software and its development: they
examine and change the software engineering process itself to reduce the number of faults
that end up in the code, or to deliver software faster.
Regardless of the methods used or the level of formality involved, the desired result of testing
is a level of confidence in the software: the organization wants to be confident that the software
has an acceptable defect rate. What constitutes an acceptable defect rate depends on the
nature of the software. An arcade video game designed to simulate flying an airplane would
presumably have a much higher tolerance for defects than software used to control an actual
airliner.
The software, tools, samples of data input and output, and configurations are all
referred to collectively as a test harness.
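As an illustrative sketch only (the function and data below are hypothetical), a very small test harness bundles the code under test with sample inputs and their expected outputs:

# tiny_harness.py -- hypothetical minimal test harness
def add(a, b):  # the unit under test
    return a + b

# sample input data paired with expected output
test_data = [((1, 2), 3), ((0, 0), 0), ((-1, 1), 0)]

for args, expected in test_data:
    actual = add(*args)
    result = "PASS" if actual == expected else "FAIL"
    print(f"add{args} = {actual} (expected {expected}): {result}")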
History
The separation of debugging from testing was initially introduced by Glenford J. Myers in
his 1979 book "The Art of Software Testing". Although his attention was on breakage testing,
the book illustrated the desire of the software engineering community to separate fundamental
development activities, such as debugging, from that of verification. In 1988, Drs. Dave Gelperin and
William C. Hetzel classified the phases and goals of software testing as follows: until
1956 was the debugging-oriented period, in which testing was often associated with debugging
and there was no clear difference between the two. From 1957 to 1978 came the
demonstration-oriented period, in which debugging and testing were now distinguished; in this
period it was shown that software satisfies its requirements. The period 1979-1982 is
known as the destruction-oriented period, in which the goal was to find errors. 1983-1987 is
classified as the evaluation-oriented period, in which the intention was to provide product
evaluation and quality measurement throughout the software lifecycle. From 1988 onward it has
been seen as the prevention-oriented period, in which tests are meant to demonstrate that software
satisfies its specification, to detect faults, and to prevent faults. Dr. Gelperin chaired the IEEE 829-1988
(Test Documentation Standard) effort, and Dr. Hetzel wrote the book "The Complete Guide to
Software Testing". Both works were pivotal to today's testing culture and remain a
consistent source of reference. Dr. Gelperin and Jerry E. Durant also went on to develop High
Impact Inspection Technology, which builds upon traditional inspections but utilizes a
test-driven additive.
Testing:
The process of executing a system with the intent of finding an error.
Testing is defined as the process in which defects are identified, isolated, and
submitted for rectification, and in which the product is ensured to be defect-free, in order to
produce a quality product and hence customer satisfaction.
Quality is defined as conformance to the requirements.
A defect is nothing but a deviation from the requirements.
A defect is nothing but a bug.
Testing can demonstrate the presence of bugs, but never their absence.
Debugging and testing are not the same thing!
Testing is a systematic attempt to break a program or the AUT (Application Under Test).
Debugging is the art or method of uncovering why the script/program did not
execute properly.
Testing Methodologies:
Black box Testing: the testing process in which the tester tests an
application without any knowledge of its internal structure.
Usually Test Engineers are involved in black box testing.
White box Testing: the testing process in which the tester tests an
application with knowledge of its internal structure.
Usually developers are involved in white box testing.
Gray Box Testing: the process in which a combination of black box and white
box techniques is used. (See the sketch below.)
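The difference can be sketched with a hypothetical discount function (illustrative code, not from any real product): a black box test exercises only documented inputs and outputs, while a white box test is written with the branch structure in view.

# hypothetical function under test
def discount(order_total):
    if order_total >= 100:        # branch 1: 10% discount
        return order_total * 0.9
    return order_total            # branch 2: no discount

# Black box: only the specified behavior is known, not the code.
assert discount(200) == 180.0

# White box: written with knowledge of the branches, so both paths
# and the boundary value (100) are exercised deliberately.
assert discount(100) == 90.0      # boundary of branch 1
assert discount(99) == 99         # branch 2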
Levels of Testing:
[Diagram: the application is composed of modules (Module 1, Module 2, Module 3), and each
module is composed of units; testing proceeds from the unit level upward through module,
integration, system, and user acceptance testing.]
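The levels can be sketched with hypothetical code: unit testing exercises one unit in isolation, while integration testing exercises units from different modules working together.

# hypothetical units from two modules
def parse_amount(text):           # unit in Module 1
    return float(text)

def apply_tax(amount):            # unit in Module 2
    return round(amount * 1.1, 2)

# Unit test: a single unit in isolation
assert parse_amount("10.00") == 10.0

# Integration test: units from two modules combined
assert apply_tax(parse_amount("10.00")) == 11.0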
Test Development:
1. Test case development (check list).
2. Test procedure preparation (description of the test cases).
3. Implementation of the test cases and observation of the results.
Result Analysis (see the sketch after this list):
1. Expected value: nothing but the expected behavior of the application.
2. Actual value: nothing but the actual behavior of the application.
Bug Tracing: collect all the failed cases and prepare documents.
Reporting: prepare a document giving the status of the application.
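In executable form, result analysis is simply a comparison of the expected value against the actual value. The sketch below is hypothetical; the login function is a stand-in for the application under test.

def login(username, password):      # stub standing in for the application
    return "Login successful" if password == "pass123" else "Login failed"

expected = "Login successful"       # expected value, taken from the requirements
actual = login("user1", "pass123")  # actual value, observed from the application
print("PASS" if actual == expected else "FAIL")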
Types Of Testing:
> Smoke Testing: the process of initial testing in which the tester checks for the availability of
all the functionality of the application in order to perform detailed testing on it. (The main
check is that all forms are available.)
> Sanity Testing: a type of testing conducted on an application initially to check
for its proper behavior, that is, to check that all the functionality is available
before detailed testing is conducted.
> Regression Testing: one of the most important types of testing. Regression testing is the
process in which functionality that has already been tested is tested once again
whenever some new change is added, in order to check whether the existing functionality
remains the same (see the sketch after this list).
> Re-Testing: the process in which testing is performed on functionality that has
already been tested, to make sure that any reported defects are reproducible and to rule out
environment issues.
> Static Testing: the testing that is performed on an application when it is not being
executed. Ex: GUI and document testing.
> Dynamic Testing: the testing that is performed on an application while it is being
executed. Ex: functional testing.
> Alpha Testing: a type of user acceptance testing conducted on an application
just before it is released to the customer.
> Beta Testing: a type of UAT conducted on an application after it is released to
the customer, when it has been deployed into the real-time environment and is being accessed
by the real-time users.
> Monkey Testing: the process in which abnormal operations and beyond-capacity operations
are performed on the application to check its stability in spite of abnormal user behavior.
> Compatibility Testing: the testing process in which the product is tested in
environments with different combinations of databases, application servers, browsers,
etc., in order to check how far the product is compatible with all these environment/platform
combinations.
> Installation Testing: the process of testing in which the tester tries to install or
deploy the module into the corresponding environment by following the guidelines given
in the deployment document, and checks whether the installation is successful.
> Ad-hoc Testing: the process of testing in which, unlike formal testing where a
test case document is used, testing of an application is done without a test case document, in
order to cover those features which are not covered in the test case document. It is also
intended to perform GUI testing, which may involve cosmetic issues.
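A regression check can be sketched as re-running previously passing cases after each change (hypothetical code and cases, for illustration only):

def format_name(first, last):
    # existing, already-tested functionality
    return f"{last}, {first}"

# Regression suite: re-run whenever a new change is added, to confirm
# the existing functionality remains the same.
regression_cases = [(("Ada", "Lovelace"), "Lovelace, Ada"),
                    (("Alan", "Turing"), "Turing, Alan")]

for args, expected in regression_cases:
    assert format_name(*args) == expected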
Test Cases:
Template for Test Case
T.C. No | Description | Expected Value | Actual Value | Result
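For illustration only (the values are hypothetical), a filled-in row might look like this:
TC_01 | Verify login with valid credentials | Home page is displayed | Home page is displayed | Pass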
Data Security
Data security is the process of protecting information systems and their data from unauthorized,
accidental, or intentional modification, destruction, or disclosure. The protection includes the
confidentiality, integrity, and availability of these systems and their data.
Risk assessment, mitigation and measurement are key components of data security. To
maintain a secure environment, data security protocols require that any changes to data
systems have an audit trail, which identifies the individual, department, time and date of any
system change. Companies utilize personnel, policies, protocols, standards, procedures,
software, hardware and physical security measures to attain data security. Data security may
include one or a combination of all of these.
Data security encompasses the security of the Information System in its entirety. The U.S.
National Information Systems Security Glossary defines Information Systems Security
(INFOSEC) as: “The protection of information systems against unauthorized access to or
modification of information, whether in storage, processing or transit, and against the denial
of service to authorized users or the provision of service to unauthorized users, including
those measures necessary to detect, document, and counter such threats."
Protecting data from unauthorized access is one component of data security that receives a
great deal of attention. The concern for data protection extends beyond corporations; it is a
high-priority consumer interest as well. Data can be protected against unauthorized
access through a variety of mechanisms: passwords, digital certificates, and biometric
techniques all provide more secure methods of accessing data. Once the user has
been authenticated, sensitive information can be encrypted to prevent spying or
theft. However, even the most sophisticated data security programs and measures cannot
prevent human error. Security safeguards must be adhered to and protected to be effective.
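As one concrete, purely illustrative instance of such a safeguard, passwords can be stored as salted hashes rather than in plain text. The sketch below uses Python's standard hashlib, hmac and os modules; the password values are hypothetical.

import hashlib, hmac, os

def hash_password(password):
    salt = os.urandom(16)  # random per-user salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("s3cret")
print(verify_password("s3cret", salt, digest))  # True
print(verify_password("wrong", salt, digest))   # False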
Step 1.
Open SQL*Plus and log in as an administrator:
Enter Administrator User Name: --------
Password: --------
Host String: --------
Step 2.
SQL> create user <username> identified by <password>;
User created.
SQL> grant connect, resource to <username>;
Grant succeeded.
(The grant is required before the new user can log in; connect and resource are shown here as a typical minimal set of roles.)
SQL> connect <username>/<password>@<hoststring>;
Connected.
SQL> show user;
USER is "<USERNAME>" (check the username here; show user displays the currently connected user).
SQL> desc <tablename>;
This displays the column names and datatypes of the table.
Total Metrics specializes in quantifying software development projects early in their lifecycle
and using their functional size as input into software project resource estimates of effort, cost,
team size and schedule. We use industry project data sourced from ISBSG combined with the
expert knowledge base of Knowledge PLAN™ to determine the likely productivity and
quality of the project.
Functional size can be determined as soon as the business requirements are identified. Our
project estimates use an independent 'top down' method of estimating, which complements the
standard 'bottom up' work-breakdown methodologies. However, industry experience shows
that functional-size-based estimates are much more accurate early in the lifecycle of a project
and can be completed in under three days of effort, as compared to work-breakdown estimates,
which need detailed project information and take weeks to develop.
Our estimation techniques have been proven to be accurate and provide an independent
estimate of project budget and schedule requirements.
Why Estimate?
The accuracy of project estimates can have a dramatic impact on profitability. Software
development projects are characterized by regularly overrunning their budgets and rarely
meeting deadlines.
Effective software estimation is one of the most important software development activities;
it is also one of the most difficult. Underestimating a project leads to understaffing
it, under-scoping the quality assurance effort, and setting too short a schedule. That in
turn can lead to staff burnout, low quality, loss of credibility, missed deadlines, and
ultimately to inefficient development effort that takes longer than normal. Overestimating a
project can be almost as bad: Parkinson's Law holds that work expands to fill the time
available, which means that the project will take as long as estimated even if it is
overestimated. An accurate estimate is a critical part of the foundation of efficient
software development.
Total Metrics uses Functional Size Measures and Industry data to develop Project
Estimates:
Size is a major driver of effort, duration and risk. Once Total Metrics has measured the
functional size of your project, estimates of cost, duration, effort and defects can be
created.
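As a purely illustrative example (the figures are hypothetical, not Total Metrics' actual model): if a project measures 400 function points and comparable industry projects deliver roughly 10 function points per person-month, the effort estimate is about 400 / 10 = 40 person-months; a team of five then implies a schedule on the order of eight months.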
Estimating is a critical business process, especially at the early stages of the project.
Enterprises have limited resources of personnel, time and budget, and proper estimating will
allow the leaders of the enterprise to properly allocate these limited resources to achieve the
highest benefits.
There are detailed estimating processes in many different industry sectors. In the building
industry, there are standard books with detailed methodologies for all of the craftsmen
required to build a home or building. In engineering, there are similar guidelines based on
physics and chemistry. If these disciplines have strong estimating practices, then why is
software estimation's track record so abysmal?
The information technology (IT) trade magazines are constantly filled with stories of
runaway projects which have exceeded their original budgets by multiples of two and higher,
projects which failed to meet the users' requirements or delivery dates, and projects which were
cancelled after substantial financial investments. These failed projects seem to have some
consistent elements: