


Sci.Int.(Lahore),27(5),4865-4874,2015 ISSN 1013-5316; CODEN: SINTE 8 4865

THE QUALITY ASSESSMENT OF A SOFTWARE TESTING PROCEDURE AND ITS EFFECTS

Najia Saher1, Dost Muhammad Khan2, Faisal Shahzad3, Ayesha Karim4
1,2,3 Department of Computer Science & IT, The Islamia University of Bahawalpur, Pakistan
E-mail: {najiasaher, faisalsd}@gmail.com, [email protected]
4 National University of Computer and Emerging Sciences, Lahore, Pakistan, [email protected]
ABSTRACT - Software companies compete to develop higher-quality software on a minimal budget while strictly observing their timelines. Testing the software for accuracy and functionality is generally the final stage of the SDLC before releasing the software. This paper presents the findings of our research-based study, which has two primary targets: 1) when a test should be automated and when it should be manual, and 2) the trade-off between manual software testing and automated software testing. Furthermore, we have investigated the existing system's testing technique thoroughly on the basis of cost, time and the number of errors detected during functional, security and performance testing using the manual and automated test approaches.

Keywords: Automation Testing, Manual Testing, Defects, Functional Testing, Security Testing, Performance Testing

1. INTRODUCTION
Testing is a standard procedure used to verify that a product complies with its formal requirements. The principal goals of testing include certifying product quality by discovering and removing defects, demonstrating the presence of all specified functionality, and assessing the operational reliability of the software product. The testing process comprises all the activities used to determine the differences between the requirements of the product and its actual behaviour.
Testing can be categorized as manual or automated, though both methodologies are correlated. In automated testing a script is composed by the tester and software is used to test the product, while in manual testing the tester executes the test cases by hand, without the aid of any automated tool. Manual testing is the embryonic form of all testing categories and helps to discover bugs in a software system. Automated testing can perform a large number of tests in a brief time, whereas manual testing uses the knowledge of the testing specialist to target the parts of the system that are likely to be more error-prone.
Automated test tools are capable aids for improving the return on the testing investment when used carefully. A few tests inherently require an automated approach to be effective, while others must be manual. Moreover, automated testing projects that fail have a large impact on a project in terms of expense. How can we decide whether to automate a test or run it manually, and how much money and time should we spend on a test?
Our aim is to strengthen the discussion of functional and non-functional testing methodology using manual or automatic test generation. Some tests are suitable for the automated testing method because certain types of tests are simply not feasible manually in any significant way. Other tests, on the contrary, are either best done manually or can only be done manually. As with test methodologies, the choice of the suitable option in this context has a real impact on the return on investment.
In this research paper we answer the following questions: which is the better testing methodology, manual or automated, in terms of the performance, functionality and security of web-based applications, and what is the trade-off between automated and manual testing. We take the three attributes that affect the above-stated techniques, i.e. cost, time and the number of errors detected by the manual or automated software testing approach. We additionally compare the effect of applying these testing procedures and the resulting effect on performance, functionality and security.
The remainder of the paper is organized as follows: Section 2 is about the testing procedure of software quality, Section 3 presents the literature review, Section 4 the methodology, results are discussed in Section 5 and finally the conclusion is drawn in Section 6.

2. Testing Procedure of Software Quality
Testing is the standard procedure used to validate that a product conforms to its formal requirements. The principal objectives of testing include confirming product quality by finding and managing errors in the project, showing the presence of all specified functionality in the software and assessing the operational quality of the product. Software testing includes all the activities aimed at recognizing the differences between the specification details of the software product and its real behaviour. Activities such as planning, design, implementation, execution and evaluation together make up the testing process. A general testing process is depicted in Figure 1.
For testing an application, the manual and automated testing methods are quite unlike systems. Manual testing is a straightforward process compared with the automated method; it is time consuming and is feasible only up to a certain level, whereas in the automated process every sort of testing is possible using different kinds of tools. In contrast to manual testing, the automated one is more costly. Generally, small-scale projects use manual testing because it is effective and can be covered in a constrained course of time. Manual testing is centred on the concept and functions of the project, whereas automated tools support only a restricted set of languages.

Sept.-Oct

Figure 1 – The general testing process

Capers Jones, in "Estimating Software Costs", cites failure rates for large, complex system development efforts as high as 50% or higher. A test automation project can be just as complex as developing software, and, indeed, Dorothy Graham and Mark Fewster cite similar failure rates for automation projects in "Software Test Automation".

2.1 Manual Testing:
In this technique the software product is tested manually using test cases. The test designer takes all the test cases, executes them on the application manually and records whether a specific step was fulfilled successfully or whether it failed. For manual testing the only information needed by the tester is the test case and the guideline on how to execute that case.
As per the tactics of a test plan, the test cases must comprise all sorts of testing. The design document is used by a test engineer as the source for writing a test case. Manual testing is always a part of any testing effort. It is especially profitable in the pilot stage of the software development phase, when the software and its user interface are not yet sufficiently stable and starting the automation does not make sense. The manual testing process is depicted in Figure 2 [9].

Figure 2 – The manual testing process

2.2 Automation Testing:
In the automated testing methodology, test engineers run scripts on a testing tool for evaluation and testing purposes. Testing software using scripts in an automated tool is a difficult job for a new test engineer, as the engineer must first have decent programming knowledge before being able to compose a script against any test case. The engineers follow the plan and create numerous scripts for the various tests. Converting a test case into a script is an absolutely time-consuming job. Before running a script, an environment must be set up on the tool to run the test case, and the test scripts are fragile in the sense that a solitary change can become the reason for the failure of the whole script. At the stage of script execution the screen layout should remain the same, since the GUI objects of the screen are the main input while writing a test script [9]. The automated testing process [9] is depicted in Figure 3.
The major benefits and drawbacks of manual testing vs. automation testing [16] are depicted in Table 1.
3. Literature Review
According to [25], accurate estimates of the return on investment of test automation entail the analysis of the costs and benefits involved. However, since the benefits of test automation are particularly hard to quantify, many estimates conducted in industrial projects are limited to considerations of cost only. In [7], a case study originally published by Linz and Daigl [18] is presented, which details the costs for test automation as follows:
V := expenditure for test specification and implementation
D := expenditure for a single test execution
Accordingly, the cost of a single automated test (Aa) can be calculated as eq. (1):
Aa := Va + n * Da   (1)
As described in [25], "where Va is the expenditure for specifying and automating the test case, Da is the expenditure for executing the test case one time, and n is the number of automated test executions. Following this model, in order to calculate the break-even point for test automation, the cost for manual test execution of a single test case (Am) is calculated similarly as eq. (2)":
Am := Vm + n * Dm   (2)
"where Vm is the expenditure for specifying the test case, Dm is the expenditure for executing the test case and n is the number of manual test executions". Figure 4 depicts these relations.

Figure 3 – The automated testing process
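The break-even point implied by eqs. (1) and (2), i.e. the number of executions n at which Aa = Am, can be sketched in a few lines. The cost figures below are hypothetical, chosen only to illustrate the model:

```python
import math

def break_even_runs(Va, Da, Vm, Dm):
    """Smallest number of test executions n at which the automated cost
    Va + n*Da drops to or below the manual cost Vm + n*Dm.
    Rearranging eq. (1) and eq. (2): n >= (Va - Vm) / (Dm - Da)."""
    if Da >= Dm:
        return None  # automation never pays off if each run costs more
    return math.ceil((Va - Vm) / (Dm - Da))

# Hypothetical expenditures in person-hours: automating a case costs far
# more up front (Va) but each automated run (Da) is much cheaper.
n = break_even_runs(Va=16.0, Da=0.1, Vm=1.0, Dm=0.5)
print(n)  # -> 38 runs before automation becomes cheaper
```

This is the crossing point of the two cost curves that Figure 4 illustrates: below n runs, manual execution is cheaper; above it, automation is.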

Table 1: Benefits and challenges of manual testing vs. automation testing

Benefits of Manual Testing:
- Can be used in both small and big projects.
- Test cases can easily be reduced or added as the project evolves.
- Is covered within a limited cost.
- Easy to learn for people new to manual testing.
- More reliable than automation in many cases (automation does not cover all cases).

Benefits of Automated Testing:
- Fast: covers all cases in a limited time period.
- Reliable: automated testing tools run the scripts reliably each time; exactly the same steps are followed every time the script is run.
- Comprehensive: one can build a suite of tests that covers every feature of the application; it is always desirable to test the complete functionality of the software.
- Reusable: tests can be reused on different versions of a website or application, even if the user interface changes.
- Time constraints: automated testing is good for projects which have no time constraints.

Challenges of Manual Testing:
- GUI object size differences, colour combinations, etc.
- Actual load and performance are not possible to cover.
- Running tests manually is a very time-consuming job.

Challenges of Automation Testing:
- Automation testing is expensive as compared to manual testing.
- Selection and customization of the test tool.
- Selection of the automation level, then development and verification of the scripts.

Figure 4: Manual and automated testing cost

Figure 4 shows the relation between manual and automated testing. The x-axis represents the number of test runs, while the y-axis represents the cost of testing. The figure depicts how the costs increase with every test run. While the curve for manual testing costs rises sharply, automated test execution costs increase only moderately. However, automated testing needs a higher initial investment compared with manual testing.
Bach [22] argues that "hand testing and automated testing are really two different processes, rather than two different ways to execute the same process. Their dynamics are different, and the bugs they tend to reveal are different. Therefore, direct comparison of them in terms of dollar cost or number of bugs found is meaningless."
Boehm criticizes this from the standpoint of value-based software engineering [23]: "Much of current software engineering practice and research is done in a value-neutral setting, in which every requirement, use case, object, test case, and defect is equally important. In a real-world project, however, different test cases and different test executions have different priorities based on their probability to detect a defect and on the impact which a potential defect has on the system under test".
Johnson Michael [2] discusses a performance-testing approach that required manually inspecting the performance logs. Another direction of future work is automatic performance test generation: in that project, the authors relied on the performance architect's experience to identify the execution paths and measurement points for performance testing; this crucial information can be derived from the performance requirements and system design, and they plan to find guidelines for specifying both so as to make the automation possible.
Andreas Leaner [7] discusses the "strength of automatically generated and manually written tests and concludes that both have different strengths. An automatic strategy can generate and run a much greater number of test cases than a human could run in the same time".
Rudolf Ramler in [8] discussed "cost models to support decision making in the trade-off between automated and manual testing. He summarized typical problems and shortcomings of overly simplistic cost models for automated testing frequently found in literature and commonly applied in practice: only costs are evaluated and benefits are ignored, incomparable aspects of manual testing and automated testing are compared, all test cases and test executions are considered equally important, project context, especially the available budget for testing, is not taken into account and additional cost factors are missing in the analysis. He also introduced an alternative model using opportunity cost. The concept of opportunity cost allows us to include the benefit and, thus, to make the analysis more rational". In [27, 28] different methods are used to select the best data mining algorithm for a dataset.

4. METHODOLOGY
For this case study we have collected data from the insurance domain, consisting of 4 projects with 31 releases. To address the problem, we use statistical analysis to find whether manual testing or automated testing is best for web-based projects. The questionnaire prepared tries to identify successful and challenging areas in the existing approaches used during the testing of web-based systems; by analyzing this data we are able to find the best testing technique. We have investigated the existing system's testing technique thoroughly on the basis of cost, time and the number of errors detected during functional, security and performance testing using the manual and automated test approaches. We collected data against the above-mentioned measures and analyzed the collected data through statistical techniques.
The following table presents the statistics of the data that we have collected using a questionnaire.

Table 2: Data collection statistics

Attribute           Value
Data Collection     Questionnaire
Sample Size         4 Projects
Project Type        Web-based software applications
Project Duration    4 to 6 months (per release)

The T-test analysis technique has been applied in the data analysis; the SPSS statistical package is used to apply the T-test.

4.1 Hypotheses and Research Site
The background of this study is automated and manual testing: when a test should be automated, when it should be manual, and the trade-off between manual software testing and automated software testing. For this we compare automated and manual testing on the parameters of 'cost', 'time' and 'number of errors identified'.
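The independent-samples T statistic that a package such as SPSS computes for such a comparison can be sketched with the Python standard library. This is an illustration of the analysis method only; the per-release figures below are hypothetical, not the study's raw data:

```python
import math
import statistics as st

def t_statistic(sample_a, sample_b):
    """Unpooled two-sample T statistic:
    t = (mean_a - mean_b) / sqrt(S_a^2/N_a + S_b^2/N_b),
    where S^2 is the sample variance and N the sample size."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = st.variance(sample_a), st.variance(sample_b)
    return (st.mean(sample_a) - st.mean(sample_b)) / math.sqrt(va / na + vb / nb)

# Hypothetical per-release testing times (working days) for a few releases.
automation = [12, 10, 14, 11, 13]
manual     = [22, 25, 20, 27, 24]
t = t_statistic(automation, manual)
print(round(t, 2))  # -> -8.29: strongly negative, automation times are lower
```

A large negative t here favours the alternate hypothesis that the automation value is lower; the study compares the resulting p value against the 0.05 significance level.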
We consider 'Cost' on the basis of licensed cost, man hours, training cost and maintainability cost; 'Time' on the basis of testing time and training time; and 'Number of errors identified' during functional testing, performance testing and security testing.¹ We also considered usability testing, but during the collection of data in the software house we did not find any data regarding automated usability testing.

Hypothesis I
The purpose of this hypothesis is to test the cost of manual testing and automation testing. Here the variable 'testing' has two categories, automation and manual, whereas the variable 'Cost' has four categories: licensed cost, man hours, training cost and maintainability cost. To prove the hypothesis, we have used regression analysis and applied the T-Test.
Null Hypothesis H0: Automation cost (licensed cost, salary, training cost, maintainability cost) is greater than or equal to manual cost (licensed cost, salary, training cost, maintainability cost).
Alternate Hypothesis H1: Automation cost (licensed cost, salary, training cost, maintainability cost) is less than manual cost (licensed cost, salary, training cost, maintainability cost).

Hypothesis II
The purpose of this hypothesis is to test the 'Time' taken by manual testing and automation testing. Here the variable 'testing' has two categories, automation and manual, whereas the variable 'Time' has two categories, testing time and training time. To prove the hypothesis, we have used regression analysis and applied the T-Test.
Null Hypothesis H0: Automation testing time (testing time, training time) is greater than or equal to manual testing time (testing time, training time).
Alternate Hypothesis H1: Automation testing time (testing time, training time) is less than manual testing time (testing time, training time).

Hypothesis III
The purpose of this hypothesis is to test the 'Number of errors identified' by manual testing and automation testing. Here the variable 'testing' has two categories, automation and manual, whereas the variable 'Errors Identified' has three categories: functional, security and performance. To prove the hypothesis, we have used regression analysis and applied the T-Test.
Null Hypothesis H0: Automation testing errors identified (in functional testing, security testing, performance testing) are greater than or equal to manual errors identified (in functional testing, security testing, performance testing).
Alternate Hypothesis H1: Automation testing errors identified (in functional testing, security testing, performance testing) are less than manual errors identified (in functional testing, security testing, performance testing).

¹ We collected data on the following parameters: functional testing has been checked on the basis of the user requirements (SRS), system security on the basis of authentication and password checking, and performance testing on the basis of load testing and stress testing.

4.2 Research Site and Data Collection
For this research a leading software organization with diverse commercial applications, having more than 800 employees and at CMMI level 5, has been chosen as our research site. Table 3 gives the details of the organization and its projects. All the projects belong to the insurance domain, with four projects and 31 releases.

Table 3: Data collected from the organization

Organization Detail
  Organization size: 800 employees
  Organization's maturity level: CMMI Level 5, ISO certified
Project Details
  Number of projects under study: Four (Project A = 3 releases, Project B = 14 releases, Project C = 6 releases, Project D = 8 releases)
  Domain of the projects under study: Insurance
  Average duration of each release: Project A = 120 days, Project B = 110 days, Project C = 180 days, Project D = 90 days
  Average resources utilized in each release: Project A: team size 15, Quality Assurance = 4 testers; Project B: team size 20, Quality Assurance = 4 testers; Project C: team size 40, Quality Assurance = 5 testers; Project D: team size 10, Quality Assurance = 5 testers
  Technology used in the selected projects: Projects A, B, C and D: Dot Net
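All three hypotheses share the same one-sided decision rule against the 0.05 significance level, which can be stated compactly (the p values passed in below are the ones the paper reports in Section 5; the function name is ours):

```python
def decide(p_value, alpha=0.05):
    """One-sided T-Test decision rule used for all three hypotheses:
    H0 (automation >= manual) is rejected only when p < alpha."""
    return "reject H0 (accept H1)" if p_value < alpha else "fail to reject H0"

print(decide(0.103))  # cost p value reported in Section 5.1
print(decide(0.657))  # defects p value reported in Section 5.3
```

Both reported p values exceed 0.05, so in both cases the null hypothesis stands, as discussed in the results section.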

Table-4: T-Test results of Cost at significant level of 0.05

5. RESULTS AND DISCUSSION
In this section we discuss our findings based on the statistical analysis of the hypotheses².

5.1 Hypothesis-I: Relationship b/w Automation and Manual Testing in terms of Cost
For hypothesis-I we have combined all the releases of the four projects to determine whether there is a relationship between automated and manual testing with respect to cost. Our results in Table-4 show that the p value of the T-Test, 0.103, is greater than 0.05, so we fail to reject the null hypothesis: the automation cost is greater than or equal to the manual cost. Figure 5 and Table-4 show that the automation testing cost is higher than the manual testing cost if we include all the licensing and training costs; in particular, the licensing of the automation tool is the factor that mainly maximizes the testing cost. For this reason, Figure 6 shows the cost of automation and manual testing with respect to Project A and its releases: if we charge all the tool costs to the first release of the project and only the yearly licensing cost to the later ones, the automation cost in Release 2 and Release 3 comes out to be less than the manual cost.

Figure 5 – Cost of the four projects, not including the α-cost

² Mathematical description of the hypotheses is given in the Appendix.
Figure 6 – Automation cost vs. manual cost in Project A (Releases 1 to 3)

The total cost³ of testing is defined as the sum of the license, maintainability, salary and training costs:
CT = CL + CM + CS + CTR

Table-5: Testing cost per single release
  Licensing cost: Rs. 9,96,000/-
  Maintainability cost: 18% per year
    1-year maintainability cost: Rs. 1,79,280/-
    3-years maintainability cost: 1,79,280 × 3 = Rs. 5,37,840/-
  Total three-years cost: Rs. 15,33,840/-
  Overall projects done: 14
  One project test cost: Rs. 1,09,560/-
  Training cost
    Training time: 1 month
    Avg. salary: Rs. 30,000/-
    Salary/hr.: Rs. 170/-
    Training cost: Rs. 30,000/-

Testing cost per single release = testing time in hrs. × salary/hr. + avg. training cost + avg. (α) licensing cost + avg. maintainability cost

If we do not take this alpha cost into account for these projects, then the automation cost is less than the manual cost, based on working hours and the salary corresponding to those working hours.

Table-6: Sample data of Project A, showing the releases of Project A, the number of scripts, the errors identified, the time and the cost

Automation
  Release   Area          Scripts   Errors   Time    Cost
  R1        Functional    72        10       25      4250
  R1        Performance   3         2        1       170
  R1        Security      5         2        2       340
  R2        Functional    76        6        16      2720
  R2        Performance   4         3        1.5     255
  R2        Security      5         2        2.5     425
  R3        Functional    91        9        20      3400
  R3        Performance   11        3        2       340
  R3        Security      5         1        1       170

Manual
  Release   Area          Scripts   Errors   Time    Cost
  R1        Functional    78        12       28      4760
  R1        Performance   7         1        4       680
  R1        Security      5         2        7       1190
  R2        Functional    76        6        42      7140
  R2        Performance   4         2        4       680
  R2        Security      5         2        4       680
  R3        Functional    91        9        47      7990
  R3        Performance   15        3        8       1360
  R3        Security      5         1        4       680

5.2 Hypothesis-II: Relationship b/w Automation and Manual Testing in terms of Time
Figure 7 and Table-7 show that automation testing saves time during regression testing, performance testing, load testing and stress testing, because the script in the automated test is written once, while in manual testing one has to start from scratch. We also concluded that it is very hard to do regression testing manually, especially in a released project. Automation testing is performed swiftly and therefore saves the testers' time. Figure 7 shows the data of automated testing time and manual testing time in working days. By combining the data of all the projects it is finally concluded that automation testing saves almost half of the manual testing time.

5.3 Hypothesis-III: Relationship b/w Automation and Manual Testing in terms of Number of Defects Identified
For hypothesis-III, Table-8 indicates the relationship between the defects identified by manual and automated testing: since the p value of the T-Test, 0.657, is greater than 0.05, we fail to reject the null hypothesis. It shows that the number of defects identified in automation testing is greater than or equal to that identified in manual testing.
Table-8 and Table-9 show that automation testing generates the best results in functional, performance and security testing; as performance testing includes load and stress testing, these are easily identified in automation testing. Figure 8 shows the mean number of defects identified in all 4 projects, combining all releases. Here the data is collected on the basis of functional, performance and security test cases. However, there is a slight difference between automation and manual testing as far as performance and security are concerned, because in manual testing it is complicated to attempt all scripts and all possible combinations, while automation executes all possible combinations just by writing a single script.

³ CT: total cost, CL: license cost, CM: maintainability cost, CS: salary cost, CTR: training cost.
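The amortized licensing figures in Table-5 above follow from simple arithmetic; this sketch reproduces them from the values the paper reports:

```python
# Values taken from Table-5 (amounts in Rs.).
licensing = 996_000                    # automation tool licence
maint_rate = 0.18                      # 18% yearly maintainability
maint_1yr = licensing * maint_rate     # 1-year maintainability cost
maint_3yr = maint_1yr * 3              # 3-years maintainability cost
total_3yr = licensing + maint_3yr      # total three-years cost
per_project = total_3yr / 14           # 14 projects done in that period
print(maint_1yr, maint_3yr, total_3yr, per_project)
# -> 179280.0 537840.0 1533840.0 109560.0
```

This Rs. 1,09,560/- per project is the averaged (α) licensing component that, together with training and maintainability, tips the total automation cost above the manual cost in early releases.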
Table-7: T-Test results of Time at a significance level of 0.05

Table-8: T-Test results of Errors Identified at a significance level of 0.05

Table-9: Number of errors identified in automation testing vs. manual testing

Testing      Functional Errors   Performance Errors   Security Errors
Manual       60-80%              70-80%               60-89%
Automation   70-80%              90-99%               90-99%

Figure-7: Automation testing time vs. manual testing time

Figure-8: Number of errors identified

Figure-9: Number of errors identified in automation testing vs. manual testing

6. CONCLUSION
To ensure the quality of any software, testing is a prime venture in the SDLC. A few tests inherently require an automated methodology to be effective, while others must be manual. We have observed that unsuccessful automated testing projects are expensive. In this research, we have examined whether to automate a test or run it manually. Our model is based on the cost and time spent in testing and the number of bugs detected during the automated and manual testing approaches. This model will be valuable and steady support for deciding the trade-off between automated and manual testing.
The automation cost is higher than the manual cost when all the licensing and training costs are considered; in particular, the licensing of the automation tool is the factor that mainly maximizes the testing cost. Yet if we leave the aforementioned cost out of the later releases of the projects, the automation cost is lower than the manual cost.
On the other hand, automated testing needs a higher initial investment compared with manual testing, but it can reduce the testing-associated costs by minimizing the time spent on creating and running the test cases. This reduction of testing cost appears after a period of time, depending on the utilization of the automation tools.
As far as the time taken to execute manual vs. automated tests is concerned, automated testing diminishes the time it takes to complete software testing and allows for increased test coverage. Automated tests save time during regression testing, performance testing, load testing and stress testing, because the script in the automated test is written once, whereas in manual testing we start from scratch. It is also observed that it is very hard to do regression testing manually, especially in a released project, because automation performs very well there and saves the testers' time.
A greater number of bugs are detected via automated testing as compared to manual testing. By analysis of the data we have found that automation testing generates the best results in functional, performance and security testing; as performance testing includes load testing and stress testing, these are easily identified in automation testing. During discussion with senior testers, it was revealed that software testing cannot be automated completely: some tests still have to be done manually, and there are specific tests where automated tools are of no use.

APPENDIX
Mathematical Description of Hypotheses 1, 2 and 3
For hypotheses 1, 2 and 3 the testing variable has two categories, automation (A) and manual (M), and for each we use the T-Test. In the first case, we need to find the relationship between the cost of automation and manual testing:

t = (Mean_A − Mean_M) / √( S²_A / N_A + S²_M / N_M )

In the second case, we need to find the relationship between the time taken for automation and manual testing, and in the third case the relationship between the errors identified by automation and manual testing; the same statistic is computed with the time and error samples respectively.

Where:
S² = variance
N = number of records

"In a T-Test a probability of 0.05 or less is commonly interpreted by social scientists as justification for rejecting the null hypothesis that the row variable is unrelated (that is, only randomly related) to the column variable."

REFERENCES
[1] Barber, R.S., "Beyond Performance Testing", BPT Part 1: Introduction, © PerfTestPlus, Inc., 2006.
[2] Johnson, M.J., Ho, C.-W., Maximilien, E.M., Williams, L., "Incorporating Performance Testing in Test-Driven Development", IEEE Software, IEEE Computer Society, 2007, Vol. 24, Issue 3, ISSN 0740-7459, INSPEC Accession Number 9457103.
[3] Software Testing and Development Technical Articles Online: http://smartbear.com/community/resources/ (accessed on 25-08-2010).
[4] Samaroo, A., Allott, S., & Hambling, B. (1999). Effective Testing for E-Commerce. Retrieved June 15, 2001: http://www.stickyminds.com/docs_index/XML0471.doc
[5] Gerrard, P. (2000). Risk-Based E-Business Testing: Part 1 – Risks and Test Strategy. Retrieved June 15, 2001: http://www.evolutif.co.uk/articles/EBTestingPart1.pdf
[6] SmartBear Software, "Uniting Your Automated and Manual Test Efforts", © 2010 SmartBear Software.
[7] Ramler, R., Biffl, S., Grünbacher, P., Value-based Management of Software Testing. In: Biffl, S. et al.: Value-Based Software Engineering. Springer, 2005.
[8] Ramler, R. and Wolfmaier, K., Software Competence Center Hagenberg GmbH, "Economic Perspectives in Test Automation: Balancing Automated and Manual Testing with Opportunity Cost". http://aop.cslab.openu.ac.il/~lorenz/www/ontheShelf/p85.pdf
[9] OTS Solutions Pvt. Ltd. (an offshore software development company), "Manual Testing vs Automated Testing", http://www.otssolutions.com/blog/?p=37
[10] Belatrix Software Factory, www.belatrixsf.com, Case Study: From Manual Testing to Automated Testing.
[11] Taipale, O., Smolander, K., and Kälviäinen, H., A Survey on Software Testing, 6th International SPICE Conference on Software Process Improvement and Capability Determination (SPICE 2006), Luxembourg, 2006.
[12] Nguyen, H.Q., "Testing Web-based Applications", Software Testing & Quality Engineering, Vol. 2, No. 3, May 2000, pp. 23-30.
[13] Altman, D.G. (1991) Practical Statistics for Medical Research. Chapman & Hall, London; Campbell, M.J. & Machin, D. (1993) Medical Statistics: A Commonsense Approach, 2nd edn. Wiley, London.
[14] Dustin, E. et al., Automated Software Testing, Addison-Wesley, 1999.
[15] Ramler, R. and Wolfmaier, K., "Economic Perspectives in Test Automation: Balancing Automated and Manual Testing with Opportunity Cost".
[16] Khan, M.J., Qadeer, A., Shamail, S., "Software Automated Testing Guidelines".
[17] Hughes Software Systems Ltd., Test Automation, http://www.hssworld.com/whitepapers/whitepaper_pdf/test_automation.pdf, December 2002.
[18] Linz, T., Daigl, M., GUI Testing Made Painless: Implementation and Results of the ESSI Project Number 24306, 1998. In: Dustin et al., Automated Software Testing, Addison-Wesley, 1999, p. 52.
[19] Dustin, E. et al., Automated Software Testing, Addison-Wesley, 1999.
[20] Fewster, M., Graham, D., Software Test Automation: Effective Use of Test Execution Tools, Addison-Wesley, 1999.
[21] Link, J., Unit Testing in Java: How Tests Drive the Code, Morgan Kaufmann, 2003.
[22] Bach, J., Test Automation Snake Oil, 14th International Conference and Exposition on Testing Computer Software, Washington, DC, 1999.
[23] Boehm, B., Value-Based Software Engineering: Overview and Agenda. In: Biffl, S. et al.: Value-Based Software Engineering. Springer, 2005.
[24] Hoffman, D., Cost Benefits Analysis of Test Automation, Software Testing Analysis & Review Conference (STAR East), Orlando, FL, May 1999.
[25] Bathla, R., Kapil, A., "Analytical Scenario of Software Testing Using Simplistic Cost Model", International Journal of Computer Science and Network (IJCSN), Vol. 1, Issue 1, February 2012, www.ijcsn.org, ISSN 2277-5420.
[26] Marinov, D. and Khurshid, S., "TestEra: A Novel Framework for Automated Testing of Java Programs", in Proc. 16th IEEE International Conference on Automated Software Engineering (ASE), 2001, pp. 22-34.
[27] Khan, D.M., Mohamudally, N., Babajee, D.K.R., "Investigating the Statistical Linear Relation between the Model Selection Criterion and the Complexities of Data Mining Algorithms", Journal of Computing, Vol. 4, Issue 8, 2012, pp. 14-28.
[28] Khan, D.M., Mohamudally, N., "Model Selection Criterions as Data Mining Algorithms' Selector: The Selection of Data Mining Algorithms through Model Selection Criterions", Journal of Computing, Vol. 4, Issue 3, 2012, pp. 102-114.
