The Quality Assessment of A Software Testing Procedure and Its Effects
Keywords— Automation Testing, Manual Testing, Defects, Functional Testing, Security Testing, Performance Testing
Manual testing:
- Test cases can easily be added or removed as the project evolves.
- It can be performed at limited cost.
- It is easy to learn for people who are new to manual testing.
- It is more reliable than automated testing in many cases, since automated tests do not cover all cases.
- Actual load and performance cannot be covered by manual testing.
- Running tests manually is a very time-consuming job.

Automated testing:
- Reliable: automated testing tools run the scripts reliably each time; exactly the same steps are followed every time the script is run.
- Comprehensive: one can build a suite of tests that covers every feature of the application, and it is always desirable to test the complete functionality of the software.
- Reusable: one can reuse tests on different versions of a website or application, even if the user interface changes.
- Time constraints: automated testing is good for projects that have no tight time constraints.
- It requires selection and customization of a test tool, then selection of the automation level, followed by development and verification of the scripts.
Figure 4: Manual and Automated Testing Cost

Figure 4 shows the relation between manual and automated testing. The x-axis represents the number of test runs, while the y-axis represents the cost of testing. The figure depicts how the costs increase with every test run. While the curve for manual testing costs rises sharply, automated test execution costs increase only moderately. However, automated testing needs a higher initial investment compared with manual testing.

Bach [22] argues that "hand testing and automated testing are really two different processes, rather than two different ways to execute the same process. Their dynamics are different, and the bugs they tend to reveal are different. Therefore, direct comparison of them in terms of dollar cost or number of bugs found is meaningless."

Boehm criticizes this from the standpoint of value-based software engineering [23]: "Much of current software engineering practice and research is done in a value-neutral setting, in which every requirement, use case, object, test case, and defect is equally important. In a real-world project, however, different test cases and different test executions have different priorities based on their probability to detect a defect and on the impact which a potential defect has on the system under test."

Johnson Michael [2] discusses a performance-testing approach that required manually inspecting the performance logs. The authors identify automatic performance test generation as a direction for future work: in their project they relied on the performance architect's experience to identify the execution paths and measurement points for performance testing, they note that this crucial information can be derived from the performance requirements and the system design, and they plan to find guidelines for specifying performance requirements and system design so as to make the automation possible.

Andreas Leaner [7] discusses the "strength of automatically generated and manually written tests and concludes that both have different strengths. An automatic strategy can generate and run a much greater number of test cases than a human could run in the same time".

Rudolf Ramler [8] discussed "cost models to support decision making in the trade-off between automated and manual testing. He summarized typical problems and shortcomings of overly simplistic cost models for automated testing frequently found in the literature and commonly applied in practice: only costs are evaluated and benefits are ignored; incomparable aspects of manual testing and automated testing are compared; all test cases and test executions are considered equally important; project context, especially the available budget for testing, is not taken into account; and additional cost factors are missing in the analysis. He also introduced an alternative model using opportunity cost. The concept of opportunity cost allows us to include the benefit and, thus, to make the analysis more rational". In [27,28] different methods are used to select the best data mining algorithm for a dataset.

4. METHODOLOGY
For this case study we have collected data from the insurance domain, consisting of 4 projects with 31 releases. In order to address the problem, we use statistical analysis to find whether manual testing or automated testing is better for web-based projects. The questionnaire we prepared tries to identify successful and challenging areas in the existing approaches used during the testing of web-based systems; by analyzing this data, we are able to find the better testing technique. We have investigated the existing systems' testing techniques thoroughly on the basis of cost, time, and the number of errors detected during functional, security, and performance testing under both the manual and the automated test approach. We collected data against the above-mentioned measures and analyzed the collected data through statistical techniques.

The following table presents the statistics of the data that we collected using a questionnaire.

Table 2: Data collection statistics
  Data collection: Questionnaire
  Sample size: 4 projects
  Project type: Web-based software applications
  Project duration: 4 to 6 months (per release)

The T-test analysis technique has been applied to the data; the SPSS statistical package is used to apply the T-test.
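The descriptive step of this analysis can also be sketched outside SPSS. The following minimal Python illustration is not part of the study; the file name and column names are hypothetical:

```python
# Illustrative only: summarize cost, time, and errors per testing approach.
# Assumes a hypothetical CSV with one row per release, e.g.:
#   approach,cost,time,errors
#   automation,4250,25,10
import pandas as pd

releases = pd.read_csv("release_measures.csv")  # hypothetical file name

# Mean, standard deviation, and count for each testing approach; these are
# exactly the inputs a two-sample T-test needs.
summary = releases.groupby("approach")[["cost", "time", "errors"]].agg(
    ["mean", "std", "count"]
)
print(summary)
```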
4.1 Hypotheses and Research Site
The background of this study is automated and manual testing: when a test should be automated, when it should be manual, and the trade-off between manual software testing and automated software testing. For this we compare automated and manual testing on the parameters of 'cost', 'time', and 'number of errors identified'.
We consider 'cost' on the basis of licensing cost, man hours, training cost, and maintainability cost; 'time' on the basis of testing time and training time; and the number of errors identified during functional, performance, and security testing (functional testing was checked against the user requirements in the SRS, system security against authentication and password checking, and performance testing on the basis of load testing and stress testing). We also considered usability testing, but during data collection at the software house we did not find any data regarding automated usability testing.

Hypothesis I
The purpose of this hypothesis is to test the cost of manual testing and automation testing. Here the variable 'testing' has two categories, automation and manual, whereas the variable 'cost' has four categories: licensing cost, man hours, training cost, and maintainability cost. To test the hypothesis, we have used regression analysis and applied the T-test.
Null Hypothesis:
H0: Automation cost (licensing cost, salary, training cost, maintainability cost) is greater than or equal to manual cost (licensing cost, salary, training cost, maintainability cost).
Alternate Hypothesis:
H1: Automation cost (licensing cost, salary, training cost, maintainability cost) is less than manual cost (licensing cost, salary, training cost, maintainability cost).

Hypothesis II
The purpose of this hypothesis is to test the time taken by manual testing and automation testing. Here the variable 'testing' has two categories, automation and manual, whereas the variable 'time' has two categories, testing time and training time. To test the hypothesis, we have used regression analysis and applied the T-test.
Null Hypothesis:
H0: Automation testing time (testing time, training time) is greater than or equal to manual testing time (testing time, training time).
Alternate Hypothesis:
H1: Automation testing time (testing time, training time) is less than manual testing time (testing time, training time).

Hypothesis III
The purpose of this hypothesis is to test the number of errors identified by manual testing and automation testing. Here the variable 'testing' has two categories, automation and manual, whereas the variable 'errors identified' has three categories: functional, security, and performance. To test the hypothesis, we have used regression analysis and applied the T-test.
Null Hypothesis:
H0: Automation testing errors identified (in functional testing, security testing, performance testing) are greater than or equal to manual errors identified (in functional testing, security testing, performance testing).
Alternate Hypothesis:
H1: Automation testing errors identified (in functional testing, security testing, performance testing) are less than manual errors identified (in functional testing, security testing, performance testing).
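Each of the three hypothesis pairs above reduces to a one-tailed two-sample T-test (automation less than manual). As a minimal sketch of the decision rule, assuming hypothetical per-release cost figures rather than the study's data:

```python
# One-tailed two-sample T-test mirroring Hypothesis I (cost); the same
# pattern applies to Hypothesis II (time) and Hypothesis III (errors).
# The figures below are hypothetical placeholders, not the study's data.
from scipy import stats

automation_cost = [4250, 170, 170, 4760, 340, 1190]  # per-release, Rs.
manual_cost = [7140, 680, 1190, 7990, 680, 1360]     # per-release, Rs.

# H0: mean(automation) >= mean(manual); H1: mean(automation) < mean(manual)
t_stat, p_value = stats.ttest_ind(automation_cost, manual_cost,
                                  alternative="less")
if p_value < 0.05:
    print(f"p = {p_value:.3f} < 0.05: reject H0, accept H1")
else:
    print(f"p = {p_value:.3f} >= 0.05: fail to reject H0")
```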
4.2 Research Site and Data Collection
For this research, a leading software organization with diverse commercial applications, more than 800 employees, and CMMI Level 5 maturity has been chosen as our research site (a mathematical description of the hypotheses is given in the Appendix). Table 3 gives the details of the organization and its projects. All the projects belong to the insurance domain; there are four projects with 31 releases in total.

Table 3: Data collected from the organization
Organization details:
  Organization size: 800 employees
  Organization's maturity level: CMMI Level 5, ISO certified
Project details:
  Number of projects under study: Four (Project A = 3 releases; Project B = 14 releases; Project C = 6 releases; Project D = 8 releases)
  Domain of the projects under study: Insurance
  Average duration of each release in a project: Project A = 120 days; Project B = 110 days; Project C = 180 days; Project D = 90 days
  Average number of resources utilized in each release: Project A: team size 15, quality assurance = 4 testers; Project B: team size 20, quality assurance = 4 testers; Project C: team size 40, quality assurance = 5 testers; Project D: team size 10, quality assurance = 5 testers
  Technology used in the selected projects: .NET for Projects A, B, C, and D
Figure 6: Automation cost vs. manual cost in Project A (cost per release, Releases 1-3, manual vs. automation).

The total cost of testing is defined as the sum of the license, maintainability, salary, and training costs:

CT = CL + CM + CS + CTR

where CT is the total cost, CL the license cost, CM the maintainability cost, CS the salary cost, and CTR the training cost.
Table 5: Testing cost per single release
  Licensing cost: Rs. 9,96,000/-
  Maintainability cost: 18% per year
    1-year maintainability cost: Rs. 1,79,280/-
    3-year maintainability cost: Rs. 1,79,280 × 3 = Rs. 5,37,840/-
  Total three-year cost: Rs. 15,33,840/-
  Overall projects done: 14
  Test cost per project: Rs. 1,09,560/-
  Training cost:
    Training time: 1 month
    Average salary: Rs. 30,000/-
    Salary per hour: Rs. 170/-
    Training cost: Rs. 30,000/-

Testing cost per single release = testing time (in hours) × salary per hour + average training cost + average (α) licensing cost + average maintainability cost

If we do not charge this alpha cost (licensing and maintainability) to these projects, then the automation cost is less than the manual cost, based on working hours and the salary for those working hours.
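As a sanity check, the Table 5 arithmetic and the per-release cost formula can be reproduced in a few lines. The rupee figures come from Table 5; the function and its example argument are illustrative only:

```python
# Reproduce the Table 5 cost arithmetic with integer rupee amounts.
licensing_cost = 996_000                          # Rs. (Table 5)
annual_maintenance = licensing_cost * 18 // 100   # 18% -> Rs. 179,280
three_year_maintenance = 3 * annual_maintenance   # Rs. 537,840
total_three_year_cost = licensing_cost + three_year_maintenance
assert total_three_year_cost == 1_533_840         # matches Table 5
assert total_three_year_cost // 14 == 109_560     # per project, 14 done

def testing_cost_per_release(testing_hours, salary_per_hour=170,
                             avg_training_cost=30_000,
                             avg_licensing_cost=0,
                             avg_maintainability_cost=0):
    """Testing cost per single release, per the formula in the text.

    Leaving the two 'alpha' components (licensing, maintainability) at
    zero reproduces the no-alpha comparison discussed above.
    """
    return (testing_hours * salary_per_hour + avg_training_cost
            + avg_licensing_cost + avg_maintainability_cost)

print(testing_cost_per_release(120))  # hypothetical 120-hour release
```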
Table 6: Sample data of Project A, showing per release the number of test scripts, errors identified, time, and cost for automated and manual testing (partially reconstructed; some rows and release labels of the original table are not recoverable).

Approach | Release | Test area | No. of scripts | No. of errors | Time | Cost
Automation | R1 | Functional | 72 | 10 | 25 | 4250
Automation | R1 | Performance | 3 | 2 | 1 | 170
Automation | — | Functional | 78 | 12 | 28 | 4760
Automation | — | Performance | 11 | 3 | 2 | 340
Automation | — | Security | 5 | 1 | 1 | 170
Manual | R1 | Functional | 76 | 6 | 42 | 7140
Manual | R1 | Performance | 7 | 1 | 4 | 680
Manual | R1 | Security | 5 | 2 | 7 | 1190
Manual | R2 | Functional | 91 | 9 | 47 | 7990
Manual | R2 | Performance | 4 | 2 | 4 | 680
Manual | R2 | Security | 5 | 2 | 4 | 680
Manual | R3 | Performance | 15 | 3 | 8 | 1360
Manual | R3 | Security | 5 | 1 | 4 | 680

Figure 7 and Table 7 show that automation testing saves time during regression testing, performance testing, load testing, and stress testing, because the script of an automated test is written once, whereas in manual testing one has to start from scratch. We also concluded that it is very hard to do regression testing manually, especially in released projects. Automation testing is performed swiftly and therefore saves the testers' time. Figure 6 shows the testing time for automated and manual testing in working days; combining the data of all projects, we conclude that automation testing saves almost half of the manual testing time.

5.3 Hypothesis III: Relationship between Automation and Manual Testing in Terms of the Number of Defects Identified
For Hypothesis III, Table 8 indicates the relationship between the defects identified by manual and automated testing. Since the p-value of the T-test, 0.657, is greater than 0.05, we fail to reject the null hypothesis; this is consistent with the number of defects identified by automation testing being greater than or equal to that identified by manual testing.
Tables 8 and 9 show that automation testing generates the best results in functional, performance, and security testing; performance testing, which includes load and stress testing, is easily handled by automated tests. Figure 8 shows the mean number of defects identified in all four projects, combining all releases; the data is collected on the basis of functional, performance, and security test cases. However, there is only a slight difference between automation and manual testing as far as performance and security are concerned, because in manual testing it is complicated to attempt all scripts and all possible combinations, whereas automation covers them easily.
Table 7: T-test results for time at a significance level of 0.05 (table not recoverable from the source).
Figure 9: Number of errors identified in automation testing vs. manual testing.
6. CONCLUSION
Ensuring quality through software testing is a prime activity in the SDLC. A few tests inherently require an automated approach to be effective, while others must be manual, and unsuccessful automated testing projects are expensive. In this research we have examined when to automate a test and when to run it manually. Our model is based on the cost and time spent in testing and on the number of bugs detected under the automated and manual testing approaches, and it can support decision making in the trade-off between automated and manual testing.
The automation cost is higher than the manual cost when all licensing and training costs are considered; the licensing cost of the automation tool, in particular, drives up the testing cost. Yet if we set the aforementioned cost aside in later releases of a project, the automation cost is lower than the manual cost.
On the other hand, automated testing needs a higher initial investment compared with manual testing, but it can reduce the associated testing costs by minimizing the time spent on creating and running test cases. This reduction in testing cost appears after a period of time, depending on the utilization of the automation tools.
As far as the time taken to execute manual versus automated tests is concerned, automated testing reduces the time it takes to complete software testing and allows for increased test coverage. Automated tests save time during regression testing, performance testing, load testing, and stress testing, because the script of an automated test is written once, whereas in manual testing we start from scratch. It is also observed that it is very hard to do regression testing manually, especially in released projects, where automation performs very well and saves the testers' time.
A greater number of bugs are detected by automated testing than by manual testing. By analysis of the data we have found that automation testing generates the best results in functional, performance, and security testing.

REFERENCES
[1] Barber, R.S., "Beyond Performance Testing, Part 1: Introduction", PerfTestPlus, Inc., 2006.
[2] Johnson, M.J., Ho, C.-W., Maximilien, E.M., Williams, L., "Incorporating Performance Testing in Test-Driven Development", IEEE Software, Vol. 24, Issue 3, 2007, ISSN 0740-7459, INSPEC Accession Number 9457103.
[3] Software Testing and Development Technical Articles. Online: http://smartbear.com/community/resources/ (accessed 25-08-2010).
[4] Samaroo, A., Allott, S., and Hambling, B., "Effective Testing for E-Commerce", 1999. Retrieved June 15, 2001: http://www.stickyminds.com/docs_index/XML0471.doc
[5] Gerrard, P., "Risk-Based E-Business Testing, Part 1: Risks and Test Strategy", 2000. Retrieved June 15, 2001: http://www.evolutif.co.uk/articles/EBTestingPart1.pdf
[6] SmartBear Software, "Uniting Your Automated and Manual Test Efforts", SmartBear Software, 2010.
[7] Ramler, R., Biffl, S., Grünbacher, P., "Value-Based Management of Software Testing", in: Biffl, S. et al. (eds.), Value-Based Software Engineering, Springer, 2005.
[8] Ramler, R. and Wolfmaier, K. (Software Competence Center Hagenberg GmbH), "Economic Perspectives in Test Automation: Balancing Automated and Manual Testing with Opportunity Cost". Online: http://aop.cslab.openu.ac.il/~lorenz/www/ontheShelf/p85.pdf
[9] OTS Solutions Pvt. Ltd. (an offshore software development company), "Manual Testing vs Automated Testing". Online: http://www.otssolutions.com/blog/?p=37
[10] Belatrix Software Factory, "Case Study: From Manual Testing to Automated Testing". Online: www.belatrixsf.com
[11] Taipale, O., Smolander, K., and Kälviäinen, H., "A Survey on Software Testing", 6th International SPICE Conference on Software Process Improvement and Capability Determination (SPICE 2006), Luxembourg, 2006.
[12] Nguyen, H.Q., "Testing Web-based Applications", Software Testing & Quality Engineering, Vol. 2, No. 3, May 2000, pp. 23-30.
[13] Altman, D.G., Practical Statistics for Medical Research, Chapman & Hall, London, 1991; Campbell, M.J. and Machin, D., Medical Statistics: A Commonsense Approach, 2nd edn., Wiley, London, 1993.
[14] Dustin, E. et al., Automated Software Testing, Addison-Wesley, 1999.
[15] Ramler, R. and Wolfmaier, K., "Economic Perspectives in Test Automation: Balancing Automated and Manual Testing with Opportunity Cost".
[16] Khan, M.J., Qadeer, A., Shamail, S., "Software Automated Testing Guidelines".
[17] Hughes Software Systems Ltd., "Test Automation", December 2002. Online: http://www.hssworld.com/whitepapers/whitepaper_pdf/test_automation.pdf
[18] Linz, T., Daigl, M., "GUI Testing Made Painless: Implementation and Results of ESSI Project Number 24306", 1998. In: Dustin et al., Automated Software Testing, Addison-Wesley, 1999, p. 52.
[19] Dustin, E. et al., Automated Software Testing, Addison-Wesley, 1999.
[20] Fewster, M., Graham, D., Software Test Automation: Effective Use of Test Execution Tools, Addison-Wesley, 1999.
[21] Link, J., Unit Testing in Java: How Tests Drive the Code, Morgan Kaufmann, 2003.
[22] Bach, J., "Test Automation Snake Oil", 14th International Conference and Exposition on Testing Computer Software, Washington, DC, 1999.
[23] Boehm, B., "Value-Based Software Engineering: Overview and Agenda", in: Biffl, S. et al. (eds.), Value-Based Software Engineering, Springer, 2005.
[24] Hoffman, D., "Cost Benefits Analysis of Test Automation", Software Testing Analysis & Review Conference (STAR East), Orlando, FL, May 1999.
[25] Bathla, R. and Kapil, A., "Analytical Scenario of Software Testing Using Simplistic Cost Model", International Journal of Computer Science and Network (IJCSN), Vol. 1, Issue 1, February 2012, ISSN 2277-5420.
[26] Marinov, D. and Khurshid, S., "TestEra: A Novel Framework for Automated Testing of Java Programs", in: Proc. 16th IEEE International Conference on Automated Software Engineering (ASE), 2001, pp. 22-34.
[27] Khan, D.M., Mohamudally, N., Babajee, D.K.R., "Investigating the Statistical Linear Relation between the Model Selection Criterion and the Complexities of Data Mining Algorithms", Journal of Computing, Vol. 4, Issue 8, 2012, pp. 14-28.
[28] Khan, D.M. and Mohamudally, N., "Model Selection Criterions as Data Mining Algorithms' Selector: The Selection of Data Mining Algorithms through Model Selection Criterions", Journal of Computing, Vol. 4, Issue 3, 2012, pp. 102-114.

APPENDIX
Mathematical Description of Hypotheses 1, 2 and 3
Hypotheses 1, 2 and 3 all involve the testing variable, which has two categories, automation and manual, and for this we use the T-test. For example, in the first case we need to find the relationship between the cost of automation and manual testing:

t = (Mean_Automation − Mean_Manual) / √(S²_Automation/N_Automation + S²_Manual/N_Manual)

In the second case, we need to find the relationship between the time taken by automation and manual testing; in the third case, between the errors identified by automation and manual testing. The same statistic is used in each case, computed on the time and error measurements respectively.

Where:
S² = variance
N = number of records

"In a T-test, a probability of 0.05 or less is commonly interpreted by social scientists as justification for rejecting the null hypothesis that the row variable is unrelated (that is, only randomly related) to the column variable."
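A direct implementation of the statistic above, with illustrative sample arrays (not the study's data):

```python
# t = (mean_A - mean_M) / sqrt(S_A^2/N_A + S_M^2/N_M), per the Appendix.
import numpy as np

a = np.array([25.0, 1.0, 1.0, 28.0, 2.0])   # e.g. automation testing times
m = np.array([42.0, 4.0, 7.0, 47.0, 4.0])   # e.g. manual testing times

# var(ddof=1) is the sample variance S^2 used in the formula above
t = (a.mean() - m.mean()) / np.sqrt(a.var(ddof=1) / len(a)
                                    + m.var(ddof=1) / len(m))
print(f"t = {t:.3f}")  # compare against the critical value at alpha = 0.05
```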