
(Approved by AICTE, New Delhi & Affiliated to Anna University, Chennai)

NH- 47, Palakkad Main Road, Navakkarai Po, Coimbatore – 641 105.

DEPARTMENT OF

ARTIFICIAL INTELLIGENCE
AND DATA SCIENCE

RECORD NOTEBOOK

Name: ___________________________________________

Register No: ______________________________________

Subject Code/Title: ________________________________

Year/Semester: ___________________________________

Academic Year:___________________________________
(Approved by AICTE, New Delhi & Affiliated to Anna University, Chennai)

NH- 47, Palakkad Main Road, Navakkarai Po, Coimbatore – 641 105.

DEPARTMENT OF

ARTIFICIAL INTELLIGENCE AND DATA SCIENCE

CERTIFICATE
This is to certify that Mr./Ms._____________________________________
Reg. No._____________________________________________ of____ Semester
____Year B.Tech. (Artificial Intelligence and Data Science) has completed his/her
practical work in the_____________________________________________
laboratory during the academic year 2024-25.

Faculty In-charge Head of the Department

Submitted for the University practical examination held on :_______________

INTERNAL EXAMINER EXTERNAL EXAMINER


INDEX
| S. No. | Name of the Experiment | Page No. | Marks | Staff Sign. |
|--------|------------------------|----------|-------|-------------|

Average:
Exp. No.:1 Date:

TEST PLAN FOR TESTING AN E-COMMERCE WEB/MOBILE APPLICATION

Aim:

The aim of this experiment is to develop a comprehensive test plan for testing the
functionality and usability of the e-commerce web/mobile application www.amazon.in.

Algorithm:

1. Identify the Scope: Determine the scope of testing, including the features and
functionalities that need to be tested.

2. Define Test Objectives: Specify the primary objectives of testing, such as functional
testing, usability testing, performance testing, security testing, etc.

3. Identify Test Environment: Define the platforms, browsers, devices, and operating systems
on which the application will be tested.

4. Determine Test Deliverables: Decide on the documents and artifacts that will be generated
during the testing process, such as test cases, test reports, and defect logs.

5. Create Test Strategy: Develop an overall approach for testing, including the testing
techniques, entry and exit criteria, and the roles and responsibilities of the testing team.

6. Define Test Scope and Schedule: Specify the timeline for each testing phase and the scope
of testing for each phase.

7. Risk Analysis: Identify potential risks and their impact on the testing process, and devise
risk mitigation strategies.

8. Resource Planning: Allocate the necessary resources, including the testing team,
hardware, and software required for testing.

9. Test Case Design: Prepare detailed test cases based on the requirements and functionalities
of the e-commerce application.

10. Test Data Setup: Arrange test data required for executing the test cases effectively.
11. Test Execution: Execute the test cases and record the test results.

12. Defect Reporting: Document any defects encountered during testing and track their
resolution.

Test Plan:

The test plan should cover the following sections:

1. Introduction: Briefly describe the purpose of the test plan and provide an overview of the
e-commerce application to be tested.

2. Test Objectives: List the primary objectives of testing the application.

3. Test Scope: Specify the features and functionalities to be tested and any limitations on
testing.

4. Test Environment: Describe the hardware, software, browsers, and devices to be used for
testing.

5. Test Strategy: Explain the overall approach to be followed during testing.

6. Test Schedule: Provide a detailed timeline for each testing phase.

7. Risk Analysis: Identify potential risks and the strategies to mitigate them.

8. Resource Planning: Specify the resources required for testing.

9. Test Case Design: Include a summary of the test cases developed for the application.

10. Test Data Setup: Describe the process of arranging test data for testing.

11. Defect Reporting: Explain the procedure for reporting and tracking defects.

Test Case Table:

| Process | No. | Test Case | Steps | Description | Status | Expected Result | Actual Result | Comment |
|---------|-----|-----------|-------|-------------|--------|-----------------|---------------|---------|
| Test Plan | TC001 | Scope of Testing | 1. Review the test plan document. | Verify the scope of testing. | Done | The test plan includes all features. | | |
| Test Plan | TC002 | Test Objectives | 1. Review the test plan document. | Verify the test objectives. | Done | The test objectives are well-defined. | | |
| Test Plan | TC003 | Test Environment | 1. Review the test plan document. | Check the specified environments. | Done | Test environments are mentioned. | | |
| Test Plan | TC004 | Test Deliverables | 1. Review the test plan document. | Ensure all deliverables are listed. | Done | The test plan includes all deliverables. | | |
| Test Plan | TC005 | Test Strategy | 1. Review the test plan document. | Verify the overall approach. | Done | The test strategy is clearly stated. | | |
| Test Plan | TC006 | Test Scope and Schedule | 1. Review the test plan document. | Check the schedule and scope. | Done | The schedule and scope are defined. | | |

Result:
Exp. No.:2 Date:

TEST CASES FOR TESTING E-COMMERCE APPLICATION

Aim:

The aim of this experiment is to design a set of comprehensive and effective test cases for
testing the e-commerce application www.amazon.in.

Algorithm:

1. Understand Requirements: Familiarize yourself with the functional and nonfunctional
requirements of the e-commerce application.

2. Identify Test Scenarios: Based on the requirements, identify different test scenarios that
cover all aspects of the application.

3. Write Test Cases: Develop test cases for each identified scenario, including preconditions,
steps to be executed, and expected outcomes.

4. Cover Edge Cases: Ensure that the test cases cover edge cases and boundary conditions to
verify the robustness of the application.

5. Prioritize Test Cases: Prioritize the test cases based on their criticality and relevance to the
application.

6. Review Test Cases: Conduct a peer review of the test cases to ensure their accuracy and
completeness.

7. Optimize Test Cases: Optimize the test cases for reusability and maintainability.

Test Case Design:

The test case design should include the following components for each test case:

1. Test Case ID: A unique identifier for each test case.

2. Test Scenario: Description of the scenario being tested.

3. Test Case Description: Detailed steps to execute the test.


4. Precondition: The necessary conditions that must be satisfied before executing the test
case.

5. Test Steps: The sequence of actions to be performed during the test.

6. Expected Result: The outcome that is expected from the test.

Test Case Table:

| Process | No. | Test Case | Steps | Description | Status | Expected Result | Actual Result | Comment |
|---------|-----|-----------|-------|-------------|--------|-----------------|---------------|---------|
| Test Case Design | TC001 | User Registration | 1. Navigate to the registration page. | Verify user registration process. | Done | User can successfully register. | | |
| Test Case Design | TC002 | User Login | 1. Navigate to the login page. | Verify user login process. | Done | User can successfully login. | | |
| Test Case Design | TC003 | Search Functionality | 1. Enter a keyword in the search bar. | Verify search functionality. | Done | Search results are relevant to the keyword. | | |
| Test Case Design | TC004 | Add to Cart | 1. Browse the product catalog. | Verify adding products to the cart. | Done | Product is added to the shopping cart. | | |
| Test Case Design | TC005 | Shopping Cart Validation | 1. Click on the shopping cart icon. | Verify the shopping cart contents. | Done | Items in the shopping cart are displayed. | | |

Explanation:

Test cases are designed to validate the functionality and behaviour of the e-commerce
application. They ensure that the application performs as intended and meets the specified
requirements.

Result:
Exp. No.:3 Date:

TEST THE E-COMMERCE APPLICATION AND REPORT THE DEFECTS IN IT

Aim:

The aim of this experiment is to execute the designed test cases and identify defects or issues
in the e-commerce application www.amazon.in.

Algorithm:

1. Test Environment Setup: Set up the testing environment with the required hardware,
software, and test data.

2. Test Case Execution: Execute the test cases designed in Experiment 2, following the
specified steps.

3. Defect Identification: During test execution, record any discrepancies or issues
encountered.

4. Defect Reporting: Log the identified defects with detailed information, including steps to
reproduce, severity, and priority.

5. Defect Tracking: Track the progress of defect resolution and verify fixes as they are
implemented.

6. Retesting: After defect fixes, retest the affected areas to ensure the issues are resolved.

7. Regression Testing: Conduct regression testing to ensure new changes do not introduce
new defects.

Test Case Table:

| Process | No. | Test Case | Steps | Description | Status | Expected Result | Actual Result | Comment |
|---------|-----|-----------|-------|-------------|--------|-----------------|---------------|---------|
| Test Case Design | TC001 | User Registration | 1. Navigate to the registration page. | Verify user registration process. | Done | User can successfully register. | | |
| Test Case Design | TC002 | User Login | 1. Navigate to the login page. | Verify user login process. | Done | User can successfully login. | | |
| Test Case Design | TC003 | Search Functionality | 1. Enter a keyword in the search bar. | Verify search functionality. | Done | Search results are relevant to the keyword. | | |
| Test Case Design | TC004 | Add to Cart | 1. Browse the product catalog. | Verify adding products to the cart. | Done | Product is added to the shopping cart. | | |
| Test Case Design | TC005 | Shopping Cart Validation | 1. Click on the shopping cart icon. | Verify the shopping cart contents. | Done | Items in the shopping cart are displayed. | | |
| Test Case Design | TC006 | Checkout Process | 1. Click on the "Checkout" button. | Verify the checkout process. | Not Started | Checkout process proceeds as expected. | | |

Explanation:

Testing the e-commerce application aims to validate its functionality and usability. By
identifying and reporting defects, you ensure the application's quality and reliability.

Result:
Exp. No.:4 Date:

DEVELOP THE TEST PLAN AND DESIGN THE TEST CASES FOR AN
INVENTORY CONTROL SYSTEM

Aim:

The aim of this experiment is to create a comprehensive test plan and design test cases for an
Inventory Control System.

Algorithm:

Follow the same algorithm as described in Experiment 1 for developing the test plan for an
inventory control system. Follow the same algorithm as described in Experiment 2 for
designing test cases for an inventory control system.

Test Plan:

| Process | No. | Test Case | Steps | Description | Status | Expected Result | Actual Result | Comment |
|---------|-----|-----------|-------|-------------|--------|-----------------|---------------|---------|
| Test Plan | TC001 | Scope of Testing | 1. Review the requirements and project documentation. 2. Identify the modules to be tested. 3. Determine the out-of-scope items. | Verify the scope of testing. | Done | The test plan includes all essential features. | | |
| Test Plan | TC002 | Test Objectives | 1. Review the requirements and project documentation. 2. Discuss with stakeholders to understand expectations. | Verify the test objectives. | Done | The test objectives are clearly defined. | | |
| Test Plan | TC003 | Test Environment | 1. Identify the hardware and software requirements. 2. Set up the required hardware and software. | Verify the required environments. | Not Started | The test environment is defined. | | |
| Test Plan | TC004 | Test Deliverables | 1. Determine the documents and artifacts to be produced. 2. Create templates for test reports, defect logs, etc. | Verify the required deliverables. | Not Started | All necessary documents are listed. | | |
| Test Plan | TC005 | Test Strategy | 1. Decide on the testing approach and techniques. | Verify the overall approach for testing. | Not Started | The test strategy is defined. | | |

Test Case Design:


| Process | No. | Test Case | Steps | Description | Status | Expected Result | Actual Result | Comment |
|---------|-----|-----------|-------|-------------|--------|-----------------|---------------|---------|
| Test Case Design | TC001 | Module A - Functionality Test | 1. Review the requirements related to Module A. 2. Identify test scenarios for Module A. 3. Develop detailed test cases for Module A. | Verify the functionality of Module A. | Not Started | All functionalities of Module A are tested. | | |
| Test Case Design | TC002 | Module B - Integration Test | 1. Review the requirements related to Module B. 2. Identify integration points with other modules. 3. Design test cases for testing integration scenarios. | Verify the integration of Module B with others. | Not Started | Module B is successfully integrated. | | |
| Test Case Design | TC003 | Module C - Performance Test | 1. Review the performance requirements for Module C. 2. Determine performance metrics to be measured. 3. Develop performance test cases for Module C. | Verify the performance of Module C. | Not Started | Module C performs optimally under load. | | |
| Test Case Design | TC004 | Module D - Usability Test | 1. Review the usability requirements for Module D. 2. Identify usability aspects to be tested. 3. Create test cases for evaluating Module D's usability. | Verify the usability of Module D. | Not Started | Module D is user-friendly and intuitive. | | |
| Test Case Design | TC005 | Module E - Security Test | 1. Review the security requirements for Module E. 2. Identify potential security threats. | Verify the security of Module E. | Not Started | Module E is protected against security threats. | | |

Explanation:

An inventory control system is critical for managing stock and supplies. Proper testing
ensures the system functions accurately and efficiently.

Result:
Exp. No.:5 Date:

EXECUTE THE TEST CASES AGAINST A CLIENT-SERVER OR DESKTOP
APPLICATION AND IDENTIFY THE DEFECTS

Aim:

The aim of this experiment is to execute the test cases against a client-server or desktop
application and identify defects.

Algorithm:

1. Test Environment Setup: Set up the testing environment, including the client-server or
desktop application, required hardware, and test data.

2. Test Case Execution: Execute the test cases designed in Experiment 2 against the
application.

3. Defect Identification: During test execution, record any discrepancies or issues
encountered.

4. Defect Reporting: Log the identified defects with detailed information, including steps to
reproduce, severity, and priority.

5. Defect Tracking: Track the progress of defect resolution and verify fixes as they are
implemented.

6. Retesting: After defect fixes, retest the affected areas to ensure the issues are resolved.

7. Regression Testing: Conduct regression testing to ensure new changes do not introduce
new defects.

Test Case Table:

| Process | No. | Test Case | Steps | Description | Status | Expected Result | Actual Result | Comment |
|---------|-----|-----------|-------|-------------|--------|-----------------|---------------|---------|
| Test Case Execution | TC001 | User Login | 1. Launch the application. 2. Enter valid login credentials. 3. Click on the "Login" button. | Verify user login process. | Not Started | User can successfully login. | | |
| Test Case Execution | TC002 | Data Validation | 1. Access a data input form. 2. Enter invalid data in the form fields. 3. Submit the form. | Verify data validation on the form. | Not Started | Invalid data shows appropriate error messages. | | |
| Test Case Execution | TC003 | File Upload | 1. Access the file upload feature. 2. Select a file from the system. 3. Click on the "Upload" button. | Verify file upload functionality. | Not Started | File is uploaded successfully. | | |
| Test Case Execution | TC004 | Network Connectivity | 1. Disconnect the network. 2. Attempt to perform an action requiring network access. | Verify the application's response. | Not Started | Application gracefully handles disconnection. | | |
| Test Case Execution | TC005 | Concurrent Users | 1. Simulate concurrent user sessions. 2. Perform actions simultaneously. | Verify application performance. | Not Started | Application performs well under load. | | |
| Test Case Execution | TC006 | Compatibility | 1. Test the application on different platforms. 2. Execute tests on various browsers. | Verify cross-platform compatibility. | Not Started | Application works on all specified platforms. | | |
| Test Case Execution | TC007 | Client-Server Communication | 1. Monitor network traffic between client and server. | Verify communication integrity. | Not Started | Data is correctly transmitted and received. | | |

Explanation:

Testing a client-server or desktop application ensures its functionality across different
platforms and environments.

Result:
Exp. No.:6 Date:

TEST THE PERFORMANCE OF THE E-COMMERCE APPLICATION

Aim:

The aim of this experiment is to test the performance of the e-commerce application
www.amazon.in.

Algorithm:

1. Identify Performance Metrics: Determine the performance metrics to be measured, such as
response time, throughput, and resource utilization.

2. Define Test Scenarios: Create test scenarios that simulate various user interactions and
loads on the application.

3. Performance Test Setup: Set up the performance testing environment with appropriate
hardware and software.

4. Execute Performance Tests: Run the performance tests using the defined scenarios and
collect performance data.

5. Analyze Performance Data: Analyze the collected data to identify any performance
bottlenecks or issues.

6. Performance Tuning: Implement necessary optimizations to improve the application's
performance.
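The algorithm above leaves the choice of performance testing tool open. As a minimal, hedged illustration of step 4 (collecting response time data), the following Java 11+ sketch times a single request to the home page; the 3000 ms threshold and the class name ResponseTimeCheck are assumptions for illustration, and a dedicated load testing tool such as Apache JMeter would be used for real throughput and resource-utilization measurements.

package performance;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ResponseTimeCheck {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://www.amazon.in"))
                .GET()
                .build();

        // Time a single request to the home page.
        long start = System.currentTimeMillis();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        long elapsed = System.currentTimeMillis() - start;

        System.out.println("Status code   : " + response.statusCode());
        System.out.println("Response time : " + elapsed + " ms");

        // Assumed threshold of 3000 ms; a real test would use the value defined in the test plan.
        if (elapsed <= 3000) {
            System.out.println("Home page meets the response time threshold.");
        } else {
            System.out.println("Home page exceeds the response time threshold.");
        }
    }
}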

Performance Table:

| Process | No. | Test Case | Steps | Description | Status | Expected Result | Actual Result | Comment |
|---------|-----|-----------|-------|-------------|--------|-----------------|---------------|---------|
| Performance Testing | TC001 | Response Time for Home Page | 1. Access the home page of the e-commerce application. 2. Use a performance testing tool to record the time. 3. Analyze the recorded data to determine response time. | Measure the response time. | Not Started | The home page loads within the specified response time threshold. | | |
| Performance Testing | TC002 | Throughput during Peak Hours | 1. Simulate peak-hour traffic on the application. 2. Execute performance tests during peak hours. 3. Analyze the data to determine the throughput. | Measure the throughput. | Not Started | The application can handle peak-hour traffic without significant delays. | | |
| Performance Testing | TC003 | Resource Utilization | 1. Monitor CPU, memory, and network usage during testing. 2. Execute performance tests while monitoring resources. | Measure resource utilization. | Not Started | Resource utilization remains within acceptable limits. | | |

Explanation:

Performance testing helps to identify bottlenecks in the e-commerce application, ensuring it
can handle real-world user loads effectively.

Result:
Exp. No.:7 Date:

AUTOMATE THE TESTING OF E-COMMERCE APPLICATIONS USING SELENIUM

Aim:

The aim of this task is to automate the testing of an e-commerce web application
(www.amazon.in) using Selenium WebDriver, which will help improve testing efficiency and
reliability.

Algorithm:

1. Set up the environment:

- Install Java Development Kit (JDK) and configure the Java environment variables.

- Install an Integrated Development Environment (IDE) like Eclipse or IntelliJ.

- Download Selenium WebDriver and the required web drivers for the browsers you intend
to test (e.g., ChromeDriver, GeckoDriver for Firefox).

2. Create a new Java project in the IDE:

- Set up a new Java project in the IDE and include the Selenium WebDriver library.

3. Develop test cases:

- Identify the key functionalities and scenarios to test in the e-commerce application.

- Design test cases covering various aspects like login, search, product details, add to cart,
checkout, etc.

4. Implement Selenium automation scripts:

- Write Java code using Selenium WebDriver to automate the identified test cases.

- Utilize different Selenium commands to interact with the web elements, navigate through
pages, and perform various actions.

5. Execute the automated test cases:

- Run the automated test scripts against the e-commerce application.


- Observe the test execution and identify any failures or defects.

6. Analyze the test results:

- Review the test execution results to identify any failed test cases.

- Debug and fix any issues with the automation scripts if necessary.

7. Report defects:

- Document any defects found during the automated testing process.

- Provide detailed information about each defect, including steps to reproduce and expected
results.

Program:

package program;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class selenim {

    public static void main(String[] args) {
        // Path to the ChromeDriver executable used in the lab setup.
        System.setProperty("webdriver.chrome.driver",
                "C:\\Users\\Admin\\Downloads\\chromedriver-win64\\chromedriver-win64\\chromedriver.exe");

        WebDriver d = new ChromeDriver();
        d.get("https://www.amazon.in");

        // Open the sign-in page and submit the login credentials.
        d.findElement(By.xpath("//*[@id=\"nav-link-accountList\"]/span/span")).click();
        d.findElement(By.id("ap_email")).sendKeys("your email");
        d.findElement(By.xpath("//*[@id=\"continue\"]")).click();
        d.findElement(By.id("ap_password")).sendKeys("your password");
        d.findElement(By.xpath("//*[@id=\"signInSubmit\"]")).click();

        // A successful login redirects back to the home page with this URL.
        String u = d.getCurrentUrl();
        if (u.equals("https://www.amazon.in/?ref_=nav_ya_signin")) {
            System.out.println("Test Case Passed");
        } else {
            System.out.println("Test Case Failed");
        }
        d.close();
    }
}

Automation Process:
Console output:

Result:
Exp. No.:8 Date:

INTEGRATE TESTNG WITH THE ABOVE TEST AUTOMATION.

Aim:

The aim of this task is to integrate TestNG with the existing Selenium automation scripts for
the e-commerce application, enhancing test management, parallel execution, and reporting
capabilities.

Algorithm:

1. Set up TestNG in the project: - Add TestNG library to the existing Java project.

2. Organize test cases using TestNG annotations: - Add TestNG annotations (@Test,
@BeforeTest, @AfterTest, etc.) to the existing test cases. - Group similar test cases using
TestNG's grouping mechanism.

3. Implement data-driven testing (optional): - Utilize TestNG's data providers to implement
data-driven testing if required.

4. Configure TestNG test suite: - Create an XML configuration file for TestNG to define test
suites, test groups, and other configurations.

5. Execute the automated test cases using TestNG: - Run the automated test suite using
TestNG. - Observe the test execution and identify any failures or defects.

6. Analyze the test results: - Review the TestNG-generated test reports to identify any failed
test cases. - Utilize TestNG's reporting capabilities to understand the test execution status.

7. Report defects (if any): - Document any defects found during the automated testing
process. Provide detailed information about each defect, including steps to reproduce and
expected results.
Program Code (Program1.java):

package mytest;

import java.time.Duration;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.Assert;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Test;

public class Program1 {

    WebDriver driver;
    // Expected title of the Amazon.in home page (adjust to the title the site currently shows).
    String expectedTitle = "Online Shopping site in India: Shop Online for Mobiles, Books, Watches, Shoes and More - Amazon.in";

    @BeforeMethod
    public void setUp() {
        System.setProperty("webdriver.chrome.driver",
                "C:\\selenium\\chromedriver_win32\\chromedriver.exe");
        driver = new ChromeDriver();
        driver.get("https://amazon.in");
        driver.manage().window().maximize();
        driver.manage().timeouts().implicitlyWait(Duration.ofSeconds(5));
    }

    @Test
    public void verifyTitle() {
        String actualTitle = driver.getTitle();
        Assert.assertEquals(actualTitle, expectedTitle);
    }

    @Test
    public void verifyLogo() {
        boolean flag = driver.findElement(By.xpath("//a[@id='nav-logo-sprites']")).isDisplayed();
        Assert.assertTrue(flag);
    }

    @AfterMethod
    public void tearDown() {
        driver.quit();
    }
}
Program Code (pom.xml):

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
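The remainder of the pom.xml is truncated in this record. A minimal sketch of the dependencies section such a project would typically declare is shown below; the version numbers are assumptions and should be replaced with the versions actually used in the lab.

  <dependencies>
    <!-- Selenium WebDriver (assumed version) -->
    <dependency>
      <groupId>org.seleniumhq.selenium</groupId>
      <artifactId>selenium-java</artifactId>
      <version>4.11.0</version>
    </dependency>
    <!-- TestNG (assumed version) -->
    <dependency>
      <groupId>org.testng</groupId>
      <artifactId>testng</artifactId>
      <version>7.8.0</version>
      <scope>test</scope>
    </dependency>
  </dependencies>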
Program Code (testng.xml):

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE suite SYSTEM "https://testng.org/testng-1.0.dtd">
<suite name="Suite">
  <test name="Test">
    <classes>
      <class name="mytest.Program1"></class>
    </classes>
  </test> <!-- Test -->
</suite> <!-- Suite -->
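The aim of this experiment also mentions parallel execution. As a hedged sketch (not part of the original record), the same suite could be configured to run its test methods in parallel by adding the standard TestNG suite attributes; the thread count of 2 is an arbitrary example value.

<suite name="Suite" parallel="methods" thread-count="2">
  <test name="Test">
    <classes>
      <class name="mytest.Program1"/>
    </classes>
  </test>
</suite>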

Output:

Result:
Exp. No.:9.a Date:

DATA DRIVEN TESTING USING SELENIUM AND TESTNG

Aim:

To perform data driven testing using Selenium and TestNG.

Data Driven Testing is a strategic approach that involves executing a set of test script actions
in a repetitive manner, each time utilizing distinct input values sourced from an associated
data repository. This technique enhances efficiency by decoupling the ‘test_case‘ code from
the underlying ‘data_set,’ streamlining testing processes.
It is one of the widely-used automation testing best practices for verifying the behavior and
efficiency of tests when handling various types of input values.

Here are the popular external data feed or data sources in data driven testing:

MS Excel Sheets (.xls, .xlsx)

CSV Files (.csv)

XML Files (.xml)

MS Access Tables (.mdb)

The data feed or data source not only contains the input values used for Selenium
automation testing but can also be used for storing the expected test result and the output
test result. This can be useful for comparing test execution results and storing them for
reference at later stages.

Data Driven Framework in Selenium WebDriver

Data Driven Framework is a highly effective and widely utilized automation testing
framework that enables iterative development and testing. It follows the principles of
data-driven testing, allowing you to drive test cases and test suites using external data
feeds such as Excel Sheets (xls, xlsx), CSV files (csv), and more. By establishing a
connection with the external data source, the test script seamlessly performs the required
operations on the test data, ensuring efficient and accurate testing.
Using the Data Driven Framework in Selenium WebDriver, the test data set is separated
from the test implementation, reducing the overall effort involved in maintaining and
updating the test code. Minimal changes in the business rules will require changes in the
test data set, with/without minimal (or no) changes in the test code.

Selenium WebDriver lets you perform automated cross browser testing on web
applications; however, it does not have the support to perform create, read, update, and
delete (CRUD) operations on external data feeds like Excel sheets, CSV files, and more.
This is where third-party APIs like Apache POI come in, since they let you access and
perform the relevant operations on external data sources.
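As a minimal, hedged sketch of this idea, the TestNG class below feeds login data to a test from an external file. To keep it JDK-only it reads a plain CSV file rather than an Excel sheet (Apache POI would be needed for .xls/.xlsx); the file name logindata.csv, its two-column username,password layout, and the class name are assumptions for illustration.

package mytest;

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.ArrayList;
import java.util.List;

import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class CsvDataDrivenTest {

    // Reads rows of "username,password" from the CSV file and returns them as a 2D array.
    @DataProvider(name = "loginData")
    public Object[][] loginData() throws Exception {
        List<Object[]> rows = new ArrayList<>();
        try (BufferedReader reader = new BufferedReader(new FileReader("logindata.csv"))) {
            String line;
            while ((line = reader.readLine()) != null) {
                rows.add(line.split(","));
            }
        }
        return rows.toArray(new Object[0][]);
    }

    // Runs once per CSV row; the Selenium login steps from Experiment 7 would go here.
    @Test(dataProvider = "loginData")
    public void loginTest(String username, String password) {
        System.out.println("Running login test for user: " + username);
    }
}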

Data Driven Testing using @DataProvider

Data-driven testing can be carried out through TestNG using its @DataProvider annotation.
A method with @DataProvider annotation over it returns a 2D array of the object where the
rows determine the number of iterations and columns determine the number of input
parameters passed to the Test method with each iteration.
This annotation takes only the name of the data provider as its parameter which is used to
bind the data provider to the Test method. If no name is provided, the method name of the
data provider is taken as the data provider’s name.

@DataProvider(name = "nameOfDataProvider")

public Object[][] dataProviderMethodName() {


//Data generation or fetching logic from any external source

//returning 2d array of object

return new Object[][] {{"k1","r1",1},{"k2","r2",2}};

After the creation of the data provider method, we can associate a Test method with the data
provider using the 'dataProvider' attribute of the @Test annotation. For successful binding of
the data provider with the Test method, the number and data types of the parameters of the test
method must match the ones returned by the data provider method.

@Test(dataProvider = "nameOfDataProvider")

public void sampleTest(String testData1, String testData2, int testData3) {

System.out.println(testData1 + " " + testData2 + " " + testData3);

Code Snippet for Data Driven Testing in TestNG

@DataProvider(name = "dataProvider1")

public Object[][] dataProviderMethod1() {

return new Object[][] {{"k1","r1"},{"k2","r2"},{"k3","r3"}};

//The test case will run 3 times with different set of values

@Test(dataProvider = "dataProvider1")

public void sampleTest(String str1, String str2) {

System.out.println(str1 + " " + str2);

The above test "sampleTest" will run 3 times with different sets of test data –
{"k1","r1"}, {"k2","r2"}, {"k3","r3"} – received from the 'dataProvider1' data provider method.

Result:
Exp. No.:9.b Date:

BUILD PAGE OBJECT MODEL USING SELENIUM AND TESTNG

Aim:

To build a Page Object Model using Selenium and TestNG.

Page Object Model, also known as POM, is a design pattern in Selenium that creates an
object repository for storing all web elements. It helps reduce code duplication and improves
test case maintenance.
In Page Object Model, consider each web page of an application as a class file. Each class
file will contain only corresponding web page elements. Using these elements, testers can
perform operations on the website under test.

Advantages of Page Object Model

• Easy Maintenance: POM is useful when there is a change in a UI element or a
change in action. An example would be: a drop-down menu is changed to a radio button. In
this case, POM helps to identify the page or screen to be modified. As every screen will have
different Java files, this identification is necessary to make changes in the right files. This
makes test cases easy to maintain and reduces errors.

• Code Reusability: As already discussed, all screens are independent. By using POM,
one can use the test code for one screen, and reuse it in another test case. There is no need to
rewrite code, thus saving time and effort.

• Readability and Reliability of Scripts: When all screens have independent Java files,
one can quickly identify actions performed on a particular screen by navigating through the
java file. If a change must be made to a specific code section, it can be efficiently done
without affecting other files.

The Page Object Model is an often-used method for improving the maintainability of Selenium
tests. To apply it here, we need to accomplish the following steps:

• Create Page Objects representing pages of a web application that we want to test

• Create methods for these Page Objects that represent actions we want to perform
within the pages that they represent

• Create tests that perform these actions in the required order and perform checks that
make up the test scenario

• Run the tests as TestNG tests and inspect the results

Creating Page Objects for our test application

For this purpose, I use the ParaBank demo application at http://parabank.parasoft.com. I’ve
narrowed the scope of my tests down to just three of the pages in this application: the login
page, the home page (where you end up after a successful login) and an error page (where
you land after a failed login attempt). As an example, this is the code for the login page:

package com.ontestautomation.seleniumtestngpom.pages;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public class LoginPage {

private WebDriver driver;

public LoginPage(WebDriver driver) {

this.driver = driver;

if(!driver.getTitle().equals("ParaBank | Welcome | Online Banking")) {


driver.get("http://parabank.parasoft.com");
}
}

public ErrorPage incorrectLogin(String username, String password) {


driver.findElement(By.name("username")).sendKeys(username);
driver.findElement(By.name("password")).sendKeys(password);
driver.findElement(By.xpath("//input[@value='Log In']")).click();
return new ErrorPage(driver);
}

public HomePage correctLogin(String username, String password) {

driver.findElement(By.name("username")).sendKeys(username);
driver.findElement(By.name("password")).sendKeys(password);
driver.findElement(By.xpath("//input[@value='Log In']")).click();
return new HomePage(driver);
}
}

It contains a constructor that returns a new instance of the LoginPage object as well as two
methods that we can use in our tests: incorrectLogin, which sends us to the error page
and correctLogin, which sends us to the home page. Likewise, I’ve constructed Page Objects
for these two pages as well. A link to those implementations can be found at the end of this
post.

Note that this code snippet isn’t optimized for maintainability – I used direct references to
element properties rather than some sort of element-level abstraction, such as an Object
Repository.
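As a hedged illustration of what such an element-level abstraction could look like (it is not part of the original example), the same login action can be written with Selenium's PageFactory support and @FindBy annotations, so that locators are declared once instead of being repeated inside every method; the class name LoginPageWithFactory is invented for this sketch.

package com.ontestautomation.seleniumtestngpom.pages;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.FindBy;
import org.openqa.selenium.support.PageFactory;

public class LoginPageWithFactory {

    private WebDriver driver;

    // Locators are declared once and reused by every method in the page object.
    @FindBy(name = "username")
    private WebElement usernameField;

    @FindBy(name = "password")
    private WebElement passwordField;

    @FindBy(xpath = "//input[@value='Log In']")
    private WebElement loginButton;

    public LoginPageWithFactory(WebDriver driver) {
        this.driver = driver;
        PageFactory.initElements(driver, this);
    }

    public HomePage correctLogin(String username, String password) {
        usernameField.sendKeys(username);
        passwordField.sendKeys(password);
        loginButton.click();
        return new HomePage(driver);
    }
}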

Creating methods that perform actions on the Page Objects

You’ve seen these for the login page in the code sample above. I’ve included similar methods
for the other two pages. A good example can be seen in the implementation of the error page
Page Object:

package com.ontestautomation.seleniumtestngpom.pages;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
public class ErrorPage {

private WebDriver driver;

public ErrorPage(WebDriver driver) {

this.driver = driver;
}

public String getErrorText() {

return driver.findElement(By.className("error")).getText();
}
}

By implementing a getErrorText method to retrieve the error message that is displayed on the
error page, we can call this method in our actual test script. It is considered good practice to
separate the implementation of your Page Objects from the actual assertions that are
performed in your test script (separation of responsibilities). If you need to perform additional
checks, just add a method that returns the actual value displayed on the screen to the
associated page object and add assertions to the scripts where this check needs to be
performed.

Create tests that perform the required actions and execute the required checks

Now that we have created both the page objects and the methods that we want to use for the
checks in our test scripts, it’s time to create these test scripts. This is again pretty
straightforward, as this example shows (imports removed for brevity):

package com.ontestautomation.seleniumtestngpom.tests;

public class TestNGPOM {

WebDriver driver;
@BeforeSuite
public void setUp() {

driver = new FirefoxDriver();


driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);
}

@Parameters({"username","incorrectpassword"})
@Test(description="Performs an unsuccessful login and checks the resulting error
message")
public void testLoginNOK(String username, String incorrectpassword) {

LoginPage lp = new LoginPage(driver);


ErrorPage ep = lp.incorrectLogin(username, incorrectpassword);
Assert.assertEquals(ep.getErrorText(), "The username and password could not be verified.");
}

@AfterSuite
public void tearDown() {

driver.quit();
}
}

Note the use of the page objects and the check being performed using methods in these page
object implementations – in this case the getErrorText method in the error page page object.

As we have designed our tests as Selenium + TestNG tests, we also need to define
a testng.xml file that defines which tests we need to run and what parameter values the
parameterized testLoginNOK test takes.

<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd" >


<suite name="My first TestNG test suite" verbose="1" >
<parameter name="username" value="john"/>
<parameter name="password" value="demo"/>
<test name="Login tests">
<packages>
<package name="com.ontestautomation.seleniumtestngpom.tests" />
</packages>
</test>
</suite>

Run the tests as TestNG tests and inspect the results

Finally, we can run our tests again by right-clicking on the testng.xml file in the Package
Explorer and selecting ‘Run As > TestNG Suite’. After test execution has finished, the test
results will appear in the ‘Results of running suite’ tab in Eclipse. Again, please note that
using meaningful names for tests and test suites in the testng.xml file make these results much
easier to read and interpret.

Result:
