
TestNG Selenium Automation Guide

1. The test scenario navigates to a URL and performs various actions like clicking elements and validating pages.
2. The test must be run on different browser and OS combinations using Selenium Grid on LambdaTest.
3. The submission requires running the tests on LambdaTest, submitting the code as a GitHub repo, and including logs, videos and screenshots in the test runs.

Uploaded by

MANISH KUMAR
Copyright © All Rights Reserved

TestNG: Assessment Problem

Test Scenario
1. Navigate to [Link]
2. Perform an explicit wait until all the elements in the DOM are
available.
3. Locate the "CI/CD tools" WebElement and click it.
4. Inside "CI/CD tools", click the link titled "LEARN MORE". Use the
appropriate Selenium method to ensure that the link opens in a new tab
(and not the same window).
5. Switch to the tab opened through the "LEARN MORE" link.
6. Verify whether the URL is the same as the expected URL (if not,
throw an assertion error).

7. On that page, first scroll to the bottom of the page and then to the
top of the page.
8. Close the current window and switch to the parent window (using
its window handle)
9. On the parent window, perform a click on the menu “Resources”.
Inside the “Resources” menu, click on the “Newsletter” option.
10. Search for the "Let me read it first" element using a CssSelector
and click it.
11. Find "ALL EDITIONS" on the page to confirm whether the
Newsletter page is open. If not, throw an assertion error and print the
stack trace.
12. If step (11) is successful, close the window and free the resources
held by the Selenium WebDriver.
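The twelve steps above can be sketched in Selenium 4 Java roughly as follows. This is a minimal sketch, not the reference solution: the URL, the expected URL, and every locator (`By.linkText("CI/CD tools")`, the CSS selector, the XPath) are illustrative assumptions about the target page, and the actual submission would use a `RemoteWebDriver` pointed at the LambdaTest grid instead of a local `ChromeDriver`.

```java
import java.time.Duration;

import org.openqa.selenium.By;
import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.Keys;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;
import org.testng.Assert;
import org.testng.annotations.Test;

public class ScenarioSketch {

    @Test(timeOut = 20000)  // 20-second timeout, per the Important Notes
    public void ciCdToolsScenario() {
        WebDriver driver = new ChromeDriver();  // RemoteWebDriver on the grid in practice
        try {
            driver.get("https://www.example.com");  // placeholder for the assessment URL

            // Step 2: explicit wait until the DOM has finished loading
            WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(10));
            wait.until(d -> ((JavascriptExecutor) d)
                    .executeScript("return document.readyState").equals("complete"));

            // Step 3: locate the "CI/CD tools" element (locator is a guess) and click it
            wait.until(ExpectedConditions.elementToBeClickable(
                    By.linkText("CI/CD tools"))).click();

            // Step 4: Ctrl+Enter (Cmd on macOS) forces the link to open in a new tab
            String parent = driver.getWindowHandle();
            WebElement learnMore = wait.until(ExpectedConditions.elementToBeClickable(
                    By.partialLinkText("LEARN MORE")));
            learnMore.sendKeys(Keys.chord(Keys.CONTROL, Keys.RETURN));

            // Step 5: switch to the newly opened tab
            for (String handle : driver.getWindowHandles()) {
                if (!handle.equals(parent)) {
                    driver.switchTo().window(handle);
                }
            }

            // Step 6: assert on the URL of the new tab
            Assert.assertEquals(driver.getCurrentUrl(), "https://www.example.com/expected");

            // Step 7: scroll to the bottom of the page, then back to the top
            JavascriptExecutor js = (JavascriptExecutor) driver;
            js.executeScript("window.scrollTo(0, document.body.scrollHeight);");
            js.executeScript("window.scrollTo(0, 0);");

            // Step 8: close the tab and switch back to the parent window handle
            driver.close();
            driver.switchTo().window(parent);

            // Steps 9-10: menu navigation and a CSS-selector lookup (selectors are guesses)
            driver.findElement(By.xpath("//a[text()='Resources']")).click();
            driver.findElement(By.linkText("Newsletter")).click();
            driver.findElement(By.cssSelector("a[title='Let me read it first']")).click();

            // Step 11: confirm the Newsletter page via its "ALL EDITIONS" text
            try {
                Assert.assertTrue(driver.getPageSource().contains("ALL EDITIONS"));
            } catch (AssertionError e) {
                e.printStackTrace();  // print the stack trace, as required
                throw e;
            }
        } finally {
            // Step 12: free the resources held by the WebDriver
            driver.quit();
        }
    }
}
```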

Execution
The test scenario should be demonstrated on the following combinations of
browsers and platforms (using Selenium 4 Grid and Selenium 4 Java):

1. Chrome + 88.0 + Windows 10
2. MicrosoftEdge + 87.0 + macOS Sierra
3. Firefox + 82.0 + Windows 7
4. Internet Explorer + 11.0 + Windows 10
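A minimal testng.xml sketch for these combinations. The class name `com.example.ScenarioTest` and the parameter names are assumptions; since the brief asks for class-level parallelism, `parallel="classes"` with one class per combination is an alternative to the per-`<test>` layout shown here:

```xml
<!DOCTYPE suite SYSTEM "https://testng.org/testng-1.0.dtd">
<suite name="CrossBrowserSuite" parallel="tests" thread-count="4">
  <test name="Chrome88Win10">
    <parameter name="browser"  value="Chrome"/>
    <parameter name="version"  value="88.0"/>
    <parameter name="platform" value="Windows 10"/>
    <classes><class name="com.example.ScenarioTest"/></classes>
  </test>
  <test name="Edge87Sierra">
    <parameter name="browser"  value="MicrosoftEdge"/>
    <parameter name="version"  value="87.0"/>
    <parameter name="platform" value="macOS Sierra"/>
    <classes><class name="com.example.ScenarioTest"/></classes>
  </test>
  <!-- Firefox 82.0 / Windows 7 and Internet Explorer 11.0 / Windows 10
       follow the same pattern -->
</suite>
```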

Important Notes
▪ Pass the browser and OS combinations to the test scenario from [Link]
▪ Ensure that the setup and teardown functions are not part of the @Test
annotation (use configuration annotations such as @BeforeClass/@AfterClass
instead). The timeout of the test duration should be set to 20 seconds.
Parallelism should be at the class level (i.e. both the tests should be
executing in parallel on LambdaTest).
▪ Please ensure to use at least 3 different locators while performing the test.
▪ Please ensure that network logs, video recording, screenshots, and console
logs are enabled in all the test runs. Please refer to the Capability Generator
for desired capabilities: [Link]
▪ Refer to the detailed instructions appended below for submission
guidelines.
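One way to enable those logs is through the `LT:Options` capability block when creating the remote session. This is a sketch under the assumption that the run targets the LambdaTest hub; the build name is arbitrary, and `USERNAME`/`ACCESSKEY` stand in for your own credentials:

```java
import java.net.URL;
import java.util.HashMap;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeOptions;
import org.openqa.selenium.remote.RemoteWebDriver;

public class GridSetup {
    public static WebDriver createDriver() throws Exception {
        ChromeOptions options = new ChromeOptions();
        options.setBrowserVersion("88.0");

        HashMap<String, Object> ltOptions = new HashMap<>();
        ltOptions.put("platformName", "Windows 10");
        ltOptions.put("build", "TestNG Assessment");  // build name is arbitrary
        ltOptions.put("network", true);   // network logs
        ltOptions.put("video", true);     // video recording
        ltOptions.put("visual", true);    // step-by-step screenshots
        ltOptions.put("console", true);   // browser console logs
        options.setCapability("LT:Options", ltOptions);

        // USERNAME and ACCESSKEY come from your LambdaTest profile page
        return new RemoteWebDriver(
                new URL("https://USERNAME:ACCESSKEY@hub.lambdatest.com/wd/hub"),
                options);
    }
}
```

The exact capability names can be generated from the LambdaTest Capability Generator referenced above.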
Reference Images

Important Instructions
▪ You are required to submit the final solution within 36 hours of the
deadline.
▪ You must run the test on the LambdaTest Cloud Selenium Grid in parallel
and mention the final Test ID(s) while submitting.
▪ You can submit the solution using the Java programming language and the
TestNG framework. We advise you to keep it as simple and as lean as possible.
▪ Please ensure that network logs, video recording, screenshots, and console
logs are enabled through the desired capabilities object while running the
test. You can refer to the LambdaTest Capability Generator for setting
these desired capabilities.
▪ The final code needs to be submitted as a GitHub repository. Please ensure
that it is a private repository shared with ‘LambdaTest-Certification’ or
admin@[Link]. You can refer to the Getting Started
Guide to get started with GitHub.
▪ While the code is saved on GitHub, the final test run should be
initiated in the Gitpod online IDE. You can refer to Getting Started with
Gitpod to get started. To learn more about configuring your single-click
Gitpod dev environment, refer here.
▪ You need to ensure that the GitHub repository is configured on Gitpod with
the required .gitpod.yml file. Please ensure that you attach a detailed
README.md file to your GitHub repository; it should include the
instructions to run the test in the Gitpod dev environment.

Setting up your LambdaTest account & running your first test
▪ Ensure that you register for your LambdaTest account with the same email
address used to register for the certification and to appear for the
objective test. If you have not registered, you can register for a free
account from LambdaTest Register. The free account comes with 15 days of
automation trial access that allows 100 minutes of automation testing.
▪ If you have used up the automation minutes or have exceeded the allotted
trial access time, you can get a 24-hour trial access window by contacting
LambdaTest support, either via the in-app chat or by dropping a mail at
support@[Link].
▪ You can also refer to our support docs to get started with automation
testing on the LambdaTest platform.

Common questions


Enabling network logs, video recording, and console logs provides comprehensive insights into the test execution process, facilitating efficient debugging and issue resolution. Network logs help track HTTP requests/responses to detect network-related issues, video recordings allow replaying the test for visual verification, and console logs capture error messages and browser console outputs. Collectively, these tools help in identifying problems quickly, verifying correct functionality, and ensuring a smooth test execution process.

The purpose of using an explicit wait in the test scenario is to ensure that all elements of the DOM are completely loaded before interacting with them. This is crucial for test reliability as it prevents tests from failing due to elements not being ready or available at the time of interaction. Explicit waits handle asynchronous web pages more effectively by waiting for specific conditions to occur, or for a maximum time period, before executing further actions.
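A short helper illustrates the pattern; the 10-second limit and the locator passed in are examples, not values from the brief:

```java
import java.time.Duration;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class Waits {
    // Block until the element is visible (up to 10 s) instead of sleeping blindly.
    public static WebElement waitForVisible(WebDriver driver, By locator) {
        return new WebDriverWait(driver, Duration.ofSeconds(10))
                .until(ExpectedConditions.visibilityOfElementLocated(locator));
    }
}
```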

Gitpod provides an integrated development environment (IDE) that can be configured for seamless execution of the test scenario against LambdaTest. By setting up a .gitpod.yml file and ensuring the environment is properly configured, testers can develop, debug, and execute tests directly from their Gitpod workspace. This simplifies the process by reducing local machine dependency, ensuring a standardized test environment, and enabling easier collaboration and continuous integration workflows.
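A minimal .gitpod.yml along these lines would work, assuming the repository is a Maven project whose suite is wired up through testng.xml:

```yaml
# .gitpod.yml — runs the TestNG suite when the workspace opens
tasks:
  - init: mvn -q compile    # pre-build so the workspace starts faster
    command: mvn test       # run the TestNG suite (reads testng.xml)
```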

Opening the 'LEARN MORE' link in a new tab is important for maintaining the test's flow and context isolation. It allows the tester to interact with the new content while preserving the state of the original tab, facilitating easy navigation back to the parent window without refreshing or losing context. Not doing so could result in altered test flow, potential data loss on the parent page, and complications in returning to the initial page state, affecting the test's accuracy.

The timeout setting in the test scenario ensures that each test completes its execution within a predetermined time frame, in this case 20 seconds. It contributes to the robustness of the test by preventing tests from hanging indefinitely due to unexpected conditions like network lag or waiting for user input that never arrives. This ensures the testing pipeline remains efficient and responsive, helping identify performance bottlenecks or errors sooner.

Browser and OS combinations impact test execution significantly as they determine how web applications render and behave due to differences in browser engines and system environments. Variances such as JavaScript execution, CSS rendering, and event handling between combinations can lead to different outcomes if not properly managed. In the described scenario, these should be managed by specifying configurations in 'testng.xml', running tests in parallel across different combinations, and ensuring compatibility issues are identified and resolved through thorough testing. Detailed logs and capabilities setup are essential to diagnose platform-specific issues.

Inclusion of multiple locators (e.g., XPath, CssSelector, name) is significant as it enhances the test's robustness and adaptability across different browsers and environments. Each browser might render DOM elements differently, and a locator that works for one browser may fail in another. Using multiple locators ensures broader coverage and reliability of element selection, thereby increasing the test's success rate and reducing cross-browser compatibility issues.
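Three distinct locator strategies, as the brief requires, might look like this; the snippet assumes a `driver` already in scope, and the selectors themselves are illustrative guesses for the target page:

```java
driver.findElement(By.linkText("CI/CD tools")).click();                        // link text
driver.findElement(By.cssSelector("a[title='Let me read it first']")).click(); // CSS selector
driver.findElement(By.xpath("//a[text()='Resources']")).click();               // XPath
```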

Parallel execution of tests on LambdaTest enhances efficiency by running multiple tests simultaneously, reducing the total execution time and making better use of resources. Successful parallel execution requires setting up the test environment to support parallelism at the class level, ensuring that tests do not interfere with each other, and that system resources are adequately allocated to handle concurrent sessions. Additionally, ensuring configurations are set in 'testng.xml' for different browser and OS combinations is essential for execution across varied environments.

A detailed README.md file is crucial as it provides comprehensive documentation necessary for understanding, setting up, and running the test project. It ensures that new developers or team members can quickly onboard and contribute. The README should include instructions for setting up the project, running tests, configurations used, dependencies, and guidelines on how to report issues or contribute to the project. This level of documentation promotes collaboration, transparency, and project maintainability.

The 'Capability Generator' is significant as it simplifies the process of setting the desired capabilities required for Selenium tests on LambdaTest, ensuring that tests are executed with the correct browser configurations and test environment setups. By using the generator, testers can easily configure capabilities like browser version, operating system, and enabling logs, which are crucial for running tests on different platforms and obtaining accurate test results. It prevents configuration errors and saves time in setting up the test environment.
