ASSIGNMENT-2
SOFTWARE TESTING
ANKIT KUMAR MISHRA
2100290120030
Questions
1. Construct the user interface test cases for the following application:
2. You are given the task of verifying a student management system developed in the Java
programming language.
➢ Which testing will be used?
➢ Generate an automated test case to verify that OOP principles are followed
➢ Write down test cases for the integration of different modules
3. You are part of the QA team at Innovate Tech Ltd., which has recently released version 2.0 of
its mobile application for task management. After several new features were added, the
development team has requested a regression testing cycle to ensure that existing functionalities
remain unaffected.
Scenario:
During the regression testing phase, the following metrics were recorded:
➢ Total test cases designed for regression testing: 200
➢ Total test cases executed: 180
➢ Total test cases passed: 150
➢ Total test cases failed: 30
➢ Total defects reported during regression testing: 20
➢ Total defects resolved: 15
➢ Total critical defects found: 5
• Calculate the test case execution rate as a percentage. How many test cases were not
executed?
• Determine the test case pass rate. What percentage of the executed test cases passed?
• What is the defect resolution rate? How many defects remain unresolved?
• What percentage of the total defects reported were critical? Discuss the potential impact of
these critical defects on the application.
Answer 1:
Test Cases:
1. UI Element Availability
• Test Case: Verify that all the form elements (Description field, Severity dropdown,
Assigned To field, and ADD button) are visible on the screen.
• Expected Result: All elements are present and aligned correctly.
2. Description Field Validation
• Test Case: Check if the Description field accepts input.
• Expected Result: Users can enter text into the Description field.
3. Description Field Length
• Test Case: Verify the maximum character limit for the Description field.
• Expected Result: Input is limited to the expected maximum number of characters
(e.g., 255 characters).
4. Severity Dropdown
• Test Case: Verify that the Severity dropdown is functional and lists appropriate
severity options (e.g., Low, Medium, High).
• Expected Result: Dropdown opens, displays options, and allows a user to select one.
5. Assigned To Field Validation
• Test Case: Check if the Assigned To field accepts valid input (e.g., names or email
addresses).
• Expected Result: Users can enter a valid input format into the Assigned To field.
6. Empty Field Submission
• Test Case: Attempt to submit the form with one or more fields empty.
• Expected Result: An error message or validation prompt should be displayed, and the
form should not be submitted.
7. Mandatory Fields
• Test Case: Verify which fields are mandatory (e.g., Description, Severity) and test if
the form prevents submission when they are left blank.
• Expected Result: Mandatory fields should prompt an error if not filled.
8. ADD Button Functionality
• Test Case: Click the ADD button after completing the form.
• Expected Result: The form data is submitted successfully, or a confirmation message
appears.
9. Error Message for Invalid Input
• Test Case: Enter invalid data in the Assigned To field (e.g., special characters or an
empty string) and attempt submission.
• Expected Result: A relevant error message is displayed.
10. Field Reset on Submission
• Test Case: Verify that all fields are reset after successful form submission.
• Expected Result: Fields are cleared and ready for new input.
11. Keyboard Navigation
• Test Case: Test navigation between form fields using the Tab key.
• Expected Result: Focus moves sequentially between form elements.
12. Default Values
• Test Case: Check if the Severity dropdown has a default value (e.g., "Select
Severity").
• Expected Result: The default option is visible and selectable.
13. Button State
• Test Case: Verify if the ADD button remains disabled until all mandatory fields are
filled.
• Expected Result: The ADD button becomes enabled only when mandatory fields are
complete.
14. Input Persistence
• Test Case: Enter data into the form, then reload the page or navigate away and
return without submitting.
• Expected Result: Input data should either persist or reset, depending on the design.
15. Responsiveness
• Test Case: Verify how the UI behaves on different screen sizes (e.g., mobile, tablet,
desktop).
• Expected Result: The form adjusts layout and remains functional.
16. Error Message Design
• Test Case: Submit the form with errors and verify the design of the error messages.
• Expected Result: Error messages are clearly displayed and assist the user in
correcting issues.
17. Accessibility
• Test Case: Check if the form is accessible for users with disabilities (e.g., test screen
reader compatibility and keyboard navigation).
• Expected Result: The form is fully accessible.
18. Form Reset (if applicable)
• Test Case: Verify if there is a reset or clear option for the form and test its
functionality.
• Expected Result: The reset button clears all fields without submitting.
19. Special Characters
• Test Case: Enter special characters or malicious scripts in the input fields.
• Expected Result: The form sanitizes the input and prevents XSS or other security
vulnerabilities.
20. Submission Performance
• Test Case: Measure the time taken to submit the form and process the data.
• Expected Result: Submission should complete within an acceptable time frame (e.g.,
2 seconds).
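For illustration, a minimal automated version of test cases 1 and 4 is sketched below using Selenium WebDriver with JUnit 5. The element IDs ("description", "severity", "assignedTo", "addBtn") and the URL are assumptions and would need to match the actual application under test.

```java
import org.junit.jupiter.api.*;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.support.ui.Select;

import static org.junit.jupiter.api.Assertions.assertTrue;

// Hypothetical element IDs and URL; adjust to the real defect-entry form.
class DefectFormUiTest {

    private static WebDriver driver;

    @BeforeAll
    static void setUp() {
        driver = new ChromeDriver();
        driver.get("http://localhost:8080/defect-form"); // placeholder URL
    }

    @Test
    void uiElementsAreVisible() {
        // Test case 1: all form elements are present on the screen
        assertTrue(driver.findElement(By.id("description")).isDisplayed());
        assertTrue(driver.findElement(By.id("severity")).isDisplayed());
        assertTrue(driver.findElement(By.id("assignedTo")).isDisplayed());
        assertTrue(driver.findElement(By.id("addBtn")).isDisplayed());
    }

    @Test
    void severityDropdownListsExpectedOptions() {
        // Test case 4: the dropdown opens and offers the expected severity options
        Select severity = new Select(driver.findElement(By.id("severity")));
        assertTrue(severity.getOptions().stream()
                .anyMatch(option -> option.getText().equalsIgnoreCase("High")));
    }

    @AfterAll
    static void tearDown() {
        driver.quit();
    }
}
```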
Answer 2:
1. Type of Testing to Be Used
For verifying a Student Management System (SMS) developed in Java, the following
testing types will be utilized:
1. Unit Testing: Verify the correctness of individual classes and methods.
2. Integration Testing: Ensure proper communication between different modules of the
system.
3. Functional Testing: Verify if the application meets all functional requirements.
4. System Testing: Test the overall behavior of the entire system.
5. Regression Testing: Ensure that updates or changes in the codebase do not break
existing functionality.
6. Object-Oriented Testing: Test if the system adheres to Object-Oriented
Programming principles.
7. Performance Testing: Verify system responsiveness and load-handling capabilities.
8. Security Testing: Ensure the system is secure from potential vulnerabilities.
9. Automation Testing: Automate repetitive test cases to save time and effort.
2. Automated Test Case to Verify OOP Principles
An automated test case written in JUnit (a Java testing framework) can verify adherence to
Object-Oriented Programming principles. The test, sketched after the checklist below, verifies that:
• Encapsulation: All fields in the Student class are private.
• Inheritance: Subclasses (e.g., GraduateStudent) extend parent classes (Student).
• Polymorphism: Methods in subclasses override those in parent classes.
• Abstraction: Abstract classes or interfaces are used.
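A minimal JUnit 5 sketch of such a test is given below. The Person, Student, and GraduateStudent classes and the getRole() method are assumptions made for illustration and would need to match the actual SMS classes.

```java
import org.junit.jupiter.api.Test;

import java.lang.reflect.Field;
import java.lang.reflect.Modifier;

import static org.junit.jupiter.api.Assertions.*;

// Hypothetical domain classes assumed for illustration:
//   abstract class Person { abstract String getRole(); }
//   class Student extends Person { private String name; ... }
//   class GraduateStudent extends Student { @Override String getRole() { ... } }
class OopPrinciplesTest {

    @Test
    void encapsulation_allStudentFieldsArePrivate() {
        // Every declared field of Student must be private
        for (Field field : Student.class.getDeclaredFields()) {
            assertTrue(Modifier.isPrivate(field.getModifiers()),
                    "Field should be private: " + field.getName());
        }
    }

    @Test
    void inheritance_graduateStudentExtendsStudent() {
        // GraduateStudent must be a subtype of Student
        assertTrue(Student.class.isAssignableFrom(GraduateStudent.class));
    }

    @Test
    void polymorphism_overriddenMethodIsUsedAtRuntime() {
        // A Student reference to a GraduateStudent should invoke the overridden method
        Student student = new GraduateStudent();
        assertEquals("Graduate Student", student.getRole());
    }

    @Test
    void abstraction_personClassIsAbstract() {
        assertTrue(Modifier.isAbstract(Person.class.getModifiers()));
    }
}
```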
3. Test Cases for Integration of Different Modules
Modules for Integration
• Student Management Module: Handles student data (e.g., add, edit, delete, fetch).
• Course Management Module: Manages course data (e.g., course registration,
updates).
• Result Management Module: Manages student grades and results.
• Authentication Module: Handles login and role-based access.
Integration Test Cases
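Representative integration test cases, based on the modules listed above, could include:
1. Student–Course Integration
• Test Case: Add a new student through the Student Management Module, then register that student for a course through the Course Management Module.
• Expected Result: The course registration correctly references the newly created student record.
2. Course–Result Integration
• Test Case: Enter grades in the Result Management Module for a course managed by the Course Management Module.
• Expected Result: Results are stored against the correct course and student.
3. Authentication–Student Integration
• Test Case: Log in as an administrator and as a student, then attempt to edit a student record with each role.
• Expected Result: The administrator can edit the record; the student role is denied access.
4. End-to-End Data Flow
• Test Case: Add a student, register the student for a course, enter a result, and fetch the student's report.
• Expected Result: Data created in each module remains consistent and visible across all modules.

A minimal JUnit sketch of one such integration test is shown below; the StudentService and CourseService classes and their methods are hypothetical and stand in for the actual module interfaces.

```java
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertTrue;

// Hypothetical service classes representing the Student and Course modules.
class ModuleIntegrationTest {

    private final StudentService studentService = new StudentService();
    private final CourseService courseService = new CourseService();

    @Test
    void studentCanBeRegisteredForCourse() {
        // Integration of Student Management and Course Management modules
        Student student = studentService.addStudent("S101", "Ankit");
        courseService.registerStudent("CS-501", student.getId());

        assertTrue(courseService.getStudents("CS-501").contains(student.getId()));
    }
}
```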
Conclusion
These testing approaches ensure that both the OOP principles and the module integrations are
validated thoroughly for the Student Management System. The automated JUnit tests can
serve as a base for detecting issues early in development.
Answer 3:
Calculations and Analysis
1. Test Case Execution Rate
• Formula:
Execution Rate = (Executed Test Cases / Total Test Cases) × 100
• Calculation: (180 / 200) × 100 = 90%
• Test cases not executed: 200 - 180 = 20
Execution Rate: 90%
Test Cases Not Executed: 20
2. Test Case Pass Rate
• Formula: Pass Rate = (Passed Test Cases / Executed Test Cases) × 100
• Calculation: (150 / 180) × 100 = 83.33%
Pass Rate: 83.33%
3. Defect Resolution Rate
• Formula: Resolution Rate = (Resolved Defects / Total Defects) × 100
• Calculation: (15 / 20) × 100 = 75%
• Unresolved defects: 20 - 15 = 5
Resolution Rate: 75%
Unresolved Defects: 5
4. Percentage of Critical Defects
• Formula:
Critical Defect Percentage = (Critical Defects / Total Defects) × 100
• Calculation: (5 / 20) × 100 = 25%
Critical Defects Percentage: 25%
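As a quick check, the same calculations can be reproduced in a short Java snippet; the metric values are taken directly from the scenario above.

```java
public class RegressionMetrics {
    public static void main(String[] args) {
        int total = 200, executed = 180, passed = 150;
        int defects = 20, resolved = 15, critical = 5;

        double executionRate = executed * 100.0 / total;     // 90.0
        double passRate = passed * 100.0 / executed;          // 83.33
        double resolutionRate = resolved * 100.0 / defects;   // 75.0
        double criticalPct = critical * 100.0 / defects;      // 25.0

        System.out.printf("Execution rate: %.2f%% (%d not executed)%n", executionRate, total - executed);
        System.out.printf("Pass rate: %.2f%%%n", passRate);
        System.out.printf("Resolution rate: %.2f%% (%d unresolved)%n", resolutionRate, defects - resolved);
        System.out.printf("Critical defect percentage: %.2f%%%n", criticalPct);
    }
}
```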
Discussion on Critical Defects
• Impact on Application:
o Critical defects often involve functionality that is vital for the application's
core operations. Examples could include:
▪ App crashes during major workflows (e.g., task creation or task
updates).
▪ Data loss or corruption in key operations.
▪ Security vulnerabilities like unauthorized access to sensitive data.
o These defects may result in a poor user experience, loss of trust, or even
regulatory penalties if the app involves sensitive data.
• Action Required:
o Prioritize the resolution of all critical defects before releasing updates or
patches.
o Conduct another round of testing after resolving these defects to ensure no
regression issues arise.