Assignment No 1 Manual Testing: What Is Software Testing?

The document discusses manual testing and automated testing. It provides details on manual testing steps such as understanding functionality, preparing a test environment, executing test cases, verifying and recording results. It also discusses test cases, their components like ID, condition, procedure, expected result. The document then discusses automated testing using a tool. It explains recording tests in analog and context sensitive mode and choosing a record mode for automated testing. Recording allows quickly creating test scripts by recording user operations on the application.

Uploaded by Harun Khan
Copyright © Attribution Non-Commercial (BY-NC)

Assignment No 1 Manual Testing

What is Software Testing?
Software testing is a process used to identify the correctness, completeness, and quality of developed computer software. It is a set of activities conducted with the intent of finding errors in software. Software testing is an investigation conducted to provide stakeholders with information about the quality of the product or service under test.

Manual Testing
Manual testing is the oldest and most rigorous type of software testing. It requires a tester to perform test operations on the software by hand, without the help of test automation. Manual testing is a laborious activity that requires the tester to possess a certain set of qualities: to be patient, observant, speculative, creative, innovative, open-minded, resourceful, unopinionated, and skillful.

Steps for Manual Testing
A manual tester would typically perform the following steps:
1. Understand the functionality of the program
2. Prepare a test environment
3. Execute test case(s) manually
4. Verify the actual result
5. Record the result as Pass or Fail
6. Make a summary report of the Pass and Fail test cases
7. Publish the report
8. Record any new defects uncovered during the test case execution

Test
An activity in which a system or component is executed under specified conditions, the results are observed or recorded, and an evaluation is made of some aspect of the system or component.

Test Case
A set of test inputs, execution conditions, and expected results developed for a particular objective; the smallest entity that is always executed as a unit, from beginning to end. A test case is a document that describes an input, action, or event and an expected response, to determine whether a feature of an application is working correctly. A test case should contain particulars such as a test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results. A test case may also include prerequisites.
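Steps 4 through 7 above (verify, record Pass/Fail, summarize, report) can be sketched in a few lines of Python. The test case names and outcomes here are hypothetical examples, not taken from a real project.

```python
# Minimal sketch of recording manual test results (steps 4-5) and
# producing a summary report (step 6). Names and outcomes are invented.

def summarize(results):
    """Count Pass/Fail outcomes and return a summary report."""
    passed = sum(1 for r in results.values() if r == "Pass")
    failed = sum(1 for r in results.values() if r == "Fail")
    return {"total": len(results), "passed": passed, "failed": failed}

results = {
    "TC01 Login with valid credentials": "Pass",
    "TC02 Login with invalid password": "Pass",
    "TC03 Open order by number": "Fail",
}
report = summarize(results)
print(report)  # {'total': 3, 'passed': 2, 'failed': 1}
```

In practice the summary would also list which cases failed, so that step 8 (recording new defects) can reference them.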

Test Case Components
The structure of test cases is one of the things that stays remarkably the same regardless of the technology being tested. The conditions to be tested may differ greatly from one technology to the next, but you still need to know the same basic things about what you plan to test:

ID: A unique identifier for the test case. The identifier does not imply a sequential order of test execution in most cases. The test case ID can also be intelligent; for example, the test case ID ORD001 could indicate a test case for the ordering process on the first web page.

Condition: An event that should produce an observable result. For example, in an e-commerce application, if the user selects an overnight shipping option, the correct charge should be added to the total of the transaction. A test designer would want to test all shipping options, with each option adding a different amount to the transaction total.

Procedure: The process a tester needs to perform to invoke the condition and observe the results. A test case procedure should be limited to the steps needed to perform a single test case.

Expected Result: The observable result of invoking a test condition. If you can't observe a result, you can't determine whether a test passes or fails. In the previous e-commerce shipping example, the expected results would be specifically defined according to the type of shipping the user selects.

Pass/Fail: Where the tester indicates the outcome of the test case. To save space, the same column is typically used to indicate both "pass" (P) and "fail" (F). In some situations, such as a regulated environment, simply indicating pass or fail does not provide adequate documentation of the outcome; for this reason, some people also add a column for "Observed Results."

Defect Number Cross-reference: If you identify a defect while executing a test case, this component gives you a way to link the test case to a specific defect report.
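The components above map naturally onto a record type. The following Python sketch shows one way to represent them; the field values are illustrative only.

```python
# A test case record carrying the components described above: ID, condition,
# procedure, expected result, pass/fail outcome, and defect cross-references.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TestCase:
    case_id: str            # unique identifier, e.g. ORD001
    condition: str          # event that should produce an observable result
    procedure: List[str]    # steps needed to invoke the condition
    expected_result: str    # observable result used to judge pass/fail
    outcome: Optional[str] = None                         # "P" or "F" once run
    defect_refs: List[str] = field(default_factory=list)  # linked defect reports

tc = TestCase(
    case_id="ORD001",
    condition="User selects overnight shipping",
    procedure=["Add item to cart", "Choose overnight shipping", "View total"],
    expected_result="Overnight charge is added to the transaction total",
)
tc.outcome = "F"                 # test failed...
tc.defect_refs.append("DEF-042") # ...so cross-reference the defect report
```

Keeping the defect cross-reference on the test case record is what later makes it possible to associate defects with tests, as described in the defect-tracking assignment.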

a) Write black box test cases for an application using the TestDirector tool
Specifying Testing Requirements
You begin the testing process by specifying testing requirements in TestDirector's Requirements module. Requirements describe in detail what needs to be tested in your application and provide the test team with a foundation on which the entire testing process is based. You define requirements in TestDirector by creating a requirements tree: a graphical representation of your requirements specification that displays your requirements hierarchically. You can group and sort requirements in the tree, monitor task allocation and the progress of requirements, and generate detailed reports and graphs.
Defining Requirements
In the following exercise, you will define requirements for testing the functionality of reserving cruises in Mercury Tours. To define a requirement:
1. Open the TestDirector_Demo project.

2. Display the Requirements module.
3. Display the requirements tree in Document View.

4. Create a new requirement.

In the Name box, type Cruise Reservation. In the Product box, select Mercury Tours (HTML Edition). In the Priority box, select 4-Very High. In the Type box, select Functional. Click OK. TestDirector adds the Cruise Reservation requirement to the requirements tree.
5. Add a sub-requirement. Click the New Child Requirement button to add the next requirement underneath Cruise Reservation, at a lower hierarchical level. The New Requirement dialog box opens. In the Name box, type Cruise Search. In the Product box, select Mercury Tours (HTML Edition). In the Priority box, select 4-Very High. In the Type box, select Functional. Click OK. TestDirector adds the Cruise Search requirement under the Cruise Reservation requirement.
6. Add an additional sub-requirement. In the requirements tree, select the Cruise Reservation requirement. Repeat step 5, but this time, in the Name box, type Cruise Booking. TestDirector adds the Cruise Booking requirement under the Cruise Reservation requirement.
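The hierarchy built in the exercise above can be sketched as a simple tree structure. The dict representation and the numeration-style display are illustrations only, not TestDirector's actual data model.

```python
# Sketch of the requirements tree from the exercise: Cruise Reservation
# with two child requirements. Structure is illustrative only.

def new_requirement(name, priority="4-Very High", req_type="Functional"):
    return {"name": name, "priority": priority, "type": req_type, "children": []}

root = new_requirement("Cruise Reservation")
root["children"].append(new_requirement("Cruise Search"))
root["children"].append(new_requirement("Cruise Booking"))

def show(req, prefix="1"):
    """Render the tree hierarchically, similar to Document View with numeration."""
    lines = [f"{prefix} {req['name']}"]
    for i, child in enumerate(req["children"], 1):
        lines.extend(show(child, f"{prefix}.{i}"))
    return lines

print("\n".join(show(root)))
# 1 Cruise Reservation
# 1.1 Cruise Search
# 1.2 Cruise Booking
```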

Viewing Requirements
You can change the way TestDirector displays requirements in the requirements tree. In the following exercise, you will learn how to zoom in and out of the tree, display numeration, refresh the tree, and expand and collapse the branches of the tree. To view requirements:

1. Display the Requirements module.
2. Zoom in and out of the requirements tree.

3. Display numeration in the requirements tree.

4. Refresh the data in the Requirements module.
5. Expand and collapse the requirements.

Modifying Requirements
You can modify the requirements in the requirements tree. In the following exercise, you will learn how to copy, rename, move, and delete requirements. To modify requirements:
1. Display the Requirements module.
2. Copy a requirement.
3. Rename the Cruise Reservation_Copy_ requirement.
4. Move the Hotel Reservation requirement to a different location in the requirements tree.

5. Delete the Hotel Reservation requirement.

Converting Requirements
Once you have created the requirements tree, you use the requirements as a basis for defining your test plan tree in the Test Plan module. You can use the Convert to Tests wizard to assist you in designing your test plan tree. The wizard enables you to convert selected requirements, or all requirements in the requirements tree, to tests or subjects in the test plan tree. In the following exercise, you will convert the Cruise Reservation requirement to a subject in the test plan tree; its sub-requirements will be converted to tests. To convert a requirement:
1. Display the Requirements module.
2. Select a requirement.
3. Open the Convert to Tests wizard.

4. Choose an automatic conversion method. 5. Start the conversion process.

6. Convert sub-requirements to tests.

7. Choose the destination subject path.

8. Finalize the conversion process. 9. View the converted requirements in the test plan tree.

b) Perform white box testing Cyclomatic complexity, Data flow testing, Control flow testing.
What is White-box Testing?
White-box testing looks at the internal structure of a program and derives test cases based on its logic or control flow. Test cases can be designed to reach every branch in the code and to exercise each condition. It is typically done during unit testing, and is also known as structural testing or glass-box testing. By contrast, black-box testing looks at the program from an external point of view and derives test cases from the specification; the only criterion on which the program is judged is whether it produces the correct output for a given input.

Control Flow Graph Example: Code Fragment

do {
    if (A) { ... }
    else {
        if (B) {
            if (C) { ... }
            else { }
        }
        else if (D) { ... }
        else { ... }
    }
} while (E);

[Control flow graph for the fragment above: decision nodes A, B, C, D, and E, each with True and False branches.]
Cyclomatic Complexity
Cyclomatic complexity directly measures the number of linearly independent paths through a program's source code, taking into account its various decision points. It can be calculated in any of the following equivalent ways:
1. E - N + 2, where E is the number of edges and N is the number of nodes in the control flow graph
2. The number of regions in the flow graph
3. The number of decision (predicate) nodes + 1
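The E - N + 2 formula can be computed directly from an edge list. The sample graph below is a simple if/else diamond with one decision, not the larger fragment above; it is chosen so the expected value is easy to check by hand.

```python
# Cyclomatic complexity V(G) = E - N + 2, computed from a control flow
# graph given as a list of directed edges.

def cyclomatic_complexity(edges):
    nodes = {n for edge in edges for n in edge}
    return len(edges) - len(nodes) + 2

# A single if/else: entry branches to "then" or "else", both merge at exit.
diamond = [("entry", "then"), ("entry", "else"),
           ("then", "exit"), ("else", "exit")]
print(cyclomatic_complexity(diamond))  # 2 = one decision point + 1
```

Applying rule 3 (decision nodes + 1) to the code fragment above, which has five decision points A through E, would give a complexity of 6, assuming each condition is a single simple predicate.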

Assignment No 2 Automated Testing


Perform Black Box testing using an automated testing tool on an application:
A] Recording a test in Analog and Context Sensitive mode
B] Choosing a Record Mode

By recording, you can quickly create automated test scripts. You work with your application as usual, clicking objects with the mouse and entering keyboard input. WinRunner records your operations and generates statements in TSL, Mercury Interactive's Test Script Language. These statements appear as a script in a WinRunner test window. Before you begin recording a test, you should plan the main stages of the test and select the appropriate record mode. Two record modes are available: Context Sensitive and Analog.

Context Sensitive
Context Sensitive mode records your operations in terms of the GUI objects in your application. WinRunner identifies each object you click (such as a window, menu, list, or button) and the type of operation you perform (such as press, enable, move, or select). For example, if you record a mouse click on the OK button in the Flight Reservation Login window, WinRunner records the following TSL statement in your test script:

    button_press ("OK");

When you run the script, WinRunner reads the command, looks for the OK button, and presses it.

Analog
In Analog mode, WinRunner records the exact coordinates traveled by the mouse, as well as mouse clicks and keyboard input. For example, if you click the OK button in the Login window, WinRunner records statements that look like this:

    When this statement is recorded...    ...it really means:
    move_locator_track (1);               mouse track
    mtype ("<T110><kLeft>");              left mouse button press
    mtype ("<kLeft>+");                   left mouse button release

When you run the test, WinRunner retraces the recorded movements using absolute screen coordinates. If your application is located in a different position on the desktop, or the user interface has changed, WinRunner is not able to execute the test correctly.
In this exercise you will create a script that tests the process of opening an order in the Flight Reservation application. You will create the script by recording in Context Sensitive mode.
1. Start WinRunner. If WinRunner is not already open, choose Programs > WinRunner > WinRunner.
2. Open a new test. If the Welcome window is open, click the New Test button. Otherwise, choose File > New. A new test window opens in WinRunner.
3. Start the Flight Reservation application and log in.

Choose Programs > WinRunner > Sample Applications > Flight 1A on the Start menu. In the Login window, type your name and the password mercury, and click OK.

4. Start recording in Context Sensitive mode. In WinRunner, choose Create > Record - Context Sensitive, or click the Record button on the toolbar. From this point on, WinRunner records all mouse clicks and keyboard input. Note that the text Rec appears in blue above the recording button; this indicates that you are recording in Context Sensitive mode. The status bar also informs you of your current recording mode.
5. Open order #3. In the Flight Reservation application, choose File > Open Order. In the Open Order dialog box, select the Order No. check box. Type 3 in the adjacent box, and click OK. Watch how WinRunner generates a test script in the test window as you work.
6. Stop recording. In WinRunner, choose Create > Stop Recording, or click the Stop button on the toolbar.
7. Save the test. Choose File > Save, or click the Save button on the toolbar. Save the test as lesson3 in a convenient location on your hard drive. Click Save to close the Save Test dialog box. Note that WinRunner saves the lesson3 test in the file system as a folder, not as an individual file. This folder contains the test script and the results that are generated when you run the test.

Output: The WinRunner Test Results window opens and displays the test results.
Conclusion: Recording in Context Sensitive mode was demonstrated, and the test results were observed.

Assignment No 3
Defect Tracking
Defect Tracking:
a. Log the test results in TestDirector.
b. Prepare a Defect Tracking Report / Bug Report using MS-Excel.
Locating and repairing defects is an essential phase in application development. Defects can be detected and reported by developers, testers, and end users at all stages of the testing process. Using TestDirector, you can report defects detected in the application and track them until they are repaired.

How to Track Defects
When you report a defect to a TestDirector project, it is tracked through the following stages: New, Open, Fixed, and Closed. A defect may also be Rejected, or Reopened after it is fixed.

When you initially report the defect to the TestDirector project, by default it is assigned the status New. A quality assurance or project manager reviews the defect, and determines whether or not to consider the defect for repair. If the defect is refused, it is assigned the status Rejected. If the defect is accepted, the quality assurance or project manager determines a repair priority, changes its status to Open, and assigns it to a member of the development team. A developer repairs the defect and assigns it the status Fixed. You retest the application, making sure that the defect does not recur. If the defect recurs, the quality assurance or project manager assigns it the status Reopened. If the defect is actually repaired, it is assigned the status Closed.
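The lifecycle described above can be written down as a small transition table. The exact transitions TestDirector permits may differ; this sketch follows only what the text states: New goes to Rejected or Open, Open to Fixed, Fixed to Closed or Reopened, and Reopened back to Fixed.

```python
# Defect lifecycle from the text, as a state machine. The transition
# table is an assumption based on the description above, not on
# TestDirector's actual configuration.

TRANSITIONS = {
    "New": {"Rejected", "Open"},
    "Open": {"Fixed"},
    "Fixed": {"Closed", "Reopened"},
    "Reopened": {"Fixed"},
    "Rejected": set(),   # terminal
    "Closed": set(),     # terminal
}

def move(status, new_status):
    """Apply a status change, rejecting transitions the table does not allow."""
    if new_status not in TRANSITIONS[status]:
        raise ValueError(f"cannot go from {status} to {new_status}")
    return new_status

# A defect that recurred once before being repaired for good:
status = "New"
for step in ["Open", "Fixed", "Reopened", "Fixed", "Closed"]:
    status = move(status, step)
print(status)  # Closed
```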

Adding New Defects


You can add a new defect to a TestDirector project at any stage of the testing process. In the following exercise you will report the defect that was detected while running the Cruise Booking test.

To add a new defect:
1. Open the TestDirector_Demo project. If the TestDirector_Demo project is not already open, log in to the project.
2. Display the Defects module. Click the Defects tab. The Defects Grid displays defect data; each line in the grid displays a separate defect record.
3. Open the Add Defect dialog box. Click the Add Defect button. The Add Defect dialog box opens. Note that fields marked in red are mandatory.

4. Summarize the defect. In the Summary box, type a brief description of the defect. For example, type: Unable to reserve a cruise from the Cruise page.
5. Specify the defect information. In Category, specify the class category of the defect: select Defect. Skip the Detected By box; this field indicates the name of the person who detected the defect, and by default the login user name is displayed. Skip the Project box; this field indicates the name of the project in which the defect was found. Accept the default value. In Severity, specify the severity level of the defect: select 2-Medium. Skip the Reproducible box; this field indicates whether the defect can be reproduced under the same conditions in which it was detected. Accept the default value. In Subject, specify the subject in the test plan tree to which the defect is related: select Cruises. Skip the Detected on Date box; this field indicates the date on which the defect was found, and by default today's date is displayed. In Detected in Version, specify the application version in which the defect was detected: select Version 1.01. Skip the Status box; when you initially add a defect to a project, it is assigned the status New. Skip the Regression field and accept the default value.
6. Skip the user-defined fields. Click the Next Page arrow. For the purpose of this exercise, skip the following fields: Language, Browser, and Operating System. Click the Back Page arrow.
7. Type a detailed description of the defect. In the Description box, type a description of the defect. For example, type: The defect was detected in the Cruise Booking test. When you click the Now Accepting Reservations button, the Flight Finder page opens instead of the Cruise Reservation page.

8. Attach the URL address of the Mercury Tours page where the defect was detected. Click the Attach URL button. The Attach URL dialog box opens. Type the URL address of the Mercury Tours page. For example, type: http://[server name]/mtours/servlet/com.mercurytours.servlet.ReservationServlet (make sure to replace [server name] with your actual TestDirector server name). Click OK. The URL appears above the Description box.
9. Spell check your text. Place the cursor in the Description box, and click the Check Spelling button. If there are no errors, a confirmation message box opens. If errors are found, the Spelling dialog box opens and displays each word together with replacement suggestions.
10. Add the defect to the TestDirector project. Click the Submit button. A confirmation message box indicates that the defect was added successfully. Click OK.
11. Close the Add Defect dialog box. Click Close. The defect is listed in the Defects Grid.

Matching Defects
Matching defects enables you to eliminate duplicate or similar defects in your project. Each time you add a new defect, TestDirector stores lists of keywords from the Summary and Description fields. When you search for similar defects, keywords in these fields are matched against those of other defects. Note that keywords must be more than two characters long, and letter case does not affect your results. TestDirector ignores the following: articles (a, an, the); coordinating conjunctions (and, but, for, nor, or); boolean operators (and, or, not, if, then); and wildcards (?, *, [ ]). In the following exercise, you will match defects by comparing a selected defect with all other existing defects in the TestDirector_Demo project. To match defects:
1. Display the Defects module. Click the Defects tab.
2. Select defect number 37. In the Defects Grid, select defect number 37. If you cannot find defect number 37 in the Defects Grid, you will need to clear the filter that was applied to the grid; to do so, click the Clear Filter/Sort button.
3. Find similar defects. Click the Find Similar Defects button. Results are displayed in the Similar Defects dialog box. Similar defects are displayed according to the percentage of detected similarity.
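The keyword matching described above can be sketched as follows. The filtering rules (words longer than two characters, case-insensitive, stop words ignored) come from the text; the Jaccard-style similarity score is an assumption for illustration, since TestDirector's actual scoring algorithm is not documented here.

```python
# Sketch of defect keyword matching: extract keywords per the rules in
# the text, then score overlap between two defects' summaries.
# The percentage formula is an illustrative assumption.

STOP_WORDS = {"a", "an", "the", "and", "but", "for", "nor", "or",
              "not", "if", "then"}

def keywords(text):
    """Keywords: more than two characters, lowercased, stop words removed."""
    words = text.lower().replace(".", " ").replace(",", " ").split()
    return {w for w in words if len(w) > 2 and w not in STOP_WORDS}

def similarity(text_a, text_b):
    """Percentage of shared keywords (intersection over union)."""
    ka, kb = keywords(text_a), keywords(text_b)
    if not ka and not kb:
        return 0.0
    return 100.0 * len(ka & kb) / len(ka | kb)

a = "Unable to reserve a cruise from the Cruise page"
b = "Cruise page does not allow the user to reserve a cruise"
print(similarity(a, b))  # 37.5
```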

Click Close to close the Similar Defects dialog box.

Updating Defects
Tracking the repair of defects in a project requires that you periodically update them. You can do so directly in the Defects Grid, or in the Defect Details dialog box. Note that the ability to update some defect fields depends on your permission settings as a user. In this exercise you will update your defect information. To update a defect:
1. Display the Defects module. Click the Defects tab.
2. Update the defect directly in the Defects Grid. In the Defects Grid, select the defect that was added in Adding New Defects. To assign the defect to a member of the development team, click the Assigned to box that corresponds to the defect, and select james_td from the Assign to list.
3. Open the Defect Details dialog box. Click the Defect Details button. The Defect Details dialog box opens.

4. Change the severity level of the defect. Select 5-Urgent from the Severity list.
5. Add a new R&D comment to explain the change in the severity level. Click the Description tab, then click the Comment button. A new section is added to the R&D Comment box, displaying your user name and the current date. Type: This defect also occurs in Mercury Tours version 1.0.
6. View the attachments. Click the Attachments tab. Note that the URL attachment is listed.
7. View the history. Click the History tab to view the history of changes made to the defect. For each changed field, the date of the change, the name of the person who made the change, and the new value are displayed.
8. Close the Defect Details dialog box. Click OK to exit the dialog box and save your changes.

Mailing Defects
You can send an e-mail about a defect to another user. This enables you to routinely inform development and quality assurance personnel about defect repair activity. In the following exercise, you will e-mail your defect.

To mail a defect:
1. Display the Defects module. Click the Defects tab.
2. Select a defect. Select the defect you added in Adding New Defects, and click the Mail Defects button. The Send Mail dialog box opens.

3. Type a valid e-mail address. In the To box, type your actual e-mail address.
4. Type a subject for the e-mail. In the Subject box, type a subject for the e-mail.
5. Include the attachments and history of the defect. In the Include box, select Attachments and History.
6. E-mail the defect. Click Send. A message box opens; click OK.
7. View the e-mail. Open your mailbox and view the defect you sent.

Associating Defects with Tests


You can associate a test in your test plan with a specific defect in the Defects Grid. This is useful, for example, when a new test is created specifically for a known defect. By creating an association, you can determine whether the test should be run based on the status of the defect. Note that any requirements covered by the test are also associated with the defect. You can also create an association during a manual test run by adding a defect; TestDirector automatically creates an association between the test run and the new defect. In the following exercise, you will associate your defect with the Cruise Booking test in the Test Plan module, and view the associated test in the Defects Grid. To associate a defect with a test:
1. Display the Test Plan module. Click the Test Plan tab.
2. Select the Cruise Booking test. In the test plan tree, expand the Cruise Reservation subfolder under Cruises. Right-click the Cruise Booking test in the test plan tree or Test Grid, and choose Associated Defects.

The Associated Defects dialog box opens.

3.Add an associated defect. Click the Associate button. The Associate Defect dialog box opens.

Click the Select button to select your defect from the list of available defects. Click the Associate button. An information box opens; click OK. Click Close to close the list of available defects. Your defect is added to the list. Click Close to close the Associated Defects dialog box.
4. View the associated test in the Defects Grid. Click the Defects tab. Select your defect in the Defects Grid, and choose View > Associated Test. The Associated Test dialog box opens. The Details tab displays a description of the test. The Design Steps tab lists the test steps. The Test Script tab displays the test script if the test is automated. The Reqs Coverage tab displays the requirements covered by the test. The Test Run Details tab displays run details for the test; this tab is only available if the association was made during a test run. The All Runs tab displays the results of all test runs and highlights the run from which the defect was submitted; this tab is also only available if the association was made during a test run.

Creating Favorite Views


A favorite view is a view of a TestDirector window with the settings you have applied to it. You can save favorite views of the Test Grid, Execution Grid, Defects Grid, and all TestDirector reports and graphs. For example, your favorite view settings may include applying a filter to grid columns, sorting fields in a report, or setting a graph appearance. In the following exercise, you will create a favorite view in the Defects Grid.

To create a favorite view:
1. Display the Defects module. Click the Defects tab.
2. Define a filter to view the defects you detected that are not closed. Click the Set Filter/Sort button. The Filter dialog box opens.

Click the Filter Condition box that corresponds to Detected By. Click the Browse button. The Select Filter Condition dialog box opens.

Under Users, select your TestDirector login user name (alice_td, cecil_td, or michael_td). Click OK to close the Select Filter Condition dialog box. Click the Filter Condition box that corresponds to Status. Click the Browse button. The Select Filter Condition dialog box opens. Select the logical expression Not, then select Closed.

Click OK to close the Select Filter Condition dialog box, then click OK to close the Filter dialog box. The Defects Grid displays the defects you detected that are not closed.
3. Add a favorite view. Click the Favorites button, and choose Add to Favorites. The Add Favorite dialog box opens.

In the Name box, type: My detected defects (status Not Closed). You can add a favorite view to either a public folder or a private folder. Views in the public folder are accessible to all users; views in the private folder are accessible only to the person who created them. For the purpose of this exercise, select Private. Click OK. The new view name is added to the Favorites list.
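The filter behind this favorite view (Detected By equals the current user, Status not Closed) amounts to a simple predicate over defect records. The field names and sample data below are illustrative, not TestDirector's schema.

```python
# The favorite-view filter as a plain list comprehension over defect
# records. Field names and sample data are invented for illustration.

defects = [
    {"id": 37, "detected_by": "alice_td", "status": "Open"},
    {"id": 38, "detected_by": "alice_td", "status": "Closed"},
    {"id": 39, "detected_by": "cecil_td", "status": "New"},
]

def my_open_defects(grid, user):
    """Defects detected by `user` whose status is not Closed."""
    return [d for d in grid
            if d["detected_by"] == user and d["status"] != "Closed"]

print([d["id"] for d in my_open_defects(defects, "alice_td")])  # [37]
```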

Assignment No 4
a. Calculate software metrics for an application using the FP (function point) analysis method.
b. Prepare any two of Ishikawa's seven tools listed below for an application:
1. The cause-and-effect (Ishikawa) diagram
2. The check sheet
3. The control chart
4. The histogram
5. The Pareto chart
6. The scatter diagram
7. Stratification

Function-Oriented Metrics
Function-oriented metrics are mainly used in business applications, where the focus is on program functionality. They combine a measure of the information domain with a subjective assessment of complexity. The most common are function points (FP) and feature points. Examples of use include:

productivity: FP / person-month
quality: faults / FP
cost: $$ / FP
documentation: doc_pages / FP

The function point metric is evaluated using the following table:

Parameter                  Count   Simple   Average   Complex   = Weight
# of user inputs           ___     * 3      * 4       * 6       = ___
# of user outputs          ___     * 4      * 5       * 7       = ___
# of user inquiries        ___     * 3      * 4       * 6       = ___
# of files                 ___     * 7      * 10      * 15      = ___
# of external interfaces   ___     * 5      * 7       * 10      = ___
                                                 Total_weight   = ___

The following relationship is used to compute function points:

FP = total_weight * [0.65 + 0.01 * SUM(Fi)]
