Network Program


blosumsdotnet.blogspot.com/.../why-we-perform-stress-testing.html

http://www.softwareqatest.com/

Stress testing is the process of determining the ability of a computer, network, program or device to maintain a certain level of effectiveness under unfavorable conditions. The process can involve quantitative tests done in a lab, such as measuring the frequency of errors or system crashes. The term also refers to qualitative evaluation of factors such as availability or resistance to denial-of-service (DoS) attacks. Stress testing is often done in conjunction with the more general process of performance testing. When conducting a stress test, an adverse environment is deliberately created and maintained. Actions involved may include:

Running several resource-intensive applications in a single computer at the same time
Attempting to hack into a computer and use it as a zombie to spread spam
Flooding a server with useless e-mail messages
Making numerous, concurrent attempts to access a single Web site
Attempting to infect a system with viruses, Trojans, spyware or other malware.

The adverse condition is progressively and methodically worsened, until the performance level falls below a certain minimum or the system fails altogether. In order to obtain the most meaningful results, individual stressors are varied one by one, leaving the others constant. This makes it possible to pinpoint specific weaknesses and vulnerabilities. For example, a computer may have adequate memory but inadequate security. Such a system, while able to run numerous applications simultaneously without trouble, may crash easily when attacked by a hacker intent on shutting it down. Stress testing can be time-consuming and tedious. Nevertheless, some test personnel enjoy watching a system break down under increasingly intense attacks or stress factors. Stress testing can provide a means to measure graceful degradation, the ability of a system to maintain limited functionality even when a large part of it has been compromised. Once the testing process has caused a failure, the final component of stress testing is determining how well or how fast a system can recover after an adverse event.
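As a rough illustration of one stressor listed above (making numerous, concurrent attempts to access a single Web site) and of the quantitative side of stress testing (measuring the frequency of errors), here is a minimal sketch in Python using only the standard library; the target URL, request count, and concurrency level are invented placeholders.

import concurrent.futures
import urllib.request

TARGET_URL = "http://test-server.example.com/"   # placeholder: point at a system you own
REQUESTS = 500                                    # total requests in this burst
CONCURRENCY = 50                                  # simultaneous workers (the stressor being varied)

def hit(url):
    """Return True if the request succeeds, False on any error or timeout."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return 200 <= resp.status < 400
    except Exception:
        return False

with concurrent.futures.ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    results = list(pool.map(hit, [TARGET_URL] * REQUESTS))

errors = results.count(False)
print("error rate: %.1f%%" % (100.0 * errors / REQUESTS))

In practice the concurrency level would be raised step by step, holding the other stressors constant, until the error rate crosses an acceptable threshold or the system fails, matching the procedure described above.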

Stress Testing: Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements; it is used to check how the application performs under extreme conditions.

Resolution Testing: Sometimes a page is developed only for a 1024-pixel-wide display, and the same page shows a horizontal scroll bar at 800 x 600. Users dislike a horizontal scroll bar on the screen, which is why pages are tested at the different supported resolutions.

Cross-browser Testing: Sometimes called compatibility testing. A page developed to be IE-compatible may not work properly in Firefox or Netscape, because many scripts are not supported by browsers other than IE. This is why cross-browser testing is needed.

Software Metrics Guide


1 Introduction
2 Metrics Set
2.1 Progress
2.2 Effort
2.3 Cost
2.4 Review Results
2.5 Trouble Reports
2.6 Requirements Stability
2.7 Size Stability
2.8 Computer Resource Utilization
2.9 Training
3 Overview of Project Procedures

1 Introduction
Effective management of the software development process requires effective measurement of that process.

This guide presents an overview of the collection, analysis, and reporting of software metrics. Only the progress, effort, and trouble report metrics are required for the project; however, the student should be familiar with all the metrics described below. Software metrics are numerical data related to software development. Metrics strongly support software project management activities. They relate to the four functions of management as follows:

1. Planning - Metrics serve as a basis for cost estimating, training planning, resource planning, scheduling, and budgeting.
2. Organizing - Size and schedule metrics influence a project's organization.
3. Controlling - Metrics are used to track the status of software development activities and their compliance with plans.
4. Improving - Metrics are used as a tool for process improvement, to identify where improvement efforts should be concentrated, and to measure the effects of process improvement efforts.

A metric quantifies a characteristic of a process or product. Metrics can be directly observable quantities or can be derived from one or more directly observable quantities. Examples of raw metrics include the number of source lines of code, number of documentation pages, number of staff-hours, number of tests, and number of requirements. Examples of derived metrics include source lines of code per staff-hour, defects per thousand lines of code, or a cost performance index.

The term indicator is used to denote a representation of metric data that provides insight into an ongoing software development project or process improvement activity. Indicators are metrics in a form suitable for assessing project behavior or process improvement. For example, an indicator may be the behavior of a metric over time or the ratio of two metrics. Indicators may include the comparison of actual values versus the plan, project stability metrics, or quality metrics. Examples of indicators used on a project include actual versus planned task completions, actual versus planned staffing, number of trouble reports written and resolved over time, and number of requirements changes over time. Indicators are used in conjunction with one another to provide a more complete picture of project or organization behavior. For example, a progress indicator is related to requirements and size indicators. All three indicators should be used and interpreted together.
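To make the raw metric / derived metric / indicator distinction concrete, the short sketch below (Python; every number is invented for illustration) derives two of the example metrics named above and forms one simple indicator by comparing actuals against the plan.

# Raw metrics (invented example values)
sloc = 12000             # source lines of code
staff_hours = 800
defects = 54
planned_completions = 40
actual_completions = 34

# Derived metrics
productivity = sloc / staff_hours            # SLOC per staff-hour
defect_density = defects / (sloc / 1000.0)   # defects per KSLOC

# A simple indicator: actual vs. planned task completions
completion_ratio = actual_completions / planned_completions

print("SLOC/staff-hour: %.1f" % productivity)
print("defects/KSLOC:   %.1f" % defect_density)
print("completions achieved: %.0f%% of plan" % (100 * completion_ratio))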

2 Metrics Set
The metrics to be collected provide indicators that track ongoing project progress, software products, and software development processes.

The defined indicators are consistent with the Software Engineering Institute's Capability Maturity Model (CMM). Table 1 shows the indicator categories, the management insight provided, and the specific indicators for recommended metrics. Depending upon the nature of the project, specific contractual requirements, or management preference, a project may choose to collect additional metrics or to tailor the recommended set.

Chart Construction Summary

Charts are prepared for the standard metrics. All charts require titles, legends, and labels for all axes. They should clearly and succinctly show the metrics of interest, with no excessive detail to distract the eye. Do not overuse different line types, patterns, colors, or added dimensionality unless they are used specifically to differentiate items. Overlaid data is preferable to multiple charts when the different data are related to each other and can be meaningfully depicted without obscuring other details.

The most common type of chart is the tracking chart. This chart is used extensively for the Progress indicator, and is used in similar forms for many of the other indicators. For task progress, it depicts the cumulative number of planned and actual task completions (or milestones) against time. For other indicators, it may show actual versus planned staffing profiles, actual versus planned software size, actual versus planned resource utilization, or other measures compared over time.

There are many ways to modify the tracking chart. A horizontal planned line representing the cumulative goal can be drawn at the top, multiple types of tasks can be overlaid on a single tracking chart (such as design, code, and integration), or the chart can be overlaid with other types of data. It is recommended that tracked quantities be shown as a line chart, and that replanned task progress be shown as a separate planning line. The original planned baseline is kept on the chart, as well as all replanning data if there is more than a single replan.

The following sections provide brief descriptions of the different metrics categories with samples of the required charts. Individual projects may enhance the charts for their situations or have additional charts for the categories. The sample charts are designed for overhead presentations and are available as templates from the professor.

Table 1 Recommended Metrics Set for a Project
Progress
Management insight: Provides information on how well the project is performing with respect to its schedule.
Indicators: Actual vs. planned task completions; actual vs. planned durations.

Effort
Management insight: Provides visibility into the contributions of staffing to project costs, schedule adherence, and product quality.
Indicators: Actual vs. planned staffing profiles.

Cost
Management insight: Provides tracking of actual costs against estimated costs and predicts future costs.
Indicators: Actual vs. planned costs; cost and schedule variances.

Review Results
Management insight: Provides status of action items from life-cycle reviews.
Indicators: Status of action items.

Trouble Reports
Management insight: Provides insight into product and process quality and the effectiveness of the testing.
Indicators: Status of trouble reports; number of trouble reports opened, closed, etc. during the reporting period.

Requirements Stability
Management insight: Provides visibility into the magnitude and impact of requirements changes.
Indicators: Number of requirements changes/clarifications; distribution of requirements over releases.

Size Stability
Management insight: Provides insight into the completeness and stability of the requirements and into the ability of the staff to complete the project within the current budget and schedule.
Indicators: Size growth; distribution of size over releases.

Computer Resource Utilization
Management insight: Provides information on how well the project is meeting its computer resource utilization goals/requirements.
Indicators: Actual vs. planned profiles of computer resource utilization.

Training
Management insight: Provides information on the training program and staff skills.
Indicators: Actual vs. planned number of personnel attending classes.

2.1 Progress
Progress indicators provide information on how well the project is performing with respect to planned task completions and keeping schedule commitments. Tasks are scheduled and then progress is tracked against those schedules. Metrics are collected for the activities and milestones identified in the project schedules. Actual completions are compared with planned completions; the difference indicates the deviation from the plan.

Each project identifies tasks for which progress metrics will be collected. The completion criteria for each task must be well defined and measurable. The project should establish range limits (thresholds) on the planned task progress; the thresholds are used for managing software development risk. Figure 1 depicts the cumulative number of planned and actual completions (or milestones) over time. Note that this chart is generic, and each project will substitute its specific tasks (units, milestones, SLOC, etc.). Additionally, each project is expected to produce multiple progress charts for different types of tasks, different teams, and so on.

Figure 1 Progress Indicator
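To complement the tracking chart, the sketch below (Python; the per-period counts and the 10% threshold are invented) shows the bookkeeping behind such a progress indicator: cumulative actual completions are compared to the plan each reporting period, and periods whose deviation exceeds the threshold are flagged.

from itertools import accumulate

planned_per_period = [5, 8, 10, 12, 10]   # invented planned task completions per period
actual_per_period  = [5, 6,  8, 11, 10]   # invented actual completions per period
THRESHOLD = 0.10                          # flag deviations larger than 10% of plan

planned_cum = list(accumulate(planned_per_period))
actual_cum  = list(accumulate(actual_per_period))

for period, (plan, actual) in enumerate(zip(planned_cum, actual_cum), start=1):
    deviation = (actual - plan) / plan
    flag = "  <-- investigate" if abs(deviation) > THRESHOLD else ""
    print("period %d: planned %d, actual %d, deviation %+.0f%%%s"
          % (period, plan, actual, 100 * deviation, flag))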

2.2 Effort
Effort indicators allow the software manager to track personnel resources. They provide visibility into the contribution of staffing to project costs, schedule adherence, product quality, and the amount of effort required for each activity. Effort indicators include trends in actual staffing levels, the staffing profile by activity or labor category, or a profile of unplanned staff losses. Effort indicators may be used by all levels of project software management to measure the actual profile against the plan. Each level of management forms a profile for its area of control and monitors the actual profile against the plan. Determining the number of staff needed at any one time is an important function performed by software management. By summing the number of staff during each reporting period, the composite staffing profile for the project can be determined.

These indicators are applied during all life-cycle phases, from project inception to project end. Effort metrics are to be collected and reported at least on a monthly basis. The effort and cost metrics are related: by convention, effort metrics are non-cumulative expenditures of human resources, and cost metrics are cumulative levels of effort as tracked by earned value. Thus, cost metrics are a cumulative depiction of effort. Figure 2 shows a sample plot of monthly actual versus planned effort.

Figure 2 Effort Indicator
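The effort/cost convention described above can be restated in a few lines (Python; the monthly staff-hour figures are invented): effort is the per-month expenditure, and cost is its running total in staff-hours.

from itertools import accumulate

monthly_effort = [320, 480, 520, 510, 400]          # invented staff-hours expended each month

cumulative_cost = list(accumulate(monthly_effort))   # cost as cumulative effort, in staff-hours

for month, (effort, cost) in enumerate(zip(monthly_effort, cumulative_cost), start=1):
    print("month %d: effort %d staff-hours, cumulative cost %d staff-hours" % (month, effort, cost))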

2.3 Cost
Cost management is an important activity for the success of a project, and labor is the primary component of software development cost. Managers must define the work in their area, determine the skill level required to perform the work, and use productivity estimates and schedule constraints to determine budgeted costs over time.

Use staff-hours to measure cost, rather than dollars. The dollars per staff-hour varies over time and by labor category, and the conversion is made only by Finance. Cost is related to the effort indicator, with cost defined as an accumulation of effort expenditures. (The total project cost also includes non-labor costs, but they are not tracked here.) Only those projects using earned value can report the earned value quantities. A Work Breakdown Structure (WBS) is established to define the structures that will be used to collect the costs. The WBS identifies separate elements for requirements, design, documentation, code and unit test, integration, verification, and system testing. Costs can also be segregated by component, function, or configuration item. Work packages are derived from the WBS. Costs are allocated to work packages using an earned value method. This system allows managers to track the actual costs and measure them against the budget for their respective areas of responsibility. Figure 3 is a sample Cost Indicator Chart. The actual and budgeted quantities are derived from an earned value system, and are shown in terms of staff-hours.

Figure 3 Cost Indicator
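The cost and schedule variances listed in Table 1 fall directly out of the earned value quantities. The sketch below (Python; the three inputs are invented and expressed in staff-hours, as recommended above) shows the standard earned value calculations.

# Earned value inputs for one work package, in staff-hours (invented values)
pv = 1000.0   # planned value: budgeted cost of work scheduled
ev = 900.0    # earned value: budgeted cost of work actually performed
ac = 1100.0   # actual cost: staff-hours actually charged

cost_variance     = ev - ac   # negative means over budget
schedule_variance = ev - pv   # negative means behind schedule
cpi = ev / ac                 # cost performance index (< 1.0 is unfavorable)
spi = ev / pv                 # schedule performance index (< 1.0 is unfavorable)

print("CV=%.0f  SV=%.0f  CPI=%.2f  SPI=%.2f" % (cost_variance, schedule_variance, cpi, spi))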

2.4 Review Results


Review Results indicators provide insight into the status of action items from life-cycle reviews. The term Action Item (AI) refers to inspection defects and customer comments. Reviews include the following:

Formal inspections of software documents or code
Formal customer milestones, e.g., SSR, PDR, CDR, or TRR
Informal peer evaluations of products, e.g., walkthroughs, technical reviews, or internal PDRs
Management reviews
Process reviews, e.g., SQA audits, SEI CMM assessments, or the causal analysis from formal inspections.

There are standards for some reviews, as well as procedures for conducting them. For example, formal inspections result in assertion logs that document the minor and major defects uncovered by the inspection process. Therefore, standard review result indicators for formal inspections are: 1. Counts of minor/major defects 2. Rates of defect detection (e.g., assertions per inspection meeting minute, defects per inspected document page, or defects per KSLOC of code inspected) 3. Defect status (e.g., age of open defects, number of open/closed defects, and breakdown by defect categories). A customer-conducted review such as a Preliminary Design Review (PDR) generates AIs that must be closed before approval of the Software Design Document. Therefore, standard review result indicators for a PDR are the number of comments written and their status (open, closed, and age). Review metrics record the AIs identified in the review findings and track them until they are resolved. These metrics provide status on both products and processes. Review results are not to be used to evaluate the performance of individuals. Review Results are collected and reported at least monthly at every stage of the software life cycle, but preferably weekly for key AIs. Figure 4 depicts the cumulative counts of AIs written and closed by reporting period.

Figure 4 Review Results Indicator
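As a small illustration of the inspection indicators listed above, the sketch below (Python; the inspection log and dates are invented) computes a defect density per inspected page and the open/closed status and age of action items.

from datetime import date

# Invented formal-inspection results
pages_inspected = 45
defects = {"major": 6, "minor": 19}

# Invented action items: (id, date opened, date closed or None if still open)
action_items = [
    ("AI-1", date(2011, 3, 1), date(2011, 3, 8)),
    ("AI-2", date(2011, 3, 1), None),
    ("AI-3", date(2011, 3, 5), None),
]
as_of = date(2011, 3, 20)   # reporting date for age calculations

defects_per_page = (defects["major"] + defects["minor"]) / float(pages_inspected)
open_items = [ai for ai in action_items if ai[2] is None]

print("defects per inspected page: %.2f" % defects_per_page)
print("open AIs: %d of %d" % (len(open_items), len(action_items)))
for ai_id, opened, _ in open_items:
    print("  %s has been open for %d days" % (ai_id, (as_of - opened).days))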

2.5 Trouble Reports


TR indicators provide managers with insight into the quality of the product, software reliability, and the effectiveness of testing. They also provide information on the software development process. The terms defect and problem will be used interchangeably herein. Monthly tracking of TR indicators shows the project's trends in the following areas:

1. The rate at which TRs are being written and resolved.
2. The type and severity of the TRs.
3. The relationship between the number of TRs and the number of test cases or test steps passed.
4. The TR density (the number of TRs per unit size).
5. The number of defects in each software application/unit.

TR indicators are applicable only in the following life-cycle stages (and each release of the software within these stages, and during the informal and formal test segments of these stages): (1) application test and integration, (2) system test, (3) acceptance test. Thus the TR indicators apply only to defects found during the operation or execution of a computer program. Due to the shortness of testing periods, and the dynamics between the test team and the implementation team that analyzes the TRs and fixes the defects, the TR indicators are generally evaluated on a weekly basis. The terms open and closed are defined as follows:

Open - The problem has been reported.
Closed - The investigation is complete and the action required to resolve the problem has been proposed, implemented, and verified to the satisfaction of all concerned. In some cases, a TR will be found to be invalid as part of the investigative process and closed immediately.

Figure 5 shows the cumulative count of total, open, and closed TRs over time (weekly periods).

Figure 5 TR Indicator
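A minimal sketch of the weekly TR bookkeeping behind such a chart is shown below (Python; the weekly counts are invented). It accumulates the totals written and closed and derives the number still open each week.

from itertools import accumulate

weeks       = ["W1", "W2", "W3", "W4"]
trs_written = [12, 9, 15, 7]    # invented TRs opened each week
trs_closed  = [4, 10, 11, 9]    # invented TRs closed each week

cum_written = list(accumulate(trs_written))
cum_closed  = list(accumulate(trs_closed))

for week, total, closed in zip(weeks, cum_written, cum_closed):
    print("%s: total %d, closed %d, still open %d" % (week, total, closed, total - closed))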

2.6 Requirements Stability


Requirements Stability provides an indication of the completeness, stability, and understanding of the requirements. It indicates the number of changes to the requirements and the amount of information needed to complete the requirements definition. A lack of requirements stability can lead to poor product quality, increased cost, and schedule slippage. Requirements stability indicators take the form of trend charts that show the total number of requirements, cumulative changes to the requirements, and the number of TBDs over time. A TBD refers to an undefined requirement. Based on requirements stability trends, corrective action may be necessary. Requirements stability is applicable during all life-cycle phases, from project inception to the end. The requirements stability indicators are most important during the requirements and design phases. Requirements stability metrics are collected and reported on a monthly basis. Figure 6 shows an example of the total number of requirements, the cumulative number of requirements changes, and the number of remaining TBDs over time. It may be desirable to also show the number of added, modified, and deleted requirements over time.

Figure 6 Requirements Stability Indicator

2.7 Size Stability


Software size is a critical input to project planning. The size estimate and other factors are used to derive effort and schedule before and during a project. The software manager tracks the actual versus planned software product size. Various indicators show trends in the estimated code size, trends by code type, the variation of actual software size from estimates or the size variation by release. Size stability is derived from changes in the size estimate as time goes on. It provides an indication of the completeness and stability of the requirements, the understanding of the requirements, design thoroughness and stability, and the capability of the software development staff to meet the current budget and schedule. Size instability may indicate the need for corrective action. Size metrics are applicable during all life-cycle phases. Size metrics are collected and reported on a monthly basis, or more often as necessary. Figure 7 shows an example of planned and currently estimated software size per release over time. Besides showing re-allocation of software content between releases, this also shows the growth in the total estimated size.

Figure 7 Size Indicator

2.8 Computer Resource Utilization


Computer Resource Utilization indicators show whether the software is using the planned amount of system resources. The computer resources are normally CPU time, I/O, and memory. For some software, the constraints of computer resources significantly affect the design, implementation, and testing of the product. They can also be used to replan, re-estimate, and guide resource acquisition. Computer resource utilization is planned during the requirements activity and reviewed during the design activity. Resources are monitored from the start of implementation activity to the end of the life cycle. For memory utilization, the unit of data is the byte, word, or half-word. For CPU time, the unit of data is either MIPS (millions of instructions per second), or the percentage of CPU time used during a peak period. For I/O time, the unit of data is the percentage of I/O time used during a peak period. Resource Utilization data is collected and reported at least monthly, with the period between collection and reporting becoming shorter as the software system nears completion and a better picture of software performance can be seen. Note that the resource utilization is normally an estimate until integration occurs, at which time the actual data is available. Figure 8 shows an example of CPU and memory use as a percent of available and the maximum allowed.

Figure 8 Computer Resources Indicator
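As a rough sketch of how utilization samples might be gathered on a desktop or server target, the snippet below uses the third-party psutil package (an assumption, not something prescribed by this guide); the sampling count and the utilization limits are invented. On an embedded target the numbers would come from the platform's own instrumentation instead.

import time
import psutil  # third-party package: pip install psutil

CPU_LIMIT_PCT = 70.0   # invented maximum allowed CPU utilization
MEM_LIMIT_PCT = 80.0   # invented maximum allowed memory utilization
SAMPLES = 10

for _ in range(SAMPLES):
    cpu = psutil.cpu_percent(interval=1.0)   # percent CPU used over a 1-second window
    mem = psutil.virtual_memory().percent    # percent of physical memory in use
    warn = "  <-- over limit" if cpu > CPU_LIMIT_PCT or mem > MEM_LIMIT_PCT else ""
    print("CPU %5.1f%%  MEM %5.1f%%%s" % (cpu, mem, warn))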

2.9 Training
Training indicators provide managers with information on the training program and whether the staff has the necessary skills. A trained staff is a commitment. The manager must ensure that the staff has the skills needed to perform their assigned tasks. The objective of the training indicator is to provide visibility into the training process, to ensure effective utilization of training, and to provide project software managers with an indication of their staff's skill mixture. The manager should investigate deviations in the number of classes taught from the number of classes planned, and deviations in the number of staff trained from the planned number. The quality of the training program should also be determined from completed course evaluation sheets. The number of waivers requested and approved for training should also be tracked. Figure 9 shows a sample graph of the total monthly attendance of personnel attending training classes. It represents the sum of the number of personnel attending all classes.

Figure 9 Training Indicator

3 Overview of Project Procedures


Metrics are planned at project conception, used to track project status and recorded at the end of the project.

When starting a software development project, determine the list of software metrics. Use the goal-question-measure paradigm to select appropriate measurements for the project. To do this, first establish goals for the project, develop questions to be answered by measurement, and then collect the data needed. Document the software metrics to be collected for the project in the project plan. During a project, collect and report the metrics. Use the collected metrics data to track project status. At the end of a project, utilize the data for postmortem reporting. Use automated data collection and analysis tools whenever possible to collect the metrics data for your project. Some tools that can aid in this process include Amadeus for code measurement, TR statusing tools, requirements management tools, etc. Excel spreadsheets are useful for generating metrics reports. Contact the professor for spreadsheet templates of the charts shown in this guide.
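One lightweight way to record the goal-question-metric chain before the project starts is sketched below (Python; the goals, questions, and metric names are invented examples rather than a prescribed set).

# Goal-question-metric: each goal drives questions, and each question drives the data to collect.
gqm = {
    "Deliver the release on schedule": {
        "Are tasks being completed as planned?": ["actual vs. planned task completions"],
        "Is staffing tracking the plan?": ["actual vs. planned staffing profile"],
    },
    "Ship with acceptable quality": {
        "Are defects being found and fixed fast enough?": ["TRs opened/closed per week",
                                                           "TR density (TRs per KSLOC)"],
    },
}

for goal, questions in gqm.items():
    print(goal)
    for question, metrics in questions.items():
        print("  Q: " + question)
        for metric in metrics:
            print("     measure: " + metric)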


Release notes are documents that are distributed with software products, often when the product is still in the development or test state (e.g., a beta release). For products that have already been in use by clients, the release note is a supplementary document that is delivered to the customer when a bug is fixed or an enhancement is made to the product.

Purpose and Responsibilities


Release notes are communication documents shared with the customers and clients of an organization, detailing the changes or enhancements made to the features of the service or product the company provides. This document is usually circulated only after the product or service has been thoroughly tested and approved against the specification provided by the development team. Release notes may be written by a technical writer or any other member of the development or test team. Release notes can also contain test results and information about the test procedure. This kind of information gives readers of the release note more confidence in the fix or change; it also enables implementers of the change to conduct rudimentary acceptance tests. Release notes are also an excellent mechanism to feed end-user documentation: user guides, marketing materials, and revisions to training materials.

It's a classic love/hate relationship. I love big projects because they bring in lots of money -- there's nothing better than holding a paycheck fat enough to buy something substantial. But I hate big projects because they bombard me with so many possibilities, challenges, thoughts, and fears that after a while I feel shell-shocked. The beginning of a big project can be overwhelming and confusing. It can lead you to ask yourself questions like, Should I call my client to ask for instructions? Should I work intuitively until the right approach "evolves"? Should I give up being an IP and enlist in the Coast Guard? Inevitably, I stop asking questions and start worrying about wasted time, wasted opportunities, endless diversions, hours spent on chores that should take minutes, and minutes spent on chores that deserve hours. It's no way to run a business. That's why whenever a large project lands on my desk, I remind myself to break it down into its essential elements. Once I've broken it down, my shoulders straighten, my heels dig in, and I feel inspired, clear, and effective (well, for a while anyway). It's one thing to tell yourself to break down a project; it's quite another to do so in an effective manner. Each project will dictate the best method, but chances are good it will be in one of the following ways. 1. Chronologically. It sounds like a no-brainer, but if you take the time to break down a project chronologically, you may find some surprises. What steps must be completed before others can be started? How long it will take to finish each portion of the project? And what's the best estimate of the number of hours really needed to complete the job? Using your calendar, you can determine conflicts, holidays, and other hindrances to project completion. Best of all, you can track progress daily and actually feel like you're getting work done. 2. Structurally. Any good-sized gig has structural elements that give the project shape. For me, as a writer, an article usually involves four stages: research, interviews, rough draft, and final draft. A graphic artist may have more elements: research, client input, rough layout, design, illustration, prepress, production. A smart IP will break a big project down into its structural elements before he begins.

(Remember that these elements may not be dealt with in a strict chronological order. For example, there can be a lot of back-and-forth; sometimes I need to slam on the brakes, turn around, and do more research mid-way through a project. The point is that setting up a general structure will give you the confidence to get started, and to see the project through.) You may be so familiar with your work that the structural elements blend together. That's great -- until a big project comes along, at which time it's helpful to regain structural clarity in order to set and meet major milestones. Now you can celebrate the major milestones of the project's life and return to work with fresh eyes. 3. By degree of difficulty. Movies are shot scene by scene, and it's rare that the opening minute of a film is the first to see the lens. In many cases the director will choose an easy scene to ease the actors and crew into the shoot. You can tiptoe into the water by selecting easy elements to tackle first, or if you like to get the tough stuff out of the way, tackle the most difficult components first. By breaking down a project's tasks by difficulty, you'll get to know exactly when you'll be entering deep waters. 4. By partner. If your job entails managing the work of others or subcontracting, breaking down a project by team members will help you get a handle on the time and effort you'll have to invest in order to get things ready for your partners. Is Partner A always busy? Can you get a better deal by giving Partner B a long lead time? Is Partner C an unknown entity? Any of these factors may prompt you to address those sections of the project that must be handed off first, even if they don't come first chronologically or structurally. 5. By client priority. Don't forget to take your clients into consideration when you break down your projects. Remember, the client is king, and if he needs Part C three weeks before Part A -- you have to deliver it. And deliver it on time. Find out what his priorities are and structure your work plan accordingly. Give him a cost estimate and get busy. Even if your project is defined largely by your client's schedule or priorities, you can still break down the project in the other ways listed here.

Indeed, the more ways you attack the project, the better your chances of completing it on time and under budget. It may seem like a time-consuming exercise at first, but believe me, as an IP who has suffered needlessly, it's better to break down a large project than have a large project break down you.

Breaking a project down among team members depends on the organization's structure.

The basic roles are as follows

QA manager - Manages the testing activities, owns the project, and manages the team's efforts.

Team lead - Prepares the test plan, assigns tasks to team members, tracks everyday activity, and manages test reports and test deliverables.

Senior test engineer - Designs test cases and test scripts, tracks defects, focuses on functional testing, and performs reviews.

Test engineer - Executes test cases, focuses on regression testing, and tracks defects.

It can depend on the following: 1) number of test cases/modules, 2) number of team members, 3) complexity of the project, 4) time duration of the project, 5) team members' experience, etc.

What a QA engineer does


Write test plans from the requirements, specifications, and test strategies
Use versioning systems to manage test scripts
Create and run test campaigns whenever necessary to fit the overall planning
Use the bug tracking database to report bugs
Analyze test results
Report results to the QA manager
Raise an alert when an important issue is likely to jeopardize the whole project

What makes a good QA Engineer


Broad understanding of the product. To test a product efficiently, the QA engineer must know it well. This sounds obvious but, unfortunately, it is often underestimated. Knowing the product well also includes knowing how end users expect it to work. Again, this may sound obvious, but remember that the biggest part of testing is black-box testing: the QA engineer must have a "customer-focus" vision. A good QA engineer must also know how the product is designed, because the more you know the product, the better you are able to test it. However, the QA engineer should analyse the design only after the black-box test plan is completed, because knowing the design can widely influence the test strategy. It is better to first write the test plan with a high-level vision, then gather more and more information to refine the testing.

Effective communication. Communication is an extremely important skill for a QA engineer. Of course, meetings (stand-ups, etc.) and status reports are part of the communication, but more importantly, a QA engineer must be particularly effective in the following tasks:

Direct communication with both the Development and Product Definition teams
The capability to communicate with technical and non-technical people
The diplomacy to say "no" when a bug is considered not fixed
The diplomacy to communicate about a bug without offending the developer. Developers may often feel offended when a bug is submitted; this is completely natural. This is why the QA engineer must have the ability to "criticize without offending."
Do not rely on the bug tracking database for communication! There is nothing like a bug tracking system for creating misunderstandings between the Development and QA teams.

Creativity. Testing requires a lot of creativity. Bugs are often hidden, and just performing the obvious positive tests has little chance of actually finding them. Hence, the QA engineer must use creativity to figure out all the scenarios that are likely to detect a bug. In other words, the QA engineer must be able to "see beyond the obvious".

Development knowledge Quality Assurance requires knowledge about software development for two basic reasons:

Development capabilities are required to eventually code automated tests.
If you know how to develop, you have better ideas about what is "dangerous" to code, and therefore what to test more thoroughly.

Driving for results A good QA engineer never forgets that the ultimate goal is not only to find bugs but also have them fixed. Once a bug has been found and has been "acknowledged" by the development team, the QA engineer may be required to "convince" people to fix it. Additionally, getting a nice automation framework with smart tools does not bring anything if it does not find any bug at the end.

Ask yourself whether the automation is going to help find more bugs, and when.
Prioritize your testing tasks on the only criteria that matter:
o How many bugs is this likely to find?
o How major will the found bugs be? (Detecting thousands of cosmetic bugs is irrelevant, and often easy, until all major/show-stopper bugs have been found.)

Creativity. This to me is probably the biggest one. In one sense it's pretty straightforward to identify requirements and create positive and negative scenarios for each one, as well as for the branching solutions. However, where I've seen (big) problems in a production environment is with scenarios that come out of left field. It takes someone with the ability to devise out-of-the-box input scenarios to capture edge cases which can really undermine a system.

Intellectual Curiosity. This is an important quality on many fronts. As a QA engineer you need to be proactive in identifying changes to the system that can impact results. A corollary to that is, once a change has been put in place, being able to systematically trace it back to its root cause. This is not for the faint of heart, and you'll quickly see who's good at regression analysis and who's not.

Mental Rigor. As stated above, there is a certain amount of analyzing requirements and determining test scenarios. I've seen many cases where obvious test scenarios were not caught because the QA engineer did not do due diligence in creating all of the test cases that were right there in the requirements. Depending on the amount of requirements and the desired test coverage, it will take discipline to ensure that this is done properly. It can be a boring task sometimes and will require rigor.

Using the correct testing tool at the right time in a project can significantly increase the efficiency of testing by automating processes, increasing communication, and promoting best practices and the re-use of tests and test data.

The leading functional automated testing tools include:

- HP: Unified Functional Testing software (which includes the QTP - QuickTest Professional - tool). HP Unified Functional Testing is industry-leading software that accelerates functional testing by simplifying test design and maintenance for both GUI applications and non-GUI components, and also validates integrated test scenarios, resulting in reduced risk and improved quality for your modern applications. HP Unified Functional Testing includes the HP Functional Testing (HP QuickTest Professional and all the add-ins) and the HP Service Test products.
- Micro Focus SilkTest (formerly Borland and Segue). Creates powerful, robust and fast test automation across a broad range of application technologies to identify quality problems early in development lifecycles. The Silk4Net and Silk4J options bring the power of SilkTest to the IDE of choice: Visual Studio or Eclipse.
- Micro Focus TestPartner (formerly Compuware). TestPartner is an automated testing tool that accelerates functional testing of complex applications developed with a range of distributed technologies. Its visual storyboard approach enables collaboration between application users and quality professionals.
- IBM Rational Robot. IBM Rational Robot is a general-purpose test automation tool for QA teams who want to perform functional testing of client/server applications.
- Odin Technology Axe. Axe is a class of business-process-oriented tools that allow non-technical users to automate testing. It provides a means to rapidly deploy automated testing systems that can be used by staff without specialist automation skills and with minimal training. This reduces the cost of introducing and maintaining test automation by a factor of four. Axe can translate scripts to run with many of the functional testing tools mentioned above.
- Selenium. Selenium is one of the better open-source projects; it provides a suite of tools to automate web application testing across many platforms.

Acutest also have experience with legacy tools such as Compuware's QARun, Mercury WinRunner and others, which you may have in-house but are struggling to find support for. As well as functional regression testing tools, there are also automated performance testing tools.

Benefits of TestAdvantage

- Gain better coverage and a higher-quality product through test automation.
- Reduce the need for and cost of manual testing with an automated testing process that requires fewer personnel.
- Effectively institute data-driven testing of applications powered by NetAdvantage for Windows Forms and NetAdvantage for WPF controls.
- Increase productivity with the time saved as the testing stage finishes faster.

Test automation is the use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions.[1] Commonly, test automation involves automating a manual process already in place that uses a formalized testing process.


Overview
Although manual testing may find many defects in a software application, it is a laborious and time-consuming process. In addition, it may not be effective in finding certain classes of defects. Test automation is the process of writing a computer program to do testing that would otherwise need to be done manually. Once tests have been automated, they can be run quickly and repeatedly. This is often the most cost-effective method for software products that have a long maintenance life, because even minor patches over the lifetime of the application can cause features that were working at an earlier point in time to break. There are two general approaches to test automation:

Code-driven testing. The (usually public) interfaces to classes, modules or libraries are tested with a variety of input arguments to validate that the results returned are correct.

Graphical user interface testing. A testing framework generates user interface events such as keystrokes and mouse clicks, and observes the changes that result in the user interface, to validate that the observable behavior of the program is correct.

Test automation tools can be expensive, and are usually employed in combination with manual testing. Test automation can be made cost-effective in the long term, especially when used repeatedly in regression testing. One way to generate test cases automatically is model-based testing, which uses a model of the system for test case generation, but research continues into a variety of alternative methodologies for doing so. What to automate, when to automate, or even whether one really needs automation are crucial decisions which the testing (or development) team must make. Selecting the correct features of the product for automation largely determines the success of the automation. Automating unstable features or features that are undergoing changes should be avoided.[2]

Code-driven testing


A growing trend in software development is the use of testing frameworks such as the xUnit frameworks (for example, JUnit and NUnit) that allow the execution of unit tests to determine whether various sections of the code are acting as expected under various circumstances. Test cases describe tests that need to be run on the program to verify that the program runs as expected.

Code-driven test automation is a key feature of Agile software development, where it is known as Test-driven development (TDD). Unit tests are written to define the functionality before the code is written. Only when all tests pass is the code considered complete. Proponents argue that it produces software that is both more reliable and less costly than code that is tested by manual exploration. It is considered more reliable because the code coverage is better, and because it is run constantly during development rather than once at the end of a waterfall development cycle. The developer discovers defects immediately upon making a change, when it is least expensive to fix. Finally, code refactoring is safer; transforming the code into a simpler form with less code duplication, but equivalent behavior, is much less likely to introduce new defects.
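For illustration, a minimal xUnit-style test could look like the JUnit 5 sketch below. The Calculator class and its add method are hypothetical stand-ins for the production code under test, not part of any real library.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical production class used only for illustration.
class Calculator {
    int add(int a, int b) {
        return a + b;
    }
}

// A code-driven test: it exercises the public interface of Calculator
// and fails if the observed result differs from the expected one.
class CalculatorTest {
    @Test
    void addReturnsSumOfOperands() {
        Calculator calculator = new Calculator();
        assertEquals(5, calculator.add(2, 3));
    }
}
```

In a TDD workflow, a test like this would typically be written first, fail, and then drive the implementation of add until it passes.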

Graphical User Interface (GUI) testing


Many test automation tools provide record and playback features that allow users to interactively record user actions and replay them any number of times, comparing actual results to those expected. The advantage of this approach is that it requires little or no software development. This approach can be applied to any application that has a graphical user interface. However, reliance on these features poses major reliability and maintainability problems. Relabelling a button or moving it to another part of the window may require the test to be re-recorded. Record and playback also often adds irrelevant activities or incorrectly records some activities.

A variation on this type of tool is for testing of web sites. Here, the "interface" is the web page. This type of tool also requires little or no software development. However, such a framework utilizes entirely different techniques because it is reading HTML instead of observing window events. Another variation is scriptless test automation, which does not use record and playback but instead builds a model of the Application Under Test (AUT) and then enables the tester to create test cases by simply editing in test parameters and conditions. This requires no scripting skills, but offers the power and flexibility of a scripted approach. Test-case maintenance appears easy, as there is no code to maintain, and as the AUT changes the software objects can simply be re-learned or added. It can be applied to any GUI-based software application. The problem is that the model of the AUT is actually implemented using test scripts, which have to be constantly maintained whenever there is a change to the AUT.
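As a concrete example of scripted web GUI automation (as opposed to record and playback), the sketch below uses Selenium WebDriver, one of the tools listed earlier, to generate user events and check the observable result. The URL, element locators and expected page title are hypothetical placeholders, and a ChromeDriver binary is assumed to be available on the test machine.

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class LoginPageSmokeTest {
    public static void main(String[] args) {
        // Launch a browser instance under test-tool control.
        WebDriver driver = new ChromeDriver();
        try {
            // Hypothetical application under test.
            driver.get("https://example.com/login");

            // Generate UI events the way a user would.
            driver.findElement(By.name("username")).sendKeys("demo");
            driver.findElement(By.name("password")).sendKeys("secret");
            driver.findElement(By.id("submit")).click();

            // Compare observable behaviour against the expected outcome.
            if (!driver.getTitle().contains("Dashboard")) {
                throw new AssertionError("Login did not reach the dashboard page");
            }
        } finally {
            driver.quit();
        }
    }
}
```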

What to test


Testing tools can help automate tasks such as product installation, test data creation, GUI interaction, problem detection (consider parsing or polling agents equipped with oracles), defect logging, etc., without necessarily automating tests in an end-to-end fashion. The following popular requirements must be kept in mind when thinking of test automation:

- Platform and OS independence
- Data-driven capability (Input Data, Output Data, Metadata) - a sketch of a data-driven test follows this list
- Customizable reporting (DB access, Crystal Reports)
- Easy debugging and logging
- Version control friendly - minimal binary files
- Extensible and customizable (open APIs to be able to integrate with other tools)
- Common driver (for example, in the Java development ecosystem, that means Ant or Maven and the popular IDEs). This enables tests to integrate with the developers' workflows.
- Support for unattended test runs for integration with build processes and batch runs. Continuous integration servers require this.
- Email notifications (automated notification on failure or threshold levels). This may be handled by the test runner or by the tooling that executes it.
- Support for a distributed execution environment (distributed test bed)
- Distributed application support (distributed SUT)
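To illustrate the data-driven requirement above, here is a minimal JUnit 5 parameterized-test sketch that feeds several input/expected-output rows into the same test method. The discount rule, class name and values are hypothetical and exist only for this example; the junit-jupiter-params module is assumed to be on the classpath.

```java
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;
import static org.junit.jupiter.api.Assertions.assertEquals;

class DiscountRuleTest {
    // Hypothetical business rule under test: orders of 100 or more get 10% off.
    static double discountedTotal(double total) {
        return total >= 100 ? total * 0.9 : total;
    }

    // Each CSV row is one data-driven case: input total, expected result.
    @ParameterizedTest
    @CsvSource({
        "50.0, 50.0",
        "100.0, 90.0",
        "200.0, 180.0"
    })
    void appliesDiscountOnlyAboveThreshold(double total, double expected) {
        assertEquals(expected, discountedTotal(total), 0.0001);
    }
}
```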

The goal of cross-browser testing is to make sure that your website or web application behaves correctly in any browser. This is sometimes very challenging and can cause many QA departments a lot of headaches. Performing cross-browser verifications manually is time consuming, since most browser versions cannot co-exist with each other.

To automate the visual-layout side of cross-browser testing, it is better to use specialized web services like Browsershots, BrowserCam, Litmus and others. These services generate a series of screenshots taken in various browsers and operating systems, so that you can easily find what is wrong or confirm that everything displays correctly. These tools are fine for testing the visual appearance. But often the program logic depends on the browser and OS version even more than the visual appearance does. What should we use to test how the website, server and application behave and react to user actions in different browsers - and do all of this automatically?

TestComplete can automatically perform specific actions over a website, server app and/or web application and log the results of your web testing. You can even launch your TestComplete web tests under various operating systems and with various web browser versions. TestComplete 8.60 works with Windows 2000 all the way up to Windows 7 (including 64-bit editions) and supports Internet Explorer ver. 5 - 9, Mozilla Firefox ver. 3 - 6 and browsers based on the Microsoft WebBrowser control. So, you can test a wide variety of configurations and combinations using either test labs with many physical computers or virtual test labs built on virtual machines such as Virtual PC, Virtual Server, VMware Workstation, VMware Server or other similar tools. Also, to automate the virtual machine management you can use Automated Build Studio.

Creating a web test that will work in any supported browser is not as easy as it sounds. The user interface and inner object hierarchy of different browser versions are not always the same, and the structure of browsers also differs depending on the vendor. For example, a test recorded with Internet Explorer 6 will most likely not work right out of the box in Mozilla Firefox 3. However, as hard as this may sound, it is still possible; we just need to take the browser differences into account when creating web tests. The purpose of this article is to provide a general concept of creating cross-browser tests in TestComplete and to provide you with solutions to some of the most common problems. To illustrate, with real-world code snippets, we created a sample project that will work with TestComplete's Web Test Sample (<TestComplete Samples>\Scripts\WebTest\) in a cross-browser test.

Organizing a Cross-Browser Testing Project


One of the most fundamental rules of QA is to plan your test thoroughly. So, before we start coding, let's clarify what we are going to do and how we will do it; a code sketch of the resulting loop follows the step list below.

The TestComplete project should:


1. Get a list of currently available browsers.
2. Launch the first browser.
3. Prepare for the web test.
4. Perform the web test or tests:
   o Open the web page for testing.
   o Perform the web testing actions.
   o Log the web testing results.
5. Close the browser.
6. Launch another browser from the list.
7. Repeat steps 3-6 for all browsers.
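The article above describes this loop in terms of a TestComplete project; as a language-neutral illustration of the same launch-test-close pattern, here is a hedged Java sketch that iterates over two Selenium WebDriver factories and runs the same check in each. This is not TestComplete code; the browser set, URL and pass/fail check are assumptions made purely for illustration.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Supplier;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class CrossBrowserRunner {
    public static void main(String[] args) {
        // Step 1: the "list of currently available browsers" (assumed set).
        Map<String, Supplier<WebDriver>> browsers = new LinkedHashMap<>();
        browsers.put("Chrome", ChromeDriver::new);
        browsers.put("Firefox", FirefoxDriver::new);

        for (Map.Entry<String, Supplier<WebDriver>> entry : browsers.entrySet()) {
            // Steps 2 and 6: launch the next browser in the list.
            WebDriver driver = entry.getValue().get();
            try {
                // Step 4: open the page under test and perform the checks.
                driver.get("https://example.com");  // hypothetical site under test
                boolean passed = driver.getTitle() != null && !driver.getTitle().isEmpty();
                // Log the result per browser.
                System.out.println(entry.getKey() + ": " + (passed ? "PASSED" : "FAILED"));
            } finally {
                // Step 5: close the browser before moving on.
                driver.quit();
            }
        }
    }
}
```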

Six Sigma is a set of practices originally developed by Motorola to systematically improve processes by eliminating defects. A defect is defined as nonconformity of a product or service to its specifications. While the particulars of the methodology were originally formulated by Bill Smith at Motorola in 1986, Six Sigma was heavily inspired by six preceding decades of quality improvement methodologies such as quality control, TQM, and Zero Defects. Like its predecessors, Six Sigma asserts the following:

- Continuous efforts to reduce variation in process outputs are key to business success
- Manufacturing and business processes can be measured, analyzed, improved and controlled
- Achieving sustained quality improvement requires commitment from the entire organization, particularly from top-level management

The term "Six Sigma" refers to the ability of highly capable processes to produce output within specification. In particular, processes that operate with six sigma quality produce at defect levels below 3.4 defects per (one) million opportunities (DPMO). Six Sigma's implicit goal is to improve all processes to that level of quality or better. Six Sigma is a registered service mark and trademark of Motorola, Inc. Motorola has reported over US$17 billion in savings from Six Sigma as of 2006. In addition to Motorola, companies that adopted Six Sigma methodologies early on and continue to practice it today include Honeywell International (previously known as Allied Signal) and General Electric (introduced by Jack Welch). Recently some practitioners have used the TRIZ methodology for problem solving and product design as part of a Six sigma approach.



Methodology
DMAIC

The basic methodology consists of the following five steps:


1. Define the process improvement goals that are consistent with customer demands and enterprise strategy.
2. Measure the current process and collect relevant data for future comparison.
3. Analyze to verify the relationship and causality of factors. Determine what the relationship is, and attempt to ensure that all factors have been considered.
4. Improve or optimize the process based upon the analysis, using techniques like Design of Experiments.
5. Control to ensure that any variances are corrected before they result in defects. Set up pilot runs to establish process capability, transition to production, and thereafter continuously measure the process and institute control mechanisms.

DMADV

The basic methodology consists of the following five steps:


1. Define the goals of the design activity that are consistent with customer demands and enterprise strategy.
2. Measure and identify CTQs (critical-to-quality characteristics), product capabilities, production process capability, and risk assessments.
3. Analyze to develop and design alternatives, create a high-level design and evaluate design capability to select the best design.
4. Design details, optimize the design, and plan for design verification. This phase may require simulations.
5. Verify the design, set up pilot runs, implement the production process and hand it over to the process owners.

Some practitioners have extended DMAIC to DMAICR (adding a Realize step). Others contend that focusing on the financial gains realized through Six Sigma is counter-productive, and that such financial gains are simply byproducts of good process improvement.


