STQA_Unit_I
Unit - I
Software Engineering:
The invention of one technology can have profound and unexpected effects on other seemingly
unrelated technologies, on commercial enterprises, on people, and even on culture as a whole.
Computer software is the single most important technology on the world stage. No one could
have predicted that software would become an indispensable technology for business, science, and
engineering; that software would enable the creation of new technologies (e.g., genetic
engineering), the extension of existing technologies (e.g., telecommunications), and the demise of
older technologies (e.g., the printing industry); that software would be the driving force behind the
personal computer revolution; that shrink-wrapped software products would be purchased by
consumers in neighborhood malls; that a software company would become larger and more
influential than the vast majority of industrial-era companies; that a vast software-driven
network called the Internet would evolve and change everything from library research to
consumer shopping to the dating habits of young (and not-so-young) adults. No one could have
foreseen that software would become embedded in systems of all kinds: transportation,
medical, telecommunications, military, industrial, entertainment, office machines. The list is
almost endless. No one could have foreseen that millions of computer programs would have to be
corrected, adapted, and enhanced as time passed and that the burden of performing these
"maintenance" activities would absorb more people and more resources than all work applied to
the creation of new software.
Software engineering is the framework that encompasses a process, a set of methods, and an array of
tools for building software.
Types of Software:
In categorizing software, we can distinguish two major types: system software and Application
Software.
System software is the software that acts as tools to help construct or support
applications software. Examples are operating systems, databases, networking
software, compilers.
Applications software is software that helps perform some directly useful or enjoyable
task. Examples are games, the software for automatic teller machines (ATMs), the
control software in an airplane, e-mail software, word processors, spreadsheets.
Embedded systems – in which the computer plays a smallish role within a larger system.
For example, the software in a telephone exchange or a mobile phone.
Embedded systems are usually also real-time systems
Office software – word processors, spreadsheets, e-mail
Scientific software – carrying out calculations, modeling, prediction, for example,
weather forecasting.
Software can either be off-the-shelf (e.g. Microsoft Word) or tailor-made for a particular
application (e.g. software for the Apollo moon shots). The latter is sometimes called bespoke
software.
Nature Of Errors :
It would be convenient to know how errors arise, because then we could try to avoid them
during all the stages of development.
Similarly, it would be useful to know the most commonly occurring faults, because then we
could look for them during verification.
Regrettably, the data is inconclusive and it is only possible to make vague statements about
these things.
Specifications are a common source of faults. A software system has an overall
specification, derived from requirements analysis. In addition, each component of the
software ideally has an individual specification that is derived from architectural design.
The specification for a component can be ambiguous (unclear), incomplete, faulty.
Any such problems should, of course, be detected and remedied by verification of the
specification prior to development of the component, but, of course, this verification cannot
and will not be totally effective.
So there are often problems with a component specification.
2
Pallavi Mirajkar ,Dept. of Comp. Sci., MD College, Parel TYBSC-CS SEM 5 (STQA UNIT- I) NOTES
The next type of error is where a component contains faults so that it does not meet its specification.
This may be due to two kinds of problem:
1. errors in the logic of the code – an error of commission
2. code that fails to meet all aspects of the specification – an error of omission.
This second type of error is where the programmer has failed to appreciate and correctly
understand all the detail of the specification and has therefore omitted some necessary code.
Finally, the kinds of errors that can arise in the coding of a component are:
data not initialized
loops repeated an incorrect number of times.
boundary value errors.
Boundary values are values of the data at or near critical values. For example, suppose a component has to
decide whether a person can vote or not, depending on their age.
The voting age is 18. Then boundary values, near the critical value, are 17, 18 and 19. As we have seen,
there are many things that can go wrong and perhaps therefore it is no surprise that
verification is such a time-consuming activity.
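As an illustration, the sketch below (in Python) shows boundary value test data for the voting example; can_vote is a hypothetical function standing in for the component under test, and the critical value 18 gives the boundary values 17, 18 and 19.

# A minimal sketch of boundary value testing for the voting example.
# can_vote is a hypothetical component; the voting age of 18 is the critical
# value, so the test data includes values just below, at, and just above it.

def can_vote(age):
    return age >= 18

# Boundary values around the critical value 18, with the expected results.
boundary_cases = [(17, False), (18, True), (19, True)]

for age, expected in boundary_cases:
    actual = can_vote(age)
    assert actual == expected, f"can_vote({age}) returned {actual}, expected {expected}"

print("All boundary value cases passed")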
Validation is a collection of techniques that try to ensure that the software does meet the
requirements. On the other hand, reliability is to do with the technical issue of whether there are
any faults in the software.
Testing :
Testing is a widely used technique for verification, but note that testing is just one technique
amongst several others. Currently the dominant technique used for verification is testing. And
testing typically consumes an enormous proportion (sometimes as much as 50%) of the effort of
developing a system.
Microsoft employ teams of programmers (who write programs) and completely separate teams of
testers (who test them). At Microsoft there are as many people involved in testing as there are in
programming.
Definition of Quality :
The simplest measure of software is its size. Two possible metrics are the size in bytes and the size
in number of statements. The size in statements is often termed LOCs (lines of code),
sometimes SLOCs (source lines of code). The size in bytes obviously affects the main memory and
disk space requirements and affects performance. The size measured in statements relates to
development effort and maintenance costs. But a longer program does not necessarily take
longer to develop than a shorter program, because the complexity of the software also has an
effect. A metric such as LOCs takes no account of complexity.
There are different ways of interpreting even a simple metric like LOCs, since it is possible to
exclude, or include, comments, data declaration statements, and so on. Arguably, blank lines are not
included in the count.
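To make this concrete, here is a minimal sketch of a line counter that excludes blank lines and comment-only lines. The rules chosen (and the file name in the commented-out usage) are assumptions made for the illustration, not a standard definition of LOCs.

# Illustrative LOC counter: counts non-blank, non-comment lines of a source file.
# Whether blank lines and comments are excluded is a choice, so the resulting
# count depends on the rules adopted.

def count_loc(path):
    loc = 0
    with open(path) as source:
        for line in source:
            stripped = line.strip()
            if stripped and not stripped.startswith("#"):  # skip blank and comment-only lines
                loc += 1
    return loc

# Example usage (the file name is hypothetical):
# print(count_loc("component.py"))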
The second major metric is person months, a measure of developer effort. Since people’s time is the
major factor in software development, person months usually determine cost. If an
organization measures the development time for components, the information can be used to
predict the time of future developments. It can also be used to gauge the effectiveness of new
techniques that are used.
The third basic metric is the number of bugs. As a component is being developed, a log can be kept
of the bugs that are found. In week 1 there might be 27, in week 2 there might be 13, and so on.
As we shall see later, this helps predict how many bugs remain at the end of the
development. These figures can also be used to assess how good new techniques are.
Software Quality :
How do you know when you have produced good-quality software? There are two ways of going about it:
Measuring the attributes of software that has been developed (quality control)
Monitoring and controlling the process of development of the software (quality assurance).
Let us compare developing software with preparing a meal, so that we can visualize these options more
clearly. If we prepare a meal (somehow) and then serve it up, we will get ample comments on its quality.
The consumers will assess a number of factors such as the taste, color and temperature. But by then it is too
late to do anything about the quality. Just about the only action that could be taken is to prepare
further meals, rejecting them until the consumers are satisfied. We can now appreciate a commonly
used definition of quality:
A product which fulfills and continues to meet the purpose for which it was produced is a quality product.
There is an alternative course of action: it is to ensure that at each stage of preparation and cooking
everything is in order. So we:
Buy the ingredients and make sure that they are all fresh
Wash the ingredients and check that they are clean
Chop the ingredients and check that they are chopped to the correct size
Monitor the cooking time.
At each stage we can correct a fault if something has been done incorrectly.
Putting this into the jargon of software development, the quality can be assured provided that the
process is assured.
For preparing the meal we also need a good recipe – one that can be carried out accurately and delivers
well-defined products at every stage. This corresponds to using good tools and methods during software
development.
Quality Factors :
The list is designed to encompass the complete range of attributes associated with software, except the cost of
construction. These are known as Quality Factors.
1. Correctness – the extent to which the software meets its specification and meets its users’
requirements
2. Reliability – the degree to which the software continues to work without failing
3. Performance – the amount of main memory and processor time that the software uses
4. Integrity – the degree to which the software enforces control over access to information by
users
5. Usability – the ease of use of the software
6. Maintainability – the effort required to find and fix a fault
7. Flexibility – the effort required to change the software to meet changed requirements
8. Testability – the effort required to test the software effectively
9. Portability – the effort required to transfer the software to a different hardware and/or
software platform
10. Reusability – the extent to which the software (or a component within it) can be reused within
some other software
11. Interoperability – the effort required to make the software work in conjunction with some other
software
12. Security – the extent to which the software is safe from external sabotage that may damage it and
impair its use.
This list of quality factors can be used to select the quality goals for a particular project, as described below.
Quality assurance means ensuring that a software system meets its quality goals. The goals differ from one
project to another. They must be clear and can be selected from the list of quality factors we saw earlier.
To achieve its goals, a project must use effective tools and methods. Also checks must be carried out during
the development process at every available opportunity to see that the process is being carried out
correctly.
To ensure that effective tools and methods are being used, an organization distills its best practices and
documents them in a quality manual. This is like a library of all the effective tools, methods and
notations. This manual describes all the standards and procedures that are available to be used.
Standards : A standard defines a range, limit, tolerance or norm of some measurable attribute against
which compliance can be judged.
Procedures: A procedure prescribes a way of doing something (rules, steps, guidelines, plans).
To be effective, quality assurance must be planned in advance – along with the planning of all other aspects of
a software project. The project manager:
1. Decides which quality factors are important for the particular project (e.g. high reliability and
maintainability). In preparing a family meal, perhaps flavor and nutritional value are the
paramount goals.
2. Selects standards and procedures from the quality manual that are appropriate to meeting the
quality goals (e.g. the use of complexity metrics to check maintainability). If the meal does not
involve potatoes, then those parts of the quality manual that deal with potatoes can be omitted.
3. Assembles these into a quality assurance plan for the project. This describes what the
procedures and standards are, when they will be done, and who does them.
Test Case:
A test case is a document, which has a set of test data, preconditions, expected results and post-
conditions, developed for a particular test scenario in order to verify compliance against a specific
requirement.
A test case acts as the starting point for test execution; after applying a set of input values, the
application has a definitive outcome and leaves the system at some end point, also known as the
execution post-condition.
Typical Test Case Parameters:
Test Case ID
Test Scenario
Test Case Description
Test Steps
Prerequisite
Test Data
Expected Result
Test Parameters
Actual Result
Environment Information
Comments
Example:
Let us say that we need to check an input field that can accept a maximum of 10 characters.
While developing the test cases for this scenario, two cases are documented: the first case is a pass
scenario, while the second case is a FAIL, as in the sketch below.
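The original table for this example has not survived, so the following sketch shows one plausible way the two test cases could be recorded using the parameters listed above; the data values and results are illustrative assumptions only.

# A plausible reconstruction of the two documented test cases for a field
# that should accept a maximum of 10 characters (all values are illustrative).

test_cases = [
    {
        "id": "TC01",
        "description": "Enter a value of exactly 10 characters",
        "test_data": "abcdefghij",          # 10 characters
        "expected": "Value is accepted",
        "actual": "Value is accepted",
        "status": "PASS",
    },
    {
        "id": "TC02",
        "description": "Enter a value of 11 characters",
        "test_data": "abcdefghijk",         # 11 characters
        "expected": "Value is rejected",
        "actual": "Value is accepted",      # defect: the field accepted 11 characters
        "status": "FAIL",
    },
]

for case in test_cases:
    print(case["id"], "-", case["status"])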
Increasingly, the organizations that produce software have to convince their customers that they are
using effective methods, and they must commonly specify what methods they are using. In addition, the
organization must demonstrate that it is using the methods. Thus an organization must not only use
sound methods but must be seen to be using them. Therefore a quality plan describes a number of
quality controls. A quality control is an activity that checks that the project's quality factors are being
achieved and produces some documentary evidence.
Even the most jaded software developers will agree that high-quality software is an important goal. But
how do we define quality? A wag once said, "Every program does something right, it just may not be the
thing that we want it to do."
Many definitions of software quality have been proposed in the literature. For our purposes, software
quality is defined as conformance to explicitly stated functional and performance requirements,
explicitly documented development standards, and implicit characteristics that are expected of all
professionally developed software.
There is little question that this definition could be modified or extended; in fact, the definition of
software quality could be debated endlessly. This definition serves to emphasize three important points:
software requirements are the foundation from which quality is measured, and lack of conformance to
requirements is lack of quality; specified standards define a set of criteria that guide the manner
in which software is engineered, and if the criteria are not followed, lack of quality will almost surely result;
and there is a set of implicit requirements that often goes unmentioned (for example, ease of use), and
software that fails to meet them is of suspect quality.
Quality assurance consists of a set of auditing and reporting functions that assess the effectiveness and
completeness of quality control activities. The goal of quality assurance is to provide management with
the data necessary to be informed about product quality, thereby gaining insight and confidence that
product quality is meeting its goals. Of course, if the data provided through quality assurance identify
problems, it is management’s responsibility to address the problems and apply the necessary resources to
resolve quality issues.
SQA Activities :
Software quality assurance is composed of a variety of tasks associated with two different
constituencies: the software engineers who do technical work and an SQA group that has
responsibility for quality assurance planning, oversight, record keeping, analysis, and reporting. Software
engineers address quality (and perform quality assurance and quality control activities) by applying solid
technical methods and measures, conducting formal technical reviews, and performing well-planned
software testing.
The charter of the SQA group is to assist the software team in achieving a high-quality end product. The
Software Engineering Institute recommends a set of SQA activities that address quality assurance
planning, oversight, record keeping, analysis, and reporting. These activities are performed (or
facilitated) by an independent SQA group that conducts the following activities:
Prepares an SQA plan for a project. The plan is developed during project planning and is reviewed by all
stakeholders. Quality assurance activities performed by the software engineering team and the SQA
group are governed by the plan.
The plan identifies evaluations to be performed, audits and reviews to be performed, standards that are
applicable to the project, procedures for error reporting and tracking, documents to be produced by the
SQA group, and the amount of feedback provided to the software project team.
Audits designated software work products to verify compliance with those defined as part of the
software process. The SQA group reviews selected work products; identifies, documents, and tracks
deviations; verifies that corrections have been made; and periodically reports the results of its work to
the project manager.
Ensures that deviations in software work and work products are documented and handled according
to a documented procedure. Deviations may be encountered in the project plan, process description,
applicable standards, or technical work products.
Records any noncompliance and reports to senior management. Noncompliance items are tracked
until they are resolved. In addition to these activities, the SQA group coordinates the control and
management of change and helps to collect and analyze software metrics.
Software Development Life Cycle (SDLC) is a process used by the software industry to design, develop and
test high-quality software. The SDLC aims to produce high-quality software that meets or exceeds
customer expectations and reaches completion within time and cost estimates.
Requirement analysis is the most important and fundamental stage in SDLC. It is performed by the
senior members of the team with inputs from the customer, the sales department, market surveys and
domain experts in the industry. This information is then used to plan the basic project approach and to
conduct product feasibility study in the economical, operational and technical areas.
Planning for the quality assurance requirements and identification of the risks associated with the
project is also done in the planning stage. The outcome of the technical feasibility study is to define the
various technical approaches that can be followed to implement the project successfully with minimum
risks.
Once the requirement analysis is done the next step is to clearly define and document the product
requirements and get them approved from the customer or the market analysts. This is done through an
SRS (Software Requirement Specification) document which consists of all the product requirements to
be designed and developed during the project life cycle.
SRS is the reference for product architects to come out with the best architecture for the product to be
developed. Based on the requirements specified in SRS, usually more than one design approach for the
product architecture is proposed and documented in a DDS - Design Document Specification.
This DDS is reviewed by all the important stakeholders and, based on various parameters such as risk
assessment, product robustness, design modularity, budget and time constraints, the best design
approach is selected for the product.
A design approach clearly defines all the architectural modules of the product along with its
communication and data flow representation with the external and third party modules (if any). The
internal design of all the modules of the proposed architecture should be clearly defined with the
minutest of the details in DDS.
In this stage of SDLC the actual development starts and the product is built. The programming code is
generated as per DDS during this stage. If the design is performed in a detailed and organized manner, code
generation can be accomplished without much hassle.
Developers must follow the coding guidelines defined by their organization and programming tools like
compilers, interpreters, debuggers, etc. are used to generate the code. Different high level programming
languages such as C, C++, Pascal, Java and PHP are used for coding. The programming language is chosen with
respect to the type of software being developed.
This stage is usually a subset of all the stages, since in modern SDLC models the testing activities are
involved in all the stages of the SDLC. However, this stage refers to the testing-only stage of the
product, where product defects are reported, tracked, fixed and retested until the product reaches the
quality standards defined in the SRS.
Once the product is tested and ready to be deployed it is released formally in the appropriate market.
Sometimes product deployment happens in stages as per the business strategy of that organization. The
product may first be released in a limited segment and tested in the real business environment (UAT-
User acceptance testing).
Then, based on the feedback, the product may be released as it is or with suggested enhancements in the
target market segment. After the product is released in the market, its maintenance is done for the
existing customer base.
SDLC Models :
There are various software development life cycle models defined and designed which are followed
during the software development process. These models are also referred to as Software Development
Process Models. Each process model follows a series of steps unique to its type to ensure success in the
process of software development.
Following are the most important and popular SDLC models followed in the industry :
Waterfall Model
Iterative Model
Spiral Model
V-Model
Big Bang Model
Other related methodologies are the Agile Model, the RAD (Rapid Application Development) Model and
Prototyping Models.
Validation : The process of evaluating software during the development process or at the end of the
development process to determine whether it satisfies specified business requirements.
Validation Testing ensures that the product actually meets the client's needs. It can also be defined as
demonstrating that the product fulfills its intended use when deployed in an appropriate environment. It
answers the question: are we building the right product?
Verification and validation encompasses a wide array of SQA activities that include formal technical
reviews, quality and configuration audits, performance monitoring, simulation, feasibility study,
documentation review, database review, algorithm analysis, development testing, usability testing,
qualification testing, and installation testing. Although testing plays an extremely important role in V&V,
many other activities are also necessary.
The majority of software engineering practices attempt to create and modify software in a manner that
maximizes the probability of satisfying user expectations. As a result, several approaches or
techniques for Verification & Validation have evolved across the development cycle.
1) Static Methods: these methods of V&V basically involve the review processes.
2) Dynamic Methods: these methods, such as black box testing, can be applied at all levels, even at the
system level, while white box testing checks the internal structure and logic of the code.
V-Model :
It overcomes the disadvantages of the waterfall model. In the waterfall model, we have seen that testers
are involved in the project only at the last phase of the development process.
In this model, the test team is involved in the early phases of the SDLC. Testing starts in the early stages of
product development, which avoids the downward flow of defects and in turn reduces a lot of rework. Both
teams (test and development) work in parallel. The test team works on activities like preparing the test
strategy, test plan and test cases/scripts, while the development team works on the SRS, design and coding.
Once the requirements are received, both the development and test teams start their activities.
Deliverables are produced in parallel in this model: while the developers are working on the SRS (System
Requirement Specification), the testers work on the BRS (Business Requirement Specification) and prepare
the ATP (Acceptance Test Plan), the ATC (Acceptance Test Cases) and so on.
Testers will be ready with all the required artifacts (such as Test Plan, Test Cases) by the time
developers release the finished product. It saves lots of time.
Let us see how the development team and the test team are involved in each phase of the SDLC in the V-Model.
Once the client sends the BRS, both the teams (test and development) start their activities. The
developers translate the BRS into the SRS. The test team is involved in reviewing the BRS to find
missing or wrong requirements and writes the acceptance test plan and acceptance test cases.
In the next stage, the development team sends the SRS to the testing team for review and the
developers start building the HLD (High Level Design Document) of the product. The test team
is involved in reviewing the SRS against the BRS and writes the system test plan and system test cases.
In the next stage, the development team starts building the LLD (Low Level Design) of the
product. The test team is involved in reviewing the HLD (High Level Design) and writes the integration
test plan and integration test cases.
In the next stage, the development team starts with the coding of the product. The test team
is involved in reviewing the LLD and writes the functional test plan and functional test cases.
In the next stage, the development team releases the build to the test team once unit
testing is done. The test team carries out functional testing, integration testing, system testing
and acceptance testing on the released build step by step.
Advantages:
Testing starts in early stages of product development which avoids downward flow of defects
and helps to find the defects in the early stages
Test team will be ready with the test cases by the time developers release the software, which in
turn saves a lot of time
Testing is involved in every stage of product development. It gives a quality product.
Total investment is less due to less or no rework.
Disadvantages:
Initial investment is more because the test team is involved right from the early stages.
Whenever there is a change in requirements, the same procedure continues. This leads to more
documentation work.
Applications:
Long-term projects and complex applications, and when the customer expects a very high-quality product
within a stipulated time frame, because every stage is tested and developers and testers work in parallel.
Software reviews are a "filter" for the software process. That is, reviews are applied at various points
during software engineering and serve to uncover errors and defects that can then be removed.
Software reviews "purify" the
software engineering activities that w e have called analysis, design, and coding.
Technical work needs reviewing for the same reason that pencils need erasers: To err is human.
The second reason we need technical reviews is that although people are good at catching some of their
own errors, large classes of errors escape the originator more easily than they escape anyone else.
Each has its place. An informal meeting around the coffee machine is a form of review, if technical
problems are discussed. A formal presentation of software design to an audience of customers,
management, and technical staff is also a form of review.
However, we focus on the formal technical review, sometimes called a walkthrough or an inspection. A formal
technical review (FTR) is the most effective filter from a quality assurance standpoint. Conducted by
software engineers (and others) for software engineers, the FTR is an effective means for uncovering errors
and improving software quality.
Testing Fundamentals :
Verification is the general term for techniques that aim to produce fault-free software.
Testing is a widely used technique for verification, but note that testing is just one technique amongst
several others.
Remember that there is a separate collection of techniques for carrying out validation – techniques
which strive to make sure that the software meets its users' needs.
Software is complex and it is difficult to make it work correctly. Currently the dominant technique used
for verification is testing. And testing typically consumes an enormous proportion (sometimes as much as
50%) of the effort of developing a system.
Microsoft employ teams of programmers (who write programs) and completely separate teams of
testers (who test them).
At Microsoft there are as many people involved in testing as there are in programming.
Arguably, verification is a major problem and we need good techniques to tackle it. Often, towards the
end of a project, the difficult decision has to be made between continuing the testing or delivering the
software to its customers or clients.
White box testing :
This form of testing makes use of knowledge of how the program works – the structure of the program –
as the basis for devising test data.
In white box testing every statement in the program is executed at some time during the testing.
This is equivalent to ensuring that every path (every sequence of instructions) through the program is
executed at some time during testing.
This includes null paths, so an if statement without an else has two paths and every loop has two paths.
Testing should also include any exception handling carried out by the program.
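A minimal sketch of this idea follows, using a hypothetical apply_discount component: the test data is chosen from the structure of the code so that both paths through the if statement, including the null path, are executed.

# White box sketch: the structure of apply_discount (an if without an else) is
# used to choose test data. The if statement gives two paths, one through its
# body and a "null" path where it is skipped, and both must be exercised.

def apply_discount(amount):
    # hypothetical component: orders of 100 or more get a flat discount of 10
    if amount >= 100:
        amount = amount - 10
    return amount

assert apply_discount(150) == 140   # path through the body of the if
assert apply_discount(50) == 50     # the null path (if-branch not taken)

print("Both paths through the if statement were executed")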
Black box testing :
The black box approach to testing is to devise sample data that is representative of all possible data.
We then run the program, input the data and see what happens. This type of testing is termed black box
testing because no knowledge of the workings of the program is used as part of the testing – we only
consider inputs and outputs. The program is thought of as being enclosed within a black box.
Black box testing is also known as functional testing because it uses only knowledge of the function of the
program (not how it works).
Ideally, testing proceeds by writing down the test data and the expected outcome of the test before
testing takes place.
This is called a test specification or schedule. Then you run the program, input the data and examine the
outputs for discrepancies between the predicted outcome and the actual outcome.
Test data should also check whether exceptions are handled by the program in accordance with its
specification.
Consider a program that decides whether a person can vote, depending on their age. The minimum
voting age is 18. We know that we cannot realistically test this program with all possible values, but
instead we need some typical values. The approach to devising test data for black box testing is to use
equivalence partitioning. This means looking at the nature of the input data to identify common
features. Such a common feature is called a partition. In the voting program, we recognize that the input
data falls into two partitions: ages up to and including 17 (people who cannot vote) and ages of 18 and
above (people who can vote). The test data is then chosen to include a representative value from each
partition, together with the boundary values, as in the sketch below.
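The following sketch assumes the partitions above, with can_vote again standing in for the program under test; the expected outcomes are written down before the tests are run, as in a test specification.

# Black box sketch: test data chosen by equivalence partitioning (one
# representative value from each partition) plus the boundary values, with the
# expected outcome recorded before the test is run.

def can_vote(age):
    return age >= 18   # hypothetical program under test

test_specification = [
    (12, False),   # representative value from the "too young" partition
    (21, True),    # representative value from the "old enough" partition
    (17, False),   # boundary values around the critical value 18
    (18, True),
    (19, True),
]

for age, expected in test_specification:
    assert can_vote(age) == expected, f"unexpected outcome for age {age}"

print("All black box test cases gave the predicted outcomes")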
In a large system or program it can be difficult to ensure that the test data is adequate. One way to try to
test whether it does indeed cause all statements to be executed is to use a profiler. A profiler is a
software package that monitors the testing by inserting probes into the software under test. When
testing takes place, the profiler can expose which pieces of the code are not executed and therefore
reveal the weakness in the data.
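No particular profiler is named in the notes, so the sketch below is only a toy illustration of the idea: Python's sys.settrace installs a probe that records which lines of a hypothetical triangle_type component are executed; in practice a tool such as coverage.py does this job properly.

# A toy "profiler": sys.settrace installs a probe that records which source
# lines of the component under test are actually executed during testing.
import sys

def triangle_type(a, b, c):
    # hypothetical component under test
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

executed = set()

def probe(frame, event, arg):
    if event == "line" and frame.f_code.co_name == "triangle_type":
        executed.add(frame.f_lineno)
    return probe

sys.settrace(probe)
triangle_type(3, 3, 3)   # weak test data: only the equilateral case is tried
sys.settrace(None)

# Comparing the executed lines with the source shows that the isosceles and
# scalene returns were never reached, revealing the weakness in the test data.
print("executed line numbers of triangle_type:", sorted(executed))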
Another approach to investigating the test data is called mutation testing. In this technique, artificial
bugs are inserted into the program. An example would be to change a + into a –. The test is run and if the
bugs are not revealed, then the test data is obviously inadequate. The test data is modified until the
artificial bugs are exposed.
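A minimal sketch of the idea, with a hypothetical total_price component and a hand-made mutant in which a + has been changed to a -:

# Mutation testing sketch: an artificial bug (+ changed to -) is planted in a
# copy of the component. If the existing test data does not distinguish the
# mutant from the original, the test data is inadequate and must be strengthened.

def total_price(price, delivery):
    return price + delivery          # original component

def total_price_mutant(price, delivery):
    return price - delivery          # mutant containing an artificial bug

test_data = [(100, 0)]               # weak test data: delivery charge of 0

mutant_killed = any(total_price(p, d) != total_price_mutant(p, d) for p, d in test_data)
print("mutant revealed by the tests?", mutant_killed)   # False: the data is inadequate

test_data.append((100, 10))          # strengthened test data
mutant_killed = any(total_price(p, d) != total_price_mutant(p, d) for p, d in test_data)
print("mutant revealed by the tests?", mutant_killed)   # True: the mutant is now exposed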
Beta testing :
In beta testing, a preliminary version of a software product is released to a selected market, the
customer or client, knowing that it has bugs. Users are asked to report on faults so that the product can be
improved for its proper release date. Beta testing gets its name from the second letter of the Greek
alphabet. Its name therefore conveys the idea that it is the second major act of testing, following on
after testing within the developing organization. Once Beta testing is complete and the bugs are fixed, the
software is released.
Automated testing :
Unfortunately this is not some automatic way of generating test data. There is no magical way of doing
that. But it is good practice to automate testing so that tests can be reapplied at the touch of a button.
This is extra work in the beginning but often saves time overall.
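As a minimal sketch of what this looks like, the voting tests from earlier can be packaged with Python's standard unittest module so that the whole set can be reapplied at the touch of a button (which is also what makes the regression testing described next practical); can_vote remains a hypothetical stand-in for the component.

# A minimal automated test suite using Python's standard unittest module.
import unittest

def can_vote(age):
    return age >= 18   # hypothetical component under test

class VotingTests(unittest.TestCase):
    # Each previously devised test becomes a method that can be rerun automatically.
    def test_below_boundary(self):
        self.assertFalse(can_vote(17))

    def test_at_boundary(self):
        self.assertTrue(can_vote(18))

    def test_above_boundary(self):
        self.assertTrue(can_vote(19))

if __name__ == "__main__":
    unittest.main()   # one command reruns every test in the suite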
Regression testing:
However, when you fix a bug you might introduce a new bug. Worse, this new bug may not manifest
itself with the current test. The only safe way to proceed is to apply all the previous tests again. This is
termed regression testing. Clearly this is usually a formidable task. It can be made much easier if all the
testing is carried out automatically rather than manually. In large developments, it is common to
incorporate revised components and reapply all the tests once a day.
Formal verification :
Formal methods employ the precision and power of mathematics in attempting to verify that a program
meets its specification. They place emphasis on the precision of the specification, which must first be
rewritten in a formal mathematical notation. One such specification language is called Z. Once the
formal specification for a program has been written, there are two alternative approaches:
1. write the program and then verify that it conforms to the specification. This requires
considerable time and skill.
2. derive the program from the specification by means of a series of transformations, each of
which preserves the correctness of the product. This is currently the favored approach.
Formal verification is very appealing because of its potential for rigorously verifying a program’s
correctness beyond all possible doubt. However, it must be remembered that these methods are carried out
by fallible human beings, who make mistakes. So they are not a cure-all. Formal verification is still in its
infancy and is not widely used in industry and commerce, except in a few safety-critical applications.