Software Engineering
Paper XVII
A few words before we start
• Let's not only study Software Engineering,
• but let's practice it.
Section – I
Software
• The Oxford dictionary defines software as "the programs and other operating
information used by a computer."
Waterfall Model
• The design starts after the requirements analysis is complete, and coding
begins after the design is complete.
• After this, the regular operation and maintenance of the system takes place.
Waterfall Model
Limitations of Waterfall Model
• Freezing the requirements usually requires choosing the hardware (because it forms a part of
the requirements specification). A large project might take a few years to complete. If the
hardware is selected early, then due to the speed at which hardware technology is changing, it is
likely that the final software will use a hardware technology on the verge of becoming obsolete.
This is clearly not desirable for such expensive software systems.
Prototyping Model
• The basic idea here is that instead of freezing the requirements before any design or
coding can proceed, a throwaway prototype is built to help understand the
requirements.
• Development of the prototype obviously undergoes design, coding, and testing, but
each of these phases is not done very formally or thoroughly.
Prototyping Model
• By using this prototype, the client can get an actual feel of the system,
because the interactions with the prototype can enable the client to better
understand the requirements of the desired system.
Iterative Enhancement Model
• The basic idea is that the software should be developed in increments, each increment
adding some functional capability to the system until the full system is implemented.
• An advantage of this approach is that it can result in better testing because testing each
increment is likely to be easier than testing the entire system as in the waterfall model.
• Furthermore, as in prototyping, the increments provide feedback to the client that is
useful for determining the final requirements of the system.
Spiral Model
• As the name suggests, the activities in this model can be organized like a spiral that has
many cycles, as shown in the figure on the next slide.
• Each cycle in the spiral begins with the identification of objectives for that cycle, the
different alternatives that are possible for achieving the objectives, and the constraints
that exist.
Spiral Model
• The next step in the cycle is to evaluate these different alternatives
based on the objectives and constraints.
• The focus of evaluation in this step is based on the risk perception for
the project.
• The next step is to develop strategies that resolve the uncertainties
and risks.
• This step may involve activities such as benchmarking, simulation, and
prototyping.
• Next, the software is developed, keeping in mind the risks.
Fourth Generation Techniques
• The term fourth generation techniques (4GT) encompasses a broad array of software tools that
have one thing in common: each enables the software engineer to specify some
characteristic of the software at a higher level. The tool then automatically generates
source code based on the developer’s specifications.
• Currently, a software development environment that supports the 4GT model includes
some or all of the following tools: nonprocedural languages for database query, report
generation, data manipulation, screen interaction and definition, code generation,
high-level graphics.
Fourth Generation Techniques
• Like all other models, 4GT begins with a requirements gathering phase. Ideally, the customer would
describe the requirements, which are directly translated into an operational prototype. In practice,
however, the client may be unsure of the requirements, may give ambiguous specifications, or may be
unable to specify information in a manner that a 4GT tool can use. Thus, the client/developer dialog
remains an essential part of the development process.
• For small applications, it may be possible to move directly from the requirements gathering phase to
the implementation phase using a nonprocedural fourth generation language. However, for larger
projects a design strategy is necessary. Otherwise, the same difficulties are likely to arise as with
conventional approaches.
Fourth Generation Techniques
• Advantages:
• Dramatic reduction in software development time.
• Disadvantages:
• Not much easier to use than conventional programming languages.
Concept of Project Management
Project Management Process
• Proper management is an integral part of software development.
• A large software development project involves many people working
for a long period of time.
• We have seen that a development process typically partitions the
problem of developing software into a set of phases.
• To meet the cost, quality, and schedule objectives, resources have to
be properly allocated to each activity for the project, and progress of
different activities has to be monitored and corrective actions taken, if
needed.
Project Management Process
• All these activities are part of the project management process.
• The project management process specifies all the activities that project
management needs to perform to ensure that cost and quality objectives are
met.
• Its basic task is to ensure that, once a development process is chosen,
it is implemented optimally.
• The focus is on issues like planning a project, estimating resources and
schedule, and monitoring and controlling the project.
Project Management Process
• Metrics used for monitoring a project are of two types: product metrics and
process metrics.
Product Metrics
• For effective monitoring, the management needs to get information about the
project, such as how far it has progressed.
• Therefore, the plan must be adapted and updated as the project proceeds.
Decomposition Techniques
• Decomposition techniques are one of the software project estimation
methods.
• Decomposition techniques take a divide-and-conquer approach to
software project estimation.
• By decomposing a project into major functions and related software
engineering activities, cost and effort estimation can be performed in
a stepwise fashion.
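• For instance, a decomposition-based estimate can be sketched as a bottom-up sum of per-function estimates. In the sketch below the function names, effort figures, and labor rate are invented for illustration, not taken from these slides:

    # Illustrative decomposition-based estimate; all names and figures are hypothetical.
    estimates = {                       # effort in person-months per major function
        "user interface": 2.5,
        "database layer": 4.0,
        "report generation": 1.5,
    }

    total_effort = sum(estimates.values())     # bottom-up total
    cost = total_effort * 8000                 # assumed rate: $8,000 per person-month
    print(f"Effort: {total_effort} person-months, cost: ${cost:,.0f}")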
Software Sizing
• The accuracy of a software project estimate is predicated on a number of things:
1. The degree to which you have properly estimated the size of the product to be built.
2. The ability to translate the size estimate into human effort, calendar time, and dollars (a
function of the availability of reliable software metrics from past projects).
3. The degree to which the project plan reflects the abilities of the software team.
4. The stability of product requirements and the environment that supports the software
engineering effort.
Software Sizing
• Because a project estimate is only as good as the estimate of the size
of the work to be accomplished, software sizing represents your first
major challenge as a planner.
• In the context of project planning, size refers to a quantifiable
outcome of the software project.
• If a direct approach is taken, size can be measured in lines of code
(LOC).
• If an indirect approach is chosen, size is represented as function
points (FP).
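• As a hedged sketch of the indirect approach, function points are computed as FP = UFP × (0.65 + 0.01 × ΣFi). The counts below are invented; the weights are the standard "average" complexity weights:

    # Function point sketch: FP = UFP * (0.65 + 0.01 * sum(F_i)); counts are hypothetical.
    counts  = {"inputs": 24, "outputs": 16, "inquiries": 22, "files": 4, "interfaces": 2}
    weights = {"inputs": 4,  "outputs": 5,  "inquiries": 4,  "files": 10, "interfaces": 7}

    ufp = sum(counts[k] * weights[k] for k in counts)   # unadjusted function points
    vaf = 0.65 + 0.01 * 46     # value adjustment factor; 46 = assumed sum of the
                               # 14 general system characteristics (rated 0-5 each)
    fp = ufp * vaf
    print(f"UFP = {ufp}, FP = {fp:.0f}")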
Project Estimation
• The most common technique for estimating a project is to base the estimate on
the process that will be used.
• That is, the process is decomposed into a relatively small set of activities,
actions, and tasks and the effort required to accomplish each is estimated.
• The original COCOMO model became one of the most widely used and discussed
software cost estimation models in the industry.
• It has evolved into a more comprehensive estimation model, called COCOMO II.
• The model has been derived from productivity data collected for over
4,000 contemporary software projects.
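• For illustration only: the original (basic) COCOMO computed effort directly from estimated size, and COCOMO II refines this idea with scale factors and cost drivers. A minimal sketch of the basic organic-mode formulas, with an assumed size:

    # Basic COCOMO sketch (original model, organic mode), shown for illustration.
    kloc = 32                        # assumed estimated size: 32,000 lines of code
    a, b = 2.4, 1.05                 # organic-mode effort constants
    c, d = 2.5, 0.38                 # organic-mode schedule constants

    effort = a * kloc ** b           # effort in person-months
    duration = c * effort ** d       # development time in months
    print(f"Effort: {effort:.1f} person-months over {duration:.1f} months")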
Characteristics of an SRS
1. Correct
2. Complete
3. Unambiguous
4. Verifiable
5. Consistent
6. Ranked for importance and/or stability
7. Modifiable
8. Traceable
Components of SRS
• Completeness of specifications is difficult to achieve and even more
difficult to verify.
• Having guidelines about what different things an SRS should specify
will help in completely specifying the requirements.
• The basic issues an SRS must address are:
• Functionality
• Performance
• Design constraints imposed on an implementation
• External interfaces
Functional Requirements
• The functional requirements for a system describe the functionalities
or services that the system is expected to provide.
• They describe how the system should react to particular inputs and
how the system should behave in particular situations.
Design Constraints
• There may be a requirement that system will have to use some existing hardware, limited primary
and/or secondary memory.
• There may be some standards of the organization that should be obeyed such as the format of
reports.
• Security requirements may sometimes restrict the use of some commands, control access to
data, or require the use of passwords and cryptographic techniques.
Data Flow Diagram (DFD)
• A DFD views a system as a function that transforms the inputs into desired outputs.
• Any complex system will not perform this transformation in a "single step";
data will typically undergo a series of transformations before it becomes the
output.
• The DFD aims to capture the transformations that take place within a system to
the input data so that eventually the output data is produced.
Symbols Used in DFD
• The agent that performs the transformation of data from one
state to another is called a process (or a bubble).
• So, while drawing a DFD, one must not get involved in procedural
details, and procedural thinking must be consciously avoided.
Suggestions for constructing DFD
• Work your way consistently from the inputs to the outputs, or vice versa.
• Start with a high-level data flow graph with a few major transforms describing the entire
transformation from the inputs to outputs, and then refine each transform with more detailed
transformations.
• Never try to show control logic. If you find yourself thinking in terms of loops and decisions, it is
time to stop and start again.
• Label each arrow with proper data elements. Inputs and outputs of each transform should be
carefully identified.
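• As a small, invented illustration, a top-level DFD for a payroll system might be sketched textually as follows (parentheses mark processes or bubbles, square brackets a data store, and arrows carry labelled data flows):

    timesheet --> (Validate input) --valid timesheet--> (Calculate pay)
    (Calculate pay) --pay record--> (Print paycheck) --> paycheck
    [Tax-rate store] --tax rates--> (Calculate pay)

• Note that the sketch records only data flows and transformations; it says nothing about loops or decisions.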
• Many systems are too large for a single DFD to describe the data
processing clearly.
Software Design
• Clearly, the goal during the design phase is to produce correct designs.
• However, correctness is not the sole criterion during the design phase, as
there can be many correct designs.
• The goal of the design process is not simply to produce a design for the
system, but to find the best possible design within the limitations imposed by
the requirements and the environment in which the system will operate.
Software Design – Principles
• The software design model is the equivalent of an architect’s plans for
a house.
• It begins by representing the totality of the thing to be built (e.g., a
three-dimensional rendering of the house) and slowly refines the
thing to provide guidance for constructing each detail (e.g., the
plumbing layout).
• Similarly, the design model that is created for software provides a
variety of different views of the system.
Software Design – Principles
• User interface design should be tuned to the needs of the end user.
However, in every case, it should stress ease of use.
Software Design – Principles
• Data Design
• Architecture Design
• Procedural Design
• At the program-component level, the design of data structures and the associated algorithms
required to manipulate them is essential to the creation of high- quality applications.
• At the application level, the translation of a data model (derived as part of requirements
engineering) into a database is pivotal to achieving the business objectives of a system.
• At the business level, the collection of information stored in disparate databases and reorganized
into a “data warehouse” enables data mining or knowledge discovery that can have an impact on the
success of the business itself.
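• A minimal Python sketch of data design at the program-component level (the class and operations are hypothetical): the data structure and the algorithms that manipulate it are designed together:

    # Illustrative component-level data design: structure + algorithms designed together.
    from dataclasses import dataclass, field

    @dataclass
    class Inventory:
        items: dict = field(default_factory=dict)   # keyed by SKU for O(1) lookup

        def add(self, sku, qty):
            self.items[sku] = self.items.get(sku, 0) + qty

        def remove(self, sku, qty):
            if self.items.get(sku, 0) < qty:
                raise ValueError("insufficient stock for " + sku)
            self.items[sku] -= qty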
Software Testing – Goals
• To demonstrate to the developer and the customer that the software meets its
requirements. For generic software products, this means that there should be tests for all
of the system features, plus combinations of these features, that will be incorporated in
the product release.
• To discover situations in which the behavior of the software is incorrect, undesirable, or does not
conform to its specification. Such failures are a consequence of software defects.
• Defect testing is concerned with rooting out undesirable system behavior such as system crashes,
unwanted interactions with other systems, incorrect computations, and data corruption.
Principles
1. Testing Shows the Presence of Defects
2. Exhaustive Testing Is Impossible
3. Early Testing
4. Defect Clustering
5. Pesticide Paradox
Testability
• Operability
• Observability
• Controllability
• Decomposability
• Simplicity
• Stability
• Understandability
Test Cases
• Having test cases that are good at revealing the presence of faults is
central to successful testing.
• The reason for this is that if there is a fault in a program, the program
can still provide the expected behavior for many inputs.
• Only for the set of inputs that exercise the fault in the program will
the output of the program deviate from the expected behavior.
• Hence, it is fair to say that testing is as good as its test cases.
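• A minimal sketch of this point (the function and its seeded fault are invented): most inputs still produce the expected output, and only inputs that exercise the fault reveal it:

    # Illustrative only: a deliberately seeded fault that most inputs do not reveal.
    def max3(a, b, c):
        if a >= b:
            return a            # bug: ignores c on this path
        return max(b, c)

    print(max3(9, 3, 5))   # 9 -- correct, fault not exercised
    print(max3(3, 9, 5))   # 9 -- correct, fault not exercised
    print(max3(5, 3, 9))   # 5 -- wrong (expected 9): only this input class reveals the fault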
White Box Testing
• Using white-box testing methods, you can derive test cases that
1) guarantee that all independent paths within a module have been exercised at least once,
2) exercise all logical decisions on their true and false sides,
3) execute all loops at their boundaries and within their operational bounds, and
4) exercise internal data structures to ensure their validity.
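• A minimal white-box sketch (the function is invented for illustration): tests are chosen to cover both decision outcomes and the loop boundaries:

    # Illustrative white-box targets: both decision outcomes and loop boundaries.
    def count_positives(nums):
        count = 0
        for n in nums:          # loop: test with 0, 1, and many elements
            if n > 0:           # decision: test both true and false outcomes
                count += 1
        return count

    assert count_positives([]) == 0            # loop executes zero times (boundary)
    assert count_positives([5]) == 1           # loop once; decision true
    assert count_positives([-2]) == 0          # loop once; decision false
    assert count_positives([1, -1, 3]) == 2    # loop many times; both outcomes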
Black Box Testing
• Black-box testing focuses on the functional requirements of the software.
• That is, black-box testing techniques enable you to derive sets of input
conditions that will fully exercise all functional requirements for a program.
• Black-box testing attempts to find errors in categories such as:
1) incorrect or missing functions,
2) interface errors,
3) errors in data structures or external database access,
4) behavior or performance errors, and
5) initialization and termination errors.
• Once a programmer has written the code for a module, it has to be verified before
it is used by others.
• So far we have assumed that testing is the means by which this verification is done.
• Though testing is the most common method of verification, there are other
effective techniques also.
• Here we will focus on techniques that are now widely used in practice
— inspections (including code reading), unit testing, and program
checking.
• When testing functions or methods, the tests should be calls to these routines with different input parameters.
• You can use the approaches to test case design to design the function or method tests.
• When you are testing object classes, you should design your tests to provide coverage of
all of the features of the object.
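• As an illustrative sketch (the class is hypothetical), covering all the features of an object means exercising every operation and checking the resulting state:

    # Illustrative class test: exercise all operations and inspect resulting state.
    class Counter:
        def __init__(self):
            self.value = 0
        def increment(self):
            self.value += 1
        def reset(self):
            self.value = 0

    def test_counter():
        c = Counter()
        assert c.value == 0        # initial state
        c.increment()
        c.increment()
        assert c.value == 2        # increment feature
        c.reset()
        assert c.value == 0        # reset feature

    test_counter()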
Integration Testing
• The different modules are first tested individually and then combined to make
a system.
• Testing the interface between the small units or modules is integration testing.
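• A minimal integration sketch (the modules are invented for illustration): each unit works on its own, and the test targets the interface between them:

    # Illustrative integration test: the test checks the interface between two units.
    def parse_order(line):                     # module A: parsing
        sku, qty = line.split(",")
        return sku.strip(), int(qty)

    def total_quantity(lines):                 # module B: aggregation, calls module A
        return sum(parse_order(line)[1] for line in lines)

    assert total_quantity(["A1, 2", "B2, 3"]) == 5   # exercises the A-B interface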
Validation Testing
• Validation testing ensures that the product actually meets the client's
needs. It can also be defined as demonstrating that the product fulfills its
intended use when deployed in an appropriate environment.
System Testing
• System testing checks that components are compatible, interact correctly and
transfer the right data at the right time across their interfaces.
• When you integrate components to create a system, you get emergent behavior.
• This means that some elements of system functionality only become obvious
when you put the components together.
Extras
Coupling and Cohesion
Introduction
• The purpose of the design phase in the Software Development Life Cycle is to produce
a solution to the problem given in the SRS (Software Requirement Specification)
document. The output of the design phase is the Software Design Document (SDD).
• Basically, design is a two-part iterative process. The first part is conceptual design,
which tells the customer what the system will do. The second is technical design, which
allows the system builders to understand the actual hardware and software needed to
solve the customer's problem.
Conceptual Design of System
• It is independent of implementation.
Technical Design of System
• Software architecture
• Network architecture
• Interfaces of the system
Modularization
• Basically, cohesion is the internal glue that keeps the module together.
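• As an illustrative sketch (the module and function names are invented), high cohesion keeps closely related operations together in one module, while low coupling connects modules only through narrow interfaces:

    # tax_rules.py -- cohesive: everything here concerns tax computation.
    def taxable_income(gross, deductions):
        return max(gross - deductions, 0)

    def tax_due(gross, deductions, rate=0.2):
        return taxable_income(gross, deductions) * rate

    # payroll.py -- loosely coupled: depends on tax_rules only through tax_due().
    def net_pay(gross, deductions):
        return gross - tax_due(gross, deductions)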