• Characteristics of Software.
What is software?
Software consists of:
1. computer programs
2. configuration data
3. documentation and associated information.
Software products fall into two classes:
1. Generic products
2. Customized products
• Software engineers probably spend less than 10% of their time writing
code.
• The other 90% of their time is spent on the many other activities that make up software engineering, which is what separates the professional software engineer from the student or scientist programmer.
Characteristics of Software
• External qualities, such as usability and reliability, are visible to the user.
• Internal qualities are those that may not be necessarily visible to the user,
but help the developers to achieve improvement in external qualities.
• Reliability
• Correctness
• Performance
• Usability
• Interoperability
• Evolvability
• Repairability
• Portability
• Verifiability
• Traceability
Reliability
• Informally, software is reliable if there is an absence of known catastrophic errors (those that disable or destroy the system).
• More precisely, reliability is the probability that the software will operate as expected over a specified time interval.
• Let S be a software system and let T be the time of system failure. Then the reliability of S at time t, denoted r(t), is the probability that T is greater than t; that is, r(t) = P(T > t).
• This is the probability that the software system will operate without failure for a specified period.
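As an illustration of the definition above, here is a minimal sketch that estimates r(t) empirically from a set of observed times-to-failure. The sample data and function name are hypothetical, not taken from the notes.

```python
# Empirical estimate of software reliability r(t) = P(T > t),
# computed from a hypothetical sample of observed times-to-failure (in hours).

def estimated_reliability(failure_times, t):
    """Fraction of observed runs whose time-to-failure T exceeded t."""
    if not failure_times:
        raise ValueError("need at least one observed failure time")
    surviving = sum(1 for T in failure_times if T > t)
    return surviving / len(failure_times)

if __name__ == "__main__":
    observed = [120.0, 340.5, 97.2, 410.0, 280.3, 505.8]  # hypothetical data
    for t in (100, 300, 500):
        print(f"r({t}) ~= {estimated_reliability(observed, t):.2f}")
```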
• The failure intensity is initially high, as would be expected in new software as faults
are detected during testing.
• The bathtub curve is often used to explain the failure function for physical
components that wear out, electronics, and even biological systems.
Bathtub curve
Correctness
• For example,
• the baggage inspection system may be required to process 100 pieces of luggage per minute.
• a photo reproduction system might be required to digitize, clean, and output color copies at a rate of
one every two seconds.
• For example, in embedded systems, the software must be able to communicate with
various devices using standard bus structures and protocols.
• Open systems differ from open source code, which is source code that is made available to the
user community for improvement and correction.
• An open system allows the addition of new functionality by independent organizations through
the use of interfaces whose characteristics are published.
• Any software engineer can then take advantage of these interfaces, and thereby create software
that can communicate using the interface.
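As a small illustration of the published-interface idea, the sketch below defines a hypothetical scanning-device interface; any vendor could implement it independently, and the host software would still interoperate with the new device. The class and method names are invented for the example.

```python
# A hypothetical published interface for an "open system": independent
# organizations can add new device support by implementing ScannerInterface.
from abc import ABC, abstractmethod

class ScannerInterface(ABC):
    """Published interface that any baggage-scanner vendor may implement."""

    @abstractmethod
    def scan(self, bag_id: str) -> bytes:
        """Return the raw image data for one piece of luggage."""

class ThirdPartyScanner(ScannerInterface):
    """A third-party implementation written against the published interface."""

    def scan(self, bag_id: str) -> bytes:
        # Real code would talk to the vendor's hardware over a standard bus/protocol.
        return f"image-of-{bag_id}".encode()

def inspect(scanner: ScannerInterface, bag_id: str) -> int:
    # The host system depends only on the interface, not on any vendor's code.
    return len(scanner.scan(bag_id))

print(inspect(ThirdPartyScanner(), "BAG-042"))
```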
• A software system in which changes are relatively easy to make has a high level of
maintainability.
• Evolvability is a measure of how easily the system can be changed to accommodate new
features or modification of existing features.
• Measuring these qualities is not always easy and is often based on anecdotal observation. In practice, it means that changes and the cost of making them are tracked over time.
Portability
• The term environment refers to the hardware on which the system resides, the operating system, or other software with which the system is expected to interact.
• Modular design, rigorous software engineering practices, and the effective use
of an appropriate programming language can also contribute to verifiability.
Traceability
• Fragility — When changes cause the system to break in places that have
no conceptual relationship to the part that was changed. This is a sign of
poor design.
• Rigidity — When the design is hard to change because every time you
change something, there are many other changes needed to other parts of
the system.
• Development
• Testing
• Deployment
• Maintenance.
• Producing a software application is relatively simple in concept: take an idea and turn it into a useful program.
• These are more or less the same for any large project
although there are some important differences.
• They are:
1. Requirements gathering
2. Design
3. Development (coding)
4. Testing
5. Deployment (Implementation)
6. Maintenance
Requirements Gathering
• Sometimes a project doesn’t follow the plan closely, but every big
project must have a plan.
• The plan tells project members what they should be doing, when and how long they should be doing it, and, most important, what the project's goals are.
• They give the project direction.
• We need to find out what the customers want and what the
customers need.
• Depending on how well defined the user’s needs are, this can be
time‐consuming.
• Once the customers’ wants and needs are clearly specified, then we
can turn them into requirements documents.
• Those documents tell the customers what they will be getting, and
they tell the project members what they will be building.
• At the end of the project, we use the requirements to verify that the
finished application actually does what it’s supposed to do.
Characteristics of good requirements
• Clear
Good requirements are clear, concise, and easy to understand.
• Prioritized
We might like to include every feature but don’t have the time or budget, so something’s
got to go. At this point, we need to prioritize the requirements.
• Verifiable
If we can’t verify a requirement, how do we know whether we have met it?
Being verifiable means the requirements must be limited and precisely defined.
MoSCoW METHOD - a common system for prioritizing application features.
• M - Must. These are required features that must be included. They are necessary for the project to be considered a success.
• S - Should. These are important features that should be included if possible, but the project can succeed without them if necessary.
• C - Could. These are desirable features that can be omitted if they won't fit in the schedule.
• W - Won't. These are completely optional features that the customers have agreed will not be included in the current release.
REQUIREMENT CATEGORIES
Audience‐Oriented Requirements:
• These categories focus on different audiences and the different points of view that each
audience has.
Business Requirements:
• Business requirements lay out the project's high-level goals. They explain what the customer hopes to achieve with the project.
User Requirements:
• User requirements (which are also called stakeholder requirements) describe how the project will be used by the end users.
Non-functional Requirements:
• Non-functional requirements are statements about the quality of the application's behavior or constraints on how it must behave.
• They specify things such as the application's performance, reliability, and security characteristics.
Implementation Requirements:
• Implementation requirements are temporary features that are needed to transition to using the new system.
• Design is the only way that you can accurately translate stakeholder’s
requirements into a finished software product or system.
• Functionality
• Usability
• Reliability
• Performance
• Supportability
• Functionality is assessed by evaluating the feature set and capabilities of the program, the generality of the functions that are delivered, and the security of the overall system.
• Reliability is evaluated by measuring the frequency and severity of failure, the accuracy of output results, and the ability to recover from failure.
HIGH-LEVEL DESIGN
• The high‐level design includes such things as decisions about what platform to use (such as
desktop, laptop, tablet, or phone), what data design to use and the project architecture at a
relatively high level.
• We break the project into different modules that handle the project’s major areas of
functionality.
• We should make sure that the high‐level design covers every aspect of the requirements.
• It should specify what the pieces(modules) do and how they should interact, but it should
include as few details as possible about how the pieces do their jobs.
LOW-LEVEL DESIGN
• After high‐level design breaks the project into pieces, we can assign those pieces to
groups within the project so that they can work on low‐level designs.
• The low‐level design includes information about how that piece of the project should
work.
• The low-level designs may also reveal better ways for the different pieces of the project to interact, which may require changes here and there.
Cohesion has been characterized in increasing levels as follows:
1. Coincidental — parts of the module are grouped arbitrarily and are otherwise unrelated.
2. Logical — parts that perform similar tasks are put together in a module.
3. Temporal — tasks that execute within the same time span are brought together.
4. Procedural — elements are grouped because they always follow a certain sequence of execution.
5. Communicational — all elements of a module act on the same area of a data structure.
6. Sequential — the output of one part of a module serves as input for another part.
7. Functional — each part of the module is necessary for the execution of a single function.
• High cohesion implies that each module represents a single part of the problem solution.
Therefore, if the system ever needs modification, then the part that needs to be modified
exists in a single place, making it easier to change.
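To make functional cohesion concrete, here is a small, hypothetical example: every function in the module below exists only to support the single task of validating a luggage record, so a change to the validation rules is confined to one place. The names and rules are invented for illustration.

```python
# A functionally cohesive module (hypothetical example): every part of it
# serves the single task of validating one luggage record.

REQUIRED_FIELDS = ("bag_id", "weight_kg", "destination")

def has_required_fields(record: dict) -> bool:
    return all(field in record for field in REQUIRED_FIELDS)

def weight_in_range(record: dict, max_kg: float = 32.0) -> bool:
    return 0 < record["weight_kg"] <= max_kg

def validate_luggage(record: dict) -> bool:
    """Single entry point: the one task this module is about."""
    return has_required_fields(record) and weight_in_range(record)

print(validate_luggage({"bag_id": "BAG-042", "weight_kg": 18.5, "destination": "LHR"}))
```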
• There is great benefit in reducing coupling so that changes made to one code unit do not
propagate to others (that is, they are hidden)
• Low coupling limits the effects of errors in a module (lower “ripple effect”) and reduces
the likelihood of data integrity problems.
• Coupling has also been characterized in increasing levels as follows:
• No direct coupling - all modules are completely unrelated.
• Data coupling - when two modules interact with each other by means of passing data
• Stamp coupling - when a data structure is passed from one module to another, but that module operates on only some of the data elements of the structure. (Multiple modules share a common data structure and each works on a different part of it.)
• Control coupling - one module passes an element of control to another; that is, one module explicitly controls the logic of the other. (Two modules are control-coupled if one of them decides the function of the other, for example by passing a flag.)
• Common coupling — if two modules both have access to the same global data.
• Content coupling — one module directly references the contents of another. (A module can directly access or modify the code or local data of another module.) A short sketch contrasting data and common coupling follows this list.
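The sketch below is a hypothetical illustration of the difference between data coupling and common coupling; the module and variable names are invented.

```python
# Data coupling vs. common coupling (hypothetical illustration).

# --- Common coupling: functions reach into shared global data. ---
shared_config = {"max_weight_kg": 32.0}   # global data visible to many modules

def check_weight_common(weight_kg: float) -> bool:
    return weight_kg <= shared_config["max_weight_kg"]   # hidden dependency

# --- Data coupling: everything a function needs is passed explicitly. ---
def check_weight_data_coupled(weight_kg: float, max_weight_kg: float) -> bool:
    return weight_kg <= max_weight_kg                     # dependency is visible

print(check_weight_common(30.0))
print(check_weight_data_coupled(30.0, max_weight_kg=32.0))
```

Because the data-coupled version's dependencies are explicit, a change to the shared configuration structure cannot silently ripple into it.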
• The programmers continue refining the low‐level designs until they know
how to implement those designs in code.
• As the programmers write the code, they test it to make sure it doesn’t
contain any bugs.
• During the Development Phase, the system developer takes the
detailed logical information documented in the previous phase and
transforms it into machine-executable form, and ensures that all of
the individual components of the automated system/application
function correctly and interface properly with other components
within the system/application.
• Even poorly planned and executed testing will improve software quality if it finds
defects.
• Testing is a life-cycle activity; testing activities begin from product inception and
continue through delivery of the software and into maintenance.
• Collecting bug reports and assigning them for repair is also a testing activity.
• But as a life-cycle activity, the most valuable testing activities occur at the
beginning of the project.
• Even if a particular piece of code is thoroughly tested and contains no (or few)
bugs, there’s no guarantee that it will work properly with the other parts of the
system.
• One way to address the problems like this, is to perform different kinds of tests.
• First developers test their own code. Then testers who didn’t write the code test
it. After a piece of code seems to work properly, it is integrated into the rest of
the project, and the whole thing is tested to see if the new code broke anything.
The terms error, bug, fault, and failure:-
• The problem with the word "bug" is that it suggests the error crept into the program through no one's action. The preferred term for an error in requirements, design, or code is "error" or "defect."
• A fault that causes the software system to fail to meet one of its
requirements is called a failure.
• Verification, or testing, determines whether the products of a given
phase of the software development cycle fulfill the requirements
established during the previous phase.
• Testing will flush out errors, but this is just one of its purposes.
• Testing must increase faith in the system, even though it may still contain undetected faults, by
ensuring that the software meets its requirements.
• A good test is one that has a high probability of finding an error. A successful test is one that
uncovers an error.
Basic principles of software testing
• These are the most helpful and practical rules for the tester.
All tests should be traceable to customer requirements.
Tests should be planned long before testing begins.
Remember that the Pareto principle applies to software testing.
(The Pareto principle states that for many outcomes roughly 80% of consequences come
from 20% of the causes.)
Testing should begin “in the small” and progress toward testing “in the large.”
Exhaustive testing is not practical.
To be most effective, testing should be conducted by an independent party.
• Testing is a well-planned activity and should not be conducted willy nilly, nor undertaken at
the last minute, just as the code is being integrated.
• The most important activity that the test engineer can conduct during requirements
engineering is to ensure that each requirement is testable.
• A requirement that cannot be tested cannot be guaranteed and, therefore, must be reworked
or eliminated.
• Wide range of testing techniques for unit testing, integration testing, and system level
testing.
• Any one of these test techniques can be either insufficient or computationally unfeasible.
Therefore, some combination of testing techniques is almost always employed.
Levels of Testing ( Types of Testing )
• In black box testing, only inputs and outputs of the unit are considered; how
the outputs are generated based on a particular set of inputs is ignored.
• This kind of testing involves subjecting the code unit to many randomly chosen or representative test inputs.
• Example:-
Auto manufacturers don’t have a crash test dummy representing every possible
human being. Instead, they use a handful of representative dummies — small,
average, and large adult males; small, average, and large adult females; pregnant
female; toddler, etc. These categories represent the equivalence classes.
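Below is a minimal sketch of equivalence-class (black box) testing in Python, assuming a hypothetical weight_category() function that classifies luggage weight; one representative value is tested from each class, plus the boundaries. The function, classes, and limits are invented for the example.

```python
# Black box / equivalence class testing sketch (hypothetical function under test).
import unittest

def weight_category(weight_kg: float) -> str:
    """Hypothetical unit under test: classify a piece of luggage by weight."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    if weight_kg <= 10:
        return "light"
    if weight_kg <= 32:
        return "standard"
    return "overweight"

class TestWeightCategory(unittest.TestCase):
    def test_one_representative_per_equivalence_class(self):
        # One "crash test dummy" per class, plus boundary values.
        self.assertEqual(weight_category(5), "light")        # class: (0, 10]
        self.assertEqual(weight_category(10), "light")       # boundary
        self.assertEqual(weight_category(20), "standard")    # class: (10, 32]
        self.assertEqual(weight_category(40), "overweight")  # class: (32, inf)

    def test_invalid_class_is_rejected(self):
        with self.assertRaises(ValueError):
            weight_category(-1)                               # class: invalid input

if __name__ == "__main__":
    unittest.main()
```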
Disadvantages of black box testing:
• Black box tests are data driven; white box tests are logic driven.
• White box tests are designed to exercise all paths in the code unit.
• White box testing also has the advantage that it can discover those code
paths that cannot be executed.
• The following white box testing strategies are commonly used:
DD path testing
DU path testing
McCabe’s basis path method
Code inspections
Formal program proving
• DD path testing, or decision-to-decision path testing is based
on the control structure of the program.
• The basis path method begins with the selection of a baseline path, which should correspond to some "ordinary" case of program execution along one of the program paths.
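McCabe's method is tied to the cyclomatic complexity of the control-flow graph, V(G) = E - N + 2 for a connected graph with E edges and N nodes, which gives the number of basis paths to test. A small sketch of that calculation, using an invented control-flow graph, follows.

```python
# Cyclomatic complexity V(G) = E - N + 2 for a single connected control-flow graph.
# The graph below is hypothetical: nodes are statement/decision blocks, edges are
# possible transfers of control.

def cyclomatic_complexity(edges, nodes):
    return len(edges) - len(nodes) + 2

nodes = ["entry", "decision", "then-branch", "else-branch", "exit"]
edges = [
    ("entry", "decision"),
    ("decision", "then-branch"),
    ("decision", "else-branch"),
    ("then-branch", "exit"),
    ("else-branch", "exit"),
]

# 5 edges - 5 nodes + 2 = 2 basis paths, matching the single if/else decision.
print(cyclomatic_complexity(edges, nodes))
```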
• In a code inspection, the author presents the source code, line by line, to a review team of engineers.
• Code inspections can detect errors as well as discover ways for improving
the implementation.
• In formal program proving, the code is treated as a theorem and some form of calculus is used to prove that the program is correct.
• A significant amount of training and mathematical sophistication is required.
Integration Testing:-
• Integration testing involves testing of groups of components integrated
to create a system or sub-system.
• It checks that the existing code calls the new method correctly, and that the new method can call other methods correctly.
• Incremental Integration Testing: a strategy that partitions the system in some way to reduce the amount of code tested at once (a small sketch follows this list). Incremental testing strategies include:
• Top-Down Testing
• Bottom-Up Testing
• Other kinds of system partitioning
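As a hypothetical illustration of incremental (top-down) integration, the sketch below exercises a high-level module against a stub that stands in for a lower-level module that has not been written yet; all names are invented.

```python
# Top-down incremental integration sketch: the high-level module is exercised
# against a stub standing in for a lower-level component (all names hypothetical).

class ScannerStub:
    """Stub for the not-yet-implemented low-level scanner driver."""
    def scan(self, bag_id: str) -> bytes:
        return b"fake-image"          # canned response, just enough for integration

class InspectionService:
    """High-level module under integration test."""
    def __init__(self, scanner):
        self.scanner = scanner        # dependency is injected, so a stub can be used

    def inspect(self, bag_id: str) -> bool:
        image = self.scanner.scan(bag_id)
        return len(image) > 0         # trivial placeholder for real analysis

service = InspectionService(ScannerStub())
assert service.inspect("BAG-042") is True
print("top-down integration check passed")
```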
Most software builders use a process called alpha and beta testing to uncover errors that only the end user is able to find.
Alpha test:-
First round of testing, performed by selected customers or independent testers.
A version of the complete software is tested by the customer under the supervision of the developer, at the developer's site.
Beta test:-
Second round of testing, after the alpha test.
A version of the complete software is tested by the customer at his or her own site, without the developer being present.
• Software fault injection: fault injection is a form of dynamic software testing that acts like "crash-testing" the software by demonstrating the consequences of incorrect code or data.
• The main benefit of fault injection testing is that it can demonstrate that
the software is unlikely to do what it shouldn’t.
• E.g., typing a letter when the input called for is a number.
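A minimal sketch of this kind of fault injection, assuming a hypothetical read_weight() input routine: the test deliberately feeds it a letter (and other bad data) where a number is expected and checks that the failure is handled rather than crashing.

```python
# Fault injection sketch (hypothetical input routine): deliberately feed bad data
# and observe that the software fails safely instead of crashing.

def read_weight(raw: str) -> float:
    """Hypothetical input routine: parse a luggage weight typed by an operator."""
    try:
        value = float(raw)
    except ValueError:
        raise ValueError(f"not a number: {raw!r}")
    if value <= 0:
        raise ValueError("weight must be positive")
    return value

def inject_faults():
    faulty_inputs = ["abc", "", "-5", "12.5kg"]   # a letter (or junk) where a number is expected
    for raw in faulty_inputs:
        try:
            read_weight(raw)
            print(f"{raw!r}: accepted (unexpected!)")
        except ValueError as exc:
            print(f"{raw!r}: rejected safely ({exc})")

inject_faults()
```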
• Destructive test :- Makes the application fail so that you can study
its behaviour when the worst happens.
• Installation test:- Make sure that you can successfully install the
system on a fresh computer
• Functional test:- Deals with features the application provides.
• There are several criteria that can be used to determine when testing should cease.
• What is deployment?
• Getting software out of the hands of the developers into the
hands of the users.
5. Expertise:- Does the customer have the IT expertise to install the software?
• Fixing a bug sometimes leads to another bug, so now we get to fix that one
as well.
– COCOMO
Planning Objective:-
• The objective of software project planning is to
provide a framework that enables the manager
to make reasonable estimates of resources, cost,
and schedule.
• Estimates should attempt to define best-case
and worst-case scenarios so that project
outcomes can be bounded.
Estimation techniques include:
• Empirical estimation, in which empirically derived formulas are used for predicting the data that are a required input to project planning; such formulas rest on past experience and assumptions.
• Analytical estimation, a technique used to measure work. In this technique, the task is first divided or broken down into its elementary components; if the standard time for a component is available from some other source, then these standard times are applied to build up the overall estimate.
• Heuristic models such as COCOMO, which relate effort to measured product and project characteristics for a given development mode (organic, semi-detached, or embedded).
COCOMO Model Levels
• Basic - predicted software size (lines of code) is used to estimate development effort and time, with coefficients that depend on the development mode; the embedded mode carries the tightest constraints and most rigid requirements.
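For reference, the Basic COCOMO effort equation has the form E = a * (KLOC)^b person-months, with mode-dependent coefficients. The sketch below uses the commonly published coefficient table; treat the exact values as an assumption to check against your own reference.

```python
# Basic COCOMO sketch: effort E = a * (KLOC)^b person-months and
# duration D = c * E^d months, using the commonly published coefficients
# (verify against your own reference before relying on them).

COEFFICIENTS = {
    #  mode:           (a,    b,    c,    d)
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc: float, mode: str = "organic"):
    a, b, c, d = COEFFICIENTS[mode]
    effort = a * kloc ** b          # person-months
    duration = c * effort ** d      # months
    return effort, duration

effort, duration = basic_cocomo(32.0, "embedded")
print(f"effort ~= {effort:.1f} person-months, duration ~= {duration:.1f} months")
```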
Object points are computed by counting the number of (1) screens, (2) reports, and (3) components likely to be required to build the application.
• Each object instance (e.g., a screen or report) is
classified into one of three complexity levels (i.e.,
simple, medium, or difficult).
• Complexity is a function of the number and source of
the client and server data tables that are required to
generate the screen or report and the number of views
or sections presented as part of the screen or report.
• Once complexity is determined, the number of
screens, reports, and components are weighted.
• The object point count is then determined by multiplying
the original number of object instances by the weighting
factor in the figure and summing to obtain a total object
point count.
• When component-based development or general
software reuse is to be applied, the percent of reuse
(%reuse) is estimated and the object point count is
adjusted.
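A small sketch of the object-point calculation described above. The weights follow the commonly published table (simple/medium/difficult screens weighted 1/2/3, reports 2/5/8, 3GL components 10), and the reuse adjustment is NOP = (object points) x (100 - %reuse) / 100; treat the specific weights as an assumption to verify against your own figure.

```python
# Object point count with reuse adjustment (weights from the commonly published
# application-composition table; confirm against your own reference).

WEIGHTS = {
    "screen":    {"simple": 1, "medium": 2, "difficult": 3},
    "report":    {"simple": 2, "medium": 5, "difficult": 8},
    "component": {"3GL": 10},
}

def object_points(instances):
    """instances: list of (object_type, complexity, count) tuples."""
    return sum(WEIGHTS[obj][cplx] * count for obj, cplx, count in instances)

def adjusted_object_points(instances, percent_reuse):
    return object_points(instances) * (100 - percent_reuse) / 100

counts = [
    ("screen", "simple", 4),
    ("screen", "difficult", 2),
    ("report", "medium", 3),
    ("component", "3GL", 1),
]
print(adjusted_object_points(counts, percent_reuse=20))   # e.g. 20% reuse
```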
or project management.
or maintained
Staffing Activities for Software Projects
Factors to Consider when Staffing
1. Education
2. Experience
3. Training
4. Motivation
5. Commitment
6. Self-motivation
7. Group affinity
8. Intelligence
Education:
• Does the candidate have the minimum level of education for the
job? Does the candidate have the proper education for future
growth in the company?
Experience:
• Does the candidate have an acceptable level of experience? Is it
the right type and variety of experience?
Training:
• Is the candidate trained in the language, methodology, and
equipment to be used, and the application area of the software
system?
Motivation:
• Is the candidate motivated to do the job, work for the
project, work for the company, and take on the
assignment?
Commitment:
• Will the candidate demonstrate loyalty to the project, to
the company, and to the decisions made?
Self-motivation:
• Is the candidate a self-starter, willing to carry a task
through to the end without excessive direction?
Group affinity:
• Does the candidate fit in with the current staff? Are
there potential conflicts that need to be resolved?
Intelligence:
• Does the candidate have the capability to learn, to
take difficult assignments, and adapt to changing
environments?
Sources of Qualified Project Individuals
• Model Approaches
• Prerequisites
– Sashimi
– Incremental waterfall
– V model
work flow.
project.
• Sequential process models, such as the waterfall and
V models, are the oldest software engineering
paradigms.
• They suggest a linear process flow that is often
inconsistent with modern realities (e.g., continuous
change, evolving systems, tight time lines) in the
software world. However, they have applicability in
situations where requirements are well defined and
stable
• Incremental process models are iterative in nature
and produce working versions of software quite
rapidly.
• Evolutionary process models recognize the iterative,
incremental nature of most software engineering
projects and are designed to accommodate change.
• Evolutionary models, such as prototyping and the
spiral model, produce incremental work products (or
working versions of the software) quickly.
• These models can be adopted to apply across all software
engineering activities— from concept development to
long-term system maintenance.
• Predictive models plan the whole project in advance and expect it to follow that plan as time goes on.
• Predictive models are useful primarily because they give
a lot of structure to a project.
• It is Empirical Software Engineering.
• The goal of such methods is repeatable, refutable (and
possibly improvable) results in software engineering.
• Some of the predictive software development models are
waterfall model, waterfall with feedback,
sashimi model, incremental waterfall model, V‐
model and software development life cycle.
Success and Failure Indicators
The following compares how predictive, iterative, and agile development approaches deliver features and fidelity:
Predictive:
• Provides all three features at the same time with full fidelity.
Iterative:
• Initially provides all three features at a low (but usable) fidelity. Later iterations provide higher and higher fidelity until all the features are provided at full fidelity.
Agile:
• Initially provides the fewest possible features at low
fidelity. Later versions improve the fidelity of existing
features and add new features. Eventually all the features are
provided at full fidelity.
WATERFALL MODEL
• In the waterfall model, the phases (requirements, design, development, testing, deployment) are performed one after another in strict sequence until the team finishes the project.
Prototypes
• A prototype does not work exactly the same way the finished application will.
• However, it lets the customer see what the application will look like.
• After the customers experiment with the prototype, they can give us feedback.
• There are three kinds of prototypes:
1. Throwaway prototype
2. Evolutionary prototype
3. Incremental prototype
Advantages of prototypes:
1. Improved requirements
2. Common vision
3. Better design
Improved requirements
• Prototypes allow customers to see what the finished
application will look like.
Better design
• Prototypes let the developers quickly explore specific pieces of
the application to learn what they involve.
• They also help the developers to improve the design and make the final code more elegant and robust.
Disadvantages of Prototype
1. Narrowing vision
2. Customer impatience
3. Scheduled pressure
4. Raised expectation
5. Attachment to code
6. Never-ending prototype
Narrowing vision
• People (customers and developers) tend to focus
on a prototype’s specific approach rather than on
the problem it addresses.
Customer impatience
• A good prototype can make customers think that
the finished application is just around the corner.
Scheduled pressure
Raised expectation
• Preliminary planning –
– Project Manager and technical lead are assigned to the project, and they start
planning.
• Review – The team uses metrics to assess the project and decide
whether the development process can be improved in the future.
• Requirement specifications
• Requirements modelling
• Requirements documentation.
Software Requirements Specification
(SRS)
• A requirement can range from a high-level, abstract statement of a service or
constraint to a detailed, formal specification.
• SRS is the set of activities designed to capture behavioral and non-behavioral
aspects of the system in the SRS document.
• The goal of the SRS activity, and the resultant documentation, is to provide a
complete description of the system’s behavior without describing the internal
structure.
• This aspect is easily stated, but difficult to achieve.
• Software specifications provide the basis for analyzing the requirements,
validating that they are the stakeholder’s intentions, defining what the
designers have to build, and verifying that they have done so correctly.
• The term “gathering” is also used to describe the process of collecting software
requirements.
• Eliciting requirements is one of the hardest jobs for the requirements engineer.
• Modeling requirements involves representing the requirements in some
form.
• Words, pictures, and mathematical formulas can all be used but it is never
easy to effectively model requirements.
• Finally, requirements change all the time and the requirements engineer
must be equipped to deal with this eventuality.
• SRSs are usually classified in terms of their level of abstraction:
1. user requirements
2. system requirements
• The user needs are usually called “functional requirements” and the
external constraints are called “non-functional requirements.”
• Functional requirements describe the services the system should provide.
• Sometimes the functional requirements state what the system should not do.
• Most systems must operate with other systems and the operating interfaces
must be specified as part of the requirements.
• There are three types of interface that may have to be defined: procedural
interfaces, data structures that are exchanged , and data representations.
• The system must be buildable with current technology and within budget, and it must be possible to integrate it with other systems that are already in use.
• The software engineer must monitor and control these factors throughout
the requirements engineering process.
The following three approaches to requirements elicitation will be
discussed in detail:
• Joint Application Design (JAD)
• Quality function deployment (QFD)
• Designer as apprentice
JAD
• These meetings occur four to eight hours per day and over a period
lasting one day to a couple of weeks.
Software engineers can use JAD for:
i. eliciting requirements and for the SRS
ii. design and software design description
iii. code
iv. tests and test plans
v. user manuals
• Planning for a review or audit session involves
three steps:
i. selecting participants
ii. preparing the agenda
iii. selecting a location
Rules for JAD sessions
The session leader must make every effort to ensure that these practices are implemented.
• Stay on schedule
• Resolve conflicts.
• QFD provides a structure for ensuring that customers’ wants and needs
are carefully heard, and then directly translated into a company’s
internal technical requirements from analysis through implementation to
deployment.
• The basic idea of QFD is to construct relationship matrices between
customer needs, technical requirements, priorities, and
competitor assessment.
• In the designer-as-apprentice approach, the requirements engineer "looks over the shoulder" of the customer to enable the engineer to learn enough about the work of the customer to understand his needs.
• The relationship between customer and designer is like that between a master craftsman and
apprentice.
• The apprentice learns a skill from the master, just as we want the requirements engineer (the designer) to learn about the work from the customer.
• Both customer and designer learn during this process; the customer learns what may be
possible and the designer expands his understanding of the work.
• Requirements modeling involves the techniques needed to express requirements in a way that
can capture user needs.
• There are a number of ways to model software requirements; these include natural languages,
informal and semiformal techniques, user stories, use case diagrams, structured diagrams, object-
oriented techniques, formal methods, and more.
• English, or any other natural language, has many problems for requirements communication.
• These problems include lack of clarity and precision, mixing of functional and non-functional
requirements, ambiguity, overflexibility, and lack of modularization.
• Every clear SRS must have a great deal of narrative in clear and concise natural language. But
when it comes to expressing complex behavior, it is best to use formal or semiformal methods,
clear diagrams or tables, and narrative as needed to tie these elements together.
Use Cases
• Use cases are an essential artifact in object-oriented requirements
elicitation and analysis and are described graphically using any of several
techniques.
• Artifact – something observed in a scientific investigation or experiment
that is not naturally present but occurs as a result of the preparative or
investigative procedure.
• One representation for the use case is the use case diagram, which depicts
the interactions of the software system with its external environment.
In a use case diagram:
• The box represents the system itself.
• The stick figures represent "actors" that designate external entities that interact with the system. The actors can be humans, other systems, or device inputs.
• Internal ellipses represent each activity of use for each of the actors (the use cases).
• The solid lines associate actors with each use case.
• Each use case is a document that describes scenarios of operation of the
system under consideration as well as pre- and post conditions and
exceptions.
• User stories are short conversational texts that are used for initial requirements
discovery and project planning.
• User stories are written by the customers in their own “voice,” in terms of what
the system needs to do for them.
• User stories usually consist of two to four sentences written by the customer in
his own terminology, usually on a three-by-five inch card.
• The appropriate number of user stories for one system increment or evolution is about 80, but it will vary widely depending upon the application size and scope.
• User stories should provide only enough detail to make a reasonably low risk estimate
of how long the story will take to implement.
• When the time comes to implement the story, developers will meet with the customer
to flesh out the details.
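For illustration, a hypothetical user story (not taken from the notes) written in the customary card format might read: "As a check-in agent, I want the system to flag any bag over the weight limit so that I can collect the overweight fee before the bag goes to screening."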
• The SRS document is the official statement of what is required of the system
developers.
• The SRS is not a design document. It should state what the system should do rather than how it should do it.
• A variety of stakeholders uses the software requirements throughout the software life
cycle.
• Stakeholders include customers (these might be external customers
or internal customers such as the marketing department), managers,
developers, testers, and those who maintain the system.
• Each stakeholder has a different perspective on and use for the SRS.
• Requirements traceability is concerned with the relationships between
requirements, their sources, and the system design.
Clear
• That means they can't be pumped full of management-speak and confusing jargon.
• It is okay to use technical terms and abbreviations if they are defined some where or
they are common knowledge in the project's domain.
• If the requirement is worded so that we can’t tell what it requires, then we can’t build a
system to satisfy it.
• Although this may seem like an obvious feature of any good requirement, it’s
sometimes harder to guarantee than we might think.
• Read them carefully to make sure we can’t think of any way to interpret them other
than the way we intend.
Consistent
• That means not only that they cannot contradict each other, but
that they also don’t provide so many constraints that the problem
is unsolvable.
Prioritized
• When we start working on the project's schedule, it's likely we will need to cut a few nice-to-haves from the design.
• We might like to include every feature but don't have the time or budget, so something's got to go.
• Unambiguous — An unambiguous SRS is one that is clear and not subject to different interpretations. Using
appropriate language can help avoid ambiguity.
• Ranked — An SRS must be ranked for importance and/or stability. Not every requirement is as critical as another.
By ranking the requirements, designers will find guidance in making tradeoff decisions.
• Verifiable — Any requirement that cannot be verified is a requirement that cannot be shown to have been met.
• Traceable — The SRS must be traceable because the requirements provide the starting point for the traceability
chain.
REQUIREMENT CATEGORIES
Audience‐Oriented Requirements:
• These categories focus on different audiences and the different points of view that each
audience has.
Business Requirements:
• Business requirements lay out the project's high-level goals. They explain what the customer hopes to achieve with the project.
User Requirements:
• User requirements (which are also called stakeholder requirements) describe how
the project will be used by the end users.
• They often include things like sketches of forms, scripts that show the steps users will perform to accomplish specific tasks, use cases, and prototypes.
Functional Requirements:
• They are similar to the user requirements but they may also include things that the
users won’t see directly.
Non-functional Requirements:
• They specify things such as the application’s performance, reliability, and security
characteristics.
Implementation Requirements:
• For example: after we finish testing the system and are ready to use it full time, we need a method to copy any pending invoices from the old database into the new one.
GATHERING REQUIREMENTS
• Learn as much as we can about the problem the customers are trying to address and any ideas they already have for solving it.