software_engineering_5modules
MODULE 1
• All of these factors have contributed to the evolution of increasingly complex and
sophisticated computer-based systems.
• Sophistication and complexity yield desirable outcomes if the system works as
intended, but can pose serious challenges if the opposite is true.
• Large software companies now employ entire groups of specialists, each of whom
works on a specific aspect of the technology needed to complete a single application.
• The questions asked in the early days of the lone programmer are the same questions
asked when contemporary computer-based systems are developed.
They are:
(1) What factors contribute to the protracted time required to finish software?
(2) What aspects of development add to the staggering costs?
(3) Why is it that we are unable to detect all of the faults in the software before releasing
it to our clients?
(4) Why do we put forth a lot of time and energy to keep the programmes that are
currently in place going?
(5) Why do we still struggle to accurately measure the amount of progress made when
the software is being built and maintained?
• The fact that these inquiries are being made demonstrates that businesses are
worried about software and its creation process.
• This worry has prompted the growth of software engineering practices.
1.4 Software:
Definition:
“It is a set of instructions that, when executed, provide desired features, functions,
and performance”
(Or)
“It is data structures that enable programs to adequately manipulate
information”
(Or)
“It is documents that describe the operation and use of the programs”
This leads to an increase in the failure rate; in other words, the hardware "wears out." The
"Bathtub Curve" below illustrates this point.
[Figure: the hardware "bathtub curve" — failure rate plotted against time]
* The failure rate curve for software should take the form of an "idealized curve,"
since software is not affected by environmental conditions in any way.
[Figure: software failure curves — failure rate vs. time, showing the idealized curve and the actual curve, which spikes upward after each change]
* In the beginning stages of a program's life, significant failure rates are caused by
mistakes that have not yet been found.
* The curve will become flatter once these faults have been rectified (provided that no
new flaws are introduced).
Therefore, it is abundantly evident that software does not "wear out" the way hardware
does, though it does deteriorate as changes are made.
(3) Although the industry is moving toward component-based construction, most
software continues to be custom built.
Component reuse is a natural element of the engineering process in the hardware world;
in the software world, it is something that has only begun to be achieved on a broad
scale.
* It is recommended to build and develop a reusable software component that may be
utilized in a variety of different programmes.
Example:
Present-day user interfaces are constructed using reusable components, which facilitate
the following:
=> Window creation;
=> Pull-down menus; and
=> An extensive range of interaction mechanisms.
System software: Some pieces of system software process information structures that are
complex yet determinate, whereas other pieces of system software process data that is
largely indeterminate.
Examples: compilers, editors, and utilities for managing and organizing files.
Characteristics:
=> Extensive connection with computer hardware
=> Prolonged and intensive use by a number of individuals
=> Structures of data that are complex
=> Multiple connections to the outside world
=> Working in parallel at the same time
************************************************************************
Note:
Legacy Software:
* Decades have passed since the development of this software system [i.e., older
programmes], and it has been continuously updated to accommodate shifting business
requirements and computing platforms.
* Such system proliferation causes headaches for large organisations due to the high cost
of maintenance and the inherent risk associated with their evolution.
************************************************************************
1.7 Software Myths:
* Software myths are beliefs about software and the process used to build it that can be
traced back to the earliest days of computing.
* Myths have a number of characteristics that have made them insidious [i.e., proceeding
inconspicuously but harmfully].
* For example, myths give the impression of being reasonable statements of fact [and
sometimes do contain elements of truth].
Myth 1:
There is already a book in our possession that is loaded with guidelines and processes
for the construction of software....Won't that provide the information that my people
require to make informed decisions?
Reality:
=> The book of standards might very well exist......However, is it put to use?
=> Are those who work in the software industry aware of its existence?
Myth 2:
If we fall behind schedule, we have the ability to add more programmers and make up
the time [this strategy is sometimes referred to as the "Mongolian Horde Concept"].
Reality:
* Adding personnel to a software project that has already been delayed results in the
project being further behind schedule.
* By educating the newly added members, the time allocated for productive development
is diminished.
* While it is possible to add personnel, it is crucial that such additions are conducted in a
methodical and coordinated fashion.
Myth 3: By delegating the software development to a third-party firm, I can simply
sit back and unwind while the project is executed.
Reality:
Failure to comprehend internal software project management and control will inevitably
result in difficulties for an organisation when it attempts to outsource software projects.
Myth 1:
It suffices to commence programme writing with a broad statement of objectives.
The details can be completed later.
Reality:
• A statement that is equivocal, or has two meanings, gives rise to a multitude of
complications.
• However, unambiguous statements can only be generated via consistent and
efficient communication between the client and the developer.
• Consequently, it is not always feasible to formulate statements of requirements
that are exhaustive and consistent.
Myth 2:
Before the programme is operational, it is impossible for me to evaluate its quality.
Reality:
* Effective software quality mechanisms can be applied from the project's inception.
* Software quality reviews are more effective than testing at identifying certain
categories of software errors.
Myth 3:
The working programme is the sole deliverable work product that ensures a
successful undertaking.
Reality:
A working program is only one component of a software configuration, which comprises
many elements.
Documentation provides a foundation for effective software engineering and support.
Myth 4:
Inevitably, software engineering will force us to generate copious amounts of
superfluous documentation, which will impede our progress.
Reality:
Software engineering focuses on producing high-quality software rather than mere
document creation.
Thus, improved quality results in decreased rework.
* Decreased rework results in expedited delivery
In closing,
Most software professionals now recognize software myths for what they are: erroneous
beliefs whose acceptance has propagated ineffective management and technical practices.
Framework Activity:
* A framework activity comprises a set of software engineering actions [each a collection
of related tasks that together produce a major software engineering work product].
Design, for example, is a software engineering action.
Each action has its own set of tasks that need to be done. These tasks do some of the
work that the action implies.
A Process Framework
* It describes the
=> Technical tasks to be conducted
=> The risks that are expected
=> Resources that will be required
=> Work products to be produced
=> Work Schedule
(3) Modeling:
It explains how to build models that help both developers and clients visualise and
discuss software's desired features and functionality.
(4) Construction:
* It combines
=> Code generation [either manual (Or) automated]
=> Testing [Required uncovering errors in the code]
(5) Deployment:
* The customer receives the software and evaluates it.
The customer then gives feedback based on the evaluation.
These general framework tasks can be used during the development of small programs as
well as during the engineering of large, complex computer-based systems.
(5) Measurements:
To aid the team in delivering software, it specifies and collects process, project, and
product measures, and it can be used in conjunction with other frameworks and
overarching tasks.
(6) Software configuration management:
Effectively handles change impact management for software development.
(7) Reusability management:
* It sets up a way to make parts that can be used again and again.
* It sets rules for reusing work products.
(8) Work product preparation and production:
* It encompasses the activities required to create work products such as
=> Models
=> Documents
=> Logs, forms and lists
PROCESS MODELS
2.0 Process Models – Definition
* A process model is a distinct set of activities, actions, tasks, and work products
required to create high-quality software.
* While not flawless, these process models do provide a helpful framework for software
engineering projects.
The waterfall model, for example, suggests a systematic, sequential approach that begins
with customer specification of requirements (communication) and progresses through
=> Planning
=> Modeling
=> Construction and
=> Deployment
Problems encountered in the waterfall model:
(1) In practice, projects rarely progress in a linear fashion; consequently, the team's
progress is muddled by the constant stream of changes.
(2) The consumer often has trouble articulating their needs in detail.
(3) The customer must be patient.
* Because of the sequential structure of the waterfall model, a "blocking state" occurs
when certain members of the project team must wait for others to finish dependent tasks.
* The waterfall model can be used effectively as a process model in contexts where
requirements are fixed and work is to proceed to completion in a linear manner.
* The first increment is often a core product, meaning that the fundamental requirements
have been addressed, but many supplementary features have not yet been delivered.
* Either the fundamental product is put through extensive testing by the customer, or the
customer uses it.
* As a direct outcome of the evaluation, a strategy for the subsequent increment is
prepared.
* Each linear sequence in the incremental model applies the framework activities,
beginning with
=> Communication
=> Planning
Unlike the prototyping model, the incremental model focuses on the delivery of an
operational product with each increment: early increments are stripped-down versions of
the final product, but they provide real capability to the user and a platform for
evaluation.
What Is Agility ?
• An agile team can adapt appropriately to modifications of many kinds: changes to the
software under development, changes to team members, changes brought about by new
technology, and any other change that may affect the product or the project that builds it.
• An agile team is aware that software is created by people working in groups, and that
the success of the project depends on the abilities and talents of these people cooperating.
• Agility encompasses more than just the ability to adapt quickly to change. In addition to
that, it incorporates the agile way of thinking.
• Based on these three presumptions, we are able to assert that the process's success
resides in its adaptability (to rapidly shifting technical conditions and project
parameters).
• Flexibility is an absolute requirement for an agile process.
• An agile process must therefore be adaptable, and it must adapt incrementally, which
demands an iterative approach to software development.
• The agile team needs feedback from customers in order to achieve incremental
goals.
• The iterative methodology enables the customer to frequently evaluate the software
increment, provide the software team with any necessary input, and have some say in
the process changes that are made to meet the feedback provided by the customer.
Those who wish to attain agility are required to adhere to the following 12 principles,
as defined by the Agile Alliance:
1. The earliest and most consistent delivery of useful software is our first and
foremost concern in order to fulfil the requirements of our customers.
2. Be prepared to modify plans in response to changing needs, particularly as
development progresses. Agile processes give the client a competitive edge by
allowing them to adapt to change.
3. Regularly deliver functional software, giving attention to completion in the least
amount of time. A few weeks or a few months could pass in this case.
4. Business experts and developers work together every day for the duration of the
project.
5. Focus on individuals with a strong sense of motivation. Have faith in their abilities
to do the task and give them the environment and assistance they need.
6. Direct, in-person communication is the most effective and beneficial means of
sharing information with other team members and members of a development team.
7. The best measure of success is having software that functions as intended.
8. Agile processes promote sustainable development. Sponsors, developers, and users
should be able to maintain a constant pace indefinitely.
9. Continuous attention to technical excellence and sound design enhances agility.
10. Simplicity, the art of maximizing the amount of work not done, is essential.
11. The best architectures, specifications, and designs are created by self-organizing
teams.
12. On a regular basis, the team reflects on how it may become more efficient and
then adapts and changes its behavior to take those ideas into account.
PLANNING
• The planning process begins with the creation of a collection of stories, also known as
user stories, that outline the features and functionality required for the software to be
developed.
• Each story (similar to a "use case") is written by the customer and placed on an index
card.
• Based on the feature or function's overall business value, the customer assigns the story
a VALUE (priority).
• Each story is then assessed by the XP team members, who assign it a COST, expressed
in development weeks.
• If a story requires more than three weeks of development time, the customer is asked to
split it into smaller stories, and value and cost are assigned once more to each of the
resulting stories.
• The customer is free to write new stories at any time.
• The XP team works with its customers to determine how best to group individual user
stories into the upcoming release (the next software increment).
• The XP team will order the stories that will be developed in one of the following three
ways after a release date has been committed to:
1. All of the chosen stories will be implemented immediately (within a few weeks).
2. The stories with the highest value will be moved up in the schedule and implemented
first.
3. The riskiest stories will be moved up in the schedule and implemented first.
DESIGN
• The design of XP adheres to the "Keep it Simple" (KIS) approach. A less complicated
representation is preferable over a more complicated design.
• The design offers implementation assistance for a story exactly as it is stated; nothing
less and nothing more than that.
• XP promotes the use of CRC ("Class-Responsibility-Collaborator") cards, which
identify and organize the object-oriented classes that are relevant to the current software
increment.
• CRC cards are the only design work product produced as part of the XP process (an
illustrative card appears at the end of this section).
• XP encourages refactoring, a construction technique that is also a design technique.
• The term "REFACTORING" refers to a design process that is ongoing during the
construction of the system.
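An illustrative CRC card; the class, responsibilities, and collaborators below are
hypothetical, in the spirit of the SafeHome surveillance examples used later in these
notes:

Class: Camera
Responsibilities: control pan and zoom; capture video output; report camera status
Collaborators: FloorPlan, SurveillanceDisplay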
CODING
• According to the XP approach, once the team has finished the stories and the
preliminary design work, it should not move directly to coding, but should first develop a
set of unit tests that will exercise the software increment currently being worked on.
• Once the unit tests have been written, the developer can focus on exactly what must be
implemented in order for the tests to pass.
• Once the code is complete, it can be unit-tested immediately, which provides the
developers with instant feedback.
EX: It's possible that one person will focus on the specifics of the coding for a segment of
the design while another looks over their shoulder to make sure the coding standards are
being adhered to.
• The code that is created will "FIT" into the larger context of the story being
implemented.
• The integration work is the duty of the pair programmers. This technique of continuous
integration helps to avoid problems with compatibility and interfacing, and it creates a
"SMOKE TESTING" environment, which helps to expose defects at an earlier stage.
TESTING
• The newly developed unit tests should be implemented using a framework that enables
them to be automated. Whenever code is modified in any way, this encourages a
regression-testing strategy.
• Once the unit tests have been organized into a "universal testing suite," integration
and validation testing of the system can be performed daily. This gives the XP team a
continuous indication of progress and can raise early warning signs if things begin to
deteriorate.
• XP acceptance tests, also known as customer tests, are tests that are specified by
the customer and concentrate on the overall features and functionality of the
system that are reviewed by the client.
• Acceptance tests are created from user stories after they have been incorporated
into a product release.
• Once the XP team has delivered the first release of the project, it computes PROJECT
VELOCITY, which is the number of customer stories implemented during that first
release. Project velocity can then be used to
1. Contribute to the estimation of delivery dates and the release schedule for
subsequent versions and
2. Determine whether an over commitment has been made for all of the stories that
are part of the overall development project. In the event that an over commitment
takes place, either the substance of the release or the end-delivery dates will be
adjusted.
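A small worked example with made-up numbers: if nine customer stories were
implemented in the first release, project velocity is nine stories per increment; if the
stories already committed for the next increment total fourteen, an over commitment has
occurred, and either the content of the release or the end-delivery date must be adjusted.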
• As the development process progresses, the customer may add new stories, change the
value of an existing story, split stories, or remove them. The XP team then reviews all
remaining releases and adjusts its plans accordingly.
SCRUM
SCRUM principles are consistent with the agile manifesto :
• Small working teams are organised to make the most of their communication,
minimise their overhead costs, and make the most of their opportunity to share
their knowledge.
• In order to "ensure the best product is produced," the process needs to be flexible
enough to accommodate both changes in technology and in business.
• The procedure results in frequent software increments "that can be inspected,
adjusted, tested, documented, and built upon."
• The work of development and the individuals who carry it out are separated "into
clean, low coupling partitions, or packets"
• As the product is being assembled, testing and documentation are carried out in a
continuous manner.
• The SCRUM methodology affords users the "flexibility to declare a product
'done' whenever it is necessary to do so."
• The SCRUM principles are utilised to direct the development activities that take
place within a process that also includes the activities of the following framework:
requirements, analysis, design, evolution, and delivery.
• Within each activity of the framework, work tasks take place according to a
pattern of processes known as Sprint.
• The work that is completed during a Sprint (the number of sprints necessary for
each framework activity will vary based on the complexity and scale of the
product) is suited to the problem that is currently being worked on. Additionally,
the SCRUM team defines the work and frequently modifies it in real time. The
following diagram outlines the general steps involved in the SCRUM process.
• SCRUM places an emphasis on the utilisation of a collection of "Software process
patterns," which have been demonstrated to be effective for projects that have
tight timeframes, fluctuating requirements, and high business criticality.
• Scrum meetings are brief meetings that are held every day by the scrum team. During
the meeting, there are three important issues that are discussed, and each member of the
team provides an answer.
o Since our previous gathering, what have you been up to?
o What kinds of challenges are you facing right now?
o What do you hope to have accomplished by the time we get together again as a team?
• The gathering is directed by a group leader known as a "Scrum master," who also
evaluates the contributions made by each individual. The team is able to identify potential
difficulties at the earliest possible stage thanks to the Scrum sessions.
• The regular meetings facilitate "knowledge socialization," which in turn contributes to
the development of a structure that allows the team to organise itself.
• Demos - Deliver the software increment to the customer so that the customer can
showcase and evaluate the functionality that has been built. This allows the customer to
provide feedback on the functionality.
• The demonstration might not have all of the functionality that was anticipated, but it
should be possible to implement these features within the time constraint that was set.
Feasibility study – In order to determine if an application is a good fit for the DSDM
process, it is necessary to first establish the fundamental business requirements and
constraints connected with the application.
Business Study – Establishes the functional and information requirements that must be
met in order for the application to be able to give value to the company, as well as
specifies the fundamental architecture of the application and outlines the needs that must
be met for the application to be maintainable.
Implementation – Places the latest software increment into its operational environment.
It is essential to keep in mind that:
1) the increment may not be 100 percent complete, and
2) changes may be requested while the increment is being put into place.
• DSDM and XP can be combined, providing an approach that pairs a solid process
model (DSDM) with the nuts-and-bolts practices (XP) needed to build software
increments.
AGILE MODELING(AM)
Software engineers often find themselves in the position of having to construct
massive, mission-critical systems for businesses.
Modelling the scope and complexity of such systems is necessary in order to achieve
the following goals:
1. Ensuring that all stakeholders have a better understanding of what must be
accomplished;
2. Ensuring that the problem can be partitioned effectively among the people who must
solve it; and
3. Ensuring that quality can be assessed at each stage of the engineering and construction
of the system.
• Many other software engineering modelling methods and notations have been
suggested for use in the process of analysis and design; however, despite the major
virtues of these methods, it has been found that they are difficult to implement and
demanding to maintain.
• The only methods that provide a sustainable benefit for larger projects are the
analysis and the design modelling.
• Modeling
• Implementation
• Testing
• Deployment
• Configuration and Project Management
• Environment Management
Component-Based Development
1. Research and analysis are conducted on the component-based products that are
currently available on the market for the application domain in question.
2. Problems with component integration are taken into consideration.
3. A software architecture that can accommodate the components is built.
4. The architecture incorporates the components into its structure.
5. Extensive testing is carried out to validate the correct operation of the component.
The use of a component-based development methodology leads to increased software
reuse, which provides software engineers with measurable benefits.
The Formal Methods Model
• Formal methods enable you to specify, develop, and verify a computer-based system by
applying a rigorous mathematical notation.
• The creation of formal models now requires a significant amount of time and
money due to their complexity.
• Extensive training is necessary since only a small percentage of software
engineers have the appropriate experience to apply formal approaches.
• Customers who are not technically savvy will have a tough time understanding
how to use the models as a communication channel.
* In the Beta testing phase, the programme is made available to actual end users.
Feedback on reported issues and necessary tweaks comes directly from customers.
Methods of Assembly
MODULE-2
REQUIREMENTS ENGINEERING
What is a Requirement?
Requirements engineering is the systematic procedure of determining the specific
services that a client demands from a system, as well as the limitations and conditions
that govern its operation and development. Requirements are the explicit descriptions of
the services and limitations of a system that are developed during the process of
requirements engineering.
Different types of Requirement Specification
Domain requirements
Domain-specific requirements are system requirements that are derived from the
application domain and reflect the unique characteristics and functionalities of that
domain.
Functional requirements
• Specify the features and services of the system.
• They depend on the nature of the software, its intended users, and the type of system on
which it runs.
• The functional system requirements must be specific, in contrast to the functional user
requirements which might be more general descriptions of the system's expected
behaviour.
Requirements completeness and consistency
In principle, requirements should be both complete and consistent.
Complete
– All services required by the user should be defined.
Consistent
– There should be no conflicts or contradictions in the descriptions of the system
facilities.
In practice, it is impossible to produce a completely exhaustive and consistent
requirements document.
Non-functional requirements
• These define system properties (e.g., reliability, response time) and constraints
(e.g., I/O device capability, system representations).
• Non-functional requirements can be more critical than functional requirements; if they
are not met, the system may be useless.
• The implementation of these needs may be spread out across the system. There
are two factors contributing to this:
• The non-functional requirements of a system may have more of an impact on the
system's architecture as a whole than on its individual components. To fulfil performance
criteria, for instance, you might have to organise the system in such a way as to reduce
the amount of communication that occurs between its various components.
• It is possible for a single non-functional demand, such as a requirement for system
security, to spawn a number of related functional requirements that describe new system
services that are needed.
Non-functional classifications
• Product requirements
• Prescriptions on how the delivered product must perform, including time to
completion, reliability, etc.
• Organisational requirements
• Company-specific requirements, such as those for meeting process standards,
meeting deadlines, etc., that are a direct outcome of the company's policies and
procedures.
• External requirements
- Requirements that are imposed on the system and its development process by elements
that are not directly related to the system itself, such as interoperability requirements,
legislative requirements, and so on.
Requirements specification
• An ideal requirements definition would result in user and system needs that are
clear, unambiguous, easy to understand, complete, consistent, and well organized
into a requirements document.
• System users without extensive technical knowledge should be able to grasp
the user requirements for a system if they accurately represent the system's
functional and nonfunctional requirements.
• The system's external behaviour and operational restrictions should be simply
described in the requirements.
• In principle, the requirements should not be concerned with the system's design or
implementation. In practice, however, excluding all design information is not feasible.
This is due to a number of factors:
1. To better organise the requirements specification, you may need to create an
initial architecture of the system. Requirements are categorised by the several
subsystems that make up the whole.
2. Most systems need to communicate with one another, which might place
limitations on the design and additional demands on the new system.
Requirements Analysis:
The system requirements are derived through activities such as analysing tasks, talking
to prospective users, and evaluating existing systems. Through these activities, the
analyst gains a better understanding of the system to be specified.
Requirements Specification:
The development of this document is conducted concurrently with the high-level design
process. Deficiencies in requirement specification are identified during the process of
document development, necessitating modifications to rectify these issues.
* A comprehensive and accurate specification of the system requirements is established,
serving as the foundation for the contractual agreement between the client and the
software developer.
Requirement Elicitation:
* It is a procedure in which review questions are put to the client, the users, and others
to inquire about:
=> What the goal of the system is
=> What exactly is it that has to be done?
=> How does the system accommodate the requirements of the organization?
=> How is the product or system to be used on a day-to-day basis?
4. Requirements specification: The specifications are written down and fed into the next
round of the spiral. Formal or informal requirements documents may be produced.
Viewpoints
• Viewpoints serve as a method for organizing requirements in order to accurately
represent the many perspectives of different stakeholders. Stakeholders can be
categorized based on several perspectives.
• A multi-perspective study is crucial because there is no definitive method for
analyzing system requirements.
Types of viewpoint
• Interactor viewpoints
• Individuals or other entities that directly engage with the system. In an
ATM, the customer and the account database are interactor viewpoints.
• Indirect viewpoints
• Individuals who are not directly involved in the system's operation but
who have an impact on its requirements. Both the management and the
security staff of an ATM are indirect viewpoints.
• Domain viewpoints
• The needs are influenced by the characteristics and constraints of the
domain. In the context of an Automated Teller Machine (ATM), an
illustration would be the protocols and guidelines governing the exchange
of information between different banks.
Interviewing
• During either a formal or a casual interview, the RE team will ask stakeholders
questions about the system that they currently use as well as the system that will be
constructed.
• There are two different kinds of interviews: closed interviews, in which participants
answer a predetermined series of questions, and open interviews, in which there is no set
agenda and a wide range of topics is explored with the stakeholders.
Scenarios
• Scenarios are real-life examples of how a system might be used.
• Scenarios should include the following: a description of the starting situation and a
description of the goal of the scenario.
Requirements checking
• Validity. Does the system offer the functions that most effectively meet the
customer's requirements?
• Consistency. Are there any conflicts arising from requirements?
• Completeness. Does the customer's requirements encompass all necessary
functions?
• Realism. Is it feasible to achieve the requirements within the constraints of the
existing budget and technology?
• Verifiability. Could you verify the requirements?
Requirements validation techniques
• Requirements reviews
– A methodical manual analysis of the specifications.
• Prototyping
– Verifying requirements with an executable model of the system.
• Test-case generation
– Creating tests to verify the testability of requirements.
Requirements reviews
• While the requirements definition is being developed, regular reviews ought to be
conducted.
• Staff from the contractor and the client should participate in reviews.
Review checks
• Verifiability. Can the requirement be tested in a practical way?
• Comprehensibility. Does the requirement make sense to you?
• Traceability. Does the requirement clearly identify where it came from?
• Adaptability. Is it possible to modify the requirement without significantly
affecting other requirements?
Requirements Management
• Managing evolving needs during requirements engineering and system
development is known as requirements management.
• There will always be incomplete and inconsistent requirements;
– As business needs evolve and a deeper understanding of the system is
created, new requirements will always arise;
– Diverse perspectives will result in diverse requirements, many of which
are contradictory.
Requirements evolution
Requirements classification
Requirements Modeling
Requirements Analysis
• Models of the following kinds are produced as a result of the requirements
modelling process:
• Requirements models based on scenarios, including input from a wide range of
system "actors."
• Class-oriented models capture the interplay between classes in an object-oriented
framework and the properties and operations they use to fulfil system needs.
• Pattern- and behavior-based models that show how the programme responds to
outside "events."
• Data models representing the problem's information space.
• Models focused on data flow that depict the system's functional components and
the transformations they enact on data as it moves through the system.
• There are three main goals that the requirements model has to accomplish:
(1) describing the needs of the customer;
(2) providing a foundation for the software architecture; and
(3) defining a set of requirements that can be verified once the software is developed.
Data Objects
• A data object is a representation of composite information that the software
must understand.
• Anything that produces or consumes information can be considered a data object,
for example:
– a thing (such as a report or a display),
– an occurrence (such as a telephone call) or an event (such as an alarm),
– a role (such as a salesperson),
– an organisational unit (such as an accounting department),
– a place (such as a warehouse), or
– a structure (such as a file).
Relationships
• There are various methods in which data objects are related to one another.
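For instance, the data objects person and car can be connected by the relationship owns;
since one person may own many cars, the relationship carries a one-to-many cardinality.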
SCENARIO-BASED MODELING
Firstly, what should you write about?
These are the problems that need to be addressed in order for use cases to be a useful tool
for requirements modeling.
The SafeHome home surveillance functions performed by the homeowner
actor:
• Choose the camera you want to view.
• Make sure thumbnails are requested from all of the cameras.
• Views of the camera can be seen in a window on your computer.
• Manage the camera's pan and zoom settings individually.
• Record the output of the camera in a selectable manner.
• Play back the output from the camera.
• Use the Internet to access the video surveillance cameras.
Use case: Access camera surveillance via the Internet—display camera views
Actor: homeowner
1. The homeowner accesses the SafeHome Products website.
2. The user ID of the homeowner is entered into the system.
3. The homeowner is required to enter two passwords, each of which must be at least
eight characters long.
4. The system presents buttons for all of the primary functions.
5. From the major function buttons, the homeowner selects "surveillance."
6. The homeowner then chooses the option to "pick a camera."
7. The system will show you the layout of the house's floor plan.
8. The homeowner chooses an icon for a camera from the floor layout.
9. The homeowner clicks the "view" button on their computer screen.
Activity diagram for Access camera surveillance via the Internet— display camera
views function.
Swimlane diagram for Access camera surveillance via the Internet—display camera
views function.
The following size-related variables dictate how much focus is placed on requirements
modeling for Web and mobile applications:
(1) The scope and intricacy of the application increment;
(2) The quantity of stakeholders (analysis can assist in identifying conflicting
requirements originating from various sources);
(3) The size of the app development team;
(4) The extent to which team members have collaborated previously (analysis can aid in
creating a shared understanding of the project); and
(5) The duration elapsed since the team's last collaboration.
Module 3
1. Class Diagram
Purpose: Represents the static structure of a system by showing its classes, attributes,
operations, and the relationships among objects.
• Class: A blueprint for creating objects. It encapsulates data for the object and methods to
manipulate that data.
o Attributes: Properties or fields of a class.
o Operations (Methods): Functions or procedures that the class can perform.
• Association: A relationship between classes.
o Multiplicity: Defines how many instances of a class are associated with one
instance of another class (e.g., one-to-many, many-to-many).
• Aggregation: A type of association that represents a "whole-part" relationship. For
example, a library and books.
• Composition: A stronger form of aggregation with a life-cycle dependency. For
example, a house and its rooms.
• Inheritance: A mechanism where one class (subclass) inherits attributes and operations
from another class (superclass).
• Interface: Defines a contract that implementing classes must follow, without providing
the implementation details.
• Dependency: A relationship indicating that a change in one class may affect another
class.
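To make these relationships concrete, here is a minimal sketch in Python; the Library,
Book, House, and Room classes are hypothetical illustrations echoing the aggregation
and composition examples above:

class Searchable:                  # Interface: a contract without implementation details
    def search(self, text):
        raise NotImplementedError

class Book:
    def __init__(self, title):
        self.title = title         # Attribute of the class

class Library(Searchable):         # Inheritance: Library implements the Searchable contract
    def __init__(self):
        self.books = []            # Aggregation: a Library holds many Books ("whole-part"),
                                   # but a Book can exist without the Library
    def add(self, book):           # Association with one-to-many multiplicity
        self.books.append(book)
    def search(self, text):        # Operation (method)
        return [b for b in self.books if text in b.title]

class Room:
    def __init__(self, name):
        self.name = name

class House:                       # Composition: Rooms are created and owned by the House
    def __init__(self, room_names):
        self.rooms = [Room(n) for n in room_names]  # Rooms do not outlive the House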
2. Use Case Diagram
Purpose: Describes the functional requirements of a system from the end user's perspective.
• Actor: An external entity (user or another system) that interacts with the system.
• Use Case: A specific functionality or service that the system provides to actors.
• System Boundary: Defines the scope of the system, showing which use cases are
included.
• Relationship:
o Association: A line connecting an actor to a use case indicating interaction.
o Include: A relationship where one use case includes the functionality of another
use case.
o Extend: A relationship where one use case extends the behavior of another use
case.
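For instance, in an ATM model, a "Withdraw Cash" use case might include an
"Authenticate Customer" use case (behaviour that always runs as part of it), while a
"Print Receipt" use case might extend "Withdraw Cash" (optional behaviour performed
only under certain conditions).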
3. Activity Diagram
Purpose: Illustrates the flow of activities or actions in a system, showing the sequence and
conditions for these activities.
4. Interaction Diagram
Purpose: Focuses on the flow of messages between objects and how they collaborate.
5. State Machine Diagram
Purpose: Models the dynamic behavior of a system or an object by showing its states and
transitions.
6. Component Diagram
Purpose: Shows how a system is divided into components and the dependencies among them.
7. Deployment Diagram
Purpose: Shows the physical deployment of artifacts on nodes and their relationships.
These UML diagrams collectively help in visualizing, specifying, constructing, and documenting
the artifacts of a software system, ensuring a comprehensive understanding of both static and
dynamic aspects of the system.
1. Process Metrics
Purpose: To evaluate the efficiency and effectiveness of the software development process.
• Cycle Time: The time taken to complete a particular process or task from start to finish.
For instance, the time to develop a feature or resolve a defect.
• Lead Time: The total time from when a request is made until the final delivery. This
includes development time, testing time, and any other phases.
• Throughput: The number of units of work (e.g., features, defects) completed in a
specific period. It helps in assessing productivity.
• Defect Density: The number of defects per unit size of the code (e.g., per 1000 lines of
code). This helps in evaluating the quality of the development process.
• Work in Progress (WIP): The number of tasks or features currently being worked on. It
helps in understanding the current load and efficiency.
• Process Compliance: The degree to which the development process adheres to defined
standards and practices.
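A small worked example, with made-up numbers, for two of the metrics above: if testing
uncovers 45 defects in a 15,000-line component, defect density = 45 / 15 KLOC = 3.0
defects per KLOC; if a team completes 12 work items over a 4-week period, throughput =
12 / 4 = 3 items per week.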
2. Project Metrics
• Cost Performance Index (CPI): A measure of cost efficiency in a project. CPI = Earned
Value / Actual Cost. A CPI greater than 1 indicates cost efficiency.
• Schedule Performance Index (SPI): A measure of schedule efficiency. SPI = Earned
Value / Planned Value. An SPI greater than 1 indicates ahead of schedule.
• Earned Value (EV): The value of work actually performed compared to the baseline
plan. It helps in measuring project performance.
• Planned Value (PV): The value of work planned to be performed by a specific time. It
helps in assessing whether the project is on track.
• Actual Cost (AC): The actual cost incurred for the work performed by a specific time. It
helps in comparing with the planned costs.
• Variance Analysis: Analyzing the differences between planned and actual performance.
This includes cost variance (CV = EV - AC) and schedule variance (SV = EV - PV).
• Risk Metrics: Measures related to risk management, such as the number of identified
risks, risk impact, and the effectiveness of risk mitigation actions.
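A worked example with illustrative numbers: suppose EV = $40,000, PV = $50,000, and
AC = $44,000. Then CPI = 40,000 / 44,000 ≈ 0.91 (less than 1, so the project is over
budget), SPI = 40,000 / 50,000 = 0.80 (less than 1, so the project is behind schedule),
CV = EV - AC = -$4,000, and SV = EV - PV = -$10,000.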
Software Measurement
• Size Metrics: Quantify the size of software components. Common measures include:
o Lines of Code (LOC): The number of lines in the codebase. It helps in estimating
complexity and effort.
o Function Points (FP): A measure of functionality provided to the user,
independent of the programming language.
• Complexity Metrics: Measure the complexity of software to estimate effort and
maintainability.
o Cyclomatic Complexity: Measures the number of linearly independent paths
through the program's source code. Higher values indicate higher complexity
(a small worked sketch follows this list).
o Halstead Complexity Measures: Metrics based on the number of operators and
operands in the code.
• Maintainability Metrics: Assess how easily software can be modified.
o Maintainability Index: A composite metric that includes cyclomatic complexity,
lines of code, and other factors to estimate maintainability.
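As a small worked sketch of the Cyclomatic Complexity metric, consider this
hypothetical function; each decision adds one path over a base of one:

def classify(values):
    count = 0
    for v in values:        # decision 1: the loop condition
        if v < 0:           # decision 2
            continue
        if v > 100:         # decision 3
            break
        count += 1
    return count

# 3 decisions + 1 = cyclomatic complexity of 4, i.e., four linearly
# independent paths through the function.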
By employing these metrics, organizations can gain insights into their software development
processes, manage projects more effectively, and ensure high-quality software products. Metrics
help in making data-driven decisions, improving performance, and achieving better alignment
with business goals.
MODULE-4
TESTING STRATEGIES
* Various procedures are carried out under the umbrella term of "validation" to guarantee
that the final product of software development is traceable to the requirements of the
customer.
Example:
Verification: Are we building the product right?
Validation: Are we building the right product?
* The processes of verification and validation involve a wide range of SQA operations
that include the following:
=> Formal Technical Reviews
=> Quality and Configuration audits
=> Performance Monitoring
=> Simulation
=> Feasibility Study
=> Documentation Review
=> Database Review
=> Analysis Algorithm
=> Development Testing
=> Usability Testing
=> Qualification Testing
=> Installation Testing
(2) Organizing for Software Testing:
* The developer often also carries out integration testing, which is a phase of testing that
comes before the complete software architecture is constructed.
* Testing the various units (components) of the program is always the responsibility of
the software developer.
* Once the software architecture has been completed, an independent test group (ITG) is
engaged to evaluate the product. The purpose of an ITG is to remove the inherent
problems that arise when the builder is allowed to test what has been built. Throughout a
software project, the developer and the ITG work closely together to ensure that thorough
testing will be performed.
* The developer needs to be available when testing is being done so that he or she may
fix any mistakes that are found.
* Unit testing begins at the centre of the spiral and concentrates on each unit of the
software as implemented in source code.
* Moving outward along the spiral, we come to integration testing, which focuses on
design and the construction of the software architecture.
* Next, validation testing is performed, in which requirements established during
software requirements analysis are validated against the software that has been
constructed.
* Finally, we arrive at system testing, in which the software and the other elements of the
system are tested as a whole.
Software Testing Steps:
(i) Unit Testing:
* The initial phase of testing involves the examination of each component in isolation to
verify its proper functioning as an independent unit.
* Unit testing employs rigorous testing techniques to achieve comprehensive coverage
and optimal error detection within the control structure of the components.
* Subsequently, the components are integrated to form cohesive software packages.
* A series of high-order tests are executed after the software has been integrated [built].
Independent Paths:
* Each and every basis path that passes through the control structures is investigated to
guarantee that
=> Each and every statement contained within a module has been run at least once.
Boundary Conditions:
* These are verified to ensure that
=> the module runs correctly within the boundaries that have been specified in order to
limit (Or restrict) processing.
* And lastly, all possible error-handling routes are put through their paces.
The flow of data across a module's interface must be validated before any other test is
initiated.
During the unit testing process, one of the most important tasks is to perform selective
testing of the execution path.
Boundary Testing:
* This is one of the most significant responsibilities involved in unit testing
* A common cause of software failure is when it reaches one of its limits (for example,
an error frequently happens when the nth element of an n-dimensional array is handled).
When evaluating error handling, potential errors that should be tested include the
following:
(1) The error description is unintelligible or unclear.
(2) The error reported does not match the error actually encountered.
(3) The error condition causes operating system intervention before error handling takes
place.
(4) Processing under exception conditions is incorrect.
(5) The error description is insufficient to help identify the error's cause.
Driver:
* In most cases, a driver is little more than an application's "main programme." It
=> accepts test-case data,
=> passes these data to the component [about to be tested], and
=> prints the relevant results.
Stub:
* A stub serves to replace modules that are subordinate to (called by) the component to
be tested, using the subordinate module's interface and returning control after minimal
data manipulation.
* Drivers and stubs both represent software that must be written but is not delivered with
the final software product.
* If the drivers and stubs are kept simple, the actual overhead is reasonably modest;
otherwise, it can be substantial.
* Designing a component with high cohesion simplifies the unit-testing process.
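A minimal sketch in Python of this idea; the component, stub data, and names below are
hypothetical. The driver supplies test-case data to the component under test and prints
the relevant results, while the stub stands in for a subordinate price-lookup module:

# Component under test: totals item prices via a subordinate lookup module.
def compute_total(item_ids, lookup_price):
    total = 0.0
    for item_id in item_ids:
        total += lookup_price(item_id)    # call into the subordinate module
    return total

# Stub: replaces the real price database with fixed responses.
def price_stub(item_id):
    return {"A": 10.0, "B": 2.5}[item_id]

# Driver: a small "main programme" that supplies test-case data,
# passes it to the component under test, and prints the results.
if __name__ == "__main__":
    result = compute_total(["A", "B", "B"], price_stub)
    print("expected 15.0, got", result)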
Integration Testing:
* Once all of the modules have passed their individual unit tests, a question remains: if
they all work individually, why should we doubt that they will work when we put them
together?
* Integration testing addresses this problem.
Interfacing:
It is the mechanism that brings together all of the individual modules.
The following issues can arise during interfacing:
=> Data can be lost when moving across an interface.
=> One module can have an inadvertent, adverse effect on another module.
=> Subfunctions, when combined, might not produce the principal function that is
required.
=> An imprecision that is tolerable on its own can be magnified to intolerable levels.
=> Global data structures can present problems.
Integration Testing – Definition:
* It is a methodical approach to building the software architecture, while at the same time
running tests to find faults linked with the software's interface.
* The goal is to use components that have undergone unit testing and construct a
programme structure according to the specifications set out by the design.
Incremental Integration:
* In this scenario, the programme is built and tested in small steps
* Errors are easy to localise and rectify
* Interfaces are tested in their entirety and a systematic testing strategy may be used
* There are several variants of the incremental integration method to choose from.
Top-Down Integration:
* It is an incremental approach in which the software architecture is built up, and tested,
in stages.
* The modules are integrated by moving down the control hierarchy, beginning with the main
control module, also referred to as the main programme. Subordinate modules can be
incorporated into the principal control module either depth-first or breadth-first.
Advantages:
(1) Top-down integration has several advantages, including ensuring important control or
decision points are proven early in the testing process.
(2) In order to gain the trust of both the customer and the developer, it is advantageous to do an
early demonstration of the product's functional capabilities. This is noteworthy because it
shows that the feature is operating according to plan, which is crucial information to know.
(3) Although the strategy sounds relatively uncomplicated, in practice it can give rise to a
number of logistical problems.
Example:
Clusters 1, 2, and 3 are formed by assembling the components, as shown in the image below.
* A driver assists in putting each cluster through its paces.
* The components of clusters 1 and 2 are subordinate to the Ma component. After drivers
D1 and D2 are removed, the clusters are connected to Ma directly. Similarly, driver D3 for
cluster 3 is removed and the cluster is integrated with Mb.
* The Mc structure incorporates both Ma and Mb as components.
Bottom up Integration
* Software engineers can record test cases and outcomes with capture and playback tools
for later comparison and playback.
* There are three types of test cases in the regression test suite:
(i) A sample set of tests that will run through every feature of the
software
(ii) Further testing concentrating on software features that are probably
going to be impacted by the modification
(iii) Tests concentrating on the modified software components
Smoke Testing:
* When developing software products, this approach to integration testing is frequently
employed.
* It is intended to serve as a pacing mechanism for time-critical projects, enabling the
software team to evaluate its work on a regular basis.
Critical Module:
* A critical module is one that has one or more of the following characteristics:
(i) Addresses several software requirements
(ii) Has a high level of control [resides relatively high in program
structure]
(iii) Is complex or error prone
(iv) Has definite performance requirements
* Testing the crucial module as soon as feasible is recommended.
Validation Testing:
* The validation process at the system level concentrates on the following:
=> User – visible actions
=> User recognizable output from the system
* Validation testing is only successful when the program operates as the customer would
reasonably expect it to.
* The Software Requirements Specifications define reasonable expectations
* A section of the specification called validation criteria serves as the foundation for a
validation testing approach
* One of the following two scenarios could arise following the validation test:
(i) The function or performance characteristics meet the specification and are
accepted.
(ii) A deviation from the specification is uncovered and a deficiency list is
created.
* Deviations or errors discovered at this point can rarely be fixed in time for the
scheduled delivery date.
System Testing:
* It is a set of several tests with the main goal of thoroughly testing the computer-based
system.
* Although the goals of each test vary, they always aim to confirm that the various
components of the system have been correctly integrated and are carrying out their
designated tasks.
Types:
(i) Recovery Testing
(ii) Security Testing
(iii) Stress Testing
(iv) Performance Testing
(v) Sensitivity Testing
(1) Recovery Testing:
* It is a type of system test that involves purposefully breaking the programme in a
number of various ways and checking to see whether or not recovery is carried out
properly.
* If recovery is automatic (carried out by the system itself), then reinitialization,
checkpointing mechanisms, data recovery, and restart are each evaluated for correctness.
* If recovery requires human intervention, the mean time to repair (MTTR) is evaluated
to determine whether it falls within an acceptable range of values.
(2) Security Testing:
* In the process of evaluating the system's security, the tester plays the role of an
attacker who wants to break into the system.
* It verifies that the protection mechanisms built into a system will, in fact, protect it
from improper penetration.
* The execution of a test case is the first step in the debugging process.
Next, the results are analysed, and a discrepancy between expected and actual
performance is discovered.
Next, debugging attempts to match symptoms with the underlying causes of errors, which
ultimately leads to error correction.
Finally, debugging always has one of two possible outcomes:
(i) the cause will be found and corrected, or
(ii) the cause will not be found.
(6) It may be challenging to correctly duplicate the conditions of the input [for example,
in a real-time application when the ordering of the data is unpredictable].
(7) The symptom may be intermittent. This is especially common in embedded systems,
which couple hardware and software inextricably.
(8) The symptom may be the result of a lot of reasons that are spread across a variety of
jobs that are being executed on various processors.
* The greater the consequences of an error, the greater the pressure to track down its
cause.
* Paradoxically, this pressure often compels the software developer to fix one problem
while simultaneously introducing two more.
White box testing
• Test cases that exercise specific sets of conditions and/or loops are derived to
test logical paths through the software.
• Testing is referred to as white-box testing when it is based on a close
examination of the product's procedural detail.
• The "status of the program" can be examined at various points in time.
• White-box testing, also known as glass-box testing, is a test-case design method
that uses the control structure of the procedural design to derive test cases.
Through the use of this strategy, SE is able to generate test cases that
1. Guarantee that all independent paths within a module have been exercised at
least once,
2. Exercise all logical decisions on both their true and false sides,
3. Execute all loops at their boundaries and within their operational bounds, and
4. Exercise internal data structures to ensure their validity.
Methods:
1. Flow graph notation
2. Independent program paths or Cyclomatic complexity
3. Deriving test cases
4. Graph Matrices
Each circle in figure B, referred to as a flow graph node, represents one or more
procedural statements.
• A sequence of process boxes and a decision diamond can map into a single node.
• The arrows on the flow graph, called edges or links, represent flow of control
and are analogous to flowchart arrows.
• An edge must terminate at a node, even if the node does not represent any
procedural statements.
• Any area bounded by edges and nodes is called a region. When counting regions,
the area outside the graph is included in the count.
• When a compound condition is encountered during procedural design, the flow
graph becomes slightly more complicated.
• Take, for instance, the following set of independent paths for a flow graph:
– Path 1: 1-11
– Path 2: 1-2-3-4-5-10-1-11
– Path 3: 1-2-3-6-8-9-1-11
– Path 4: 1-2-3-6-7-9-1-11
• Take note that every new path results in the creation of a new edge.
• The path 1-2-3-4-5-10-1-2-3-6-8-9-1-11 is not an independent path because it is
merely a combination of already-specified paths and does not traverse any new
edges.
• Test cases should be built in such a way that they are forced to follow these basic set
paths.
• Each and every line in the programme should have at least one opportunity to be run,
and each and every condition should have been tested both ways (true and false).
How do we know how many paths to look for?
• Cyclomatic complexity is a software metric that provides a quantitative measure
of the logical complexity of a program, based on the number of paths through it.
• It defines the number of independent paths in the basis set, and thus the number
of tests that must be carried out.
• Cyclomatic complexity can be computed in one of three ways:
1. The number of regions of the flow graph corresponds to the cyclomatic complexity.
2. The cyclomatic complexity, denoted by the symbol V(G), of a flow graph, G, is
defined as V(G) = E - N + 2, where E represents the number of edges in the flow graph
and N represents the number of nodes in the flow graph.
3. A flow graph's cyclomatic complexity, V(G), can also be written as V(G) = P + 1,
where P is the number of predicate nodes contained in the flow graph G.
As a result, we have an upper bound on the total number of tests based on the value of
V(G).
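As a small illustration of formulas (2) and (3), the following Python sketch (a
minimal example; the edge list encodes the flow graph from the path example above,
and the helper names are my own, not from the notes) computes V(G) both ways:

# Flow graph from the path example above: edges as (from, to) pairs.
edges = [(1, 2), (1, 11), (2, 3), (3, 4), (3, 6), (4, 5), (5, 10),
         (6, 7), (6, 8), (7, 9), (8, 9), (9, 1), (10, 1)]

nodes = {n for e in edges for n in e}   # N = 11 nodes
E, N = len(edges), len(nodes)           # E = 13 edges

# Method 2: V(G) = E - N + 2
v_edges = E - N + 2

# Method 3: V(G) = P + 1, where P = number of predicate nodes
# (nodes with more than one outgoing edge).
out_degree = {}
for src, _ in edges:
    out_degree[src] = out_degree.get(src, 0) + 1
P = sum(1 for d in out_degree.values() if d > 1)
v_predicates = P + 1

print(v_edges, v_predicates)  # both print 4 -> four basis paths, four tests

Both computations agree with the four independent paths listed earlier.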
Deriving Test Cases
• This strategy consists of a predetermined sequence of steps.
• It is applied to a procedure as represented in a program design language (PDL)
or in source code.
* By applying black-box testing methods, we derive a set of test cases that satisfy
the following criteria:
(i) test cases that reduce, by a count that is greater than one, the number of
additional test cases that must be designed to achieve reasonable testing; and
(ii) test cases that tell us something about the presence or absence of classes of
errors, rather than errors associated only with the specific test at hand.
• Black-box testing deliberately disregards control structure and focuses attention
on the information domain. Tests are designed to answer the following questions:
How is functional validity tested?
How are system behaviour and performance tested?
What classes of input will make good test cases?
• The nodes in the network are depicted as circles, and the connections between them can
take on a variety of forms.
• A one-way relationship is denoted by a directed link, which is depicted as an arrow and
indicates that the link only goes in one direction.
• The relationship is considered to apply in both directions when there is a bidirectional
link, which is also referred to as a symmetric link.
• When multiple distinct associations need to be established between two nodes in a
graph, parallel links are used.
Example
• Object #1: the "New File" menu selection.
• Object #2: the document window that is generated.
• Object #3: the text contained in the document.
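As a rough sketch of how this object graph might be represented when designing
graph-based tests (the identifier names are illustrative, not from the notes):

# Nodes are objects; directed links are (source, target, relationship).
nodes = ["new_file_menu_select", "document_window", "document_text"]
links = [
    ("new_file_menu_select", "document_window", "generates"),
    ("document_window", "document_text", "contains"),
]

# A simple test objective: exercise every relationship at least once.
for src, dst, rel in links:
    print(f"test: verify that {src} {rel} {dst}")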
1. If an input condition specifies a range, one valid and two invalid equivalence
classes are defined.
2. If an input condition requires a specific value, one valid and two invalid
equivalence classes are defined.
3. If an input condition specifies a member of a set, one valid and one invalid
equivalence class are defined.
4. When an input condition is Boolean, a valid class and an invalid class are defined.
Example
• Area code: a blank or a three-digit number
• Prefix: a three-digit number not beginning with 0 or 1
• Suffix: a four-digit number
• Password: a six-character alphanumeric string
• Commands: check, deposit, bill pay, and the like
The input conditions associated with each data element can be specified as:
• Area code: Boolean input condition (the area code may or may not be present);
range (values defined between 200 and 999, with specific exceptions)
• Prefix: range (values from 200 to 999, excluding those beginning with 0 or 1)
• Suffix: value (four-digit length)
• Password: Boolean input condition (a password may or may not be present);
value (six-character string)
• Command: set (containing the commands noted above)
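A minimal sketch of how representative test values might be chosen, one per
equivalence class, for the fields above (the specific sample values are
illustrative assumptions, not from the notes):

# One representative value per equivalence class for the example above.
classes = {
    "area_code": {"valid": ["", "415"],             # blank, or 200-999
                  "invalid": ["12", "1234", "099"]},
    "prefix":    {"valid": ["555"],                 # 200-999, not starting 0/1
                  "invalid": ["055", "155"]},
    "suffix":    {"valid": ["1234"],                # exactly four digits
                  "invalid": ["123", "12345"]},
    "password":  {"valid": ["ab12cd"],              # six alphanumeric characters
                  "invalid": ["abc", "abcdefgh"]},
    "command":   {"valid": ["deposit", "bill pay"], # member of the command set
                  "invalid": ["format"]},           # not a member of the set
}

for field, cls in classes.items():
    for kind, values in cls.items():
        for v in values:
            print(f"{field}: {kind} test value -> {v!r}")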
- When the number of input parameters is small and the values each parameter may
take are clearly bounded, it is possible to consider every combination of inputs.
As the number of input values and the number of discrete values for each data item
grow, however, exhaustive testing becomes impractical.
For example, with three input parameters, each taking on three discrete values,
there are already 3 x 3 x 3 = 27 combinations. Orthogonal array testing can be
applied when the input domain is relatively small but too large to accommodate
exhaustive testing.
Example
- The send function of a fax application takes four parameters, P1, P2, P3, and P4.
Each can take on any one of three possible values.
- P1 takes on the following values:
P1 = 1, send now; P1 = 2, send one hour later; P1 = 3, send after midnight.
P2, P3, and P4 would likewise take on values 1, 2, and 3 for the other send
functions.
- An orthogonal array is an array of values in which each column denotes a
parameter, and each parameter can take on any one of a set of predefined
alternatives called levels.
- Rather than enumerating all possible combinations of levels and parameters, the
parameters are combined in pairs; each row of the table corresponds to a distinct
test case.
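A small sketch using the standard L9(3^4) orthogonal array, which covers all
pairwise level combinations of the four parameters in nine test cases instead of
3^4 = 81 (the mapping of levels to send-timing values follows the P1 example above;
P2-P4 would carry their own meanings):

# Standard L9 orthogonal array: each row is one test case; the nine rows
# cover every pairwise combination of levels across the four parameters.
L9 = [
    (1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
    (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
    (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1),
]

# Level meanings for P1, as described above.
p1_meaning = {1: "send now", 2: "send one hour later", 3: "send after midnight"}

for i, (p1, p2, p3, p4) in enumerate(L9, start=1):
    print(f"test {i}: P1={p1_meaning[p1]!r}, "
          f"P2=level {p2}, P3=level {p3}, P4=level {p4}")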
MODULE 5
RISK, QUALITY MANAGEMENT AND REENGINEERING
RISK MANAGEMENT
Introduction:
* A software development team can learn to deal with uncertainty through a process
known as risk analysis and management.
* A risk is a prospective problem; it may or may not materialize.
* However, it is a good idea to identify risks, evaluate their likelihood of occurring, and
calculate their potential impact.
* Risks have two characteristics:
(1) Uncertainty: the risk may or may not happen; there are no risks that are one
hundred percent certain to occur.
(2) Loss: if the risk becomes a reality, unwanted consequences or losses will
occur.
* When doing an analysis of risks, it is essential to quantify both the degree of loss that is
connected with each risk as well as the level of uncertainty that is associated with that
risk.
Risk Categories:
* Technical risks: the problem may be harder to solve than we had anticipated.
* Known risks: these can be uncovered after careful evaluation of the project plan
and of the business and technical environment in which the project is being
developed.
Examples:
=> A delivery date that is impossible to meet
=> A lack of documented requirements
=> An unfavourable development environment
* Predictable risks: these are extrapolated from past project experience.
Examples:
=> Staff turnover
=> Poor communication with the customer
=> Dilution of staff effort as ongoing maintenance requests are serviced
(i) Generic risks: potential threats to every software project.
(ii) Product-specific risks: these are unique to the software to be built, and can
be identified only by those with a clear understanding of the technology, the
people, and the environment that is specific to that software.
* One method for identifying risks is to create a risk item checklist.
* The checklist can be used to identify risks, focusing on a subset of known and
predictable risks in the following generic subcategories:
=> Product size: risks associated with the overall size of the software to be
built or modified.
=> Business impact: risks associated with constraints imposed by management or the
marketplace.
=> Process definition: risks associated with the degree to which the software
process has been defined and is followed by the development organisation.
=> Development environment: risks associated with the availability and quality of
the tools that will be used to build the product.
=> Technology to be built: risks associated with the complexity of the system to
be built and the "newness" of the technology that is packaged by the system.
=> Staff size and experience: risks associated with the overall technical and
project experience of the software engineers who will do the work.
Assessing Overall Project Risk:
* The questions that follow are based on risk information gathered from
experienced software project managers.
* The questions are ordered by their relative importance to the success of the
project.
Questions:
(1) Do the highest-level software and customer managers have an official commitment to
back the project?
(2) Do the people who will be using the finished result have a passionate commitment to
the project and the system or product that will be built?
(3) Is there a mutual understanding between the software engineering team and its
customers regarding the requirements?
(4) To what extent have customers been involved in the process of defining the
requirements?
(5) Do end-users have expectations that are grounded in reality?
(6) Is the scope of the project stable?
(7) Does the team working on the software engineering have the appropriate variety of
skill sets?
(8) Are the project requirements stable?
(9) Does the team working on the project have previous experience working with the
technology that will be implemented?
(10) Does the project team have a sufficient number of members to complete the task at
hand?
(11) Do all of the customer and user constituencies have the same opinion regarding the
significance of the project as well as the needs for the system or product that will be
constructed?
* The proportion of unfavorable replies to these questions is directly linked to the degree
to which the project is at risk.
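A toy sketch of this idea in Python (the question labels are abbreviated from the
list above, and the scoring rule is an illustrative assumption):

# True = "yes", False = "no" for a subset of the questions above.
answers = {
    "management commitment": True,
    "user enthusiasm": True,
    "requirements understood": False,
    "customer involvement": True,
    "realistic expectations": False,
    "stable scope": False,
}

# The higher the share of negative answers, the greater the project risk.
negative = sum(1 for yes in answers.values() if not yes)
print(f"risk indicator: {negative}/{len(answers)} negative answers")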
Risk Projection:
* This concept is also known as risk estimation.
* It attempts to rate each risk in two ways:
(i) the likelihood or probability that the risk is real, and
(ii) the consequences of the problems associated with the risk, should it occur.
* The project planner, along with the technical staff, performs the following four
risk projection steps:
(1) Establish a scale that reflects the perceived likelihood of a risk.
(2) Delineate the consequences of the risk.
(3) Estimate the impact of the risk on the project and the product.
(4) Note the overall accuracy of the risk projection so that there will be no
misunderstandings.
Impact Values:
1. Catastrophic
2. Critical
3. Marginal
4. Negligible
* The first column of the risk table lists all risks identified by the project team.
* The second column lists the risk category:
PS = project size risk
BU = business risk
* Third column == impact of the risk
* Fourth column == probability of occurrence
* Once the first four columns are filled in, the table is sorted by probability
and by impact.
* High-probability, high-impact risks percolate to the top of the table.
* Low-probability risks drop to the bottom.
* The project manager then studies the sorted table and defines a "Cut Off Line":
a horizontal line drawn at some point on the table, implying that only the risks
above the line will be given further attention; all risks above the cut off line
must be managed.
* A fifth column, labelled RMMM, contains a pointer to the risk mitigation,
monitoring, and management information developed for each risk.
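A rough sketch of how such a risk table might be built and sorted (the risk
entries, the extra category codes ST and CU, and the cut-off value are
illustrative assumptions, not from the notes):

# Each row: (risk description, category, probability, impact).
# Impact scale per the notes: 1=catastrophic, 2=critical, 3=marginal, 4=negligible.
risk_table = [
    ("Size estimate may be significantly low", "PS", 0.60, 2),
    ("Delivery deadline will be tightened",    "BU", 0.50, 2),
    ("End users resist the system",            "BU", 0.40, 3),
    ("Staff turnover will be high",            "ST", 0.70, 2),
    ("Funding will be lost",                   "CU", 0.40, 1),
]

# Sort high-probability risks to the top; among equals, the more severe
# impact (lower number) comes first.
risk_table.sort(key=lambda r: (-r[2], r[3]))

CUTOFF = 0.45  # simplified cut off line, by probability alone
for name, cat, prob, impact in risk_table:
    flag = "MANAGE" if prob >= CUTOFF else "below cut off"
    print(f"{prob:.2f}  impact={impact}  [{cat}]  {name}  -> {flag}")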
Example:
[Figure: risk factor plotted with impact (very low to very high) on one axis and
probability of occurrence (0 to 1.0) on the other; risks in the high-impact,
high-probability corner are labelled "management concern", while the low-impact,
low-probability area is labelled "disregard risk factor".]
* A risk factor such as the one in the figure above, which has a high impact but a
very low probability of occurrence, should not absorb a significant amount of
management attention.
* However, high-impact risks with moderate to high probability, and low-impact
risks with high probability, should be carried forward into the risk analysis
steps that follow.
Risk Refinement:
* As time passes and more is learned about the project and the risk, it may be
possible to refine the risk into a set of more detailed risks.
* One way to perform refinement is to represent the risk in Condition-Transition-
Consequence [CTC] format. The potential problem is described as:
Given that <condition>, there is concern that (possibly) <consequence> will occur.
Mitigation Strategy:
=> Meet with current staff to determine the causes of turnover.
Example: poor working conditions, low pay, a highly competitive job market
=> Mitigate those causes that are under our control before the project starts.
=> Assume that some people will leave the project once it has begun, and develop
techniques to ensure continuity when they do.
=> Organise the project team so that information about each development activity
is widely dispersed. Define documentation standards and establish mechanisms to
ensure that documents are developed in a timely manner.
=> Carry out peer reviews on every piece of work [so that more than one person is aware
of what's going on]
=> Ensure that every important technologist has a backup staff member assigned to them.
Monitoring Strategy:
=> Risk monitoring activities commence once the project is under way.
=> The following factors can be monitored:
=> The general attitudes of the members of the team in response to the demands of the
project
=> The degree to which the team's members have developed strong interpersonal
relationships with one another
=> The cohesiveness of the team as a whole
=> Problems that could arise with regard to salary and benefits
=> Differences in the quality of work available both inside and outside the organisation
RMMM – Plan:
* The RMMM plan documents all work performed as part of risk analysis.
* The project manager incorporates it into the overall project plan.
* Some software teams document each risk individually using a Risk Information
Sheet (RIS) rather than creating a formal RMMM document.
User satisfaction = compliant product + good quality + delivery within budget and
schedule
2. Quality Control
• Quality control refers to the series of inspections, reviews, and tests
conducted throughout the software development process.
• Quality control includes a feedback loop to the process.
• The idea that every work product has measurable, established requirements to
which we can compare the results of every operation is fundamental to quality
control.
• The feedback loop is essential to minimise the number of defects that are produced.
3. Quality Assurance
• Management's auditing and reporting duties comprise quality assurance.
• Should the data obtained from quality assurance reveal flaws, management must
address the issues and deploy the required resources to rectify quality concerns.
4. Cost of Quality
• All expenses incurred in pursuing quality or carrying out quality-related tasks are
included in the cost of quality.
• Quality costs may be divided into three modes of cost:
– Prevention
– Appraisal
– Failure.
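A minimal sketch of how these three cost modes might be tallied (the line items
and figures are invented purely for illustration):

# Illustrative cost-of-quality breakdown; all numbers are made up.
cost_of_quality = {
    "prevention": {"quality planning": 8_000, "training": 5_000},
    "appraisal":  {"in-process reviews": 12_000, "testing": 20_000},
    "failure":    {"rework": 15_000, "complaint resolution": 6_000},
}

for mode, items in cost_of_quality.items():
    print(f"{mode:>10}: {sum(items.values()):>7} total")
print("cost of quality:",
      sum(sum(items.values()) for items in cost_of_quality.values()))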
Software Quality Assurance (SQA)
Software quality is defined as conformance to explicitly stated functional and
performance requirements, explicitly documented development standards, and
implicit characteristics that are expected of all professionally developed
software.
- This definition serves to emphasise three important points:
- Software requirements are the foundation from which quality is measured. A lack
of conformance to requirements is a lack of quality.
- Specified standards define a set of development criteria. If the criteria are
not followed, a lack of quality will almost surely result.
- A set of implicit requirements often goes unmentioned. If software conforms to
its explicit requirements but fails to meet its implicit requirements, software
quality is suspect.
• Software engineers address quality activities through the use of technical protocols and
metrics, formal technical reviews, and carefully planned software testing.
• The mission of the SQA group is to provide support to the software development team
so that they may produce a product of high quality.
• Reviews the software process description, which is created during project
planning, to make sure it complies with organisational guidelines, internal
software standards, externally imposed standards (such as ISO-9001), and other
project plan elements.
• Audits designated software work products to verify compliance with the standards
set forth during the software development process. The SQA group reviews selected
work products; identifies, documents, and tracks deviations; verifies that
corrections have been made; and periodically reports the results of its work to
the project manager.
• Ensures that deviations in software work and work products are documented and
handled according to a documented procedure.
Deviations may be encountered in the project plan, the process description,
applicable standards, or technical work products.
• Records any noncompliance and reports it to senior management.
• Noncompliant items are tracked until they are resolved.
• Every review meeting should abide by the following constraints:
• Between three and five people, typically, should be involved in the review.
• Advance preparation should occur, but it should require no more than two hours
of work per person.
• The duration of the review meeting should be less than two hours.
• An FTR focuses on a specific (and small) part of the overall software. For
example, rather than attempting to review an entire design, FTRs are conducted
for each component or small group of components.
• The review summary report becomes part of the project's historical record and is
distributed to the project leader and other interested parties.
• The review issues list serves two purposes:
1. To identify problem areas within the product, and
2. To serve as an action-item checklist that guides the producer as corrections
are made.
• An issues list is normally attached to the summary report. If it is not, there
is a chance that the issues raised will "fall between the cracks."
• It is important to establish a follow-up procedure to ensure that the items on
the issues list have been properly corrected. One approach is to assign follow-up
responsibility to the review leader.
SOFTWARE RELIABILITY
Software reliability is defined statistically as "the probability of failure-free operation of a
computer programme in a specified environment for a specified amount of time."
•What is meant by the term "failure"?
–Failure in the context of any discussion about the reliability and quality of software is defined
as nonconformance to software requirements.
It is likely that correcting one error can cause others to arise, which will then cause more errors,
which will ultimately result in further failures.
•When development and historical data are combined, software reliability may be tracked,
directed, and evaluated.
In other words, an end-user is not concerned with the overall number of errors; rather, they are only
concerned with the number of failures. Because each individual fault identified within a program
has a different failure rate, the total error count is not a very trustworthy predictor of the
dependability of a system.
•In addition to a reliability measure, we must also develop a measure of availability.
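A commonly used formulation for these measures (a sketch based on the standard
MTBF/MTTF/MTTR relationships; the formulas are not stated explicitly in the notes
above) is:

\[
\text{MTBF} = \text{MTTF} + \text{MTTR}, \qquad
\text{Availability} = \frac{\text{MTTF}}{\text{MTTF} + \text{MTTR}} \times 100\%
\]

For instance, with a mean time to failure of 360 hours and a mean time to repair
of 40 hours, availability = 360 / (360 + 40) x 100% = 90%.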
Reengineering
Introduction
• Regardless of an application's size, complexity, and domain, modification occurs:
1. Due to new features demanded by customers.
2. Due to errors.
3. Due to new technology.
• We must maintain software when it is necessary and re-engineer it when it is
right to do so.
• What is it?
• Who does it?
• Why is it important?
• What are the steps?
❖ Maintenance corrects defects, adapts functionality to user needs, and
accommodates a changing environment.
❖ At a strategic level, BPR identifies and evaluates existing business processes
and creates revised business processes that better meet current goals.
• What is the work product?
❖ A variety of maintenance and re-engineering work products are produced,
e.g., use cases, analysis and design models, test procedures.
❖ The final output is the upgraded software.
• How do you ensure that you have done it right?
❖ Use the SQA practices that are applied to every SE process:
✓ Technical reviews assess the analysis and design models.
✓ Specialized reviews consider business applicability and compatibility.
✓ Testing is applied to uncover errors in content, functionality, etc.
Re-Engineering Advantages
✓ Reduced risk: there is a high risk in new software development; there may be
development problems, staffing problems, and specification problems.
✓ Reduced cost: the cost of re-engineering is often significantly less than the
cost of developing new software.
Business Process Re-Engineering
• BPR extends far beyond the scope of IT and SE.
• It is concerned with re-designing business processes to make them more
responsive and more efficient.
Business Process:
• A business process is a set of logically related tasks performed to achieve a
defined business outcome.
• Within the business process, people, equipment, material resources, and business
procedures are combined to produce a specified result.
• The overall business can be segmented as follows:
Business -> Business System -> Business Process -> Business Sub-process.
• BPR can be applied at any level of this hierarchy, but as the scope of BPR
broadens, the risks associated with it grow dramatically.
BPR Model:
• BPR is an iterative model: the BPR process is evolutionary and has no fixed
start or end; it adapts continuously to changes in the environment.
• BPR model activities:
1. Business definition:
- Identify business goals based on four key drivers: cost reduction, time
reduction, quality improvement, and personnel development/empowerment.
- Goals can be defined at the overall business level or for specific business
components.
2. Process identification:
- Determine which processes are necessary to achieve the identified business
goals.
- Rank these processes by importance, need for change, or other relevant criteria.
3. Process evaluation:
- Analyse and measure the current process thoroughly.
- Identify tasks within the process and note the cost, time consumption, and
performance issues.
4. Process specification and design:
- Develop use-cases based on information from the previous activities.
- Define scenarios within these use-cases that reflect outcomes for the customer.
- Design new tasks and processes to address identified needs.
5. Prototyping:
- Create a prototype of the redesigned business process.
- Test the prototype and make the refinements needed before full integration.
❑ BPR activities are sometimes used in conjunction with workflow analysis tools.
❑ The intent of these tools is to build a model of the existing workflow so that
the existing process can be better analysed.
Software Re-Engineering
• The scenario is all too common.
• Software re-engineering reorganises and modifies existing software systems to
make them more maintainable.
Objectives:
• Describe the steps involved in the software re-engineering process;
• Distinguish between software re-engineering and data re-engineering, and address
the issues associated with the latter;
• Explain why software re-engineering is a cost-effective option for system
evolution.
Software Re-Engineering Process Model:
Reverse Engineering:
• In reverse engineering, the designer must extract design information from the
source code, but:
❖ the abstraction level,
❖ the completeness of the documentation,
❖ the degree to which tools and a human analyst work together, and
❖ the directionality of the process
are all highly variable.
• The abstraction level refers to the sophistication of the design information
that can be extracted from the source code.
• The reverse engineering process should be capable of deriving:
❖ procedural design representations (a low level of abstraction),
❖ program and data structure information (a somewhat higher level), and
❖ object models (a high level).
• As the abstraction level increases, you are provided with information that
allows easier understanding of the program.
• The completeness of a reverse engineering process refers to the level of detail
that is provided at a given abstraction level.
• Completeness improves in direct proportion to the amount of analysis performed
by the person doing the reverse engineering.
• Interactivity refers to the degree to which the human is integrated with
automated tools to create an effective reverse engineering process.
• In most cases, as the abstraction level increases, interactivity must increase,
or completeness will suffer.
• If the directionality of the process is one-way, all information extracted from
the source code is provided to the software engineer, who can use it during any
maintenance activity.
• If directionality is two-way, the information is fed to a re-engineering tool
that attempts to restructure or regenerate the old program.
• Before re-engineering commences, unstructured source code is restructured.
• This makes the source code easier to read and provides the basis for all
subsequent reverse engineering activities.
• You must evaluate the old program from its source code and develop:
❖ a meaningful specification,
❖ an understanding of the user interface applied, and
❖ an understanding of the program data structures or database that is used.
Reverse engineering to understand data
• Reverse engineering of data occurs at different levels of abstraction and is
often the first reverse engineering task.
Restructuring
• Software restructuring modifies source code and/or data in an effort to make the
software amenable to future changes.
• Restructuring does not modify the overall program architecture.
• It focuses mainly on the design details of individual modules and on local data
structures within a module.
• If the restructuring effort extends beyond module boundaries and encompasses the
software architecture, restructuring becomes forward engineering.
• Restructuring occurs when the basic architecture of an application is solid,
even though its technical internals need work.
• It is initiated when major parts of the software are serviceable and only a
subset of all modules and data needs extensive modification.
Code Restructuring:
• Code restructuring is performed to yield a design that produces the same
function as the original program, but with higher quality.
• In general, code restructuring technology models program logic using Boolean
algebra and then applies a series of transformation rules that yield restructured
logic.
• A resource exchange diagram maps each program module and the resources that are
exchanged between it and the other modules.
• Once the resource flow representation is created, the program design can be
restructured to achieve minimum coupling among the modules, allowing for more
flexibility.
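A minimal, hypothetical illustration of logic-level code restructuring (the
function and its conditions are invented for the example; the transformation is a
simple Boolean simplification of nested conditionals):

from itertools import product

# Before: convoluted, deeply nested logic.
def can_ship_before(in_stock, paid, flagged):
    if in_stock:
        if paid:
            if not flagged:
                return True
            else:
                return False
        else:
            return False
    else:
        return False

# After: the same function, restructured via Boolean algebra to
# (in_stock AND paid AND NOT flagged) -- identical behaviour, higher clarity.
def can_ship_after(in_stock, paid, flagged):
    return in_stock and paid and not flagged

# Verify behavioural equivalence over all input combinations.
for combo in product([True, False], repeat=3):
    assert can_ship_before(*combo) == can_ship_after(*combo)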
Data Restructuring:
• Reverse engineering, also known as analysis of the code, must be carried out
before data restructuring can begin.
• All programming language statements that contain data definitions, file
descriptions, I/O, or interface descriptions are evaluated.
• The purpose of this activity, known as data analysis, is to extract data items
and objects, obtain information on data flow, and understand the existing data
structures that have been implemented.
• Once data analysis has been completed, data redesign begins.
• A data record standardization step clarifies data definitions to achieve
consistency among data item names or physical record formats within an existing
data structure or file format.
• Another form of redesign, called data name rationalization, ensures that all
data naming conventions conform to local standards and that aliases are eliminated
as data flow through the system.
• When restructuring moves beyond standardization and rationalization, physical
modifications can be made to existing data structures in order to make the data
design more effective.
• This may mean a translation from one file format to another, or in some cases, a
translation from one type of database to another.
Forward Engineering
• Consider a program whose control flow is the graphic equivalent of a bowl of
spaghetti. There are several options:
❖ You can struggle through modification after modification, fighting the ad hoc
design and source code to implement the necessary changes.
❖ You can attempt to understand the broader inner workings of the program in an
effort to make modifications more efficiently.
❖ You can redesign, recode, and test those portions of the software that require
modification, applying a software engineering approach.
❖ You can completely redesign, recode, and test the program, using re-engineering
tools to assist you in understanding the current design.
• There is no single "correct" option; circumstances may dictate the first option
even when the others are more desirable.
• In most cases, forward engineering does not simply create a modern equivalent of
an existing program; rather, new user and technology requirements are integrated
into the re-engineering effort,
• so that the redeveloped program extends the capabilities of the older
application.
Forward engineering for client-server architectures
• Although a variety of distributed environments can be designed, the typical
mainframe application that is re-engineered into a client-server architecture has
the following features:
❖ Application functionality migrates to each client computer.
❖ New GUIs are implemented at the client sites.
❖ Database functions are allocated to the server.
❖ Specialized functionality may remain at the server site.
❖ New communication, security, archiving, and control requirements must be
established at both the client and server sites.
• Re-engineering for client-server applications begins with an analysis of the
business environment that encompasses the existing mainframe.
• Three layers of abstraction can be identified:
1. The database layer
2. The business rules layer
3. The client applications layer
The database layer:
• The database sits at the foundation of a client-server architecture and manages
the transactions and queries from server applications.
• These transactions and queries must be controlled within the context of a set of
business rules.
• The function of the existing DBMS and the data architecture of the existing
database must be re-engineered in order to redesign the database foundation layer.
Business rules layer:
• Represents software resident at both the client and the server.
• This software performs control and coordination tasks to ensure that
transactions and queries between the client application and the database conform
to the established business process.
• In many cases, a mainframe application can be segmented into a set of desktop
applications controlled by the business rules layer.
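A tiny, hypothetical sketch of a business rules layer mediating between client
requests and the database layer (the function names, the rule, and the data shape
are all invented for illustration):

# The business rules layer vets each transaction before it reaches the
# database layer; the rule here is a made-up example.
def business_rules_check(transaction):
    # e.g., a withdrawal must not exceed the account balance
    return transaction["amount"] <= transaction["balance"]

def submit(transaction, database):
    if business_rules_check(transaction):
        database.append(transaction)  # database layer applies the update
        return "committed"
    return "rejected by business rules"

db = []
print(submit({"amount": 50, "balance": 200}, db))   # committed
print(submit({"amount": 500, "balance": 200}, db))  # rejected by business rules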