
ADVANCED SOFTWARE ENGINEERING

UNIT I SOFTWARE PROCESS &MODELING

Prescriptive Process Models – Agility and Process – Scrum – XP – Kanban – DevOps –
Prototype Construction – Prototype Evaluation – Prototype Evolution – Modeling – Principles –
Requirements Engineering – Scenario-based Modeling – Class-based Modeling – Functional
Modeling – Behavioral Modeling.

Prescriptive Process Models


The following framework activities are carried out irrespective of the process model
chosen by the organization.

1. Communication
2. Planning
3. Modeling
4. Construction
5. Deployment

The name 'prescriptive' is given because the model prescribes a set of activities, actions,
tasks, quality assurance practices, and change control mechanisms for every project.

There are three types of prescriptive process models. They are:

1. The Waterfall Model


2. Incremental Process model
3. RAD model

1. The Waterfall Model

• The waterfall model is also called the 'linear sequential model' or 'classic life cycle
model'.
• In this model, each phase is fully completed before the next phase begins.
• This model is used for small projects.
• In this model, feedback is taken after each phase to ensure that the project is on the right
path.
• Testing starts only after the development is complete.
NOTE: The description of the phases of the waterfall model is the same as that of the generic
process framework described above.


Advantages of waterfall model


• The waterfall model is simple and easy to understand, implement, and use.
• All the requirements are known at the beginning of the project, hence it is easy to
manage.
• It avoids overlapping of phases because each phase is completed before the next begins.
• This model works for small projects because the requirements are understood very well.
• This model is preferred for projects where quality is more important than the cost of the
project.
Disadvantages of the waterfall model
• This model is not suitable for complex and object-oriented projects.
• It is a poor model for long projects.
• Problems with this model are often uncovered only during software testing.
• The amount of risk is high.

2. Incremental Process model


• The incremental model combines elements of the waterfall model applied in an iterative
fashion.
• The first increment in this model is generally the core product.
• Each increment builds on the product and submits it to the customer for any suggested
modifications.
• The next increment implements the customer's suggestions and adds additional
requirements to the previous increment.
• This process is repeated until the product is finished.
For example, word-processing software is often developed using the incremental model.

Advantages of incremental model


• This model is flexible because the cost of development is low and the initial product is
delivered faster.
• It is easier to test and debug during a smaller iteration.
• Working software is produced quickly and early in the software life cycle.
• The customers can respond to its functionality after every increment.
Disadvantages of the incremental model
• The cost of the final product may exceed the cost estimated initially.
• This model requires very clear and complete planning.
• The design must be planned before the whole system is broken into small
increments.
• Customer demands for additional functionality after every increment can cause
problems for the system architecture.
3. RAD model

• RAD is a Rapid Application Development model.


• Using the RAD model, a software product is developed in a short period of time.
• The initial activity starts with communication between the customer and the developer.
• Planning depends upon the initial requirements, and the requirements are then divided
into groups.
• Planning is important because teams work in parallel on different modules.
The RAD model consists of the following phases:

1. Business Modeling
• Business modeling consists of the flow of information between various functions in the
project.
• For example, what type of information is produced by each function and which
functions handle that information.
• A complete business analysis should be performed to obtain the essential business
information.
2. Data modeling
• The information from the business modeling phase is refined into a set of data objects that
are essential to the business.
• The attributes of each object are identified and the relationships between objects are defined.
3. Process modeling
• The data objects defined in the data modeling phase are transformed to achieve the information
flow needed to implement the business model.
• Process descriptions are created for adding, modifying, deleting, or retrieving a data
object.
4. Application generation
• In the application generation phase, the actual system is built.
• Automated tools are used to construct the software.
5. Testing and turnover
• The prototypes are independently tested after each iteration so that the overall testing
time is reduced.
• The data flow and the interfaces between all the components are fully tested; hence,
most of the program components have already been tested.
Agility
Agility has become today's buzzword for describing a contemporary software process.
Everyone is agile. An agile team is a nimble team able to respond appropriately to change, and
change is what software development is all about:
• changes in the software being built,
• changes to the team members,
• changes attributable to new technology,
• changes of all kinds that may affect the product being built or the project that creates
the product.
Ivar Jacobson argues that accommodating these changes is the central motivation for agility in
software.

Agile Model
The meaning of Agile is swift or versatile. The "Agile process model" refers to a software
development approach based on iterative development. Agile methods break tasks into smaller
iterations, or parts, and do not directly involve long-term planning. The project scope and
requirements are laid down at the beginning of the development process. Plans regarding the
number of iterations, and the duration and scope of each iteration, are clearly defined in advance.

Each iteration is considered as a short time "frame" in the Agile process model, which typically
lasts from one to four weeks. The division of the entire project into smaller parts helps to
minimize the project risk and to reduce the overall project delivery time requirements. Each
iteration involves a team working through a full software development life cycle including
planning, requirements analysis, design, coding, and testing before a working product is
demonstrated to the client.

Phases of Agile Model:

The phases of the Agile model are as follows:

1. Requirements gathering
2. Design the requirements
3. Construction/ iteration
4. Testing/ Quality assurance
5. Deployment
6. Feedback

1. Requirements gathering: In this phase, you must define the requirements. You should
explain business opportunities and plan the time and effort needed to build the project. Based on
this information, you can evaluate technical and economic feasibility.

2. Design the requirements: When you have identified the project, work with stakeholders to
define requirements. You can use the user flow diagram or the high-level UML diagram to
show the work of new features and show how it will apply to your existing system.

3. Construction/ iteration: When the team defines the requirements, the work begins.
Designers and developers start working on their project, which aims to deploy a working
product. The product will undergo various stages of improvement, so it includes simple,
minimal functionality.

4. Testing: In this phase, the Quality Assurance team examines the product's performance and
looks for bugs.

5. Deployment: In this phase, the team issues a product for the user's work environment.

6. Feedback: After releasing the product, the last step is feedback. In this, the team receives
feedback about the product and works through the feedback.

Agile Testing Methods:

o Scrum
o Crystal
o Dynamic Software Development Method (DSDM)
o Feature Driven Development (FDD)
o Lean Software Development
o eXtreme Programming (XP)

Scrum

SCRUM is an agile development process focused primarily on ways to manage tasks in team-
based development conditions.

There are three roles in it, and their responsibilities are:

o Scrum Master: The Scrum Master sets up the team, arranges the meetings, and removes
obstacles from the process.
o Product Owner: The Product Owner creates the product backlog, prioritizes the backlog, and
is responsible for the delivery of functionality in each iteration.
o Scrum Team: The team manages and organizes its own work to complete the sprint
or cycle.

eXtreme Programming(XP)

This type of methodology is used when customer demands or requirements are constantly
changing, or when customers are not sure about the system's performance.

Crystal:

There are three concepts of this method-

1. Chartering: Multiple activities are involved in this phase, such as forming a development
team, performing feasibility analysis, developing plans, etc.
2. Cyclic delivery: This consists of two or more delivery cycles, during which:
o the team updates the release plan, and
o the integrated product is delivered to the users.
3. Wrap up: This phase performs deployment and post-deployment activities in the user
environment.

Dynamic Software Development Method(DSDM):

DSDM is a rapid application development strategy for software development that provides an
agile project delivery framework. The essential features of DSDM are that users must be actively
involved, and teams are given the authority to make decisions. The techniques used in
DSDM are:

1. Time Boxing
2. MoSCoW Rules
3. Prototyping

The DSDM project contains seven stages:

1. Pre-project
2. Feasibility Study
3. Business Study
4. Functional Model Iteration
5. Design and build Iteration
6. Implementation
7. Post-project
Feature Driven Development(FDD):

This method focuses on "Designing and Building" features. In contrast to other agile methods,
FDD describes small, specific steps of work that should be carried out separately for each feature.

Lean Software Development:

Lean software development methodology follows the principle of "just-in-time production." The
lean method aims to increase the speed of software development and reduce costs. Lean
development can be summarized by seven principles:

1. Eliminating Waste
2. Amplifying learning
3. Defer commitment (deciding as late as possible)
4. Early delivery
5. Empowering the team
6. Building Integrity
7. Optimize the whole

When to use the Agile Model?

o When frequent changes are required.


o When a highly qualified and experienced team is available.
o When the customer is available to interact with the software team throughout the project.
o When project size is small.

Advantage(Pros) of Agile Method:

1. Frequent Delivery
2. Face-to-Face Communication with clients.
3. Efficient design and fulfils the business requirement.
4. Anytime changes are acceptable.
5. It reduces total development time.

Disadvantages(Cons) of Agile Model:

1. Due to the shortage of formal documentation, confusion can arise, and crucial decisions taken
throughout the various phases can be misinterpreted at any time by different team members.
2. Due to the lack of proper documentation, once the project is complete and the developers are
allotted to another project, maintenance of the finished product can become difficult.
What is kanban?

Kanban is a popular framework used to implement agile and DevOps software development. It
requires real-time communication of capacity and full transparency of work. Work items are
represented visually on a kanban board, allowing team members to see the state of every piece
of work at any time.
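To make the idea concrete, the sketch below models a kanban board in Python. It is an illustration of the concept only; the column names and the work-in-progress (WIP) limit are hypothetical, not part of any particular kanban tool.

    class KanbanBoard:
        """Minimal kanban board: columns hold work items, WIP limits cap in-progress work."""

        def __init__(self, wip_limits):
            # e.g. {"To Do": None, "In Progress": 2, "Done": None}; None means no limit
            self.columns = {name: [] for name in wip_limits}
            self.wip_limits = wip_limits

        def add(self, item, column="To Do"):
            self._check_limit(column)
            self.columns[column].append(item)

        def move(self, item, src, dst):
            self._check_limit(dst)
            self.columns[src].remove(item)
            self.columns[dst].append(item)

        def _check_limit(self, column):
            limit = self.wip_limits[column]
            if limit is not None and len(self.columns[column]) >= limit:
                raise RuntimeError(f"WIP limit reached for '{column}'")

    # The board makes the state of every work item visible at any time.
    board = KanbanBoard({"To Do": None, "In Progress": 2, "Done": None})
    board.add("Design login page")
    board.move("Design login page", "To Do", "In Progress")
    print(board.columns)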

What is DevOps?

DevOps is a combination of two words: software Development and Operations. DevOps
allows a single team to handle the entire application lifecycle, from
development to testing, deployment, and operations. It helps you to reduce the
disconnection between software developers, quality assurance (QA) engineers, and system
administrators.

DevOps promotes collaboration between Development and Operations team to deploy code to
production faster in an automated & repeatable way.

DevOps helps to increase an organization's speed in delivering applications and services. It also
allows organizations to serve their customers better and compete more strongly in the market.

DevOps can also be defined as a sequence of development and IT operations with better
communication and collaboration.

DevOps has become one of the most valuable business disciplines for enterprises or
organizations. With the help of DevOps, quality, and speed of the application delivery has
improved to a great extent.

DevOps is nothing but a practice or methodology of making "Developers" and "Operations"


folks work together. DevOps represents a change in the IT culture with a complete focus on
rapid IT service delivery through the adoption of agile practices in the context of a system-
oriented approach.

DevOps is all about the integration of the operations and development processes. Organizations
that have adopted DevOps have reported a 22% improvement in software quality, a 17%
improvement in application deployment frequency, and a 22% increase in customer
satisfaction, along with a 19% increase in revenue as a result of successful DevOps implementation.

Why DevOps?

Before going further, we need to understand why we need the DevOps over the other methods.

o The operation and development team worked in complete isolation.


o After the design-build phase, testing and deployment were performed sequentially, which
consumed more time than the actual build cycles.
o Without DevOps, team members spend a large amount of time on
designing, testing, and deploying instead of building the project.
o Manual code deployment leads to human errors in production.
o Coding and operations teams have separate timelines and are not in sync, causing
further delays.

DevOps History

o In 2009, the first conference, named DevOpsDays, was held in Ghent, Belgium. Belgian
consultant Patrick Debois founded the conference.
o In 2012, the State of DevOps report was launched and conceived by Alanna Brown at
Puppet.
o In 2014, the annual State of DevOps report was published by Nicole Forsgren, Jez
Humble, Gene Kim, and others. They found that DevOps adoption was accelerating in 2014
as well.
o In 2015, Nicole Forsgren, Gene Kim, and Jez Humble founded DORA (DevOps
Research and Assessment).
o In 2017, Nicole Forsgren, Gene Kim, and Jez Humble published "Accelerate: Building
and Scaling High Performing Technology Organizations".

DevOps Architecture Features

Here are some key features of DevOps architecture, such as:


1) Automation

Automation can reduce time consumption, especially during the testing and deployment phases.
Productivity increases and releases are made quicker through automation, which helps to catch
bugs early so they can be fixed easily. For continuous delivery, each code change passes through
automated tests, cloud-based builds, and services, which enables production releases through
automated deployments.
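As an illustration only, the Python sketch below strings together build, test, and deploy stages the way an automated pipeline would. The stage names and commands are placeholders, not the API of any specific CI/CD tool.

    import subprocess
    import sys

    # Hypothetical pipeline stages; each command is a placeholder for a real build/test/deploy tool.
    STAGES = [
        ("build", ["echo", "compiling application"]),
        ("test", ["echo", "running automated tests"]),
        ("deploy", ["echo", "deploying to staging"]),
    ]

    def run_pipeline():
        for name, command in STAGES:
            print(f"--- stage: {name} ---")
            result = subprocess.run(command)
            if result.returncode != 0:
                # Fail fast: a broken stage stops the pipeline so bugs are caught early.
                print(f"stage '{name}' failed", file=sys.stderr)
                return False
        return True

    if __name__ == "__main__":
        sys.exit(0 if run_pipeline() else 1)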

2) Collaboration

The Development and Operations teams collaborate as a single DevOps team, which improves
the cultural model: the teams become more productive, and accountability and ownership are
strengthened. The teams share responsibilities and work closely in sync,
which in turn makes deployment to production faster.

3) Integration

Applications need to be integrated with other components in the environment. The integration
phase is where existing code is combined with new functionality and then tested.
Continuous integration and testing enable continuous development. The frequency of
releases and the use of micro-services lead to significant operational challenges. To overcome such
problems, continuous integration and delivery are implemented to deliver in a quicker, safer,
and more reliable manner.

4) Configuration management

It ensures that the application interacts only with the resources relevant to the
environment in which it runs. Configuration is kept external to the application and separated
from the source code rather than being hard-coded. Configuration files can be
written during deployment, or they can be loaded at run time, depending on the environment
in which the application is running.
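A minimal sketch of this idea in Python, assuming a hypothetical APP_ENV environment variable and a config/ directory of JSON files; the point is that environment-specific values are read at run time rather than hard-coded in the source.

    import json
    import os

    def load_config():
        """Load external configuration at run time based on the current environment.

        The environment variable name and file layout are assumptions for illustration.
        """
        env = os.environ.get("APP_ENV", "development")   # e.g. development, staging, production
        path = f"config/{env}.json"
        with open(path) as f:
            return json.load(f)

    # Example usage (assumes the config file exists and defines "database_url"):
    # config = load_config()
    # db_url = config["database_url"]   # source code never hard-codes environment-specific values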

DevOps Advantages and Disadvantages

Here are some advantages and disadvantages that DevOps can have for business, such as:

Advantages
o DevOps is an excellent approach for quick development and deployment of applications.
o It responds faster to the market changes to improve business growth.
o DevOps increases business profit by decreasing software delivery time and delivery
costs.
o DevOps streamlines the delivery process, which gives clarity on product development and
delivery.
o It improves customer experience and satisfaction.
o DevOps simplifies collaboration and places all tools in the cloud for customers to access.
o DevOps means collective responsibility, which leads to better team engagement and
productivity.

Disadvantages
o DevOps professionals and expert developers are in short supply.
o Developing with DevOps can be expensive.
o Adopting new DevOps technology across an organization is hard to manage in a short time.
o Lack of DevOps knowledge can be a problem in the continuous integration of automation
projects.

Prerequisite

To learn DevOps, you should have basic knowledge of Linux, and at least one Scripting
language.

Prototype Model

The prototype model requires that, before carrying out the development of the actual software, a
working prototype of the system should be built. A prototype is a toy implementation of the
system. A prototype usually turns out to be a very crude version of the actual system, possibly
exhibiting limited functional capabilities, low reliability, and inefficient performance as
compared to the actual software. In many instances, the client only has a general view of what is
expected from the software product. In such a scenario, where there is an absence of detailed
information regarding the input to the system, the processing needs, and the output requirements,
the prototyping model may be employed.

Steps of Prototype Model

1. Requirement Gathering and Analysis
2. Quick Design
3. Build a Prototype
4. Assessment or User Evaluation
5. Prototype Refinement
6. Engineer Product

Advantage of Prototype Model

1. Reduces the risk of incorrect user requirements
2. Good where requirements are changing or uncommitted
3. A regularly visible process aids management
4. Supports early product marketing
5. Reduces maintenance cost
6. Errors can be detected much earlier, as the system is built side by side with the prototype

Disadvantage of Prototype Model

1. An unstable or badly implemented prototype often becomes the final product.
2. Requires extensive customer collaboration:
o Costs the customer money
o Needs a committed customer
o Difficult to finish if the customer withdraws
o May be too customer-specific, with no broad market
3. It is difficult to know how long the project will last.
4. It is easy to fall back into code-and-fix without proper requirements analysis, design,
customer evaluation, and feedback.
5. Prototyping tools are expensive.
6. Special tools and techniques are required to build a prototype.
7. It is a time-consuming process.

Evolutionary Process Model

The evolutionary process model resembles the iterative enhancement model. The same phases
that are defined for the waterfall model occur here in a cyclical fashion. This model differs from the
iterative enhancement model in the sense that it does not require a useful product at the end of
each cycle. In evolutionary development, requirements are implemented by category rather than
by priority.
For example, in a simple database application, one cycle might implement the graphical user
interface (GUI), another file manipulation, another queries, and another updates. All four cycles
must complete before there is a working product available. The GUI allows the users to interact with
the system, file manipulation allows data to be saved and retrieved, queries allow users to get data
out of the system, and updates allow users to put data into the system.

Benefits of Evolutionary Process Model

o Use of EVO brings a significant reduction in risk for software projects.
o EVO can reduce costs by providing a structured, disciplined avenue for experimentation.
o EVO allows the marketing department access to early deliveries, facilitating the development of
documentation and demonstrations.
o Better fit of the product to user needs and market requirements.
o Manage project risk with the definition of early cycle content.
o Uncover key issues early and focus attention appropriately.
o Increase the opportunity to hit market windows.
o Accelerate sales cycles with early customer exposure.
o Increase management visibility of project progress.
o Increase product team productivity and motivation.

WHAT IS SOFTWARE MODELING?


By software modeling we do not mean expressing a scientific theory or algorithm in software.
This is what scientists traditionally call a software model. What we mean here by software
modeling is larger than an algorithm or a single method. Software modeling should address the
entire software design including interfaces, interactions with other software, and all the software
methods.
Software models are ways of expressing a software design. Usually some sort of abstract
language or pictures are used to express the software design. For object-oriented software, an
object modeling language such as UML is used to develop and express the software design.
There are several tools that you can use to develop your UML design.

Software Design Principles


Software design principles are concerned with providing means to handle the complexity of the
design process effectively. Effectively managing the complexity will not only reduce the effort
needed for design but can also reduce the scope of introducing errors during design.

Following are the principles of Software Design

Problem Partitioning

For a small problem, we can handle the entire problem at once, but for a significant problem we
divide and conquer: the problem is divided into smaller pieces so that each piece can be handled
separately.

For software design, the goal is to divide the problem into manageable pieces.

Benefits of Problem Partitioning


1. Software is easy to understand
2. Software becomes simple
3. Software is easy to test
4. Software is easy to modify
5. Software is easy to maintain
6. Software is easy to expand
These pieces cannot be entirely independent of each other as they together form the system.
They have to cooperate and communicate to solve the problem. This communication adds
complexity.

Note: As the number of partitions increases, the cost of partitioning and the communication complexity also increase.

Abstraction

An abstraction is a tool that enables a designer to consider a component at an abstract level
without bothering about the internal details of its implementation. Abstraction can be applied to
existing elements as well as to the component being designed.

Here, there are two common abstraction mechanisms

1. Functional Abstraction
2. Data Abstraction

Functional Abstraction
i. A module is specified by the function it performs.
ii. The details of the algorithm to accomplish the functions are not visible to the user of the
function.

Functional abstraction forms the basis for Function oriented design approaches.

Data Abstraction

Details of the data elements are not visible to the users of data. Data Abstraction forms the basis
for Object Oriented design approaches.
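The short Python sketch below illustrates both mechanisms with invented names: a function whose internal algorithm is hidden from its callers (functional abstraction) and a class whose internal representation is hidden behind operations (data abstraction). It is a sketch of the idea, not a prescribed design.

    # Functional abstraction: callers rely on *what* sort_scores does, not *how* it does it.
    def sort_scores(scores):
        """Return scores in descending order; the sorting algorithm is an internal detail."""
        return sorted(scores, reverse=True)

    # Data abstraction: callers use push/pop without knowing the data is stored in a list.
    class Stack:
        def __init__(self):
            self._items = []          # internal representation, hidden from users

        def push(self, value):
            self._items.append(value)

        def pop(self):
            return self._items.pop()

    print(sort_scores([40, 95, 72]))  # [95, 72, 40]
    s = Stack()
    s.push(1); s.push(2)
    print(s.pop())                    # 2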

Modularity

Modularity refers to the division of software into separate modules which are separately
named and addressed and are later integrated to obtain the complete working software.
It is the only property that allows a program to be intellectually manageable. Single large
programs are difficult to understand and read due to the large number of reference variables,
control paths, global variables, etc.

The desirable properties of a modular system are:

o Each module is a well-defined system that can be used with other applications.
o Each module has single specified objectives.
o Modules can be separately compiled and saved in the library.
o Modules should be easier to use than to build.
o Modules are simpler from outside than inside.

Advantages and Disadvantages of Modularity

In this topic, we will discuss the various advantages and disadvantages of modularity.

Advantages of Modularity

There are several advantages of Modularity

o It allows large programs to be written by several or different people


o It encourages the creation of commonly used routines to be placed in the library and used
by other programs.
o It simplifies the overlay procedure of loading a large program into main storage.
o It provides more checkpoints to measure progress.
o It provides a framework for more complete testing and makes the software easier to test.
o It produces well-designed and more readable programs.

Disadvantages of Modularity

There are several disadvantages of Modularity

o Execution time may be, though not necessarily, longer.
o Storage size may be, though not necessarily, increased.
o Compilation and loading time may be longer.
o Inter-module communication problems may be increased.
o More linkage is required, run time may be longer, more source lines must be written, and
more documentation has to be done.

Modular Design

Modular design reduces the design complexity and results in easier and faster implementation
by allowing parallel development of various parts of a system. The different aspects of
modular design are discussed in detail below:

1. Functional Independence: Functional independence is achieved by developing functions
that perform only one kind of task and do not excessively interact with other modules.
Independence is important because it makes implementation easier and faster. Independent
modules are easier to maintain and test, they reduce error propagation, and they can be
reused in other programs as well. Thus, functional independence is a good design feature which
ensures software quality.

It is measured using two criteria:

o Cohesion: It measures the relative function strength of a module.


o Coupling: It measures the relative interdependence among modules.
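As a rough illustration (the function and module names are invented), the Python sketch below contrasts a cohesive, loosely coupled function that receives everything it needs as parameters with a tightly coupled one that reaches into another module's internal data structure.

    # Tightly coupled: depends on the internal structure of another module's object.
    def total_price_bad(order):
        return sum(item["unit_price"] * item["qty"] for item in order.internal_item_table)

    # Cohesive and loosely coupled: does one job and depends only on its parameters.
    def total_price(unit_prices, quantities):
        """Compute an order total from parallel lists of prices and quantities."""
        return sum(p * q for p, q in zip(unit_prices, quantities))

    print(total_price([10.0, 2.5], [3, 4]))  # 40.0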

2. Information hiding: The principle of information hiding suggests that modules should be
characterized by design decisions that are hidden from all other modules; in other words, modules
should be specified and designed so that the information (data and procedures) contained within a
module is inaccessible to other modules that have no need for such information.

The use of information hiding as a design criterion for modular systems provides its most
significant benefits when modifications are required during testing and later during software
maintenance. Because most data and procedures are hidden from other parts of the
software, inadvertent errors introduced during modification are less likely to propagate to
other locations within the software.
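A minimal Python sketch of information hiding, using an invented BankAccount class: the balance is kept in a name-mangled attribute, so other modules go through the public operations instead of touching the data directly.

    class BankAccount:
        """Other modules use deposit/withdraw/balance; the stored data stays hidden."""

        def __init__(self, opening_balance=0):
            self.__balance = opening_balance      # name-mangled: not part of the public interface

        def deposit(self, amount):
            if amount <= 0:
                raise ValueError("deposit must be positive")
            self.__balance += amount

        def withdraw(self, amount):
            if amount > self.__balance:
                raise ValueError("insufficient funds")
            self.__balance -= amount

        @property
        def balance(self):
            return self.__balance

    acct = BankAccount(100)
    acct.deposit(50)
    print(acct.balance)   # 150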

Strategy of Design

A good system design strategy is to organize the program modules in such a way that they are
easy to develop initially and easy to change later. Structured design methods help developers deal with
the size and complexity of programs. Analysts generate instructions for the developers about
how code should be written and how pieces of code should fit together to form a program.

To design a system, there are two possible approaches:

1. Top-down Approach
2. Bottom-up Approach

1. Top-down Approach: This approach starts with the identification of the main components
and then decomposes them into their more detailed sub-components.
2. Bottom-up Approach: A bottom-up approach begins with the lower-level details and moves
up the hierarchy. This approach is suitable when an existing system is being extended.

Requirement Engineering

Requirements engineering (RE) refers to the process of defining, documenting, and


maintaining requirements in the engineering design process. Requirements engineering provides
the appropriate mechanism to understand what the customer desires, analyze the need,
assess feasibility, negotiate a reasonable solution, specify the solution clearly,
validate the specification, and manage the requirements as they are transformed into a
working system. Thus, requirements engineering is the disciplined application of proven
principles, methods, tools, and notations to describe a proposed system's intended behavior and
its associated constraints.

Requirement Engineering Process

It comprises the following activities -

1. Feasibility Study
2. Requirement Elicitation and Analysis
3. Software Requirement Specification
4. Software Requirement Validation
5. Software Requirement Management

1. Feasibility Study:

The objective of the feasibility study is to establish the reasons for developing software
that is acceptable to users, flexible to change, and conformable to established standards.

Types of Feasibility:

1. Technical Feasibility - Technical feasibility evaluates the current technologies, which
are needed to accomplish customer requirements within the time and budget.
2. Operational Feasibility - Operational feasibility assesses the extent to which the required
software will solve business problems and satisfy customer
requirements.
3. Economic Feasibility - Economic feasibility decides whether the necessary software can
generate financial profits for an organization.

2. Requirement Elicitation and Analysis:

This is also known as the gathering of requirements. Here, requirements are identified with
the help of customers and existing systems processes, if available.

Analysis of requirements starts with requirement elicitation. The requirements are analyzed to
identify inconsistencies, defects, omission, etc. We describe requirements in terms of
relationships and also resolve conflicts if any.

Problems of Elicitation and Analysis

o Getting all, and only, the right people involved.


o Stakeholders often don't know what they want
o Stakeholders express requirements in their terms.
o Stakeholders may have conflicting requirements.
o Requirement change during the analysis process.
o Organizational and political factors may influence system requirements.
3. Software Requirement Specification:

A software requirement specification is a document created by a software analyst after the
requirements have been collected from various sources. The requirements received from the
customer are written in ordinary language; it is the job of the analyst to write the requirements in
technical language so that they can be understood and used effectively by the development team.

The models used at this stage include ER diagrams, data flow diagrams (DFDs), function
decomposition diagrams (FDDs), data dictionaries, etc.

o Data Flow Diagrams: Data Flow Diagrams (DFDs) are used widely for modeling the
requirements. DFD shows the flow of data through a system. The system may be a
company, an organization, a set of procedures, a computer hardware system, a software
system, or any combination of the preceding. The DFD is also known as a data flow
graph or bubble chart.
o Data Dictionaries: Data dictionaries are simply repositories that store information about
all data items defined in DFDs. At the requirements stage, the data dictionary should at
least define customer data items, to ensure that the customer and developers use the same
definitions and terminology (a small sketch follows this list).
o Entity-Relationship Diagrams: Another tool for requirement specification is the entity-
relationship diagram, often called an "E-R diagram." It is a detailed logical
representation of the data for the organization and uses three main constructs i.e. data
entities, relationships, and their associated attributes.
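For illustration, the Python sketch below represents a tiny data dictionary for a hypothetical "customer" data item that might appear in a DFD; the field names and constraints are assumptions, not a standard.

    # A hypothetical data dictionary entry: one shared definition of the "customer" data item
    # so that customers and developers use the same terminology.
    DATA_DICTIONARY = {
        "customer": {
            "description": "A person or organization that places orders",
            "composition": ["customer_id", "name", "email", "billing_address"],
            "customer_id": {"type": "integer", "constraints": "unique, assigned by the system"},
            "email": {"type": "string", "constraints": "must contain '@'"},
        }
    }

    def describe(item):
        entry = DATA_DICTIONARY[item]
        print(f"{item}: {entry['description']}")
        print("fields:", ", ".join(entry["composition"]))

    describe("customer")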

4. Software Requirement Validation:

After the requirement specification has been developed, the requirements discussed in this document are
validated. The user might demand an illegal or impossible solution, or experts may misinterpret the
needs. Requirements can be checked against the following conditions -

o Whether they can be practically implemented
o Whether they are valid and as per the functionality and specialty of the software
o Whether there are any ambiguities
o Whether they are complete
o Whether they can be demonstrated

Requirements Validation Techniques

o Requirements reviews/inspections: systematic manual analysis of the requirements.


o Prototyping: Using an executable model of the system to check requirements.
o Test-case generation: Developing tests for requirements to check testability.
o Automated consistency analysis: checking for the consistency of structured
requirements descriptions.

Software Requirement Management:

Requirement management is the process of managing changing requirements during the


requirements engineering process and system development.

New requirements emerge during the process as business needs change and a better
understanding of the system is developed.

The priority of requirements from different viewpoints changes during the development process.

The business and technical environment of the system changes during the development.

Prerequisite of Software requirements

Collection of software requirements is the basis of the entire software development project.
Hence they should be clear, correct, and well-defined.
A complete Software Requirement Specifications should be:

o Clear
o Correct
o Consistent
o Coherent
o Comprehensible
o Modifiable
o Verifiable
o Prioritized
o Unambiguous
o Traceable
o Credible source

Software Requirements: Broadly, software requirements must be categorized into two
categories:

1. Functional Requirements: Functional requirements define a function that a system or
system element must be able to perform, and they must be documented in an appropriate
form. Functional requirements describe the behavior of the system as it relates to
the system's functionality.
2. Non-functional Requirements: These are the requirements that specify criteria that
can be used to judge the operation of a system, rather than specific behaviors.
Non-functional requirements are divided into two main categories:
o Execution qualities, like security and usability, which are observable at run time.
o Evolution qualities, like testability, maintainability, extensibility, and scalability,
which are embodied in the static structure of the software system.

Elements of the Requirements Model


Requirements for a computer-based system can be seen in many different ways. Some
software people argue that it is worth using a number of different modes of representation,
while others believe that it is best to select one mode of representation.
The specific elements of the requirements model are dictated by the analysis modeling
method that is to be used.
• Scenario-based elements :
Using a scenario-based approach, the system is described from the user's point of view. For
example, basic use cases and their corresponding use-case diagrams evolve into more
elaborate template-based use cases. Figure 1(a) depicts a UML activity diagram for
eliciting requirements and representing them using use cases. There are three levels of
elaboration.
• Class-based elements :
A collection of things that have similar attributes and common behaviors, i.e., objects, are
categorized into classes. For example, a UML class diagram can be used to depict a Sensor
class for the SafeHome security function. Note that the diagram lists the attributes of sensors and
the operations that can be applied to modify these attributes. In addition to class diagrams,
other analysis modeling elements depict the manner in which classes collaborate with one
another and the relationships and interactions between classes.
• Behavioral elements :
The behavior of a computer-based system can have a profound effect on the design that is chosen
and the implementation approach that is applied. The requirements model must therefore
provide modeling elements that depict behavior.

Figure 1(a): UML activity diagram for eliciting requirements

Class diagram for Sensor

Method for representing behavior of a system by depicting its states and events that cause
system to change state is state diagram. A state is an externally observable mode of
behavior. In addition, state diagram indicates actions taken as a consequence of a particular
event.
To illustrate use of a state diagram, consider software embedded within safeHome control
panel that is responsible for reading user input. A simplified UML state diagram is shown
in figure 2.

Figure 2: UML state diagram notation
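As a sketch only (the states and events below are simplified assumptions about a keypad reading user input, not the textbook's exact diagram), the Python code shows how such a state diagram can be implemented as a transition table.

    # Simplified state machine for a control-panel keypad (illustrative states/events only).
    TRANSITIONS = {
        ("reading", "key_pressed"): "comparing",
        ("comparing", "password_ok"): "activated",
        ("comparing", "password_bad"): "reading",
        ("activated", "reset"): "reading",
    }

    def next_state(state, event):
        """Return the next state for (state, event); stay in the same state if the event is not handled."""
        return TRANSITIONS.get((state, event), state)

    state = "reading"
    for event in ["key_pressed", "password_bad", "key_pressed", "password_ok"]:
        state = next_state(state, event)
        print(event, "->", state)
    # key_pressed -> comparing, password_bad -> reading, key_pressed -> comparing, password_ok -> activated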

• Flow-oriented elements :
As information flows through a computer-based system, it is transformed. The system accepts
input, applies functions to transform it, and produces output in various forms. Input may
be a control signal transmitted by a transducer, a series of numbers typed by a human
operator, a packet of information transmitted on a network link, or a voluminous data file
retrieved from secondary storage. The transform may comprise a single logical comparison,
a complex numerical algorithm, or the rule-inference approach of an expert system. Output
may produce a 200-page report or may light a single LED. In effect, we can create a flow model
for any computer-based system, regardless of size and complexity.
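A minimal flow-oriented sketch in Python, with invented transforms: input flows in, is transformed by a chain of functions, and output flows out. It only illustrates the idea of a flow model.

    # Input -> transform -> transform -> output: the essence of a flow model.
    def parse_reading(raw):            # input transform: text from a sensor or operator
        return float(raw.strip())

    def to_fahrenheit(celsius):        # processing transform
        return celsius * 9 / 5 + 32

    def format_report(value):          # output transform
        return f"Temperature: {value:.1f} F"

    def flow(raw_input):
        return format_report(to_fahrenheit(parse_reading(raw_input)))

    print(flow(" 21.5 "))   # Temperature: 70.7 F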

UNIT II SOFTWARE DESIGN


Design Concepts – Design Model – Software Architecture – Architectural Styles –
Architectural Design – Component-Level Design – User Experience Design – Design for
Mobility – Pattern-Based Design.

Introduction to the Software Design Process


Software Design is the process to transform the user requirements into some suitable form,
which helps the programmer in software coding and implementation. During the software
design phase, the design document is produced, based on the customer requirements as
documented in the SRS document. Hence the aim of this phase is to transform the SRS
document into the design document.
The following items are designed and documented during the design phase:

• Different modules required.


• Control relationships among modules.
• Interface among different modules.
• Data structures of the different modules.
• Algorithms required to implement the individual modules.
Objectives of Software Design:

1. Correctness:
A good design should be correct i.e. it should correctly implement all the functionalities of
the system.
2. Efficiency:
A good software design should address the resources, time, and cost optimization issues.
3. Understandability:
A good design should be easily understandable, for which it should be modular and all the
modules are arranged in layers.
4. Completeness:
The design should have all the components like data structures, modules, and external
interfaces, etc.
5. Maintainability:
A good software design should be easily amenable to change whenever a change request is
made from the customer side.
Software Design Concepts:
A concept is a principal idea or invention that comes to mind to help us understand something.
The software design concept simply means the idea or principle behind
the design. It describes how you plan to solve the problem of designing software, the logic, or
thinking behind how you will design software. It allows the software engineer to create the
model of the system or software or product that is to be developed or built. The software design
concept provides a supporting and essential structure or model for developing the right
software. There are many concepts of software design and some of them are given below:
The following points should be considered while designing Software:

1. Abstraction- hide Irrelevant data


Abstraction simply means hiding the details to reduce complexity and increase efficiency
or quality. Different levels of abstraction are necessary and must be applied at each stage of
the design process so that any error that is present can be removed to increase the efficiency
of the software solution and to refine the software solution. The solution should be described
in broad ways that cover a wide range of different things at a higher level of abstraction and
a more detailed description of a solution of software should be given at the lower level of
abstraction.
2. Modularity- subdivide the system
Modularity simply means dividing the system or project into smaller parts to reduce the
complexity of the system or project. In the same way, modularity in design means
subdividing a system into smaller parts so that these parts can be created independently and
then use these parts in different systems to perform different functions. It is necessary to
divide the software into components known as modules because nowadays there are
different software available like Monolithic software that is hard to grasp for software
engineers. So, modularity in design has now become a trend and is also important. If the
system is built as one large component, it is complex and requires a lot of effort (cost), but if we
are able to divide the system into smaller components, then the cost is reduced.
3. Architecture- design a structure of something
Architecture simply means a technique to design a structure of something. Architecture in
designing software is a concept that focuses on various elements and the data of the
structure. These components interact with each other and use the data of the structure in
architecture.
4. Refinement- removes impurities
Refinement simply means to refine something to remove any impurities if present and
increase the quality. The refinement concept of software design is actually a process of
developing or presenting the software or system in a detailed manner that means to elaborate
a system or software. Refinement is very necessary to find out any error if present and then
to reduce it.
5. Pattern- a repeated form
The pattern simply means a repeated form or design in which the same shape is repeated
several times to form a pattern. The pattern in the design process means the repetition of a
solution to a common recurring problem within a certain context.
6. Information Hiding- hide the information
Information hiding simply means to hide the information so that it cannot be accessed by an
unwanted party. In software design, information hiding is achieved by designing the
modules in a manner that the information gathered or contained in one module is hidden and
can’t be accessed by any other modules.
7. Refactoring- reconstruct something
Refactoring simply means reconstructing something in such a way that it does not affect the
behavior of any other features. Refactoring in software design means reconstructing the
design to reduce complexity and simplify it without affecting the behavior or its functions.
Fowler has defined refactoring as “the process of changing a software system in a way that it
won’t affect the behavior of the design and improves the internal structure”.
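A small before/after sketch in Python (an invented example): the external behavior, computing an order total with a discount, is unchanged, but the refactored version removes duplication and simplifies the structure.

    # Before refactoring: duplicated logic and a nested conditional.
    def order_total_before(prices, is_member):
        total = 0
        for p in prices:
            total = total + p
        if is_member:
            if total > 100:
                return total * 0.9
            else:
                return total
        else:
            return total

    # After refactoring: same external behavior, simpler internal structure.
    def order_total(prices, is_member):
        total = sum(prices)
        discount = 0.9 if is_member and total > 100 else 1.0
        return total * discount

    # The behavior is preserved by the refactoring.
    assert order_total_before([60, 70], True) == order_total([60, 70], True)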
Different levels of Software Design:
There are three different levels of software design. They are:

1. Architectural Design:
The architecture of a system can be viewed as the overall structure of the system & the way
in which structure provides conceptual integrity of the system. The architectural design
identifies the software as a system with many components interacting with each other. At
this level, the designers get the idea of the proposed solution domain.

2. Preliminary or high-level design:


Here the problem is decomposed into a set of modules, the control relationships among the
modules are identified, and the interfaces among the modules are defined.
The outcome of this stage is called the program architecture. Design representation
techniques used in this stage are structure charts and UML.

3. Detailed design:
Once the high-level design is complete, a detailed design is undertaken. In detailed design,
each module is examined carefully to design the data structure and algorithms. The stage
outcome is documented in the form of a module specification document.

Software Design Process


The design phase of software development deals with transforming the customer requirements
as described in the SRS documents into a form implementable using a programming language.
The software design process can be divided into the following three levels of phases of
design:
1. Interface Design
2. Architectural Design
3. Detailed Design
Interface Design:
Interface design is the specification of the interaction between a system and its environment.
This phase proceeds at a high level of abstraction with respect to the inner workings of the
system; i.e., during interface design, the internals of the system are completely ignored, and the
system is treated as a black box. Attention is focused on the dialogue between the target
system and the users, devices, and other systems with which it interacts. The design problem
statement produced during the problem analysis step should identify the people, other
systems, and devices, which are collectively called agents.
Interface design should include the following details:
• Precise description of events in the environment, or messages from agents to which the
system must respond.
• Precise description of the events or messages that the system must produce.
• Specification on the data, and the formats of the data coming into and going out of the
system.
• Specification of the ordering and timing relationships between incoming events or
messages, and outgoing events or outputs.
Architectural Design:
Architectural design is the specification of the major components of a system, their
responsibilities, properties, interfaces, and the relationships and interactions between them. In
architectural design, the overall structure of the system is chosen, but the internal details of
major components are ignored.
Issues in architectural design include:
• Gross decomposition of the systems into major components.
• Allocation of functional responsibilities to components.
• Component Interfaces
• Component scaling and performance properties, resource consumption properties,
reliability properties, and so forth.
• Communication and interaction between components.
The architectural design adds important details ignored during the interface design. Design of
the internals of the major components is ignored until the last phase of the design.
Detailed Design:
Detailed design is the specification of the internal elements of all major system components, their
properties, relationships, processing, and often their algorithms and data structures.
The detailed design may include:
• Decomposition of major system components into program units.
• Allocation of functional responsibilities to units.
• User interfaces
• Unit states and state changes
• Data and control interaction between units
• Data packaging and implementation, including issues of scope and visibility of program
elements
• Algorithms and data structures

Software Architecture & Design Introduction

The architecture of a system describes its major components, their relationships (structures),
and how they interact with each other. Software architecture and design includes several
contributory factors such as Business strategy, quality attributes, human dynamics, design, and
IT environment.
We can segregate Software Architecture and Design into two distinct phases: Software
Architecture and Software Design. In architecture, non-functional decisions are cast and
separated from the functional requirements. In design, the functional requirements are accomplished.

Software Architecture

Architecture serves as a blueprint for a system. It provides an abstraction to manage the


system complexity and establish a communication and coordination mechanism among
components.
• It defines a structured solution to meet all the technical and operational requirements,
while optimizing the common quality attributes like performance and security.
• Further, it involves a set of significant decisions about the organization related to
software development and each of these decisions can have a considerable impact on
quality, maintainability, performance, and the overall success of the final product. These
decisions comprise −
o Selection of structural elements and their interfaces by which the system is
composed.
o Behavior as specified in collaborations among those elements.
o Composition of these structural and behavioral elements into large subsystem.
o Architectural decisions align with business objectives.
o Architectural styles guide the organization.

Software Design

Software design provides a design plan that describes the elements of a system, how they fit,
and work together to fulfill the requirement of the system. The objectives of having a design
plan are as follows −
• To negotiate system requirements, and to set expectations with customers, marketing,
and management personnel.
• Act as a blueprint during the development process.
• Guide the implementation tasks, including detailed design, coding, integration, and
testing.
It comes before the detailed design, coding, integration, and testing and after the domain
analysis, requirements analysis, and risk analysis.
Goals of Architecture

The primary goal of the architecture is to identify requirements that affect the structure of the
application. A well-laid architecture reduces the business risks associated with building a
technical solution and builds a bridge between business and technical requirements.
Some of the other goals are as follows −
• Expose the structure of the system, but hide its implementation details.
• Realize all the use-cases and scenarios.
• Try to address the requirements of various stakeholders.
• Handle both functional and quality requirements.
• Reduce the cost of ownership and improve the organization’s market position.
• Improve quality and functionality offered by the system.
• Improve external confidence in either the organization or system.
Limitations
Software architecture is still an emerging discipline within software engineering. It has the
following limitations −
• Lack of tools and standardized ways to represent architecture.
• Lack of analysis methods to predict whether architecture will result in an implementation
that meets the requirements.
• Lack of awareness of the importance of architectural design to software development.
• Lack of understanding of the role of software architect and poor communication among
stakeholders.
• Lack of understanding of the design process, design experience and evaluation of design.

Role of Software Architect


A Software Architect provides a solution that the technical team can create and design for the
entire application. A software architect should have expertise in the following areas −
Design Expertise
• Expert in software design, including diverse methods and approaches such as object-
oriented design, event-driven design, etc.
• Lead the development team and coordinate the development efforts for the integrity of
the design.
• Should be able to review design proposals and evaluate tradeoffs among them.
Domain Expertise
• Expert on the system being developed and plan for software evolution.
• Assist in the requirement investigation process, assuring completeness and consistency.
• Coordinate the definition of domain model for the system being developed.
Technology Expertise
• Expert on available technologies that helps in the implementation of the system.
• Coordinate the selection of programming language, framework, platforms, databases, etc.
Methodological Expertise
• Expert on software development methodologies that may be adopted during SDLC
(Software Development Life Cycle).
• Choose the appropriate approaches for development that helps the entire team.
Hidden Role of Software Architect
• Facilitates the technical work among team members and reinforces the trust relationship
in the team.
• Information specialist who shares knowledge and has vast experience.
• Protect the team members from external forces that would distract them and bring less
value to the project.
Deliverables of the Architect
• A clear, complete, consistent, and achievable set of functional goals
• A functional description of the system, with at least two layers of decomposition
• A concept for the system
• A design in the form of the system, with at least two layers of decomposition
• A notion of the timing, operator attributes, and the implementation and operation plans
• A document or process which ensures functional decomposition is followed, and the
form of interfaces is controlled

Quality Attributes

Quality is a measure of excellence or the state of being free from deficiencies or defects.
Quality attributes are the system properties that are separate from the functionality of the
system.
Implementing quality attributes makes it easier to differentiate a good system from a bad one.
Attributes are overall factors that affect runtime behavior, system design, and user experience.
They can be classified as −
Static Quality Attributes
Reflect the structure of a system and organization, directly related to architecture, design, and
source code. They are invisible to end-user, but affect the development and maintenance cost,
e.g.: modularity, testability, maintainability, etc.
Dynamic Quality Attributes
Reflect the behavior of the system during its execution. They are directly related to system’s
architecture, design, source code, configuration, deployment parameters, environment, and
platform.
They are visible to the end-user and exist at runtime, e.g. throughput, robustness, scalability,
etc.

Quality Scenarios

Quality scenarios specify how to prevent a fault from becoming a failure. They can be divided
into six parts based on their attribute specifications −
• Source − An internal or external entity such as people, hardware, software, or physical
infrastructure that generate the stimulus.
• Stimulus − A condition that needs to be considered when it arrives on a system.
• Environment − The stimulus occurs within certain conditions.
• Artifact − A whole system or some part of it such as processors, communication
channels, persistent storage, processes etc.
• Response − An activity undertaken after the arrival of stimulus such as detect faults,
recover from fault, disable event source etc.
• Response measure − Should measure the occurred responses so that the requirements
can be tested.
Common Quality Attributes
The following table lists the common quality attributes a software architecture must have −

Category: Design Qualities
• Conceptual Integrity − Defines the consistency and coherence of the overall design. This
includes the way components or modules are designed.
• Maintainability − Ability of the system to undergo changes with a degree of ease.
• Reusability − Defines the capability for components and subsystems to be suitable for use
in other applications.

Category: Run-time Qualities
• Interoperability − Ability of a system or different systems to operate successfully by
communicating and exchanging information with other external systems written and run
by external parties.
• Manageability − Defines how easy it is for system administrators to manage the
application.
• Reliability − Ability of a system to remain operational over time.
• Scalability − Ability of a system either to handle the load increase without impacting the
performance of the system or to be readily enlarged.
• Security − Capability of a system to prevent malicious or accidental actions outside of the
designed usage.
• Performance − Indication of the responsiveness of a system to execute any action within a
given time interval.
• Availability − Defines the proportion of time that the system is functional and working. It
can be measured as a percentage of the total system downtime over a predefined period.

Category: System Qualities
• Supportability − Ability of the system to provide information helpful for identifying and
resolving issues when it fails to work correctly.
• Testability − Measure of how easy it is to create test criteria for the system and its
components.

Category: User Qualities
• Usability − Defines how well the application meets the requirements of the user and
consumer by being intuitive.

Category: Architecture Quality
• Correctness − Accountability for satisfying all the requirements of the system.

Category: Non-runtime Qualities
• Portability − Ability of the system to run under different computing environments.
• Integrality − Ability to make separately developed components of the system work
correctly together.
• Modifiability − Ease with which each software system can accommodate changes to its
software.

Category: Business Quality Attributes
• Cost and schedule − Cost of the system with respect to time to market, expected project
lifetime, and utilization of legacy.
• Marketability − Use of the system with respect to market competition.

What is an Architectural Style?

An architectural style is a set of principles. You can think of it as a coarse-grained pattern that
provides an abstract framework for a family of systems. An architectural style improves
partitioning and promotes design reuse by providing solutions to frequently recurring problems.

• Named collection. An architectural style is a named collection of architectural design
decisions that are applicable in a given development context, constrain architectural
design decisions that are specific to a particular system within that context, and elicit
beneficial qualities in each resulting system.
• Recurring organizational patterns and idioms. Established, shared understanding
of common design forms; a mark of a mature engineering field.
• Abstraction. An abstraction of recurring composition and interaction characteristics in a
set of architectures.

Benefits of Architectural Styles


Architectural styles provide several benefits. The most important of these benefits is that they
provide a common language. Another benefit is that they provide a way to have a conversation
that is technology-agnostic. This allows you to facilitate a higher level of conversation that is
inclusive of patterns and principles, without getting into the specifics. For example, by using
architecture styles, you can talk about client-server versus N-Tier.

• Design Reuse. Well-understood solutions applied to new problems.


• Code reuse. Shared implementations of invariant aspects of a style.
• Understandability of system organization. A phrase such as 'client-server' conveys
a lot of information.
• Interoperability. Supported by style standardization.
• Style-specific analysis. Enabled by the constrained design space.
• Visualizations. Style-specific descriptions matching engineer’s mental models.

Architectural styles for Software Design


The architectural styles that are used while designing the software as follows:
1. Data-centered architecture

• A data store (a file or a database) occupies the center of the architecture.
• The stored data is accessed continuously by the other components, which add, delete,
update, and modify data in the data store.
• Data-centered architecture helps preserve data integrity.
• Data can be passed between clients using a blackboard mechanism.
• The processes are independently executed by the client components.
2. Data-flow architecture

• This architecture is applied when input data has to be transformed into output data through
a series of computational (manipulative) components.
• The pipe-and-filter pattern consists of a set of components called filters.
• Filters are connected through pipes that transfer data from one component to the next
(a minimal code sketch follows this list).
• When the flow of data degenerates into a single line of transforms, the style is known as batch
sequential.
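
As a hedged illustration of the pipe-and-filter idea, the following minimal Java sketch models each filter as a function and the pipeline as their composition; the filter names (trim, lower, stripDigits) are hypothetical examples, not taken from the text above.

    import java.util.List;
    import java.util.function.Function;

    // Minimal pipe-and-filter sketch: each "filter" transforms the data,
    // and the "pipes" are modeled by composing the filters in sequence.
    public class PipeAndFilterDemo {

        // Compose a list of filters into a single pipeline (batch-sequential style).
        static <T> Function<T, T> pipeline(List<Function<T, T>> filters) {
            return filters.stream().reduce(Function.identity(), (f, g) -> f.andThen(g));
        }

        public static void main(String[] args) {
            // Hypothetical text-processing filters.
            Function<String, String> trim = String::trim;
            Function<String, String> lower = String::toLowerCase;
            Function<String, String> stripDigits = s -> s.replaceAll("[0-9]", "");

            Function<String, String> textPipeline = pipeline(List.of(trim, lower, stripDigits));
            System.out.println(textPipeline.apply("  Hello World 2024  "));   // prints "hello world "
        }
    }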
3. Call and return architectures

This architectural style makes it possible to achieve a program structure that is easy to modify.

The following sub-styles exist in this category:

1. Main program or subprogram architecture

• The program is divided into smaller pieces hierarchically.
• The main program invokes a number of program components in the hierarchy; these
components are in turn divided into subprograms.
2. Remote procedure call architecture

• The main program or subprogram components are distributed across a network of multiple
computers.
• The main aim is to increase performance.
4. Object-oriented architectures

• This architecture is the latest version of the call-and-return architecture.
• It consists of the bundling of data and the methods that operate on that data.
5. Layered architectures

• Different layers are defined in the architecture, ranging from an outer layer to an inner layer.
• The components of the outer layer manage the user interface operations.
• Components at the inner layer handle operating system interfacing.
• The inner layers include the application layer, the utility layer and the core layer.
• In many cases, more than one pattern may be suitable, and alternative architectural styles
can be designed and evaluated.
Component-Based Architecture

Component-based architecture focuses on the decomposition of the design into individual
functional or logical components that represent well-defined communication interfaces
containing methods, events, and properties. It provides a higher level of abstraction and divides
the problem into sub-problems, each associated with component partitions.
The primary objective of component-based architecture is to ensure component reusability. A
component encapsulates functionality and behaviors of a software element into a reusable and
self-deployable binary unit. There are many standard component frameworks such as
COM/DCOM, JavaBean, EJB, CORBA, .NET, web services, and grid services. These
technologies are widely used in local desktop GUI application design such as graphic
JavaBean components, MS ActiveX components, and COM components, which can be reused
by a simple drag-and-drop operation.
Component-oriented software design has many advantages over the traditional object-oriented
approaches such as −
• Reduced time to market and development cost by reusing existing components.
• Increased reliability with the reuse of the existing components.

What is a Component?

A component is a modular, portable, replaceable, and reusable set of well-defined functionality
that encapsulates its implementation and exports it as a higher-level interface.
A component is a software object, intended to interact with other components, encapsulating
certain functionality or a set of functionalities. It has a clearly defined interface and
conforms to a recommended behavior common to all components within an architecture.
A software component can be defined as a unit of composition with a contractually specified
interface and explicit context dependencies only. That is, a software component can be
deployed independently and is subject to composition by third parties.
Views of a Component
A component can have three different views − object-oriented view, conventional view, and
process-related view.
Object-oriented view
A component is viewed as a set of one or more cooperating classes. Each problem domain
class (analysis) and infrastructure class (design) are explained to identify all attributes and
operations that apply to its implementation. It also involves defining the interfaces that enable
classes to communicate and cooperate.
Conventional view
It is viewed as a functional element or a module of a program that integrates the processing
logic, the internal data structures that are required to implement the processing logic and an
interface that enables the component to be invoked and data to be passed to it.
Process-related view
In this view, instead of creating each component from scratch, the system is built from
existing components maintained in a library. As the software architecture is formulated,
components are selected from the library and used to populate the architecture.
• A user interface (UI) component includes grids and buttons, referred to as controls; utility
components expose a specific subset of functions used in other components.
• Other common types of components are those that are resource intensive, not frequently
accessed, and must be activated using the just-in-time (JIT) approach.
• Many components are invisible which are distributed in enterprise business applications
and internet web applications such as Enterprise JavaBean (EJB), .NET components, and
CORBA components.
Characteristics of Components
• Reusability − Components are usually designed to be reused in different situations in
different applications. However, some components may be designed for a specific task.
• Replaceable − Components may be freely substituted with other similar components.
• Not context specific − Components are designed to operate in different environments
and contexts.
• Extensible − A component can be extended from existing components to provide new
behavior.
• Encapsulated − A component exposes interfaces that allow the caller to use its
functionality, and does not expose details of the internal processes or any internal variables
or state.
• Independent − Components are designed to have minimal dependencies on other
components.

Principles of Component−Based Design

A component-level design can be represented by using some intermediary representation (e.g.
graphical, tabular, or text-based) that can be translated into source code. The design of data
structures, interfaces, and algorithms should conform to well-established guidelines to help us
avoid the introduction of errors.
• The software system is decomposed into reusable, cohesive, and encapsulated
component units.
• Each component has its own interface that specifies required ports and provided ports;
each component hides its detailed implementation (a minimal sketch of such a component
follows this list).
• It should be possible to extend a component without making internal code or design
modifications to its existing parts.
• Components should depend on abstractions and not on other concrete components,
because depending on concrete components makes the system harder to extend.
• Connectors connect components, specifying and governing the interaction among
components. The interaction type is specified by the interfaces of the components.
• Components interaction can take the form of method invocations, asynchronous
invocations, broadcasting, message driven interactions, data stream communications,
and other protocol specific interactions.
• For a server class, specialized interfaces should be created to serve major categories of
clients. Only those operations that are relevant to a particular category of clients should
be specified in the interface.
• A component can extend other components and still offer its own extension points. This
is the concept of plug-in based architecture, which allows a plug-in to offer another plug-in
API.
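
A minimal Java sketch of these principles, using hypothetical names (PaymentService as the provided interface, TransactionLog as the required interface) and not tied to any particular component framework:

    // Provided interface: the only thing callers are allowed to depend on.
    interface PaymentService {
        boolean pay(String accountId, double amount);
    }

    // Required interface: what the component needs from its environment.
    interface TransactionLog {
        void record(String message);
    }

    // The component itself: depends on an abstraction (TransactionLog),
    // hides its internal validation logic, and is replaceable by any other
    // class that implements PaymentService.
    class SimplePaymentComponent implements PaymentService {
        private final TransactionLog log;

        SimplePaymentComponent(TransactionLog log) {
            this.log = log;
        }

        @Override
        public boolean pay(String accountId, double amount) {
            if (amount <= 0) {
                return false;                      // internal rule, invisible to clients
            }
            log.record("Paid " + amount + " from account " + accountId);
            return true;
        }
    }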
Component-Level Design Guidelines

Create naming conventions for components that are specified as part of the architectural
model, and then refine or elaborate them as part of the component-level model.
• Derive architectural component names from the problem domain and ensure that they
have meaning to all stakeholders who view the architectural model.
• Extract the business process entities that can exist independently, without any associated
dependency on other entities.
• Recognize and model these independent entities as new components.
• Use infrastructure component names that reflect their implementation-specific meaning.
• Model any dependencies from left to right and inheritance from top (base class) to
bottom (derived classes).
• Model any component dependencies as interfaces rather than representing them as direct
component-to-component dependencies.

Conducting Component-Level Design

Identify all design classes that correspond to the problem domain, as defined in the analysis
model and the architectural model.
• Identify all design classes that correspond to the infrastructure domain.
• Describe all design classes that are not acquired as reusable components, and specify
message details.
• Identify appropriate interfaces for each component, elaborate attributes, and define the
data types and data structures required to implement them.
• Describe the processing flow within each operation in detail by means of pseudocode or
UML activity diagrams.
• Describe persistent data sources (databases and files) and identify the classes required
to manage them.
• Develop and elaborate behavioral representations for each class or component. This can be
done by elaborating the UML state diagrams created for the analysis model and by
examining all use cases that are relevant to the design class.
• Elaborate deployment diagrams to provide additional implementation detail.
• Demonstrate the location of key packages or classes of components in a system by using
class instances and designating the specific hardware and operating system environment.
• The final design decision can be made by using established design principles and guidelines.
Experienced designers consider all (or most) of the alternative design solutions before
settling on the final design model.
Advantages
• Ease of deployment − As new compatible versions become available, it is easier to
replace existing versions with no impact on the other components or the system as a
whole.
• Reduced cost − The use of third-party components allows you to spread the cost of
development and maintenance.
• Ease of development − Components implement well-known interfaces to provide
defined functionality, allowing development without impacting other parts of the
system.
• Reusable − The use of reusable components means that they can be used to spread the
development and maintenance cost across several applications or systems.
• Mitigation of technical complexity − A component mitigates complexity through
the use of a component container and its services.
• Reliability − The overall system reliability increases since the reliability of each
individual component enhances the reliability of the whole system via reuse.
• System maintenance and evolution − Easy to change and update the implementation
without affecting the rest of the system.
• Independent − Independence and flexible connectivity of components; independent
development of components by different groups in parallel; improved productivity for
current and future software development.
User Experience or UX Design

Being in the field of Computer Science, there are endless opportunities. This field is seen with a
perception that it involves only coding but there is a lot more to it. There are jobs that do not
require coding. One of these is UX Design. If you are pursuing your degree in computer science
or similar and are not able to find any interest in coding then this article will help you out.
User Experience: UX stands for User Experience. Basically, it is how a user feels, and whether
his/her demands are fulfilled, after using the software, automobile, or any other designed gadget.
In simple terms: “is the user able to use the product in an efficient manner, the way the developer
intended the artifact to be used?”
User Experience Design is all about the user interaction or overall experience of a user with the
product, webpage or an application. How the customer feels about the product when he/she is
using the service and if he/she is facing any problem while interacting with the product or
application, also how easy it was for a user to perform a certain task to use a product. UX can
have everything from physical product to digital experience. It considers a user’s journey to
solve a problem. Think of e-commerce, online food delivery or online travel company website
where how easy it was for a user to make the payment, how long it took to complete the
payment is considered part of UX design. Empathy is a crucial part of UX design: UX designers
need to put themselves in the customer's shoes. So it is all about the overall experience of a user
with the product.
There is one related topic: the User Interface (UI). It is a bit different from the User Experience.
The User Interface is defined as the interaction between the user and the design. These two terms
go hand in hand, but don't get confused between UI and UX.
How to start with UX designing:
• Start observing the surroundings.
• Start sketching and making doodles. The point of making sketches is that when you observe
things, more ideas come to mind. You get a perception of how things should be seen.
Learn UX through books:
• Design of everyday things.
• Don’t make me think.
• The elements of user experience and much more.
Get a mentor: Find a teacher who could guide you with UX designing.
Make a portfolio: Start collecting your drawings and prototypes. To create a prototype there is
software used for UX experience. A beginner can start with Adobe XD.
UX designing is a 4 step process:
• Requirements: The problem space where the task has been lacking.
• Design Alternatives: To know what could be the alternative for a preexisting software.
• Prototyping: To ensure that the design has met the needs of the user better than the existing
design.
• Evaluation: Allows to ascertain that UX has been improved.
UX designing is based upon the user’s interaction with the design. Now to understand what a
user thinks and wants, there are some methods discussed as follows. These methods are
required to be followed by a designer to collect some vital information about his/her product by
getting reviews.
• Naturalistic Observation: The designer simply watches the user using the design in their own
environment. In this type of observation, the user does not interact with the designer. Data
collected in such an observation is quantitative as well as qualitative.
• Surveys: Get the user’s opinions. It requires physical interaction with the user.
• Focus groups: In this type of observation a group of users is made to communicate with
each other in a room where a moderator who controls the team is required, basically who
looks upon the actions and decisions of that group.
• Communicate: This is a face to face or one to one interaction with the user and the
designer.
All of these methods have their pros and cons discussed as follows:
• Naturalistic Observation
Advantages: 1. The user is in his/her comfort zone. 2. Designers get quality feedback because the user uses the design in their own environment.
Disadvantages: 1. It can lead to incorrect notes at times. 2. There is no privacy for the user's information.
• Surveys
Advantages: 1. Efficient data collection. 2. Easy data analysis.
Disadvantages: 1. The information provided could be superficial.
• Focus groups
Advantages: 1. A rich amount of data can be collected.
Disadvantages: 1. A moderator is required. 2. The ideas could depend on the influential member of the team, meaning the views could be those of one member only. 3. The ideas could be affected by social influence.
• Communicate
Advantages: 1. An in-depth conversation can lead to greater results in knowing how a user wishes to use a specified product.
Disadvantages: 1. Skilled communicators are required. 2. It is time-intensive; a lot of time is required to interview so many users.

Design Pattern - Overview


Design patterns represent the best practices used by experienced object-oriented software
developers. Design patterns are solutions to general problems that software developers faced
during software development. These solutions were obtained by trial and error by numerous
software developers over quite a substantial period of time.

What is Gang of Four (GOF)?

In 1994, four authors Erich Gamma, Richard Helm, Ralph Johnson and John Vlissides
published a book titled Design Patterns - Elements of Reusable Object-Oriented
Software which initiated the concept of Design Pattern in Software development.
These authors are collectively known as Gang of Four (GOF). According to these authors
design patterns are primarily based on the following principles of object orientated design.
• Program to an interface not an implementation
• Favor object composition over inheritance

Usage of Design Pattern

Design Patterns have two main usages in software development.


Common platform for developers
Design patterns provide a standard terminology and are specific to a particular scenario. For
example, the singleton design pattern signifies the use of a single object, so all developers familiar
with the singleton pattern will make use of a single object, and they can tell each other that the
program follows a singleton pattern.
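
As a small illustrative sketch of the singleton pattern mentioned above (the class name Configuration is a hypothetical example), one common Java form is:

    // Singleton sketch: one shared instance, created eagerly, with a private
    // constructor so no other instance can be created with 'new'.
    public final class Configuration {
        private static final Configuration INSTANCE = new Configuration();

        private Configuration() { }

        public static Configuration getInstance() {
            return INSTANCE;
        }
    }

    // Usage: Configuration cfg = Configuration.getInstance();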
Best Practices
Design patterns have evolved over a long period of time and they provide the best solutions
to certain problems faced during software development. Learning these patterns helps
inexperienced developers to learn software design in an easy and fast way.

Types of Design Patterns

As per the design pattern reference book Design Patterns - Elements of Reusable Object-
Oriented Software , there are 23 design patterns which can be classified in three categories:
Creational, Structural and Behavioral patterns. We'll also discuss another category of design
pattern: J2EE design patterns.

S.N. Pattern & Description

1 Creational Patterns
These design patterns provide a way to create objects while hiding the creation
logic, rather than instantiating objects directly using new operator. This gives
program more flexibility in deciding which objects need to be created for a
given use case.

2 Structural Patterns
These design patterns concern class and object composition. Concept of
inheritance is used to compose interfaces and define ways to compose objects
to obtain new functionalities.

3 Behavioral Patterns
These design patterns are specifically concerned with communication between
objects.

4 J2EE Patterns
These design patterns are specifically concerned with the presentation tier.
These patterns are identified by Sun Java Center.

UNIT III SYSTEM DEPENDABILITY AND SECURITY


Dependable Systems – Dependability Properties – Sociotechnical Systems – Redundancy and
Diversity – Dependable Processes – Formal Methods and Dependability – Reliability
Engineering – Availability and Reliability – Reliability Requirements – Fault-tolerant
Architectures – Programming for Reliability – Reliability Measurement – Safety Engineering –
Safety-critical Systems – Safety Requirements – Safety Engineering Processes – Safety Cases –
Security Engineering – Security and Dependability – Safety and Organizations – Security
Requirements – Secure System Design – Security Testing and Assurance – Resilience
Engineering – Cybersecurity – Sociotechnical Resilience – Resilient Systems Design.

What Are Dependable Systems?


Dependable systems are desirable since they are “trustworthy,” as discussed in the security
and reliability engineering communities. Dependable systems are typically
characterized by the following attributes:
Reliability: the system behaves as expected, with very few errors.
Availability: the system and services are mostly available, with very little or no down
time.
Safety: the systems do not pose unacceptable risks to the environment or the health of
users.
Confidentiality: data and other information should not be divulged without intent and
authorization.
Survivability: The system services should be robust enough to withstand accidents and
attacks.
Integrity: System data should not be modified without intent and authorization.
Maintainability: Maintenance of system hardware and services should not be difficult or
excessively expensive.

Dependability properties
Principal properties of dependability:

Principal properties:

• Availability: The probability that the system will be up and running and able to
deliver useful services to users.
• Reliability: The probability that the system will correctly deliver services as expected
by users.
• Safety: A judgment of how likely it is that the system will cause damage to people or
its environment.
• Security: A judgment of how likely it is that the system can resist accidental or
deliberate intrusions.
• Resilience: A judgment of how well a system can maintain the continuity of its
critical services in the presence of disruptive events such as equipment failure and
cyberattacks.

Other properties of software dependability:

• Repairability reflects the extent to which the system can be repaired in the event of a
failure;
• Maintainability reflects the extent to which the system can be adapted to new
requirements;
• Survivability reflects the extent to which the system can deliver services whilst under
hostile attack;
• Error tolerance reflects the extent to which user input errors can be avoided and
tolerated.

Socio-technical Systems
Software and hardware are interdependent. Without the hardware, software is an abstraction.
When you put hardware and software together, you create a system. This system will be able
to carry out multiple complex computations and return the result to its environment.
This illustrates one of the fundamental characteristics of a system. A socio-technical system is
basically a study of how a technology is used and produced. This helps us to identify the
ethical errors in the technical and social aspects of the systems. A socio-technical system is a
mixture of people and technology. It consists of many items, which are difficult to
distinguish from each other because they all have close inter-relationships. Some of the items
are shown in the figure:

Socio-technical systems include:


1. People:
People can be individuals or in groups. We also need to consider their roles and agencies.
An organization employs the people, who build and make use of hardware and software,
operate within law and regulations, and share and maintain the data.
2. Hardware:
The classical meaning of technology is hardware. It involves mainframes, workstations,
peripherals, and connecting devices. There is no way for a socio-technical system to exist without
any kind of hardware component.
3. Software:
Software is nothing but executable code. Software includes operating systems, utilities, and
application programs. Software is an integral part of the socio-technical system. Software
often incorporates social rules and procedures as a part of the design, i.e. optimize these
parameters, store the data in this format, ask for these data, etc.
4. Law and regulations:
There might be laws about the protection of privacy, or regulations of chips testing in
military use, etc. Laws and regulations set by organization and government need to be
followed. They carry special societal sanctions if the violators are caught.
5. Data:
The design of a socio-technical system involves deciding what data are collected, to
whom the data should be available, and in which formats the data should be stored.
These systems have properties that are perceptible when all the components are integrated and
operate together.
Example:
Let's say software engineers in Silicon Valley build software with all sorts of bells and
whistles, expecting everyone to be tech savvy, without even considering the basic system
requirements of the users. If a large proportion of those users are in fact elderly,
unaccustomed to the interface, and running traditional systems on old technology, then
the value of the whole system will be significantly reduced.
That is the reason we are interested not only in the technical dimension but also in the domain of
socio-technical systems. The system includes non-technical elements such as people, processes,
regulations, goals, culture, etc., as well as technical components such as computers, software,
infrastructure, etc.
To understand socio-technical systems as a whole, you have to know the various layers, as
shown in figure.

These systems can be difficult to understand as a whole, so we refer to the following seven layers,
which make up the socio-technical systems stack.
1. The equipment layer:
It contains the set of hardware devices, some of which may be computers, laptops, phones, etc.
Most of these devices include an embedded system of some kind.
2. The operating system layer:
This layer provides a set of common facilities for the higher software layers in the system.
It acts as a bridge to the hardware, as it allows interaction between software and
hardware.
3. The communications and data management layer:
This layer extends the operating system facilities and provides an interface that allows
interaction with more extensive functionality, such as access to remote systems, access to a
system database, etc. This is sometimes called middleware, as it is in between the
application and the operating system.
4. The application layer:
This layer provides more specific functionality to meet some organization requirements.
There may be many different application programs in this layer.
5. The business process layer:
This layer consists of a set of processes, involving people and computer systems, that support
the activities of the business. Here, the uses of the software system are defined and enacted.
6. The organizational layer:
At this level, the business rules, regulations, policies along with high-level strategic
processes are defined and are to be followed when using the system.
7. The social layer:
Laws, regulations and culture that govern the operation of the system are defined.

Redundancy and diversity


Redundancy: Keep more than a single version of critical components so that if one fails then a
backup is available.
Diversity: Provide the same functionality in different ways in different components so that they
will not fail in the same way.
Redundant and diverse components should be independent so that they will not suffer from
'common-mode' failures.
Process activities, such as validation, should not depend on a single approach, such as testing,
to validate the system. Redundant and diverse process activities are important especially for
verification and validation. Multiple, different process activities that complement each other and
allow for cross-checking help to avoid process errors, which may otherwise lead to errors in the software.
Dependable processes
To ensure a minimal number of software faults, it is important to have a well-defined,
repeatable software process. A well-defined repeatable process is one that does not depend
entirely on individual skills; rather can be enacted by different people. Regulators use
information about the process to check if good software engineering practice has been used. For
fault detection, it is clear that the process activities should include significant effort devoted to
verification and validation.
Dependable process characteristics:
Explicitly defined
A process that has a defined process model that is used to drive the software production
process. Data must be collected during the process that proves that the development team
has followed the process as defined in the process model.
Repeatable
A process that does not rely on individual interpretation and judgment. The process can
be repeated across projects and with different team members, irrespective of who is
involved in the development.
Dependable process activities

• Requirements reviews to check that the requirements are, as far as possible,
complete and consistent.
• Requirements management to ensure that changes to the requirements are controlled
and that the impact of proposed requirements changes is understood.
• Formal specification, where a mathematical model of the software is created and
analyzed.
• System modeling, where the software design is explicitly documented as a set of
graphical models, and the links between the requirements and these models are
documented.
• Design and program inspections, where the different descriptions of the system are
inspected and checked by different people.
• Static analysis, where automated checks are carried out on the source code of the
program.
• Test planning and management, where a comprehensive set of system tests is
designed.

Dependable software often requires certification so both process and product documentation
has to be produced. Up-front requirements analysis is also essential to discover requirements
and requirements conflicts that may compromise the safety and security of the system.
These conflict with the general approach in agile development of co-development of the
requirements and the system and minimizing documentation. An agile process may be defined
that incorporates techniques such as iterative development, test-first development and user
involvement in the development team. So long as the team follows that process and documents
their actions, agile methods can be used. However, additional documentation and planning is
essential so 'pure agile' is impractical for dependable systems engineering.

Formal Methods

Formal methods are techniques used to model complex systems as mathematical entities. By
building a mathematically rigorous model of a complex system, it is possible to verify the
system's properties in a more thorough fashion than empirical testing.
What is dependability of a system?
✧ The dependability of a system reflects the user's degree of trust in that system. It reflects
the extent of the user's confidence that it will operate as users expect and that it will not 'fail' in
normal use.
✧ Dependability covers the related system attributes of reliability, availability and security.

Reliability Engineering Defined


Reliability engineering is engineering that emphasizes dependability in the life-cycle
management of a product. Reliability is defined as the ability of a product or system to perform
its required functions without failure for a specified time period and when used under specified
conditions. Engineering and analysis techniques are used to improve the reliability or
dependability of a product or system.
Reliability engineering falls largely within the maintenance phase of the software development life
cycle (SDLC), and its overall aim is to make software and products more reliable.

Reliability Engineering Objectives


The main objectives of reliability engineering are:

• To apply engineering knowledge and specialist techniques to prevent or to reduce the
likelihood or frequency of failures
• To identify and correct the causes of failures that occur despite the efforts to prevent
them
• To determine ways of coping with failures that occur, if their causes have not been fixed
• To apply methods for estimating the likely reliability of new software and for analyzing
reliability data

• Availability is a measure of the percentage of time that an IT service or component is in
an operable state.
• Reliability, on the other hand, is a measure of the probability that the system will meet
defined performance standards in performing its intended function during a specified
interval.
Key Metrics

Here are some key metrics that are typically used to measure Availability and Reliability.

Availability

Availability, as a measure of uptime, can be calculated as follows:

Percentage of availability = (total elapsed time – sum of downtime)/total elapsed time


Oftentimes, service providers provide an availability SLA based on the availability percentage
table below, committing to ensure that functionality is up and running based on expectations.

Allowed unavailability window for each availability level:

Availability Level | Per year | Per quarter | Per month | Per week | Per day | Per hour
90% | 36.5 days | 9 days | 3 days | 16.8 hours | 2.4 hours | 6 minutes
95% | 18.25 days | 4.5 days | 1.5 days | 8.4 hours | 1.2 hours | 3 minutes
99% | 3.65 days | 21.6 hours | 7.2 hours | 1.68 hours | 14.4 minutes | 36 seconds
99.5% | 1.83 days | 10.8 hours | 3.6 hours | 50.4 minutes | 7.20 minutes | 18 seconds
99.9% | 8.76 hours | 2.16 hours | 43.2 minutes | 10.1 minutes | 1.44 minutes | 3.6 seconds
99.95% | 4.38 hours | 1.08 hours | 21.6 minutes | 5.04 minutes | 43.2 seconds | 1.8 seconds
99.99% | 52.6 minutes | 12.96 minutes | 4.32 minutes | 60.5 seconds | 8.64 seconds | 0.36 seconds
99.999% | 5.26 minutes | 1.30 minutes | 25.9 seconds | 6.05 seconds | 0.87 seconds | 0.04 seconds
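
To make the arithmetic behind this table concrete, the following small Java sketch derives the allowed yearly downtime from an availability percentage; a 365-day year is assumed, matching the "per year" column.

    // Derive the allowed downtime per year from an availability level.
    public class AvailabilityCalc {
        public static void main(String[] args) {
            double hoursPerYear = 365 * 24;                 // 8760 hours
            double[] levels = {0.99, 0.999, 0.9999};

            for (double availability : levels) {
                double allowedDowntimeHours = hoursPerYear * (1 - availability);
                System.out.printf("%.4f -> %.2f hours of downtime per year%n",
                        availability, allowedDowntimeHours);
            }
            // 0.9900 -> 87.60 hours, 0.9990 -> 8.76 hours, 0.9999 -> 0.88 hours
        }
    }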

Reliability

Reliability helps teams understand how the service will be available given real-world scenarios
— in other words, measuring the frequency and impact of failures. Common metrics to measure
reliability are:

Mean time between failure (MTBF) = total time in service/number of failures

Failure rate = number of failures/total time in service
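
As a quick, hypothetical illustration of these two formulas (the observation period and failure count below are assumed, not taken from the text):

    // MTBF and failure rate for a service observed over one month.
    public class ReliabilityMetrics {
        public static void main(String[] args) {
            double totalTimeInServiceHours = 720;   // 30 days of operation (assumed)
            int numberOfFailures = 3;               // failures observed (assumed)

            double mtbf = totalTimeInServiceHours / numberOfFailures;        // 240 hours
            double failureRate = numberOfFailures / totalTimeInServiceHours; // ~0.0042 failures per hour

            System.out.printf("MTBF = %.1f hours, failure rate = %.4f failures/hour%n",
                    mtbf, failureRate);
        }
    }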


In determining metrics for both reliability and availability, IT organizations need to make
tradeoffs and decisions with respect to costs and service levels. They need to balance costs and
investments in infrastructure/performance to maintain high service levels, with maximum
allowable increments of downtime/failures that minimize impact to the business and user
experience.

Reliability Engineering

Reference: Sommerville, Software Engineering, 10 ed., Chapter 11


The big picture
In general, software customers expect all software to be dependable. However, for non-
critical applications, they may be willing to accept some system failures. Some applications
(critical systems) have very high reliability requirements and special software engineering
techniques may be used to achieve this.
Reliability terminology
• Human error or mistake − Human behavior that results in the introduction of faults into a system.
• System fault − A characteristic of a software system that can lead to a system error.
• System error − An erroneous system state that can lead to system behavior that is unexpected by system users.
• System failure − An event that occurs at some point in time when the system does not deliver a service as expected by its users.
Failures are usually a result of system errors that are derived from faults in the system.
However, faults do not necessarily result in system errors if the erroneous system state is
transient and can be 'corrected' before an error arises. Errors do not necessarily lead to system
failures if the error is corrected by built-in error detection and recovery mechanism.
Fault management strategies to achieve reliability:
Fault avoidance
Development techniques are used that either minimize the possibility of mistakes or trap
mistakes before they result in the introduction of system faults.
Fault detection and removal
Verification and validation techniques that increase the probability of detecting and
correcting errors before the system goes into service are used.
Fault tolerance
Run-time techniques are used to ensure that system faults do not result in system errors
and/or that system errors do not lead to system failures.
Availability and reliability
Reliability is the probability of failure-free system operation over a specified time in a given
environment for a given purpose. Availability is the probability that a system, at a point in
time, will be operational and able to deliver the requested services. Both of these attributes can
be expressed quantitatively e.g. availability of 0.999 means that the system is up and running
for 99.9% of the time.
The formal definition of reliability does not always reflect the user's perception of a system's
reliability. Reliability can only be defined formally with respect to a system specification i.e.
a failure is a deviation from a specification. Users don't read specifications and don't know
how the system is supposed to behave; therefore, perceived reliability is more important in
practice.
Availability is usually expressed as a percentage of the time that the system is available to
deliver services e.g. 99.95%. However, this does not take into account two factors:

• The number of users affected by the service outage. Loss of service in the middle of
the night is less important for many systems than loss of service during peak usage
periods.
• The length of the outage. The longer the outage, the more the disruption. Several
short outages are less likely to be disruptive than 1 long outage. Long repair times are
a particular problem.

Removing X% of the faults in a system will not necessarily improve the reliability by X%.
Program defects may be in rarely executed sections of the code so may never be encountered by
users. Removing these does not affect the perceived reliability. Users adapt their behavior to
avoid system features that may fail for them. A program with known faults may therefore still
be perceived as reliable by its users.
Reliability requirements
Functional reliability requirements define system and software functions that avoid, detect or
tolerate faults in the software and so ensure that these faults do not lead to system failure.
Reliability is a measurable system attribute so non-functional reliability requirements may be
specified quantitatively. These define the number of failures that are acceptable during
normal use of the system or the time in which the system must be available. Functional
reliability requirements define system and software functions that avoid, detect or tolerate faults
in the software and so ensure that these faults do not lead to system failure. Software reliability
requirements may also be included to cope with hardware failure or operator error.
Reliability metrics are units of measurement of system reliability. System reliability is
measured by counting the number of operational failures and, where appropriate, relating these
to the demands made on the system and the time that the system has been operational. Metrics
include:
• Probability of failure on demand (POFOD). The probability that the system will
fail when a service request is made. Useful when demands for service are intermittent
and relatively infrequent (a small numerical sketch follows this list).
• Rate of occurrence of failures (ROCOF). Reflects the rate of occurrence of failure
in the system. Relevant for systems where the system has to process a large number of
similar requests in a short time. Mean time to failure (MTTF) is the reciprocal of
ROCOF.
• Availability (AVAIL). Measure of the fraction of the time that the system is available
for use. Takes repair and restart time into account. Relevant for non-stop,
continuously running systems.
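
A small numerical sketch of the POFOD metric, using made-up figures (1,000 demands, 2 failures):

    // Estimate POFOD from observed demands and failures (assumed numbers).
    public class PofodEstimate {
        public static void main(String[] args) {
            int demands = 1000;      // service requests made
            int failures = 2;        // requests on which the system failed

            double pofod = (double) failures / demands;   // 0.002
            System.out.println("Estimated POFOD = " + pofod);
        }
    }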

Non-functional reliability requirements are specifications of the required reliability and
availability of a system using one of the reliability metrics (POFOD, ROCOF or AVAIL).
Quantitative reliability and availability specification has been used for many years in safety-
critical systems but is uncommon for business critical systems. However, as more and more
companies demand 24/7 service from their systems, it makes sense for them to be precise about
their reliability and availability expectations.
Functional reliability requirements specify the faults to be detected and the actions to be
taken to ensure that these faults do not lead to system failures.

• Checking requirements that identify checks to ensure that incorrect data is detected
before it leads to a failure.
• Recovery requirements that are geared to help the system recover after a failure has
occurred.
• Redundancy requirements that specify redundant features of the system to be
included.
• Process requirements for reliability which specify the development process to be used
may also be included.

Fault tolerance
In critical situations, software systems must be fault tolerant. Fault tolerance is required where
there are high availability requirements or where system failure costs are very high. Fault
tolerance means that the system can continue in operation in spite of software failure. Even if
the system has been proved to conform to its specification, it must also be fault tolerant as there
may be specification errors or the validation may be incorrect.
Fault-tolerant systems architectures are used in situations where fault tolerance is essential.
These architectures are generally all based on redundancy and diversity. Examples of situations
where dependable architectures are used:

• Flight control systems, where system failure could threaten the safety of passengers;
• Reactor systems where failure of a control system could lead to a chemical or nuclear
emergency;
• Telecommunication systems, where there is a need for 24/7 availability.
Protection system is a specialized system that is associated with some other control system,
which can take emergency action if a failure occurs, e.g. a system to stop a train if it passes a
red light, or a system to shut down a reactor if temperature/pressure are too high. Protection
systems independently monitor the controlled system and the environment. If a problem is
detected, it issues commands to take emergency action to shut down the system and avoid a
catastrophe. Protection systems are redundant because they include monitoring and control
capabilities that replicate those in the control software. Protection systems should be diverse
and use different technology from the control software. They are simpler than the control
system so more effort can be expended in validation and dependability assurance. The aim is to
ensure that there is a low probability of failure on demand for the protection system.

Self-monitoring architecture is a multi-channel architecture where the system monitors its
own operations and takes action if inconsistencies are detected. The same computation is
carried out on each channel and the results are compared. If the results are identical and are
produced at the same time, then it is assumed that the system is operating correctly. If the
results are different, then a failure is assumed and a failure exception is raised. Hardware in
each channel has to be diverse so that common mode hardware failure will not lead to each
channel producing the same results. Software in each channel must also be diverse, otherwise
the same software error would affect each channel. If high-availability is required, you may use
several self-checking systems in parallel. This is the approach used in the Airbus family of
aircraft for their flight control systems.
N-version programming involves multiple versions of a software system to carry out
computations at the same time. There should be an odd number of computers involved,
typically 3. The results are compared using a voting system and the majority result is taken to
be the correct result. Approach derived from the notion of triple-modular redundancy, as used in
hardware systems.

Hardware fault tolerance depends on triple-modular redundancy (TMR). There are three
replicated identical components that receive the same input and whose outputs are compared. If
one output is different, it is ignored and component failure is assumed. Based on most faults
resulting from component failures rather than design faults and a low probability of
simultaneous component failure.
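
The following minimal Java sketch (not a real flight-control or reactor implementation) shows the majority-voting idea that underlies both TMR and N-version programming: three channel outputs are compared and the value agreed by at least two channels is taken as the result.

    import java.util.Arrays;

    // Majority voter for three independently produced channel outputs.
    public class MajorityVoter {

        // Returns the value agreed by at least two channels; if all three
        // disagree, the caller must treat this as a system-level failure.
        static int vote(int a, int b, int c) {
            if (a == b || a == c) return a;
            if (b == c) return b;
            throw new IllegalStateException("No majority among " + Arrays.asList(a, b, c));
        }

        public static void main(String[] args) {
            System.out.println(vote(42, 42, 41));   // prints 42: the deviating channel is outvoted
        }
    }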
Programming for reliability
Good programming practices can be adopted that help reduce the incidence of program faults.
These programming practices support fault avoidance, detection, and tolerance.
Limit the visibility of information in a program
Program components should only be allowed access to data that they need for their
implementation. This means that accidental corruption of parts of the program state by
these components is impossible. You can control visibility by using abstract data types
where the data representation is private and you only allow access to the data through
predefined operations such as get() and put().
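
A minimal Java sketch of this practice (BoundedQueue is a made-up example): the representation is private, and clients can only reach the data through put() and get().

    // Abstract data type with a hidden representation.
    public class BoundedQueue {
        private final int[] items;       // private representation: not visible to clients
        private int head, tail, size;

        public BoundedQueue(int capacity) {
            items = new int[capacity];
        }

        // put() refuses to corrupt the state when the queue is full.
        public boolean put(int value) {
            if (size == items.length) return false;
            items[tail] = value;
            tail = (tail + 1) % items.length;
            size++;
            return true;
        }

        // get() returns null when there is nothing to take.
        public Integer get() {
            if (size == 0) return null;
            int value = items[head];
            head = (head + 1) % items.length;
            size--;
            return value;
        }
    }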
Check all inputs for validity
All programs take inputs from their environment and make assumptions about these
inputs. However, program specifications rarely define what to do if an input is not
consistent with these assumptions. Consequently, many programs behave unpredictably
when presented with unusual inputs and, sometimes, these are threats to the security of
the system. Consequently, you should always check inputs before processing against the
assumptions made about these inputs.
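
A short, hypothetical Java sketch of checking an input against the assumptions made about it before processing (here an "age" value read as text; the range check is an assumed rule):

    // Validate an input value before it is used by the rest of the program.
    public class InputCheck {
        static int parseAge(String raw) {
            if (raw == null || raw.isBlank()) {
                throw new IllegalArgumentException("age is missing");
            }
            int age;
            try {
                age = Integer.parseInt(raw.trim());
            } catch (NumberFormatException e) {
                throw new IllegalArgumentException("age is not a number: " + raw);
            }
            if (age < 0 || age > 150) {               // assumed plausible range
                throw new IllegalArgumentException("age out of range: " + age);
            }
            return age;
        }

        public static void main(String[] args) {
            System.out.println(parseAge(" 42 "));     // prints 42
        }
    }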
Provide a handler for all exceptions
A program exception is an error or some unexpected event such as a power failure.
Exception handling constructs allow for such events to be handled without the need for
continual status checking to detect exceptions. Using normal control constructs to detect
exceptions needs many additional statements to be added to the program. This adds a
significant overhead and is potentially error-prone.
Minimize the use of error-prone constructs
Program faults are usually a consequence of human error because programmers lose track
of the relationships between the different parts of the system. This is exacerbated by error-
prone constructs in programming languages that are inherently complex or that don't
check for mistakes when they could do so. Therefore, when programming, you should try
to avoid or at least minimize the use of these error-prone constructs.
Error-prone constructs:

• Unconditional branch (goto) statements


• Floating-point numbers (inherently imprecise, which may lead to invalid
comparisons)
• Pointers
• Dynamic memory allocation
• Parallelism (can result in subtle timing errors because of unforeseen interaction
between parallel processes)
• Recursion (can cause memory overflow as the program stack fills up)
• Interrupts (can cause a critical operation to be terminated and make a program
difficult to understand)
• Inheritance (code is not localized, which may result in unexpected behavior
when changes are made and problems of understanding the code)
• Aliasing (using more than 1 name to refer to the same state variable)
• Unbounded arrays (may result in buffer overflow)
• Default input processing (if the default action is to transfer control elsewhere in
the program, incorrect or deliberately malicious input can then trigger a
program failure)

Provide restart capabilities


For systems that involve long transactions or user interactions, you should always
provide a restart capability that allows the system to restart after failure without users
having to redo everything that they have done.
Check array bounds
In some programming languages, such as C, it is possible to address a memory location
outside of the range allowed for in an array declaration. This leads to the well-known
'bounded buffer' vulnerability where attackers write executable code into memory by
deliberately writing beyond the top element in an array. If your language does not include
bound checking, you should therefore always check that an array access is within the
bounds of the array.
Include timeouts when calling external components
In a distributed system, failure of a remote computer can be 'silent' so that programs
expecting a service from that computer may never receive that service or any indication
that there has been a failure. To avoid this, you should always include timeouts on all
calls to external components. After a defined time period has elapsed without a response,
your system should then assume failure and take whatever actions are required to recover
from this.
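
A sketch of this practice using Java's standard HttpClient (the URL is a placeholder, not a real service): both a connect timeout and an overall response timeout are set, and a timeout is treated as a failure to recover from.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.net.http.HttpTimeoutException;
    import java.time.Duration;

    // Call an external component with explicit timeouts so a silent remote
    // failure cannot block this program forever.
    public class TimeoutCall {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newBuilder()
                    .connectTimeout(Duration.ofSeconds(2))            // fail fast if unreachable
                    .build();

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://example.com/service"))   // placeholder endpoint
                    .timeout(Duration.ofSeconds(5))                   // overall response deadline
                    .build();

            try {
                HttpResponse<String> response =
                        client.send(request, HttpResponse.BodyHandlers.ofString());
                System.out.println("Status: " + response.statusCode());
            } catch (HttpTimeoutException e) {
                System.out.println("No response in time - assuming failure and recovering");
            }
        }
    }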
Name all constants that represent real-world values
Always give constants that reflect real-world values (such as tax rates) names rather than
using their numeric values, and always refer to them by name. You are less likely to make
mistakes and type the wrong value when you are using a name rather than a value. This also
means that when these 'constants' change (for sure, they are not really constant), you
only have to make the change in one place in your program.
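
A tiny sketch of this practice; the tax rate below is an assumed example value, named once and referred to by name everywhere else.

    // Name real-world constants instead of scattering raw numbers in the code.
    public class TaxCalculator {
        private static final double STANDARD_TAX_RATE = 0.20;   // assumed example rate

        static double taxDue(double income) {
            return income * STANDARD_TAX_RATE;       // change the rate in one place only
        }

        public static void main(String[] args) {
            System.out.println(taxDue(30000));       // 6000.0
        }
    }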

Safety Engineering

Reference: Sommerville, Software Engineering, 10 ed., Chapter 12


The big picture
Safety is a property of a system that reflects the system's ability to operate, normally or
abnormally, without danger of causing human injury or death and without damage to the
system's environment. It is important to consider software safety as most devices whose
failure is critical now incorporate software-based control systems.
Safety and reliability are related but distinct. Reliability is concerned with conformance to a
given specification and delivery of service. Safety is concerned with ensuring system cannot
cause damage irrespective of whether or not it conforms to its specification. System reliability
is essential for safety but is not enough.
Reliable systems can be unsafe:

• Dormant faults in a system can remain undetected for many years and only rarely
arise.
• Specification errors: the system can behave as specified but still cause an accident.
• Hardware failures could generate spurious inputs that are hard to anticipate in the
specification.
• Context-sensitive commands (the right command at the wrong time) are often the
result of operator error.
Safety-critical systems
In safety-critical systems it is essential that system operation is always safe i.e. the system
should never cause damage to people or the system's environment. Examples: control and
monitoring systems in aircraft, process control systems in chemical manufacture, automobile
control systems such as braking and engine management systems.
Two levels of safety criticality:

• Primary safety-critical systems: embedded software systems whose failure can
cause the associated hardware to fail and directly threaten people.
• Secondary safety-critical systems: systems whose failure results in faults in other
(socio-technical) systems, which can then have safety consequences.

Safety terminology
• Accident (mishap) − An unplanned event or sequence of events which results in human death or injury, damage to property, or to the environment. An overdose of insulin is an example of an accident.
• Hazard − A condition with the potential for causing or contributing to an accident.
• Damage − A measure of the loss resulting from a mishap. Damage can range from many people being killed as a result of an accident to minor injury or property damage.
• Hazard severity − An assessment of the worst possible damage that could result from a particular hazard. Hazard severity can range from catastrophic, where many people are killed, to minor, where only minor damage results.
• Hazard probability − The probability of the events occurring which create a hazard. Probability values tend to be arbitrary but range from 'probable' (e.g. 1/100 chance of a hazard occurring) to 'implausible' (no conceivable situations are likely in which the hazard could occur).
• Risk − A measure of the probability that the system will cause an accident. The risk is assessed by considering the hazard probability, the hazard severity, and the probability that the hazard will lead to an accident.
Safety achievement strategies:
Hazard avoidance
The system is designed so that some classes of hazard simply cannot arise.
Hazard detection and removal
The system is designed so that hazards are detected and removed before they result in an
accident.
Damage limitation
The system includes protection features that minimize the damage that may result from
an accident.
Accidents in complex systems rarely have a single cause as these systems are designed to be
resilient to a single point of failure. Almost all accidents are a result of combinations of
malfunctions rather than single failures. It is probably the case that anticipating all problem
combinations, especially in software-controlled systems, is impossible, so achieving complete
safety is impossible and some accidents are inevitable.
Safety requirements
The goal of safety requirements engineering is to identify protection requirements that ensure
that system failures do not cause injury or death or environmental damage. Safety requirements
may be 'shall not' requirements i.e. they define situations and events that should never occur.
Functional safety requirements define: checking and recovery features that should be included
in a system, and features that provide protection against system failures and external attacks.
Hazard-driven analysis:
Hazard identification
Identify the hazards that may threaten the system. Hazard identification may be based
on different types of hazard: physical, electrical, biological, service failure, etc.
Hazard assessment
The process is concerned with understanding the likelihood that a risk will arise and the
potential consequences if an accident or incident should occur. Risks may be categorized
as: intolerable (must never arise or result in an accident), as low as reasonably
practical - ALARP (must minimize the possibility of risk given cost and schedule
constraints), and acceptable (the consequences of the risk are acceptable and no extra
costs should be incurred to reduce hazard probability).
The acceptability of a risk is determined by human, social, and political considerations.
In most societies, the boundaries between the regions are pushed upwards with time i.e.
society is less willing to accept risk (e.g., the costs of cleaning up pollution may be less
than the costs of preventing it but this may not be socially acceptable). Risk assessment is
subjective.
Hazard assessment process: for each identified hazard, assess hazard probability, accident
severity, estimated risk, acceptability.
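As a rough illustration of this assessment step, the following minimal Python sketch records a hazard assessment and categorizes the estimated risk as intolerable, ALARP or acceptable. The probability and severity scales and the thresholds are invented for illustration; real projects take them from the applicable safety standard.

# Illustrative hazard assessment record. Scales and thresholds are assumptions,
# not values taken from any safety standard.
from dataclasses import dataclass

SEVERITY = {"minor": 1, "serious": 2, "catastrophic": 3}      # assumed ordinal scale
PROBABILITY = {"implausible": 1, "remote": 2, "probable": 3}  # assumed ordinal scale

@dataclass
class HazardAssessment:
    hazard: str
    probability: str   # key into PROBABILITY
    severity: str      # key into SEVERITY

    def risk_class(self) -> str:
        """Categorize estimated risk as intolerable, ALARP or acceptable."""
        score = PROBABILITY[self.probability] * SEVERITY[self.severity]
        if score >= 6:
            return "intolerable"   # must never arise or result in an accident
        if score >= 3:
            return "ALARP"         # reduce as far as cost/schedule allow
        return "acceptable"        # no extra risk-reduction cost justified

# Example: the insulin overdose hazard from the terminology above
print(HazardAssessment("insulin overdose", "remote", "catastrophic").risk_class())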
Hazard analysis
Concerned with discovering the root causes of risks in a particular system. Techniques
have been mostly derived from safety-critical systems and can be: inductive, bottom-up:
start with a proposed system failure and assess the hazards that could arise from that
failure; and deductive, top-down: start with a hazard and deduce what the causes of this
could be.
Fault-tree analysis is a deductive, top-down technique (a small sketch follows the list below):

• Put the risk or hazard at the root of the tree and identify the system states that
could lead to that hazard.
• Where appropriate, link these with 'and' or 'or' conditions.
• A goal should be to minimize the number of single causes of system failure.
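The following minimal sketch (not a real fault-tree tool) shows the idea: leaves are basic failure events, internal nodes combine their children with 'and'/'or' conditions, and evaluating the tree shows whether a given combination of basic failures produces the root hazard. All event names are hypothetical.

# Minimal fault-tree sketch: leaves are basic failure events, internal nodes
# combine their children with AND/OR. Event names are hypothetical examples.

def AND(*children):
    return lambda failures: all(child(failures) for child in children)

def OR(*children):
    return lambda failures: any(child(failures) for child in children)

def event(name):
    return lambda failures: name in failures

# Root hazard: "incorrect insulin dose administered"
incorrect_dose = OR(
    event("sensor failure"),
    AND(event("dose computation error"), event("limit check omitted")),
)

print(incorrect_dose({"sensor failure"}))                                  # True
print(incorrect_dose({"dose computation error"}))                          # False
print(incorrect_dose({"dose computation error", "limit check omitted"}))   # True

Here the single 'sensor failure' event is a single cause of the root hazard, which the analysis would flag for attention.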

Risk reduction
The aim of this process is to identify dependability requirements that specify how the
risks should be managed and ensure that accidents/incidents do not arise. Risk reduction
strategies: hazard avoidance; hazard detection and removal; damage limitation.
Safety engineering processes
Safety engineering processes are based on reliability engineering processes. Regulators may
require evidence that safety engineering processes have been used in system development.
Agile methods are not usually used for safety-critical systems engineering. Extensive process
and product documentation is needed for system regulation, which contradicts the focus in agile
methods on the software itself. A detailed safety analysis of a complete system specification is
important, which contradicts the interleaved development of a system specification and
program. However, some agile techniques such as test-driven development may be used.
Process assurance involves defining a dependable process and ensuring that this process is
followed during the system development. Process assurance focuses on:

• Do we have the right processes? Are the processes appropriate for the level of
dependability required? They should include requirements management, change
management, reviews and inspections, etc.
• Are we doing the processes right? Have these processes been followed by the
development team?

Process assurance is important for safety-critical systems development: accidents are rare
events so testing may not find all problems; safety requirements are sometimes 'shall not'
requirements so cannot be demonstrated through testing. Safety assurance activities may be
included in the software process that record the analyses that have been carried out and the
people responsible for these.
Safety-related process activities:

• Creation of a hazard logging and monitoring system;


• Appointment of project safety engineers who have explicit responsibility for system
safety;
• Extensive use of safety reviews;
• Creation of a safety certification system where the safety of critical components is
formally certified;
• Detailed configuration management.

Formal methods can be used when a mathematical specification of the system is produced.
They are the ultimate static verification technique that may be used at different stages in the
development process. A formal specification may be developed and mathematically analyzed
for consistency. This helps discover specification errors and omissions. Formal arguments that a
program conforms to its mathematical specification may be developed. This is effective in
discovering programming and design errors.
Advantages of formal methods
Producing a mathematical specification requires a detailed analysis of the requirements
and this is likely to uncover errors. Concurrent systems can be analyzed to discover race
conditions that might lead to deadlock. Testing for such problems is very difficult. They
can detect implementation errors before testing when the program is analyzed alongside
the specification.
Disadvantages of formal methods
Require specialized notations that cannot be understood by domain experts. Very
expensive to develop a specification and even more expensive to show that a program
meets that specification. Proofs may contain errors. It may be possible to reach the same
level of confidence in a program more cheaply using other V & V techniques.
Model checking involves creating an extended finite state model of a system and, using a
specialized system (a model checker), checking that model for errors. The model
checker explores all possible paths through the model and checks that a user-specified
property is valid for each path. Model checking is particularly valuable for verifying concurrent
systems, which are hard to test. Although model checking is computationally very expensive, it
is now practical to use it in the verification of small to medium sized critical systems.
Static program analysis uses software tools for source text processing. They parse the program
text and try to discover potentially erroneous conditions and bring these to the attention of the V
& V team. They are very effective as an aid to inspections - they are a supplement to but not a
replacement for inspections.
Three levels of static analysis:
Characteristic error checking
The static analyzer can check for patterns in the code that are characteristic of errors
made by programmers using a particular language.
User-defined error checking
Users of a programming language define error patterns, thus extending the types of error
that can be detected. This allows specific rules that apply to a program to be checked.
Assertion checking
Developers include formal assertions in their program that describe relationships which must
hold. The static analyzer symbolically executes the code and highlights potential problems.
Static analysis is particularly valuable when a language such as C is used which has weak
typing and hence many errors are undetected by the compiler. Particularly valuable for security
checking - the static analyzer can discover areas of vulnerability such as buffer overflows or
unchecked inputs. Static analysis is now routinely used in the development of many safety and
security critical systems.
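As a toy example of user-defined error checking, the sketch below uses Python's ast module to flag calls to eval(), a pattern many analyzers treat as a potential security vulnerability. The rule and the sample source are invented for illustration; real analyzers apply many such rules.

# Toy user-defined error check: flag calls to eval(), which many static
# analyzers treat as a characteristic security risk in Python code.
import ast

SOURCE = """
user_input = input()
result = eval(user_input)   # flagged: unchecked input passed to eval
"""

tree = ast.parse(SOURCE)
for node in ast.walk(tree):
    if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
        if node.func.id == "eval":
            print(f"line {node.lineno}: call to eval() on possibly unchecked input")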

What is in a safety case?


A safety case is a written demonstration of evidence and due diligence provided by a
corporation to demonstrate that it has the ability to operate a facility safely and can
effectively control hazards. The primary use of safety cases in an occupational health and
safety context is in the process industries.

Security Engineering

Reference: Sommerville, Software Engineering, 10 ed., Chapter 13


The big picture
Security engineering is a sub-field of the broader field of computer security. It
encompasses tools, techniques and methods to support the development and maintenance of
systems that can resist malicious attacks that are intended to damage a computer-based system
or its data.
Dimensions of security:

• Confidentiality Information in a system may be disclosed or made accessible to
people or programs that are not authorized to have access to that information.
• Integrity Information in a system may be damaged or corrupted, making it unusable or
unreliable.
• Availability Access to a system or its data that is normally available may not be
possible.

Three levels of security:

• Infrastructure security is concerned with maintaining the security of all systems and
networks that provide an infrastructure and a set of shared services to the
organization.
• Application security is concerned with the security of individual application systems
or related groups of systems.
• Operational security is concerned with the secure operation and use of the
organization's systems.
Application security is a software engineering problem where the system is designed to resist
attacks. Infrastructure security is a systems management problem where the infrastructure is
configured to resist attacks.
System security management involves user and permission management (adding and
removing users from the system and setting up appropriate permissions for users), software
deployment and maintenance (installing application software and middleware and configuring
these systems so that vulnerabilities are avoided), attack monitoring, detection and
recovery (monitoring the system for unauthorized access, designing strategies for resisting
attacks, and developing backup and recovery strategies).
Operational security is primarily a human and social issue, which is concerned with ensuring
that people do not take actions that may compromise system security. Users sometimes take
insecure actions to make it easier for them to do their jobs. There is therefore a trade-off
between system security and system effectiveness.
Security and dependability
The security of a system is a property that reflects the system's ability to protect itself from
accidental or deliberate external attack. Security is essential as most systems are networked
so that external access to the system through the Internet is possible. Security is an essential
pre-requisite for availability, reliability and safety.
Security terminology
Asset: Something of value which has to be protected. The asset may be the software system itself or data used by that system.
Attack: An exploitation of a system's vulnerability. Generally, this is from outside the system and is a deliberate attempt to cause some damage.
Control: A protective measure that reduces a system's vulnerability. Encryption is an example of a control that reduces a vulnerability of a weak access control system.
Exposure: Possible loss or harm to a computing system. This can be loss or damage to data, or can be a loss of time and effort if recovery is necessary after a security breach.
Threat: Circumstances that have potential to cause loss or harm. You can think of these as a system vulnerability that is subjected to an attack.
Vulnerability: A weakness in a computer-based system that may be exploited to cause loss or harm.
Four types of security threats:

• Interception threats that allow an attacker to gain access to an asset.


• Interruption threats that allow an attacker to make part of the system unavailable.
• Modification threats that allow an attacker to tamper with a system asset.
• Fabrication threats that allow an attacker to insert false information into a system.

Security assurance strategies:


Vulnerability avoidance
The system is designed so that vulnerabilities do not occur. For example, if there is no
external network connection then external attack is impossible.
Attack detection and elimination
The system is designed so that attacks on vulnerabilities are detected and neutralised
before they result in an exposure. For example, virus checkers find and remove viruses
before they infect a system.
Exposure limitation and recovery
The system is designed so that the adverse consequences of a successful attack are
minimised. For example, a backup policy allows damaged information to be restored.
Security and attributes of dependability:
Security and reliability
If a system is attacked and the system or its data are corrupted as a consequence of that
attack, then this may induce system failures that compromise the reliability of the system.
Security and availability
A common attack on a web-based system is a denial of service attack, where a web server
is flooded with service requests from a range of different sources. The aim of this attack
is to make the system unavailable.
Security and safety
An attack that corrupts the system or its data means that assumptions about safety may
not hold. Safety checks rely on analyzing the source code of safety critical software and
assume the executing code is a completely accurate translation of that source code. If this
is not the case, safety-related failures may be induced and the safety case made for the
software is invalid.
Security and resilience
Resilience is a system characteristic that reflects its ability to resist and recover from
damaging events. The most probable damaging event on networked software systems is a
cyberattack of some kind so most of the work now done in resilience is aimed at
deterring, detecting and recovering from such attacks.
Security and organizations
Security is expensive and it is important that security decisions are made in a cost-effective
way. There is no point in spending more than the value of an asset to keep that asset secure.
Organizations use a risk-based approach to support security decision making and should have
a defined security policy based on security risk analysis. Security risk analysis is a business
rather than a technical process.
Security policies should set out general information access strategies that should apply across
the organization. The point of security policies is to inform everyone in an organization about
security so these should not be long and detailed technical documents. From a security
engineering perspective, the security policy defines, in broad terms, the security goals of the
organization. The security engineering process is concerned with implementing these goals.
Security policies principles:
The assets that must be protected
It is not cost-effective to apply stringent security procedures to all organizational assets.
Many assets are not confidential and can be made freely available.
The level of protection that is required for different types of asset
For sensitive personal information, a high level of security is required; for other
information, the consequences of loss may be minor so a lower level of security is
adequate.
The responsibilities of individual users, managers and the organization
The security policy should set out what is expected of users e.g. strong passwords, log out
of computers, office security, etc.
Existing security procedures and technologies that should be maintained
For reasons of practicality and cost, it may be essential to continue to use existing
approaches to security even where these have known limitations.
Risk assessment and management is concerned with assessing the possible losses that might
ensue from attacks on the system and balancing these losses against the costs of security
procedures that may reduce these losses. Risk management should be driven by an
organizational security policy. Risk management involves:
Preliminary risk assessment
The aim of this initial risk assessment is to identify generic risks that are applicable to the
system and to decide if an adequate level of security can be achieved at a reasonable cost.
The risk assessment should focus on the identification and analysis of high-level risks to
the system. The outcomes of the risk assessment process are used to help identify security
requirements.
Design risk assessment
This risk assessment takes place during the system development life cycle and is
informed by the technical system design and implementation decisions. The results of the
assessment may lead to changes to the security requirements and the addition of new
requirements. Known and potential vulnerabilities are identified, and this knowledge is
used to inform decision making about the system functionality and how it is to be
implemented, tested, and deployed.
Operational risk assessment
This risk assessment process focuses on the use of the system and the possible risks that
can arise from human behavior. Operational risk assessment should continue after a
system has been installed to take account of how the system is used. Organizational
changes may mean that the system is used in different ways from those originally
planned. These changes lead to new security requirements that have to be implemented as
the system evolves.
Security requirements
Security specification has something in common with safety requirements specification - in
both cases, your concern is to avoid something bad happening. Four major differences:

• Safety problems are accidental - the software is not operating in a hostile


environment. In security, you must assume that attackers have knowledge of system
weaknesses.
• When safety failures occur, you can look for the root cause or weakness that led to
the failure. When failure results from a deliberate attack, the attacker may conceal the
cause of the failure.
• Shutting down a system can avoid a safety-related failure. Causing a shut down
may be the aim of an attack.
• Safety-related events are not generated from an intelligent adversary. An attacker
can probe defenses over time to discover weaknesses.

Security requirement classification

• Risk avoidance requirements set out the risks that should be avoided by designing the
system so that these risks simply cannot arise.
• Risk detection requirements define mechanisms that identify the risk if it arises and
neutralize the risk before losses occur.
• Risk mitigation requirements set out how the system should be designed so that it
can recover from and restore system assets after some loss has occurred.
Security risk assessment

• Asset identification: identify the key system assets (or services) that have to be
protected.
• Asset value assessment: estimate the value of the identified assets.
• Exposure assessment: assess the potential losses associated with each asset.
• Threat identification: identify the most probable threats to the system assets.
• Attack assessment: decompose threats into possible attacks on the system and the
ways that these may occur.
• Control identification: propose the controls that may be put in place to protect an
asset.
• Feasibility assessment: assess the technical feasibility and cost of the controls.
• Security requirements definition: define system security requirements. These can be
infrastructure or application system requirements.

Misuse cases are instances of threats to a system:

• Interception threats: attacker gains access to an asset.


• Interruption threats: attacker makes part of a system unavailable.
• Modification threats: a system asset is tampered with.
• Fabrication threats: false information is added to a system.

Secure systems design


Security should be designed into a system - it is very difficult to make an insecure system
secure after it has been designed or implemented.
Adding security features to a system to enhance its security affects other attributes of the
system:
• Performance: additional security checks slow down a system so its response time or
throughput may be affected.
• Usability: security measures may require users to remember information or require
additional interactions to complete a transaction. This makes the system less usable
and can frustrate system users.

Design risk assessment is done while the system is being developed and after it has been
deployed. More information is available - system platform, middleware and the system
architecture and data organization. Vulnerabilities that arise from design choices may therefore
be identified.
During architectural design, two fundamental issues have to be considered when designing an
architecture for security:
Protection: how should the system be organized so that critical assets can be protected
against external attack?
Layered protection architecture:
Platform-level protection: top-level controls on the platform on which a system runs.
Application-level protection: specific protection mechanisms built into the application
itself e.g. additional password protection.
Record-level protection: protection that is invoked when access to specific information
is requested.
Distribution: how should system assets be distributed so that the effects of a successful
attack are minimized?
Distributing assets means that attacks on one system do not necessarily lead to complete
loss of system service. Each platform has separate protection features and may be
different from other platforms so that they do not share a common vulnerability.
Distribution is particularly important if the risk of denial of service attacks is high.
These are potentially conflicting. If assets are distributed, then they are more expensive to
protect. If assets are protected, then usability and performance requirements may be
compromised.
Design guidelines for security engineering
Design guidelines encapsulate good practice in secure systems design. Design guidelines serve
two purposes: they raise awareness of security issues in a software engineering team, and they
can be used as the basis of a review checklist that is applied during the system validation
process. Design guidelines here are applicable during software specification and design.
Base decisions on an explicit security policy
Define a security policy for the organization that sets out the fundamental security
requirements that should apply to all organizational systems.

Avoid a single point of failure


Ensure that a security failure can only result when there is more than one failure in
security procedures. For example, have password and question-based authentication.
Fail securely
When systems fail, for whatever reason, ensure that sensitive information cannot be
accessed by unauthorized users even though normal security procedures are
unavailable.
Balance security and usability
Try to avoid security procedures that make the system difficult to use. Sometimes you
have to accept weaker security to make the system more usable.
Log user actions
Maintain a log of user actions that can be analyzed to discover who did what. If users
know about such a log, they are less likely to behave in an irresponsible way.
Use redundancy and diversity to reduce risk
Keep multiple copies of data and use diverse infrastructure so that an infrastructure
vulnerability cannot be the single point of failure.
Specify the format of all system inputs
If input formats are known then you can check that all inputs are within range so that
unexpected inputs don't cause problems (a small validation sketch follows these guidelines).
Compartmentalize your assets
Organize the system so that assets are in separate areas and users only have access to the
information that they need rather than all system information.
Design for deployment
Design the system to avoid deployment problems.
Design for recoverability
Design the system to simplify recoverability after a successful attack.
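The input-format guideline can be illustrated with the minimal sketch below, which rejects any input that does not match its declared format before the rest of the system sees it. The field names and formats are hypothetical.

# Minimal input-format check: reject any input that does not match the
# declared format before it reaches the rest of the system.
# Field names and formats are hypothetical examples.
import re

INPUT_FORMATS = {
    "patient_id": re.compile(r"^[A-Z]{2}\d{6}$"),   # e.g. 'AB123456'
    "dose_units": re.compile(r"^\d{1,2}$"),         # 0-99 only
}

def validate(field: str, value: str) -> bool:
    """Return True only if the value matches the specified format."""
    pattern = INPUT_FORMATS.get(field)
    return bool(pattern and pattern.fullmatch(value))

assert validate("patient_id", "AB123456")
assert not validate("dose_units", "250")   # out-of-range input is rejected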

What is Security Testing?


Security Testing is a type of Software Testing that uncovers vulnerabilities, threats, risks in a
software application and prevents malicious attacks from intruders. The purpose of Security
Tests is to identify all possible loopholes and weaknesses of the software system which might
result in a loss of information, revenue, or reputation at the hands of employees or outsiders of
the organization.

Why Security Testing is Important?


The main goal of Security Testing is to identify the threats in the system and measure its
potential vulnerabilities, so that the threats can be countered and the system neither stops
functioning nor can be exploited. It also helps in detecting all possible security risks in the
system and helps developers fix the problems through coding.

Types of Security Testing:


There are seven main types of security testing, as described in the Open Source Security Testing
Methodology Manual (OSSTMM). They are explained as follows:

• Vulnerability Scanning: This is done through automated software to scan a system


against known vulnerability signatures.
• Security Scanning: It involves identifying network and system weaknesses, and later
provides solutions for reducing these risks. This scanning can be performed both
manually and with automated tools.
• Penetration testing: This kind of testing simulates an attack from a malicious hacker.
This testing involves analysis of a particular system to check for potential vulnerabilities
to an external hacking attempt.
• Risk Assessment: This testing involves analysis of security risks observed in the
organization. Risks are classified as Low, Medium and High. This testing recommends
controls and measures to reduce the risk.
• Security Auditing: This is an internal inspection of applications and operating systems
for security flaws. An audit can also be done via line-by-line inspection of code.
• Ethical hacking: This involves hacking an organization's software systems. Unlike malicious
hackers, who steal for their own gain, the intent is to expose security flaws in the
system.
• Posture Assessment: This combines Security scanning, Ethical Hacking and Risk
Assessments to show an overall security posture of an organization.

How to do Security Testing


It is widely agreed that cost will be higher if we postpone security testing until after the software
implementation phase or after deployment. So, it is necessary to involve security testing in the
earlier phases of the SDLC.

Let’s look into the corresponding Security processes to be adopted for every phase in SDLC

SDLC phases and the corresponding security processes:
Requirements: Security analysis for requirements and check abuse/misuse cases
Design: Security risk analysis for the design; development of a test plan including security tests
Coding and Unit Testing: Static and dynamic testing and security white box testing
Integration Testing: Black box testing
System Testing: Black box testing and vulnerability scanning
Implementation: Penetration testing, vulnerability scanning
Support: Impact analysis of patches
Resilience Engineering

Reference: Sommerville, Software Engineering, 10 ed., Chapter 14


The big picture
The resilience of a system is a judgment of how well that system can maintain
the continuity of its critical services in the presence of disruptive events, such as equipment
failure and cyberattacks. This view encompasses these three ideas:

• Some of the services offered by a system are critical services whose failure could
have serious human, social or economic effects.
• Some events are disruptive and can affect the ability of a system to deliver its
critical services.
• Resilience is a judgment - there are no resilience metrics and resilience cannot be
measured. The resilience of a system can only be assessed by experts, who can
examine the system and its operational processes.

Resilience engineering places more emphasis on limiting the number of system failures that
arise from external events such as operator errors or cyberattacks. Assumptions:

• It is impossible to avoid system failures, so resilience engineering is concerned with
limiting the costs of these failures and recovering from them.
• Good reliability engineering practices have been used to minimize the number of
technical faults in a system.

Four related resilience activities are involved in the detection and recovery from system
problems:

• Recognition: the system or its operators should recognize early indications of system failure.
• Resistance: if the symptoms of a problem or cyberattack are detected early, then resistance
strategies may be used to reduce the probability that the system will fail.
• Recovery: if a failure occurs, the recovery activity ensures that critical system services are
restored quickly so that system users are not badly affected by the failure.
• Reinstatement: in this final activity, all of the system services are restored and normal
system operation can continue.
Cybersecurity
Cybercrime is the illegal use of networked systems and is one of the most serious problems
facing our society. Cybersecurity is a broader topic than system security
engineering. Cybersecurity is a socio-technical issue covering all aspects of ensuring the
protection of citizens, businesses, and critical infrastructures from threats that arise from their
use of computers and the Internet. Cybersecurity is concerned with all of an organization's IT
assets from networks through to application systems.
Factors contributing to cybersecurity failure:

• Organizational ignorance of the seriousness of the problem,


• Poor design and lax application of security procedures,
• Human carelessness,
• Inappropriate trade-offs between usability and security.

Cybersecurity threats:

• Threats to the confidentiality of assets: data is not damaged but it is made available to
people who should not have access to it.
• Threats to the integrity of assets: systems or data are damaged in some way by a
cyberattack.
• Threats to the availability of assets: aim to deny the use of assets by authorized users.

Examples of controls to protect the assets:

• Authentication, where users of a system have to show that they are authorized to
access the system.
• Encryption, where data is algorithmically scrambled so that an unauthorized reader
cannot access the information.
• Firewalls, where incoming network packets are examined then accepted or rejected
according to a set of organizational rules.
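As an illustration of the authentication control above, the sketch below stores only a salted hash of a password (computed with Python's standard hashlib) rather than the password itself; the iteration count and salt size are illustrative choices, not a recommendation.

# Illustrative authentication control: store a salted hash of the password,
# never the password itself. Iteration count and salt size are illustrative.
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password, salt, stored_digest):
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, stored_digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("guess", salt, digest))                         # False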

Redundancy and diversity are valuable for cybersecurity resilience:

• Copies of data and software should be maintained on separate computer systems


(supports recovery and reinstatement).
• Multi-stage diverse authentication can protect against password attacks (supports
resistance).
• Critical servers may be over-provisioned i.e. they may be more powerful than is
required to handle their expected load (supports resistance).

Cyber resilience planning:

• Asset classification: the organization's hardware, software and human assets are
examined and classified depending on how essential they are to normal operations.
• Threat identification: for each of the assets (or, at least the critical and important
assets), you should identify and classify threats to that asset.
• Threat recognition: for each threat or, sometimes asset/threat pair, you should
identify how an attack based on that threat might be recognized.
• Threat resistance: for each threat or asset/threat pair, you should identify possible
resistance strategies. These may be either embedded in the system (technical
strategies) or may rely on operational procedures.
• Asset recovery: for each critical asset or asset/threat pair, you should work out how
that asset could be recovered in the event of a successful cyberattack.
• Asset reinstatement: this is a more general process of asset recovery where you
define procedures to bring the system back into normal operation.
Socio-technical resilience
Resilience engineering is concerned with adverse external events that can lead to system failure.
To design a resilient system, you have to think about socio-technical systems design and not
exclusively focus on software. Dealing with these events is often easier and more effective in
the broader socio-technical system.
Four characteristics that reflect the resilience of an organization:
The ability to respond
Organizations have to be able to adapt their processes and procedures in response to
risks. These risks may be anticipated risks or may be detected threats to the organization
and its systems.
The ability to monitor
Organizations should monitor both their internal operations and their external
environment for threats before they arise.
The ability to anticipate
A resilient organization should not simply focus on its current operations but should
anticipate possible future events and changes that may affect its operations and resilience.
The ability to learn
Organizational resilience can be improved by learning from experience. It is particularly
important to learn from successful responses to adverse events such as the effective
resistance of a cyberattack. Learning from success as well as from failure helps the
organization respond more effectively in the future.
People inevitably make mistakes (human errors) that sometimes lead to serious system
failures. There are two ways to consider human error:

• The person approach. Errors are considered to be the responsibility of the individual
and 'unsafe acts' (such as an operator failing to engage a safety barrier) are a
consequence of individual carelessness or reckless behavior.
• The systems approach. The basic assumption is that people are fallible and will
make mistakes. People make mistakes because they are under pressure from high
workloads, poor training or because of inappropriate system design.

Systems engineers should assume that human errors will occur during system operation. To
improve the resilience of a system, designers have to think about the defense and barriers to
human error that could be part of a system. Can these barriers be built into the
technical components of the system (technical barriers)? If not, they could be part of the
processes, procedures and guidelines for using the system (socio-technical barriers).
Defensive layers have vulnerabilities: they are like slices of Swiss cheese with holes in the
layer corresponding to these vulnerabilities. Vulnerabilities are dynamic: the 'holes' are not
always in the same place and the size of the holes may vary depending on the operating
conditions. System failures occur when the holes line up and all of the defenses fail.
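A back-of-the-envelope way to see why extra, diverse layers help: if the layers are assumed to fail independently, the chance that every layer's 'hole' lines up is the product of the individual failure probabilities. The sketch below uses invented probabilities and that simplifying independence assumption, which real defensive layers rarely satisfy.

# Swiss-cheese illustration: probability that an error passes through every
# defensive layer, assuming (simplistically) that layers fail independently.
from math import prod

layer_failure_probs = [0.1, 0.05, 0.2]   # invented per-layer 'hole' probabilities

p_system_failure = prod(layer_failure_probs)
print(f"P(all defenses fail) = {p_system_failure:.4f}")    # 0.0010

# Adding one more diverse layer with failure probability 0.1 reduces this tenfold.
print(f"With an extra layer:  {p_system_failure * 0.1:.5f}")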
Strategies to increase system resilience:

• Reduce the probability of the occurrence of an external event that might trigger
system failures.
• Increase the number of defensive layers. The more layers that you have in a system,
the less likely it is that the holes will line up and a system failure occur.
• Design a system so that diverse types of barriers are included. The 'holes' will
probably be in different places and so there is less chance of the holes lining up and
failing to trap an error.
• Minimize the number of latent conditions in a system. This means reducing the
number and size of system 'holes'.

Resilient systems design


Designing systems for resilience involves two streams of work:

• Identifying critical services and assets that allow a system to fulfill its primary
purpose.
• Designing system components that support problem recognition, resistance, recovery
and reinstatement.

Survivable systems analysis

• System understanding: for an existing or proposed system, review the goals of the
system (sometimes called the mission objectives), the system requirements and the
system architecture.
• Critical service identification: the services that must always be maintained and the
components that are required to maintain these services are identified.
• Attack simulation: scenarios or use cases for possible attacks are identified along
with the system components that would be affected by these attacks.
• Survivability analysis: components that are both essential and compromisable by an
attack are identified and survivability strategies based on resistance, recognition and
recovery are identified.

UNIT IV SERVICE-ORIENTED SOFTWARE ENGINEERING, SYSTEMS


ENGINEERING AND REAL-TIME SOFTWARE ENGINEERING
Service-oriented Architecture – RESTful Services – Service Engineering – Service
Composition – Systems Engineering – Sociotechnical Systems – Conceptual Design – System
Procurement – System Development – System Operation and Evolution – Real-time Software
Engineering – Embedded System Design – Architectural Patterns for Real-time Software –
Timing Analysis – Real-time Operating Systems.

Service-Oriented Architecture
Service-Oriented Architecture (SOA) is a stage in the evolution of application development
and/or integration. It defines a way to make software components reusable using the interfaces.
Formally, SOA is an architectural approach in which applications make use of services
available in the network. In this architecture, services are provided to form applications, through
a network call over the internet. It uses common communication standards to speed up and
streamline the service integrations in applications. Each service in SOA is a complete business
function in itself. The services are published in such a way that it makes it easy for the
developers to assemble their apps using those services. Note that SOA is different from
microservice architecture.
• SOA allows users to combine a large number of facilities from existing services to form
applications.
• SOA encompasses a set of design principles that structure system development and provide
means for integrating components into a coherent and decentralized system.
• SOA-based computing packages functionalities into a set of interoperable services, which
can be integrated into different software systems belonging to separate business domains.
There are two major roles within Service-oriented Architecture:
1. Service provider: The service provider is the maintainer of the service and the organization
that makes available one or more services for others to use. To advertise services, the
provider can publish them in a registry, together with a service contract that specifies the
nature of the service, how to use it, the requirements for the service, and the fees charged.
2. Service consumer: The service consumer can locate the service metadata in the registry and
develop the required client components to bind and use the service.
Services might aggregate information and data retrieved from other services or create
workflows of services to satisfy the request of a given service consumer. This practice is known
as service orchestration. Another important interaction pattern is service choreography, which is
the coordinated interaction of services without a single point of control.
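The provider/registry/consumer interaction can be pictured with the minimal sketch below. The registry API, the service name and the contract fields are hypothetical stand-ins for a real discovery mechanism and service contract.

# Minimal sketch of SOA roles: a provider publishes a service (plus contract
# metadata) in a registry; a consumer looks it up and binds to it.
# The registry API, service name and contract fields are hypothetical.

registry = {}   # service name -> (contract metadata, callable endpoint)

def publish(name, contract, endpoint):
    """Service provider: advertise a service and its contract."""
    registry[name] = (contract, endpoint)

def lookup(name):
    """Service consumer: locate the service metadata and bind to the endpoint."""
    return registry[name]

# Provider side
publish("currency-conversion",
        {"operation": "convert(amount, from, to)", "fee": "free"},
        lambda amount, frm, to: round(amount * 0.85, 2))   # toy implementation

# Consumer side
contract, convert = lookup("currency-conversion")
print(contract["operation"], "->", convert(100, "USD", "EUR"))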
Components of SOA: the service provider, the service registry, and the service consumer, usually shown as a diagram of their interactions.

Guiding Principles of SOA:


1. Standardized service contract: Specified through one or more service description
documents.
2. Loose coupling: Services are designed as self-contained components that maintain
relationships minimizing dependencies on other services.
3. Abstraction: A service is completely defined by service contracts and description
documents. They hide their logic, which is encapsulated within their implementation.
4. Reusability: Designed as components, services can be reused more effectively, thus
reducing development time and the associated costs.
5. Autonomy: Services have control over the logic they encapsulate and, from a service
consumer point of view, there is no need to know about their implementation.
6. Discoverability: Services are defined by description documents that constitute supplemental
metadata through which they can be effectively discovered. Service discovery provides an
effective means for utilizing third-party resources.
7. Composability: Using services as building blocks, sophisticated and complex operations
can be implemented. Service orchestration and choreography provide a solid support for
composing services and achieving business goals.
Advantages of SOA:
• Service reusability: In SOA, applications are made from existing services. Thus, services
can be reused to make many applications.
• Easy maintenance: As services are independent of each other they can be updated and
modified easily without affecting other services.
• Platform independent: SOA allows making a complex application by combining services
picked from different sources, independent of the platform.
• Availability: SOA facilities are easily available to anyone on request.
• Reliability: SOA applications are more reliable because it is easier to debug small services
than huge code bases.
• Scalability: Services can run on different servers within an environment; this increases
scalability.
Disadvantages of SOA:
• High overhead: Input parameters are validated whenever services interact; this decreases
performance because it increases load and response time.
• High investment: A huge initial investment is required for SOA.
• Complex service management: When services interact, they exchange messages to perform
tasks; the number of messages may run into millions. Handling such a large number of
messages becomes a cumbersome task.
Practical applications of SOA: SOA is used in many ways around us whether it is mentioned
or not.
1. SOA infrastructure is used by many armies and air forces to deploy situational awareness
systems.
2. SOA is used to improve healthcare delivery.
3. Nowadays many apps and games use inbuilt device functions to run. For example, an app
might need GPS, so it uses the inbuilt GPS functions of the device. This is SOA in mobile
solutions.
4. SOA helps museums maintain a virtualized storage pool for their information and content.

RESTful services
Current web services standards have been criticized as 'heavyweight' standards that are over-
general and inefficient. REST (REpresentational State Transfer) is an architectural style
based on transferring representations of resources from a server to a client. This
style underlies the web as a whole and is simpler than SOAP/WSDL for implementing web
services. RESTful services involve a lower overhead than so-called 'big web services' and are
used by many organizations implementing service-based systems.
The fundamental element in a RESTful architecture is a resource. Essentially, a resource is
simply a data element such as a catalog, a medical record, or a document. In general, resources
may have multiple representations i.e. they can exist in different formats.
Resource operations:

• Create - bring the resource into existence.


• Read - return a representation of the resource.
• Update - change the value of the resource.
• Delete - make the resource inaccessible.

The Web is an example of a system that has a RESTful architecture. Web pages are
resources, and the unique identifier of a web page is its URL.

• POST is used to create a resource. It has associated data that defines the resource.
• GET is used to read the value of a resource and return that to the requestor in the
specified representation, such as XHTML, that can be rendered in a web browser.
• PUT is used to update the value of a resource.
• DELETE is used to delete the resource.
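A minimal sketch of how these four operations map onto HTTP verbs for a hypothetical 'items' resource is shown below; Flask is used only as an illustrative framework and is not prescribed by the text.

# Minimal RESTful resource sketch using Flask (framework choice is an
# assumption). The 'items' resource is a hypothetical example.
from flask import Flask, request, jsonify, abort

app = Flask(__name__)
items = {}      # item id -> representation
next_id = 1

@app.route("/items", methods=["POST"])                   # Create
def create_item():
    global next_id
    items[next_id] = request.get_json()
    next_id += 1
    return jsonify(id=next_id - 1), 201

@app.route("/items/<int:item_id>", methods=["GET"])      # Read
def read_item(item_id):
    if item_id not in items:
        abort(404)
    return jsonify(items[item_id])

@app.route("/items/<int:item_id>", methods=["PUT"])      # Update
def update_item(item_id):
    if item_id not in items:
        abort(404)
    items[item_id] = request.get_json()
    return jsonify(items[item_id])

@app.route("/items/<int:item_id>", methods=["DELETE"])   # Delete
def delete_item(item_id):
    items.pop(item_id, None)
    return "", 204

if __name__ == "__main__":
    app.run()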

Disadvantages of RESTful approach:

• When a service has a complex interface and is not a simple resource, it can
be difficult to design a set of RESTful services to represent this.
• There are no standards for RESTful interface description so service users must rely
on informal documentation to understand the interface.
• When you use RESTful services, you have to implement your own infrastructure for
monitoring and managing the quality of service and the service reliability.

Service engineering
Service engineering is the process of developing services for reuse in service-oriented
applications. The service has to be designed as a reusable abstraction that can be used in
different systems. Generally useful functionality associated with that abstraction must be
designed and the service must be robust and reliable. The service must be documented so that it
can be discovered and understood by potential users.
Stages of service engineering include:

• Service candidate identification, where you identify possible services that might be
implemented and define the service requirements. It involves understanding an
organization's business processes to decide which reusable services could support
these processes. Three fundamental types of service:
o Utility services that implement general functionality used by different business
processes.
o Business services that are associated with a specific business function e.g., in a
university, student registration.
o Coordination services that support composite processes such as ordering.
• Service design, where you design the logical service interface and its implementation
interfaces (SOAP and/or RESTful). Involves thinking about the operations associated
with the service and the messages exchanged. The number of messages exchanged to
complete a service request should normally be minimized. Service state information
may have to be included in messages. Interface design stages (a small interface
sketch appears after these service engineering stages):
o Logical interface design. Starts with the service requirements and defines the
operation names and parameters associated with the service. Exceptions should
also be defined.
o Message design (SOAP). For SOAP-based services, design the structure and
organization of the input and output messages. Notations such as the UML are a
more abstract representation than XML. The logical specification is converted to
a WSDL description.
o Interface design (REST). Design how the required operations map onto REST
operations and what resources are required.
• Service implementation and deployment, where you implement and test the service
and make it available for use. Programming services using a standard programming
language or a workflow language. Services then have to be tested by creating input
messages and checking that the output messages produced are as expected.
Deployment involves publicizing the service and installing it on a web server. Current
servers provide support for service installation.
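To make the logical interface design stage concrete, the sketch below lists operation names, parameters and exceptions for a hypothetical student registration business service; the operations are invented, and a SOAP design would then map them to WSDL while a REST design would map them to resources.

# Logical interface sketch for a hypothetical 'student registration' business
# service: operation names, parameters and exceptions only, no implementation.
from abc import ABC, abstractmethod

class RegistrationError(Exception):
    """Raised when a registration request cannot be satisfied."""

class StudentRegistrationService(ABC):
    @abstractmethod
    def register(self, student_id: str, course_code: str) -> str:
        """Register a student on a course; returns a confirmation reference.
        Raises RegistrationError if the course is full or does not exist."""

    @abstractmethod
    def withdraw(self, confirmation_ref: str) -> None:
        """Cancel an earlier registration. Raises RegistrationError if unknown."""

    @abstractmethod
    def list_registrations(self, student_id: str) -> list[str]:
        """Return the course codes the student is currently registered on."""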
Service composition
Existing services are composed and configured to create new composite services and
applications. The basis for service composition is often a workflow. Workflows are logical
sequences of activities that, together, model a coherent business process. For example, provide a
travel reservation services which allows flights, car hire and hotel bookings to be coordinated.

Service construction by composition:


Formulate outline workflow
In this initial stage of service design, you use the requirements for the composite service
as a basis for creating an 'ideal' service design.
Discover services
During this stage of the process, you search service registries or catalogs to discover what
services exist, who provides these services and the details of the service provision.
Select possible services
Your selection criteria will obviously include the functionality of the services offered.
They may also include the cost of the services and the quality of service (responsiveness,
availability, etc.) offered.
Refine workflow
This involves adding detail to the abstract description and perhaps adding or removing
workflow activities.
Create workflow program
During this stage, the abstract workflow design is transformed to an executable program
and the service interface is defined. You can use a conventional programming language,
such as Java or a workflow language, such as WS-BPEL.
Test completed service or application
The process of testing the completed, composite service is more complex than component
testing in situations where external services are used.
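A minimal sketch of the travel reservation workflow mentioned above is shown below, composing three hypothetical stub services in sequence. In practice the executable workflow would be written in a language such as Java or WS-BPEL and would include compensation actions for partial failures.

# Minimal composition sketch: the travel reservation workflow coordinates
# three hypothetical stub services. A real system would add compensation
# actions (e.g. cancel the flight if the hotel booking fails).

def book_flight(dates):   return {"flight": "XX123", "dates": dates}    # stub service
def hire_car(dates):      return {"car": "compact", "dates": dates}     # stub service
def book_hotel(dates):    return {"hotel": "City Inn", "dates": dates}  # stub service

def travel_reservation(dates):
    """Composite service: outline workflow flight -> car -> hotel."""
    itinerary = {}
    itinerary.update(book_flight(dates))
    itinerary.update(hire_car(dates))
    itinerary.update(book_hotel(dates))
    return itinerary

print(travel_reservation("2025-06-01 to 2025-06-07"))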

Systems Engineering
The big picture
Software engineering is not an isolated activity but is part of a broader systems engineering
process. Software systems are therefore not isolated systems but are essential components of
broader systems that have a human, social or organizational purpose.
Systems that include software fall into two categories:

• Technical computer-based systems include hardware and software but not humans
or organizational processes. Off the shelf applications, control systems, etc.
• Socio-technical systems include technical systems plus people who use and manage
these systems and the organizations that own the systems and set policies for their use.
Business systems, command and control systems, etc.

Systems engineering includes procuring, specifying, designing, implementing, validating,
deploying and maintaining socio-technical systems. It is concerned with the services provided
by the system, constraints on its construction and operation and the ways in which it is used to
fulfill its purpose or purposes.
Software is now the dominant element in all enterprise systems. Software engineers have to
play a more active part in high-level systems decision making if the system software is to be
dependable and developed on time and to budget. As a software engineer, it helps if you have a
broader awareness of how software interacts with other hardware and software systems, and the
human, social and organizational factors that affect the ways in which software is used.
Systems engineering stages:

• Conceptual design: sets out the purpose of the system, why it is needed and the high-
level features that users might expect to see in the system.
• Procurement or acquisition: conceptual design is developed so that decisions about
the contract for the system development can be made.
• Development: Hardware and software is engineered and operational processes
defined.
• Operation: The system is deployed and used for its intended purpose.
Many professional disciplines are involved in the systems engineering process. There are three
reasons for misunderstanding or other differences between engineers with different
backgrounds:

• Communication difficulties: Different disciplines use the same terminology to mean


different things. This can lead to misunderstandings about what will be implemented.
• Differing assumptions: Each discipline makes assumptions about what can and can't
be done by other disciplines.
• Professional boundaries: Each discipline tries to protect their professional
boundaries and expertise and this affects their judgments on the system.

Socio-technical systems
Socio-technical systems are large-scale systems that do not just include software and hardware
but also people, processes and organizational policies. Socio-technical systems are often
'systems of systems' i.e. are made up of a number of independent systems. The boundaries of
a socio-technical system are subjective rather than objective: different people see the system in
different ways.
Socio-technical systems are used within organizations and are therefore profoundly affected by
the organizational environment in which they are used. Failure to take this environment into
account when designing the system is likely to lead to user dissatisfaction and system rejection.
There are a number of key elements in an organization that may affect the requirements, design,
and operation of a socio-technical system. A new system may lead to changes in some or all of
these elements:

• Process changes: Systems may require changes to business processes so training may
be required. Significant changes may be resisted by users.
• Job changes: Systems may de-skill users or cause changes to the way they work. The
status of individuals may be affected by a new system.
• Organizational policies: The proposed system may not be consistent with current
organizational policies.
• Organizational politics: Systems may change the political power structure in an
organization. Those that control the system have more power.

A complex system may include software, mechanical, electrical and electronic hardware and be
operated by people. System components are dependent on other
system components. The properties and behavior of system components are inextricably inter-
mingled. This leads to complexity. Complexity is the reason why socio-technical systems have
emergent properties, are non-deterministic and have subjective success criteria:

• Emergent properties: Properties of the system of a whole that depend on the system
components and their relationships.
• Non-deterministic: They do not always produce the same output when presented
with the same input because the system's behavior is partially dependent on human
operators.
• Complex relationships with organizational objectives: The extent to which the
system supports organizational objectives does not just depend on the system itself.

Emergent properties are properties of the system as a whole rather than properties that can be
derived from the properties of components of a system. Emergent properties are a consequence
of the relationships between system components. They can therefore only be assessed and
measured once the components have been integrated into a system.
Two types of emergent properties:

• Functional properties: These appear when all the parts of a system work together to
achieve some objective. For example, a bicycle has the functional property of being a
transportation device once it has been assembled from its components.
• Non-functional emergent properties: Examples are reliability, performance, safety,
and security. These relate to the behavior of the system in its operational environment.
They are often critical for computer-based systems as failure to achieve some minimal
defined level in these properties may make the system unusable.

System reliability is a good example of an emergent property. Because of component
inter-dependencies, faults can be propagated through the system. System failures often occur because of unforeseen
inter-relationships between components. It is practically impossible to anticipate all possible
component relationships. Software reliability measures may give a false
picture of the overall system reliability.
System reliability is influenced by:

• Hardware reliability: What is the probability of a hardware component failing and


how long does it take to repair that component?
• Software reliability: How likely is it that a software component will produce an
incorrect output. Software failure is usually distinct from hardware failure in that
software does not wear out.
• Operator reliability: How likely is it that the operator of a system will make an
error?

Failures are not independent and they propagate from one level to another.
System reliability depends on the context where the system is used. A system that is reliable in
one environment may be less reliable in a different environment because the physical conditions
(e.g. the temperature) and the mode of operation is different.
A deterministic system is one where a given sequence of inputs will always produce the same
sequence of outputs. Software systems are deterministic; systems that include humans are non-
deterministic. A socio-technical system will not always produce the same sequence of outputs
from the same input sequence:

• Human elements: People do not always behave in the same way.


• System changes: System behavior is unpredictable because of frequent changes to
hardware, software and data.

Complex systems are developed to address 'wicked problems' - problems where there cannot
be a complete specification. Different stakeholders see the problem in different ways and each
has a partial understanding of the issues affecting the system. Consequently, different
stakeholders have their own views about whether or not a system is 'successful'. Success is a
judgment and cannot be objectively measured. Success is judged using the effectiveness of the
system when deployed rather than judged against the original reasons for procurement.

Conceptual design
Conceptual design investigates the feasibility of an idea and develops that idea to create an
overall vision of a system. Conceptual design precedes and overlaps with requirements
engineering. May involve discussions with users and other stakeholders and the identification of
critical requirements. The aim of conceptual design is to create a high-level system description
that communicates the system purpose to non-technical decision makers.
Conceptual design activities:

• Concept formulation: Refine an initial statement of needs and work out what type of
system is most likely to meet the needs of system stakeholders.
• Problem understanding: Discuss with stakeholders how they do their work, what is
and isn't important to them, what they like and don't like about existing systems.
• System proposal development: Set out ideas for possible systems (maybe more than
one).
• Feasibility study: Look at comparable systems that have been developed elsewhere
(if any) and assess whether or not the proposed system could be implemented using
current hardware and software technologies.
• System structure development: Develop an outline architecture for the system,
identifying (where appropriate) other systems that may be reused.
• System vision document: Document the results of the conceptual design in a
readable, non-technical way. Should include a short summary and more detailed
appendices.

System procurement
System procurement is the process of acquiring a system (or systems) to meet some identified
organizational need. Before procurement, decisions are made on: scope of the system,
system budgets and timescales, high-level system requirements. Based on this information,
decisions are made on whether to procure a system, the type of system and the potential system
suppliers. These decisions are driven by:

• The state of other organizational systems and whether or not they need to be replaced
• The need to comply with external regulations
• External competition
• Business re-organization
• Available budget
It is usually necessary to develop a conceptual design document and high-level requirements
before procurement. You need a specification to let a contract for system development. The
specification may allow you to buy a commercial off-the-shelf (COTS) system. Almost always
cheaper than developing a system from scratch. Large complex systems usually consist of a mix
of off the shelf and specially designed components. The procurement processes for these
different types of component are usually different.
Three types of systems or system components may have to be procured:

• Off-the-shelf applications that may be used without change and which need only
minimal configuration for use.
• Configurable application or ERP systems that have to be modified or adapted for use
either by modifying the code or by using inbuilt configuration features, such as
process definitions and rules.
• Custom systems that have to be designed and implemented specially for use.

Issues with system procurement:

• Organizations often have an approved and recommended set of application software
that has been checked by the IT department. It is usually possible to buy or acquire
open source software from this set directly without the need for detailed justification.
There are no detailed requirements, and the users adapt to the features of the chosen
application.
• Off-the-shelf components do not usually match requirements exactly. Choosing a
system means that you have to find the closest match between the system
requirements and the facilities offered by off-the-shelf systems.
• When a system is to be built specially, the specification of requirements is part of the
contract for the system being acquired. It is therefore a legal as well as a technical
document. The requirements document is critical and procurement processes of this
type usually take a considerable amount of time.
• For public sector systems especially, there are detailed rules and regulations that
affect the procurement of systems. These force the development of detailed
requirements and make agile development difficult.
• For application systems that require change or for custom systems there is usually a
contract negotiation period where the customer and supplier negotiate the terms and
conditions for the development of the system. During this process, requirements
changes may be agreed to reduce the overall costs and avoid some development
problems.

System development
System development usually follows a plan-driven approach because of the need for parallel
development of different parts of the system. Little scope for iteration between phases because
hardware changes are very expensive. Software may have to compensate for hardware
problems. Inevitably involves engineers from different disciplines who must work together.
Much scope for misunderstanding here. Different disciplines use a different vocabulary and
much negotiation is required. Engineers may have personal agendas to fulfil.
The system development process:

• Requirements engineering: The process of refining, analyzing and documenting the
high-level and business requirements identified in the conceptual design.
• Architectural design: Establishing the overall architecture of the system, identifying
components and their relationships.
• Requirements partitioning: Deciding which subsystems (identified in the system
architecture) are responsible for implementing the system requirements.
• Subsystem engineering: Developing the software components of the system,
configuring off-the-shelf hardware and software, defining the operational processes
for the system and re-designing business processes.
• System integration: Putting together system elements to create a new system.
• System testing: The whole system is tested to discover problems.
• System deployment: the process of making the system available to its users,
transferring data from existing systems and establishing communications with other
systems in the environment.

Requirements engineering and system design are inextricably linked. Constraints posed by
the system's environment and other systems limit design choices so the actual design to be used
may be a requirement. Initial design may be necessary to structure the requirements. As you do
design, you learn more about the requirements.
Subsystem engineering may involve some application systems procurement. Typically parallel
projects developing the hardware, software and communications. Lack of communication
across implementation teams can cause problems. There may be a bureaucratic and slow
mechanism for
proposing system changes, which means that the development schedule may be extended
because of the need for rework.
System integration is the process of putting hardware, software and
people together to make a system. Should ideally be tackled incrementally so that sub-systems
are integrated one at a time. The system is tested as it is integrated. Interface problems between
sub-systems are usually found at this stage. May be problems with uncoordinated deliveries
of system components.
System delivery and deployment takes place after completion, when the system has to be
installed in the customer's environment. A number of issues can occur:

• Environmental assumptions may be incorrect;


• There may be human resistance to the introduction of a new system;
• The system may have to coexist with alternative systems for some time;
• There may be physical installation problems (e.g. cabling problems);
• Data cleanup may be required;
• Operator training may be required.
System operation and evolution
Operational processes are the processes involved in using the system for its defined purpose.
For new systems, these processes may have to be designed and tested and operators trained in
the use of the system. Operational processes should be flexible to allow operators to cope with
problems and periods of fluctuating workload.
Problems with operation automation:

• It is likely to increase the technical complexity of the system because it has to be
designed to cope with all anticipated failure modes. This increases the costs and time
required to build the system.
• Automated systems are inflexible. People are adaptable and can cope with problems
and unexpected situations. This means that you do not have to anticipate everything
that could possibly go wrong when you are specifying and designing the system.

Large systems have a long lifetime. They must evolve to meet changing requirements. Existing
systems which must be maintained are sometimes called legacy systems. Evolution is inherently
costly for a number of reasons:

• Changes must be analyzed from a technical and business perspective;


• Sub-systems interact so unanticipated problems can arise;
• There is rarely a rationale for original design decisions;
• System structure is corrupted as changes are made to it.

Factors that affect system lifetimes:

• Investment cost: The costs of a systems engineering project may be tens or even
hundreds of millions of dollars. These costs can only be justified if the system can
deliver value to an organization for many years.
• Loss of expertise: As businesses change and restructure to focus on their core
activities, they often lose engineering expertise. This may mean that they lack the
ability to specify the requirements for a new system.
• Replacement cost: The cost of replacing a large system is very high. Replacing an
existing system can only be justified if this leads to significant cost savings over the
existing system.
• Return on investment: If a fixed budget is available for systems engineering,
spending this on new systems in some other area of the business may lead to a higher
return on investment than replacing an existing system.
• Risks of change: Systems are an inherent part of business operations and the risks of
replacing existing systems with new systems cannot be justified. The danger with a
new system is that things can go wrong in the hardware, software and operational
processes. The potential costs of these problems for the business may be so high that
they cannot take the risk of system replacement.
• System dependencies: Other systems may depend on a system and making changes
to these other systems to accommodate a replacement system may be impractical.
Proposed changes have to be analyzed very carefully from a business and a technical
perspective. Subsystems are never completely independent so changes to a subsystem may have
side-effects that adversely affect other subsystems. Reasons for original design decisions are
often unrecorded. Those responsible for the system evolution have to work out why these
decisions were made. As systems age, their structure becomes corrupted by change so the costs
of making further changes increases.

Real-time Software Engineering

The big picture


Computers are used to control a wide range of systems from simple domestic machines, through
games controllers, to entire manufacturing plants. Their software must react to events generated
by the hardware and, often, issue control signals in response to these events. The software in
these systems is embedded in system hardware, often in read-only memory, and usually
responds, in real time, to events from the system's environment.
Responsiveness in real-time is the critical difference between embedded systems and other
software systems, such as information systems, web-based systems or personal software
systems. For non-real-time systems, correctness can be defined by specifying how system
inputs map to corresponding outputs that should be produced by the system. In a real-time
system, the correctness depends both on the response to an input and the time taken to
generate that response. If the system takes too long to respond, then the required response
may be ineffective.
A real-time system is a software system where the correct functioning of the system depends
on the results produced by the system and the time at which these results are produced. A soft
real-time system is a system whose operation is degraded if results are not produced according
to the specified timing requirements. A hard real-time system is a system whose operation is
incorrect if results are not produced according to the timing specification.
Characteristics of embedded systems:

• Embedded systems generally run continuously and do not terminate.


• Interactions with the system's environment are unpredictable.
• There may be physical limitations that affect the design of a system.
• Direct hardware interaction may be necessary.
• Issues of safety and reliability may dominate the system design.

Embedded system design


The design process for embedded systems is a systems engineering process that has to consider,
in detail, the design and performance of the system hardware. Part of the design process may
involve deciding which system capabilities are to be implemented in software and which in
hardware. Low-level decisions on hardware, support software and system timing must be
considered early in the process. These may mean that additional software functionality, such as
battery and power management, has to be included in the system.
Real-time systems are often considered to be reactive systems: given a stimulus, the system
must produce a reaction or response within a specified time. Stimuli come from sensors in the
system's environment, and responses are sent to actuators controlled by the system.

• Periodic stimuli occur at predictable time intervals. For example, the system may
examine a sensor every 50 milliseconds and take action (respond) depending on that
sensor value (the stimulus).
• Aperiodic stimuli occur irregularly and unpredictably and may be signalled using
the computer's interrupt mechanism. An example of such a stimulus would be an
interrupt indicating that an I/O transfer was complete and that data was available in a
buffer.

Because of the need to respond to timing demands made by different stimuli/responses, the
system architecture must allow for fast switching between stimulus handlers. Timing
demands of different stimuli are different so a simple sequential loop is not usually adequate.
Real-time systems are therefore usually designed as cooperating processes with a real-time
executive controlling these processes.
• Sensor control processes collect information from sensors. May buffer information
collected in response to a sensor stimulus.
• Data processor carries out processing of collected information and computes the
system response.
• Actuator control processes generate control signals for the actuators.

Processes in a real-time system have to be coordinated and share information. Process
coordination mechanisms ensure mutual exclusion to shared resources. When one process
is modifying a shared resource, other processes should not be able to change that resource.
When designing the information exchange between processes, you have to take into account the
fact that these processes may be running at different speeds.
Producer processes collect data and add it to the buffer. Consumer processes take data from the
buffer and make elements available. Producer and consumer processes must be mutually
excluded from accessing the same element. The buffer must stop producer processes adding
information to a full buffer and consumer processes trying to take information from an empty
buffer.
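As an illustration of this coordination, here is a minimal sketch, assuming Python and a made-up sensor sample format, of a producer and a consumer sharing a bounded, synchronized buffer (queue.Queue blocks a producer when the buffer is full and a consumer when it is empty):

import queue
import threading
import time

BUFFER_SIZE = 8
buffer = queue.Queue(maxsize=BUFFER_SIZE)   # synchronized, bounded buffer

def producer(n_samples):
    """Collects (hypothetical) sensor data and adds it to the buffer."""
    for i in range(n_samples):
        sample = {"seq": i, "value": 20.0 + i * 0.1}   # stand-in for a real sensor read
        buffer.put(sample)          # blocks if the buffer is full
        time.sleep(0.01)            # the producer runs at its own speed

def consumer(n_samples):
    """Takes data from the buffer and processes it."""
    for _ in range(n_samples):
        sample = buffer.get()       # blocks if the buffer is empty
        print("processed", sample["seq"], sample["value"])
        buffer.task_done()

p = threading.Thread(target=producer, args=(20,))
c = threading.Thread(target=consumer, args=(20,))
p.start(); c.start()
p.join(); c.join()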
The effect of a stimulus in a real-time system may trigger a transition from one state to
another. State models are therefore often used to describe embedded real-time systems. UML
state diagrams may be used to show the states and state transitions in a real-time system.
Programming languages for real-time systems development have to include facilities to access
system hardware, and it should be possible to predict the timing of particular operations in these
languages. Systems-level languages, such as C, which allow efficient code to be generated are
widely used in preference to languages such as Java. There is a performance overhead in object-
oriented systems because extra code is required to mediate access to attributes and handle calls
to operations. The loss of performance may make it impossible to meet real-time deadlines.
Architectural patterns for real-time software
Characteristic system architectures for embedded systems:

• Observe and React pattern is used when a set of sensors are routinely monitored and
displayed.
• Environmental Control pattern is used when a system includes sensors, which
provide information about the environment and actuators that can change the
environment.
• Process Pipeline pattern is used when data has to be transformed from one
representation to another before it can be processed.

Observe and React pattern description


The input values of a set of sensors of the same types are collected and analyzed. These
values are displayed in some way. If the sensor values indicate that some exceptional
condition has arisen, then actions are initiated to draw the operator's attention to that
value and, in certain cases, to take actions in response to the exceptional value.
Stimuli: Values from sensors attached to the system.
Responses: Outputs to display, alarm triggers, signals to reacting systems.
Processes: Observer, Analysis, Display, Alarm, Reactor.
Used in: Monitoring systems, alarm systems.
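A minimal sketch of the Observe and React pattern follows (illustrative only; the sensor source, the alarm threshold and the reaction are assumptions of this example, not part of the pattern definition):

import random
import time

ALARM_THRESHOLD = 75.0   # assumed limit for an "exceptional" sensor value

def read_sensors():
    """Observer: stand-in for routinely polling a set of sensors of the same type."""
    return {f"sensor_{i}": random.uniform(60.0, 90.0) for i in range(4)}

def analyse(readings):
    """Analysis: pick out readings that indicate an exceptional condition."""
    return {name: v for name, v in readings.items() if v > ALARM_THRESHOLD}

def display(readings):
    print(" | ".join(f"{name}={v:.1f}" for name, v in readings.items()))

def react(alarms):
    """Alarm/Reactor: draw the operator's attention and take a simple action."""
    for name, v in alarms.items():
        print(f"ALARM: {name} reads {v:.1f}, notifying operator")

for _ in range(3):               # in a real system this loop runs continuously
    readings = read_sensors()
    display(readings)
    react(analyse(readings))
    time.sleep(0.1)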

Environmental Control pattern description


The system analyzes information from a set of sensors that collect data from the system's
environment. Further information may also be collected on the state of the actuators that
are connected to the system. Based on the data from the sensors and actuators, control
signals are sent to the actuators that then cause changes to the system's environment.
Information about the sensor values and the state of the actuators may be displayed.
Stimuli: Values from sensors attached to the system and the state of the system actuators.
Responses: Control signals to actuators, display information.
Processes: Monitor, Control, Display, Actuator Driver, Actuator Monitor.
Used in: Control systems.
Process Pipeline pattern description
A pipeline of processes is set up with data moving in sequence from one end of the
pipeline to another. The processes are often linked by synchronized buffers to allow the
producer and consumer processes to run at different speeds. The culmination of a pipeline
may be display or data storage or the pipeline may terminate in an actuator.
Stimuli: Input values from the environment or some other process.
Responses: Output values to the environment or a shared buffer.
Processes: Producer, Buffer, Consumer.
Used in: Data acquisition systems, multimedia systems.

Timing analysis
The correctness of a real-time system depends not just on the correctness of its outputs but also
on the time at which these outputs were produced. In a timing analysis, you calculate how often
each process in the system must be executed to ensure that all inputs are processed and all
system responses produced in a timely way. The results of the timing analysis are used to
decide how frequently each process should execute and how these processes should be
scheduled by the real-time operating system.
Factors in timing analysis:

• Deadlines: the times by which stimuli must be processed and some response
produced by the system.
• Frequency: the number of times per second that a process must execute so that you
are confident that it can always meet its deadlines.
• Execution time: the time required to process a stimulus and produce a response.
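One common way to combine these factors, shown as a hedged sketch below, is the rate-monotonic utilization check: each process contributes (execution time / period) to the total CPU utilization, and the set is schedulable under rate-monotonic priorities if the total does not exceed the bound n(2^(1/n) - 1). The process names and figures here are invented for illustration.

# Illustrative timing analysis; deadlines are assumed equal to periods.
processes = [
    # (name, execution_time_ms, period_ms) -- made-up figures
    ("sensor_poll",  5.0,  50.0),
    ("data_process", 20.0, 100.0),
    ("actuator_out", 10.0, 200.0),
]

utilization = sum(c / t for _, c, t in processes)
n = len(processes)
rm_bound = n * (2 ** (1 / n) - 1)          # utilization bound, about 0.78 for n = 3

print(f"CPU utilization     : {utilization:.3f}")
print(f"Rate-monotonic bound: {rm_bound:.3f}")
print("Schedulable under rate-monotonic priorities:", utilization <= rm_bound)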

Real-time operating systems


Real-time operating systems are specialized operating systems that manage the processes in the
real-time system. They are responsible for process management and resource (processor and
memory) allocation. They may be based on a standard kernel, used unchanged or modified for a
particular application, and do not normally include facilities such as file management.
Real-time operating system components:

• Real-time clock provides information for process scheduling.


• Interrupt handler manages aperiodic requests for service.
• Scheduler chooses the next process to be run.
• Resource manager allocates memory and processor resources.
• Dispatcher starts process execution.
The scheduler chooses the next process to be executed by the processor. This depends on a
scheduling strategy which may take the process priority into account. The resource manager
allocates memory and a processor for the process to be executed. The dispatcher takes the
process from the ready list, loads it onto a processor and starts execution.
Scheduling strategies:

• Non pre-emptive scheduling: once a process has been scheduled for execution, it
runs to completion or until it is blocked for some reason (e.g. waiting for I/O).
• Pre-emptive scheduling: the execution of an executing process may be stopped if a
higher-priority process requires service.
• Scheduling algorithms include round-robin, rate monotonic, and shortest deadline
first.
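A minimal sketch of pre-emptive, priority-based scheduling follows, as a toy simulation rather than a real RTOS (the process set and the one-tick time model are assumptions of this example): at each tick the scheduler picks the highest-priority ready process, so a newly arrived higher-priority process pre-empts the one currently executing.

from dataclasses import dataclass

@dataclass
class Process:
    name: str
    priority: int        # higher number = higher priority (a convention of this sketch)
    arrival: int         # tick at which the process becomes ready
    remaining: int       # ticks of execution still needed

processes = [
    Process("logger",    priority=1, arrival=0, remaining=4),
    Process("alarm",     priority=3, arrival=2, remaining=2),
    Process("telemetry", priority=2, arrival=1, remaining=3),
]

tick = 0
while any(p.remaining > 0 for p in processes):
    ready = [p for p in processes if p.arrival <= tick and p.remaining > 0]
    if ready:
        current = max(ready, key=lambda p: p.priority)   # scheduler: pick highest priority
        current.remaining -= 1                           # dispatcher: run it for one tick
        print(f"tick {tick}: running {current.name}")
    else:
        print(f"tick {tick}: idle")
    tick += 1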

UNIT V SOFTWARE TESTING AND SOFTWARE CONFIGURATION MANAGEMENT

Software Testing Strategy – Unit Testing – Integration Testing – Validation Testing – System
Testing – Debugging – White-Box Testing – Basis Path Testing – Control Structure Testing –
Black-Box Testing – Software Configuration Management (SCM) – SCM Repository – SCM
Process – Configuration Management for Web and Mobile Apps.
Software Testing Strategies
Software testing is an investigation carried out to find out whether any defect or error is present in
the software, so that errors can be reduced or removed to increase the quality of the software, and
to check whether it fulfills the specified requirements or not.

According to Glen Myers, software testing has the following objectives:


• Testing is the process of investigating and checking a program to find whether there is an error
or not and whether it fulfills the requirements or not.
• A test case that finds a large number of errors is a sign of good testing and of a good test case.
• Finding an unknown error that was not discovered before is the sign of a successful and good
test case.

The main objective of software testing is to design tests that systematically uncover different
types of errors with minimum time and effort, so that the overall development time of the
software is reduced.
The overall strategy for testing software includes:
1. Before testing starts, it’s necessary to identify and specify the requirements of the
product in a quantifiable manner.
The software has different quality characteristics, such as maintainability (the ability to update
and modify it), probability of failure (used to find and estimate risk), and usability (how easily it
can be used by the customers or end-users). All these quality characteristics should be specified
in a quantifiable manner so that clear test results can be obtained.

2. Specifying the objectives of testing in a clear and detailed manner.


There are several objectives of testing, such as effectiveness (how effectively the software
achieves its target), failure (the inability to fulfill the requirements and perform its functions),
and the cost of defects or errors (the cost required to fix an error). All these objectives should be
clearly stated in the test plan.

3. For the software, identifying the user’s category and developing a profile for each user.
Use cases describe the interactions and communication among the different classes of users and
the system to achieve the target. They help to identify the actual requirements of the users and
then to test the actual use of the product.

4. Developing a test plan to give value and focus on rapid-cycle testing.


Rapid-cycle testing improves quality by identifying and measuring any changes that are
required to improve the software process. A test plan is therefore an important and effective
document that helps the tester to perform rapid-cycle testing.

5. Robust software is developed that is designed to test itself.


The software should be capable of detecting or identifying different classes of errors.
Moreover, software design should allow automated and regression testing which tests the
software to find out if there is any adverse or side effect on the features of software due to
any change in code or program.

6. Before testing, using effective formal reviews as a filter.


A formal technical review is a technique for identifying errors that have not yet been discovered.
Effective technical reviews conducted before testing reduce the testing effort and the time
required for testing, so that the overall development time of the software is reduced.

7. Conduct formal technical reviews to evaluate the nature, quality or ability of the test
strategy and test cases.
The formal technical review helps in detecting any unfilled gap in the testing approach.
Hence, it is necessary to evaluate the ability and quality of the test strategy and test cases by
technical reviewers to improve the quality of software.

8. For the testing process, developing an approach for continuous improvement.
As part of a statistical process control approach, the test strategy should be measured and the
measurements used to control the quality during the development of the software.

Unit testing
Unit testing is the process of testing individual components in isolation. It is a defect testing
process. Units may be:

• Individual functions or methods within an object;


• Object classes with several attributes and methods;
• Composite components with defined interfaces used to access their functionality.

When testing object classes, tests should be designed to provide coverage of all of the features
of the object:

• Test all operations associated with the object;


• Set and check the value of all attributes associated with the object;
• Put the object into all possible states, i.e. simulate all events that cause a state change.

Whenever possible, unit testing should be automated so that tests are run and checked
without manual intervention. In automated unit testing, you make use of a test automation
framework (such as JUnit) to write and run your program tests. Unit testing frameworks provide
generic test classes that you extend to create specific test cases. They can then run all of the
tests that you have implemented and report, often through some GUI, on the success or
otherwise of the tests. An automated test has three parts:

• A setup part, where you initialize the system with the test case, namely the inputs and
expected outputs.
• A call part, where you call the object or method to be tested.
• An assertion part, where you compare the result of the call with the expected result. If
the assertion evaluates to true, the test has been successful; if false, it has failed.
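As a minimal sketch of these three parts, using Python's built-in unittest framework (analogous to JUnit) with a hypothetical function defined inline as the unit under test:

import unittest

def apply_discount(price, percent):
    """Hypothetical unit under test: returns the price reduced by percent."""
    if percent < 0 or percent > 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_normal_operation(self):
        price, percent = 200.0, 10                 # setup: inputs and expected output
        expected = 180.0
        result = apply_discount(price, percent)    # call: invoke the unit under test
        self.assertEqual(result, expected)         # assertion: compare with the expected result

    def test_abnormal_input_is_rejected(self):
        with self.assertRaises(ValueError):        # abnormal input must be handled, not crash
            apply_discount(200.0, 150)

if __name__ == "__main__":
    unittest.main()

The first test reflects normal operation of the component, and the second uses an abnormal input, matching the two kinds of unit test case described below.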

The test cases should show that, when used as expected, the component that you are testing
does what it is supposed to do. If there are defects in the component, these should be revealed
by test cases. This leads to two types of unit test cases:

• The first of these should reflect normal operation of a program and should show that
the component works as expected.
• The other kind of test case should be based on testing experience of where common
problems arise. It should use abnormal inputs to check that these are properly
processed and do not crash the component.

Integration testing

Integration testing is the second level of the software testing process and comes after unit testing.
In this testing, units or individual components of the software are tested in a group. The focus of
the integration testing level is to expose defects at the time of interaction between integrated
components or units.

Unit testing uses modules for testing purpose, and these modules are combined and tested in
integration testing. The Software is developed with a number of software modules that are
coded by different coders or programmers. The goal of integration testing is to check the
correctness of communication among all the modules.

Once all the components or modules are working independently, we need to check the data
flow between the dependent modules; this is known as integration testing.

Let us consider a sample example of a banking application involving an amount transfer between
two users.



o First, we log in as user P, go to the amount transfer page and send an amount of Rs 200;
a confirmation message should be displayed on the screen stating that the amount was
transferred successfully. We then log out as P, log in as user Q, go to the account balance
page and check that the balance in that account equals the previous balance plus the
received amount. If so, the integration test is successful.
o We also check that the balance in user P's account has been reduced by Rs 200.
o When we click on the transaction history, in both P and Q, a message should be displayed
showing the date and time of the amount transfer.

Integration Testing Techniques

Any testing technique (Blackbox, Whitebox, and Greybox) can be used for Integration Testing;
some are listed below:

Black Box Testing


o State Transition technique
o Decision Table Technique
o Boundary Value Analysis
o All-pairs Testing
o Cause and Effect Graph
o Equivalence Partitioning
o Error Guessing

White Box Testing


o Data flow testing
o Control Flow Testing
o Branch Coverage Testing
o Decision Coverage Testing

Types of Integration Testing

Integration testing can be classified into two parts:

o Incremental integration testing


o Non-incremental integration testing

Incremental Approach

In the Incremental Approach, modules are added in ascending order one by one or according to
need. The selected modules must be logically related. Generally, two or more than two modules
are added and tested to determine the correctness of functions. The process continues until the
successful testing of all the modules.

OR

In this type of testing, there is a strong relationship between the dependent modules. Suppose
we take two or more modules and verify that the data flow between them is working fine. If it
is, then add more modules and test again.
For example: Suppose we have a Flipkart application; if we perform incremental integration
testing, the flow of the application would be like this:

Flipkart→ Login→ Home → Search→ Add cart→Payment → Logout

Incremental integration testing is carried out by further methods:

o Top-Down approach
o Bottom-Up approach

Top-Down Approach

The top-down testing strategy tests the higher-level modules first and then integrates and tests
the lower-level modules until testing of all the modules is complete. Major design flaws can be
detected and fixed early because the critical modules are tested first. In this method, we add the
modules incrementally, one by one, and check the data flow in the same order.
In the top-down approach we ensure that each module we add is a child of the previous one
(Child C is a child of Child B, and so on); lower-level modules that are not yet available are
replaced by stubs (see the stub sketch after the advantages and disadvantages below).

Advantages:

o Critical modules are tested first, so major design flaws are found early and there are
fewer chances of serious defects remaining.
o An early prototype is possible.

Disadvantages:

o Due to the high number of stubs, it gets quite complicated.
o Lower level modules are tested inadequately.
o Identification of the module containing a defect is difficult.
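A minimal sketch of one top-down integration step follows (the module names and behaviour are hypothetical): the lower-level payment module is replaced by a stub so that the higher-level checkout module can be integrated and tested first.

class PaymentGatewayStub:
    """Stub standing in for the real, not-yet-integrated lower-level module."""
    def charge(self, account, amount):
        # Returns a canned, predictable response instead of calling a real service.
        return {"status": "approved", "account": account, "amount": amount}

class Checkout:
    """Higher-level module under test; it depends on a payment gateway."""
    def __init__(self, gateway):
        self.gateway = gateway

    def place_order(self, account, amount):
        result = self.gateway.charge(account, amount)
        return "order confirmed" if result["status"] == "approved" else "order failed"

# Integration test of Checkout against the stubbed lower-level module.
checkout = Checkout(PaymentGatewayStub())
assert checkout.place_order("P", 200) == "order confirmed"
print("top-down integration step passed")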
Bottom-Up Method

The bottom-up testing strategy tests the lower-level modules first and then integrates and tests
the higher-level modules until testing of all the modules is complete. The top-level critical
modules are tested last, so defects in them may be found late. In other words, we add the
modules from the bottom to the top and check the data flow in the same order.

In the bottom-up method we ensure that each module we add is a parent of the previous one.

Advantages

o Identification of defect is easy.


o Do not need to wait for the development of all the modules as it saves time.

Disadvantages

o Critical modules are tested last, so defects in them may be found late.
o There is no possibility of an early prototype.
In addition to these, there is one more approach, which is known as hybrid testing.

Hybrid Testing Method

In this approach, both Top-Down and Bottom-Up approaches are combined for testing. In this
process, top-level modules are tested with lower level modules and lower level modules tested
with high-level modules simultaneously. There is less possibility of occurrence of defect
because each module interface is tested.

Advantages

o The hybrid method provides the features of both the bottom-up and top-down methods.
o It is the most time-saving method.
o It provides complete testing of all modules.

Disadvantages

o This method needs a higher level of concentration, as the process is carried out in both
directions simultaneously.
o It is a complicated method.

Non- incremental integration testing

We go for this method when the data flow is very complex and it is difficult to identify which
module is a parent and which is a child. In such a case, all modules are combined at once and
the data flow between them is checked; hence it is also known as the Big Bang method.

Big Bang Method

In this approach, testing is done by integrating all the modules at once. It is convenient for small
software systems; if it is used for large software systems, identification of defects is difficult.

Since this testing can only be done after all modules are complete, the testing team has less time
to execute the process, so internally linked interfaces and high-risk critical modules can easily
be missed.
Advantages:

o It is convenient for small size software systems.

Disadvantages:

o Identification of defects is difficult, because it is hard to trace where an error came from;
the source of a bug is not obvious.
o Small modules are missed easily.
o The time available for testing is very short.
o Some of the interfaces may be missed during testing.

Software Testing - Validation Testing

Validation testing is the process of evaluating software during the development process, or at
the end of it, to determine whether it satisfies the specified business requirements. Validation
testing ensures that the product actually meets the client's needs. It can also be defined as
demonstrating that the product fulfills its intended use when deployed in an appropriate
environment.
It answers the question: are we building the right product?

Validation Testing - Workflow:


Validation testing can be best demonstrated using V-Model. The Software/product under test is
evaluated during this type of testing.

Activities:

• Unit Testing
• Integration Testing
• System Testing
• User Acceptance Testing
System Testing

System Testing includes testing of a fully integrated software system. Generally, a computer
system is made with the integration of software (any software is only a single element of a
computer system). The software is developed in units and then interfaced with other software
and hardware to create a complete computer system. In other words, a computer system consists
of a group of software to perform the various tasks, but only software cannot perform the task;
for that, the software must be interfaced with compatible hardware. System testing is a series of
different types of tests whose purpose is to exercise and examine the full working of an integrated
software and computer system against the requirements.

Checking the end-to-end flow of an application, or of the software as a user would use it, is known
as system testing. In this, we navigate through all the necessary modules of an application, check
whether the end features and the end business flow work fine, and test the product as a whole system.

It is end-to-end testing where the testing environment is similar to the production environment.
There are four levels of software testing: unit testing, integration testing, system testing and
acceptance testing. Unit testing is used to test a single unit of software; integration testing is used
to test a group of units; system testing is used to test the whole system; and acceptance testing is
used to test the acceptability of the system against the business requirements. Here we are
discussing system testing, which is the third level of testing.

Hierarchy of Testing Levels

There are mainly two widely used methods for software testing, one is White box
testing which uses internal coding to design test cases and another is black box testing
which uses GUI or user perspective to develop test cases.

o White box testing


o Black box testing

System testing falls under Black box testing as it includes testing of the external working of
the software. Testing follows user's perspective to identify minor defects.

System Testing includes the following steps.

o Verification of the input functions of the application to test whether they produce the
expected output or not.
o Testing of the integrated software, including external peripherals, to check the interaction
of the various components with each other.
o Testing of the whole system for end-to-end behaviour.
o Behaviour testing of the application from a user's perspective.

Types of System Testing

System testing can be divided into more than 50 types, but software testing companies typically
use only some of them. These are listed below:
Regression Testing

Regression testing is performed under system testing to identify whether any defect has been
introduced into the system by a modification to any other part of the system. It makes sure that
changes made during the development process have not introduced new defects, and it gives
assurance that old defects will not reappear as new software is added over time.

Load Testing

Load testing is performed under system testing to check whether the system can work under
real-time loads or not.

Functional Testing

Functional testing of a system is performed to find whether any function is missing from the
system. The tester makes a list of vital functions that should be in the system; missing functions
identified during functional testing can then be added, which should improve the quality of the system.

Recovery Testing

Recovery testing of a system is performed under system testing to confirm the reliability,
trustworthiness and accountability of the system, all of which depend on the system's ability to
recover. The system should be able to recover from all possible crashes successfully.

In this testing, we test the application to check how well it recovers from crashes or disasters.

Recovery testing contains the following steps:

o Whenever the software crashes, it should not simply vanish but should write a crash log
or error log message in which the reason for the crash is recorded. For
example: C://Program Files/QTP/Cresh.log
o It should terminate its own process before it disappears (in Windows, for example, the
Task Manager shows which processes are running).
o We deliberately introduce a bug and crash the application, which means that someone
tells us how and when the application will crash; alternatively, from experience gained
after a few months of working with the product, we come to know how and when the
application will crash.
o Re-open the application; the application must reopen with its earlier settings.

For example: Suppose, we are using the Google Chrome browser, if the power goes off, then
we switch on the system and re-open the Google chrome, we get a message asking whether we
want to start a new session or restore the previous session. For any developed product, the
developer writes a recovery program that describes, why the software or the application is
crashing, whether the crash log messages are written or not, etc.
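A minimal sketch of this crash-log idea in Python follows (the log file name and the simulated defect are assumptions of this example): an unhandled exception is written to a crash log before the process terminates, so the reason for the crash can be inspected when the application is reopened.

import sys
import traceback
from datetime import datetime

CRASH_LOG = "crash.log"   # hypothetical location used only for this sketch

def crash_handler(exc_type, exc_value, exc_tb):
    """Write the reason for the crash to a log file before the process exits."""
    with open(CRASH_LOG, "a") as log:
        log.write(f"--- crash at {datetime.now().isoformat()} ---\n")
        traceback.print_exception(exc_type, exc_value, exc_tb, file=log)
    sys.__excepthook__(exc_type, exc_value, exc_tb)   # still report the error normally

sys.excepthook = crash_handler

# Simulate a defect that crashes the application; its traceback ends up in crash.log.
balance = {"P": 1000}
print(balance["Q"])   # KeyError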

Migration Testing

Migration testing is performed to ensure that, if the system needs to be moved onto new
infrastructure, it can be moved without any issue.

Usability Testing

The purpose of this testing is to make sure that the system is familiar to its users and that it
meets the objectives it is supposed to achieve.

Software and Hardware Testing

This testing of the system is intended to check hardware and software compatibility. The
hardware configuration must be compatible with the software so that it runs without any issue.
Compatibility provides flexibility through the interaction between hardware and software.

Why is System Testing Important?

o System testing gives a high level of assurance about system behaviour, as it covers the
end-to-end functions of the system.
o It includes testing of the system software architecture against the business requirements.
o It helps in mitigating live issues and bugs that would otherwise appear after the system
goes into production.
o System testing can feed the same data into both an existing system and the new system
and compare the differences in functionality, so the user can understand the benefits of
the newly added functions of the system.

Software Engineering | Debugging


Introduction:
In the context of software engineering, debugging is the process of fixing a bug in the
software. In other words, it refers to identifying, analyzing and removing errors. This activity
begins after the software fails to execute properly and concludes by solving the problem and
successfully testing the software. It is considered to be an extremely complex and tedious task
because errors need to be resolved at all stages of debugging.
Debugging Process: Steps involved in debugging are:
• Problem identification and report preparation.
• Assigning the report to a software engineer, who verifies that the defect is genuine.
• Defect analysis using modeling, documentation, finding and testing candidate flaws, etc.
• Defect resolution by making the required changes to the system.
• Validation of the corrections.
Debugging Strategies:
1. Study the system for a long enough duration to understand it. This helps the debugger to
   construct different representations of the system being debugged, depending on the need.
   The system is also studied actively to find any recent changes made to the software.
2. Backward analysis of the problem, which involves tracing the program backward from the
   location of the failure message in order to identify the region of faulty code. A detailed study
   of this region is conducted to find the cause of the defect.
3. Forward analysis of the program, which involves tracing the program forward using breakpoints
   or print statements at different points in the program and studying the results. The region
   where the wrong outputs are obtained is the region that needs attention to find the
   defect (a small sketch of this follows the list).
4. Using past experience of debugging software with problems similar in nature. The success of
   this approach depends on the expertise of the debugger.
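A minimal sketch of forward analysis follows (the function and its defect are invented for illustration): print statements act as checkpoints, and the first checkpoint at which wrong values appear localizes the faulty region.

def average_positive(values):
    """Hypothetical buggy function: should average only the strictly positive numbers."""
    total, count = 0, 0
    for v in values:
        print(f"checkpoint A: v={v}, total={total}, count={count}")   # forward trace
        if v >= 0:            # defect: zero is wrongly counted as a positive value
            total += v
            count += 1
    print(f"checkpoint B: total={total}, count={count}")              # forward trace
    return total / count if count else 0

# The trace shows count becoming 3 instead of 2, which points at the faulty condition.
print(average_positive([4, 0, 6]))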
Debugging Tools:
Debugging tool is a computer program that is used to test and debug other programs. A lot of
public domain software like gdb and dbx are available for debugging. They offer console-
based command line interfaces. Examples of automated debugging tools include code based
tracers, profilers, interpreters, etc.
Some of the widely used debuggers are:
• Radare2
• WinDbg
• Valgrind
Difference Between Debugging and Testing:
Debugging is different from testing. Testing focuses on finding bugs and errors, whereas
debugging starts after a bug has been identified in the software. Testing is used to ensure that
the program is correct and does what it is supposed to do, with a certain minimum success rate.
Testing can be manual or automated, and there are several different types of testing, such as
unit testing, integration testing, alpha and beta testing, etc.
Debugging requires a lot of knowledge, skill, and expertise. It can be supported by some
automated tools, but it is more of a manual process, because every bug is different and requires
a different technique, unlike a pre-defined testing mechanism.
Software Engineering | White box Testing

Prerequisite – Software Testing | Basics


White box testing techniques analyze the internal structures of the software: the data structures
used, the internal design, the code structure and the working of the software, rather than just its
functionality as in black box testing. It is also called glass box testing, clear box testing or
structural testing.
Working process of white box testing:
• Input: Requirements, Functional specifications, design documents, source code.
• Processing: Performing risk analysis for guiding through the entire process.
• Proper test planning: Designing test cases so as to cover the entire code; executing them and
repeating until error-free software is reached. The results are also communicated.
• Output: Preparing final report of the entire testing process.
Testing techniques:
• Statement coverage: In this technique, the aim is to traverse every statement at least once, so
each line of code is tested. In the case of a flowchart, every node must be traversed at least
once. Since all lines of code are covered, this helps in pointing out faulty code.


• Branch Coverage: In this technique, test cases are designed so that each branch from every
decision point is traversed at least once; in a flowchart, all edges must be traversed at least
once. In the original example, 4 test cases were required so that all branches of all decisions,
i.e. all edges of the flowchart, were covered.

• Condition Coverage: In this technique, all individual conditions must be covered, as shown
in the following example:
1. READ X, Y
2. IF(X == 0 || Y == 0)
3. PRINT ‘0’
In this example, there are 2 conditions: X == 0 and Y == 0. Test cases are chosen so that each
of these conditions takes both TRUE and FALSE values. One possible set would be:
#TC1 – X = 0, Y = 55
#TC2 – X = 5, Y = 0
• Multiple Condition Coverage: In this technique, all possible combinations of the outcomes
of the conditions are tested at least once. Consider the following example:
1. READ X, Y
2. IF(X == 0 || Y == 0)
3. PRINT ‘0’
#TC1: X = 0, Y = 0
#TC2: X = 0, Y = 5
#TC3: X = 55, Y = 0
#TC4: X = 55, Y = 5
Hence, four test cases are required for two individual conditions.
Similarly, if there are n conditions then 2^n test cases would be required.

• Basis Path Testing: In this technique, control flow graphs are made from code or flowchart
and then Cyclomatic complexity is calculated which defines the number of independent
paths so that the minimal number of test cases can be designed for each independent path.
Steps:
1. Make the corresponding control flow graph
2. Calculate the cyclomatic complexity
3. Find the independent paths
4. Design test cases corresponding to each independent path
Flow graph notation: It is a directed graph consisting of nodes and edges. Each node
represents a sequence of statements, or a decision point. A predicate node is the one that
represents a decision point that contains a condition after which the graph splits. Regions are
bounded by nodes and edges.

Cyclomatic Complexity: It is a measure of the logical complexity of the software and is used to
define the number of independent paths. For a graph G, V(G) is its cyclomatic complexity.
It can be calculated in three ways:
1. V(G) = P + 1, where P is the number of predicate nodes in the flow graph
2. V(G) = E – N + 2, where E is the number of edges and N is the total number of nodes
3. V(G) = Number of non-overlapping regions in the graph
Example: For the flow graph of the original example (figure omitted),
V(G) = 4 (using any of the above formulae)
Number of independent paths = 4:
#P1: 1 – 2 – 4 – 7 – 8
#P2: 1 – 2 – 3 – 5 – 7 – 8
#P3: 1 – 2 – 3 – 6 – 7 – 8
#P4: 1 – 2 – 4 – 7 – 1 – . . . – 7 – 8
(A small calculation sketch for V(G) follows this list of techniques.)
• Loop Testing: Loops are widely used and these are fundamental to many algorithms hence,
their testing is very important. Errors often occur at the beginnings and ends of loops.
1. Simple loops: For simple loops of size n, test cases are designed so that they:
• Skip the loop entirely
• Make only one pass through the loop
• Make 2 passes
• Make m passes, where m < n
• Make n-1, n and n+1 passes
2. Nested loops: For nested loops, all the loops are set to their minimum count and we start
from the innermost loop. Simple loop tests are conducted for the innermost loop and this
is worked outwards till all the loops have been tested.
3. Concatenated loops: Independent loops, one after another. Simple loop tests are applied
for each.
If they’re not independent, treat them like nesting.
Advantages:
1. White box testing is very thorough as the entire code and structures are tested.
2. It results in the optimization of code removing error and helps in removing extra lines of
code.
3. It can start at an earlier stage as it doesn’t require any interface as in case of black box
testing.
4. Easy to automate.
Disadvantages:
1. Main disadvantage is that it is very expensive.
2. Redesign of code and rewriting code needs test cases to be written again.
3. Testers are required to have in-depth knowledge of the code and programming language as
opposed to black box testing.
4. Missing functionalities cannot be detected as the code that exists is tested.
5. Very complex and at times not realistic.

Control Structure Testing


Control structure testing is used to increase the coverage area by testing various control
structures present in the program. The different types of testing performed under control
structure testing are as follows-
1. Condition Testing
2. Data Flow Testing
3. Loop Testing
1. Condition Testing :
Condition testing is a test case design method which ensures that the logical conditions and
decision statements are free from errors. The errors present in logical conditions can be
incorrect Boolean operators, missing parentheses in a Boolean expression, errors in relational
operators or arithmetic expressions, and so on.
The common types of logical conditions that are tested using condition testing are-
1. A relational expression, like E1 op E2, where ‘E1’ and ‘E2’ are arithmetic expressions and
‘op’ is a relational operator.
2. A simple condition, such as a relational expression preceded by a NOT (~) operator.
For example, (~E1), where ‘E1’ is an arithmetic expression and ‘~’ denotes the NOT operator.
3. A compound condition, consisting of two or more simple conditions, Boolean operators and
parentheses.
For example, (E1 & E2)|(E2 & E3), where E1, E2, E3 denote arithmetic expressions and ‘&’
and ‘|’ denote the AND and OR operators.
4. A Boolean expression, consisting of operands and Boolean operators such as AND, OR and NOT.
For example, ‘A|B’ is a Boolean expression where ‘A’ and ‘B’ denote operands and ‘|’
denotes the OR operator.
2. Data Flow Testing :
The data flow testing method chooses the test paths of a program based on the locations of the
definitions and uses of all the variables in the program.
To describe the approach, suppose that each statement in a program is assigned a unique
statement number and that no function can modify its parameters or global variables. For a
statement with statement number S:
DEF (S) = {X | statement S has a definition of X}
USE (S) = {X | statement S has a use of X}
If statement S is an if or loop statement, then its DEF set is empty and its USE set depends on
the condition of statement S. The definition of a variable X at statement S is said to be live at
statement S’ if there is a path from S to S’ that contains no other definition of X.
A definition-use (DU) chain of variable X has the form [X, S, S’], where S and S’ denote
statement numbers, X is in DEF(S) and USE(S’), and the definition of X in statement S is live
at statement S’.
A simple data flow testing approach requires that each DU chain be covered at least once; this
is known as the DU testing approach. DU testing does not ensure coverage of all branches of a
program. However, a branch is not guaranteed to be covered by DU testing only in rare cases,
such as an if-then-else construct in which the ‘then’ part contains no definition of any variable
and the ‘else’ part is absent. Data flow testing strategies are appropriate for choosing test paths
of a program containing nested if and loop statements.
3. Loop Testing :
Loop testing is actually a white box testing technique. It specifically focuses on the validity of
loop construction.
Following are the types of loops.
1. Simple loops – The following set of tests can be applied to simple loops, where the
maximum allowable number of passes through the loop is n (a test sketch for this case
appears after this list):
1. Skip the entire loop.
2. Traverse the loop only once.
3. Traverse the loop two times.
4. Make p passes through the loop, where p < n.
5. Traverse the loop n-1, n and n+1 times.

2. Concatenated loops – If the loops are not dependent on each other, concatenated loops can be
tested using the approach used for simple loops. If the loops are interdependent, the steps for
nested loops are followed.
3. Nested loops – Loops within loops are called nested loops. When testing nested loops, the
number of tests increases as the level of nesting increases.
The steps for testing nested loops are as follows:
1. Start with the inner loop; set all other loops to their minimum values.
2. Conduct simple loop testing on the inner loop.
3. Work outwards.
4. Continue until all loops have been tested.
4. Unstructured loops – This type of loop should be redesigned, whenever possible, to reflect
the use of structured programming constructs.
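A minimal sketch of the simple-loop test cases follows (the function under test and its loop bound are hypothetical): the loop is exercised with 0, 1, 2, m (< n), n-1, n and n+1 requested passes.

MAX_ITEMS = 5   # assumed maximum number of passes through the loop (n)

def sum_first(values, n):
    """Hypothetical unit under test: sums at most the first n elements, capped at MAX_ITEMS."""
    total = 0
    for i in range(min(n, MAX_ITEMS, len(values))):
        total += values[i]
    return total

data = [1, 2, 3, 4, 5, 6]
# Requested passes through the loop: 0, 1, 2, m (< n), n-1, n and n+1.
cases = [(0, 0), (1, 1), (2, 3), (3, 6), (4, 10), (5, 15), (6, 15)]
for passes, expected in cases:
    assert sum_first(data, passes) == expected, f"failed for {passes} passes"
print("simple-loop tests passed")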

Black box testing


Black box testing is a technique of software testing which examines the functionality of
software without peering into its internal structure or coding. The primary source of black box
testing is a specification of requirements that is stated by the customer.

In this method, the tester selects a function, gives it input values to examine its functionality,
and checks whether the function produces the expected output or not. If the function produces
the correct output, it passes the test; otherwise it fails. The test team reports the result to the
development team and then tests the next function. If severe problems remain after all functions
have been tested, the software is given back to the development team for correction.

Black box testing is thus a type of software testing in which the internal functioning of the
software is not known; the testing is done without internal knowledge of the product.
Black box testing can be done in the following ways:
1. Syntax-Driven Testing – This type of testing is applied to systems that can be syntactically
represented by some language, for example compilers and languages that can be described by a
context-free grammar. In this, the test cases are generated so that each grammar rule is used at
least once.
2. Equivalence partitioning – It is often seen that many types of inputs work similarly, so
instead of giving all of them separately we can group them together and test only one input from
each group. The idea is to partition the input domain of the system into a number of equivalence
classes such that each member of a class works in a similar way, i.e., if a test case in one class
results in some error, other members of the class would result in the same error.
The technique involves two steps:
1. Identification of equivalence classes – Partition the input domain into at least two
sets: valid values and invalid values. For example, if the valid range is 0 to 100 then select
one valid input like 49 and one invalid input like 104.
2. Generating test cases –
(i) Assign a unique identification number to each valid and invalid class of input.
(ii) Write test cases covering all valid and invalid classes, ensuring that no two invalid
inputs mask each other.
To calculate the square root of a number, the equivalence classes will be (a test sketch for these
classes and the boundary values appears at the end of this list of techniques):
(a) Valid inputs:
• A whole number which is a perfect square – the output will be an integer.
• A whole number which is not a perfect square – the output will be a decimal number.
• Positive decimals.
(b) Invalid inputs:
• Negative numbers (integer or decimal).
• Characters other than numbers, like “a”, “!”, “;”, etc.
3. Boundary value analysis – Boundaries are very good places for errors to occur. Hence, if test
cases are designed for the boundary values of the input domain, the efficiency of testing improves
and the probability of finding errors increases. For example, if the valid range is 10 to 100, then
in addition to ordinary valid and invalid inputs we also test the boundary values 10 and 100
(and, typically, values just outside them such as 9 and 101).
4. Cause-effect graphing – This technique establishes a relationship between logical inputs,
called causes, and the corresponding actions, called effects. The causes and effects are represented
using Boolean graphs. The following steps are followed:
1. Identify the inputs (causes) and outputs (effects).
2. Develop the cause-effect graph.
3. Transform the graph into a decision table.
4. Convert the decision table rules into test cases.
In the original example (the cause-effect graph and decision table figures are omitted here), each
column of the decision table corresponds to a rule which becomes a test case, so there are 4 test cases.
5. Requirement based testing – It includes validating the requirements given in SRS of
software system.
6. Compatibility testing – The test case results depend not only on the product but also on the
infrastructure used to deliver its functionality; when the infrastructure parameters are changed,
the software is still expected to work properly. Some parameters that generally affect the
compatibility of software are:
1. Processor (Pentium 3, Pentium 4) and number of processors.
2. Architecture and characteristics of the machine (32-bit or 64-bit).
3. Back-end components such as database servers.
4. Operating system (Windows, Linux, etc.).
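Here is the test sketch referred to in the equivalence partitioning and boundary value items above, assuming a hypothetical safe_sqrt function as the unit under test; one representative value is chosen from each equivalence class, plus the boundary value 0.

import math

def safe_sqrt(x):
    """Hypothetical unit under test: square root that rejects invalid inputs."""
    if not isinstance(x, (int, float)) or isinstance(x, bool):
        raise TypeError("input must be a number")
    if x < 0:
        raise ValueError("input must not be negative")
    return math.sqrt(x)

# One representative test case per valid equivalence class.
assert safe_sqrt(49) == 7                         # perfect square -> integer result
assert abs(safe_sqrt(2) - 1.4142) < 1e-3          # non-perfect square -> decimal result
assert abs(safe_sqrt(2.25) - 1.5) < 1e-9          # positive decimal input

# Invalid classes: negative numbers and non-numeric characters.
for bad in (-1, -0.5):
    try:
        safe_sqrt(bad)
        raise AssertionError("negative input was accepted")
    except ValueError:
        pass
try:
    safe_sqrt("a")
    raise AssertionError("non-numeric input was accepted")
except TypeError:
    pass

# Boundary value analysis around the lower boundary of the valid range (0).
assert safe_sqrt(0) == 0
print("equivalence-partitioning and boundary tests passed")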
Software Configuration Management

When we develop software, the product (software) undergoes many changes in its maintenance
phase; we need to handle these changes effectively.

Several individuals (programmers) work together to achieve these common goals. These
individuals produce several work products (software configuration items), e.g., intermediate
versions of modules, test data used during debugging, and parts of the final product.

The elements that comprise all information produced as a part of the software process are
collectively called a software configuration.

As software development progresses, the number of software configuration items (SCIs) grows
rapidly. These items have to be handled and controlled, and this is where we require software
configuration management (SCM).

A configuration of the product refers not only to the product's constituents but also to a particular
version of each component.

Therefore, SCM is the discipline which:

o Identifies changes
o Monitors and controls changes
o Ensures the proper implementation of changes made to an item
o Audits and reports on the changes made

Configuration Management (CM) is a technique of identifying, organizing, and controlling
modifications to the software being built by a programming team.

The objective is to maximize productivity by minimizing mistakes (errors).

CM is essential for the inventory management, library management, and update management of
the items essential for the project.
Why do we need Configuration Management?

Multiple people work on software which is constantly being updated. Multiple versions,
branches and authors may be involved in a software project, and the team may be geographically
distributed and working concurrently. Changes in user requirements, policy, budget and
schedules also need to be accommodated.

Importance of SCM

It is practical in controlling and managing access to the various SCIs, e.g., by preventing two
members of a team from checking out the same component for modification at the same time.

It provides the tool to ensure that changes are being properly implemented.

It has the capability of describing and storing the various constituent of software.

SCM is used in keeping a system in a consistent state by automatically producing derived


version upon modification of the same component.

SCM Process

SCM uses tools to ensure that the necessary changes have been implemented adequately in the
appropriate components. The SCM process defines a number of tasks:

o Identification of objects in the software configuration
o Version Control
o Change Control
o Configuration Audit
o Status Reporting
Identification

Basic Object: a unit of text created by a software engineer during analysis, design, coding, or testing.

Aggregate Object: a collection of basic objects and other aggregate objects. A design
specification is an aggregate object.

Each object has a set of distinct characteristics that identify it uniquely: a name, a description, a
list of resources, and a "realization."

The interrelationships between configuration objects can be described with a Module
Interconnection Language (MIL).
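As a rough illustration of these characteristics (name, description, resources, realization) and of aggregate objects, the sketch below models configuration objects as plain Python classes; the class and field names are assumptions for illustration, not the interface of any SCM tool.

```python
# Rough sketch of configuration-object identification. Field names mirror the
# characteristics listed above; the classes themselves are illustrative only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class BasicObject:
    name: str                 # unique identifier
    description: str          # what the item is (e.g., "data model document")
    resources: List[str]      # entities it uses or references
    realization: str          # pointer to the actual artifact, e.g., a file path

@dataclass
class AggregateObject:
    name: str
    members: List[object] = field(default_factory=list)  # basic or aggregate objects

# A design specification as an aggregate of basic objects.
design_spec = AggregateObject(
    name="DesignSpecification-v1",
    members=[
        BasicObject("DataModel", "ER model of the system", ["requirements.srs"], "docs/data_model.md"),
        BasicObject("ArchDesign", "Architectural design", ["DataModel"], "docs/architecture.md"),
    ],
)
print(design_spec.name, "contains", [m.name for m in design_spec.members])
```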

Version Control

Version Control combines procedures and tools to manage the different versions of configuration
objects that are generated during the software process.

Clemm defines version control in the context of SCM: Configuration management allows a
user to specify the alternative configuration of the software system through the selection of
appropriate versions. This is supported by associating attributes with each software version, and
then allowing a configuration to be specified [and constructed] by describing the set of desired
attributes.
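Following Clemm's description, a configuration can be pictured as the set of versions whose attributes match a desired attribute set. The sketch below is a minimal illustration under that assumption; the objects, version numbers and attribute names are invented.

```python
# Sketch of attribute-based configuration selection (after Clemm's description).
# Versions and attributes are invented for illustration.

versions = [
    {"object": "ui_module", "version": "1.2", "attributes": {"platform": "mobile",  "status": "tested"}},
    {"object": "ui_module", "version": "1.3", "attributes": {"platform": "desktop", "status": "draft"}},
    {"object": "db_module", "version": "2.0", "attributes": {"platform": "mobile",  "status": "tested"}},
]

def select_configuration(desired):
    """Pick, per object, the version whose attributes include all desired ones."""
    config = {}
    for v in versions:
        if all(v["attributes"].get(k) == val for k, val in desired.items()):
            config[v["object"]] = v["version"]
    return config

# Construct the configuration of tested, mobile-platform versions.
print(select_configuration({"platform": "mobile", "status": "tested"}))
# -> {'ui_module': '1.2', 'db_module': '2.0'}
```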

Change Control

James Bach describes change control in the context of SCM as follows: change control is vital, but
the forces that make it essential also make it annoying.

We worry about change because a small confusion in the code can create a big failure in the
product. But change can also fix a significant failure or enable incredible new capabilities.

We worry about change because a single rogue developer could sink the project, yet brilliant
ideas originate in the minds of those rogues, and a burdensome change control process could
effectively discourage them from doing creative work.

A change request is submitted and evaluated to assess its technical merit, potential side effects, the
overall impact on other configuration objects and system functions, and the projected cost of the
change.

The results of the evaluations are presented as a change report, which is used by a change
control authority (CCA) - a person or a group who makes a final decision on the status and
priority of the change.

The "check-in" and "check-out" process implements two necessary elements of change control-
access control and synchronization control.

Access Control governs which software engineers have the authority to access and modify a
particular configuration object.

Synchronization Control helps to ensure that parallel changes, performed by two different
people, don't overwrite one another.
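The check-out/check-in mechanism can be pictured as an access list plus a lock. The following sketch is a deliberately simplified, single-process illustration of access control and synchronization control, not how a real SCM tool implements them.

```python
# Simplified illustration of access control and synchronization control in
# a check-out / check-in cycle. Names and behaviour are illustrative only.

class ConfigurationObject:
    def __init__(self, name, authorized_users):
        self.name = name
        self.authorized = set(authorized_users)  # access control list
        self.checked_out_by = None               # synchronization lock

    def check_out(self, user):
        if user not in self.authorized:                      # access control
            raise PermissionError(f"{user} may not modify {self.name}")
        if self.checked_out_by is not None:                  # synchronization control
            raise RuntimeError(f"{self.name} already checked out by {self.checked_out_by}")
        self.checked_out_by = user

    def check_in(self, user):
        if self.checked_out_by != user:
            raise RuntimeError(f"{user} does not hold the lock on {self.name}")
        self.checked_out_by = None                           # release the lock

obj = ConfigurationObject("payment_module", authorized_users={"alice", "bob"})
obj.check_out("alice")      # succeeds
try:
    obj.check_out("bob")    # blocked: a parallel change would overwrite alice's work
except RuntimeError as e:
    print(e)
obj.check_in("alice")
```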

Configuration Audit

SCM audits verify that the software product satisfies the baseline requirements and ensure
consistency between what is built and what is delivered.

SCM audits also ensure that traceability is maintained between all CIs and that all work
requests are associated with one or more CI modifications.

SCM audits are the "watchdogs" that ensure that the integrity of the project's scope is
preserved.
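One slice of such an audit, checking that every CI modification is traceable to an approved work request, can be sketched as a simple cross-check; the record layout and identifiers below are hypothetical.

```python
# Hypothetical traceability check: every CI modification should reference at
# least one approved work (change) request.

approved_requests = {"CR-101", "CR-102"}

ci_modifications = [
    {"ci": "ui_module v1.3", "work_requests": ["CR-101"]},
    {"ci": "db_schema v2.1", "work_requests": []},          # audit finding
    {"ci": "api_layer v0.9", "work_requests": ["CR-999"]},  # unknown request
]

for mod in ci_modifications:
    linked = [r for r in mod["work_requests"] if r in approved_requests]
    if not linked:
        print(f"AUDIT FINDING: {mod['ci']} is not traceable to an approved change request")
```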

Status Reporting

Configuration status reporting (sometimes also called status accounting) provides accurate
status and current configuration data to developers, testers, end users, customers and
stakeholders through admin guides, user guides, FAQs, release notes, installation guides,
configuration guides, etc.
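A status report can be assembled from the same configuration data; the sketch below prints a minimal per-CI summary, with entirely illustrative records.

```python
# Minimal sketch of configuration status reporting: summarising, per CI, the
# current version and the state of its change requests. Data is illustrative.

configuration_status = [
    {"ci": "ui_module", "version": "1.3", "pending_changes": 1, "baseline": "Release-2.0"},
    {"ci": "db_module", "version": "2.0", "pending_changes": 0, "baseline": "Release-2.0"},
]

print(f"{'CI':<12}{'Version':<10}{'Pending':<10}{'Baseline'}")
for row in configuration_status:
    print(f"{row['ci']:<12}{row['version']:<10}{row['pending_changes']:<10}{row['baseline']}")
```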
Software Configuration Management for Cloud, Mobile, and Database Development
Supporting cloud, mobile, and database development sounds like a remarkably technical
endeavor. In practice, personality issues between team members can impact just how effectively
you handle these complex technical efforts. You need to be able to understand these challenges
and how the personality of the members of your team will impact their individual performances
as well as the performance of the entire team. The good news is that there is a lot you can do to
ensure success. Here's how to get started.

Taming Complexity
Cloud, mobile, and database development have some unique characteristics. When Bob Aiello
and I worked on the source code management chapter of Configuration Management Best
Practices [1], specifying practices to handle mainline, bugfixes, and other variants seemed to be
fairly straightforward. But version control in the cloud, on your Android device, and in your
databases adds some unique challenges that are not always so straightforward to tame. For example,
technology experts are highly specialized and may not have a firm grasp on all of the technical
details required to produce a complete, complex system. Much of this complexity revolves
around managing ambiguity.
Dealing with Ambiguity
The cloud brings with it the risk of having to manage resources, over which you have little
actual control. There is ample ambiguity in having to rely upon the service provided by a
vendor that may or may not be in alignment with your own goals and priorities. Mobile devices
have their own set of requirements and industry standards that are rapidly emerging as smart
phones become as ubiquitous as less dynamic cell phones.
Trying to version control your database also carries its own set of complexities. For example,
everything from handling SQL scripts and stored procedures to managing low-level DBA performance
tuning takes creative and flexible strategies. Although there are emerging tools in this space,
considerable challenges remain. Some people are better at accepting and dealing with ambiguity
than others. One effective way to tame complexity is to reduce it into simple constructs that are
easier to handle.
Reducing Cognitive Complexity
Taking complex problems and reducing them into one or more less complicated challenges
reduces work and eliminates many potential sources of error. Many technology professionals
can tackle complex tasks but are very challenged when it comes to reducing cognitive
complexity. When you feel challenged and uncertain as to how to handle complexity, start by
defining the requirements for the situation at hand. Many agile practices are very effective at
managing requirements and provide a good starting point for gaining a clear understanding of
the system that you need to develop.
Requirements Management
Requirements may be specified in big documents with formal requirements specifications.
Many agile enthusiasts prefer to define requirements in lighter terms using what are
called epics and stories. [2] Still others use test-driven development to manage and reduce
cognitive complexity. [3] From a personality perspective, a critical determinant is whether you
are an innovative leader or will be mired and rendered immobile by the challenge of handling
platforms that may be a little bit out of the ordinary. Innovative leaders help the team tackle
tough challenges and set the direction for the entire development effort. This can be especially
important when there is a lack of control.
Teamwork When There Is a Lack of Control
You will find that some members of your team are better than others at demonstrating
leadership in the face of complexity. Make sure that you balance your teams with not only
regard to technical competence but also sensitivity to personalities. You need to have some
people who will help provide the synergy for effective teamwork even when there is a lack of
control.
Leadership for Managing Complexity
Some technology leaders truly shine in the face of challenge. When managing development in
the cloud or on newly emerging platforms such as mobile devices, managing complexity must
be your primary task. True leaders thrive on this type of challenge. They also create
environments where learning can impact the entire lifecycle development effort.
Retrospectives for Learning and Improving
Agile retrospectives [4] are a very effective process improvement technique in which the team
meets to discuss what went well and what needs to be improved. Some people find this
introspection therapeutic and readily
embrace the opportunity. Others find it uncomfortable to open up about mistakes and accept
feedback from others. Too often, these sessions degenerate into finger pointing and name
calling. Team leaders play a vital role in setting the right tone so that retrospectives can be
effective and fruitful.
