
ADVANCED SOFTWARE ENGINEERING

UNIT I SOFTWARE PROCESS & MODELING


Prescriptive Process Models
A prescriptive process model describes "how to do" software development according to a defined software
process framework. A prescriptive model prescribes how a new software system should be developed.
The framework activities it defines are carried out irrespective of the process model chosen by the
organization.
The name 'prescriptive' is given because the model prescribes a set of activities, actions, tasks, quality
assurance mechanisms, and change control mechanisms for every project.

There are three types of prescriptive process models. They are:

1. The Waterfall Model


2. Incremental Process model
3. RAD model
1. The Waterfall Model

 The waterfall model is also called the 'linear sequential model' or 'classic life cycle
model'.
 In this model, each phase is fully completed before the beginning of the next phase.
 This model is used for small projects.
 In this model, feedback is taken after each phase to ensure that the project is on the right
path.
 Testing starts only after development is complete.


Advantages of waterfall model

 The waterfall model is simple and easy to understand, implement, and use.
 All the requirements are known at the beginning of the project, hence it is easy to
manage.
 It avoids overlapping of phases because each phase is completed before the next begins.
 This model works for small projects because the requirements are understood very well.
 This model is preferred for projects where quality is more important than the cost of the
project.
Disadvantages of the waterfall model

 This model is not good for complex and object-oriented projects.
 It is a poor model for long projects.
 Problems with this model are not uncovered until software testing begins.
 The amount of risk is high.
2. Incremental Process model

 The incremental model combines elements of the waterfall model, applied in an
iterative fashion.
 The first increment in this model is generally a core product.
 Each increment builds the product and submits it to the customer for any suggested
modifications.
 The next increment implements the customer's suggestions and adds further
requirements to the previous increment.
 This process is repeated until the product is finished.
For example, word-processing software is often developed using the incremental model.

Advantages of incremental model

 This model is flexible because the cost of development is low and initial product delivery
is faster.
 It is easier to test and debug during a smaller iteration.
 Working software is produced quickly and early in the software life cycle.
 The customers can respond to the functionality after every increment.
Disadvantages of the incremental model

 The cost of the final product may exceed the cost estimated initially.
 This model requires very clear and complete planning.
 The design must be planned before the whole system is broken into small
increments.
 Customer demands for additional functionality after every increment can cause
problems for the system architecture.
3. RAD model

 RAD is a Rapid Application Development model.


 Using the RAD model, a software product is developed in a short period of time.
 The initial activity starts with communication between customer and developer.
 Planning depends upon the initial requirements, and then the requirements are divided into
groups.
 Planning is important so that separate groups can work together on different modules.
The RAD model consists of the following phases:
1. Business Modeling

 Business modeling describes the flow of information between the various business functions in the
project.
 For example, what information is produced by each function and which functions
handle that information.
 A complete business analysis should be performed to get the essential business
information.
2. Data modeling

 The information gathered in the business modeling phase is refined into a set of data objects that are
essential to the business.
 The attributes of each object are identified and the relationships between objects are defined.
3. Process modeling

 The data objects defined in the data modeling phase are transformed to achieve the information
flow needed to implement the business model.
 Process descriptions are created for adding, modifying, deleting or retrieving a data
object.
4. Application generation

 In the application generation phase, the actual system is built.


 Automated tools are used to construct the software.
5. Testing and turnover

 The prototypes are independently tested after each iteration so that the overall testing time
is reduced.
 The data flow and the interfaces between all the components are fully tested. Hence,
most of the programming components are already tested.
Agility and Process
The meaning of Agile is swift or versatile. The "Agile process model" refers to a software development
approach based on iterative development. Agile methods break tasks into smaller iterations, or parts,
and do not directly involve long-term planning. The project scope and requirements are laid down at the
beginning of the development process. Plans regarding the number of iterations, and the duration and
scope of each iteration, are clearly defined in advance.
Each iteration is considered as a short time "frame" in the Agile process model, which typically lasts
from one to four weeks. The division of the entire project into smaller parts helps to minimize the
project risk and to reduce the overall project delivery time requirements. Each iteration involves a
team working through a full software development life cycle including planning, requirements
analysis, design, coding, and testing before a working product is demonstrated to the client.

Phases of Agile Model:


The phases of the Agile model are as follows:
1. Requirements gathering
2. Design the requirements
3. Construction/ iteration
4. Testing/ Quality assurance
5. Deployment
6. Feedback
1. Requirements gathering: In this phase, you must define the requirements. You should explain
business opportunities and plan the time and effort needed to build the project. Based on this
information, you can evaluate technical and economic feasibility.
2. Design the requirements: When you have identified the project, work with stakeholders to define
requirements. You can use the user flow diagram or the high-level UML diagram to show the work of
new features and show how it will apply to your existing system.
3. Construction/ iteration: When the team defines the requirements, the work begins. Designers and
developers start working on their project, which aims to deploy a working product. The product will
undergo various stages of improvement, so it starts with simple, minimal functionality.
4. Testing: In this phase, the Quality Assurance team examines the product's performance and looks
for bugs.
5. Deployment: In this phase, the team releases the product into the user's work environment.
6. Feedback: After releasing the product, the last step is feedback. In this, the team receives feedback
about the product and works through the feedback.
Agile Methods:

 Scrum
 Crystal
 Dynamic Software Development Method (DSDM)
 Feature Driven Development (FDD)
 Lean Software Development
 eXtreme Programming (XP)
Scrum
SCRUM is an agile development process focused primarily on ways to manage tasks in team-based
development conditions.
eXtreme Programming(XP)
This type of methodology is used when customers are constantly changing demands or requirements,
or when they are not sure about the system's performance.
Crystal:
This method has three concepts:
1. Chartering: Several activities are involved in this phase, such as forming a development
team, performing feasibility analysis, developing plans, etc.
2. Cyclic delivery: This phase consists of two or more delivery cycles, in which:
 The team updates the release plan.
 The integrated product is delivered to the users.
3. Wrap up: In this phase, deployment and post-deployment activities are performed according
to the user environment.
Dynamic Software Development Method (DSDM):
DSDM is a rapid application development strategy for software development and provides an agile
project delivery framework. The essential features of DSDM are that users must be actively
involved, and teams are given the right to make decisions.
The DSDM project contains seven stages:
1. Pre-project
2. Feasibility Study
3. Business Study
4. Functional Model Iteration
5. Design and build Iteration
6. Implementation
7. Post-project
Feature Driven Development (FDD):
This method focuses on "Designing and Building" features. In contrast to other agile methods, FDD
describes small, discrete steps of work that should be completed separately per feature.
Lean Software Development:
The lean software development methodology follows the principle of "just in time production." The lean
method aims to increase the speed of software development and reduce costs. Lean development
can be summarized in seven principles:
1. Eliminating Waste
2. Amplifying learning
3. Defer commitment (deciding as late as possible)
4. Early delivery
5. Empowering the team
6. Building Integrity
7. Optimize the whole
Advantages (Pros) of Agile Method:
1. Frequent delivery.
2. Face-to-face communication with clients.
3. Efficient design that fulfils the business requirements.
4. Changes are acceptable at any time.
5. It reduces total development time.
Disadvantages (Cons) of Agile Model:
1. Due to the shortage of formal documentation, confusion can arise, and crucial decisions taken
throughout the various phases can be misinterpreted at any time by different team members.
2. Due to the lack of proper documentation, once the project is completed and the developers are
allotted to another project, maintenance of the finished project can become difficult.

Scrum
Scrum is a lightweight yet incredibly powerful set of values, principles, and practices. Scrum relies on
cross-functional teams to deliver products and services in short cycles, enabling:

 Fast feedback
 Quicker innovation
 Continuous improvement
 Rapid adaptation to change
 More delighted customers
 Accelerated pace from idea to delivery

Scrum is "a lightweight framework that helps people, teams and organizations generate value through
adaptive solutions for complex problems.1" Scrum is the most widely used and popular agile
framework. The term agile describes a specific set of foundational principles and values for
organizing and managing complex work.
Though it has its roots in software development, today scrum refers to a lightweight framework that is
used in every industry to deliver complex, innovative products and services that truly delight
customers. It is simple to understand, but difficult to master.
Scrum's Approach to Work
People are the focus of scrum. Scrum organizes projects using cross-functional teams, each one of
which has all of the capabilities necessary to deliver a piece of functionality from idea to delivery.
The scrum framework guides the creation of a product, focusing on value and high visibility of
progress. Working from a dynamic list of the most valuable things to do, a team brings that product
from an idea to life using the scrum framework as a guide for transparency, inspection, and
adaptation. The goal of scrum is to help teams work together to delight their customers.
The Scrum Team

 Developers - On a scrum team, a developer is anyone on the team that is delivering work,
including those team members outside of software development. In fact, the 15th State of
Agile Report found that the number of non-software teams adopting agile frameworks like
scrum doubled from 2020 to 2021, with 27% reporting agile use in marketing, and between
10-16% reporting use in security, sales, finance, human resources, and more.
 Product Owner - Holds the vision for the product and prioritizes the product backlog
 Scrum Master - Helps the team best use scrum to build the product.

Scrum Artifacts

 Product Backlog - An emergent, ordered list of what is needed to improve the product; it
includes the product goal. (A minimal sketch of these artifacts appears after this list.)
 Sprint Backlog - The set of product backlog items selected for the sprint by the developers
(team members), plus a plan for delivering the increment and realizing the sprint goal.
 Increment - A sum of usable sprint backlog items completed by the developers in the sprint
that meets the definition of done, plus the value of all the increments that came before. Each
increment is a recognizable, visibly improved, operating version of the product.
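
The three artifacts can be pictured as simple data structures. The sketch below is only an illustration of the definitions above; the class and field names (BacklogItem, ProductBacklog, SprintBacklog, done) are assumptions chosen for readability and are not defined by the Scrum Guide.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class BacklogItem:
    """One product backlog item (illustrative fields only)."""
    title: str
    done: bool = False          # whether it meets the definition of done

@dataclass
class ProductBacklog:
    """Emergent, ordered list of what is needed to improve the product."""
    product_goal: str
    items: List[BacklogItem] = field(default_factory=list)

@dataclass
class SprintBacklog:
    """Items selected for the sprint, plus the sprint goal that guides the plan."""
    sprint_goal: str
    selected: List[BacklogItem] = field(default_factory=list)

    def increment(self) -> List[BacklogItem]:
        """Usable items completed this sprint that meet the definition of done."""
        return [item for item in self.selected if item.done]
```

In this sketch the increment is simply derived from the sprint backlog by filtering on the definition-of-done flag, mirroring the definitions above.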

Scrum Commitments
Each artifact has an associated commitment - not to be confused with one of the scrum values
(covered below) - that ensures quality and keeps the team focused on delivering value to its users.

 Definition of Done - When the increment is delivered, it needs to meet a shared


understanding of what “done” means. The definition of done ensures that the standard of
quality is met. The definition of done can differ between organizations and teams.
 Sprint Goal - A specific and singular purpose for the sprint backlog. This goal helps
everyone focus on the essence of what needs to be done and why.
 Product Goal - To plan the work to be done each sprint, teams need an idea of their product's
overall objective. Each team may have multiple product goals over its lifetime, but only one
at a time.

Scrum Events
Scrum teams work in sprints, each of which includes several events (or activities). Don't think of these
events as meetings or ceremonies; the events that are contained within each sprint are valuable
opportunities to inspect and adapt the product or the process (and sometimes both).

 The Sprint - The heartbeat of scrum. Each sprint should bring the product closer to the
product goal and is a month or less in length.
 Sprint Planning - The entire scrum team establishes the sprint goal, what can be done, and
how the chosen work will be completed. Planning should be timeboxed to a maximum of 8
hours for a month-long sprint, with a shorter timebox for shorter sprints.
 Daily Scrum - The developers (team members delivering the work) inspect the progress
toward the sprint goal and adapt the sprint backlog as necessary, adjusting the upcoming
planned work. A daily scrum should be timeboxed to 15 minutes each day.
 Sprint Review - The entire scrum team inspects the sprint's outcome with stakeholders and
determines future adaptations. Stakeholders are invited to provide feedback on the increment.
 Sprint Retrospective - The scrum team inspects how the last sprint went regarding
individuals, interactions, processes, tools, and definition of done. The team identifies
improvements to make the next sprint more effective and enjoyable. This is the conclusion of
the sprint.

Scrum vs Agile:
The difference between agile and scrum is that agile refers to a set of principles and values shared by
several methodologies, processes, and practices; scrum is one of several agile frameworks, and is the
most popular.
Fundamentals of Agile and Scrum
Agile principles and values foster the mindset and skills businesses need in order to succeed in an
uncertain and turbulent environment. The term agile was first used in the Manifesto for Agile
Software Development (Agile Manifesto) back in 2001. The main tenets of the Agile Manifesto are:
1. Individuals and interactions over processes and tools.
2. Working software over comprehensive documentation.
3. Customer collaboration over contract negotiation.
4. Responding to change over following a plan.
Scrum fulfills the vision of the Agile Manifesto by helping individuals and businesses organize their
work to maximize collaboration, minimize red tape, deliver frequently, and create multiple
opportunities to inspect and adapt.
Why an Agile Framework Like Scrum Works
As mentioned above, scrum is an agile framework that helps companies meet complex,
changing needs while creating high-quality products and services. Scrum works by delivering large
projects in small, bite-sized increments that a cross-functional team can begin and complete in
one short, timeboxed iteration.
As each product increment is completed, teams review the functionality and then decide what to
create next based on what they learned and the feedback they received during the review.
Transparency
To make decisions, people need visibility into the process and the current state of the product. To
ensure everyone understands what they are seeing, participants in an empirical process must share one
language.
Inspection
To prevent deviation from the desired process or end product, people need to inspect what is being
created, and how, at regular intervals. Inspection should occur at the point of work but should not get
in the way of that work.
Adaptation
Adaptation means that when deviations occur, the process or product should be adjusted as soon as
possible. Scrum teams can adapt the product at the end of every sprint, because scrum allows for
adjustments at the end of every iteration.
Iterative
Iterative processes are a way to arrive at a decision or a desired result by repeating rounds of analysis
or a cycle of operations. The objective is to bring the desired decision or result closer to discovery
with each repetition (iteration). Scrum’s use of a repeating cycle of iterations is iterative.
Incremental
Incremental refers to a series of small improvements to an existing product or product line that usually
helps maintain or improve its competitive position over time. Incremental innovation is regularly used
within the high technology business by companies that need to continue to improve their products to
include new features increasingly desired by consumers. The way scrum teams deliver pieces of
functionality into small batches is incremental.
The Five Scrum Values
A team’s success with scrum depends on five values: commitment, courage, focus, openness, and
respect.
Commitment Allows Scrum Teams to Be Agile
The scrum value of commitment is essential for building an agile culture. Scrum teams work together
as a unit. This means that scrum and agile teams trust each other to follow through on what they say
they are going to do. When team members aren’t sure how work is going, they ask. Agile teams only
agree to take on tasks they believe they can complete, so they are careful not to overcommit.
Courage Allows Scrum Teams to Be Agile
The Scrum value of courage is critical to an agile team’s success. Scrum teams must feel safe enough
to say no, to ask for help, and to try new things. Agile teams must be brave enough to question the
status quo when it hampers their ability to succeed.
Focus Allows Scrum Teams to Be Agile
The scrum value of focus is one of the best skills scrum teams can develop. Focus means that
whatever scrum teams start, they finish; agile teams are therefore relentless about limiting the amount
of work in process (limit WIP).
Openness Allows Scrum Teams to Be Agile
Scrum teams consistently seek out new ideas and opportunities to learn. Agile teams are also honest
when they need help.
Respect Allows Scrum Teams to Be Agile
Scrum team members demonstrate respect to one another, to the product owner, to stakeholders, and
to the Scrum Master. Agile teams know that their strength lies in how well they collaborate and that
everyone has a distinct contribution to make toward completing the work of the sprint. They respect
each other’s ideas, give each other permission to have a bad day once in a while, and recognize each
other’s accomplishments.
XP
Extreme programming (XP) is a software development methodology intended to improve software
quality and responsiveness to changing customer requirements. As a type of agile software
development, it advocates frequent releases in short development cycles, intended to improve
productivity and introduce checkpoints at which new customer requirements can be adopted.

XP is a lightweight, efficient, low-risk, flexible, predictable, scientific, and fun way to develop
software.

eXtreme Programming (XP) was conceived and developed to address the specific needs of software
development by small teams in the face of vague and changing requirements.

Extreme Programming is one of the Agile software development methodologies. It provides values
and principles to guide the team behavior. The team is expected to self-organize. Extreme
Programming provides specific core practices where −

 Each practice is simple and self-complete.


 Combination of practices produces more complex and emergent behavior.
 Creativity
 Learning and improving through trial and error
 Iterations
Extreme Programming is based on the following values

 Communication
 Simplicity
 Feedback
 Courage
 Respect

Embrace Change

A key assumption of Extreme Programming is that the cost of changing a program can be held mostly
constant over time.

 Emphasis on continuous feedback from the customer


 Short iterations
 Design and redesign
 Coding and testing frequently
 Eliminating defects early, thus reducing costs
 Keeping the customer involved throughout the development
 Delivering working product to the customer

Extreme Programming in a Nutshell

 Writing unit tests before programming and keeping all of the tests running at all times. The
unit tests are automated and eliminate defects early, thus reducing costs (a minimal sketch of
this practice follows this list).
 Starting with a simple design just enough to code the features at hand and redesigning when
required.
 Programming in pairs (called pair programming), with two programmers at one screen, taking
turns to use the keyboard. While one of them is at the keyboard, the other constantly reviews
and provides inputs.
 Integrating and testing the whole system several times a day.
 Putting a minimal working system into production quickly and upgrading it whenever
required.
 Keeping the customer involved all the time and obtaining constant feedback.
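
As a minimal illustration of the test-first practice mentioned above, the sketch below shows unit tests written first and the simplest code that makes them pass added afterwards. The function apply_discount and its behaviour are hypothetical examples, not part of any real library.

```python
# A test-first sketch: the tests exist (and fail) before the production code is written.
# 'apply_discount' is a hypothetical example function.
import pytest

def test_ten_percent_discount():
    assert apply_discount(price=100.0, rate=0.10) == pytest.approx(90.0)

def test_rate_must_be_a_fraction():
    # The team agrees the function should reject rates outside 0..1.
    with pytest.raises(ValueError):
        apply_discount(price=100.0, rate=1.5)

# Production code, added only after the tests above exist.
def apply_discount(price: float, rate: float) -> float:
    if not 0.0 <= rate <= 1.0:
        raise ValueError("rate must be between 0 and 1")
    return price * (1.0 - rate)
```

Running the whole suite on every change is what keeps all of the tests green at all times, as the practice requires.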

Why is it called “Extreme?”

Extreme Programming takes the effective principles and practices to extreme levels.
 Code reviews are effective as the code is reviewed all the time.
 Testing is effective as there is continuous regression testing.
 Design is effective as everybody needs to do refactoring daily.
 Integration testing is important as teams integrate and test several times a day.
 Short iterations are effective because of the planning game for release planning and iteration planning.

Success in Industry

The success of projects, which follow Extreme Programming practices, is due to −

 Rapid development.
 Immediate responsiveness to the customer’s changing requirements.
 Focus on low defect rates.
 System returning constant and consistent value to the customer.
 High customer satisfaction.
 Reduced costs.
 Team cohesion and employee satisfaction.

Extreme Programming Advantages

Extreme Programming solves the following problems often faced in the software development
projects −

 Slipped schedules − Short and achievable development cycles ensure timely deliveries.


 Cancelled projects − Focus on continuous customer involvement ensures transparency with
the customer and immediate resolution of any issues.
 Costs incurred in changes − Extensive and ongoing testing makes sure the changes do not
break the existing functionality. A running working system always ensures sufficient time for
accommodating changes such that the current operations are not affected.
 Production and post-delivery defects − Emphasis is on the unit tests to detect and fix
defects early.
 Misunderstanding the business and/or domain − Making the customer a part of the team
ensures constant communication and clarifications.
 Business changes − Changes are considered to be inevitable and are accommodated at any
point of time.
 Staff turnover − Intensive team collaboration ensures enthusiasm and good will. Cohesion of
multi-disciplines fosters the team spirit.
Kanban
Kanban is an inventory control system used in just-in-time (JIT) manufacturing. It was developed by
Taiichi Ohno, an industrial engineer at Toyota, and takes its name from the colored cards that track
production and order new shipments of parts or materials as they run out. Kanban is a Japanese word
that directly translates to "visual card", so the kanban system simply means using visual cues to
prompt the action needed to keep a process flowing.
The kanban system can be thought of as a signal and response system. When an item is running low
at an operational station, there will be a visual cue specifying how much to order from the supply.
The person using the parts makes the order for the quantity indicated by the kanban and the supplier
provides the exact amount requested.

Kanban often requires company-wide buy-in to be effective. Each department must be relied upon
to perform its necessary tasks at a specific time in order to transition the process to downstream
departments. Without this wide buy-in, kanban methodologies will be futile.

Visualize Workflows

At the heart of kanban, the process must be visually depicted. Whether with physical, tangible cards or
by leveraging technology and software, the process must be shown step by step using visual cues that
make each task clearly identifiable. The idea is to clearly show what each step is, what the expectations
are, and who will take which tasks.

Old-fashioned (but still used today) methods included drafting kanban tasks on sticky notes. Each
sticky note could be colored differently to signify different types of work items. These tasks would
then be placed into swim lanes, defined sections that group related tasks to create a more organized
project. Today, inventory management software typically drives the kanban process.

Limit WIP

As kanban is rooted in efficiency, the goal of kanban is to minimize the amount of work in progress.
Teams are encouraged to complete prior tasks before moving on to a new one. This ensures that
future dependencies can be started earlier and that resources such as staff are not inefficiently
waiting to start their task while relying on others.

Manage Workflows

As a process is undertaken, a company will be able to identify strengths and weaknesses along the
workflow. Sometimes limits are exceeded or goals are not achieved; in this case, it is up to the team
to manage the workflow and better understand the deficiencies that must be overcome.

Clearly Define Policies

As part of visually depicting workflows, processes are often clearly defined. Departments can often
easily understand the expectations placed on their teams, and kanban cards assigned to specific
individuals clearly identify responsibilities for each task. By very clearly defining policies, each
worker will understand what is expected of them, what checklist criteria must be met before
completion, and what occurs during the transition between steps.
Kanban Board

The kanban process utilizes kanban boards, organizational systems that clearly outline the elements
of a process. A kanban board often has three elements: boards, lists, and cards.

Kanban boards are the biggest picture of a process that organizes broad aspects of a workflow. For
example, a company may choose to have a different kanban board for each department within
its organization (e.g. finance, marketing, etc.). The kanban board is used to gather relevant processes
within a single workspace or taskboard area.
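
A minimal sketch of this board / list / card structure, with a WIP limit enforced when a card is pulled into a list, is shown below. The class and method names are illustrative assumptions, not the API of any particular kanban tool.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Card:
    title: str
    assignee: str = ""              # who is responsible for the task

@dataclass
class KanbanList:
    name: str                       # e.g. "To Do", "In Progress", "Done"
    wip_limit: int                  # maximum cards allowed in this list
    cards: List[Card] = field(default_factory=list)

@dataclass
class KanbanBoard:
    name: str                       # e.g. a department such as "Marketing"
    lists: Dict[str, KanbanList] = field(default_factory=dict)

    def move(self, card: Card, source: str, target: str) -> None:
        """Pull a card into the next list only if that list's WIP limit allows it."""
        dest = self.lists[target]
        if len(dest.cards) >= dest.wip_limit:
            raise RuntimeError(f"WIP limit reached on '{target}'; finish work first")
        self.lists[source].cards.remove(card)   # raises if the card is not in source
        dest.cards.append(card)
```

Trying to move a card into a full "In Progress" list raises an error, which mirrors the rule of finishing current work before pulling new work.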

Electronic Kanban Systems

To enable real-time demand signaling across the supply chain, electronic kanban systems have
become widespread. These e-kanban systems can be integrated into enterprise resource
planning (ERP) systems. These systems leverage digital kanban boards, lists, and cards that
communicate the status of processes across departments.

Scrum vs. Kanban

Scrum and kanban are both methodologies that help companies operate more efficiently. However,
each has a very different approach to achieving that efficiency. Scrum approaches affix certain
timeframes for changes to be made; during these periods, specific changes are made. With kanban,
changes are made continuously.

The scrum methodology breaks tasks into sprints, defined periods with fixed start and end dates in
which the tasks are well defined and to be executed in a certain manner. No changes or deviations
from these timings or tasks should occur. Scrum is often measured by velocity or planned capacity,
and a product owner or scrum master oversees the process.

On the other hand, kanban is more adaptive in that it analyzes what has been done in the past and
makes continuous changes. Teams set their own cadence or cycles, and these cycles often change as
needed. Kanban measures success by measuring cycle time, throughput, and work in progress.

Benefits of Kanban

 The idea of kanban carries various benefits, ranging from internal efficiencies to positive
impacts on customers.
 The purpose of kanban is to visualize the flow of tasks and processes. For this reason,
kanban brings greater visibility and transparency to the flow of tasks and objectives. By
depicting steps and the order in which they must occur, project participants may get a better
sense of the flow of tasks and importance of interrelated steps.
 Because kanban strives to be more efficient, companies using kanban often experience faster
turnaround times. This includes faster manufacturing processes, quicker packaging and
handling, and more efficient delivery times to customers. This reduces company carrying
costs (e.g. storage, insurance, risk of obsolescence) while also turning over capital more quickly for
more efficient usage.
 Companies that use kanban practices may also have greater predictability for what's to come.
By outlining future steps and tasks, companies may be able to get a better sense of risks,
roadblocks, or difficulties that would have otherwise slowed the process. Instead, companies
can preemptively plan to attack these deficiencies and allocate resources to combat hurdles
before they slow processes.
 Last, the ultimate goal of kanban is to provide better service to customers. With more
efficient and less wasteful processes, customers may be charged lower prices. With faster
processes, customers may get their goods faster. By being on top of processes, customers
may be able to interact with customer service quicker and have resolutions met faster.

Disadvantages of Kanban

 For some companies, kanban is not possible or feasible to implement.
First, kanban relies on stability; a company must have a predictable process that cannot
materially deviate. For companies operating in dynamic environments where activities are
not stable, the company may find it difficult to operate using kanban.
 Kanban is often related to other production methodologies (just-in-time, scrum, etc.). For
this reason, a company may not reap all benefits if it only accepts kanban practices. For
example, a company may understand when it will need raw materials when reviewing
kanban cards; however, if the company does not utilize just-in-time inventory, it may be
incurring unnecessary expenses to carry the raw materials during periods when they are sitting
idle.
 Kanban also demands consistent updating, for a few reasons. First, if
completed tasks are not marked off, the team analyzing next steps may not adequately assess
where along the process the team is. Second, there are no timing assessments for the different
phases, so team members must be aware of how much time is allocated to their task and
what future deadlines rely on the task at hand.

DevOps

DevOps is a combination of two words: Development and Operations. It is a
culture that promotes the development and operations processes working collectively.

Commonly used DevOps tools include Git, Ansible, Docker, Puppet, Jenkins, Chef, Nagios, and
Kubernetes.

DevOps allows a single team to handle the entire application lifecycle, from development to testing,
deployment, and operations. DevOps helps you to reduce the disconnection between software
developers, quality assurance (QA) engineers, and system administrators.

DevOps promotes collaboration between Development and Operations team to deploy code to
production faster in an automated & repeatable way.

DevOps has become one of the most valuable business disciplines for enterprises and organizations.
With the help of DevOps, the quality and speed of application delivery have improved to a great
extent.

DevOps is nothing but a practice or methodology of making "Developers" and "Operations" folks
work together. DevOps represents a change in the IT culture with a complete focus on rapid IT
service delivery through the adoption of agile practices in the context of a system-oriented approach.
Why DevOps?

Before going further, we need to understand why we need DevOps over the other methods.

 The operation and development team worked in complete isolation.


 After the design and build, testing and deployment are performed separately. That's why
they consume more time than the actual build cycles.
 Without the use of DevOps, the team members are spending a large amount of time on
designing, testing, and deploying instead of building the project.
 Manual code deployment leads to human errors in production.
 Coding and operations teams have their separate timelines and are not in sync, causing
further delays.

DevOps Architecture Features

1) Automation

Automation can reduce time consumption, especially during the testing and deployment phases.
Productivity increases, and releases are made quicker, by automation. This also leads to catching bugs
quickly so that they can be fixed easily. For continuous delivery, each code change passes through
automated tests, cloud-based builds and services. This promotes production releases using automated
deploys.
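
As a rough illustration of such automation, the sketch below chains test, build, and deploy steps and stops at the first failure, so a bug is caught before it reaches production. The commands, including deploy.sh, are placeholders; a real team would typically run equivalent steps from a CI server such as Jenkins on every commit.

```python
# A minimal sketch of an automated build-test-deploy pipeline as a plain script.
# The commands below are placeholders for whatever the project actually uses.
import subprocess
import sys

PIPELINE = [
    ["pytest", "-q"],                              # run the automated test suite
    ["docker", "build", "-t", "app:latest", "."],  # build an image of the application
    ["./deploy.sh", "staging"],                    # hypothetical deploy script
]

def run_pipeline() -> int:
    for step in PIPELINE:
        print("running:", " ".join(step))
        result = subprocess.run(step)
        if result.returncode != 0:
            print("pipeline stopped: a step failed, so the defect never reaches production")
            return result.returncode
    print("all steps passed: release is ready")
    return 0

if __name__ == "__main__":
    sys.exit(run_pipeline())
```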

2) Collaboration

The Development and Operations teams collaborate as a single DevOps team, which improves the cultural
model: as the teams become more productive, accountability and ownership are strengthened.
The teams share their responsibilities and work closely in sync, which
in turn makes deployment to production faster.

3) Integration

Applications need to be integrated with other components in the environment. The integration phase
is where the existing code is combined with new functionality and then tested. Continuous
integration and testing enable continuous development. The frequency of releases and of micro-
services leads to significant operational challenges. To overcome such problems, continuous
integration and delivery are implemented to deliver in a quicker, safer, and more reliable manner.
4) Configuration management

Configuration management ensures that the application interacts only with the resources concerned with the
environment in which it runs. Configuration that is external to the application is kept separate from the
source code rather than written into it. The configuration can be written during deployment, or it
can be loaded at run time, depending on the environment in which the application is running.

Advantages of DevOps

 DevOps is an excellent approach for quick development and deployment of applications.


 It responds faster to market changes to improve business growth.
 DevOps escalates business profit by decreasing software delivery time and delivery
costs.
 DevOps clarifies the delivery process, which gives clarity on product development and
delivery.
 It improves customer experience and satisfaction.
 DevOps simplifies collaboration and places all tools in the cloud for customers to access.
 DevOps means collective responsibility, which leads to better team engagement and
productivity.

Disadvantages of DevOps

 DevOps professionals and expert developers are less readily available.


 Developing with DevOps is expensive.
 Adopting new DevOps technology across industries is hard to manage in a short time.
 Lack of DevOps knowledge can be a problem in the continuous integration of automation
projects.

Prototype Construction

Prototyping is defined as the process of developing a working replication of a product or system that
has to be engineered. It offers a small-scale facsimile of the end product and is used for obtaining
customer feedback, as described below:

The Prototyping Model is one of the most popularly used Software Development Life Cycle Models
(SDLC models). This model is used when the customers do not know the exact project requirements
beforehand. In this model, a prototype of the end product is first developed, tested and refined as per
customer feedback repeatedly till a final acceptable prototype is achieved which forms the basis for
developing the final product.
In this process model, the system is partially implemented before or during the analysis phase thereby
giving the customers an opportunity to see the product early in the life cycle. The process starts by
interviewing the customers and developing the incomplete high-level paper model. This document is
used to build the initial prototype supporting only the basic functionality as desired by the customer.
There are four types of models available:
A) Rapid Throwaway Prototyping – This technique offers a useful method of exploring ideas and
getting customer feedback for each of them. In this method, a developed prototype need not
necessarily be a part of the ultimately accepted prototype. Customer feedback helps in preventing
unnecessary design faults and hence, the final prototype developed is of better quality.
B) Evolutionary Prototyping – In this method, the prototype developed initially is incrementally
refined on the basis of customer feedback till it finally gets accepted. In comparison to Rapid
Throwaway Prototyping, it offers a better approach which saves time as well as effort. This is because
developing a prototype from scratch for every iteration of the process can sometimes be very
frustrating for the developers.
C) Incremental Prototyping – In this type of prototyping, the final expected product is
broken into small prototype pieces which are developed individually. In the end, when all the
individual pieces are properly developed, the different prototypes are collectively merged into a
single final product in their predefined order. It is a very efficient approach that reduces the
complexity of the development process, because the goal is divided into sub-parts and each sub-part is
developed individually.
D) Extreme Prototyping – This method is mainly used for web development. It consists of three
sequential, independent phases (a minimal sketch of the simulated services layer follows this list):
D.1) In this phase, a basic prototype with all the existing static pages is presented in HTML
format.
D.2) In the 2nd phase, functional screens are made with a simulated data process using a prototype
services layer.
D.3) This is the final step, where all the services are implemented and associated with the final
prototype.
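A minimal sketch of the phase D.2 idea is shown below: a prototype services layer returns simulated data behind the same interface that the real phase D.3 implementation will later provide, so the functional screens do not change when the real services arrive. All names here are hypothetical.

```python
from typing import List, Protocol

class OrderService(Protocol):
    """Interface shared by the simulated and the real services layer."""
    def list_orders(self, customer_id: int) -> List[dict]: ...

class SimulatedOrderService:
    """Phase D.2: functional screens run against canned data."""
    def list_orders(self, customer_id: int) -> List[dict]:
        return [{"id": 1, "item": "sample widget", "status": "shipped"}]

class RealOrderService:
    """Phase D.3: the same interface, backed by the real data source."""
    def list_orders(self, customer_id: int) -> List[dict]:
        raise NotImplementedError("implemented once the final services exist")

def render_orders_screen(service: OrderService, customer_id: int) -> str:
    # The screen code does not change when the simulated service is swapped out.
    rows = [f"{o['id']}: {o['item']} ({o['status']})"
            for o in service.list_orders(customer_id)]
    return "\n".join(rows)
```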
Advantages
 The customers get to see the partial product early in the life cycle. This ensures a greater level
of customer satisfaction and comfort.
 New requirements can be easily accommodated as there is scope for refinement.
 Missing functionalities can be easily figured out.
 Errors can be detected much earlier thereby saving a lot of effort and cost, besides enhancing
the quality of the software.
 The developed prototype can be reused by the developer for more complicated projects in the
future.
 Flexibility in design.
Disadvantages

 Costly with respect to time as well as money.


 There may be too much variation in requirements each time the prototype is evaluated by the
customer.
 Poor Documentation due to continuously changing customer requirements.
 It is very difficult for developers to accommodate all the changes demanded by the customer.
 There is uncertainty in determining the number of iterations that would be required before the
prototype is finally accepted by the customer.
 After seeing an early prototype, the customers sometimes demand the actual product to be
delivered soon.
 Developers in a hurry to build prototypes may end up with sub-optimal solutions.
 The customer might lose interest in the product if he/she is not satisfied with the initial
prototype.

Prototype Evaluation
Evaluation of a Prototype should be built into the development process. It should begin before any
technical phases and should continue beyond the life of the Prototype. It is the control mechanism for
the entire iterative design procedure. The evaluation process keeps the cost and effort of the
Prototype in line with its value. With constant evaluation, the system can die when the need for it is
over or it proves not to be valuable. Prototype evaluation can also help to quantify the impact of
decision-making processes on organisational goals. Sprague & Carlson consider what to measure and
how to measure it, and present a general model for evaluation. Prototype evaluations should be
considered as planned experiments designed to test one or more hypotheses.

 Testing a prototype / developed design is a very important part of the design and manufacturing
process. Testing and evaluation simply confirm that the product will work as it is supposed to,
or show whether it needs refinement.
 In general, testing a prototype allows the designer and client to assess the viability of a design.
Will it be successful as a commercial product? Testing also helps identify potential faults, which
in turn allows the designer to make improvements.
 There are many reasons why testing and evaluation takes place. Some reasons are described
below.
 Testing and evaluation allow the client / customer to view the prototype and to give his/her
views. Changes and improvements are agreed and further work is carried out.
 A focus group can try out the prototype and give their views and opinions. Faults and problems
are often identified at this stage.
 Suggestions for improvement are often made at this stage. Safety issues are sometimes
identified, by thorough testing and evaluation. The prototype can be tested to British and
European Standards.
 The prototype can be tested against any relevant regulations and legislation. Adjustments /
improvements to the design can then be made.
 Evaluating a prototype allows the production costs to be assessed and finalised. Every stage of
manufacturing can be scrutinised for potential costs.
 If the client has set financial limits / restrictions, then alterations to the design or manufacturing
processes, may have to be made. This may lead to alternative and cheaper manufacturing
processes being selected, for future production.
 Component failure is often identified during the testing process. This may mean a component is
redesigned and not the entire product. Sometimes a component or part of a product will be tested
separately and not the whole product.
 This allows more specific tests to be carried out. Evaluating the manufacture of the prototype,
allows the designer to plan an efficient and cost effective manufacturing production line.
 Prototype testing can be carried out alongside the testing of similar designs or even the products
of competitors. This may lead to improvements.
 Testing ensures that any user instructions can be worked out, stage by stage, so that the future
consumer can use the product efficiently and safely. This guarantees customer satisfaction.

Prototype Model

The prototype model requires that before carrying out the development of actual software, a working
prototype of the system should be built. A prototype is a toy implementation of the system. A
prototype usually turns out to be a very crude version of the actual system, possibly exhibiting
limited functional capabilities, low reliability, and inefficient performance as compared to actual
software. In many instances, the client only has a general view of what is expected from the software
product. In such a scenario where there is an absence of detailed information regarding the input to
the system, the processing needs, and the output requirement, the prototyping model may be
employed.
Steps of Prototype Model

1. Requirement Gathering and Analysis


2. Quick Design
3. Build a Prototype
4. Assessment or User Evaluation
5. Prototype Refinement
6. Engineer Product

Advantage of Prototype Model

1. Reduces the risk of incorrect user requirements.


2. Good where requirements are changing or uncommitted.
3. Regular visible progress aids management.
4. Supports early product marketing.
5. Reduces maintenance cost.
6. Errors can be detected much earlier, as the system is built side by side with the prototype.

Disadvantage of Prototype Model

1. An unstable/badly implemented prototype often becomes the final product.


2. Requires extensive customer collaboration.
3. Difficult to know how long the project will last.
4. Easy to fall back into the code and fix without proper requirement analysis, design, customer
evaluation, and feedback.
5. Prototyping tools are expensive.
6. Special tools & techniques are required to build a prototype.
7. It is a time-consuming process.

Evolutionary Process Model

The evolutionary process model resembles the iterative enhancement model. The same phases as are
defined for the waterfall model occur here, in a cyclical fashion. This model differs from the
iterative enhancement model in the sense that it does not require a useful product at the end of
each cycle. In evolutionary development, requirements are implemented by category rather than by
priority.

Benefits of Evolutionary Process Model

 Use of EVO brings a significant reduction in risk for software projects.


 EVO can reduce costs by providing a structured, disciplined avenue for experimentation.
 EVO allows the marketing department access to early deliveries, facilitating the
development of documentation and demonstration.
 Better fit the product to user needs and market requirements.
 Manage project risk with the definition of early cycle content.
 Uncover key issues early and focus attention appropriately.
 Increase the opportunity to hit market windows.
 Accelerate sales cycles with early customer exposure.
 Increase management visibility of project progress.
 Increase product team productivity and motivations.
Prototype Principles

Although each project will have its own prototyping goal and a choice of fidelity that supports your
strategy, the following seven principles of effective prototyping should always be kept in mind:

1. Be clear on why you are prototyping, and write it down.


2. Use paper for the first version. Yeah, paper. The stuff in the printer.
3. Show, don't tell.
4. Gather inspiration from many sources (patterns).
5. Prototype what can't be built.
6. Keep scope to what is required for your reason for prototyping.
7. Iterate!

Be clear on why you are prototyping

As discussed earlier, it helps to have a prototyping goal. If you have defined a prototyping
goal in advance, your chances of success are much higher!

Use paper for the first version

Paper prototyping is the best approach for your very first prototype. Not only is it quicker to create a
paper prototype than any other type, but there is also something inherently creative about using a pen
or pencil.

The very first prototype that you do is really for yourself. It is to get your own thoughts straight and to
be able to create a few different versions of something to see what you like. It's important to be able
to quickly visualize several possibilities and to see what works the best.

I have found in the past that certain issues (particularly those involving the flow of screens) are only
discovered when you actually see them linked together and when you actually try to click through the
flow of a certain sequence.

Show, don't tell.

A prototype is a proxy experience for the final product. Although it may not be fully "real", it should
be "real enough" for the purposes of the feedback you want to create.

When working with other team members, it may be that we want to achieve a shared understanding of
what we are about to build. When showing a prototype to a potential customer, it may be that we want
to see if they understand how to use the product in front of them and/or have observations about why
it would be a success or failure.

In both cases, it is better for you to say nothing at first. Watch the person interact with the prototype.
Is their reaction and understanding what you expected? If not, is it clear why not? If your prototype is
as clear as you think, you should not have to describe anything! Watching the person use the
prototype and checking if they follow the intended flows, experience anxiety over what their available
options are, or display enthusiasm at an outcome being so easily achieved are what you are looking
for.

Gather inspiration from many sources

When building a prototype, the chances are that at least some elements of that prototype have been
done before in other products. Become a student of the products around you.
There are lots of great sites out there that show lists of existing design paradigms in the product
landscape. For example, if you were thinking about building a calendar, you could check out the
calendar pattern on pttrns.com. This will give you lots of ideas and get your juices flowing. Bear in
mind this is just inspiration, and your context will be different to some of the sites or apps you see
here.

Prototype what can't be built.

It is recommended to prototype anything that cannot be built easily by your team. The reason for this
is that if there is enough demand for it, a solution can be found. When we prototype, we want to
escape the boundaries of what is sensible and possible. Rather we want to assume that (by magic if
necessary), the ideal solutions to a problem are available to us.

If something is hard to build, then it is particularly important that we get feedback on what the
reaction would be if this feature or product were built. If the reaction is hugely positive, then a great
question is, "How can we make this possible, given that everyone wants it?" This is a better approach
than "Let's build it and see if anyone wants it."

Keep scope to what is required for your reason for prototyping.

It is important to keep scope to what is required for your reason for prototyping.

As we have mentioned already:

1. Starting with a paper prototype is ideal. This can get you to the point where you are relatively
happy with how many screens there are and what the major elements on each screen are.
2. Then often a black and white prototype is best for getting internal agreement with the team
and stakeholders of what to build. Eliminating color and major design elements (e.g. just
using a black rectangle instead of an image) will prevent conversations around design
happening too soon (i.e. before the main interactions are decided). Also, do not include too
many screens. You are trying to build consensus within the team and if you include some
non-related features, the conversation could end up being focused there!
3. Finally, a prototype with high-fidelity design is usually used when showing the prototype to
customers (whether it is clickable or fully interactive). The same rule applies to not showing
too many screens. If there are just enough screens to represent those customer outcomes and
flows that you wish to learn more about as defined by your prototyping goal - then that's
enough!

Iterate!

The main goal of prototyping is to create an early version of a product/feature that allows us to get
feedback that informs later versions of the product.

Specifically, if we get valuable feedback and new insight, the quickest way to verify that we have
correctly understood that feedback is another prototype. It is typical for prototypes to go through
several rounds of iterations in this fashion.

The key question to ask yourself after showing a version of a prototype a few times is, "Have I
learned anything new?" If you are not learning anything new, then you may not have any need for
further prototype iterations.
Requirements Engineering

Requirements engineering (RE) refers to the process of defining, documenting, and maintaining
requirements in the engineering design process. Requirements engineering provides the appropriate
mechanism to understand what the customer desires, analyze the need, assess feasibility,
negotiate a reasonable solution, specify the solution clearly, validate the specification and
manage the requirements as they are transformed into a working system. Thus, requirements
engineering is the disciplined application of proven principles, methods, tools, and notations to
describe a proposed system's intended behavior and its associated constraints.

Requirement Engineering Process

1. Feasibility Study
2. Requirement Elicitation and Analysis
3. Software Requirement Specification
4. Software Requirement Validation
5. Software Requirement Management

1. Feasibility Study:

The objective of the feasibility study is to establish the reasons for developing software that is
acceptable to users, flexible to change, and conformable to established standards.

Types of Feasibility:

1. Technical Feasibility - Technical feasibility evaluates the current technologies, which are
needed to accomplish customer requirements within the time and budget.
2. Operational Feasibility - Operational feasibility assesses the range in which the required
software performs a series of levels to solve business problems and customer requirements.
3. Economic Feasibility - Economic feasibility decides whether the necessary software can
generate financial profits for an organization.

2. Requirement Elicitation and Analysis:

This is also known as requirements gathering. Here, requirements are identified with the help of
customers and from existing system processes, if available.
Analysis of requirements starts with requirement elicitation. The requirements are analyzed to identify
inconsistencies, defects, omissions, etc. We describe requirements in terms of relationships and also
resolve conflicts, if any.

Problems of Elicitation and Analysis

o Getting all, and only, the right people involved.


o Stakeholders often don't know what they want
o Stakeholders express requirements in their terms.
o Stakeholders may have conflicting requirements.
o Requirements change during the analysis process.
o Organizational and political factors may influence system requirements.

3. Software Requirement Specification:

The software requirement specification (SRS) is a document created by a software analyst after the
requirements have been collected from various sources; the requirements received from the customer
are usually written in ordinary language. It is the analyst's job to restate the requirements in technical
language so that they can be understood by, and are useful to, the development team.

o Data Flow Diagrams: Data Flow Diagrams (DFDs) are used widely for modeling the
requirements. DFD shows the flow of data through a system. The system may be a company,
an organization, a set of procedures, a computer hardware system, a software system, or any
combination of the preceding. The DFD is also known as a data flow graph or bubble chart.
o Data Dictionaries: Data Dictionaries are simply repositories to store information about all
data items defined in DFDs. At the requirements stage, the data dictionary should at least
define customer data items, to ensure that the customer and developers use the same
definition and terminologies.
o Entity-Relationship Diagrams: Another tool for requirement specification is the entity-
relationship diagram, often called an "E-R diagram." It is a detailed logical representation of
the data for the organization and uses three main constructs i.e. data entities, relationships,
and their associated attributes.

4. Software Requirement Validation:

After the requirement specification has been developed, the requirements stated in the document are
validated. The user might demand an illegal or impossible solution, or experts may misinterpret the needs.
Requirements are checked against the following conditions -
o Whether they can practically be implemented
o Whether they are correct and consistent with the intended functionality and specialty of the software
o Whether there are any ambiguities
o Whether they are complete
o Whether they can be clearly described and demonstrated

Requirements Validation Techniques

o Requirements reviews/inspections: systematic manual analysis of the requirements.


o Prototyping: Using an executable model of the system to check requirements.
o Test-case generation: Developing tests for requirements to check testability (a small test sketch follows this list).
o Automated consistency analysis: checking for the consistency of structured requirements
descriptions.
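
As a small, purely illustrative sketch of test-case generation, consider a hypothetical requirement such as "the system shall reject withdrawals that exceed the account balance." One possible JUnit 5 test for it might look as follows; the Account class, its methods, and the requirement itself are assumptions made for this example, not part of any specific system.

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

// Hypothetical domain class assumed only for the example.
class Account {
    private double balance;
    Account(double openingBalance) { this.balance = openingBalance; }
    boolean withdraw(double amount) {
        // Requirement under test: withdrawals above the balance are rejected.
        if (amount > balance) return false;
        balance -= amount;
        return true;
    }
    double getBalance() { return balance; }
}

class WithdrawalRequirementTest {
    @Test
    void withdrawalExceedingBalanceIsRejected() {
        Account account = new Account(100.0);
        assertFalse(account.withdraw(150.0));      // the requirement either holds or it does not
        assertEquals(100.0, account.getBalance()); // and the balance must remain unchanged
    }
}

Writing such a test forces the requirement to be stated precisely enough to be checked, which is exactly what the testability check is after.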

Software Requirement Management:

 Requirement management is the process of managing changing requirements during the
requirements engineering process and system development.
 New requirements emerge during the process as business needs change and a better
understanding of the system is developed.
 The priority of requirements from different viewpoints changes during the development process.
 The business and technical environment of the system changes during the development.

Prerequisite of Software Requirements

 Collection of software requirements is the basis of the entire software development project.
Hence they should be clear, correct, and well-defined.

Software Requirements: Broadly, software requirements can be categorized into two categories:

1. Functional Requirements: Functional requirements define a function that a system or
system element must be able to perform, and they must be documented in an appropriate form. The
functional requirements describe the behavior of the system as it relates to the
system's functionality.
2. Non-functional Requirements: These are the requirements that specify criteria used to judge the
operation of the system, rather than specific behaviors.
Non-functional requirements are divided into two main categories:
o Execution qualities, like security and usability, which are observable at run time.
o Evolution qualities, like testability, maintainability, extensibility, and scalability, which are
embodied in the static structure of the software system.
An illustrative example of each category is given below.
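
As a purely illustrative example (not drawn from any particular system): a functional requirement might read "The system shall allow a registered user to transfer funds between two of their own accounts," while related non-functional requirements might read "95% of fund transfers shall complete within two seconds" (an execution quality) and "a new payment provider shall be integrable without modifying existing transfer code" (an evolution quality).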

Scenario-based Modelling

There is no single correct way to proceed with scenario-based modeling; different processes address
different aspects. This section explores requirements modeling and scenario-based modeling, as well as
use case and activity diagrams, and how to apply them to determine the best way to proceed.

Requirements Modeling

Requirements modeling is the process of identifying the requirements that a software solution must
meet in order to be successful. Requirements modeling contains several sub-stages, typically:
scenario-based modeling, flow-oriented modeling, data modeling, class-based modeling, and
behavioral modeling. Also, as the term ''modeling'' implies, all of these stages typically result in
producing diagrams that visually convey the concepts they identify. The most common method for
creating these diagrams is Unified Modeling Language (UML).
Use Case Diagrams

The use case is essentially a primary example of how the proposed software application or system is
meant to be used, from the user's point of view. A use case diagram will typically show system actors,
humans or other entities external to the system and how they interact with the system. Technically,
each action such a system actor can perform with the application or system is considered to be a
separate use case.

How to Draw a Use-Case Diagram?

To draw a use case diagram in UML, we must first study the complete system
thoroughly. We need to find every function that the system offers. Once we have found
all of the system's functionalities, we convert them into a number of use cases, and
we use these use-cases in the use case diagram.
A use case represents an essential functionality of the working system. Once the use cases are organized,
we next need to list the various actors or things that will interact with the system. These
actors invoke the functionality of the system. An actor can be a person or something else, such as
another system or an external entity. The actors should be relevant to the functionality of the
system with which they interact.

o The use case name and actor name should be meaningful and related to the system.
o The actor's interaction with the use case should be described in a clear and comprehensible
manner.
o Use annotations wherever they are essential.
o If the actor or use case has many relationships, then display only important interactions.

When to Use a Use-Case Diagram?

A use-case diagram represents the system's functionality as it is exercised by a client. The
objective of a use-case diagram is to capture the system's key functionalities and visualize the
interactions of the different entities, known as actors, with the use cases. This is the basic purpose
of a use-case diagram.
With the help of the use-case diagram, we can identify the system's main parts and the flow of work
among them. In a use-case, the implementation details are hidden from external view, and only the
flow of events is represented.
Using use-case diagrams, we can also identify the pre- and post-conditions of the interaction with the
actor. We can then check these conditions using several test cases.

Generally, the use-case diagram is used for:

o Examining the system's requirements.
o Capturing the system's functionalities.
o Modeling the general idea behind the system.
o Forward and reverse engineering of the system, using several test cases.
o Visual design of complex software.

Use cases are intended to convey the desired functionality, so the exact scope of a use case can differ
based on the system and the purpose of creating the UML model.

There are various tips for drawing a use-case diagram:

o It must be complete.
o It must be simple.
o The use-case diagram must show each and every interaction with the use case.
o A use-case should be generalized if it is large.
o At least one system module must be defined in the use case diagram.
o When there are a large number of actors or use-cases in the diagram, only the significant
use-cases should be represented.
o The use-case diagrams must be clear and easy so that anyone can understand them easily.

Importance of Use-Case Diagram

 Use-case diagram provides an outline related to all components in the system. Use-case
diagram helps to define the role of administrators, users, etc.
 The use-case diagram helps to provide solutions and answers to various questions that
may pop up if you begin a project without a plan.
 It helps us to define the needs of the users extensively and explore how it will work.

Basic Use-Case Diagram Symbols and Notations

There are following use-case diagram symbols and notations:

System

With the help of the rectangle, we can draw the boundaries of the system, which includes use-cases.
We need to put the actors outside the system's boundaries.

Use-Case

Use-cases are drawn as ovals. Each oval is labeled with a verb phrase that represents a function of
the system.

Actors

Actors are the system's users. If one system is an actor of another system, the actor system is tagged
with the actor stereotype.
Relationships

Relationships between an actor and a use-case are represented with a simple line. For relationships
between use-cases, we use arrows labeled either "extends" or "uses". The "extends"
relationship shows the alternative options under a specific use case. The "uses" relationship shows
that one use-case is required to accomplish a job.

Guidelines for Better Use-Cases

When it comes to examining a system's requirements, use-case diagrams are second to none. Use-
cases are visual and simple to understand. The following are some guidelines that help you make
better use cases that are appreciated by your customers and peers alike.

Generally, the use-case diagram contains use-cases, relationships, and actors. Systems and boundaries
may be included in the complex larger diagrams. We'll talk about the guidelines of the use-case
diagram on the basis of the objects.

Actors

o The actor's name should be meaningful and relevant to the business
If the use-case interacts with an outside organization, name the actor after its function
rather than after the organization's name.
o Place inheriting actors below the parent actor
We place inheriting actors below the parent actor because this makes the actors
more readable and easily highlights the use-cases that are specific to that actor.
o External systems are actors
If Send Email is our use-case and the use-case interacts with email management
software, then that software is an actor for that specific use-case.

Use-Cases

o The name of the use-case begins with a verb
A use-case models an action, so its name must start with a verb.
o The name of the use-case must be descriptive
The use-case name is created to provide more information to others who are looking at the diagram;
for example, "Print Invoice" is better than just "Print".
o Place included use-cases to the right of the invoking use-cases
To add clarity and enhance readability, we place the included use-cases to the
right of the invoking use-cases.
o Place inheriting use-cases below the parent use-case
To enhance the diagram's readability, we place the inheriting use-case below
the parent use-case.

Systems/Packages

o Give descriptive and meaningful names to these objects.


o Use them carefully and only if needed.
Relationships

o When using <<extend>>, the arrow points to the base use-case.
o When using <<include>>, the arrow points to the included use-case.
o The relationship between an actor and a use-case does not display an arrow.
o <<extend>> may have an optional extension condition.
o <<include>> and <<extend>> are both shown as dashed arrows.

Class-based Modelling

Class-based modeling is a stage of requirements modeling. In the context of software engineering,


requirements modeling examines the requirements a proposed software application or system must
meet in order to be successful. Typically, requirements modeling begins with scenario-based
modeling, which develops a use case that will help with the next stages, like data and class-based
modeling. Class-based modeling takes the use case and extracts from it the classes, attributes, and
operations the application will use. Like all modeling stages, the end result of class-based modeling is
most often a diagram or series of diagrams, most frequently created using UML, or rather, Unified
Modeling Language.

UML Class Diagram

The class diagram depicts a static view of an application. It represents the types of objects residing in
the system and the relationships between them. A class consists of its objects, and it may also inherit
from other classes. A class diagram is used to visualize, describe, and document various aspects
of the system, and also to construct executable software code. It shows the attributes, classes, functions,
and relationships to give an overview of the software system. It presents the class name, attributes,
and operations in separate compartments, which helps in software development.

Purpose of Class Diagrams

1. It analyses and designs a static view of an application.


2. It describes the major responsibilities of a system.
3. It is a base for component and deployment diagrams.
4. It incorporates forward and reverse engineering.

Benefits of Class Diagrams

1. It can represent the object model for complex systems.


2. It reduces the maintenance time by providing an overview of how an application is
structured before coding.
3. It provides a general schematic of an application for better understanding.
4. It represents a detailed chart by highlighting the desired code, which is to be programmed.
5. It is helpful for the stakeholders and the developers.

Vital components of a Class Diagram

The class diagram is made up of three sections:

 Upper Section: The upper section encompasses the name of the class. A class is a
representation of similar objects that shares the same relationships, attributes, operations,
and semantics. Some of the following rules that should be taken into account while
representing a class are given below:
1. Capitalize the initial letter of the class name.
2. Place the class name in the center of the upper section.
3. A class name must be written in bold format.
4. The name of the abstract class should be written in italics format.

 Middle Section: The middle section constitutes the attributes, which describe the quality
of the class. The attributes have the following characteristic:
 The attributes are written along with their visibility markers, which are public (+),
private (-), protected (#), and package (~).
 Lower Section: The lower section contains the methods or operations. The methods are
represented in the form of a list, where each method is written in a single line. It
demonstrates how a class interacts with data. A minimal sketch showing how these three compartments map to code is given after this list.
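
As that minimal sketch, consider the following hypothetical Patient class (its name, attributes, and operations are assumptions chosen only for illustration). It shows the class name, the attributes with their visibility markers, and the operations that would appear in the upper, middle, and lower sections of a class diagram:

// Upper section: the class name "Patient" (capitalized; shown in bold on the diagram)
public class Patient {
    // Middle section: attributes with visibility markers (- private, + public, # protected, ~ package)
    private String patientId;   // - patientId : String
    private String name;        // - name : String
    protected int age;          // # age : int

    // Lower section: operations, listed one per line on the diagram
    public Patient(String patientId, String name, int age) {
        this.patientId = patientId;
        this.name = name;
        this.age = age;
    }
    public String getName() { return name; }            // + getName() : String
    public void celebrateBirthday() { age = age + 1; }   // + celebrateBirthday() : void
}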

Relationships

In UML, relationships are of three types:

o Dependency: A dependency is a semantic relationship between two or more classes in which a
change in one class causes changes in another class. It forms a weaker relationship.
In the following example, Student_Name is dependent on the Student_Id.

o Generalization: A generalization is a relationship between a parent class (superclass) and a
child class (subclass). In this, the child class is inherited from the parent class.
For example, Current Account, Saving Account, and Credit Account are all specializations of the
more general Bank Account.

o Association: It describes a static or physical connection between two or more objects. It
specifies how many objects take part in the relationship.
For example, a department is associated with the college.
Multiplicity: It defines a specific range for the allowable number of instances participating in the
relationship. If a range is not specified, one is considered the default multiplicity.
For example, multiple patients are admitted to one hospital.

Aggregation: An aggregation is a subset of association that represents a "has-a" relationship. It is
more specific than association. It defines a part-whole or part-of relationship. In this kind of
relationship, the child class can exist independently of its parent class.

The company encompasses a number of employees, and even if one employee resigns, the company
still exists.

Composition: The composition is a subset of aggregation. It portrays the dependency between the
parent and its child, which means if one part is deleted, then the other part also gets discarded. It
represents a whole-part relationship.

A contact book consists of multiple contacts, and if you delete the contact book, all the contacts will
be lost.
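
A minimal Java sketch of the two examples above (the class and member names are assumptions for illustration): in the aggregation, Employee objects are created outside the Company and keep existing if the Company is discarded, while in the composition, Contact objects are created and owned by the ContactBook and disappear with it.

import java.util.ArrayList;
import java.util.List;

class Employee { String name; Employee(String name) { this.name = name; } }

// Aggregation: the company holds references to employees that exist independently of it.
class Company {
    private final List<Employee> employees = new ArrayList<>();
    void addEmployee(Employee e) { employees.add(e); }   // the employee was created elsewhere
}

class Contact { String name; Contact(String name) { this.name = name; } }

// Composition: contacts are created and owned by the contact book; deleting the book discards them.
class ContactBook {
    private final List<Contact> contacts = new ArrayList<>();
    void addContact(String name) { contacts.add(new Contact(name)); } // the book creates and owns the part
}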

Abstract Classes

No object can be a direct instance of an abstract class; the abstract class itself cannot be
instantiated. It is used to capture common functionality across classes. The
notation of an abstract class is similar to that of a class; the only difference is that the name of the
class is written in italics. Since an abstract class does not provide an implementation for a declared
operation, it is best used when multiple kinds of objects supply their own implementations.

Let us assume that we have an abstract class named Displacement with a method declared inside it,
called drive(). This abstract method can then be implemented by any concrete object, for example,
a car, bike, scooter, cycle, etc.
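
A minimal Java sketch of the example above; on the diagram the name Displacement would be written in italics to mark the class as abstract (the subclass bodies are assumptions for illustration):

// Abstract class: it cannot be instantiated directly, and drive() has no implementation here.
abstract class Displacement {
    abstract void drive();
}

// Concrete subclasses supply their own implementation of drive().
class Car extends Displacement {
    void drive() { System.out.println("Driving a car"); }
}

class Bike extends Displacement {
    void drive() { System.out.println("Riding a bike"); }
}

class Demo {
    public static void main(String[] args) {
        Displacement d = new Car(); // a variable of the abstract type may refer to a concrete object
        d.drive();
        // new Displacement();      // would not compile: abstract classes cannot be instantiated
    }
}
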
How to draw a Class Diagram?

1. To describe a complete aspect of the system, it is suggested to give a meaningful name to the
class diagram.
2. The objects and their relationships should be acknowledged in advance.
3. The attributes and methods (responsibilities) of each class must be known.
4. Only the minimum number of desired properties should be specified, as a larger number of
unwanted properties will lead to a complex diagram.
5. Notes can be used, as and when required by the developer, to describe aspects of the
diagram.
6. The diagram should be redrawn and reworked as many times as needed to make it correct before
producing its final version.

Class Diagram Example

Usage of Class diagrams

The class diagram is used to represent a static view of the system. It plays an essential role in the
establishment of the component and deployment diagrams. It helps to construct executable code
and to perform forward and reverse engineering for any system; in other words, it is mainly used for
construction. It represents the mapping to object-oriented languages such as C++, Java, etc. Class
diagrams can be used for the following purposes:

1. To describe the static view of a system.


2. To show the collaboration among every instance in the static view.
3. To describe the functionalities performed by the system.
4. To construct the software application using object-oriented languages.
Functional Modelling

Functional Modelling gives the process perspective of the object-oriented analysis model and an
overview of what the system is supposed to do. It defines the functions of the internal processes in the
system with the aid of Data Flow Diagrams (DFDs). It depicts the functional derivation of the data
values without indicating how they are derived, when they are computed, or why they need to be
computed.

Data Flow Diagrams

Functional Modelling is represented through a hierarchy of DFDs. The DFD is a graphical
representation of a system that shows the inputs to the system, the processing upon the inputs, the
outputs of the system, as well as the internal data stores. DFDs illustrate the series of transformations
or computations performed on the objects or the system, and the external controls and objects that
affect the transformation.

The four main parts of a DFD are −

 Processes,
 Data Flows,
 Actors, and
 Data Stores.

The other parts of a DFD are −

 Constraints, and
 Control Flows.

Features of a DFD

Processes

Processes are the computational activities that transform data values. A whole system can be
visualized as a high-level process. A process may be further divided into smaller components. The
lowest-level process may be a simple function.

Representation in DFD − A process is represented as an ellipse with its name written inside it and
contains a fixed number of input and output data values.

Example − The following figure shows a process Compute_HCF_LCM that accepts two integers as
inputs and outputs their HCF (highest common factor) and LCM (least common multiple).
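
Since the lowest-level process may be a simple function, the Compute_HCF_LCM process could, for example, be realized by a function such as the following sketch (the class and method names are assumptions for illustration):

// A DFD process realized as a simple function: two integer inputs, HCF and LCM as outputs.
class ComputeHcfLcm {
    static int hcf(int a, int b) {          // Euclid's algorithm
        while (b != 0) { int t = b; b = a % b; a = t; }
        return a;
    }
    static int lcm(int a, int b) {
        return (a / hcf(a, b)) * b;         // divide first to reduce the risk of overflow
    }
    public static void main(String[] args) {
        System.out.println(hcf(12, 18));    // prints 6
        System.out.println(lcm(12, 18));    // prints 36
    }
}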

Data Flows

Data flow represents the flow of data between two processes. It could be between an actor and a
process, or between a data store and a process. A data flow denotes the value of a data item at some
point of the computation. This value is not changed by the data flow.
Representation in DFD − A data flow is represented by a directed arc or an arrow, labelled with the
name of the data item that it carries.

In the above figure, Integer_a and Integer_b represent the input data flows to the process, while
L.C.M. and H.C.F. are the output data flows.

Actors

Actors are the active objects that interact with the system by either producing data and inputting
them to the system, or consuming data produced by the system. In other words, actors serve as the
sources and the sinks of data.

Representation in DFD − An actor is represented by a rectangle. Actors are connected to the inputs
and outputs and lie on the boundary of the DFD.

Example − The following figure shows the actors, namely, Customer and Sales_Clerk in a counter
sales system.

Data Stores

Data stores are the passive objects that act as a repository of data. Unlike actors, they cannot perform
any operations. They are used to store data and retrieve the stored data. They represent a data
structure, a disk file, or a table in a database.

Representation in DFD − A data store is represented by two parallel lines containing the name of
the data store. Each data store is connected to at least one process. Input arrows contain information
to modify the contents of the data store, while output arrows contain information retrieved from the
data store.

Example − The following figure shows a data store, Sales_Record, that stores the details of all sales.
Input to the data store comprises details of sales such as item, billing amount, date, etc. To find
the average sales, the process retrieves the sales records and computes the average.
Constraints

Constraints specify the conditions or restrictions that need to be satisfied over time. They allow
adding new rules or modifying existing ones. Constraints can appear in all the three models of
object-oriented analysis.

Representation − A constraint is rendered as a string within braces.

Example − The following figure shows a portion of DFD for computing the salary of employees of a
company that has decided to give incentives to all employees of the sales department and increment
the salary of all employees of the HR department. It can be seen that the constraint {Dept:Sales}
causes incentive to be calculated only if the department is sales and the constraint {Dept:HR} causes
increment to be computed only if the department is HR.

Control Flows

A process may be associated with a certain Boolean value and is evaluated only if the value is true,
though it is not a direct input to the process. These Boolean values are called the control flows.

Representation in DFD − Control flows are represented by a dotted arc from the process producing
the Boolean value to the process controlled by them.

Example − The following figure represents a DFD for arithmetic division. The Divisor is tested for
non-zero. If it is not zero, the control flow OK has a value True and subsequently the Divide process
computes the Quotient and the Remainder.

Developing the DFD Model of a System

In order to develop the DFD model of a system, a hierarchy of DFDs is constructed. The top-level
DFD comprises a single process and the actors interacting with it. At each successive lower level,
further details are gradually included.
Example − Let us consider a software system, Wholesaler Software, that automates the transactions
of a wholesale shop. The shop sells in bulk and has a clientele comprising merchants and retail
shop owners. Each customer is asked to register with his/her particulars and is given a unique
customer code, C_Code. Once a sale is done, the shop registers its details and sends the goods for
dispatch. Each year, the shop distributes Christmas gifts to its customers, which comprise a silver
coin or a gold coin depending upon the total sales and the decision of the proprietor.

The actors in the system are −

 Customers
 Salesperson
 Proprietor

In the next level DFD, as shown in the following figure, the major processes of the system are
identified, the data stores are defined and the interaction of the processes with the actors, and the
data stores are established.

In the system, three processes can be identified, which are

 Register Customers
 Process Sales
 Ascertain Gifts

The data stores that will be required are

 Customer Details
 Sales Details
 Gift Details

The following figure shows the details of the process Register Customer. There are three processes
in it, Verify Details, Generate C_Code, and Update Customer Details. When the details of the
customer are entered, they are verified. If the data is correct, C_Code is generated and the data store
Customer Details is updated.

Advantages and Disadvantages of DFD

Advantages

o DFDs depict the boundaries of a system and hence are helpful in portraying the relationship
between the external objects and the processes within the system.
o They help the users to have a knowledge about the system.
o The graphical representation serves as a blueprint for the programmers to develop a system.
o DFDs provide detailed information about the system processes.
o They are used as a part of the system documentation.

Disadvantages

o DFDs take a long time to create, which may not be feasible for practical purposes.
o DFDs do not provide any information about the time-dependent behavior, i.e., they do not
specify when the transformations are done.
o They do not throw any light on the frequency of computations or the reasons for computations.
o The preparation of DFDs is a complex process that needs considerable expertise. Also, it is
difficult for a non-technical person to understand.
o The method of preparation is subjective and leaves ample scope to be imprecise.

Behavioural Modelling

Behavioral models contain procedural statements, which control the simulation and manipulate
variables of the data types. These statements are contained within procedures. Each
procedure has an activity flow associated with it. During behavioral model simulation, all the
flows defined by the always and initial statements start together at simulation time zero.
The initial statements are executed once, while the always statements are executed repeatedly.

Example

The register variables a and b are initialized to binary 1 and 0 respectively at simulation time
zero. The initial statement is completed and not executed again during that simulation run. This initial
statement contains a begin-end block of statements. In this begin-end block, a is initialized
first, followed by b.

Procedural Assignments

Procedural assignments are for updating integer, reg, time, and memory variables. There is a
significant difference between a procedural assignment and continuous assignment, such as:
1. Continuous assignments drive net variables and are evaluated and updated whenever an input operand
changes value. Procedural assignments update the value of register variables under the control of
the procedural flow constructs that surround them.
2. The right-hand side of a procedural assignment can be any expression that evaluates to a value.
However, part-selects on the right-hand side must have constant indices. The left-hand side indicates
the variable that receives the assignment from the right-hand side. The left-hand side of a procedural
assignment can take one of the following forms:

o Register, integer, real, or time variable: An assignment to the name reference of one of these
data types.
o Bit-select of a register, integer, real, or time variable: An assignment to a single bit that
leaves the other bits untouched.
o Part-select of a register, integer, real, or time variable: A part-select of two or more
contiguous bits that leave the rest of the bits untouched. For the part-select form, only
constant expressions are legal.
o Memory element: A single word of memory. Bit-selects and part-selects are illegal on
memory element references.
o Concatenation of any of the above: A concatenation of any of the previous four forms can be
specified, which effectively partitions the result of the right-hand side expression and then
assigns the partition parts, in order, to the various parts of the concatenation.

Delay in Assignment

In a delayed assignment, Δt time units pass before the statement is executed and the left-hand
assignment is made. With an intra-assignment delay, the right-hand side is evaluated immediately, but
there is a delay of Δt before the result is placed in the left-hand assignment.
If another procedure changes a right-hand side signal during Δt, it does not affect the result, because
the right-hand side has already been evaluated.

Syntax

1. Procedural assignment: variable = expression
2. Delayed assignment: #Δt variable = expression;
3. Intra-assignment delay: variable = #Δt expression;

Blocking Assignments

A blocking procedural assignment statement must be executed before executing the statements that
follow it in a sequential block. The statement does not prevent the execution of statements that
follow it in a parallel block.

Syntax

<lvalue> = <timing_control> <expression>

o An lvalue is a data type that is valid for a procedural assignment statement.


o = is the assignment operator, and timing control is the optional intra-assignment delay.
o Continuous procedural assignments and continuous assignments also use the = assignment
operator used by blocking procedural assignments.

Non-blocking (RTL) Assignments

The non-blocking procedural assignment is used to schedule assignments without blocking the
procedural flow. We can use the non-blocking procedural statement whenever we want to make
several register assignments within the same time step without regard to order or dependence upon
each other.

Syntax

<lvalue> <= <timing_control> <expression>

o An lvalue is a data type that is valid for a procedural assignment statement.


o <= is the non-blocking assignment operator, and timing control is the optional intra-
assignment timing control.
o The simulator interprets the <= operator as a relational operator when we use it in an
expression and interprets the <= operator as an assignment operator when you use it in a
non-blocking procedural assignment construct.

Simulator evaluates and executes the non-blocking procedural assignment in two steps:

Step 1: The simulator evaluates the right-hand side and schedules the new value assignment at a
time specified by a procedural timing control.
Step 2: At the end of the time step, when the given delay has expired, or the appropriate event has
taken place, the simulator executes the assignment by assigning the value to the left-hand side.

Case Statement

The case statement is a unique multi-way decision statement that tests whether an expression
matches several other expressions, and branches accordingly. The case statement is useful for
describing, for example, the decoding of a microprocessor instruction. The case statement differs
from the multi-way if-else-if construct in two essential ways, such as:

1. The conditional expressions in the if-else-if construct are more general than comparing one
expression with several others, as in the case statement.
2. The case statement provides a definitive result when there are x and z values in an expression.

Looping Statements

There are four types of looping statements. They are used to control the execution of a statement
zero, one, or more times.

1. Forever continuously executes a statement.
2. Repeat executes a statement a fixed number of times.
3. While executes a statement until an expression becomes false; if the expression starts out false, the
statement is not executed at all.
4. For controls the execution of its associated statements by a three-step process:

Step 1: Executes an assignment normally used to initialize a variable that controls the number of
loops executed.
Step 2: Evaluates an expression. Suppose the result is zero, then the for loop exits. And if it is not
zero, for loop executes its associated statements and then performs step 3.
Step 3: Executes an assignment normally used to modify the loop control variable's value, then
repeats step 2.

Delay Controls

Verilog handles delay controls in the following ways:
1. Delay Control

<statement>
::= <delay_control> <statement_or_null>
<delay_control>
::= # <NUMBER>
||= # <identifier>
||= # ( <mintypmax_expression> )

The following example delays the execution of the assignment by 10-time units.

#10 rega = regb;

2. Event Control

The execution of a procedural statement can be synchronized with a value change on a net or
register, or the occurrence of a declared event.

*<SCALAR_EVENT_EXPRESSION> is an expression that resolves to a one-bit value.

Verilog syntax can also detect a change based on the direction of the change, that is, toward the
value 1 (posedge) or toward the value 0 (negedge).

The behavior of posedge and negedge for unknown expression values are:

o A negedge is detected on the transition from 1 to unknown and from unknown to 0.


o A posedge is detected on the transition from 0 to unknown and from unknown to 1.

Procedures

1. Initial blocks
2. Always blocks

Initial Blocks

The initial and always statements are enabled at the beginning of the simulation. An initial block
executes only once, and its activity dies when the statement has finished.

Syntax

<initial_statement>
::= initial <statement>

Always Blocks

The always block executes repeatedly; its activity dies only when the simulation is terminated.
There is no limit to the number of initial and always blocks that can be defined in a module.

Syntax

<always_statement>
::= always <statement>
UNIT II SOFTWARE DESIGN

Design Concepts

Software design principles are concerned with providing means to handle the complexity of the design
process effectively. Effectively managing the complexity will not only reduce the effort needed for
design but can also reduce the scope of introducing errors during design.

Problem Partitioning

For a small problem, we can handle the entire problem at once, but for a significant problem we
divide and conquer: the problem is divided into smaller pieces so that each piece
can be handled separately. For software design, the goal is to divide the problem into manageable
pieces.

Benefits of Problem Partitioning

1. Software is easy to understand


2. Software becomes simple
3. Software is easy to test
4. Software is easy to modify
5. Software is easy to maintain
6. Software is easy to expand

These pieces cannot be entirely independent of each other as they together form the system. They have
to cooperate and communicate to solve the problem. This communication adds complexity.

Abstraction

An abstraction is a tool that enables a designer to consider a component at an abstract level without
bothering about the internal details of the implementation. Abstraction can be used for an existing
element as well as for the component being designed.

Here, there are two common abstraction mechanisms

1. Functional Abstraction
2. Data Abstraction
Functional Abstraction

i. A module is specified by the function it performs.
ii. The details of the algorithm used to accomplish the function are not visible to the user of the
function. Functional abstraction forms the basis for function-oriented design approaches.

Data Abstraction

Details of the data elements are not visible to the users of data. Data Abstraction forms the basis
for Object Oriented design approaches.
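
A minimal Java sketch of both mechanisms (the class and method names are assumptions for illustration): callers of sortAscending rely only on what it does, not on which sorting algorithm is used (functional abstraction), while users of IntStack call push and pop without knowing that an array holds the data internally (data abstraction).

import java.util.Arrays;

class FunctionalAbstractionExample {
    // Functional abstraction: the caller knows what the function does, not how it does it.
    static int[] sortAscending(int[] values) {
        int[] copy = Arrays.copyOf(values, values.length);
        Arrays.sort(copy);                  // the algorithm behind sort() is hidden from the caller
        return copy;
    }
}

// Data abstraction: the representation (a plain array) is invisible to users of the stack.
class IntStack {
    private int[] items = new int[16];
    private int size = 0;

    void push(int value) {
        if (size == items.length) items = Arrays.copyOf(items, size * 2);
        items[size++] = value;
    }
    int pop() {
        return items[--size];               // sketch only: callers must not pop an empty stack
    }
}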

Modularity

Modularity refers to the division of software into separately named and addressable modules that are
integrated later on to obtain the complete, functional software. It is the only
property that allows a program to be intellectually manageable.

The desirable properties of a modular system are:

 Each module is a well-defined system that can be used with other applications.
 Each module has single specified objectives.
 Modules can be separately compiled and saved in the library.
 Modules should be easier to use than to build.
 Modules are simpler from outside than inside.

Advantages of Modularity

 It allows large programs to be written by several or different people


 It encourages the creation of commonly used routines to be placed in the library and used
by other programs.
 It simplifies the overlay procedure of loading a large program into main storage.
 It provides more checkpoints to measure progress.
 It provides a framework for complete testing and makes the software more accessible to test.
 It produces well-designed and more readable programs.

Disadvantages of Modularity

 Execution time may be, but is not necessarily, longer.
 Storage size may be, but is not necessarily, increased.
 Compilation and loading time may be longer.
 Inter-module communication problems may be increased.
 More linkage is required, run-time may be longer, more source lines must be written, and
more documentation has to be done.
Modular Design

Modular design reduces the design complexity and results in easier and faster implementation by
allowing parallel development of various parts of a system. We discuss the different aspects of modular
design in detail in this section:

1. Functional Independence: Functional independence is achieved by developing functions that
perform only one kind of task and do not interact excessively with other modules. Independence is
important because it makes implementation easier and faster. Independent modules are
easier to maintain and test, they reduce error propagation, and they can be reused in other programs as
well. Thus, functional independence is a good design feature which ensures software quality.

It is measured using two criteria:

o Cohesion: It measures the relative function strength of a module.


o Coupling: It measures the relative interdependence among modules.

2. Information hiding: The principle of information hiding suggests that modules should be
characterized by design decisions that are hidden from all other modules; in other words, modules should
be specified and designed so that the data contained within a module is inaccessible to other modules
that have no need for such information.
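
A minimal sketch of information hiding in Java (the names are assumptions for illustration): other modules can only call the public operations of TemperatureLog, so the hidden design decision "readings are kept in an in-memory list" can later be changed, say to a database table, without touching any client code.

import java.util.ArrayList;
import java.util.List;

// The storage decision is a hidden design decision of this module.
class TemperatureLog {
    private final List<Double> readings = new ArrayList<>(); // inaccessible to other modules

    public void record(double celsius) { readings.add(celsius); }

    public double average() {
        double sum = 0;
        for (double r : readings) sum += r;
        return readings.isEmpty() ? 0 : sum / readings.size();
    }
}
// Client modules depend only on record() and average(), which also keeps coupling low.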

Strategy of Design

A good system design strategy is to organize the program modules in such a way that they are easy to
develop and, later, easy to change. Structured design methods help developers to deal with the size and
complexity of programs. Analysts generate instructions for the developers about how code should be
composed and how pieces of code should fit together to form a program. To design a system, there are
two possible approaches:

1. Top-down Approach: This approach starts with the identification of the main components and then
decomposing them into their more detailed sub-components.

2. Bottom-up Approach: A bottom-up approach begins with the lower-level details and moves up
the hierarchy, as shown in the figure. This approach is suitable in the case of an existing system.
Design Model

Design modeling in software engineering represents the features of the software that help
engineers to develop it effectively: the architecture, the user interface, and the component-level
detail. Design modeling provides a variety of different views of the system, like an
architectural plan for a home or building. Different methods, like data-driven, pattern-driven, or
object-oriented methods, are used for constructing the design model. All these methods use a
set of design principles for designing a model.

Working of Design Modeling in Software Engineering

Designing a model is an important phase and is a multi-step process that represents the data
structure, program structure, interface characteristics, and procedural details. It is mainly
classified into four categories – data design, architectural design, interface design, and
component-level design.

 Data design: It represents the data objects and their interrelationships in an entity-relationship
diagram. The entity-relationship diagram consists of the information required for each entity or data
object, and it shows the relationships between these objects. It shows the structure of the data
in terms of tables, and it shows three types of relationship – one to one, one to many, and
many to many. In a one-to-one relation, one entity is connected to exactly one other entity. In a
one-to-many relation, one entity is connected to more than one entity.
 Architectural design: It defines the relationship between major structural elements of the
software. It is about decomposing the system into interacting components. It is expressed as
a block diagram defining an overview of the system structure – features of the components
and how these components communicate with each other to share data. It defines the
structure and properties of the component that are involved in the system and also the inter-
relationship among these components.
 User Interfaces design: It represents how the Software communicates with the user i.e. the
behavior of the system. It refers to the product where user interact with controls or displays
of the product. For example, Military, vehicles, aircraft, audio equipment, computer
peripherals are the areas where user interface design is implemented. UI design becomes
efficient only after performing usability testing. This is done to test what works and what
does not work as expected.
 Component level design: It transforms the structural elements of the software architecture
into a procedural description of software components. It is a perfect way to share a large
amount of data. Components need not be concerned with how data is managed at a
centralized level.

Principles of Design Model

 Design must be traceable to the analysis model:

The analysis model represents the information, functions, and behavior of the system. The design
model translates all these things into architecture – a set of subsystems that implement major
functions and a set of component-level designs that are the realization of the analysis classes.
This implies that the design model must be traceable to the analysis model.

 Always consider architecture of the system to be built:

Software architecture is the skeleton of the system to be built. It affects interfaces, data
structures, behavior, program control flow, the manner in which testing is conducted,
maintainability of the resultant system, and much more.
 Focus on the design of the data:

Data design encompasses the manner in which the data objects are realized within the
design. It helps to simplify the program flow, makes the design and implementation of the
software components easier, and makes overall processing more efficient.

 User interfaces should consider the user first:

The user interface is the main thing of any software. No matter how good its internal
functions are or how well designed its architecture is but if the user interface is poor and
end-users don’t feel ease to handle the software then it leads to the opinion that the software
is bad.

 Components should be loosely coupled:

Coupling of different components into one is done in many ways like via a component
interface, by messaging, or through global data. As the level of coupling increases, error
propagation also increases, and overall maintainability of the software decreases. Therefore,
component coupling should be kept as low as possible.

 Interfaces both user and internal must be designed:

The data flow between components decides the processing efficiency, error flow, and design
simplicity. A well-designed interface makes integration easier, and the tester can validate the
component functions more easily.

 Component level design should exhibit Functional independence:

It means that functions delivered by component should be cohesive i.e. it should focus on
one and only one function or sub-function.

Conclusion

Here in this article, we have discussed the basics of design modeling in software engineering
along with its principles.

Software Architecture

The architecture of a system describes its major components, their relationships (structures), and
how they interact with each other. Software architecture and design is shaped by several contributing
factors such as business strategy, quality attributes, human dynamics, design, and the IT environment.
We can segregate software architecture and design into two distinct phases: software architecture
and software design. In architecture, non-functional decisions are cast and separated from the
functional requirements. In design, the functional requirements are accomplished. Architecture serves as
a blueprint for a system. It provides an abstraction to manage the system complexity and establish a
communication and coordination mechanism among components.

 It defines a structured solution to meet all the technical and operational requirements, while
optimizing the common quality attributes like performance and security.
 Further, it involves a set of significant decisions about the organization related to software
development and each of these decisions can have a considerable impact on quality,
maintainability, performance, and the overall success of the final product. These decisions
comprise of −
o Selection of structural elements and their interfaces by which the system is
composed.
o Behavior as specified in collaborations among those elements.
o Composition of these structural and behavioral elements into large subsystem.
o Architectural decisions align with business objectives.
o Architectural styles guide the organization.

Software Design

Software design provides a design plan that describes the elements of a system, how they fit, and
work together to fulfill the requirement of the system. The objectives of having a design plan are as
follows −

 To negotiate system requirements, and to set expectations with customers, marketing, and
management personnel.
 Act as a blueprint during the development process.
 Guide the implementation tasks, including detailed design, coding, integration, and testing.

It comes before the detailed design, coding, integration, and testing and after the domain analysis,
requirements analysis, and risk analysis.

Goals of Architecture

The primary goal of the architecture is to identify requirements that affect the structure of the
application. A well-laid architecture reduces the business risks associated with building a technical
solution and builds a bridge between business and technical requirements.

 Expose the structure of the system, but hide its implementation details.
 Realize all the use-cases and scenarios.
 Try to address the requirements of various stakeholders.
 Handle both functional and quality requirements.
 Reduce the cost of ownership and improve the organization’s market position.
 Improve quality and functionality offered by the system.
 Improve external confidence in either the organization or system.

Limitations

 Lack of tools and standardized ways to represent architecture.


 Lack of analysis methods to predict whether architecture will result in an implementation
that meets the requirements.
 Lack of awareness of the importance of architectural design to software development.
 Lack of understanding of the role of software architect and poor communication among
stakeholders.
 Lack of understanding of the design process, design experience and evaluation of design.

Role of Software Architect

A Software Architect provides a solution that the technical team can create and design for the entire
application. A software architect should have expertise in the following areas −

Design Expertise

 Expert in software design, including diverse methods and approaches such as object-oriented
design, event-driven design, etc.
 Lead the development team and coordinate the development efforts for the integrity of the
design.
 Should be able to review design proposals and evaluate the tradeoffs among them.

Domain Expertise

 Expert on the system being developed and plan for software evolution.
 Assist in the requirement investigation process, assuring completeness and consistency.
 Coordinate the definition of domain model for the system being developed.

Technology Expertise

 Expert on available technologies that helps in the implementation of the system.


 Coordinate the selection of programming language, framework, platforms, databases, etc.

Methodological Expertise

 Expert on software development methodologies that may be adopted during SDLC (Software
Development Life Cycle).
 Choose the appropriate approaches for development that helps the entire team.

Hidden Role of Software Architect

 Facilitates the technical work among team members and reinforces the trust relationship in
the team.
 Information specialist who shares knowledge and has vast experience.
 Protect the team members from external forces that would distract them and bring less value
to the project.
Deliverables of the Architect

 A clear, complete, consistent, and achievable set of functional goals


 A functional description of the system, with at least two layers of decomposition
 A concept for the system
 A design in the form of the system, with at least two layers of decomposition
 A notion of the timing, operator attributes, and the implementation and operation plans
 A document or process which ensures functional decomposition is followed, and the form of
interfaces is controlled

Quality Attributes

Quality is a measure of excellence or the state of being free from deficiencies or defects. Quality
attributes are the system properties that are separate from the functionality of the system.
Implementing quality attributes makes it easier to differentiate a good system from a bad one.
Attributes are overall factors that affect runtime behavior, system design, and user experience.

They can be classified as −

Static Quality Attributes

Reflect the structure of a system and its organization; they are directly related to architecture, design,
and source code. They are invisible to the end-user, but they affect the development and maintenance
cost, e.g. modularity, testability, maintainability, etc.

Dynamic Quality Attributes

Reflect the behavior of the system during its execution. They are directly related to the system’s
architecture, design, source code, configuration, deployment parameters, environment, and
platform. They are visible to the end-user and exist at runtime, e.g. throughput, robustness,
scalability, etc.

Quality Scenarios

Quality scenarios specify how to prevent a fault from becoming a failure. They can be divided into
six parts based on their attribute specifications (an illustrative scenario is given after the list) −

 Source − An internal or external entity, such as people, hardware, software, or physical
infrastructure, that generates the stimulus.
 Stimulus − A condition that needs to be considered when it arrives on a system.
 Environment − The stimulus occurs within certain conditions.
 Artifact − A whole system or some part of it such as processors, communication channels,
persistent storage, processes etc.
 Response − An activity undertaken after the arrival of stimulus such as detect faults, recover
from fault, disable event source etc.
 Response measure − Should measure the occurred responses so that the requirements can
be tested.
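
As a purely illustrative example of an availability scenario structured into these six parts: Source – an external user; Stimulus – a request arrives while one server instance has crashed; Environment – normal operation under peak load; Artifact – the order-processing service; Response – the load balancer routes the request to a healthy instance and the failed instance is restarted; Response measure – no request is lost and 99% of requests are still answered within two seconds.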

Architectural Styles

An architectural style is a large-scale, predefined solution structure. Using an architectural style helps
us to build the system more quickly than building everything from scratch. Architectural styles are
similar to patterns, but they provide a solution for a larger challenge.
In this section we study several architectural styles for communication in distributed systems: the REST
style (Representational State Transfer), the REST-like style, the RPC style (Remote Procedure Call),
the SOAP style, and GraphQL. We compare the approaches and show their advantages and disadvantages,
commonalities and differences. APIs can basically be realized using any of these styles. How do we
know whether a particular architectural style is appropriate for a given API?

When realizing a new API, an appropriate API philosophy should be chosen, such
as GraphQL, REST, SOAP or RPC.

So which should you choose for your cool new API?

Once the bigger-picture, architectural design decisions are nailed, frontend design decisions can be
handled. These design decisions should be documented by refining and updating the API description.
The API description thus becomes an evolving, single source of truth about the current state of the
system.

REST Style

REST (Representational State Transfer) is an architectural style for services, and as such it defines a
set of architectural constraints and agreements. A service, which complies with the REST constraints,
is said to be RESTful.

REST is designed to make optimal use of an HTTP-based infrastructure and, due to the success of the
web, HTTP-based infrastructure such as servers, caches, and proxies is widely available. The web,
which is based on HTTP, provides some proof for an architecture that not only scales extremely well
but also has longevity. The basic idea of REST is to transfer the ideas that worked well for the web
and apply them to web services.

HATEOAS is an abbreviation for Hypermedia As The Engine Of Application State. HATEOAS is the
aspect of REST, which allows for dynamic architectures. It allows clients to explore any API without
any a-priori knowledge of data formats or of the API itself.

REST-like APIs

There is a large group of APIs which claim to follow the REST style but actually do not. They
only implement some elements of REST; at their core, they are RPC APIs. The Richardson Maturity
Model may be helpful for judging how RESTful such an API really is.

RPC Style

RPC is an abbreviation for Remote Procedure Call. RPC is an architectural style for distributed
systems. It has been around since the 1980s. Today the most widely used RPC styles are JSON-RPC
and XML-RPC. Even SOAP can be considered to follow an RPC architectural style.

The central concept in RPC is the procedure. The procedures do not need to run on the local machine,
but they can run on a remote machine within the distributed system. When using an RPC framework,
calling a remote procedure should be as simple as calling a local procedure.
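
The contrast between the two styles can be sketched with Java's built-in HTTP client (the host name, paths, and the getCustomer procedure are hypothetical, chosen only for illustration): the REST request addresses a resource by URI and uses a standard HTTP verb, while the RPC request addresses a procedure and passes its parameters in the request body.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

class RestVersusRpc {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // REST style: the customer is a resource identified by a URI; GET retrieves its representation.
        HttpRequest restRequest = HttpRequest
                .newBuilder(URI.create("https://api.example.com/customers/42"))
                .GET()
                .build();

        // RPC style (JSON-RPC flavoured): the central concept is the procedure getCustomer.
        String rpcBody = "{\"method\":\"getCustomer\",\"params\":{\"id\":42}}";
        HttpRequest rpcRequest = HttpRequest
                .newBuilder(URI.create("https://api.example.com/rpc"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(rpcBody))
                .build();

        // Both requests are sent the same way; only the addressing model differs.
        HttpResponse<String> response = client.send(restRequest, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}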

SOAP Style

SOAP follows the RPC style (see previous section) and exposes procedures as central concepts (e.g.
getCustomer). It is standardized by the W3C and is the most widely used protocol for web services.
SOAP style architectures are in widespread use, however, typically only for company internal use or
for services called by trusted partners.

GraphQL Style

For a long time, REST was thought to be the only appropriate tool for building modern APIs. But in
recent years, another tool was added to the toolbox, when Facebook published GraphQL, the
philosophy, and framework powering its popular API. More and more tech companies
tried GraphQL and adopted it as one of their philosophies for API design. Some built a GraphQL API
next to their existing REST API, some replaced their REST API with GraphQL, and even others have
ignored the GraphQL trend to focus single-mindedly on their REST API.

But, not only the tech companies are divided. Following the discussions around REST and GraphQL,
there seem to be two camps of gurus leading very emotional discussions: “always use the hammer,”
one camp proclaims. “NO, always use the screwdriver,” the other camp insists. And for the rest of
us? Unfortunately, this situation is confusing, leading to paralysis and indecision about API design.

The intention of the Book on REST & GraphQL is to clear up the confusion and enable you to make
your own decision, the decision that is right for your API. By having the necessary criteria for
comparison and general properties, strengths, and weaknesses of the approach, you can choose if the
hammer or the screwdriver is better suited for your API project.

Conclusion

APIs can basically be realized using any of these styles. How do we know whether a particular
architectural style is appropriate for a given API? The resulting API should expose many of the
previously stated desirable properties.

Most commonly, APIs are realized using REST over HTTP. This is why one can assume in practice
that APIs are realized with the REST style.

Architectural Design

Software needs an architectural design to represent the design of the software. IEEE defines
architectural design as “the process of defining a collection of hardware and software components and
their interfaces to establish the framework for the development of a computer system.” The software
that is built for computer-based systems can exhibit one of many architectural styles.
Each style describes a system category that consists of:

 A set of components (e.g., a database, computational modules) that perform a function
required by the system.
 A set of connectors that enable coordination, communication, and cooperation between the
components.
 Constraints that define how components can be integrated to form the system.
 Semantic models that help the designer to understand the overall properties of the system.
The use of architectural styles is to establish a structure for all the components of the system.
Taxonomy of Architectural styles:

1. Data centered architectures:


 A data store resides at the center of this architecture and is accessed frequently by the
other components, which update, add, delete or otherwise modify the data present within the store.
 In a typical data-centered style, client software accesses a central repository. In a variation of
this approach, the repository is transformed into a blackboard that notifies client software
whenever data of interest to a client changes.
 This data-centered architecture promotes integrability: existing components can be changed
and new client components can be added to the architecture without concern for other clients.
 Data can be passed among clients using the blackboard mechanism.
2. Data flow architectures:
 This kind of architecture is used when input data is to be transformed into output data through a
series of computational or manipulative components.
 A pipe-and-filter architecture has a set of components, called filters, connected by pipes.
 Pipes are used to transmit data from one component to the next.
 Each filter works independently and is designed to take data input of a certain form and
produce data output of a specified form for the next filter. The filters do not require any
knowledge of the workings of neighboring filters. (A minimal pipe-and-filter sketch in Java
follows this list.)
 If the data flow degenerates into a single line of transforms, then it is termed as batch
sequential. This structure accepts the batch of data and then applies a series of sequential
components to transform it.
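The following minimal sketch (hypothetical string-processing filters, not from the source) illustrates the pipe-and-filter idea: each filter is independent, consumes input of a known form, and hands its output to the next filter in the pipe.

import java.util.List;
import java.util.function.UnaryOperator;

public class PipeAndFilterSketch {
    public static void main(String[] args) {
        // Filters know nothing about each other; the "pipe" is simply the composition order.
        List<UnaryOperator<String>> filters = List.of(
                s -> s.trim(),                  // filter 1: remove surrounding whitespace
                s -> s.toLowerCase(),           // filter 2: normalize case
                s -> s.replaceAll("\\s+", " ")  // filter 3: collapse repeated spaces
        );

        String data = "   Pipe   AND   Filter   ";
        for (UnaryOperator<String> filter : filters) {
            data = filter.apply(data);          // data flows through the pipe
        }
        System.out.println(data);               // prints: pipe and filter
    }
}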
3. Call and Return architectures: It is used to create a program that is easy to scale and modify.
Many sub-styles exist within this category. Two of them are explained below.
 Remote procedure call architecture: the components of a main program or subprogram
architecture are distributed across multiple computers on a network.
 Main program or subprogram architectures: the main program decomposes into a number of
subprograms or functions organized into a control hierarchy. The main program invokes a
number of subprograms, which can in turn invoke other components.

4. Object-Oriented architecture: The components of the system encapsulate data and the operations
that must be applied to manipulate the data. Coordination and communication between the
components are established via message passing.
5. Layered architecture (a minimal sketch follows this list):
 A number of different layers are defined, with each layer performing a well-defined set of
operations. Each layer progressively performs operations that are closer to the machine
instruction set.
 At the outer layer, components handle user interface operations; at the inner layers,
components perform operating system interfacing (communication and coordination with the
OS).
 Intermediate layers provide utility services and application software functions.
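A minimal sketch of the layered idea (the layer names are hypothetical): each layer only calls the layer directly beneath it, moving from user-facing operations toward the machine.

public class LayeredSketch {
    static class PersistenceLayer {                        // inner layer: closest to the OS/storage
        String load(String key) { return "value-for-" + key; }
    }

    static class ServiceLayer {                            // intermediate layer: utility/application logic
        private final PersistenceLayer persistence = new PersistenceLayer();
        String describe(String key) { return key + " = " + persistence.load(key); }
    }

    static class UiLayer {                                 // outer layer: user interface operations
        private final ServiceLayer service = new ServiceLayer();
        void show(String key) { System.out.println(service.describe(key)); }
    }

    public static void main(String[] args) {
        new UiLayer().show("theme");                       // the call flows outer -> inner
    }
}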
Component-Level Design
Component-based architecture focuses on the decomposition of the design into individual functional or
logical components that represent well-defined communication interfaces containing methods, events,
and properties. It provides a higher level of abstraction and divides the problem into sub-problems, each
associated with component partitions.
The primary objective of component-based architecture is to ensure component reusability. A
component encapsulates functionality and behaviors of a software element into a reusable and self-
deployable binary unit. There are many standard component frameworks such as COM/DCOM,
JavaBean, EJB, CORBA, .NET, web services, and grid services. These technologies are widely used in
local desktop GUI application design, for example graphic JavaBean components, MS ActiveX components,
and COM components, which can be reused by a simple drag-and-drop operation.
Component-oriented software design has many advantages over the traditional object-oriented
approaches such as −
 Reduced time in market and the development cost by reusing existing components.
 Increased reliability with the reuse of the existing components.

What is a Component?
A component is a modular, portable, replaceable, and reusable set of well-defined functionality that
encapsulates its implementation and exports it as a higher-level interface.
A component is a software object, intended to interact with other components, that encapsulates certain
functionality or a set of functionalities. It has a clearly defined interface and conforms to a
recommended behavior common to all components within an architecture.
A software component can be defined as a unit of composition with a contractually specified interface
and explicit context dependencies only. That is, a software component can be deployed independently
and is subject to composition by third parties.
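A minimal sketch of this idea in Java (the names are hypothetical): clients depend only on the contractually specified interface, never on the implementation class, so the implementation can be replaced or composed by third parties.

import java.util.Set;

public class ComponentSketch {
    // Provided interface: the component's published contract.
    public interface SpellChecker {
        boolean isCorrect(String word);
    }

    // Encapsulated implementation; it can be swapped without touching any client code.
    static class SimpleSpellChecker implements SpellChecker {
        private final Set<String> dictionary = Set.of("software", "component", "interface");

        @Override
        public boolean isCorrect(String word) {
            return dictionary.contains(word.toLowerCase());
        }
    }

    public static void main(String[] args) {
        SpellChecker checker = new SimpleSpellChecker(); // the binding could also be done by a container
        System.out.println(checker.isCorrect("Component")); // prints: true
    }
}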
Views of a Component
A component can have three different views − object-oriented view, conventional view, and process-
related view.
Object-oriented view
A component is viewed as a set of one or more cooperating classes. Each problem domain class
(analysis) and infrastructure class (design) is elaborated to identify all attributes and operations that
apply to its implementation. This also involves defining the interfaces that enable classes to communicate
and cooperate.
Conventional view
It is viewed as a functional element or a module of a program that integrates the processing logic, the
internal data structures that are required to implement the processing logic and an interface that enables
the component to be invoked and data to be passed to it.
Process-related view
In this view, instead of creating each component from scratch, the system is built from existing
components maintained in a library. As the software architecture is formulated, components are selected
from the library and used to populate the architecture.
 A user interface (UI) component includes grids and buttons (referred to as controls), and utility
components expose a specific subset of functions used in other components.
 Other common types of components are those that are resource intensive, not frequently
accessed, and must be activated using the just-in-time (JIT) approach.
 Many components are invisible which are distributed in enterprise business applications and
internet web applications such as Enterprise JavaBean (EJB), .NET components, and CORBA
components.
Characteristics of Components
 Reusability − Components are usually designed to be reused in different situations in different
applications. However, some components may be designed for a specific task.
 Replaceable − Components may be freely substituted with other similar components.
 Not context specific − Components are designed to operate in different environments and
contexts.
 Extensible − A component can be extended from existing components to provide new behavior.
 Encapsulated − A component exposes interfaces that allow the caller to use its
functionality, and does not expose details of its internal processes or any internal variables or
state.
 Independent − Components are designed to have minimal dependencies on other components.

Principles of Component−Based Design

A component-level design can be represented by using some intermediary representation (e.g.
graphical, tabular, or text-based) that can be translated into source code. The design of data structures,
interfaces, and algorithms should conform to well-established guidelines to help us avoid the
introduction of errors.
 The software system is decomposed into reusable, cohesive, and encapsulated component units.
 Each component has its own interface that specifies required ports and provided ports; each
component hides its detailed implementation.
 It should be possible to extend a component without making internal code or design
modifications to its existing parts.
 Components should depend on abstractions rather than on other concrete components, since
depending on concrete components makes extension more difficult. (A minimal sketch of
provided and required ports follows this list.)
 Connectors connect components, specifying and governing the interaction among components.
The interaction type is specified by the interfaces of the components.
 Components interaction can take the form of method invocations, asynchronous invocations,
broadcasting, message driven interactions, data stream communications, and other protocol
specific interactions.
 For a server class, specialized interfaces should be created to serve major categories of clients.
Only those operations that are relevant to a particular category of clients should be specified in
the interface.
 A component can extend to other components and still offer its own extension points. It is the
concept of plug-in based architecture. This allows a plugin to offer another plugin API.
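A minimal sketch of provided and required ports (the names are hypothetical): the component publishes what it offers as a provided interface and declares what it needs as a required interface, which is wired in from outside rather than hard-coded inside the component.

public class PortsSketch {
    public interface ReportProvider { String buildReport(); }  // provided port
    public interface DataSourcePort { String fetchRows(); }    // required port

    static class ReportComponent implements ReportProvider {
        private final DataSourcePort dataSource;                // dependency on an abstraction only

        ReportComponent(DataSourcePort dataSource) {            // the required port is wired by a connector/container
            this.dataSource = dataSource;
        }

        @Override
        public String buildReport() {
            return "REPORT: " + dataSource.fetchRows();
        }
    }

    public static void main(String[] args) {
        ReportProvider report = new ReportComponent(() -> "row1,row2,row3");
        System.out.println(report.buildReport());               // prints: REPORT: row1,row2,row3
    }
}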

Component-Level Design Guidelines

Establishes naming conventions for components that are specified as part of the architectural model and
then refines or elaborates them as part of the component-level model.
 Obtains architectural component names from the problem domain and ensures that they have
meaning to all stakeholders who view the architectural model.
 Extracts the business process entities that can exist independently without any associated
dependency on other entities.
 Recognizes and discovers these independent entities as new components.
 Uses infrastructure component names that reflect their implementation-specific meaning.
 Models any dependencies from left to right and inheritance from top (base class) to bottom
(derived classes).
 Model any component dependencies as interfaces rather than representing them as a direct
component-to-component dependency.

Conducting Component-Level Design

Recognizes all design classes that correspond to the problem domain as defined in the analysis model
and architectural model.
 Recognizes all design classes that correspond to the infrastructure domain.
 Describes all design classes that are not acquired as reusable components, and specifies message
details.
 Identifies appropriate interfaces for each component and elaborates attributes and defines data
types and data structures required to implement them.
 Describes processing flow within each operation in detail by means of pseudo code or UML
activity diagrams.
 Describes persistent data sources (databases and files) and identifies the classes required to
manage them.
 Develops and elaborates behavioral representations for a class or component. This can be done
by elaborating the UML state diagrams created for the analysis model and by examining all use
cases that are relevant to the design class.
 Elaborates deployment diagrams to provide additional implementation detail.
 Demonstrates the location of key packages or classes of components in a system by using class
instances and designating specific hardware and operating system environment.
 The final decision can be made by using established design principles and guidelines.
Experienced designers consider all (or most) of the alternative design solutions before settling
on the final design model.
Advantages
 Ease of deployment − As new compatible versions become available, it is easier to replace
existing versions with no impact on the other components or the system as a whole.
 Reduced cost − The use of third-party components allows you to spread the cost of
development and maintenance.
 Ease of development − Components implement well-known interfaces to provide defined
functionality, allowing development without impacting other parts of the system.
 Reusable − The use of reusable components means that they can be used to spread the
development and maintenance cost across several applications or systems.
 Modification of technical complexity − A component container and its services mitigate much of
the technical complexity on behalf of the component.
 Reliability − The overall system reliability increases since the reliability of each individual
component enhances the reliability of the whole system via reuse.
 System maintenance and evolution − Easy to change and update the implementation without
affecting the rest of the system.
 Independent − Components are independent and flexibly connected, can be developed in
parallel by different groups, and improve productivity for both current and future software
development.
User Experience Design
User interface design is also known as user interface engineering. User interface design is the
process of designing user interfaces for software and machines such as mobile devices, home appliances,
computers, and other electronic devices, with the aim of increasing usability and improving the user
experience.
Choosing Interface Components
Users have come to expect interface components to behave in a certain manner, so try to be predictable
and consistent in your selections and their layout. As a result, task completion, satisfaction, and
performance will increase.
Interface components may involve:
Input Controls: Input Controls involve buttons, toggles, dropdown lists, checkboxes, date fields,
radio buttons, and text fields.
Navigational Components: Navigational components contain slider, tags, pagination, search field,
breadcrumb, icons.
Informational Components: Informational Components contain tooltips, modal windows, progress
bar, icons, notification message boxes.
Containers: Containers include accordion.
At times, several components may be suitable for displaying the same content. When this happens, it is
crucial to think about the trade-offs between them.
Best Practices for Designing an Interface
It all starts with getting to know your users, which includes understanding their interests,
abilities, tendencies, and habits. Once you have figured out who your user is, keep the following in
mind when designing your interface:

 Create consistently and use common UI components
 Use typography to make hierarchy and clarity.
 Make sure that the system communicates what's happening
 Use color and texture strategically
 Keep the interface simple
 Be purposeful in page layout
Create Consistently and Use Common UI Components
Users feel more at ease and are able to complete tasks more easily if we use common
components in our UI. It is also important to establish consistent patterns in language, design, and
layout across the website in order to help with productivity.
Use Typography in Order to Make Hierarchy and Clarity
Think about how the typeface will be used. Text in various sizes, fonts, and arrangements can
help increase readability, legibility, and scannability.
Make Sure that the System Communicates What's Happening
Always keep your user up to date on their change in state, location, errors, actions, etc. Using various
UI components to communicate status and, if needed, the next steps will help your user feel less
frustrated.
Use color and Texture Strategically
Using contrast, light, color, and texture to our benefit, we can draw attention to or draw attention
away from objects.
Keep the Interface Simple
The best interfaces are almost invisible to the user. They avoid needless components and use simple
terminology on labels and in messaging.
Be Purposeful in Page Layout
Take into account the spatial associations between the objects on the page and organize the page on
the basis of importance. Carefully positioning objects can aid scanning and readability by drawing
attention to the most appropriate pieces of information.
Designing User Interfaces for Users
User interfaces are the points of interaction between the user and developer. They come in three
different types of formats:
1. Graphical User Interfaces (GUIs)
In a graphical user interface, users interact with visual representations on digital control
panels. Example of a GUI: a computer's desktop.
2. Gesture-Based Interfaces
In gesture-based interfaces, users interact with 3D design spaces by moving their bodies. Example
of a gesture-based interface: virtual reality (VR) games.
3. Voice-Controlled Interfaces (VUIs)
In voice-controlled interfaces (VUIs), users interact using their voice. Examples of
voice-controlled interfaces: Alexa on Amazon devices and Siri on the iPhone.
User Interface Design Processes
The user-interface design necessitates an in-depth understanding of user requirements. It primarily
focuses on the platform's requirements and user preferences.
Functionality Requirements Gathering
Creates a list of device functionalities that are needed to fulfil the user's project goal and specification.
User and Task Analysis
This is a kind of field research: studying how the system's potential users perform the tasks
that the design will serve, and conducting interviews to learn more about their goals.
Typical questions involve:

 What do you think the user would like the system to do?
 What role does the system fit in the user's everyday activities or workflow?
 How technically savvy is the user, and what other systems does the user already use?
 What styles of user interface look and feel do you think the user prefers?
Information Architecture
Development of the process or information flow of the system (for phone tree systems, this will be a
choice-tree flowchart; for a website, this will be a site flow that displays the hierarchy of pages).
Prototyping
Development of wireframes, either in the form of simple interactive screens or paper prototypes.
To keep the focus on the interface, these prototypes are stripped of all look-and-feel elements as
well as most of the content.
Usability Inspection
Letting an evaluator examine a user interface. Inspection is typically less expensive to carry out than
usability testing and can be used early in the development process, for example to evaluate
requirements for the system, which usually cannot be tested on users.
Usability Testing
Prototypes are tested on a real user, often using a method known as think-aloud protocol, in which we
can ask the user to speak about their views during the experience. The testing of user interface design
permits the designer to understand the reception from the viewer's perspective, making it easier to
create effective applications.
Graphical User Interface Design
This is the actual look and feel of the final graphical user interface (GUI) design: the
control panels and faces of the design. Voice-controlled interfaces involve oral-auditory interaction,
while gesture-based interfaces involve users with 3D design spaces through physical motions.
Software Maintenance
After a new interface is deployed, it may be necessary to perform routine maintenance in order to fix
software bugs, add new functionality or fully update the system. When the decision is taken to update
the interface, the legacy system will go through a new iteration of the design process.
User Interface Design Requirements
The dynamic characteristics of a system are defined in terms of the dialogue requirements contained
in 7 principles of part 10 of the ergonomics standard, the ISO 9241. This standard provides a system
of ergonomic "principles" for the dialogue techniques along with the high-level concepts, examples,
and implementations. The dialogue principles reflect the interface's dynamic aspects and are mostly
thought of as the interface's "feel." The following are the seven dialogue principles:

1. Suitability for the Task
The dialogue is suitable for the task when it helps the user complete the task efficiently and
effectively.
2. Self-Descriptiveness
When each dialogue phase is instantly understandable due to system feedback or clarified to the user
upon request, the dialogue is self-descriptive.
3. Controllability
When the user is capable to initiate and monitor the course and speed of the interaction until the aim is
achieved, then dialogue is controllable.
4. Conformity with User Expectations
If the dialogue is consistent and corresponds to the characteristics of the user, such as experience,
education, task awareness, and generally accepted conventions, it conforms with user expectations.
5. Error Tolerance
If, despite obvious errors in input, the desired outcome can be accomplished with no or limited action
from the user, then the dialogue is error-tolerant.
6. Suitability for Individualization
If the interface software can be adapted to the job needs, individual preferences, and abilities of the
user, the dialogue is capable of individualization.
7. Suitability for Learning
The dialogue is suitable for learning when it assists and guides the user in learning how to use the system.
The ISO 9241 standard defines usability in terms of effectiveness, efficiency, and the satisfaction of the
user. The following is the explanation of usability found in Part 11:

 The degree to which the intended objectives of use of the overall system are met
(effectiveness).
 The resources that have to be expended to achieve the intended results (efficiency).
 The degree to which the user finds the overall system acceptable (satisfaction).
Usability factors include effectiveness, efficiency, and satisfaction. In order to assess these factors,
they must first be split into sub-factors and then into usability measures. Part 12 of the ISO 9241
standard specifies the organization of information (alignment, arrangement, location, grouping), the
display of graphical objects, and the coding of information (colour, shape, visual cues, size,
abbreviation) by seven attributes. The seven presentation attributes are as follows:

 Clarity: - The information content is conveyed quickly and accurately.
 Discriminability: - The displayed information can be distinguished accurately.
 Conciseness: - The users are not overburdened with irrelevant data.
 Consistency: - Consistency means a unique design with conformity with the expectation
of users.
 Detectability: - The attention of the user is directed towards the information that is essential.
 Legibility: - Legibility means information is easy to read.
 Comprehensibility: - The meaning is straightforward, recognizable, unambiguous, and
easy to comprehend.
Part 13 of the ISO 9241 standard states that user guidance should be easily distinguishable from other
displayed information and should be specific to the current context of use. The following five
means can be used to provide user guidance:

 Prompts indicating that the system is available for input explicitly (specific prompts) or
implicitly (generic prompts).
 Feedback informing the user about their input in a timely, non-intrusive, and perceptible way.
 Details about the application's current state, the system's hardware and software, and the
user's activities.
 Error management contains error detection, error correction, error message, and user
support for error management.
 Online assistance for both system-initiated and user-initiated requests with detailed
information for the current context of usage.
How to Make Great UIs
Remember that the users are people with needs like comfort and a mental capacity limit when creating
a stunning GUI. The following guidelines should be followed:
1. Create buttons, and other popular components that behave predictably (with responses
like pinch-to-zoom) so that users can use them without thinking. Form must follow
function.
2. Keep discoverability high. Mark icons clearly and provide well-defined affordances, such as
shadows for buttons.
3. The interface should be simple (including elements that help users achieve their goals)
and create an "invisible" feel.
4. In terms of layout, respect the user's eyes and attention. Place emphasis on hierarchy and
readability:
 Use proper alignment: usually select edge (over center) alignment.
 Draw attention to key features using:
o Colour, brightness, and contrast, which are all important factors to consider; excessive
use of colours or buttons should be avoided.
o Font sizes, italics, capitals, bold type/weighting, and letter spacing to structure text.
Users should be able to deduce meaning simply by scanning.
o Regardless of the context, always have the next steps that the user can naturally
deduce.
o Use proper UI design patterns to assist users in navigating and to reduce burdens,
for example by pre-filling forms. Dark patterns such as hard-to-see prefilled opt-in/opt-out
checkboxes and sneaking items into the user's cart should be avoided.
o Keep user informed about system responses/actions with feedback.
Principles of User Interface Design
1. Clarity Is Job #1
The interface's first and most essential task is to provide clarity. To be effective in using the interface
you designed, people need to recognize what it is, care about why they would use it, and understand
what the interface is doing as they interact with it. Clarity also helps them anticipate what will occur
as they use it.
2. Keep Users in Control
Humans are most at ease when they feel in control of themselves and their surroundings. Thoughtless
software robs people of that comfort by dragging them into unexpected encounters, unexpected
outcomes, and confusing pathways.
3. Conserve Attention at All Cost
We live in a world that is constantly interrupted. It is difficult to read in peace these days without
anything attempting to divert our focus. Attention is a valuable commodity. Distracting content should
not be strewn around the side of your applications… keep in mind why the screen exists in the first
place.
4. Interfaces Exist to Enable Interaction
Interaction between humans and our world is allowed by interfaces. They can support, explain, allow,
display associations, illuminate, bring us together, separate us, handle expectations, and provide
access to service. Designing a user interface is not an artistic endeavour. Interfaces are not stand-alone
landmarks.
5. Keep Secondary Actions Secondary
Multiple secondary actions may be added to screens with a single primary action, but they must be
kept secondary. Your article exists not so that people can share it on Twitter, but so that people
can read and comprehend it.
6. Provide a Natural Next Step
Few interactions are intended to be the last, so thoughtfully design a next step for every interaction
with your interface. Anticipate what the next interaction will be and design to accommodate it. Just
as in human conversation, offer an opening for further interaction. Don't leave people hanging just
because they have done what you wanted them to do.
7. Direct Manipulation is Best
There is no need for an interface if we can directly access the physical objects in our universe. We
build interfaces to help us interact with objects because this is not always easy, and objects are
becoming increasingly informational.
8. Highlight, Don't Determine, with Colour
When the light changes, the colour of a physical object changes. In the full light of day, we see a very
different tree outline than against a sunset. As in the real world, where colour shifts with its
surroundings, colour should not be the deciding factor in an interface. It can be useful for highlighting
and directing focus, but it should not be the only way to distinguish objects.
9. Progressive Disclosure
On each screen, just show what is needed. If people must make a decision, give them sufficient
information in order to make that decision, then go into more details on a subsequent screen. Avoid
the popular trap of over-explaining or showing all at once. Defer decisions to subsequent screens
wherever possible by gradually revealing information as needed. Your experiences would be clearer
as a result of this.
10. Strong Visual Hierarchies Work Best
A strong visual hierarchy is created when the visual elements on a screen are arranged in a clear
viewing order, so that users consistently see the same objects in the same order. Weak visual
hierarchies give little guidance about where to look, and feel disorganized and confusing.
11. Help People Inline
In the ideal interaction, help is not needed because the interface is usable and learnable. The next best
thing is assistance that is inline and contextual, available only when and where it is required and
concealed at all other times.
12. Build on Other Design Principle
Visual and graphic design, visualization, typography, information architecture, and copywriting are all
part of interface design, and a designer may be trained or versed in any of them. Don't get caught up
in turf battles or dismiss other disciplines; instead, take what you need from them and keep moving
forward.
13. Great Design is Invisible
The interesting thing about good design is that it usually goes unobserved by the people who use it.
One reason for this is that if the design is effective, then the user will be able to concentrate on their
own objectives rather than the interface.
14. Interfaces Exist to be Used
Interface design, like most design disciplines, is effective only when people use what you have created.
Design fails if people choose not to use it, just like a beautiful chair that is painful to sit in. As a
result, interface design is as much about building a user-friendly experience as it is about designing a
useful artifact.
15. A Crucial Moment: The Zero State
The first time a user interacts with an interface is critical, yet designers often ignore it. It is better to
plan for the zero state, the state in which nothing has happened yet, to better support users in getting
up to speed with our designs. The zero state should not be a blank canvas.
Mistakes to Avoid in UI Design

 Not implementing a user-centred design: This is easy to overlook, but it is one of
the most critical aspects of UI design. The user's desires, expectations, and problems
should all be considered when designing.
 Excessive use of dynamic effects: Using a lot of animation effects is not always a sign of
good design. Limiting the use of decorative animations will help to improve
the user experience.
 Planning too much in advance: Particularly in the early stages of design, we just need to
have an appropriate image of the design in our heads and get to work; over-planning
is not always successful.
 Not learning more about the target audience: This point, once again, demonstrates what
we have just discussed. Rather than designing with your own desires and taste in mind,
imagine yourself as the consumer.
Essential Tools for User Interface Design
1. Sketch
It is a design tool used by many UI and UX designers to design and prototype mobile and web
applications. Sketch is a vector graphics editor that permits designers to create user interfaces
efficiently and quickly. There are various features of Sketch:
o Slicing and Exporting
Sketch gives users a lot of slicing control, allowing them to choose slice, and export any layer
or object they want.
o Symbols
Using this feature, user can build pre-designed elements which can be easily re-used as well
as replicated in any artboard or project. This feature will help designers save time and build a
design library for potential projects.
o Plugins
Maybe a feature you are looking for is not available in the default Sketch app. In that
situation, you don't have to worry; there are a number of plugins that can be
downloaded externally and added to the Sketch app. The options are limitless!
2. Adobe XD
It is a vector-based tool. We use this tool for designing interfaces and prototyping for mobile
applications as well as the web. Adobe XD is just like Photoshop and illustrator, but it focuses on user
interface design. Adobe XD has the advantage of including UI kits for Windows, Apple, and Google
Material Design, which helps designers create user interfaces for each device.
Features of Adobe XD
o Voice Trigger
Voice Trigger is an innovative feature introduced by Adobe XD which permits prototypes to
be manipulated via voice commands.
o Responsive Resize
Using this feature, we can automatically adjust and resize objects/elements which are present
on the artboards based on the size of the screen or platform required.
o Collaboration
We can connect Adobe XD to other tools like Slack, allowing the team to collaborate across
platforms like Windows and macOS.
3. Invision Studios
It is a simple vector-based drawing tool with design, animation, and prototyping capabilities. Invision
Studios is a relatively new tool, but it has already demonstrated a high level of ambition through its
numerous available functionalities and remarkable prototyping capabilities.
Features of the Invision Studios
o Advanced Animations
With the various animations provided by studios, animating your prototype has become even
more exciting. We can expect higher fidelity prototypes with this feature, including auto-layer
linking, timeline editions, and smart-swipe gestures.
o Responsive Design
The responsive design feature saves a lot of time because it eliminates the need for multiple
artboards when designing for numerous devices. Invision Studios permits users to create a
single artboard that can be adjusted based on the intended device.
o Synced Workflow
Studios enable a synchronised workflow across all projects, from start to finish, in order to
support team collaboration. This involves real-time changes and live concept collaboration, as
well as the ability to provide instant feedback.
4. UXPin
Another useful user interface design tool is UXPin, which supports both designing and prototyping. In
contrast to other user interface tools, it is considered a better fit for large design teams and projects.
UXPin also comes with UI element libraries that give you access to Material Design, iOS libraries,
Bootstrap, and a variety of icons.
Features of UXPin

 Mobile support
 Collaboration
 Presentation tools
 Drag and Drop
 Mockup Creation
 Prototype Creation
 Interactive Elements
 Feedback Collection
 Feedback Management
5. Framer X
Framer X was released in 2018. It is one of the newest design tools and is used to design digital
products, from mobile applications to websites. The interesting feature of this tool is its ability to
prototype with advanced interactions and animations while also integrating code components.
Features of the Framer X

 From mockup to prototype, all in one canvas
 Framer X supports all types of web fonts
 Pixel-perfect designs with rulers and guides
 Get creative with precise color management

Pattern Based Design


A design pattern is a well-proven solution for solving a specific problem/task. Now, a question may
arise in your mind: what kind of specific problem? Let me explain by taking an example.

Problem Given:
Suppose you want to create a class for which only a single instance (or object) should be created and
that single object can be used by all other classes.

Solution:
The Singleton design pattern is the best solution to the above problem; a minimal sketch is shown below.
Every design pattern has some specification or set of rules for solving its problem. You will see these
specifications later, in the types of design patterns.
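A minimal sketch of the Singleton pattern described above: only one instance of the class can ever be created, and every caller shares that same instance (the class name Configuration is just an example).

public class Configuration {
    // The single shared instance, created eagerly when the class is loaded.
    private static final Configuration INSTANCE = new Configuration();

    private Configuration() {
        // The private constructor prevents other classes from calling "new Configuration()".
    }

    public static Configuration getInstance() {
        return INSTANCE;
    }

    public static void main(String[] args) {
        Configuration a = Configuration.getInstance();
        Configuration b = Configuration.getInstance();
        System.out.println(a == b); // prints: true (both references point to the same object)
    }
}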
Advantage of design pattern:
1. They are reusable in multiple projects.
2. They provide the solutions that help to define the system architecture.
3. They capture the software engineering experiences.
4. They provide transparency to the design of an application.
5. They are well-proven and tested solutions since they have been built upon the
knowledge and experience of expert software developers.
6. Design patterns don't guarantee an absolute solution to a problem. They provide clarity to
the system architecture and the possibility of building a better system.
When should we use the design patterns?
We should use design patterns during the analysis and requirements phase of the SDLC (Software
Development Life Cycle). Design patterns ease the analysis and requirements phase of the SDLC by
providing information based on prior hands-on experience.
Categorization of design patterns:
1. Core Java (or JSE) Design Patterns.
2. JEE Design Patterns.
Core Java Design Patterns
In core java, there are mainly three types of design patterns, which are further divided into their sub-
parts:
1. Creational Design Pattern
1. Factory Pattern
2. Abstract Factory Pattern
3. Singleton Pattern
4. Prototype Pattern
5. Builder Pattern.
2. Structural Design Pattern
1. Adapter Pattern
2. Bridge Pattern
3. Composite Pattern
4. Decorator Pattern
5. Facade Pattern
6. Flyweight Pattern
7. Proxy Pattern
3. Behavioral Design Pattern
1. Chain Of Responsibility Pattern
2. Command Pattern
3. Interpreter Pattern
4. Iterator Pattern
5. Mediator Pattern
6. Memento Pattern
7. Observer Pattern
8. State Pattern
9. Strategy Pattern
10. Template Pattern
11. Visitor Pattern
UNIT III SYSTEM DEPENDABILITY AND SECURITY
Dependable Systems
For many computer-based systems, the most important system property is the dependability of the
system. The dependability of a system reflects the user's degree of trust in that system. It reflects the
extent of the user's confidence that it will operate as users expect and that it will not 'fail' in normal
use. System failures may have widespread effects with large numbers of people affected by the
failure. Systems that are not dependable and are unreliable, unsafe or insecure may be rejected by
their users.
Causes of failure:
Hardware failure
Hardware fails because of design and manufacturing errors or because components have reached the
end of their natural life.
Software failure
Software fails due to errors in its specification, design or implementation.
Operational failure
Human operators make mistakes. This is now perhaps the largest single cause of system failures in
socio-technical systems.
Dependability properties
Principal properties of dependability:

Principal properties:

 Availability: The probability that the system will be up and running and able to deliver
useful services to users.
 Reliability: The probability that the system will correctly deliver services as expected by
users.
 Safety: A judgment of how likely it is that the system will cause damage to people or its
environment.
 Security: A judgment of how likely it is that the system can resist accidental or deliberate
intrusions.
 Resilience: A judgment of how well a system can maintain the continuity of its critical
services in the presence of disruptive events such as equipment failure and cyberattacks.
Other properties of software dependability:

 Repairability reflects the extent to which the system can be repaired in the event of a
failure;
 Maintainability reflects the extent to which the system can be adapted to new
requirements;
 Survivability reflects the extent to which the system can deliver services whilst under
hostile attack;
 Error tolerance reflects the extent to which user input errors can be avoided and
tolerated.
Many dependability attributes depend on one another. Safe system operation depends on the system
being available and operating reliably. A system may be unreliable because its data has been
corrupted by an external attack.
How to achieve dependability?

 Avoid the introduction of accidental errors when developing the system.
 Design V & V processes that are effective in discovering residual errors in the system.
 Design systems to be fault tolerant so that they can continue in operation when faults
occur.
 Design protection mechanisms that guard against external attacks.
 Configure the system correctly for its operating environment.
 Include system capabilities to recognize and resist cyberattacks.
 Include recovery mechanisms to help restore normal system service after a failure.
Dependability costs tend to increase exponentially as increasing levels of dependability are required,
for two reasons: the use of more expensive development techniques and hardware that are required to
achieve the higher levels of dependability, and the increased testing and system validation required to
convince the system client and regulators that those levels have been achieved.
Socio-technical systems
Software engineering is not an isolated activity but is part of a broader systems engineering process.
Software systems are therefore not isolated systems but are essential components of broader systems
that have a human, social or organizational purpose.
 Equipment: hardware devices, some of which may be computers; most devices will include
an embedded system of some kind.
 Operating system: provides a set of common facilities for higher levels in the system.
 Communications and data management: middleware that provides access to remote
systems and databases.
 Application systems: specific functionality to meet some organization requirements.
 Business processes: a set of processes involving people and computer systems that support
the activities of the business.
 Organizations: higher level strategic business activities that affect the operation of the
system.
 Society: laws, regulation and culture that affect the operation of the system.
There are interactions and dependencies between the layers in a system and changes at one
level ripple through the other levels. For dependability, a systems perspective is essential.
Emergent properties
Emergent properties are properties of the system as a whole rather than properties that can be derived
from the properties of components of a system. Emergent properties are a consequence of
the relationships between system components.
Two types of emergent properties:
Functional properties
These appear when all the parts of a system work together to achieve some objective. For example, a
bicycle has the functional property of being a transportation device once it has been assembled from
its components.
Non-functional emergent properties
Examples are reliability, performance, safety, and security. These relate to the behavior of the system
in its operational environment. They are often critical for computer-based systems as failure to
achieve some minimal defined level in these properties may make the system unusable.
Some examples of emergent properties:

Volume: The volume of a system (the total space occupied) varies depending on how the component
assemblies are arranged and connected.

Reliability: System reliability depends on component reliability, but unexpected interactions can
cause new types of failures and therefore affect the reliability of the system.

Security: The security of the system (its ability to resist attack) is a complex property that cannot
be easily measured. Attacks may be devised that were not anticipated by the system designers and so
may defeat built-in safeguards.

Repairability: This property reflects how easy it is to fix a problem with the system once it has been
discovered. It depends on being able to diagnose the problem, access the components that are faulty,
and modify or replace these components.

Usability: This property reflects how easy it is to use the system. It depends on the technical
system components, its operators, and its operating environment.
Regulation and compliance
Many critical systems are regulated systems, which means that their use must be approved by an
external regulator before the systems go into service. Examples: nuclear systems, air traffic control
systems, medical devices. A safety and dependability case has to be approved by the regulator.
Redundancy and diversity
Redundancy: Keep more than a single version of critical components so that if one fails then a
backup is available.
Diversity: Provide the same functionality in different ways in different components so that they will
not fail in the same way.
Redundant and diverse components should be independent so that they will not suffer from 'common-
mode' failures.
Process activities, such as validation, should not depend on a single approach, such as testing, to
validate the system. Redundant and diverse process activities are important, especially for verification
and validation. Multiple different process activities that complement each other and allow cross-
checking help to avoid process errors, which may lead to errors in the software.
Dependable processes
To ensure a minimal number of software faults, it is important to have a well-defined, repeatable
software process. A well-defined repeatable process is one that does not depend entirely on individual
skills; rather can be enacted by different people. Regulators use information about the process to
check if good software engineering practice has been used.
Dependable process characteristics:
Explicitly defined
A process that has a defined process model that is used to drive the software production process. Data
must be collected during the process that proves that the development team has followed the process
as defined in the process model.
Repeatable
A process that does not rely on individual interpretation and judgment. The process can be repeated
across projects and with different team members, irrespective of who is involved in the development.
Dependable process activities

 Requirements reviews to check that the requirements are, as far as possible, complete
and consistent.
 Requirements management to ensure that changes to the requirements are controlled
and that the impact of proposed requirements changes is understood.
 Formal specification, where a mathematical model of the software is created and
analyzed.
 System modeling, where the software design is explicitly documented as a set of
graphical models, and the links between the requirements and these models are
documented.
 Design and program inspections, where the different descriptions of the system are
inspected and checked by different people.
 Static analysis, where automated checks are carried out on the source code of the
program.
 Test planning and management, where a comprehensive set of system tests is designed.
Dependability Properties

 Correctness
 Reliability
 Robustness
 Safety
Correctness

 System behaviour can only be successful or failing
o Example: a program cannot be 30% correct
o A program is correct if all its possible behaviours are successes
 A (hardware/software) system is correct if all its sub-parts (e.g., sub-systems, components,
external libraries, devices) behave correctly
 A system is correct if it is consistent with all its specifications
 Specifications can be very badly defined!
 Correctness is seldom practical
 Failures might not yet be known: zero-day vulnerabilities
Reliability

 Statistical approximation to correctness: the probability that a system deviates from the expected
behaviour
o Likelihood against given specifications
 Unlike correctness it is defined against an operational profile of a software system
 The probability that a given number of users (workload intensity) would access a system/
functionality/service/operation concurrently
 It is a quantitative characterization of how a system will be used
 It shows how to increase productivity and reliability and speed up development by allocating
development resources to functions on the basis of their use
Major Measures of Reliability

 Availability: the proportion of time in which the software operates with no down time
 Time Between Failures: the time elapsing between two consecutive failures
 Cumulative number of failures: the total number of failures that have occurred by a given time
(a small numeric sketch of these measures follows this list)
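The following small sketch (the observation data is entirely made up) shows how availability and the mean time between failures could be computed from an observed operating period.

public class ReliabilityMeasures {
    public static void main(String[] args) {
        double totalHours = 720.0;     // assumed: one month of operation
        double downtimeHours = 3.5;    // assumed: total observed downtime
        int failures = 4;              // assumed: cumulative number of failures in the period

        double availability = (totalHours - downtimeHours) / totalHours;
        double mtbf = (totalHours - downtimeHours) / failures; // mean time between failures

        System.out.printf("Availability: %.4f (%.2f%%)%n", availability, availability * 100);
        System.out.printf("Mean time between failures: %.1f hours%n", mtbf);
    }
}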
Robustness

 A system maintains operations under exceptional circumstances
 It fails “softly” outside its normal operating parameters
 It is “fault tolerant”
 Despite faults, it operates
Example

 Unusual circumstance: an unforeseen (not in the specifications) load of users accessing a web
site
 Robust software applies a workaround: maintain the same throughput by holding back the most
recently arrived users until the load decreases
 It does not decrease performance for already registered users
 Action to be taken to increase robustness: augment the software specifications with appropriate
responses to given unusual circumstances (enrich the operational profile with unlikely
situations)
Safety

 Robustness in case of hazardous behaviour (hazard)
 A hazard is any agent that can cause harm or damage to humans, property, or the environment
 Safety is very focused on functionalities that can determine hazards, not concerned with other
issues on functionalities
 Often needed in critical systems, but not only:
o Word crashes -> reliability or robustness
o Word crashes and corrupts existing files -> safety
Hazard

 Safety is meaningless without a specification of hazards
 It is important to identify and classify hazards for the given software
Hazard - how to detect them

 Do it in the embedded environment (often hazards are related to specific environmental
circumstances)
 Separate the analysis of hazards from any other verification activity

Security

 Reflects a system’s ability to protect itself from attacks
 Security is increasingly important when systems are networked to each other
 Security is an essential pre-requisite for reliability and safety
Effects of security
• If a system is networked and insecure then statements about its reliability and safety are unreliable:
– Intrusion (attack) can change the system’s operating environment or data and invalidate the
assumptions (specifications) upon which the reliability and safety are made
Sociotechnical Systems
Within the STC we adopt a systems view of organisations, represented by the hexagon. It is this
hexagon that lies at the heart of our thinking.
Within a socio-technical systems perspective, any organisation, or part of it, is made up of a set of
interacting sub-systems, as shown in the diagram below. Thus, any organisation employs people with
capabilities, who work towards goals, follow processes, use technology, operate within a physical
infrastructure, and share certain cultural assumptions and norms.

Socio-technical theory has at its core the idea that the design and performance of any organisational
system can only be understood and improved if both ‘social’ and ‘technical’ aspects are brought
together and treated as interdependent parts of a complex system.
Organisational change programmes often fail because they are too focused on one aspect of the
system, commonly technology, and fail to analyse and understand the complex interdependencies that
exist. This is directly analogous to the design of a complex engineering product such as a gas turbine
engine. Just as any change to this complex engineering system has to address the knock-on effects
through the rest of the engine, so too does any change within an organisational system.
There will be few, if any, individuals who understand all the interdependent aspects of how complex
systems work. This is true of complex engineering products and it is equally true of organisational
systems. The implication is that understanding and improvement requires the input of all key
stakeholders, including those who work within different parts of the system. ‘User participation’
thereby is a pre-requisite for systemic understanding and change and, in this perspective, the term
‘user’ is broadly defined to include all key stakeholders.
The potential benefits of such an approach include:

 Strong engagement
 Reliable and valid data on which to build understanding
 A better understanding and analysis of how the system works now (the ‘as is’)
 A more comprehensive understanding of how the system may be improved (the ‘to
be’)
 Greater chance of successful improvements
The socio-technical perspective originates from pioneering work at the Tavistock Institute and has
been continued on a worldwide basis by key figures such as Harold Leavitt, Albert Cherns, Ken
Eason, Enid Mumford and many others.
Our use of the hexagon draws heavily on the work of Harold, J. Leavitt who viewed organisations as
comprising four key interacting variables, namely task, structure, technology and people (actors).

We have used this systems approach in a wide range of domains including overlapping projects
focused on:

 Computer systems
 New buildings
 New ways of working
 New services
 Behaviour change
 Safety and accidents
 Crowd behaviours
 Organisational resilience
 Sustainability (energy, water and waste)
 Green behaviours at work and in the home
 Engineering design
 Knowledge management
 Tele-health
 Social networks
 Organisational modelling and simulation
 Supply chain innovation
 Risk analysis
 Performance and productivity
 Process compliance
A systems perspective is an intellectually robust and useful way of looking at organisations. It speaks
well to our clients and provides a coherent vehicle for collaboration with other disciplines, most
obviously with our engineering colleagues. Our experience is that most of the difficult problems and
exciting opportunities we face in the world lie at the intersections between human behaviour and
engineering innovation. Systems theory provides a useful tool to help us understand and address these
challenges.
Redundancy and Diversity
Redundancy
In engineering, redundancy is the intentional duplication of critical components or functions of a
system with the goal of increasing reliability of the system, usually in the form of a backup or fail-
safe, or to improve actual system performance, such as in the case of GNSS receivers, or multi-
threaded computer processing.
In many safety-critical systems, such as fly-by-wire and hydraulic systems in aircraft, some parts of
the control system may be triplicated, which is formally termed triple modular redundancy (TMR).
An error in one component may then be out-voted by the other two. In a triply redundant system, the
system has three subcomponents, all three of which must fail before the system fails. Since each one
rarely fails, and the subcomponents are expected to fail independently, the probability of all three
failing is calculated to be extraordinarily small; it is often outweighed by other risk factors, such
as human error.
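A minimal sketch of a TMR-style majority voter (the "replicas" are just hypothetical functions here): three independent replicas compute the same result and a simple vote masks a single faulty replica.

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.IntUnaryOperator;

public class TmrVoter {
    static int vote(List<IntUnaryOperator> replicas, int input) {
        // Tally the results produced by the replicas.
        Map<Integer, Integer> tally = new HashMap<>();
        for (IntUnaryOperator replica : replicas) {
            tally.merge(replica.applyAsInt(input), 1, Integer::sum);
        }
        // Return the most frequently produced value (the majority when at least two replicas agree).
        return tally.entrySet().stream()
                .max(Map.Entry.comparingByValue())
                .orElseThrow()
                .getKey();
    }

    public static void main(String[] args) {
        List<IntUnaryOperator> replicas = List.of(
                x -> x * 2,       // healthy replica
                x -> x * 2,       // healthy replica
                x -> x * 2 + 1    // faulty replica producing a wrong answer
        );
        System.out.println(vote(replicas, 21)); // prints: 42 (the faulty replica is out-voted)
    }
}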

A suspension bridge's numerous cables are a form of redundancy.


Redundancy sometimes produces less rather than greater reliability: it creates a more complex system
that is prone to various issues, it may lead to human neglect of duty, and it may lead to higher
production demands which, by overstressing the system, may make it less safe.
Forms of redundancy

 Hardware redundancy, such as dual modular redundancy and triple modular redundancy
 Information redundancy, such as error detection and correction methods
 Time redundancy, performing the same operation multiple times such as multiple
executions of a program or multiple copies of data transmitted
 Software redundancy such as N-version programming
A modified form of software redundancy, applied to hardware may be:
 Distinct functional redundancy, such as both mechanical and hydraulic braking in a car.
Applied in the case of software, code written independently and distinctly different but
producing the same results for the same inputs.
Structures are usually designed with redundant parts as well, ensuring that if one part fails, the entire
structure will not collapse. A structure without redundancy is called fracture-critical, meaning that a
single broken component can cause the collapse of the entire structure. Bridges that failed due to lack
of redundancy include the Silver Bridge and the Interstate 5 bridge over the Skagit River.
Parallel and combined systems demonstrate different level of redundancy. The models are subject of
studies in reliability and safety engineering.
Dissimilar redundancy
Unlike traditional redundancy, which uses more than one of the same thing, dissimilar redundancy
uses different things. The idea is that the different things are unlikely to contain identical flaws. The
voting method may involve additional complexity if the two things take different amounts of time.
Dissimilar redundancy is often used with software, because identical software contains identical
flaws.
The chance of failure is reduced by using at least two different types of each of the following

 processors,
 operating systems,
 software,
 sensors,
 types of actuators (electric, hydraulic, pneumatic, manual mechanical, etc.)
 communications protocols,
 communications hardware,
 communications networks,
 communications paths
Geographic redundancy
Geographic redundancy addresses the vulnerability of co-located redundant devices by geographically
separating backup devices. It reduces the likelihood that events such as power outages, floods, HVAC
failures, lightning strikes, tornadoes, building fires, wildfires, and mass shootings would disable the
system.
Geographic redundancy locations can be:
 more than 62 miles (100 km) apart (continental),
 more than 62 miles apart and less than 93 miles (150 km) apart,
 less than 62 miles apart, but not on the same campus, or
 different buildings that are more than 300 feet (91 m) apart on the same campus.
The following methods can reduce the risk of damage from a fire conflagration:
 large buildings at least 80 feet (24 m) apart
 high-rise buildings at least 82 feet (25 m) apart
 open spaces clear of flammable vegetation within 200 feet (61 m) on each side of objects
 different wings of the same building, in rooms that are separated by more than 300 feet (91 m)
 different floors on the same wing of a building, in rooms that are horizontally offset by a minimum of 70 feet (21 m), with fire walls between the rooms on the different floors
 two rooms separated by another room, leaving at least a 70-foot (21 m) gap between the two rooms
 a minimum of two separated fire walls, on opposite sides of a corridor
The Distant Early Warning Line was an example of Geographic redundancy. Those radar sites were a
minimum of 50 miles (80 km) apart, but provided overlapping coverage.
Functions of redundancy
The two functions of redundancy are passive redundancy and active redundancy. Both use extra
capacity to prevent performance decline from exceeding specification limits without human
intervention. Passive redundancy uses excess capacity to reduce the impact of component failures.
One common form of passive redundancy is the extra strength of cabling and struts used in bridges.
This extra strength allows some structural components to fail without bridge collapse. The extra
strength used in the design is called the margin of safety.
Eyes and ears provide working examples of passive redundancy. Vision loss in one eye does not cause
blindness but depth perception is impaired. Hearing loss in one ear does not cause deafness but
directionality is lost. Performance decline is commonly associated with passive redundancy when a
limited number of failures occur.
Active redundancy eliminates performance declines by monitoring the performance of individual
devices, and this monitoring is used in voting logic. The voting logic is linked to switching that
automatically reconfigures the components. Error detection and correction and the Global Positioning
System (GPS) are two examples of active redundancy.
Electrical power systems use power scheduling to reconfigure active redundancy. Computing systems
adjust the production output of each generating facility when other generating facilities are suddenly
lost. This prevents blackout conditions during major events such as an earthquake. Fire alarms,
burglary alarms, telephone central office exchanges, and other similar critical systems operate on
DC power.
Disadvantages
Charles Perrow, author of Normal Accidents, has said that sometimes redundancies backfire and
produce less, not more reliability. This may happen in three ways: First, redundant safety devices
result in a more complex system, more prone to errors and accidents. Second, redundancy may lead to
shirking of responsibility among workers. Third, redundancy may lead to increased production
pressures, resulting in a system that operates at higher speeds, but less safely.
Diversity
Software diversity is a research field concerned with understanding and engineering diversity in the
context of software. The different areas of software diversity are discussed in surveys on diversity for
fault-tolerance or for security. A recent survey emphasizes the most recent advances in the field.
The main areas are:
 design diversity, N-version programming, and data diversity for fault tolerance
 randomization
 software variability
Domains
Software can be diversified in most domains:
 in firmware of embedded systems and sensors
 in internet applications
 in mobile applications
 in browser applications, including those using WebAssembly
Techniques
Code transformations
It is possible to amplify software diversity through automated transformation processes that create
synthetic diversity. A "multicompiler" is a compiler embedding a diversification engine.[10] A multi-
variant execution environment (MVEE) is responsible for selecting the variant to execute and
comparing the outputs.
Fred Cohen was among the very early promoters of such an approach. He proposed a series of
rewriting and code reordering transformations that aim at producing massive quantities of different
versions of operating systems functions.
Another approach to increasing software diversity for protection consists in adding randomness to
certain core processes, such as memory loading. Randomness implies that all versions of the same
program run differently from each other, which in turn creates a diversity of program behaviors. This
idea was initially proposed and experimented with by Stephanie Forrest and her colleagues.
Transformation operators include:
 code layout randomization: reorder functions in code
 globals layout randomization: reorder and pad globals
 stack variable randomization: reorder variables in each stack frame
 heap layout randomization
As exploring the space of diverse programs is computationally expensive, finding efficient strategies
to conduct this exploration is important. To do so, recent work studies plastic regions in software
code: plastic regions are those parts of the code that are more amenable to change without disrupting
the functionality provided by the piece of software. These regions can be specifically targeted by
automatic code transformation to create artificial diversity in existing software.
Natural software diversity
It is known that some functionalities are available in multiple interchangeable implementations; this
has been called natural software diversity. For example, a diversity of libraries that implement similar
features naturally emerges in software repositories. This natural diversity can be exploited.
Dependable Processes
Dependability is a measure of a system's availability, reliability, maintainability, and in some cases,
other characteristics such as durability, safety and security. In real-time computing, dependability is
the ability to provide services that can be trusted within a time-period. The service guarantees must
hold even when the system is subject to attacks or natural failures.
The International Electrotechnical Commission (IEC), via its Technical Committee TC 56, develops
and maintains international standards that provide systematic methods and tools for dependability
assessment and management of equipment, services, and systems throughout their life cycles.
Dependability can be broken down into three elements:
 Attributes - a way to assess the dependability of a system
 Threats - an understanding of the things that can affect the dependability of a system
 Means - ways to increase a system's dependability
Elements of dependability
Attributes
Taxonomy showing relationship between Dependability & Security and Attributes, Threats and
Means
Attributes are qualities of a system. These can be assessed to determine its overall dependability
using Qualitative or Quantitative measures. Avizienis et al. define the following Dependability
Attributes:
 Availability - readiness for correct service
 Reliability - continuity of correct service
 Safety - absence of catastrophic consequences on the user(s) and the environment
 Integrity - absence of improper system alteration
 Maintainability - ability for easy maintenance (repair)
As these definitions suggest, only Availability and Reliability are quantifiable by direct
measurements, whilst the others are more subjective. For instance, Safety cannot be measured directly
via metrics but is a subjective assessment that requires judgmental information to be applied to give a
level of confidence, whilst Reliability can be measured as failures over time.
Confidentiality, i.e. the absence of unauthorized disclosure of information, is also used when
addressing security. Security is a composite of Confidentiality, Integrity, and Availability. Security is
sometimes classed as an attribute but the current view is to aggregate it together with dependability
and treat Dependability as a composite term called Dependability and Security.
Practically, applying security measures to the appliances of a system generally improves the
dependability by limiting the number of externally originated errors.
Threats
Threats are things that can affect a system and cause a drop in Dependability. There are three main
terms that must be clearly understood:
 Fault: A fault (which is usually referred to as a bug for historic reasons) is a defect in
a system. The presence of a fault in a system may or may not lead to a failure. For
instance, although a system may contain a fault, its input and state conditions may
never cause this fault to be executed so that an error occurs; and thus that particular
fault never exhibits as a failure.
 Error: An error is a discrepancy between the intended behavior of a system and its
actual behavior inside the system boundary. Errors occur at runtime when some part
of the system enters an unexpected state due to the activation of a fault. Since errors
are generated from invalid states they are hard to observe without special
mechanisms, such as debuggers or debug output to logs.
 Failure: A failure is an instance in time when a system displays behavior that is
contrary to its specification. An error may not necessarily cause a failure, for instance
an exception may be thrown by a system but this may be caught and handled using
fault tolerance techniques so the overall operation of the system will conform to the
specification.
It is important to note that Failures are recorded at the system boundary. They are basically Errors that
have propagated to the system boundary and have become observable. Faults, Errors and Failures
operate according to a mechanism. This mechanism is sometimes known as a Fault-Error-Failure
chain. Once a fault is activated an error is created. An error may act in the same way as a fault in that
it can create further error conditions, therefore an error may propagate multiple times within a system
boundary without causing an observable failure. If an error propagates outside the system boundary a
failure is said to occur. A failure is basically the point at which it can be said that a service is failing to
meet its specification.
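A small, hypothetical illustration of this chain: the code below contains a fault that stays dormant for most inputs, is activated into an error by one particular input, and only becomes a failure if the error escapes the system boundary unhandled.

# FAULT: percentage() has no guard against whole == 0; the defect is dormant
# until an input activates it.
def percentage(part: float, whole: float) -> float:
    return 100 * part / whole

# System boundary with a fault tolerance mechanism: the activated fault raises
# an error (ZeroDivisionError) inside the boundary, but the error is caught, so
# no externally observable failure occurs and the service degrades gracefully.
def report(part: float, whole: float) -> str:
    try:
        return f"{percentage(part, whole):.1f}%"
    except ZeroDivisionError:
        return "n/a"  # assumed to be a specification-conforming degraded output

print(report(3, 4))  # fault present but not activated -> "75.0%"
print(report(3, 0))  # fault activated -> error caught -> no observable failure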
Means
Since the mechanism of the Fault-Error-Failure chain is understood, it is possible to construct means to
break these chains and thereby increase the dependability of a system. Four means have been identified
so far:
 Prevention
 Removal
 Forecasting
 Tolerance
Fault Prevention deals with preventing faults being introduced into a system. This can be
accomplished by use of development methodologies and good implementation techniques.
Fault Removal can be sub-divided into two sub-categories: Removal During Development and
Removal During Use. Removal during development requires verification so that faults can be detected
and removed before a system is put into production. Once systems have been put into production, a
system is needed to record failures and remove them via a maintenance cycle.
Fault Forecasting predicts likely faults so that they can be removed or their effects can be
circumvented.
Fault Tolerance deals with putting mechanisms in place that will allow a system to still deliver the
required service in the presence of faults, although that service may be at a degraded level.
Persistence
Based on how faults appear or persist, they are classified as:
 Transient: They appear without apparent cause and disappear again without apparent cause
 Intermittent: They appear multiple times, possibly without a discernible pattern, and
disappear on their own
 Permanent: Once they appear, they do not get resolved on their own
Dependability of information systems and survivability
Some works on dependability use structured information systems, e.g. with SOA, to introduce the
attribute survivability, thus taking into account the degraded services that an Information System
sustains or resumes after a non-maskable failure.
The flexibility of current frameworks encourages system architects to enable reconfiguration
mechanisms that refocus the available, safe resources to support the most critical services, rather than
over-provisioning to build failure-proof systems.
Formal Methods and Dependability
Formal methods are mathematically rigorous techniques for the specification, development,
and verification of software and hardware systems. The use of formal methods for software and
hardware design is motivated by the expectation that, as in other engineering disciplines, performing
appropriate mathematical analysis can contribute to the reliability and robustness of a design.
Formal methods employ a variety of theoretical computer science fundamentals,
including logic calculi, formal languages, automata theory, control theory, program semantics, type
systems, and type theory.
Background
Semi-formal methods are formalisms and languages that are not considered fully "formal". They defer
the task of completing the semantics to a later stage, which is then done either by human
interpretation or by interpretation through software such as code or test case generators.
Taxonomy
Formal methods can be used at a number of levels:
Level 0: Formal specification may be undertaken and then a program developed from this informally.
This has been dubbed formal methods lite. This may be the most cost-effective option in many cases.
Level 1: Formal development and formal verification may be used to produce a program in a more
formal manner. For example, proofs of properties or refinement from the specification to a program
may be undertaken. This may be most appropriate in high-integrity systems
involving safety or security.
Level 2: Theorem provers may be used to undertake fully formal machine-checked proofs. Despite
improving tools and declining costs, this can be very expensive and is only practically worthwhile if
the cost of mistakes is very high (e.g., in critical parts of operating system or microprocessor design).
Further information on this is expanded below.
As with programming language semantics, styles of formal methods may be roughly classified as
follows:
 Denotational semantics, in which the meaning of a system is expressed in the mathematical
theory of domains. Proponents of such methods rely on the well-understood nature of
domains to give meaning to the system; critics point out that not every system may be
intuitively or naturally viewed as a function.
 Operational semantics, in which the meaning of a system is expressed as a sequence of
actions of a (presumably) simpler computational model. Proponents of such methods point to
the simplicity of their models as a means to expressive clarity; critics counter that the problem
of semantics has just been delayed (who defines the semantics of the simpler model?).
 Axiomatic semantics, in which the meaning of the system is expressed in terms
of preconditions and postconditions which are true before and after the system performs a
task, respectively. Proponents note the connection to classical logic; critics note that such
semantics never really describe what a system does (merely what is true before and
afterwards).
Lightweight formal methods
Some practitioners believe that the formal methods community has overemphasized full formalization
of a specification or design. They contend that the expressiveness of the languages involved, as well as
the complexity of the systems being modelled, make full formalization a difficult and expensive task.
As an alternative, various lightweight formal methods, which emphasize partial specification and
focused application, have been proposed.
Uses
Specification
Formal methods may be used to give a description of the system to be developed, at whatever level(s)
of detail desired. This formal description can be used to guide further development activities (see
following sections); additionally, it can be used to verify that the requirements for the system being
developed have been completely and accurately specified, or to formalise system requirements by
expressing them in a formal language with a precise and unambiguously defined syntax and
semantics.
Development
Formal development is the use of formal methods as an integrated part of a tool-supported system
development process.
Once a formal specification has been produced, the specification may be used as a guide while the
concrete system is developed during the design process. For example:
 If the formal specification is in operational semantics, the observed behavior of the concrete
system can be compared with the behavior of the specification (which itself should be
executable or simulatable). Additionally, the operational commands of the specification may
be amenable to direct translation into executable code.
 If the formal specification is in axiomatic semantics, the preconditions and postconditions of
the specification may become assertions in the executable code.
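As a minimal sketch of the second bullet (an illustration under assumed pre/postconditions, not a statement about any particular formal method or tool), conditions taken from an axiomatic specification can be carried into the implementation as executable assertions:

def integer_sqrt(n: int) -> int:
    # Precondition from the (hypothetical) specification: n >= 0
    assert n >= 0, "precondition violated: n must be non-negative"
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
    # Postcondition: r is the largest integer whose square does not exceed n
    assert r * r <= n < (r + 1) * (r + 1), "postcondition violated"
    return r

print(integer_sqrt(17))  # 4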
Verification
Formal verification is the use of software tools to prove properties of a formal specification, or to
prove that a formal model of a system implementation satisfies its specification. Once a formal
specification has been developed, the specification may be used as the basis for proving properties of
the specification and, by inference, properties of the system implementation.
Sign-off verification
Sign-off verification is the use of a formal verification tool that is highly trusted. Such a tool can
replace traditional verification methods (the tool may even be certified).
Human-directed proof
Sometimes, the motivation for proving the correctness of a system is not the obvious need for
reassurance of the correctness of the system, but a desire to understand the system better.
Consequently, some proofs of correctness are produced in the style of mathematical proof:
handwritten (or typeset) using natural language, using a level of informality common to such proofs.
A "good" proof is one that is readable and understandable by other human readers.
Critics of such approaches point out that the ambiguity inherent in natural language allows errors to
be undetected in such proofs; often, subtle errors can be present in the low-level details typically
overlooked by such proofs.
Automated proof
In contrast, there is increasing interest in producing proofs of correctness of such systems by
automated means. Automated techniques fall into three general categories:
 Automated theorem proving, in which a system attempts to produce a formal proof from
scratch, given a description of the system, a set of logical axioms, and a set of inference rules.
 Model checking, in which a system verifies certain properties by means of an exhaustive
search of all possible states that a system could enter during its execution.
 Abstract interpretation, in which a system verifies an over-approximation of a behavioural
property of the program, using a fixpoint computation over a (possibly complete) lattice
representing it.
Some automated theorem provers require guidance as to which properties are "interesting" enough to
pursue, while others work without human intervention. Model checkers can quickly get bogged down
in checking millions of uninteresting states if not given a sufficiently abstract model.
Critics note that some of those systems are like oracles: they make a pronouncement of truth, yet give
no explanation of that truth. There is also the problem of "verifying the verifier"; if the program which
aids in the verification is itself unproven, there may be reason to doubt the soundness of the produced
results. Some modern model checking tools produce a "proof log" detailing each step in their proof,
making it possible to perform, given suitable tools, independent verification.
The main feature of the abstract interpretation approach is that it provides a sound analysis, i.e. no
false negatives are returned. Moreover, it is efficiently scalable, by tuning the abstract domain
representing the property to be analyzed, and by applying widening operators to get fast convergence.
Applications
Formal methods are applied in different areas of hardware and software, including routers, Ethernet
switches, routing protocols, security applications, and operating system microkernels such as seL4.
There are several examples in which they have been used to verify the functionality of the hardware
and software used in DCs. IBM used ACL2, a theorem prover, in the AMD x86 processor
development process. Intel uses such methods to verify its hardware and firmware (permanent
software programmed into a read-only memory). Dansk Datamatik Center used formal methods in the
1980s to develop a compiler system for the Ada programming language that went on to become a
long-lived commercial product.
In software development
In software development, formal methods are mathematical approaches to solving software (and
hardware) problems at the requirements, specification, and design levels. Formal methods are most
likely to be applied to safety-critical or security-critical software and systems, such as avionics
software. Software safety assurance standards, such as DO-178C, allow the usage of formal methods
through supplementation, and Common Criteria mandates formal methods at the highest levels of
categorization.
Another approach to formal methods in software development is to write a specification in some form
of logic—usually a variation of first-order logic (FOL)—and then to directly execute the logic as
though it were a program. The OWL language, based on Description Logic (DL), is an example.
There is also work on mapping some version of English (or another natural language) automatically to
and from logic, as well as executing the logic directly. Examples are Attempto Controlled English,
and Internet Business Logic, which do not seek to control the vocabulary or syntax.
Reliability Engineering
Reliability engineering is a sub-discipline of systems engineering that emphasizes the ability of
equipment to function without failure. Reliability describes the ability of a system or component to
function under stated conditions for a specified period of time. Reliability is closely related
to availability, which is typically described as the ability of a component or system to function at a
specified moment or interval of time.
Reliability engineering deals with the prediction, prevention and management of high levels of
lifetime engineering uncertainty and risks of failure. Although stochastic parameters define and affect
reliability, reliability is not only achieved by mathematics and statistics. "Nearly all teaching and
literature on the subject emphasize these aspects, and ignore the reality that the ranges of uncertainty
involved largely invalidate quantitative methods for prediction and measurement."
Objective
1) To apply engineering knowledge and specialist techniques to prevent or to reduce the
likelihood or frequency of failures.
2) To identify and correct the causes of failures that do occur despite the efforts to prevent
them.
3) To determine ways of coping with failures that do occur, if their causes have not been
corrected.
4) To apply methods for estimating the likely reliability of new designs, and for analysing
reliability data.
The reason for the priority emphasis is that it is by far the most effective way of working, in terms of
minimizing costs and generating reliable products. The primary skills that are required, therefore, are
the ability to understand and anticipate the possible causes of failures, and knowledge of how to
prevent them. It is also necessary to have knowledge of the methods that can be used for analysing
designs and data.
Scope and techniques
Reliability engineering for complex systems requires a different, more elaborate systems approach
than for non-complex systems. Reliability engineering may in that case involve:
 System availability and mission readiness analysis and related reliability and maintenance
requirement allocation
 Functional system failure analysis and derived requirements specification
 Inherent (system) design reliability analysis and derived requirements specification for both
hardware and software design
 System diagnostics design
 Fault tolerant systems (e.g. by redundancy)
 Predictive and preventive maintenance (e.g. reliability-centered maintenance)
 Human factors / human interaction / human errors
 Manufacturing- and assembly-induced failures (effect on the detected "0-hour quality" and
reliability)
 Maintenance-induced failures
 Transport-induced failures
 Storage-induced failures
 Use (load) studies, component stress analysis, and derived requirements specification
 Software (systematic) failures
 Failure / reliability testing (and derived requirements)
 Field failure monitoring and corrective actions
 Spare parts stocking (availability control)
 Technical documentation, caution and warning analysis
 Data and information acquisition/organisation (creation of a general reliability development
hazard log and FRACAS system)
 Chaos engineering
Effective reliability engineering requires understanding of the basics of failure mechanisms for which
experience, broad engineering skills and good knowledge from many different special fields of
engineering are required, for example:
 Tribology
 Stress (mechanics)
 Fracture mechanics / fatigue
 Thermal engineering
 Fluid mechanics / shock-loading engineering
 Electrical engineering
 Chemical engineering (e.g. corrosion)
 Material science
Definitions
Reliability may be defined in the following ways:
 The idea that an item is fit for a purpose with respect to time
 The capacity of a designed, produced, or maintained item to perform as required over time
 The capacity of a population of designed, produced or maintained items to perform as
required over time
 The resistance to failure of an item over time
 The probability of an item to perform a required function under stated conditions for a
specified period of time
 The durability of an object
Basics of a reliability assessment
Many engineering techniques are used in reliability risk assessments, such as reliability block
diagrams, hazard analysis, failure mode and effects analysis (FMEA),[12] fault tree
analysis (FTA), Reliability Centered Maintenance, (probabilistic) load and material stress and wear
calculations, (probabilistic) fatigue and creep analysis, human error analysis, manufacturing defect
analysis, reliability testing, etc.
Consistent with the creation of safety cases, for example per ARP4761, the goal of reliability
assessments is to provide a robust set of qualitative and quantitative evidence that use of a component
or system will not be associated with unacceptable risk. The basic steps to take[13] are to:
 Thoroughly identify relevant unreliability "hazards", e.g. potential conditions, events, human
errors, failure modes, interactions, failure mechanisms and root causes, by specific analysis or
tests.
 Assess the associated system risk, by specific analysis or testing.
 Propose mitigation, e.g. requirements, design changes, detection logic, maintenance, training,
by which the risks may be lowered and controlled for at an acceptable level.
 Determine the best mitigation and get agreement on final, acceptable risk levels, possibly
based on cost/benefit analysis.
Risk here is the combination of probability and severity of the failure incident (scenario) occurring.
The severity can be looked at from a system safety or a system availability point of view. Reliability
for safety can be thought of as a very different focus from reliability for system availability.
Reliability and availability program plan
Implementing a reliability program is not simply a software purchase; it is not just a checklist of items
that must be completed that will ensure one has reliable products and processes. A reliability program
is a complex learning and knowledge-based system unique to one's products and processes. It is
supported by leadership, built on the skills that one develops within a team, integrated into business
processes and executed by following proven standard work practices.[14]
A reliability program plan is used to document exactly what "best practices" (tasks, methods, tools,
analysis, and tests) are required for a particular (sub)system, as well as clarify customer requirements
for reliability assessment. For large-scale complex systems, the reliability program plan should be a
separate document. Resource determination for manpower and budgets for testing and other tasks is
critical for a successful program. In general, the amount of work required for an effective program for
complex systems is large.
Reliability requirements
For any system, one of the first tasks of reliability engineering is to adequately specify the reliability
and maintainability requirements allocated from the overall availability needs and, more importantly,
derived from proper design failure analysis or preliminary prototype test results. Clear requirements
(able to be designed to) should constrain the designers from designing particular unreliable items /
constructions / interfaces / systems. Setting only availability, reliability, testability, or maintainability
targets (e.g., max. failure rates) is not appropriate. This is a broad misunderstanding about Reliability
Requirements Engineering. Reliability requirements address the system itself, including test and
assessment requirements, and associated tasks and documentation.
Reliability culture / human errors / human factors
In practice, most failures can be traced back to some type of human error, for example:
 Management decisions (e.g. in budgeting, timing, and required tasks)
 Systems Engineering: Use studies (load cases)
 Systems Engineering: Requirement analysis / setting
 Systems Engineering: Configuration control
 Assumptions
 Calculations / simulations / FEM analysis
 Design
 Design drawings
 Testing (e.g. incorrect load settings or failure measurement)
 Statistical analysis
 Manufacturing
 Quality control
 Maintenance
 Maintenance manuals
 Training
 Classifying and ordering of information
 Feedback of field information (e.g. incorrect or too vague)
Reliability prediction combines:
 creation of a proper reliability model (see further on this page)
 estimation (and justification) of input parameters for this model (e.g. failure rates for a
particular failure mode or event and the mean time to repair the system for a particular
failure)
 estimation of output reliability parameters at system or part level (i.e. system availability or frequency of a particular functional failure)
The emphasis on quantification and target setting (e.g. MTBF) might imply there is a limit to
achievable reliability; however, there is no inherent limit, and development of higher reliability does
not need to be more costly.
Design for reliability
Design for Reliability (DfR) is a process that encompasses tools and procedures to ensure that a
product meets its reliability requirements, under its use environment, for the duration of its lifetime.
DfR is implemented in the design stage of a product to proactively improve product reliability. DfR is
often used as part of an overall Design for Excellence (DfX) strategy.
Statistics-based approach (i.e. MTBF)
Reliability design begins with the development of a (system) model. Reliability and availability
models use block diagrams and Fault Tree Analysis to provide a graphical means of evaluating the
relationships between different parts of the system. These models may incorporate predictions based
on failure rates taken from historical data. While the (input data) predictions are often not accurate in
an absolute sense, they are valuable to assess relative differences in design alternatives.
The most important fundamental initiating causes and failure mechanisms are to be identified and
analyzed with engineering tools. A diverse set of practical guidance as to performance and reliability
should be provided to designers so that they can generate low-stressed designs and products that
protect, or are protected against, damage and excessive wear. Proper validation of input loads
(requirements) may be needed, in addition to verification for reliability "performance" by testing.
A fault tree diagram
Physics-of-failure-based approach
For electronic assemblies, there has been an increasing shift towards a different approach
called physics of failure. This technique relies on understanding the physical static and dynamic
failure mechanisms. It accounts for variation in load, strength, and stress that lead to failure with a
high level of detail, made possible with the use of modern finite element method (FEM) software
programs that can handle complex geometries and mechanisms such as creep, stress relaxation,
fatigue, and probabilistic design (Monte Carlo Methods/DOE).
Reliability modeling
Reliability modeling is the process of predicting or understanding the reliability of a component or
system prior to its implementation. Two types of analysis that are often used to model a complete
system's availability behavior including effects from logistics issues like spare part provisioning,
transport and manpower are fault tree analysis and reliability block diagrams.
A reliability block diagram showing a "1oo3" (1 out of 3) redundant designed subsystem
For part level predictions, two separate fields of investigation are common:
 The physics of failure approach uses an understanding of the physical failure mechanisms involved, such as mechanical crack propagation or chemical corrosion degradation or failure;
 The parts stress modelling approach is an empirical method for prediction based on counting the number and type of components of the system, and the stress they undergo during operation.
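For a reliability block diagram like the 1oo3 arrangement mentioned above, a back-of-the-envelope sketch (assuming independent blocks with known reliabilities, which is an idealisation) multiplies reliabilities for series structures and combines unreliabilities for parallel structures:

from math import prod

def series_reliability(reliabilities):
    # All blocks must work: R = R1 * R2 * ... * Rn (independence assumed)
    return prod(reliabilities)

def parallel_reliability(reliabilities):
    # At least one block must work (e.g. 1oo3): R = 1 - (1-R1)(1-R2)(1-R3)
    return 1 - prod(1 - r for r in reliabilities)

print(series_reliability([0.99, 0.95, 0.98]))    # about 0.922
print(parallel_reliability([0.90, 0.90, 0.90]))  # 1oo3 example: 0.999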
Reliability theory
Reliability is defined as the probability that a device will perform its intended function during a
specified period of time under stated conditions. Mathematically, this may be expressed as
R(t) = Pr{T > t} = ∫_t^∞ f(x) dx, where f(x) is the failure probability density function and t is the
length of the period of time (which is assumed to start from time zero).
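As a worked special case (an illustrative assumption, not part of the source), a constant failure rate λ gives the familiar exponential model:

% exponential (constant failure rate) special case, for illustration only
f(x) = \lambda e^{-\lambda x}, \qquad
R(t) = \int_{t}^{\infty} \lambda e^{-\lambda x}\,dx = e^{-\lambda t}, \qquad
\mathrm{MTTF} = \int_{0}^{\infty} R(t)\,dt = \frac{1}{\lambda}

For example, a component with an assumed λ of 10^-4 failures per hour has R(1000 h) = e^-0.1 ≈ 0.905 and an MTTF of 10,000 hours.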
There are a few key elements of this definition:
1) Reliability is predicated on "intended function:" Generally, this is taken to mean operation
without failure. However, even if no individual part of the system fails, but the system as
a whole does not do what was intended, then it is still charged against the system
reliability. The system requirements specification is the criterion against which reliability
is measured.
2) Reliability applies to a specified period of time. In practical terms, this means that a
system has a specified chance that it will operate without failure before time t. Reliability
engineering ensures that components and materials will meet the requirements during the
specified time. Note that units other than time may sometimes be used (e.g. "a mission",
"operation cycles").
3) Reliability is restricted to operation under stated (or explicitly defined) conditions. This
constraint is necessary because it is impossible to design a system for unlimited
conditions. A Mars Rover will have different specified conditions than a family car. The
operating environment must be addressed during design and testing.
4) Two notable references on reliability theory and its mathematical and statistical
foundations are Barlow, R. E. and Proschan, F. (1982) and Samaniego, F. J. (2007).
Quantitative system reliability parameters—theory
Quantitative requirements are specified using reliability parameters. The most common reliability
parameter is the mean time to failure (MTTF), which can also be specified as the failure rate (this is
expressed as a frequency or conditional probability density function (PDF)) or the number of failures
during a given period.
In other cases, reliability is specified as the probability of mission success. For example, reliability of
a scheduled aircraft flight can be specified as a dimensionless probability or a percentage, as often
used in system safety engineering.
A special case of mission success is the single-shot device or system. These are devices or systems
that remain relatively dormant and only operate once. Examples include automobile airbags,
thermal batteries and missiles. Single-shot reliability is specified as a probability of one-time success
or is subsumed into a related parameter.
Reliability testing
The purpose of reliability testing is to discover potential problems with the design as early as possible
and, ultimately, provide confidence that the system meets its reliability requirements.
Reliability testing may be performed at several levels and there are different types of testing. Complex
systems may be tested at component, circuit board, unit, assembly, subsystem and system
levels.[23] (The test level nomenclature varies among applications.)
With each test, both a statistical type 1 and type 2 error could be made, depending on sample size,
test time, assumptions and the needed discrimination ratio. There is a risk of incorrectly accepting a
bad design (type 1 error) and a risk of incorrectly rejecting a good design (type 2 error).
It is not always feasible to test all system requirements. Some systems are prohibitively expensive to
test; some failure modes may take years to observe; some complex interactions result in a huge
number of possible test cases; and some tests require the use of limited test ranges or other resources.
Reliability test requirements
Reliability test requirements can follow from any analysis for which the first estimate of failure
probability, failure mode or effect needs to be justified. Evidence can be generated with some level of
confidence by testing. With software-based systems, the probability is a mix of software and
hardware-based failures. Testing reliability requirements is problematic for several reasons. A single
test is in most cases insufficient to generate enough statistical data. Multiple tests or long-duration
tests are usually very expensive.
Reliability engineering is used to design a realistic and affordable test program that provides empirical
evidence that the system meets its reliability requirements. Statistical confidence levels are used to
address some of these concerns. A certain parameter is expressed along with a corresponding
confidence level.
Accelerated testing
The purpose of accelerated life testing (ALT test) is to induce field failure in the laboratory at a much
faster rate by providing a harsher, but nonetheless representative, environment. In such a test, the
product is expected to fail in the lab just as it would have failed in the field—but in much less time.
The main objective of an accelerated test is either of the following:
 To discover failure modes
 To predict the normal field life from the high stress lab life
An accelerated testing program can be broken down into the following steps:
 Define objective and scope of the test
 Collect required information about the product
 Identify the stress(es)
 Determine level of stress(es)
 Conduct the accelerated test and analyze the collected data.
Common ways to determine a life stress relationship (an Arrhenius-model sketch follows this list) are:
 Arrhenius model
 Eyring model
 Inverse power law model
 Temperature–humidity model
 Temperature non-thermal model
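A hedged sketch of the first of these, the Arrhenius model, which relates use and stress temperatures through an activation energy (the 0.7 eV value below is a made-up placeholder, not a recommendation for any real failure mechanism):

from math import exp

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_acceleration_factor(t_use_c, t_stress_c, activation_energy_ev):
    # AF = exp((Ea / k) * (1/T_use - 1/T_stress)), temperatures in kelvin
    t_use_k = t_use_c + 273.15
    t_stress_k = t_stress_c + 273.15
    return exp((activation_energy_ev / BOLTZMANN_EV) * (1 / t_use_k - 1 / t_stress_k))

# Hypothetical 0.7 eV mechanism, 55 C field use versus a 125 C accelerated test
print(arrhenius_acceleration_factor(55, 125, 0.7))  # roughly 78

Under these assumed numbers, one hour at 125 °C is treated as roughly 78 hours of field life at 55 °C.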
Software reliability
Software reliability is a special aspect of reliability engineering. System reliability, by definition,
includes all parts of the system, including hardware, software, supporting infrastructure (including
critical external interfaces), operators and procedures.
Structural reliability
Structural reliability or the reliability of structures is the application of reliability theory to the
behavior of structures. It is used in both the design and maintenance of different types of structures
including concrete and steel structures. In structural reliability studies, both loads and resistances are
modeled as probabilistic variables. Using this approach, the probability of failure of a structure is
calculated.
Basic reliability and mission reliability
The above example of a 2oo3 fault tolerant system increases both mission reliability as well as safety.
However, the "basic" reliability of the system will in this case still be lower than a non-redundant
(1oo1) or 2oo2 system. Basic reliability engineering covers all failures, including those that might not
result in system failure, but do result in additional cost due to: maintenance repair actions; logistics;
spare parts etc.
Detectability and common cause failures
When using fault tolerant (redundant) systems or systems that are equipped with protection functions,
detectability of failures and avoidance of common cause failures becomes paramount for safe
functioning and/or mission reliability.
Reliability versus quality (Six Sigma)
Quality often focuses on manufacturing defects during the warranty phase. Reliability looks at the
failure intensity over the whole life of a product or engineering system from commissioning to
decommissioning. Six Sigma has its roots in statistical control in quality of manufacturing. Reliability
engineering is a specialty part of systems engineering.
The everyday usage term "quality of a product" is loosely taken to mean its inherent degree of
excellence. In industry, a more precise definition of quality as "conformance to requirements or
specifications at the start of use" is used.
Reliability operational assessment
Once systems or parts are being produced, reliability engineering attempts to monitor, assess, and
correct deficiencies. Monitoring includes electronic and visual surveillance of critical parameters
identified during the fault tree analysis design stage. Data collection is highly dependent on the nature
of the system. Most large organizations have quality control groups that collect failure data on
vehicles, equipment and machinery.
Reliability organizations
Systems of any significant complexity are developed by organizations of people, such as a
commercial company or a government agency. The reliability engineering organization must be
consistent with the company's organizational structure. For small, non-critical systems, reliability
engineering may be informal.
There are several common types of reliability organizations. The project manager or chief engineer
may employ one or more reliability engineers directly. In larger organizations, there is usually a
product assurance or specialty engineering organization, which may include
reliability, maintainability, quality, safety, human factors, logistics, etc. In such case, the reliability
engineer reports to the product assurance manager or specialty engineering manager.
Availability and Reliability
While both availability and reliability metrics measure uptime or the length of time that an asset is
operational, they differ in how the interval is being measured. Availability measures the ability of a
piece of equipment to be operated if needed, while reliability measures the ability of a piece of
equipment to perform its intended function for a specific interval without failure.
The difference between these measures allows for different perspectives on a plant's ability to
perform. The distinct importance of each is shown by their individual definitions.
Availability
Availability, also known as operational availability, is expressed as the percentage of time that an
asset is operating, compared to its total scheduled operation time. Alternatively, availability can be
defined as the duration of time that a plant or particular equipment is able to perform its intended
tasks.
How to calculate availability
Availability is calculated by dividing the actual operation time by the total scheduled operation time.
In equation form it can be written as:
Availability (%) = (Actual operation time/Scheduled operation time) x 100%
Actual operation time is defined as the total length of time that the asset is performing its intended
function. The scheduled operation time is the total period when the asset is expected to perform work.
Steps to improve availability
Central to increasing your equipment’s availability is streamlining your maintenance and operational
practices. These steps will help you begin the process of improving equipment availability in your
facility.
1. Measure your current availability
The first step to any kind of improvement is knowing where you currently stand. Determine how
many hours out of your scheduled time your equipment is in operation, and render that as a
percentage.
Once you know where you stand in terms of each asset’s actual availability, you’ll be in a better
position to determine how much work you’ll need to do in order to improve it.
2. Determine your achievable availability
Achievable availability isn't the same as operational availability, since it's based on an ideal situation.
It's how much availability your equipment would have if limiting factors, such as manpower, spare
parts, and maintenance practices, were handled absolutely perfectly. It's also limited by the current
design of your facility's systems.
To determine your achievable availability, you’ll need to benchmark yourself against similar facilities
within your industry. Once you know how well others in your industry are doing, you can adjust that
for current design constraints—such as distance to and from shops and stores, limitations on
equipment accessibility, etc.—and arrive at an estimate of how high you could push your availability.
3. Update operational practices
The majority of limitations on equipment availability come from operational procedures, not
maintenance practices. As such, your focus should be on making sure your operational procedures
don’t put unnecessary limits on your equipment’s availability and performance.
Take a look at your current practices for operating your equipment and see how potential failures,
product defects, and costs relate back to those practices. Then, though this may take a bit of digging,
make adjustments as needed.
4. Implement effective PM practices
While operations do impact availability, maintenance still plays a vital role. Often, reactive
maintenance practices leave equipment exposed to potential failures that cause massively expensive—
and totally avoidable—downtime. Even a preventive approach may result in excessive downtime
if too many PMs are performed. In either case, too many hours are put into maintenance when the
equipment could be up and running. To reduce those hours, the following tips can help:
 Implement an effective PM plan to avoid equipment breakdowns.
 Focus PMs on preventing the most impactful downtime events.
 Perfect the timing on recurring PMs.
 Streamline PM scheduling.
 Improve MRO inventory management to make sure parts are available.
 Continuously strive to optimize PMs in your facility.
As you work to establish and improve your preventive maintenance program, you’ll see fewer
avoidable or unnecessary hours spent on maintenance.
5. Improve scheduling practices
Airtight scheduling practices are key to eliminating logistical delays when it comes to operating and
maintaining your equipment. When it comes to the maintenance schedule, some potential
improvements might include:
 Considering locations of equipment, machines, storerooms, etc.
 Arranging for equipment and parts to be available on time.
 Communicating with operations crews to have assets offline when needed.
 Accounting for skill limitations for each task.
 Scheduling tasks based on priority.
A well-constructed maintenance schedule will make sure PMs are handled in an efficient manner
while avoiding costly breakdowns.
6. Implement predictive maintenance
In an effort to further streamline preventive maintenance, it helps to have predictive analytics and
sensors in place. Predictive maintenance uses sensors to monitor assets and predict when important
PMs are needed based on your equipment’s needs. That way, PMs are performed exactly as often as
necessary—no more, no less.
Reliability
Reliability quantifies the likelihood of equipment to operate as intended without disruptions or
downtime. In other words, reliability can be seen as the probability of success and the dependability
of an asset to continuously be operational, without failures, for a period of time.
How to measure reliability
Because reliability is expressed as the duration of operation without failure, reliability can be
measured using the mean time between failure (MTBF) metric. Alternatively, the inverse of MTBF,
also known as the failure rate, can be used. MTBF quantifies the average duration that an asset operates as
intended without failures.
In equation form:
MTBF = Operating time (hours)/Number of failures
and
Failure rate = Number of failures/Unit of time (i.e. hours, weeks, months, etc.)
The operating time is the total time interval during which the asset is intended to be functional, and
the number of failures is the number of occurrences of failures or breakdowns.
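A short sketch of both metrics over the same hypothetical operating record:

def mtbf(operating_hours: float, number_of_failures: int) -> float:
    # MTBF = operating time (hours) / number of failures
    return operating_hours / number_of_failures

def failure_rate(number_of_failures: int, operating_hours: float) -> float:
    # Failure rate = number of failures / operating time (here, failures per hour)
    return number_of_failures / operating_hours

# Hypothetical asset: 4,000 operating hours with 5 breakdowns
print(mtbf(4000, 5))          # 800.0 hours between failures
print(failure_rate(5, 4000))  # 0.00125 failures per hour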
Steps to improve reliability
To achieve world-class reliability in your facility, it’s not enough to just keep equipment up and
running as much as possible. You need to minimize disruptive breakdowns. These steps can help you
accomplish that.
1. Collect data on equipment health and failure modes
To begin preventing downtime events, you’ll need to collect data on your equipment’s health and
common failure modes. If you already have a database in place, draw upon that information. If not,
you’ll need to start collecting data.
Tip: Concentrate on your most critical assets. Criticality analysis will help you know which pieces of
equipment should be prioritized.
2. Perform FMEA on critical assets
Once you have some decent data on your equipment, it’s time to perform failure mode and effects
analysis (FMEA). FMEA involves analyzing potential failure modes for each piece of equipment and
determining which ones are most impactful. You’ll take measurements in four categories:
 Equipment costs
 Production
 Safety
 The environment
As you analyze each failure mode, you’ll be able to determine which ones are most important to
prevent.
3. Prioritize preventive maintenance tasks
Once you know the failure modes you need to prevent most, it’s time to prioritize your preventive
maintenance tasks. This step is fairly straightforward, but it does require knowing what tasks are
needed to prevent the most severe failure modes.
You may need to perform a bit of root cause analysis here. In order to avoid wasted PMs, you’ll want
to make sure the tasks you plan actually treat the equipment failures you want to prevent.
4. Optimize MRO inventory management
Your MRO inventory should be stocked with appropriate quantities of the right items. While it is
important to keep inventory costs down—meaning you shouldn’t keep too many items in stock—you
do need to make sure you have enough of each item in stock.
That means analyzing your work order history on each asset and determining what spare parts and
tools are used when, how many parts are needed, and how long it takes to replenish your stock of
those parts.
5. Train your team in best practices
Many equipment failures result from human error, so it’s important to make sure your operators and
maintenance technicians are well versed in best practices. Alongside having operating procedures in
place that maximize equipment availability, train your personnel on following those procedures with
precision. In addition, consider adding checklists to work orders and other documents used by your
personnel.
6. Focus on continuous improvement
As you work on improving reliability in your facility, don’t stop after each step. It’s a continuous
process, and you’ll need to keep working to improve upon each new procedure, practice, and task you
implement.
Be constantly on the lookout for ways to streamline your maintenance and production processes,
improve quality, and eliminate defects. Perform regular audits on your equipment and processes.
Through it all, keep careful records. Doing so will give you the baseline knowledge you need to keep
moving forward with continuous improvement.
Tip: A CMMS can help you track the condition of your equipment, log work orders, and generate
reports that will help you in the process of continuous improvement.
Relationship between availability and reliability
Generally, availability and reliability go hand in hand, and an increase in reliability usually translates
to an increase in availability. However, it is important to remember that both metrics can produce
different results. Sometimes, you might have a highly available machine that is not reliable or vice
versa.
Take for example a general-purpose motor that is operating close to its maximum capacity. The motor
can run for several hours a day, implying a high availability. However, it needs to stop every half an
hour to resolve operational problems.
Conclusion on availability and reliability
As you focus on improving both availability and reliability in your facility, you’ll help improve the
overall quality and effectiveness of your processes. You’ll see fewer defects, more productivity, and
greater profitability in your facility.
Reliability Requirements
One of the most essential aspects of a reliability program is defining the reliability goals that a product
needs to achieve. This article will explain the proper ways to describe a reliability goal and also
highlight some of the ways reliability requirements are commonly defined improperly.
Designs are usually based on specifications. Reliability requirements are typically part of a technical
specifications document. They can be requirements that a company sets for its product and its own
engineers or what it reports as its reliability to its customers. They can also be requirements set for
suppliers or subcontractors. However, reliability can be difficult to specify.
What are the essential elements of a reliability requirement?
There are many facets to a reliability requirement statement.
Measurable:
Reliability metrics are best stated as probability statements that are measurable by test or analysis
during the product development time frame.
Customer usage and operating environment:
The demonstrated reliability goal has to take into account the customer usage and operating
environment. The combined customer usage and operating environment conditions must be
adequately defined in product requirements. The descriptions can be done in many ways. For instance:
 Using constant values. For example: Usage temperature is 25 °C. This could be an average value or, preferably, a high stress value that accommodates most customers and applications.
 Using limits. For example: Usage temperature is between -15 °C and 40 °C.
 Using distributions. For example: Usage temperature follows a normal distribution with a mean of 35 °C and a standard deviation of 5 °C.
 Using time-dependent profiles. For example: Usage temperature starts at 70 °C at t = 0, decreases linearly to 35 °C within 3 hours, remains at that level for 10 hours, then increases exponentially to 50 °C within 2 hours and remains at that level for 20 hours. A mathematical model (function) can be used to describe such profiles.
Time:
Time could mean hours, years, cycles, mileage, shots, actuations, trips, etc. It is whatever is associated
with the aging of the product. For example, saying that the reliability should be 90% would be
incomplete without specifying the time window. The correct way would be to say that, for example,
the reliability should be 90% at 10,000 cycles.
Failure definition:
The requirements should include a clear definition of product failure. The failure can be a complete
failure or degradation of the product. For example: part completely breaks, part cracks, crack length
exceeds 10 mm, part starts shaking, etc. The definition is incorporated into tests and should be used
consistently throughout the analysis.
Confidence:
A reliability requirement statement should be specified with a confidence level, which allows for
consideration of the variability of data being compared to the specification.

Understanding Reliability Requirements
Assuming that customer usage and operating environment conditions and what is meant by a product
"failure" have already been defined, let us examine the probability and life element of a reliability
specification. We will look at some common examples of reliability requirements and understand
what they mean. We will use an automotive product for illustration.
Requirement Example 1: Mean Life (MTTF) = 10,000 miles
The Mean Life (or Mean-Time-To-Failure [MTTF]) as a sole metric is flawed and misleading. It is
the expected value of the random variable (mean of the probability distribution). Historically, the use
of MTTF for reliability dates back to the time of wide use of the exponential distribution in the early
days of quantitative reliability analysis. The exponential distribution was used because of its
mathematical (computational) simplicity. The exponential distribution has just one parameter, the
MTTF (or its reciprocal, the "failure rate," which is constant, thus the reason for its simplicity). Few
products and components actually have a constant failure rate (i.e. no wearout, degradation, fatigue,
infant mortality, etc.).
The MTTF might be one of the most misunderstood metrics among reliability engineering
professionals. Some interpret it as "no failure by 10,000 miles," which is wrong! Some interpret it as
"by 10,000 miles, 50% of the product's population (50th percentile) will fail." The "mean," however, is
not the same as the "median," so this is only true in cases where the product failure distribution is a
symmetrical distribution, such as the normal distribution. If the product follows a non-symmetrical
distribution (such as Weibull, lognormal and exponential), which is usually the case in reliability
analysis situations, then the mean does not necessarily describe the 50th percentile, but could be the
20th percentile, 70th, 90th, etc., depending on the distribution type and the estimated parameters of that
distribution. In the case of the exponential distribution, the percentile that matches the mean life is
actually 63.2%! If the intention of using the mean life as a metric is to describe the time by which
50% of the product's population will fail, then the appropriate metric to use would be the B50 life.
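For the exponential distribution this follows directly from the reliability function R(t) = exp(-t/MTTF): at t = MTTF the fraction failed is 1 - e^(-1) ≈ 63.2%, while the median (B50) life is MTTF × ln(2), roughly 0.69 × MTTF. A minimal Python check of these two facts:

import math

MTTF = 10_000  # miles, the stated mean life

# Exponential model: R(t) = exp(-t / MTTF)
fraction_failed_by_mean = 1 - math.exp(-1)          # failures by t = MTTF
b50 = MTTF * math.log(2)                            # median (B50) life

print(f"Fraction failed by the mean life: {fraction_failed_by_mean:.1%}")  # ~63.2%
print(f"B50 (median) life: {b50:.0f} miles")                               # ~6931 miles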
Let us use the following example for illustration. A company tested 8 units of a product manufactured
by two different suppliers. The failure results are shown next.
Supplier 1 (miles): 866, 2243, 3871, 5985, 7593, 8702, 9627, 16044, 24889
Supplier 2 (miles): 5798, 8209, 11363, 10501, 11390, 12416, 13857
The two different data sets were modeled using a Weibull distribution and rank regression based on X
(RRX). The MTTFs calculated based on the two different distributions are:
 MTTF1 = 9999.6 miles
 MTTF2 = 9999.4 miles
These MTTFs are almost the same. So, based on this type of reliability metric, the two suppliers'
reliability can be considered to be equal.
In this example, because the Weibull distribution is not a symmetrical distribution, the MTTFs do not
correspond to the 50th percentile of failures. The actual percentiles can be calculated using the
reliability function. The percentile, P, of units that would fail by t = MTTF is:
 P1 = Q(MTTF1) = 1 - R(t = MTTF1) = 63.21%
 P2 = Q(MTTF2) = 1 - R(t = MTTF2) = 49.08%
The 50th percentile of failures can be computed using the B50 metric.
 B501 = 6,930 miles
 B502 = 10,066 miles
Attempting to use a single number to describe an entire lifetime distribution can be misleading and
may lead to poor business decisions.
Requirement Example 2: MTBF = 10,000 miles.
Unfortunately, the term MTBF (Mean-Time-Between-Failures) has often been used in place of MTTF
(Mean-Time-To-Failure). Many reliability textbooks and standards erroneously intermix these terms.
MTTF and MTBF are the same only in the case of a constant failure rate (exponential distribution
assumption). MTBF should be used when dealing with repairable systems, whereas MTTF should be
used when looking for the mean of the first time-to-failure (i.e. non-repairable systems).
Requirement Example 3: Failure rate = 0.0001 failures per mile.
The use of failure rate as a reliability requirement implies an exponential distribution, since this is the
only distribution commonly used for reliability (life data) analysis that has a constant failure rate. For
the exponential distribution, MTTF = 1/Failure Rate = 1/0.0001 = 10,000 miles. Thus, this reliability
requirement is equivalent to Example 1. Most distributions used for life data analysis have a failure
rate that varies with time. In these cases, MTTF is not equal to 1/Failure Rate.
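A minimal numerical check of this equivalence under the constant-failure-rate (exponential) assumption; note that only about 36.8% of units would still be working at the 10,000-mile MTTF point, another reason the metric is easily misread.

import math

failure_rate = 0.0001                  # failures per mile (assumed constant)
mttf = 1 / failure_rate                # = 10,000 miles, as in Example 1

r_at_10000 = math.exp(-failure_rate * 10_000)   # exponential reliability model
print(f"MTTF = {mttf:.0f} miles, R(10,000 miles) = {r_at_10000:.1%}")  # ~36.8%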
Requirement Example 4: B10 life = 10,000 miles.
BX refers to the time by which X% of the units in a population will have failed. This metric has its
roots in the ball and roller bearing industry. It then found its way to other industries and is now just a
statistical metric that is widely used. This reliability requirement means that 10% of the population will fail by 10,000 miles or, in other words, that 90% of the population will survive to 10,000 miles.
Requirement Example 5: 90% Reliability at 10,000 miles.
This is equivalent to the previous example.

 The time of interest is 10,000 miles. This could be design life, warranty period or whatever
operation/usage time is of interest to you and your customers.
 The probability that the product will not fail before 10,000 miles is 90%. Equivalently, 10% of the population is expected to fail by 10,000 miles.
Although the above two examples (4 and 5) are good metrics, they lack a specification of how much
confidence is to be had in estimating whether the product meets these reliability goals.
Requirement Example 6: 90% Reliability at 10,000 miles with 50% confidence.
Same as above (Example 5) with the following addition:
 The lower reliability estimate obtained from your tested sample (or data collected from the
field) is at the 50% confidence level.
This corresponds to the regression line that goes through the data in a regression plot obtained when a
distribution (such as a Weibull) model is fitted to times-to-failure. The line is at 50% confidence. In
other words, this means that there is a 50% chance that your estimated value of reliability is greater
than the true reliability value and there is a 50% chance that it is lower. Using a lower 50%
confidence on reliability is equivalent to not mentioning the confidence level at all!
Let us use the following example to illustrate calculating this reliability requirement.
Design A failure data (miles): 11532, 14908, 16692, 21674, 23832, 25142, 26430, 26605, 27245, 29038, 32816, 37475, 40101, 55969, 56798, 61507, 65141, 73399, 73609, 75953
Design B failure data (miles): 18009, 22557, 28255, 39164
The two designs are modeled with a Weibull distribution and using rank regression on X as the
parameter estimation method. The probability plot for the two designs is not reproduced here.
Requirement Example 7: 90% Reliability for 10,000 miles with 90% confidence.
Same as above (Example 6) with the exception that here, more confidence is required in the reliability
estimate. This statement means that the 90% lower confidence estimate on reliability at 10,000 miles
should be 90%.
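A common way to plan a test against such a requirement is the zero-failure (success-run) binomial plan: if n units all survive a test to the target life, the demonstrated reliability R at confidence C satisfies R^n = 1 - C. The sketch below computes the required sample size; it is a planning illustration, not the analysis method used for the Design A/B data above.

import math

def success_run_sample_size(reliability, confidence):
    # Units that must survive to the target life with zero failures
    # to demonstrate `reliability` at `confidence` (binomial test plan).
    return math.ceil(math.log(1 - confidence) / math.log(reliability))

print(success_run_sample_size(0.90, 0.50))   # ~7 units for 90% reliability, 50% confidence
print(success_run_sample_size(0.90, 0.90))   # ~22 units for 90% reliability, 90% confidence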
Requirement Example 8: 90% Reliability for 10,000 miles with 90% confidence for a 98th percentile
customer.
Same as above (Example 7) with the following addition:
 The 98th percentile is a point on the usage stress curve. This describes the stress severity level
for which the reliability is estimated. It means that 98% of the customers who use the product,
or 98% of the range of environmental conditions applied to the product, will experience the
90% reliability.

To be able to estimate reliability at the 98th percentile of the stress level, units would have to be tested
at that stress level or, using accelerated testing methods, the units could be tested at different stress
levels and the reliability could be projected to the 98th percentile of the stress.
Conclusion
As demonstrated in this article, it is important to understand what a reliability requirement actually
means in terms of product performance and to select the metric that will accurately reflect the
expectations of the designers and end-users. The MTTF, MTBF and failure rate metrics are
commonly misunderstood and very often improperly applied.

Fault-tolerant Architectures
Fault tolerance is the property that enables a system to continue operating properly in the event of the
failure of (or one or more faults within) some of its components. If its operating quality decreases at all,
the decrease is proportional to the severity of the failure, as compared to a naively designed system, in
which even a small failure can cause total breakdown.
A fault-tolerant design enables a system to continue its intended operation, possibly at a reduced level,
rather than failing completely, when some part of the system fails. The term is most commonly used
to describe computer systems designed to continue more or less fully operational with, perhaps, a
reduction in throughput or an increase in response time in the event of some partial failure.
Examples
"M2 Mobile Web", the original mobile web front end of Twitter, later served as fallback legacy
version to clients without JavaScript support and/or incompatible browsers until December 2020.
Hardware fault tolerance sometimes requires that broken parts be taken out and replaced with new
parts while the system is still operational (in computing known as hot swapping). Such a system
implemented with a single backup is known as single point tolerant and represents the vast majority of
fault-tolerant systems. In such systems the mean time between failures should be long enough for the
operators to have sufficient time to fix the broken devices (mean time to repair) before the backup
also fails.
Fault tolerance is notably successful in computer applications. Tandem Computers built their entire
business on such machines, which used single-point tolerance to create their NonStop systems
with uptimes measured in years.
Terminology
(Figure caption: an example of graceful degradation by design in an image with transparency. In a viewer that recognises transparency the full composite is shown; in a viewer without transparency support the transparency mask is discarded and only the overlay remains. Because the image was designed to degrade gracefully, it is still meaningful without its transparency information.)
A highly fault-tolerant system might continue at the same level of performance even though one or
more components have failed. For example, a building with a backup electrical generator will provide
the same voltage to wall outlets even if the grid power fails.
Single fault condition
A single fault condition is a situation where one means for protection against a hazard is defective. If
a single fault condition results unavoidably in another single fault condition, the two failures are
considered as one single fault condition. A source offers the following example:
A single-fault condition is a condition when a single means for protection against hazard in equipment
is defective or a single external abnormal condition is present, e.g. short circuit between the live parts
and the applied part.
Criteria
Providing fault-tolerant design for every component is normally not an option. Associated redundancy
brings a number of penalties: increase in weight, size, power consumption, cost, as well as time to
design, verify, and test. Therefore, a number of choices have to be examined to determine which
components should be fault tolerant.

 How critical is the component? In a car, the radio is not critical, so this component has less
need for fault tolerance.
 How likely is the component to fail? Some components, like the drive shaft in a car, are not
likely to fail, so no fault tolerance is needed.
 How expensive is it to make the component fault tolerant? Requiring a redundant car
engine, for example, would likely be too expensive both economically and in terms of weight
and space, to be considered.
An example of a component that passes all the tests is a car's occupant restraint system. While we do not normally think of it as such, the primary occupant restraint system is gravity. If the vehicle rolls over or
undergoes severe g-forces, then this primary method of occupant restraint may fail. Restraining the
occupants during such an accident is absolutely critical to safety, so we pass the first test. Accidents
causing occupant ejection were quite common before seat belts, so we pass the second test. The cost
of a redundant restraint method like seat belts is quite low, both economically and in terms of weight
and space, so we pass the third test.
Requirements
The basic characteristics of fault tolerance require:
1) No single point of failure – If a system experiences a failure, it must continue to operate
without interruption during the repair process.
2) Fault isolation to the failing component – When a failure occurs, the system must be able
to isolate the failure to the offending component. This requires the addition of dedicated
failure detection mechanisms that exist only for the purpose of fault isolation. Recovery
from a fault condition requires classifying the fault or failing component. The National
Institute of Standards and Technology (NIST) categorizes faults based on locality, cause,
duration, and effect.
3) Fault containment to prevent propagation of the failure – Some failure mechanisms can
cause a system to fail by propagating the failure to the rest of the system. An example of
this kind of failure is the "rogue transmitter" that can swamp legitimate communication in
a system and cause overall system failure.
4) Availability of reversion modes
In addition, fault-tolerant systems are characterized in terms of both planned service outages and
unplanned service outages. These are usually measured at the application level and not just at a
hardware level. The figure of merit is called availability and is expressed as a percentage.
Fault tolerance techniques
Research into the kinds of tolerances needed for critical systems involves a large amount of
interdisciplinary work. The more complex the system, the more carefully all possible interactions
have to be considered and prepared for. Considering the importance of high-value systems in
transport, public utilities and the military, the field of topics that touch on research is very wide: it can
include such obvious subjects as software modeling and reliability, or hardware design, to arcane
elements such as stochastic models, graph theory, formal or exclusionary logic, parallel processing,
remote data transmission, and more.
Replication
 Replication: Providing multiple identical instances of the same system or subsystem, directing tasks or requests to all of them in parallel, and choosing the correct result on the basis of a quorum (sketched below);
 Redundancy: Providing multiple identical instances of the same system and switching to one
of the remaining instances in case of a failure (failover);
 Diversity: Providing multiple different implementations of the same specification, and using
them like replicated systems to cope with errors in a specific implementation.
All implementations of RAID, redundant array of independent disks, except RAID 0, are examples of
a fault-tolerant storage device that uses data redundancy.
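The replication-with-quorum idea mentioned above (for example, triple modular redundancy) can be sketched in a few lines of Python; the replica outputs here are made-up values used purely for illustration.

from collections import Counter

def quorum_vote(results):
    # Majority-vote the outputs of replicated subsystems; raise if no value
    # achieves a strict majority (no quorum).
    value, count = Counter(results).most_common(1)[0]
    if count > len(results) // 2:
        return value
    raise RuntimeError("no quorum: replicas disagree")

# Three replicas compute the same result; one has failed and returns garbage.
print(quorum_vote([42, 42, 17]))   # -> 42, the single faulty replica is masked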
Failure-oblivious computing
Failure-oblivious computing is a technique that enables computer programs to continue executing
despite errors.[20] The technique can be applied in different contexts. First, it can handle invalid
memory reads by returning a manufactured value to the program,[21] which in turn makes use of the manufactured value and ignores the memory value it originally tried to access. This is in great contrast to typical memory checkers, which inform the program of the error or abort the program.
The approach has performance costs: because the technique rewrites code to insert dynamic checks
for address validity, execution time will increase by 80% to 500%.
Recovery shepherding
Recovery shepherding is a lightweight technique to enable software programs to recover from
otherwise fatal errors such as null pointer dereference and divide by zero.[25] Compared to the failure-oblivious computing technique, recovery shepherding works directly on the compiled program binary and does not need to recompile the program.
It uses the just-in-time binary instrumentation framework Pin. It attaches to the application process
when an error occurs, repairs the execution, tracks the repair effects as the execution continues,
contains the repair effects within the application process, and detaches from the process after all repair
effects are flushed from the process state. It does not interfere with the normal execution of the
program and therefore incurs negligible overhead.
Redundancy
Redundancy is the provision of functional capabilities that would be unnecessary in a fault-free
environment. This can consist of backup components that automatically "kick in" if one component
fails. For example, large cargo trucks can lose a tire without any major consequences.
Two kinds of redundancy are possible: space redundancy and time redundancy. Space redundancy
provides additional components, functions, or data items that are unnecessary for fault-free operation.
Space redundancy is further classified into hardware, software and information redundancy,
depending on the type of redundant resources added to the system.
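A minimal sketch of combining the two kinds of redundancy: time redundancy (retrying a component, useful against transient faults) and space redundancy (failing over to backup components). The parameter names and retry counts are assumptions for illustration only.

import time

def call_with_redundancy(primary, backups, attempts_per_component=2, delay_s=0.1):
    # Try the primary, then each backup; retry each a few times before moving on.
    for component in [primary, *backups]:
        for _ in range(attempts_per_component):
            try:
                return component()              # zero-argument callable
            except Exception:
                time.sleep(delay_s)             # brief pause before retry / failover
    raise RuntimeError("all redundant components failed")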
Disadvantages

 Interference with fault detection in the same component. To continue the above passenger
vehicle example, with either of the fault-tolerant systems it may not be obvious to the driver
when a tire has been punctured. This is usually handled with a separate "automated fault-
detection system".
 Interference with fault detection in another component. Another variation of this problem
is when fault tolerance in one component prevents fault detection in a different component.
For example, if component B performs some operation based on the output from component
A, then fault tolerance in B can hide a problem with A.
 Reduction of priority of fault correction. Even if the operator is aware of the fault, having a
fault-tolerant system is likely to reduce the importance of repairing the fault. If the faults are
not corrected, this will eventually lead to system failure, when the fault-tolerant component
fails completely or when all redundant components have also failed.
 Test difficulty. For certain critical fault-tolerant systems, such as a nuclear reactor, there is no
easy way to verify that the backup components are functional. The most infamous example of
this is Chernobyl, where operators tested the emergency backup cooling by disabling primary
and secondary cooling. The backup failed, resulting in a core meltdown and massive release
of radiation.
 Cost. Both fault-tolerant components and redundant components tend to increase cost. This
can be a purely economic cost or can include other measures, such as weight. Manned
spaceships, for example, have so many redundant and fault-tolerant components that their
weight is increased dramatically over unmanned systems, which don't require the same level
of safety.
 Inferior components. A fault-tolerant design may allow for the use of inferior components,
which would have otherwise made the system inoperable. While this practice has the potential
to mitigate the cost increase, use of multiple inferior components may lower the reliability of
the system to a level equal to, or even worse than, a comparable non-fault-tolerant system.

Programming for Reliability
Programmers have developed a series of best practices when programming to ensure reliability,
longevity and maintainability of developed code. In this lesson, we will look at a number of best
practices used in the world of programming that can be applied irrespective of the programming
language used.
The Need For Best Practices in Programming
The success of any project or implementation rests on its longevity and maintainability. The longevity
depends on its adaptability to change. In simple terms, if the project or application is not sustainable
in the absence of the creator then that project soon dies because nobody has the ability to take up the
reins of continuity and maintenance.
Have a Plan
No planning - no coding. Coding without a plan is like building a house without a blueprint. Things
are added and changed in an ad hoc manner, which can lead to high costs and unreliability. A plan
ensures that the programmer is well versed with the requirements stipulated in the problem definition.
A plan should include:

 An organized file and folder structure for images, CSS files, JS files, etc.
 A system that will allow future cross-platform adaptability.
 An organized system for reusable code such as menus, headers, functions and classes for
example.
The plan also ensures that the solution is implemented efficiently. Written code is far more
expensive than the overall plan. It costs less to trash a plan than to trash written code.
Make the Code Understandable
For the applications to be sustainable the written code should be easily understandable by any other
adequately experienced programmer, presently or in the future. Many applications are used and re-
adapted long after the coder created them. This involves the way a programmer comments, indents
and writes the code.
Readability
Readability is the ease with which a programmer, whether the original author returning to the code later or a colleague seeing it for the first time, can read and understand it. In a professional environment
involving a team of programmers, this characteristic is very crucial for the smooth flow of work.
Readability makes deciphering the code very easy.
Indentation
Indentation is the placement of text further to the left or to the right in comparison to the rest of the
text surrounding it. Indentation helps readability. For example, if a complex loop with multiple
decision conditions like if-else are properly indented, it makes it easier for someone to figure out
where each program block begins and ends.
Comments
Commenting of code helps the reader to better read and work through the code and figure out exactly
what is happening at every point in time. Comments are little explanations placed at strategic points in
the code to make things clearer. They usually carry as much information as possible in a short length
of text. Comments should be added even when code seems self-explanatory.
Naming Conventions
Proper naming conventions work hand in hand with readability as a best practice. Adopt an easily
understandable naming format for function, variable and class names. For example, a variable for storing a student's age can be called studentAge, and a function to calculate salary can be called computeSalary.
Validation Checks
Validation Checks refer to mechanisms which are incorporated into code to ensure that all input data
(values) conform to that input field's requirements. In other words, the user of an application, for
example, can only enter integers (whole numbers) into a 'credit card number field'. If the user attempts
to use letters or strings (letters mixed with numbers) an error will occur, generating an error message
and the entry will not be accepted.
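A minimal sketch of such a validation check in Python; the field name, accepted lengths and error messages are illustrative assumptions, not rules from any card scheme.

def validate_credit_card_number(raw_value):
    # Accept only digit strings of a plausible card-number length; reject
    # letters or mixed input with a clear error message.
    value = raw_value.replace(" ", "")
    if not value.isdigit():
        raise ValueError("Card number may contain digits only.")
    if not 12 <= len(value) <= 19:
        raise ValueError("Card number must be 12 to 19 digits long.")
    return value

validate_credit_card_number("4111 1111 1111 1111")     # accepted
# validate_credit_card_number("4111-ABCD-1111")        # would raise ValueError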
Optimize Code Efficiency
It is one thing to write a working code but writing code that is efficient and executes quickly takes
additional skills. Efficiency can be achieved by the use of loops, arrays, proper use of boolean
functions, for example. In the following example we will see how a loop is used to improve code
efficiency. A loop is a sequence of instructions that repeatedly executes itself until a particular
condition is met. This helps the code execute efficiently and keeps it to as few lines as necessary.
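Since the lesson's original example is not reproduced in this text, here is a minimal Python sketch of the same idea; the names printed are arbitrary.

# Without a loop: repetitive and error-prone, and adding a name means adding code.
print("Hello, Alice")
print("Hello, Bob")
print("Hello, Carol")

# With a loop: the same behaviour in fewer lines; adding a name only changes the data.
for name in ["Alice", "Bob", "Carol"]:
    print(f"Hello, {name}")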
Exception Handling
An Exception Handler is a set of code that determines a program's response when an unusual or
unpredictable event occurs which disrupts the normal sequence of its execution. These anomalies
usually occur due to operating system faults, for example, a corrupt drive that holds an application file the program is attempting to access. The exception handler will generate an error message and the
application will respond accordingly. Exception handling makes sure that the program doesn't end
abruptly with an unknown error.
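A minimal Python sketch of the pattern described above; the file name, the fallback value and the exception types handled are assumptions for illustration.

def read_settings(path="settings.cfg"):
    # Load application settings; handle the 'missing or unreadable file' case
    # instead of letting the program end abruptly with an unknown error.
    try:
        with open(path, encoding="utf-8") as f:
            return f.read()
    except OSError as err:            # covers missing files, permission errors, bad drives
        print(f"Could not read settings file '{path}': {err}")
        return ""                     # fall back to safe defaults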
Reliability Measurement
Reliability refers to the consistency of a measure. Psychologists consider three types of consistency:
over time (test-retest reliability), across items (internal consistency), and across different researchers
(inter-rater reliability).
Test-Retest Reliability
When researchers measure a construct that they assume to be consistent across time, then the scores
they obtain should also be consistent across time. Test-retest reliability is the extent to which this is
actually the case. For example, intelligence is generally thought to be consistent across time. A person
who is highly intelligent today will be highly intelligent next week.
Assessing test-retest reliability requires using the measure on a group of people at one time, using it
again on the same group of people at a later time, and then looking at test-retest correlation between
the two sets of scores. This is typically done by graphing the data in a scatterplot and computing
Pearson’s r.
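A minimal sketch of that computation; the scores below are made-up values for two administrations of the same test to the same eight people.

from statistics import correlation     # Pearson's r; Python 3.10+

time1_scores = [102, 115, 98, 123, 110, 95, 130, 108]
time2_scores = [100, 118, 101, 120, 112, 97, 127, 105]

r = correlation(time1_scores, time2_scores)
print(f"Test-retest reliability: r = {r:.2f}")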
Again, high test-retest correlations make sense when the construct being measured is assumed to be
consistent over time, which is the case for intelligence, self-esteem, and the Big Five personality
dimensions. But other constructs are not assumed to be stable over time. The very nature of mood, for
example, is that it changes. So a measure of mood that produced a low test-retest correlation over a
period of a month would not be a cause for concern.
Internal Consistency
A second kind of reliability is internal consistency, which is the consistency of people’s responses
across the items on a multiple-item measure. In general, all the items on such measures are supposed
to reflect the same underlying construct, so people’s scores on those items should be correlated with
each other. On the Rosenberg Self-Esteem Scale, people who agree that they are a person of worth
should tend to agree that they have a number of good qualities.
Like test-retest reliability, internal consistency can only be assessed by collecting and analyzing data.
One approach is to look at a split-half correlation. This involves splitting the items into two sets, such
as the first and second halves of the items or the even- and odd-numbered items. Then a score is
computed for each set of items, and the relationship between the two sets of scores is examined.
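A minimal sketch of a split-half check on made-up item responses, using the even- and odd-numbered items; the Spearman-Brown step is a standard correction for the halved test length and is an addition to the description above.

from statistics import correlation     # Pearson's r; Python 3.10+

# Hypothetical responses: one row per respondent, one column per item (8 items).
responses = [
    [4, 5, 4, 3, 5, 4, 4, 5],
    [2, 1, 2, 2, 1, 2, 3, 1],
    [3, 4, 3, 4, 4, 3, 3, 4],
    [5, 5, 4, 5, 5, 5, 4, 5],
    [1, 2, 1, 1, 2, 2, 1, 1],
]

odd_scores = [sum(row[0::2]) for row in responses]    # items 1, 3, 5, 7
even_scores = [sum(row[1::2]) for row in responses]   # items 2, 4, 6, 8

r_half = correlation(odd_scores, even_scores)
r_full = (2 * r_half) / (1 + r_half)                  # Spearman-Brown correction
print(f"Split-half r = {r_half:.2f}, corrected = {r_full:.2f}")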
Interrater Reliability
Many behavioural measures involve significant judgment on the part of an observer or a rater. Inter-
rater reliability is the extent to which different observers are consistent in their judgments. For
example, if you were interested in measuring university students’ social skills, you could make video
recordings of them as they interacted with another student whom they are meeting for the first time.
Then you could have two or more observers watch the videos and rate each student’s level of social
skills. To the extent that each participant does in fact have some level of social skills that can be
detected by an attentive observer, different observers’ ratings should be highly correlated with each
other. Inter-rater reliability would also have been measured in Bandura’s Bobo doll study.
Validity
Validity is the extent to which the scores from a measure represent the variable they are intended to.
But how do researchers make this judgment? We have already considered one factor that they take
into account—reliability. When a measure has good test-retest reliability and internal consistency,
researchers should be more confident that the scores represent what they are supposed to. There has to
be more to it, however, because a measure can be extremely reliable but have no validity whatsoever.
As an absurd example, imagine someone who believes that people’s index finger length reflects their
self-esteem and therefore tries to measure self-esteem by holding a ruler up to people’s index fingers.
Discussions of validity usually divide it into several distinct “types.” But a good way to interpret these
types is that they are other kinds of evidence—in addition to reliability—that should be taken into
account when judging the validity of a measure. Here we consider three basic kinds: face validity,
content validity, and criterion validity.
Face Validity
Face validity is the extent to which a measurement method appears “on its face” to measure the
construct of interest. Most people would expect a self-esteem questionnaire to include items about
whether they see themselves as a person of worth and whether they think they have good qualities. So
a questionnaire that included these kinds of items would have good face validity.
Face validity is at best a very weak kind of evidence that a measurement method is measuring what it
is supposed to. One reason is that it is based on people’s intuitions about human behaviour, which are
frequently wrong. It is also the case that many established measures in psychology work quite well
despite lacking face validity.
Content Validity
Content validity is the extent to which a measure “covers” the construct of interest. For example, if a
researcher conceptually defines test anxiety as involving both sympathetic nervous system activation
(leading to nervous feelings) and negative thoughts, then his measure of test anxiety should include
items about both nervous feelings and negative thoughts. Or consider that attitudes are usually defined
as involving thoughts, feelings, and actions toward something. By this conceptual definition, a person
has a positive attitude toward exercise to the extent that he or she thinks positive thoughts about
exercising, feels good about exercising, and actually exercises. So to have good content validity, a
measure of people’s attitudes toward exercise would have to reflect all three of these aspects.
Criterion Validity
Criterion validity is the extent to which people’s scores on a measure are correlated with other
variables (known as criteria) that one would expect them to be correlated with. For example, people’s
scores on a new measure of test anxiety should be negatively correlated with their performance on an
important school exam. If it were found that people’s scores were in fact negatively correlated with
their exam performance, then this would be a piece of evidence that these scores really represent
people’s test anxiety. But if it were found that people scored equally well on the exam regardless of
their test anxiety scores, then this would cast doubt on the validity of the measure.
Criteria can also include other measures of the same construct. For example, one would expect new
measures of test anxiety or physical risk taking to be positively correlated with existing measures of
the same constructs. This is known as convergent validity.
Discriminant Validity
Discriminant validity, on the other hand, is the extent to which scores on a measure are not correlated
with measures of variables that are conceptually distinct. For example, self-esteem is a general
attitude toward the self that is fairly stable over time. It is not the same as mood, which is how good or
bad one happens to be feeling right now. So people’s scores on a new measure of self-esteem should
not be very highly correlated with their moods. If the new measure of self-esteem were highly
correlated with a measure of mood, it could be argued that the new measure is not really measuring
self-esteem; it is measuring mood instead.
Safety Engineering
Safety engineering is an engineering discipline that assures that engineered systems provide
acceptable levels of safety. It is strongly related to systems engineering, industrial engineering and the
subset system safety engineering. Safety engineering assures that a life-critical system behaves as
needed, even when components fail.
Safety engineering is a field of engineering that deals with accident prevention, risk of human error
reduction and safety provided by the engineered systems and designs. It is associated with industrial
engineering and system engineering and applied to manufacturing, public works and product designs
to make safety an integral part of operations.
The term safety refers to a condition of being safe or protected. Safety in the context of occupational
health and safety means a state of being protected against physical, psychological, occupational,
mechanical failure, damage, accident, death, injury, or such highly undesirable events. Safety can
therefore be defined as the protection of people from physical injury.
Health and safety are used together to indicate concern for the physical and mental wellbeing of the
individual at work. Safety is also described as a condition where positive control of known hazards
exists in an effort to achieve an acceptable degree of calculated risk such as a permissible exposure
limit.
Safety is freedom from unacceptable risk or harm.
An accident is an undesired event giving rise to death, ill health, injury, damage or other loss.
An incident is a work-related event in which injury or ill health (regardless of severity), or a fatality, occurred or could have occurred.
Risk is the combination of the likelihood of an occurrence of a hazardous event or exposure and the severity of injury or ill health that can be caused by the event or exposure.
Risk assessment is the process of evaluating the risk(s) arising from a hazard, taking into account the adequacy of existing controls, and deciding whether or not the risk is acceptable.
Non-conformity is a deviation from work standards, practices, procedures, regulations or legal requirements.
Six steps to safety: these steps are short reminders for safe operation; years of experience have shown them to be the safest way to perform your daily work.
SAFE PRODUCTION RULES
Safe production rules are developed to reinforce the safety policy and to pursue the objective of zero
harm. They provide a basis for trying to eliminate fatal, serious accidents and occupation health risk
and were formulated through undertaking a historical review of fatal, serious accidents and
occupational hazard in the company.
 Safety is the number one priority
 Every employee has the right and responsibility to understand the risks inherent in the task to be performed by them
 Every employee has the right and the responsibility to withdraw from a dangerous situation
 Every employee must be provided with the required training, resources and personal protective equipment
 Every employee must be provided with the required information.
The fundamentals of safe production rules.

 Fit for work and unaffected by fatigue, altitude, drugs or alcohol
 Never tamper with any health and safety services
 Always withdraw from unsafe and / or unhealthy workplaces, or conditions and
report it
 Always risk assess task before commencing work
 Always apply the stop, think, fix, and continue philosophy to any dangerous situation
or condition
 Know whom to contact and what to do in an emergency
 Maintain emergency equipment
 Have the right tools and equipment and ensure that these are appropriate and are in
good working order
 Always wear and use personal protective equipment that are in good condition and
appropriate for the task
 Always use hearing protection devices in noisy areas
 Always determine whether a permit to work is required before commencing a task. If there is any doubt, ask your supervisor.
 Always report all hazards, incidents, and accidents.
 Always ensure that all employees adhere to these safety rules.
Safety is one of the prime considerations in any organization, whether profit-making or non-profit. Management is fully responsible for planning and implementing all protective measures to
safeguard all employees and properties from any sort of hazard in the workplace. Safety is also
required by local laws, industrial regulations and practices. Employees need to be trained and
informed about all safety aspects they might encounter in their workplaces. Safety monitoring and
controlling is one of the major day to day tasks of management, since the accidents, damage, injury
and other health hazards cost money, hamper production or service and have tremendous negative
effect on employee morale and business goodwill.
Importance of safety engineering

 Reduce accidents
 Control and eliminate hazards
 Develop new methods and techniques to improve safety
 Maximize returns on safety efforts
 Maximize public confidence with respect to product safety.
ROLES OF SAFETY ENGINEERS
1. Safety engineers ensure the well-being of people and property.
2. These professionals combine knowledge of an engineering discipline with the health and safety regulations related to that discipline to keep work environments, buildings and people safe from harm.
3. The work of safety engineers helps their employers lower insurance costs and comply with laws and regulations related to health and safety.
4. Inspections. One of the primary duties of safety engineers is to inspect machinery, equipment and
production facilities to identify potential dangers.
5. Safety engineers are also responsible for making sure that buildings meet all codes, and that
manufacturing equipment, storage facilities and products meet all applicable health and safety
regulations. Fire prevention and industrial safety engineers, in particular, spend a great deal of time
involved with inspection-related activities.
6. Safety engineers are also typically involved in consulting and planning activities. Having a safety
engineer involved from the planning stages of a project enables you to focus on safety as an integral
part of the process, rather than just as something tacked on at the end.
7. When working as consultants, safety engineers bring their education and experience to bear in
analyzing complex processes, conditions and behaviors, and apply a systemic approach to make sure
that nothing has been overlooked. Aerospace safety engineers, product safety engineers, and systems
safety engineers spend a lot of time planning, designing, and consulting.
8. They are involved in doing risk assessments.
9. They investigate the causes of accidents, cases of work-related diseases or ill health, and dangerous occurrences.

Safety-critical Systems
A safety-critical system (SCS) or life-critical system is a system whose failure or malfunction may
result in one (or more) of the following outcomes:
 death or serious injury to people
 loss or severe damage to equipment/property
 environmental harm
A safety-related system comprises everything needed to perform one or more safety functions, in
which failure would cause a significant increase in the safety risk for the people or environment
involved. Safety-related systems are those that do not have full responsibility for controlling hazards
such as loss of life, severe injury or severe environmental damage. The malfunction of a safety-
involved system would only be that hazardous in conjunction with the failure of other systems or
human error. Some safety organizations provide guidance on safety-related systems, for example the
Health and Safety Executive (HSE) in the United Kingdom.
Risks of this sort are usually managed with the methods and tools of safety engineering. A safety-
critical system is designed to lose less than one life per billion hours of operation. Typical design
methods include probabilistic risk assessment, a method that combines failure mode and effects
analysis (FMEA) with fault tree analysis. Safety-critical systems are increasingly computer-based.
Reliability regimes
 Fail-operational systems continue to operate when their control systems fail. Examples of
these include elevators, the gas thermostats in most home furnaces, and passively safe nuclear
reactors. Fail-operational mode is sometimes unsafe. Nuclear weapons launch-on-loss-of-
communications was rejected as a control system for the U.S. nuclear forces because it is fail-
operational: a loss of communications would cause launch, so this mode of operation was
considered too risky. This is contrasted with the fail-deadly behavior of the Perimeter system
built during the Soviet era.
 Fail-soft systems are able to continue operating on an interim basis with reduced efficiency
in case of failure. Most spare tires are an example of this: They usually come with certain
restrictions and lead to lower fuel economy. Another example is the "Safe Mode" found in
most Windows operating systems.
 Fail-safe systems become safe when they cannot operate. Many medical systems fall into this
category. For example, an infusion pump can fail, and as long as it alerts the nurse and ceases
pumping, it will not threaten the loss of life because its safety interval is long enough to
permit a human response. In a similar vein, an industrial or domestic burner controller can
fail, but must fail in a safe mode. Famously, nuclear weapon systems that launch-on-
command are fail-safe, because if the communications systems fail, launch cannot be
commanded. Railway signalling is designed to be fail-safe. (A minimal code sketch of the fail-safe pattern appears after this list.)
 Fail-secure systems maintain maximum security when they cannot operate. For example,
while fail-safe electronic doors unlock during power failures, fail-secure ones will lock,
keeping an area secure.
 Fail-Passive systems continue to operate in the event of a system failure. An example
includes an aircraft autopilot. In the event of a failure, the aircraft would remain in a
controllable state and allow the pilot to take over and complete the journey and perform a safe
landing.
 Fault-tolerant systems avoid service failure when faults are introduced to the system. An
example may include control systems for ordinary nuclear reactors. The normal method to
tolerate faults is to have several computers continually test the parts of a system, and switch
on hot spares for failing subsystems. As long as faulty subsystems are replaced or repaired at
normal maintenance intervals, these systems are considered safe. The computers, power
supplies and control terminals used by human beings must all be duplicated in these systems
in some fashion.
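Relating to the fail-safe regime described in the list above, the following Python sketch shows the general pattern of driving a plant to a predefined safe state whenever sensing or control fails; the valve names, temperature limits and loop period are all assumptions for illustration.

import time

SAFE_STATE = "valve_closed"            # the de-energised / shut-down state

def control_loop(read_sensor, command_actuator, setpoint=75.0, period_s=1.0):
    # Fail-safe pattern: any fault in sensing or control drives the plant to a
    # predefined safe state rather than continuing to run blind.
    while True:
        try:
            temperature = read_sensor()
            if temperature is None or not (0.0 <= temperature <= 200.0):
                raise ValueError("implausible sensor reading")
            command_actuator("valve_open" if temperature < setpoint else "valve_closed")
        except Exception:
            command_actuator(SAFE_STATE)   # fail safe: shut the burner down
            break
        time.sleep(period_s)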
Software engineering for safety-critical systems
Software engineering for safety-critical systems is particularly difficult. There are three aspects which
can be applied to aid in engineering software for life-critical systems. First is process engineering
and management. Secondly, selecting the appropriate tools and environment for the system. This
allows the system developer to effectively test the system by emulation and observe its effectiveness.
Thirdly, address any legal and regulatory requirements, such as FAA requirements for aviation. By
setting a standard under which a system is required to be developed, it forces the designers to stick
to the requirements.
Examples of safety-critical systems
Infrastructure

 Circuit breaker
 Emergency services dispatch systems
 Electricity generation, transmission and distribution
 Fire alarm
 Fire sprinkler
 Fuse (electrical)
 Fuse (hydraulic)
 Life support systems
 Telecommunications
Medicine

 Heart-lung machines
 Mechanical ventilation systems
 Infusion pumps and Insulin pumps
 Radiation therapy machines
 Robotic surgery machines
 Defibrillator machines
 Pacemaker devices
 Dialysis machines
 Devices that electronically monitor vital functions (electrography; especially,
electrocardiography, ECG or EKG, and electroencephalography, EEG)
 Medical imaging devices (X-ray, computerized tomography- CT or CAT, different magnetic
resonance imaging- MRI- techniques, positron emission tomography- PET)
 Even healthcare information systems have significant safety implications
Recreation

 Amusement rides
 Climbing equipment
 Parachutes
 Scuba equipment
o Diving rebreather
o Dive computer (depending on use)
Transport

 Railway signalling and control systems
 Platform detection to control train doors
 Automatic train stop
 Airbag systems
 Braking systems
 Seat belts
 Power Steering systems
 Advanced driver-assistance systems
 Electronic throttle control
 Battery management system for hybrids and electric vehicles
 Electric park brake
 Shift by wire systems
 Drive by wire systems
 Park by wire
Aviation

 Air traffic control systems
 Avionics, particularly fly-by-wire systems
 Radio navigation RAIM
 Engine control systems
 Aircrew life support systems
 Flight planning to determine fuel requirements for a flight
Spaceflight

 Human spaceflight vehicles
 Rocket range launch safety systems
 Launch vehicle safety
 Crew rescue systems
 Crew transfer systems
Safety Requirements
The goal of safety requirements engineering is to identify protection requirements that ensure that
system failures do not cause injury or death or environmental damage. Safety requirements may be
'shall not' requirements i.e. they define situations and events that should never occur. Functional
safety requirements define: checking and recovery features that should be included in a system, and
features that provide protection against system failures and external attacks.
Safety requirements are those requirements that are defined for the purpose of risk reduction. Like any
other requirements, they may at first be specified at a high level. For example, a braking system may
be hydraulic. A safety requirement may be met by a combination of safety functions, and these may
be implemented in systems of different technologies – for example, a software-based system along
with management procedures, checklists, and validation procedures for using it. When a safety
function is implemented via software, there also needs to be a hardware platform, in which case a
computer system is necessary. Then, the same demands are made of the entire system as of the
software.
Hazard-driven analysis:
Hazard identification
Identify the hazards that may threaten the system. Hazard identification may be based on different
types of hazard: physical, electrical, biological, service failure, etc.
Hazard assessment
The process is concerned with understanding the likelihood that a risk will arise and the potential
consequences if an accident or incident should occur. Risks may be categorized as: intolerable (must
never arise or result in an accident), as low as reasonably practical - ALARP (must minimize the
possibility of risk given cost and schedule constraints), and acceptable (the consequences of the risk
are acceptable and no extra costs should be incurred to reduce hazard probability).
The acceptability of a risk is determined by human, social, and political considerations. In most
societies, the boundaries between the regions are pushed upwards with time i.e. society is less willing
to accept risk (e.g., the costs of cleaning up pollution may be less than the costs of preventing it but
this may not be socially acceptable). Risk assessment is subjective.
Hazard assessment process: for each identified hazard, assess hazard probability, accident severity,
estimated risk, acceptability.
Hazard analysis
Concerned with discovering the root causes of risks in a particular system. Techniques have been
mostly derived from safety-critical systems and can be: inductive, bottom-up: start with a proposed
system failure and assess the hazards that could arise from that failure; and deductive, top-down: start
with a hazard and deduce what the causes of this could be.
Fault-tree analysis is a deductive, top-down technique (a minimal sketch of evaluating such a tree follows this list):

 Put the risk or hazard at the root of the tree and identify the system states that could lead
to that hazard.
 Where appropriate, link these with 'and' or 'or' conditions.
 A goal should be to minimize the number of single causes of system failure.
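A minimal sketch of evaluating such a tree numerically. The gate formulas assume independent basic events, and the event probabilities and tree structure below are hypothetical, chosen only to show how AND/OR gates combine.

def or_gate(*probabilities):
    # P(at least one independent input event occurs)
    p_none = 1.0
    for p in probabilities:
        p_none *= (1.0 - p)
    return 1.0 - p_none

def and_gate(*probabilities):
    # P(all independent input events occur)
    result = 1.0
    for p in probabilities:
        result *= p
    return result

# Hypothetical tree: the hazard occurs if the sensor fails AND either the
# primary OR the backup shutdown channel fails.
p_hazard = and_gate(1e-3, or_gate(1e-2, 1e-2))
print(f"Estimated hazard probability: {p_hazard:.2e}")   # ~2.0e-05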
Risk reduction
The aim of this process is to identify dependability requirements that specify how the risks should be
managed and ensure that accidents/incidents do not arise. Risk reduction strategies: hazard avoidance;
hazard detection and removal; damage limitation.

Safety Engineering Processes
Process Safety is a disciplinary framework for managing the integrity of operating systems and processes that handle hazardous substances; it requires that facilities be well designed, safely operated, and properly maintained. Process Safety focuses on the
prevention and control of incidents that have the potential to release hazardous materials or energy
that can cause toxic effects, fire, or explosion and could ultimately result in serious injuries, property
damage, lost production, and environmental impact.
Process Safety Engineering is a safety-specialised process engineering discipline that is responsible for developing risk assessments and designing safe operating practices, and that provides technical leadership and support to identify hazards, assess risks and provide cost-efficient management solutions. Process Safety Engineering focuses on the prevention of fire and explosion, accidental
chemical release, and reactive chemistry, toxic exposure, overpressure/under pressure, equipment
malfunction, excessive temperature and thermal expansion, metal fatigue, corrosion, human factors,
and other similar conditions by the application of good engineering and design principles.
Process Safety Information (PSI) is the physical, chemical, and toxicological information related to
the chemicals, processes, and equipment. PSI concerning the hazards of the regulated materials is typically found in a material safety data sheet (MSDS), and also includes: a block flow diagram or simplified process flow diagram; process chemistry; maximum intended inventory; safe upper and lower limits for such items as temperatures, pressures, flows or compositions; and an evaluation of the consequences of deviations.
Process Safety Management (PSM) is a safety management system concerned with the safety
hazards arising from process operations, and distinct from the management of conventional safety
(slips, trips, falls, etc.). PSM requires detailed knowledge of the chemical and process hazards associated with the operations of the plant, and is regulated in the U.S. by the Occupational Safety and Health Administration (OSHA).
Firefighting is the activity or process of extinguishing fires.
Key Activities and Deliverables
Alarm Management is the application of human factors, instrumentation engineering and systems thinking to the classification, prioritisation and grouping of alerts and event notifications, and to controlling and managing the design of alarm systems. An Alarm Management System includes procedures, documentation, characterisations, logic, prioritisations, schematics, software and hardware, maintenance, etc. (Refer to Alarm Flooding.)
Emergency Shutdown (ESD) system is a process safety control system which overrides the action of
the basic control system when predetermined conditions are violated; it includes an emergency
shutdown valve (ESD Valve) and an associated valve actuator. The Emergency Shutdown Controller
provides output signals to the ESD valve in the event of a failure in the process control system.
Emergency Sequence is the set of detailed procedures for making the plant and process safe, minimising risks to operators and facilities at all stages, covering PPE, the level of intervention that is safe, and when to evacuate. The Emergency Sequence in the process operation is an automatic sequence
initiated by an interlock that may consist of starting, stopping, opening, or closing equipment in order
to render the process safe.
Emergency Shutdown Device is a device designed to shut the system down safely under emergency conditions.
Emergency Relief Device is a device designed to prevent the rise of internal fluid pressure in excess of a specified value, mounted on fixed-roof tanks for volatile liquids. (Refer to the Process Safety Valve (PSV) or Pressure Relief Valve (PRV).)
Emergency Shutdown Valve (ESDV) is an actuated valve designed to stop the flow of a hazardous fluid when a dangerous event is detected. ESDVs are the final defence against process mal-operation; their function requires much more reliable performance than standard on-off valves. Whenever dedicated sensors identify an abnormally dangerous process situation, power to the ESD valve solenoid is removed and the valve moves to its desired fail-safe mode (fail close or fail open).
Remotely Operated Shut-Off Valve (ROSOV) is a type of Emergency Shutdown Valve (ESDV)
which allows a plant or facility to be isolated automatically from a safe location without the need for manual intervention; it is designed and installed for the purpose of quickly isolating plant items used for the storage of hazardous substances. In the Remotely Operated Shut-Off Valve
(ROSOV) scenario, in an emergency shutdown the actuator will immediately return to the
predetermined safe position and will be ready to operate on the next command when the ESD signal is
reinstated.
Boiling Liquid Expanding Vapor Explosion (BLEVE) is a vapour explosion caused by the rupture or catastrophic failure of a vessel (such as a pressure vessel or cargo tank) containing a pressurised liquid that is above its boiling point at nominal atmospheric pressure.
Fail to Danger is a failure mode of a protection system in which a failure of one of its components prevents the system from performing its protective (shutdown) function. Under a fail-to-danger fault, when a hazardous condition arises the equipment, process, or plant will continue to operate without being tripped, which has a direct and detrimental effect on safety.
Fire Triangle is the set of three elements needed to ignite and sustain a fire: 1) a flammable substance
(combustible material): to burn; 2) oxygen: to combine and react; 3) heat or ignition source: to raise
the temperature of the combustible material to its burning or ignition temperature.
Flash Fire is defined in the NFPA standard as a fire that spreads rapidly, by means of a flame front, through a diffuse fuel such as dust, gas, or the vapours of an ignitable liquid, without the production of damaging pressure. Flash fires generate temperatures in the range of 1000°F to 1900°F.
NFPA (National Fire Protection Association) 55 is a standard for the storage, use, and handling of compressed gases and cryogenic fluids in portable and stationary containers, cylinders, and tanks; it covers the protection of facilities from physiological, over-pressurisation, explosive, and flammability hazards associated with compressed gases and cryogenic fluids.
A sprinkler is a device that sprays water onto a fire in many small drops in order to put it out.
Vapour Cloud Explosion (VCE) is an explosion resulting from the ignition of cloud of flammable
vapor, gas, or mist in which a flame speed accelerates to sufficiently high velocities to produce
significant overpressure. These explosions occur by a sequence of steps: 1) Sudden release of a large
quantity of flammable vapor; 2) Dispersion of the vapor throughout the plant site while mixing with
air; 3) Ignition of the resulting vapor cloud.
Safety Cases
A safety case should communicate a clear, comprehensive and defensible argument that a system is acceptably safe to operate in a particular context. The concept of the 'safety case' has already been adopted across many industries (including defence, aerospace, and railways). Studying the safety standards relating to these sectors, it is possible to identify a number of definitions of the safety case, some clearer than others. The definition given above attempts to cleanly define the core concept in a way that is in agreement with the majority of the definitions we have discovered.
The following are important aspects of the above definition:

 ‘argument’ – Above all, the safety case exists to communicate an argument. It is used to
demonstrate how someone can reasonably conclude that a system is acceptably safe from the
evidence available.
 ‘clear’ – A safety case is a device for communicating ideas and information, usually to a third
party (e.g. a regulator). In order to do this convincingly, it must be as clear as possible.
 ‘system’ – The system to which a safety case refers can be anything from a network of pipes or a
software configuration to a set of operating procedures. The concept is not limited to
consideration of conventional engineering ‘design’.
 ‘acceptably’ – Absolute safety is an unobtainable goal. Safety cases are there to convince
someone that the system is safe enough.
 ‘context’ – Context-free safety is impossible to argue. Almost any system can be unsafe if used in
an inappropriate or unexpected manner. It is part of the job of the safety case to define the context
within which safety is to be argued.
A safety case is a comprehensive and structured set of safety documentation which aims to ensure
that the safety of a specific vessel or equipment can be demonstrated by reference to:
 safety arrangements and organisation
 safety analyses
 compliance with the standards and best practice
 acceptance tests
 audits
 inspections
 feedback
 provision made for safe use including emergency arrangements
REQUIREMENTS, ARGUMENT AND EVIDENCE
The safety argument is that which communicates the relationship between the evidence and
objectives. Based on the author’s personal experience, gained from reviewing a number of safety
cases, and validated through discussion with many safety practitioners, a commonly observed failing
of safety cases is that the role of the safety argument is often neglected. In such safety cases, many
pages of supporting evidence are often presented but little is done to explain how this evidence relates
to the safety objectives. The reader is often left to guess at an unwritten and implicit argument.
Both argument and evidence are crucial elements of the safety case that must go hand-in-hand.
Argument without supporting evidence is unfounded, and therefore unconvincing. Evidence without
argument is unexplained: it can be unclear whether (or how) the safety objectives have been satisfied. In the
following section we examine how safety arguments may be clearly communicated within safety case
reports.
SAFETY CASE DEVELOPMENT LIFECYCLE
It is increasingly recognised by both safety case practitioners and many safety standards that safety
case development, contrary to what may historically have been practised, cannot be left as an activity
to be performed towards the end of the safety lifecycle. Leaving safety case production until all
analysis and development are complete has been observed to result in the following problems:
 Large amounts of re-design resulting from a belated realisation that a satisfactory safety argument
cannot be constructed. In extreme cases, this has resulted in ‘finished’ products having to be
completely discarded and redeveloped.
 Less robust safety arguments being presented in the final safety case. Safety case developers are
forced to argue over a design as it is given to them – rather than being able to influence the design
in such a way as to improve safety and improve the nature of the safety argument. This can result
in, for example, probabilistic arguments being relied upon more heavily than deterministic
arguments based upon explicit design features (the latter being often more convincing).
 Lost safety rationale. The rationale concerning the safety aspects of the design is best recorded at
‘design-time’. Where capture of the safety argument is left until after design and implementation
– it is possible to lose some of the safety aspects of the design decision making process which, if
available, could strengthen the final safety case.
PRELIMINARY SAFETY ARGUMENTS
 Scope - Boundary of concern, standards to be addressed, relationship to other systems / extant safety cases
 System Description - High-Level (Preliminary) Overview of the System: Key functions + Outline of Physical Elements
 System Hazards - Results of Preliminary Hazard Analysis - Key Credible Hazards.
 Safety Requirements - Description of Top-level safety requirements (emerging from study of the
standards, and the Preliminary Hazard Analysis), e.g. Failure Rate in particular Failure Modes.
 Risk Assessment - Results of Risk Estimation exercise, Accident Sequences, HRI used and the
resulting Risk Classes for all identified Hazards.
 Hazard Control / Risk Reduction Measures – At this stage - how the project plans to tackle each
identified risk - design measures, protection systems, redundancy etc.
 Safety Analysis / Test - At this stage - how the project intends to provide evidence of successful
deployment of risk reduction measures, meeting failure rate targets, demonstrating correctness.
 Safety Management System - Reference to contents of Safety Plan for roles, responsibilities,
procedures.
 Development Process Justification - An outline of the development procedures, design
methodologies to be used, coding standards, change control procedures etc. and how these will be
shown to meet integrity level, or development assurance level, requirements.
 Conclusions - At this stage - the key reasons why the project believes that the system will be safe
to deploy, what will be concluded from analysis and test evidence etc.
Security Engineering
Security engineering is about building systems to remain dependable in the face of malice, error, or
mischance. As a discipline, it focuses on the tools, processes, and methods needed to design,
implement, and test complete systems, and to adapt existing systems as their environment evolves.
Security engineering must start early in the application deployment process. In fact, security should be
addressed early in each step of application deployment: security planning, securing the system,
developing the system with security in mind, and testing the system for security. The security of a
system can be threatened via two kinds of violation:
 Threat: A program that has the potential to cause serious damage to the system.
 Attack: An attempt to break security and make unauthorized use of an asset.
Security violations affecting the system can be categorized as malicious and accidental
threats. Malicious threats, as the name suggests, are harmful computer code or web scripts
designed to create system vulnerabilities leading to back doors and security breaches. Accidental
threats, on the other hand, are comparatively easier to protect against; an example is an unintentional
Denial of Service (DoS) condition. Security can be compromised via any of the breaches mentioned below:
 Breach of confidentiality: This type of violation involves the unauthorized reading of data.
 Breach of integrity: This violation involves unauthorized modification of data.
 Breach of availability: It involves unauthorized destruction of data.
 Theft of service: It involves the unauthorized use of resources.
 Denial of service: It involves preventing legitimate use of the system. As mentioned
before, such attacks can be accidental in nature.
Security Goal:
1. Integrity:
The objects in the system must not be accessed by any unauthorized user, and any user without
sufficient rights should not be allowed to modify important system files and resources.
2. Secrecy:
The objects of the system must be accessible only to a limited number of authorized users.
Not everyone should be able to view the system files.
3. Availability:
All the resources of the system must be accessible to all authorized users, i.e. no single
user/process should be able to monopolise all the system resources. If such a situation occurs,
denial of service could happen. In this kind of situation, malware might hog the resources for
itself, thus preventing legitimate processes from accessing the system.
Types of Program Threats:
1. Virus:
An infamous and widely known threat. It is a self-replicating, malicious piece of code that
attaches itself to a system file and then rapidly replicates itself, modifying and destroying
essential files and leading to a system breakdown.
2. Trojan Horse:
A code segment that misuses its environment is called a Trojan Horse. Trojans appear to be
attractive and harmless cover programs but are really harmful hidden programs that can be
used as virus carriers. In one variant, the user is fooled into entering confidential login details
into an application; those details are stolen by a login emulator and can then be used for
further information breaches.
3. Trap Door:
Trap Doors are quite difficult to detect because, to analyse them, one needs to go through the
source code of all the components of the system. A trap door is a secret entry point into a
running or static program that allows anyone to gain access to the system without going
through the usual security access procedures.
4. Logic Bomb:
A program that initiates a security attack only under a specific situation. To be precise, a
logic bomb is a malicious program that is inserted intentionally into the computer system and
that is triggered, or functions, only when specific conditions have been met.
5. Worm:
A computer worm is a type of malware that replicates itself and infects other computers while
remaining active on affected systems. A computer worm replicates itself in order to infect
machines that aren’t already infected. It frequently accomplishes this by taking advantage of
components of an operating system that are automatic and unnoticed by the user. Worms are
frequently overlooked until their uncontrolled replication depletes system resources, slowing
or stopping other activities.
Types of System Threats
1. Worm:
An infection program that spreads through networks. Unlike a virus, they target mainly LANs. A
computer affected by a worm attacks the target system and writes a small program “hook” on it. This
hook is further used to copy the worm to the target computer. This process repeats recursively, and
soon enough all the systems of the LAN are affected. It uses the spawn mechanism to duplicate itself.
The worm spawns copies of itself, using up a majority of system resources and also locking out all
other processes.
2. Port Scanning:
It is a means by which the cracker identifies the vulnerabilities of the system to attack. It is an
automated process that involves creating a TCP/IP connection to a specific port. To protect the
identity of the attacker, port scanning attacks are launched from Zombie Systems, that is, previously
compromised systems that still serve their legitimate owners while also being used for such
notorious purposes.
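As a small illustration (not part of the original material), the sketch below shows what a basic TCP "connect" scan looks like, using only Python's standard socket module; the target host and port range are hypothetical, and such a scan should only ever be run against systems you are authorized to test.

import socket

def scan_ports(host, ports, timeout=0.5):
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when the TCP handshake succeeds, i.e. the port is open
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    print(scan_ports("127.0.0.1", range(20, 1025)))  # hypothetical target and range

Defences such as firewalls and intrusion detection systems look for exactly this pattern of many connection attempts from one source in a short time.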
Denial of Service:
Such attacks are not aimed at collecting information or destroying system files. Rather,
they are used for disrupting the legitimate use of a system or facility. These attacks are generally
network-based. They fall into two categories:
 Attacks in this first category use so many system resources that no useful work can be
performed.
 Attacks in the second category involve disrupting the network of the facility. These attacks
are a result of the abuse of some fundamental TCP/IP principles.
Security Measures Taken
 Physical:
The sites containing computer systems must be physically secured against armed and
malicious intruders. The workstations must be carefully protected.
 Human:
Only appropriate users must have the authorization to access the system. Phishing and
Dumpster Diving must be avoided.
 Operating system:
The system must protect itself from accidental or purposeful security breaches.
 Networking System:
Almost all information is shared between different systems via a network. Intercepting
this data could be just as harmful as breaking into a computer. Hence, the network should
be properly secured against such attacks.
Safety and Organizations
1. Health and Safety Executive
2. Institution of Occupational Safety and Health (IOSH)
3. NEBOSH
4. National Safety Council
5. National Institute for Occupational Safety and Health
6. Health and Safety Authority
7. Occupational Safety and Health Administration (OSHA)
8. European Agency for Safety and Health at Work
9. Safe Work Australia
10. British Safety Council
11. Occupational Safety and Health Consultants Register
12. National Compliance and Risk Qualification
13. Canadian Centre for Occupational Health and Safety
14. Occupational Safety and Health Review Commission
15. Mine Safety and Health Administration
16. State Administration of Work Safety
17. Korea Occupational Safety and Health Agency
18. Board of Canadian Registered Safety Professionals
19. American Society of Safety Professionals
Health and Safety Executive:
The Health and Safety Executive is the body responsible for the encouragement, regulation and
enforcement of workplace health, safety and welfare, and for research into occupational risks in Great
Britain. It is a non-departmental public body of the United Kingdom.
Institution of Occupational Safety and Health (IOSH):
The Institution of Occupational Safety and Health (IOSH) is the world’s leading professional body for
people responsible for safety and health in the workplace. IOSH acts as a champion, supporter, adviser,
advocate and trainer for safety and health professionals working in organisations of all sizes. It gives
the safety and health profession a consistent, independent, authoritative voice at the highest levels.
NEBOSH:
National Examination Board in Occupational Safety and Health is a UK-based independent
examination board delivering vocational qualifications in health, safety & environmental practice and
management. It was founded in 1979 and has charitable status.
National Safety Council:
The National Safety Council is a 501(c)(3) nonprofit, public service organization promoting health and
safety in the United States of America. Headquartered in Itasca, Illinois, NSC is a member
organization, founded in 1913 and granted a congressional charter in 1953.
National Institute for Occupational Safety and Health:
The National Institute for Occupational Safety and Health is the United States federal agency
responsible for conducting research for the prevention of work-related injury and illness.
Health and Safety Authority:
The Health and Safety Authority is the national body in Ireland with responsibility for occupational
health and safety. Its role is to secure health and safety at work.
Occupational safety and Health Administration (OSHA):
The Occupational Safety and Health Administration is an agency of the United States Department of
Labor. Congress established the agency under the Occupational Safety and Health Act, which
President Richard M. Nixon signed into law on December 29, 1970.
European Agency for Safety and Health at work:
The European Agency for Safety and Health at Work is a decentralised agency of the European Union
with the task of collecting, analysing and disseminating relevant information that can serve the needs
of people involved in safety and health at work.
Safe Work Australia:
Safe Work Australia (SWA) is an Australian government statutory body established in 2008 to develop
national policy relating to WHS and workers’ compensation. It is jointly funded by the Commonwealth,
state and territory governments through an Intergovernmental Agreement. It performs its functions in
accordance with its Corporate Plan and Operational Plan, which are agreed annually by Ministers for
Work Health and Safety.
British Safety Council:
The British Safety Council, a Registered Charity founded by James Tye in 1957, is one of the world’s
leading health and safety organisations, alongside the likes of IOSH and IIRSM; unlike these, the
Council’s members are mostly companies.
Occupational Safety and Health Consultants Register:
The Occupational Safety and Health Consultants Register (OSHCR) is a public register of UK-based
health and safety advice consultants, set up to assist UK employers and business owners with general
advice on workplace health and safety issues. The register was established in response to a
recommendation that health and safety consultants should be accredited by their professional bodies
and that a web-based directory of such consultants should be established.
National Compliance and Risk Qualification:
National Compliance and Risk Qualifications – NCRQ – has been established by a number of leading
experts in health and safety. This includes representatives of some of the UK’s largest employers,
including the BBC, Royal Mail, Siemens plc, and local authorities, specialists from the Health and
Safety Executive, legal experts, and academics.
Canadian Center for Occupational Health and Safety:
The Canadian Centre for Occupational Health and Safety (CCOHS) is an independent departmental
corporation under Schedule II of the Financial Administration Act and is accountable to Parliament
through the Minister of Labour. CCOHS functions as the primary national agency in Canada for the
advancement of safe and healthy workplaces and preventing work-related injuries, illnesses and
deaths.
Occupational Safety and Health Review Commission:
The Occupational Safety and Health Review Commission (OSHRC) is an independent federal agency
created under the Occupational Safety and Health Act to decide contests of citations or penalties
resulting from OSHA inspections of American work places.
Mine Safety and Health Administration:
The Mine Safety and Health Administration (MSHA) is an agency of the United States Department of
Labor which administers the provisions of the Federal Mine Safety and Health Act of 1977 (Mine
Act) to enforce compliance with mandatory safety and health standards as a means to eliminate fatal
accidents, to reduce the frequency and severity of nonfatal accidents, to minimize health hazards, and
to promote improved safety and health conditions in the nation’s mines.
State Administration of Work Safety:
The State Administration of Work Safety, reporting to the State Council, is the non-ministerial agency
of the Government of the People’s Republic of China responsible for the regulation of risks to
occupational safety and health in China.
Korea Occupational Safety and Health Agency:
Korea Occupational Safety & Health Agency is a body in South Korea, which serves to protect the
health and safety of Korean workers. The KOSHA (Korea Occupational Safety & Health Agency) Act
was released to the public in the late 1980s. After the KOSHA Act was released in 1986, the labor
department of Korea, which is the competent organization for KOSHA, moved to the next step of
setting up the plan for establishing KOSHA and inaugurating the institution committee for KOSHA.
Board of Canadian Registered Safety Professionals:
The Board of Canadian Registered Safety Professionals provides certification of occupational health
and safety professionals in Canada and has an established Code of Ethics.
American Society of Safety Professionals:
The American Society of Safety Professionals is a global association for occupational safety and
health professionals. For more than 100 years, the association has supported occupational safety and
health (OSH) professionals in their efforts to prevent workplace injuries, illnesses and fatalities. It
provides education, advocacy, standards development and a professional community to its members
in order to advance their careers and the OSH profession as a whole.
Security Requirements
A security requirement is a statement of needed security functionality that ensures one of many
different security properties of software is being satisfied. Security requirements are derived from
industry standards, applicable laws, and a history of past vulnerabilities. Security requirements define
new features or additions to existing features to solve a specific security problem or eliminate a
potential vulnerability.
Security requirements provide a foundation of vetted security functionality for an application. Instead
of creating a custom approach to security for every application, standard security requirements allow
developers to reuse the definition of security controls and best practices. Those same vetted security
requirements provide solutions for security issues that have occurred in the past. Requirements exist
to prevent the repeat of past security failures.
Type of security requirements:
Security requirements can be formulated at different abstraction levels. At the highest abstraction
level, they basically just reflect security objectives. An example of a security objective could be "The
system must maintain the confidentiality of all data that is classified as confidential".
More useful for a software architect or a system designer, however, are security requirements that
describe more concretely what must be done to assure the security of a system and its data. There are 4
different security requirement types:
 Secure Functional Requirements: a security-related description that is integrated into
each functional requirement. Typically, this also says what shall not happen. This requirement
artifact can, for example, be derived from misuse cases.
 Functional Security Requirements: security services that need to be achieved by
the system under inspection. Examples could be authentication, authorization, backup, server
clustering, etc. This requirement artifact can be derived from best practices, policies, and
regulations.
 Non-Functional Security Requirements: security-related architectural
requirements, like "robustness" or "minimal performance and scalability". This requirement
type is typically derived from architectural principles and good practice standards.
 Secure Development Requirements, these requirements describe required activities during
system development which assure that the outcome is not subject to vulnerabilities. Examples
could be "data classification", "coding guidelines" or "test methodology". These requirements
are derived from corresponding best practice frameworks like "CLASP".
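As a small, hypothetical illustration of the Functional Security Requirements type above (authorization), the Python sketch below turns such a requirement into an explicit, testable check; the roles, names and function are assumptions for illustration, not part of the original text.

from dataclasses import dataclass

class AuthorizationError(Exception):
    """Raised when a user lacks the role required by the security requirement."""

@dataclass
class User:
    name: str
    roles: set

def delete_record(user: User, record_id: int) -> None:
    # Illustrative requirement: only users holding the 'admin' role may delete records.
    if "admin" not in user.roles:
        raise AuthorizationError(f"{user.name} is not authorized to delete records")
    print(f"record {record_id} deleted by {user.name}")  # placeholder for the real deletion

delete_record(User("alice", {"admin"}), 42)    # permitted
# delete_record(User("bob", {"viewer"}), 42)   # would raise AuthorizationError

Expressing the requirement as code like this also gives testers something concrete to verify in later phases.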
Implementation
Successful use of security requirements involves four steps. The process includes discovering /
selecting, documenting, implementing, and then confirming correct implementation of new security
features and functionality within an application.
Discovery and Selection
The process begins with discovery and selection of security requirements. In this phase, the developer
is understanding security requirements from a standard source such as ASVS and choosing which
requirements to include for a given release of an application. The point of discovery and selection is
to choose a manageable number of security requirements for this release or sprint, and then continue
to iterate for each sprint, adding more security functionality over time.
Investigation and Documentation
During investigation and documentation, the developer reviews the existing application against the
new set of security requirements to determine whether the application currently meets the requirement
or if some development is required. This investigation culminates in the documentation of the results
of the review.
Test
Test cases should be created to confirm the existence of the new functionality or disprove the
existence of a previously insecure option.
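For example, a minimal security test case might look like the following sketch (assuming a Python test runner such as pytest and the third-party requests package; the endpoint URL is hypothetical):

import requests

def test_profile_requires_authentication():
    # An unauthenticated request to a protected endpoint must be rejected.
    response = requests.get("https://example.test/api/profile", timeout=5)
    assert response.status_code in (401, 403)

A passing test confirms the new security functionality; a failing test proves the previously insecure behaviour still exists.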
Vulnerabilities Prevented
Security requirements define the security functionality of an application. Better security built in from
the beginning of an application's life cycle results in the prevention of many types of vulnerabilities.
Secure System Design
Security and risk management can be applied starting on the design phase of the system. Translating
the requirements — including the security requirements — into a workable system design before we
proceed with the implementation is a good start for a secure system development.

(Figure omitted: security mechanisms at work when a user accesses a web-based application.)
Common security concerns of a software system or an IT infrastructure system still revolve around the
CIA triad, as described in the previous section.
When designing a system, we first need to see the general architecture of the system that should be
implemented for the business requirements to be fulfilled.
(Figure omitted: general architecture of a microservices-based web application, a common approach
for today's HTTP-based web applications and services.)
Suppose we’re designing a microservices-based system and trying to plan for the system security from
the architecture design. We started by performing a risk assessment to see which parts of the system
have the highest risk. The system consists of an API gateway, an authentication service, a user
configuration service, a payment service, and a transaction service.
The five services serve as different components and functions of the system, each carrying its own
risks. But let’s focus on the service that serves as the front-line defense of the system: the API
gateway.
The API gateway is the one accepting requests directly from the public Internet, and the machine it’s
deployed on is at greater risk of being compromised by an attacker compared to the other services
deployed on machines that are not directly exposed to the public Internet. The API gateway will need
to parse and process requests securely so attackers wouldn’t be able to exploit the request parser by
sending a malformed HTTP request.
A malformed request that’s not properly handled may cause the API gateway to crash or to be
manipulated into executing instructions it’s not supposed to execute. It’s a good idea to put the API
gateway behind a firewall that can help filter out malicious requests and stop exploit attempts before
they reach the API gateway, but the firewall itself might be exploitable, so pick something that’s
already battle-proven and quick to patch whenever a vulnerability is found.
While the other services also have their own risks we should handle, the API gateway and the
authentication service are to be prioritized due to the higher risks they pose to the whole system if
compromised.
By putting the API gateway on the front line, with some extra protection such as firewall rules, we
can avoid exposing every service to direct access. Since only the API gateway is hit with traffic
directly from the public Internet, we can focus on securing the API gateway from any risks involving
malformed requests and on ensuring the requests forwarded by the API gateway to each respective
service are already safe.
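As an illustrative sketch (assumed, not taken from the text), the kind of basic request validation a gateway might perform before forwarding traffic could look like this; the limits and allowed values are arbitrary examples.

MAX_BODY_BYTES = 1_000_000
ALLOWED_METHODS = {"GET", "POST", "PUT", "DELETE"}
ALLOWED_CONTENT_TYPES = {"application/json"}

def is_request_acceptable(method: str, headers: dict, body: bytes) -> bool:
    if method not in ALLOWED_METHODS:
        return False                      # reject unexpected HTTP methods
    if len(body) > MAX_BODY_BYTES:
        return False                      # reject oversized payloads
    content_type = headers.get("Content-Type", "")
    if body and content_type.split(";")[0].strip() not in ALLOWED_CONTENT_TYPES:
        return False                      # reject unexpected content types
    return True                           # only then forward to an internal service

print(is_request_acceptable("POST", {"Content-Type": "application/json"}, b"{}"))  # True
print(is_request_acceptable("TRACE", {}, b""))                                     # False

Centralising checks like these at the gateway is exactly what lets the internal services assume the requests they receive are already well-formed.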
Imagine if we let every single service be directly accessible from the public Internet. We’ll need to
ensure every single one of them has the same standards for implementation security regarding how to
handle raw requests. This setup would be much more expensive to maintain as the number of services
we have increases, as we need to secure every single one of them instead of just one key service that
acts as a bridge between the public Internet and services in the internal network.
Poorly-planned system security on architectural level would leave us with the extra work of securing
many things that we shouldn’t even bother with, if only we designed the system architecture properly
from the start.
Building a secure system is not easy, and there will never be enough resources to make a system
perfectly secure. But by performing a risk assessment on the system we’re trying to secure, we’ll be
able to identify which parts of the system need to be prioritized.
The risk assessment approach can be used for performing a security assessment on an existing system,
but it’s also useful when we’re trying to design a system from scratch. By applying the principles to
our system architecture design and adding mechanisms to mitigate possible issues, we can avoid
possible severe risks in the system from the start.
Even for a system that’s designed with security in mind at the beginning, the system will grow more
and more complex as time goes on. The complexity will add more risks to the system, as a more
complex system’s behaviors tend to be more unpredictable. We can manage the system’s complexity
by performing system maintenance tasks: restructuring parts of the system in order to simplify the
overall design and the interaction between components, and removing parts that are no longer used.
Security Testing and Assurance
Security Testing is a type of Software Testing that uncovers vulnerabilities, threats, risks in a software
application and prevents malicious attacks from intruders. The purpose of Security Tests is to identify
all possible loopholes and weaknesses of the software system which might result in a loss of
information, revenue, or reputation at the hands of employees or outsiders of the organization.
The main goal of Security Testing is to identify the threats in the system and measure its potential
vulnerabilities, so the threats can be countered and the system does not stop functioning or cannot
be exploited. It also helps in detecting all possible security risks in the system and helps developers to
fix the problems through coding.
Types of Security Testing
 Vulnerability Scanning: This is done through automated software to scan a system against
known vulnerability signatures.
 Security Scanning: It involves identifying network and system weaknesses, and later
provides solutions for reducing these risks. This scanning can be performed manually or with
automated tools.
 Penetration testing: This kind of testing simulates an attack from a malicious hacker. This
testing involves analysis of a particular system to check for potential vulnerabilities to an
external hacking attempt.
 Risk Assessment: This testing involves analysis of security risks observed in the
organization. Risks are classified as Low, Medium and High. This testing recommends
controls and measures to reduce the risk.
 Security Auditing: This is an internal inspection of Applications and Operating systems for
security flaws. An audit can also be done via line-by-line inspection of code.
 Ethical hacking: This is hacking an organization's software systems. Unlike malicious hackers,
who steal for their own gain, the intent is to expose security flaws in the system.
 Posture Assessment: This combines Security scanning, Ethical Hacking and Risk
Assessments to show an overall security posture of an organization.
How to do Security Testing
It is generally agreed that the cost will be higher if we postpone security testing until after the software
implementation phase or after deployment. So, it is necessary to involve security testing in the earlier
phases of the SDLC life cycle.
SDLC Phase - Security Processes
Requirements - Security analysis for requirements and check of abuse/misuse cases
Design - Security risk analysis for the design; development of a Test Plan including security tests
Coding and Unit Testing - Static and Dynamic Testing and Security White Box Testing
Integration Testing - Black Box Testing
System Testing - Black Box Testing and Vulnerability Scanning
Implementation - Penetration Testing, Vulnerability Scanning
Support - Impact analysis of Patches
Methodologies/ Approach / Techniques for Security Testing
 Tiger Box: This hacking is usually done on a laptop which has a collection of OSs and
hacking tools. This testing helps penetration testers and security testers to conduct
vulnerabilities assessment and attacks.
 Black Box: The tester is not given information about the network topology or the technology,
and tests the system from an external perspective.
 Grey Box: Partial information is given to the tester about the system, and it is a hybrid of
white and black box models.
Security Testing Tool
1) Acunetix
Intuitive and easy to use, Acunetix by Invicti helps small to medium-sized organizations ensure their
web applications are secure from costly data breaches. It does so by detecting a wide range of web
security issues and helping security and development professionals act fast to resolve them.
2) Intruder
Intruder is a powerful, automated penetration testing tool that discovers security weaknesses across
your IT environment. Offering industry-leading security checks, continuous monitoring and an easy-
to-use platform, Intruder keeps businesses of all sizes safe from hackers.
3) Owasp
The Open Web Application Security Project (OWASP) is a worldwide non-profit organization
focused on improving the security of software. The project has multiple tools to pen test various
software environments and protocols.
4) WireShark
Wireshark is a network analysis tool previously known as Ethereal. It captures packets in real time and
displays them in a human-readable format. Basically, it is a network packet analyzer which provides
minute details about your network protocols, decryption, packet information, etc. It is open source
and can be used on Linux, Windows, OS X, Solaris, NetBSD, FreeBSD and many other systems. The
information retrieved via this tool can be viewed through a GUI or the TTY-mode TShark utility.
5) W3af
w3af is a web application attack and audit framework. It has three types of plugins: discovery, audit
and attack, which communicate with each other to find vulnerabilities in a site. For example, a
discovery plugin in w3af looks for different URLs to test for vulnerabilities and forwards them to the
audit plugin, which then uses these URLs to search for vulnerabilities.
Security Assurance
1. Security Hardening
2. Security Testing
3. Vulnerability Management
Security Hardening
Security hardening describes the minimization of a system’s attack surface and proper configuration
of security functions. The former may be achieved by disabling unnecessary components, removing
superfluous system accounts, and closing any communication interfaces not in use – just to name a
few. The latter configuration task focuses on security controls within the system itself and ensures that
these can perform their functions as intended. This can include the configuration of host-based
firewalls, intrusion detection/ prevention capabilities, or operating system controls, such as SELinux.
Security hardening is particularly important before a system is deployed, but should be verified
regularly thereafter to confirm that the system still meets the defined hardening standard in the context
of its current operating environment.
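As a small, hypothetical hardening check (it assumes the third-party psutil package and an arbitrary baseline, and may need elevated privileges on some platforms), the sketch below compares the ports a host is actually listening on against an approved baseline, which is one way to verify that unnecessary communication interfaces stay closed over time.

import psutil

APPROVED_PORTS = {22, 443}  # illustrative hardening baseline

def unexpected_listeners():
    listening = {c.laddr.port for c in psutil.net_connections(kind="inet")
                 if c.status == psutil.CONN_LISTEN}
    return sorted(listening - APPROVED_PORTS)

print("Ports outside the hardening baseline:", unexpected_listeners())

Running such a check regularly after deployment supports the verification the text describes.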
Security Testing
Security testing aims to validate a system’s security posture by trying to identify any weaknesses or
vulnerabilities possibly remaining after security hardening. This activity can take many different
forms, depending on the complexity of the system under test and the available resources and skills. In
its most basic form, it may comprise an automated vulnerability scan from the outside as well as an
authenticated scan from the perspective of a user on the system. More advanced tests would go a step
further by analyzing the system's responses and reasoning about communication flows that may
afford an attacker a way into the system. Established best practices, such as the OWASP Top 10,
can serve as a useful guide here to focus the test activities on the most common vulnerabilities.
Beyond that, a fully manual test could dig even deeper, for example by trying to discover
vulnerabilities in the system's source code, if available.
Similar to hardening of the system, security testing should also be performed before and during a
system's operation. Regular, automated security scans can be a great tool to identify new
vulnerabilities early on.
Vulnerability Management
Vulnerability management takes the results of the security tests performed and attempts to mitigate
them. This includes the analysis of each finding (Is this actually an issue in the context of this
system?), prioritization (How big of an issue is it?), and mitigation (How can it be fixed?). While the
last part should be fairly obvious, the first two are just as essential since it is important to take a risk-
based approach to vulnerability mitigation. No system will ever be completely free of vulnerabilities,
but the goal should be to avoid the ones that are critical and easily abusable.
Note that the term security assurance is not strictly defined, so in some organizations further aspects
may be considered part of it, such as a secure software development process.
Resilience Engineering
Resilience is the ability to absorb or avoid damage without suffering complete failure and is an
objective of design, maintenance and restoration for buildings and infrastructure, as well as
communities. A more comprehensive definition is that it is the ability to respond to, absorb, and adapt
to, as well as recover from, a disruptive event. A resilient structure/system/community is expected to
be able to resist an extreme event with minimal damage and functionality disruption during the event;
after the event, it should be able to rapidly recover its functionality to a level similar to or even better
than the pre-event level.
The concept of resilience originated in engineering and was then gradually applied to other fields. It is
related to that of vulnerability. Both terms are specific to the event perturbation, meaning that a
system/infrastructure/community may be more vulnerable or less resilient to one event than another
one. However, they are not the same. One obvious difference is that vulnerability focuses on the
evaluation of system susceptibility in the pre-event phase; resilience emphasizes the dynamic features
in the pre-event, during-event, and post-event phases.
Resilience is a multi-faceted property, covering four dimensions: technical, organizational, social and
economic. Therefore, a single metric may not be sufficient to describe and quantify resilience.
In engineering, resilience is characterized by four Rs: robustness, redundancy, resourcefulness, and
rapidity. Current research studies have developed various ways to quantify resilience from multiple
aspects, such as functionality- and socioeconomic- related aspects.
Engineering resilience has inspired other fields and influenced the way they interpret resilience,
e.g. supply chain resilience.
Engineering resilience refers to the functionality of a system in relation to hazard mitigation. Within
this framework, resilience is calculated based on the time it takes a system to return to a single state
equilibrium. Researchers at the MCEER (Multi-Hazard Earthquake Engineering research center) have
identified four properties of resilience: Robustness, resourcefulness, redundancy and rapidity.
 Robustness: the ability of systems to withstand a certain level of stress without suffering loss of
function.
 Resourcefulness: the ability to identify problems and resources when threats may disrupt the
system.
 Redundancy: the ability to have various paths in a system by which forces can be transferred to
enable continued function
 Rapidity: the ability to meet priorities and goals in time to prevent losses and future disruptions.
How to build in resilience
React to failures. When errors occur, teams respond to them. When a failure occurs and there is no
response, you are not adapting.
Log correctly. It is easiest to treat failures when their cause is known. Building good logging reports
into the application can help identify errors quickly, allowing tech/support staff to easily handle and
treat the errors. Good logging is critical to root cause analysis.
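A minimal sketch of what consistent, root-cause-friendly logging might look like (using Python's standard logging module; the service name, message fields and simulated failure are illustrative assumptions):

import logging

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s %(message)s",
)
log = logging.getLogger("payment-service")

def charge(order_id: str, amount: float) -> None:
    try:
        raise ConnectionError("payment provider unreachable")  # simulated failure
    except ConnectionError:
        # logging.exception records the stack trace, which supports root cause analysis
        log.exception("charge failed order_id=%s amount=%.2f", order_id, amount)

charge("A-1001", 19.99)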
Check your metrics. Building resiliency should consider important metrics like mean time to failure
(MTTF) and mean time to recovery (MTTR) in order to isolate impacted components and restore
optimal performance (a small availability sketch follows these points).
Know your options. Backup plans are illustrative of preparedness, not paranoia. When plan A fails,
and your company already has plans B, C, and D in place, your ability to respond to the failure
increases greatly.
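Relating to the "Check your metrics" point above, one commonly used relationship between these metrics is the steady-state availability, availability = MTTF / (MTTF + MTTR); the figures in the sketch below are purely illustrative.

def availability(mttf_hours: float, mttr_hours: float) -> float:
    # Fraction of time the system is operational in steady state.
    return mttf_hours / (mttf_hours + mttr_hours)

print(f"{availability(mttf_hours=1000.0, mttr_hours=2.0):.4%}")  # roughly 99.8%

Improving either metric (failing less often, or recovering faster) raises availability, which is why both are worth tracking.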
Resilience in Cloud Computing
Resilient computing is a form of computing that distributes redundant IT resources for operational
purposes. In this form of computing, IT resources are pre-configured so that, when they are needed at
processing time, they can be used without interruption.
The characteristic of flexibility in cloud computing can refer to redundant IT resources within a single
cloud or across multiple clouds. By taking advantage of the flexibility of cloud-based IT services,
cloud consumers can improve both the efficiency and availability of their applications.
A resilient cloud fixes problems and continues operation. Cloud resilience is a term used to describe
the ability of servers, storage systems, data servers, or entire networks to remain connected to the
network without interfering with their functions or losing their operational capabilities. For a cloud
system to remain resilient, it needs to cluster servers, have redundant workloads, and even rely on
multiple physical servers. High-quality products and services will accomplish this task.
Complex systems
A recurring theme in resilience engineering is about reasoning holistically about systems, as opposed
to breaking things up into components and reasoning about components separately. This perspective
is known as systems thinking, which is a school of thought that has been influential in the resilience
engineering community.
When you view the world as a system, the idea of cause becomes meaningless, because there’s no
way to isolate an individual cause. Instead, the world is a tangled web of influences.
You’ll often hear the phrase socio-technical system. This language emphasizes that systems should be
thought of as encompassing both humans and technologies, as opposed to thinking about
technological aspects in isolation.
Five Pillars of Resilience Engineering
Monitoring and Visibility
It’s critical to implement constant monitoring to ensure your team can act quickly in the case of an
emergency. You have to monitor at the application level, identify your critical user flows, and ensure
you create synthetic transactions and heuristics monitoring to identify signs of disruption before the
experience for your customers starts to degrade.
One way you can challenge your engineers to prepare for the unknown is through regular games and
testing opportunities like SRT (site reliability testing) and outage simulations. In these games, we
divide the team in half. One team is tasked with understanding how to monitor several metrics of the
new technology to ensure it’s working correctly and to take manual action if needed to restore service
when a disruption is identified. The other team will purposely introduce several disruption modes and
monitor how they affect the system. It’s okay and even encouraged to push teams over the edge,
forcing them to reassess themselves and learn for next time.
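As a hypothetical sketch of the "synthetic transactions" idea above (assuming the third-party requests package; the URL, latency threshold and alerting action are illustrative assumptions), a probe might periodically exercise a critical user flow and flag degradation before customers notice:

import requests

def probe_checkout(url: str = "https://example.test/checkout/health") -> bool:
    try:
        response = requests.get(url, timeout=3)
        latency = response.elapsed.total_seconds()
        return response.status_code == 200 and latency < 1.0
    except requests.RequestException:
        return False  # treat network errors as a failed probe

if not probe_checkout():
    print("ALERT: checkout flow degraded")  # placeholder for paging/alerting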
A “Redundancy is King” Attitude
To ensure resilience engineering, it’s critical to have no single point of failure and proactively prepare
for where you might need “backup.” This can look like multiple cells supported by several servers and
all backed by different data centers. When you send your credentials to authenticate, if one subsystem
isn’t working, you can redirect to another, so the authentication works and appears seamless to the
end-user. We’ve spent a lot of time understanding failure modes and making sure our architecture can
immediately work around those modes.
A “No Mysteries” Mindset
Embracing a “no mystery” culture comes down to being willing and motivated to find the root cause
of any issue that happens in your production system, no matter the complexity. Every engineer must
maintain a mindset of curiosity and exploration and never settle for not knowing.
Strong Automation
Automation is an absolute requirement, but the only thing worse than having no automation at all is
having bad automation. A bug in your automation can take an entire system down faster than a human
can restore it and bring it back to operation.
The key to implementing effective automation is to treat it as production software, meaning strong
software development principles should apply. Even if your automation starts as a small number of
scripts, you need to consider a release cycle, testing automation, deployment, and rollback procedures.
This may seem overkill for your team initially, but your whole system will eventually depend on your
automation making the right decisions and having no bugs when executing. It’s hard to retrofit good
SDLC processes for your automation if they’re not incorporated from the beginning.
The Right Team
An organization that practices and prioritizes resilience engineering starts with its people. Long gone
are the days when an engineer would write software and then pass it off for someone else to test it and
run it. Today, every engineer today is responsible for ensuring their software is robust, reliable, and
always on. Resiliency engineering is hard and requires a lot of passionate engineers, so make sure you
reward and recognize your team; ensure they know you understand the complexity of the challenges.
This takes a cultural shift and starts with who you hire. When you’re interviewing, ensure you hire
people who are proud of what they’ve built in previous roles and who get satisfaction from solving
tough problems while keeping a product running.
Cybersecurity
The technique of protecting internet-connected systems such as computers, servers, mobile devices,
electronic systems, networks, and data from malicious attacks is known as cybersecurity. We can
divide cybersecurity into two parts one is cyber, and the other is security. Cyber refers to the
technology that includes systems, networks, programs, and data. And security is concerned with the
protection of systems, networks, applications, and information. In some cases, it is also
called electronic information security or information technology security.
Cybersecurity is the protection of Internet-connected systems, including hardware, software, and data
from cyber attackers. It is primarily about people, processes, and technologies working together to
encompass the full range of threat reduction, vulnerability reduction, deterrence, international
engagement, and recovery policies and activities, including computer network operations, information
assurance, law enforcement, etc.
Cyber-attacks are now an international concern and have raised fears that they could endanger the
global economy. As the volume of cyber-attacks grows, companies and organizations, especially
those that deal with information related to national security, health, or financial records, need to take
steps to protect their sensitive business and personal information.
Types of Cyber Security
 Network Security: It involves implementing the hardware and software to secure a computer
network from unauthorized access, intruders, attacks, disruption, and misuse. This security
helps an organization to protect its assets against external and internal threats.
 Application Security: It involves protecting the software and devices from unwanted threats.
This protection can be done by constantly updating the apps to ensure they are secure from
attacks. Successful security begins in the design stage, writing source code, validation, threat
modeling, etc., before a program or device is deployed.
 Information or Data Security: It involves implementing a strong data storage mechanism to
maintain the integrity and privacy of data, both in storage and in transit.
 Identity management: It deals with the procedure for determining the level of access that
each individual has within an organization.
 Operational Security: It involves processing and making decisions on handling and securing
data assets.
 Mobile Security: It involves securing the organizational and personal data stored on mobile
devices such as cell phones, computers, tablets, and other similar devices against various
malicious threats. These threats include unauthorized access, device loss or theft, malware, etc.
 Cloud Security: It involves protecting the information stored in the organization's digital
environment or cloud architectures. It applies to the various cloud service providers, such as
AWS, Azure, Google, etc., to ensure security against multiple threats.
 Disaster Recovery and Business Continuity Planning: It deals with the processes,
monitoring, alerts, and plans for how an organization responds when any malicious activity is
causing the loss of operations or data. Its policies dictate resuming the lost operations after
any disaster happens to the same operating capacity as before the event.
 User Education: It deals with educating users about security: raising awareness of threats
such as phishing, promoting good practices for handling data and passwords, and ensuring
organisational security policies are followed, since users are often the weakest link in security.
Why is Cyber Security important?
Today we live in a digital era where all aspects of our lives depend on the network, computer and
other electronic devices, and software applications. All critical infrastructure such as the banking
system, healthcare, financial institutions, governments, and manufacturing industries use devices
connected to the Internet as a core part of their operations. Some of their information, such as
intellectual property, financial data, and personal data, is sensitive, and unauthorized access or
exposure could have negative consequences. This information gives intruders and threat actors an
incentive to infiltrate these organizations for financial gain, extortion, political or social motives, or just vandalism.
Cyber-attacks are now an international concern, and high-profile breaches and other security attacks
could endanger the global economy. Therefore, it is essential to have an excellent cybersecurity
strategy to protect sensitive information from high-profile security breaches. Furthermore, as the volume of
cyber-attacks grows, companies and organizations, especially those that deal with information related
to national security, health, or financial records, need to use strong cybersecurity measures and
processes to protect their sensitive business and personal information.
Cyber Security Goals
Cyber Security's main objective is to ensure data protection. The security community provides a
triangle of three related principles to protect the data from cyber-attacks. This principle is called the
CIA triad. The CIA model is designed to guide policies for an organization's information security
infrastructure. When any security breaches are found, one or more of these principles has been
violated.
We can break the CIA model into three parts: Confidentiality, Integrity, and Availability. It is actually
a security model that helps people to think about various parts of IT security.
Confidentiality
Confidentiality is equivalent to privacy; it avoids unauthorized access to information. It involves
ensuring the data is accessible by those who are allowed to use it and blocking access to others. It
prevents essential information from reaching the wrong people. Data encryption is an excellent
example of ensuring confidentiality.
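A minimal encryption sketch, assuming the third-party cryptography package (not mandated by the text), shows how confidentiality can be enforced with a symmetric key:

from cryptography.fernet import Fernet

key = Fernet.generate_key()        # must be stored and shared securely
cipher = Fernet(key)

token = cipher.encrypt(b"account balance: 1000")  # ciphertext is unreadable without the key
print(cipher.decrypt(token))                      # b'account balance: 1000'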
Integrity
This principle ensures that the data is authentic, accurate, and safeguarded from unauthorized
modification by threat actors or accidental user modification. If any modifications occur, certain
measures should be taken to protect the sensitive data from corruption or loss and speedily recover
from such an event. In addition, it requires ensuring that the source of the information is genuine.
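A minimal integrity sketch (standard library only; the data values are illustrative) detects unauthorized modification by comparing a SHA-256 digest against a stored value:

import hashlib

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

original = b"quarterly report v1"
stored_digest = sha256_of(original)

tampered = b"quarterly report v1 (edited)"
print(sha256_of(tampered) == stored_digest)  # False: integrity violation detected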
Availability
This principle ensures that information is always available and useful to its authorized users. It
ensures that this access is not hindered by system malfunction or cyber-attacks.
Types of Cyber Security Threats
Malware
Malware means malicious software, which is the most common cyber attacking tool. It is used by the
cybercriminal or hacker to disrupt or damage a legitimate user's system. The following are the
important types of malware created by the hacker:
 Virus: It is a malicious piece of code that spreads from one device to another. It can clean
files and spread throughout a computer system, infecting files, stealing information, or damaging
the device.
 Spyware: It is a software that secretly records information about user activities on their
system. For example, spyware could capture credit card details that can be used by the
cybercriminals for unauthorized shopping, money withdrawing, etc.
 Trojans: It is a type of malware or code that appears as legitimate software or file to fool us
into downloading and running. Its primary purpose is to corrupt or steal data from our device
or do other harmful activities on our network.
 Ransomware: It's a piece of software that encrypts a user's files and data on a device,
rendering them unusable or erasing them. A monetary ransom is then demanded by the malicious
actors for decryption.
 Worms: It is a piece of software that spreads copies of itself from device to device without
human interaction. It does not require them to attach themselves to any program to steal or
damage the data.
 Adware: It is an advertising software used to spread malware and displays advertisements on
our device. It is an unwanted program that is installed without the user's permission. The main
objective of this program is to generate revenue for its developer by showing the ads on their
browser.
 Botnets: It is a collection of internet-connected malware-infected devices that allow
cybercriminals to control them. It enables cybercriminals to obtain leaked credentials,
unauthorized access, and stolen data without the user's permission.
Phishing
Phishing is a type of cybercrime in which a sender appears to be a genuine organization like
PayPal, eBay, a financial institution, or a friend or co-worker. They contact a target or targets via
email, phone, or text message with a link and persuade them to click on that link. The link redirects
them to fraudulent websites that ask them to provide sensitive data such as personal information,
banking and credit card information, social security numbers, usernames, and passwords. Clicking on
the link may also install malware on the target devices, allowing hackers to control the devices remotely.
Man-in-the-middle (MITM) attack
A man-in-the-middle attack is a type of cyber threat (a form of eavesdropping attack) in which a
cybercriminal intercepts a conversation or data transfer between two individuals. Once the
cybercriminal places themselves in the middle of a two-party communication, they seem like genuine
participants and can get sensitive information and return different responses. The main objective of
this type of attack is to gain access to our business or customer data. For example, a cybercriminal
could intercept data passing between the target device and the network on an unprotected Wi-Fi
network.
Distributed denial of service (DDoS)
It is a type of cyber threat or malicious attempt in which cybercriminals disrupt the regular traffic of
targeted servers, services, or networks by flooding the target or its surrounding infrastructure with
Internet traffic. Here the requests come from several IP addresses, which can make the system
unusable, overload its servers, slow them down significantly or temporarily take them offline, or
prevent an organization from carrying out its vital functions.
Brute Force
A brute force attack is a cryptographic hack that uses a trial-and-error method to guess all possible
combinations until the correct information is discovered. Cybercriminals usually use this attack to
obtain personal information about targeted passwords, login info, encryption keys, and Personal
Identification Numbers (PINS).
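To illustrate why this matters (a sketch with assumed figures, not from the text), the size of the search space a brute-force attack must cover grows exponentially with password length, and a rough time estimate can be computed directly:

ALPHABET = 26 + 26 + 10          # lower case + upper case + digits
LENGTH = 8
GUESSES_PER_SECOND = 1e9         # assumed attacker capability

keyspace = ALPHABET ** LENGTH
seconds = keyspace / GUESSES_PER_SECOND
print(f"{keyspace:,} combinations, ~{seconds / 86400:.1f} days at 1e9 guesses/s")

Adding a single extra character multiplies the keyspace by the alphabet size, which is why longer passwords and rate limiting are effective defences.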
SQL Injection (SQLI)
SQL injection is a common attack that occurs when cybercriminals use malicious SQL scripts for
backend database manipulation to access sensitive information. Once the attack is successful, the
malicious actor can view, change, or delete sensitive company data, user lists, or private customer
details stored in the SQL database.
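The sketch below (using Python's built-in sqlite3 module; the table, column and payload are illustrative) shows both the vulnerability and the standard mitigation, a parameterized query that treats user input purely as data:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "x' OR '1'='1"  # classic injection payload

# Vulnerable: the input is concatenated into the SQL text and alters the query logic.
vulnerable = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'").fetchall()

# Mitigated: a parameterized query keeps the input out of the SQL syntax.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()

print(vulnerable)  # returns rows despite the bogus name
print(safe)        # returns no rows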
Domain Name System (DNS) attack
A DNS attack is a type of cyberattack in which cyber criminals take advantage of flaws in the Domain
Name System to redirect site users to malicious websites (DNS hijacking) and steal data from affected
computers. It is a severe cybersecurity risk because the DNS system is an essential element of the
internet infrastructure.
Benefits of Cybersecurity
 Cyberattacks and data breach protection for businesses.
 Data and network security are both protected.
 Unauthorized user access is avoided.
 After a breach, there is a faster recovery time.
 End-user and endpoint device protection.
 Regulatory adherence.
 Continuity of operations.
 Developers, partners, consumers, stakeholders, and workers have more faith in the
company's reputation and trust.
Resilient Systems Design
A system is resilient if it continues to carry out its mission in the face of adversity (i.e., if it provides
required capabilities despite excessive stresses that can cause disruptions). Being resilient is important
because no matter how well a system is engineered, reality will sooner or later conspire to disrupt the
system. Residual defects in the software or hardware will eventually cause the system to fail to
correctly perform a required function or cause it to fail to meet one or more of its quality requirements
(e.g., availability, capacity, interoperability, performance, reliability, robustness, safety, security, and
usability). The lack or failure of a safeguard will enable an accident to occur. An unknown or
uncorrected security vulnerability will enable an attacker to compromise the system. An external
environmental condition (e.g., loss of electrical supply or excessive temperature) will disrupt service.
Due to these inevitable disruptions, availability and reliability by themselves are insufficient, and thus
a system must also be resilient. It must resist adversity and provide continuity of service, possibly
under a degraded mode of operation, despite disturbances due to adverse events and conditions. It
must also recover rapidly from any harm that those disruptions might have caused. As in the old
Timex commercial, a resilient system "can take a licking and keep on ticking."
A system is resilient to the degree to which it rapidly and effectively protects its critical capabilities
from disruption caused by adverse events and conditions. Implicit in the preceding definition is the
idea that adverse events and conditions will occur. System resilience is about what the system does
when these potentially disruptive events occur and conditions exist.
Protection consists of the following four functions:
 Resistance is the system's ability to passively prevent or minimize harm from occurring
during the adverse event or condition. Resilience techniques for passive resistance include a
modular architecture that prevents failure propagation between modules, a lack of single
points of failure, and the shielding of electrical equipment, computers, and networks from
electromagnetic pulses (EMP).
 Detection is the system's ability to actively detect (via detection techniques):
- Loss or degradation of critical capabilities
- Harm to assets needed to implement critical capabilities
- Adverse events and conditions that can harm critical capabilities or related assets
 Reaction is the system's ability to actively react to the occurrence of an ongoing adverse event
or respond to the existence of an adverse condition. On detecting an adversity, a system might
stop or avoid the adverse event, eliminate the adverse condition, and thereby eliminate or
minimize further harm. Reaction techniques include employing exception handling, degraded
modes of operation, and redundancy with voting; a minimal sketch of a degraded-mode reaction follows this list.
 Recovery is the system's ability to actively recover from harm after the adverse event is over.
Recovery can be complete in the sense that the system is returned to full operational status
with all damaged/destroyed assets having been repaired or replaced. Recovery can also be
partial or minimal. Recovery might also include the system evolving or adapting to avoid
future occurrences of the adverse events or conditions.
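The following Python sketch is a minimal illustration, under assumed service names and failure rates, of how detection, reaction, and recovery can work together: a failing primary capability is detected through exceptions, retried, and finally replaced by a degraded-mode fallback.

# Minimal sketch (assumption, not from the source) of detection, reaction and
# recovery: detect a failing call, retry it, and fall back to a degraded mode.
import random
import time

def primary_service() -> str:
    if random.random() < 0.5:                 # simulated intermittent fault
        raise ConnectionError("primary capability unavailable")
    return "full result from primary service"

def degraded_service() -> str:
    return "cached/approximate result (degraded mode)"

def resilient_call(retries: int = 3, delay: float = 0.1) -> str:
    for _ in range(retries):
        try:
            return primary_service()          # full capability when available
        except ConnectionError:               # detection: the failure is observed
            time.sleep(delay)                 # recovery attempt: wait and retry
    return degraded_service()                 # reaction: continue in degraded mode

print(resilient_call())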
Properties
1 Resilience transcends scales. Strategies to address resilience apply at scales of individual
buildings, communities, and larger regional and ecosystem scales; they also apply at different time
scales, from immediate to long-term.
2 Resilient systems provide for basic human needs. These include potable water, sanitation, energy,
livable conditions, lighting, safe air, occupant health, and food; these should be equitably distributed.
3 Diverse and redundant systems are inherently more resilient. More diverse communities,
ecosystems, economies, and social systems are better able to respond to interruptions or change,
making them inherently more resilient. While sometimes in conflict with efficiency and green
building priorities, redundant systems for such needs as electricity, water, and transportation, improve
resilience.
4 Simple, passive, and flexible systems are more resilient. Passive or manual-override systems are
more resilient than complex solutions that can break down and require ongoing maintenance. Flexible
solutions are able to adapt to changing conditions both in the short- and long-term.
5 Durability strengthens resilience. Strategies that increase durability enhance resilience. Durability
involves not only building practices, but also building design (beautiful buildings will be maintained
and last longer), infrastructure, and ecosystems.
6 Locally available, renewable, or reclaimed resources are more resilient. Reliance on abundant
local resources, such as solar energy, annually replenished groundwater, and local food provides
greater resilience than dependence on non-renewable resources or resources from far away.
7 Resilience anticipates interruptions and a dynamic future. Adaptation to a changing climate
with higher temperatures, more intense storms, sea level rise, flooding, drought, and wildfire is a
growing necessity, while non-climate-related natural disasters, such as earthquakes and solar flares,
and anthropogenic actions like terrorism and cyberterrorism, also call for resilient design. Responding
to change is an opportunity for a wide range of system improvements.
8 Find and promote resilience in nature. Natural systems have evolved to achieve resilience; we
can enhance resilience by relying on and applying lessons from nature. Strategies that protect the
natural environment enhance resilience for all living systems.
9 Social equity and community contribute to resilience. Strong, culturally diverse communities in
which people know, respect, and care for each other will fare better during times of stress or
disturbance. Social aspects of resilience can be as important as physical responses.
10 Resilience is not absolute. Recognize that incremental steps can be taken and that total
resilience in the face of all situations is not possible. Implement what is feasible in the short term and
work to achieve greater resilience in stages.
UNIT IV-SERVICE ORIENTED SOFTWARE ENGINEERING,
SYSTEMS ENGINEERING AND REAL TIME SOFTWARE
ENGINEERING
Service-oriented Architecture
A Service-Oriented Architecture (SOA) is a design pattern used to build distributed systems that
deliver services to other applications through communication protocols. It is only a concept and is not
limited to any programming language or platform.
Service-orientation promotes loose coupling between services. SOA separates functions into
distinct units, or services, which developers make accessible over a network in order to allow users to
combine and reuse them in the production of applications. These services and their corresponding
consumers communicate with each other by passing data in a well-defined, shared format, or by
coordinating an activity between two or more services.
There are two major roles within Service-oriented Architecture (a small illustrative sketch follows this list):
1. Service provider: The service provider is the maintainer of the service and the organization
that makes available one or more services for others to use. To advertise services, the provider
can publish them in a registry, together with a service contract that specifies the nature of the
service, how to use it, the requirements for the service, and the fees charged.
2. Service consumer: The service consumer can locate the service metadata in the registry and
develop the required client components to bind and use the service.
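As a small illustration of these two roles, the Python sketch below models a hypothetical registry in which a provider publishes a service contract and a consumer looks it up to bind to the service; the class, service name, endpoint, and contract fields are invented for the example.

# Hypothetical sketch of the SOA roles: provider publishes, consumer discovers and binds.

class ServiceRegistry:
    def __init__(self):
        self._services = {}

    def publish(self, name: str, contract: dict) -> None:
        """Provider advertises a service together with its contract."""
        self._services[name] = contract

    def lookup(self, name: str) -> dict:
        """Consumer discovers the service metadata needed to bind to it."""
        return self._services[name]

registry = ServiceRegistry()

# Service provider side
registry.publish("CurrencyConversion",
                 {"endpoint": "https://example.com/convert",   # placeholder URL
                  "operations": ["convert(amount, from, to)"],
                  "fee": "free tier: 1000 calls/day"})

# Service consumer side
contract = registry.lookup("CurrencyConversion")
print("Bind to:", contract["endpoint"], "using", contract["operations"][0])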
Service-Oriented Terminologies
 Services - The services are the logical entities defined by one or more published interfaces.
 Service provider - It is a software entity that implements a service specification.
 Service consumer - Also called a requestor or client, it calls a service provider. A
service consumer can be another service or an end-user application.
 Service locator - It is a service provider that acts as a registry. It is responsible for examining
service provider interfaces and service locations.
 Service broker - It is a service provider that passes service requests to one or more additional
service providers.
Characteristics of SOA
 They are loosely coupled.
 They support interoperability.
 They are location-transparent.
 They are self-contained.
Components of SOA
Functional aspects
 Transport - It transports the service requests from the service consumer to the service
provider and service responses from the service provider to the service consumer.
 Service Communication Protocol - It allows the service provider and the service consumer
to communicate with each other.
 Service Description - It describes the service and data required to invoke it.
 Service - It is an actual service.
 Business Process - It represents the group of services called in a particular sequence
associated with the particular rules to meet the business requirements.
 Service Registry - It contains the description of data which is used by service providers to
publish their services.
Guiding Principles of SOA:
1. Standardized service contract: Specified through one or more service description documents.
2. Loose coupling: Services are designed as self-contained components, maintain relationships that
minimize dependencies on other services.
3. Abstraction: A service is completely defined by service contracts and description documents.
They hide their logic, which is encapsulated within their implementation.
4. Reusability: Designed as components, services can be reused more effectively, thus reducing
development time and the associated costs.
5. Autonomy: Services have control over the logic they encapsulate and, from a service consumer
point of view, there is no need to know about their implementation.
6. Discoverability: Services are defined by description documents that constitute supplemental
metadata through which they can be effectively discovered. Service discovery provides an
effective means for utilizing third-party resources.
7. Composability: Using services as building blocks, sophisticated and complex operations can be
implemented. Service orchestration and choreography provide a solid support for composing
services and achieving business goals.
Advantages of SOA:
 Service reusability: In SOA, applications are made from existing services. Thus, services can
be reused to make many applications.
 Easy maintenance: As services are independent of each other they can be updated and
modified easily without affecting other services.
 Platform independent: SOA allows making a complex application by combining services
picked from different sources, independent of the platform.
 Availability: SOA facilities are easily available to anyone on request.
 Reliability: SOA applications are more reliable because it is easy to debug small services
rather than huge codes
 Scalability: Services can run on different servers within an environment, this increases
scalability
Disadvantages of SOA:
 High overhead: A validation of input parameters of services is done whenever services
interact; this decreases performance because it increases load and response time.
 High investment: A huge initial investment is required for SOA.
 Complex service management: When services interact, they exchange messages for their
tasks; the number of messages may run into millions. Handling such a large number of
messages becomes a cumbersome task.
Applications of SOA:
1. SOA infrastructure is used by many armies and air forces to deploy situational awareness
systems.
2. SOA is used to improve healthcare delivery.
3. Nowadays many apps are games and they use inbuilt functions to run. For example, an app
might need GPS so it uses the inbuilt GPS functions of the device. This is SOA in mobile
solutions.
4. SOA helps museums maintain a virtualized storage pool for their information and content.
RESTful Services
RESTful Services are client and server applications that communicate over the Web. RESTful
services are services based on the REST architecture; in REST architecture, everything is a resource.
RESTful services provide communication between software applications running on different
platforms and frameworks. We can consider web services as code on demand. A RESTful service is a
function or method which can be called by sending an HTTP request to a URL, and the service returns
the result as the response.
The following four HTTP methods are commonly used in REST-based architecture.
 GET − Provides a read only access to a resource.
 POST − Used to create a new resource.
 DELETE − Used to remove a resource.
 PUT − Used to update an existing resource or create a new resource.
Web services based on the REST architecture are known as RESTful web services. These web
services use HTTP methods to implement the concept of the REST architecture. A RESTful web
service usually defines a URI (Uniform Resource Identifier) for the service, provides a resource
representation such as JSON, and supports a set of HTTP methods.
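A minimal sketch of such a RESTful web service is shown below in Python, assuming the Flask framework is installed (pip install flask) and using an invented in-memory "users" resource; each HTTP verb maps onto one operation on the resource.

# Hypothetical RESTful service sketch using Flask; names and data are illustrative.
from flask import Flask, jsonify, request

app = Flask(__name__)
users = {1: {"name": "Alice"}}            # toy in-memory resource store

@app.route("/users/<int:user_id>", methods=["GET"])
def read_user(user_id):                   # GET: read-only access to a resource
    user = users.get(user_id)
    if user is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(user), 200

@app.route("/users", methods=["POST"])
def create_user():                        # POST: create a new resource
    new_id = max(users, default=0) + 1
    users[new_id] = request.get_json()
    return jsonify({"id": new_id}), 201

@app.route("/users/<int:user_id>", methods=["PUT"])
def update_user(user_id):                 # PUT: update (or create) a resource
    users[user_id] = request.get_json()
    return jsonify(users[user_id]), 200

@app.route("/users/<int:user_id>", methods=["DELETE"])
def delete_user(user_id):                 # DELETE: remove a resource
    users.pop(user_id, None)
    return "", 204

if __name__ == "__main__":
    app.run(debug=True)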
RESTful Services Resources
The REST architecture treats every piece of content as a resource. These resources can be text files,
HTML pages, images, videos, or dynamic business data. The REST server simply provides access to
the resources, and the REST client accesses and modifies them. Each resource is identified by a
URI/global ID. REST uses various representations for a resource, such as text, JSON, and XML; the
most popular representations are XML and JSON.
Representation of Resources
A resource in REST is similar to an object in object-oriented programming or an entity in a database.
Once a resource is identified, its representation is decided using a standard format so that the server
can send the resource in that format and the client can understand it.
REST does not impose any restriction on the format of a resource representation. A client can ask for
JSON representation whereas another client may ask for XML representation of the same resource to
the server and so on. It is the responsibility of the REST server to pass the client the resource in the
format that the client understands.
Following are some important points to be considered while designing a representation format of a
resource in RESTful Web Services.
 Understandability − Both the Server and the Client should be able to understand and utilize
the representation format of the resource.
 Completeness − Format should be able to represent a resource completely. For example, a
resource can contain another resource. Format should be able to represent simple as well as
complex structures of resources.
 Linkability − A resource can have a linkage to another resource; a format should be able to
handle such situations.
However, at present most of the web services are representing resources using either XML or JSON
format. There are plenty of libraries and tools available to understand, parse, and modify XML and
JSON data.
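As an illustration of serving different representations of the same resource, the Python sketch below (the "book" resource and the function name are invented) returns JSON by default and XML when the client's Accept header asks for it:

# Sketch of content negotiation between JSON and XML representations.
import json
from xml.etree.ElementTree import Element, SubElement, tostring

book = {"id": 42, "title": "RESTful Design", "author": "A. Author"}

def represent(resource: dict, accept_header: str) -> str:
    if "application/xml" in accept_header:
        root = Element("book")
        for key, value in resource.items():
            SubElement(root, key).text = str(value)
        return tostring(root, encoding="unicode")
    # default representation: JSON
    return json.dumps(resource)

print(represent(book, "application/json"))
print(represent(book, "application/xml"))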
RESTful Services Messages
RESTful Web Services make use of HTTP protocols as a medium of communication between client
and server. A client sends a message in the form of an HTTP request and the server responds in the
form of an HTTP response. This technique is termed messaging. These messages contain message data
and metadata, i.e., information about the message itself. Let us look at the HTTP request and
HTTP response messages for HTTP 1.1.
HTTP Request
An HTTP Request has five major parts −
 Verb − Indicates the HTTP methods such as GET, POST, DELETE, PUT, etc.
 URI − Uniform Resource Identifier (URI) to identify the resource on the server.
 HTTP Version − Indicates the HTTP version. For example, HTTP v1.1.
 Request Header − Contains metadata for the HTTP Request message as key-value pairs. For
example, client (or browser) type, format supported by the client, format of the message body,
cache settings, etc.
 Request Body − Message content or Resource representation.
HTTP Response
An HTTP Response has four major parts (an illustrative request/response pair is shown after this list) −
 Status/Response Code − Indicates the Server status for the requested resource. For example,
404 means resource not found and 200 means response is ok.
 HTTP Version − Indicates the HTTP version. For example HTTP v1.1.
 Response Header − Contains metadata for the HTTP Response message as key-value pairs.
For example, content length, content type, response date, server type, etc.
 Response Body − Response message content or Resource representation.
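For illustration, a hypothetical request and its response might look like the following on the wire; the host, path, and values are invented for this example:

GET /employees/42 HTTP/1.1         <- verb, URI, and HTTP version
Host: api.example.com              <- request headers (metadata as key-value pairs)
Accept: application/json
                                   (request body is empty for this GET)

HTTP/1.1 200 OK                    <- HTTP version and status/response code
Content-Type: application/json     <- response headers (metadata as key-value pairs)
Content-Length: 27

{"id": 42, "name": "Alice"}        <- response body (resource representation)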
RESTful Services Statelessness
RESTful Web Service should not keep a client state on the server. This restriction is called
Statelessness. It is the responsibility of the client to pass its context to the server and then the server
can store this context to process the client's further requests. For example, a session maintained by the
server is identified by a session identifier passed by the client.
RESTful web services should adhere to this restriction, meaning that the web service methods should
not store any information from the clients that invoke them.
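A small Python sketch of this idea follows; the endpoint URL and token are placeholders, and the point is only that the client supplies its full context (here an authorization token) on every call rather than relying on server-side session state.

# Sketch of a stateless client call: context travels with every request.
import urllib.request

def get_orders(token: str) -> bytes:
    request = urllib.request.Request(
        "https://api.example.com/orders",             # placeholder URL, not a real API
        headers={"Authorization": f"Bearer {token}"}  # client context sent each time
    )
    with urllib.request.urlopen(request) as response:
        return response.read()

# Every request is self-describing; the server keeps no client session state.
# orders = get_orders("example-token")   # not executed here: the URL is only a placeholder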
RESTful Services Security
As RESTful Services work with HTTP URL Paths, it is very important to safeguard a RESTful Web
Service in the same manner as a website is secured.
Following are the best practices to be adhered to while designing a RESTful service (a small input-validation sketch follows this list) −
 Validation − Validate all inputs on the server. Protect your server against SQL or NoSQL
injection attacks.
 Session Based Authentication − Use session based authentication to authenticate a user
whenever a request is made to a Web Service method.
 No Sensitive Data in the URL − Never use a username, password, or session token in a URL;
these values should be passed to the web service via the POST method.
 Restriction on Method Execution − Allow restricted use of methods like GET, POST and
DELETE methods. The GET method should not be able to delete data.
 Validate Malformed XML/JSON − Check for well-formed input passed to a web service
method.
 Throw generic Error Messages − A web service method should use HTTP error messages
like 403 to show access forbidden, etc.
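The sketch below illustrates the first of these practices in Python; the pattern, limits, and error mapping are assumptions chosen for the example, not a prescribed standard.

# Sketch of server-side input validation with a generic error message.
import re

USERNAME_PATTERN = re.compile(r"^[A-Za-z0-9_]{3,20}$")

def validate_username(raw: str) -> str:
    if not USERNAME_PATTERN.fullmatch(raw):
        # Generic message: do not echo the offending input or internal details.
        raise ValueError("Invalid request")      # would map to HTTP 400 in a web service
    return raw

print(validate_username("alice_01"))
try:
    validate_username("alice'; DROP TABLE users;--")
except ValueError as error:
    print("rejected:", error)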
Service Engineering
Service engineering, also called service-oriented software engineering, is a software engineering
process that attempts to decompose the system into self-running units that either perform services or
expose services (reusable services). Service oriented applications are designed around loosely-coupled
services, meaning there are simple standards and protocols which are followed by all concerned,
while behind them are a wide variety of technological services which can be far more complex. The
reusable services are often provided by many different service providers, all of whom collaborate
dynamically with service users and service registries.
The Actors in Service Engineering
There are three types of actors in a service-oriented environment. These are:
 Service providers: These are software services that publish their capabilities and their
availability with service registries.
 Service users: These are software systems (which may be services themselves) that use the
services provided by service providers. Service users can use service registries to discover
and locate the service providers they need.
 Service registries: These are constantly evolving catalogs of information that can be queried
to see what type of services are available.
Characteristics of Services in Service Engineering
 The provision of the service is independent of the application using the service.
 Services are platform independent and implementation language independent.
 They are easier to test since they are small and independent. This makes them more reliable
for use in applications.
 Since services are individual pieces of functionality rather than a large piece of code, they can
be reused in multiple applications, therefore lowering the cost of development of future tools.
 Services can be developed in parallel since they are independent of each other. This reduces
the time it takes to develop the software.
 Since the location of a service doesn't matter, the service can be moved to a more powerful
server if needed. There can also be separate instances of the service running on different
servers.
Service Engineer Responsibilities:
 Using various strategies and tools to provide effective solutions to customers' concerns.
 Communicating with clients, engineers, and other technicians to ensure that services are
delivered effectively.
 Promptly following up on service requests and providing customer feedback.
 Monitoring equipment and machinery performance and developing preventative maintenance
measures.
 Conducting quality assurance and safety checks on all equipment.
 Delivering demonstrations to ensure that customers are educated on safe and effective
equipment use.
 Providing recommendations about new features and product improvements.
 Monitoring inventory and reordering materials when needed.
 Conducting research and attending workshops to remain abreast of industry developments.
 Writing reports and presenting findings to Managers and Supervisors on a regular basis.
Service Engineer Requirements:
 An associate's degree in engineering or similar.
 A bachelor's degree in engineering is preferred.
 A relevant certificate or license is advantageous.
 A valid driver's license and willingness to travel.
 Excellent active listening and customer service skills.
 The ability to deal with multiple requests without being overwhelmed.
 The ability to remain professional under pressure.
 Superb work ethic and a growth mindset.
Service Composition
Service composition is an arrangement in which many smaller services are combined together into a
larger service.
Service composition can be illustrated as follows:
 Service A, Service B and Service C are smaller services.
 A large service is composed by combining services A, B and C together.
Service Composition Performance
The services communicate with each other over a network, just as in component composition, but
inter-service communication is much slower than the inter-component communication taking place
within a single application. Performance will suffer if the services communicate internally through an
ESB (Enterprise Service Bus) and larger services are decomposed into many smaller services.
Service compositions can be categorized into primitive and complex variations. Simple logic was
implemented through point-to-point exchanges or primitive compositions in early service-oriented
solutions. As the technology developed, complex compositions became more common.
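A minimal Python sketch of a primitive composition follows; the three services and the composite "place_order" operation are invented for illustration, and in practice each call would cross a network rather than be a local function call.

# Sketch: a composite service orchestrating three smaller services in sequence.

def check_inventory(item: str, quantity: int) -> bool:    # Service A
    return quantity <= 10

def charge_payment(amount: float) -> str:                 # Service B
    return f"payment of {amount:.2f} accepted"

def arrange_shipping(item: str) -> str:                   # Service C
    return f"shipping booked for {item}"

def place_order(item: str, quantity: int, unit_price: float) -> dict:
    """Composite service built by orchestrating services A, B and C."""
    if not check_inventory(item, quantity):
        return {"status": "rejected", "reason": "out of stock"}
    receipt = charge_payment(quantity * unit_price)
    tracking = arrange_shipping(item)
    return {"status": "confirmed", "payment": receipt, "shipping": tracking}

print(place_order("keyboard", 2, 49.99))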
Systems engineering
Systems engineering is an interdisciplinary field of engineering and engineering management that
focuses on how to design, integrate, and manage complex systems over their life cycles. At its core,
systems engineering utilizes systems thinking principles to organize this body of knowledge. The
individual outcome of such efforts, an engineered system, can be defined as a combination of
components that work in synergy to collectively perform a useful function.
Issues such as requirements engineering, reliability, logistics, coordination of different teams, testing
and evaluation, maintainability and many other disciplines necessary for successful system design,
development, implementation, and ultimate decommission become more difficult when dealing with
large or complex projects. Systems engineering deals with work-processes, optimization methods, and
risk management tools in such projects. Systems engineering ensures that all likely aspects of a
project or system are considered and integrated into a whole.
The systems engineering process is a discovery process that is quite unlike a manufacturing process.
A manufacturing process is focused on repetitive activities that achieve high quality outputs with
minimum cost and time. The systems engineering process must begin by discovering the real
problems that need to be resolved, and identifying the most probable or highest impact failures.
Holistic view
Systems engineering focuses on analyzing and eliciting customer needs and required functionality
early in the development cycle, documenting requirements, then proceeding with design synthesis and
system validation while considering the complete problem, the system lifecycle. This includes fully
understanding all of the stakeholders involved. Oliver et al. claim that the systems engineering
process can be decomposed into
 a Systems Engineering Technical Process, and
 a Systems Engineering Management Process.
Within Oliver's model, the goal of the Management Process is to organize the technical effort in the
lifecycle, while the Technical Process includes assessing available information, defining effectiveness
measures, creating a behavior model, creating a structure model, performing trade-off analysis, and
creating a sequential build and test plan.
Scope
One way to view systems engineering is as a method, or practice, to identify and improve common
rules that exist within a wide variety of systems. Keeping this in mind, the principles of systems
engineering (holism, emergent behavior, boundary, et al.) can be applied to any system, complex or otherwise,
provided systems thinking is employed at all levels. Besides defense and aerospace, many information
and technology based companies, software development firms, and industries in the field of
electronics & communications require systems engineers as part of their team.
Systems engineering encourages the use of modeling and simulation to validate assumptions or
theories on systems and the interactions within them.
Use of methods that allow early detection of possible failures, in safety engineering, are integrated
into the design process. At the same time, decisions made at the beginning of a project whose
consequences are not clearly understood can have enormous implications later in the life of a system,
and it is the task of the modern systems engineer to explore these issues and make critical decisions.
No method guarantees today's decisions will still be valid when a system goes into service years or
decades after first conceived. However, there are techniques that support the process of systems
engineering. Examples include soft systems methodology, Jay Wright Forrester's System dynamics
method, and the Unified Modeling Language (UML) all currently being explored, evaluated, and
developed to support the engineering decision process.
Managing complexity
Systems engineering encourages the use of tools and methods to better comprehend and manage
complexity in systems. Some examples of these tools can be seen here:
 System architecture,
 System model, Modeling, and Simulation,
 Optimization,
 System dynamics,
 Systems analysis,
 Statistical analysis,
 Reliability analysis, and
 Decision making
Taking an interdisciplinary approach to engineering systems is inherently complex since the behavior
of and interaction among system components is not always immediately well-defined or understood.
Defining and characterizing such systems and subsystems and the interactions among them is one of
the goals of systems engineering. In doing so, the gap that exists between informal requirements from
users, operators, marketing organizations, and technical specifications is successfully bridged.
Systems engineering processes
Systems engineering processes encompass all creative, manual and technical activities necessary to
define the product and which need to be carried out to convert a system definition to a sufficiently
detailed system design specification for product manufacture and deployment. Design and
development of a system can be divided into four stages, each with different definitions:
 task definition (informative definition),
 conceptual stage (cardinal definition),
 design stage (formative definition), and
 implementation stage (manufacturing definition).
Models
Models play important and diverse roles in systems engineering. A model can be defined in several
ways, including:
 An abstraction of reality designed to answer specific questions about the real world
 An imitation, analogue, or representation of a real world process or structure; or
 A conceptual, mathematical, or physical tool to assist a decision maker.
Together, these definitions are broad enough to encompass physical engineering models used in the
verification of a system design, as well as schematic models like a functional flow block diagram and
mathematical models used in the trade study process. This section focuses on the last.
The main reason for using mathematical models and diagrams in trade studies is to provide estimates
of system effectiveness, performance or technical attributes, and cost from a set of known or
estimable quantities. Typically, a collection of separate models is needed to provide all of these
outcome variables. The heart of any mathematical model is a set of meaningful quantitative
relationships among its inputs and outputs. These relationships can be as simple as adding up
constituent quantities to obtain a total, or as complex as a set of differential equations describing the
trajectory of a spacecraft in a gravitational field. Ideally, the relationships express causality, not just
correlation. Furthermore, key to successful systems engineering activities are also the methods with
which these models are efficiently and effectively managed and used to simulate the systems.
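As a small illustration of such a quantitative model, the Python sketch below scores invented candidate designs with an assumed weighted-sum effectiveness measure and a simple cost-effectiveness ratio; the attributes, numbers, and weights are placeholders, not data from any real trade study.

# Sketch of a simple trade-study model: estimate effectiveness and rank candidates.

candidates = {
    "Design A": {"unit_cost": 120.0, "reliability": 0.95, "performance": 0.70},
    "Design B": {"unit_cost": 150.0, "reliability": 0.99, "performance": 0.85},
    "Design C": {"unit_cost": 100.0, "reliability": 0.90, "performance": 0.60},
}

def effectiveness(attrs: dict, weights=(0.6, 0.4)) -> float:
    """Weighted sum of quality attributes (a simple quantitative relationship)."""
    w_rel, w_perf = weights
    return w_rel * attrs["reliability"] + w_perf * attrs["performance"]

for name, attrs in candidates.items():
    score = effectiveness(attrs)
    print(f"{name}: effectiveness={score:.3f}, "
          f"cost-effectiveness={score / attrs['unit_cost']:.5f}")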
Sociotechnical Systems
Sociotechnical systems (STS) in organizational development is an approach to complex
organizational work design that recognizes the interaction between people and technology in
workplaces. The term also refers to coherent systems of human relations, technical objects, and
cybernetic processes that inhere in large, complex infrastructures. Society, and its constituent
substructures, qualifies as a complex sociotechnical system.
Sociotechnical theory is about joint optimization, with a shared emphasis on achievement of both
excellence in technical performance and quality in people's work lives. Sociotechnical theory, as
distinct from sociotechnical systems, proposes a number of different ways of achieving joint
optimisation. They are usually based on designing different kinds of organisation, according to which
the functional output of different sociotechnical elements leads to system efficiency, productive
sustainability, user satisfaction, and change management.
Sociotechnical refers to the interrelatedness of social and technical aspects of an organization.
Sociotechnical theory is founded on two main principles:
 One is that the interaction of social and technical factors creates the conditions for successful
organizational performance. This interaction consists partly of linear "cause and effect"
relationships and partly from "non-linear", complex, even unpredictable relationships.
Whether designed or not, both types of interaction occur when socio and technical elements
are put to work.
 The corollary of this, and the second of the two main principles, is that optimization of each
aspect alone tends to increase not only the quantity of unpredictable, "un-designed"
relationships, but those relationships that are injurious to the system's performance.
Sustainability
Standalone, incremental improvements are not sufficient to address current, let alone future
sustainability challenges. These challenges will require deep changes of sociotechnical systems.
Theories on innovation systems; sustainable innovations; system thinking and design; and
sustainability transitions, among others, have attempted to describe potential changes capable of
shifting development towards more sustainable directions.
Sociotechnical perspectives also form a crucial role in the creation of systems that have long term
sustainability. In the development of new systems, the consideration of sociotechnical factors from the
perspectives of the affected stakeholders ensures that a sustainable system is created which is both
engaging and benefits everyone involved.
Any organisation that strives to become sustainable must take into consideration the many
dimensions - financial, ecological and (socio-)technical. However, for many stakeholders the main
aim of sustainability is to be economically viable. Without long-term economic sustainability, the
organisation's very existence could come under question, potentially shutting the business
down.
Benefits of sociotechnical systems
 Viewing the work system as a whole, making it easier to discuss and analyse
 More organised approach by even outlining basic understanding of a work system
 A readily usable analysis method making it more adaptable for performing analysis of a work
system
 Does not require guidance by experts and researchers
 Reinforces the idea that a work system exists to produce a product(s)/service(s)
 Easier to theorize about potential staff reductions, changing job roles, and reorganizations
 Encourages motivation and good will while reducing the stress from monitoring
 Acknowledges that documentation and practice may differ
Process improvement
Process improvement in organizational development is a series of actions taken to identify, analyze
and improve existing processes within an organization to meet new goals and objectives. These
actions often follow a specific methodology or strategy to create successful results.
Task analysis
Task analysis is the analysis of how a task is accomplished, including a detailed description of both
manual and mental activities, task and element durations, task frequency, task allocation, task
complexity, environmental conditions, necessary clothing and equipment, and any other unique
factors involved in or required for one or more people to perform a given task. This information can
then be used for many purposes, such as personnel selection and training, tool or equipment design,
procedure design and automation.
Job design
Job design or work design in organizational development is the application of sociotechnical systems
principles and techniques to the humanization of work, for example, through job enrichment. The
aims of work design are to improve job satisfaction, throughput, and quality, and to reduce employee
problems, e.g., grievances and absenteeism.
Evolution of socio-technical systems
Socio-technical design evolved from being approached exclusively as a social system; the need for
joint optimisation of social and technical systems was realised later. The work can be divided into
primary work, which looks into principles and description, and work on how to incorporate technical
designs at a macrosocial level.
Conceptual Design
Conceptual design is a framework for establishing the underlying idea behind a design and a plan for
how it will be expressed visually. It is related to the term “concept art”, which is an illustration (often
used in the preproduction phase of a film or a video game) that conveys the vision of the artist for
how the final product might take form. Similarly, conceptual design occurs early on in the design
process, generally before fine details such as exact color choices or illustration style. The only tools
required are a pen and paper.
Conceptual design has the root word “concept,” which describes the idea and intention behind the
design. This is contrasted by “execution”, which is the implementation and shape that a design
ultimately takes.
Essentially, the concept is the plan, and the execution is the follow-through action. Designs are often
evaluated for quality in both of these areas: concept vs execution. In other words, a critic might ask:
what is a design trying to say, and how well does it say it? Conceptual design is what allows designers
to evoke the underlying idea of a design through imagery.
Most importantly, you can’t have one without the other. A poorly executed design with a great
concept will muddle its message with an unappealing art style. A well-executed design with a poor
concept might be beautiful, but it will do a poor job of connecting with viewers and/or expressing a
brand. For our purposes, we’ll focus on the concept, whereas execution involves studying the
particulars of design technique.
The purpose of conceptual design
The purpose of conceptual design is to give visual shape to an idea. Towards that end, there are three
main facets to the goals of conceptual design:
Conceptual design bridges the gap between the message and the imagery.
To establish a basis of logic
Artistic disciplines have a tendency to be governed by emotion and gut feeling. Designs, however, are
meant to be used. Whether it is a piece of software or a logo, a design must accomplish something
practical such as conveying information or expressing a brand—all on top of being aesthetically
pleasing. Conceptual design is what grounds the artwork in the practical questions of why and how.
To create a design language
Since the concept is essentially just an idea, designers must bridge the gap between abstract thought
and visual characteristics. Design language describes using design elements purposefully to
communicate and evoke meaning.
As explained earlier, the conceptual design phase isn’t going to go as far as planning every stylistic
detail, but it will lay the groundwork for meaningful design choices later on. Conceptual design exists
to make sure that imagery communicates its message effectively.
To achieve originality
There’s a famous saying that nothing is original, and this is true to an extent. The practice of design—
like any artistic discipline—is old, with designers building on the innovations of those who came
before.
But you should at least aspire to stand on the shoulders of those giants. And the concept and ideation
phase in the design process is where truly original creative sparks are most likely to happen.
The conceptual design approach
Now that we understand what conceptual design is and its purpose, we can talk about how it is done.
The conceptual design approach can be broken down into four steps and we’ll discuss each in detail.
It is important to note that these steps don’t have to be completed in any particular order. For
example, many designers jump to doodling without any concrete plan of what they are trying to
achieve. How a person comes up with ideas is personal and depends on whatever helps them think.
It can also be related to how you best learn—e.g. people who learn best by taking notes might have an
easier time organizing their concepts by writing them down. And sometimes taking a more analytical
approach (such as research) early on can constrain creativity whereas the opposite can also lead to
creativity without a purpose.
Whatever order you choose, we would recommend that you do go through all of the steps to get a
concept that is fully thought through. With that out of the way, let’s dive into the conceptual design
process.
First you have to unravel the problem.
1. Definition
You must start your design project by asking why the project is necessary. What is the specific goal of
the design and what problem is it meant to solve?
Defining the problem can be a lot trickier than it at first appears because problems can be complex.
Often, a problem can be a symptom of deeper issues, and you want to move beyond the surface to
uncover the root causes.
One technique for doing so is known as the Five Whys, in which you are presented with a problem
and keep asking “Why?” until you arrive at a more nuanced understanding. Otherwise, if you fail to
get to the exact root of the problem, your design solution will ultimately be flawed. And the
design solution—the answer to the problem—is just another way of describing the concept.
2. Research
Designs must eventually occupy space (whether physical or digital) in the real world. For this reason,
a design concept must be grounded in research, where you will understand the context in which the
design must fit.
Researching the people who will interact with the design is essential to solidifying the concept.
This can start with getting information on the client themselves—who is the brand, and what are their
history, mission, and personality? You must also consider the market.
Who are the people that will interact with the design? In order for the concept to speak effectively to
these people, you must conduct target audience research to understand who they are and what they are
looking for in a design. Similarly, researching similar designs from competitors can help you
understand industry conventions as well as give you ideas for how to set your concept apart.
Finally, you will want to research the work of other designers in order to gather reference material and
inspiration, especially from those you find particularly masterful. Doing so can show you conceptual
possibilities you might never have imagined, challenging you to push your concepts. You’ll want to
collect these in a mood board, which you will keep handy as you design.
3. Verbal ideation
Concepts are essentially thoughts—which is to say, they are scattered words in our minds. In order to
shape a concept into something substantial, you need to draw some of those words out. This phase is
generally referred to as brainstorming, in which you will define your concept verbally.
In graphic design, especially in regards to logos, the brand name is often the starting point for
generating concepts of representational imagery. This can be as straightforward as simply posing the
problem (see the first step) and creating a list of potential solutions.
There are also some helpful word-based techniques, such as mind-mapping or free association. In
both of these cases, you generally start with a word or phrase (for logos, this is usually the brand name
and for other designs, it can be based on some keywords from the brief). You then keep writing
associated words that pop into your head until you have a long list. It is also important to give
yourself a time limit so that you brainstorm quickly without overthinking things.
The purpose of generating words is that these can help you come up with design characteristics (in the
next step) to express your concept. For example, the word “freedom” can translate into loose flowing
lines or an energetic character pose.
Ultimately, it is helpful to organize these associated ideas into a full sentence or phrase that articulates
your concept and what you are trying to accomplish. This keeps your concept focused throughout the
design process.
4. Visual ideation
At some point, concepts must make the leap from abstract ideas to a visual design. Designers usually
accomplish this through sketching. One helpful approach is to create thumbnails, which are sketches of
a design that are small enough to fit several on the same page.
Like brainstorming (or verbal ideation) the goal is to come up with sketches fast so that your ideas can
flow freely. You don’t want to get hung up on your first sketch or spend too much time on minute
detail. Right now, you are simply visualizing possible interpretations of the concept. The concept is
often visually expressed through a sketch; the final design may differ from the conceptual sketch once
the design has been refined with detail and color.
This phase is important because while you may think you have the concept clear in your mind, seeing
it on the page is the true test of whether it holds water. You may also surprise yourself with a sketch
that articulates your concept better than you could have planned. Once you have a couple of sketches that
you like, you can refine this into a much larger and more detailed sketch. This will give you a
presentable version from which you can gather feedback.
Dream big with conceptual design
The remainder of the design process is spent executing the concept. You will use the software of your
choice to create a working version of your design, such as a prototype or mockup. Assuming your
design is approved by the client, test users or any other stakeholders, you can go about creating the
final version. If not, use conceptual design to revisit the underlying concept.
Conceptual design is the bedrock of any design project. For this reason, it is extremely important to
get right. Creating a concept can be difficult and discouraging—over time, you might find your
garbage bin overflowing with rejected concepts.
But this is exactly why it is so helpful to have a delineated process like conceptual design to guide you
through the messy work of creating ideas. But at the end of the day, getting a design of value will
require both a great concept and a skilled designer.
System Procurement
The responsibility of ensuring an uninterrupted supply of goods and services required for the smooth
functioning of the organization lies with the procurement department. The procurement manager
ensures that the right quantity of goods at optimal costs are available for various departments of the
organization, without compromising on the quality of goods/services.
Importance of Procurement Management System
A well-managed procurement function improves business outcomes significantly. Before making a
purchase, the procurement department needs to analyze the market to ensure that the best deal is
made. Here are some of the important functions carried out by the procurement department:
Working on Purchase Deals:
Purchase managers need to perform an in-depth analysis of the market before concluding a purchase.
Getting the best deal for goods and services in terms of pricing and quality is an important
procurement function. Thorough evaluation of all the vendors based on their reputation, timely
delivery of orders, and quoted price needs to be done by the procurement team.
Compliance Management:
One of the most important responsibilities of the procurement manager is to ensure that every
purchase order adheres to the policies and processes defined within the organization and is compliant
with procurement laws and regulations. To ensure compliance with laws and regulations, the
procurement manager needs to stay updated on the changes in laws and regulations and update the
company policies accordingly. An e-procurement system that flags inconsistencies and non-
compliance will help the procurement manager identify and resolve compliance-related issues
efficiently.
Establish Strong Vendor Relationships:
Maintaining a strong and reliable vendor base is extremely important in procurement. Continuous
review of vendor contracts and relationships is needed to ensure that the organization’s requirements
are adequately met. All vendors approved by the procurement department must provide good quality
service and goods and adhere to terms and policies while in contract with the company. Procurement
KPIs on vendor performance must be reviewed periodically to ensure consistent performance.
Management and Coordination of Procurement Staff:
The procurement manager must enable seamless communication between all the stakeholders. The
procurement management system must ensure that daily tasks and operations within the procurement
team are well planned and coordinated. Completing tasks before deadlines, quick resolution of issues
and process bottlenecks, and quality inspection of delivered goods and services are some of the
routine tasks performed by the procurement staff.
Maintaining and Updating Data:
Procurement data needs to be maintained and updated for audit and regulatory compliance. Data on
the items purchased, cost price, supplier information, inventory list, and delivery information must be
updated by the team. Accurate documentation forms the basis for reporting and budgeting.
An e-procurement software empowers the procurement team to carry out its functions efficiently with
optimal resource utilization. Procurement solutions ensure seamless communication and coordination
between various procurement tasks, accurate and consistent documentation, and cost and time
savings. Procurement staff are freed from repetitive tasks so that they can focus on finer details and
contribute more towards organizational growth. An e-procurement system enables CPOs to make
data-driven business decisions.
Archaic procurement systems are not equipped to deal with the complexity of modern procurement
processes. Manual procurement systems slow down the processing speed and introduce errors and
discrepancies in the procurement process. Deploying a cloud procurement system eliminates the
inefficiencies and discrepancies in the procurement workflow.
Procurement System
A procurement or purchasing system helps organizations streamline and automate the process of
purchasing goods or services and manage inventory. All the processes related to the procurement
function can be efficiently handled through the procurement management system.
What is the meaning of procurement management? Procurement management is also referred to as the
source-to-settle or procure-to-pay process. The procurement management system manages all the
steps from sourcing to payout. In addition to managing the entire purchase lifecycle, procurement
management is also about managing vendor relationships and streamlining the procurement process.
Procurement systems empower procurement managers to manage all the steps in the purchasing
process with ease. Procurement management is a complex process that spans several interrelated
activities like transactional purchasing of services and goods, inventory management, integration with
accounts payable, and updating supporting documents. A procurement application helps manage
activities like:
 Preparing and sending purchase requisitions
 Preparing and approving purchase orders
 Choosing vendors through quotation management
 Managing vendor relationships
 Approving delivered goods and services
 Review and approval of invoices
Components of a Procurement System
Procurement systems or solutions are designed to overcome the disadvantages of manual procurement
systems and accelerate and streamline the procurement lifecycle. All the components of the
procurement system should be taken care of by the e-procurement software. The modules of an ideal
procurement system are:
Purchase Requisitions:
Purchase requisitions contain all the information regarding the requirement for goods or services. Any
department having a requirement for goods or services prepares a purchase requisition with all the
necessary information to the procurement department. An automated procurement system helps in
quick approvals so that the procurement team can start vendor identification. The status of purchase
requests can also be viewed transparently by all stakeholders.
Purchase Orders:
Purchase orders (PO) are generated by the procurement department and sent to the vendor for
procuring goods or services. A digital procurement system allows users to automatically generate
purchase orders from the information derived from purchase requisitions. Automated procurement
solutions enable the procurement team to track, review, and manage the status of POs at all times
from any device at any time.
Purchase Invoices:
Two-way or three-way matching of purchase invoices, which validates the information contained in the
purchase requisitions, purchase orders, and invoices, is an important step in the procurement process.
The procurement system must enable automated matching of invoices for quick and timely approvals
and processing of payments without any delays.
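A minimal sketch of the three-way matching rule is shown below in Python; the field names, documents, and tolerance are assumptions chosen for illustration only.

# Sketch: approve an invoice only if it agrees with the purchase order and the
# goods-receipt note within an assumed price tolerance.

def three_way_match(purchase_order: dict, receipt: dict, invoice: dict,
                    price_tolerance: float = 0.02) -> bool:
    same_item = purchase_order["item"] == receipt["item"] == invoice["item"]
    qty_ok = purchase_order["quantity"] == receipt["quantity"] == invoice["quantity"]
    price_ok = abs(invoice["unit_price"] - purchase_order["unit_price"]) \
               <= price_tolerance * purchase_order["unit_price"]
    return same_item and qty_ok and price_ok

po  = {"item": "laptop", "quantity": 5, "unit_price": 900.0}
grn = {"item": "laptop", "quantity": 5}
inv = {"item": "laptop", "quantity": 5, "unit_price": 912.0}

print("approve invoice:", three_way_match(po, grn, inv))   # True (within 2% tolerance)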
Third-party Integrations:
The procurement software must integrate with third-party applications such as sourcing databases and
vendor systems. Seamless integration with these apps ensures that work is not interrupted, and
the procurement team need not spend too much time gathering data manually from external sources.
Vendor Management:
Timely processing of purchase invoices ensures the strengthening of vendor relationships. The
procurement management system must provide a centralized database that captures all the vendor
information accurately and consistently. Enrolling new vendors and evaluating existing vendors can
be done quickly and easily with a centralized vendor database.
Procurement Analytics:
Procure-to-pay software must provide key actionable insights from the procurement data. The solution
must enable real-time tracking of key performance metrics and generation of customized reports for
leadership to make strategic business decisions.
Need for Procurement Management System
Saving business costs in an increasingly complex and dynamic market scenario requires intuitive and
agile tools that streamline the procurement process. Online procurement software improves the
performance of the procurement function by integrating supplier management, sourcing, PO
management, and risk management into a single procure-to-pay platform.
Here are why businesses need procurement software:
Spend management: Managing organizational spending can be done by streamlining the
procurement process. Controlling the purchases that occur beyond the formal processes and policies is
a challenge for organizations. Implementing a procurement software system standardizes the
operations within the procurement system. Standardized procurement workflows enable employees to
follow a clear process and direct purchases to authorized vendors. A procurement system makes it
easy to implement spend management policies through compliance management and audit-ready
purchase order workflows.
Streamline source-to-settle lifecycle: Redundant and cumbersome manual processes bring down the
efficiency of the organization. Procurement software streamlines the entire procurement cycle, from
sourcing to settlement. Automating the repetitive steps in procurement enables easy management of
purchase approvals and frees up resources for strategic activities.
Standardize contract management: Purchase contracts are legally binding documents that need to
be free from deviations and inconsistencies. Procurement applications standardize and streamline
the contract management process so that deviations are instantly identified, approvals are accelerated,
and alerts and notifications are issued to concerned authorities for timely contract renewals.
Improve vendor management: Qualifying and evaluating a potential supplier’s capacity, compliance
with the company policies, and viability is important to ensure that the vendor provides more strategic
value. Procurement software makes vendor management more effective by automatically verifying
vendor credibility and issuing alerts when contracts are due for renewal.
Top 6 Must-Have Features of a Procurement System
Powerful procurement systems are packed with features that handle all the tasks of the procurement
function. Businesses must choose a procurement solution that is easy to deploy, scalable,
customizable, and value for money. Above all, the procurement solution must fit the unique
requirements of your business. Here are the top 6 must-have features of a procurement system:
1. Integrated and Intuitive Interface: Manual procurement systems are decentralized and
siloed systems that result in standalone records and separate documentation. Automated
procurement solutions provide a unified view of purchasing information. Complete
information of order requests, order status, approvals, supplier information, and invoice
information is presented in a centralized user interface.
2. Vendor relationship management: Vendor relationship management is a core procurement
function that helps identify, onboard, manage, and analyze vendors that span across
multiple departments. Top procurement software allows the procurement team to manage
vendor relationships effectively. The software enables easy and quick onboarding,
qualifying, and managing of vendors. Allowing vendors into the procurement system will
empower them to track the status of orders, delivery schedules, payments received, and
product shortages electronically.
3. Compatibility with other systems: The procurement system should seamlessly integrate
with other business systems. Compatibility with other business systems allows users to
get visibility of finance data, scan data history, deduplicate, and update data across
finance and procurement systems.
4. Flexible, Scalable, and customizable: Procurement software must fit the unique
requirements of your business. Easy customization of the features based on business
requirements enables companies to derive value for their investment. As a business
grows, the procurement solution must be able to scale up to increasing business workload.
And the procurement system must be flexible to adapt to changing market requirements.
5. Robust data analytics and reporting: The management looks to get actionable insights
from the procurement system for budget forecasts and making strategic business
decisions. The relevant data should be presented to the management in an easily
understandable, prescriptive, and predictive manner. The procurement system must be
able to manipulate and display procurement data in easily understandable formats like
charts, pivot tables, etc. for analysis and reporting.
6. Cloud-based access: New-age businesses require omnichannel, on-the-go, always
accessible procurement solutions. Cloud-based procurement solutions provide anytime,
anywhere access to procurement data and on-the-go approvals. The core functionality of
cloud-based procurement solutions includes supplier management, contract management,
sourcing, requisitioning, payment, and purchasing.
Advanced features like integrated payment settlement, inventory management, product search, and
automated expense controls are also provided by procurement systems.
Cflow is a powerful workflow automation platform that can optimize key business functions within
minutes. Customized workflows pertaining to purchase requisitions, quotation management, purchase
orders, and invoice payments can be created and connected seamlessly. Cflow’s document designer
enables automatic generation of purchase orders based on a configurable rules engine. Our no-code
cloud BPM solution helps businesses to skip cumbersome paperwork and streamline the procurement
process.
Conclusion
Procurement management has become more complex and demanding in the last decade. While each
organization has varied sourcing and purchasing requirements, the primary intent of deploying a
procurement solution is to improve the performance of the procurement function. The procurement
management system is all about bringing efficiency and optimization into the procurement function.
Automated process workflows improve productivity significantly and eliminate the chaos from
manual procurement processes. Build the best procure-to-pay systems with Cflow. Explore Cflow
by signing up for the free trial right away.

System Development
Systems development is the procedure of defining, designing, testing, and implementing a new
software application or program. It comprises the in-house development of customized systems, the
establishment of database systems, or the acquisition of third-party developed software. Written
standards and techniques must govern all information systems processing functions. The
management of a company must define and enforce standards and adopt suitable system
development life cycle practices that govern the process of developing, acquiring, implementing, and
maintaining computerized information systems and associated technology.
System Development Management Life-cycle
Management studies maintain that the most effective way to protect information and information
systems is to incorporate security into every step of the system development process, from the
initiation of a project to develop a system through to its disposition. The multi-phase process that begins with
initiation, analysis, design, and implementation, and continues through the maintenance and disposal
of the system, is called the System Development Life Cycle (SDLC).
Phases of System Development
A system development project comprises numerous phases, such as feasibility analysis,
requirements analysis, software design, software coding, testing and debugging, installation and
maintenance.
1. A feasibility study is employed to decide whether a project should proceed. This will
include an initial project plan and budget estimates for future stages of the project. In the
example of the development of a central ordering system, a feasibility study would look
at how a new central ordering system might be received by the various departments and
how costly the new system would be relative to improving each of these individual
systems.
2. Requirements analysis identifies the requirements for the system. This includes a detailed
analysis of the specific problem being addressed or the expectations of a particular
system. In other words, analysis articulates what the system is supposed to do. For
the central ordering system, the analysis would carefully scrutinize existing ordering
systems and how to use the best aspects of those systems, while taking advantage of the
potential benefits of more centralized systems.
3. The design phase consists of determining what programs are required and how they are
going to interact, how each individual program is going to work, what the software
interface is going to look like and what data will be required. System design may use
tools such as flowcharts and pseudo-code to develop the specific logic of the system.
4. In the implementation stage, the design is translated into code. This requires
choosing the most suitable programming language and writing the actual code needed to
make the design work. In this stage, the central ordering system is essentially coded using
a particular programming language. This would also include developing a user interface
that the various departments are able to use efficiently.
5. The testing and debugging stage encompasses testing individual modules of the system as
well as the system as a whole. This includes making sure the system actually does what is
expected and that it runs on intended platforms. Testing during the early stages of a
project may involve using a prototype, which meets some of the very basic requirements
of the system but lacks many of the details.
6. In the installation phase, the system is deployed so that it becomes part of the workflows
of the organization. Some training may be needed to make sure employees are comfortable with
using the system. At this stage, the central ordering system is installed in all departments,
replacing the older system.
7. All systems need some type of maintenance. This may consist of minor updates to the
system or more drastic changes due to unexpected circumstances. As the organization and
its departments evolve, the ordering process may require some modifications. This makes
it possible to get the most out of a new centralized system.
Figure: Phases of the system development life cycle

Whitten and Bentley (1998) recommended following categories of system development project
lifecycle:
1. Planning
2. Analysis
3. Design
4. Implementation
5. Support
There are many different SDLC models and methodologies, such as the Fountain model, the Spiral model, and
rapid prototyping, but each usually consists of a series of defined steps. For any SDLC model that is used,
information security must be integrated into the SDLC to ensure appropriate protection for the
information that the system will transmit, process, and store.
System development life-cycle models (Source: Conrick, 2006)

Fountain model: Recognizes that there is considerable overlap of activities throughout the
development cycle.

Spiral model: Emphasizes the need to go back and reiterate earlier stages, like a series of short
waterfall cycles, each producing an early prototype representing a part of the entire cycle.

Build and fix model: Write some programming code and keep modifying it until the customer is
happy. Without planning, this is very open-ended and risky.

Rapid prototyping model: Emphasis is on creating a prototype that looks and acts like the desired
product in order to test its usefulness. Once the prototype is approved, it is discarded and the real
software is written.

Incremental model: Divides the product into builds, where sections of the project are created and
tested separately.

Synchronize and stabilise model: Combines the advantages of the spiral model with the technology of
overseeing and managing source code. This method allows many teams to work efficiently in
parallel. It was defined by David Yoffie of Harvard University and Michael Cusumano of the
Massachusetts Institute of Technology, who studied how Microsoft Corporation developed Internet
Explorer and how Netscape Communications Corporation developed Communicator, finding common
threads in the ways the two companies worked.

Waterfall Model
The Waterfall Model signifies a traditional type of system development project lifecycle. It builds
upon the basic steps associated with system development project lifecycle and uses a top-down
development cycle in completing the system.
Walsham (1993) outlined the steps in the Waterfall Model which are as under:
1. A preliminary evaluation of the existing system is conducted and deficiencies are then
identified. This can be done by interviewing users of the system and consulting with
support personnel.
2. The new system requirements are defined. In particular, the deficiencies in the existing
system must be addressed with specific proposals for improvement.
3. The proposed system is designed. Plans are developed and delineated concerning the
physical construction, hardware, operating systems, programming, communications, and
security issues.
4. The new system is developed and the new components and programs are obtained and
installed.
5. Users of the system are then trained in its use, and all aspects of performance are tested. If
necessary, adjustments must be made at this stage.
6. The system is put into use. This can be done in various ways. The new system can be
phased in, according to application or location, and the old system is gradually replaced.
In some cases, it may be more cost-effective to shut down the old system and implement
the new system all at once.
7. Once the new system is up and running for a while, it should be exhaustively evaluated.
Maintenance must be kept up rigorously at all times.
8. Users of the system should be kept up-to-date concerning the latest modifications and
procedures.
On the basis of the Waterfall Model, if system developers find problems associated with a
step, an effort is made to go back to the previous step or the specific step in which the
problem occurred, and fix the problem by completing the step once more.
Figure: The Waterfall Model's development schedule

Fountain model: The Fountain model is a logical enhancement to the Waterfall model. This model
allows advancement to later stages of software development regardless of whether or not
enough tasks have been completed in the preceding stage, recognizing that stages overlap.
Prototyping Model: The prototyping paradigm starts with collecting the requirements. Developer
and customer meet and define the overall objectives for the software, identify whatever requirements
are known, and outline areas where further definition is mandatory. The prototype is appraised by the
customer/user and used to improve requirements for the software to be developed.
Major Advantages of this Model include
1. When the prototype is presented to the user, he gets a clear sense of the look and functionality of
the software and can suggest changes and modifications.
2. It demonstrates the concept to prospective investors to get funding for the project and thus gives a
clear view of how the software will respond.
3. It decreases the risk of failure, as potential risks can be recognized early and mitigation steps
can be taken, making effective elimination of the potential causes possible.
4. Iteration between the development team and the client provides a very good and conducive
environment during the project. Both the developer side and the customer side are coordinated.
5. The time required to complete the project after the SRS is finalized is reduced, since the
developer has a better idea of how to approach the project.
The main drawbacks of this model are that prototyping is typically done at the developer's expense, so it
should be done using minimal resources; it can be done using Rapid Application Development tools.
Sometimes the start-up cost of building a development team focused on making the prototype is
high. It can be a slow process, and too much involvement of the client is not always favoured by the developer.
Figure: different phases of Prototyping model
Uses of prototyping:
1. Verifying user needs
2. Verifying that design = specifications
3. Selecting the “best” design
4. Developing a conceptual understanding of novel situations
5. Testing a design under varying environments
6. Demonstrating a new product to upper management
7. Implementing a new system in the user environment quickly
Rapid Application Development
This model is based on prototyping and iterative development with no detailed planning involved. The
process of writing the software itself involves the planning required for developing the product. Rapid
Application development focuses on gathering customer requirements through workshops or focus
groups, early testing of the prototypes by the customer using iterative concept, reuse of the existing
prototypes (components), continuous integration and rapid delivery. There are three main phases to
Rapid Application Development:
1. Requirements planning
2. RAD design workshop
3. Implementation
RAD Model

RAD is used when the team includes programmers and analysts who are experienced with it; there are
pressing reasons for speeding up application development; the project involves a novel e-commerce
application and needs quick results; and users are sophisticated and highly engaged with the goals of
the company.
Spiral Model: The spiral model was developed by Barry Boehm in the late 1980s (Boehm, 1986). This model
was developed to address the inadequacies of the Waterfall Model. Boehm stated that
“the major distinguishing feature of the Spiral Model is that it creates a risk-driven approach to the
software process rather than a primarily document-driven or code-driven process.” The Spiral Model was the
first model to explain why iteration matters. The spiral model consists of four phases:
1. Planning
2. Risk Analysis
3. Engineering
4. Evaluation
Major benefits of this model include:
1. Changing requirements can be accommodated.
2. Allows for extensive use of prototypes.
3. Requirements can be captured more accurately.
4. Users see the system early.
5. Development can be divided in to smaller parts and more risky parts can be developed
earlier which helps better risk management.
The main drawbacks of this model are as follows:
1. Management is more complex.
2. The end of the project may not be known early.
3. Not suitable for small or low-risk projects (expensive for small projects).
4. The process is complex.
5. The spiral may continue indefinitely.
6. A large number of intermediate stages requires excessive documentation.
The spiral model is normally used for large projects. For example, the military adopted the spiral
model for its Future Combat Systems program. The spiral model is less suited to small software applications.
Incremental model: The incremental model is a technique of software development in which the system is
analysed, designed, tested, and implemented incrementally. Some benefits of this model are that it
handles large projects and combines the functionality of the waterfall and prototyping models.
A disadvantage of this model is that when remedying a problem in one functional unit, all the
functional units may have to be corrected, which takes a lot of time. It also needs good planning and
design.
Figure: Incremental model of the SDLC

There are numerous benefits of integrating security into the system development life cycle that are as
under:
1. Early identification and mitigation of security vulnerabilities and problems with the
configuration of systems, resulting in lower costs to implement security controls and
mitigate vulnerabilities;
2. Awareness of potential engineering challenges caused by mandatory security controls.
3. Identification of shared security services and reuse of security strategies and tools that
will reduce development costs and improve the system’s security posture through the
application of proven methods and techniques.
4. Facilitation of informed executive decision making through the timely application of a
comprehensive risk management process.
5. Documentation of important security decisions made during the development process to
inform management about security considerations during all phases of development.
6. Enhanced organization and customer confidence to facilitate adoption and use of systems,
and improved confidence in the continued investment in government systems.
7. Improved systems interoperability and integration that would be difficult to achieve if
security is considered separately at various system levels.
Strengths of System Development Life Cycle
1. Methodologies incorporating this approach have been well tried and tested.
2. This cycle divides development into distinct phases.
3. Makes tasks more manageable.
4. It offers the opportunity for more control over the development process.
5. It provides standards for documentation.
6. It is better than trial and error.
Weaknesses of System Development Life Cycle
1. It fails to realise the “big picture” of strategic management.
2. It is too inflexible to cope with changing requirements.
3. It stresses “hard” thinking (which is often reflected in documentation that is too
technical).
4. It is unable to capture the true needs of users.

System Operation and Evolution

An operating system (OS) is a software program that serves as a conduit between computer
hardware and the user. It is a piece of software that coordinates the execution of application
programs, software resources, and computer hardware. It also aids in the control of software
and hardware resources such as file management, memory management, input/output, and a
variety of peripheral devices such as a disc drive, printers, and so on. To run other
applications, every computer system must have at least one operating system. Browsers, MS
Office, Notepad, games, and other applications require an environment in which to execute and fulfil
their functions. This section explains the evolution of operating systems over the years.
Evolution of Operating Systems
Operating systems have progressed from slow and expensive systems to today's technology,
which has exponentially increased computing power at comparatively modest costs. So let's have a
detailed look at the evolution of operating systems.

The operating system can be classified into four generations, as follows:


First-generation (Serial Processing)
The evolution of operating systems began with serial processing. It marks the start of the
development of electronic computing systems as alternatives to mechanical computers, which were
limited by human calculation speed and prone to mistakes. Because there is no operating system in this
generation, instructions are given to the computer system and carried out one at a time.
In the 1940s and 1950s, programmers interacted directly with the hardware components, without an
operating system. The challenges here are scheduling and setup time. Users sign up for blocks of
machine time, and much computational time is wasted. Setup time is required for loading the
compiler and the source program, saving the compiled program, linking, and buffering. The whole
process is restarted if an intermediate error occurs.
Second generation (Batch System)
The batched systems marked the second generation in the evolution of operating systems. In
the second generation, the batch processing system was implemented, which allows a job or
task to be done in a series and then completed sequentially. The computer system in this
generation does not have an operating system, although there are various operating system
functionalities available, such as FMS and IBSYS. It is used to improve computer utilization
and application. On cards and tapes, jobs were scheduled and submitted. Then, using Job
Control Language, they were executed successively under the control of a monitor program. The first computers
employed in the batch operating method processed a batch of jobs without pausing or
stopping. The programs were written on punch cards and then transferred to tape for processing.
When the computer finishes one job, it immediately moves on to the next item on the
tape.
Third Generation (Multi Programmed Batched System)
The third generation in the evolution of operating systems is marked by multi-programmed
batch systems. In the third generation, the operating system was designed to serve
numerous users at the same time. Interactive users can communicate with a computer via an
online terminal, making the operating system multi-user and multiprogrammed. Several jobs are
kept in main memory at the same time, and the processor determines which
program to run using job scheduling algorithms.
Fourth generation
The operating system is employed in this age for computer networks where users are aware of
the existence of computers connected to one another.
The era of networked computing has already begun, and users are comforted by a Graphical
User Interface (GUI), which is an incredibly comfortable graphical computer interface. In
the fourth generation, the time-sharing operating system and the Macintosh operating system
came into existence.
Time-Sharing Operating System
Time-sharing operating systems had a great impact on the evolution of operating
systems. Multiple users can access the system via terminals at the same time, and the
processor's time is divided among them. Printing terminals were required for programs with a
command-line user interface, which required written responses to prompts or written
commands; the interaction scrolled down like a roll of paper. Time-sharing was initially used to
replace batch systems. The user interacts directly with the computer via a
printing terminal, much like an electric teletype. A few users share the computer at the same time,
and each activity is given a fraction of a second of processor time before moving on to the next. By
cycling rapidly so that each user appears to receive the machine's full attention, a fast server can act on a large
number of users' processes at once.
Macintosh Operating System
It was based on decades of research into graphical operating systems and applications for
personal computers. An early example is Ivan Sutherland's pioneering Sketchpad program,
developed in the early 1960s, which employed many of the characteristics of today's graphical user interface,
although the hardware cost millions of dollars and took up a room. This early work on
large computers, together with later hardware improvements, eventually made the Macintosh commercially and
economically viable. Many research laboratories continued working on
research prototypes like Sketchpad, which served as the foundation for anticipated products.
Real-time Software Engineering
Computers are used to control a wide range of systems from simple domestic machines, through
games controllers, to entire manufacturing plants. Their software must react to events generated by
the hardware and, often, issue control signals in response to these events.
Responsiveness in real-time is the critical difference between embedded systems and other software
systems, such as information systems, web-based systems or personal software systems. For non-real-
time systems, correctness can be defined by specifying how system inputs map to corresponding
outputs that should be produced by the system. In a real-time system, the correctness depends both on
the response to an input and the time taken to generate that response.
A real-time system is a software system where the correct functioning of the system depends on the
results produced by the system and the time at which these results are produced. A soft real-time
system is a system whose operation is degraded if results are not produced according to the specified
timing requirements. A hard real-time system is a system whose operation is incorrect if results are
not produced according to the timing specification.
Characteristics of embedded systems:
 Embedded systems generally run continuously and do not terminate.
 Interactions with the system's environment are unpredictable.
 There may be physical limitations that affect the design of a system.
 Direct hardware interaction may be necessary.
 Issues of safety and reliability may dominate the system design.
Embedded system design
The design process for embedded systems is a systems engineering process that has to
consider, in detail, the design and performance of the system hardware. Part of the design
process may involve deciding which system capabilities are to be implemented in software
and which in hardware. Low-level decisions on hardware, support software and system timing
must be considered early in the process.
 Periodic stimuli occur at predictable time intervals. For example, the system may examine a
sensor every 50 milliseconds and take action (respond) depending on that sensor value (the
stimulus).
 Aperiodic stimuli occur irregularly and unpredictably and may be signalled using the
computer's interrupt mechanism. An example of such a stimulus would be an interrupt
indicating that an I/O transfer was complete and that data was available in a buffer.

Because of the need to respond to timing demands made by different stimuli/responses, the system
architecture must allow for fast switching between stimulus handlers. Timing demands of different
stimuli are different so a simple sequential loop is not usually adequate. Real-time systems are
therefore usually designed as cooperating processes with a real-time executive controlling these
processes.
 Sensor control processes collect information from sensors. May buffer information collected
in response to a sensor stimulus.
 Data processor carries out processing of collected information and computes the system
response.
 Actuator control processes generate control signals for the actuators.
Processes in a real-time system have to be coordinated and share information.
Process coordination mechanisms ensure mutual exclusion to shared resources. When one process is
modifying a shared resource, other processes should not be able to change that resource. When
designing the information exchange between processes, you have to take into account the fact that
these processes may be running at different speeds.
Producer processes collect data and add it to the buffer. Consumer processes take data from the buffer
and process it, making the buffer elements available again. Producer and consumer processes must be mutually excluded from
accessing the same element.
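A minimal Python sketch of this producer-consumer arrangement is shown below; the buffer size, the random "sensor readings" and the print-based consumer are hypothetical stand-ins, and a Condition object provides the mutual exclusion described above.

import threading, collections, random, time

BUFFER_SIZE = 8
buffer = collections.deque()            # shared buffer between the two processes
guard = threading.Condition()           # guarantees mutual exclusion on the buffer

def producer():
    for _ in range(20):
        reading = random.random()       # stand-in for collecting a sensor value
        with guard:                     # only one process may touch the buffer at a time
            while len(buffer) >= BUFFER_SIZE:
                guard.wait()            # buffer full: wait for the consumer
            buffer.append(reading)
            guard.notify_all()
        time.sleep(0.01)                # producer and consumer may run at different speeds

def consumer():
    for _ in range(20):
        with guard:
            while not buffer:
                guard.wait()            # buffer empty: wait for the producer
            value = buffer.popleft()
            guard.notify_all()
        print("processed", round(value, 3))   # stand-in for computing the response

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start(); p.join(); c.join()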
The effect of a stimulus in a real-time system may trigger a transition from one state to another. State
models are therefore often used to describe embedded real-time systems. UML state diagrams may be
used to show the states and state transitions in a real-time system.
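As a small illustration of such a state model, the transitions triggered by stimuli can be encoded as a lookup table; the states and stimuli below are hypothetical, not taken from the text.

# Hypothetical state model: each (state, stimulus) pair maps to a new state.
TRANSITIONS = {
    ("idle",    "start_button"): "running",
    ("running", "fault_signal"): "alarm",
    ("running", "stop_button"):  "idle",
    ("alarm",   "reset_button"): "idle",
}

def next_state(state, stimulus):
    # Return the new state, or stay in the current state if the
    # stimulus is not relevant in this state.
    return TRANSITIONS.get((state, stimulus), state)

state = "idle"
for stimulus in ["start_button", "fault_signal", "reset_button"]:
    state = next_state(state, stimulus)
    print(stimulus, "->", state)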
Architectural patterns for real-time software
Characteristic system architectures for embedded systems:
 Observe and React pattern is used when a set of sensors are routinely monitored and
displayed.
 Environmental Control pattern is used when a system includes sensors, which provide
information about the environment and actuators that can change the environment.
 Process Pipeline pattern is used when data has to be transformed from one representation to
another before it can be processed.
Observe and React pattern description
The input values of a set of sensors of the same types are collected and analyzed. These values are
displayed in some way. If the sensor values indicate that some exceptional condition has arisen, then
actions are initiated to draw the operator's attention to that value and, in certain cases, to take actions
in response to the exceptional value.
 Stimuli: Values from sensors attached to the system.
 Responses: Outputs to display, alarm triggers, signals to reacting systems.
 Processes: Observer, Analysis, Display, Alarm, Reactor.
 Used in: Monitoring systems, alarm systems.
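A minimal Python sketch of the Observe and React processes described above follows; the temperature-like sensor values, the threshold and the alarm action are assumptions made for illustration, not part of the pattern definition.

# Observer collects sensor values, Analysis checks them, Display and Alarm react.
SENSORS = {"sensor_1": 21.5, "sensor_2": 48.9, "sensor_3": 19.8}   # hypothetical readings
ALARM_THRESHOLD = 45.0

def observe():
    return dict(SENSORS)                       # Observer process: read all sensors

def analyse(values):
    # Analysis process: pick out values indicating an exceptional condition.
    return {name: v for name, v in values.items() if v > ALARM_THRESHOLD}

def display(values):
    for name, v in values.items():
        print(f"{name}: {v:.1f}")              # Display process

def alarm(exceptional):
    for name, v in exceptional.items():
        print(f"ALARM: {name} reads {v:.1f}")  # Alarm/Reactor process

values = observe()
display(values)
alarm(analyse(values))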
Environmental Control pattern description
The system analyzes information from a set of sensors that collect data from the system's
environment. Further information may also be collected on the state of the actuators that are
connected to the system. Based on the data from the sensors and actuators, control signals are sent to
the actuators that then cause changes to the system's environment.
 Stimuli: Values from sensors attached to the system and the state of the system actuators.
 Responses: Control signals to actuators, display information.
 Processes: Monitor, Control, Display, Actuator Driver, Actuator monitor.
 Used in: Control systems.

Process Pipeline pattern description


A pipeline of processes is set up with data moving in sequence from one end of the pipeline to
another. The processes are often linked by synchronized buffers to allow the producer and consumer
processes to run at different speeds.
 Stimuli: Input values from the environment or some other process
 Responses: Output values to the environment or a shared buffer
 Processes: Producer, Buffer, Consumer
 Used in: Data acquisition systems, multimedia systems
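The sketch below illustrates the Process Pipeline idea in Python, assuming a hypothetical two-stage acquisition pipeline; bounded queues play the role of the synchronized buffers that let producer and consumer stages run at different speeds.

import threading, queue

# Hypothetical pipeline: Producer -> buffer -> Transform -> buffer -> Consumer.
raw = queue.Queue(maxsize=4)        # synchronized buffers between the stages
scaled = queue.Queue(maxsize=4)
SENTINEL = None                     # marks the end of the data stream

def producer():
    for sample in [10, 20, 30, 40]:        # stand-in for acquired data
        raw.put(sample)
    raw.put(SENTINEL)

def transform():
    while True:
        sample = raw.get()
        if sample is SENTINEL:
            scaled.put(SENTINEL)
            break
        scaled.put(sample * 0.1)           # hypothetical unit conversion

def consumer():
    while True:
        value = scaled.get()
        if value is SENTINEL:
            break
        print("output:", value)

threads = [threading.Thread(target=f) for f in (producer, transform, consumer)]
for t in threads: t.start()
for t in threads: t.join()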

Timing analysis
The correctness of a real-time system depends not just on the correctness of its outputs but also on the
time at which these outputs were produced. In a timing analysis, you calculate how often each process
in the system must be executed to ensure that all inputs are processed and all system responses
produced in a timely way. The results of the timing analysis are used to decide how frequently each
process should execute and how these processes should be scheduled by the real-time operating
system.
Factors in timing analysis:
 Deadlines: the times by which stimuli must be processed and some response produced by the
system.
 Frequency: the number of times per second that a process must execute so that you are
confident that it can always meet its deadlines.
 Execution time: the time required to process a stimulus and produce a response.
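One common way to combine these factors, assuming independent periodic tasks, is the rate monotonic utilization test (the Liu-Layland bound); the sketch below uses a hypothetical task set, with execution time C and period T standing in for the frequency and execution-time figures above.

# Hypothetical periodic task set: (execution time C, period T) in milliseconds.
tasks = [(10, 50), (15, 100), (20, 200)]

# Processor utilization: U = sum(C_i / T_i).
U = sum(c / t for c, t in tasks)

# Liu-Layland bound for rate monotonic scheduling: n * (2**(1/n) - 1).
n = len(tasks)
bound = n * (2 ** (1 / n) - 1)

print(f"utilization = {U:.3f}, RM bound = {bound:.3f}")
if U <= bound:
    print("All deadlines are guaranteed under rate monotonic scheduling.")
else:
    print("Not guaranteed; a more detailed response-time analysis is needed.")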
Real-time operating systems
Real-time operating systems are specialized operating systems which manage the processes in the
RTS. Responsible for process management and resource (processor and memory) allocation. May be
based on a standard kernel which is used unchanged or modified for a particular application. Do not
normally include facilities such as file management.
Real-time operating system components:
 Real-time clock provides information for process scheduling.
 Interrupt handler manages aperiodic requests for service.
 Scheduler chooses the next process to be run.
 Resource manager allocates memory and processor resources.
 Dispatcher starts process execution.

The scheduler chooses the next process to be executed by the processor. This depends on a scheduling
strategy which may take the process priority into account. The resource manager allocates memory
and a processor for the process to be executed.
Scheduling strategies:
 Non pre-emptive scheduling: once a process has been scheduled for execution, it runs to
completion or until it is blocked for some reason (e.g. waiting for I/O).
 Pre-emptive scheduling: the execution of a running process may be stopped if a higher
priority process requires service.
 Scheduling algorithms include round-robin, rate monotonic, and shortest deadline first.
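As a rough sketch of how one of the listed algorithms works, rate monotonic scheduling assigns higher priority to tasks with shorter periods, and a pre-emptive scheduler always runs the highest-priority ready task; the task names and periods here are hypothetical.

# Rate monotonic priority assignment: the shorter the period, the higher the priority.
tasks = {"display_refresh": 100, "sensor_poll": 20, "logging": 500}   # period in ms

rm_order = sorted(tasks, key=tasks.get)        # highest priority first
print("priority order:", rm_order)

def schedule(ready):
    # Pre-emptive selection: always pick the highest-priority ready task;
    # a newly ready higher-priority task would pre-empt the running one.
    return min(ready, key=lambda t: rm_order.index(t))

print("run next:", schedule({"logging", "sensor_poll"}))   # -> sensor_poll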
Embedded System Design

An embedded system is a controller that controls many other electronic devices. It is a
combination of embedded hardware and software. Embedded systems are built around two types of
processors: microprocessors and microcontrollers. A microprocessor is based on the von Neumann
model/architecture (where program and data reside in the same memory), and it is an important
part of the computer system to which external processors and peripherals are interfaced.
What is an Embedded System Design?
Definition: Embedded system design is the design of a system in which hardware and software are
embedded together to perform a specific function within a larger system. In embedded system design, a microcontroller
plays a vital role. A microcontroller is based on the Harvard architecture and is an important component of
an embedded system.

Types of Embedded Systems


 Stand-Alone Embedded System
 Real-Time Embedded System
 Networked Appliances
 Mobile devices
Elements of Embedded Systems
 Processor
 Microprocessor
 Microcontroller
 Digital signal processor.
Steps in the Embedded System Design Process
The different steps in the embedded system design flow/flow diagram include the following.
Abstraction
In this stage the problem related to the system is abstracted.
Hardware – Software Architecture: Proper knowledge of hardware and software to be known
before starting any design process.
Extra Functional Properties: Extra functions to be implemented are to be understood completely
from the main design.
System Related Family of Design: When designing a system, one should refer to a previous system-
related family of design.
Modular Design: Separate module designs must be made so that they can be used later on when
required.
Mapping: Mapping is done based on the software. For example, data flow and program flow are mapped
into one.
User Interface Design: In user interface design it depends on user requirements, environment
analysis and function of the system. For example, on a mobile phone if we want to reduce the power
consumption of mobile phones we take care of other parameters, so that power consumption can be
reduced.
Refinement: Every component and module must be refined appropriately so that the software team
can understand it.
Architectural description language is used to describe the software design.
 Control Hierarchy
 Partition of structure
 Data structure and hierarchy
 Software Procedure.
Embedded System Design Software Development Process Activities
Various design metrics are required to design any system so that it functions properly.
Embedded Software Development Process Activities
Embedded software development process activities mainly include the following.
Specifications: Proper specifications are to be made so that the customer who uses the product can
go through the specification of the product and use it without any confusion. Designers mainly focus
on specifications like hardware, design constraints, life cycle period, resultant system behavior.
Architecture: Hardware and Software architecture layers are specified.
Components: In this layer, component design is done, covering components such as a single-purpose processor,
memories (RAM/ROM), peripheral devices, buses, etc.
System Integration: In this layer, all the components are integrated into the system and tested
to check whether it meets the designer's expectations.
Challenges in Embedded System Design
While designing any embedded system, designers face lots of challenges like as follows,
 Environment adaptability
 Power consumption
 Area occupied
 Packaging and integration
 Updating in hardware and software
 Security
 There are also various challenges designers face while testing the design, such as embedded
hardware testing, verification, validation, and maintainability.
Embedded System Design Examples
 Automatic chocolate vending machine (ACVM)
 Digital camera
 Smart card
 Mobile phone
 Mobile computer..etc.
Automatic Chocolate Vending Machine (ACVM)
The design function of ACVM is to provide chocolate to the child whenever the child inserts a coin
into ACVM.
Design Steps
The design steps mainly include the following.
1. Requirements
2. Specifications
3. Hardware and software functioning.
Requirements
A child inserts a coin into the machine and selects the particular chocolate that he wants to
purchase.
Inputs
 Coins, user selection.
 An interrupt is generated at each port whenever a coin is inserted.
 A separate notification is sent to each port.
Outputs
 Chocolate
 Refund
 A message is displayed on LCD like date, time, welcome message.
System Function
 Using a graphical user interface, the child commands to the system which chocolate the child
wants to purchase.
 Where the graphical user interface has an LCD, keypad, touch screen.
 The machine delivers the chocolate when the child inserts the coins, and returns change if the amount
inserted exceeds the actual cost of the selected chocolate.
 Using a Universal Serial Bus (USB) wireless modem, the owner of the ACVM can keep track of the machine from a client location.
Design Metrics
Power Dissipation
The design should be made as per display size and mechanical components.
Process Deadline
A timer must be set so that whenever the child inserts a coin, the ACVM responds within a few
seconds, delivering the chocolate and refunding any excess.
For example, if the response time is 10 seconds, the ACVM should deliver the chocolate and refund
any excess money within 10 seconds of the child inserting the coin and placing a request for
chocolate.
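A rough sketch of this deadline check is shown below; the 10-second figure is taken from the example above, while the function names and the simulated dispensing callback are hypothetical.

import time

RESPONSE_DEADLINE = 10.0    # seconds, per the example above

def handle_coin_insert(dispense):
    # 'dispense' is a hypothetical callback that delivers the chocolate
    # and issues any refund.
    start = time.monotonic()
    dispense()
    elapsed = time.monotonic() - start
    if elapsed > RESPONSE_DEADLINE:
        print(f"Deadline missed: took {elapsed:.2f} s")       # a hard failure for the ACVM
    else:
        print(f"Responded in {elapsed:.2f} s, within the deadline")

handle_coin_insert(lambda: time.sleep(0.5))   # simulated dispensing action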
Specifications
In the ACVM system below, when the child inserts a coin, the coins are segregated according to
the ports presented (Port1, Port2, Port5). On receiving a coin, the port generates an interrupt, and this
interrupt triggers reading and incrementing the amount value.

An LCD present here displays the messages like cost, time, welcome..etc. A port delivery exists
where the chocolates are collected.
Hardware
ACVM hardware architecture has the following hardware specifications
 Microcontroller 8051
 64 KB RAM and 8MB ROM
 64 KB Flash memory
 Keypad
 Mechanical coin sorter
 Chocolate channel
 Coin channel
 USB wireless modem
 Power supply
Software of ACVM
Many programs have to be written so that they can be reprogrammed in RAM/ROM when required, for
example to handle:
 Increase in chocolate price
 Updating messages to be displayed in LCD
 Change in features of the machine.
An embedded system is a combination of hardware and software that performs a particular function.
There are two types of processors used: microprocessors and microcontrollers. While designing an embedded
system, certain design constraints and specifications are to be considered, so that the developer can
meet the customer's expectations and deliver on time. An application of embedded
system design, the ACVM, was explained above. A question to consider: what causes
environmental constraints when designing an embedded system?

Architectural Patterns for Real-time Software


1. Layered pattern
2. Client-server pattern
3. Master-slave pattern
4. Pipe-filter pattern
5. Broker pattern
6. Peer-to-peer pattern
7. Event-bus pattern
8. Model-view-controller pattern
9. Blackboard pattern
10. Interpreter pattern
1. Layered pattern
This pattern can be used to structure programs that can be decomposed into groups of subtasks, each
of which is at a particular level of abstraction. Each layer provides services to the next higher layer.
The most commonly found 4 layers of a general information system are as follows.

 Presentation layer (also known as UI layer)


 Application layer (also known as service layer)
 Business logic layer (also known as domain layer)
 Data access layer (also known as persistence layer)
Usage

 General desktop applications.


 E commerce web applications.

2. Client-server pattern
This pattern consists of two parties; a server and multiple clients. The server component will provide
services to multiple client components. Clients request services from the server and the server
provides relevant services to those clients. Furthermore, the server continues to listen to client
requests.
Usage

 Online applications such as email, document sharing and banking.

3. Master-slave pattern
This pattern consists of two parties; master and slaves. The master component distributes the work
among identical slave components, and computes a final result from the results which the slaves
return.
Usage

 In database replication, the master database is regarded as the authoritative source, and
the slave databases are synchronized to it.
 Peripherals connected to a bus in a computer system (master and slave drives).

4. Pipe-filter pattern
This pattern can be used to structure systems which produce and process a stream of data. Each
processing step is enclosed within a filter component. Data to be processed is passed through pipes.
These pipes can be used for buffering or for synchronization purposes.
Usage

 Compilers. The consecutive filters perform lexical analysis, parsing, semantic analysis,
and code generation.
 Workflows in bioinformatics.

5. Broker pattern
This pattern is used to structure distributed systems with decoupled components. These components
can interact with each other by remote service invocations. A broker component is responsible for the
coordination of communication among components.
Usage
 Message broker software such as Apache ActiveMQ, Apache Kafka, RabbitMQ and JBoss
Messaging.

6. Peer-to-peer pattern
In this pattern, individual components are known as peers. Peers may function both as a client,
requesting services from other peers, and as a server, providing services to other peers. A peer may
act as a client or as a server or as both, and it can change its role dynamically with time.
Usage

 File-sharing networks such as Gnutella and G2


 Multimedia protocols such as P2PTV and PDTP.
 Cryptocurrency-based products such as Bitcoin and Blockchain

7. Event-bus pattern
This pattern primarily deals with events and has 4 major components; event source, event
listener, channel and event bus. Sources publish messages to particular channels on an event bus.
Listeners subscribe to particular channels. Listeners are notified of messages that are published to a
channel to which they have subscribed before.
Usage

 Android development
 Notification services
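A minimal event-bus sketch in Python follows; the EventBus class, channel name and order message are hypothetical, illustrating how sources publish to channels and subscribed listeners are notified.

from collections import defaultdict

class EventBus:
    # Channels map to lists of listener callbacks.
    def __init__(self):
        self.channels = defaultdict(list)

    def subscribe(self, channel, listener):
        self.channels[channel].append(listener)

    def publish(self, channel, message):
        for listener in self.channels[channel]:
            listener(message)             # notify every subscriber of this channel

bus = EventBus()
bus.subscribe("orders", lambda msg: print("billing saw:", msg))
bus.subscribe("orders", lambda msg: print("shipping saw:", msg))
bus.publish("orders", {"id": 42, "item": "chocolate"})   # hypothetical event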
8. Model-view-controller pattern
This pattern, also known as MVC pattern, divides an interactive application in to 3 parts as,
1. model — contains the core functionality and data
2. view — displays the information to the user (more than one view may be defined)
3. controller — handles the input from the user
Usage

 Architecture for World Wide Web applications in major programming languages.


 Web frameworks such as Django and Rails.

9. Blackboard pattern
This pattern is useful for problems for which no deterministic solution strategies are known. The
blackboard pattern consists of 3 main components.

 blackboard — a structured global memory containing objects from the solution space
 knowledge source — specialized modules with their own representation
 control component — selects, configures and executes modules.
Usage

 Speech recognition
 Vehicle identification and tracking
 Protein structure identification
 Sonar signals interpretation.

10. Interpreter pattern


This pattern is used for designing a component that interprets programs written in a dedicated
language. It mainly specifies how to evaluate lines of programs, known as sentences or expressions
written in a particular language. The basic idea is to have a class for each symbol of the language.
Usage

 Database query languages such as SQL.


 Languages used to describe communication protocols.
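A small sketch of the "class per symbol" idea follows, for a hypothetical toy expression language (not a real query language); each class knows how to interpret itself.

# One class per symbol of a tiny expression language, as the pattern suggests.
class Number:
    def __init__(self, value): self.value = value
    def interpret(self): return self.value

class Add:
    def __init__(self, left, right): self.left, self.right = left, right
    def interpret(self): return self.left.interpret() + self.right.interpret()

class Multiply:
    def __init__(self, left, right): self.left, self.right = left, right
    def interpret(self): return self.left.interpret() * self.right.interpret()

# Sentence: (2 + 3) * 4
expression = Multiply(Add(Number(2), Number(3)), Number(4))
print(expression.interpret())   # -> 20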
Timing Analysis
Timing analysis is a vital activity in real-time systems. Timing constraints decide the total
correctness of the result in real-time systems. The correctness of results in a real-time system does not
depend only on logical correctness; the result must also be obtained within the time constraint.
There may be several events happening in a real-time system, and these events are scheduled by
schedulers using timing constraints.

Classification of Timing Analysis:


Timing constraints associated with a real-time system are classified to identify the different types of
timing constraints in the system. Timing constraints are broadly classified into two categories:

1. Performance Analysis:
The constraints enforced on the response of the system are known as performance constraints. These
basically describe the overall performance of the system and show how quickly and accurately the
system is responding. They ensure that the real-time system performs satisfactorily.

2. Behavioral Analysis:
The constraints enforced on the stimuli generated by the environment are known as behavioral
constraints. These basically describe the behavior of the environment and ensure that the environment
of a system is well behaved.

Further, both performance and behavioral constraints are classified into three categories: delay
constraints, deadline constraints, and duration constraints. These are explained below.

1. Delay Analysis
A delay constraint describes the minimum time interval between occurrence of two
consecutive events in the real-time system. If an event occurs before the delay constraint, then
it is called a delay violation. The time interval between occurrence of two events should be
greater than or equal to delay constraint.

If D is the actual time interval between occurrence of two events and d is the delay constraint,
then

D >= d
2. Deadline Analysis
A deadline constraint describes the maximum time interval between occurrence of two
consecutive events in the real-time system. If an event occurs after the deadline constraint,
then the result of event is considered incorrect. The time interval between occurrence of two
events should be less than or equal to deadline constraint.

If D is the actual time interval between occurrence of two events and d is the deadline
constraint, then

D <= d

3. Duration Constraint
A duration constraint describes the duration of an event in a real-time system. It describes the
minimum and maximum time period of an event. On this basis it is further classified into two
types:
 Minimum Duration Constraint: It states that after the initiation of an event, the event
cannot stop before a certain minimum duration has passed.
 Maximum Duration Constraint: It states that after the start of an event, the event
must end before a certain maximum duration elapses.
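The three checks above can be summarised as simple comparisons; the sketch below restates them in Python with hypothetical time values, using D for the actual interval and d for the constraint as in the formulas above.

def meets_delay(D, d):
    # Delay constraint: the interval between two events must be at least d.
    return D >= d

def meets_deadline(D, d):
    # Deadline constraint: the interval between two events must be at most d.
    return D <= d

def meets_duration(duration, minimum, maximum):
    # Duration constraint: an event must last between the minimum and maximum time.
    return minimum <= duration <= maximum

print(meets_delay(D=120, d=100))                        # True: event did not occur too early
print(meets_deadline(D=120, d=100))                     # False: response arrived too late
print(meets_duration(5.0, minimum=2.0, maximum=8.0))    # True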

Real-time Operating Systems


A real-time operating system (RTOS) is a special-purpose operating system used in computers that
has strict time constraints for any job to be performed. It is employed mostly in those systems in
which the results of the computations are used to influence a process while it is executing. Whenever
an event external to the computer occurs, it is communicated to the computer with the help of some
sensor used to monitor the event.

This process is completely uninterrupted unless a higher priority interrupt occurs during its execution.
Therefore, there must be a strict hierarchy of priority among the interrupts. The interrupt with the
highest priority must be allowed to initiate the process.
Real-time applications employ special-purpose operating systems because conventional
operating systems do not provide such performance.
The various examples of Real-time operating systems are:
o MTS
o Lynx
o QNX
o VxWorks etc.
Applications of Real-time operating system (RTOS):
RTOS is used in real-time applications that must work within specific deadlines. The common areas of
application of real-time operating systems are given below.
o Real-time operating systems are used in radar systems.
o Real-time operating systems are used in missile guidance.
o Real-time operating systems are used in online stock trading.
o Real-time operating systems are used in mobile phone switching systems.
o Real-time operating systems are used in air traffic control systems.
o Real-time operating systems are used in medical imaging systems.
o Real-time operating systems are used in fuel injection systems.
o Real-time operating systems are used in traffic control systems.
o Real-time operating systems are used in autopilot flight simulators.
Types of Real-time operating system

Hard Real-Time operating system:


In Hard RTOS, all critical tasks must be completed within the specified time duration, i.e., within the
given deadline. Not meeting the deadline would result in critical failures such as damage to equipment
or even loss of human life.
For Example,
Take the example of the airbags provided by carmakers along with the steering wheel in front of the driver's seat. When
the driver brakes hard at a particular instant, the airbags inflate and prevent the driver's head from
hitting the steering wheel. Had there been a delay of even milliseconds, it would have resulted in an
accident.
Similarly, consider stock trading software. If someone wants to sell a particular share, the
system must ensure that the command is performed within a given critical time. Otherwise, if the market
falls abruptly, it may cause a huge loss to the trader.
Soft Real-Time operating system:
A soft RTOS tolerates some delays by the operating system. In this kind of RTOS, there
is a deadline assigned for a particular job, but a delay for a small amount of time is
acceptable. So, deadlines are handled softly by this kind of RTOS.
For Example,
This type of system is used in online transaction systems and live stock price quotation systems.
Firm Real-Time operating system:
A firm RTOS also needs to observe deadlines. However, missing a deadline
may not have a major impact, but it could cause undesired effects, such as a major
reduction in the quality of a product.
For Example, this type of system is used in various forms of multimedia applications.
Real-Time System Characteristics

 Determinism: Repeating an input will result in the same output.


 High performance: RTOS systems are fast and responsive, often executing actions within a
small fraction of the time needed by a general OS.
 Safety and security: RTOSes are frequently used in critical systems when failures can have
catastrophic consequences, such as robotics or flight controllers. To protect those around
them, they must have higher security standards and more reliable safety features.
 Priority-based scheduling: Priority scheduling means that actions assigned a high priority
are executed first, and those with lower priority come after. This means that an RTOS will
always execute the most important task.
 Small footprint: Versus their hefty general OS counterparts, RTOSes weigh in at just a
fraction of the size.
Advantages of Real-time operating system:
The benefits of real-time operating system are as follows-:
o It is easy to design, develop and execute real-time applications under a real-time operating
system.
o Real-time operating systems are more compact, so these systems require much less
memory space.
o A real-time operating system makes maximum utilization of devices and systems.
o It focuses on running applications and gives less importance to applications that are waiting in the queue.
o Since the size of programs is small, an RTOS can also be used in embedded systems, such as those in
transport and other domains.
o These types of systems have very few errors.
o Memory allocation is well managed in these types of systems.
Disadvantages of Real-time operating system:
The disadvantages of real-time operating systems are as follows-
o Real-time operating systems have complicated layout principles and are very costly to
develop.
o Real-time operating systems are very complex and can consume critical CPU cycles.
UNIT V SOFTWARE TESTING AND SOFTWARE
CONFIGURATION MANAGEMENT

Software Testing Strategy


Software testing is a process of determining the correctness of software by considering all its attributes
(reliability, scalability, portability, re-usability, usability) and evaluating the execution of software
components to find software bugs, errors or defects.
Software testing is a type of investigation to find out whether any defect or error is present in the
software, so that the errors can be reduced or removed to increase the quality of the software, and to
check whether it fulfils the specified requirements or not.
Software testing is a widely used practice because it is compulsory to test each and every piece of software
before deployment.
Software testing provides an independent and objective view of the software and gives assurance of the
fitness of the software. It involves testing all components under the required services to confirm
whether they satisfy the specified requirements or not. The process also provides the client
with information about the quality of the software. Testing is mandatory because it would be a
dangerous situation if the software failed at any time due to a lack of testing. So, without testing, software
cannot be deployed to the end user.
Strategy of testing

1. Before testing starts, it’s necessary to identify and specify the requirements of the product in
a quantifiable manner.
The software has different quality characteristics, such as maintainability, which means the
ability to update and modify; the probability of finding and estimating any risk; and
usability, which means how easily it can be used by the customers or end-users. All these
quality characteristics should be specified in a particular order to obtain clear test results without
any error.
2. Specifying the objectives of testing in a clear and detailed manner.
There are several objectives of testing, such as effectiveness, which means how effectively the
software can achieve its target; failure, which means the inability to fulfil the requirements and
perform functions; and the cost of defects or errors, which means the cost required to fix an error. All
these objectives should be clearly mentioned in the test plan.
3. For the software, identifying the user’s category and developing a profile for each user.
Use cases describe the interactions and communication among different classes of users and the
system to achieve the target, so as to identify the actual requirements of the users and then test
the actual use of the product.
4. Developing a test plan to give value and focus on rapid-cycle testing.
Rapid-cycle testing is a type of testing that improves quality by identifying and measuring any
changes that are required to improve the software process. Therefore, a test plan is an
important and effective document that helps the tester to perform rapid-cycle testing.

5. Robust software is developed that is designed to test itself.


The software should be capable of detecting or identifying different classes of errors. Moreover,
software design should allow automated and regression testing which tests the software to find out
if there is any adverse or side effect on the features of software due to any change in code or
program.
6. Before testing, using effective formal reviews as a filter.
A formal technical review is a technique to identify errors that have not yet been discovered. Effective
technical reviews conducted before testing reduce a significant amount of the testing effort
and the time required for testing the software, so that the overall development time of the software is
reduced.
7. Conduct formal technical reviews to evaluate the nature, quality or ability of the test
strategy and test cases.
The formal technical review helps in detecting any unfilled gaps in the testing approach. Hence, it
is necessary to evaluate the ability and quality of the test strategy and test cases by technical
reviewers to improve the quality of software.
8. For the testing process, develop an approach for continuous improvement.
As part of a statistical process control approach, the test strategy should itself be measured and
used during software testing to measure and control the quality during the development of the
software.
Type of Software testing
Manual testing
The process of checking the functionality of an application according to the customer's needs without the
help of automation tools is known as manual testing. While performing manual testing on any
application, we do not need specific knowledge of any testing tool; rather, we need a proper
understanding of the product so that we can easily prepare the test document.
Manual testing can be further divided into three types of testing, which are as follows:

 White box testing


 Black box testing
 Gray box testing
Automation testing
Automation testing is the process of converting manual test cases into test scripts with the help
of automation tools or a programming language. With the help of
automation testing, we can enhance the speed of our test execution because no human effort is required
during execution; we only need to write the test scripts and execute them.

Unit Testing
Unit testing involves the testing of each unit or an individual component of the software application. It
is the first level of functional testing. The aim behind unit testing is to validate unit components with
its performance.
A unit is a single testable part of a software system and tested during the development phase of the
application software.
The purpose of unit testing is to test the correctness of isolated code. A unit component is an
individual function or code of the application. White box testing approach used for unit testing and
usually done by the developers.
Whenever the application is ready and given to the Test engineer, he/she will start checking every
component of the module or module of the application independently or one by one, and this process
is known as Unit testing or components testing.
Why Unit Testing?
In the testing level hierarchy, unit testing is the first level of testing, done before integration testing
and the remaining levels of testing. It works on individual modules, which reduces the dependency on
waiting for other parts of the system to be complete. Unit testing frameworks, stubs, drivers and mock
objects are used for assistance in unit testing.
Generally, software goes through four levels of testing: Unit Testing, Integration Testing, System
Testing, and Acceptance Testing. Sometimes, due to time pressure, software testers do only minimal
unit testing, but skipping unit testing may lead to a higher number of defects during Integration Testing,
System Testing, and Acceptance Testing, or even during Beta Testing, which takes place after the
completion of the software application.
Some crucial reasons are listed below:

 Unit testing helps tester and developers to understand the base of code that makes them
able to change defect causing code quickly.
 Unit testing helps in the documentation.
 Unit testing fixes defects very early in the development phase that's why there is a
possibility to occur a smaller number of defects in upcoming testing levels.
 It helps with code reusability by migrating code and test cases.
How to execute Unit Testing
In order to execute Unit Tests, developers write a section of code to test a specific function in
software application. Developers can also isolate this function to test more rigorously which reveals
unnecessary dependencies between function being tested and other units so the dependencies can be
eliminated. Developers generally use UnitTest framework to develop automated test cases for unit
testing.
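As a minimal sketch of this idea, the snippet below shows a developer-written unit test using the JUnit framework mentioned later in this unit; the Calculator class and its divide() method are hypothetical examples chosen only for illustration, not part of any application discussed in this text.
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical unit under test: a simple Calculator class used only for illustration.
class Calculator {
    int divide(int a, int b) {
        if (b == 0) {
            throw new IllegalArgumentException("division by zero");
        }
        return a / b;
    }
}

// Automated unit test cases written against that single unit.
public class CalculatorTest {
    @Test
    public void divideReturnsQuotientForValidInput() {
        assertEquals(3, new Calculator().divide(9, 3));   // expected output vs. actual output
    }

    @Test(expected = IllegalArgumentException.class)
    public void divideRejectsZeroDenominator() {
        new Calculator().divide(9, 0);                    // the test fails if no exception is thrown
    }
}
Each test exercises exactly one behaviour of the isolated unit, so a failure points directly at the code responsible.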
Unit Testing is of two types

 Manual
 Automated
Unit testing is commonly automated but may still be performed manually. Either approach can be
used, although automation is generally preferred. A manual approach to unit testing may employ a
step-by-step instructional document.
Under the automated approach-

 A developer writes a section of code in the application just to test the function. They would
later comment out and finally remove the test code when the application is deployed.
 A developer could also isolate the function to test it more rigorously. This is a more thorough
unit testing practice that involves copying the code into its own testing environment rather than
leaving it in its natural environment. Isolating the code helps in revealing unnecessary dependencies
between the code being tested and other units or data spaces in the product. These
dependencies can then be eliminated.
 A coder generally uses a UnitTest Framework to develop automated test cases. Using an
automation framework, the developer codes criteria into the test to verify the correctness of
the code. During execution of the test cases, the framework logs failing test cases. Many
frameworks will also automatically flag and report, in summary, these failed test cases.
Depending on the severity of a failure, the framework may halt subsequent testing.
 The workflow of Unit Testing is 1) Create Test Cases 2) Review/Rework 3) Baseline 4)
Execute Test Cases.
Unit Testing Tools

 NUnit
 JUnit
 PHPunit
 Parasoft Jtest
 EMMA
JUnit: JUnit is a free-to-use testing framework for the Java programming language. It provides
annotations to identify test methods and assertions to verify expected results against actual results.
NUnit: NUnit is a widely used unit-testing framework for all .NET languages. It is an open-source
tool which allows writing scripts manually. It supports data-driven tests which can run in parallel.
Parasoft Jtest: Parasoft Jtest is a unit testing tool for Java developed by Parasoft. It is a code
coverage tool with line and path metrics. It allows mocking APIs with recording and verification
syntax. This tool offers line coverage, path coverage, and data coverage.
EMMA: EMMA is an open-source toolkit for analyzing and reporting coverage of code written in the
Java language. EMMA supports coverage types like method, line, and basic block. It is Java-based,
so it has no external library dependencies and can access the source code.
PHPUnit: PHPUnit is a unit testing tool for PHP programmers. It takes small portions of code, which
are called units, and tests each of them separately. The tool also allows developers to use predefined
assertion methods to assert that a system behaves in a certain manner.
Test Driven Development (TDD) & Unit Testing
Unit testing in TDD involves an extensive use of testing frameworks. A unit test framework is used in
order to create automated unit tests. Unit testing frameworks are not unique to TDD, but they are
essential to it. Below we look at some of what TDD brings to the world of unit testing:

 Tests are written before the code
 Rely heavily on testing frameworks
 All classes in the applications are tested
 Quick and easy integration is made possible
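A minimal sketch of the test-first idea, assuming JUnit and a hypothetical ShoppingCart class that does not yet exist when the test is written:
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// In TDD this test is written first; it cannot even compile until the
// hypothetical ShoppingCart class below is created to satisfy it.
public class ShoppingCartTest {
    @Test
    public void totalOfEmptyCartIsZero() {
        assertEquals(0.0, new ShoppingCart().total(), 0.0001);
    }
}

// Just enough production code is then written to make the failing test pass.
class ShoppingCart {
    double total() {
        return 0.0;
    }
}
The cycle then repeats: add a new failing test for the next requirement, write the code that makes it pass, and refactor.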
Unit Testing Best Practices

 Unit test cases should be independent. In case of any enhancements or changes in
requirements, unit test cases should not be affected.
 Test only one unit of code at a time.
 Follow clear and consistent naming conventions for your unit tests.
 In case of a change in the code of any module, ensure there is a corresponding unit test case for
the module, and that the module passes the tests before changing the implementation.
 Bugs identified during unit testing must be fixed before proceeding to the next phase in the SDLC.
 Adopt a “test as you code” approach. The more code you write without testing, the more
paths you have to check for errors.
Unit Testing Advantage

 Developers looking to learn what functionality is provided by a unit and how to use it can
look at the unit tests to gain a basic understanding of the unit API.
 Unit testing allows the programmer to refactor code at a later date, and make sure the module
still works correctly (i.e. Regression testing). The procedure is to write test cases for all
functions and methods so that whenever a change causes a fault, it can be quickly identified
and fixed.
 Due to the modular nature of the unit testing, we can test parts of the project without waiting
for others to be completed.
Unit Testing Disadvantages

 Unit testing can’t be expected to catch every error in a program. It is not possible to
evaluate all execution paths even in the most trivial programs
 Unit testing by its very nature focuses on a unit of code. Hence it can’t catch integration
errors or broad system level errors.

Integration Testing
Integration Testing is defined as a type of testing where software modules are integrated logically and
tested as a group. A typical software project consists of multiple software modules, coded by different
programmers. The purpose of this level of testing is to expose defects in the interaction between these
software modules when they are integrated
Integration Testing focuses on checking data communication amongst these modules. Hence it is also
termed as ‘I & T’ (Integration and Testing), ‘String Testing’ and sometimes ‘Thread Testing’.

Once all the components or modules are working independently, we need to check the data flow
between the dependent modules; this process is known as integration testing.
Reason Behind Integration Testing
1. Each module is designed by an individual software developer whose programming logic
may differ from that of the developers of other modules, so integration testing becomes
essential to determine that the software modules work together.
2. To check whether the interaction of the software modules with the database is erroneous
or not.
3. Requirements can be changed or enhanced at the time of module development. These new
requirements may not be tested at the level of unit testing hence integration testing
becomes mandatory.
4. Incompatibility between modules of software could create errors.
5. To test hardware's compatibility with software.
6. If exception handling is inadequate between modules, it can create bugs.
Integration Testing Techniques
Black Box Testing

 State Transition technique
 Decision Table Technique
 Boundary Value Analysis
 All-pairs Testing
 Cause and Effect Graph
 Equivalence Partitioning
 Error Guessing
White Box Testing

 Data flow testing
 Control Flow Testing
 Branch Coverage Testing
 Decision Coverage Testing
Types of Integration Testing

 Incremental integration testing
 Non-incremental integration testing

Incremental Approach
In the Incremental Approach, modules are added in ascending order one by one or according to need.
The selected modules must be logically related. Generally, two or more than two modules are added
and tested to determine the correctness of functions. The process continues until the successful testing
of all the modules.

 Top-Down approach
 Bottom-Up approach

Top-Down Approach
The top-down testing strategy deals with the process in which higher level modules are tested with
lower level modules until the successful completion of testing of all the modules. Major design flaws
can be detected and fixed early because critical modules tested first. In this type of method, we will
add the modules incrementally or one by one and check the data flow in the same order.
In the top-down approach, we will be ensuring that the module we are adding is the child of the
previous one like Child C is a child of Child B and so on as we can see in the below image:

Advantages:

 An early prototype is possible.
 Critical modules are tested first, so there are fewer chances of defects in them going undetected.
Disadvantages:

 Identification of a defect is difficult.
 Due to the high number of stubs, it gets quite complicated.
 Lower level modules are tested inadequately.
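Because the top-down approach relies on stubs to stand in for lower level modules that are not yet ready, the following hedged sketch may help; the PaymentService interface and CheckoutModule class are hypothetical names chosen only for illustration:
// Interface of a lower-level module that has not been implemented yet (hypothetical).
interface PaymentService {
    boolean charge(String account, double amount);
}

// Stub standing in for the unfinished lower-level module during top-down integration.
class PaymentServiceStub implements PaymentService {
    public boolean charge(String account, double amount) {
        return true;   // canned response so the higher-level module can be integrated and tested now
    }
}

// Higher-level module under test, integrated against the stub instead of the real service.
class CheckoutModule {
    private final PaymentService payments;

    CheckoutModule(PaymentService payments) {
        this.payments = payments;
    }

    String placeOrder(String account, double amount) {
        return payments.charge(account, amount) ? "ORDER PLACED" : "PAYMENT FAILED";
    }
}
When the real payment module is completed, it simply replaces the stub and the same integration tests are re-run.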
Bottom-Up Method
The bottom-up testing strategy deals with the process in which lower level modules are tested with
higher level modules until the successful completion of testing of all the modules. Top-level critical
modules are tested last, so defects in them may be discovered late.

In the bottom-up method, we will ensure that the modules we are adding are the parent of the
previous one as we can see in the below image:
Advantages

 Identification of a defect is easy.
 There is no need to wait for the development of all the modules, which saves time.
Disadvantages

 Critical modules are tested last, due to which their defects may go undetected for longer.
 There is no possibility of an early prototype.
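Conversely, the bottom-up approach uses a test driver in place of the higher level module that does not exist yet. A hedged sketch, again with hypothetical names and assuming JUnit as the driver:
import org.junit.Test;
import static org.junit.Assert.assertTrue;

// Completed lower-level module (hypothetical example).
class TaxCalculator {
    double taxFor(double amount) {
        return amount * 0.18;   // flat rate assumed purely for illustration
    }
}

// The test class acts as the driver that exercises the lower-level module
// before the higher-level billing module that will call it has been written.
public class TaxCalculatorDriverTest {
    @Test
    public void taxGrowsWithAmount() {
        TaxCalculator calculator = new TaxCalculator();
        assertTrue(calculator.taxFor(100.0) > calculator.taxFor(50.0));
    }
}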
Hybrid Testing Method
In this approach, both Top-Down and Bottom-Up approaches are combined for testing. In this
process, top-level modules are tested with lower level modules and lower level modules tested with
high-level modules simultaneously. There is less possibility of occurrence of defect because each
module interface is tested.

Advantages

 The hybrid method provides the features of both the Bottom-Up and Top-Down methods.
 It is the most time-saving method.
 It provides complete testing of all modules.
Disadvantages

 This method needs a higher level of concentration, as the process is carried out in both
directions simultaneously.
 It is a complicated method.
Non-incremental integration testing
We go for this method when the data flow is very complex and when it is difficult to identify which
module is a parent and which is a child. In such a case, we create the data in one module, integrate it
with all other existing modules at once, and check whether the data flows correctly. Hence, it is also
known as the Big Bang method.
Big Bang Method
In this approach, testing is done by integrating all modules at once. It is convenient for small
software systems; if it is used for large software systems, identification of defects is difficult.
Since this testing can be done only after the completion of all modules, the testing team has less time
for executing this process, so internally linked interfaces and high-risk critical modules can be
missed easily.

Advantages:

 It is convenient for small size software systems.


Disadvantages:

 Identification of defects is difficult because it is hard to trace where an error came from, and
the source of a bug is not easy to locate.
 Small modules can be missed easily.
 The time provided for testing is very short.
 Some of the interfaces may be missed during testing.
Entry and Exit Criteria of Integration Testing
Entry and Exit Criteria to Integration testing phase in any software development model
Entry Criteria:

 Unit Tested Components/Modules


 All High prioritized bugs fixed and closed
 All Modules to be code completed and integrated successfully.
 Integration tests Plan, test case, scenarios to be signed off and documented.
 Required Test Environment to be set up for Integration testing
Exit Criteria:

 Successful Testing of Integrated Application.


 Executed Test Cases are documented
 All High prioritized bugs fixed and closed
 Technical documents to be submitted followed by release Notes.
Best Practices for Integration Testing

 First, determine the Integration Test Strategy that could be adopted and later prepare the test
cases and test data accordingly.
 Study the Architecture design of the Application and identify the Critical Modules. These
need to be tested on priority.
 Obtain the interface designs from the Architectural team and create test cases to verify all of
the interfaces in detail. Interface to database/external hardware/software application must be
tested in detail.
 After the test cases, it’s the test data which plays the critical role.
 Always have the mock data prepared, prior to executing. Do not select test data while
executing the test cases.

Validation Testing
Validation is determining if the system complies with the requirements and performs functions for
which it is intended and meets the organization’s goals and user needs.
Validation testing carries great responsibility, as you need to test all the critical business requirements
based on the user's needs. Not even a single requirement asked for by the user should be missed.
Hence, a sound knowledge of validation testing is very important.
Validation Testing ensures that the product actually meets the client's needs. It can also be defined as
to demonstrate that the product fulfills its intended use when deployed on appropriate environment.

Validation Testing Workflow


When to use Validation Testing
Validation tests must be run after every feature or step in the development process is completed. For
example, unit tests, a form of validation tests, are run after every unit of code has been created.
Integration tests are run after multiple modules have been completed individually and are ready to be
combined.
Stages Involved

 Design Qualification: This includes creating the test plan based on the business
requirements. All the specifications need to be mentioned clearly.
 Installation Qualification: This includes software installation based on the requirements.
 Operational Qualification: This includes the testing phase based on the User
requirement specification.
This may include Functionality testing:

 Unit Testing – Black box, White box, Gray box.
 Integration Testing – Top-down, Bottom-up, Big bang.
 System Testing – Sanity, Smoke, and Regression Testing.
 Performance Qualification: UAT(User Acceptance testing) – Alpha and Beta testing.
 Production
Design Qualification
Design qualification simply means that you have to prepare the design of the software in such a way
so that it meets the user specifications. Primarily you need to get the User Requirements Specification
(URS) document from the client to proceed with the design.
Installation Qualification
 Installation qualification contains details like which and how many test environments would
be used, what access level is required for the testers in each environment along with the test
data required. It may include browser compatibility, tools required for execution, devices
required for testing, etc. The system being developed should be installed in accordance with
user requirements.
 Test data may be required for testing some applications and it needs to be given by the proper
person. It is a vital pre-requisite.
 Some applications may require a database. We have to keep all data required for testing ready
in a database to validate the specifications.
Operational Qualification
Operational qualification ensures that every module and sub-module designed for the application
under test functions properly as it is expected to in the desired environment.
Functional validation plays a major role in this stage. It simply means that you have to validate
the functionality of the application against each and every critical requirement mentioned. This paves
the way to map the requirements mentioned in the Functional Specification document and ensures that
the product meets all the requirements mentioned.
System Testing
System Testing is a level of testing that validates the complete and fully integrated software product.
The purpose of a system test is to evaluate the end-to-end system specifications. Usually, the software
is only one element of a larger computer-based system. Ultimately, the software is interfaced with
other software/hardware systems. System Testing is defined as a series of different tests whose sole
purpose is to exercise the full computer-based system.
System Testing is basically performed by a testing team that is independent of the development team,
which helps to test the quality of the system impartially. It covers both functional and non-functional testing.
System Testing is performed after the integration testing and before the acceptance testing.
To check the end-to-end flow of an application or the software as a user is known as System testing.
In this, we navigate (go through) all the necessary modules of an application and check if the end
features or the end business works fine, and test the product as a whole system.
It is end-to-end testing where the testing environment is similar to the production environment.
System Testing is Blackbox
Two Categories of Software Testing

 Black Box Testing
 White Box Testing
System test falls under the black box testing category of software testing.
White box testing is the testing of the internal workings or code of a software application. In contrast,
black box or System Testing is the opposite. System test involves the external workings of the
software from the user’s perspective.
What do you verify in System Testing?

 Testing the fully integrated applications including external peripherals in order to check how
components interact with one another and with the system as a whole. This is also called End
to End testing scenario.
 Verify thorough testing of every input in the application to check for desired outputs.
 Testing of the user’s experience with the application.
System Testing Hierarchy

As with almost any software engineering process, software testing has a prescribed order in which
things should be done. The following is a list of software testing categories arranged in chronological
order. These are the steps taken to fully test new software in preparation for marketing it:
 Unit testing performed on each module or block of code during development. Unit
Testing is normally done by the programmer who writes the code.
 Integration testing done before, during and after integration of a new module into the
main software package. This involves testing of each individual code module. One piece
of software can contain several modules which are often created by several different
programmers. It is crucial to test each module’s effect on the entire program model.
 System testing done by a professional testing agent on the completed software product
before it is introduced to the market.
 Acceptance testing – beta testing of the product done by the actual end users.
Types of System Testing

1. Usability Testing – mainly focuses on the user’s ease to use the application, flexibility in
handling controls and ability of the system to meet its objectives
2. Load Testing – is necessary to know that a software solution will perform under real-life
loads.
3. Regression Testing – involves testing done to make sure none of the changes made over
the course of the development process have caused new bugs. It also makes sure no old
bugs appear from the addition of new software modules over time.
4. Recovery Testing – is done to demonstrate a software solution is reliable, trustworthy
and can successfully recoup from possible crashes.
5. Migration Testing – is done to ensure that the software can be moved from older system
infrastructures to current system infrastructures without any issues.
6. Functional Testing – Also known as functional completeness testing, Functional
Testing involves trying to think of any possible missing functions. Testers might make a
list of additional functionalities that a product could have to improve it during functional
testing.
7. Hardware/Software Testing – IBM refers to Hardware/Software testing as “HW/SW
Testing”. This is when the tester focuses his/her attention on the interactions between the
hardware and software during system testing.
System Testing Process
 Test Environment Setup: Create testing environment for the better quality testing.
 Create Test Case: Generate test case for the testing process.
 Create Test Data: Generate the data that is to be tested.
 Execute Test Case: After the generation of the test case and the test data, test cases are
executed.
 Defect Reporting: Defects detected in the system are reported.
 Regression Testing: It is carried out to test for side effects of the changes made.
 Log Defects: Defects are logged and tracked until they are fixed.
 Retest: If a test is not successful, the test is performed again after the defect is fixed.

Regression Testing
Regression testing is performed under system testing to identify whether any defect has been
introduced into the system due to modification of any other part of the system. It makes sure that
changes made during the development process have not introduced new defects, and it also gives
assurance that old defects will not reappear when new software is added over time.
Tools used for System Testing :
1. JMeter
2. Galen Framework
3. Selenium
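As a hedged sketch of how such a tool can drive an end-to-end system test, the snippet below uses the Selenium WebDriver Java API; the URL, element ids and expected page title are placeholders invented for this example, not taken from any real application in this text:
import org.junit.Test;
import static org.junit.Assert.assertTrue;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

// End-to-end check of a login flow, performed through the UI as a user would.
public class LoginSystemTest {
    @Test
    public void userCanLogInAndReachDashboard() {
        WebDriver driver = new ChromeDriver();           // real browser, production-like environment
        try {
            driver.get("https://example.com/login");                         // placeholder URL
            driver.findElement(By.id("username")).sendKeys("demo_user");     // placeholder ids and data
            driver.findElement(By.id("password")).sendKeys("demo_password");
            driver.findElement(By.id("loginButton")).click();
            assertTrue(driver.getTitle().contains("Dashboard"));             // end-to-end expectation
        } finally {
            driver.quit();                               // always release the browser session
        }
    }
}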
Advantages of System Testing :

 The testers do not require in-depth programming knowledge to carry out this testing.
 It tests the entire product or software, so errors or defects which cannot be identified during
unit testing and integration testing are easily detected.
 The testing environment is similar to that of the real-time production or business environment.
 It checks the entire functionality of the system with different test scripts, and it also covers the
technical and business requirements of the clients.
 After this testing, most of the possible bugs or errors will have been covered, and hence the
development team can confidently go ahead with acceptance testing.
Disadvantages of System Testing :

 This testing is a more time-consuming process than other testing techniques, since it checks
the entire product or software.
 The cost of the testing is high, since it covers the testing of the entire software.
 It needs good debugging tools, otherwise hidden errors will not be found.
Debugging
In the development process of any software, the software program is rigorously tested, troubleshot,
and maintained for the sake of delivering bug-free products. Nothing is error-free on the first attempt.
When software is first created, it usually contains many errors; nobody is perfect, and introducing an
error in the code is not the real problem. The real problem is failing to find and prevent those errors.
All such errors and bugs are removed regularly, so we can conclude that debugging is nothing but a
process of eradicating or fixing the errors contained in a software program.
Debugging works stepwise, starting with identifying the errors, then analyzing them, and finally
removing them. Whenever software fails to deliver the expected result, the software tester is needed
to test the application and help resolve the problem.
Since errors are resolved at each step of debugging during software testing, it is a tiresome and
complex task, regardless of how efficient the final result is.
Why do we need Debugging?
Debugging starts as soon as we start writing the code for the software program. It then continues
progressively through the consecutive stages of delivering a software product, because the code gets
merged with several other programming units to form the software product.
Following are the benefits of Debugging:

 Debugging can immediately report an error condition whenever it occurs. It prevents the
result from being hampered by detecting bugs at an earlier stage, making software
development stress-free and smooth.
 It offers relevant information related to the data structures, which further helps in easier
interpretation.
 Debugging assists the developer in filtering out impractical and disruptive information.
 With debugging, the developer can easily avoid complex one-use testing code, saving
time and energy in software development.
Steps involved in Debugging
1. Identify the Error: Identifying the wrong error may result in a waste of time. Production
errors reported by users are often hard to interpret, and sometimes the information we receive
is misleading. Thus, it is mandatory to identify the actual error.
2. Find the Error Location: Once the error is correctly discovered, you will be required to
thoroughly review the code repeatedly to locate the position of the error. In general, this step
focuses on finding the error rather than perceiving it.
3. Analyze the Error: The third step comprises error analysis, a bottom-up approach that starts
from the location of the error followed by analyzing the code. This step makes it easier to
comprehend the errors. Mainly error analysis has two significant goals, i.e., evaluation of
errors all over again to find existing bugs and postulating the uncertainty of incoming
collateral damage in a fix.
4. Prove the Analysis: After analyzing the primary bugs, it is necessary to look for some extra
errors that may show up on the application. By incorporating the test framework, the fourth
step is used to write automated tests for such areas.
5. Cover Lateral Damage: The fifth phase is about accumulating all of the unit tests for the
code that requires modification. As when you run these unit tests, they must pass.
6. Fix & Validate: The last stage is the fix and validation that emphasizes fixing the bugs
followed by running all the test scripts to check whether they pass.
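A hedged sketch of the last three steps in practice: a test that reproduces a reported bug is added, the fix is applied, and the suite is re-run. The PriceCalculator class and the "negative price" bug are hypothetical and serve only to illustrate the workflow:
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical unit that once returned a negative price for discounts above 100%.
class PriceCalculator {
    double discountedPrice(double price, double discountPercent) {
        double result = price - price * discountPercent / 100.0;
        return Math.max(result, 0.0);   // the fix: the price can never drop below zero
    }
}

// This test reproduces the reported error; it failed before the fix above
// and now passes, and it stays in the suite to guard against regressions.
public class PriceCalculatorBugTest {
    @Test
    public void discountNeverProducesNegativePrice() {
        assertEquals(0.0, new PriceCalculator().discountedPrice(10.0, 150.0), 0.0001);
    }
}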
Debugging Strategies

 For a better understanding of a system, it is necessary to study the system in depth. It makes it
easier for the debugger to fabricate distinct illustrations of such systems that are needed to be
debugged.
 The backward analysis analyzes the program from the backward location where the failure
message has occurred to determine the defect region. It is necessary to learn the area of
defects to understand the reason for defects.
 In the forward analysis, the program tracks the problem in the forward direction by utilizing
the breakpoints or print statements incurred at different points in the program. It emphasizes
those regions where the wrong outputs are obtained.
 To check and fix similar kinds of problems, it is recommended to utilize past experiences.
The success rate of this approach is directly proportional to the proficiency of the debugger.
Debugging Tools
The debugging tool can be understood as a computer program that is used to test and debug several
other programs. Presently, there are many public domain debuggers, such as gdb and dbx, available
on the market, which can be utilized for debugging. These tools offer console-based command-line interfaces.
Some of the automated debugging tools include code-based tracers, profilers, interpreters, etc.
Here is a list of some of the widely used debuggers:

 Radare2
 WinDbg
 Valgrind
Radare2
Radare2 is known for its reverse engineering framework as well as binary analysis. It is made up of a
small set of utilities, either utilized altogether or independently from the command line. It is also
known as r2.
It is constructed around disassembler for computer software for generating assembly language source
code from machine-executable code. It can support a wide range of executable formats for distinct
architectures of processors and operating systems.
WinDbg
WinDbg is a multipurpose debugging tool designed for Microsoft Windows operating system. This
tool can be used to debug the memory dumps created just after the Blue Screen of Death that further
arises when a bug check is issued. Besides, it is also helpful in debugging the user-mode crash dumps,
which is why it is called post-mortem debugging.
Valgrind
Valgrind is a tool suite that offers several debugging and profiling tools to help users make faster and
more correct programs. Memcheck is one of its most popular tools; it can successfully detect the
memory-related errors that occur in C and C++ programs, errors which may otherwise crash the
program or result in unpredictable behavior.
White-Box Testing
White box is used because of the internal perspective of the system. The clear box or white box or
transparent box name denote the ability to see through the software's outer shell into its inner
workings.
Developers do white box testing. In this, the developer will test every line of the code of the program.
The developers perform the White-box testing and then send the application or the software to the
testing team, where they will perform the black box testing and verify the application along with the
requirements and identify the bugs and sends it to the developer.
The developer fixes the bugs and does one round of white box testing and sends it to the testing team.
Here, fixing the bugs implies that the bug is deleted, and the particular feature is working fine on the
application.
Here, the test engineers are not involved in fixing the defects, for the following reasons:

 Fixing the bug might interrupt the other features. Therefore, the test engineer should
always find the bugs, and developers should still be doing the bug fixes.
 If the test engineers spend most of the time fixing the defects, then they may be unable to
find the other bugs in the application.
The white box testing contains various tests, which are as follows:

 Path testing
 Loop testing
 Condition testing
 Testing based on the memory perspective
 Test performance of the program
Path testing
In path testing, we draw flow graphs and test all independent paths. Drawing the flow graph means
representing the flow of the program and showing how the parts of the program are connected to one
another, as we can see in the below image:

Testing all the independent paths means that, for example, for a path from main() to function G, we
first set the parameters and test whether the program is correct along that particular path, and in the
same way we test all other paths and fix the bugs.
Loop testing
In loop testing, we test loops such as while, for, and do-while, etc., and we also check whether the
loop termination condition works correctly and whether the number of iterations is as expected.
For example: suppose we have a program in which the developers have written a loop that runs about 50,000 times.
{
while(50,000)
……
……
}
We cannot test this program manually for all 50,000 loop cycles. So we write a small program that
exercises all 50,000 cycles, as we can see in the program below. The test P is written in the same
language as the source code of the program, and this is known as a unit test. It is written by the
developers only.
Test P
{
……
…… }
As we can see in the below image, we have various requirements such as 1, 2, 3, 4. The developer
then writes programs such as program 1, 2, 3, 4 for the corresponding conditions. Here the
application contains hundreds of lines of code.

The developer will do the white box testing, and they will test all four programs line by line of
code to find the bugs. If they find a bug in any of the programs, they will correct it, and they again
have to test the system. This process takes a lot of time and effort and slows down the product
release.
Now, suppose we have another case, where the clients want to modify the requirements. The
developer will then make the required changes and test all four programs again, which takes a lot of
time and effort.
These issues can be resolved in the following way:
We write test code for the program, and the developer writes this test code in the same language as
the source code. They then execute this test code, which is also known as a unit test program. These
test programs are linked to the main program and executed along with it.

Therefore, if there is any requirement of modification or bug in the code, then the developer makes
the adjustment both in the main program and the test program and then executes the test program.
Condition testing
In this, we will test all logical conditions for both true and false values; that is, we will verify for
both if and else condition.
For example:
if(condition) - true
{
…..
}
else - false
{
…..
}
We check that the program works correctly for both outcomes of the condition, that is, once when the
condition evaluates to true (the if branch) and once when it evaluates to false (the else branch).
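A minimal sketch of such a pair of test cases, assuming JUnit and a hypothetical AgeChecker class whose single condition is the one being tested:
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical code containing the condition under test.
class AgeChecker {
    String check(int age) {
        if (age >= 18) {          // condition evaluated to true
            return "adult";
        } else {                  // condition evaluated to false
            return "minor";
        }
    }
}

// One test case drives the condition to true, the other drives it to false.
public class AgeCheckerConditionTest {
    @Test
    public void trueBranchIsExercised() {
        assertEquals("adult", new AgeChecker().check(30));
    }

    @Test
    public void falseBranchIsExercised() {
        assertEquals("minor", new AgeChecker().check(12));
    }
}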
Generic steps of white box testing

 Design all test scenarios and test cases, and prioritize them according to their priority.
 This step involves the study of the code at runtime to examine resource utilization, areas of the
code that are not accessed, the time taken by various methods and operations, and so on.
 In this step, testing of internal subroutines takes place, checking whether internal subroutines
such as non-public methods and interfaces are able to handle all types of data appropriately or not.
 This step focuses on testing of control statements like loops and conditional statements to
check their efficiency and accuracy for different data inputs.
 In the last step, white box testing includes security testing to check all possible security
loopholes by looking at how the code handles security.
Reasons for white box testing

 It identifies internal security holes.


 To check the way of input inside the code.
 Check the functionality of conditional loops.
 To test function, object, and statement at an individual level.
Advantages of White box testing

 White box testing optimizes code so hidden errors can be identified.


 Test cases of white box testing can be easily automated.
 This testing is more thorough than other testing approaches as it covers all code paths.
 It can be started in the SDLC phase even without GUI.
Disadvantages of White box testing

 White box testing is too much time consuming when it comes to large-scale programming
applications.
 White box testing is much expensive and complex.
 It can miss errors caused by missing functionality, because only the code that already exists is examined.
 White box testing needs professional programmers who have a detailed knowledge and
understanding of programming language and implementation.
Techniques Used in White Box Testing
Data Flow Testing – Data flow testing is a group of testing strategies that examines the control flow
of programs in order to explore the sequence of variables according to the sequence of events.

Control Flow Testing – Control flow testing determines the execution order of statements or
instructions of the program through a control structure. The control structure of a program is used to
develop a test case for the program. In this technique, a particular part of a large program is selected
by the tester to set the testing path. Test cases are represented by the control graph of the program.

Branch Testing – The branch coverage technique is used to cover all branches of the control flow
graph. It covers all the possible outcomes (true and false) of each condition of a decision point at
least once.

Statement Testing – The statement coverage technique is used to design white box test cases. This
technique involves execution of all statements of the source code at least once. It is used to calculate
the total number of executed statements in the source code, out of the total statements present in the
source code.

Decision Testing – This technique reports the true and false outcomes of Boolean expressions.
Whenever there is a possibility of two or more outcomes from statements like do-while, if and case
statements (control flow statements), it is considered a decision point because there are two outcomes,
either true or false.
Basis Path Testing
Basis Path Testing in software engineering is a White Box Testing method in which test cases are
defined based on flows or logical paths that can be taken through the program. The objective of basis
path testing is to define the number of independent paths, so the number of test cases needed can be
defined explicitly to maximize test coverage.
In software engineering, Basis path testing involves execution of all possible blocks in a program and
achieves maximum path coverage with the least number of test cases. It is a hybrid method of branch
testing and path testing methods.
Steps for Basis Path testing
The basic steps involved in basis path testing include

 Draw a control graph (to determine different program paths)
 Calculate Cyclomatic complexity (metrics to determine the number of independent paths)
 Find a basis set of paths
 Generate test cases to exercise each path
Example
Consider the code snippet below, for which we will conduct basis path testing:
int num1 = 6;
int num2 = 9;
if (num2 == 0) {
    cout << "num1/num2 is undefined" << endl;
} else {
    if (num1 > num2) {
        cout << "num1 is greater" << endl;
    } else {
        cout << "num2 is greater" << endl;
    }
}

Step 1: Draw the control flow graph


The control flow graph of the code above will be as follows:
Step 2: Calculate cyclomatic complexity
The cyclomatic complexity of the control flow graph above is given by
V(G) = E - N + 2P
where,
 E = the number of edges in the control flow graph.
 N = the number of nodes in the control flow graph.
 P = the number of connected components in the control flow graph.
Here the graph has 10 edges (A-J), 9 nodes (1-9) and one connected component, so
V(G) = 10 - 9 + 2(1) = 3, which equals the number of independent paths identified in the next step.
Step 3: Identify independent paths
The independent paths in the control flow graph are as follows:

 Path 1: 1A-2B-3C-4D-5F-9
 Path 2: 1A-2B-3C-4E-6G-7I-9
 Path 3: 1A-2B-3C-4E-6H-8J-9
Step 4: Design test cases
The test cases to execute all paths above will be as follows:
Path 1: 1A-2B-3C-4D-5F-9 – input values: num1 = 9, num2 = 0
Path 2: 1A-2B-3C-4E-6G-7I-9 – input values: num1 = 4, num2 = 2
Path 3: 1A-2B-3C-4E-6H-8J-9 – input values: num1 = 6, num2 = 8
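As a hedged sketch of step 4, the snippet below re-expresses the C++ fragment above as a Java method with num1 and num2 as parameters (an assumption made only so that the paths can be driven from JUnit test cases) and supplies one test per independent path using the input values from the table:
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Java rendering of the snippet above, with num1 and num2 turned into parameters.
class NumberComparator {
    String compare(int num1, int num2) {
        if (num2 == 0) {
            return "num1/num2 is undefined";   // exercised by Path 1
        } else if (num1 > num2) {
            return "num1 is greater";          // exercised by Path 2
        } else {
            return "num2 is greater";          // exercised by Path 3
        }
    }
}

public class BasisPathTest {
    @Test
    public void path1_divisionUndefined() {
        assertEquals("num1/num2 is undefined", new NumberComparator().compare(9, 0));
    }

    @Test
    public void path2_num1Greater() {
        assertEquals("num1 is greater", new NumberComparator().compare(4, 2));
    }

    @Test
    public void path3_num2Greater() {
        assertEquals("num2 is greater", new NumberComparator().compare(6, 8));
    }
}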

Independent Paths
An independent path in the control flow graph is the one which introduces at least one new edge that
has not been traversed before the path is defined. The cyclomatic complexity gives the number of
independent paths present in a flow graph. This is because the cyclomatic complexity is used as an
upper-bound for the number of tests that should be executed in order to make sure that all the
statements in the program have been executed at least once.
Advantages of Basic Path Testing

 It helps to reduce the redundant tests
 It focuses attention on program logic
 It helps facilitates analytical versus arbitrary case design
 Test cases which exercise basis set will execute every statement in a program at least
once
Control Structure Testing
Control structure testing is used to increase the coverage area by testing various control structures
present in the program. The different types of testing performed under control structure testing are as
follows-
 Condition Testing
 Data Flow Testing
 Loop Testing
1. Condition Testing: Condition testing is a test cased design method, which ensures that the logical
condition and decision statements are free from errors. The errors present in logical conditions can be
incorrect boolean operators, missing parenthesis in a booleans expression, error in relational
operators, arithmetic expressions, and so on. The common types of logical conditions that are tested
using condition testing are-
1. A relation expression, like E1 op E2 where ‘E1’ and ‘E2’ are arithmetic expressions and ‘OP’
is an operator.
2. A simple condition like any relational expression preceded by a NOT (~) operator. For
example, (~E1), where ‘E1’ is an arithmetic expression and ‘~’ denotes the NOT operator.
3. A compound condition consists of two or more simple conditions, Boolean operator, and
parenthesis. For example, (E1 & E2)|(E2 & E3) where E1, E2, E3 denote arithmetic
expression and ‘&’ and ‘|’ denote AND or OR operators.
4. A Boolean expression consists of operands and a Boolean operator like ‘AND’, OR, NOT.
For example, ‘A|B’ is a Boolean expression where ‘A’ and ‘B’ denote operands and | denotes
OR operator.
2. Data Flow Testing: The data flow test method chooses the test paths of a program based on the
locations of the definitions and uses of the variables in the program. The data flow test approach can
be described as follows: suppose each statement in a program is assigned a unique statement number
and that each function cannot modify its parameters or global variables. Then, for a statement with S
as its statement number,
DEF (S) = {X | statement S contains a definition of X}
USE (S) = {X | statement S contains a use of X}
If statement S is an if or loop statement, then its DEF set is empty and its USE set depends on the
condition in statement S. The definition of a variable X at statement S is said to be live at statement S’
if there is a path from S to statement S’ that contains no other definition of X. A definition-use
(DU) chain of variable X has the form [X, S, S’], where S and S’ denote statement numbers, X is in
DEF(S) and USE(S’), and the definition of X in statement S is live at statement S’. A simple data flow
test approach requires that each DU chain be covered at least once. This approach is known as the DU
test approach. DU testing does not ensure coverage of all branches of a program; however, a branch
fails to be covered by DU testing only in rare cases, such as an if-then-else construct in which the
then part does not contain a definition of any variable and the else part is absent. Data flow testing
strategies are appropriate for choosing test paths of a program containing
nested if and loop statements.
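To make the DEF and USE sets concrete, here is a small hedged illustration; the SensorMonitor class and its statement numbers are invented purely for this example:
// Hypothetical unit annotated with DEF and USE sets per statement.
class SensorMonitor {
    int limit = 50;

    int readSensor() {                 // stand-in input source, for illustration only
        return 42;
    }

    void report(int value) {
        System.out.println(value);
    }

    void monitor() {
        int x = readSensor();          // S1: DEF(S1) = {x},  USE(S1) = {}
        int y = x + 10;                // S2: DEF(S2) = {y},  USE(S2) = {x}
        if (y > limit) {               // S3: USE(S3) = {y, limit}
            report(y);                 // S4: USE(S4) = {y}
        }
        // DU chains: [x, S1, S2], [y, S2, S3] and [y, S2, S4];
        // DU testing requires a test path covering each chain at least once.
    }
}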
3. Loop Testing : Loop testing is actually a white box testing technique. It specifically focuses on the
validity of loop construction. Following are the types of loops.
1. Simple Loops – The following set of tests can be applied to simple loops, where the maximum
allowable number of passes through the loop is n.
1. Skip the entire loop.
2. Traverse the loop only once.
3. Traverse the loop two times.
4. Make p passes through the loop, where p < n.
5. Traverse the loop n-1, n, and n+1 times.
Concatenated Loops – If the loops are not dependent on each other, concatenated loops can be tested
using the approach used for simple loops. If the loops are interdependent, the steps used for nested loops are followed.

Nested Loops – Loops within loops are called nested loops. When testing nested loops, the number
of tests increases as the level of nesting increases. The steps for testing nested loops are as
follows:
1. Start with the inner loop and set all other loops to their minimum values.
2. Conduct simple loop testing on the inner loop.
3. Work outwards.
4. Continue until all loops have been tested.
Unstructured Loops – This type of loop should be redesigned, whenever possible, to reflect the use
of the structured programming constructs.
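A minimal loop-testing sketch, assuming JUnit and a hypothetical Summer class whose simple loop makes n passes; the test cases mirror the simple-loop checklist above (skip the loop, one pass, two passes, a typical p < n):
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical unit containing a simple loop whose pass count depends on n.
class Summer {
    int sumOfFirst(int n) {
        int total = 0;
        for (int i = 1; i <= n; i++) {   // simple loop: executes n times
            total += i;
        }
        return total;
    }
}

// Loop-testing cases for the simple loop.
public class SummerLoopTest {
    @Test
    public void loopSkippedEntirely() {
        assertEquals(0, new Summer().sumOfFirst(0));    // zero passes
    }

    @Test
    public void loopRunsExactlyOnce() {
        assertEquals(1, new Summer().sumOfFirst(1));    // one pass
    }

    @Test
    public void loopRunsTwice() {
        assertEquals(3, new Summer().sumOfFirst(2));    // two passes
    }

    @Test
    public void typicalNumberOfPasses() {
        assertEquals(15, new Summer().sumOfFirst(5));   // p passes where p < n
    }
}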
Black-Box Testing
Black box testing is a technique of software testing which examines the functionality of software
without peering into its internal structure or coding. The primary source of black box testing is a
specification of requirements that is stated by the customer.
In this method, tester selects a function and gives input value to examine its functionality, and checks
whether the function is giving expected output or not. If the function produces correct output, then it
is passed in testing, otherwise failed. The test team reports the result to the development team and
then tests the next function. After completing testing of all functions if there are severe problems, then
it is given back to the development team for correction.

Generic steps of black box testing

 The black box test is based on the specification of requirements, so it is examined in the
beginning.
 In the second step, the tester creates a positive test scenario and an adverse test scenario by
selecting valid and invalid input values to check that the software is processing them correctly
or incorrectly.
 In the third step, the tester develops various test cases using techniques such as decision table,
all-pairs testing, equivalence partitioning, error guessing, cause-effect graph, etc.
 The fourth phase includes the execution of all test cases.
 In the fifth step, the tester compares the expected output against the actual output.
 In the sixth and final step, if there is any flaw in the software, then it is cured and tested again.
Types of Black Box Testing
There are many types of Black Box Testing but the following are the prominent ones –

 Functional testing – This black box testing type is related to the functional requirements of a
system; it is done by software testers.
 Non-functional testing – This type of black box testing is not related to testing of specific
functionality, but non-functional requirements such as performance, scalability, usability.
 Regression testing – Regression Testing is done after code fixes, upgrades or any other
system maintenance to check the new code has not affected the existing code.
Test procedure
The test procedure of black box testing is a kind of process in which the tester has specific knowledge
about the software's work, and it develops test cases to check the accuracy of the software's
functionality.
There are various techniques used in black box testing for testing like decision table technique,
boundary value analysis technique, state transition, All-pair testing, cause-effect graph technique,
equivalence partitioning technique, error guessing technique, use case technique and user story
technique. All these techniques have been explained in detail within the tutorial.
Test cases
Test cases are created considering the specification of the requirements. These test cases are generally
created from working descriptions of the software including requirements, design parameters, and
other specifications. For the testing, the test designer selects both positive test scenario by taking valid
input values and adverse test scenario by taking invalid input values to determine the correct output.
Test cases are mainly designed for functional testing but can also be used for non-functional testing.
Test cases are designed by the testing team, there is not any involvement of the development team of
software.
Techniques Used in Black Box Testing

Decision Table Technique – A systematic approach where various input combinations and their
respective system behavior are captured in a tabular form. It is appropriate for functions that have a
logical relationship between two or more inputs.

Boundary Value Technique – Used to test boundary values; boundary values are those that contain
the upper and lower limit of a variable. It tests whether the software produces the correct output when
a boundary value is entered.

State Transition Technique – Used to capture the behavior of the software application when
different input values are given to the same function. This applies to those types of applications that
provide a specific number of attempts to access the application.

All-pair Testing Technique – Used to test all the possible discrete combinations of values. This
combinational method is used for testing applications that use checkbox input, radio button input,
list box, text box, etc.

Cause-Effect Technique – Underlines the relationship between a given result and all the factors
affecting the result. It is based on a collection of requirements.

Equivalence Partitioning Technique – A technique in which input data is divided into partitions of
valid and invalid values, and all values within a partition are expected to exhibit the same behavior.

Error Guessing Technique – A technique in which there is no specific method for identifying the
error. It is based on the experience of the test analyst, who uses that experience to guess the
problematic areas of the software.

Use Case Technique – Used to identify test cases from the beginning to the end of the system as per
the usage of the system. Using this technique, the test team creates a test scenario that can exercise
the entire software based on the functionality of each function from start to end.
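As a hedged sketch of how the equivalence partitioning and boundary value techniques translate into test cases, assume a requirement (invented for this example) that an age field is valid only between 18 and 60 inclusive; the black-box tester derives the inputs below from that requirement alone, without looking at how the AgeValidator class is implemented:
import org.junit.Test;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

// Hypothetical unit whose internals are irrelevant to the black-box tester.
class AgeValidator {
    boolean isValid(int age) {
        return age >= 18 && age <= 60;
    }
}

public class AgeValidatorBlackBoxTest {
    // Equivalence partitioning: one representative value per partition.
    @Test
    public void invalidLowPartition() {
        assertFalse(new AgeValidator().isValid(10));
    }

    @Test
    public void validPartition() {
        assertTrue(new AgeValidator().isValid(35));
    }

    @Test
    public void invalidHighPartition() {
        assertFalse(new AgeValidator().isValid(70));
    }

    // Boundary value analysis: values at and just around the two limits.
    @Test
    public void lowerBoundary() {
        assertFalse(new AgeValidator().isValid(17));
        assertTrue(new AgeValidator().isValid(18));
    }

    @Test
    public void upperBoundary() {
        assertTrue(new AgeValidator().isValid(60));
        assertFalse(new AgeValidator().isValid(61));
    }
}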

Tools used for Black Box Testing:


Tools used for Black box testing largely depends on the type of black box testing you are doing.

 For Functional/ Regression Tests you can use – QTP, Selenium
 For Non-Functional Tests, you can use – LoadRunner, Jmeter
Difference between white-box testing and black-box testing

White-box testing vs. black box testing:
1. The developers can perform white box testing; the test engineers perform the black box testing.
2. To perform WBT, we should have an understanding of the programming languages; to perform
BBT, there is no need to have an understanding of the programming languages.
3. In WBT, we look into the source code and test the logic of the code; in BBT, we verify the
functionality of the application based on the requirement specification.
4. In WBT, the developer should know about the internal design of the code; in BBT, there is no
need to know about the internal design of the code.

Software Configuration Management


Software Configuration Management(SCM) is a process to systematically manage, organize, and
control the changes in the documents, codes, and other entities during the Software Development Life
Cycle. The primary goal is to increase productivity with minimal mistakes. SCM is part of cross-
disciplinary field of configuration management and it can accurately determine who made which
revision.
Why do we need Software Configuration management?

 There are multiple people working on software which is continually being updated.
 It may be the case that multiple versions, branches and authors are involved in a software
configuration project, and the team is geographically distributed and works concurrently.
 Changes in user requirements, policy, budget and schedule need to be accommodated.
 Software should be able to run on various machines and operating systems.
 Helps to develop coordination among stakeholders
 SCM process is also beneficial to control the costs involved in making changes to a system

Any change in the software configuration Items will affect the final product. Therefore, changes to
configuration items need to be controlled and managed.
Tasks in SCM process

 Configuration Identification
 Baselines
 Change Control
 Configuration Status Accounting
 Configuration Audits and Reviews
Configuration Identification:
Configuration identification is a method of determining the scope of the software system. It is based
on the idea that you cannot manage or control something if you do not know what it is. A configuration
item is identified by a description that contains the CSCI type (Computer Software Configuration
Item), a project identifier and version information.
Activities during this process:

 Identification of configuration Items like source code modules, test case, and
requirements specification.
 Identification of each CSCI in the SCM repository, by using an object-oriented approach
 The process starts with basic objects which are grouped into aggregate objects. Details of
what, why, when and by whom changes in the test are made
 Every object has its own features that identify its name that is explicit to all other objects
 List of resources required such as the document, the file, tools, etc.
Example:
Instead of naming a file login.php, it should be named login_v1.2.php, where v1.2 stands for the
version number of the file.
Instead of naming a folder “Code”, it should be named “Code_D”, where D indicates that the code
should be backed up daily.
Baseline:
A baseline is a formally accepted version of a software configuration item. It is designated and fixed
at a specific time while conducting the SCM process. It can only be changed through formal change
control procedures. In simple words, baseline means ready for release.
Activities during this process:

 Facilitate construction of various versions of an application


 Defining and determining mechanisms for managing various versions of these work products
 The functional baseline corresponds to the reviewed system requirements
 Widely used baselines include functional, developmental, and product baselines
Change Control:
Change control is a procedural method which ensures quality and consistency when changes are made
in the configuration object. In this step, the change request is submitted to software configuration
manager.
Activities during this process:

 Control ad-hoc change to build stable software development environment. Changes are
committed to the repository
 The request will be checked based on the technical merit, possible side effects and overall
impact on other configuration objects.
 It manages changes and making configuration items available during the software lifecycle
Configuration Status Accounting:
Configuration status accounting tracks each release during the SCM process. This stage involves
tracking what each version has and the changes that lead to this version.
Activities during this process:

 Keeps a record of all the changes made to the previous baseline to reach a new baseline
 Identify all items to define the software configuration
 Monitor status of change requests
 Complete listing of all changes since the last baseline
 Allows tracking of progress to next baseline
 Allows to check previous releases/versions to be extracted for testing
Configuration Audits and Reviews:
Software Configuration audits verify that all the software product satisfies the baseline needs. It
ensures that what is built is what is delivered.
Activities during this process:

 Configuration auditing is conducted by auditors by checking that defined processes are being
followed and ensuring that the SCM goals are satisfied.
 To verify compliance with configuration control standards. auditing and reporting the changes
made
 SCM audits also ensure that traceability is maintained during the process.
 Ensures that changes made to a baseline comply with the configuration status reports
 Validation of completeness and consistency
Participant of SCM process:

1. Configuration Manager

 Configuration Manager is the head who is Responsible for identifying configuration


items.
 CM ensures team follows the SCM process
 He/She needs to approve or reject change requests
2. Developer

 The developer needs to change the code as per standard development activities or change
requests. He is responsible for maintaining configuration of code.
 The developer should check the changes and resolves conflicts
3. Auditor

 The auditor is responsible for SCM audits and reviews.


 Need to ensure the consistency and completeness of release.
4. Project Manager:

 Ensure that the product is developed within a certain time frame


 Monitors the progress of development and recognizes issues in the SCM process
 Generate reports about the status of the software system
 Make sure that processes and policies are followed for creating, changing, and testing
5. User
The end user should understand the key SCM terms to ensure he has the latest version of the software
Software Configuration Management Plan
The SCMP (Software Configuration management planning) process planning begins at the early
coding phases of a project. The outcome of the planning phase is the SCM plan which might be
stretched or revised during the project.

 The SCMP can follow a public standard like the IEEE 828 or organization specific
standard
 It defines the types of documents to be managed and a document naming convention, for
example Test_v1.
 SCMP defines the person who will be responsible for the entire SCM process and
creation of baselines.
 Fix policies for version management & change control
 Define tools which can be used during the SCM process
 Configuration management database for recording configuration information.
Software Configuration Management Tools
Concurrency Management:
When two or more tasks are happening at the same time, it is known as concurrent operation.
Concurrency in the context of SCM means that the same file is being edited by multiple persons at the
same time. If concurrency is not managed correctly with SCM tools, it may create many pressing
issues.
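A minimal sketch of how an SCM tool might serialize concurrent edits to the same file is given below;
it assumes a simple in-process check-out lock (real tools use repository-level locking or merge-based
concurrency), and the class name CheckoutRegistry is an illustrative assumption.

import threading

class CheckoutRegistry:
    """Tracks which user has a file checked out and refuses a second concurrent check-out."""
    def __init__(self):
        self._lock = threading.Lock()
        self._checked_out = {}          # file path -> user name

    def check_out(self, path: str, user: str) -> bool:
        with self._lock:
            if path in self._checked_out:
                return False            # someone else is already editing this file
            self._checked_out[path] = user
            return True

    def check_in(self, path: str, user: str) -> None:
        with self._lock:
            if self._checked_out.get(path) == user:
                del self._checked_out[path]

registry = CheckoutRegistry()
print(registry.check_out("src/login.c", "alice"))   # True
print(registry.check_out("src/login.c", "bob"))     # False until alice checks in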
Version Control:
SCM uses an archiving method that saves every change made to a file. With the help of this archiving
or save feature, it is possible to roll back to a previous version in case of issues.
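The archive-and-roll-back idea can be sketched as follows, assuming every saved change is kept as a
full in-memory snapshot (real tools typically store deltas on disk); the class name FileHistory is an
illustrative assumption.

class FileHistory:
    """Keeps every saved version of a file so that earlier versions can be restored."""
    def __init__(self, initial: str = ""):
        self._versions = [initial]

    def save(self, content: str) -> int:
        self._versions.append(content)
        return len(self._versions) - 1       # version number of the new snapshot

    def rollback(self, version: int) -> str:
        # Roll back to an earlier version in case of issues
        return self._versions[version]

history = FileHistory("v0 contents")
v1 = history.save("v1 contents")
history.save("a change that broke the build")
print(history.rollback(v1))                  # -> "v1 contents"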
Importance of SCM
It is practical in controlling and managing access to various SCIs, e.g., by preventing two
members of a team from checking out the same component for modification at the same time.
SCM Repository
Software Configuration Management (SCM) is any kind of practice that tracks and provides control
over changes to source code. Software developers sometimes use revision control software to
maintain documentation and configuration files as well as source code. Revision control may
also track changes to configuration files.
As teams design, develop and deploy software, it is common for multiple versions of the same
software to be deployed in different sites and for the software's developers to be working
simultaneously on updates. Bugs or features of the software are often only present in certain versions.
Therefore, for the purposes of locating and fixing bugs, it is vitally important to be able to retrieve and
run different versions of the software to determine in which version the problem occurs. It may also
be necessary to develop two versions of the software concurrently (for instance, one version where
bugs are fixed but no new features are added, while the other version is where new features are worked on).
At the simplest level, developers could simply retain multiple copies of the different versions of the
program, and label them appropriately. This simple approach has been used in many large software
projects. While this method can work, it is inefficient as many near-identical copies of the program
have to be maintained. This requires a lot of self-discipline on the part of developers and often leads
to mistakes. Since the code base is the same, it also requires granting read-write-execute permission to
a set of developers, and this adds the pressure of someone managing permissions so that the code base
is not compromised, which adds more complexity. Consequently, systems to automate some or all of
the revision control process have been developed. This ensures that the majority of management of
version control steps is hidden behind the scenes.
Moreover, in software development, legal and business practice and other environments, it has
become increasingly common for a single document or snippet of code to be edited by a team, the
members of which may be geographically dispersed and may pursue different and even contrary
interests. Sophisticated revision control that tracks and accounts for ownership of changes to
documents and code may be extremely helpful or even indispensable in such situations.
o Synchronization
We can synchronize our code so programmers can get the latest code and are able to fetch the
updated code at any time from the repository.
o Short and Long term undo
In some cases, when a file gets really messed up, we can do a short-term undo to the last saved
version, or we can do a long-term undo, which rolls back to a much earlier version.
o Track changes
We can track our own changes, and when someone else makes a change we can see the commits for
the changes they have made.
o Ownership
We can see the ownership of the commits that have been made on a branch, typically the master
branch.
o Branching and merging
We can do branching and merging, which is very important in source code management: we can
create a branch of our source code, make our own changes on it, and then merge it back into the
master branch (a sketch of this workflow follows this list).
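As referenced in the branching and merging item above, the workflow can be sketched by driving Git
from Python; this assumes Git is the revision control tool in use, that the script runs inside an existing
repository with a master branch, and that there are local edits to commit. The branch name and commit
message are illustrative.

import subprocess

def git(*args: str) -> None:
    """Run a git command and fail loudly if it does not succeed."""
    subprocess.run(["git", *args], check=True)

# Create a feature branch, commit a change on it, then merge it back into master.
git("checkout", "-b", "feature/login-fix")   # branch off the current HEAD
git("add", ".")                              # stage the local edits (assumed to exist)
git("commit", "-m", "Fix login timeout")
git("checkout", "master")
git("merge", "feature/login-fix")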
SCM Process
It uses tools that ensure the necessary changes have been implemented adequately in the
appropriate components. The SCM process defines a number of tasks:

 Identification of objects in the software configuration
 Version Control
 Change Control
 Configuration Audit
 Status Reporting

Identification
Basic Object: A unit of text created by a software engineer during analysis, design, code, or test.
Aggregate Object: A collection of basic objects and other aggregate objects; a Design Specification
is an aggregate object.
Each object has a set of distinct characteristics that identify it uniquely: a name, a description, a list of
resources, and a "realization."
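A minimal sketch of this identification scheme in Python is given below; the classes follow the text
(name, description, list of resources, realization), but the class and attribute names themselves are
assumptions made for illustration.

from dataclasses import dataclass, field
from typing import List, Union

@dataclass
class BasicObject:
    """A unit of text created during analysis, design, code, or test."""
    name: str
    description: str
    resources: List[str]
    realization: str                     # e.g. a pointer to the file that holds the text

@dataclass
class AggregateObject:
    """A collection of basic objects and other aggregate objects, e.g. a Design Specification."""
    name: str
    members: List[Union[BasicObject, "AggregateObject"]] = field(default_factory=list)

spec = AggregateObject("Design Specification")
spec.members.append(BasicObject("LoginModuleDesign", "Design of the login module",
                                ["UML tool"], "docs/login_design.md"))
print(spec.name, "contains", [m.name for m in spec.members])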
Version Control
Version control combines procedures and tools to handle the different versions of configuration
objects that are generated during the software process.
Clemm defines version control in the context of SCM: Configuration management allows a user to
specify the alternative configuration of the software system through the selection of appropriate
versions. This is supported by associating attributes with each software version, and then allowing a
configuration to be specified [and constructed] by describing the set of desired attributes.
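The attribute-based selection that Clemm describes can be sketched as follows; the attribute names and
version identifiers are assumptions made for illustration.

from typing import Dict, List

versions: List[Dict] = [
    {"id": "1.0",     "attrs": {"platform": "linux",   "stage": "release"}},
    {"id": "1.1",     "attrs": {"platform": "linux",   "stage": "beta"}},
    {"id": "1.1-win", "attrs": {"platform": "windows", "stage": "beta"}},
]

def select(desired: Dict[str, str]) -> List[str]:
    """Return the ids of all versions whose attributes match every desired attribute."""
    return [v["id"] for v in versions
            if all(v["attrs"].get(k) == value for k, value in desired.items())]

# Specify a configuration by describing the set of desired attributes
print(select({"platform": "linux", "stage": "beta"}))   # ['1.1']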
Change Control
James Bach describes change control in the context of SCM as: Change control is vital. But the
forces that make it essential also make it annoying.
We worry about change because a small confusion in the code can create a big failure in the product.
But it can also fix a significant failure or enable incredible new capabilities. We worry about change
because a single rogue developer could sink the project, yet brilliant ideas originate in the minds of
those rogues, and a burdensome change control process could effectively discourage them from doing
creative work.
A change request is submitted and evaluated to assess its technical merit, potential side effects, the
overall impact on other configuration objects and system functions, and the projected cost of the change.
The results of the evaluations are presented as a change report, which is used by a change control
authority (CCA) - a person or a group who makes a final decision on the status and priority of the
change. The "check-in" and "check-out" process implements two necessary elements of change
control-access control and synchronization control.
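A minimal sketch of this change-request flow is given below, assuming a simple in-memory model in
which a change control authority (CCA) approves or rejects each evaluated request; the class name,
status values, and cost threshold are assumptions made for illustration.

from dataclasses import dataclass

@dataclass
class ChangeRequest:
    request_id: str
    description: str
    technical_merit: bool
    side_effects: str
    projected_cost: float
    status: str = "submitted"            # submitted -> approved / rejected

def cca_decision(request: ChangeRequest) -> ChangeRequest:
    """The change control authority reviews the change report and sets the final status."""
    approve = request.technical_merit and request.projected_cost < 10_000
    request.status = "approved" if approve else "rejected"
    return request

cr = ChangeRequest("CR-207", "Refactor payment module", True, "touches the order service", 4_500.0)
print(cca_decision(cr).status)           # approved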
Configuration Audit
SCM audits verify that the software product satisfies the baseline requirements and ensure that
what is built is what is delivered.
SCM audits also ensure that traceability is maintained between all CIs and that all work requests are
associated with one or more CI modifications. SCM audits are the "watchdogs" that ensure the
integrity of the project's scope is preserved.
Status Reporting
Configuration status reporting (sometimes also called status accounting) provides accurate status and
current configuration data to developers, testers, end users, customers, and stakeholders through admin
guides, user guides, FAQs, release notes, installation guides, configuration guides, etc.
Types of Supply Chain Models

 Continuous Flow Model: One of the more traditional supply chain methods, this model is
often best for mature industries. The continuous flow model relies on a manufacturer
producing the same good over and over and expecting customer demand with little variation.
 Agile Model: This model is best for companies with unpredictable demand or customer-order
products. This model prioritizes flexibility, as a company may have a specific need at any
given moment and must be prepared to pivot accordingly.
 Fast Model: This model emphasizes the quick turnover of a product with a short life cycle.
Using a fast chain model, a company strives to capitalize on a trend, quickly produce goods,
and ensure the product is fully sold before the trend ends.
 Flexible Model: The flexible model works best for companies impacted by seasonality. Some
companies may have much higher demand requirements during peak season and low volume
requirements in others. A flexible model of supply chain management makes sure production
can easily be ramped up or wound down.
 Efficient Model: For companies competing in industries with very tight profit margins, a
company may strive to get an advantage by making their supply chain management process
the most efficient. This includes utilizing equipment and machinery in the most ideal ways in
addition to managing inventory and processing orders most efficiently.
 Custom Model: If none of the models above suits a company's needs, it can always turn to a
custom model. This is often the case for highly specialized industries with high technical
requirements, such as automobile manufacturing.
Example of SCM
Understanding the importance of SCM to its business, Walgreens Boots Alliance Inc. decided to
transform its supply chain by investing in technology to streamline the entire process. For several
years the company has been investing and revamping its supply chain management process.
Walgreens was able to use big data to help improve its forecasting capabilities and better manage the
sales and inventory management processes.