MCS-227
Unit 5: Scaling
Utility computing refers to the technologies and business models in which a service provider offers computing resources to IT customers and charges them according to their consumption. Examples of such IT services are storage, computing power, and applications.
The term comes from utility services such as water, telephone, electricity, and gas that are supplied by utility companies. In the same manner, a customer who receives utility computing draws computing power from a shared computer network and is billed on the basis of metered consumption.
Utility computing is closely related to virtualization: the total amount of web storage and computing power made available to the user is far greater than that of a single time-sharing computer. The web service is delivered by a number of backend web servers. These servers may be dedicated machines organised as a cluster, which is created and then leased to the end user. Distributed computing is the method in which a single computation is carried out across multiple web servers.
In utility computing there is a provider who owns the storage or computing resources, and the customer is charged according to how much of the service they actually use; the services are neither billed at a flat monthly rate nor sold outright. Depending on the resources offered, utility computing is also called Infrastructure as a Service (IaaS) or Hardware as a Service (HaaS).
Its charging model works like that of other basic utilities. Whether you are an individual or a major company using electricity, you do not pay a flat monthly rate; you pay for the amount of electricity you actually consume.
Some companies offer a form of utility computing in which the user rents a cloud computer and uses it to run applications, algorithms, or anything else that needs a lot of computing power. You pay per second or per hour of use rather than a flat fee for the service.
Utility computing is attractive because of its flexibility. Since you neither own the resources nor lease them for long periods, it is easy to change the amount of computing power you buy; you can grow or shrink the amount of service within a few seconds according to your business requirements.
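The pay-per-use idea can be illustrated with a small metering calculation; the rates and usage figures in the sketch below are invented purely for illustration and do not reflect any real provider's tariff.

```python
# Hypothetical utility-computing bill: the customer is charged only for what was consumed.
RATE_PER_VCPU_HOUR = 0.045   # illustrative compute price, not a real tariff
RATE_PER_GB_MONTH = 0.02     # illustrative storage price

def monthly_bill(vcpu_hours: float, storage_gb: float) -> float:
    """Return the metered charge for the compute hours and storage actually used."""
    return vcpu_hours * RATE_PER_VCPU_HOUR + storage_gb * RATE_PER_GB_MONTH

# A workload that ran 3 small VMs for 200 hours each and stored 500 GB of data:
print(f"Bill: ${monthly_bill(3 * 200, 500):.2f}")  # usage-based, no flat monthly fee
```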
On-demand Self Service: A consumer can request and receive access to a service offering without an administrator or support staff having to fulfil the request manually.
Broad Network Access: The services can be accessed from any location, at any time, and from any type of device.
Resource Pooling: Resources such as storage, memory, network bandwidth, and virtual machines are consumed by cloud users from a shared pool; resource pooling means that multiple customers are served from the same physical resources.
Measured Services: You pay according to the services you actually use.
Rapid Elasticity and Scalability: One of the great things about cloud computing is the ability to quickly provision resources in the cloud as organizations need them and to remove them when they are no longer needed.
Easy Maintenance: Maintenance of the cloud is easier, since it is handled by the provider.
Security: Copies of data are kept on several servers, so if one fails, the data is still safe on another.
1. Data Storage:
Cloud computing allows data such as files, images, audio, and video to be stored on cloud storage. Organizations need not set up physical storage systems to hold the huge volumes of business data, which is very costly nowadays. As organizations grow technologically, the data they generate also grows over time, and storing it becomes a problem. Cloud storage solves this by allowing data to be stored and accessed at any time, as required.
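As a small, hedged illustration of cloud storage in practice, the sketch below uploads a file to an object store and fetches it back using the AWS SDK for Python (boto3); the bucket and file names are hypothetical, and the example assumes credentials are already configured.

```python
import boto3  # AWS SDK for Python; other cloud storage providers offer similar SDKs

s3 = boto3.client("s3")  # picks up credentials/region from the environment

# Upload a local file to a (hypothetical) bucket, then download it again.
s3.upload_file("quarterly-report.pdf", "example-company-bucket", "reports/q1.pdf")
s3.download_file("example-company-bucket", "reports/q1.pdf", "copy-of-q1.pdf")
```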
2. Backup and Recovery:
Cloud vendors provide security on their side by keeping the stored data safe and by offering a backup facility for it. They offer various recovery applications for retrieving lost data. In the traditional approach, backing up data is a very complex problem, and it is often very difficult, sometimes impossible, to recover lost data. Cloud computing has made backup and recovery applications very easy: there is no fear of running out of backup media or of losing data.
3. Big Data Analysis:
The volume of big data is so high that storing it in a traditional data management system is impossible for an organization. Cloud computing has resolved this problem by allowing organizations to store their large volumes of data in cloud storage without worrying about physical storage. The next step, analysing the raw data and extracting insights or useful information from it, is a big challenge because it requires high-quality data analytics tools. Cloud computing thus gives organizations a great facility for both storing and analysing big data.
4. E-commerce Applications:
Cloud-based e-commerce makes it possible to respond quickly to emerging opportunities. Users can react rapidly to market opportunities, and traditional e-commerce businesses can respond quickly to challenges. Cloud-based e-commerce offers a new approach to doing business at the minimum possible cost and in the minimum possible time. Customer data, product data, and other operational systems are all managed in cloud environments.
5. Education Applications:
Cloud computing in the education sector has brought a remarkable change to learning by providing e-learning, online distance learning platforms, and student information portals to students. It is a new trend in education that provides an attractive environment for learning, teaching, and experimenting to students, faculty members, and researchers. Everyone associated with the field can connect to their organization's cloud and access data and information from there.
6. E-Governance Applications:
Cloud computing can support many of the activities conducted by a government. It can help the government move from traditional ways of managing and delivering services to a more advanced approach by expanding the availability of the environment and making it more scalable and customizable. It can also help the government reduce the unnecessary cost of managing, installing, and upgrading applications; by doing all of this with the help of cloud computing, the money saved can be utilized for public services.
7. Medical Applications:
In the medical field, cloud computing is now used for storing and accessing data, since it allows data to be stored and accessed through the internet without worrying about any physical setup. It facilitates easier access to and distribution of information among the various medical professionals and individual patients. Similarly, with the help of cloud computing, information from offsite buildings and treatment facilities such as labs, doctors making emergency house calls, and ambulances can be accessed and updated remotely, instead of having to wait for access to a hospital computer.
8. Entertainment Applications:
Many people get their entertainment from the internet, and cloud computing is the perfect platform for reaching such a varied consumer base. Different entertainment industries therefore get closer to their target audiences by adopting a multi-cloud strategy. Cloud-based entertainment provides various entertainment applications such as online music and video, online games, video conferencing, and streaming services, and it can reach any device, be it a TV, mobile phone, set-top box, or any other form factor. This is a new form of entertainment called On-Demand Entertainment (ODE).
2.0 INTRODUCTION
The purpose of this chapter is to introduce the broad range of cloud deployment models, one of the most essential topics in cloud computing. The various ways in which a cloud computing environment may be set up, that is, the various ways in which a cloud can be deployed, are referred to as deployment models. A basic understanding of deployment models is critical, since setting up a cloud is the most basic requirement before moving on to any other aspect of cloud computing. This chapter discusses the three core cloud computing service models, namely IaaS, PaaS, and SaaS. The roles of the end user and the service provider may differ depending on the services offered and subscribed to; accordingly, the responsibilities of the end user and the service provider under IaaS, PaaS, and SaaS are also discussed. The chapter further covers the suitability, benefits, and drawbacks of the various cloud service models, and gives a brief overview of other service models such as NaaS, STaaS, DBaaS, SECaaS, and IDaaS. The cloud architecture is described first. Cloud architecture
is made up of a series of components arranged in a hierarchical order that collectively define how the cloud
functions. The cloud anatomy is explained in the next section, followed by an overview of cloud network
connection.
2.1 OBJECTIVES
After completion of this unit, you will be able to:
Minimal Investment: This model eliminates the need for extra hardware expenditure.
No startup costs: Users can rent computing resources on a pay-per-use basis, so there is no need to establish infrastructure on the user's side, which in turn removes startup costs.
Infrastructure management is not required: No hardware needs to be set up on the user's side; everything is operated and controlled by the service provider.
Zero maintenance: The service provider is responsible for all maintenance work, from the infrastructure up to the software applications.
Dynamic scalability: On-demand resources are provisioned dynamically as per customer requirements.
2.2.2 Private Cloud: It is a cloud environment created specifically for a single enterprise, also known as an on-premise cloud. It allows access to infrastructure and services within the boundaries of an organization or company. The private cloud is more secure than the other models: because it is usually owned, deployed, and managed by the organization itself, the chance of data leakage is very low, and since all users are members of the same organization, there is no risk from anybody else. In private clouds only authorized users have access, allowing organizations to better manage their data and security. The following Fig. 2.2.2 represents the private cloud.
2.2.3 Community Cloud: The community cloud is an extension of the private cloud in which the cloud infrastructure is shared among multiple organizations in the same community or area. Organizations, businesses, financial institutions, banks, etc. are examples of this category. The infrastructure is provided for exclusive use by a group of users from companies with similar computing requirements. The following Fig. 2.2.3 represents the community cloud.
2.2.4 Hybrid Cloud: It is a kind of integrated cloud computing: it may be a combination of private, public, and community clouds, all integrated into a single architecture while remaining independent entities within the overall system. The aim is to combine the benefits of both private and public clouds. The most common way to use the hybrid cloud is to start with a private cloud and then use the public cloud for additional resources. The public cloud may be utilized for non-critical tasks such as development and testing, while critical tasks such as processing company data are carried out on the private cloud. The following Fig. 2.2.4 represents the hybrid cloud.
• Flexibility and control: Companies gain greater flexibility and may create customized solutions to match their specific requirements.
• Cost: The cost is lower, since users pay only for the additional resources they consume from the public cloud.
• Partial security: The hybrid cloud is generally a mix of public and private clouds. Although the private cloud part is considered secure, the fact that the hybrid cloud includes a public cloud leaves a significant chance of a security breach. As a result, it can only be described as partially secure.
2.3 CHOOSING APPROPRIATE DEPLOYMENT MODELS
Choosing an appropriate deployment model means identifying the situations in which a given cloud model may be employed. The term suitability, in the context of cloud, refers to the conditions under which a particular cloud model is appropriate; it denotes the best circumstances and environment in which to use that model. For example, the private cloud model is suitable for the following:
Enterprises or businesses that demand their own cloud for personal or business purposes.
Business organizations that have adequate financial resources, since operating and sustaining a cloud is an expensive effort.
Business organizations that consider data security to be important.
Enterprises that want complete control and autonomy over cloud resources.
Organizations with a smaller number of employees.
Organizations that already have a pre-built infrastructure and want to manage their resources efficiently.
The private cloud model is not appropriate in the following circumstances:
The community cloud is suitable for organizations with the following concerns:
Organizations that want complete control and autonomy over cloud resources.
Organizations that do not really want to collaborate with other organizations.
Organizations that desire a private cloud environment with public cloud scalability.
Businesses that demand greater protection than the public cloud provides.
Characteristics | Public | Private | Community | Hybrid
Demand for in-house infrastructure | Not required | Mandatory | Shared among organizations | Required for the private cloud
Ease of use | Very easy to use | Requires an operational IT staff | Requires an operational IT staff from multiple organizations | Complex, because it involves more than one deployment model
Scalability | Very high | Limited | Limited | High
2.4 CLOUD SERVICE DELIVERY MODELS
The cloud computing model delivers services to end users from a pool of shared resources such as compute systems, network components, storage systems, database servers, and software applications, as a pay-as-you-go service rather than something to be purchased or owned. The services are delivered and operated by the cloud provider, which reduces the end user's management effort. Cloud computing allows the delivery of a wide range of services, categorized into three basic delivery models as follows:
Infrastructure as a Service
Platform as a Service
Software as a Service
Different cloud services are aimed at different types of users, as shown in Fig. 2.4.1. For instance, the IaaS model is aimed at infrastructure architects, whereas PaaS is aimed at software developers and SaaS is aimed at cloud end users.
2.4.1 IaaS
IaaS provisions resources to its users so that they can run any kind of software, including operating systems and applications, by giving them access to fundamental computing resources such as processing, storage, and networks. The user has no control over the underlying physical infrastructure, but does have control over operating systems, storage, and installed software, as well as certain networking components (for example, host firewalls). IaaS is a service model in which a third-party provider's virtualized physical infrastructure (network, storage, and servers) is used in place of one's own. Because the IT resources are housed on external servers, they can be accessed by anybody with an internet connection.
The IT architect or infrastructure architect is the target audience for IaaS. The infrastructure architect may choose virtual machine instances based on their requirements, while the physical servers are managed by the service provider. As a result, the complexity of managing the physical infrastructure is removed or hidden from the IT architects. A typical IaaS provider might offer the following services:
Compute: Virtual computing power and main memory are provided to end users as part of compute as a service (a small provisioning sketch follows this list).
Storage: Back-end storage is provided for storing files and VM images.
Network: Networking components such as bridges, routers, and switches are provided virtually.
Load balancers: These are used to manage sudden spikes in infrastructure usage by balancing the load.
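As a concrete but hedged illustration of the compute service above, the sketch below requests a single virtual machine from one IaaS provider (AWS EC2) through its Python SDK; the image ID and instance type are placeholders, and the example assumes credentials are already configured.

```python
import boto3  # AWS SDK for Python; other IaaS providers expose equivalent APIs

ec2 = boto3.client("ec2")  # region and credentials come from the environment

# Ask the provider for one small virtual machine (the IDs below are placeholders).
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical machine image ID
    InstanceType="t3.micro",          # small instance size
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print("Provisioned instance:", instance_id)

# When the workload is finished, release the resource so billing stops.
ec2.terminate_instances(InstanceIds=[instance_id])
```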
Pros and Cons of IaaS
IaaS is one of the most prominent cloud computing service delivery models. It provides several benefits to IT architects:
1. Charging based on usage: IaaS services are provisioned to users on a pay-per-use basis, so customers pay only for what they have used. This strategy avoids needless capital expenditure on hardware purchases.
2. Reduced cost: IaaS providers allow their customers to rent computing resources on a subscription basis instead of investing in physical infrastructure to run their operations. IaaS eliminates the need to purchase physical resources, lowering the total cost of investment.
3. Elastic resources: IaaS provides resources according to user requirements. Resources can be scaled up and scaled down with the help of load balancers, which automate dynamic scaling by redirecting additional requests to the newly provisioned resources (see the scaling sketch after this list).
4. Better resource utilization: Resource utilization is the most important factor for an IaaS provider, since the return on investment comes from utilizing the infrastructure resources efficiently.
5. Supports green IT: In conventional IT architecture, dedicated servers are used for individual business requirements, and power consumption is high because of the large number of servers deployed. IaaS eliminates the need for dedicated servers, since a single infrastructure is shared among several clients; decreasing the number of servers in turn decreases power consumption, resulting in green IT.
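The elastic scaling described in point 3 can be sketched as a simple threshold rule. The monitoring and provisioning calls below (get_average_load, add_instance, remove_instance) are hypothetical placeholders for whatever API a real provider exposes; this is only a sketch of the idea, not a production autoscaler.

```python
# Minimal threshold-based autoscaling: grow when load is high, shrink when it is low.
SCALE_UP_THRESHOLD = 0.80    # 80% average utilization
SCALE_DOWN_THRESHOLD = 0.30
MIN_INSTANCES, MAX_INSTANCES = 1, 10

def autoscale(get_average_load, add_instance, remove_instance, current_count: int) -> int:
    """Apply one scaling decision and return the new instance count."""
    load = get_average_load()  # e.g. CPU utilization reported by a monitoring service
    if load > SCALE_UP_THRESHOLD and current_count < MAX_INSTANCES:
        add_instance()                 # provision one more VM behind the load balancer
        return current_count + 1
    if load < SCALE_DOWN_THRESHOLD and current_count > MIN_INSTANCES:
        remove_instance()              # release an idle VM to stop paying for it
        return current_count - 1
    return current_count               # load is in the comfortable band; do nothing
```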
• Although IaaS saves investment cost for start-up companies, it has some drawbacks, particularly around security and data protection:
1. Security issues: IaaS provides services through virtualization technology, using hypervisors. There are several ways to attack a compromised hypervisor, and if a hypervisor is compromised, any of its virtual machines may easily be attacked. The majority of IaaS providers are unable to ensure complete security for virtual machines and the data stored on them.
2. Interoperability issues: IaaS providers do not follow any standard operating procedures, so transferring a VM from one IaaS provider to another is difficult and customers may encounter the problem of vendor lock-in.
3. Performance issues: IaaS delivers resources from distributed servers connected over a network, so network latency is a key factor in determining the performance of the service. Due to latency concerns, a VM's performance may suffer from time to time.
Examples of IaaS:
Microsoft Azure
Rackspace
AWS
Google Compute Engine
2.4.2 PaaS: Virtualized development environment
PaaS users or developers can develop their applications on a virtualized development platform provided by the PaaS provider. Users do not have control over the development platform or the underlying infrastructure such as servers, storage, network, and operating system, but they do have control over the deployed applications and the data related to those applications.
Developers can build their applications online using the programming languages supported on the provider's platform and deploy them using testing tools that support the same platform. PaaS users consume the services offered by the provider through the internet. As a result, the cost of obtaining and maintaining the large number of tools needed to construct an application is reduced. PaaS services include a wide range of supported programming languages, application platforms, databases, and testing tools, and PaaS providers also offer deployment capabilities such as load balancers (a minimal example application is sketched after the list below).
1. Programming languages: PaaS providers support multiple programming languages in which users can develop their own applications. Some examples are Python, Java, Scala, PHP, and Go.
2. Application platforms: PaaS providers offer a variety of application platforms that are used to develop applications. Popular examples are Joomla, Node.js, Drupal, WordPress, Django, and Rails.
3. Database: Applications need a backend for storing data, and a database is always associated with the frontend application to access that data. Databases are provided by PaaS providers as part of their platforms. Some of the prominent databases offered by PaaS vendors are Redis, MongoDB, ClearDB, Membase, PostgreSQL, and Cloudant.
4. Testing tools: Testing tools, which are required to test an application after development, are also provided by PaaS providers as part of their platforms.
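To make this concrete, below is a minimal sketch of the kind of web application a developer might build locally and then push to a PaaS platform. It uses the Flask framework purely as an example of a supported language and platform; the route and port are illustrative.

```python
from flask import Flask  # a common Python web framework; any provider-supported stack would do

app = Flask(__name__)

@app.route("/")
def index():
    # The PaaS platform runs this application; the developer never manages the servers underneath.
    return "Hello from an application deployed on a PaaS platform!"

if __name__ == "__main__":
    # Locally this starts a development server; on a PaaS, the provider's runtime
    # typically launches the app and wires it to a load balancer automatically.
    app.run(port=8080)
```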
The complexity of maintaining the platform and the underlying infrastructure is managed by the PaaS provider. This allows developers to concentrate more on application development.
In addition, PaaS provides the following advantages:
1. App development and deployment: PaaS provides all the necessary development and testing tools in one place,
allowing you to build, test, and deploy software quickly. After the developer completes the development process,
most PaaS services automate the testing and deployment process. This is faster than conventional development
platforms in developing and deploying applications.
2. Reduces investment cost: The majority of conventional development platforms need high-end infrastructure, which increases the investment cost of application development. Using PaaS services eliminates the requirement for developers to purchase licensed development and testing tools; instead, PaaS lets programmers rent everything they need to create, test, and deploy their applications. The total investment cost of application development is reduced because expensive infrastructure is not required.
3. Team collaboration: Traditional development platforms do not offer much in the way of collaborative
development. PaaS allows developers from multiple locations to collaborate on a single project. The online shared
development platform supplied by PaaS providers makes this feasible.
4. Produces scalable applications: Applications need to scale their resources up or down based on load. To scale up, companies traditionally must keep additional servers to handle the increased traffic, and new start-up companies have a tough time expanding their server infrastructure in response to rising demand. PaaS services, on the other hand, provide built-in scalability to applications produced on the PaaS platform.
When compared to the traditional development environment, PaaS offers several advantages to developers.
On the other side, it has several disadvantages, which are listed below:
1. Vendor lock-in: Vendor lock-in is a key disadvantage of PaaS providers. Lack of standards is the primary cause: PaaS providers do not adhere to any common standards for providing services. The adoption of proprietary technology by PaaS providers is another factor, since the majority of PaaS companies employ proprietary technologies that are incompatible with those offered by other providers. This vendor lock-in issue prevents applications from being transferred from one provider to another.
2. Security problems: Security is a big concern with PaaS services. Many developers are hesitant to use PaaS because their data is stored off-site on third-party servers. Many PaaS providers do have their own security mechanisms to protect user data from breaches, but the sense of safety of an on-premise deployment is not the same as that of an off-premise one. When choosing a PaaS provider, developers should compare the provider's regulatory, compliance, and security standards with their own security needs.
3. Less flexibility: PaaS limits developers' ability to create their own application stack. Most PaaS providers give access to a wide range of programming languages, database software, and testing tools, but the user has no control over the platform itself. Only a few providers let developers customize the platform or add new programming languages; the majority of PaaS vendors still do not give developers enough flexibility.
4. Depends on internet connection: Developers must have an internet connection in order to utilize PaaS services. The majority of PaaS providers do not provide offline access; only very few do. With a poor internet connection, the usability of the PaaS platform will not meet developers' expectations.
Examples of PaaS:
2.4.3 SaaS
The end user has the option of using the provider's applications running on a cloud infrastructure. The software can be accessed from multiple client devices through a web browser or another client interface (such as web-based e-mail). The customer has no access to or control over the underlying cloud infrastructure, which includes networks, servers, operating systems, storage, software platforms, and configuration settings. SaaS is an internet-based, no-installation kind of software provided on subscription, and these services may be accessed from any location in the globe.
SaaS applications are provided on demand through the internet; users access them through a web-enabled interface without installing software on their own machines. Users have complete control over when, how, and how often they use SaaS services. SaaS services can be accessed through a web browser on any device, including computers, tablets, and smart devices. Some SaaS services can also be accessed from a thin client, which does not have as much storage space as a standard desktop computer and cannot run many applications; thin clients used for accessing SaaS applications have the advantages of a longer lifespan, lower power consumption, and lower cost. A SaaS provider might offer a variety of services, including business management services, social media services, document management software, and mail services (a small access sketch is given below).
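Because SaaS applications are reached over the internet, a client can often also talk to them through a web API rather than a browser; the sketch below is a generic illustration using Python's requests library, and the URL and token are entirely hypothetical.

```python
import requests  # widely used third-party HTTP client

# Hypothetical SaaS endpoint and API token; substitute a real provider's values.
BASE_URL = "https://api.example-saas.com/v1"
TOKEN = "replace-with-your-api-token"

# Fetch the documents visible to this account. Nothing is installed locally;
# the application itself runs entirely in the provider's datacenter.
resp = requests.get(
    f"{BASE_URL}/documents",
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=10,
)
resp.raise_for_status()
for doc in resp.json():
    print(doc)
```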
1. Business services: In order to attract new customers, the majority of SaaS suppliers now provide a wide range of business services. Examples include ERP, CRM, billing, sales, and human resources applications.
2. Social media networks: Several social networking service providers have adopted SaaS as a method of assuring their long-term survival, because of the widespread usage of social networking sites by the general public. Since the number of users on social networking sites is growing at a rapid rate, cloud computing is the ideal solution for handling the varying load.
3. Document management: Because most businesses rely heavily on electronic documents, most SaaS
companies have begun to provide services for creating, managing, and tracking them.
4. E-mail services: Many people use e-mail services these days, and the growth in e-mail usage is hard to predict. Most e-mail providers therefore started offering their services as SaaS services to deal with the unpredictable number of users and the demand on e-mail services.
SaaS provides software applications to a wide range of consumers and small organizations because of the cost benefits it offers. Its main advantages are as follows:
1. No client-side installation: Client-side software installation is not required for SaaS services. Without any installation, end users receive services straight from the service provider's data centre. Consuming SaaS services does not need high-end hardware; they may be accessed from thin clients or any mobile device.
2. Cost savings: Because SaaS services are billed on a utility-based or pay-as-you-go basis, end customers must
pay only for what they have utilized. Most SaaS companies provide a variety of subscription options to suit the
needs of various consumers. Sometimes free SaaS services are provided to end users.
3. Less maintenance: The service provider is responsible for automating application updates, monitoring, and other routine maintenance, so the user is not responsible for maintaining the software.
4. Ease of access: SaaS services can be accessed from any device that has access to the internet; their use is not limited to a particular set of devices, which makes them adaptable to all devices.
5. Dynamic scaling: On-premise software makes dynamic scalability harder since it requires extra hardware.
Because SaaS services make use of cloud elastic resources, they can manage any sudden spike in load without
disrupting the application's usual operation.
6. Disaster recovery: Every SaaS service is maintained with suitable backup and recovery techniques. A large number of servers are used to store replicas, so the SaaS application may be accessed from another server if the allocated one fails. This solves the problem of a single point of failure and also ensures high availability of the application.
7. Multi-tenancy: Multi-tenancy refers to sharing the same application among multiple users; it improves resource use for providers and decreases cost for users.
Data security is the biggest problem with SaaS services. Almost every organization is concerned about the
safety of the data stored on the provider's datacenter.
Some of the problems with SaaS services include the following:
1. Security: When transitioning to a SaaS application, security is a big issue. Data leakage is possible because the SaaS application is shared by many end users, and the data is kept in the datacenter of the service provider. It is hard to trust a third-party service provider with a company's sensitive and confidential data, so to avoid data loss the end user must be careful when choosing a SaaS provider.
2. Requirements for connectivity: In order to use SaaS applications, users must have an internet connection. If the user's connection is slow, the user may be unable to use the services; the need for a high-speed internet connection is a major problem for SaaS applications.
3. Loss of control: The end user has no control over the data since it is kept in a third-party off-premise location.
Examples of SaaS
Figure 2.4.1 illustrates the three types of cloud computing services that are offered to clients. It is important to note that cloud service delivery is made up of three distinct components: infrastructure, platform, and software. In IaaS, the end user is responsible for properly maintaining the development platform and the applications that run on top of it, while the underlying hardware must be maintained by the IaaS service provider. In PaaS, end users are responsible only for developing and deploying the application and its data. In SaaS, users have no control over infrastructure management, the development platform, or the end-user application; all maintenance is handled by the SaaS provider. The responsibilities of the provider and the user are indicated in Figure 2.4.2.
Fig. 2.4.2 Service provider and User management responsibilities of SPI model
2.4.4 Other services
1. Network as a Service (NaaS): It allows end users to make use of virtual network services provided by the service provider. Like the other cloud service models it follows a pay-per-use approach, and users access the virtual network services through the internet. On-premise organizations traditionally spend money on network equipment to run their own networks in their own datacenters. With NaaS, networking is instead turned into a utility that provides virtual networks, virtual network interface cards, virtual routers, virtual switches, and other networking components in the cloud environment. Popular services provided by NaaS include VPNs, bandwidth on demand, and virtualized mobile networks.
2. DEaaS (Desktop as a Service): It allows end customers to enjoy desktop virtualization service without having to
acquire and manage their own computing infrastructure. It is a pay-per-use model in which the provider handles data
storage, backup, security and updates on the back end. DEaaS services are easy to set up, secure, and provide a
better user experience across a wide range of devices.
3. STorage as a Service (STaaS): It provides end users with the ability to store data on the service provider's storage services. Users may access their files from anywhere and at any time with STaaS. The STaaS provider abstracts virtual storage from the underlying physical storage. STaaS is a utility-based cloud business model: customers rent storage space from the STaaS provider and can access it from any location. STaaS also provides backup storage solutions for disaster recovery.
4. Database as a Service (DBaaS): This service allows end users to access databases without having to install or manage them; installing and maintaining the databases is the responsibility of the service provider. End consumers may utilize the services immediately and pay for them based on their use. Database administration is automated using DBaaS, and the database services may be accessed by end users through the service provider's APIs or web interfaces, which makes the database management procedure easier. Popular DBaaS offerings include ScaleDB, SimpleDB, DynamoDB, MongoDB, and the GAE datastore (a small connection sketch follows this list).
5. Data as a Service (DaaS): An on-demand service provided by a cloud vendor that allows users to access data over the internet. The data may consist of text, photos, audio, video, and so on. Other service models, for example SaaS and STaaS, are closely related to DaaS; to offer a composite service, DaaS may simply be included in either SaaS or STaaS. Geographical data services and financial data services are two areas where DaaS is widely employed. Agility, cost efficiency, and data quality are some of the benefits of DaaS.
6. SECurity as a Service (SECaaS): It is a pay-per-use security service that allows the user to access the cloud provider's security services. In SECaaS, the service provider combines its security services for the benefit of end customers. It provides a wide range of security-related functions, including authentication, virus and malware/spyware protection, intrusion detection, and security event management. SECaaS providers often protect the infrastructure and applications within a company or organization. SECaaS services are provided by Cisco, McAfee, Panda, and others.
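As a brief illustration of the DBaaS idea from item 4 above, the sketch below connects to a hosted MongoDB database using the pymongo driver; the connection string is a hypothetical placeholder for the endpoint a DBaaS provider would issue.

```python
from pymongo import MongoClient  # official Python driver for MongoDB

# Hypothetical connection string supplied by a DBaaS provider. The provider, not the
# user, installs, patches, backs up, and scales the database behind this endpoint.
client = MongoClient("mongodb+srv://app_user:secret@cluster0.example.net/shop")

orders = client["shop"]["orders"]                # database "shop", collection "orders"
orders.insert_one({"item": "book", "qty": 2})    # write a document
print(orders.count_documents({"item": "book"}))  # read it back
```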
2.5 CLOUD ARCHITECTURE
The cloud architecture is divided into four major levels based on functionality. Fig. 2.5.1 below is a diagrammatic illustration of cloud computing architecture.
Computing systems and storage systems are linked together through networks. A network, such as a local area network (LAN), connects physical compute systems to one another, allowing applications running on them to communicate, and it also connects compute systems to storage systems so that the data on the storage systems can be accessed. The cloud serves computing resources from several cloud datacenters; networks link the scattered datacenters and allow them to function as a single giant datacenter. Networks also link different clouds to one another, allowing them to share cloud resources and services (as in the hybrid cloud model).
2.6 CLOUD ANATOMY
The hierarchical structure of a cloud is called cloud anatomy. Cloud anatomy differs from architecture: it does not include the communication channel over which the services are delivered, whereas the architecture completely describes the communication technology on which the cloud operates. Cloud architecture is the hierarchical structure of technology on which the cloud is defined and operates; anatomy can therefore be considered a subset of cloud architecture. Figure 2.6.1 represents the cloud anatomy structure, which serves as the foundation for the cloud.
Fig.2.6.1 Layers of Cloud Anatomy
1. Application: The topmost layer is the application layer. This layer may be used to execute any kind of software application.
2. Platform: This layer lies below the application layer. It consists of the executable platforms provided for the execution of developer applications.
3. Infrastructure: This layer lies below the platform layer. It consists of virtualized computational resources that are provided to users so that they can connect with other system components. It allows users to manage both applications and platforms, and to perform computations according to their requirements.
4. Virtualization: This is the vital technology that allows cloud computing to function. It is the process of abstracting the actual physical hardware resources and providing them in a virtual manner, so that the same hardware resources can be distributed to multiple tenants independently.
5. Physical hardware: The bottommost layer is the physical hardware layer. It consists of servers, network components, databases, and storage units.
2.7 NETWORK CONNECTIVITY IN CLOUD COMPUTING
Cloud resources, including servers, storage, network bandwidth, and other computing equipment, are distributed over numerous locations and linked via networks. When an application is submitted for execution in the cloud, the necessary and appropriate resources are selected to run it, and these resources are connected through the internet. Network performance is therefore a major factor in the success of many cloud computing applications. Because cloud computing offers a variety of deployment choices, the cloud deployment models and their accessible components are examined here from a network-connection viewpoint.
The following are the different types of network connectivity in cloud computing:
When a connection is made available via the internet through a federation of connected providers (also known as internet service providers, or ISPs), latency must be minimized without sacrificing security. This requires choosing an appropriate routing strategy, for instance one that decreases communication latency by reducing the number of transit hops in the path from the cloud provider to the consumer.
In a private cloud, both the cloud and the network connectivity are within the organization's premises. Connectivity within the private cloud is provided through an internet VPN or a VPN service. All services are accessed quickly through the well-established pre-cloud infrastructure, so moving to a private cloud does not affect the ability to access applications or their performance.
Intra-cloud networking is the most complex networking and connection challenge in cloud computing, and private intra-cloud networking is the most challenging aspect of the private cloud. The applications running in this environment are linked over intra-cloud connections. Intra-cloud networking connects the provider datacenters owned by an organization, and it is used by all cloud computing systems to link users to the resource to which their application has been assigned. Once the link to the resource is established, intra-cloud networking is used to serve the application to multiple users based on a service-oriented architecture (SOA). When the SOA concept is followed, traffic may flow between application components and between the application and the user, so the performance of these connections influences the overall performance of cloud computing.
Modern approaches should be used to assess cloud computing networks and connections. Globalization and changing organizational needs, particularly those related to expanded internet use, demand greater adaptability in today's corporate networks.
Check Your Progress 2
.……………………………………………………………………………
……………………………………………………………………………
…………………………………………………………………………..
2.8 SUMMARY
We covered the three SPI cloud service models as well as the four cloud deployment models in this chapter. We also looked at how much control a consumer has under the various arrangements. After that, we looked at cloud deployment and cloud service models from a variety of perspectives, leading to a discussion of how clouds arise and how they are used. To begin with, the deployment models are the foundation and must be understood before moving on to other components of the cloud; the size, location, and complexity of these deployment models are all taken into account.
In this chapter we looked at four different deployment models. Each deployment model was described, along with its characteristics and its applicability to various types of demands. Each deployment model is significant in its own right; these deployment patterns are crucial, and they frequently have a significant influence on the enterprises that rely on the cloud. A wise deployment-model decision always pays off in the long run by avoiding significant losses, which is why deployment models are given a lot of weight. Before diving into the complexities of cloud computing, it is also vital to understand a few key concepts, including one of the most significant: cloud architecture. It has a basic structure with component dependencies indicated. Anatomy is similar to architecture, except that it does not take dependencies into account as architecture does. The cloud network connection, which is at the heart of the cloud concept, is also critical: the network is the foundation on which the cloud is built.
2.9 SOLUTIONS/ANSWERS
Microsoft Azure
Rackspace Cloud
Amazon Web Services (AWS)
Alibaba Cloud
IBM Cloud
SAP
Google Cloud
VMWare
Oracle
Salesforce
2. Distinguish between public and private clouds.
Public Cloud | Private Cloud
Managed by the cloud service provider | Managed by the organization's operational staff
On-demand scalability | Limited scalability
Multitenant architecture supports multiple users from different organizations | Dedicated architecture supports users from a single organization
Services hosted on shared servers | Services hosted on dedicated servers
Establishes connections to users through the internet | Establishes connections to users through a private network within the organization
Cloud anatomy describes the layers of the cloud computing paradigm on the service-provider side. Cloud anatomy and cloud architecture are not the same, but anatomy can be considered part of cloud architecture: cloud architecture completely specifies and explains the technology on which it operates, whereas anatomy does not include the technology on which it operates.
A virtual private network (VPN) establishes a secured private corporate network connection within the private cloud to access the services. The technology and methodologies are local to the organization's network structure in the private cloud. This cloud network might be an internet-based VPN or a service supplied by the network operator.
1. Cloud Computing: Principles and Paradigms, Rajkumar Buyya, James Broberg and Andrzej M.
Goscinski, Wiley, 2011.
2. Mastering Cloud Computing, Rajkumar Buyya, Christian Vecchiola, and Thamarai Selvi, Tata McGraw
Hill, 2013.
3. Essentials of cloud Computing: K. Chandrasekhran, CRC press, 2014.
Unit 3: Resource Virtualization
Structure
3.1 Introduction
3.2 Objective
3.3 Virtualization and Underlying Abstraction
3.3.1 Virtualizing Physical Computing Resources
3.4 Advantages of Virtualization
3.5 Machine or Server Level Virtualization
3.6 Exploring Hypervisor or Virtual Machine Monitor
3.6.1 Hypervisor Based Virtualization Approaches
(Full Virtualization, Para Virtualization, Hardware-Assisted Virtualization)
3.7 Operating System-Level Virtualization
3.8 Network Level Virtualization
3.9 Storage Level Virtualization
3.10 Desktop Level Virtualization
3.11 XenServer Vs VMware
3.1 INTRODUCTION
Cloud Computing has gained immense popularity due to the availability of scalable Infrastructure as a Service, Platform as a Service, and Software as a Service offerings. It is a framework in which different kinds of services related to networks, computing resources, storage, development
platform, and application are provisioned through the internet. In this respect, the basic
information of cloud computing is already discussed in the previous unit. In this unit, we will
discuss the basics of virtualization, its advantages, and its underlying abstraction. It is to be noted
that virtualization is the fundamental technology that helps to create an abstraction layer that
hides the intricacy of the underlying hardware. The virtualization technique provides a secure and
isolated environment for any user application such that one running application does not affect
the execution of another application. Further, in this unit, we will learn about server-level
virtualization and explore different hypervisor-based virtualization approaches. We will also
discuss operating system-level virtualization, network virtualization, storage virtualization, and
desktop virtualization. Finally, a brief comparison will be done on hypervisors like XenServer
and VMware.
3.2 OBJECTIVE
Virtualization allows the creation of an abstraction layer over the available system hardware elements, such as the processor, storage, and memory, and provides different customized computing environments. The computing environment so created is termed virtual, as it simulates an environment similar to a real computer with an operating system. Using the virtual version of the infrastructure is smooth, as the user finds almost no difference in experience when compared to a real computing environment. One of the very good examples of virtualization is hardware
virtualization. In this kind of virtualization, customized virtual machines that work similarly to
the real computing systems are created. Software that runs on this virtual machine cannot directly
access the underlying hardware resources. For example, consider a computer system that runs the Linux operating system and simultaneously hosts a virtual machine running Windows. Here, the Windows operating system only has access to the hardware that is allocated to its virtual machine. Hardware virtualization plays an important role in provisioning the IaaS
service of cloud computing. Some of the other virtualization technologies for which virtual
environments are provided are networking, storage, and desktop. The overall environment of
virtualization may be divided into three layers: host layer, virtualization layer, and guest layer.
The host layer denotes the physical hardware device on which the guest is maintained. The virtualization layer acts as the middleware that creates a virtual environment, similar to a real computer environment, in which a guest virtual application executes. A guest, which may be a virtual machine or any other virtual application, always communicates through the virtualization layer. A diagrammatic representation of the virtualization environment is shown in Figure 1.
Figure 1: Diagram showing the virtualization environment.
From the above discussion, it should be noted that in reality, the virtualization environment is a software
program, and hence virtualization technology has better control and flexibility over the underlying
environment. The capability of software to imitate a real computing environment has facilitated the
utilization of resources in an efficient way. In the last few years, virtualization technology has drastically
evolved and the current version of technology allows us to make use of the maximum benefit that
virtualization provides. In this respect some of the important characteristics of virtualization can be
discussed as follows:
➔ Advancement in Security: In reality, more than one guest virtual machine runs on a single host
machine, and on each virtual machine different virtual applications are executed. Further, it is
very important to run each virtual machine in isolation such that no two applications running on
different virtual machines interfere with each other. In this respect, virtual machine manager
(VMM) plays an important role by managing virtual machines efficiently and providing enough
security. The operations of the different virtual machines are observed by VMM and filtered
accordingly such that no unfavorable activity is permitted. Sometimes it becomes important to
hide some sensitive or important data of the host from other guest applications running on the
same system. This kind of functionality is automatically provided by the virtualization
environment.
➔ Managing of Execution: In addition to security, features like sharing, aggregation, emulation, and isolation are also considered important features of virtualization. These features are explained as follows:
◆ Sharing: Virtualization technology allows the execution of more than one guest virtual machine over a single host physical machine, with the same hardware resources shared by all the guest virtual machines. Sharing the existing hardware resources and using each individual physical machine to its optimum capacity helps minimize the number of servers required and the power consumption.
Virtualization technology is adopted in different areas of computing. Based on requirements and uses, different virtualization techniques have been developed, and each technique has its own unique characteristics. In this regard, Figure 3 shows a detailed classification of virtualization techniques. We will discuss some of these techniques in detail in later sections.
Figure 3: A classification of virtualization techniques
As discussed earlier, virtualization creates an abstraction layer over the available hardware elements, such as processors, storage, and memory, allowing them to be dispersed over several virtual computers, also known as virtual machines (VMs). The importance of virtualization was realized when IT industries were struggling with the limitation of x86 servers, which could run only a single operating system and application at a time. Virtualization technology paved the way for the IT industry to maximize the utilization of individual servers and enable them to operate at their maximum capacity. In this regard, Figure 4 shows the difference between the traditional and the virtual architecture. Furthermore, when we compare older virtualization techniques with the current ones, the older techniques supported only a single CPU and were slow, whereas current virtualization techniques have improved greatly, and virtual machines can now execute server applications as well as bare-metal computer systems can.
In order to improve performance and to maximize the availability and reliability of the service, virtualization allows virtual machines to move from one host machine to another; this is called virtual machine migration. The migration of virtual machines is achievable because the underlying environment is virtual. Virtual machine migration can be performed offline or live. In the case of offline migration, the guest virtual machine is temporarily stopped, the image of the virtual machine's memory is copied to the destination host machine, and the virtual machine is then restarted. In the case of live migration, an active virtual machine is moved from one host machine to another while it keeps running. It should also be noted that virtualization technology typically migrates virtual machines from one host machine to another when some kind of load balancing is required. The type of migration is chosen based on the requirement: if downtime is permissible, offline migration is preferred; otherwise, live migration is preferred (a sketch is given below).
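As a hedged illustration of live migration, the sketch below uses the libvirt Python bindings, one common way of driving KVM/QEMU hosts; the host URIs and the domain name are hypothetical, and a real migration additionally needs shared (or pre-copied) storage and authentication between the two hosts.

```python
import libvirt  # Python bindings for the libvirt virtualization API

# Hypothetical source and destination hosts; both must run a libvirt daemon.
src = libvirt.open("qemu:///system")
dst = libvirt.open("qemu+ssh://destination-host/system")

dom = src.lookupByName("guest-vm")  # the running virtual machine to be moved

# VIR_MIGRATE_LIVE keeps the guest running while its memory is copied across,
# so users experience almost no downtime during the move.
dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

print("guest-vm is now running on destination-host")
src.close()
dst.close()
```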
3.4 Advantages of Virtualization
Virtualization allows for more efficient use of underlying resources, resulting in a higher return on a company's hardware investment. Some other advantages of virtualization can be highlighted and summarized as follows:
➔ Reducing Power Needs: Virtualization helps to run more than one operating system and application on a single physical system. This reduces the number of servers required and hence the energy needed for running and cooling the physical machines.
➔ Lower Cost: Virtualization of hardware or software resources helps to maximize the utilization of individual resources without compromising performance. The extra investment in servers is minimized by running more than one operating system and application on a single server, and the requirement for extra space is also reduced. In this way virtualization technology is helping IT industries achieve maximum benefit at minimal cost.
➔ Better Availability: Virtualization technology helps to overcome the problem of sudden downtime due to hardware faults or human-induced faults; that is, virtualization provides a fault-tolerant environment in which applications run seamlessly. Virtualization allows better control and flexibility over the underlying environment when compared to a standalone system. Further, at the time of a fault or during system maintenance, virtualization technology may use live migration to move virtual machines from one server to another. Any application or operating system crash results in downtime and lowers user productivity, so administrators can use virtualization to run many redundant virtual machines that can readily handle this situation, whereas running numerous redundant physical servers would be costly.
➔ Resource Efficiency: With virtualization we may run numerous applications on a single server, each in its own virtual machine with its own operating system, without sacrificing quality-of-service attributes such as reliability and availability. In this way, virtualization allows efficient use of the underlying physical hardware.
➔ Easier Management: In software-defined virtual machines it is much easier to implement any new rule or policy, making it much easier to create or alter policies. This is possible because virtualization technology provides better control over the virtual environment.
➔ Faster Provisioning: The process of setting up hardware for each application is time-consuming, requires more space, and costs more money. Provisioning a virtual machine (VM), in contrast, is faster, cheaper, and more efficient, and can be managed smoothly. Virtualization technology can therefore create the required configured virtual machines in minimal time and can also scale the provisioned capacity up or down quickly; the problem of scalability can thus be handled efficiently by virtualization techniques.
➔ Efficient Resource Management: As discussed earlier, virtualization provides better control and flexibility when compared to traditional architecture. Virtualization allows IT administrators to create and allocate virtual machines faster and to live-migrate a virtual machine from one server to another when required, to increase the availability and reliability of the services. A number of virtualization management tools are available for managing the virtualized environment, and the selection of appropriate tools helps to manage the virtual resources efficiently. Such a tool can seamlessly migrate a virtual machine from one system to another with zero downtime, which may be required when a server needs maintenance or is not performing well.
➔ Single-point Administration: The virtualized environment can be managed and monitored through a single virtualization management tool. However, the selection of an efficient tool that provides all the virtualization services properly is important. The appropriate tool will help to create and provision virtual machines efficiently, balance the workload, manage the security of the individual virtual machines, monitor the performance of the infrastructure, and maximize the utilization of the resources, with all these different services administered from a single tool.
3.5 Machine or Server Level Virtualization
Server virtualization is a technique to divide a physical server into several small, independent virtual servers, each of which runs its own operating system. These virtual servers are also called virtual machines, and they are created by hypervisors such as Microsoft Hyper-V, Citrix XenServer, Oracle VM, Red Hat's Kernel-based Virtual Machine (KVM), and VMware vSphere. It should be noted that each virtual machine runs in isolation on the same host physical machine and is unaware of any other virtual machine running on that host. Different kinds of virtualization techniques are used to achieve this functionality and transparency. There are different types of server-level virtualization, as follows:
★ Hypervisor
★ Para Virtualization
★ Full Virtualization
★ Hardware-Assisted Virtualization
★ Kernel level Virtualization
★ System-Level or Operating System Virtualization
There are numerous advantages associated with server virtualization. Some of them are as
follows:
➔ In the case of server virtualization, each virtual machine may be restarted independently
without affecting the execution of other virtual machines running on the same host
physical machine.
➔ Server virtualization can partition a single physical server into many small virtual servers and allows the hardware of the existing physical servers to be utilized efficiently. Therefore it minimizes the requirement for extra physical servers and the initial investment cost.
➔ As each small virtual server executes in isolation, if any virtual machine faces any kind of
issues then it will not affect the execution of other virtual machines running on the same
host physical machine.
In addition to these advantages, server virtualization also has some disadvantages, which are as follows:
➔ If the host physical machine faces any problem and goes offline, then all the guest virtual machines are also affected and go offline. This decreases the overall uptime of the services or applications running on the individual virtual machines.
➔ Server virtualization allows many virtual machines to run on the same physical server, which may reduce the performance of the overall virtualized environment.
➔ Generally, server virtualization environments are not easy to set up and manage.
3.6 Hypervisor
The hypervisor can be seen as an emulator or simply a software layer that can efficiently
coordinate and run independent virtual machines over single physical hardware such that
each virtual machine gets access to the physical resources it needs. It also ensures that virtual machines have their own address spaces and that execution on one virtual machine does not conflict with another virtual machine running on the same host physical machine.
Prior to the notion of the hypervisor, most computers could run only one operating system at a time, and this increased the reliability of the services and applications because the entire system's hardware had to handle requests from a single operating system. However, the demerit of this approach is that the system cannot utilize all of its computing capacity.
However, using a hypervisor minimizes the need for space, energy, and maintenance. The
hypervisor is also referred to as a virtual machine monitor and it helps to manage virtual
machines and their physical resource demands. It isolates virtual machines from one
another by logically provisioning and assigning computing power, memory, and storage.
Thus, at any point of time, if the operation of any virtual machine is vulnerable, it will not affect the execution of the other machines.
There are basically two types of hypervisors: (i) Type 1 or bare metal and (ii) Type 2 or hosted. Hypervisors enable virtualization because they translate requests between the virtual and physical resources. Type 1 hypervisors may also be embedded into the firmware at the same layer as the motherboard's basic input/output system (BIOS); this helps the host machine to access and use the virtualization software.
➔ Type 1 hypervisor: This is also termed as “Bare metal” hypervisor. This type of
hypervisor runs directly on the underlying physical resources. For running this
kind of hypervisor operating system is not required and it itself acts as a host
operating system. These kinds of hypervisors are most commonly used in virtual
server scenarios (See Figure 5.).
Pros: These types of hypervisors are highly efficient as they communicate directly with the physical hardware. They also raise the level of security, since there is nothing in between them and the hardware that could undermine security.
Cons: To administer different VMs and manage the host hardware, a Type 1 hypervisor frequently requires a separate administration machine.
Example: Microsoft Hyper-V, Citrix XenServer, and VMware ESXi.
➔ Type 2 hypervisor: This is also termed a "hosted" hypervisor. It runs as a program on top of the host operating system and takes the help of the host operating system to deliver virtualization-based services (see Figure 6).
Pros: A Type 2 hypervisor allows rapid and easy access to a guest OS while the main operating system keeps running on the host physical machine. This kind of facility immensely helps end-users in their work. For example, a user working in Windows can run their favourite Linux-based tools in a guest VM while still using host-only features such as Cortana (a speech/dictation assistant found only in Windows).
Cons: Type 2 hypervisors can cause performance overhead, because the host operating system always sits between the guest operating system and the underlying physical hardware. They also pose latency concerns and a potential security risk if the host OS is compromised.
Figure 6. Type 2 Hypervisor
Example: VMware Workstation.
………………………………………………………………………………………………
………………………………………………………………………………………………
………………………………………………………………………………………………
………………………………………………………………………………………………
………………………………………………………………………………………………
………………………………………………………………………………………………
3) Compare between Type 1 hypervisor and Type 2 hypervisor.
………………………………………………………………………………………………
………………………………………………………………………………………………
………………………………………………………………………………………………
3.6.1 Full Virtualization:-
Binary translation and direct execution are used together to accomplish full virtualization. The hardware CPU runs non-sensitive instructions at native speed, while sensitive, operating-system-related instructions are translated on the fly into safe equivalents. As the same kinds of guest operating system instances can execute on virtualized or real physical systems, the full virtualization technique delivers the isolation and security required for virtual instances running in the virtual environment (see Figure 7).
Further, binary translation is a method of establishing full virtualization that does not
necessitate hardware virtualization. It entails looking for "unsafe" instructions in the
virtual guest's executable code, translating them into "safe" equivalents, and running the
translated code. If we talk with respect to VMware hypervisor, both direct execution and
binary translation techniques may be used to virtualize an operating system.
Figure 7: The figure depicts the full virtualization paradigm.
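To make the idea of binary translation more concrete, the following toy Python sketch is purely illustrative (it is not how a production hypervisor such as VMware actually rewrites machine code, and the instruction names are invented for this example): it scans a list of symbolic guest "instructions", passes the non-sensitive ones through for direct execution, and replaces the sensitive ones with safe, hypervisor-mediated equivalents.

# Toy illustration of binary translation in full virtualization.
# The instruction names and the SENSITIVE table are invented for this sketch;
# a real hypervisor rewrites actual machine code, not strings.

SENSITIVE = {
    "WRITE_CR3": "HYPERVISOR_SET_PAGE_TABLE",   # privileged: switch page tables
    "OUT_PORT":  "HYPERVISOR_EMULATE_IO",       # privileged: device I/O
    "HLT":       "HYPERVISOR_YIELD_CPU",        # privileged: halt the CPU
}

def translate(guest_code):
    """Return a 'safe' instruction stream: sensitive operations are rewritten,
    everything else is passed through unchanged for direct execution."""
    return [SENSITIVE.get(instr, instr) for instr in guest_code]

guest = ["MOV", "ADD", "WRITE_CR3", "MOV", "OUT_PORT", "HLT"]
print(translate(guest))
# ['MOV', 'ADD', 'HYPERVISOR_SET_PAGE_TABLE', 'MOV', 'HYPERVISOR_EMULATE_IO', 'HYPERVISOR_YIELD_CPU']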
3.6.2 Paravirtualization:-
In paravirtualization, the guest operating system is modified so that privileged operations are replaced by explicit calls (hypercalls) to the hypervisor, instead of being trapped and translated at run time. This lowers the virtualization overhead, but it requires access to, and modification of, the guest operating system, as is done, for example, in the Xen hypervisor.
3.6.3 Hardware-Assisted Virtualization:-
The other names for this virtualization are native virtualization, accelerated virtualization, or hardware virtualization. In this type of virtualization, special CPU instructions are provided by the real physical hardware to support virtualization. The adopted methodology is very portable, as the virtual machine manager can run an unaltered guest operating system. This kind of methodology minimizes the implementation complexity of the hypervisor and allows the hypervisor to manage the virtualized environment efficiently. This sort of virtualization technique was initially launched on the IBM System/370 in 1972, and it was made available on Intel and AMD CPUs in 2006. In this kind of virtualization methodology, sensitive calls are by default forwarded to the hypervisor, so it is no longer necessary to use the binary translation of full virtualization or the hypercalls of paravirtualization. Figure 9 depicts the hardware-assisted virtualization technique.
Figure 9: The figure depicts the hardware-assisted virtualization techniques.
Next, we will discuss the major difference between two very well-known hypervisors Citrix
XenServer and VMware.
VMware: Generally used by small and mid-sized businesses. It requires a proprietary license, provided on a per-processor basis.
Citrix XenServer: A virtualization platform utilized by individuals as well as small and medium businesses. XenServer is open source and provides per-server licensing; the free version also includes almost all the features.

VMware: Features like dynamic resource allocation are supported.
Citrix XenServer: Features like dynamic resource allocation are not supported.

VMware: Supports 128 virtual CPUs (vCPUs) per virtual machine. It can run on either Intel VT or AMD-V capable systems.
Citrix XenServer: Supports 32 virtual CPUs per virtual machine. It can only run on Intel VT or AMD-V capable systems.

VMware: Only MS-DOS and FreeBSD are supported as hosts in VMware vSphere. As guest OS, VMware vSphere supports MS-DOS, Sun Java Desktop System, and Solaris x86 Platform Edition.
Citrix XenServer: Supports various host OS such as Windows NT Server, Windows XP, Linux ES, etc. It also supports various guest operating systems, but not MS-DOS, Sun Java Desktop System, or Solaris x86 Platform Edition. To run, it needs AMD-V capable hardware.

VMware: Supports failover and live migration. Supports dynamic resource allocation and thin provisioning.
Citrix XenServer: Does not support failover or live migration (newer versions support live migration, but not as efficiently). Supports only thin provisioning.

VMware: The graphics support is not exhaustive.
Citrix XenServer: The graphics support is exhaustive and is better than VMware's.

VMware: BusyBox is used by the VMware server management system for managing the environment.
Citrix XenServer: Provides almost all the required features and the ability to create and manage the virtualization environment; it uses XenCenter for managing the environment.
Check your Progress 3
1) What is the difference between full virtualization and paravirtualization?
………………………………………………………………………………………………
………………………………………………………………………………………………
………………………………………………………………………………………………
2) State whether the following statements are True or False:
b. A binary translation and direct execution are used together to accomplish full virtualization. [          ]
3.12 SUMMARY
Virtualization is the fundamental technology that helps to create an abstraction layer over the
available system hardware elements like the processor, storage, and memory. Virtualization hides the intricacy of the underlying environment and provides a secure and isolated environment for any user application. The created computing environment is virtual and it simulates an
environment similar to a real computer. The use of the virtual infrastructure is smooth as the user
finds almost no difference in the experience when compared to a real computing environment. In
this regard, a detailed overview of virtualization is given in this unit. We have discussed some
very important topics related to virtualization like advantages of virtualization, different
virtualization techniques, and its characteristics with an example. For further clarity of existing
virtualization techniques like full virtualization and paravirtualization, we have compared the two
very well-known hypervisors Citrix XenServer and VMware.
3.13 SOLUTIONS/ANSWERS
Check your Progress 1
Ans 1: Cloud Computing is a framework where different kinds of services related to networks,
computing resources, storage, development platform, and application are provisioned through the
internet. Further, Virtualization is the fundamental technology that creates an abstraction to hide
the complexity of computing infrastructure, storage, and networking. The virtualization technique
provides a secure and isolated environment for cloud users such that the computing environment
of one user does not affect the computing environment of another user.
Ans 2: In the case of virtualization more than one guest virtual machine runs on a single host
machine, and on each virtual machine different virtual applications are executed. Further, it is
very important to run each virtual machine in isolation such that no two applications running on
different virtual machines interfere with each other. In this respect, virtual machine manager
(VMM) plays an important role by managing virtual machines efficiently and providing enough
security. The operations of the different virtual machines are observed by VMM and filtered
accordingly such that no unfavorable activity is permitted. Sometimes it becomes important to
hide some sensitive or important data of the host from other guest applications running on the
same system. This kind of functionality is automatically provided by the virtualization
environment with the help of VMM.
Ans 3: In the case of emulation, the virtualization environment allows different guest applications
to run on top of the host physical machine. Here the underlying virtualized environment is a
software program and hence can be controlled more efficiently. Further, based on the requirement
of guest application or program the underlying environment can be adjusted or modified for
smooth execution.
In case of isolation, the virtualization environment enables guest virtual machines to run in
isolation such that no virtual machines running on the same host physical machine interfere with
each other. The guest virtual application accesses the underlying resources through the
abstraction layer. The virtual machine manager monitors the operation of each guest application
and tries to prevent any vulnerable activity.
Check your Progress 2
Ans 3: Type 1 hypervisor: This is also termed as “Bare metal” hypervisor. This type of
hypervisor runs directly on the underlying physical resources. For running this kind of hypervisor
operating system is not required and it itself acts as a host operating System. These kinds of
hypervisors are most commonly used in virtual server scenarios. The examples are Hyper-V
hypervisor, Citrix XenServer, and ESXi hypervisor.
Type 2 hypervisor: This hypervisor does not run directly on the underlying hardware. It runs as a program on a computer's operating system. This type of hypervisor takes the help of an
operating system to deliver virtualization-based services. Type 2 hypervisors are best suited for
endpoint devices such as personal computers that run an alternative operating system known as
Guest OS. An example is VMware Workstation.
Ans 2:
a. True
b. True
c. True
d. True
e. False
3.14 FURTHER READINGS
There are a host of resources available for further reading on the topic of Virtualization.
1. R. Buyya, C. Vecchiola, and S. T. Selvi (2013). Mastering Cloud Computing: Foundations and Applications Programming. Newnes.
2. S. A. Babu, M. J. Hareesh, J. P. Martin, S. Cherian, and Y. Sastri, "System Performance
Evaluation of Para Virtualization, Container Virtualization, and Full Virtualization Using
Xen, OpenVZ, and XenServer," 2014 Fourth International Conference on Advances in
Computing and Communications, 2014, pp. 247-250, doi: 10.1109/ICACC.2014.66.
3. https://www.ibm.com/in-en/cloud/learn/hypervisors#toc-type-1-vs--Ik2a8-2y
4. https://www.vmware.com/topics/glossary/content/hypervisor.html
5. https://www.sciencedirect.com/topics/computer-science/full-virtualization
UNIT 4 RESOURCE POOLING, SHARING AND
PROVISIONING
4.1 Introduction
4.2 Objectives
4.3 Resource Pooling
4.4 Resource Pooling Architecture
4.4.1 Server Pool
4.4.2 Storage Pool
4.4.3 Network Pool
4.5 Resource Sharing
4.5.1 Multi Tenancy
4.5.2 Types of Tenancy
4.5.3 Tenancy at Different Level of Cloud Services
4.6 Resource Provisioning and Approaches
4.6.1 Static Approach
4.6.2 Dynamic Approach
4.6.3 Hybrid Approach
4.7 VM Sizing
4.8 Summary
4.1 INTRODUCTION
Resource pooling is one of the essential attributes of Cloud Computing technology which separates the cloud computing approach from the traditional IT approach. Resource pooling, along
with virtualization and sharing of resources, leads to dynamic behavior of the cloud. Instead of
allocating resources permanently to users, they are dynamically provisioned on a need basis.
This leads to efficient utilization of resources as load or demand changes over a period of time.
Multi-tenancy allows a single instance of an application software along with its supporting
infrastructure to be used to serve multiple customers. It is not only economical and efficient to
the providers, but may also reduce the charges for the consumers.
4.2 OBJECTIVES
After going through this unit, you should be able to:
➔ describe resource pooling and its architecture (server, storage and network pools);
➔ explain resource sharing and multi-tenancy; and
➔ describe the resource provisioning approaches and VM sizing.
4.3 RESOURCE POOLING
Resource pool is a collection of resources available for allocation to users. All types of resources
– compute, network or storage, can be pooled. It creates a layer of abstraction for consumption
and presentation of resources in a consistent manner. A large pool of physical resources is
maintained in cloud data centers and presented to users as virtual services. Any resource from
this pool may be allocated to serve a single user or application, or can be even shared among
multiple users or applications. Also, instead of allocating resources permanently to users, they
are dynamically provisioned on a need basis. This leads to efficient utilization of resources as
load or demand changes over a period of time.
For creating resource pools, providers need to set up strategies for categorizing and management
of resources. The consumers have no control or knowledge of the actual locations where the physical resources are located, although some service providers may offer a choice of geographic location at a higher abstraction level, such as region or country, from which the customer can get resources. This is generally possible with large service providers who have multiple data centers across the world.
4.4 RESOURCE POOLING ARCHITECTURE
Each pool of resources is made by grouping multiple identical resources, for example – storage
pools, network pools, server pools etc. A resource pooling architecture is then built by
combining these pools of resources. An automated system is needed to be established in order to
ensure efficient utilization and synchronization of pools.
Computation resources are majorly divided into three categories – Server , Storage and Network.
Sufficient quantities of physical resources of all three types are hence maintained in a data
center.
4.4.1 Server Pool
Server pools are composed of multiple physical servers along with operating system, networking
capabilities and other necessary software installed on it. Virtual machines are then configured
over these servers and then combined to create virtual server pools. Customers can choose virtual
machine configurations from the available templates (provided by cloud service provider) during
provisioning. Also, dedicated processor and memory pools are created from processors and
memory devices and maintained separately. These processor and memory components from their
respective pools can then be linked to virtual servers when demand for increased capacity arises.
They can further be returned to the pool of free resources when load on virtual servers decreases.
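As a simple illustration of the pooling idea described above, the following Python sketch (the class and method names are assumptions made only for this example) models a pool of free CPU and memory capacity from which virtual servers lease resources when demand rises and to which they return resources when the load decreases.

# Minimal sketch of a resource pool; units and names are illustrative assumptions.

class ResourcePool:
    def __init__(self, cpus, memory_gb):
        self.free_cpus = cpus              # free processor units in the pool
        self.free_memory_gb = memory_gb    # free memory (GB) in the pool

    def allocate(self, cpus, memory_gb):
        """Lease capacity to a virtual server if the pool can satisfy the request."""
        if cpus > self.free_cpus or memory_gb > self.free_memory_gb:
            return False                   # insufficient free capacity
        self.free_cpus -= cpus
        self.free_memory_gb -= memory_gb
        return True

    def release(self, cpus, memory_gb):
        """Return capacity to the pool when a virtual server no longer needs it."""
        self.free_cpus += cpus
        self.free_memory_gb += memory_gb

pool = ResourcePool(cpus=64, memory_gb=256)
pool.allocate(cpus=4, memory_gb=16)        # grow a virtual server under load
pool.release(cpus=4, memory_gb=16)         # shrink it again when the load drops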
4.4.2 Storage Pool
Storage resources are one of the essential components needed for improving performance, data
management and protection. It is frequently accessed by users or applications as well as needed
to meet growing requirements, maintaining backups, migrating data, etc.
Storage pools are composed of file based, block based or object based storage made up of
storage devices like- disk or tapes and available to users in virtualized mode.
1. File based storage – it is needed for applications that require file system or shared file access.
It can be used to maintain repositories, development, user home directories, etc.
2. Block based storage – it is a low latency storage needed for applications requiring frequent
access like databases. It uses block level access hence needs to be partitioned and formatted
before use.
3. Object based storage – it is needed for applications that require scalability, unstructured data
and metadata support. It can be used for storing large amounts of data for analytics, archiving or
backups.
4.4.3 Network Pool
Resources in pools can be connected to each other, or to resources from other pools, by a network
facility. They can further be used for load balancing, link aggregation, etc.
Network pools are composed of different networking devices like- gateways, switches, routers,
etc. Virtual networks are then created from these physical networking devices and offered to
customers. Customers can further build their own networks using these virtual networks.
Generally, dedicated pools of resources of different types are maintained by data centers. They
may also be created specific to applications or consumers. With the increasing number of
resources and pools, it becomes very complex to manage and organize pools. Hierarchical
structure can be used to form parent-child, sibling, or nested pools to facilitate diverse resource
pooling requirements.
4.5 RESOURCE SHARING
Cloud computing technology makes use of resource sharing in order to increase resource
utilization. At a time, a huge number of applications can be running over a pool. But they may
not attain peak demands at the same time. Hence, sharing them among applications can increase
average utilization of these resources.
Although resource sharing offers multiple benefits, like increased utilization and reduced cost and expenditure, it also introduces challenges, like assuring quality of service (QoS) and
performance. Different applications competing for the same set of resources may affect run time
behavior of applications. Also, the performance parameters like- response and turnaround time
are difficult to predict. Hence, sharing of resources requires proper management strategies in
order to maintain performance standards.
4.5.1 Multi-tenancy
Multi-tenancy is one of the important characteristics found in public clouds. Unlike traditional
single tenancy architecture which allocates dedicated resources to users, multi-tenancy is an
architecture in which a single resource is used by multiple tenants (customers) who are isolated
from each other. Tenants in this architecture are logically separated but physically connected. In
other words, a single instance of a software can run on a single server but can serve multiple tenants. Here, the data of each tenant is kept separate and secure from the others. Fig 1 shows
single tenancy and multi-tenancy scenarios.
Multi-tenancy leads to sharing of resources by multiple users without the user being aware of it.
It is not only economical and efficient to the providers, but may also reduce the charges for the
consumers. Multi-tenancy is a feature enabled by various other features like- virtualization,
resource sharing, dynamic allocation from resource pools.
In this model, physical resources cannot be pre-occupied by a particular user. Neither the
resources are allocated to an application dedicatedly. They can be utilized on a temporary basis
by multiple users or applications as and when needed. The resources are released and returned to
a pool of free resources when demand is fulfilled which can further be used to serve other
requirements. This increases the utilization and decreases investment.
4.5.2 Types of Tenancy
In single tenancy architecture, a single instance of an application software along with its
supporting infrastructure, is used to serve a single customer. Customers have their own
independent instances and databases which are dedicated to them. Since there is no sharing with
this type of tenancy, it provides better security but costs more to the customers.
1. Single multi-tenant database – It is the simplest form, where a single application instance and a single database instance are used to host all the tenants. It is a highly scalable architecture, since more tenants can be added to the shared database (see the sketch after this list). It also reduces cost due to sharing of resources, but increases operational complexity.
2. One database per tenant – It is another form where a single application instance and
separate database instances are used for each tenant. Its scalability is low and costs higher as
compared to a single multi-tenant database due to overhead included by adding each database.
Due to separate database instances, its operational complexity is less.
3. One app instance and one database per tenant - It is the architecture where the whole
application is installed separately for each tenant. Each tenant has its own separate app and
database instance. This allows a high degree of data isolation but increases the cost.
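As referenced in the first model above, a common way to realise a single multi-tenant database is to tag every row with a tenant identifier and to scope every query by that identifier. The sketch below uses Python's built-in sqlite3 module purely as a stand-in; the table name and columns are illustrative assumptions.

# Sketch of tenant isolation in a single shared (multi-tenant) database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (tenant_id TEXT, amount REAL)")
conn.executemany("INSERT INTO invoices VALUES (?, ?)",
                 [("tenant_a", 100.0), ("tenant_b", 250.0), ("tenant_a", 40.0)])

def invoices_for(tenant_id):
    # Every query is scoped by tenant_id, so one tenant can never see another's rows.
    rows = conn.execute("SELECT amount FROM invoices WHERE tenant_id = ?",
                        (tenant_id,))
    return [amount for (amount,) in rows]

print(invoices_for("tenant_a"))   # [100.0, 40.0]
print(invoices_for("tenant_b"))   # [250.0]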
4.5.3 Tenancy at Different Levels of Cloud Services
Multi-tenancy can be applied not only in public clouds but also in private or community
deployment models. Also, it can be applied to all three service models – Infrastructure as a
Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS). Multi-tenancy
when performed at infrastructure level, makes other levels also multi-tenant to certain extent.
Multi-tenancy at IaaS level can be done by virtualization of resources and customers sharing the
same set of resources virtually without affecting others. In this, customers can share
infrastructure resources like- servers, storage and network.
Multi-tenancy at PaaS level can be done by running multiple applications from different vendors
over the same operating system. This removes the need for separate virtual machine allocation
and leads to customers sharing operating systems. It increases utilization and eases maintenance.
Multi-tenancy at SaaS level can be done by sharing a single application instance along with a
database instance. Hence a single application serves multiple customers. Customers may be
allowed to customize some of the functionalities like- change view of interface but they are not
allowed to edit applications since it is serving other customers also.
4.6 RESOURCE PROVISIONING AND APPROACHES
Resource provisioning is required to be done efficiently. Physical resources are not allocated to
users directly. Instead, they are made available to virtual machines, which in turn are allocated to
users and applications. Resources can be assigned to virtual machines using various
provisioning approaches. There can be three types of resources provisioning approaches– static,
dynamic and hybrid.
4.6.1 Static Approach
In static resource provisioning, resources are allocated to virtual machines only once, at the
beginning according to user’s or application’s requirement. It is not expected to change further.
Hence, it is suitable for applications that have predictable and static workloads. Once a virtual
machine is created, it is expected to run without any further allocations.
Although there is no runtime overhead associated with this type of provisioning, it has several
limitations. For any application, it may be very difficult to predict future workloads. It may lead
to over-provisioning or under-provisioning of resources. Under-provisioning is the scenario
when actual demand for resources exceeds the available resources. It may lead to service
downtime or application degradation. This problem may be avoided by reserving sufficient
resources in the beginning. But reserving large amounts of resources may lead to another
problem called Over-provisioning. It is a scenario in which the majority of the resources remain
un-utilized. It may lead to inefficiency to the service provided and incurs unnecessary cost to the
consumers. Fig 2 shows the under-provisioning and Fig 3 shows over-provisioning scenarios.
4.6.2 Dynamic Approach
In dynamic provisioning, as per the requirement, resources can be allocated or de-allocated
during run-time. Customers in this case don’t need to predict resource requirements. Resources
are allocated from the pool when required and removed from the virtual machine and returned
back to the pool of free resources when no more are required. This makes the system elastic.
This approach allows customers to be charged per usage basis.
Dynamic provisioning is suited for applications where demands for resources are un-predictable
or frequently varies during run-time. It is best suited for scalable applications. It can adapt to
changing needs at the cost of overheads associated with run-time allocations. This may lead to a
small amount of delay but solves the problem of over-provisioning and under-provisioning.
4.6.3 Hybrid Approach
Dynamic provisioning, although it solves the problems associated with the static approach, may lead
to overheads at run-time. Hybrid approach solves the problem by combining the capabilities of
static and dynamic provisioning. Static provisioning can be done in the beginning when creating
a virtual machine in order to limit the complexity of provisioning. Dynamic provisioning can be
done later for re-provisioning when the workload changes during run-time. This approach can be
efficient for real-time applications.
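The hybrid behaviour described above can be summarised in a few lines of Python: a static baseline is allocated when the VM is created, and the allocation is then adjusted at run time as utilisation crosses thresholds. The thresholds, step size and baseline below are assumptions chosen only for illustration.

# Simplified hybrid provisioning: static baseline plus dynamic re-provisioning.
def reprovision(allocated_units, utilisation, *, baseline=2, step=1,
                scale_up_at=0.8, scale_down_at=0.3):
    """Return the new number of resource units for a VM.
    `utilisation` is the fraction of the allocated units currently in use."""
    if utilisation > scale_up_at:                        # under-provisioned
        return allocated_units + step
    if utilisation < scale_down_at and allocated_units > baseline:
        return allocated_units - step                    # over-provisioned
    return allocated_units                               # leave unchanged

units = 2                          # static allocation done at VM creation time
for load in (0.5, 0.9, 0.95, 0.4, 0.1):
    units = reprovision(units, load)
print(units)                       # 3 resource units after this load pattern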
4.7 VM SIZING
Virtual machine (VM) sizing is the process of estimating the amount of resources that a VM
should be allocated. Its objective is to make sure that VM capacity is kept proportionate to the
workload. This estimation is based upon various parameters specified by the customer. VM
sizing is done at the beginning in case of static provisioning. In dynamic provisioning, VM size
can be changed depending upon the application workload. VM sizing can be done in two ways:
1. Individual VM based – In this case, resources are allocated VM-by-VM initially, depending upon each VM's previous workload pattern (see the sketch after this list). Resources can later be allocated from the pool when the load goes beyond expectations.
2. Joint-VM based – In this case, allocation to VMs are done in a combined way. Resources
assigned to a VM initially can be reassigned to another VM hosted on the same physical
machine. Hence it leads to overall efficient utilization.
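As indicated in the individual-VM-based approach above, one simple way to size a VM from its previous workload pattern is to provision for a high percentile of the observed demand plus a small headroom. The percentile and headroom values below are illustrative assumptions, not a prescribed rule.

# Sketch: size a VM from its historical CPU demand (in cores).
import math

def size_vm(cpu_history, percentile=0.95, headroom=1.2):
    """Pick the capacity that covered `percentile` of past samples, plus headroom."""
    ordered = sorted(cpu_history)
    index = min(len(ordered) - 1, math.ceil(percentile * len(ordered)) - 1)
    return math.ceil(ordered[index] * headroom)

history = [1.2, 2.0, 2.4, 3.1, 2.8, 2.2, 1.9, 3.5, 2.6, 2.9]
print(size_vm(history))            # 5 cores for this sample history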
Check Your Progress 3
4.8 SUMMARY
In this unit an important attribute of Cloud Computing technology called Resource pooling is
discussed. It is a collection of resources available for allocation to users. A large pool of physical
resources - storage, network and server pools are maintained in cloud data centers and presented
to users as virtual services. Resources may be allocated to serve a single user or application, or
can be even shared among multiple users or applications. Resources can be assigned to virtual
machines using static, dynamic and hybrid provisioning approaches.
Answers to Check Your Progress 1
1. Resource pool is a collection of resources available for allocation to users. All types of
resources – compute, network or storage, can be pooled. It creates a layer of abstraction for
consumption and presentation of resources in a consistent manner. A large pool of physical
resources is maintained in cloud data centers and presented to users as virtual services. Any
resource from this pool may be allocated to serve a single user or application, or can be even
shared among multiple users or applications. Also, instead of allocating resources permanently to
users, they are dynamically provisioned on a need basis. This leads to efficient utilization of
resources as load or demand changes over a period of time.
a) Server pools - They are composed of multiple physical servers along with operating
system, networking capabilities and other necessary software installed on it.
b) Storage pools – They are composed of file based, block based or object based storage
made up of storage devices like- disk or tapes and available to users in virtualized mode.
c) Network pools - They are composed of different networking devices like- gateways,
switches, routers, etc. Virtual networks are then created from these physical networking
devices and offered to customers. Customers can further build their own networks using
these virtual networks.
3. Storage pools are composed of file based, block based or object based storage.
a) File based storage – it is needed for applications that require file system or shared file
access. It can be used to maintain repositories, development, user home directories, etc.
b) Block based storage – it is a low latency storage needed for applications requiring
frequent access like databases. It uses block level access hence needs to be partitioned
and formatted before use.
c) Object based storage – it is needed for applications that require scalability,
unstructured data and metadata support. It can be used for storing large amounts of data
for analytics, archiving or backups.
Answers to Check Your Progress 2
1. In single tenancy architecture, a single instance of an application software along with its
supporting infrastructure, is used to serve a single customer. Customers have their own
independent instances and databases which are dedicated to them. Since there is no sharing with
this type of tenancy, it provides better security but costs more to the customers.
In multi-tenancy architecture, a single instance of an application software along with its
supporting infrastructure, can be used to serve multiple customers. Customers share a single
instance and database. Customer’s data is isolated from each other and remains invisible to
others. Since users are sharing the resources, it costs less to them as well as is efficient for the
providers.
Answers to Check Your Progress 3
2. There can be three types of resources provisioning approaches– static, dynamic and hybrid.
3. Under-provisioning is the scenario when actual demand for resources exceeds the available
resources. It may lead to service downtime or application degradation. This problem may be
avoided by reserving sufficient resources in the beginning.
Reserving large amounts of resources may lead to another problem called Over-provisioning. It
is a scenario in which the majority of the resources remain un-utilized. It may lead to
inefficiency to the service provided and incurs unnecessary cost to the consumers.
RESOURCE PROVISIONING,
LOAD BALANCING AND
SECURITY
UNIT 5 SCALING
Structure:-
5.1 Introduction
5.2 Objective
5.3 Scaling primitives
5.4 Scaling Strategies
5.4.1 Proactive Scaling
5.4.2 Reactive Scaling
5.4.3 Combinational Scaling
5.5 Auto Scaling in Cloud
5.6 Types of Scaling
5.6.1 Vertical Scaling or Scaling Up
5.6.2 Horizontal Scaling or Scaling Out
5.1 INTRODUCTION
In this unit we will focus on the various methods and algorithms used in the
process of scaling. We will discuss various types of scaling, their usage and a
few examples. We will also discuss the importance of various techniques in
saving cost and manual effort by using the concepts of cloud scaling in highly
dynamic situations. The suitability of scaling techniques in different scenarios
is also discussed in detail.
5.2 OBJECTIVES
After going through this unit you should be able to:
➔ describe scaling and its advantage;
The main advantages of scaling in the cloud are:
1. Minimum cost: The user has to pay only for the actual usage of hardware after upscaling. The cost of owning hardware at the same scale can be
much greater than the cost paid by the user. Also, the maintenance and
other overheads are also not included here. Further, as and when the
resources are not required, they may be returned to the Service provider
resulting in the cost saving.
2. Ease of use: The cloud upscaling and downscaling can be done in just a
few minutes (sometimes dynamically) by using the service provider's application interface.
In the case of the clouds, virtual environments are utilized for resource
allocation. These virtual machines enable clouds to be elastic in nature which
can be configured according to the workload of the applications in real time.
Figure: Workload and cost variation over time, with scaling checkpoints.
On the other hand, scaling saves the cost of hardware setup for small, short-lived peaks or dips in load. In general, most cloud service providers offer scaling as a process for free and charge only for the additional resources used. Scaling is also a common service provided by almost all cloud platforms. The user also saves when the usage of the resources declines, by scaling down.
5.4 SCALING STRATEGIES
Let us now see what the strategies for scaling are, how one can achieve scaling in a cloud environment and what its types are. In general, scaling is categorized
based on the decision taken for achieving scaling. The three main strategies for
scaling are discussed below.
5.4.1 Proactive Scaling
In proactive scaling, resources are scaled in advance on the basis of the expected or forecast load, for example known traffic patterns by time of day. The user plans the thresholds and the scaling schedule beforehand, so that the additional capacity is already available when the anticipated load arrives.
Figure: Proactive scaling – expected load vs. time of day.
5.4.2 Reactive Scaling
Reactive scaling monitors the workload and enables smooth workload changes at minimum cost. It empowers users to easily scale computing resources up or down rapidly. In simple words, when hardware resources like the CPU, RAM, or any other resource touch their highest utilization, more resources are added to the environment by the service provider. Auto scaling works on the policies defined by the users/resource managers for traffic and scaling. One major concern with reactive scaling is a quick change in load, i.e. users experience lags while the infrastructure is being scaled.
Figure 1: Manual scaling (load vs. time of day).
5.4.3 Combinational Scaling
Till now we have seen need-based and forecast-based scaling techniques. However, for better performance and a low cool-down period, we can also combine both the reactive and proactive scaling strategies where we have some prior knowledge of traffic. This helps us in scheduling timely scaling strategies for the expected load. On the other hand, we also have the provision of load-based scaling apart from the predicted load on the application. This way, both the problems of sudden and expected traffic surges are addressed.
Given below is the comparison between the proactive and reactive scaling strategies.

Parameters | Proactive Scaling | Reactive Scaling
Suitability | For applications whose load increases in an expected/known manner | For applications whose load increases in an unexpected/unknown manner
Working | The user sets the threshold, but a downtime is required | User-defined threshold values optimize the resources
Check Your Progress 1
3) Write the differences between proactive and reactive scaling.
…………………………………………………………………………………………
…………………………………………………………………………………………
…………………………………………………………………………………………
5.5 AUTO SCALING IN CLOUD
In a cloud, auto scaling can be achieved using user defined policies, various
machine health checks and schedules. Various parameters such as Request
counts, CPU usage and latency are the key parameters for decision making in
autoscaling. A policy here refers to the instruction sets for clouds in case of a
particular scenario (for scaling up or scaling down). The autoscaling in the
cloud is done on the basis of following parameters.
The process of auto scaling also requires some cooldown period for resuming the services after a scaling action takes place. No two concurrent scaling actions are triggered, so as to maintain integrity. The cooldown period allows the effect of an autoscaling action to get reflected in the system within a specified time interval and avoids any integrity issues in the cloud environment.
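The cooldown behaviour can be expressed as a small guard around the scaling trigger, as in the Python sketch below; the 300-second window is an assumed value used only for illustration.

# Sketch of a cooldown guard: ignore scaling triggers that arrive too soon
# after the previous scaling action.
import time

class CooldownGuard:
    def __init__(self, cooldown_seconds=300):
        self.cooldown = cooldown_seconds
        self.last_scaled = float("-inf")   # no scaling action has happened yet

    def may_scale(self, now=None):
        now = time.time() if now is None else now
        if now - self.last_scaled < self.cooldown:
            return False                   # still cooling down: skip this trigger
        self.last_scaled = now
        return True

guard = CooldownGuard()
print(guard.may_scale(now=0))      # True  - the first scaling action is allowed
print(guard.may_scale(now=120))    # False - within the cooldown window
print(guard.may_scale(now=400))    # True  - the cooldown has elapsed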
Figure: Workload and cost variation over time.
Consider a more specific scenario: when the resource requirement is high for some time duration, e.g. on holidays, weekends, etc., a scheduled scaling can also be performed. Here the time and the scale/magnitude/threshold of scaling can be defined earlier to meet the specific requirements, based on previous knowledge of the traffic. The threshold level is also an important parameter in auto scaling, as a low threshold value results in under-utilization of the cloud resources and a high threshold value results in higher latency in the cloud.
If, after adding additional nodes in a scale-up, the incoming requests per second per node drop below the scale-down threshold, the alternate scale-up and scale-down processes are triggered repeatedly; this is known as the ping-pong effect. To avoid both under-scaling and over-scaling issues, load testing is recommended to meet the service level agreements (SLAs). An SLA is the agreement between the cloud provider and the customer that specifies the expected service levels, such as availability and response time. In addition, the scale-up process is required to satisfy the following properties.
1. The number of incoming requests per second per node > threshold of
scale down, after scale-up.
2. The number of incoming requests per second per node < threshold of
scale up, after scale-down
Here, in both the scenarios one should reduce the chances of ping-pong effect.
Now we know what scaling is and how it affects the applications hosted on the
cloud. Let us now discuss how auto scaling can be performed in fixed amounts
as well as in percentage of the current capacity.
Fixed Amount Scaling:
--------------------------------------------------------------------------------------------
Algorithm 1
--------------------------------------------------------------------------------------------
Input: SLA specific application
Parameters:
N_min - minimum number of nodes
D - scale down value
U - scale up value
T_U - scale up threshold
T_D - scale down threshold
Let T(SLA) return the maximum incoming requests per second (RPS) per node for the specific SLA.
Let N_c and RPS_n represent the current number of nodes and the incoming requests per second per node, respectively.
Repeat:
N_(c_old) ←N_c
N_c ← max(N_min, N_c - D)
RPS_n ←RPS_n x N_(c_old) / N_c
Until RPS_n< T_D or N_c = N_min
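One evaluation of the fixed-amount scaling decision of Algorithm 1 can be sketched in Python as below. This is a simplified model that mirrors the pseudocode, not any particular cloud provider's API, and the threshold values used in the example call are assumptions.

# One evaluation of the fixed-amount autoscaling decision (Algorithm 1, simplified).
def fixed_scaling_step(rps, n_c, *, u, d, n_min, t_u, t_d):
    """Return the new node count given the total load `rps` and current nodes `n_c`."""
    rps_n = rps / n_c                  # incoming requests per second per node
    if rps_n > t_u:                    # overloaded: add a fixed number of nodes
        return n_c + u
    if rps_n < t_d and n_c > n_min:    # underloaded: remove a fixed number of nodes
        return max(n_min, n_c - d)
    return n_c                         # within thresholds: no change

# Illustrative call (threshold values chosen only for this example):
print(fixed_scaling_step(1800, 4, u=2, d=2, n_min=1, t_u=350, t_d=230))   # -> 6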
Now, let us discuss how this algorithm works in detail. Let the values of a few parameters be as illustrated in the scale-up example of Table 1 below.
Table 1: Scale-up with a fixed amount (U = 2 nodes per step)
RPS (total) | Nodes added | N_c | RPS_n (= RPS / N_c)
450  | 0 | 4  | 112.50
1800 | 2 | 6  | 300.00
2510 | 2 | 8  | 313.75
3300 | 2 | 10 | 330.00
4120 | 2 | 12 | 343.33
5000 | 2 | 14 | 357.14
Similarly, in the case of scaling down, let initially RPS = 8000 and N_c = 19. Now the RPS is reduced to 6200 and, following it, RPS_n reaches T_D; here an autoscaling request is initiated, deleting D = 2 nodes at each step. Table 2 lists all the parameters as per the scale-down requirements.
Table 2: Scale-down with a fixed amount (D = 2 nodes per step)
RPS (total) | Nodes removed | N_c | RPS_n (= RPS / N_c)
8000 | 0 | 19 | 421.05
6200 | 2 | 17 | 364.70
4850 | 2 | 15 | 323.33
3500 | 2 | 13 | 269.23
2650 | 2 | 11 | 240.90
1900 | 2 | 9  | 211.11
The given tables show the stepwise increase/decrease in the cloud capacity with respect to the change in load on the application (requests per second per node).
Percentage Scaling:
The algorithm given below is used to determine the scale-up and scale-down steps for the respective autoscaling.
-----------------------------------------------------------------------------------------------
Algorithm : 2
-----------------------------------------------------------------------------------------------
Input : SLA specific application
Parameters:
N_min - minimum number of nodes
D - scale down value.
U - scale up value.
T_U - scale up threshold
T_D - scale down threshold
Let T(SLA) return the maximum requests per second (RPS) per node for the specific SLA.
Let N_c and RPS_n represent the current number of nodes and incoming
requests per second per node respectively.
N_c ← N_c + max(1, N_c x U/100)
Repeat:
N_(c_old) ←N_c
N_c ← max(N_min, N_c - max(1, N_c x D/ 100))
RPS_n ←RPS_n x N_(c_old) / N_c
Until RPS_n< T_D or N_c = N_min
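The corresponding percentage-based decision of Algorithm 2 can be sketched as follows: the step size becomes a percentage of the current capacity instead of a fixed node count. As before, this is a simplified illustration of the pseudocode rather than a production autoscaler, and the example values are assumptions.

# One evaluation of the percentage-based autoscaling decision (Algorithm 2, simplified).
def percentage_scaling_step(rps, n_c, *, u_pct, d_pct, n_min, t_u, t_d):
    rps_n = rps / n_c                                  # requests per second per node
    if rps_n > t_u:                                    # scale up by U percent
        return n_c + max(1, int(n_c * u_pct / 100))
    if rps_n < t_d and n_c > n_min:                    # scale down by D percent
        return max(n_min, n_c - max(1, int(n_c * d_pct / 100)))
    return n_c                                         # within thresholds: no change

# Illustrative call with U = 1%, D = 8%, T_U = 290, T_D = 230:
print(percentage_scaling_step(2600, 8, u_pct=1, d_pct=8, n_min=1, t_u=290, t_d=230))  # -> 9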
The detailed example in Table 3 below gives the details of upscaling with D = 8, U = 1, N_min = 1, T_D = 230 and T_U = 290. Similarly, in the case of scaling down (Table 4), the initial RPS = 5000 and N_c = 19; when the RPS reduces to 4140, RPS_n reaches T_D, requesting a scale-down and hence deleting one node at each step.
Table 3: Scale-up by a percentage (U = 1%, so one node is added per step)
RPS (total) | Nodes added | N_c | RPS_n (= RPS / N_c)
500  | 0 | 6  | 83.33
1695 | 1 | 7  | 242.14
2190 | 1 | 8  | 273.75
2600 | 1 | 9  | 288.88
3430 | 1 | 10 | 343.00
3940 | 1 | 11 | 358.18
4420 | 1 | 12 | 368.33
4960 | 1 | 13 | 381.53
5500 | 1 | 14 | 392.85
5950 | 1 | 15 | 396.60
The scaling down with the same algorithm is detailed in Table 4 below.
Table 4: Scale-down by a percentage (D = 8%; one node is removed per step for these values)
RPS (total) | Nodes removed | N_c | RPS_n (= RPS / N_c)
5000 | 0 | 19 | 263.15
3920 | 1 | 18 | 217.77
3510 | 1 | 17 | 206.47
3200 | 1 | 16 | 200.00
2850 | 1 | 15 | 190.00
2600 | 1 | 14 | 185.71
2360 | 1 | 13 | 181.53
2060 | 1 | 12 | 171.66
1810 | 1 | 11 | 164.50
1500 | 1 | 10 | 150.00
Here, if we compare both the algorithms 1 and 2, it is clear that the values of the scaling parameters U and D are effectively on the higher side in the case of Algorithm 2. In this scenario the utilization of the hardware is higher and the cloud experiences a lower footprint.
Check Your Progress 2
1) Explain the concept of fixed amount auto scaling.
…………………………………………………………………………………………
…………………………………………………………………………………………
2) In Algorithm 1 for fixed amount auto scaling, calculate the values in the table if U = 3.
…………………………………………………………………………
…………………………………………………………………………
…………………………………………………………………………
…………………………………………………………………………………………
…………………………………………………………………………………………
3) What is a cool down period?
…………………………………………………………………………………………
…………………………………………………………………………………………
5.6 TYPES OF SCALING
Let us now discuss the types of scaling and how we view the cloud infrastructure for
capacity enhancing/ reducing. In general we scale the cloud in a vertical or
horizontal way by either provisioning more resources or by installing more
resources.
5.6.1 Vertical Scaling or Scaling Up
The vertical scaling in the cloud refers to either scaling up, i.e. enhancing the
computing resources or scaling down i.e. reducing/ cutting down computing
resources for an application. In vertical scaling, the actual number of VMs are
constant but the quantity of the resource allocated to each of them is increased/
decreased. Here no infrastructure is added and application code is also not
changed. The vertical scaling is limited to the capacity of the physical machine
or server running in the cloud. If one has to upgrade the hardware requirements
of an existing cloud environment, this can be achieved by minimum changes.
Figure: Vertical scaling – an IT resource (a virtual server with two CPUs) is scaled up by replacing it with a more powerful IT resource with increased capacity (a server with four CPUs).
5.6.2 Horizontal Scaling or Scaling Out
Horizontal scaling refers to scaling out, i.e. adding more instances of the same IT resource (or scaling in, i.e. removing such instances), drawn from the pooled physical servers, in order to handle changes in load.
Figure: Horizontal scaling – an IT resource (Virtual Server A) is scaled out by adding more of the same IT resources (Virtual Servers B and C) from the pooled physical servers.
5.7 SUMMARY
In the end, we are now aware of various types of scaling, scaling strategies and
their use in real situations. Various cloud service providers like Amazon AWS,
Microsoft Azure and IT giants like Google offer scaling services on their
application based on the application requirements. These services offer good
help to the entrepreneurs who run small to medium businesses and seek IT
infrastructure support. We have also discussed various advantages of cloud scaling for business applications.
5.8 SOLUTIONS/ANSWERS
Answers to CYPs 1.
3) Write differences between proactive and reactive scaling: The reactive scaling
technique only works for the actual variation of load on the application however, the
combination works for both expected and real traffic. A good estimate of load
increases performance of the combinational scaling.
Answers to CYPs 2.
1) Explain the concept of fixed amount auto scaling: The fixed amount scaling is a
simplistic approach for scaling in cloud environment. Here the resources are scaled
up/ down by a user defined number of nodes. In fixed amount scaling resource
utilization is not optimized. It can also happen that only a small node can solve the
resource crunch problem but the used defined numbers are very high leading to
underutilized resources. Therefore a percentage amount of scaling is a better
technique for optimal resource usage.
2) In Algorithm 1 for fixed amount auto scaling, calculate the values in the table if U = 3:
For the given U = 3, the following calculations are made.
RPS (total) | Nodes added | N_c | RPS_n (= RPS / N_c)
450  | 0 | 4  | 112.50
1800 | 3 | 7  | 257.14
2510 | 3 | 10 | 251.00
3300 | 3 | 13 | 253.84
4120 | 3 | 16 | 257.50
5000 | 3 | 19 | 263.15
3) What is a cool down period: When auto scaling takes place in the cloud, a small time interval (pause) prevents the triggering of the next auto scale event. This helps in
maintaining the integrity in the cloud environment for applications. Once the cool
down period is over, next auto scaling event can be accepted.
UNIT 6 LOAD BALANCING
Resource provisioning techniques are classified into two groups based on the application's
needs:
Static Resource Provisioning:
Static provisioning can be used efficiently for any application that has predictable and fixed
demands. When beginning the programme, the user must provide certain needs so that the
service provider can meet them.
Dynamic Resource Provisioning:
Dynamic provisioning allows a workload to be moved to a new VM on the fly if the demands on it change unexpectedly. In this situation, the service provider provides more virtual machines (VMs) if necessary, and removes them when they are no longer needed.
Response speed, workload, reduced SLA violations, and other factors are all taken into
account when allocating resources.
While completing the task, the resource provisioning algorithm must respond in the shortest amount of time possible. It must also be able to minimize SLA violations while taking into account the workload of each VM.
Resource provisioning approaches must be utilized in order to get the most out of cloud
resources. For example, resource provisioning based on deadlines, cost analyses, and service
level agreements are all common approaches that scholars have advocated for managing
various aspects of resource allocation.
Categories of Load Balancing:
You may need to consider specific types of load balancing for your network, such as SQL
Server load balancing for your relational database, global server load balancing for
troubleshooting across many geographic locations, and DNS server load balancing to ensure
domain name operation. You can also consider load balancer types in terms of the many
cloud-based balancers available (such as the well-known AWS Elastic Load Balancer).
Static Algorithm Approach:
This type of method is used when the load on the system is relatively predictable and hence static. With the static method, all of the traffic is split equally amongst all of the servers. Implementing this algorithm effectively calls for extensive knowledge of the server resources, which is only known at implementation time.
However, the decision to shift loads does not take into account the current state of the system. One of the main limitations of a static load balancing method is that the load balancing decisions are fixed once they have been established and cannot adapt to changing conditions or be reused for other devices.
Dynamic Algorithm:
The dynamic process begins by locating the network's lightest server and assigns priority load
balancing to it. As a result, the system's traffic may need to be increased by utilising network
real-time communication. It's all about the present status of the system in this case.
Decisions are made in the context of the present system state, which is a key feature of
dynamic algorithms. Processes can be transferred from high-volume machines to low-volume
machines in real time.
Round Robin Algorithm:
For this algorithm, as its name implies, jobs are assigned to the nodes in a round-robin (circular) fashion. The initial node is chosen at random, and subsequent jobs are assigned to the other nodes in round-robin order (a minimal sketch is given below). This is one of the simplest strategies for distributing load in a network. Jobs are assigned in circular order with no regard for priority, so it responds quickly when the workload is evenly distributed across the processes. However, the processing time of each job varies, and as a result some nodes may be underutilized while others are overburdened.
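A minimal round-robin dispatcher can be written in a few lines of Python; the server names below are placeholders used only for this sketch.

# Minimal round-robin load balancer sketch.
from itertools import cycle

servers = ["server-1", "server-2", "server-3"]     # placeholder node names
rotation = cycle(servers)

def dispatch(request_id):
    """Assign each incoming request to the next server in circular order."""
    return next(rotation)

for req in range(5):
    print(req, "->", dispatch(req))
# 0 -> server-1, 1 -> server-2, 2 -> server-3, 3 -> server-1, 4 -> server-2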
Weighted Round Robin Load Balancing Algorithm:
Weighted Round Robin load balancing algorithms have been created to address the most problematic aspect of Round Robin: that all servers are treated as equal. In this algorithm, weights are assigned to the servers and work is distributed according to the weight values. Higher-capacity servers are given higher weights, so the servers with greater capacity receive more of the work and each server sees a steady stream of traffic in proportion to its weight (see the sketch below).
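The weighted variant can be sketched by repeating each server in the rotation according to its weight; this is a simple but valid realisation (real load balancers usually interleave more smoothly), and the server names and weights are placeholders.

# Weighted round robin: higher-capacity servers appear more often in the rotation.
from itertools import cycle

weights = {"big-server": 3, "medium-server": 2, "small-server": 1}
rotation = cycle([name for name, w in weights.items() for _ in range(w)])

for req in range(6):
    print(req, "->", next(rotation))
# big-server x3, then medium-server x2, then small-server x1; the cycle then repeats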
Opportunistic Load Balancing Algorithm:
The opportunistic load balancing (OLB) technique ensures that each node is always kept busy. It does not take into account how much work each system is currently doing: unfinished jobs are distributed among all nodes regardless of their current burden. As a result, jobs may take a long time to complete, and because the nodes' execution times are not taken into account, bottlenecks can arise even when some nodes are free.
Minimum to Minimum (Min-Min) Load Balancing Algorithm:
Under min-min load balancing, small tasks are completed in the shortest possible amount of time. For each task, the machine that gives the minimum completion time is found, and the task with the overall smallest such value is chosen and assigned to that machine first (see the sketch below). The assigned job is deleted from the list and the ready times of the machines are updated. This procedure is repeated until the final assignment. This method is most effective when a large number of little jobs must be performed.
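The min-min procedure described above can be sketched as follows: among all unscheduled tasks, repeatedly pick the one with the smallest completion time on its best machine, assign it there, and update that machine's ready time. The task costs and machine names are made up for illustration.

# Sketch of the min-min scheduling heuristic.
def min_min(tasks, machines):
    """tasks: {task_name: execution_time}; machines: list of machine names.
    Returns a {task_name: machine_name} assignment."""
    ready = {m: 0.0 for m in machines}        # time at which each machine becomes free
    assignment = {}
    pending = dict(tasks)
    while pending:
        # Completion time of each pending task on its best machine.
        best = {t: min((ready[m] + cost, m) for m in machines)
                for t, cost in pending.items()}
        task = min(best, key=lambda t: best[t][0])   # overall smallest completion time
        finish_time, machine = best[task]
        assignment[task] = machine
        ready[machine] = finish_time
        del pending[task]
    return assignment

print(min_min({"t1": 4, "t2": 1, "t3": 2}, ["m1", "m2"]))
# {'t2': 'm1', 't3': 'm2', 't1': 'm1'} - the smallest tasks are scheduled first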
Dynamic Approach:
During runtime, it may dynamically detect the amount of load that needs to be shed and
which system should carry the load.
Dynamic Load Balancing:
Least connection:
Verifies and transmits traffic to those servers that have the fewest connections open at any one moment. All connections are assumed to demand nearly equal processing power in this scenario (a minimal sketch is given below).
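Least-connection dispatch simply tracks the number of open connections per server and routes the next request to the least-loaded one; the connection counts below are illustrative.

# Least-connection load balancing sketch.
open_connections = {"server-1": 12, "server-2": 7, "server-3": 9}

def dispatch():
    """Send the next request to the server with the fewest open connections."""
    target = min(open_connections, key=open_connections.get)
    open_connections[target] += 1      # the new request opens one more connection
    return target

print(dispatch())   # server-2 (had 7 open connections)
print(dispatch())   # server-2 again (now 8, still the fewest)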
7.0 Introduction
7.1 Objectives
7.2 Cloud Security
7.2.1 How Cloud Security is Different from Traditional IT Security?
7.2.2 Cloud Computing Security Requirements
7.3 Security Issues in Cloud Service Delivery Models
7.4 Security Issues in Cloud Deployment Models
7.0 INTRODUCTION
In the earlier unit, we had studied Load Balancing in Cloud computing and in
this unit we will focus on another important aspect namely Cloud Security in
cloud computing.
Cloud security is largely the responsibility of the cloud service provider, but security also partially rests in the client's hands as well. Understanding both facets is pivotal to a healthy cloud security solution.
In this unit, you will study what cloud security is, how it is different from traditional (legacy) IT security, cloud computing security requirements, challenges in providing cloud security, threats, ensuring security, Identity and Access Management and Security-as-a-Service.
7.1 OBJECTIVES
After going through this unit, you should be able to:
➔ explain what cloud security is and how it differs from traditional (legacy) IT security;
➔ describe the cloud computing security requirements; and
➔ discuss the security issues in cloud service delivery and deployment models.
7.2 CLOUD SECURITY
Cloud security is the whole bundle of technology, protocols, and best practices that protect cloud computing environments, applications running in the cloud, and data held in the cloud. Securing cloud services begins with understanding what exactly is being secured, as well as the system aspects that must be managed.
The full scope of cloud security is designed to protect the following, regardless of your responsibilities:
Data security
Identity and access management (IAM)
Governance (policies on threat prevention, detection, and mitigation)
Data retention (DR) and business continuity (BC) planning
Legal compliance
Cloud security may appear like traditional (legacy) IT security, but this framework actually demands a different approach. Before diving deeper, let us first look at how it differs from legacy IT security in the next section.
7.2.1 How Cloud Security is Different from Traditional IT Security?
Traditional IT security has felt an immense evolution due to the shift to cloud-
based computing. While cloud models allow for more convenience, always-on
connectivity requires new considerations to keep them secure. Cloud security,
as a modernized cyber security solution, stands out from legacy IT models in a
few ways.
Data storage: The biggest distinction is that older models of IT relied heavily
upon onsite data storage. Organizations have long found that building all IT
frameworks in-house for detailed, custom security controls is costly and rigid.
Cloud-based frameworks have helped offload costs of system development and
upkeep, but also remove some control from users.
Proximity to other networked data and systems: Since cloud systems are a
persistent connection between cloud providers and all their users, this
substantial network can compromise even the provider themselves. In
networking landscapes, a single weak device or component can be exploited to
infect the rest. Cloud providers expose themselves to threats from many end-
users that they interact with, whether they are providing data storage or other
services. Additional network security responsibilities fall upon the providers
who otherwise delivered products that live purely on end-user systems instead of their own.
Solving most cloud security issues means that users and cloud providers, in both personal and business environments, remain proactive about their own roles in cyber security. This two-pronged approach means users and providers must mutually address their respective security responsibilities.
7.2.2 Cloud Computing Security Requirements
There are four main cloud computing security requirements that help to ensure the privacy and security of cloud services: confidentiality, integrity, availability, and accountability.
Confidentiality
Confidentiality ensures that a consumer's data held in the cloud can be read only by authorized users and services; it is typically enforced through encryption and strict access control, so that neither other tenants nor unauthorized outsiders can access the data.
Integrity
Integrity ensures that data and computations in the cloud are not modified or tampered with by unauthorized parties, so that consumers can trust the correctness of the data and results returned to them.
Availability
Availability is the ability for the consumer to utilize the system as expected.
One of the significant advantages of cloud computing is its data availability.
Cloud computing enhances availability through authorized entry. In addition,
availability requires timely support and robust equipment. A client’s
availability may be ensured as one of the terms of a contract; to guarantee
availability, a provider may secure huge capacity and excellent architecture.
Because availability is a main part of the cloud computing system, increased
use of the environment will increase the possibility of a lack of availability and
thus could reduce the cloud computing system’s performance. Cloud
computing affords clients two ways of paying for cloud services: on-demand
resources and (the cheaper option) resource reservation. The optimal virtual-
machine (VM) placement mechanism helps to reduce the cost of both payment
methods. By reducing the cost of running VMs across many cloud providers, it
accommodates expected changes in demand and price. This method involves the
client making a declaration to pay for certain resources owned by the cloud
providers, using a stochastic integer programming (SIP) optimal solution.
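To make the trade-off between on-demand pricing and resource reservation concrete, here is a minimal sketch in Python; the hourly rates and the demand profile are invented illustrative numbers, not figures from any provider, and the calculation simply compares the total bill for a purely on-demand strategy against reserving a baseline number of VMs and buying only the overflow on demand.

```python
# Illustrative comparison of on-demand versus reserved VM costs.
# All prices and the demand profile are made-up assumptions for the example.

ON_DEMAND_RATE = 0.10   # $ per VM-hour (assumed)
RESERVED_RATE = 0.06    # $ per VM-hour for reserved capacity (assumed)

# Hourly demand over a month: mostly 4 VMs, with 230 peak hours needing 10.
demand = [4] * 500 + [10] * 230

def on_demand_cost(demand):
    return sum(vms * ON_DEMAND_RATE for vms in demand)

def mixed_cost(demand, reserved_vms):
    # Reserved VMs are paid for every hour whether used or not;
    # anything above the reservation is bought on demand.
    reserved = reserved_vms * RESERVED_RATE * len(demand)
    overflow = sum(max(0, vms - reserved_vms) * ON_DEMAND_RATE for vms in demand)
    return reserved + overflow

print("Pure on-demand:", round(on_demand_cost(demand), 2))
for r in (2, 4, 6, 10):
    print(f"Reserve {r} VMs:", round(mixed_cost(demand, r), 2))
```

Choosing the reservation level that minimizes this total under uncertain future demand and prices is essentially the decision that an optimal VM placement mechanism automates.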
Accountability
Access Control: To admit and serve only legitimate users, the cloud must have
the right access control policies. Such services must be adjustable, well
planned, and conveniently administered. The access control provision must be
integrated on the basis of the Service Level Agreement (SLA).
Policy Integration: There are many cloud providers, such as Amazon and Google,
that are accessed by end users. Because each provider uses its own policies and
approaches, conflicts between their policies must be kept to a minimum.
In the following sections, let us discuss the major threats and issues in cloud
computing with respect to the cloud service delivery models and cloud
deployment models.
Check Your Progress 1
…………………………………………………………………………………………
…………………………………………………………………………………………
…………………………………………………………………………………………
2) How does cloud security work?
…………………………………………………………………………………………
…………………………………………………………………………………………
3) Mention various cloud security risks and discuss briefly.
…………………………………………………………………………………………
…………………………………………………………………………………………
Data loss is the second most important issue related to cloud security. Like
data breach, data loss is a sensitive matter for any organization and can have a
devastating effect on its business. Data loss mostly occurs due to malicious
attackers, data deletion, data corruption, loss of data encryption key, faults in
storage systems, or natural disasters. In 2013, 44% of cloud service providers
faced brute force attacks that resulted in data loss and data leakage.
Similarly, malware attacks have also been targeted at cloud applications
resulting in data destruction.
SQL injection attacks are those in which malicious code is inserted into a
standard SQL query. The attackers thereby gain unauthorized access to a
database and are able to access sensitive information. Sometimes the attacker's
input is misinterpreted by the website as ordinary user data and is passed on
to the SQL server, which lets the attacker learn how the website functions and
make changes to it. Various techniques, such as avoiding dynamically generated
SQL in the code and using filtering techniques to sanitize user input, are used
to check SQL injection attacks. Some researchers have proposed a proxy-based
architecture for preventing SQL injection attacks that dynamically detects and
extracts users' inputs for suspected SQL control sequences.
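As a hedged illustration of the input-sanitization advice above, the sketch below contrasts a string-concatenated query (injectable) with a parameterized one, using Python's built-in sqlite3 module; the table, column, and data are invented for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cr3t')")

user_input = "' OR '1'='1"   # a classic injection payload

# Vulnerable: the input is concatenated straight into the SQL string,
# so the WHERE clause collapses into a condition that is always true.
unsafe = f"SELECT secret FROM users WHERE name = '{user_input}'"
print("concatenated query returns:", conn.execute(unsafe).fetchall())

# Safer: the value is bound as a parameter, so the payload is treated
# as literal data and matches no row.
safe = "SELECT secret FROM users WHERE name = ?"
print("parameterized query returns:",
      conn.execute(safe, (user_input,)).fetchall())
```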
The network plays an important part in deciding how efficiently the cloud
services operate and communicate with users. In developing most cloud
solutions, network security is not considered an important factor by some
organizations. Not having enough network security creates attack vectors for
malicious users and outsiders, resulting in different network threats. The most
critical network threats in the cloud are account or service hijacking and
denial of service attacks.
Denial of Service (DoS) attacks are carried out to prevent legitimate users
from accessing the cloud network, storage, data, and other services. DoS
attacks have been on the rise in cloud computing in the past few years, and 81%
of customers consider them a significant threat in the cloud. They are usually
done by compromising a service that can be used to consume most cloud
resources, such as computation power, memory, and network bandwidth. This
causes a delay in cloud operations, and sometimes the cloud is unable to
respond to other users and services. A Distributed Denial of Service (DDoS)
attack is a form of DoS attack in which multiple network sources are used by
the attacker to send a large number of requests to the cloud to consume its
resources. It can be launched by exploiting vulnerabilities in web servers,
databases, and applications, resulting in unavailability of resources.
Another cause may be improper configuration of the Secure Sockets Layer (SSL).
For example, if SSL is improperly configured, then a middle party could
intercept data. The preventive measure for this attack is to ensure that SSL is
properly configured before communicating with other parties.
Brute Force Attacks: The attacker attempts to crack the password by guessing
all potential passwords.
Replay Attacks: Also known as reflection attacks, replay attacks are a type of
attack that targets a user’s authentication process.
Key loggers: This is a program that records every key pressed by the user and
tracks their behavior.
Cloud service providers are largely responsible for controlling the cloud
environment. Some threats are specific to cloud computing such as cloud
service provider issues, providing insecure interfaces and APIs to users,
malicious cloud users, shared technology vulnerabilities, misuse of cloud
services, and insufficient due diligence by companies before moving to cloud.
The term abuse of cloud services refers to the misuse of cloud services by the
consumers. It is mostly used to describe the actions of cloud users that are
illegal, unethical, or violate their contract with the service provider. In
2010, abuse of cloud services was considered to be the most critical cloud
threat and different measures were taken to prevent it. However, 84% of cloud
users still consider it a relevant threat. Research has shown that some cloud
providers are unable to detect attacks launched from their networks, due to
which they are unable to generate alerts or block any attacks. The abuse of
cloud services is a more serious threat to the service provider than to the
service users. For instance, the use of cloud network addresses for spam by
malicious users has resulted in the blacklisting of all network addresses, so
the service provider must take all possible measures to prevent these threats.
Over the years, different attacks have been launched through the cloud by
malicious users. For example, Amazon's EC2 services were used as command and
control servers to launch the Zeus botnet in 2009. Famous cloud services such
as Twitter, Google, and Facebook have also been used as command and control
servers for launching Trojans and botnets. Other attacks that have been
launched using the cloud include brute force attacks for cracking encryption
passwords, phishing, DoS attacks against a web service at a specific host,
Cross-Site Scripting, and SQL injection attacks.
The term due diligence refers to individuals or customers having complete
information for assessing the risks associated with a business prior to using
its services. Cloud computing offers exciting opportunities of unlimited
computing resources and fast access, due to which a number of businesses shift
to the cloud without assessing the risks associated with it. Due to the complex
architecture of the cloud, some organizational security policies cannot be
applied in the cloud. Moreover, cloud customers have no idea about the internal
security procedures, auditing, logging, data storage, and data access, which
results in unknown risk profiles in the cloud. In some cases, the developers
and designers of applications may be unaware of the effects of deploying them
on the cloud, which can result in operational and architectural issues.
7.3.3.5 Shared Technology Vulnerabilities
Earlier, the Xen hypervisor code contained a local privilege escalation
vulnerability (in which one user can obtain the rights of another user) that
could be used to launch a guest-to-host VM escape attack. Later, Xen updated
the code base of its hypervisor to fix that vulnerability. Other companies
whose products are based on Xen, such as Microsoft, Oracle, and SUSE Linux,
also released software updates to fix the local privilege escalation
vulnerability. Similarly, a report released in 2009 showed the use of VMware to
run code from guests on hosts, demonstrating possible ways to launch attacks.
There is a lack of strong isolation or compartmentalization of routing,
reputation, storage, and memory among tenants. Because of the lack of
isolation, attackers attempt to take control of the operations of other cloud
users to obtain unauthorized access to the data.
Through this approach, the attackers can gain access to remote applications on
the victim's resource systems. It is a passive attack of sorts. Zombies are
sometimes used by attackers to carry out DDoS attacks, while back door channels
are frequently used by attackers to gain control of the victim's resources.
This has the potential to compromise data security and privacy.
Each of the three ways (public, private, hybrid) in which cloud services can be
deployed has its own advantages and limitations. From the security perspective,
all three have certain areas that need to be addressed with a specific
strategy.
complicated in the case of a public cloud, where we do not have any control
over the service provider's security practices.
In the case of a public cloud, the same infrastructure is shared between
multiple tenants and the chances of data leakage between these tenants are very
high. However, most of the service providers run a multi-tenant infrastructure,
so proper investigation at the time of choosing the service provider must be
done in order to avoid any such risk.
If a Cloud Service Provider uses a third-party vendor to provide its cloud
services, it should be ascertained what service level agreements exist between
them, as well as what the contingency plans are in case of a breakdown of the
third-party system.
Proper SLAs should define the security requirements, such as the level of
encryption the data should undergo when it is sent over the internet, and the
penalties in case the service provider fails to meet them.
A private cloud model enables the customer to have total control over the
network and provides the flexibility to the customer to implement any
traditional network perimeter security practice. Although the security
architecture is more reliable in a private cloud, there are still issues/risks
that need to be considered:
the web interface using common languages such as Java, PHP, Python,
etc. As part of the screening process, the Eucalyptus web interface was
found to have a bug allowing any user to perform internal port scanning
or HTTP requests through the management node, which he should not be
allowed to do. In a nutshell, interfaces need to be properly developed,
and standard web application security techniques need to be deployed to
protect the diverse HTTP requests being performed.
While we talk of standard internet security, we also need to have a
security policy in place to safeguard the system from attacks
originating within the organization. This vital point is missed on most
occasions, the stress being mostly on internet security. Proper security
guidelines should exist across the various departments, and controls
should be implemented as per the requirements.
Thus we see that although private clouds are considered safer in comparison to
public clouds, they still have multiple issues which, if unattended, may lead
to major security loopholes, as discussed earlier.
The hybrid cloud model is a combination of both public and private cloud and
hence the security issues discussed with respect to both are applicable in case
of hybrid cloud.
In the following section the security methods to avoid the exploitation of the
threats will be discussed.
Various security measures and techniques have been proposed to avoid data
breaches in the cloud. One of these is to encrypt data before storing it in the
cloud and while it travels over the network. This requires an efficient key
management algorithm and protection of the key in the cloud. Some measures that
must be taken to avoid data breaches in the cloud are to implement proper
isolation among VMs to prevent information leakage, to implement proper access
controls to prevent unauthorized access, and to make a risk assessment of the
cloud environment to know where sensitive data is stored and how it is
transmitted between various services and networks.
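A minimal sketch of the "encrypt before storage" measure is shown below, using the third-party cryptography package; the sample record is invented, and key storage is deliberately omitted because, as noted above, the key itself must be protected by a separate key management mechanism.

```python
# Sketch: encrypt data on the client before it ever reaches cloud storage.
# Assumes the third-party 'cryptography' package is installed.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice, held in a key-management service
cipher = Fernet(key)

plaintext = b"customer record: account=1234, balance=500"
ciphertext = cipher.encrypt(plaintext)

# Only the ciphertext would be uploaded to the cloud object store.
print("stored in cloud  :", ciphertext[:40], b"...")

# The data owner, who holds the key, can recover the original.
print("decrypted locally:", cipher.decrypt(ciphertext))
```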
To prevent data loss in cloud different security measures can be adopted. One
of the most important measures is to maintain backup of all data in cloud
which can be accessed in case of data loss. However, data backup must also be
protected to maintain the security properties of data such as integrity and
confidentiality. Various data loss prevention (DLP) mechanisms have been
proposed for the prevention of data loss in network, processing, and storage.
Many companies including Symantec, McAfee, and Cisco have also developed
solutions to implement data loss prevention across storage systems, networks
and end points. Trusted Computing can be used to provide data security. A
trusted server can monitor the functions performed on data by cloud server and
provide the complete audit report to data owner. In this way, the data owner
can be sure that the data access policies have not been violated.
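One simple way to preserve the integrity of a backup, sketched below under the assumption of invented file paths, is to record a cryptographic digest when the backup is taken and to recompute and compare it before any restore; Python's standard hashlib is enough for this.

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file and return its SHA-256 hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# At backup time: store the digest alongside (but separately from) the backup.
recorded = sha256_of("backup/db-dump.bak")          # hypothetical path

# Before restoring: recompute and compare. A mismatch means the backup was
# corrupted or tampered with and must not be trusted.
if sha256_of("backup/db-dump.bak") != recorded:
    raise RuntimeError("backup integrity check failed")
```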
To avoid DoS attacks, it is important to identify and implement all the basic
security requirements of the cloud network, applications, databases, and other
services. Applications should be tested after designing to verify that they
have no loopholes that can be exploited by attackers. DDoS attacks can be
prevented by having extra network bandwidth, using an IDS that verifies network
requests before they reach the cloud server, and maintaining a backup of IP
pools for urgent cases. Industrial solutions to prevent DDoS attacks have also
been provided by different vendors. A technique named hop-count filtering can
be used to filter spoofed IP packets and helps in decreasing DoS attacks by
90%. Another technique for securing the cloud from DDoS involves using an
intrusion detection system in a virtual machine (VM). In this scheme, when an
intrusion detection system (IDS) detects an abnormal increase in inbound
traffic, the targeted applications are transferred to VMs hosted on another data
center.
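Hop-count filtering works by inferring how many router hops a packet actually traversed from its remaining TTL and comparing that with the hop count previously learned for the claimed source address; a large mismatch suggests the source IP is spoofed. The sketch below is a simplified illustration, with an invented table of learned hop counts.

```python
# Simplified hop-count filtering: drop packets whose inferred hop count
# disagrees with what was previously learned for that source address.

COMMON_INITIAL_TTLS = (32, 64, 128, 255)

def inferred_hops(observed_ttl):
    # Assume the sender used the smallest common initial TTL >= observed value.
    initial = min(t for t in COMMON_INITIAL_TTLS if t >= observed_ttl)
    return initial - observed_ttl

# Learned mapping source IP -> expected hop count (assumed values, built
# during a trusted learning phase from legitimate traffic).
hop_table = {"203.0.113.7": 14, "198.51.100.9": 6}

def accept(src_ip, observed_ttl, tolerance=2):
    expected = hop_table.get(src_ip)
    if expected is None:
        return True                  # unknown source: leave it to other checks
    return abs(inferred_hops(observed_ttl) - expected) <= tolerance

print(accept("203.0.113.7", 50))     # 64 - 50 = 14 hops, matches -> True
print(accept("203.0.113.7", 120))    # 128 - 120 = 8 hops, mismatch -> False
```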
To protect the cloud from insecure API threats it is important for the
developers to design these APIs by following the principles of trusted
computing. Cloud providers must also ensure that all the APIs
implemented in the cloud are designed securely, and check them before
deployment for possible flaws. Strong authentication mechanisms and access
controls must also be implemented to secure data and services from insecure
interfaces and APIs. The Open Web Application Security Project (OWASP)
provides standards and guidelines to develop secure applications that can help
in avoiding such application threats. Moreover, it is the responsibility of
customers to analyze the interfaces and APIs of cloud provider before moving
their data to cloud.
The protection from these threats can be achieved by limiting the hardware and
infrastructure access only to the authorized personnel. The service provider
must implement strong access control, and segregation of duties in the
management layer to restrict administrator access to only his authorized data
and software. Auditing on the employees should also be implemented to check
for their suspicious behavior. Moreover, the employee behavior requirements
should be made part of legal contract, and action should be taken against
anyone involved in malicious activities. To protect data from malicious
insiders, encryption can also be implemented in storage and over public networks.
7.5.10 Protection from SQL Injection, XSS, Google Hacking and Forced
Hacking
In order to secure cloud against various security threats such as: SQL injection,
Cross Site Scripting (XSS), DoS and DDoS attacks, Google Hacking, and
Forced Hacking, different cloud service providers adopt different techniques.
A few standard techniques to detect the above mentioned attacks include:
URL filtering: It is observed that attacks are launched through
various web pages and internet sites and hence filtering of the web-pages
ensures that no such harmful or threat carrying web pages are accessible. Also,
content from undesirable sites can be blocked.
A Google hacking database identifies the various types of information such as:
login passwords, pages containing logon portals, session usage information etc.
Various software solutions such as Web Vulnerability Scanner can be used to
detect the possibility of a Google hack. In order to prevent Google hacking,
users need to ensure that only information that does not affect them is shared
with Google. This prevents the sharing of any sensitive information that may
result in adverse conditions.
On a fundamental level, Identity and Access Management encompasses the
following components:
how roles are identified in a system and how they are assigned to
individuals
protecting the sensitive data within the system and securing the system
itself.
IAM technologies can be used to initiate, capture, record and manage user
identities and their related access permissions in an automated manner. An
organization gains the following IAM benefits:
Pre-Shared Key (PSK): PSK is another type of digital authentication where
the password is shared among users authorized to access the same resources --
think of a branch office Wi-Fi password. This type of authentication is less
secure than individual passwords. A concern with shared passwords like PSK
is that frequently changing them can be cumbersome.
In cloud computing, data is stored remotely and accessed over the Internet.
Because users can connect to the Internet from almost any location and any
device, most cloud services are device- and location-agnostic. Users no longer
need to be in the office or on a company-owned device to access the cloud.
And in fact, remote workforces are becoming more common.
The user's identity, not their device or location, determines what cloud data
they can access and whether they can have any access at all.
With cloud computing, sensitive files are stored in a remote cloud server.
Because employees of the company need to access the files, they do so by
logging in via browser or an app. IAM helps prevent identity-based attacks and
data breaches that come from privilege escalations (when an unauthorized user
has too much access). Thus, IAM systems are essential for cloud computing,
and for managing remote teams. It is a cloud service that controls the
permissions and access for users and cloud resources. IAM policies are sets of
permission policies that can be attached to either users or cloud resources to
authorize what they access and what they can do with it.
The concept that “identity is the new perimeter” goes back to when AWS first
announced their IAM service in 2012. We are now witnessing a renewed focus on
IAM due to the rise of abstracted cloud services and the recent wave of
high-profile data breaches.
Services that don’t expose any underlying infrastructure rely heavily on IAM
for security. Managing a large number of privileged users with access to an
ever-expanding set of services is challenging. Managing separate IAM roles
and groups for these users and resources adds yet another layer of complexity.
Cloud providers like AWS and Google Cloud help customers solve these
problems with tools like the Google Cloud IAM recommender (currently in
beta) and the AWS IAM access advisor. These tools attempt to analyze the
services last accessed by users and resources, and help you find out which
permissions might be over-privileged. These tools indicate that cloud providers
recognize these access challenges, which is definitely a step in the right
direction. However, there are a few more challenges we need to consider.
IAM is a crucial aspect of cloud security. Businesses must look at IAM as a
part of their overall security posture and add an integrated layer of security
across their application lifecycle.
Don’t use root accounts - Always create individual IAM users with
relevant permissions, and don’t give your root credentials to anyone.
Adopt a role-per-group model - Assign policies to groups of users
based on the specific things those users need to do. Don’t “stack” IAM
roles by assigning roles to individual users and then adding them to
groups; this will make it hard for you to understand their effective
permissions. (A hedged sketch of such a group policy follows.)
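As a hedged sketch of the role-per-group idea (the group name, policy name, bucket, and actions below are invented; it assumes the boto3 SDK is installed and AWS credentials are configured), a least-privilege policy can be attached to a group rather than to individual users roughly as follows:

```python
# Sketch: attach a least-privilege inline policy to a group, not to users.
# Group name, policy name, and bucket are invented examples; assumes boto3
# is installed and AWS credentials are configured.
import json
import boto3

iam = boto3.client("iam")

read_only_reports_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::example-reports-bucket",
            "arn:aws:s3:::example-reports-bucket/*",
        ],
    }],
}

iam.create_group(GroupName="report-readers")
iam.put_group_policy(
    GroupName="report-readers",
    PolicyName="read-only-reports",
    PolicyDocument=json.dumps(read_only_reports_policy),
)
# Users added to this group inherit exactly these permissions, which keeps
# their effective access easy to reason about and to audit.
```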
updates, as everything is managed for you by your SECaaS provider and
visible to you through a web-enabled dashboard.
Free Up Resources: When security provisions are managed externally,
your IT teams can focus on what is important to your organization.
SECaaS frees up resources, gives you total visibility through
management dashboards and the confidence that your IT security is
being managed competently by a team of outsourced security
specialists. You can also choose for your IT teams to take control of
security processes if you prefer and manage all policy and system
changes through a web interface.
Continuous Monitoring
Data Loss Prevention (DLP)
Business Continuity and Disaster Recovery (BC/DR or BCDR)
Email Security
Antivirus Management
Spam Filtering
Identity and Access Management (IAM)
Intrusion Protection
Security Assessment
Network Security
Security Information and Event Management (SIEM)
Web Security
Vulnerability Scanning
…………………………………………………………………………………
…………………………………………………………………………………
…………………………………………………………………………………
2) What are the various security aspects that one needs to remember while
opting for Cloud services?
…………………………………………………………………………………
…………………………………………………………………………………
…………………………………………………………………………………
3) How to choose a SECaaS Provider?
…………………………………………………………………………………
…………………………………………………………………………………
…………………………………………………………………………………
7.8 SUMMARY
1. In the 1990s, business and personal data were stored locally and security
was local as well. Data would be located on a PC’s internal storage at home,
and on enterprise servers if you worked for a company.
That said, users are not alone in cloud security responsibilities. Being
aware of the scope of your security duties will help the entire system
stay much safer.
Data security is an aspect of cloud security that involves the technical end
of threat prevention. Tools and technologies allow providers and clients to
insert barriers between the access and visibility of sensitive data. Among
these, encryption is one of the most powerful tools available. Encryption
scrambles your data so that it's only readable by someone who has the
encryption key. If your data is lost or stolen, it will be effectively
unreadable and meaningless. Data transit protections like virtual private
networks (VPNs) are also emphasized in cloud networks.
The biggest risk with the cloud is that there is no perimeter. Traditional
cyber security focused on protecting the perimeter, but cloud environments
are highly connected which means insecure APIs (Application
Programming Interfaces) and account hijacks can pose real problems.
Faced with cloud computing security risks, cyber security professionals
need to shift to a data-centric approach.
Third-party storage of your data and access via the internet each pose their
own threats as well. If for some reason those services are interrupted, your
access to the data may be lost. For instance, a phone network outage could
mean you can't access the cloud at an essential time. Alternatively, a power
outage could affect the data center where your data is stored, possibly with
permanent data loss.
1. Fortunately, there is a lot that you can do to protect your own data in
the cloud. Let’s explore some of the popular methods.
Within the cloud, data is more at risk of being intercepted when it is on the
move. When it's moving between one storage location and another, or
being transmitted to your on-site application, it's vulnerable. Therefore,
end-to-end encryption is the best cloud security solution for critical data.
With end-to-end encryption, at no point is your communication made
available to outsiders without your encryption key.
You can either encrypt your data yourself before storing it on the cloud, or
you can use a cloud provider that will encrypt your data as part of the
service. However, if you are only using the cloud to store non-sensitive
data such as corporate graphics or videos, end-to-end encryption might be
overkill. On the other hand, for financial, confidential, or commercially
sensitive information, it is vital.
If you are using encryption, remember that the safe and secure
management of your encryption keys is crucial. Keep a key backup and
ideally don't keep it in the cloud. You might also want to change your
encryption keys regularly so that if someone gains access to them, they will
be locked out of the system when you make the changeover.
Never leave the default settings unchanged: Using the default settings
gives a hacker front-door access. Avoid doing this to complicate a
hacker’s path into your system.
Never leave a cloud storage bucket open: An open bucket could allow
hackers to see the content just by opening the storage bucket's URL.
If the cloud vendor gives you security controls that you can switch
on, use them. Not selecting the right security options can put you at
risk.
Unfortunately, cloud companies are not going to give you the blueprints to
their network security. This would be equivalent to a bank providing you
with details of their vault, complete with the combination numbers to the
safe.
However, getting the right answers to some basic questions gives you
better confidence that your cloud assets will be safe. In addition, you will
be more aware of whether your provider has properly addressed obvious
cloud security risks. We recommend asking your cloud provider some of the
following questions:
Customer data retention: “What customer data retention policies are
being followed?”
User data retention: “Is my data properly deleted if I leave your
cloud service?”
Access management: “How are access rights controlled?”
You will also want to make sure you’ve read your provider’s terms of
service (TOS). Reading the TOS is essential to understanding if you are
receiving exactly what you want and need.
Be sure to check that you also know all the services used with your
provider. If your files are on Dropbox or backed up on iCloud (Apple's
storage cloud), that may well mean they are actually held on Amazon's
servers. So, you will need to check out AWS as well as the service you
are using directly.
3. Hiring the third party cloud service for the security of your most critical
and sensitive business assets is a massive undertaking. Choosing a SECaaS
provider takes careful consideration and evaluation. Here are some of the
most important considerations when selecting a provider:
Unit 8: Internet of Things: An Introduction
Internet of Things (IoT) is a massive network of physical devices embedded with sensors,
software, electronics, and network which allows the devices to exchange or collect data and
perform certain actions.
IoT aims at extending internet connectivity beyond computers and smartphones to other
devices people use at home or for business. The technology allows devices to get controlled
across network infrastructure remotely. As a result, it cuts down the human effort and paves
the way for accessing the connected devices easily. With autonomous control, the devices are
operable without involving human interaction. IoT makes things virtually smart through AI
algorithms, data collection, and networks enhancing our lives.
Examples: Pet tracking devices, diabetes monitors, AC sensors to adjust the temperature
based on the outside temperature, smart wearables, and more.
IoT comprises things that have unique identities and are connected to the internet. It has been
estimated that by 2020 there would be a total of 50 billion devices/things connected to the
internet. IoT is not limited to just connecting things to the internet; it also allows things to
communicate and exchange data.
Definition: A dynamic global network infrastructure with self-configuring capabilities based on
standard and interoperable communication protocols, where physical and virtual “things” have
identities, physical attributes, and virtual personalities, use intelligent interfaces, and are
seamlessly integrated into the information network, often communicating data associated with
users and their environments.
8.2 Characteristics of IoT
1) Dynamic & Self Adapting: IoT devices and systems may have the capability
to dynamically adapt with the changing contexts and take actions based on their
operating conditions, user‘s context or sensed environment. Eg: the surveillance
system is adapting itself based on context and changing conditions.
2) Self Configuring: allowing a large number of devices to work together to
provide certain functionality.
3) Inter Operable Communication Protocols: support a number of interoperable
communication protocols and can communicate with other devices and also with
infrastructure.
4) Unique Identity: Each IoT device has a unique identity and a unique identifier
(IP address).
5) Integrated into Information Network: that allow them to communicate and
exchange data with other devices and systems.
There are numerous use cases for commercial IoT, including monitoring environmental
conditions, managing access to corporate facilities, and economizing utilities and
consumption in hotels and other large venues. Many Commercial IoT solutions are geared
towards improving customer experiences and business conditions.
2. Industrial IoT (IIoT) is perhaps the most dynamic wing of the IoT industry. Its focus is
on augmenting existing industrial systems, making them both more productive and more
efficient. IIoT deployments are typically found in large-scale factories and manufacturing
plants and are often associated with industries like healthcare, agriculture, automotive,
and logistics. The Industrial Internet is perhaps the most well-known example of IIoT.
System installers, repairers, craftsmen, electricians, plumbers, architects who connect devices
and systems to the Internet for personal use and for commercial and other business uses.
As the Internet of Things (IoT) enables devices to make intelligent decisions that generate
positive business outcomes, it’s the sensors that enable those decisions. As cost and time-to-
market pressures continue to rise, sensors provide greater visibility into connected systems
and empower those systems to react intelligently to changes driven by both external forces
and internal factors. Sensors are the components that provide the actionable insights that
power the IoT and enable organizations to make more effective business decisions. It’s
through this real-time measurement that the IoT can transform an organization’s ability to
react to change.
Wi-Fi was designed for computers, and 4G LTE wireless targeted smartphones and portable
devices. Both have been tremendously successful — and both were shaped by the devices
they were intended for. The same goes for 5G, the first generation of wireless technology
designed with extremely small, low-power, and near-ubiquitous IoT devices in mind. Unlike
Wi-Fi and LTE devices, which we handle and plug into power sources on a daily basis, IoT
sensors will operate autonomously for years at a time, often in inaccessible places, without
recharging or replacement. An explosion of new protocols: the IoT is prompting the
development of a number of different 5G communication standards, not just one or two
network types.
1. IoT Security: Security technologies will be required to protect IoT devices and platforms
from both information attacks and physical tampering, to encrypt their communications, and
to address new challenges such as impersonating "things" or denial-of-sleep attacks that drain
batteries. IoT security will be complicated by the fact that many "things" use simple
processors and operating systems that may not support sophisticated security approaches.
2. IoT Analytics: IoT business models will exploit the information collected by "things" in
many ways, which will demand new analytic tools and algorithms. As data volumes increase
over the next five years, the needs of the IoT may diverge further from traditional analytics.
6. IoT Processors. The processors and architectures used by IoT devices define many of their
capabilities, such as whether they are capable of strong security and encryption, power
consumption, whether they are sophisticated enough to support an operating system,
updatable firmware, and embedded device management agents. Understanding the
implications of processor choices will demand deep technical skills.
7. IoT Operating Systems. Traditional operating systems such as Windows and iOS were
not designed for IoT applications. They consume too much power, need fast processors, and
in some cases, lack features such as guaranteed real-time response. They also have too large a
memory footprint for small devices and may not support the chips that IoT developers use.
Consequently, a wide range of IoT-specific operating systems has been developed to suit
many different hardware footprints and feature needs.
8. Event Stream Processing: Some IoT applications will generate extremely high data rates
that must be analyzed in real time. Systems creating tens of thousands of events per second
are common, and millions of events per second can occur in some situations. To address such
requirements, distributed stream computing platforms have emerged that can process very
high-rate data streams and perform tasks such as real-time analytics and pattern
identification.
9. IoT Platforms. IoT platforms bundle many of the infrastructure components of an IoT
system into a single product. The services provided by such platforms fall into three main
categories:
Low-level device control and operations such as communications, device monitoring and
management, security, and firmware updates; IoT data acquisition, transformation and
management; IoT application development, including event-driven logic, application
programming, visualization, analytics and adapters to connect to enterprise systems.
10.IoT Standards and Ecosystems. Standards and their associated application programming
interfaces (APIs) will be essential because IoT devices will need to interoperate and
communicate, and many IoT business models will rely on sharing data between multiple
devices and organizations. Many IoT ecosystems will emerge, and organizations creating
products may have to develop variants to support multiple standards or ecosystems and be
prepared to update products during their life span as the standards evolve and new standards
and APIs emerge.
8.5 Sensors
Sensors are used for sensing things, devices, etc. A sensor is a device that provides a
usable output in response to a specified measurement. The sensor acquires a physical parameter
and converts it into a signal suitable for processing (e.g., electrical, mechanical, or optical),
using the characteristics of a device or material to detect the presence of a particular physical
quantity. The output of the sensor is a signal which is converted to a human-readable form,
such as changes in characteristics, resistance, capacitance, impedance, etc.
1. Sensitivity is a measure of the change in output of the sensor relative to a unit change
in the input (the measured quantity). Example: The speakers you purchase for your
home entertainment may have a rated sensitivity of 89 dB Sound Pressure Level per
watt at one meter.
2. Resolution is the smallest amount of change in the input that can be detected and
accurately indicated by the sensor. Example: What is the resolution of an ordinary
ruler? of a Vernier Calipers?
3. Linearity is determined by the calibration curve. The static calibration curve plots the
output amplitude versus the input amplitude under static conditions. Its degree of
resemblance to a straight line describes the linearity.
4. Drift is the deviation from a specific reading of the sensor when the sensor is kept at
that value for a prolonged period of time. The zero drift refers to the change in sensor
output if the input is kept steady at a level that (initially) yields a zero reading.
Similarly, the full -scale drift is the drift if the input is maintained at a value which
originally yields a full scale deflection. Reasons for drift may be extraneous, such as
changes in ambient pressure, humidity, temperature etc., or due to changes in the
constituents of the sensor itself, such as aging, wear etc.
5. The range of a sensor is determined by the allowed lower and upper limits of its input
or output. Usually the range is determined by the accuracy required. Sometimes the
range may simply be determined by physical limitations, for example a pocket ruler.
The dynamic characteristics of a sensor represent the time response of the sensor system.
Knowledge of these is essential to fruitfully use a sensor. Important common dynamic
responses of sensors include rise time, delay time, peak time, settling time, percentage error,
and steady-state error.
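To make the static characteristics concrete, the short sketch below takes an assumed calibration data set for a hypothetical temperature sensor (input in degrees C, output in volts) and estimates its sensitivity as the slope of a least-squares line, plus a simple non-linearity figure.

```python
# Estimate sensitivity and linearity from an assumed calibration curve
# of a hypothetical temperature sensor (input in degrees C, output in volts).

inputs = [0, 10, 20, 30, 40, 50]                 # measured quantity
outputs = [0.02, 0.21, 0.41, 0.60, 0.79, 1.01]   # sensor output (assumed data)

n = len(inputs)
mean_x = sum(inputs) / n
mean_y = sum(outputs) / n

# Least-squares slope of the calibration curve = sensitivity (volts per degree C).
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(inputs, outputs)) / \
        sum((x - mean_x) ** 2 for x in inputs)
intercept = mean_y - slope * mean_x

# Non-linearity: worst-case deviation of the data from the fitted straight
# line, expressed as a percentage of the full-scale output span.
max_dev = max(abs(y - (slope * x + intercept)) for x, y in zip(inputs, outputs))
full_scale = max(outputs) - min(outputs)

print(f"sensitivity   ~ {slope:.4f} V per degree C")
print(f"non-linearity ~ {100 * max_dev / full_scale:.2f} % of full scale")
```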
Temperature sensors, Pressure sensors, Motion sensors, Level sensors, Image sensors,
Proximity sensors, Water quality sensors, Chemical sensors, Gas sensors, Smoke sensors,
Infrared (IR) sensors, Humidity sensors, etc.
A description of each of these sensors is provided below.
Temperature sensors
Temperature sensors detect the temperature of the air or of a physical object and convert that
temperature level into an electrical signal that can be calibrated to accurately reflect the
measured temperature. These sensors could monitor the temperature of the soil to help with
agricultural output or the temperature of a bearing operating in a critical piece of equipment
to sense when it might be overheating or nearing the point of failure.
Pressure sensors
Pressure sensors measure the pressure or force per unit area applied to the sensor and can
detect things such as atmospheric pressure, the pressure of a stored gas or liquid in a sealed
system such as tank or pressure vessel, or the weight of an object.
Motion sensors
Motion sensors or detectors can sense the movement of a physical object by using any one of
several technologies, including passive infrared (PIR), microwave detection, or ultrasonic,
which uses sound to detect objects. These sensors can be used in security and intrusion
detection systems, but can also be used to automate the control of doors, sinks, air
conditioning and heating, or other systems.
Level sensors
Level sensors translate the level of a liquid relative to a benchmark normal value into a
signal. Fuel gauges display the level of fuel in a vehicle’s tank, as an example, which
provides a continuous level reading. There are also point level sensors, which give a go/no-go
or digital representation of the level of the liquid. Some automobiles have a light that
illuminates when the fuel level tank is very close to empty, acting as an alarm that warns the
driver that fuel is about to run out completely.
Image sensors
Image sensors function to capture images to be digitally stored for processing. License plate
readers are an example, as well as facial recognition systems. Automated production lines can
use image sensors to detect issues with quality such as how well a surface is painted after
leaving the spray booth.
Proximity sensors
Proximity sensors can detect the presence or absence of objects that approach the sensor
through a variety of different technology designs.
Water quality sensors
The importance of water to human beings, not only for drinking but as a key ingredient needed
in many production processes, dictates the need to be able to sense and measure parameters
around water quality. Some examples of what is sensed and monitored include:
Chemical presence (such as chlorine levels or fluoride levels), Oxygen levels (which may
impact the growth of algae and bacteria), Electrical conductivity (which can indicate the level
of ions present in water), pH level (a reflection of the relative acidity or alkalinity of the
water), and Turbidity levels (a measurement of the amount of suspended solids in water).
Chemical sensors
Chemical sensors are designed to detect the presence of specific chemical substances which
may have inadvertently leaked from their containers into spaces that are occupied by
personnel and are useful in controlling industrial process conditions.
Gas sensors
Related to chemical sensors, gas sensors are tuned to detect the presence of combustible,
toxic, or flammable gas in the vicinity of the sensor. Examples of specific gases that can be
detected include:
Bromine (Br2), Carbon Monoxide (CO), Chlorine (Cl2), Chlorine Dioxide (ClO2), Hydrogen
Cyanide (HCN), Hydrogen Peroxide (H2O2), Hydrogen Sulfide (H2S), Nitric Oxide (NO),
Nitrogen Dioxide (NO2), Ozone (O3), etc.
Smoke sensors
Smoke sensors or detectors pick up the presence of smoke conditions which could be an
indication of a fire typically using optical sensors (photoelectric detection) or ionization
detection.
Acceleration sensors
8.7 Actuators
1. Servo Motors:
A servo is a small device that incorporates a two-wire DC motor, a gear train, a potentiometer,
an integrated circuit, and a shaft (output spline).
2. Stepper Motors:
Stepper motors are DC motors that move in discrete steps. They have multiple coils that
are organized in groups called “phases”. By energizing each phase in sequence, the motor
will rotate, one step at a time. With a computer controlled stepping, you can achieve very
precise positioning and/or speed control.
3. DC Motors:
Direct Current (DC) motors are the most common actuators used in projects. They are simple,
cheap, and easy to use. DC motors convert electrical energy into mechanical energy. They also
come in different sizes.
4. Linear actuator:
A linear actuator is an actuator that creates motion in a straight line, in contrast to the circular
motion of a conventional electric motor. Linear actuators are used in machine tools and
industrial machinery, in computer peripherals such as disk drives and printers, in valves and
dampers, and in many other places where linear motion is required
5. Relay:
6. Solenoid:
A solenoid is simply a specially designed electromagnet. Solenoids are inexpensive, and their
use is primarily limited to on-off applications such as latching, locking, and triggering. They
are frequently used in home appliances (e.g. washing machine valves), office equipment (e.g.
copy machines), automobiles (e.g. door latches and the starter solenoid), pinball machines
(e.g., plungers and bumpers), and factory automation
Raspberry Pi
The Raspberry Pi is a very cheap computer that runs Linux, but it also provides a set
of GPIO (general purpose input/output) pins that allow you to control electronic
components for physical computing and explore the Internet of Things (IoT).
Raspberry Pi was basically introduced in 2006.
It is particularly designed for educational use and intended for Python.
A Raspberry Pi is a small, credit-card-sized single-board computer developed in the
United Kingdom (U.K.) by the Raspberry Pi Foundation.
All models feature a Broadcom system on a chip (SoC) with an integrated ARM-
compatible central processing unit (CPU) and on-chip graphics processing unit
(GPU).
Processor speed ranges from 700 MHz to 1.4 GHz for the Pi 3 Model B+, or 1.5 GHz
for the Pi 4; on-board random-access memory (RAM) ranges from 256 MB to 1 GB,
with up to 4 GB available on the Pi 4.
Secure Digital (SD) cards in Micro SDHC form factor (SDHC on early models) are
used to store the operating system and program memory.
The boards have one to five USB ports. For video output, HDMI and composite video
are supported, with a standard 3.5 mm tip-ring-sleeve jack for audio output.
Lower-level output is provided by a number of GPIO pins, which support common
protocols like I²C. The B-models have an 8P8C Ethernet port, and the Pi 3 and Pi Zero
W have on-board Wi-Fi and Bluetooth. (A short GPIO sketch follows this list.)
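A minimal sketch of physical computing through the GPIO pins is shown below; it assumes a Raspberry Pi running Raspberry Pi OS with the RPi.GPIO library available and an LED (with a series resistor) wired to BCM pin 17, a pin chosen arbitrarily for the example.

```python
# Blink an LED from a Raspberry Pi GPIO pin.
# Assumes RPi.GPIO is available and an LED + resistor are wired to BCM pin 17.
import time
import RPi.GPIO as GPIO

LED_PIN = 17                          # BCM numbering; arbitrary choice here

GPIO.setmode(GPIO.BCM)
GPIO.setup(LED_PIN, GPIO.OUT)

try:
    for _ in range(10):
        GPIO.output(LED_PIN, GPIO.HIGH)   # LED on
        time.sleep(0.5)
        GPIO.output(LED_PIN, GPIO.LOW)    # LED off
        time.sleep(0.5)
finally:
    GPIO.cleanup()                    # release the pins on exit
```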
8.9 IoT Architecture
The Reference Model introduced in 2014 by Cisco, IBM, and Intel at the 2014 IoT World
Forum has as many as seven layers. According to an official press release by Cisco forum
host, the architecture aims to “help educate CIOs, IT departments, and developers on
deployment of IoT projects, and accelerate the adoption of IoT.”
In a simplified view, these can be grouped into the following layers:
1. The perception layer hosting smart things;
2. The connectivity or transport layer transferring data from the physical layer to the
cloud and vice versa via networks and gateways;
3. The processing layer employing IoT platforms to accumulate and manage all data
streams; and
4. The application layer delivering solutions like analytics, reporting, and device control
to end users.
Perception layer: converting analog signals into digital data and vice versa
The initial stage of any IoT system embraces a wide range of “things” or endpoint devices
that act as a bridge between the real and digital worlds. They vary in form and size, from tiny
silicon chips to large vehicles. By their functions, IoT things can be divided into the
following large groups.
Sensors such as probes, gauges, meters, and others. They collect physical parameters like
temperature or humidity, turn them into electrical signals, and send them to the IoT system.
IoT sensors are typically small and consume little power.
Actuators, translating electrical signals from the IoT system into physical actions.
Machines and devices connected to sensors and actuators or having them as integral parts.
Connectivity layer: enabling data transmission
The second level is in charge of all communications across devices, networks, and cloud
services that make up the IoT infrastructure. The connectivity between the physical layer and
the cloud is achieved in two ways:
directly, using TCP or UDP/IP stack;
via gateways — hardware or software modules performing translation between different
protocols as well as encryption and decryption of IoT data.
The communications between devices and cloud services or gateways involve different
networking technologies.
Ethernet connects stationary or fixed IoT devices like security and video cameras,
permanently installed industrial equipment, and gaming consoles.
WiFi, the most popular technology of wireless networking, is a great fit for data-intensive
IoT solutions that are easy to recharge and operate within a small area. A good example of
use is smart home devices connected to the electrical grid.
NFC (Near Field Communication) enables simple and safe data sharing between two
devices over a distance of 4 inches (10 cm) or less.
Bluetooth is widely used by wearables for short-range communications. To meet the needs of
low-power IoT devices, the Bluetooth Low-Energy (BLE) standard was designed. It transfers
only small portions of data and doesn’t work for large files.
LPWAN (Low-power Wide-area Network) was created specifically for IoT devices. It
provides long-range wireless connectivity on low power consumption with a battery life of
10+ years. Sending data periodically in small portions, the technology meets the requirements
of smart cities, smart buildings, and smart agriculture (field monitoring).
ZigBee is a low-power wireless network for carrying small data packages over short
distances. The outstanding thing about ZigBee is that it can handle up to 65,000 nodes.
Created specifically for home automation, it also works for low-power devices in industrial,
scientific, and medical sites.
Cellular networks offer reliable data transfer and nearly global coverage. There are two
cellular standards developed specifically for IoT things. LTE-M (Long Term Evolution for
Machines) enables devices to communicate directly with the cloud and exchange high
volumes of data. NB-IoT or Narrowband IoT uses low-frequency channels to send small data
packages.
Edge or fog computing layer: reducing system latency
This level is essential for enabling IoT systems to meet the speed, security, and scale
requirements of the 5th generation mobile network or 5G. The new wireless standard
promises faster speeds, lower latency, and the ability to handle many more connected
devices, than the current 4G standard.
The idea behind edge or fog computing is to process and store information as early and as
close to its sources as possible. This approach allows for analyzing and transforming high
volumes of real-time data locally, at the edge of the networks. Thus, you save the time and
other resources that otherwise would be needed to send all data to cloud services. The result
is reduced system latency that leads to real-time responses and enhanced performance.
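As a hedged sketch of this idea, the snippet below shows an edge node that aggregates a window of raw sensor readings locally and forwards only a compact summary (or an immediate alert) upstream; the threshold, function names, and sample values are invented for the illustration.

```python
# Edge-side pre-processing: summarize raw readings locally and send only
# the summary (or an urgent alert) upstream. Threshold is an assumed value.

ALERT_THRESHOLD = 80.0      # e.g. temperature in degrees C

def process_window(readings):
    """Reduce a window of raw samples to one small summary record."""
    summary = {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "avg": sum(readings) / len(readings),
    }
    # React locally and immediately if something looks dangerous, instead
    # of waiting for a round trip to the cloud.
    summary["alert"] = summary["max"] >= ALERT_THRESHOLD
    return summary

def send_to_cloud(record):
    # Placeholder for the real uplink (MQTT/HTTP); here we just print it.
    print("uplink:", record)

window = [21.3, 21.5, 22.0, 21.8, 21.9, 22.1]   # raw samples gathered locally
send_to_cloud(process_window(window))            # one small record, not six
```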
Processing layer: making raw data useful
The processing layer accumulates, stores, and processes data that comes from the previous
layer. All these tasks are commonly handled via IoT platforms and include two major stages.
Data accumulation stage
The real-time data is captured via an API and put at rest to meet the requirements of non-real-
time applications. The data accumulation component stage works as a transit hub between
event-based data generation and query-based data consumption.
Among other things, the stage defines whether data is relevant to the business requirements
and where it should be placed. It saves data to a wide range of storage solutions, from data
lakes capable of holding unstructured data like images and video streams to event stores and
telemetry databases. The overall goal is to sort out a large amount of diverse data and store it in
the most efficient way.
Data abstraction stage
Here, data preparation is finalized so that consumer applications can use it to generate
insights. The entire process involves the following steps:
combining data from different sources, both IoT and non-IoT, including ERM, ERP, and
CRM systems; reconciling multiple data formats; and aggregating data in one place or
making it accessible regardless of location through data virtualization.
Similarly, data collected at the application layer is reformatted here for sending to the
physical level so that devices can “understand” it.
Together, the data accumulation and abstraction stages veil details of the hardware,
enhancing the interoperability of smart devices. What’s more, they let software developers
focus on solving particular business tasks — rather than on delving into the specifications of
devices from different vendors.
Application layer: addressing business requirements
At this layer, information is analyzed by software to give answers to key business questions.
There are hundreds of IoT applications that vary in complexity and function, using different
technology stacks and operating systems. Some examples are:
device monitoring and control software, mobile apps for simple interactions, business
intelligence services, and analytic solutions using machine learning.
Currently, applications can be built right on top of IoT platforms that offer software
development infrastructure with ready-to-use instruments for data mining, advanced
analytics, and data visualization. Otherwise, IoT applications use APIs to integrate with
middleware.
Applications of IoT
1. IoT Wearables
Wearable technology is a hallmark of IoT applications and probably is one of the earliest
industries to have deployed the IoT at its service. We happen to see Fit Bits, heart rate
monitors and smart watches everywhere these days.
One of the lesser-known wearables includes the Guardian glucose monitoring device. The
device is developed to aid people suffering from diabetes. It detects glucose levels in the
body, using a tiny electrode called glucose sensor placed under the skin and relays the
information via Radio Frequency to a monitoring device.
2. IoT Applications – Smart Home Applications
When we talk about IoT Applications, Smart Homes are probably the first thing that we think
of. The best example I can think of here is Jarvis, the AI home automation employed by
Mark Zuckerberg. There is also Allen Pan’s Home Automation System, where functions in
the house are actuated by the use of a string of musical notes.
The resources that current medical research uses lack critical real-world information. It
mostly uses leftover data, controlled environments, and volunteers for medical
examination. IoT opens ways to a sea of valuable data through analysis, real-time field data,
and testing.
The Internet of Things also improves the current devices in power, precision, and
availability. IoT focuses on creating systems rather than just equipment.
Security Challenges
Regulation Challenges
Compatibility Challenges
Bandwidth Challenges
Customer Expectation Challenges
Security Challenges:
Rapid advances in both technology and the complexity of cyber-attacks have meant that the
risk of security breaches has never been higher. There is an increased responsibility
for software developers to create the most secure applications possible to defend against this
threat as IoT devices are often seen as easy targets by hackers.
Regulation Challenges
We’ve already touched on how GDPR has impacted the IoT industry, however, as the
industry is still relatively new and young, it generally lacks specific regulation and oversight,
which is required to ensure that all devices are produced with a suitable level of protection
and security.
Compatibility Challenges
At the core of the IoT concept, all devices must be able to connect and communicate with
each other for data to be transferred.
The IoT industry currently lacks any compatibility standards, meaning that many devices
could all run on different standards resulting in difficulties communicating with one another
effectively.
Bandwidth Challenges
Perhaps at no surprise, devices and applications that rely on the ability to communicate with
each other constantly to work effectively tend to use a lot of data at once, leading to
bandwidth constraints for those using many devices at once.
Combine this with existing demands for data and broadband in the typical house, and you can
quickly see how data and bandwidth limitations can be a challenge.
Customer Expectation Challenges
Arguably the biggest hurdle for the industry relates to customer perception. For anything new
to be adopted by the masses, it has to be trusted completely.
For the IoT industry, this is a continuously evolving challenge, as it relies on the ability to
actively combat security threats and reassure the general consumer market that the devices
are both safe to use and secure enough to hold vast quantities of sensitive data.
UNIT 9 IoT NETWORKING AND CONNECTIVITY
TECHNOLOGIES
9.1 Introduction
9.2 Objectives
9.3 M2M and IoT Technology
9.4 Components of IoT Implementation
9.5 Gateway Prefix Allotment
9.6 Impact of Mobility on Addressing
9.7 Multihoming
9.8 IoT Identification and Data Protocols
IPv4, IPv6, MQTT, CoAP, XMPP, AMQP
9.9 Connectivity Technologies
IEEE 802.15.4, ZigBee, 6LoWPAN, RFID, NFC, Bluetooth, Z-wave
9.10 Summary
9.1 INTRODUCTION
9.2 OBJECTIVES
After going through this unit, you should be able to:
9.3 M2M AND IoT TECHNOLOGY
Various components that make up an M2M system are sensors, RFID (Radio Frequency
Identification), a Wi-Fi or cellular network, and computing software which helps networked
devices interpret data and make decisions. These M2M applications can translate data, which
in turn can trigger automated actions. Various benefits offered by M2M are -
M2M Applications
Sensor telemetry is one of the first applications of M2M communication. It has been used since
the last century for transmitting operational data. Earlier, people used telephone lines, and then
radio waves, to transmit measurements of factors like temperature and pressure for remote
monitoring. Another example of M2M communication is the ATM. An ATM routes information
regarding a transaction request to the appropriate bank. The bank in turn, through its system,
approves it and allows the transaction to complete. It also has applications in supply chain
management (SCM),
warehouse management systems (WMS), Utility companies, etc. Fig 1 shows various
applications of M2M.
Fig 1. Applications of M2M
Internet of Things or IoT, is a technology that has evolved from M2M by increasing the
capabilities at both consumers and enterprise level. It expands the concept of M2M by creating
large networks of devices in which devices communicate with one another through cloud
networking platforms. It allows users to create high performance, fast and flexible networks that
can connect a variety of devices. Table 1 summarizes the differences between M2M and IoT
devices.
IoT is a network of physical objects, called “things”, embedded with hardware such as sensors,
actuators, or software for exchanging data with other devices over the internet. With the help of
this technology, it is possible to connect any kind of device, from simple household objects such
as kitchen appliances, baby monitors, ACs, and TVs to other objects such as cars, traffic lights,
and web cameras. Connecting these objects to the internet through embedded devices allows
seamless communication between things, processes, or people. Some applications of IoT devices
are the smart home voice assistant Alexa and smart traffic light systems.
When connected to cloud platforms, IoT devices can support a huge variety of industrial and
business applications. As the number of IoT devices increases, the problems of storing, accessing
and processing their data also grow. IoT used together with cloud technology provides solutions
to these problems through the huge infrastructure offered by cloud providers.
Table 1. Difference between M2M and IoT devices

M2M | IoT
Point-to-point connection establishment | Devices are connected through the network and also support connecting to global cloud networks
Makes use of traditional communication protocols | Makes use of internet protocols like HTTP, FTP, etc.
Generally may not rely on an internet connection | Generally relies on an internet connection
9.4 COMPONENTS OF IoT IMPLEMENTATION

1. Sensors
Sensors are devices that are capable of collecting data from the environment. There are
various types of sensors available – temperature sensors, pressure sensors, RFID tags,
light intensity detectors, electromagnetic sensors, etc.
2. Network
Data collected from sensors are passed over the network for computations to the cloud or
processing nodes. Depending upon the scale, they may be connected over LAN, MAN or
WAN. They can also be connected through wireless networks like- Bluetooth, ZigBee,
Wi-Fi, etc.
3. Analytics
The process of generating useful insights from the data collected by sensors is called
analytics. Analytics when performed in real time, can have numerous applications and
can make the IoT system efficient.
4. Action
Information obtained after analytics must be either passed to the user using some user
interface, messages, alerts, etc; or may also trigger some actions with the help of
actuators. Actuators are the devices that perform some action depending on the command
given to them over the network.
Fig 2 shows implementation of IoT. Data captured by sensors are passed on to the cloud servers
over the internet via gateways. Cloud servers in turn perform analytics and pass on the decisions
or commands to actuators.
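The sense-analyse-act loop just described can be illustrated with a minimal Python sketch. The sensor reading, threshold and actuator command below are hypothetical placeholders, not part of any specific IoT platform.

    import random
    import time

    def read_temperature_sensor():
        # Hypothetical sensor read; a real deployment would query actual hardware.
        return 20.0 + random.random() * 15.0

    def analyse(reading, threshold=30.0):
        # Simple analytics step: decide whether an action is needed.
        return "TURN_ON_FAN" if reading > threshold else "NO_ACTION"

    def actuate(command):
        # Hypothetical actuator call; in practice this would travel over the network.
        print("Actuator command:", command)

    if __name__ == "__main__":
        for _ in range(3):
            value = read_temperature_sensor()    # 1. sense
            decision = analyse(value)            # 2. analytics
            actuate(decision)                    # 3. action
            time.sleep(1)                        # wait for the next sampling interval

In a real deployment the analytics step would typically run on a fog or cloud server rather than on the sensing device itself.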
9.5 GATEWAY PREFIX ALLOTMENT

Gateways are networking devices that connect IoT devices like sensors or controllers to the Cloud.
In other words, data generated by IoT devices is transferred to cloud servers through IoT gateways.
The number of IoT devices is increasing at an exponential rate. These IoT devices are connected
in a LAN or a WAN. A number of IoT devices within a building, communicating with a gateway
installed in the same building over a Wi-Fi connection, can be called an IoT LAN. Geographically
distributed LAN segments are interconnected and connected to the internet via gateways to form
an IoT WAN. Devices connected within a LAN have locally unique IP addresses, but these addresses
may be the same as those of devices in another LAN.
Gateways connect IoT LANs and WANs together and are responsible for forwarding packets
between them at the IP layer. Since a large number of devices are connected, address space
needs to be conserved. Each connected device needs a unique address. IP addresses allocated to
devices within a gateway's jurisdiction are valid only in its domain. Same addresses may be
allocated in another gateway’s domain. Hence to maintain uniqueness, each gateway is assigned
a unique network prefix. It is used for global identification of gateways. This unique identifier
removes the need of allocating a unique IP address to each and every device connected to the
network, hence saves a lot of address space.
Gateway prefix allotment is shown in Fig 3. Here two gateway domains are shown. Both of them
are connected to the internet via a router. This router has its own address space and provides
connectivity to the internet. The router assigns a unique gateway prefix to each of the gateways,
and packets are forwarded from the gateways to the internet via the router.
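As an illustration of how a unique gateway prefix keeps device addresses globally distinguishable, the sketch below combines an assumed IPv6 gateway prefix with locally unique interface identifiers using Python's standard ipaddress module. The prefix and interface IDs are made-up example values, not taken from any real deployment.

    import ipaddress

    # Assumed /64 prefix allotted to one gateway (IPv6 documentation prefix, example only).
    gateway_prefix = ipaddress.ip_network("2001:db8:ab:1::/64")

    # Locally unique interface identifiers of two devices behind this gateway.
    local_ids = [0x1, 0x2]

    for local_id in local_ids:
        # Global address = network part of the gateway prefix + local interface ID.
        address = ipaddress.IPv6Address(int(gateway_prefix.network_address) | local_id)
        print(address)   # e.g. 2001:db8:ab:1::1 and 2001:db8:ab:1::2

The same local IDs could be reused behind a different gateway prefix without any clash, which is exactly why only the gateways need globally unique prefixes.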
Fig 3: Gateway prefix allotment
(Source: Reference 1)
9.6 IMPACT OF MOBILITY ON ADDRESSING

When an IoT device moves from one location to another in a network, its address is affected.
The network prefix allocated to a gateway changes due to mobility, so the WAN addresses
allocated to devices through gateways change without affecting the IoT LAN addresses. This is
possible because addresses allocated within a gateway's domain are unique and are not affected
by the mobility of devices. These unique local addresses (ULAs) are maintained independently of
global addresses. To give these ULAs internet access, they are connected to an application-layer
proxy which routes them globally.
Gateways are attached to a remote anchor point by using protocols like IPv6. These remote
anchor points are immune to changes of network prefix. It is also possible for the nodes in a
network to establish direct connection with remote anchor points to access the internet directly
using tunneling. Fig 4 shows remote anchor points having access to gateways.
Fig 4: Remote anchor point
(Source: Reference 1)
9.7 MULTIHOMING
The practice of connecting a host to more than one network is called multihoming. This can
increase reliability and performance. Various ways to perform multihoming are –
1. Host multihoming
In this type of multihoming, a single host can be connected to two or more networks, for
example a computer connected to both a local network and a Wi-Fi network.
2. Classical multihoming
In this type of multihoming, a single network is connected to multiple providers. The edge
router communicates with the providers using dynamic routing protocols, which can
recognize failures and reconfigure routing tables without hosts being aware of it. It
requires address space recognized by all providers, hence it is costly.
9.8 IoT IDENTIFICATION AND DATA PROTOCOLS
IoT devices are diverse in their architecture, and their use cases can scale from single-device
deployments to massive cross-platform deployments. There are various types of communication
protocols that allow communication between these devices. Some of the protocols are given
below.
IPv4
Internet Protocol version 4 (IPv4) is a network layer protocol used to provide addresses to hosts in a
network. It is a widely used communication protocol for different kinds of networks. It is a
connectionless protocol that makes use of packet switching. It gives a 32-bit address to each host,
and its address space is divided into five classes – A, B, C, D, and E. It can provide only about 4.3
billion addresses, which is not sufficient for the growing number of IoT devices. It allows data to be
encrypted but does not limit access to data hosted on the network.
IPv6
As the total number of addresses provided by IPv4 is not sufficient, especially for IoT devices,
Internet Protocol version 6 (IPv6) was introduced. It is an upgraded version of IPv4. It uses 128
bits to address a host, which anticipates future growth and provides relief from the shortage of
network addresses. It gives better performance than IPv4 and also ensures privacy and data
integrity. It can be automatically configured and has built-in support for authentication. Some of the
differences between IPv4 and IPv6 are shown in Table 2.
IPv4 | IPv6
Possible number of addresses is 2^32 | Possible number of addresses is 2^128
It supports broadcasting | It supports multicasting
MQTT
Message Queuing Telemetry Transport (MQTT) is a widely used, lightweight publish-subscribe
messaging protocol. It is used in conjunction with TCP/IP and is designed for battery-powered
devices. Its model is based on publishers, subscribers and a broker. Publishers are typically
lightweight sensors, and subscribers are applications which receive data from the publishers.
Subscribers subscribe to a topic, and messages published to a topic are distributed by the broker.
A publisher collects data and sends it to subscribers through the broker; the broker, after receiving,
filtering and making decisions on messages, forwards them to the subscribers. The broker also
ensures security by authorizing subscribers and publishers. Fig 5 shows the working of MQTT.
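A minimal sketch of this publish-subscribe flow is given below. It assumes the third-party paho-mqtt client library (1.x API) and uses a hypothetical broker host and topic name; these are illustrative values, not part of the MQTT standard itself.

    import paho.mqtt.client as mqtt   # assumed third-party library: paho-mqtt (1.x API)

    BROKER = "broker.example.com"     # hypothetical broker address
    TOPIC = "building1/room2/temperature"

    def on_connect(client, userdata, flags, rc):
        # Subscribe once the connection to the broker is established.
        client.subscribe(TOPIC)

    def on_message(client, userdata, msg):
        # Called by the client for every message the broker delivers on the topic.
        print(msg.topic, msg.payload.decode())

    client = mqtt.Client()
    client.on_connect = on_connect
    client.on_message = on_message
    client.connect(BROKER, 1883, 60)

    # A publisher (for example, a temperature sensor) would simply do:
    client.publish(TOPIC, "22.5")

    client.loop_forever()   # keep the subscriber running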
CoAP
The Constrained Application Protocol (CoAP) is a web transfer protocol that translates the HTTP
model for use with constrained devices and network environments. It is intended for low-powered
devices and allows low-power sensors to interact with RESTful services. It makes use of UDP for
establishing communication between endpoints and allows data to be transmitted to multiple hosts
using low bandwidth.
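A simple CoAP GET request could look like the hedged sketch below, assuming the third-party aiocoap library (whose import surface may differ slightly between versions) and a hypothetical sensor resource URI.

    import asyncio
    from aiocoap import Context, Message, GET   # assumed third-party library: aiocoap

    async def main():
        # Create a CoAP client context (CoAP runs over UDP).
        protocol = await Context.create_client_context()

        # Hypothetical CoAP resource exposed by a low-power sensor node.
        request = Message(code=GET, uri="coap://sensor.example.com/temperature")

        response = await protocol.request(request).response
        print("Response code:", response.code)
        print("Payload:", response.payload.decode())

    asyncio.run(main())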
XMPP
Extensible messaging and presence protocol (XMPP) enables real time exchange of extensible
data between network entities. It is a communication protocol based on XML i.e. extensible
markup language. It is an open standard hence anyone can implement these services. It also
supports M2M communication across a variety of networks. It can be used for instant
messaging, multi-party chat, video calls, etc.
AMQP
The Advanced Message Queuing Protocol (AMQP) is an application layer, message-oriented
protocol. It is an open standard and is efficient, multi-channel, portable and secure. It is fast and
guarantees delivery, along with acknowledgement of received messages. It can be used for both
point-to-point and publish-subscribe messaging, and it is used for messaging in client-server
environments. It also supports multi-client environments and helps servers handle requests
faster.
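The sketch below shows a minimal point-to-point publish over AMQP, assuming the third-party pika client library and an AMQP (RabbitMQ-compatible) broker running locally; the queue name and payload are hypothetical.

    import pika   # assumed third-party library: pika (AMQP 0-9-1 client)

    # Connect to an AMQP broker assumed to be running on localhost.
    connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
    channel = connection.channel()

    # Declare a queue (idempotent) and publish one sensor reading to it.
    channel.queue_declare(queue="sensor.readings")
    channel.basic_publish(exchange="", routing_key="sensor.readings", body="22.5")

    connection.close()

A consumer would declare the same queue and register a callback to acknowledge and process each delivered message.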
9.9 CONNECTIVITY TECHNOLOGIES

IoT devices need to be connected in order to work. Various technologies used to establish
connections between devices are discussed in this section.
IEEE 802.15.4
It is an IEEE standard protocol used to establish wireless personal area networks (WPAN). It is
used for providing low cost, low speed, ubiquitous networks between devices. It is also known as
Low-Rate wireless Personal Area Network (LR-WPAN) standard. It makes use of the first two
layers (Physical and MAC layers) of the network stack and operates in ISM band. These
standards are also used with communication protocols of higher levels like- ZigBee, 6LoWPAN,
etc.
6LoWPAN
IPv6 over Low-power Wireless Personal Area Network (6LoWPAN) is a standard for wireless
communication. It was the first standard created for IoT. It allows small, low-power IoT devices
with limited processing capabilities to have direct connectivity with IP-based servers on the
internet. It also allows IPv6 packets to be transmitted over an IEEE 802.15.4 wireless network.
ZigBee
It is a wireless technology based on IEEE 802.15.4 used to address needs of low-power and low-
cost IoT devices. It is used to create low cost, low power, low data rate wireless ad-hoc
networks. It is resistant to unauthorized reading and communication errors but provides low
throughput. It is easy to install, implement and supports a large number of nodes to be connected.
It can be used for short range communications only.
NFC
Near Field Communication (NFC) is a protocol used for short-distance communication between
devices. It is based on RFID technology but has a lower transmission range (of about 10 cm). It
is used for identification of documents or objects and allows contactless transmission of data. It
has a shorter setup time than Bluetooth and provides better security.
Bluetooth
It is one of the most widely used types of wireless PAN, used for short range transmission of data. It
makes use of short range radio frequencies, provides a data rate of approximately 2.1 Mbps and
operates at 2.45 GHz. It is capable of low cost and low power transmission over short distances. Its
initial version 1.0 supported speeds of up to 732 kbps, while its latest version, 5.2, can work up to a
400 m range with a 2 Mbps data rate.
Z-Wave
It is one of the standards available for wireless networks. It is interoperable and uses low-powered
radio frequency communication. It is used for connecting to smart devices while consuming low
power. Z-Wave devices allow IoT devices to be controlled over the internet. It is generally used for
applications like home automation. It supports data rates of up to 100 kbps, and also supports
encryption and multi-channel operation.
RFID
A radio frequency identification (RFID) tag is an electronic device consisting of an antenna and a
small chip. The chip is generally capable of carrying up to 2000 bytes of data. It is used to give a
unique identification to an object. An RFID system is composed of a reading device and RFID tags.
RFID tags are used to store data and identification information and are attached to the object to be
tracked; the reader detects the presence of an RFID tag when the object passes near it.
9.10 SUMMARY
In this unit, M2M and IoT technologies are discussed in detail. Machine-to-Machine (M2M) is a
technology that allows connectivity between networking devices. IoT technology expands the
concept of M2M by creating large networks of devices in which devices communicate with one
another through cloud networking platforms. The components involved in implementing IoT are
sensors, network, analytics and actions (actuators). Some of the existing IoT identification and
data protocols are IPv4, IPv6, MQTT, XMPP, etc. Existing connectivity technologies used for
connecting devices are Bluetooth, ZigBee, IEEE 802.15.4, RFID, etc.
References
1. "Internet of Things", Dr. Jeeva Jose, 2018, Khanna Book Publishing Co. (P) Ltd. ISBN:
978-93-86173-59-1.
Solutions to Check your Progress 1
1. IoT is a network of physical objects, called "Things", embedded with hardware such as
sensors or actuators and software, for exchanging data with other devices over the internet.
With the help of this technology, it is possible to connect simple household objects such as
kitchen appliances, baby monitors, ACs and TVs to other objects.
M2M | IoT
Point-to-point connection establishment | Devices are connected through the network and also support connecting to global cloud networks
Makes use of traditional communication protocols | Makes use of internet protocols like HTTP, FTP, etc.
Generally may not rely on an internet connection | Generally relies on an internet connection
a) Action - Information obtained after analytics must be either passed to the user using
some user interface, messages, alerts, etc; or may also trigger some actions with the
help of actuators.
1. Gateways connect IoT LANs and WANs together. It is responsible for forwarding
packets between them on the IP layer. Since a large number of devices are connected,
address space needs to be conserved. Each connected device needs a unique address. IP
addresses allocated to devices within a gateway's jurisdiction are valid only in its domain.
Same addresses may be allocated in another gateway’s domain. Hence to maintain
uniqueness, each gateway is assigned a unique network prefix. It is used for global
identification of gateways.
2. Both IPv4 and IPv6 are network layer protocols. Some of the differences are –
IPv4 | IPv6
Possible number of addresses is 2^32 | Possible number of addresses is 2^128
b) 6LoWPAN – IPv6 over Low-power Wireless Personal Area Network is a standard that allows
small, low-power IoT devices with limited processing capabilities to have direct connectivity
with IP-based servers on the internet.
c) RFID – A radio frequency identification (RFID) tag is an electronic device consisting of an
antenna and a small chip. The chip is generally capable of carrying up to 2000 bytes of data.
It is used to give a unique identification to an object.
UNIT 10 IoT APPLICATION DEVELOPMENT
Structure
10.0 Introduction
10.1 Objectives
10.2 IoT Application Essential Requirements
10.3 Challenges in IoT Application Development
10.4 IoT Application Development Framework
10.5 Open Source IoT Platforms
10.5.1 Popular Open Source IoT Platforms
10.5.2 Some Tools for Building IoT Prototypes
10.6 IoT Application Testing Strategies
10.6.1 Performance Testing
10.6.2 Security Testing
10.6.3 Compatibility Testing
10.6.4 End-User Application Testing
10.6.5 Device Interoperability Testing
10.7 Security Issues in IoT
10.7.1 Counter Measures
10.8 Summary
10.9 Solutions/Answers
10.10 Further Readings
10.0 INTRODUCTION
In the earlier unit, we had studied various IoT networking and connectivity
technologies. After going through the basics of IoT in previous units, we will
concentrate on IoT Application Development in this unit.
When you are developing an application, the platform is what allows you to deploy and run it.
A platform could be a hardware-plus-software suite upon which other applications can operate:
the platform comprises hardware above which an operating system resides, and this operating
system allows applications to work above it by providing the necessary execution environment.
An IoT application platform is a virtual solution, meaning it resides in the cloud. Data is the
entity that drives business intelligence, and data is what every device has to exchange with
other devices. By means of cloud connectivity, an IoT application platform translates such
device data into useful information. It thus provides the user a means to implement business
use cases and enables predictive maintenance, pay-per-use, analytics and real-time data
management. In this way, IoT application platforms provide a complete suite from application
development to deployment and maintenance.
10.1 OBJECTIVES
10.2 IoT APPLICATION ESSENTIAL REQUIREMENTS

10.2.1 Adaptability
IoT systems will consist of several nodes, which will be resource constrained, mobile and
wirelessly connected to the Internet. Due to factors such as poor connectivity and power
shortage, nodes can be connected to and disconnected from the system arbitrarily. Furthermore,
the state, location and computing speed of these nodes can change dynamically. All these factors
can make IoT systems extremely dynamic, and IoT applications must be able to adapt to such a
constantly changing physical environment.
10.2.2 Intelligence
Intelligent things and systems of systems are the building blocks of IoT. IoT applications will use
IoT enabling technologies to transform everyday objects into smart objects that can understand
and obtain intelligence by making or enabling context-related decisions, resulting in tasks being
executed independently without human intervention. Achieving this requires IoT applications to
be designed and developed with intelligent decision-making techniques such as context-aware
computing, predictive analytics, complex event processing and behavioural analytics.
A number of IoT domains require the timely delivery of data and services. For instance, consider
IoT in scenarios such as telemedicine, patient care and vehicle-to-vehicle communications, where
a delay of seconds can have dangerous consequences. Environments where operations are
time-critical will require IoT applications that provide on-time delivery of data and services.
10.2.4 Security
The data generated from heterogeneous IoT devices is generally huge in volume, comes in
various forms, and is generated at different speeds. IoT applications will often make critical
decisions based on the data collected and processed. Sometimes, this data can be corrupted for
various reasons, such as the failure of a sensor, introduction of invalid data by a malicious user,
delay in data delivery, or a wrong data format. Consequently, IoT application developers face the
challenge of developing methods that establish the presence of invalid data, and new techniques
that capture the relationship between the data collected and the decision to be made.
10.3.4 Application Maintenance
Many IoT applications are human-centric applications, i.e. humans and objects work in synergy.
However, the dependencies and interactions between humans and objects are yet to be fully
harmonized. Humans in the loop have their advantages; for example, in healthcare, incorporating
models of various human activities and assistive technologies in the homes of the elderly can
improve their medical conditions. However, building IoT applications that model human behaviour
is a significant challenge, as it requires modelling of complex behavioural, psychological and
physiological aspects of human nature. New research is necessary to incorporate human
behaviours in IoT application design and to understand the underlying requirements and complex
dependencies between IoT applications and humans.
Since IoT applications are currently being integrated into the daily activities of
our lives and sometimes used in critical situations with little or no tolerance for
errors and failures, it therefore means that the overall system quality is
important and must be thoroughly evaluated to guarantee that it is of high
quality before being deployed. However, evaluating quality attributes such as
performance is a key challenge since it depends on the performance of many
components as well as the performance of the underlying technologies.
10.4 IoT APPLICATION DEVELOPMENT FRAMEWORK
Device hardware is the first layer of the IoT technology stack and defines the digital and
physical parts of any smart connected product. In this layer, it is imperative to understand the
implications of size, deployment, cost, useful lifetime, reliability and so on. For small devices
such as smartwatches, you may only have room for a System on a Chip (SoC); for larger devices,
you may use an embedded computer such as a Raspberry Pi, Artik module or BeagleBone board.
The device software is the component that turns the device hardware into a
“smart device.” Device software is the second layer of the IoT technology
stack. Device software enables the concept of “software-defined hardware,”
meaning that a particular hardware device can serve multiple applications
depending on the embedded software it is running. It allows you to implement
communication with the Cloud or other local devices. You can perform real-
time analytics, data acquisition from your device’s sensors, and even control.
This layer of the IoT technology stack is critical because it serves as the glue
between the real world (hardware) and your Cloud Applications. You can also
use device software to reduce the risks of hardware development. Building
hardware is expensive, and it takes a lot longer than software. Instead of
building your device for a narrow and specific purpose, it is better to use the
generic hardware that can be customized by your device software to give you
more flexibility down the road. This technique is often known as “software-
defined hardware.” This way, you can update your embedded software
remotely via the Cloud, which will update your “hardware” functionality in the
field.
The device software layer can be divided into two categories, i.e. the device operating system
and device applications.
The overall complexity of your IoT solution determines the type of operating system you need.
Key considerations include whether your application requires a real-time operating system, I/O
support, and support for the full TCP/IP stack. Some examples of embedded operating systems
are Brillo, Linux, Windows Embedded and VxWorks.
Device applications run on top of the Edge OS and provide the specific
functionality for your IoT solution. Here the possibilities are endless. You can
focus on data acquisition and streaming to the Cloud, analytics, local control,
etc.
Communications refer to all the different ways your device will exchange information with the
rest of the world. Communications form the third layer of the IoT technology stack; depending on
your industry, some people refer to this layer as connectivity. Communications include both the
physical networks and the protocols you will use. It is true that the implementation of the
communications layer is found in the device hardware and device software, but from a conceptual
standpoint, selecting the right communication mechanisms is a critical part of your IoT product
strategy. It will determine not only how you get data in and out of the Cloud (for example, using
Wi-Fi, WAN, LAN, 4G, 5G, LoRa, etc.), but also how you communicate with third-party devices.
The cloud platform is the backbone of your IoT solution. If you are familiar
with managing SaaS offerings, then you are well aware of the role of this layer
of the IoT technology stack. A cloud platform provides the infrastructure that
supports the critical areas like data collection and management, analytics and
cloud APIs.
10.4.4.1 Data Collection
This is an important aspect. Your smart devices will stream information to the
Cloud. As you define the requirements of your solution, you need to have a
good idea of the type and amount of data you will be collecting on a daily,
monthly and yearly basis. One of the challenges of IoT applications is that they
can generate an enormous amount of data. You need to make sure you define
your scalability parameters so that your architects can determine the right data
management solution from the very beginning.
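As a rough illustration of this sizing exercise, the short sketch below estimates daily and yearly data volume from assumed, purely hypothetical parameters (device count, reporting interval and message size); real figures depend entirely on your own deployment.

    # Hypothetical fleet parameters for a back-of-the-envelope volume estimate.
    devices = 10_000              # number of connected devices
    interval_seconds = 60         # each device reports once per minute
    message_bytes = 200           # average payload size per report

    messages_per_day = devices * (24 * 3600 // interval_seconds)
    bytes_per_day = messages_per_day * message_bytes

    print(f"Messages per day: {messages_per_day:,}")        # 14,400,000
    print(f"Data per day:  {bytes_per_day / 1e9:.2f} GB")   # about 2.88 GB
    print(f"Data per year: {bytes_per_day * 365 / 1e12:.2f} TB")  # about 1.05 TB

Even these modest assumptions yield roughly a terabyte per year, which is why scalability parameters must be defined early.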
10.4.4.2 Analytics

10.4.4.3 Cloud APIs
The Internet of Things is all about connecting devices and sharing data, which
you can achieve by exposing APIs at either the Cloud level or the device level.
Cloud APIs allow your customers and partners to either interact with your
devices or to exchange data. Remember that opening an API is not a technical
decision; it’s a business decision.
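A cloud API of the kind described here could look like the minimal sketch below, assuming the Flask web framework; the endpoint path, device ID and in-memory data are hypothetical placeholders rather than any particular platform's API.

    from flask import Flask, jsonify   # assumed third-party library: Flask

    app = Flask(__name__)

    # Hypothetical in-memory store of the latest reading per device.
    latest_readings = {"device-01": {"temperature": 22.5, "unit": "C"}}

    @app.route("/api/devices/<device_id>/latest", methods=["GET"])
    def latest(device_id):
        # Partners or customer applications read device data through this endpoint.
        reading = latest_readings.get(device_id)
        if reading is None:
            return jsonify({"error": "unknown device"}), 404
        return jsonify(reading)

    if __name__ == "__main__":
        app.run(port=8080)

Whether and to whom such an endpoint is exposed is, as noted above, a business decision as much as a technical one.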
The fifth layer of the IoT technology stack is the Cloud Applications layer.
Your end-user applications are the part of the system that your customers will
see and interact with. These applications will most likely be web-based, and
depending on your user needs, you might need separate apps for desktop,
mobile, and even wearables. Even though a smart device has its own display,
the user may likely use a cloud application as their main point of interaction
with your solution. This allows them to have access to your smart devices
anytime and anywhere, which is part of the goal of having connected devices.
While designing end-user applications, it is very important to understand who your user is and
what their primary goal is in using the product. The other consideration is that for Industrial IoT
(IIoT) applications, you'll probably have more than one user.
These internal apps will require a deep understanding of your external and internal customers,
and will require the right prioritization and resourcing.
In the next section let us study open source platforms and some prototype tools
available for IoT Application Development.
10.5 OPEN SOURCE IoT PLATFORMS

(i) Each consumer desires to utilize any IoT device of their preference without being
restricted or bound to a specific product vendor. For example, some smart devices can
be paired only with smartphones from the same vendor.
(ii) All businesses making IoT devices desire to integrate their particular devices easily
with diverse ecosystems.
(iii) All application developers desire that their apps support multiple IoT devices without
having to blend in specially developed vendor-specific code.
10.5.1 Popular Open Source IoT Platforms

Kaa
The Kaa IoT platform is one of the most efficient and feature-rich open-source Internet of Things
cloud platforms, where anyone has a free way to materialize their smart product concepts. On this
platform, you can manage an unlimited number of connected devices with cross-device
interoperability. You can achieve real-time device monitoring with the possibility of remote device
provisioning and configuration. It is one of the most flexible IoT platforms for your business: fast,
scalable and modern.
Macchina.io
Zetta
Zetta is a server-oriented platform built around Node.js, REST and a flow-based reactive
programming philosophy, linked with the Siren hypermedia APIs. Devices are abstracted as REST
APIs and connected with cloud services. Many believe that the Node.js platform is well suited to
developing IoT frameworks. The cloud services include visualization tools and support for machine
analytics tools like Splunk. Zetta creates a geo-distributed network by connecting endpoints such as
Linux and Arduino hacker boards with platforms such as Heroku. Key features are:
DeviceHive

DSA
DSA (Distributed Services Architecture) is an open-source IoT platform that unifies separate
devices, services and applications into a structured, real-time data model, and facilitates
decentralized device inter-communication, logic and applications. Distributed Service Links are a
community library that allows protocol translation and data integration to and from third-party
data sources. All these modules are lightweight, making them more flexible in use. It implements a
DSA query DSL and has in-built hardware integration support.
Google Cloud Platform
Developers can code, test and deploy their applications on the highly scalable and reliable
infrastructure that is provided by Google and that Google itself uses. Developers have to pay
attention only to the code, while Google handles issues regarding infrastructure, computing power
and data storage.
Google is one of the popular IoT platforms because of its fast global network, Google's BigData
tools, a pay-as-you-use strategy, and support for various available cloud services like RiptideIO,
BigQuery, Firebase, PubSub, Telit Wireless Solutions, connecting Arduino and Firebase, Cassandra
on Google Cloud Platform and many more.
10.5.2 Some Tools for Building IoT Prototypes

IoT has opened many new horizons for companies and developers working on the development of
IoT systems. Many exceptional products have been developed due to IoT app development.
Companies providing Internet of Things solutions are creating hardware and software designs to
help IoT developers create new and remarkable IoT devices and applications. Some of the tools to
build IoT prototypes and applications are discussed below:
Arduino
Raspbian
Raspbian is the operating system created for the Raspberry Pi board. It has more than 35,000
packages, and with the help of precompiled software it allows rapid installation. It was not created
by the parent organization but by Raspberry Pi enthusiasts. For working with the Raspberry Pi, it is
the most suitable environment available.
Eclipse IoT
This tool or instrument allows the user to develop, adopt and promote open
source IoT technologies. It is best suited to build IoT devices, Cloud platforms,
and gateways. Eclipse supports various projects related to IoT. These projects
include open-source implementations of IoT Protocols, application frameworks
and services, and tools for using Lua programming language which is
promoted as the best-suited programming language for IoT.
Tessel 2
It is used to build basic IoT prototypes and applications, helped by its numerous modules and
sensors. Using the Tessel 2 board, a developer can avail Ethernet connectivity, Wi-Fi connectivity,
two USB ports, a micro-USB port, 32 MB of flash and 64 MB of RAM. Additional modules can also
be integrated, like cameras, accelerometers, RFID, GPS, etc.
Tessel 2 can support Node.js and can use Node.js libraries. It contains two processors: a 580 MHz
MediaTek MT7620n and a 48 MHz Atmel SAMD21 coprocessor.
Kinoma
10.6 IoT APPLICATION TESTING STRATEGIES
10.6.2 Security Testing
The security testing aspect of the IoT framework deals with security elements such as the
protection of data, as well as encryption and decryption. It aims to provide added security to
connected devices, and also to the networks and cloud services to which the devices are connected.
Some variables that most often cause security threats in IoT are sensor networks, applications that
collect data, and interfaces. Therefore, it is highly recommended that security testing be done at the
device and protocol level, since problems can easily be detected and solved at this level.
10.6.4 End-User Application Testing
End-user application testing takes into consideration the user experience, as well as the usability
and functionality of the IoT application.
10.6.5 Device Interoperability Testing
This type of testing aims to assess the interoperability of protocols and devices against varying
standards and specifications. This testing is usually done in the service layer, because the service
layer provides the most conducive environment for it: a platform that is communicable,
programmable and operable.
10.7 SECURITY ISSUES IN IoT
The IoT differs from traditional computers and computing devices in ways that make it more
vulnerable to security challenges. It is also clear that many of these devices can establish
connections and communicate with other devices automatically, in an irregular way. This calls
for consideration of the accessible tools, techniques and tactics related to the security of IoT.
Even though the issue of security in the information technology sector is not new, IoT
implementation has presented unique challenges that need to be addressed. Consumers need to be
able to trust that Internet of Things devices and services are secure from weaknesses, particularly
as this technology continues to become more pervasive and integrated into our everyday lives.
Weakly protected IoT devices and services are a significant avenue for cyber attacks and for
exposure of user data, since they leave data streams inadequately protected. The interconnected
nature of IoT devices means that a poorly secured connected device has the potential to affect the
security and resilience of the Internet globally; this is compounded by the vast deployment of
homogeneous IoT devices. In addition, because some devices can automatically connect to other
devices, both the users and the developers of IoT have an obligation to ensure that they are not
exposing other users, or the Internet itself, to potential harm. A shared approach is required to
develop an effective and appropriate solution to the challenges currently witnessed in the IoT.
One of the most prevalent attacks in the IoT is the man-in-the-middle attack, in which a third party
hijacks the communication channel in order to spoof the identities of the nodes involved in a
network exchange. A man-in-the-middle attack can, for example, make a bank server recognize a
fraudulent transaction as a valid event, since the adversary does not even need to know the identity
of the supposed victim.
Perception layer: The Perception layer is the typical external physical layer, which includes sensors
for sensing and gathering information about the surrounding environment, such as temperature,
humidity and pressure. Table 1 below depicts the major threats in the Perception layer:
Table 1: Threats in the Perception Layer

Name of the Threat | Description
Denial of Service Attack | IoT sensing nodes have limited capacity and capabilities, so attackers can use a Denial of Service attack to stop the service. Eventually the servers and devices will be unable to provide their service to users.
Hardware Jamming | An attacker can damage a node by replacing parts of the node hardware.
Insertion of Forged Nodes | An attacker can insert a falsified or malicious node between the actual nodes of the network to get access to, and gain control over, the IoT network.
Brute Force Attack | As the sensing nodes have weaker computational power, a brute force attack can easily compromise the access control of the devices.
Cloud layer: The IoT Cloud layer represents the back-end services required to set up, manage,
operate, and extract business value from an IoT system. It delivers application-specific services to
users so that they can operate and monitor the devices. Table 3 below depicts the major threats in
the Cloud layer:
Table 3: Threats in the Cloud Layer

Authentication
Authorization
Confidentiality
Integrity
Non-Repudiation
10.7.1 Counter Measures

Table 4 below depicts what we can do to improve security in IoT.
1) Compare and contrast various IoT platforms discussed in this unit with
reference to the parameters like services availability and device
management platform.
…………………………………………………………………………………
…………………………………………………………………………………
…………………………………………………………………………………
2) What are the various factors and concerns that might have an impact on, or compromise,
the efforts to secure IoT devices?
…………………………………………………………………………………
…………………………………………………………………………………
3) Explore and describe various current innovative techniques to mitigate security
attacks.
…………………………………………………………………………………
…………………………………………………………………………………
10.8 SUMMARY

2. Given below are various factors and concerns that might impact or compromise the
efforts to secure IoT devices:
necessary updates remotely. However, hackers could utilize the feature
UNIT 11 FOG COMPUTING AND EDGE COMPUTING
11.1 Introduction
11.2 Objectives
11.3 Introduction to Fog Computing
11.4 Cloud Computing Vs Fog Computing
11.5 Fog Architecture
11.6 Working of Fog
11.7 Advantages of Fog
11.8 Applications of Fog
11.9 Challenges in Fog
11.10 Edge Computing
11.11 Working of Edge Computing
11.12 Cloud Vs Fog Vs Edge Computing
11.13 Applications of Edge Computing
11.14 Summary
11.1 INTRODUCTION
The use of emerging technologies like IoT and online applications, and the popularity of social
networking, are leading to an increasing number of users on the internet. Hence the data generated
on a daily basis is also increasing at an enormous rate, increasing the workload on the Cloud. The
demand for higher bandwidth and the need for real-time applications and analytics are also
increasing. Fog computing is a technology introduced to collaborate with cloud computing in
providing solutions: it attempts to bring cloud-like resources – memory, storage and compute –
near end users.
11.2 OBJECTIVES

11.3 INTRODUCTION TO FOG COMPUTING

With the increasing use of Internet of Things (IoT) devices and internet users, network traffic,
storage and processing load are also increasing at an exponential rate. Cisco estimated in 2020 that
by the end of 2023 there would be 29.3 billion networked devices and 5.3 billion internet users.
Cloud computing technology offers computation services over the internet on a pay-per-use basis.
Resources offered by this technology, such as storage, compute or network, can be dynamically
provisioned according to the user's demand. This technology offers several advantages – low cost,
rapid provisioning, high computation power, flexibility, automatic updates, and no management or
monitoring needed on the user's side. The enormous amount of data generated by IoT devices and
users can be stored and processed on cloud servers. But in addition to these benefits, there are
several shortcomings associated with the technology, such as increased response time due to the
distant location of servers and the centralized architecture, security concerns because resources are
stored remotely and provided over the insecure internet, demand for higher network bandwidth,
and increasing load on the network as the number of users grows.
Cisco in 2014 introduced the term 'Fog Computing' for a technology which extends computing to
the edge of the network. The fog metaphor represents a cloud close to the ground, just as fog
concentrates at the edge of the network.
Fog computing is a technology in which resources like - compute, data, storage and applications are
located in-between the end user layer (where data is generated) and the cloud. Devices like gateways,
routers, base stations can be configured as fog devices. It can bring all the advantages offered by cloud
computing closer to the location where data is generated; hence leading to reduced response time, reduced
bandwidth requirements, enhanced security and other benefits.
OpenFog Consortium defined fog computing as “a horizontal system level architecture that distributes
computing, storage, control and networking functions closer to the users along a cloud-to-thing
continuum”.
Fog computing is not introduced to replace cloud computing. The resources offered by fog servers
or devices are limited compared to the resources offered by huge cloud infrastructure. Hence the
cloud computing model will continue to operate as a centralized computing system (needed for
high processing power and storage), with some capabilities shifted towards fog devices, which are
present in the proximity of users to serve low-latency operations.
The three-layer logical architecture of fog computing is given in Fig 1. The first layer represents the
end devices, the middle layer represents the fog devices, and the topmost layer represents the cloud
servers.
Fig 1. Logical Architecture of Fog computing
11.4 CLOUD COMPUTING Vs FOG COMPUTING

Cloud computing is defined as a model that allows ubiquitous, on-demand access to shared
resources over the internet on a pay-per-use basis. Large pools of resources are maintained at data
centers by the cloud service providers. Virtual resources from these pools are dynamically
provisioned and allocated to users on demand. High performance can be achieved by using cloud
resources, but they may not be suitable for real-time applications that demand fast responses,
due to the distant location of cloud servers.
Fog computing is introduced to fill up the gap between the cloud servers and end devices. Fog servers like
cloud servers can offer various resources – compute, storage, or network. Due to its proximity to end
users, it allows computations to be done faster or near real time. Hence it is better suited for latency
sensitive applications. Since fog computing makes use of devices like- switches, routers, gateways; it is
generally limited by resources and hence offers less computation power as compared to cloud.
Some of the differences between cloud computing and fog computing are given in Table 1.
Cloud Computing | Fog Computing
Architecture is centralized | Architecture is distributed
Distant location from the end users | In the proximity of end users
Can be accessed over the internet | Can be accessed by various protocols and standards
11.5 FOG ARCHITECTURE

The general architecture of fog computing is composed of three layers (as shown in Fig 1):
1. End Devices Layer - Layer 1 is composed of end devices which can be mobile devices, IoT
devices, computer systems, camera, etc. Data either captured or generated from these end
devices is forwarded to a nearby fog server at Layer 2 for processing.
2. Fog Layer - Layer 2 is composed of multiple fog devices or servers. They are placed at the
edge of a network, between layer 1 and cloud servers. They can be implemented in devices like –
switches, routers, base stations, access points or can be specially configured fog servers.
3. Cloud Layer - Layer 3 is composed of Cloud data centers. They consist of huge
infrastructure - high performance servers, massive storage devices, etc. They provide all cloud
benefits like- high performance, automatic backup, agility.
11.6 WORKING OF FOG
Adding a fog layer between the centralized cloud layer and the end devices layer improves the
overall performance of the system. The working of fog computing in collaboration with cloud
computing is described below.
1. Huge amounts of data are generated by end devices and IoT devices like mobiles, cameras,
laptops, etc. This data is forwarded to the nearest fog server (in layer 2) for processing.
2. Latency-sensitive data, or applications that require real-time responses, are processed by the
fog servers on a priority basis. The results of processing, or the actions to be performed, are
then sent back to the end devices. Fog servers also send summarized results to the cloud
servers in layer 3 for future analysis, so that only filtered data is offloaded to the cloud layer.
3. Fog servers that are not able to serve a request, due to unavailability of resources or
information, can either interact with neighbouring servers or forward the request to the cloud
servers at layer 3, depending upon the offloading strategy. Also, time-insensitive data is
generally forwarded to the cloud servers for processing and storage. After the task is served,
the response is given to users at layer 1 via the fog servers.
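The decision logic sketched in the steps above could look roughly like the following Python fragment. The latency threshold, resource check and task fields are purely illustrative assumptions, not part of any particular fog framework.

    # Hypothetical fog-node offloading decision, following the steps above.

    LATENCY_SENSITIVE_DEADLINE_MS = 100   # assumed cut-off for "real-time" tasks

    def fog_has_capacity(task):
        # Placeholder resource check; a real node would inspect CPU, memory and queue length.
        return task.get("cpu_demand", 0) < 0.5

    def handle_task(task):
        if task["deadline_ms"] <= LATENCY_SENSITIVE_DEADLINE_MS and fog_has_capacity(task):
            return "process locally at fog, send summary to cloud"
        if task["deadline_ms"] <= LATENCY_SENSITIVE_DEADLINE_MS:
            return "offload to a neighbouring fog server"
        return "forward to cloud for processing and storage"

    print(handle_task({"deadline_ms": 50, "cpu_demand": 0.2}))    # latency sensitive
    print(handle_task({"deadline_ms": 5000, "cpu_demand": 0.1}))  # time insensitive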
11.7 ADVANTAGES OF FOG

There are various advantages of using fog computing technology, owing to its architecture:
1. Low latency
Fog servers provide the benefit of faster responses due to their geographical location, i.e. they
are located near the point where data originates. This suits time-sensitive or real-time
applications.
3. Reduced Cost
Most of the processing is done locally at the fog layer, leading to conservation of networking
resources and hence reducing the overall cost of operations.
4. Security and Privacy
Fog computing also allows applications to be secure and private, because data can be processed
locally instead of being forwarded to remote centralized cloud infrastructure.
5. Mobility
Fog devices are mobile. They can be easily added to or removed from the network, and hence
offer flexibility.
11.8 APPLICATIONS OF FOG

Fog computing, since its introduction, has been gaining popularity due to its applications in various
industries. Some of the applications are –
Smart Cities
Cities that make use of technology to improve the quality of life and the services provided to people
can be called smart cities. Fog computing can play a vital role in building smart cities. With the help
of smart devices, IoT devices and fog devices, it is possible to create smart homes and buildings
(through energy management of buildings, maintaining security, etc.), intelligent cities (smart
parking systems, infrastructure, traffic management, environment monitoring, etc.), and intelligent
hospitals, highways, factories, and so on.
Smart Grids
An electrical grid is a network which delivers energy generated from various sources to consumers.
Efficient distribution of this energy is possible by making use of fog computing. IoT sensors can
monitor the energy generated from various sources, such as wind farms, thermal plants and hydro
power plants. This data is then passed on to a nearby fog server to identify the optimal source of
energy to be used, and can also reveal problems like equipment malfunctions. Depending upon the
problems, it may also identify alternative sources of energy to be used in order to maintain
efficiency.
Healthcare
Fog computing also has applications in healthcare systems. Health reports of patients can be
recorded using different types of sensors and forwarded to fog devices. Fog devices, after
performing analysis (for example, diagnosing cardiac diseases), can take the necessary actions.
Surveillance
Security and surveillance cameras are deployed in many areas. It is difficult to send the massive
amounts of data collected by these cameras to cloud servers due to bandwidth constraints, so the
data can be forwarded to nearby fog servers instead. Fog servers in turn can perform video
processing to detect incidents such as theft, kidnapping or murder, or to find missing people.
Necessary action can then be taken by generating alerts or reporting to police stations.
11.9 CHALLENGES IN FOG

Fog computing offers several advantages, but there are also several challenges associated with it.
Some of them are –
1. Complexity
Fog devices can be diverse in architecture and located at different locations. Fog devices further
store and analyse their own data hence add more complexity to the network.
2. Power Consumption
Fog devices require considerable power for proper functioning. Adding more fog devices
increases energy consumption, which results in an increase in cost.
3. Data Management
Data is distributed across multiple fog devices hence data management and maintaining
consistency is challenging.
4. Authentication
5. Security
Since there are many fog devices, each with a different IP address, gaining access to personal data
by spoofing, tapping or hacking can be a threat.
11.10 EDGE COMPUTING

Edge computing is a technology which offers data processing on the same layer where the data is
generated, by making use of edge devices that have computation capabilities. This allows data to be
processed even faster than processing at fog devices, at no or very low cost, and also increases
utilization of the edge devices.
The edge or end devices found today are smarter, with advanced features such as artificial
intelligence enabled in them. Edge computing takes advantage of this intelligence to reduce the load
on the network and cloud servers. Edge devices used for computation also offer hardware security
along with low power consumption, and can improve security by encrypting data closer to the
network core.
Edge computing is often seen as similar to fog computing but there are several differences. Edge
computing devices are limited in their resource capabilities and therefore cannot replace existing Cloud
or Fog computing technology. But edge computing when added with these technologies can offer
numerous advantages and applications. Fig 5 shows the Cloud-Fog-Edge collaboration scenario.
Fig 5: Cloud – Fog – Edge Computing architecture
11.11 WORKING OF EDGE COMPUTING

Edge computing allows data processing to be done at the network edge. This offers several
advantages: it decreases latency, reduces the data to be offloaded to the cloud or fog, reduces
bandwidth cost, reduces energy consumption, and so on. Edge computing can work in collaboration
with cloud computing alone, or can be implemented in a Cloud-Fog collaboration environment.
Instead of sending all the data from the edge devices directly to the cloud or fog layer, data is first
processed at the edge layer. Processing data at the edge layer gives a near real-time response due to
the physical proximity of edge devices. As the data generated at the edge layer is huge, it cannot be
handled entirely there, so it is offloaded to the cloud or fog layer. In a Cloud-Fog-Edge collaboration
scenario, data from the edge layer is first offloaded to fog servers over a localized network, which in
turn can offload it to cloud servers for updates or further processing. In a Cloud-Edge scenario, data
processed at the edge layer can be offloaded to the cloud layer, as the resources available at the edge
are insufficient to handle large amounts of data. In both cases the edge layer can decide what is
relevant and what is not before sending data to further layers, hence reducing the load on the cloud
and fog servers.
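A toy example of this "decide what is relevant before forwarding" step is sketched below; the anomaly rule and upstream send function are hypothetical placeholders, not part of any real edge framework.

    # Hypothetical edge-side filter: only unusual readings are forwarded upstream.

    NORMAL_RANGE = (15.0, 35.0)   # assumed acceptable temperature band, in Celsius

    def is_relevant(reading):
        # Forward only readings outside the normal operating range.
        return not (NORMAL_RANGE[0] <= reading <= NORMAL_RANGE[1])

    def send_upstream(reading):
        # Placeholder for the real fog/cloud upload (e.g. an MQTT publish or HTTP POST).
        print("forwarding to fog/cloud:", reading)

    readings = [22.4, 22.6, 41.2, 23.0, 12.1]
    for value in readings:
        if is_relevant(value):
            send_upstream(value)      # only 41.2 and 12.1 leave the edge
        # in-range readings are summarized or discarded locally at the edge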
11.12 CLOUD Vs FOG Vs EDGE COMPUTING
Cloud, fog and edge computing are all concepts of distributed computing. All of them perform
computation, but at different levels of proximity and with different resource capacities. Adding the
edge and fog layers to the cloud reduces the amount of storage needed at the cloud. It allows data to
be transferred at a faster rate because only relevant data is transferred, and the cloud stores and
processes only relevant data, resulting in cost reduction.
Edge computing devices are located at the closest proximity to users. Fog computing devices are located
at intermediate proximity. Cloud computing devices are at distant and remote locations from users. Fog
computing generally makes use of a centralized system which interacts with gateways and computer
systems on LAN. Edge computing makes use of embedded systems directly interfacing with sensors and
controllers. But this distinction does not always exist. Some of the common differences between Cloud,
Fog and Edge computing are shown in Table 2.
Cloud Computing | Fog Computing | Edge Computing
Non-real-time response | Near real-time response | Real-time response
Can be accessed with the internet | Can be accessed with or without the internet | Can be accessed without the internet
11.13 APPLICATIONS OF EDGE COMPUTING

Edge computing has applications similar to fog computing due to its close proximity to users. Some
of the applications are listed below.
1. Gaming
Games which require a live streaming feed of the game depend upon latency. Here, edge
servers are placed close to the gamers to reduce latency.
2. Content Delivery
It allows caching of data like web pages and videos near users in order to improve performance
by delivering content quickly.
3. Smart Homes
IoT devices can collect data from around the house and process it. Response generated is secure
and in real time as round-trip time is reduced. For example –response generated by Amazon’s
Alexa.
4. Patient monitoring
Edge devices present on the hospital site can process data generated from various monitoring
devices like- temperature sensors, glucose monitors etc. Notifications can be generated to depict
unusual trends and behaviours.
5. Manufacturing
Data collected in manufacturing industries through sensors can be processed in edge devices.
Edge devices here can apply real time analytics and machine learning techniques for reporting
production errors to improve quality.
11.14 SUMMARY
In this unit two emerging technologies – Fog computing and Edge computing are discussed. Cisco
introduced Fog Computing as a technology which extends computing to the edge of the network. In this
technology, resources like - compute, data, storage and applications are located in-between the end user
layer and the cloud. It reduces response time, reduces bandwidth requirements and enhances security.
Edge computing is a technology which offers data processing on the same layer where data is generated
by making use of edge devices having computation capabilities. These technologies cannot replace cloud
computing but can work in collaboration with cloud computing in order to improve performance.
1. Cisco in 2014 introduced a term called ‘Fog Computing’ to a technology which extends
computing to the edge of the network. Fog computing is a technology in which resources like -
compute, data, storage and applications are located in-between the end user layer (where data is
generated) and the cloud. Devices like gateways, routers, base stations can be configured as fog
devices. It can bring all the advantages offered by cloud computing closer to the location where
data is generated; hence leading to reduced response time, reduced bandwidth requirements,
enhanced security and other benefits.
2. Some of the differences between cloud computing and fog computing are:
Cloud Computing | Fog Computing
Distant location from the end users | In the proximity of end users
1. End Devices Layer – It is composed of end devices which can be mobile devices, IoT
devices, computer systems, camera, etc. Data either captured or generated from these end
devices is forwarded to a nearby fog server at Layer 2 for processing.
2. Fog Layer – It is composed of multiple fog devices or servers. They are placed at the edge
of a network, between layer 1 and cloud servers. They can be implemented in devices like –
switches, routers, base stations, access points or can be specially configured fog servers.
3. Cloud Layer – It is composed of Cloud data centers. They consist of huge infrastructure -
high performance servers, massive storage devices, etc. They provide all cloud benefits like- high
performance, automatic backup, agility.
Various challenges associated with fog are –
a) Complexity
b) Maintaining security
c) Authenticating
d) Additional power consumption
a) Smart Cities
Fog computing can play a vital role in building smart cities. With the help of smart devices,
IoT devices and fog devices, it is possible to do tasks like – creating smart homes and
buildings by energy management of buildings, maintaining security, etc.
c) Surveillance
Security and Surveillance cameras are deployed in many areas. Data collected from these can
be forwarded to nearby fog servers. Fog servers in turn can perform video processing to find
out problems like theft, kidnapping, murders, etc.
1. Edge computing is a technology which offers data processing on the same layer where data is
generated by making use of edge devices having computation capabilities. This allows data to be
processed even faster than processing at fog devices at no or a very low cost. This also increases
utilization of edge devices.
3. Some applications of edge computing are -
a) Gaming
Games which require a live streaming feed of the game depend upon latency. Here, edge
servers are placed close to the gamers to reduce latency.
b) Content Delivery
It allows caching of data like web pages and videos near users in order to improve performance
by delivering content quickly.
c) Smart Homes
IoT devices can collect data from around the house and process it. Response generated is
secure and in real time as round-trip time is reduced. For example –response generated by
Amazon’s Alexa.
UNIT 12 IoT CASE STUDIES
Structure
12.0 Introduction
12.1 Objectives
12.2 IoT Use Cases for Smart Cities
12.3 Smart Homes
12.4 Applications of IoT in Agriculture
12.5 Smart Transportation
12.6 Smart Grids
12.6.1 Key Features of Smart Grid
12.6.2 Benefits of Smart Grid
12.7 Connected Vehicles
12.7.1 Connected Cars
12.7.2 How does Connected Car Technology Work?
12.7.3 Features of Connected Cars
12.7.4 Types of Connectivity
12.8 Smart Healthcare
12.9 Industrial IoT (IIoT)
12.9.1 Industry 4.0 and IIoT
12.9.2 IIoT Architecture
12.9.3 Applications of IIoT
12.9.4 IIoT Use Cases
12.10 Summary
12.11 Solutions/Answers
12.12 Further Readings
12.0 INTRODUCTION
In the earlier unit, we studied various concepts, namely Fog Computing,
Edge Computing, IoT Networking and Connectivity Technologies. After going
through the basics of IoT in previous units, we will focus on applications of
IoT in this unit.
Artificial Intelligence (AI) and the Internet of Things (IoT) are two
technologies that are growing rapidly and have the potential to lead
towards an extremely intelligent future.
The number of connected mobile IoT devices is set to grow immensely and is
expected to reach 23.14 billion by 2027 and 29 billion by 2030. Organizations
across the spectrum are using IoT to operate more effectively. IoT allows
enterprises to improve decision-making, enhance customer service, and
increase business value. Additionally, cloud platform availability empowers
individuals and businesses to access and scale up infrastructure without
managing it.
In this unit we will focus on various applications of IoT in Smart Cities, Smart
Homes, Smart Transportation, Smart Grids, Smart Healthcare, Connected
Vehicles and Industrial IoT.
12.1 OBJECTIVES
Big and small cities are becoming densely populated and in this regard,
municipalities are facing a wide range of challenges that require immediate
attention. Urban crime, traffic congestion, sanitation problems and
environmental deterioration are some of the common challenges of increased
population in urban areas and to prevent these, municipalities turn to the
adoption of smart technologies, such as the Internet of Things (IoT).
IoT holds the potential to cater to the needs of the increased urban population
while making living more secure and comfortable. In this respect, the IoT
use cases for smart cities are virtually limitless, as IoT contributes to public safety,
optimized traffic control, a healthier environment and more, which are the main
essence of smart city development.
The following sections focus on popular use cases of IoT for smart cities
that are worth implementing.
With the increased population, the traffic congestion on the roads is also
increasing. However, smart cities aim to make the citizens reach the desired
destination efficiently and safely. To achieve this aim, municipalities turn to
smart traffic solutions which are enabled by IoT technologies.
Different types of sensors are utilized in smart traffic solutions which also
extract the relevant data from the driver’s smartphones to determine the speed
of the vehicles and GPS location. Concurrently, monitoring of the green traffic
light timing is also enabled by the smart traffic lights which are linked to the
cloud management platform. Based on the current traffic situation, the traffic
lights are automatically altered and this ultimately prevents traffic congestion
on the roads. Furthermore, by utilizing historical data, IoT solutions can
predict future traffic conditions in smart cities and enable the
municipalities to prevent potential congestion.
The issue of parking in cities seems inevitable but many cities around the globe
are adopting IoT enabled smart parking solutions and providing hassle-free
parking experiences to the citizens. With the help of road surface sensors on
parking spots and GPS data from the driver’s phone, smart parking solutions
identify and mark the parking spots that are available or occupied. Alongside,
an IoT-based smart parking solution creates a real-time parking map on a
mobile or web application. The sensors embedded in the ground send data to
the cloud and server which notifies the driver whenever the nearest parking
spot is free. Instead of blindly driving around, a smart parking solution helps
the driver to find the parking spots easily.
Managing public transport efficiently is one of the major concerns of the big
cities. However, IoT offers a use case for smart cities in this regard as well.
The IoT sensors associated with public transport gather and analyze data,
which helps the municipalities to identify the patterns in which the citizens are
using public transport. Later on, this data-driven information is used by the
traffic operators to achieve the standardized level of punctuality and security in
transportation along with enhancing the travelling experience of the citizens.
IoT enabled smart city solutions give citizens complete control over home
utilities and save their money as well. Different utility approaches are powered
by IoT. These include smart meter and billing solutions, identification of
consumption patterns and remote monitoring. Smart meters transfer data to the
public utility through a telecom network, making the meter readings reliable.
This solution also enables utility companies to accurately bill the amount of
gas, energy and water consumed per household. A smart network of meters
facilitates utility companies to monitor the consumption of resources in real-
time to balance the supply and demand. This indicates that the IoT not only
offers the benefit of utility control to the consumers but also helps the utility
companies to manage their resources.
Smart city development aims to improve the quality of life and make living
easy, cost-effective and sustainable. The majority of traditional street lights
equipped on the roads waste power as they are always switched on even when
no vehicle or a person is passing. IoT enables the cities to save power by
embedding sensors in street lights and connecting them with the cloud
management solution. This helps in managing the lighting schedule. Smart
lighting solutions collect data on the movement of vehicles and people and link it to
the historical data (e.g. time of day, public transport schedule and special
events). Later on, the data is analyzed to improve and manage the lighting
schedule in smart cities. In other words, it can be said that the smart lighting
solution analyzes the outer conditions and directs the street light to switch on
or switch off, brighten or dim where required.
The waste collection operators in cities use predefined schedules to empty the
waste containers. This is a traditional waste collection approach that is not only
inefficient but also leads to unnecessary fuel consumption by waste collection
trucks and unproductive use of waste containers. IoT offers
waste collection optimization by tracking waste levels along with providing
operational analytics and route optimization to manage the waste collection
schedule efficiently.
IoT sensors are attached to the waste containers. These sensors monitor the
level of waste in the containers. When the waste reaches the threshold, waste
truck drivers are immediately notified through the mobile application. Hence,
only the full containers are emptied by the truck drivers.
Time clocks connected to the Internet via the IoT network allow employers to
monitor the attendance of workers on remote job sites. With constant, real-time
connected monitoring, there is no need for SIM cards to track employee comings
and goings.
visualization and real-time satisfaction insights. Alerts based on specific
thresholds or results can also be triggered for a faster response.
Various sensors are easy to install and so cheap to run that an entire city can be
covered, enabling dozens of metrics to be tracked such as humidity,
temperature, air quality and more. Some cities have installed sensors on
moving locations such as trams and buses to collect even more data throughout
the day. With increased availability of data, it’s easy to build interactive, real-
time mapping of air pollution and improve pollution prediction through
Machine Learning.
Collection routes can be optimized to save time, energy and money with low-
power connected ultrasonic sensors that indicate the level of waste in
dumpsters. The sensors also provide valuable data about dumpster usage,
emptying cycles and more, which can be used to consolidate routes further.
Put an end to manual on-site meter readings and data processing of water, gas
and electricity consumption. You can now monitor and optimize your remote
assets in real-time, detecting issues such as leaks and breakdowns. Service
companies can also automate billing and remotely activate and deactivate
services. IoT-enabled meters can transmit data immediately over the public
network with no pairing or configuration required and no need to replace or
recharge batteries for years.
With IoT-enabled pressure sensors, operators get real-time alerts when fire hydrants are
in use and can also know how much water is consumed. Install an
accelerometer sensor to send alerts instantly if a hydrant is broken, leaking or
malfunctioning. Install a temperature monitor to help prevent cold weather
damage in inclement and wintery conditions.
Soil condition monitoring can be done remotely and cost efficiently to help
minimize plant stress caused by dehydration. Besides reducing the cost of
replacing plants, these solutions also optimize water usage.
Let’s look at the most popular ways to use Smart Home IoT technologies and
understand what the benefits look like.
Today, the most widely used smart home application is home lighting. Most
people know of tunable lighting that can change between warm and bright with
different colour hues that suit your mood and requirements.
But let’s check a few other use case scenarios for smart lights.
As you enter your home, lights can turn on automatically without the
necessity to press a button. This can also work as a safety feature to
detect intrusions.
The opposite is also possible as you leave your home; the system can
turn the lights off automatically, thereby saving energy.
Your light can turn on when your alarm rings in the morning, waking
the whole household up if need be.
Smart home automation devices can make the cooking process safer and
more convenient too.
It can turn on the lights or play soothing music when you enter the
kitchen.
Smart sensors can check for gas leaks, smoke and water leakage, and turn
off the power in the house if the indicators are outside the optimum
range.
Safety sensors identify anything wrong at your home. They can immediately notify home
users of anything overlooked, like an appliance left on, or of any potential threats,
and can even trigger the necessary action to prevent them.
Smart home users can check their home state remotely through the app
on their phones and control pretty much everything at home.
While locking the door, you can set controllers to automatically close
the curtains, turn off devices and ensure your home is protected against
any trespassers.
You can monitor your elderly relatives and automate things remotely
for them if needed.
Smart home IoT technologies in the bathroom can help in power and energy
savings with convenience.
With smart home automation, you can set your geysers to automatically
turn on and off in a pre-set pattern based on your shower routine.
This also helps make your home energy efficient by eliminating the
unnecessary functioning of high power-consuming home appliances
like geysers, heaters, ACs.
A smart home can be exceptionally beneficial for those plant lovers interested
in growing vegetables, fruit, herbs, and indoor plants at home.
You can monitor your plant and turn on your smart irrigation system
when needed. You can control and stop the watering system, thus
optimizing water usage.
With temperature control automation, you can optimize your ACs to provide
the best experience while being energy efficient.
For instance, users can turn on their bedroom ACs as they drive from
the office to enjoy a cool room once home after a tiring day.
You can configure the bedroom AC with your geyser times, so once
you step out from your bath, the room is ready for you.
You can set the ACs to function based on the room temperature while
you sleep at night. So you are neither cold nor hot and get a good
night’s sleep.
We can safely assume the doors of our future will not need keys. Digital locks
are safe and can be set to initiate a sequence of other devices in your home.
The entry door digital lock can identify who opened the door when.
With a custom entry assigned for each individual, you can know when
your kids, your hubby, or your maid reached home through
notifications on your smartphones.
Till now, IoT has disrupted many industries and the agriculture industry
is no exception. In the following section, let us study smart agriculture
applications using IoT.
The Industrial IoT (IIoT) has been a driving force behind increased agricultural
production at a lower cost. In the next several years, the use of smart solutions
powered by IoT will increase in agriculture operations. In fact, recent reports
suggest that IoT device installations will see a compound annual growth rate of
20% in the agriculture industry, and that the number of connected agricultural
devices will grow from 13 million in 2014 to 225 million by 2024.
The IoT in agriculture has come up as a second wave of the green revolution. The
benefits that farmers are getting by adopting IoT are twofold. It has helped
farmers to decrease their costs and increase yields at the same time by
improving farmers' decision-making with accurate data.
Along with recent agricultural trends, IoT has brought huge benefits like
efficient use of water, optimization of inputs and many more. These benefits
have revolutionized agriculture in recent years.
Climate plays a very critical role in farming, and having improper knowledge
about the climate heavily deteriorates the quantity and quality of crop
production. But IoT solutions enable you to know the real-time weather
conditions. Sensors are placed inside and outside of the agriculture fields. They
collect data from the environment which is used to choose the right crops
which can grow and sustain in the particular climatic conditions. The whole
IoT ecosystem is made up of sensors that can detect real-time weather
conditions like humidity, rainfall, temperature and more very accurately.
Numerous sensors are available to detect all these parameters, and they can be
configured to suit your smart farming requirements. These sensors
monitor the condition of the crops and the weather surrounding them. If any
disturbing weather conditions are found, then an alert is sent.
intervention, thus making the entire process cost-effective and increasing accuracy
at the same time. For example, using solar-powered IoT sensors builds modern
and inexpensive greenhouses. These sensors collect and transmit the real-time
data which helps in monitoring the greenhouse state very precisely in real-
time. With the help of the sensors, the water consumption and greenhouse state
can be monitored via emails or SMS alerts. Automatic and smart irrigation is
carried out with the help of IoT. These sensors help to provide information on
the pressure, humidity, temperature and light levels.
Drones with thermal or multispectral sensors identify the areas that require
changes in irrigation. Once the crops start growing, sensors indicate their
health and calculate their vegetation index. Over time, smart drones have
reduced the environmental impact of farming; the results include a massive
reduction in chemical use and far less chemical reaching the groundwater.
The conventional database system does not have enough storage for the data
collected from the IoT sensors. Cloud based data storage and an end-to-end
IoT platform play an important role in the smart agriculture system. These
systems enable better farming activities to be performed. In the IoT world,
sensors are the primary source of collecting data on a large scale. The data is
analyzed and transformed into meaningful information using analytics tools.
In the next section let us study the use of IoT in another important sector i.e.,
Transportation.
The traditional tolling and ticketing systems are not only becoming outdated
but they are also not proving to be effective for assisting the current flow of
vehicles on the road. With the increased number of vehicles on the road, the
toll booths have become busy and crowded as well on the highways and the
drivers have to spend a lot of time waiting for their turn. The toll booths do not
have enough resources and manpower to immediately assist many vehicles.
Compared to traditional tolling and ticketing systems, IoT in transportation
offers automated tolls. With the help of RFID tags and other smart sensors,
managing tolls and ticketing has become much easier for traffic police
officers.
Self-driving cars or autonomous vehicles are the coolest things that have been
introduced in the transportation industry. In the past decades, the concept of
self-driving cars was just like a dream, but this has been turned into an
innovative reality with the support of IoT technologies. Self-driving cars are
capable of moving safely by sensing the environment, with little or no human
interaction. However, to gather data about their surroundings, self-driving cars
use a wide range of sensors. For instance, a self-driving car uses acoustic
sensors, ultrasonic sensors, radar, LiDAR (Light detection and ranging),
camera and GPS sensors to have information about the surroundings and take
the data-driven decision about mobility accordingly. This indicates that the
functioning of self-driving cars is dependent on IoT sensors. With the help of
IoT, sensors equipped in self-driving cars continuously gather data about the
surroundings in real time and transfer this data either to a central unit
or the cloud. The system analyzes the data in a fraction of a second, enabling the
self-driving cars to perform as per the information provided. This indicates that
IoT connects the sensor network for self-driving cars and enables them to
function in the desired manner.
Vehicle tracking or transportation monitoring systems have become the need
of many businesses to manage their fleets and supply chain processes
effectively. With the help of GPS trackers, transportation companies have
smooth access to real-time location, facts and figures about the vehicle. This
enables the transportation companies to monitor their important assets in real-
time. Apart from location monitoring, IoT devices can also monitor the
driver’s behavior and can inform about the driving style and idling time. In
fleet management systems, IoT has minimized the operating and fuel
expenditures along with the cost of maintenance. As far as transportation
monitoring is concerned, real-time tracking has made the implementation of
smart decisions much easier, enabling drivers to identify issues in the vehicle
immediately and take precautions where necessary.
One of the key areas in which the IoT in transportation is found to be the most
useful is focused on the security of public transport. By keeping an eye on
every transport with the help of IoT devices, municipalities can track traffic
violations and take appropriate actions. Apart from security, IoT in
transportation also complements public transport management by providing a
wide range of smart solutions. This includes advanced vehicle logistic
solutions, passenger information systems, automated fare collection and
integrated ticketing. These solutions help in managing public transport and
traffic congestion. Real-time management of public transport has become
possible with IoT. This has facilitated the transportation agencies to establish
better communication with the passengers and provide necessary information
through passenger information displays and mobile devices. IoT has
undoubtedly made public transport more secure and efficient.
The Internet of Things (IoT) has the power to reshape the way we think about
cities across the world. IoT connects people and governments to smart
city solutions. Connecting and controlling devices has given rise to smart
grid technology, designed to improve and replace older architecture.
Smart grids are electrical grids that involve the same transmission lines,
transformers, and substations as a traditional power grid. What sets them apart
is that Smart Grids involve IoT devices that can communicate with each other
and with the consumers.
Smart grid technology will help tackle the growing demand for renewable
power sources to be integrated into the existing grid, and enable the national
and international vision of low carbon energy. They are designed with energy
efficiency and sustainability in mind. As such, they can measure power
transmission in real-time, automate management processes, reduce power cuts,
and easily integrate various renewable energy sources.
Street lighting
Transmission lines
Substations
Cogeneration
Outage sensors
Early detection (e.g., power disturbances due to earthquakes and
extreme weather)
The smart grid does this through private, dedicated networks connecting
devices that are distributed to businesses and homes citywide, including:
Smart meters
Data concentrators
Transformers
Sensors
Demand Response Support: Smart grids can help consumers reduce their
electricity bills by advising them to use devices with a lower priority when the
electrical rates are lower. This also helps in the real-time analysis of electrical
usage and charges.
Current power grids aren’t made to withstand the immense draw on resources
and the need to transmit data for billions of consumers worldwide. The smart
grid can:
Once fully integrated, smart grid technologies can change the way we work
and interact with the world.
Smart grid technologies will help to reduce energy consumption and costs
through usage and data maintenance. Intelligent lighting through smart city
technology will be able to:
For consumer applications, users can adjust the temperature of their home
thermostats through apps while at work or on vacation.
Smart grid technologies are less demanding on batteries and more carbon
efficient. They are designed to reduce the peak load on distribution feeders. For
example, the U.S. Department of Energy is integrating green technology into
their IoT smart management for more sustainable solutions. These solutions
have the potential to benefit all distribution chains and include:
As the world’s population continues to grow, the older grids won’t keep up
with the increasing demands. Smart grids are designed to lower long-term
costs through smart energy IoT monitoring and source rerouting for fast
recovery when a power failure is detected.
As more electric vehicles enter operation, IoT smart sensors can collect real-
time data to relay information to drivers and authorities. Accessing this data
from smart sensors will enable cities to:
IoT technology is also at the core of expanding electric charging stations that
heavily tax the power grid.
The IEA report discusses how smart grids can provide rural areas with
electricity by transitioning to community grids that connect to regional and
national grids. These grids will be critical for deploying new power
infrastructures in developing countries experiencing population overflow
impacts. Starting with new technology ensures the best path to economic
growth.
Optimized smart city solutions mean greater insight into regional issues.
Imagine a smart grid set up to respond to a regional drought or wildfires in a
dry area. Adaptive city fog lighting would be suitable for some cities.
Customized technology and better data collection can improve the daily lives
of countless regional populations.
Connected cars have become the new norm in the automobile industry, and we
can only expect it to get better and better. In the following section, let us read
on to know more about connected vehicles, connected car features and the
future of connected car technology.
Any car which can connect to the Internet is called a Connected Car. Usually,
such vehicles connect to the internet via WLAN (Wireless Local Area
Network). A connected vehicle can also share the Internet with devices inside
and outside the car, and at the same time can also share data with any external
device/services. Connected vehicles can always access the internet to perform
functions/download data when requested by the user.
A connected vehicle comes equipped with a host of smart and convenient
features. The features of connected car technology improve the overall driving
and ownership experience, and also add a safety net with advanced security
features. Below are the smart features of a connected vehicle:
more. Apart from the onboard safety equipment, these smart safety features
come in handy during tricky situations.
The connected car technology will not be limited to conventional cars. The
self-driving vehicles will also make use of this technology to communicate
with the road infrastructure and cloud system. But at present, the connected
cars are disrupting the automobile industry. With more and more smart
vehicles being launched, buyers are leaning towards the connected cars. In the
coming years, connected technology will be the new norm, and it will also
enhance safety and reduce accidents.
Healthcare is a major domain of IoT usage. Let us study smart
healthcare in the next section.
Devices in the form of wearables like fitness bands and other wirelessly
connected devices like blood pressure and heart rate monitoring cuffs,
glucometers, etc., give patients access to personalized attention. These devices
can be tuned to provide reminders for calorie counts, exercise, appointments, blood
pressure variations and much more.
By using wearables and other home monitoring equipment embedded with IoT,
physicians can keep track of patients’ health more effectively. They can track
patients’ adherence to treatment plans or any need for immediate medical
attention. IoT enables healthcare professionals to be more watchful and
connect with the patients proactively. Data collected from IoT devices can help
physicians identify the best treatment process for patients and reach the
expected outcomes.
Apart from monitoring patients’ health, there are many other areas where IoT
devices are very useful in hospitals. IoT devices tagged with sensors are used
for tracking real time location of medical equipment like wheelchairs,
defibrillators, nebulizers, oxygen pumps and other monitoring equipment.
Deployment of medical staff at different locations can also be analyzed real
time.
control, and environmental monitoring, for instance, checking refrigerator
temperature, and humidity and temperature control.
Insurers may offer incentives to their customers for using and sharing health
data generated by IoT devices. They can reward customers for using IoT
devices to keep track of their routine activities and adherence to treatment
plans and precautionary health measures. This will help insurers to reduce
claims significantly. IoT devices can also enable insurance companies to
validate claims through the data captured by these devices.
Error Reduction: Data generated through IoT devices not only helps in
effective decision making but also ensures smooth healthcare operations
with reduced errors, waste and system costs.
Remote patient monitoring is the most common application of IoT devices for
healthcare. IoT devices can automatically collect health metrics like heart rate,
blood pressure, temperature, and more from patients who are not physically
present in a healthcare facility, eliminating the need for patients to travel to the
providers, or for patients to collect it themselves.
When an IoT device collects patient data, it forwards the data to a software
application where healthcare professionals and/or patients can view it.
Algorithms may be used to analyze the data in order to recommend treatments
or generate alerts. For example, an IoT sensor that detects a patient’s unusually
low heart rate may generate an alert so that healthcare professionals can
intervene.
A major challenge with remote patient monitoring devices is ensuring that the
highly personal data that these IoT devices collect is secure and private.
These are not insurmountable challenges, however, and devices that address
them promise to revolutionize the way patients handle glucose monitoring.
Today, a variety of small IoT devices are available for heart rate
monitoring, freeing patients to move around as they like while ensuring that
their hearts are monitored continuously. Guaranteeing ultra-accurate results
remains somewhat of a challenge, but most modern devices can deliver
accuracy rates of about 90 percent or better.
Traditionally, there hasn’t been a good way to ensure that providers and
patients inside a healthcare facility washed their hands properly in order to
minimize the risk of spreading contagion.
Today, many hospitals and other health care operations use IoT devices
to remind people to sanitize their hands when they enter hospital rooms. The
devices can even give instructions on how best to sanitize to mitigate a
particular risk for a particular patient.
A major shortcoming is that these devices can only remind people to clean
their hands; they can’t do it for them. Still, research suggests that these devices
can reduce infection rates by more than 60 percent in hospitals.
The key challenge here is that metrics like these can’t predict depression
symptoms or other causes for concern with complete accuracy. But neither can
a traditional in-person mental assessment.
In order to treat Parkinson's patients most effectively, healthcare providers
must be able to assess how the severity of their symptoms fluctuates through the
day.
IoT sensors promise to make this task much easier by continuously collecting
data about Parkinson’s symptoms. At the same time, the devices give patients
the freedom to go about their lives in their own homes, instead of having to
spend extended periods in a hospital for observation.
While wearable devices like those described above remain the most commonly
used type of IoT device in healthcare, there are devices that go beyond
monitoring to actually providing treatment, or even “living” in or on the
patient. Examples include the following.
In addition, connected inhalers can alert patients when they leave inhalers at
home, placing them at risk of suffering an attack without their inhaler present,
or when they use the inhaler improperly.
Collecting data from inside the human body is typically a messy and highly
disruptive affair. With ingestible sensors, it’s possible to collect information
from digestive and other systems in a much less invasive way. They provide
insights into stomach pH levels, for instance, or help pinpoint the source of
internal bleeding.
These devices must be small enough to be swallowed easily. They must also be
able to dissolve or pass through the human body cleanly on their own. Several
companies are hard at work on ingestible sensors that meet these criteria.
Smart contact lenses provide another opportunity for collecting healthcare data
in a passive, non-intrusive way. They could also, incidentally, include
microcameras that allow wearers effectively to take pictures with their eyes,
which is probably why companies like Google have patented connected
contact lenses.
Whether they’re used to improve health outcomes or for other purposes, smart
lenses promise to turn human eyes into a powerful tool for digital interactions.
These devices must be small enough and reliable enough to perform surgeries
with minimal disruption. They must also be able to interpret complex
conditions inside bodies in order to make the right decisions about how to
proceed during a surgery. But IoT robots are already being used for
surgery, showing that these challenges can be adequately addressed.
Industry 4.0 is the outcome of the fourth industrial revolution. The fourth
industrial revolution is defined by the integration of conventional, automated
manufacturing with industrial processes powered by intelligent technologies
and autonomously communicating devices.
The term Industry 4.0, or I4.0 or simply I4, emerged in 2011 from an initiative
of the German government that, over the last two decades, has significantly
advocated the digitization of industrial processes.
These are the groupings of networked objects located at the edge of an IoT
ecosystem. These are situated as near as feasible to the data source. These are
often wireless actuators and sensors in an industrial environment. A processing
unit or small computing device and a collection of observing endpoints are
present. Edge IoT devices may range from legacy equipment in a brownfield
environment to cameras, microphones, sensors, and other meters and monitors.
What occurs at the network’s most remote edge? Sensors acquire data from
both the surrounding environment and the items they monitor. Then, they
transform the information into metrics and numbers that an IoT platform can
analyze and transform into actionable insights. Actuators control the processes
occurring in the observed environment. They modify the physical
circumstances in which data is produced.
In this aspect, edge computing provides the quickest answers since data is
preprocessed at the network’s edge, at the sensors themselves. Here, you can
conduct analyses on your digital and aggregated data. Once the relevant
insights have been gathered, one can move forward to the next stage instead of
sending all the collected information. This additional processing decreases data
volume sent to data centers or the cloud.
Edge devices are restricted in their capacity for preprocessing. While you
should strive to reach as near to the edge as is realistically possible to limit the
consumption of native computational power, users will need to utilize the
cloud for processing that is more in-depth and thorough.
At this point, you must choose whether to prioritize the agility and immediacy
of edge devices or the advanced insights of cloud computing. Cloud-based
solutions can perform extensive processing. Here, it is possible to aggregate
data from different sources and provide insights that are unavailable at the
edge.
The sensor data is gathered and turned into digital channels for further
processing at the Internet gateway. After obtaining the aggregated and
digitized data, the gateway transmits it over the internet so that it may be
further processed before being uploaded to the cloud. Gateways continue to be
part of the edge’s data-collecting systems. They remain adjacent to the
actuators and sensors and perform preliminary data processing at the edge.
Protocols are required for the transfer of data across the IIoT system. These
protocols should preferably be industry-standard, well-defined, and secure.
Protocol specifications may contain physical properties of connections and
cabling, the procedure for establishing a communication channel, and the
format of the data sent over that channel. Some of the common protocols used
in IIoT architecture include:
To have access to this competitive advantage, one would be wise to know the
main IIoT applications and how to implement the system.
This ability to remotely control equipment via digital machines and software
also implies that it is possible to control several plants located at different
geographic locations.
This gives companies an unprecedented ability to oversee advances in their
production in real time, while also being able to analyze historical data that
they obtain in relation to their processes. The objective of collecting and using
that data is to support the improvement of processes and to generate an
environment where information-based decisions are a priority.
This system is one of the most effective Industrial IoT applications and works
via sensors that, once installed on the machines and operating platforms,
can send alerts when certain risk factors emerge. For example, the sensors that
monitor robots or machines submit data to the platforms, which analyze the
data received in real time and apply advanced algorithms that can issue
warnings regarding high temperatures or vibrations that exceed normal
parameters.
The use of Industrial IoT systems allows for the automated monitoring of
inventory, certifying whether plans are followed and issuing an alert in case of
deviations. It is yet another essential Industrial IoT application to maintain
a constant and efficient workflow.
Another entry among the most important IIoT applications is the ability to
monitor the quality of manufactured products at any stage: from the raw
materials that are used in the process, to the way in which they are transported
(via smart tracking applications), to the reactions of the end customer once the
product is received.
This information is vital when studying the efficiency of the company and
applying the necessary changes in case failures are detected, with the purpose
of optimizing the processes and promptly detecting issues in the production chain.
It has also been proven that it is essential to prevent risks in more delicate
industries, such as pharmaceutics or food.
12.9.3.6 Supply Chain Optimization
Among the Industrial IoT applications aimed at achieving a higher efficiency,
we can find the ability to have real time in-transit information regarding the
status of a company’s supply chain.
Machines that are part of IIoT can generate real-time data regarding the
situation in the plant. Through the monitoring of equipment damages, plant air
quality and the frequency of illnesses in a company, among other indicators, it
is possible to avoid hazardous scenarios that imply a threat to the workers.
This not only boosts safety in the facility, but also productivity and employee
motivation. In addition, economic and reputation costs that result from poor
management of company safety are minimized.
Most notable industries and companies, from retail to manufacturing, use IIoT
in some way. Here are some notable IIoT examples that have resulted in
positive business outcomes:
The energy and utilities sector utilizes large operational infrastructure,
sometimes in hazardous conditions where human operators are unsuitable. In
these instances, IIoT devices may gather and transmit crucial operational data
without the presence of a human operator. For example, Larsen & Toubro
(L&T) is deploying a remotely monitored Green Hydrogen Station in Gujarat,
India. Using IIoT, L&T may reduce operational and energy expenses and gain
relevant insights into the functioning of the energy plant.
The food and beverage sector relies heavily on the capacity to manufacture and
store products under ideal environmental conditions. IIoT systems may
monitor environmental changes to warn floor managers before product
degradation occurs.
…………………………………………………………………………………
…………………………………………………………………………………
…………………………………………………………………………………
2) Where is IoT mainly used?
…………………………………………………………………………………
…………………………………………………………………………………
3) What are the major features of IoT?
…………………………………………………………………………………
…………………………………………………………………………………
4) Explore and discuss additional applications of IoT which are not presented
in this Unit.
…………………………………………………………………………………
…………………………………………………………………………………
…………………………………………………………………………………
12.10 SUMMARY
2. IoT applications are primarily used to build smart homes, smart
cities, etc. IoT solutions are already in use in the following industries:
Information Technology
With network data collected by embedded IoT devices, digital
communication hardware and software can be managed and optimized.
Transportation and traffic management
IoT technology has a lot to offer the world of retail. Online and in-store
shopping sales figures, together with information gleaned from IoT sensors,
can control warehouse automation and robotics. Much of this relies on RFIDs,
which are already in heavy use worldwide.
Mall locations are iffy things; business tends to fluctuate, and the
advent of online shopping has driven down the demand for brick and
mortar establishments. However, IoT can help analyze mall traffic so
that stores located in malls can make the necessary adjustments that
enhance the customer’s shopping experience while reducing overhead.
Other wearables include virtual glasses and GPS tracking belts. These
small and energy-efficient devices equipped with sensors and software
collect and organize data about users. Top companies like Apple,
Google, Fitbit and Samsung are behind the introduction of such Internet
of Things wearables.
The installation of IoT sensors in fleet vehicles has been a boon for
geo-location, performance analysis, fuel savings, telemetry control,
pollution reduction, and information to improve the driving of
vehicles.
availability of rooms, and quicker assignment of housekeeping tasks
while disabling the operation of doors.