CLOUD COMPUTING

Unit 1 : Introduction to Cloud Computing

Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. This cloud model promotes availability and is composed of five essential characteristics, three service models, and four deployment models.
Examples of cloud computing services: AWS, Microsoft Azure, Google Cloud.

Cloud Data Center Definition:-


A cloud data center moves a traditional on-prem data center off-site. Instead
of personally managing their own infrastructure, an organization leases
infrastructure managed by a third-party partner and accesses data center
resources over the Internet. Under this model, the cloud service provider is
responsible for maintenance, updates, and meeting service level agreements
(SLAs) for the parts of the infrastructure stack under their direct control.
Characteristics of Cloud Computing:-
The following are the five main characteristics that cloud computing offers businesses today.
1. On-demand capabilities: A business will secure cloud-hosting services through a cloud host provider, which could be your usual software vendor. You can add or remove users and change storage, networks, and software as needed. Typically, you are billed with a monthly subscription or on a pay-for-what-you-use basis; terms of subscription and payment vary with each software provider.
2. Broad network access: Your team can access business management solutions
using their smartphones, tablets, laptops, and office computers. They can use
these devices wherever they are located with a simple online access point. Broad
network access includes private clouds that operate within a company’s firewall,
public clouds, or a hybrid deployment.
3. Resource pooling: The cloud enables your employees to enter and use data
within the business management software hosted in the cloud at the same time,
from any location, and at any time. This is an attractive feature for multiple
business offices and field service or sales teams that are usually outside the
office.
4. Rapid elasticity: The cloud is flexible and scalable to suit your immediate business needs. You can quickly and easily add or remove users, software features, and other resources.
5. Measured service: Returning to the affordable nature of the cloud, you only pay for what you use. You and your cloud provider can measure storage levels, processing, bandwidth, and the number of user accounts, and you are billed accordingly (see the billing sketch below).
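
To make the measured-service idea concrete, here is a minimal billing sketch in Python. The resource names and rates are purely illustrative assumptions, not any real provider's price list; real providers meter far more dimensions than these four.

```python
# Minimal sketch of measured-service billing: the provider meters usage
# and bills for exactly what was consumed. Rates and resource names are
# illustrative, not any real provider's price list.

RATES = {
    "storage_gb_month": 0.023,   # $ per GB-month (hypothetical)
    "compute_hours": 0.096,      # $ per vCPU-hour (hypothetical)
    "bandwidth_gb": 0.09,        # $ per GB transferred (hypothetical)
    "user_accounts": 2.00,       # $ per account per month (hypothetical)
}

def monthly_bill(usage: dict) -> float:
    """Multiply each metered quantity by its rate and total the charges."""
    return sum(RATES[item] * qty for item, qty in usage.items())

if __name__ == "__main__":
    usage = {"storage_gb_month": 500, "compute_hours": 720,
             "bandwidth_gb": 150, "user_accounts": 25}
    print(f"This month's bill: ${monthly_bill(usage):.2f}")
```
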
Cloud computing has a number of advantages from a user's point of view, i.e., for the common person:
● Lower cost: Being an online service, cloud computing provides access to applications using a browser, while the applications themselves are stored on distributed servers.
● Free access: Applications can be accessed from any location, making users independent of any one machine.
● More storage area: As cloud computing is a distributed process, it maintains large pools of storage and provides more capacity than personal storage.
● Flexibility: Cloud computing provides a hassle-free environment by upgrading, managing, and installing software on its own; for the user it is effectively a download-free zone.
● Mobility: The user can connect to the cloud from any location.
● Ease of sharing: This is a key component of cloud computing: information, resources, and hardware can be shared for instant delivery.
● Data safety: Files and data remain safe even if a local hard drive crashes or is stolen.
● Availability: Several copies of the data exist and can be used as per user demand.
● Synchronization: Work by different experts on different issues, projects, and locations stays synchronized.
● Rapid elasticity: Resources scale quickly, and workloads can migrate from one platform to another.

Cloud Computing Architecture:-


The cloud architecture is divided into 2 parts i.e.
1. Frontend
2. Backend
The below figure represents an internal architectural view of cloud computing.

Architecture of Cloud Computing

The architecture of cloud computing is the combination of both SOA (Service-Oriented Architecture) and EDA (Event-Driven Architecture). Client infrastructure, application, service, runtime cloud, storage, infrastructure, management, and security are all components of cloud computing architecture.

1. Frontend : The frontend of the cloud architecture refers to the client side of the cloud computing system. That is, it contains all the user interfaces and applications that the client uses to access cloud computing services/resources, for example, a web browser used to access the cloud platform.
● Client Infrastructure – Client infrastructure is part of the frontend component. It contains the applications and user interfaces required to access the cloud platform; in other words, it provides a GUI (Graphical User Interface) to interact with the cloud.

2. Backend : The backend refers to the cloud itself, as used by the service provider. It contains the resources, manages them, and provides security mechanisms. Along with this, it includes large-scale storage, virtual applications, virtual machines, traffic control mechanisms, deployment models, etc.

1. Application – The application in the backend refers to the software or platform that the client accesses; it provides the service in the backend as per the client's requirements.

2. Service – The service in the backend refers to the three major types of cloud-based services (SaaS, PaaS, and IaaS) and manages which type of service the user accesses.

3. Runtime Cloud – The runtime cloud in the backend provides the execution and runtime platform/environment to the virtual machines.

4. Storage – Storage in the backend provides a flexible and scalable storage service and management of stored data.

5. Infrastructure – Cloud infrastructure in the backend refers to the hardware and software components of the cloud, including servers, storage, network devices, virtualization software, etc.

6. Management – Management in the backend refers to the management of backend components such as the application, service, runtime cloud, storage, infrastructure, and other security mechanisms.

7. Security – Security in the backend refers to the implementation of different security mechanisms that secure cloud resources, systems, files, and infrastructure for end users.

8. Internet – The Internet connection acts as the medium, or bridge, between the frontend and backend, establishing the interaction and communication between them.

NIST model
NIST organizes concepts around three major elements: A) characteristics, B) cloud
service models, and C) cloud deployment models as shown in Figure 1.

Five characteristics (broad network access, rapid elasticity, measured service, on-demand self-service, and resource pooling) are considered essential to cloud computing.
The three cloud service models, perhaps the most widely recognized cloud terms, are: infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS). Together these are known as the SPI model.
Public, private, hybrid, and community are the four deployment models for cloud computing.

CLOUD CUBE MODEL


The Cloud Cube Model has four dimensions to categorize cloud formations:

● Internal/External
● Proprietary/Open
● De-Perimeterized/Perimeterized
● Insourced/Outsourced

i. Internal/External
The most basic cloud form is the external/internal dimension, which defines the physical location of the data: it tells us whether the data exists inside or outside your organization's boundary.

Here, data stored using a private cloud deployment is considered internal, and data stored outside that boundary is considered external.

Cloud Cube Model – External/Internal

ii. Proprietary/Open
The second type of cloud formation is proprietary/open. This dimension describes the ownership of the cloud technology and interfaces, and the degree of interoperability it allows, i.e., the ability to transport data between the system and other cloud forms.

The proprietary dimension means that the organization providing the service keeps the data secured and protected under its own ownership.

The open dimension means using technology for which there are multiple suppliers; the user is not constrained in sharing data and collaborating with selected partners using the open technology.

Cloud Cube Model – Proprietary/Open

iii. De-Perimeterized/Perimeterized
The third type of cloud formation is de-perimeterized/perimeterized. To reach this form, the user needs a collaboration-oriented architecture and the Jericho Forum commandments.

The perimeterized and de-perimeterized dimension tells us whether you are operating inside your traditional IT perimeter or outside it.

The perimeterized dimension means continuing to operate within the traditional IT boundary, often signaled by network firewalls.

With the help of a VPN and the operation of a virtual server in your own IP domain, you can extend the organization's perimeter into the external cloud computing domain. This means the user employs their own services to control access.

The de-perimeterized dimension means the system perimeter is architected on the principles outlined in the Jericho Forum commandments. In this dimension, the data is encapsulated with metadata and mechanisms that help protect it and limit inappropriate usage.

Cloud Cube Model – De-Perimeterized/Perimeterized

iv. Insourced/Outsourced
The insourced/outsourced dimension has two states in each of the eight cloud forms. In the outsourced dimension the service is provided by a third party, whereas in the insourced dimension the service is provided by the organization's own staff, under its control.

Few of the organizations that are traditional bandwidth, software, or hardware providers will transition smoothly into becoming cloud service providers.

Organizations seeking to procure cloud services must have the ability to set up legally binding collaboration agreements, and should ensure that their data is deleted from the service provider's infrastructure when the agreement ends.
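
Since the Cloud Cube Model is essentially four yes/no dimensions, it lends itself to a small data structure. The sketch below is an illustrative Python encoding, not part of the Jericho Forum specification; the class and field names are my own.

```python
# Illustrative sketch: encoding the Cloud Cube Model's four dimensions as
# a data structure, so a given cloud formation can be classified. The model
# comes from the Jericho Forum; this class is just one possible rendering.
from dataclasses import dataclass

@dataclass(frozen=True)
class CloudFormation:
    internal: bool          # data inside (True) or outside the org boundary
    proprietary: bool       # proprietary (True) or open technology
    perimeterized: bool     # inside (True) or outside the traditional IT perimeter
    insourced: bool         # run by own staff (True) or by a third party

    def describe(self) -> str:
        return "/".join([
            "Internal" if self.internal else "External",
            "Proprietary" if self.proprietary else "Open",
            "Perimeterized" if self.perimeterized else "De-perimeterized",
            "Insourced" if self.insourced else "Outsourced",
        ])

# Example: a typical public-cloud deployment operated by a third party.
print(CloudFormation(internal=False, proprietary=True,
                     perimeterized=False, insourced=False).describe())
```
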
Cloud Deployment Models:-

The cloud deployment model identifies the specific type of cloud environment based on
ownership, scale, and access, as well as the cloud’s nature and purpose. The location
of the servers you’re utilizing and who controls them are defined by a cloud deployment
model. It specifies how your cloud infrastructure will look, what you can change, and
whether you will be given services or will have to create everything yourself.
Relationships between the infrastructure and your users are also defined by cloud
deployment types.

Different types of cloud computing deployment models are:


1. Public cloud
2. Private cloud
3. Hybrid cloud
4. Community cloud
5. Multi-cloud

1. Public Cloud : The public cloud makes it possible for anybody to access systems and services; because it is open to everyone, it may be less secure. In this model, cloud infrastructure services are provided over the internet to the general public or major industry groups, and the infrastructure is owned by the entity that delivers the cloud services, not by the consumer. It is a type of cloud hosting that allows customers and users to easily access systems and services, with service providers supplying services to a variety of customers. In this arrangement, storage, backup, and retrieval services are given for free, as a subscription, or on a per-use basis. Example: Google App Engine.

Advantages of the public cloud model:


• Minimal Investment: Because it is a pay-per-use service, there is no
substantial upfront fee, making it excellent for enterprises that require immediate
access to resources.
• No setup cost: The entire infrastructure is fully subsidized by the cloud service
providers, thus there is no need to set up any hardware.
• Infrastructure Management is not required: Using the public cloud does not
necessitate infrastructure management.
• No maintenance: The maintenance work is done by the service provider (Not
users).
• Dynamic Scalability: To fulfill your company’s needs, on-demand resources are accessible.
2. Private Cloud: The private cloud deployment model is the exact opposite of the
public cloud deployment model. It’s a one-on-one environment for a single user
(customer). There is no need to share your hardware with anyone else. The distinction
between private and public cloud is in how you handle all of the hardware. It is also
called the “internal cloud” & it refers to the ability to access systems and services within
a given border or organization. The cloud platform is implemented in a cloud-based
secure environment that is protected by powerful firewalls and under the supervision of
an organization’s IT department. The private cloud gives greater flexibility and control over cloud resources.

Advantages of the private cloud model:


• Better Control: You are the sole owner of the property. You gain complete
command over service integration, IT operations, policies, and user behavior.
• Data Security and Privacy: It’s suitable for storing corporate information to
which only authorized staff have access. By segmenting resources within the
same infrastructure, improved access and security can be achieved.
• Supports Legacy Systems: This approach is designed to work with legacy
systems that are unable to access the public cloud.
• Customization: Unlike a public cloud deployment, a private cloud allows a
company to tailor its solution to meet its specific needs.

3. Hybrid cloud : By bridging the public and private worlds with a layer of proprietary
software, hybrid cloud computing gives the best of both worlds. With a hybrid solution,
you may host the app in a safe environment while taking advantage of the public cloud’s
cost savings. Organizations can move data and applications between different clouds
using a combination of two or more cloud deployment methods, depending on their
needs.

Advantages of the hybrid cloud model:


• Flexibility and control: Businesses with more flexibility can design
personalized solutions that meet their particular needs.
• Cost: Because public clouds provide for scalability, you’ll only be responsible
for paying for the extra capacity if you require it.
• Security: Because data is properly separated, the chances of data theft are reduced.

4. Community cloud : It allows systems and services to be accessible by a group of organizations. It is a distributed system created by integrating the services of different clouds to address the specific needs of a community, industry, or business. The infrastructure of the community cloud may be shared between organizations that have shared concerns or tasks. It is generally managed by a third party or by a combination of one or more organizations in the community.

Advantages of the community cloud model:


• Cost Effective: It is cost-effective because the cloud is shared by multiple
organizations or communities.
• Security: Community cloud provides better security.
• Shared resources: It allows you to share resources, infrastructure, etc. with
multiple organizations.
• Collaboration and data sharing: It is suitable for both collaboration and data
sharing.

Cloud service models :-

Cloud Service Models


There are the following three types of cloud service models -
1. Infrastructure as a Service (IaaS)
2. Platform as a Service (PaaS)
3. Software as a Service (SaaS)

1. Infrastructure as a Service (IaaS) :-

IaaS is also known as Hardware as a Service (HaaS). It is a computing infrastructure


managed over the internet. The main advantage of using IaaS is that it helps users to
avoid the cost and complexity of purchasing and managing the physical servers.

Characteristics of IaaS
There are the following characteristics of IaaS -
● Resources are available as a service
● Services are highly scalable
● Dynamic and flexible
● GUI and API-based access
● Automated administrative tasks

Example: DigitalOcean, Linode, Amazon Web Services (AWS), Microsoft Azure, Google Compute Engine (GCE), Rackspace, and Cisco Metacloud.
Benefits of IaaS
IaaS is an efficient and cost-effective way to deploy, operate, and scale your IT infrastructure. It’s
easy to set up and configure, so you can start using it quickly. And because it’s available as a
service from an external provider, you don’t have to worry about building and maintaining your own
infrastructure. IaaS offers the following benefits:

Cost savings: IaaS is more cost-effective than building your own data center. You pay only for what
you need — storage space, CPU power, bandwidth, and other resources. This makes it easier to
scale up or down as needed.

On-demand access: You can instantly provision new resources whenever they’re needed without
having to invest in new hardware and software or hire additional IT staff members. The cloud
provider takes care of all the maintenance and upgrades required to keep your servers online 24/7
with 99 percent uptime guarantees (or better).

Flexibility: With cloud computing, you can easily add more resources when demand increases
without having to upgrade equipment or hire more IT professionals.
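
To illustrate how IaaS resources are provisioned on demand through an API, here is a hedged sketch using AWS's boto3 SDK to launch and terminate a virtual server. It assumes configured AWS credentials; the AMI ID and key-pair name are placeholders to be replaced with your own values.

```python
# Hedged sketch: provisioning an IaaS virtual server via the AWS boto3 SDK.
# Assumes AWS credentials are configured; ImageId and KeyName below are
# placeholders, not real resources.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    KeyName="my-keypair",             # placeholder key pair
)

instance_id = response["Instances"][0]["InstanceId"]
print("Launched instance:", instance_id)

# Measured service in action: pay only while it runs, then terminate.
ec2.terminate_instances(InstanceIds=[instance_id])
```
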

2. Platform as a Service (PaaS) :-

PaaS cloud computing platform is created for the programmer to develop, test, run, and
manage the applications.

1. PaaS is a cloud service model that provides a ready-to-use development environment where developers can focus on writing and executing high-quality code to build customized applications.
2. It helps to create an application quickly without managing the underlying infrastructure. For
example, when deploying a web application using PaaS, you don’t have to install an
operating system, web server, or even system updates. However, you can scale and add
new features to your services.
3. This cloud service model makes the process of developing and deploying applications simpler; it is more expensive than IaaS but less expensive than SaaS.
4. It helps you be more efficient, as you don't need to worry about resource procurement, capacity planning, software maintenance, patching, or any of the other undifferentiated work involved in running your application.
5. Examples of PaaS: Elastic Beanstalk or Lambda from AWS, WebApps, Functions or Azure
SQL DB from Azure, Cloud SQL DB from Google Cloud, or Oracle Database Cloud Service
from Oracle Cloud.
Characteristics of PaaS:

There are the following characteristics of PaaS -


● Accessible to various users via the same development application.
● Integrates with web services and databases.
● Builds on virtualization technology, so resources can easily be scaled up or down
as per the organization's need.
● Supports multiple languages and frameworks.
● Provides the ability to "auto-scale".

Example: AWS Elastic Beanstalk, Windows Azure, Heroku, Force.com, Google App
Engine, Apache Stratos, Magento Commerce Cloud, and OpenShift.

Benefits of PaaS
● PaaS is an easy way to build an application, and it offers a lot of benefits. Here are just a
few:
● Faster development time – You don’t have to build infrastructure before you can start
coding.
● Reduced costs – Your IT department won’t need to spend time on manual deployments or
server management.
● Enhanced security – PaaS providers lock down your applications so that they’re more
secure than traditional web apps.
● High availability – A PaaS provider can make sure your application is always available,
even during hardware failures or maintenance windows.
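
As a sketch of how little code a PaaS workload can require, here is a minimal Flask application of the kind platforms such as AWS Elastic Beanstalk or Google App Engine can run; the platform, not the developer, supplies the OS, web server, patching, and scaling. The module-level name `application` follows Elastic Beanstalk's default convention (an assumption worth verifying for your chosen platform).

```python
# Minimal sketch of a PaaS workload: on platforms such as AWS Elastic
# Beanstalk, deploying roughly this much code is enough -- the platform
# provisions the OS, web server, and scaling.
# Requires: pip install flask
from flask import Flask

# Elastic Beanstalk's Python platform looks for a WSGI app named
# "application" by default (an assumption; check your platform's docs).
application = Flask(__name__)

@application.route("/")
def index():
    return "Hello from a PaaS-hosted app!"

if __name__ == "__main__":
    # Local test run only; in production the platform runs the WSGI server.
    application.run(host="0.0.0.0", port=8000)
```
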

3. Software as a Service (SaaS) :-

SaaS is also known as "on-demand software". It is software in which the applications are hosted by a cloud service provider. Users can access these applications with the help of an internet connection and a web browser.

1. SaaS provides you with a complete product that is run and managed by the service provider.
2. The software is hosted online and made available to customers on a subscription basis or for
purchase in this cloud service model.
3. With a SaaS offering, you don’t need to worry about how the service is maintained or how the underlying infrastructure is managed. You only need to think about how you will use that specific software.
4. Examples of SaaS: Microsoft Office 365, Oracle ERP/HCM Cloud, SalesForce, Gmail, or
Dropbox.
Characteristics of SaaS
There are the following characteristics of SaaS -
● Managed from a central location
● Hosted on a remote server
● Accessible over the internet
● Users are not responsible for hardware and software updates. Updates are applied
automatically.
● The services are purchased on a pay-per-use basis

Benefits of SaaS
The benefits of SaaS are numerous and varied. Many businesses have already made the switch to
SaaS, but some are still skeptical about making the change. Here are some of the top reasons why
you should consider switching to SaaS:

Lower Total Cost of Ownership: One of the biggest benefits of SaaS is that it lowers your total cost
of ownership (TCO) by eliminating hardware expenses and maintenance costs. There is no longer a
need to buy servers or hire IT professionals to maintain or monitor them, which results in fewer
upfront costs and reduced maintenance fees over time.

Better Security: Another benefit of SaaS is improved security. Since most services are hosted on
secure servers in data centers with 24/7 monitoring, there’s less chance for hackers to gain access
or steal your data. This makes SaaS a more secure option for storing sensitive information than
other options like on-premise software or local servers. In fact, according to Gartner’s 2017 Magic
Quadrant report, “Software as a service (SaaS) offerings provide better security than self-hosted
software does.”
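
Beyond the browser, most SaaS products are also consumed programmatically through a web API. The sketch below is a generic illustration using Python's requests library; the endpoint, token, and response fields are hypothetical, since every real SaaS vendor documents its own URLs and authentication scheme.

```python
# Hedged sketch: consuming a SaaS product through its web API. The endpoint
# and token are hypothetical; real vendors (Salesforce, Office 365, etc.)
# each document their own URLs and authentication schemes.
import requests

API_BASE = "https://api.example-saas.com/v1"   # hypothetical endpoint
TOKEN = "your-api-token-here"                  # hypothetical credential

resp = requests.get(
    f"{API_BASE}/contacts",
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=10,
)
resp.raise_for_status()

# Field names are illustrative; a real API defines its own schema.
for contact in resp.json().get("contacts", []):
    print(contact.get("name"), contact.get("email"))
```
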
Impact Of Cloud Computing On Business:-
Today IT is becoming an enabler of business. Business organizations are moving towards automation, business intelligence, and a lot more. Cloud computing is one such tool being used by many business organizations. Cloud computing provides a way for businesses to manage their resources online: it allows business entities to access their information virtually, whereby data can be accessed anytime and anywhere. More and more companies are moving towards cloud computing. Just like a coin has two sides, cloud computing has a positive impact but, at the same time, presents some challenges for business entities.

Let's have a look at them:

● Cost Reduction:
Cloud computing works on the concept of pay-per-use. It helps in reducing the expenses of the company, as resources are acquired only when they are needed and payment is made as per usage. Cloud computing can also cause a dramatic decrease in labor and maintenance costs, because the company is not required to purchase the infrastructure or maintain it. The computer hardware is owned by the vendor and stored in off-site locations, which reduces the demand for in-house staff.


● Scalability:
This is the key benefit of cloud technology, as the client has the flexibility to scale resources up and down as per the requirements of the organization. Businesses need not worry about future demand, as they can easily scale up resources and acquire additional services anytime.

● Flexibility:
Cloud computing provides a lot of flexibility. Customers or users are free to decide which services they want to use and pay per use. Users also have the option to switch from one cloud to another, and can choose public, private, or hybrid storage offerings based on their security needs and other factors.

● Provides Almost Unlimited Storage:
Users can store far more data in the cloud than on their local physical storage devices. Moreover, companies can scale their storage capacity as per their requirements: when the business grows and more storage space is required, they can request an increase in capacity and use the services.

● Disaster Recovery:
A business using cloud services need not prepare complex disaster recovery plans, because the cloud service providers take care of such issues and help the clients recover faster.
Key Business Drivers for Cloud Computing :-

The key drivers are:

Business Continuity/Disaster Recovery

A classic cloud driver for people to use hosted compute and storage resources is BC/DR: keeping the organization running if they cannot access their building(s) or their technology resources. Cloud hosting operates as either the primary or secondary delivery mechanism, helping to eliminate dependence upon on-premise kit or taking over if on-premise services are not available.

Overcoming Resource Shortages

Managed cloud hosting helps with all resource challenges – lack of available systems, real estate, personnel or expertise – by providing companies with dynamic resources available elastically and on-demand, with dedicated experts working as an extension of the customer’s IT department.

Campaign, Special Event or Seasonality

Many organizations, especially in B2C industries like leisure, hospitality, entertainment and retail, run regular one-off campaigns, special events, or have high levels of seasonality in their business model. Managed cloud hosting helps by allowing these companies to “turn up” resources to cope with increased demand and “turn down” when the demand passes. This elasticity – the ability to “cloud burst” – is a huge driver for many businesses and will be significantly more cost-effective than buying hardware that is only used for a short portion of any given year.

Application Upgrades or Performance Issues

Organizations do not always upgrade software (e.g., Exchange) to the latest release immediately, because doing so can entail expensive hardware upgrades. Others experience hardware-related performance issues with applications that put a strain on server infrastructure. This is often a driver for migrating to the cloud, as it means companies can focus on managing apps and leave the hardware challenge to the hosting provider.

Compliance or Regulatory Challenges

Information security is a significant legal, regulatory and compliance challenge. Some industries have best practice guidelines or accreditations that are even more stringent. The cloud offers ways to achieve compliance but also poses threats – many organizations need to work with UK-based managed cloud hosting providers operating ISO 27001-compliant datacenters.

Requirement for Development or UAT Environment

Some of the earliest cloud pioneers were “test and dev” users, and this remains a major driver today. The ability to spin up production-quality environments to write code and for User Acceptance Testing – with the ability to migrate easily into a live environment – for a limited amount of time is a compelling driver for cloud hosting.

Eliminate Hardware Ownership

Many IT departments now focus on managing applications and users, using managed services to deliver the enabling IaaS components.


Advantages and Disadvantages of Cloud Computing:-

Advantages of Cloud Computing :-

1) Back-up and restore data

Once the data is stored in the cloud, it is easier to back up and restore that data using the cloud.

2) Improved collaboration

Cloud applications improve collaboration by allowing groups of people to quickly and easily share information in the cloud via shared storage.

3) Excellent accessibility

The cloud allows us to quickly and easily access stored information anywhere and anytime, across the whole world, using an internet connection. An internet cloud infrastructure increases organizational productivity and efficiency by ensuring that our data is always accessible.

4) Low maintenance cost

Cloud computing reduces both hardware and software maintenance costs for
organizations.

5) Mobility

Cloud computing allows us to easily access all cloud data via mobile.

6) Services in the pay-per-use model

Cloud computing offers Application Programming Interfaces (APIs) to users for accessing services on the cloud; users pay charges as per their usage of the service.

7) Unlimited storage capacity

Cloud offers us a huge amount of storing capacity for storing our important data such as
documents, images, audio, video, etc. in one place.
8) Data security

Data security is one of the biggest advantages of cloud computing. Cloud offers many
advanced features related to security and ensures that data is securely stored and
handled.

Disadvantages of Cloud Computing

A list of the disadvantages of cloud computing is given below -

1) Internet Connectivity

As you know, in cloud computing all data (images, audio, video, etc.) is stored in the cloud, and we access it through an internet connection. If you do not have good internet connectivity, you cannot access this data; there is no other way to access data from the cloud.

2) Vendor lock-in

Vendor lock-in is the biggest disadvantage of cloud computing. Organizations may face
problems when transferring their services from one vendor to another. As different
vendors provide different platforms, that can cause difficulty moving from one cloud to
another.

3) Limited Control

As we know, cloud infrastructure is completely owned, managed, and monitored by the


service provider, so the cloud users have less control over the function and execution of
services within a cloud infrastructure.

4) Security

Although cloud service providers implement the best security standards to store important information, before adopting cloud technology you should be aware that you will be sending all of your organization's sensitive information to a third party, i.e., a cloud computing service provider. While sending data to the cloud, there is a chance that your organization's information could be hacked.
Unit 2 : Virtualization

Virtualization is an approach to pooling and sharing technology resources to simplify management and increase asset use so that IT resources can more readily meet business demand. With servers or networks, virtualization is used to take a single physical asset and make it operate as if it were multiple assets.

Benefits of virtualization are as follows:-

● Virtualization Increases Business Agility
With business and government conditions changing more rapidly today than ever before, it is vital for organizations to be able to respond in an instant. Virtualization makes it much easier and quicker to spin up computing resources. Although it used to take days (if not weeks) to install, configure, and begin operating server, network, and storage resources, these processes can now be accomplished in minutes. Virtualization allows dynamic provisioning of resources to support applications when capacity is needed most, helping close the gap between what an organization needs and what IT can deliver.
● Virtualization Increases IT Operational Flexibility
Virtualization can help you deal with hardware failures or application/operating
system crashes. Virtualization software can be configured to keep track of virtual
machines and, if one goes down, immediately restart another instance on the
same machine or even a different machine.
● Virtualization reduces IT Operations Costs
Virtualization can make maintenance and upgrades significantly easier and less
expensive. On servers, it enables a running virtual machine to be migrated to
another server very quickly, freeing up the original server to be worked on.
Virtualized PCs allow for centralized upgrades, patching, and repair because
some or all of the PC’s applications and data reside in the data center rather than
on the PC itself. Virtualization reduces IT operational costs in a more direct way.
By allowing software virtual machines to take the place of physical machines,
operational costs related to hardware maintenance may be cut by 60 percent or
more, depending on the ratio of virtual servers to physical servers.
● High Availability
High availability can be thought of as disaster or downtime avoidance: you want business applications to remain online and accessible in the event of failures of hardware, software, or facilities. Server virtualization can help avoid both planned and unplanned downtime, including the ability to move live, running virtual servers from an affected host to another host. Some shared storage systems also feature no-single-point-of-failure architectures to keep storage online through a variety of failure scenarios. These architectures, when combined with server virtualization's high-availability capabilities, minimize downtime without the complexities of traditional server clustering approaches.
● Quality of Service
Virtualization, when managed effectively and efficiently, can raise the quality of service provided by IT organizations. By implementing consistent management practices backed by software systems that track and manage IT infrastructure, whether physical or virtual, IT organizations can ensure that IT services avoid outages and uncoordinated activities.
● Disaster recovery
Disaster recovery is like life insurance for IT organizations. When disaster strikes,
IT operations must be brought back online as quickly as possible. Virtual
machines can be easily transferred within seconds or minutes to a backup data
center; in tough circumstances, many virtual machines can be run on a smaller
number of physical servers, reducing the cost of physical resources required for
disaster recovery.
Implementation levels of Virtualization:-

Instruction Set Architecture Level:


At the ISA level, virtualization is performed by emulating a given ISA by the ISA of the
host machine.
For example, MIPS binary code can run on an x86-based host machine with the help of
ISA emulation. With this approach, it is possible to run a large amount of legacy binary
code written for various processors on any given new hardware host machine.
Instruction set emulation leads to virtual ISAs created on any hardware machine. The
basic emulation method is through code interpretation.
An interpreter program interprets the source instructions to target instructions one by
one. One source instruction may require tens or hundreds of native target instructions to
perform its function. Obviously, this process is relatively slow. For better performance,
dynamic binary translation is desired. This approach translates basic blocks of dynamic
source instructions to target instructions.
The basic blocks can also be extended to program traces or super blocks to increase
translation efficiency. Instruction set emulation requires binary translation and
optimization.
A virtual instruction set architecture (V-ISA) thus requires adding a processor-specific
software translation layer to the compiler.
Hardware Abstraction Level:
Hardware-level virtualization is performed right on top of the bare hardware. On one hand, this approach generates a virtual hardware environment for a VM; on the other hand, the process manages the underlying hardware through virtualization. The idea is to virtualize a computer's resources, such as its processors, memory, and I/O devices, with the intention of improving the hardware utilization rate through multiple concurrent users. The idea was implemented in the IBM VM/370 in the early 1970s. More recently, the Xen hypervisor has been applied to virtualize x86-based machines to run Linux or other guest OS applications.
Operating System Level:
This refers to an abstraction layer between the traditional OS and user applications. OS-level virtualization creates isolated containers on a single physical server, and the OS instances utilize the hardware and software in data centers. The containers behave like real servers. OS-level virtualization is commonly used in creating virtual hosting environments to allocate hardware resources among a large number of mutually distrusting users. It is also used, to a lesser extent, in consolidating server hardware by moving services on separate hosts into containers or VMs on one server.
Library Support Level:
Most applications use APIs exported by user-level libraries rather than using lengthy
system calls by the OS. Since most systems provide well-documented APIs, such an
interface becomes another candidate for virtualization.
Virtualization with library interfaces is possible by controlling the communication link
between applications and the rest of a system through API hooks.
User-Application Level:
Virtualization at the application level virtualizes an application as a VM; on a traditional OS, an application usually runs as a process, so this is also known as process-level virtualization.

Hopefully you won't encounter issues with your newly deployed virtual infrastructure. But if you do, you should have documentation and diagrams of your environment, along with support information and a support contract for your servers, SAN, network, storage, and virtualization software.

Virtualization at OS Level:-
● OS Level Virtualization is a type of server virtualization technology which works
at the OS layer.
● The physical server and a single instance of the operating system are virtualized into multiple isolated partitions, where each partition replicates a real server.
● The OS kernel will run a single operating system and provide that operating
system functionality to each of the partitions.
● Operating-system-level virtualization is not as flexible as other virtualization
approaches since it cannot host a guest operating system different from the host
one, or a different guest kernel.
● For example, with Linux, different distributions are fine, but other operating
systems such as Windows cannot be hosted.
● Operating system virtualization (OS virtualization) is a server virtualization
technology that involves tailoring a standard operating system so that it can run
different applications handled by multiple users on a single computer at a time.
● The operating systems do not interfere with each other even though they are on
the same computer.
● In OS virtualization, the operating system is altered so that it operates like
several different, individual systems.
● The virtualized environment accepts commands from different users running
different applications on the same machine. The users and their requests are
handled separately by the virtualized operating system.
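
Containers are the most familiar face of OS-level virtualization today. Assuming Docker is installed and its daemon is running, the short sketch below launches an isolated container and shows that it reports the host's kernel version, precisely because OS-level virtualization shares one kernel across all partitions.

```python
# Hedged sketch of OS-level virtualization in practice: launching an
# isolated container that shares the host's kernel. Assumes Docker is
# installed and running; "alpine" is a small public image.
import subprocess

result = subprocess.run(
    ["docker", "run", "--rm", "alpine", "uname", "-r"],
    capture_output=True, text=True, check=True,
)

# The kernel version printed inside the container matches the host's,
# because OS-level virtualization shares one kernel across partitions.
print("Kernel seen inside the container:", result.stdout.strip())
```
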
VIRTUALIZATION STRUCTURE / HYPERVISOR ARCHITECTURE :-

Virtualization is achieved through software known as the virtual machine monitor, or hypervisor. This software is used in two ways, forming two different structures of virtualization, namely bare-metal virtualization and hosted virtualization.

Bare-metal virtualization hypervisors: (TYPE I HYPERVISOR):-

● Deployed as a bare-metal installation: the first thing installed on the server, acting as the operating system, is the hypervisor.
● The hypervisor communicates directly with the underlying physical server hardware, manages all hardware resources, and supports the execution of VMs.
● Hardware support is typically more limited, because the hypervisor usually has
limited device drivers built into it.
● Well suited for enterprise data centers, because it usually comes with advanced
features for resource management, high availability and security.
● Bare-metal virtualization hypervisors examples: VMware ESX and ESXi,
Microsoft Hyper-V, Citrix Systems XenServer.

Hosted virtualization hypervisors: (TYPE II HYPERVISOR):-

● The software is not installed onto the bare-metal, but instead is loaded on top of
an already live operating system, so it requires you to first install an OS(Host
OS).
● The Host OS integrates a hypervisor that is responsible for providing the virtual
machines(VMs) with their virtual platform interface and for managing all context
switching scheduling, etc.
● The hypervisor will invoke drivers or other component of the Host OS as needed.
● On the Host OS you may run Guest VMs, but you can also run native
applications
● This approach provides better hardware compatibility than bare-metal
virtualization, because the OS is responsible for the hardware drivers instead of
the hypervisor.
● A hosted virtualization hypervisor does not have direct access to hardware and
must go through the OS, which increases resource overhead and can degrade
virtual machine (VM) performance.
● Even so, the latency is minimal, and with today’s modern software enhancements the hypervisor can still perform well.
● Common for desktops, because they allow you to run multiple OSes. These
virtualization hypervisor types are also popular for developers, to maintain
application compatibility on modern OSes.
● Because there are typically many services and applications running on the host
OS, the hypervisor often steals resources from the VMs running on it
● The most popular hosted virtualization hypervisors are: VMware Workstation,
Server, Player and Fusion; Oracle VM VirtualBox; Microsoft Virtual PC; Parallels
Desktop.

• The figure below shows the structure of TYPE I and TYPE II virtualization.
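
In practice, both hypervisor types are usually driven through a management API rather than by hand. As an illustration, the sketch below uses the libvirt Python bindings, assuming the libvirt-python package is installed and a local QEMU/KVM hypervisor is running, to list the virtual machines a hypervisor is hosting.

```python
# Hedged sketch: querying a hypervisor through the libvirt Python bindings.
# Assumes the libvirt-python package is installed and a local QEMU/KVM
# hypervisor is available at the default URI.
import libvirt

conn = libvirt.open("qemu:///system")  # connect to the local hypervisor
try:
    for dom in conn.listAllDomains():
        state = "running" if dom.isActive() else "stopped"
        print(f"VM {dom.name()}: {state}")
finally:
    conn.close()
```
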


Need for virtualization:

Server virtualization enables different OSes to share the same network and makes it easy to move an OS between networks without affecting the applications running on it; this allows application portability. Virtualization allows many instances of an application to be created, letting them scale up and down as required, and enables load balancing, allowing companies to handle peak loads. Storage virtualization enables efficient utilization of existing resources. Virtualization also allows services to be provided over the internet.

Need of virtualization & abstraction in WA:-

Cloud computing applications combine their resources into pools that can be assigned to users on demand, thus attaining efficiency, increased utilization, reasonable cost, and scalability.

Key techniques for creating pools of resources are:

1) Abstraction
2) Virtualization

Virtualization enables the creation of an abstraction layer that maps a logical address to a physical resource, hiding the complexity and ensuring that changes to the underlying system do not affect the client requesting a service. Virtualization allows efficient management of resources, as the mapping of virtual resources is both dynamic and flexible.

Abstraction & virtualization are needed for the following reasons:

● Server virtualization enables different OSes to share the same network and makes it easy to move an OS between networks without affecting the applications running on it. This allows application portability.
● Virtualization allows many instances of an application to be created, letting them scale up and down as per requirement.
● Virtualization enables load balancing, allowing companies to handle peak loads.
● Storage virtualization enables efficient utilization of existing resources.
● It allows services to be provided over the internet.

Types of Virtualization:
1.Application Virtualization.
2.Network Virtualization.
3.Desktop Virtualization.
4.Storage Virtualization.
5.Server Virtualization.
6.Data virtualization.

1. Application Virtualization: Application virtualization gives a user remote access to an application from a server. The server stores all personal information and other characteristics of the application, yet the application can still run on a local workstation through the internet. An example would be a user who needs to run two different versions of the same software. Technologies that use application virtualization include hosted applications and packaged applications.

2. Network Virtualization: The ability to run multiple virtual networks, each with a separate control and data plane, coexisting on top of one physical network. The virtual networks can be managed by individual parties that may not trust one another. Network virtualization provides a facility to create and provision virtual networks – logical switches, routers, firewalls, load balancers, Virtual Private Networks (VPNs), and workload security – within days or even weeks.

3. Desktop Virtualization: Desktop virtualization allows a user's OS to be remotely stored on a server in the data center, letting the user access their desktop virtually, from any location, on a different machine. Users who want specific operating systems other than Windows Server will need a virtual desktop. The main benefits of desktop virtualization are user mobility, portability, and easy management of software installation, updates, and patches.

4. Storage Virtualization: Storage virtualization is an array of servers managed by a virtual storage system. The servers aren't aware of exactly where their data is stored and instead function more like worker bees in a hive. It makes storage from multiple sources manageable and usable as a single repository. Storage virtualization software maintains smooth operations, consistent performance, and a continuous suite of advanced functions despite changes, breakdowns, and differences in the underlying equipment.

5. Server Virtualization: This is a kind of virtualization in which the masking of server resources takes place. Here, the central (physical) server is divided into multiple virtual servers by changing the identity number and processors, so each system can operate its own operating system in an isolated manner, while each sub-server knows the identity of the central server. It increases performance and reduces operating cost by deploying main server resources into sub-server resources. It is beneficial for virtual migration, reduced energy consumption, reduced infrastructure cost, etc.

6. Data virtualization: This is the kind of virtualization in which data is collected from various sources and managed in a single place, without needing to know the technical details of how the data is collected, stored, and formatted. The data is then arranged logically, so that its virtual view can be accessed remotely by interested people, stakeholders, and users through various cloud services. Many big companies, such as Oracle, IBM, AtScale, and CData, provide such services.

It can be used to perform various kinds of tasks, such as:
• Data-integration
• Business-integration
• Service-oriented architecture data-services
• Searching organizational data
The Xen Architecture:-
Xen is an open-source hypervisor program developed by Cambridge University. It is a microkernel hypervisor, which separates the policy from the mechanism.

The Xen hypervisor implements all the mechanisms, leaving the policy to be handled by Domain 0. As shown in the figure, Xen does not include any device drivers natively; it just provides a mechanism by which a guest OS can have direct access to the physical devices. As a result, the size of the Xen hypervisor is kept small.

Xen provides a virtual environment located between the hardware and the OS.
The core components of a Xen system are the hypervisor, kernel, and
applications. The organization of the three components is important.

Like other virtualization systems, many guest OSes can run on top of the hypervisor. The guest OS that has control ability is called Domain 0, and the others are called Domain U. Domain 0 is a privileged guest OS of Xen.

It is first loaded when Xen boots without any file system drivers being available.
Domain 0 is designed to access hardware directly and manage devices.
Therefore, one of the responsibilities of Domain 0 is to allocate and map
hardware resources for the guest domains (the Domain U domains). For
example, Xen is based on Linux and its security level is C2. Its management VM
is named Domain 0, which has the privilege to manage other VMs implemented
on the same host. If Domain0 is compromised, the hacker can control the entire
system. So, in the VM system, security policies are needed to improve the
security of Domain 0.

Domain 0, behaving as a VMM, allows users to create, copy, save, read, modify, share, migrate, and roll back VMs as easily as manipulating a file, which flexibly provides tremendous benefits to users. However, it also brings a series of security problems during the software life cycle and data lifetime. Traditionally, a machine's lifetime can be envisioned as a straight line where the current state of the machine is a point that progresses monotonically as the software executes. During this time, configuration changes are made, software is installed, and patches are applied. In a virtual environment, the VM state is akin to a tree: at any point, execution can go into N different branches, where multiple instances of a VM can exist at any point in this tree at any given time. VMs are allowed to roll back to previous states in their execution or rerun from the same point many times.
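
As a small illustration of Domain 0's management role, the sketch below drives Xen's standard xl toolstack from Python. It assumes a Xen host where xl is available and run from Domain 0; the guest name in the commented lines is a placeholder.

```python
# Hedged sketch: driving Xen's management toolstack from Domain 0.
# Assumes a Xen host where the standard "xl" command is available;
# the guest name below is a placeholder.
import subprocess

def xl(*args: str) -> str:
    """Run an xl subcommand in Domain 0 and return its output."""
    return subprocess.run(["xl", *args], capture_output=True,
                          text=True, check=True).stdout

# List all domains; Domain-0 itself always appears in the output.
print(xl("list"))

# Operations such as pause/unpause act on guests (Domain U) by name.
# xl("pause", "guest-vm")    # placeholder guest name
# xl("unpause", "guest-vm")
```
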
Binary Translation with Full Virtualization
Depending on implementation technologies, hardware virtualization can be classified into two categories: full virtualization and host-based virtualization. Full virtualization does not need to modify the host OS; it relies on binary translation to trap and virtualize the execution of certain sensitive, nonvirtualizable instructions. The guest OSes and their applications consist of noncritical and critical instructions. In a host-based system, both a host OS and a guest OS are used, with a virtualization software layer built between them. These two classes of VM architecture are introduced next.

1. Full Virtualization

With full virtualization, noncritical instructions run on the hardware directly, while critical instructions are discovered and replaced with traps into the VMM to be emulated by software. Both the hypervisor and VMM approaches are considered full virtualization. Why are only critical instructions trapped into the VMM? Because binary translation can incur a large performance overhead. Noncritical instructions do not control hardware or threaten the security of the system, but critical instructions do. Therefore, running noncritical instructions on hardware not only promotes efficiency but also ensures system security.
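
To make the trap-and-emulate control flow concrete, here is a deliberately simplified toy model in Python. Real VMMs operate on machine code via binary translation; this sketch only mimics the dispatch logic, and the instruction names are invented for illustration.

```python
# Toy model of full virtualization's trap-and-emulate flow. Real VMMs work
# on machine code; this sketch only illustrates the control flow:
# noncritical instructions run directly, critical ones trap into the VMM.

CRITICAL = {"out_port", "set_page_table", "disable_interrupts"}

def run_directly(instr: str) -> None:
    print(f"[hardware] executed {instr} at native speed")

def vmm_emulate(instr: str) -> None:
    # The VMM emulates the sensitive instruction against virtual state,
    # so the guest never touches real hardware control registers.
    print(f"[VMM trap] emulated {instr} against virtual hardware state")

def execute(guest_instructions: list[str]) -> None:
    for instr in guest_instructions:
        if instr in CRITICAL:
            vmm_emulate(instr)   # trapped into the VMM
        else:
            run_directly(instr)  # direct execution, no overhead

execute(["add", "load", "set_page_table", "store", "disable_interrupts"])
```
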

2. Binary Translation of Guest OS Requests Using a VMM

This approach was implemented by VMware and many other software companies. As shown in Figure 3.6, VMware puts the VMM at Ring 0 and the guest OS at Ring 1. The VMM scans the instruction stream and identifies the privileged, control- and behavior-sensitive instructions. When these instructions are identified, they are trapped into the VMM, which emulates the behavior of these instructions. The method used in this emulation is called binary translation. Therefore, full virtualization combines binary translation and direct execution. The guest OS is completely decoupled from the underlying hardware. Consequently, the guest OS is unaware that it is being virtualized.

The performance of full virtualization may not be ideal, because it involves binary translation, which is rather time-consuming. In particular, full virtualization of I/O-intensive applications is a really big challenge. Binary translation employs a code cache to store translated hot instructions to improve performance, but this increases the cost of memory usage. At the time of this writing, the performance of full virtualization on the x86 architecture is typically 80 percent to 97 percent that of the host machine.

3. Host-Based Virtualization

An alternative VM architecture is to install a virtualization layer on top of the host OS. This host OS is still responsible for managing the hardware. The guest OSes are installed and run on top of the virtualization layer. Dedicated applications may run on the VMs, and certainly some other applications can also run with the host OS directly. This host-based architecture has some distinct advantages, as enumerated next. First, the user can install this VM architecture without modifying the host OS. The virtualizing software can rely on the host OS to provide device drivers and other low-level services. This simplifies the VM design and eases its deployment.

Second, the host-based approach appeals to many host machine configurations. Compared to the hypervisor/VMM architecture, however, the performance of the host-based architecture may be low: when an application requests hardware access, it involves four layers of mapping, which downgrades performance significantly. When the ISA of a guest OS is different from the ISA of the underlying hardware, binary translation must also be adopted. So although the host-based architecture has flexibility, the performance is too low to be useful in practice.

Para-Virtualization with Compiler Support

Para-virtualization needs to modify the guest operating systems. A para-virtualized VM provides special APIs requiring substantial OS modifications in user applications. Performance degradation is a critical issue of a virtualized system: no one wants to use a VM if it is much slower than a physical machine. The virtualization layer can be inserted at different positions in a machine's software stack; para-virtualization attempts to reduce the virtualization overhead, and thus improve performance, by modifying only the guest OS kernel.

Figure 3.7 illustrates the concept of a para-virtualized VM architecture. The guest operating systems are para-virtualized; they are assisted by an intelligent compiler that replaces the nonvirtualizable OS instructions with hypercalls, as illustrated in Figure 3.8. The traditional x86 processor offers four instruction execution rings: Rings 0, 1, 2, and 3. The lower the ring number, the higher the privilege of the instructions being executed. The OS is responsible for managing the hardware and the privileged instructions executed at Ring 0, while user-level applications run at Ring 3. The best example of para-virtualization is KVM, described below.
1. Para-Virtualization Architecture

When the x86 processor is virtualized, a virtualization layer is inserted between the hardware and the OS. According to the x86 ring definition, the virtualization layer should also be installed at Ring 0. Different instructions at Ring 0 may cause some problems. In Figure 3.8, we show that para-virtualization replaces nonvirtualizable instructions with hypercalls that communicate directly with the hypervisor or VMM. However, when the guest OS kernel is modified for virtualization, it can no longer run on the hardware directly.

Although para-virtualization reduces the overhead, it incurs other problems. First, its compatibility and portability may be in doubt, because it must support the unmodified OS as well. Second, the cost of maintaining para-virtualized OSes is high, because they may require deep OS kernel modifications. Finally, the performance advantage of para-virtualization varies greatly with workload. Compared with full virtualization, however, para-virtualization is relatively easy and more practical; the main problem with full virtualization is its low performance under binary translation, and speeding up binary translation is difficult. Therefore, many virtualization products employ the para-virtualization architecture. The popular Xen, KVM, and VMware ESX are good examples.

2. KVM (Kernel-Based VM)

KVM is a Linux para-virtualization system and has been part of the Linux kernel since version 2.6.20. Memory management and scheduling are carried out by the existing Linux kernel; KVM does the rest, which makes it simpler than a hypervisor that controls the entire machine. KVM is a hardware-assisted para-virtualization tool, which improves performance and supports unmodified guest OSes such as Windows, Linux, Solaris, and other UNIX variants.

3. Para-Virtualization with Compiler Support

Unlike the full virtualization architecture, which intercepts and emulates privileged and sensitive instructions at runtime, para-virtualization handles these instructions at compile time. The guest OS kernel is modified to replace the privileged and sensitive instructions with hypercalls to the hypervisor or VMM. Xen assumes such a para-virtualization architecture. The guest OS running in a guest domain may run at Ring 1 instead of Ring 0, which means it cannot execute some privileged and sensitive instructions; these are instead implemented as hypercalls to the hypervisor. After the instructions are replaced with hypercalls, the modified guest OS emulates the behavior of the original guest OS. On a UNIX system, a system call involves an interrupt or service routine; a hypercall similarly invokes a dedicated service routine in Xen.

Example 3.3 VMware ESX Server for Para-Virtualization

VMware pioneered the software market for virtualization. The company has developed virtualization
tools for desktop systems and servers as well as virtual infrastructure for large data centers. ESX is
a VMM or a hypervisor for bare-metal x86 symmetric multiprocessing (SMP) servers. It accesses
hardware resources such as I/O directly and has complete resource management control. An
ESX-enabled server consists of four components: a virtualization layer, a resource manager,
hardware interface components, and a service console, as shown in Figure 3.9. To improve
performance, the ESX server employs a para-virtualization architecture in which the VM kernel
interacts directly with the hardware without involving the host OS.

The VMM layer virtualizes the physical hardware resources such as CPU, memory, network and disk controllers, and human interface devices. Every VM has its own set of virtual hardware resources. The resource manager allocates CPU, memory, disk, and network bandwidth and maps them to the virtual hardware resource set of each VM created. Hardware interface components are the device drivers and the VMware ESX Server File System. The service console is responsible for booting the system, initiating the execution of the VMM and resource manager, and relinquishing control to those layers. It also facilitates the process for system administrators.
CPU Virtualization :-

CPU virtualization runs programs and instructions through a virtual machine, giving the user the feeling of working on a physical workstation. CPU virtualization is not the same as emulation: an emulator performs the same way a normal computer does, replicating a machine in software and generating the same output a physical machine would, which offers great portability and lets a single platform act like multiple platforms. CPU virtualization, by contrast, executes instructions on the underlying physical processor rather than through such software replication.

With CPU virtualization, each virtual machine acts as a physical machine with its own virtual processors. The physical resources are shared among the virtual machines as hosting requests arrive, so each VM receives a time-share of the underlying CPU; a single physical processor can thereby appear to be several processors.

Memory Virtualization

Virtual memory virtualization is similar to the virtual memory support provided by modern operating systems. In a traditional execution environment, the operating system maintains mappings of virtual memory to machine memory using page tables, a one-stage mapping from virtual memory to machine memory. All modern x86 CPUs include a memory management unit (MMU) and a translation lookaside buffer (TLB) to optimize virtual memory performance. In a virtual execution environment, however, virtual memory virtualization involves sharing the physical system RAM and dynamically allocating it to the physical memory of the VMs.
That means a two-stage mapping process must be maintained by the guest OS and the VMM, respectively: virtual memory to physical memory, and physical memory to machine memory. Furthermore, MMU virtualization must be supported transparently to the guest OS. The guest OS continues to control the mapping of virtual addresses to the physical memory addresses of its VM, but it cannot directly access the actual machine memory; the VMM is responsible for mapping the guest physical memory to the actual machine memory. Figure 3.12 shows the two-level memory mapping procedure.
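A minimal sketch of this two-stage mapping, using plain Python dictionaries in place of real page tables (the addresses are arbitrary assumptions):

guest_page_table = {0x1000: 0x2000}   # stage 1: guest virtual -> guest physical (guest OS)
vmm_page_table   = {0x2000: 0x9000}   # stage 2: guest physical -> machine memory (VMM)

def translate(guest_vaddr):
    guest_paddr = guest_page_table[guest_vaddr]   # maintained by the guest OS
    machine_addr = vmm_page_table[guest_paddr]    # maintained by the VMM
    return machine_addr

print(hex(translate(0x1000)))   # -> 0x9000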

Since each page table of a guest OS has a corresponding page table in the VMM, the VMM page table is called the shadow page table. Nested page tables add another layer of indirection to virtual memory: the MMU already handles virtual-to-physical translations as defined by the OS, and the physical memory addresses are then translated to machine addresses using another set of page tables defined by the hypervisor. Since modern operating systems maintain a set of page tables for every process, the shadow page tables multiply accordingly, and the performance overhead and memory cost can be very high.
VMware uses shadow page tables to perform virtual-memory-to-machine-memory address translation. The processor's TLB hardware then maps virtual memory directly to machine memory, avoiding the two levels of translation on every access. When the guest OS changes a virtual-to-physical memory mapping, the VMM updates the shadow page tables to enable a direct lookup. The AMD Barcelona processor has featured hardware-assisted memory virtualization since 2007, providing hardware assistance for the two-stage address translation in a virtual execution environment through a technology called nested paging.
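The shadow-page-table update can be sketched the same way: when the guest edits its own page table, the trap handler recomputes a direct virtual-to-machine entry so the hardware can translate in one step. This is a hedged illustration, not VMware's actual data structures:

vmm_p2m = {0x2000: 0x9000, 0x3000: 0xA000}  # guest physical -> machine (VMM-owned)
shadow_page_table = {}                       # guest virtual -> machine (what the MMU uses)

def on_guest_mapping_change(vaddr, guest_paddr):
    # Trap taken when the guest OS edits its own page table; the VMM
    # recomputes the direct virtual-to-machine entry for fast lookup.
    shadow_page_table[vaddr] = vmm_p2m[guest_paddr]

on_guest_mapping_change(0x1000, 0x3000)
print(hex(shadow_page_table[0x1000]))   # -> 0xa000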

I/O Virtualization

I/O virtualization involves managing the routing of I/O requests between virtual devices and the shared physical hardware. At the time of this writing, there are three ways to implement I/O virtualization: full device emulation, para-virtualization, and direct I/O. Full device emulation is the first approach; generally, it emulates well-known, real-world devices.

All the functions of a device or bus infrastructure, such as device enumeration, identification, interrupts, and DMA, are replicated in software. This software is located in the VMM and acts as a virtual device. The I/O access requests of the guest OS are trapped in the VMM, which interacts with the I/O devices. The full device emulation approach is shown in Figure 3.14.
A single hardware device can be shared by multiple VMs that run concurrently. However, software emulation runs much slower than the hardware it emulates [10,15]. The para-virtualization method of I/O virtualization, typically used in Xen, is also known as the split driver model: it consists of a frontend driver and a backend driver. The frontend driver runs in Domain U and the backend driver runs in Domain 0, and they interact with each other via a block of shared memory. The frontend driver manages the I/O requests of the guest OSes, while the backend driver manages the real I/O devices and multiplexes the I/O data of the different VMs. Although para-I/O-virtualization achieves better device performance than full device emulation, it comes with a higher CPU overhead.
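A toy model of the split driver flow may help: the frontend enqueues requests into a structure standing in for the shared-memory ring, and the Domain 0 backend drains and multiplexes them. This is a conceptual sketch, not Xen code:

from collections import deque

shared_ring = deque()          # stands in for the shared-memory block

class FrontendDriver:          # runs in the guest (Domain U)
    def __init__(self, domain_id):
        self.domain_id = domain_id
    def read_block(self, sector):
        # Post the guest's I/O request onto the shared ring.
        shared_ring.append((self.domain_id, "read", sector))

class BackendDriver:           # runs in Domain 0, owns the real device
    def service(self):
        # Drain requests from all guests and multiplex them onto hardware.
        while shared_ring:
            dom, op, sector = shared_ring.popleft()
            print(f"[dom0] {op} sector {sector} on behalf of domain {dom}")

FrontendDriver(domain_id=1).read_block(42)
FrontendDriver(domain_id=2).read_block(7)
BackendDriver().service()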
Direct I/O virtualization lets the VM access devices directly. It can achieve close-to-native performance without high CPU costs. However, current direct I/O virtualization implementations focus on networking for mainframes, and there are many challenges for commodity hardware devices. For example, when a physical device is reclaimed for later reassignment (as required by workload migration), it may have been left in an arbitrary state (e.g., DMA to arbitrary memory locations) that can cause incorrect behavior or even crash the whole system. Since software-based I/O virtualization requires very high device-emulation overhead, hardware-assisted I/O virtualization is critical. Intel VT-d supports the remapping of I/O DMA transfers and device-generated interrupts. The architecture of VT-d provides the flexibility to support multiple usage models that may run unmodified, special-purpose, or "virtualization-aware" guest OSes.
Another way to help I/O virtualization is self-virtualized I/O (SV-IO) [47]. The key idea of SV-IO is to harness the rich resources of a multicore processor: all tasks associated with virtualizing an I/O device are encapsulated in SV-IO. It provides virtual devices and an associated access API to the VMs, and a management API to the VMM. SV-IO defines one virtual interface (VIF) for every kind of virtualized I/O device, such as virtual network interfaces, virtual block devices (disks), virtual camera devices, and others. The guest OS interacts with the VIFs via VIF device drivers. Each VIF consists of two message queues, one for outgoing messages to the device and one for incoming messages from the device, and each VIF has a unique ID identifying it in SV-IO.
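The VIF abstraction is easy to picture in code. Below is a hedged sketch (field names invented) of a VIF as described above: a unique ID plus one outgoing and one incoming message queue:

from collections import deque

class VIF:
    def __init__(self, vif_id, kind):
        self.vif_id = vif_id          # unique ID identifying this VIF in SV-IO
        self.kind = kind              # e.g., "network", "block", "camera"
        self.outgoing = deque()       # messages to the (virtual) device
        self.incoming = deque()       # messages back from the device

    def send(self, msg):
        self.outgoing.append(msg)     # guest driver posts a request

    def poll(self):
        # Guest driver picks up a completion, if one has arrived.
        return self.incoming.popleft() if self.incoming else None

nic = VIF(vif_id=0, kind="network")
nic.send({"op": "tx", "payload": b"hello"})
print(nic.vif_id, nic.kind, len(nic.outgoing))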

Virtualization in Multi-Core Processors

Virtualizing a multi-core processor is relatively more complicated than virtualizing a uni-core processor. Though multicore processors claim higher performance by integrating multiple processor cores in a single chip, multi-core virtualization has raised new challenges for computer architects, compiler constructors, system designers, and application programmers. There are mainly two difficulties: application programs must be parallelized to use all cores fully, and software must explicitly assign tasks to the cores, which is a very complex problem.

Concerning the first challenge, new programming models, languages, and libraries are needed to make parallel programming easier. The second challenge has spawned research on scheduling algorithms and resource management policies, yet these efforts cannot yet balance performance, complexity, and other issues well. Worse, as technology scales, a new challenge called dynamic heterogeneity is emerging, mixing fat CPU cores and thin GPU cores on the same chip, which further complicates multi-core or many-core resource management. The dynamic heterogeneity of hardware infrastructure mainly comes from less reliable transistors and increased complexity in using the transistors [33,66].

5.1 Physical versus Virtual Processor Cores


Wells et al. [74] proposed a multicore virtualization method that gives hardware designers an abstraction of the low-level details of the processor cores. This technique alleviates the burden and inefficiency of managing hardware resources in software. It is located under the ISA and remains unmodified by the operating system or VMM (hypervisor). Figure 3.16 illustrates the technique: a software-visible VCPU can move from one core to another, and its execution can be temporarily suspended when there is no appropriate core on which it can run.

5.2 Virtual Hierarchy

The emerging many-core chip multiprocessors (CMPs) provide a new computing landscape. Instead of supporting time-sharing jobs on one or a few cores, we can use the abundant cores in a space-sharing manner, where single-threaded or multithreaded jobs are simultaneously assigned to separate groups of cores for long time intervals. This idea was originally suggested by Marty and Hill [39]. To optimize for space-shared workloads, they propose using virtual hierarchies to overlay a coherence and caching hierarchy onto a physical processor. Unlike a fixed physical hierarchy, a virtual hierarchy can adapt to fit how the work is space-shared, for improved performance and performance isolation.
Today's many-core CMPs use a physical hierarchy of two or more cache levels that statically determine cache allocation and mapping. A virtual hierarchy is a cache hierarchy that can adapt to fit the workload or mix of workloads [39]. The hierarchy's first level locates data blocks close to the cores that need them for faster access, establishes a shared-cache domain, and establishes a point of coherence for faster communication. When a miss leaves a tile, it first attempts to locate the block (or sharers) within the first level. The first level can also provide isolation between independent workloads; a miss at the first level can then invoke an access to the second level.
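The two-level lookup can be sketched as follows; the dictionaries are stand-ins for the per-cluster first-level directories and the globally shared second level (block and tile names are made up):

first_level = {                # per-VM-cluster directories (isolation between VMs)
    "vm0_vm3": {"blockA": "tile2"},
    "vm4_vm7": {"blockB": "tile6"},
}
second_level = {"blockA": "tile2", "blockB": "tile6", "blockC": "memory"}

def locate(cluster, block):
    if block in first_level[cluster]:          # level 1: intra-cluster, fast
        return first_level[cluster][block]
    return second_level[block]                 # level 2: globally shared

print(locate("vm0_vm3", "blockA"))  # hit within the VM's own cluster
print(locate("vm0_vm3", "blockC"))  # falls through to the shared level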

The idea is illustrated in Figure 3.17(a). Space sharing is applied to assign three workloads to three clusters of virtual cores: VM0 and VM3 for a database workload, VM1 and VM2 for a web server workload, and VM4–VM7 for a middleware workload. The basic assumption is that each workload runs in its own VM; however, space sharing applies equally within a single operating system. Statically distributing the directory among tiles can do much better, provided operating systems or hypervisors carefully map virtual pages to physical frames. Marty and Hill suggested a two-level virtual coherence and caching hierarchy that harmonizes with the assignment of tiles to the virtual clusters of VMs.

Figure 3.17(b) illustrates a logical view of such a virtual cluster hierarchy in two levels. Each VM operates in an isolated fashion at the first level, which minimizes both miss access time and performance interference with other workloads or VMs. Moreover, the shared resources of cache capacity, interconnect links, and miss handling are mostly isolated between VMs. The second level maintains a globally shared memory, which facilitates dynamically repartitioning resources without costly cache flushes. Furthermore, maintaining globally shared memory minimizes changes to existing system software and allows virtualization features such as content-based page sharing. A virtual hierarchy adapts to space-shared workloads like multiprogramming and server consolidation. Figure 3.17 shows a case study focused on consolidated server workloads in a tiled architecture. This many-core mapping scheme can also optimize for space-shared multiprogrammed workloads in a single-OS environment.
Unit 3 : Cloud Computing Services

A) Anything-as-a-Service or Everything-as-a-Service (XaaS) :-

● XaaS is a collective term said to stand for a number of things including "X as a
service," "anything as a service" or "everything as a service."
● The acronym refers to an increasing number of services that are delivered over
the Internet rather than provided locally or on-site.
● Everything-as-a-Service, or XaaS, originated as software-as-a-service (SaaS)
and has since expanded to include services such as infrastructure-as-a-service,
platform-as-a-service, storage-as-a-service, desktop-as-a-service, disaster
recovery-as-a-service, and even nascent operations like marketing-as-a-service
and healthcare-as-a-service.
● XaaS, or 'anything as a service', is the delivery of IT as a service through hybrid cloud computing and refers to one or a combination of Software as a Service (SaaS), Infrastructure as a Service (IaaS), Platform as a Service (PaaS), Communications as a Service (CaaS), or Monitoring as a Service (MaaS).
XaaS brings with it at least three big advantages:
1. Flexible scaling
○ The beauty in outsourcing just about every technology-related business
process is that you won’t be bearing the true costs of up- or down-scaling
your processes and operations in response to strategic or business
changes. That burden will fall to your XaaS provider.
2. Access to evergreen technology
○ Technological evolution has long since been barreling along at an
exponential pace. Sadly, for most of us, our budgets only increase linearly,
if at all. Keeping up with new developments is difficult less from an
implementation perspective and more from a cost perspective. XaaS
changes that for you, the end user, because, as before, the burden of
keeping up with advances lies with the provider and not with you. The key
idea here is that making good use of XaaS means that your operations
stay evergreen at no extra cost!
3. Integrating everything
○ XaaS lets technology professionals concentrate on what they do best. XaaS offers a great opportunity for the IT department to redirect focus to more forward-thinking and strategic initiatives while confidently leveraging XaaS offerings.
Examples of major XaaS's:

● DRaaS (Disaster-Recovery-as-a-Service): Recovers not just data, but also all the infrastructure and applications that were in place prior to a man-made or natural disaster. Ensures business continuity.

● DaaS (Desktop-as-a-Service): A virtual desktop managed over the cloud; the provider maintains and upgrades desktop apps for customers. The provider typically secures, stores, and backs up its customers' data.

● PaaS (Platform-as-a-Service): Provides the end user with a full development platform with which to create, run, and host his or her own applications. Likely the best-known PaaS is Google App Engine, which lets end users develop using Python, Java, PHP, and Go.

● IaaS (Infrastructure-as-a-Service): Exactly what the acronym says: the end user installs virtually nothing onsite. Servers, storage, networks, and so on are completely outsourced and hosted elsewhere.
B) Cloud Security-as-a-Service :-

Cloud Security as a Service (SECaaS)

● Security as a Service (SECaaS) is a cloud business model that outsources cybersecurity services.
● SECaaS is inspired by the software-as-a-service (SaaS) model and provides security-related services to protect consumers' information.
● The SECaaS model provides security facilities on a subscription basis, which makes it cost-effective.
● SECaaS does not require on-premises hardware, easily scales security facilities as the business grows, reduces the maintenance burden, and eases the workload on the business's in-house security team; hence it is gaining popularity in the corporate field day by day.

Facilities provided by Security as a Service (SECaaS)


SECaaS provides a large number of security facilities to ensure information protection. Some of the major facilities provided by SECaaS are as follows:
● Antivirus Management
● Business Continuity and Disaster Recovery as a Service (BC/DR or BCDR)
● Continuous Monitoring
● Data Loss Prevention (DLP)
● Email Security
● Encryption
● Identity and Access Management (IAM)
● Identifying Security Risk Patterns
● Intrusion Management
● Network Security
● Patch Updates
● Security Assessment
● Security Information and Event Management (SIEM)
● Spam Filtering
● Standard Compliance Management
● Virus Control
● Vulnerability Management
● Web Security

Advantages associated with Security as a Service (SECaaS)


● Faster Provisioning - Consumers get access to security services immediately.
● Good Agility - SECaaS can be scaled up or down as required and is available on demand at any place and any time. There is no uncertainty around deployment and updates, as everything is managed by the SECaaS provider.
● Cost-effective - SECaaS reduces the financial burden on businesses, which is its major advantage. It integrates security facilities without any on-premises hardware or a big budget, and it eliminates the need for costly security experts and analysts. Businesses pay only for what they require.
● Latest Security Tools and Updates - Provides the latest, up-to-date antivirus and other security tools with the latest patches and virus definitions.

Challenges for Security as a Service (SECaaS)


Alongside its many advantages, SECaaS also has a few disadvantages that create challenges, such as:
● High vulnerability to large-scale attacks - SECaaS handles security uniformly across customers; hence, if a security breach succeeds against one request, security may be broken for all requests. Businesses cannot afford to leave their data loose and vulnerable to hacker attacks.
● Data storage locations - SECaaS stores backed-up data offsite at different places around the world. The employer can therefore never be sure about the physical security of the data on those servers, and can do little to prevent a physical theft attempt beyond relying on the authorities.
The major challenge for SECaaS is to maintain a reputation of reliability and superiority over standard non-cloud services.

Major Security as a Service (SECaaS) Providers


CloudPassage, McAfee, Lacework, FireEye, Qualys, Palo Alto Networks, Tenable, Trend Micro, VMware, Symantec, etc.
C) Identity Management-as-a-Service :-
1. Identity Management as a Service (IdMaaS) is a cloud-based identity management solution that allows customers to take advantage of identity management (IdM) technologies without having to invest in the underlying hardware or applications.
2. IdMaaS automates the management of user identities, access rights, and resources across multiple clouds, IT environments, and applications, often providing breakthrough capabilities that are not available in traditional applications.
3. With the growing use of the cloud, a cloud-native architecture is required to manage identity for services on various private and public clouds.
4. The architecture must provide a portable, pervasive identity across multiple clouds. Identity and access management as a service builds on the basic idea of software as a service (SaaS), which emerged as vendors became able to effectively "stream" services over the Web rather than provide them as licensed software packages, such as on CDs and in boxes.
5. Vendors then started offering a wider range of cloud-delivered products, such as platform as a service (PaaS), communications as a service (CaaS), and infrastructure as a service (IaaS).
6. Network virtualization and the abstraction of hardware into logical tools further accelerated this development. IAMaaS helps companies set up customized levels of security for an IT architecture, either as a whole or in parts.
7. The essential idea is that a third-party service vendor sets up user identities and determines what these individual users can do within a system. Like older identity and access management tools, these services work through a complicated process of tagging and labeling individual users and user behaviors, and then creating detailed security authentication for them.
8. IAMaaS is even more applicable to companies that allow employees to use or bring their own devices for work. In many cases, the use of different devices requires tighter security to protect trade secrets and other confidential information. Most enterprises are now migrating traditional IT infrastructures to the cloud, and they are finding the need to securely manage identities for data center and cloud applications and systems.
9. The rapid rise of threats across the dynamic enterprise perimeter is posing new challenges, such as:
•Integrating cloud technologies with older, on-premises systems and centrally managing
governance of IT resources
•Controlling compliance costs associated with deploying flexible and scalable
cloud-based identity governance
•Demonstrating compliance with a growing number of regulatory requirements, including
new data privacy rules in the European Union
10. IdMaaS governs and controls access to critical applications and services using automation and reporting from a structured environment.
11. IdMaaS includes modern application connectors that let you add a new service with appropriate role management within days rather than months, resulting in shorter time to market, reduced costs, and increased productivity.
12. It also allows you to concentrate on your core business without worrying about finding and retaining staff with high-demand identity management skills.
13. Service features:
IdMaaS enables your organization to adopt a business approach to consistent security policies and compliance. Key components include:
•Governance Platform. Supports compliance, provisioning and access management
processes across your organization by centralizing identity data and providing a single
location from which to model roles, policies and risk.
•Compliance Management. Streamlines execution of compliance controls and
improves audit performance by automating access certifications and policy
management.
•Lifecycle Management. Simplifies the process for creating, changing, and revoking access privileges by combining self-service access request and password management with automated life-cycle event management for each user (see the sketch after this list).
•Identity Intelligence Services. Transforms technical identity data scattered across
multiple enterprise systems, or in the cloud, into centralized, easily understood and
business-relevant information.
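As an illustration of lifecycle management, the sketch below grants and revokes an access privilege through a hypothetical IdMaaS REST API; the URL, endpoint paths, and field names are assumptions for illustration, not a real vendor's interface:

import json
from urllib import request

BASE = "https://idm.example.com/api/v1"   # hypothetical tenant endpoint

def grant_access(user, role):
    # Provision: create the entitlement for the user (assumed endpoint).
    body = json.dumps({"user": user, "role": role}).encode()
    req = request.Request(f"{BASE}/access", data=body,
                          headers={"Content-Type": "application/json"},
                          method="POST")
    return request.urlopen(req)

def revoke_access(user, role):
    # Deprovision on a lifecycle event such as a role change or termination.
    req = request.Request(f"{BASE}/access/{user}/{role}", method="DELETE")
    return request.urlopen(req)

# e.g., an HR "termination" event would trigger revoke_access("jdoe", "crm-admin")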
14. Benefits of IdMaaS:
•Improved user productivity: Productivity improvement comes from simplifying the
sign-on interface and the ability to quickly change access rights. Productivity is likely to
improve further where you provide user self-service.
•Improved customer and partner service: Customers and partners also benefit from a
more streamlined, secure process when accessing applications and data.
•Reduced help desk costs: IT help desks typically experience fewer calls about
forgotten passwords when an identity management process is implemented.
•Reduced IT costs: Identity management enables automatic provisioning, providing or
revoking users’ access rights to systems and applications. Provisioning happens
whether you automate it or not.
15. Attributes of IdMaaS Providers:
•Compliance
•Access Provisioning and De-Provisioning
•User Self-Service
•Single Sign-On
•Integration with In-house IdM or Directories
•Security Around IdMaaS
•Setup and Running Costs

D) Database-as-a-Service :-
Like SaaS, PaaS, and IaaS, DBaaS (also known as a Managed Database Service) is a cloud computing service. It allows users to access and use a cloud database system without purchasing and operating it themselves.
DBaaS and cloud databases fall under Software as a Service (SaaS), whose demand is growing fast.

Simply put, Database as a Service (DBaaS) is self-service, on-demand database consumption coupled with automation of operations. Like other cloud computing services, DBaaS follows a pay-per-use payment structure: you pay only for what you use. DBaaS provides the same functions as standard traditional and relational database models, so by using DBaaS, organizations can avoid handling database configuration, management, upgrades, and security themselves.

DBaaS includes a database manager component that controls all underlying database instances via an API. This API is accessible to the user through a management console, typically a web application, which the user can use to manage and configure the database and even provision or deprovision database instances.
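To make the manager-plus-API idea concrete, here is a minimal sketch of a control-plane object that a management console could call to provision or deprovision instances (all names are illustrative, not a real provider's interface):

class DatabaseManager:
    def __init__(self):
        self.instances = {}

    def provision(self, name, engine="postgres", storage_gb=50):
        # The provider allocates and configures the instance; the tenant
        # never installs or patches the database software itself.
        self.instances[name] = {"engine": engine, "storage_gb": storage_gb,
                                "status": "running"}
        return self.instances[name]

    def deprovision(self, name):
        self.instances.pop(name, None)   # billing stops: pay per use

mgr = DatabaseManager()                  # what the management console talks to
print(mgr.provision("orders-db"))
mgr.deprovision("orders-db")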
Key Characteristics of DBaaS :
● A fully managed database service helps to set up, manage, and administer your database in the cloud, and also provides services for hardware provisioning and backup.
● DBaaS makes databases effortlessly available to database consumers from various backgrounds and levels of IT expertise.
● Provides on-demand services.
● Based on the resources available, it delivers a flexible database platform that tailors itself to the environment's current needs.
● A team of experts is at your disposal, continuously monitoring the databases.
● Automates database administration and monitoring.
● Leverages existing servers and storage.

Advantages of DBaaS :
1. The service provider is responsible for managing and maintaining the database hardware and software.
2. The hefty power bills for ventilation and cooling to keep the servers running are eliminated.
3. An organization that subscribes to DBaaS is free from hiring database developers or constructing a database system in-house.
4. By making use of the latest automation, easy scale-outs in the cloud are possible at low cost and in less time.
5. The human resources needed to manage the upkeep of the system are eliminated.
6. Since DBaaS is hosted off-site, the organization is free from the hassles of power or network failure.

Disadvantages of DBaaS :
1. Traditional enterprises may have objections to cloud-based services in general.
2. In case of a significant failure of the DBaaS server or network, the organization might lose access to its data.
3. Companies already equipped with infrastructure and IT-related human resources might not find DBaaS solutions economically viable.
4. Network problems intrinsic to the cloud can impact the performance of a DBaaS.
5. Features offered in a typical RDBMS may not always be available in a DBaaS system.
6. The use of DBaaS may result in revenue loss in other areas, such as software updates and hardware management.

E) Storage-as-a-Service :-
Storage as a Service (STaaS) is a type of cloud computing service in which service providers offer data storage capacity to their customers.

As with other cloud services, customers access the data storage over an Internet connection. Customers only need to pay according to usage (Pay as You Go), and there is no need for large initial capital expenditures. Budget management can also be easier because capital expenditure (Capital Expense/Capex) is converted into operational expenditure (Operational Expense/Opex).
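The pay-as-you-go arithmetic is simple to illustrate. A minimal sketch; the prices below are made-up assumptions, not any provider's actual rates:

PRICE_PER_GB_STORED = 0.02   # USD per GB-month (hypothetical rate)
PRICE_PER_GB_EGRESS = 0.08   # USD per GB transferred out (hypothetical rate)

def monthly_bill(stored_gb, egress_gb):
    # Opex-style billing: the cost follows consumption, with no upfront Capex.
    return stored_gb * PRICE_PER_GB_STORED + egress_gb * PRICE_PER_GB_EGRESS

print(monthly_bill(stored_gb=500, egress_gb=40))  # 500*0.02 + 40*0.08 = 13.2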

By utilizing STaaS, the burden of managing data storage devices in the corporate office (on-premise) becomes lighter, and the space used for storage can be reduced and used for other purposes. Companies can still store some data on-premise, especially data that is considered very sensitive.

Storage as a Service use case example:-


STaaS can be used for all applications and purposes that require data storage. For
example, a company might use a magnetic tape device to store important, but rarely
accessed, data (cold data). This magnetic tape library is then stored in the office or sent
to another location deemed secure. By utilizing STaaS services, the data can be
transferred and stored securely without the hassles of managing magnetic tape devices.
Data can be accessed at any time when needed.

Another use of Storage as a Service is data backup for disaster recovery purposes. In this case, data from a system is periodically backed up. In the event of an incident that causes data loss in the main system, for example a disaster (fire, flood, earthquake) or a cyber attack, data can be restored from a backup. The system can quickly work as before, with minimum downtime.

Enterprises can also use STaaS for business application development and testing.
Application development sometimes requires the use of large capacity data storage, but
it is only used temporarily. With STaaS, application developers can access
large-capacity data storage devices easily and quickly.

Some aspects of Storage as a Service:-


STaaS services have several noteworthy aspects when compared with other types of cloud services, especially the IaaS (Infrastructure as a Service) type.

1. Costs: Generally, expenses can be regulated so that they are more efficient, and
costs can be reduced. The company only pays data storage fees according to
the capacity used and the amount of data transferred. For cold data, paying for
the amount of data stored may be cheaper. However, if the data is expected to
be accessed frequently, you should take a close look at the available service
contracts.
2. Flexibility: Companies can add storage capacity immediately, without having to
go through a lengthy and expensive procurement process. Data can also be
accessed from anywhere via an Internet connection.
3. Security: Some customers may be reluctant to share sensitive data with third
parties. It is therefore very important to choose a proven STaaS provider with
experience in securing Cloud services. An alternative that is also starting to
attract attention is the on-premise STaaS model, where data storage is still
carried out at the company installation, but is managed by the STaaS service
provider.

F) Compliance-as-a-Service :-

Cloud compliance issues arise whenever a cloud consumer makes use of cloud storage and backup services. Cloud computing by its very nature spans multiple jurisdictions. The laws of the country where a request originates may not match the laws of the country in which the request is processed, and possibly neither matches the laws of the country in which the service is delivered. Compliance is more than simply handing an anonymous service token to an identity so that access to a resource can be obtained; it is a difficult issue that needs considerable expertise.

While Compliance as a Service (CaaS) is still largely under discussion, some examples that fall into this category already exist as general products for a cloud computing architecture. A CaaS application would need to act as a trusted third party. CaaS may need to be architected as its own layer of a Service Oriented Architecture (SOA) in order to be reliable. A CaaS would need to be able to manage cloud relationships, understand security rules and procedures, know how to handle data and administer privacy, deliver incident response, archive, and allow the system to be queried. This is a tall order, but CaaS has the potential to be a valuable value-added service.

A CaaS system can be built inside a private cloud in which the data is under the control of a single entity, ensuring that the data remains under that entity's secure control and that transactions are audited. Indeed, major cloud computing compliance systems have been created with the help of private clouds. A well-implemented CaaS service may measure the risk of servicing compliance and insure or indemnify tenants against that risk.

CaaS could also be brought to bear as a mechanism to guarantee that an e-mail conforms to particular standards, something that could be a new electronic service of a network of national postal systems and might help end the scourge of spam.

The major services that should additionally be provided in a Compliance as a Service (CaaS) offering:

1. Database access control
2. Separation of duties
3. Annual risk assessment
4. Application management
5. Change control
6. Data discovery
7. Data masking
8. Incident response
9. Policy creation and enforcement
10. Real-time data protection
11. Repair of vulnerabilities
12. Personnel training
13. Service configuration

Advantages of Compliance as a Service (CaaS) –


1. In the cloud, encryption is quite arduous to track; Compliance as a Service simplifies this. To fulfill the needs of end users and organizations around governance, including compliance, they use a cloud provider's service. These services deliver pre-built behaviors that comply with specific regulations, such as required encryption levels.
2. Compliance as a Service offerings are configurable, i.e., no development is required. This is cost-effective for organizations, and it reduces the maintenance effort associated with changing regulations as well as internal and external corporate policies.

Disadvantages of Compliance as a Service (CaaS) –


1. Cloud service consumers will be held responsible for any issues with the compliance services. It is therefore mandatory that customers validate the compliance services to ensure that there are no issues.
2. It is impossible for Compliance as a Service providers to support all regulations across all countries. Also, since all the services are cloud based, there is always a risk that a provider will stop providing a service at any time because of low usage, so end users and organizations become dependent on service providers. Overall, these are some critical aspects that fall under the drawbacks of CaaS.

G) Monitoring-as-a-Service :-

Monitoring as a Service (MaaS) provides the security solutions that are essential for organizations reliant on IT infrastructure. However, effective and efficient monitoring requires up-to-date technology, experts with advanced technical skills, and scalable security processes, all of which come at tremendous expense.

Prior to the advent of the electronic tools now used for providing security services, human staff performed all these monitoring activities, which was ineffective.

MaaS provides an effective solution to this problem. It provides 24/7 real-time monitoring, reports any issue across the security infrastructure, and secures the crucial data of its customers.

Compared to a traditional security operations centre, MaaS wins on two important counts:

1. The total cost of ownership of a traditional security operations centre is higher.
2. Traditional security operations are less effective.

Features of MaaS
1. Protection Against External and Internal Threats
The security monitoring services analyze the alerts from security devices 24/7 in real time. The security analysts collect data from various security devices to recognize threats and then apply effective measures to respond to them.

● Early Detection
The information security team detects and discloses security threats as soon as they appear. The threats are reported to the customer via email.
These reports describe the vulnerability in the security of the system and its effect on the systems or applications, and may also include the protective measures that can be taken against the vulnerability.
● Dashboard Interface
The dashboard interface brings platform, control, and service monitoring into one place. It presents your system and its resources in a single view and makes it easier for the information security team to monitor the operational status of the platform. The team can look for the cause of a vulnerability by navigating back in time and visualizing how the system was performing before and after the problem occurred.
Once the root cause of the vulnerability is understood, preventive measures are suggested to resolve the issue.
● Log Centralization and Analysis
This monitoring solution involves the correlation and matching of log entries. Analyzing these correlations and matches establishes a benchmark for operational performance and provides an index of security threats.
An alarm is raised if an incident moves above the benchmark parameters; this alarm or warning is then analyzed by security experts responsible for rapid response to such threat incidents (see the sketch after this list).
● Vulnerabilities Detection and Management
This service provides periodic automated testing that exposes threats to the information system over the Internet. The service identifies threats such as unauthorized access to administrative services and services that have not been updated for a long time.
● Continuous System Patching/Upgrade and Fortification
The level of security is enhanced by continuous system patching. System patching means updating a program to fix its vulnerabilities and bugs. It is very important, as it not only raises the security level of your system but also keeps the applications and software installed on your system up to date.
● Intervention, Forensics, and Help Desk Services
We are all familiar with the help desk that provides quick assistance for problems. Similarly, the MaaS vendor has a team of knowledgeable experts that intervenes whenever a threat is detected, providing 24/7 assistance to support and maintain the applications and infrastructure.
Whenever a threat is detected, forensic analysis is performed to determine how much time, cost, and effort it will take to fix it.
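The benchmark-based alerting described under log centralization can be sketched in a few lines; the event format and threshold below are invented for illustration:

from collections import Counter

BASELINE_FAILED_LOGINS = 5          # assumed benchmark per monitoring window

def analyze(log_lines):
    # Count event types; here the first token of each entry is the event name.
    counts = Counter(line.split()[0] for line in log_lines)
    if counts.get("FAILED_LOGIN", 0) > BASELINE_FAILED_LOGINS:
        print("ALARM: failed-login rate above benchmark, escalate to analyst")

analyze(["FAILED_LOGIN user=a"] * 8 + ["LOGIN user=b"])  # raises the alarm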

2. Delivering Business Values

Most customers frame this as a build-versus-buy decision, evaluated by calculating the return on investment (ROI). When the costs are actually calculated, building a security monitoring infrastructure along with a security monitoring team turns out to be more expensive than outsourcing to a MaaS service provider.

MaaS vendors have a complete information security infrastructure along with a team of skilled experts who stay current with the latest technology, and they provide scalable services, which is an advantage for their customers. If a company tries to build its own security infrastructure, it must deal with staff attrition, technical updates, scheduling operations, and identifying vulnerabilities and finding solutions to resolve them. Outsourcing to a MaaS provider eliminates all these headaches.

To evaluate the loss incurred by an external or internal incident, the parameters to take into account are the amount of the loss, the frequency of losses, and the estimated probability of the loss occurring. This is not an exact method of calculating loss, but it helps in tracking security metrics.

When outsourcing any service, you must consider and quantify the risk involved; doing so raises your confidence that the investment will succeed. A scalable service is more valuable, as customers can obtain additional business benefit for some additional cost.

3. Real-Time Log Monitoring Enables Compliance

Log monitoring is the process of recording log messages to a file, which helps developers or administrators understand how a system or application is being used. Real-time log monitoring enables quick detection of errors and failed processes and services.

It also provides alerts for network and protocol failures and warns developers of infrastructure problems. MaaS automates this time-consuming process.

Advantages of MaaS
1. MaaS provides a ready-to-use monitoring tool to its customers at a minimal price.
2. MaaS lets customers focus on their business instead of worrying about the information security of their enterprise.
3. MaaS provides 24/7 assistance to its customers, who can report issues and get immediate assistance from the MaaS team.
H) Communication-as-a-Service :-
Communication as a Service (CaaS) is a cloud-based solution provided by cloud vendors. CaaS is a specialized variation of Software as a Service (SaaS), one of the three basic service models of cloud computing. When we talk about communication, recall how many ways we can communicate with others: via text message, voice call, and video call.

CaaS providers manage the hardware and software needed to deliver Voice over IP (VoIP) for voice communication, Instant Messaging (IM) for text communication, and video conferencing for video communication.
The CaaS model provides economical services because users do not have to bear the expense of buying and managing communication equipment. CaaS is favourable for small IT companies that are on the verge of expansion. Let us discuss the features of CaaS.

Features of CaaS
1. Integrated and Unified Communication

The advanced unified communication features include chat, multimedia conferencing, Microsoft Outlook integration, real-time presence, "soft" phones (software-based telephones), video calls, unified messaging, and mobility.

Nowadays, CaaS vendors introduce new features to their services much faster than ever before. It has become economical for providers to do so because end users benefit from the provider's scalable platform infrastructure, and the many end users of the service ultimately share the cost of each enhancement.

2. No Investment Required

As noted above, it is the sole responsibility of the CaaS vendor to manage the hardware and software deployed to provide the communication service. The customer pays only for the service received from the CaaS vendor, not for the communication infrastructure deployed to provide it.

3. Flexibility & Scalability

Customers can outsource communication services from CaaS vendors, paying only for what they have demanded, and can extend their service requirements according to need. This brings flexibility and scalability to communication services and makes the service economical.

4. No Risk of Obsolescence
The CaaS vendors keep updating the hardware and software that provide the communication services to meet the changing demands of the market, so customers using the services do not have to worry about service obsolescence.

5. No Maintenance Cost Incurred

The customer outsourcing the CaaS service does not have to bear the cost of
maintaining the equipment deployed for providing communication services.

6. Ensure Business Continuity

If a calamity affects your business's geographical region, how long can you continue your business? That is why companies nowadays distribute their data across geographically dispersed data centres, which maintain redundancy and help them recover soon after any catastrophic event.

CaaS providers adopt the same approach in order to provide voice continuity, and communication continuity in general, even if a catastrophic event strikes.

How Communication as a Service (CaaS) Works?


Business users opting for CaaS can selectively deploy communication features (hardware and software) throughout their office on a pay-as-you-go basis. CaaS vendors design comprehensive, flexible, and easy-to-understand service plans for their users.

The quality of the communication service is assured by the CaaS vendor under the service level agreement. CaaS is a fully hosted, cloud-based solution that can be implemented over multiple operating systems, such as Windows, Linux, Android, and iOS. Because of this, CaaS can be accessed through many types of connected devices, such as mobile phones, handsets, tablets, TV sets, laptops, and PCs.
CaaS has brought a revolutionary change to person-to-person, person-to-machine, and machine-to-machine communication.

CaaS abstracts the network's capability to handle peak load for its customers, which makes it flexible. Network capacity, devices, and area coverage can be extended based on the demands of the CaaS customers, and they can be extended dynamically so that resources are not wasted.

Risk Involved in CaaS?


As mentioned earlier, CaaS vendors are solely responsible for the quality of the service they provide, so from the customer's perspective there is little risk involved in taking services from a CaaS vendor.

Customers need not worry about the service becoming obsolete, as CaaS providers perform periodic updates and manage the replacement of the hardware and software involved to keep the platform technically up to date.

Advantages of Communication as a Service (CaaS)


● CaaS provides an economical way to deliver communication services, sparing customers the investment in the hardware and software required to deliver them.
● The CaaS vendor provides 24/7 service to its customers.
● Customers receiving services from a CaaS vendor do not have to invest in managing the components of CaaS.
● The CaaS vendor offers flexible service, charging on a pay-as-you-go basis.
● CaaS provides scalable services based on customer demand.
● CaaS provides a hosted and managed solution, offering a complete communication solution managed by a single vendor.
● From the customer's perspective, there is no risk of the service becoming obsolete, as the vendor is responsible for upgrading the carrier platform.

CaaS is all about recognizing the use cases where this technology can be
implemented to utilize the full value potential of telecommunication.

I) Network-as-a-Service :-
Network-as-a-service (NaaS) is a cloud service model in which customers rent
networking services from cloud providers. NaaS allows customers to operate their own
networks without maintaining their own networking infrastructure. Like other cloud
services, NaaS vendors run networking functions using software, essentially allowing
companies to set up their own networks entirely without hardware. All they need is
Internet connectivity.

NaaS can replace virtual private networks (VPNs), multiprotocol label switching (MPLS)
connections, or other legacy network configurations. It can also replace on-premise
networking hardware such as firewall appliances and load balancers. A newer model for
routing traffic and applying security policies, NaaS has had a major impact on enterprise
networking architecture.

Benefits of NaaS include:

● Flexibility: Cloud services offer more flexibility and greater customization. Changes are made to the network via software, not hardware, and IT teams are often able to reconfigure their corporate networks on demand.
● Scalability: Cloud services like NaaS are naturally more scalable than traditional,
hardware-based services. Enterprise NaaS customers can simply purchase more
capacity from a vendor instead of purchasing, plugging in, and turning on more
hardware.
● Access from anywhere: Depending on how a cloud-based network is configured,
users may be able to access it from anywhere — and on any device — without
using a VPN, although this introduces the need for strong access control. Ideally,
all a user needs is an Internet connection and login credentials.
● No maintenance: The cloud provider maintains the network, managing software
and hardware upgrades.
● Bundled with security: NaaS makes it possible for a single provider to offer both
networking services and security services like firewalls. This results in tighter
integration between the network and network security.
● Cost savings: This advantage depends on the vendor. However, purchasing cloud
services instead of building one's own services often results in cost savings:
cloud customers do not need to purchase and maintain hardware, and the vendor
already has the servers they need to provide the service.

J) Disaster-Recovery-as-a-Service :-


Disaster Recovery as a Service (DRaaS) is disaster recovery hosted by a third party. It
involves replication and hosting of physical or virtual servers by the provider, to provide
failover in the event of a natural disaster, power outage, or other disaster that affects
business continuity.

The basic premise of DRaaS is that, in the event of a real disaster, the remote vendor, which typically has a globally distributed architecture, is less likely to be impacted than the customer. This allows the vendor to support the customer in a worst-case disaster recovery scenario, in which a disaster results in the complete shutdown of the organization's physical facilities or computing resources.

Third-party DRaaS vendors can provide failover for on-premise or cloud computing
environments, billed either on-demand, according to actual usage, or through ongoing
retainer agreements. DRaaS requirements and expectations are typically recorded in
service level agreements (SLAs).

DRaaS Operating Models


There are three primary models used by disaster recovery as a service providers: managed, assisted, and self-service.

Managed DRaaS
In the managed DRaaS model, third parties take full responsibility for disaster recovery.
Choosing this option requires organizations to work closely with DRaaS providers to
keep all infrastructure, application, and service changes up to date. If you don’t have the
expertise and time to manage your own disaster recovery, this is the best option.

Assisted DRaaS
If you want to take responsibility for certain aspects of your disaster recovery plan, or if you have custom applications that may be difficult for a third party to take over, assisted DRaaS may be a better choice. In this model, the service provider offers services and expertise that can help optimize the disaster recovery process, but the customer is responsible for implementing some or all of the disaster recovery plans.


Self-Service DRaaS
The cheapest option is self-service DRaaS, where customers are responsible for planning, testing, and managing disaster recovery, while the vendor provides backup management software and hosts backups and virtual machines in remote locations. This model is offered by all major cloud providers: Amazon, Microsoft Azure, and Google Cloud.

When using this model, careful planning and testing is required to ensure that
operations can be immediately failed over to the vendor’s remote data center, and easily
recovered when local resources are restored. This option is ideal for organizations with
in-house disaster recovery and cloud computing expertise.

How Does DRaaS Work?


The DRaaS provider provides infrastructure that serves as the customer’s disaster
recovery site when a disaster happens. The service offered by the provider typically
includes a software application or hardware appliance that can replicate data and virtual
machines to a private or public cloud operated by the provider.
In managed DRaaS, the provider is responsible for the failover process, ensuring users
are redirected from the primary environment to the remote environment. DRaaS
providers also monitor disaster recovery operations and help customers recover
systems and resume normal operation. In other forms of DRaaS, your organization will
need to assume responsibility for some of these tasks.
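At its core, the failover decision works as sketched below; the health check and site descriptions are placeholders rather than any vendor's API:

def is_healthy(site):
    return site.get("up", False)      # placeholder for a real health check

def route(primary, recovery_site):
    # Failover decision: keep users on the primary while it responds,
    # otherwise redirect them to the provider's replica environment.
    target = primary if is_healthy(primary) else recovery_site
    print(f"routing users to {target['name']}")
    return target

primary  = {"name": "on-prem-dc", "up": False}            # disaster: site down
recovery = {"name": "provider-cloud-replica", "up": True}
route(primary, recovery)                                   # -> replica site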

Hosted DRaaS is especially useful for small businesses that lack in-house experts to
design and execute disaster recovery plans. The ability to outsource infrastructure is
another benefit for smaller organizations, because it avoids the high cost of equipment
needed to run a disaster recovery site.

Features :-

The following are key considerations when selecting a DRaaS provider for your
organization.

Reliability
In the early days of DRaaS, there were concerns about the resources available to the
DRaaS provider, and its ability to service a certain number of customers in case of a
widespread regional disaster.

Today, most DRaaS services are based on public cloud providers, which have virtually unlimited capacity. At the same time, even public clouds have outages, and it is important to understand what happens if, when disaster strikes, the DRaaS vendor is unable to provide services. Another, more likely scenario is that the DRaaS vendor performs its duties but does not meet its SLAs. Understand what your rights are under the contract and how your organization will react and recover in each situation.

Access
Work with your DRaaS provider to understand how users will access internal
applications in a crisis, and how VPN will work—whether it will be managed by the
provider or rerouted. If you use virtual desktop infrastructure (VDI), check the impact of
a failover event on user access, and determine who will manage the VDI during a
disaster.

If you have applications accessed over the Internet, coordinate with providers,
customers, partners, and users on how DNS will work in a crisis—whether it should be
transitioned to DNS managed by the provider, or kept with the same DNS (this also
depends on whether your DNS is hosted or self-managed). DNS is a mission-critical
service, and if it doesn't work smoothly during a disaster, systems will effectively be
offline even if they were transitioned successfully.
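For self-managed DNS, one way to repoint a record at the recovery site during failover is
a dynamic DNS update (RFC 2136). Below is a sketch using the dnspython library; the
zone, TSIG key, record, and server address are all hypothetical, and hosted DNS
providers expose their own APIs instead.

import dns.query
import dns.tsigkeyring
import dns.update

# Hypothetical TSIG key authorizing dynamic updates for the zone
keyring = dns.tsigkeyring.from_text({"failover-key.": "c2VjcmV0LWJhc2U2NC1rZXk="})

# Repoint app.example.com at the DR site with a short TTL
update = dns.update.Update("example.com", keyring=keyring)
update.replace("app", 60, "A", "203.0.113.50")   # a 60s TTL speeds later fail-back

response = dns.query.tcp(update, "198.51.100.1") # hypothetical authoritative server
print(response.rcode())                          # 0 (NOERROR) on success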

Assistance
Ask prospective DRaaS providers about the standard process and support they provide
during normal operations and during a crisis. Determine:

● What is the disaster recovery procedure
● What professional services the provider offers in time of disaster
● What responsibility lies with the provider vs. your organization
● What is the testing process—determine if you can run tests for backup and
recovery internally, and whether testing or disaster “drills” are conducted by the
provider
● After declaring a disaster, how long can the provider run your workloads before
recovering (to account for long term disaster scenarios)

K) Analytics-as-a-Service :-
Analytics-as-a-Service (AaaS) is a type of Cloud service. It provides access to data
analysis software and tools through the Cloud, rather than requiring investment in
on-premise software.

AaaS services are complete and customizable solutions for organizing, analyzing
and visualizing data. The objectives are the same as for on-premise solutions, namely,
to provide information that can be used to make better decisions.

These tools offer different data analysis methods and technologies such as Data
Mining, Predictive Analysis, and data visualization (Dataviz), as well as advanced
techniques such as Artificial Intelligence and Machine Learning.

One of the main advantages of AaaS solutions is that these services are based on a
subscription model. As with other types of Cloud services, the user pays only for the
resources he or she consumes. This typically saves money compared to purchasing
on-premise software and the accompanying license.
Analytics as a service also provides access to the benefits of data analysis without the
need for one's own Data Warehouse and a full team of Data Scientists. The
infrastructure is managed by the service provider, and some providers have their own
experts, allowing you to outsource the work completely.

Across all industries, more and more companies burdened with untapped data are
turning to analytics-as-a-service solutions. For companies facing a shortage of Data
Scientists and other experts, this is often the best alternative. With these services,
members of any team can access the benefits of data analysis without having to master
the theory and technologies required.

Even organizations that already have in-house expertise can use AaaS to relieve
their Data Scientists from the simplest analysis tasks. This allows experts to focus on
more complex analyses.

Indeed, there are hybrid forms of AaaS that allow you to combine your existing
infrastructure with Cloud services. In this case, only part of the data analysis will be
outsourced via the cloud.

However, analytics as a service may not be suitable for all companies. It is essential to
identify and define your needs, so that you can choose a service that meets those
needs without offering unnecessary functionality.

L) Backup-as-a-Service :-
Online backup service, also known as cloud backup or backup as a service (BaaS), is a
method of offsite data storage in which files, folders, or the entire contents of a hard
drive are regularly backed up by a service vendor to a remote secure cloud-based data
repository over a network connection. The purpose of online backup is simple and
straightforward: to protect the information – whether it's business data or personal –
from the risk of loss associated with user error, hacking, or any other kind of
technological disaster.
Instead of performing backup with a centralized, on-premises IT department, BaaS
connects systems to a private, public, or hybrid cloud managed by the outside provider.

Backup as a service is easier to manage than other offsite services. Instead of worrying
about rotating and managing tapes or hard disks at an offsite location, data storage
administrators can offload maintenance and management to the provider.

How Does Backup as a Service Work?

In employing backup as a service, the first step is to purchase and sign up for the
service. Next, you select the services you want to back up. To back up Microsoft Office
365, for example, you would select Exchange Online, SharePoint Online, or OneDrive
for Business.

You make those selections only once. After the initial setup, changes to data you've
selected, as well as new data added to the services you've selected, are backed up
automatically and, with most online backup services, almost instantly.

Benefits of Backup as a Service

Backup as a service offers many benefits, including:

● Convenience: The convenience offered by BaaS solutions is indisputable.
BaaS is automated — once it's set up, information is saved automatically as
it streams in. You don't have to proactively save, label, and track information.
Rather, the convenience of BaaS allows you to concentrate on your work
without worrying about data loss.
● Safety: Because your data is stored in the BaaS, you are not subject to the
typical threats of hackers, natural disasters, and user error. In fact, data
that is stored in the BaaS is encrypted, which minimizes the risks your
data can incur.
● Ease of recovery: Due to multiple levels of redundancy, if data is lost or
deleted (most frequently through individual user error or deletion), backups
are available and easily located. Multiple levels of redundancy means that
your BaaS stores multiple copies of your data in locations independent of
each other. The more levels you have stored the better, because each level
ensures that your data is safeguarded against loss as much as possible,
allowing you to access a backed-up version of your data if it ever gets lost.
● Affordability: BaaS can be less expensive than the cost of tape drives,
servers, or other hardware and software elements necessary to perform
backup; the media on which the backups are stored; the transportation of
media to a remote location for safekeeping; and the IT labor required to
manage and troubleshoot backup systems.
WHAT IS OPENSTACK ?

OpenStack is an open-standard and free platform for cloud computing. Mostly, it is deployed as IaaS (Infrastructure-as-a-
Service) in both private and public clouds, where virtual servers and other types of resources are made available to users.
The platform combines interrelated components that control diverse, multi-vendor pools of processing, storage, and
networking resources throughout the data center. Users manage it through command-line tools, web services, and a
web-based dashboard.

ARCHITECTURE OF OPENSTACK

OpenStack contains a modular architecture along with several code names for the components.

(1) Nova (Compute)

Nova supports building virtual machines and bare-metal servers, and has limited support for system containers. It runs as
a set of daemons on top of existing Linux servers to provide that service. This component is written in Python. It uses
several external Python libraries such as SQLAlchemy (SQL toolkit and object-relational mapper), Kombu (AMQP messaging
framework), and Eventlet (concurrent networking library).
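As a minimal sketch of Nova in action, the snippet below boots a virtual machine through the openstacksdk Python library.
The cloud profile, image, flavor, and network names are assumptions for illustration.

import openstack

conn = openstack.connect(cloud="mycloud")        # profile from clouds.yaml (assumed)

image = conn.compute.find_image("ubuntu-22.04")  # hypothetical image name
flavor = conn.compute.find_flavor("m1.small")    # hypothetical flavor name
network = conn.network.find_network("private")   # hypothetical network name

server = conn.compute.create_server(
    name="demo-vm",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)    # block until the VM is ACTIVE
print(server.status)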

(2) Neutron (Networking)

It provides network connectivity as a service between interface devices (such as vNICs) that are managed by other
OpenStack services (such as Nova). It implements the OpenStack Networking API and handles all networking facets of
the VNI (Virtual Networking Infrastructure), as well as the access-layer aspects of the PNI (Physical Networking
Infrastructure), in an OpenStack platform. OpenStack Networking allows projects to build advanced virtual network
topologies, which can include services such as a VPN (Virtual Private Network) and a firewall.
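A small sketch of the networking-as-a-service idea, again using openstacksdk: create a virtual network and attach a subnet
to it. The names and the address range are illustrative.

import openstack

conn = openstack.connect(cloud="mycloud")        # hypothetical cloud profile

net = conn.network.create_network(name="demo-net")
subnet = conn.network.create_subnet(
    network_id=net.id,
    name="demo-subnet",
    ip_version=4,
    cidr="10.0.0.0/24",                          # illustrative address range
)
print(net.id, subnet.cidr)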

(3) Keystone (Identity)

Keystone is the OpenStack service that offers shared multi-tenant authorization, service discovery, and API client
authentication by implementing the OpenStack Identity API. In effect, it is the common authentication system for the cloud
operating system. It supports standard username-and-password credentials as well as token-based logins.
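Token issuance through the Identity API v3 can be exercised directly over HTTP. Below is a minimal sketch using the
requests library; the endpoint, user, project, and password are placeholders.

import requests

KEYSTONE = "https://keystone.example.com:5000/v3"    # hypothetical endpoint

body = {
    "auth": {
        "identity": {
            "methods": ["password"],
            "password": {
                "user": {
                    "name": "demo",                   # placeholder credentials
                    "domain": {"id": "default"},
                    "password": "secret",
                }
            },
        },
        "scope": {"project": {"name": "demo-project", "domain": {"id": "default"}}},
    }
}

resp = requests.post(f"{KEYSTONE}/auth/tokens", json=body)
resp.raise_for_status()
token = resp.headers["X-Subject-Token"]   # Keystone returns the token in a header
print(token[:16], "...")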

(4) Horizon (Dashboard)

Horizon is the canonical implementation of the OpenStack Dashboard, which offers a web-based UI to OpenStack services
such as Keystone, Swift, and Nova. The Dashboard ships with a few central dashboards, such as a Settings Dashboard, a
System Dashboard, and a User Dashboard, which cover core support. The Horizon application ships with a set of API
abstractions for the core OpenStack projects in order to provide a stable and consistent collection of reusable methods for
developers. With these abstractions, developers working on Horizon do not need to be intimately familiar with the APIs of
every OpenStack project.

(5) Heat (Orchestration)

Heat is a service for orchestrating multiple composite cloud applications from templates, through both a
'CloudFormation'-compatible Query API and an OpenStack-native REST API.
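As a hedged sketch of template-driven orchestration, the snippet below writes a minimal HOT (Heat Orchestration
Template) to disk and submits it through openstacksdk's cloud layer. The cloud profile, image, flavor, and network names
are assumptions.

import openstack

conn = openstack.connect(cloud="mycloud")    # hypothetical cloud profile

# Minimal HOT template declaring a single Nova server (names are assumptions)
hot_template = """
heat_template_version: 2018-08-31
resources:
  my_server:
    type: OS::Nova::Server
    properties:
      image: ubuntu-22.04
      flavor: m1.small
      networks:
        - network: private
"""

with open("server.yaml", "w") as f:
    f.write(hot_template)

# Submit the template to Heat and wait for the stack to be created
stack = conn.create_stack("demo-stack", template_file="server.yaml", wait=True)
print(stack.id)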

DEPLOYMENT MODELS OF OPENSTACK

(a) OpenStack-based Public Cloud
A vendor provides a public cloud computing system based on the OpenStack project.
(b) On-premises distribution
In this model, a customer downloads and installs an OpenStack distribution in their internal network.
(c) Hosted OpenStack Private Cloud
A vendor hosts an OpenStack-based private cloud: including the underlying hardware and the OpenStack software.
(d) OpenStack-as-a-Service
A vendor hosts OpenStack management software (without any hardware) as a service. Customers sign up for the
service and pair it with their internal servers, storage and networks to get a fully operational private cloud.
(e) Appliance-based OpenStack
Nebula was a vendor that sold appliances that could be plugged into a network which spawned an OpenStack
deployment.

FEATURES / CHARACTERISTICS / ADVANTAGES OF OPENSTACK

(a) Compatibility & Portability. OpenStack is agile and easy to deploy; it supports both private and public clouds.
OpenStack APIs are compatible with Amazon Web Services, so applications written for AWS can run without rewriting.
This compatibility also allows applications and storage to move between private clouds and public cloud providers.
(b) Security. OpenStack's robust security system supports multiple forms of identification.
(c) Management & Visibility. The open source cloud's Horizon dashboard gives administrators an overview of their
cloud environment including resources and instance pools.
(d) Cloud Storage. OpenStack offers unlimited storage pools and supports block I/O from a variety of vendors, as well
as object file storage. Its built-in storage management automatically recovers failed drives or nodes. To avoid the
effects of drive failures, users can take advantage of pre-emptive drive checking. Additionally, OpenStack's scaling
capabilities enable users to add servers and storage elastically.
(e) Support for Big Data. Users can run Hadoop apps for big data analytics, and store web pages, media files, and standard block I/O.

WHAT IS EUCALYPTUS IN CLOUD COMPUTING ?

Eucalyptus in cloud computing is an open-source software platform for implementing IaaS (Infrastructure-as-a-Service) in a
hybrid or private cloud computing environment. Eucalyptus pools together existing virtualized infrastructure to create cloud
resources for storage as a service, network as a service, and infrastructure as a service. Eucalyptus is short for Elastic
Utility Computing Architecture for Linking Your Programs To Useful Systems.

ARCHITECTURE OF EUCALYPTUS

 The Cloud Controller (CLC) is a Java program that offers EC2-compatible interfaces, as well as a web interface to the
outside world (see the sketch after this list). In addition to handling incoming requests, the CLC acts as the administrative
interface for cloud management and performs high-level resource scheduling and system accounting. The CLC accepts user
API requests from command-line interfaces like euca2ools or GUI-based tools like the Eucalyptus User Console and manages
the underlying compute, storage, and network resources. Only one CLC can exist per cloud, and it handles authentication,
accounting, reporting, and quota management.
 Walrus, also written in Java, is the Eucalyptus equivalent of the AWS Simple Storage Service (S3). Walrus offers persistent
storage to all of the virtual machines in the Eucalyptus cloud and can be used as a simple HTTP put/get storage-as-a-service
solution. There are no data type restrictions for Walrus; it can contain images (i.e., the building blocks used to launch virtual
machines), volume snapshots (i.e., point-in-time copies), and application data. Only one Walrus can exist per cloud.
 The Cluster Controller (CC) is written in C and acts as the front end for a cluster within a Eucalyptus cloud and communicates
with the Storage Controller and Node Controller. It manages instance (i.e., virtual machines) execution and Service Level
Agreements (SLAs) per cluster.
 The Storage Controller (SC) is written in Java and is the Eucalyptus equivalent of AWS EBS. It communicates with the
Cluster Controller and Node Controller and manages Eucalyptus block volumes and snapshots for the instances within its
specific cluster. If an instance needs to write persistent data to storage outside of the cluster, it must write to Walrus, which
is available to any instance in any cluster.
 The Node Controller (NC) is written in C and hosts the virtual machine instances and manages the virtual network endpoints.
It downloads and caches images from Walrus as well as creates and caches instances. While there is no theoretical limit to
the number of Node Controllers per cluster, performance limits do exist.
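Because the CLC exposes EC2-compatible interfaces, a standard AWS SDK can talk to a Eucalyptus cloud simply by
overriding the endpoint. Below is a minimal sketch using boto3; the endpoint URL, credentials, and region are hypothetical.

import boto3

ec2 = boto3.client(
    "ec2",
    endpoint_url="https://clc.example.internal:8773/",  # hypothetical CLC endpoint
    aws_access_key_id="EUCA_ACCESS_KEY",                # Eucalyptus-issued credentials
    aws_secret_access_key="EUCA_SECRET_KEY",
    region_name="eucalyptus",                           # placeholder region
)

# List the virtual machine instances the cloud is running
for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"], instance["State"]["Name"])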
FEATURES / CHARACTERISTICS / ADVANTAGES OF EUCALYPTUS

 Supports both Linux and Windows virtual machines (VMs).


 Application program interface (API) compatible with Amazon EC2.
 Compatible with Amazon Web Services (AWS) and Simple Storage Service (S3).
 Works with multiple hypervisors including VMware, Xen and KVM.
 Can be installed and deployed from source code or from DEB and RPM packages.
 Internal process communications are secured through SOAP and WS-Security.
 Multiple clusters can be virtualized as a single cloud.
 Administrative features such as user and group management and reports.
WHY IS SECURITY IMPORTANT IN CLOUD COMPUTING ?

Security in cloud computing is a major concern. Data in the cloud should be stored in encrypted form. To restrict clients from
accessing the shared data directly, proxy and brokerage services should be employed. Before deploying a particular resource
to the cloud, one should analyze several aspects of the resource, such as :-

 Select resource that needs to move to the cloud and analyze its sensitivity to risk.
 Cloud service models (IaaS, PaaS, SaaS) require the customer to be responsible for security at different levels of service.
 Consider the cloud type to be used such as public, private, community or hybrid.
 Understand the cloud service provider's system about data storage and its transfer into and out of the cloud.

The risk in cloud deployment mainly depends upon the service models and cloud types.

UNDERSTANDING THE SECURITY OF A CLOUD

Security Boundaries

A particular service model defines the boundary between the responsibilities of the service provider and the customer. The
Cloud Security Alliance (CSA) stack model defines the boundaries between each service model and shows how different
functional units relate to each other. [Diagram: the CSA stack model.]

Key Points to CSA Model


 IaaS is the most basic level of service, with PaaS and SaaS as the next two levels above it.
 Moving upwards, each service inherits the capabilities and security concerns of the model beneath it.
 IaaS provides the infrastructure, PaaS provides the platform development environment, and SaaS provides the operating
environment.
 IaaS has the least integrated functionality and integrated security, while SaaS has the most.
 This model clearly describes the security boundaries at which the cloud service provider's responsibilities end and the
customer's responsibilities begin.
 Any security mechanism below the security boundary must be built into the system and maintained by the customer.

UNDERSTANDING DATA SECURITY

Since all data is transferred over the Internet, data security is a major concern in the cloud. Key mechanisms for protecting
data include :-

 Access Control
 Auditing
 Authentication
 Authorization

ISOLATED ACCESS TO DATA

Since data stored in the cloud can be accessed from anywhere, we must have a mechanism to isolate data and protect it from
the client's direct access. Brokered Cloud Storage Access is an approach for isolating storage in the cloud. In this approach,
two services are created :-

 A broker with full access to storage but no access to the client.
 A proxy with no access to storage but access to both the client and the broker.

WORKING OF BROKERED CLOUD STORAGE ACCESS SYSTEM

When the client issues a request to access data (see the sketch after this list) :-

 The client data request goes to the external service interface of proxy.
 The proxy forwards the request to the broker.
 The broker requests the data from cloud storage system.
 The cloud storage system returns the data to the broker.
 The broker returns the data to the proxy.
 Finally, the proxy sends the data to the client.
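The request flow above can be summarized in a few lines of illustrative Python. The class and method names are made up
for this sketch, and a dictionary stands in for the cloud storage system.

class Broker:
    """Full access to storage, but never talks to clients directly."""
    def __init__(self, storage):
        self.storage = storage

    def fetch(self, key):
        return self.storage[key]            # broker reads from cloud storage


class Proxy:
    """Talks to clients, but holds no storage credentials of its own."""
    def __init__(self, broker):
        self.broker = broker

    def handle_request(self, key):
        return self.broker.fetch(key)       # proxy forwards the request


storage = {"report.txt": b"quarterly figures"}   # stand-in for cloud storage
proxy = Proxy(Broker(storage))
print(proxy.handle_request("report.txt"))        # the client only ever sees the proxy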
Encryption

Encryption helps to protect data from being compromised. It protects data that is being transferred as well as data stored in
the cloud. Although encryption helps to protect data from any unauthorized access, it does not prevent data loss.
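As a minimal sketch of encrypting data before it leaves for the cloud, the snippet below uses the Python cryptography
library's Fernet construction (authenticated symmetric encryption). In practice, keys would be managed in a key
management service rather than generated in application code.

from cryptography.fernet import Fernet

key = Fernet.generate_key()     # in production, store and manage keys in a KMS
f = Fernet(key)

ciphertext = f.encrypt(b"sensitive customer record")   # encrypt before upload
print(f.decrypt(ciphertext))                           # decrypt after download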

WHAT IS CLOUD SECURITY ARCHITECTURE ?

 The hardware and technology used to safeguard data, workloads, and systems on cloud platforms is called Cloud
Security Architecture.
 Developing a cloud security architecture plan should start with the blueprint and design process, and it should be built
into cloud platforms from the ground up.
 Cloud security architecture is a framework that includes all of the technology and software required to safeguard
information, data, and applications handled in or through the cloud.
 Public clouds, private clouds, and hybrid clouds are some of the cloud computing frameworks. All clouds must be
very secure to protect sensitive data and information.

IMPORTANCE OF CLOUD SECURITY ARCHITECTURE IN A BUSINESS

 As a company expands, it will require more secure systems to process its workload. Cloud networks provide many
benefits, but they also have a lot of security concerns.
 If private data is accessed by an unauthorized user, it may be a hazardous situation for the company. Hence, cloud
security architecture is critical.
 Cloud security architecture can close security gaps that go undiscovered in traditional point-of-sale (POS) systems.
In addition, cloud security design eliminates security network redundancy difficulties.
 It also aids in the organization of security measures while ensuring their reliability throughout data processing. A
suitable cloud security architecture can also handle complex security issues successfully.

WHAT ARE THE THREATS TO A CLOUD SECURITY ARCHITECTURE ?

Insider Risks
Insider risks include internal employees with access to systems and data and administrators from cloud service providers
(CSPs). When you sign up for CSP services, you are effectively handing your data and workloads to a team of people in
charge of keeping the CSP architecture up to date.

Availability of Data
Another factor to examine is whether or not data is available to government agencies. Security experts are paying more
attention to the rules, regulations, and real-world examples that show whether a government may access data in a private or
public cloud via court orders or other ways.

DoS Attacks
DoS attacks are a hot topic right now. Typical temporary denial-of-service attacks, including distributed denial-of-service
(DDoS) attacks, involve bombarding a system with requests until it crashes. Security perimeters can deflect these attacks by
using network compliance standards to block repeated requests. While working to restore the system, CSPs can also move
workloads and traffic to other resources. Permanent DoS attacks are more damaging, as they frequently cause firmware
damage, rendering a server unbootable. In this situation, a technician must manually reload the firmware and rebuild the
system from the ground up, which might take days or weeks.
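Blocking repeated requests, as described above, is often implemented with a rate limiter at the security perimeter. Below is
an illustrative sliding-window limiter in Python; the window size and threshold are arbitrary example values.

import time
from collections import deque

WINDOW_SECONDS = 10
MAX_REQUESTS = 100                  # per-client threshold (example value)
recent = {}                         # client IP -> recent request timestamps

def allow_request(client_ip):
    now = time.time()
    q = recent.setdefault(client_ip, deque())
    while q and now - q[0] > WINDOW_SECONDS:   # discard timestamps outside the window
        q.popleft()
    if len(q) >= MAX_REQUESTS:
        return False                           # flood detected: drop the request
    q.append(now)
    return True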
Cloud-connected Edge Systems
The cloud edge can refer to cloud-connected edge systems, but it also relates to server architecture that isn't directly controlled
by the CSP. Because global CSPs are unable to develop and operate facilities in every corner of the globe, they rely on
partners to provide services to smaller, geographically isolated, or rural areas. As a result, many CSPs lack complete control
over monitoring and ensuring physical box integrity for the hardware, as well as physical attack defenses such as shutting off
USB port access.

Access to Public Cloud Products


Customers' ability to assess public cloud products is influenced by their level of control. Users are concerned about shifting
sensitive workloads to the public cloud from the customer's standpoint. Big cloud providers, on the other hand, are often far
more equipped and have a lot greater degree of knowledge in cloud security than the ordinary private cloud user. Customers,
even if their security tools aren't too advanced, find it reassuring to have complete control over their most sensitive data.

Password Strength
Even with the most powerful cloud security architecture in the world, a server cannot compensate for a weak password.
Passwords are one of the most prevalent attack vectors, and the hardware, firmware, and software safeguards that cloud
security architects focus on cannot protect an account whose password is easily guessed.

THE AAA (AUTHENTICATION, AUTHORIZATION & ACCOUNTING) FRAMEWORK MODEL

AAA is a standards-based framework used to manage who is allowed to access network resources (authentication), what
they are allowed to do (authorization), and to record the actions taken while doing so (accounting). Put another way, AAA is
a structural framework used to control access to computer resources, enforce policies, conduct audits, provide vital data for
service billing, and perform other network administration and security tasks. Its primary purpose is to grant specific,
authorized users access to network and software application resources. Authorization is the process of granting or denying
a specific user access to a computer network and its resources. Users can be given several authorization levels, restricting
their access to the network and its resources. Accounting refers to monitoring and documenting user activities on a
computer network.
(a) Authentication

 Authentication provides a method of identifying a user, typically by having the user enter a valid username and
password before access to the network is granted (see the sketch after this list).
 Authentication is based on each user having a unique set of login credentials for gaining network access.
 The AAA server compares a user's authentication credentials with other user credentials stored in a database; in this
case, that database is Active Directory.
 If the user's login credentials match, the user is granted access to the network. If the credentials don't match,
authentication fails and network access is denied.
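Below is a minimal sketch of the credential check described in this list, using only Python's standard library. A real AAA
server would query a directory such as Active Directory rather than a local table, and the salt and iteration count here are
example choices.

import hashlib
import hmac
import os

def hash_password(password, salt=None):
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def authenticate(password, salt, stored_digest):
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, stored_digest)   # constant-time comparison

salt, stored = hash_password("correct horse battery staple")
print(authenticate("correct horse battery staple", salt, stored))  # True: access granted
print(authenticate("wrong guess", salt, stored))                   # False: access denied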

(b) Authorization

 Following authentication, a user must gain authorization for doing certain tasks. After logging in to a system, for
instance, the user may try to issue commands.
 The authorization process determines whether the user has the authority to issue such commands.
 Simply put, authorization is the process of enforcing policies, i.e., determining what types or qualities of activities,
resources, or services a user is permitted (see the sketch after this list).
 Usually authorization occurs within the context of authentication. After you have authenticated a user, they may be
authorized for different types of access or activity.
 As it relates to network authentication via RADIUS and 802.1x, authorization can be used to determine what VLAN,
Access Control List (ACL), or user role that the user belongs to.
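The policy-enforcement idea sketched below reduces authorization to a role-to-permission lookup; the roles, actions, and
mapping are illustrative rather than a real ACL format.

PERMISSIONS = {
    "admin":    {"read", "write", "configure"},
    "operator": {"read", "write"},
    "viewer":   {"read"},
}

def authorize(role, action):
    # Policy check: is this action permitted for the user's role?
    return action in PERMISSIONS.get(role, set())

print(authorize("viewer", "write"))    # False: policy denies the action
print(authorize("operator", "write"))  # True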

(c) Accounting

 The final piece in the AAA framework is accounting, which monitors the resources a user consumes during network
access (see the sketch after this list).
 This can include the amount of system time or the amount of data sent and received during a session. Accounting is
carried out by logging session statistics and usage information.
 It is used for authorization control, billing, trend analysis, resource utilization, and planning for the data capacity
required for business operations.
 ClearPass Policy Manager functions as the accounting server and receives accounting information about the user
from the Network Access Server (NAS).
 The NAS must be configured to use ClearPass Policy Manager as an accounting server, and it is up to the NAS to
provide accurate accounting information to ClearPass Policy Manager.
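Finally, a sketch of the accounting idea: log one usage record per session so it can later feed billing, trend analysis, and
capacity planning. The record format and file name are illustrative.

import json
import time

def record_session(user, bytes_in, bytes_out, start, end):
    # One accounting record per session, as an accounting server might store it
    entry = {
        "user": user,
        "session_seconds": round(end - start, 1),
        "bytes_in": bytes_in,
        "bytes_out": bytes_out,
    }
    with open("accounting.log", "a") as log:
        log.write(json.dumps(entry) + "\n")

start = time.time()
# ... user session runs ...
record_session("alice", bytes_in=10_240, bytes_out=2_048, start=start, end=time.time())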
