
UNIT - 1

Introduction to Cloud Computing

Cloud Computing is the delivery of computing services such as servers, storage, databases, networking, software,
analytics, intelligence, and more, over the internet ("the cloud").
Cloud Computing provides an alternative to the on-premises datacentre. With an on-premises datacentre, we have to
manage everything, such as purchasing and installing hardware, virtualization, installing the operating system, and
any other required applications, setting up the network, configuring the firewall, and setting up storage for data.
After doing all the set-up, we become responsible for maintaining it through its entire lifecycle.

The cloud environment provides an easily accessible online portal that makes it easy for the user to manage
compute, storage, network, and application resources.

The cloud refers to servers that are accessed over the internet.

It means storing, managing, and accessing data and programs on remote servers hosted on the internet
instead of on a computer's hard drive.
Cloud computing is the on-demand availability of computer system resources.
We store, manage, and process data on remote servers.

❖ Service providers: -
1. Google Cloud.
2. AWS (Amazon Web Services).
3. Microsoft Azure.
4. IBM Cloud.
5. Alibaba Cloud.

❖ Types of cloud: -
1. Public (Accessible to all).
2. Private (Services accessible within a specific organization).
3. Hybrid (Public + Private cloud features).
4. Community (Services accessible by a group of organizations).

Cloud Computing Technologies

❖ A list of cloud computing technologies is given below -


1. Virtualization: -
Virtualization is the process of creating a virtual environment to run multiple applications and operating
systems on the same server. The virtual environment can be anything, such as a single instance or a combination
of many operating systems, storage devices, network application servers, and other environments.
The concept of Virtualization in cloud computing increases the use of virtual machines.
❖ Types of Virtualization: -
1. Hardware virtualization
2. Server virtualization
3. Storage virtualization
4. Operating system virtualization
5. Data Virtualization
2. Service-Oriented Architecture (SOA): -
Service-Oriented Architecture (SOA) allows organizations to access on-demand cloud-based computing
solutions as their business needs change. It can work with or without cloud computing. The
advantages of using SOA are that it is easy to maintain, platform independent, and highly scalable.
Service provider and service consumer are the two major roles within SOA.
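As a concrete illustration of the two roles, here is a minimal sketch in Python (standard library only). It runs a hypothetical "quote" service as the provider and calls it as the consumer; the endpoint path, port, and payload are assumptions for illustration, not part of any real SOA product.

```python
# Minimal SOA sketch: one service provider and one service consumer.
# The /quote endpoint, port, and payload are illustrative assumptions.
import json
import threading
import time
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen


class QuoteService(BaseHTTPRequestHandler):
    """Service provider: exposes a platform-independent HTTP/JSON contract."""

    def do_GET(self):
        if self.path == "/quote":
            body = json.dumps({"symbol": "ABC", "price": 42.0}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep the demo output quiet


if __name__ == "__main__":
    server = HTTPServer(("localhost", 8080), QuoteService)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    time.sleep(0.2)  # give the provider a moment to start listening

    # Service consumer: only the published contract (URL + JSON shape) is known.
    with urlopen("http://localhost:8080/quote") as resp:
        print(json.load(resp))

    server.shutdown()
```

The consumer depends only on the published contract, not on how the provider is implemented, which is what makes the service easy to maintain and platform independent.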
❖ Applications of Service-Oriented Architecture: -
• It is used in the healthcare industry.
• It is used to create many mobile applications and games.
• In the air force, SOA infrastructure is used to deploy situational awareness systems.

3. Grid Computing: -
Grid computing is also known as distributed computing. It is a processor architecture that combines
computing resources from multiple locations to achieve a common goal. In grid computing, the grid is
connected by parallel nodes to form a computer cluster. These computer clusters vary in size and can
run on any operating system.
❖ Grid computing contains the following three types of machines -
1. Control Node: A group of servers that administers the whole network.
2. Provider: A computer that contributes its resources to the network resource pool.
3. User: A computer that uses the resources on the network.
Grid computing is mainly used in ATMs, back-end infrastructures, and marketing research. A toy sketch of the three roles follows below.
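The sketch below imitates the three roles with Python's standard concurrent.futures module; the "providers" are local worker processes standing in for separate machines (an assumption), whereas a real grid would coordinate distinct computers through middleware such as Globus.

```python
# Toy grid sketch: a control node dispatches chunks of one job to provider
# processes and a user collects the combined result. Real grids use separate
# machines and middleware; local worker processes stand in for them here.
from concurrent.futures import ProcessPoolExecutor, as_completed


def provider_task(chunk):
    """Work done by one provider machine: sum its chunk of numbers."""
    return sum(chunk)


def control_node(job, providers=4):
    """Control node: partition the job and dispatch the pieces to providers."""
    size = max(1, len(job) // providers)
    chunks = [job[i:i + size] for i in range(0, len(job), size)]
    total = 0
    with ProcessPoolExecutor(max_workers=providers) as pool:
        futures = [pool.submit(provider_task, c) for c in chunks]
        for future in as_completed(futures):
            total += future.result()
    return total


if __name__ == "__main__":
    # The "user" submits a job to the grid and receives the result.
    print(control_node(list(range(1_000_000))))  # 499999500000
```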

4. Utility Computing: -
Utility computing is a trending IT service model. It provides on-demand computing resources
(computation, storage, and programming services via API) and infrastructure based on a pay-per-use method.
It minimizes the associated costs and maximizes the efficient use of resources. The advantages of utility
computing are that it reduces IT costs, provides greater flexibility, and is easier to manage.
Large organizations such as Google and Amazon have established their own utility services for computing,
storage, and applications.

History of Cloud Computing

Before cloud computing came into existence, client-server architecture was used, where all the data and control of
the client resided on the server side. If a user wanted to access some data, they first had to connect to the server
and only then would they get the appropriate access. This approach had many disadvantages. After client-server
computing, distributed computing came into existence; in this type of computing, all computers are networked together
so that users can share their resources when needed. It also has certain limitations. So, in order to
remove the limitations faced in distributed systems, cloud computing emerged.

❖ Evolution of cloud computing: -


1. Grid computing: -
1. Solving large problems with parallel computing.
2. Made mainstream by Globus Alliance.
2. Utility computing: -
1. Offering computing resources as a metered service.
2. Introduced in the late 1990s.
3. Software as a service: -
1. Network-based subscriptions to applications.
2. Gained momentum in 2001.
4. Cloud Computing: -
1. Next-generation internet computing.
2. Next-generation data centers.

❖ Advantages: -
• It is easier to take backups in the cloud.
• It allows easy and quick access to stored information anywhere and anytime.
• It allows us to access data via mobile.
• It reduces both hardware and software costs, and it is easily maintainable.
• One of the biggest advantages of cloud computing is data security.

❖ Disadvantages: -
• It requires a good internet connection.
• Users have limited control over the data.

Vision of Cloud Computing

Cloud computing means storing and accessing data and programs on remote servers hosted on the internet
instead of a computer's hard drive or local server. Cloud computing is also referred to as internet-based computing.
These are following Vision of Cloud Computing: -
1. Cloud computing provides the facility to provision virtual hardware, runtime environments and
services to anyone willing to pay for them.
2. All these things can be used for as long as they are needed by the user.
3. The whole collection of computing systems is transformed into a collection of utilities, which can be
provisioned and composed together to deploy systems in hours rather than days, with no
maintenance costs.
4. The long-term vision of cloud computing is that IT services are traded as utilities in an open
market without technological and legal barriers.
5. In the future, we can imagine that it will be possible to find the solution that matches our
requirements by simply entering our request in a global digital market that trades in cloud
computing services.
6. The existence of such a market will enable the automation of the discovery process and its
integration into existing software systems.
7. The existence of a global platform for trading cloud services will also help service providers
to potentially increase their revenue.
8. A cloud provider can also become a consumer of a competitor's service in order to fulfill its
promises to customers.

Features of Cloud Computing

Cloud computing has many features that make it one of the fastest growing industries at present. The flexibility
offered by cloud services in the form of their growing set of tools and technologies has accelerated its deployment
across industries.

1. Resources Pooling: -
Resource pooling means that a cloud service provider can share resources among multiple clients, providing
each with a different set of services according to their needs. The process of allocating resources
in real time does not conflict with the client's experience.

2. On-Demand Self-Service: -
This enables the client to continuously monitor server uptime, capabilities and allocated network storage. This
is a fundamental feature of cloud computing, and a customer can also control the computing capabilities
according to their needs.

3. Easy Maintenance: -
Servers are easily maintained, and downtime is minimal or sometimes zero.

4. Scalability And Rapid Elasticity: -


A key feature and advantage of cloud computing is its rapid scalability and elasticity. Workloads with fluctuating
demand can be run very cost-effectively because resources can be scaled up and down as needed, as the sketch below illustrates.
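The sketch below shows the idea behind rapid elasticity as a simple threshold-based autoscaling loop. The thresholds, metric source (get_average_cpu), and the scale_out/scale_in calls are all hypothetical stand-ins for a real provider's monitoring and autoscaling APIs, not any specific product.

```python
# Minimal threshold-based autoscaling sketch (illustrative only).
# get_average_cpu(), scale_out(), and scale_in() are hypothetical stand-ins
# for a real cloud provider's monitoring and provisioning APIs.
import random
import time


def get_average_cpu(instances):
    """Pretend metric source: average CPU utilisation across instances."""
    return random.uniform(10, 90)


def scale_out(instances):
    print(f"scaling out: {instances} -> {instances + 1}")
    return instances + 1


def scale_in(instances):
    print(f"scaling in: {instances} -> {instances - 1}")
    return instances - 1


def autoscale(min_instances=1, max_instances=10, high=70, low=30, ticks=5):
    instances = min_instances
    for _ in range(ticks):
        cpu = get_average_cpu(instances)
        if cpu > high and instances < max_instances:
            instances = scale_out(instances)
        elif cpu < low and instances > min_instances:
            instances = scale_in(instances)
        time.sleep(0.1)  # a real loop would poll every minute or so
    return instances


if __name__ == "__main__":
    autoscale()
```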

5. Economical: -
In cloud computing, clients need to pay only for the space and services they use. There are no hidden or
additional charges. The service is economical, and more often than not, some space is
allocated for free.

6. Measured And Reporting Service: -


The measurement and reporting service is helpful for both cloud providers and their customers. It helps in
monitoring billing and ensuring optimum utilization of resources.

7. Security: -
Cloud services make a copy of the stored data to prevent any kind of data loss. If one server loses data by any
chance, the copied version is restored from the other server. This feature comes in handy when multiple users
are working on a particular file in real-time, and one file suddenly gets corrupted.

8. Automation: -
The ability of cloud computing to automatically install, configure and maintain a cloud service is known as
automation in cloud computing. This requires the installation and deployment of virtual machines, servers, and
large storage.

9. Resilience: -
The resilience of a cloud is measured by how fast its servers, databases and network systems restart and recover
from any loss or damage.

10. Large Network Access: -


The client can access cloud data or transfer data to the cloud from any location with a device and internet
connection. These capabilities are available everywhere in the organization and are achieved with the help of
the internet.

Components of Cloud Computing
Architecture

❖ There are the following components of cloud computing architecture –

1. Client Infrastructure: -
Client Infrastructure is a Front-end component. It provides GUI (Graphical User Interface) to interact with
the cloud.

2. Application: -
The application may be any software or platform that a client wants to access.

3. Service: -
Cloud services manage which type of service you access according to the client's requirement.

4. Runtime Cloud: -
Runtime Cloud provides the execution and runtime environment to the virtual machines.

5. Storage: -
Storage is one of the most important components of cloud computing. It provides a huge amount of storage
capacity in the cloud to store and manage data.

6. Infrastructure: -
It provides services on the host level, application level, and network level.

7. Management: -
Management is used to manage components such as application, service, runtime cloud, storage,
infrastructure, and other security issues in the backend and establish coordination between them.

8. Security: -
Security is an in-built back-end component of cloud computing. It implements a security mechanism in the
back end.

9. Internet: -
The Internet is the medium through which the front end and back end interact and communicate with each
other.

Cloud Computing Challenges

❖ Most common challenges that are faced when dealing with cloud computing:
1. Data Security and Privacy: -
Security issues on the cloud include identity theft, data breaches, malware infections, and more,
which eventually decrease the trust among the users of your applications. This can in turn lead to a
potential loss in revenue as well as damage to reputation and stature. Also, cloud computing requires
sending and receiving huge amounts of data at high speed, and is therefore susceptible to data leaks.

2. Cost Management: -
Even though almost all cloud service providers have a "Pay As You Go" model, which reduces the overall
cost of the resources being used, there are times when huge costs are incurred by the enterprise
using cloud computing. If you turn on a service or cloud instance and forget to turn it off over
the weekend, or when it is not in current use, the cost will increase even though the resources
are not being used.

3. Multi-Cloud Environments: -
Most companies use hybrid cloud tactics, and close to 84% are dependent on multiple clouds.
This often ends up being difficult for the infrastructure team to manage. The process most
of the time ends up being highly complex for the IT team due to the differences between multiple cloud
providers.

4. Performance Challenges: -
If the performance of the cloud is not satisfactory, it can drive away users and decrease profits. Even a
little latency while loading an app or a web page can result in a huge drop in the percentage of users.
This latency can be a product of inefficient load balancing, which means that the server cannot
efficiently split the incoming traffic so as to provide the best user experience. A minimal round-robin
balancing sketch follows after this list.

5. Interoperability and Flexibility: -


There is a lack of flexibility in switching from one cloud to another due to the complexities involved.
Handling data movement and setting up security and networking from scratch also add to the issues
encountered when changing cloud solutions, thereby reducing flexibility.

6. High Dependence on Network: -


Cloud computing is only made possible by the availability of high-speed networks. Because data and
resources are exchanged over the network, the system is highly vulnerable in cases of limited
bandwidth or sudden outages.

7. Lack of Knowledge and Expertise: -


Due to its complex nature and the high demand for research, working with the cloud often ends up being
a highly tedious task. It requires immense knowledge and wide expertise on the subject. There is a need
for upskilling so these professionals can actively understand, manage and develop cloud-based
applications with minimum issues and maximum reliability.
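As referenced under the performance challenge above, the sketch below shows the simplest way a load balancer can split incoming traffic: round-robin assignment over a pool of servers. The server names are placeholders; production balancers also weigh server health and current load.

```python
# Minimal round-robin load balancer sketch; server names are placeholders.
from itertools import cycle


class RoundRobinBalancer:
    """Hands each incoming request to the next server in a fixed rotation."""

    def __init__(self, servers):
        self._servers = cycle(servers)

    def route(self, request_id):
        server = next(self._servers)
        return f"request {request_id} -> {server}"


if __name__ == "__main__":
    balancer = RoundRobinBalancer(["app-server-1", "app-server-2", "app-server-3"])
    for i in range(6):
        print(balancer.route(i))
```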

Cloud Migration

Cloud migration is the procedure of transferring applications, data, and other types of business components to
a cloud computing platform. The most common model is the transfer of applications and data from an on-premises,
local data center to a public cloud.
But a cloud migration can also entail transferring applications and data from one cloud environment
to another, a model called cloud-to-cloud migration.
❖ Pros of Cloud Migration: -
Some of the advantages of migrating to a cloud are as follows:
• Flexibility: If our apps face fluctuations in traffic, cloud infrastructure permits us to scale down
and up to meet the demand. Hence, we use only the resources we require.
• Scalability: The cloud facilitates the ability to enhance existing infrastructure. Therefore, applications
have room to grow without impacting work.
• Agility: Part of development is remaining elastic enough to respond to rapid changes
in technology resources.
• Productivity: Our cloud provider can handle the complexities of our infrastructure so we can
concentrate on productivity.
• Security: The cloud can facilitate better security than many other data centers by centrally storing data. Also,
most cloud providers offer some built-in features, including cross-enterprise visibility, periodic
updates, and security analytics.
• Profitability: The cloud pursues a pay-per-use approach. There is no requirement to pay extra
charges or to invest continually in training for, maintaining, building, and updating space for various
physical servers.
❖ Risk in cloud migration: -
1. Data loss.
2. Wasted costs.
3. Added latency.
4. Security.
5. Lack of visibility and control.
6. Incompatibility of the existing architecture.
7. No clear cloud migration strategy in place.
❖ Cloud Migration Strategies Types: -
1. Rehosting (lift-and-shift): -
The most common path is rehosting (or lift-and-shift), which works as it sounds. It takes our
application and drops it into our new hosting platform without changing the architecture or code of
the app.
2. Re-platforming: -
Re-platforming is also called "lift-tinker-and-shift". It includes making some cloud optimizations without
modifying our app's core architecture.
3. Re-factoring: -
It means rebuilding our applications from scratch to leverage cloud-native capabilities. A potential
disadvantage is vendor lock-in, as we are re-creating the application on the provider's cloud infrastructure.
It is the most expensive and time-consuming route, as one might expect.
4. Re-purchasing: -
It means replacing our existing applications with a new SaaS-based, cloud-native platform (such
as replacing a homegrown CRM with Salesforce).
Re-purchasing is the most cost-effective option when moving away from a highly customized legacy landscape
and minimizing the number of apps and services we have to handle.
5. Retiring: -
When we no longer find an application useful, we simply turn it off.
6. Re-visiting: -
Re-visiting means that some or all of our applications may need to remain in-house and be reassessed later.

Ethical issues in cloud computing

Ethical concerns in cloud computing arise mainly because:
1. control is relinquished to third-party services;
2. data is stored on multiple sites administered by several organizations; and
3. multiple services interoperate across the network.

Unauthorized access, data corruption, infrastructure failure, or unavailability are some of the risks related to
relinquishing the control to third party services; moreover, it is difficult to identify the source of the problem and the
entity causing it.

Ubiquitous and unlimited data sharing and storage among organizations test the self-determination of information,
the right or ability of individuals to exercise personal control over the collection, use and disclosure of their personal
data by others; this tests the confidence and trust in today’s evolving information society. Identity fraud and theft are
made possible by the unauthorized access to personal data in circulation and by new forms of dissemination through
social networks and they could also pose a danger to cloud computing.

Unwanted dependency on a cloud service provider, the so-called vendor lock-in, is a serious concern, and the current
standardization efforts at NIST attempt to address this problem. Another concern for users is a future in which only a
handful of companies dominate the market and dictate prices and policies.

Economics of Cloud Computing

The economics of cloud computing is based on the PAY AS YOU GO method. Users/customers have to pay
only for their usage of the cloud services, so the cloud is economically very convenient for all.

The economic model of the cloud is useful for developers in the following ways:
• The pay-as-you-go model offered by cloud providers.
• Scalable and simple.

❖ Cloud computing allows organizations to: -


• Reduce the capital costs of infrastructure.
• Remove the maintenance cost.
• Remove the administrative cost.

❖ What is Capital Cost?


It is the cost incurred in purchasing the infrastructure or assets that are important in the production of goods. It
takes a long time to generate profit.
There are three different pricing strategies introduced by cloud computing: Tiered Pricing, Per-
unit Pricing, and Subscription-based Pricing. These are explained below; a small numeric comparison follows the list:
1. Tiered Pricing: Cloud services are offered in various tiers. Each tier offers a fixed service
agreement at a specific cost. Amazon EC2 uses this kind of pricing.
2. Per-unit Pricing: This model is based on the unit-specific service concept. Data transfer and
memory allocation are charged per unit in this model. GoGrid uses this kind of pricing in
terms of RAM/hour.
3. Subscription-based Pricing: In this model, users pay a periodic subscription fee for the
usage of the software.
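To make the three strategies concrete, the sketch below prices some sample usage under each model; every rate and tier boundary is an invented illustrative number, not any actual provider's price list.

```python
# Illustrative pricing sketch; all rates and tiers are invented numbers.

def tiered_price(hours):
    """Tiered pricing: the whole usage is billed at the rate of its tier."""
    tiers = [(100, 0.10), (500, 0.08), (float("inf"), 0.06)]  # (upper bound, $/hour)
    for upper, rate in tiers:
        if hours <= upper:
            return hours * rate


def per_unit_price(ram_gb, hours, rate_per_gb_hour=0.02):
    """Per-unit pricing: pay for each RAM-hour actually consumed."""
    return ram_gb * hours * rate_per_gb_hour


def subscription_price(months, monthly_fee=29.0):
    """Subscription pricing: a flat periodic fee regardless of usage."""
    return months * monthly_fee


if __name__ == "__main__":
    print(f"Tiered (300 h):         ${tiered_price(300):.2f}")
    print(f"Per-unit (4 GB, 300 h): ${per_unit_price(4, 300):.2f}")
    print(f"Subscription (1 month): ${subscription_price(1):.2f}")
```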

❖ Cloud’s business impact and economics: -


Three ways cloud computing affects business:
1. Faster Communication.
2. Ease and Access.
3. Secure Collaboration.

❖ Some Important points: -


1. Flexibility.
2. Dedicated employees (Focused on gaining profit for business).
3. Cloud services help in saving time, money, and travel.
4. Reduce IT cost.

Future of Cloud Computing

Cloud computing has many features that make its future bright in almost all sectors of the world. But it will
not be alone: the Internet of Things (IoT) and Big Data will add more to it.

❖ Cloud with Operating System: -


Operating systems allow users to run programs and to store and retrieve data from one user session to the next.

❖ Cloud with Internet of Thing: -


1. Cloud-based location-tracking applications – Using cloud and location-tracking solutions, you will be
able to track not only packages you ship, but also stolen cars, lost luggage, misplaced cell phones, missing
pets, and more.
2. Cloud-based smart fabrics and paints – The ability to connect devices to the cloud from any place, at any
time, will open the door to a wide range of cutting-edge applications.
Paints being developed can change form based on environmental conditions; for example, paint on roads can
change color to indicate the presence of ice.
3. Cloud TV – TV viewers will not just watch shows on demand in their homes, in their cars, and on
airplanes; a new breed of projection devices will also turn any flat surface into a TV screen.
4. Cloud-based smart devices – The cloud's ability to provide internet access and processing at any time makes
such devices a reality. Using the cloud for communication, devices can coordinate their activities.

Cloud Networking

Cloud networking is a service in which a company's networking resources and procedures are hosted on a public or
private cloud.

❖ Why cloud networking is required and in-demand?


• It is in demand by many companies for its speedy and secure delivery, fast processing,
dependable transmission of information without any loss, and pocket-friendly setup.
• Web access can be expanded and bandwidth made more reliable by moving a couple of network
functions into the cloud.
• Workloads are shared between cloud environments using software-as-a-service applications.
• A Software-Defined Wide Area Network (SD-WAN) is a technology that uses networking switches
and routers to virtualize access, moving functions from hardware to software deployed on white-box devices.
• SD-WAN offers a standard load-balancing approach and combines all
stages of the network into the user experience.

❖ Advantages of Cloud Networking: -


1. On-Demand Self Service –
Cloud computing provides the required applications, services, and utilities to the client. With login
credentials, clients can begin to use them without any human interaction with the cloud service provider.
This includes storage and virtual machines.
2. High Scalability –
Resources can be provisioned on a large scale without any human intervention from the service provider.
3. Agility –
Resources are shared efficiently among customers, and provisioning is quick.
4. Multi-sharing –
Through distributed computing, different clients from multiple locations share the same resources through
the underlying infrastructure.
5. Low Cost –
It is very economical, and clients pay according to their usage.
6. Services in pay-per-use Model –
An Application Programming Interface (API) is given to clients to use resources and services and pay on a
per-service basis.
7. High availability and Reliability –
Servers are accessible at the right time without any delay or failure.
8. Maintenance –
It is user-friendly, as services are convenient to access from any location and do not require any
installation or setup.

Ubiquitous Computing

Ubiquitous computing is a term associated with the Internet of Things (IoT) and refers to the potential for connected
devices and their benefits to become commonplace.
It is the method of enhancing computer use by making many devices (services) available throughout the physical
environment while making them effectively invisible to the user.
Also called ambient computing or pervasive computing, ubiquitous computing can be described as the saturation of
work, living, and transportation spaces with devices that intercommunicate. These embedded systems would make
these settings and transportation methods considerably more enjoyable and convenient through contextual data
aggregation and application, seamless and intuitive access points, and fluid payment systems.
A prime example of a ubiquitous computing experience would be an autonomous vehicle that recognizes its
authorized passenger through smartphone proximity, docks and charges itself when needed, and handles toll,
emergency response, and fast-food payments itself by communicating with infrastructure.

❖ Tries to construct a universal computing environment (UCE) that conceals: -


1. Computing instructions.
2. Devices.
3. Resources.
4. Technology.

Invisible to users.
Computing everywhere.
Many embedded, wearable, handheld devices communicate transparently to provide different services to the users.
Devices mostly have low power and short-range wireless communication capabilities.
Devices utilize multiple on-board sensors to gather information about surrounding environments.

❖ Characteristics: -
1. Context-awareness.
2. Volatile interaction.
3. Interaction among applications is based on specific context.

❖ Challenges and requirements: -


1. Hardware.
2. Applications.
3. User Interfaces.
4. Networking.
5. Mobility.
6. Scalability.
7. Reliability.
8. Interoperability.
9. Resource Discovery.
10. Privacy and Security.

❖ Applications: -
Application behavior depends on a combination of several factors, including the current location, the current user,
and whether there are any other ubicomp devices present in the nearby surroundings.

❖ Networking: -
Another key driver for the final transition will be the use of short-range wireless as well as traditional wired
technologies.
Wireless computing refers to the use of wireless technology to connect computers to a network.

❖ Mobility: -
Mobility is made possible through wireless communication technologies.
This behavior is an inherent property of the ubicomp concept and it should not be treated as a failure.

❖ Scalability: -
In a ubiquitous computing environment, where possibly thousands upon thousands of devices take part,
scalability of the whole system is a key requirement.
Since all the devices are autonomous and must be able to operate independently, decentralized management will
most likely be the most suitable approach.

UNIT – 2

Cloud Reference Model

The cloud computing reference model is an abstract model that characterizes and standardizes the functions of a
cloud computing environment by partitioning it into abstraction layers and cross-layer functions. This reference
model groups the cloud computing functions and activities into five logical layers and three cross-layer functions.

The five layers are physical layer, virtual layer, control layer, service orchestration layer, and service layer. Each of
these layers specifies various types of entities that may exist in a cloud computing environment, such as compute
systems, network devices, storage devices, virtualization software, security mechanisms, control software,
orchestration software, management software, and so on. It also describes the relationships among these entities.
The three cross-layer functions are business continuity, security, and service management. Business continuity and
security functions specify various activities, tasks and processes that are required to offer reliable and secure cloud
services to the consumers. The service management function specifies various activities, tasks, and processes that
enable the administration of the cloud infrastructure and services to meet the provider's business requirements and
the consumers' expectations.

❖ Cloud computing layers: -
1. Physical layer:
• Foundation layer of the cloud infrastructure.
• Specifies entities that operate at this layer: Compute systems, network devices and storage
devices. Operating environment, protocol, tools and processes.
• Functions of physical layer: Executes requests generated by the virtualization and control layer.
2. Virtual layer:
• Deployed on the physical layer.
• Specifies entities that operate at this layer: Virtualization software, resource pools, virtual
resources.
• Functions of virtual layer: Abstracts physical resources and makes them appear as virtual resources.
Executes the requests generated by control layer.
3. Control layer:
• Deployed either on virtual layer or on physical layer.
• Specifies entities that operate at this layer: Control software.
• Functions of control layer: Enables resource configuration, resource pool configuration and
resource provisioning. Executes requests generated by service layer. Exposes resources to and
supports the service layer.
• Collaborates with the virtualization software and enables resource pooling and creating virtual
resources, dynamic allocation and optimizing utilization of resources.
4. Service Orchestration layer:
• Specifies the entities that operate at this layer: Orchestration software.
• Functions of orchestration layer: Provides workflows for executing automated tasks. Interacts with
various entities to invoke provisioning tasks.
5. Service layer:
• Consumers interact with and consume cloud resources via this layer.
• Specifies the entities that operate at this layer: Service catalog and self-service portal.
• Functions of service layer: Store information about cloud services in service catalog and presents
them to the consumers. Enables consumers to access and manage cloud services via a self-service
portal.
6. Business continuity:
• Specifies adoption of proactive and reactive measures to mitigate the impact of downtime.
• Enables ensuring the availability of services in line with SLA.
• Supports all the layers to provide uninterrupted services.
7. Security:
• Specifies the adoption of: administrative mechanisms (security and personnel policies, standard
procedures to direct safe execution of operations) and technical mechanisms (firewall, intrusion
detection and prevention systems, antivirus).
• Deploys security mechanisms to meet GRC requirements.
• Supports all the layers to provide secure services.
8. Service portfolio management:
• Define the service roadmap, service features, and service levels.
• Assess and prioritize where investments across the service portfolio are most needed.
• Establish budgeting and pricing.
• Deal with consumers in supporting activities such as taking orders, processing bills, and collecting
payments.
9. Service operation management:
• Enables infrastructure configuration and resource provisioning.
• Enable problem resolution.
• Enables capacity and availability management.
• Enables compliance conformance.

Layered Architecture of Cloud

All of the physical manifestations of cloud computing can be arranged into a layered picture that encompasses
anything from software systems to hardware appliances. The infrastructure can also include database systems and
other storage services.

❖ Layered Architecture of Cloud: -

❖ Application Layer: -

1. The application layer, which is at the top of the stack, is where the actual cloud apps are located.
Cloud applications, as opposed to traditional applications, can take advantage of the automatic-
scaling functionality to gain greater performance, availability, and lower operational costs.
2. This layer consists of different Cloud Services which are used by cloud users. Users can access these
applications according to their needs. Applications are divided into Execution
layers and Application layers.
3. In order for an application to transfer data, the application layer determines whether communication
partners are available. Whether enough cloud resources are accessible for the required
communication is decided at the application layer. Applications must cooperate in order to
communicate, and an application layer is in charge of this.
4. The application layer, in particular, is responsible for handling application protocols such as Telnet and FTP
that run over IP. Other examples of application layer systems include web browsers and protocols such as
SNMP, HTTP, and HTTPS (HTTP secured with TLS).

1. Platform Layer: -
1. The operating system and application software make up this layer.
2. Users should be able to rely on the platform to provide them with scalability, dependability, and security
protection, which gives them a space to create their apps, test operational processes, and keep track of
execution outcomes and performance. It is the foundation on which the application layer's SaaS applications are implemented.
3. The objective of this layer is to deploy applications directly on virtual machines.
4. Operating systems and application frameworks make up the platform layer, which is built on top of the
infrastructure layer. The platform layer's goal is to lessen the difficulty of deploying programs directly
into VM containers.
5. By way of illustration, Google App Engine functions at the platform layer to provide API support for
implementing storage, databases, and business logic of ordinary web apps.

2. Infrastructure Layer: -
1. It is a layer of virtualization where physical resources are divided into a collection of virtual resources using
virtualization technologies like Xen, KVM, and VMware.
2. This layer serves as the Central Hub of the Cloud Environment, where resources are constantly added
utilizing a variety of virtualization techniques.
3. It is the base upon which the platform layer is created, constructed using the virtualized network, storage, and
computing resources. It gives users the flexibility they want.
4. Automated resource provisioning is made possible by virtualization, which also improves infrastructure
management.
5. The infrastructure layer sometimes referred to as the virtualization layer, partitions the physical resources
using virtualization technologies like Xen, KVM, Hyper-V, and VMware to create a pool of compute and
storage resources.
6. The infrastructure layer is crucial to cloud computing since virtualization technologies are the only ones that
can provide many vital capabilities, like dynamic resource assignment.

3. Data center Layer: -


1. In a cloud environment, this layer is responsible for Managing Physical Resources such as servers,
switches, routers, power supplies, and cooling systems.
2. Providing end users with services requires all resources to be available and managed in data centers.
3. Physical servers connect through high-speed devices such as routers and switches to the data center.
4. In software application designs, the division of business logic from the persistent data it manipulates is
well-established.
5. A single database used by many microservices creates a very close coupling.

Types of Cloud

1. Public Cloud: -
Public clouds are managed by third parties who provide cloud services over the internet to the public; these
services are offered on pay-as-you-go billing models.
A fundamental characteristic of public clouds is multitenancy. A public cloud is meant to serve multiple
users, not a single customer.

❖ Advantages of using a Public cloud are: -


1. High Scalability
2. Cost Reduction
3. Reliability and flexibility
4. Disaster Recovery

❖ Disadvantages of using a Public cloud are: -


1. Loss of control over data
2. Data security and privacy
3. Limited Visibility
4. Unpredictable cost

2. Private cloud: -
Private clouds are distributed systems that work on private infrastructure and provide the users with dynamic
provisioning of computing resources. Instead of a pay-as-you-go model, private clouds may use
other schemes that manage the usage of the cloud and proportionally bill the different departments or
sections of an enterprise.

❖ Advantages of using a private cloud are as follows:


1. Customer information protection: In the private cloud security concerns are less since customer data
and other sensitive information do not flow out of private infrastructure.
2. Infrastructure ensuring SLAs: Private cloud provides specific operations such as appropriate
clustering, data replication, system monitoring and maintenance, disaster recovery, and other uptime
services.
3. Compliance with standard procedures and operations: Specific procedures have to be put in place
when deploying and executing applications according to third-party compliance standards. This is not
possible in the case of the public cloud.

❖ Disadvantages of using a private cloud are:


1. The restricted area of operations: Private cloud is accessible within a particular area. So the area of
accessibility is restricted.
2. Expertise required: Deploying and operating a private cloud requires in-house expertise, since the
infrastructure does not leave the organization. Hence, skilled people are required to manage and
operate the cloud services.

3. Hybrid cloud: -
A hybrid cloud is a heterogeneous distributed system formed by combining facilities of the public cloud and
private cloud. For this reason, they are also called heterogeneous clouds.

❖ Advantages of using a Hybrid cloud are: -


1) Cost: Available at a lower cost than other clouds because it is formed from a distributed system.
2) Speed: It is efficient and fast at a lower cost, and it reduces the latency of the data transfer process.
3) Security: Security is a key consideration. A hybrid cloud can be kept safe and secure because sensitive
workloads can run on the private part of the distributed system network.

❖ Disadvantages of using a Hybrid cloud are: -


1. It's possible that businesses lack the internal knowledge necessary to create such a hybrid environment.
Managing security may also be more challenging; different access levels and security considerations
may apply in each environment.
2. Managing a hybrid cloud may be more difficult. With all of the alternatives and choices available today,
not to mention the new PaaS components and technologies that will be released every day going
forward, public cloud and migration to public cloud are already complicated enough.

4. Community cloud: -
Community clouds are distributed systems created by integrating the services of different clouds to address
the specific needs of an industry, a community, or a business sector. But sharing responsibilities among the
organizations is difficult.
In the community cloud, the infrastructure is shared between organizations that have shared concerns or
tasks. An organization or a third party may manage the cloud.

❖ Advantages of using Community cloud are: -


1. Because the entire cloud is shared by numerous enterprises or a community, community clouds are
cost-effective.
2. Because it works with every user, the community cloud is adaptable and scalable. Users can alter
documents according to their needs and requirements.
3. The community cloud is more secure than the public cloud, though generally less secure than a private cloud.
4. Thanks to community clouds, we can share cloud resources, infrastructure, and other capabilities
between different enterprises.

❖ Disadvantages of using Community cloud are: -


1. Not all businesses should choose the community cloud.
2. Gradual adoption of data.
3. It's challenging for corporations to share duties.

5. Multi-cloud: -
Multi-cloud is the use of multiple cloud computing services from different providers, which allows
organizations to use the best-suited services for their specific needs and avoid vendor lock-in.

❖ Advantages of using multi-cloud: -


1. Flexibility: Using multiple cloud providers allows organizations to choose the best-suited services for
their specific needs, and avoid vendor lock-in.
2. Cost-effectiveness: Organizations can take advantage of the cost savings and pricing benefits offered by
different cloud providers for different services.
3. Improved performance: By distributing workloads across multiple cloud providers, organizations can
improve the performance and availability of their applications and services.
4. Increased security: Organizations can increase the security of their data and applications by spreading
them across multiple cloud providers and implementing different security strategies for each.

❖ Disadvantages of using multi-cloud: -


1. Complexity: Managing multiple cloud providers and services can be complex and require specialized
knowledge and expertise.
2. Increased costs: The cost of managing multiple cloud providers and services can be higher than using a
single provider.
3. Compatibility issues: Different cloud providers may use different technologies and standards, which can
cause compatibility issues and require additional resources to resolve.
4. Limited interoperability: Different cloud providers may not be able to interoperate seamlessly, which can
limit the ability to move data and applications between them.

Cloud Service Models

❖ There are the following three types of cloud service models -

1. Infrastructure as a Service (IaaS):

IaaS is also known as Hardware as a Service (HaaS). It is a computing infrastructure managed over the
internet. The main advantage of using IaaS is that it helps users to avoid the cost and complexity of purchasing
and managing the physical servers.

❖ Characteristics of IaaS:

There are the following characteristics of IaaS:

o Resources are available as a service

o Services are highly scalable

o Dynamic and flexible

o GUI and API-based access

o Automated administrative tasks

Example: Digital Ocean, Linode, Amazon Web Services (AWS), Microsoft Azure, Google Compute
Engine (GCE), Rackspace, and Cisco Metacloud.
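As a hedged illustration of consuming IaaS through an API, the sketch below uses AWS's boto3 SDK to launch a single virtual machine. It assumes AWS credentials and a default region are already configured in the environment, and the AMI ID shown is a placeholder, not a real image.

```python
# Sketch: provisioning one virtual machine through an IaaS API (AWS EC2 via boto3).
# Assumes AWS credentials/region are configured; the AMI ID is a placeholder.
import boto3


def launch_instance(ami_id="ami-0123456789abcdef0", instance_type="t3.micro"):
    ec2 = boto3.client("ec2")
    response = ec2.run_instances(
        ImageId=ami_id,          # which machine image to boot (placeholder ID)
        InstanceType=instance_type,
        MinCount=1,
        MaxCount=1,
    )
    instance_id = response["Instances"][0]["InstanceId"]
    print(f"Launched {instance_id}")
    return instance_id


if __name__ == "__main__":
    launch_instance()
```

The point of the sketch is that the user never touches physical servers: a single API call stands in for purchasing, racking, and cabling hardware.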

2. Platform as a Service (PaaS):

PaaS cloud computing platform is created for the programmer to develop, test, run, and manage the
applications.

❖ Characteristics of PaaS:

There are the following characteristics of PaaS -

o Accessible to various users via the same development application.

o Integrates with web services and databases.

o Builds on virtualization technology, so resources can easily be scaled up or down as per the
organization's need.

o Support multiple languages and frameworks.

o Provides an ability to "Auto-scale".

Example: AWS Elastic Beanstalk, Windows Azure, Heroku, Force.com, Google App Engine, Apache
Stratos, Magento Commerce Cloud, and OpenShift.

3. Software as a Service (SaaS):

SaaS is also known as "on-demand software". It is a software delivery model in which the applications are hosted by
a cloud service provider. Users can access these applications with the help of an internet connection and a web browser.

❖ Characteristics of SaaS:

There are the following characteristics of SaaS -

o Managed from a central location

o Hosted on a remote server

o Accessible over the internet

o Users are not responsible for hardware and software updates. Updates are applied automatically.

o The services are purchased on the pay-as-per-use basis

Example: BigCommerce, Google Apps, Salesforce, Dropbox, ZenDesk, Cisco WebEx, ZenDesk, Slack,
and GoToMeeting.

❖ Difference between IaaS, PaaS, and SaaS:

The comparison below shows the difference between IaaS, PaaS, and SaaS -

IaaS:
o It provides a virtual data center to store information and create platforms for app development, testing, and deployment.
o It provides access to resources such as virtual machines, virtual storage, etc.
o It is used by network architects.
o IaaS provides only Infrastructure.

PaaS:
o It provides virtual platforms and tools to create, test, and deploy apps.
o It provides runtime environments and deployment tools for applications.
o It is used by developers.
o PaaS provides Infrastructure + Platform.

SaaS:
o It provides web software and apps to complete business tasks.
o It provides software as a service to the end-users.
o It is used by end users.
o SaaS provides Infrastructure + Platform + Software.

Data Center in Cloud Computing

A data center is a facility made up of networked computers, storage systems, and computing infrastructure that
businesses and other organizations use to organize, process, store, and disseminate large amounts of data.
Enterprise data centers increasingly incorporate cloud computing resources and facilities to secure and protect in-
house, onsite resources.

❖ How do Data Centers work?


A data center brings together:
• systems for storing, sharing, accessing, and processing data across the organization;
• physical infrastructure to support data processing and data communication; and
• utilities such as cooling, electricity, network access, and uninterruptible power supplies (UPS).

❖ Why are data centers important?


Data centers enable organizations to concentrate their processing power, which in turn enables the organization
to focus its attention on:
• IT and data processing personnel;
• computing and network connectivity infrastructure; and
• computing facility security.

❖ What are the main components of Data Centers?


1. Compute
2. Enterprise data storage
3. Networking

❖ How are Datacenters managed?


• Facilities Management. Management of a physical data center facility may include duties related to the
facility's real estate, utilities, access control, and personnel.
• Datacenter inventory or asset management. Datacenter features include hardware assets and software
licensing, and release management.
• Datacenter Infrastructure Management. DCIM lies at the intersection of IT and facility management
and is typically accomplished by monitoring data center performance to optimize energy, equipment, and
floor use.
• Technical support. The data center provides technical services to the organization, and as such, it should
also provide technical support to the end-users of the enterprise.

Architecture of Cloud Computing

From small to medium and medium to large, every organization uses cloud computing services for storing
information and accessing it from anywhere and at any time, with only the help of the internet.
Transparency, scalability, security, and intelligent monitoring are some of the most important properties that
every cloud infrastructure should provide.

❖ Cloud Computing Architecture: -


The cloud architecture is divided into 2 parts:
1. Frontend
2. Backend

Architecture of cloud computing is the combination of both SOA (Service Oriented Architecture) and
EDA (Event Driven Architecture). Client infrastructure, application, service, runtime cloud, storage,
infrastructure, management and security all these are the components of cloud computing architecture.
1. Frontend:
Frontend of the cloud architecture refers to the client side of cloud computing system. Means it contains
all the user interfaces and applications which are used by the client to access the cloud computing
services/resources.
2. Backend:
Backend refers to the cloud itself which is used by the service provider. It contains the resources as well
as manages the resources and provides security mechanisms.

❖ Benefits of Cloud Computing Architecture: -
• Makes overall cloud computing system simpler.
• Improves data processing requirements.
• Helps in providing high security.
• Makes it more modularized.
• Results in better disaster recovery.
• Gives good user accessibility.
• Reduces IT operating costs.
• Provides high level reliability.
• Scalability.

❖ Cloud platform design goals: -


Four major design goals of a cloud computing platform.
1. Scalability.
2. Virtualization.
3. Efficiency.
4. Reliability.
System scalability can benefit from cluster architecture. If one service takes a lot of processing power,
storage capacity, or network traffic, it is simple to add more servers and bandwidth.
The scale of the cloud architecture can be easily expanded by adding more servers and enlarging the
network connectivity accordingly.
System reliability can benefit from this architecture. Data can be put into multiple locations.
Goal of virtualization is to centralize administrative tasks while improving scalability and workloads.
The internet cloud is imagined as a massive cluster of servers. The different resources(Space, Data, and
Speed) of the concerned servers are allocated as per demand dynamically.
In general, private clouds are easier to manage, and public clouds are easier to access.
The trends in cloud development are that more and more clouds will be hybrid.
One must learn how to create a private cloud and how to interact with public clouds in the open internet.
Security becomes a critical issue in safeguarding the operation of all cloud types.

❖ The architecture of a cloud is developed at three layers: -


1. Infrastructure.
2. Platform.
3. Application/Software.
These three development layers are implemented with virtualization and standardization of hardware and
software resources provisioned in the cloud.
The infrastructure layer serves as the foundation for building the platform layer of the cloud for supporting
PaaS services.
The platform layer is a foundation for implementing the application layer for SaaS applications.
The infrastructure layer is built with virtualized compute, storage, and network resources. Proper
utilization of these resources provides the flexibility demanded by the users.

The platform layer environment is provided for the development, testing, deployment and monitoring the
usage of apps. Indirectly, a virtualized cloud platform acts as a ‘system middleware’ between the
infrastructure and application layers of a cloud.
The application layer is formed from the collection of all the software modules needed for the SaaS apps. The
general service apps include those of information retrieval, document processing, and authentication services.
The application layer is also heavily used by enterprises in business marketing and sales, customer
relationship management (CRM), financial transactions, and supply chain management.
It should be noted that not all cloud services are restricted to a single layer. Many applications may apply
resources at mixed layers. After all, the three layers are built from the bottom up with a dependence
relationship.
In general, SaaS demands the most work from the provider, PaaS is in the middle, and IaaS demands the
least.
The SLA resource allocator acts as the interface between the data center/cloud service provider and
external users.
When a service request is first submitted, the service request examiner interprets the submitted request for
QoS requirements before determining whether to accept or reject the request.
The Accounting mechanism maintains the actual usage of resources by requests so that the final cost can be
computed and charged to users.
The VM monitor mechanism keeps track of the availability of VMs and their resource entitlements.
The service request monitor mechanism keeps track of the execution progress of service requests.
Multiple VMs can concurrently run applications based on different operating system environments on a
single physical machine since the VMs are isolated from one another on the same physical machine.
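A minimal sketch of the admission-control and accounting ideas described above follows, assuming a single capacity number stands in for the data center's resources. The request format, capacity units, and price are illustrative assumptions, not part of any specific cloud platform.

```python
# Toy sketch of the SLA resource allocator ideas above: a request examiner
# that admits or rejects requests against available capacity, plus a simple
# accounting record. Capacity units and the price are illustrative assumptions.

class ResourceAllocator:
    def __init__(self, total_capacity, price_per_unit_hour=0.05):
        self.available = total_capacity
        self.price = price_per_unit_hour
        self.usage = {}  # user -> unit-hours consumed

    def examine_request(self, user, units, hours):
        """Request examiner: accept only if the demand fits current capacity."""
        if units > self.available:
            return False  # reject: admitting it could violate existing SLAs
        self.available -= units
        self.usage[user] = self.usage.get(user, 0) + units * hours
        return True

    def bill(self, user):
        """Accounting: compute the final cost from actual recorded usage."""
        return self.usage.get(user, 0) * self.price


if __name__ == "__main__":
    allocator = ResourceAllocator(total_capacity=100)
    print(allocator.examine_request("alice", units=40, hours=10))  # True
    print(allocator.examine_request("bob", units=80, hours=5))     # False (rejected)
    print(f"alice owes ${allocator.bill('alice'):.2f}")            # 40*10*0.05 = 20.00
```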

❖ Architectural Design Challenges: -


1. Service availability and data lock-in problem.
2. Data privacy and security.
3. Unpredictable performance and bottlenecks.
4. Distributed storage and widespread bugs.
5. Cloud scalability, interoperability, and standardization.
6. Software licensing.

Parallel Computing and Distributed
Computing

❖ What is Parallel Computing?


It is also known as parallel processing. It utilizes several processors. A shared-memory or distributed-memory
system can be used for parallel computing. In shared-memory systems, all CPUs share the same memory;
in distributed-memory systems, each processor has its own memory and the processors exchange data explicitly.
Parallel computing helps to increase CPU utilization and improve performance because several
processors work simultaneously. Moreover, the failure of one CPU has no impact on the other CPUs'
functionality. However, if one processor needs data from another, the communication between them can introduce latency.
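A small sketch of this idea using Python's standard multiprocessing module is shown below; the workload (squaring numbers) is purely illustrative of several processors working simultaneously on one problem.

```python
# Parallel computing sketch: several processes work on parts of one problem.
# The workload here (squaring numbers) is purely illustrative.
from multiprocessing import Pool


def square(n):
    return n * n


if __name__ == "__main__":
    numbers = range(10)

    # Serial version: one processor handles every item in turn.
    serial = [square(n) for n in numbers]

    # Parallel version: a pool of worker processes shares the same items.
    with Pool(processes=4) as pool:
        parallel = pool.map(square, numbers)

    print(serial == parallel)  # True: same result, computed concurrently
```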

❖ Advantages: -
1. It saves time and money because many resources working together cut down on time and costs.
2. Larger problems that are difficult to solve with serial computing can be tackled.
3. You can do many things at once using many computing resources.
4. Parallel computing is much better than serial computing for modeling, simulating, and comprehending
complicated real-world events.

❖ Disadvantages: -
1. The multi-core architectures consume a lot of power.
2. Parallel solutions are more difficult to implement, debug, and prove right due to the complexity of
communication and coordination, and they frequently perform worse than their serial equivalents.

❖ What is Distributing Computing?


It comprises several software components that reside on different systems but operate as a single system. A
distributed system's computers can be physically close together and linked by a local network or geographically
distant and linked by a wide area network (WAN). A distributed system can be made up of any number of
different configurations, such as mainframes, PCs, workstations, and minicomputers. The main aim of
distributed computing is to make a network work as a single computer.
It enables scalability and makes it simpler to share resources. It also aids in the efficiency of computation
processes.

❖ Advantages: -
1. It is flexible, making it simple to install, use, and debug new services.
2. In distributed computing, you may add multiple machines as required.
3. If the system crashes on one server, that doesn't affect other servers.
4. A distributed computer system may combine the computational capacity of several computers, making it
faster than traditional systems.

❖ Disadvantages: -
1. Data security and sharing are the main issues in distributed systems due to the features of open systems.
2. Because of the distribution across multiple servers, troubleshooting and diagnostics are more challenging.
3. The main disadvantage of distributed computer systems is the lack of software support.

❖ Comparison between the Parallel Computing and Distributed Computing: -

Definition:
• Parallel Computing: It is a type of computation in which various processes run simultaneously.
• Distributed Computing: It is a type of computing in which the components are located on various networked systems that interact and coordinate their actions by passing messages to one another.

Communication:
• Parallel Computing: The processors communicate with one another via a bus.
• Distributed Computing: The computer systems connect with one another via a network.

Functionality:
• Parallel Computing: Several processors execute various tasks simultaneously.
• Distributed Computing: Several computers execute tasks simultaneously.

Number of Computers:
• Parallel Computing: It occurs in a single computer system.
• Distributed Computing: It involves various computers.

Memory:
• Parallel Computing: The system may have distributed or shared memory.
• Distributed Computing: Each computer system in distributed computing has its own memory.

Usage:
• Parallel Computing: It helps to improve the system performance.
• Distributed Computing: It allows for scalability, resource sharing, and the efficient completion of computation tasks.

MapReduce Tutorial

❖ What is MapReduce?
MapReduce is a data processing tool which is used to process data in parallel in a distributed form.
MapReduce is a paradigm that has two phases, the mapper phase and the reducer phase. In the Mapper,
the input is given in the form of key-value pairs. The output of the Mapper is fed to the Reducer as input. The
Reducer runs only after the Mapper is over. The Reducer also takes input in key-value format, and the output of
the Reducer is the final output.

❖ Steps in Map Reduce: -

• The map phase takes data in the form of pairs and returns a list of <key, value> pairs. The keys are not necessarily
unique at this stage.
• Using the output of Map, the Hadoop framework applies sort and shuffle. Sort and shuffle act on the list of
<key, value> pairs and emit each unique key together with the list of values associated with it:
<key, list(values)>.
• The output of sort and shuffle is sent to the reducer phase. The reducer performs a defined function on the list of
values for each unique key, and the final <key, value> output is stored or displayed.
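These phases can be mimicked in plain Python to make the data flow concrete. The sketch below is only an illustration of the paradigm (not Hadoop code), and the sample input lines are invented.

```python
# Illustrative word count in the MapReduce style: map -> shuffle/sort -> reduce.
from collections import defaultdict

lines = ["deer bear river", "car car river", "deer car bear"]  # sample input

# Map phase: emit <key, value> pairs; keys are not unique yet.
mapped = [(word, 1) for line in lines for word in line.split()]

# Shuffle and sort: group all values belonging to the same key.
grouped = defaultdict(list)
for key, value in sorted(mapped):
    grouped[key].append(value)        # e.g. "car" -> [1, 1, 1]

# Reduce phase: apply a function (here, sum) to each key's list of values.
reduced = {key: sum(values) for key, values in grouped.items()}
print(reduced)                        # {'bear': 2, 'car': 3, 'deer': 2, 'river': 2}
```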

❖ Usage of MapReduce: -

o It can be used in various applications like document clustering, distributed sorting, and web link-graph
reversal.
o It can be used for distributed pattern-based searching.
o We can also use MapReduce in machine learning.
o It was used by Google to regenerate Google's index of the World Wide Web.
o It can be used in multiple computing environments such as multi-cluster, multi-core, and mobile
environment.

Hadoop – Architecture

Hadoop works on MapReduce Programming Algorithm that was introduced by Google.

❖ The Hadoop Architecture Mainly consists of 4 components: -


• MapReduce
• HDFS (Hadoop Distributed File System)
• YARN (Yet Another Resource Negotiator)
• Common Utilities or Hadoop Common

1. MapReduce: -
MapReduce is essentially an algorithmic framework, based on YARN, whose major feature is to perform
distributed processing in parallel across a Hadoop cluster, which makes Hadoop work so fast. When you are
dealing with Big Data, serial processing is no longer of any use.
MapReduce has mainly two tasks, divided phase-wise:
in the first phase Map is utilized, and in the next phase Reduce is utilized.

➢ Map Task:
• RecordReader: The purpose of the RecordReader is to break the input into records. It is responsible for
providing key-value pairs to the Map() function.
• Map: A map is a user-defined function whose work is to process the tuples obtained from the RecordReader.
• Combiner: The Combiner is used for grouping (locally aggregating) the data in the Map workflow.
• Partitioner: The Partitioner is responsible for deciding how the key-value pairs generated in the Mapper
phase are distributed among the Reducers.

➢ Reduce Task:
• Shuffle and Sort: The process in which the Mapper generates the intermediate key-value and
transfers them to the Reducer task is known as Shuffling. Using the Shuffling process the system can
sort the data using its key value.
• Reduce: The main task of the Reduce is to gather the tuples generated by Map and then perform
aggregation (and any required sorting) on those key-value pairs according to their key element.
• OutputFormat: Once all the operations are performed, the key-value pairs are written into the file
with the help of record writer, each record in a new line, and the key and value in a space-separated
manner.

2. HDFS: -
HDFS (Hadoop Distributed File System) is utilized as the storage layer. It is mainly designed to work
on commodity hardware devices (inexpensive devices), following a distributed file system design. HDFS is
designed in such a way that it prefers storing data in large blocks rather than in many small blocks.
HDFS provides fault tolerance and high availability to the storage layer and the other devices
present in the Hadoop cluster. The data storage nodes in HDFS are:
• NameNode(Master)
• DataNode(Slave)

NameNode: The NameNode works as the Master in a Hadoop cluster and guides the DataNodes (slaves). The NameNode
is mainly used for storing the metadata, i.e. the data about the data. The metadata includes, for example, the
transaction logs that keep track of the user’s activity in the Hadoop cluster.
DataNode: DataNodes work as slaves. They are mainly utilized for storing the data in a Hadoop
cluster; the number of DataNodes can range from one to 500 or even more.
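Day-to-day interaction with HDFS typically goes through the `hdfs dfs` command-line client rather than the NameNode directly. The sketch below simply drives that client from Python; the paths are placeholders, and a running cluster with the Hadoop binaries on the PATH is assumed.

```python
# Sketch: copy a local file into HDFS and list the target directory.
# Assumes a working Hadoop installation; all paths below are placeholders.
import subprocess

subprocess.run(["hdfs", "dfs", "-mkdir", "-p", "/user/demo"], check=True)
subprocess.run(["hdfs", "dfs", "-put", "-f", "local_data.txt", "/user/demo/"], check=True)
subprocess.run(["hdfs", "dfs", "-ls", "/user/demo"], check=True)
```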

3. YARN (Yet Another Resource Negotiator): -


YARN is the framework on which MapReduce works. YARN performs two operations: job scheduling
and resource management. The purpose of the job scheduler is to divide a big task into small jobs so that each
job can be assigned to various slaves in a Hadoop cluster and processing can be maximized. The job scheduler
also keeps track of which job is important, which job has higher priority, the dependencies between jobs, and
other information such as job timing. The resource manager is used to manage all the resources
that are made available for running a Hadoop cluster.
Features of YARN
• Multi-Tenancy
• Scalability
• Cluster-Utilization
• Compatibility

4. Hadoop common or Common Utilities: -
These utilities are used by HDFS, YARN, and MapReduce for running the cluster. Hadoop Common assumes
that hardware failure in a Hadoop cluster is common, so failures need to be handled automatically in software by
the Hadoop framework.

Programming Languages for Cloud Computing

1. JavaScript: -
JavaScript is the best option for client-side development of rich, HTTP-based clients that require
access to multiple cloud services, such as Azure Blob Storage and Amazon Cognito. Thanks to JavaScript's
evolution, in many cases middleware layers with RESTful functionality are no longer necessary.

2. Node.js: -
For speed and scalability in cloud programming, Node.js is an excellent choice. Because it is easy to work with
and highly effective, it is used to develop end-to-end applications. Featuring non-blocking events and
asynchronous communication patterns, it allows an application to handle many connections. Node.js is one of
the favorites among many modern developers.

3. Python: -
Python combines speed, productivity, a strong community, and open-source development. Learning it increases
the chances of landing lucrative work, and it is also used in creating business applications, games, operating
systems, etc.

4. C: -
C programming language is known to be the fastest and most efficient. Choosing the C language is always best
when priorities are based on optimization and efficiency. To support the cloud, developers use C to write the
behind-the-scenes software.

5. GoLang: -
When it comes to cloud development, GoLang is specifically chosen. It is a modern and robust language created
and backed by Google, which supports concurrency, package management, and parallelism management.
Although Go is used across cloud platforms, it is at its best when working with Google Cloud Platform (GCP).

6. Java: -
Java is widely known as a general-purpose programming language. Java is also a highly versatile programming
language, making it one of the languages widely used to create applications for desktops, websites, or mobile
devices.

7. .NET: -
ASP.NET (the .NET platform) is one of the best-known development frameworks and is owned by Microsoft itself.
It is mostly used to develop web applications and websites with multiple purposes.

8. PHP: -
PHP offers powerful output capabilities, and it is a strong choice for developing applications with rich features.
Together with various database management systems, PHP is also used in cloud computing.

9. Ruby on Rails: -
With many benefits, Ruby on Rails is one of the programming languages used in cloud computing.

What is Google App Engine (GAE)
A scalable runtime environment, Google App Engine is mostly used to run Web applications. App Engine makes
it easier to develop scalable and high-performance Web apps.
The App Engine SDK facilitates the development and testing of applications by emulating the production
runtime environment and allowing developers to design and test applications on their own PCs. When an
application is finished being developed, developers can quickly migrate it to App Engine, put in place quotas to
control the cost that is generated, and make the program available to everyone.
The development and hosting platform Google App Engine, which powers anything from web programming for
huge enterprises to mobile apps, uses the same infrastructure as Google’s large-scale internet services. It is a fully
managed PaaS (platform as a service) cloud computing platform that uses in-built services to run your apps.
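As a deliberately minimal illustration, a Python application for the App Engine standard environment usually consists of a small web app plus an app.yaml descriptor. The sketch below assumes Flask, which is a common choice on App Engine but not the only one.

```python
# main.py -- minimal web app that App Engine's standard environment can serve.
# Deployment is then a matter of `gcloud app deploy` with an app.yaml that
# declares the runtime (e.g. "runtime: python39").
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello from Google App Engine!"

if __name__ == "__main__":
    # Local testing only; App Engine runs the app behind its own server.
    app.run(host="127.0.0.1", port=8080, debug=True)
```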

❖ Features of App Engine: -


1. Runtimes and Languages: -
To create an application for an app engine, you can use Go, Java, PHP, or Python.
2. Generally Usable Features: -
The implementation of such a feature is often stable, and any changes made to it are backward-compatible.
These include communications, process management, computing, data storage, retrieval, and search, as well as
app configuration and management.
3. Features in Preview: -
In a later iteration of the app engine, these functions will undoubtedly be made broadly accessible.
However, because they are in preview, their implementation may change in ways that are backward-incompatible.
4. Experimental Features: -
These might or might not be made broadly accessible in the next app engine updates. The experimental
features include Prospective Search, Page Speed, OpenID, Restore/Backup/Datastore Admin, Task
Queue Tagging, MapReduce, and Task Queue REST API.
5. Third-Party Services: -
As Google provides documentation and helper libraries to expand the capabilities of the app engine
platform, your app can perform tasks that are not built into the core product you are familiar with as app
engine.

❖ Advantages of Google App Engine: -


1. Infrastructure for Security: The Internet infrastructure that Google uses is arguably the safest in the
entire world.
2. Faster Time to Market: For every organization, getting a product or service to market quickly is
crucial.
3. Quick to Start: You don’t need to spend a lot of time prototyping or deploying the app to users because
there is no hardware or product to buy and maintain.
4. Easy to Use: The tools that you need to create, test, launch, and update the applications are included in
Google App Engine (GAE).
5. Rich set of APIs & Services: A number of built-in APIs and services in Google App Engine enable
developers to create strong, feature-rich apps.
6. Scalability: This is one of the deciding variables for the success of any software. When using the
Google app engine to construct apps, you may access technologies like GFS, Big Table, and others that
Google uses to build its own apps.

7. Performance and Reliability: Among international brands, Google ranks among the top ones.
Therefore, you must bear that in mind while talking about performance and reliability.
8. Cost Savings: To administer your servers, you don’t need to employ engineers or even do it yourself.
9. Platform Independence: Since the app engine platform only has a few dependencies, you can easily
relocate all of your data to another environment.

UNIT – 3

Virtualization

Virtualization is the "creation of a virtual (rather than actual) version of something, such as a server, a desktop, a
storage device, an operating system or network resources".

In other words, Virtualization is a technique which allows sharing a single physical instance of a resource or an
application among multiple customers and organizations. It does this by assigning a logical name to a physical
resource and providing a pointer to that physical resource when demanded.

❖ What is the concept behind the Virtualization?

Creation of a virtual machine over existing operating system and hardware is known as Hardware
Virtualization. A Virtual machine provides an environment that is logically separated from the underlying
hardware.

The machine on which the virtual machine is going to create is known as Host Machine and that virtual
machine is referred as a Guest Machine.

Advantages and Disadvantages of
Virtualization

❖ Characteristics of Virtualization: -
1. Instance Virtualization: -
The primary feature is that it virtualizes the whole platform. This implies that an operating system is
separated from the primary platform resources. Without installing or purchasing additional hardware, it can
virtualize the platform.

2. Virtualization of Resources: -
It permits resource virtualization in addition to operating system-wide virtualization. It permits the
virtualization of particular system resources. These include namespaces, storage, network resources, and
more.

3. Virtualization of applications: -
The fact that it also virtualizes apps is another feature. It refers to running a programme on various
hardware or software. For instance, cross-platform virtualization, portable programmes, etc.

4. Execution Control: -
The execution process is more controlled and secured when virtualization is used in the environment.
Additionally, it makes it possible to use more features. These include isolation, sharing, and other things.

5. Process transparency: -
The existence of transparency is one of the most crucial traits. The procedure becomes more transparent
and safe because it is moved entirely online. On virtual machines, which represent a clean and regulated
environment, all operations are carried out.

6. Observation of the Infrastructure: -


Continuous monitoring is made possible via virtualization in the cloud. As a result, it is simple to keep tabs
on all activities around-the-clock.

❖ Advantages of virtualization: -
Virtualization is being pursued with great attention by numerous IT businesses. One of the main benefits of
virtualization for platforms that support remote working is their integration with the cloud.

1. Cheap: -
IT infrastructures find virtualization to be a more affordable implementation option because it doesn't require
the use or installation of actual hardware components. Dedicating substantial amounts of space and money to
create an on-site resource is no longer required. We need a licence or access from a third-party vendor to begin
using the hardware, just as if it were locally produced.

2. Efficient: -
Virtualization enables efficient, automatic upgrades, with new versions of software and virtual hardware obtained
from the third-party supplier. Because the supplier handles these problems, organizations save money and avoid
having to hire additional specialists. Virtualization also lessens the difficulty of managing
resources, increasing the effectiveness of virtual environments.

3. Disaster recovery: -
When servers are virtualized, disaster recovery is relatively simple thanks to fast backup restoration and current
snapshots of your virtual machines. Organizations were better able to create a low-cost replication location
thanks to virtualization. If a disaster occurs in the data centre or server room itself, you can still relocate such
virtual machines to a cloud provider. Having the flexibility level guarantees that the disaster recovery plan will
be simpler to implement and will have a 99% success rate.

4. Deployment: -
Resources may be deployed much more quickly when employing virtualization technology. It is feasible to
significantly reduce the amount of time required for setting up physical devices or creating local networks. As a
result, all users really need is at least one connection to the virtual environment. Additionally, the implementation of
virtual machines is frequently simpler than the installation of actual models.

5. Encourages digital entrepreneurship: -


Prior to widespread virtualization, the average person found it nearly impossible to start a digital business.
Thanks to the multiple networks, servers, and storage devices that are now accessible, almost anyone can start
their own side business or turn into a business owner. Everyone can hang out their shingle and start looking for
employment.

6. Saves energy: -
Both individuals and businesses can save energy by using virtualization. The rate of energy consumption can be
reduced because no local hardware or software alternatives are being employed. To boost the total ROI of
virtualization, monies can be used over time for other operational expenses rather than paying for a data centre's
cooling costs and equipment operation costs.

7. Improved uptime: -
Virtualization technologies have increased uptime dramatically. An uptime of 99.9999% is offered by some
providers. Even low-cost carriers now offer uptime at a rate of 99.99%.

8. Consistent cost: -
People and corporations can have predictable expenses for their IT requirements because third-party vendors
frequently offer choices for virtualization.

❖ Disadvantages of virtualization: -
Numerous complex dimensions that digital technology had to explore have been resolved through virtualization.
However, virtualization still shows signs of minor but significant problems. As a result, virtualization has a
lot of drawbacks, which are listed below:

1. Exorbitant costs of implementation: -


Virtualization would result in very low costs for the common person or business. In a virtualization
environment, the suppliers, however, may incur very significant implementation expenses. It follows that
devices must either be created, made, or purchased for implementation when hardware and software are
eventually required.

2. Restraints: -
Virtualization is hampered by a number of issues. Virtualization cannot be used with every server and
application currently in existence. Therefore, certain firms' IT infrastructures would not be able to support the
virtualized solutions. They no longer receive support from a number of vendors as well. The demands of both
individuals and organisations must be served using a hybrid approach.

3. Problems with availability: -


The accessibility of a company's data is another important factor: data must remain linked and available over the
long term, or the business becomes less competitive in the market. Because every document from and for the client is
essential to the service provider, availability difficulties might be seen as one of the drawbacks of virtualization.
If the virtualization servers are taken offline, hosted websites become useless as well.
The user has no control over this; it is completely the responsibility of the third-party providers.

4. Time-intensive: -
In comparison to local systems, virtualization takes less time to implement, but it ultimately costs users time.
This is due to the fact that there are additional procedures that need to be completed in order to attain the
desired result.

5. Threats to security: -
Information is our current currency. Having money allows you to make money. Without it, people will forget
about you. The success of a corporation depends on information, hence it is frequently targeted.

6. Problems with scalability: -


People can grow a business or opportunity quickly owing to virtualization, but won't be able to grow it as large
as they would like. In a virtualization network, growth generates latency since multiple firms share the same
resources. There isn't much that can be done to stop this, and one powerful presence could siphon resources
away from other, smaller businesses.

7. A Number of links must interact: -


If users have access to local equipment, they have complete control over their options. With virtualization,
people lose control because numerous ties are required to cooperate in order to complete the same task. We can
take the example of saving a document file. Using a local storage device like a flash drive or HDD, users can
instantly save the content and even create a backup. In order to use virtualization, the ISP connection must be
reliable.

IMPLEMENTATION LEVELS OF
VIRTUALIZATION

❖ Levels of Virtualization Implementation: -


A traditional computer runs with a host operating system specially tailored for its hardware architecture. After
virtualization, different user applications managed by their own operating systems (guest OS) can run on the
same hardware, independent of the host OS. This is often done by adding additional software, called
a virtualization layer. This virtualization layer is known as hypervisor or virtual machine monitor (VMM). The
VMs are shown in the upper boxes, where applications run with their own guest OS over the virtualized CPU,
memory, and I/O resources.
The main function of the software layer for virtualization is to virtualize the physical hardware of a host
machine into virtual resources to be used by the VMs, exclusively. This can be implemented at various
operational levels, as we will discuss shortly. The virtualization software creates the abstraction of VMs by
interposing a virtualization layer at various levels of a computer system. Common virtualization layers include
the instruction set architecture (ISA) level, hardware level, operating system level, library support level, and
application level.

1. Instruction Set Architecture Level: -
At the ISA level, virtualization is performed by emulating a given ISA by the ISA of the host machine. For example,
MIPS binary code can run on an x86-based host machine with the help of ISA emulation. With this approach, it is
possible to run a large amount of legacy binary code written for various processors on any given new hardware host
machine. Instruction set emulation leads to virtual ISAs created on any hardware machine.
The basic emulation method is through code interpretation. An interpreter program interprets the source instructions
to target instructions one by one. One source instruction may require tens or hundreds of native target instructions to
perform its function. Obviously, this process is relatively slow. For better performance, dynamic binary translation is
desired. This approach translates basic blocks of dynamic source instructions to target instructions. The basic blocks
can also be extended to program traces or super blocks to increase translation efficiency. Instruction set emulation
requires binary translation and optimization. A virtual instruction set architecture (V-ISA) thus requires adding a
processor-specific software translation layer to the compiler.
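To make "code interpretation" concrete, the toy interpreter below executes a made-up three-instruction guest ISA one instruction at a time on the host. It is only a sketch of the idea; real emulators rely on dynamic binary translation for acceptable speed.

```python
# Toy ISA-level emulation by interpretation: each "guest" instruction is
# decoded and carried out by host code, one at a time (hence the slowdown).
program = [
    ("LOAD", "r0", 5),           # r0 <- 5
    ("LOAD", "r1", 7),           # r1 <- 7
    ("ADD",  "r2", "r0", "r1"),  # r2 <- r0 + r1
    ("PRINT", "r2"),
]

registers = {}

for instr in program:
    op = instr[0]
    if op == "LOAD":
        _, reg, value = instr
        registers[reg] = value
    elif op == "ADD":
        _, dst, a, b = instr
        registers[dst] = registers[a] + registers[b]
    elif op == "PRINT":
        print(registers[instr[1]])   # prints 12
    else:
        raise ValueError(f"unknown instruction: {op}")
```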

2. Hardware Abstraction Level: -


Hardware-level virtualization is performed right on top of the bare hardware. On the one hand, this approach
generates a virtual hardware environment for a VM. On the other hand, the process manages the underlying
hardware through virtualization. The idea is to virtualize a computer’s resources, such as its processors, memory,
and I/O devices. The intention is to upgrade the hardware utilization rate by multiple users concurrently. The idea
was implemented in the IBM VM/370 in the 1960s. More recently, the Xen hypervisor has been applied to virtualize
x86-based machines to run Linux or other guest OS applications.

3. Operating System Level: -
This refers to an abstraction layer between the traditional OS and user applications. OS-level virtualization creates
isolated containers on a single physical server and the OS instances to utilize the hardware and software in data
centers. The containers behave like real servers. OS-level virtualization is commonly used in creating virtual hosting
environments to allocate hardware resources among a large number of mutually distrusting users. It is also used, to a
lesser extent, in consolidating server hardware by moving services on separate hosts into containers or VMs on one
server.

4. Library Support Level: -


Most applications use APIs exported by user-level libraries rather than using lengthy system calls by the OS. Since
most systems provide well-documented APIs, such an interface becomes another candidate for virtualization.
Virtualization with library interfaces is possible by controlling the communication link between applications and the
rest of a system through API hooks. The software tool WINE has implemented this approach to support Windows
applications on top of UNIX hosts. Another example is the vCUDA which allows applications executing within
VMs to leverage GPU hardware acceleration.

5. User-Application Level: -
Virtualization at the application level virtualizes an application as a VM. On a traditional OS, an application often
runs as a process. Therefore, application-level virtualization is also known as process-level virtualization. The most
popular approach is to deploy high level language (HLL)
VMs. In this scenario, the virtualization layer sits as an application program on top of the operating system, and the
layer exports an abstraction of a VM that can run programs written and compiled to a particular abstract machine
definition. Any program written in the HLL and compiled for this VM will be able to run on it. The Microsoft .NET
CLR and Java Virtual Machine (JVM) are two good examples of this class of VM.
Other forms of application-level virtualization are known as application isolation, application sandboxing,
or application streaming. The process involves wrapping the application in a layer that is isolated from the host OS
and other applications. The result is an application that is much easier to distribute and remove from user
workstations. An example is the LANDesk application virtualization platform which deploys software applications
as self-contained, executable files in an isolated environment without requiring installation, system modifications, or
elevated security privileges.

VIRTUALIZATION STRUCTURES/TOOLS
AND MECHANISMS

In general, there are three typical classes of VM architecture. Before virtualization, the operating system manages
the hardware. After virtualization, a virtualization layer is inserted between the hardware and the operating system.
In such a case, the virtualization layer is responsible for converting portions of the real hardware into virtual
hardware. Therefore, different operating systems such as Linux and Windows can run on the same physical
machine, simultaneously. Depending on the position of the virtualization layer, there are several classes of VM
architectures, namely the hypervisor architecture, para-virtualization, and host-based virtualization.
The hypervisor is also known as the VMM (Virtual Machine Monitor). They both perform the same virtualization
operations.

1. Hypervisor and Xen Architecture: -


A micro-kernel hypervisor includes only the basic and unchanging functions (such as physical memory management
and processor scheduling). The device drivers and other changeable components are outside the hypervisor. A
monolithic hypervisor implements all the aforementioned functions, including those of the device drivers. Therefore,
the size of the hypervisor code of a micro-kernel hypervisor is smaller than that of a monolithic hypervisor.
Essentially, a hypervisor must be able to convert physical devices into virtual resources dedicated for the deployed
VM to use.

1.1 The Xen Architecture: -


The core components of a Xen system are the hypervisor, kernel, and applications. The organization of the three
components is important. Like other virtualization systems, many guest OSes can run on top of the hypervisor.
However, not all guest OSes are created equal, and one in particular controls the others. The guest OS, which has
control ability, is called Domain 0, and the others are called Domain U. Domain 0 is a privileged guest OS of Xen. It
is first loaded when Xen boots without any file system drivers being available. Domain 0 is designed to access
hardware directly and manage devices. Therefore, one of the responsibilities of Domain 0 is to allocate and map
hardware resources for the guest domains (the Domain U domains).

2. Binary Translation with Full Virtualization: -
Depending on implementation technologies, hardware virtualization can be classified into two categories: full
virtualization and host-based virtualization. Full virtualization does not need to modify the host OS. It relies
on binary translation to trap and to virtualize the execution of certain sensitive, nonvirtualizable instructions. The
guest OSes and their applications consist of noncritical and critical instructions. In a host-based system, both a host
OS and a guest OS are used. A virtualization software layer is built between the host OS and guest OS. These two
classes of VM architecture are introduced next.

2.1 Full Virtualization: -


With full virtualization, noncritical instructions run on the hardware directly while critical instructions are
discovered and replaced with traps into the VMM to be emulated by software. Both the hypervisor and VMM
approaches are considered full virtualization. Why are only critical instructions trapped into the VMM? This is
because binary translation can incur a large performance overhead. Noncritical instructions do not control hardware
or threaten the security of the system, but critical instructions do. Therefore, running noncritical instructions on
hardware not only can promote efficiency, but also can ensure system security.

2.2 Binary Translation of Guest OS Requests Using a VMM: -


The performance of full virtualization may not be ideal, because it involves binary translation which is rather time-
consuming. In particular, the full virtualization of I/O-intensive applications is really a big challenge. Binary
translation employs a code cache to store translated hot instructions to improve performance, but it increases the cost
of memory usage. At the time of this writing, the performance of full virtualization on the x86 architecture is
typically 80 percent to 97 percent that of the host machine.

2.3 Host-Based Virtualization: -


An alternative VM architecture is to install a virtualization layer on top of the host OS. This host OS is still
responsible for managing the hardware. The guest OSes are installed and run on top of the virtualization layer.
Dedicated applications may run on the VMs. Certainly, some other applications can also run with the host OS
directly. This host-based architecture has some distinct advantages, as enumerated next. First, the user can install
this VM architecture without modifying the host OS. The virtualizing software can rely on the host OS to provide
device drivers and other low-level services. This will simplify the VM design and ease its deployment.
Second, the host-based approach appeals to many host machine configurations. Compared to the hypervisor/VMM
architecture, the performance of the host-based architecture may also be low. When an application requests
hardware access, it involves four layers of mapping which downgrades performance significantly. When the ISA of
a guest OS is different from the ISA of the underlying hardware, binary translation must be adopted. Although the
host-based architecture has flexibility, the performance is too low to be useful in practice.

3. Para-Virtualization with Compiler Support: -
Para-virtualization needs to modify the guest operating systems. A para-virtualized VM provides special APIs
requiring substantial OS modifications in user applications. Performance degradation is a critical issue of a
virtualized system. No one wants to use a VM if it is much slower than using a physical machine. The virtualization
layer can be inserted at different positions in a machine software stack. However, para-virtualization attempts to
reduce the virtualization overhead, and thus improve performance by modifying only the guest OS kernel.
Although para-virtualization reduces the overhead, it has incurred other problems. First, its compatibility and
portability may be in doubt, because it must support the unmodified OS as well. Second, the cost of maintaining
para-virtualized OSes is high, because they may require deep OS kernel modifications. Finally, the performance
advantage of para-virtualization varies greatly due to workload variations. Compared with full virtualization, para-
virtualization is relatively easy and more practical. The main problem in full virtualization is its low performance in
binary translation. To speed up binary translation is difficult. Therefore, many virtualization products employ the
para-virtualization architecture. The popular Xen, KVM, and VMware ESX are good examples.

Hypervisor

A hypervisor, also known as a virtual machine monitor or VMM, is a piece of software that allows us to build and
run virtual machines, abbreviated as VMs.
A hypervisor allows a single host computer to support multiple virtual machines (VMs) by sharing resources
including memory and processing.

❖ What is the use of a hypervisor?


Hypervisors allow the use of more of a system's available resources and provide greater IT versatility because the
guest VMs are independent of the host hardware which is one of the major benefits of the Hypervisor.
In other words, this implies that they can be quickly switched between servers. Because a hypervisor allows
several virtual machines to operate on a single physical server, it helps us to reduce:
• The space required.
• The energy used.
• The maintenance requirements of the server.

❖ Kinds of hypervisors: -
There are two types of hypervisors: "Type 1" (also known as "bare metal") and "Type 2" (also known as
"hosted"). A type 1 hypervisor functions as a light operating system that operates directly on the host's
hardware, while a type 2 hypervisor functions as a software layer on top of an operating system, similar to other
computer programs.
Since they are isolated from the attack-prone operating system, bare-metal hypervisors are extremely stable.
Furthermore, they are usually faster and more powerful than hosted hypervisors. For these purposes, the
majority of enterprise businesses opt for bare-metal hypervisors for their data center computing requirements.
While hosted hypervisors run inside the OS, they can be topped with additional (and different) operating
systems.
Hosted hypervisors have longer latency than bare-metal hypervisors, which is a major disadvantage. This is
because communication between the hardware and the hypervisor must pass through the OS's extra layer.

1. The Type 1 hypervisor: -


The native or bare metal hypervisor, the Type 1 hypervisor is known by both names.
It replaces the host operating system, and the hypervisor schedules VM services directly to the hardware.
The type 1 hypervisor is very much commonly used in the enterprise data center or other server-based
environments.
Examples include KVM, Microsoft Hyper-V, and VMware vSphere. KVM was integrated into the Linux kernel in
2007, so if we are running a reasonably recent kernel, we already have KVM built in.

2. The Type 2 hypervisor: -


It is also known as a hosted hypervisor, The type 2 hypervisor is a software layer or framework that runs on a
traditional operating system.
It operates by separating the guest and host operating systems. The host operating system schedules VM
services, which are then executed on the hardware.
Individual users who wish to operate multiple operating systems on a personal computer should use a type 2
hypervisor.
This type of hypervisor also includes the virtual machines with it.
Hardware acceleration technology improves the processing speed of both bare-metal and hosted hypervisors,
allowing them to build and handle virtual resources more quickly.
On a single physical computer, all types of hypervisors will operate multiple virtual servers for multiple tenants.
Different businesses rent data space on various virtual servers from public cloud service providers. One server
can host multiple virtual servers, each of which is running different workloads for different businesses.

❖ What is a cloud hypervisor?


Hypervisors are a key component of the technology that enables cloud computing since they are a software
layer that allows one host device to support several virtual machines at the same time.
Hypervisors allow IT to retain control over a cloud environment's infrastructure, processes, and sensitive data
while making cloud-based applications accessible to users in a virtual environment.
Increased emphasis on creative applications is being driven by digital transformation and increasing consumer
expectations. As a result, many businesses are transferring their virtual computers to the cloud.
Having to rewrite any existing application for the cloud, on the other hand, will eat up valuable IT resources and
create infrastructure silos.
A hypervisor also helps in the rapid migration of applications to the cloud as being a part of a virtualization
platform.
As a result, businesses will take advantage of the cloud's many advantages, such as lower hardware costs,
improved accessibility, and increased scalability, for a quicker return on investment.

➢ Benefits of hypervisors: -
Using a hypervisor to host several virtual machines has many advantages:
• Speed: The hypervisors allow virtual machines to be built instantly unlike bare-metal servers. This
makes provisioning resources for complex workloads much simpler.
• Efficiency: Hypervisors that run multiple virtual machines on the resources of a single physical
machine often allow for more effective use of a single physical server.
• Flexibility: Since the hypervisor separates the OS from the underlying hardware, the software no
longer relies on particular hardware devices or drivers; bare-metal hypervisors enable operating
systems and their related applications to operate on a variety of hardware types.
• Portability: Multiple operating systems can run on the same physical server thanks to hypervisors
(host machine). The hypervisor's virtual machines are portable because they are separate from the
physical computer.

Kernel-Based Virtual Machine (KVM)

Kernel-Based Virtual Machine (KVM) is an open source virtualization module directly built into the Linux kernel,
enabling Linux OS to function as a Type 1 (bare-metal) hypervisor. However, worth noting is that the distinction
between Type 1 and Type 2 hypervisors can be blurred with KVM, as it can function as either of the two.
Furthermore, it enables the hypervisor to deploy separate virtual machines.
❖ KVM Advantages: -
1. Since the KVM module is built into the Linux kernel, it comes built-in with most Linux distributions.
2. KVM is open source, which means it's free to use, regularly updated, and very secure due to being part of
the world's largest open source community.
3. KVM is very stable and has excellent performance with suitable hardware.
4. KVM has fantastic command-line options with a polished GUI interface.

❖ KVM Disadvantages: -
1. Depending on a user's needs and infrastructure, the host hardware needs to be robust.
2. Because KVM is a Linux kernel module, it can't run on most operating systems with a few exceptions, such
as FreeBSD and illumos.
3. Centralized hardware, which can be problematic in cases of failure.

❖ KVM Features: -
1. Scalability and Clustering: -
KVM is an excellent solution for scalability and clustering. As resource demand increases, you can deploy
the desired amount of VMs to meet all the workload needs and fine-tune them for specific tasks.
Additionally, you can use KVM to set up server clusters or private clouds, such as OpenStack.
2. Virtio Devices and Hardware: -
Virtio is a virtualization standard primarily for storage and network devices used as the primary
input/output (IO) virtualization platform in KVM. It enables high performance of disks and network
devices on virtual guest machines.
By default, it supports a wide variety of hardware devices. This support also applies to new advancements
in hardware technologies. As they get adopted into the Linux kernel mainline, they will also work with
KVM.
3. Migration: -
KVM supports offline and live migrations. For example, when the host server is offline for maintenance,
use KVM to migrate VMs to another host.
Live migrations require no downtime if the host and target servers use CPUs from the same manufacturer.
Alternatively, you will first need to shut down an instance to move VMs from one host to another using
different CPU manufacturers. In this case, some features may or may not work, depending on the CPU.
4. Security: -
KVM uses SELinux and sVirt to isolate virtual instances and protect the guest systems from various
attacks. It also benefits from regular security updates to the kernel. Patches and updates are quickly
released if anything goes wrong. Furthermore, the source code is transparent and regularly tested for
suitability due to the collaborative approach to work done on the kernel.
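One common way to manage KVM guests programmatically is through the libvirt API and its Python bindings (the libvirt-python package). The sketch below only lists the defined domains and assumes a local qemu:///system connection with sufficient privileges.

```python
# Sketch: connect to the local KVM/QEMU hypervisor via libvirt and list guests.
# Requires the libvirt daemon and the libvirt-python bindings to be installed.
import libvirt

conn = libvirt.open("qemu:///system")   # may require appropriate privileges
try:
    for dom in conn.listAllDomains():
        state = "running" if dom.isActive() else "shut off"
        print(f"{dom.name()}: {state}")
finally:
    conn.close()
```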

Xen

Xen is an open source hypervisor based on paravirtualization and is the most popular application of
paravirtualization. Xen has been extended to be compatible with full virtualization using hardware-assisted
virtualization. It enables guest operating systems to execute with high performance. This is done by removing
the performance loss associated with instructions that require significant handling, and by modifying the portions of
the guest operating system executed by Xen with reference to the execution of such instructions. Hence it
especially supports x86, which is the most used architecture on commodity machines and servers.

The Xen architecture maps onto a classic x86 privilege model. A Xen-based
system is handled by the Xen hypervisor, which is executed in the most privileged mode and maintains the access of
guest operating systems to the basic hardware. Guest operating systems run within domains, which represent
virtual machine instances.
In addition, particular control software, which has privileged access to the host and handles all other guest OSes,
runs in a special domain called Domain 0. This is the only domain loaded once the virtual machine manager has fully
booted; it hosts an HTTP server that serves requests for virtual machine creation, configuration, and
termination. This component establishes the primary version of a shared virtual machine manager (VMM), which
is a necessary part of a Cloud computing system delivering an Infrastructure-as-a-Service (IaaS) solution.
x86 implementations support four distinct security levels, termed rings: Ring 0, Ring 1, Ring 2, and Ring 3.
Here, Ring 0 is the most privileged level and Ring 3 the least privileged.
Almost all commonly used operating systems, except for OS/2, use only two levels: Ring 0 for kernel
code and Ring 3 for user applications and non-privileged OS programs. This gives Xen the opportunity to
implement paravirtualization, and it enables Xen to keep the Application Binary Interface (ABI) unchanged, thus
allowing a simple shift to Xen-virtualized solutions from an application perspective.
Due to the structure of the x86 instruction set, some instructions allow code executing in Ring 3 to switch to Ring 0
(kernel mode). Such an operation is done at the hardware level, and hence within a virtualized environment it will
lead to a TRAP or a silent fault, thus preventing the general operation of the guest OS, as it is now running in
Ring 1.

This condition is basically caused by a subset of system calls. To eliminate it, the operating system
implementation requires modification: all the sensitive system calls need to be re-implemented with hypercalls.
Here, hypercalls are the special calls exposed by the virtual machine (VM) interface of Xen; by using them, the
Xen hypervisor is able to catch the execution of all the sensitive instructions, manage them, and return
control to the guest OS through a supplied handler.
Paravirtualization demands that the OS codebase be changed, and hence not every operating system can be used
as a guest OS in a Xen-based environment. This limitation applies where hardware-assisted virtualization is not
available, since hardware assistance would allow the hypervisor to run in a mode more privileged than Ring 0
(often called Ring -1) while the guest OS runs unmodified in Ring 0. Hence, Xen shows some
limitations in terms of legacy hardware and in terms of legacy OSes.
In fact, such legacy OSes cannot be modified to run safely in Ring 1 because their codebase is not accessible, and
at the same time, legacy hardware has no support for executing a hypervisor in a mode more privileged than Ring 0.
Open source OSes like Linux can easily be modified, as their code is openly available, and Xen delivers full support
for their virtualization, while Windows components are basically not compatible with Xen unless hardware-assisted
virtualization is available. As new OS releases are designed to be virtualized and new hardware supports x86
virtualization, the problem is gradually being resolved.
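In practice, Domain 0 administers guests with the xl toolstack. The hedged sketch below simply shells out to `xl list` from Python to show the running domains; it assumes the script runs in Domain 0 on a Xen host with xl installed.

```python
# Sketch: ask the Xen toolstack (from Domain 0) which domains are running.
import subprocess

result = subprocess.run(["xl", "list"], capture_output=True, text=True, check=True)
print(result.stdout)   # one line per domain: name, ID, memory, VCPUs, state, time
```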

❖ Pros: -
1. Xen server is developed over the open-source Xen hypervisor and uses a combination of hardware-based
virtualization and paravirtualization. This tightly coupled collaboration between the operating
system and the virtualized platform enables the development of a lighter and more flexible hypervisor that
delivers its functionality in an optimized manner.
2. Xen supports efficient balancing of large workloads across CPU, memory, disk input-output
and network input-output of data. It offers two modes to handle this workload: performance
enhancement, and handling data density.
3. It also comes equipped with a special storage feature called Citrix StorageLink, which allows a
system administrator to use the features of storage arrays from major vendors such as HP, NetApp, and Dell
EqualLogic.
4. It also supports multiple processors, live migration from one machine to another, physical-server-to-virtual-
machine and virtual-server-to-virtual-machine conversion tools, centralized multiserver management, and real-
time performance monitoring on Windows and Linux.
❖ Cons: -
1. Xen is more reliable on Linux than on Windows.
2. Xen relies on third-party components to manage resources like drivers, storage, backup, recovery, and
fault tolerance.
3. Xen deployment can become burdensome on your Linux kernel system as time passes.
4. Xen may sometimes increase the load on your resources through a high input-output rate and may
cause starvation of other VMs.

CPU Virtualization
A VM is a duplicate of an existing computer system in which a majority of the VM instructions are executed on the
host processor in native mode. Thus, unprivileged instructions of VMs run directly on the host machine for higher
efficiency. Other critical instructions should be handled carefully for correctness and stability. The critical
instructions are divided into three categories: privileged instructions, control-sensitive instructions, and behavior-
sensitive instructions. Privileged instructions execute in a privileged mode and will be trapped if executed outside
this mode. Control-sensitive instructions attempt to change the configuration of resources used. Behavior-sensitive
instructions have different behaviors depending on the configuration of resources, including the load and store
operations over the virtual memory.
A CPU architecture is virtualizable if it supports the ability to run the VM’s privileged and unprivileged instructions
in the CPU’s user mode while the VMM runs in supervisor mode. When the privileged instructions including
control- and behavior-sensitive instructions of a VM are executed, they are trapped in the VMM. In this case, the
VMM acts as a unified mediator for hardware access from different VMs to guarantee the correctness and stability
of the whole system. However, not all CPU architectures are virtualizable. RISC CPU architectures can be naturally
virtualized because all control- and behavior-sensitive instructions are privileged instructions. On the contrary, x86
CPU architectures are not primarily designed to support virtualization. This is because about 10 sensitive
instructions, such as SGDT and SMSW, are not privileged instructions. When these instructions execute in
virtualization, they cannot be trapped in the VMM.
On a native UNIX-like system, a system call triggers the 80h interrupt and passes control to the OS kernel. The
interrupt handler in the kernel is then invoked to process the system call. On a para-virtualization system such as
Xen, a system call in the guest OS first triggers the 80h interrupt normally. Almost at the same time,
the 82h interrupt in the hypervisor is triggered. Incidentally, control is passed on to the hypervisor as well. When the
hypervisor completes its task for the guest OS system call, it passes control back to the guest OS kernel. Certainly,
the guest OS kernel may also invoke the hypercall while it’s running. Although paravirtualization of a CPU lets
unmodified applications run in the VM, it causes a small performance penalty.
❖ Hardware-Assisted CPU Virtualization: -
This technique attempts to simplify virtualization because full or paravirtualization is complicated. Intel and AMD
add an additional mode called privilege mode level (some people call it Ring-1) to x86 processors. Therefore,
operating systems can still run at Ring 0 and the hypervisor can run at Ring -1. All the privileged and sensitive
instructions are trapped in the hypervisor automatically. This technique removes the difficulty of implementing
binary translation of full virtualization. It also lets the operating system run in VMs without modification.

To manage the CPU state for VMs, a set of additional instructions is added. At the time of this writing, Xen, VMware, and the
Microsoft Virtual PC all implement their hypervisors by using the VT-x technology.
Generally, hardware-assisted virtualization should have high efficiency. However, since the transition from the
hypervisor to the guest OS incurs high overhead switches between processor modes, it sometimes cannot outperform
binary translation. Hence, virtualization systems such as VMware now use a hybrid approach, in which a few tasks

are offloaded to the hardware but the rest is still done in software. In addition, para-virtualization and hardware-
assisted virtualization can be combined to improve the performance further.
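A quick practical check for this hardware support on a Linux host is to look for the vmx (Intel VT-x) or svm (AMD-V) CPU flags; the sketch below does exactly that and is purely illustrative.

```python
# Sketch: detect hardware-assisted CPU virtualization support on a Linux host
# by looking for the Intel VT-x ("vmx") or AMD-V ("svm") flags in /proc/cpuinfo.
flags = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())

if "vmx" in flags:
    print("Intel VT-x detected: hardware-assisted virtualization is supported")
elif "svm" in flags:
    print("AMD-V detected: hardware-assisted virtualization is supported")
else:
    print("No hardware virtualization extensions found in /proc/cpuinfo")
```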
❖ Memory Virtualization: -
Virtual memory virtualization is similar to the virtual memory support provided by modern operating systems. In a
traditional execution environment, the operating system maintains mappings of virtual memory to machine
memory using page tables, which is a one-stage mapping from virtual memory to machine memory. All modern x86
CPUs include a memory management unit (MMU) and a translation lookaside buffer (TLB) to optimize virtual
memory performance. However, in a virtual execution environment, virtual memory virtualization involves sharing
the physical system memory in RAM and dynamically allocating it to the physical memory of the VMs.
That means a two-stage mapping process should be maintained by the guest OS and the VMM, respectively: virtual
memory to physical memory and physical memory to machine memory. Furthermore, MMU virtualization should
be supported, which is transparent to the guest OS. The guest OS continues to control the mapping of virtual
addresses to the physical memory addresses of VMs. But the guest OS cannot directly access the actual machine
memory. The VMM is responsible for mapping the guest physical memory to the actual machine memory.

Since each page table of the guest OSes has a separate page table in the VMM corresponding to it, the VMM page
table is called the shadow page table. Nested page tables add another layer of indirection to virtual memory. The
MMU already handles virtual-to-physical translations as defined by the OS. Then the physical memory addresses
are translated to machine addresses using another set of page tables defined by the hypervisor. Since modern
operating systems maintain a set of page tables for every process, the shadow page tables will get flooded.
Consequently, the performance overhead and cost of memory will be very high.
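The two-stage mapping can be pictured with two small lookup tables: the guest OS's page table maps guest-virtual pages to guest-"physical" pages, and the VMM's table maps those to machine pages. The toy page numbers below are invented purely for illustration.

```python
# Toy model of two-stage memory mapping under virtualization.
# Stage 1 (guest OS page table): guest virtual page -> guest "physical" page.
# Stage 2 (VMM shadow/nested table): guest physical page -> machine page.
guest_page_table = {0x1: 0xA, 0x2: 0xB}     # maintained by the guest OS
vmm_page_table   = {0xA: 0x7F, 0xB: 0x80}   # maintained by the VMM

def translate(guest_virtual_page):
    guest_physical = guest_page_table[guest_virtual_page]   # stage 1
    machine_page   = vmm_page_table[guest_physical]         # stage 2
    return machine_page

print(hex(translate(0x1)))   # 0x7f -- the machine page backing guest page 0x1
```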

❖ I/O Virtualization: -
I/O virtualization involves managing the routing of I/O requests between virtual devices and the shared physical
hardware. At the time of this writing, there are three ways to implement I/O virtualization: full device emulation,
para-virtualization, and direct I/O. Full device emulation is the first approach for I/O virtualization. Generally, this
approach emulates well-known, real-world devices.

All the functions of a device or bus infrastructure, such as device enumeration, identification, interrupts, and DMA,
are replicated in software. This software is located in the VMM and acts as a virtual device. The I/O access requests
of the guest OS are trapped in the VMM, which interacts with the I/O devices. With this full device emulation
approach, a single hardware device can be shared by multiple VMs that run concurrently. However, software emulation runs
much slower than the hardware it emulates [10,15]. The para-virtualization method of I/O virtualization is typically
used in Xen. It is also known as the split driver model consisting of a frontend driver and a backend driver. The
frontend driver is running in Domain U and the backend driver is running in Domain 0. They interact with each
other via a block of shared memory. The frontend driver manages the I/O requests of the guest OSes and the
backend driver is responsible for managing the real I/O devices and multiplexing the I/O data of different VMs.
Although para-I/O-virtualization achieves better device performance than full device emulation, it comes with a
higher CPU overhead.
Direct I/O virtualization lets the VM access devices directly. It can achieve close-to-native performance without
high CPU costs. However, current direct I/O virtualization implementations focus on networking for mainframes.
There are a lot of challenges for commodity hardware devices. For example, when a physical device is reclaimed
(required by workload migration) for later reassignment, it may have been set to an arbitrary state (e.g., DMA to
some arbitrary memory locations) that can function incorrectly or even crash the whole system. Since software-
based I/O virtualization requires a very high overhead of device emulation, hardware-assisted I/O virtualization is
critical. Intel VT-d supports the remapping of I/O DMA transfers and device-generated interrupts. The architecture
of VT-d provides the flexibility to support multiple usage models that may run unmodified, special-purpose,
or “virtualization-aware” guest OSes.
Another way to help I/O virtualization is via self-virtualized I/O (SV-IO) [47]. The key idea of SV-IO is to harness
the rich resources of a multicore processor. All tasks associated with virtualizing an I/O device are encapsulated in
SV-IO. It provides virtual devices and an associated access API to VMs and a management API to the VMM. SV-IO
defines one virtual interface (VIF) for every kind of virtualized I/O device, such as virtual network interfaces, virtual
block devices (disk), virtual camera devices, and others. The guest OS interacts with the VIFs via VIF device
drivers. Each VIF consists of two message queues. One is for outgoing messages to the devices and the other is for
incoming messages from the devices. In addition, each VIF has a unique ID for identifying it in SV-IO.

❖ VIRTUAL CLUSTERS AND RESOURCE MANAGEMENT: -
A physical cluster is a collection of servers (physical machines) interconnected by a physical network such as a
LAN.
When a traditional VM is initialized, the administrator needs to manually write configuration information or specify
the configuration sources. When more VMs join a network, an inefficient configuration always causes problems
with overloading or underutilization. Amazon’s Elastic Compute Cloud (EC2) is a good example of a web service
that provides elastic computing power in a cloud. EC2 permits customers to create VMs and to manage user
accounts over the time of their use. Most virtualization platforms, including XenServer and VMware ESX Server,
support a bridging mode which allows all domains to appear on the network as individual hosts. By using this mode,
VMs can communicate with one another freely through the virtual network interface card and configure the network
automatically.
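For example, with EC2 a VM can be provisioned programmatically through the boto3 SDK, as in the hedged sketch below; the AMI ID, region, and instance type are placeholders, and valid AWS credentials are assumed.

```python
# Sketch: launch a single EC2 instance (a VM in Amazon's cloud) with boto3.
# The AMI ID and instance type are placeholders; credentials must be configured.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```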
1. Physical versus Virtual Clusters: -
Virtual clusters are built with VMs installed at distributed servers from one or more physical clusters. The VMs in a
virtual cluster are interconnected logically by a virtual network across several physical networks. Each virtual
cluster is formed with physical machines or VMs hosted by multiple physical clusters, and each virtual cluster has
its own distinct boundary.
The provisioning of VMs to a virtual cluster is done dynamically to have the following interesting properties:
• The virtual cluster nodes can be either physical or virtual machines. Multiple VMs running with different OSes
can be deployed on the same physical node.
• A VM runs with a guest OS, which is often different from the host OS that manages the resources of the physical
machine on which the VM is implemented.
• The purpose of using VMs is to consolidate multiple functionalities on the same server. This will greatly enhance
server utilization and application flexibility.

• VMs can be colonized (replicated) in multiple servers for the purpose of promoting distributed parallelism, fault
tolerance, and disaster recovery.
• The size (number of nodes) of a virtual cluster can grow or shrink dynamically, similar to the way an overlay
network varies in size in a peer-to-peer (P2P) network.
• The failure of any physical nodes may disable some VMs installed on the failing nodes. But the failure of VMs
will not pull down the host system.

Since system virtualization has been widely used, it is necessary to effectively manage VMs running on a mass of
physical computing nodes (also called virtual clusters) and consequently build a high-performance virtualized
computing environment. This involves virtual cluster deployment, monitoring and management over large-scale

clusters, as well as resource scheduling, load balancing, server consolidation, fault tolerance, and other techniques.
In a virtual cluster system, it is quite important to store the large number of VM images efficiently.

Nodes belonging to different virtual clusters may be hosted on the same physical cluster. As a large number of VM images
might be present, the most important issue is determining how to store those images in the system efficiently. There
are common installations for most users or applications, such as operating systems or user-level programming
libraries. These software packages can be preinstalled as templates (called template VMs). With these templates,
users can build their own software stacks. New OS instances can be copied from the template VM. User-specific
components such as programming libraries and applications can be installed to those instances.
Multiple virtual clusters can be created over the physical clusters. The physical machines are also called host
systems. In contrast, the VMs are guest systems. The host and guest systems may run with different operating
systems. Each VM can be installed on a remote server or replicated on multiple servers belonging to the same or
different physical clusters. The boundary of a virtual cluster can change as VM nodes are added, removed, or
migrated dynamically over time.

1.1 Fast Deployment and Effective Scheduling: -


The system should have the capability of fast deployment. Here, deployment means two things: to construct and
distribute software stacks (OS, libraries, applications) to a physical node inside clusters as fast as possible, and to
quickly switch runtime environments from one user’s virtual cluster to another user’s virtual cluster. If one user
finishes using his system, the corresponding virtual cluster should shut down or suspend quickly to save the
resources to run other VMs for other users.
The concept of “green computing” has attracted much attention recently. However, previous approaches have
focused on saving the energy cost of components in a single workstation without a global vision. Consequently, they
do not necessarily reduce the power consumption of the whole cluster. Other cluster-wide energy-efficient
techniques can only be applied to homogeneous workstations and specific applications. The live migration of VMs
allows workloads of one node to transfer to another node. However, it does not guarantee that VMs can randomly
migrate among themselves. In fact, the potential overhead caused by live migrations of VMs cannot be ignored.

1.2 High-Performance Virtual Storage: -


The template VM can be distributed to several physical hosts in the cluster to customize the VMs. In addition,
existing software packages reduce the time for customization as well as switching virtual environments. It is
important to efficiently manage the disk spaces occupied by template software packages. Some storage architecture
design can be applied to reduce duplicated blocks in a distributed file system of virtual clusters. Hash values are
used to compare the contents of data blocks. Users have their own profiles which store the identification of the data
blocks for corresponding VMs in a user-specific virtual cluster. New blocks are created when users modify the
corresponding data. Newly created blocks are identified in the users’ profiles.
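The hash-based storage scheme described above can be sketched as follows: blocks are deduplicated by content hash, and each user profile records which block hashes make up that user's VM image. The function and structure names are assumptions for illustration only, not a specific distributed file system design.

import hashlib

block_store = {}       # hash -> block contents, shared across the virtual cluster
user_profiles = {}     # user -> ordered list of block hashes for that user's VM image

def store_blocks(user, blocks):
    """Store a VM image as deduplicated blocks; identical blocks are kept only once."""
    profile = []
    for block in blocks:
        h = hashlib.sha256(block).hexdigest()    # hash value identifies the block contents
        block_store.setdefault(h, block)         # a new block is created only if the contents are new
        profile.append(h)
    user_profiles[user] = profile

def modify_block(user, index, new_block):
    """When a user modifies data, a new block is created and recorded in the profile."""
    h = hashlib.sha256(new_block).hexdigest()
    block_store.setdefault(h, new_block)
    user_profiles[user][index] = h

# Two users instantiating the same template VM share all unmodified blocks.
template = [b"os-block", b"libc-block", b"app-block"]
store_blocks("alice", template)
store_blocks("bob", template)
modify_block("bob", 2, b"custom-app-block")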

2. Live VM Migration Steps and Performance Effects: -
There are four ways to manage a virtual cluster. First, you can use a guest-based manager, by which the cluster
manager resides on a guest system. In this case, multiple VMs form a virtual cluster. For example, openMosix is an
open-source Linux cluster running different guest systems on top of the Xen hypervisor. Another example is Sun’s
cluster Oasis, an experimental Solaris cluster of VMs supported by a VMware VMM. Second, you can build a
cluster manager on the host systems. The host-based manager supervises the guest systems and can restart the guest
system on another physical machine. A good example is the VMware HA system that can restart a guest system
after failure.
These two cluster management systems are either guest-only or host-only, but they do not mix. A third way to
manage a virtual cluster is to use an independent cluster manager on both the host and guest systems. This will make
infrastructure management more complex, however. Finally, you can use an integrated cluster on the guest and host
systems. This means the manager must be designed to distinguish between virtualized resources and physical
resources. Various cluster management schemes can be greatly enhanced when VM live migration is enabled with
minimal overhead.

3. Migration of Memory, Files, and Network Resources: -


Since clusters have a high initial cost of ownership, including space, power conditioning, and cooling equipment,
leasing or sharing access to a common cluster is an attractive solution when demands vary over time. Shared clusters
offer economies of scale and more effective utilization of resources by multiplexing. Early configuration and
management systems focus on expressive and scalable mechanisms for defining clusters for specific types of
service, and physically partition cluster nodes among those types.

3.1 Memory Migration: -


This is one of the most important aspects of VM migration. Moving the memory instance of a VM from one
physical host to another can be approached in any number of ways. But traditionally, the concepts behind the
techniques tend to share common implementation paradigms. The techniques employed for this purpose depend
upon the characteristics of application/workloads supported by the guest OS.
Memory migration can be in a range of hundreds of megabytes to a few gigabytes in a typical system today, and it
needs to be done in an efficient manner. The Internet Suspend-Resume (ISR) technique exploits temporal locality as
memory states are likely to have considerable overlap in the suspended and the resumed instances of a VM.
Temporal locality refers to the fact that the memory states differ only by the amount of work done since a VM was
last suspended before being initiated for migration.
To exploit temporal locality, each file in the file system is represented as a tree of small subfiles. A copy of this tree
exists in both the suspended and resumed VM instances. The advantage of using a tree-based representation of files
is that the caching ensures the transmission of only those files which have been changed. The ISR technique deals
with situations where the migration of live machines is not a necessity. Predictably, the downtime (the period during
which the service is unavailable due to there being no currently executing instance of a VM) is high, compared to
some of the other techniques discussed later.
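The tree-of-subfiles idea can be sketched as below: both sites keep a hash per subfile, and only the subfiles whose hashes differ are transmitted. This is a minimal illustration of the caching behaviour described above under assumed file names, not the actual ISR implementation.

import hashlib

def subfile_hashes(tree):
    """tree maps subfile path -> bytes; return path -> content hash."""
    return {path: hashlib.sha256(data).hexdigest() for path, data in tree.items()}

def changed_subfiles(suspended_site, resume_site_cache):
    """Return the subfiles that must be transmitted to the resume site."""
    remote = subfile_hashes(resume_site_cache)
    return [path for path, h in subfile_hashes(suspended_site).items()
            if remote.get(path) != h]

# Only the one modified subfile is sent; the unchanged ones are served from the cache.
suspended = {"/mem/0": b"page-a", "/mem/1": b"page-b-modified"}
resume_cache = {"/mem/0": b"page-a", "/mem/1": b"page-b"}
print(changed_subfiles(suspended, resume_cache))   # ['/mem/1']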

3.2 File System Migration: -


To support VM migration, a system must provide each VM with a consistent, location-independent view of the file
system that is available on all hosts. A simple way to achieve this is to provide each VM with its own virtual disk
which the file system is mapped to and transport the contents of this virtual disk along with the other states of the
VM. However, due to the current trend of high-capacity disks, migration of the contents of an entire disk over a
network is not a viable solution. Another way is to have a global file system across all machines where a VM could
be located. This way removes the need to copy files from one machine to another because all files are network-
accessible.

3.3 Network Migration: -
A migrating VM should maintain all open network connections without relying on forwarding mechanisms on the
original host or on support from mobility or redirection mechanisms. To enable remote systems to locate and
communicate with a VM, each VM must be assigned a virtual IP address known to other entities. This address can
be distinct from the IP address of the host machine where the VM is currently located. Each VM can also have its
own distinct virtual MAC address. The VMM maintains a mapping of the virtual IP and MAC addresses to their
corresponding VMs. In general, a migrating VM includes all the protocol states and carries its IP address with it.
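A minimal sketch of the address mapping a VMM might keep is shown below; the table layout and the migrate call are assumptions used to illustrate how a VM carries its virtual IP and MAC address with it while only its physical location changes.

# Hypothetical VMM-side mapping of virtual addresses to VM locations.
vm_table = {
    "vm-42": {"virtual_ip": "10.0.0.42",
              "virtual_mac": "52:54:00:aa:bb:42",
              "current_host": "host-A"},
}

def migrate(vm_id, destination_host):
    """The VM keeps its virtual IP and MAC; only its physical location changes,
    so open network connections can continue without a forwarding mechanism."""
    vm_table[vm_id]["current_host"] = destination_host

migrate("vm-42", "host-B")
print(vm_table["vm-42"])   # same virtual IP/MAC, new host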

3.4 Live Migration of VM Using Xen: -


Xen is a VMM, or hypervisor, that allows multiple commodity OSes to share x86 hardware in a safe and orderly
fashion. The following example explains how to perform live migration of a VM between two Xen-enabled host
machines. Domain 0 (or Dom0) performs the management tasks to create, terminate, or migrate a guest VM to another
host. Xen uses a send/receive model to transfer states across VMs.
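As a hedged example, such a migration is typically driven from Dom0 on the source host with the xl toolstack. The domain name and destination host below are placeholders, the call assumes password-less SSH between the two Xen hosts, and the exact options can vary by Xen version.

import subprocess

# Issued from Dom0 on the source host: migrate guest "guest-vm" to host "host-B".
subprocess.run(["xl", "migrate", "guest-vm", "host-B"], check=True)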

4. Dynamic Deployment of Virtual Clusters: -


Several research systems have explored dynamic deployment of virtual clusters; we briefly introduce them here just to
identify their design objectives and reported results. The Cellular Disco at Stanford is a virtual cluster built in a
shared-memory multiprocessor system. The INRIA virtual cluster was built to test parallel algorithm performance.
Other examples include the COD and VIOLIN clusters.

Server Virtualization

Server Virtualization is the process of dividing a physical server into several virtual servers, called virtual private
servers. Each virtual private server can run independently.
The concept of server virtualization is widely used in IT infrastructure to minimize costs by increasing the
utilization of existing resources.

❖ Types of Server Virtualization: -


1. Hypervisor: -
In the Server Virtualization, Hypervisor plays an important role. It is a layer between the operating system (OS)
and hardware. There are two types of hypervisors.
• Type 1 hypervisor (also known as bare metal or native hypervisors)
• Type 2 hypervisor (also known as hosted or Embedded hypervisors)
The hypervisor is mainly used to perform various tasks such as allocating physical hardware resources (CPU, RAM,
etc.) to several smaller independent virtual machines, called "guests," on the host machine.

2. Full Virtualization: -
Full Virtualization uses a hypervisor to directly communicate with the CPU and physical server. It provides the best
isolation and security mechanism to the virtual machines.
The biggest disadvantage of using hypervisor in full virtualization is that a hypervisor has its own processing needs,
so it can slow down the application and server performance.
VMware ESX Server is the best example of full virtualization.

3. Para Virtualization: -
Para Virtualization is quite similar to the Full Virtualization. The advantage of using this virtualization is that it
is easier to use, Enhanced performance, and does not require emulation overhead. Xen primarily and UML use
the Para Virtualization.
The difference between full and pare virtualization is that, in para virtualization hypervisor does not need too much
processing power to manage the OS.

4. Operating System Virtualization: -


Operating system virtualization is also called system-level virtualization. It is a server virtualization
technology that divides one operating system into multiple isolated user spaces called virtual environments. The
biggest advantage of this form of virtualization is that it reduces the use of physical space, so it saves money.
Linux OS virtualization and Windows OS virtualization are the types of operating system virtualization.
FreeVPS, OpenVZ, and Linux-VServer are some examples of system-level virtualization.

5. Hardware Assisted Virtualization: -


Hardware Assisted Virtualization was presented by AMD and Intel. It is also known as Hardware
virtualization, AMD virtualization, and Intel virtualization. It is designed to increase the performance of the
processor. The advantage of using Hardware Assisted Virtualization is that it requires less hypervisor overhead.

6. Kernel-Level Virtualization: -
Kernel-level virtualization is one of the most important types of server virtualization. It is an open-source
virtualization which uses the Linux kernel as a hypervisor. The advantage of using kernel virtualization is that it
does not require any special administrative software and has very little overhead.
User Mode Linux (UML) and Kernel-based virtual machine are some examples of kernel virtualization.

❖ Advantages of Server Virtualization: -


There are the following advantages of Server Virtualization -
1. Independent Restart
In server virtualization, each server can be restarted independently without affecting the working of other
virtual servers.
2. Low Cost
Server Virtualization can divide a single server into multiple virtual private servers, so it reduces the cost of
hardware components.
3. Disaster Recovery
Disaster Recovery is one of the best advantages of Server Virtualization. In Server Virtualization, data can
easily and quickly move from one server to another and these data can be stored and retrieved from anywhere.
4. Faster deployment of resources
Server virtualization allows us to deploy our resources in a simpler and faster way.
5. Security
It allows users to store their sensitive data inside data centers.

❖ Disadvantages of Server Virtualization: -


There are the following disadvantages of Server Virtualization -
1. The biggest disadvantage of server virtualization is that when the server goes offline, all the websites
that are hosted by the server will also go down.
2. It is difficult to measure the performance of virtualized environments.
3. It requires a huge amount of RAM consumption.
4. It is difficult to set up and maintain.
5. Some core applications and databases do not support virtualization.
6. It requires extra hardware resources.

❖ Uses of Server Virtualization: -


A list of uses of server virtualization is given below -
• Server Virtualization is used in the testing and development environment.
• It improves the availability of servers.
• It allows organizations to make efficient use of resources.
• It reduces redundancy without purchasing additional hardware components.

What is Desktop Virtualization?

Desktop virtualization is a method of simulating a user workstation so it can be accessed from a remotely connected
device. By abstracting the user desktop in this way, organizations can allow users to work from virtually anywhere
with a network connection, using any desktop, laptop, tablet, or smartphone to access enterprise resources without
regard to the device or operating system employed by the remote user.
Remote desktop virtualization is also a key component of digital workspaces. Virtual desktop workloads run on
desktop virtualization servers, which typically execute on virtual machines (VMs) either at on-premises data centers
or in the public cloud.
Since the user device is basically a display, keyboard, and mouse, a lost or stolen device presents a reduced risk to
the organization. All user data and programs exist on the desktop virtualization server, not on client devices.

❖ What are the benefits of Desktop Virtualization?


1. Resource Utilization: Since IT resources for desktop virtualization are concentrated in a data center,
resources are pooled for efficiency. The need to push OS and application updates to end-user devices is
eliminated, and virtually any desktop, laptop, tablet, or smartphone can be used to access virtualized
desktop applications. IT organizations can thus deploy less powerful and less expensive client devices
since they are basically only used for input and output.
2. Remote Workforce Enablement: Since each virtual desktop resides in central servers, new user desktops
can be provisioned in minutes and become instantly available for new users to access. Additionally IT
support resources can focus on issues on the virtualization servers with little regard to the actual end-user
device being used to access the virtual desktop. Finally, since all applications are served to the client over
a network, users have the ability to access their business applications virtually anywhere there is internet
connectivity. If a user leaves the organization, the resources that were used for their virtual desktop can
then be returned to centrally pooled infrastructure.
3. Security: IT professionals rate security as their biggest challenge year after year. By removing OS and
application concerns from user devices, desktop virtualization enables centralized security control, with
hardware security needs limited to virtualization servers, and an emphasis on identity and access
management with role-based permissions that limit users only to those applications and data they are
authorized to access. Additionally, if an employee leaves an organization there is no need to remove
applications and data from user devices; any data on the user device is ephemeral by design and does not
persist when a virtual desktop session ends.

❖ How does Desktop Virtualization work?


Remote desktop virtualization is typically based on a client/server model, where the organization’s chosen
operating system and applications run on a server located either in the cloud or in a data center. In this model all
interactions with users occur on a local device of the user’s choosing, reminiscent of the so-called ‘dumb’
terminals popular on mainframes and early Unix systems.

❖ What are the types of Desktop Virtualization?

The three most popular types of desktop virtualization are Virtual desktop infrastructure (VDI), Remote desktop
services (RDS), and Desktop-as-a-Service (DaaS).

VDI simulates the familiar desktop computing model as virtual desktop sessions that run on VMs either in an on-
premises data center or in the cloud. Organizations that adopt this model manage the desktop virtualization
server as they would any other application server on-premises. Since all end-user computing is moved from
users back into the data center, the initial deployment of servers to run VDI sessions can be a considerable
investment, tempered by eliminating the need to constantly refresh end-user devices.

RDS is often used where only a limited number of applications need to be virtualized, rather than a full Windows, Mac,
or Linux desktop. In this model applications are streamed to the local device which runs its own OS. Because
only applications are virtualized RDS systems can offer a higher density of users per VM.

DaaS shifts the burden of providing desktop virtualization to service providers, which greatly alleviates the IT
burden in providing virtual desktops. Organizations that wish to move IT expenses from capital expense to
operational expenses will appreciate the predictable monthly costs that DaaS providers base their business
model on.

❖ Desktop Virtualization vs. Server Virtualization: -


In server virtualization, a server OS and its applications are abstracted into a VM from the underlying hardware
by a hypervisor. Multiple VMs can run on a single server, each with its own server OS, applications, and all the
application dependencies required to execute as if it were running on bare metal.
Desktop virtualization abstracts client software (OS and applications) from a physical thin client which connects
to applications and data remotely, typically via the internet. This abstraction enables users to utilize any number
of devices to access their virtual desktop. Desktop virtualization can greatly increase an organization’s need for
bandwidth, depending on the number of concurrent users during peak.

❖ Desktop Virtualization vs. App Virtualization: -


Application virtualization insulates executing programs from the underlying device, whereas desktop
virtualization abstracts the entire desktop – OS and applications – which is then accessible by virtually any
client device.
Application virtualization simplifies the installation of each individual application, which is installed once on a
server and then virtualized to the various end-user devices that it executes on. Client devices are sent a packaged,
pre-configured executable which eases deployment.
A virtualized application exists as a single instance on the application server, so maintenance is greatly
simplified. Only one instance needs be updated. If an application is retired, deleting it from the application
server will also delete it from all users wherever they are. Further, since virtualized applications are packaged in
their own ‘containers’ they cannot interact with each other or cause other applications to fail. Finally, since
virtualized applications are independent of the underlying device OS, they can be used on any endpoint,
whether Windows, iOS or Linux/Android.
However, application virtualization is not for every application. Compute- and graphics-intensive applications
can suffer from slowdowns that cause visible lag during rendering, and a solid broadband connection is
necessary to deliver a user experience comparable to local device applications.

Network Virtualization in Cloud Computing

Network Virtualization is a process of logically grouping physical networks and making them operate as single or
multiple independent networks called Virtual Networks.

❖ Tools for Network Virtualization: -


1. Physical switch OS –
The operating system of the physical switch must provide network virtualization functionality.
2. Hypervisor –
The hypervisor uses built-in networking or third-party software to provide the functionalities of network
virtualization.
The basic functionality of the OS is to provide the application or the executing process with a simple set of
instructions. System calls that are generated by the OS and executed through the libc library are comparable to
the service primitives given at the interface between the application and the network through the SAP (Service
Access Point).
The hypervisor is used to create a virtual switch and configure virtual networks on it. The third-party software
is installed onto the hypervisor, where it replaces the hypervisor's native networking functionality. A hypervisor
allows us to have various VMs all working optimally on a single piece of computer hardware.

❖ Functions of Network Virtualization:


• It enables the functional grouping of nodes in a virtual network.
• It enables the virtual network to share network resources.
• It allows communication between nodes in a virtual network without routing of frames.
• It restricts management traffic.
• It enforces routing for communication between virtual networks.

❖ Network Virtualization in Virtual Data Center: -


1. Physical Network: -
• Physical components: Network adapters, switches, bridges, repeaters, routers and hubs.
• Grants connectivity among physical servers running a hypervisor, between physical servers and
storage systems and between physical servers and clients.
2. VM Network: -
• Consists of virtual switches.
• Provides connectivity to hypervisor kernel.
• Connects to the physical network.
• Resides inside the physical server.

❖ Advantages of Network Virtualization: -
1. Improves manageability –
• Grouping and regrouping of nodes are eased.
• Configuration of VM is allowed from a centralized management workstation using management
software.
2. Reduces CAPEX –
• The requirement to set up separate physical networks for different node groups is reduced.
3. Improves utilization –
• Multiple VMs are enabled to share the same physical network which enhances the utilization of
network resource.
4. Enhances performance –
• Network broadcast is restricted and VM performance is improved.
5. Enhances security –
• Sensitive data is isolated from one VM to another VM.
• Access to nodes is restricted in a VM from another VM.

❖ Disadvantages of Network Virtualization: -


• IT resources must be managed at an abstract (virtual) level rather than as physical devices.
• Virtual networks need to coexist with physical devices in a cloud-integrated hybrid environment.
• Increased complexity.
• Upfront cost.
• Possible learning curve.

❖ Applications of Network Virtualization:


• Network virtualization may be used in application testing and development to mimic real-world
hardware and system software.
• It helps us to integrate several physical networks into a single network or separate a single physical
network into multiple logical networks.
• In the field of application performance engineering, network virtualization allows the simulation of
connections between applications, services, dependencies, and end-users for software testing.
• It helps us to deploy applications in a quicker time frame, thereby supporting a faster go-to-market.
• Network virtualization helps the software testing teams to derive actual results with expected instances
and congestion issues in a networked environment.

Data Center

Data center virtualization is the process of creating a modern data center that is highly scalable, available and secure.
With data center virtualization products you can increase IT agility and create a seamless foundation to manage
private and public cloud services alongside traditional on-premises infrastructure.

❖ Benefits of Data Center Virtualization with VMware: -


1. Modernize for Cloud: -
Support future evolution with a consistent software stack on-prem that can expand into the public cloud and
edge.
2. Eliminate Silos: -
Leverage existing investments for use in new cloud environments while eliminating vertical infrastructure
silos.
3. Operate Efficiently: -
Reduce TCO with automated performance management, optimized capacity utilization, proactive planning
and reduced mean time to resolution (MTTR).

❖ VMware Products for Data Center Virtualization: -


Modern data centers are fully virtualized, software defined and highly automated, providing consistent
infrastructure and application delivery across a hybrid cloud environment. You can begin your virtual data
center journey with server virtualization. The next steps involve adding storage and network virtualization,
moving toward a fully virtualized software-defined data center architecture. This means virtualized compute,
storage, networking, security and management all on one consistent foundation.

1. Server Virtualization: -
Ensure a common operating environment across hybrid cloud with the industry-leading virtualization
platform. Use existing virtualization tools to ensure consistent operations from edge to core to cloud.

2. Network and Security Virtualization: -


Connect and protect applications across your data center and multi-cloud, regardless of where your
applications run. Ensure intrinsic security, with data protection at rest and in-flight.

3. Private & Hybrid Cloud: -


Integrate cloud infrastructure and management services that are consistent and secure for private and public
cloud.

4. Private & Hybrid Cloud: -


Get simple, secure and scalable infrastructure delivered as a service to data center and edge locations.

5. Storage Virtualization and Availability: -


Extend virtualization to storage with a secure, integrated hyperconverged solution that is flexible and multi-
cloud ready. Manage, compute, and storage within a single platform to improve business agility.

6. Cloud Management: -
Build an agile, efficient cloud infrastructure with application-focused management. vCloud Suite includes
vSphere for compute virtualization plus the complete VMware Aria Suite for automated multi-cloud
management.

7. Private & Hybrid Cloud: -


Deliver a seamless hybrid cloud by extending your on-premises vSphere environment to the AWS Cloud.

UNIT – 4
What is cloud security?

Cloud security is the set of control-based security measures and technology protections designed to protect online
stored resources from leakage, theft, and data loss. Protection covers data, cloud infrastructure, and applications
against threats. Security applications are delivered as software, following the same model as SaaS (Software as a
Service).

❖ How to manage security in the cloud?


Cloud service providers have many methods to protect the data.
Firewall is the central part of cloud architecture. The firewall protects the network and the perimeter of end-
users. It also protects traffic between various apps stored in the cloud.
Access control protects data by allowing us to set access lists for various assets. For example, you can allow
specific employees to access an application while restricting others, following the rule that employees can access
only the equipment they require. Maintaining strict access control helps keep essential documents from being
stolen by malicious insiders or hackers.
Data protection methods include Virtual Private Networks (VPNs), encryption, and masking. A VPN allows remote
employees to connect to the network and accommodates tablets and smartphones for remote access. Data
masking maintains the data's integrity by keeping identifiable information private; for example, a medical company
can share masked data without violating HIPAA.
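As a small, hedged illustration of data masking, the sketch below hides identifiable fields in a record before it is shared. The field names and masking rule are assumptions for illustration, not a certified HIPAA procedure.

def mask_record(record, sensitive_fields=("name", "ssn", "phone")):
    """Return a copy of the record with identifiable fields masked,
    so the data can be shared while keeping identities private."""
    masked = dict(record)
    for field_name in sensitive_fields:
        if field_name in masked and masked[field_name]:
            value = str(masked[field_name])
            masked[field_name] = "*" * (len(value) - 2) + value[-2:]   # keep only the last 2 characters
    return masked

patient = {"name": "Jane Doe", "ssn": "123-45-6789", "diagnosis": "flu"}
print(mask_record(patient))   # {'name': '******oe', 'ssn': '*********89', 'diagnosis': 'flu'}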
Threat intelligence spots security threats and ranks them in order of importance, which helps to protect
mission-critical assets from threats. Disaster recovery is also vital for security because it helps to recover lost
or stolen data.
❖ Benefits of Cloud Security System: -
We understand how the cloud computing security operates to find ways to benefit your business.
Cloud-based security systems benefit the business by:
• Protecting the business from dangers
• Protecting against internal threats
• Preventing data loss
• Blocking malware and ransomware attacks, which are among the top threats to the system and pose a
severe threat to businesses

❖ Difference between Cloud Security and Traditional IT Security: -

• Cloud security is quickly scalable; traditional IT security scales slowly.
• Cloud security offers efficient resource utilization; traditional IT security has lower efficiency.
• Cloud security has usage-based costs; traditional IT security has higher costs.
• Cloud security relies on third-party data centres; traditional IT security uses in-house data centres.
• Cloud security reduces time to market; traditional IT security has a longer time to market.
• Cloud security has low upfront infrastructure costs; traditional IT security has high upfront costs.

Cloud Security Services

In comparison to on-premises network security, there are a number of benefits to using a Security-as-a-Service
solution. One of the major benefits is lower cost, because the service eliminates the capital expenditure and the
maintenance purchases made on an individual or subscription basis. Apart from this main benefit, Security-as-a-Service
is rapid to deploy, demands less maintenance cost, and supports mobile users too. If the cloud vendors satisfy the
SLAs (service-level agreements), these types of cloud security services are more than enough to replace some of the
on-premises security apps.

❖ Services: -
• Identity and Access Management – Business network admins have to maintain cloud identity
management services to create, handle, and delete the role-based identities, enforce strong passwords, and
prefer the use of biometric technologies. A cloud security services provider should render a simplified
platform from where it becomes easier for administrators to manage their responsibilities.
• Intrusion Detection and Prevention – A cybersecurity service provider should be capable of detecting
threats on its own. An advanced intrusion prevention and detection system enables administrators to perform
network traffic inspection, manual or automated responses to intrusions, and behavioral analyses of
employees, because employees are the main source of internal threats.
• Email Security Measures – When it comes to cloud security services, it is mandatory to have email
security policies already embedded in them. Enterprises have to make sure that this feature is provided by
the shortlisted service provider. If not, reject the security vendor's proposal, because email security is one
of the basic aspects of cyber protection.
• Security Information & Event Management (SIEM) – Online apps lend themselves to monitoring and auditing
procedures, and these features are core to SIEM. SIEM works on the events and security data collected
from traditional IT security systems (such as anti-malware and IDP), network systems, and management systems.
Administrators must ensure that the log file data meets particular regulatory and compliance requirements
at the time of shifting data to the cloud.

Design principles

1. Implement a strong identity foundation: Implement the principle of least privilege and enforce
separation of duties with appropriate authorization for each interaction with your AWS resources.
Centralize identity management and aim to eliminate reliance on long-term static credentials.
2. Enable traceability: Monitor, alert, and audit actions and changes to your environment in real time.
Integrate log and metric collection with systems to automatically investigate and take action.
3. Apply security at all layers: Apply a defense in depth approach with multiple security controls. Apply to
all layers (for example, edge of network, VPC, load balancing, every instance and compute service,
operating system, application, and code).
4. Automate security best practices: Automated software-based security mechanisms improve your ability
to securely scale more rapidly and cost-effectively. Create secure architectures, including the
implementation of controls that are defined and managed as code in version-controlled templates.
5. Protect data in transit and at rest: Classify your data into sensitivity levels and use mechanisms such as
encryption, tokenization, and access control where appropriate (see the sketch after this list).
6. Keep people away from data: Use mechanisms and tools to reduce or eliminate the need for direct access
or manual processing of data. This reduces the risk of mishandling or modification and human error when
handling sensitive data.
7. Prepare for security events: Prepare for an incident by having incident management and investigation
policy and processes that align to your organizational requirements. Run incident response simulations and
use tools with automation to increase your speed for detection, investigation, and recovery.
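As a hedged sketch of principle 5, the example below uses the cryptography library's Fernet recipe to encrypt a classified record at rest. Key handling is simplified for illustration; in practice the key would live in a managed key service rather than in code.

from cryptography.fernet import Fernet

# In practice the key would be held by a key-management service, not generated in code.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b"customer-id=981; sensitivity=confidential"
ciphertext = cipher.encrypt(record)        # what actually gets written to storage
plaintext = cipher.decrypt(ciphertext)     # only holders of the key can read the record
assert plaintext == record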

Policies and Procedures to Mitigate Risk

In our experience, the best solution starts with a paradigm shift for most businesses we work with. Security, often
treated as something added on to a tech stack after the fact, must be the top priority; companies that build
applications with security front and center from the code level upward are able to adapt to the dynamic security
needs that are inherent in a connected world. Here are six methods to get started.

1. Embrace Security as Code: -


Security as Code seems obvious, but it is so often overlooked because of the mischaracterization of what security
actually is. DevOps professionals use it as a building block for compliance automation. Testing, validation, and
gating are all part of the application. One might not think of API validation or integration and unit testing as
security, but they absolutely are.

2. Implement automated policy enforcement: -


While establishing a security-first culture is a significant step, it’s not enough to assume all workers will follow
requirements. Any live system should ensure automated policy enforcement restricts individuals from making
unauthorized changes. This starts by establishing a central policy store that holds all organizational protocols related
to audits, regulations, frameworks, standards, and requirements. The central policy store fuels an engine that uses
this information to parse the system and ensure compliance.
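A central policy store driving automated checks can be sketched as below; the policy format and resource fields are invented purely for illustration and do not correspond to any particular compliance engine.

# Hypothetical central policy store: each rule names a resource attribute
# and the value it must have to be compliant.
POLICY_STORE = [
    {"attribute": "encryption_at_rest", "required": True},
    {"attribute": "public_access", "required": False},
]

def check_compliance(resource):
    """Parse a resource against every stored policy and report the violated rules."""
    return [rule for rule in POLICY_STORE
            if resource.get(rule["attribute"]) != rule["required"]]

bucket = {"name": "reports", "encryption_at_rest": True, "public_access": True}
violations = check_compliance(bucket)
if violations:
    # An enforcement engine would block or revert the unauthorized change here.
    print("non-compliant:", [v["attribute"] for v in violations])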

3. Use multifactor authentication: -


Despite all the warnings from security officers over the years, “password” remains on the top ten list of most-used
passwords. Hackers load popular options—123456, qwerty, 11111, etc.—into programs that relentlessly attempt to
gain access to a system until they inevitably do. While security experts can set password standards, it only takes one
weak secret to jeopardize an entire organization. Multifactor authentication, however, eliminates this; adding
another checkpoint before individuals can access a program is as extraordinarily effective as it is simple.
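To illustrate that second checkpoint, the sketch below uses the pyotp library for time-based one-time passwords (TOTP). The password check is only a stand-in flag, and this is a minimal sketch rather than a complete authentication flow.

import pyotp

# Enrolment: the user stores this shared secret in an authenticator app.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

def login(password_ok: bool, submitted_code: str) -> bool:
    """A login succeeds only if both the password and the current one-time code check out."""
    return password_ok and totp.verify(submitted_code)

# Even a correct password is rejected without the current one-time code.
print(login(True, "000000"))      # almost certainly False
print(login(True, totp.now()))    # True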

4. Leverage API management: -


An API management program incorporates vulnerability intelligence, threat detection, access control, and analytics
to monitor an API and increase its resilience. Extra attention at gateways is ideal, as the best opportunities to catch
bad actors occur when they enter the system—and there is always an entrance. Administrators have a second chance
to see issues as users exit, so a client registry is a useful component for tracing the root causes of problems.

5. Follow a zero-trust model: -


All users should have the minimum level of access needed to complete their necessary tasks. If you were to have
someone come to your home to repair the dishwasher, would you also give them the keys to your safe? Strict control
of those with privileged access, whether a repairman in your home or a developer at your company, is not personal;
it is simply common sense. A zero-trust model cuts weak links in the security chain by requiring users to prove their
need and their identities before gaining access.

6. Deploy centralized cloud management: -


Incorporating management, services, and infrastructure in a single place is necessary for monitoring a network of
any size. The cloud management platform (CMP) is the command center for the business to enhance efficiency and
control security across assets. Going back to our example of a home, imagine if every light had its own circuit
breaker that needed to be connected manually whenever someone wanted to turn it on. Adding new lights and
debugging problems would not only be inefficient but also extraordinarily costly to scale. Additionally,
standardizing how a light was connected would be impossible; one would need an intimate familiarity with the
wiring of each light to be able to use it.

Security Challenges

1. Misconfiguration: -
Cloud computing is a popular way to access resources remotely and save on costs. However, cloud security
threats can easily arise if your cloud resources are not configured correctly. Misconfiguration is the top cloud
security challenge, so users must appropriately protect their data and applications in the cloud. To avoid
this security threat, users must ensure that their data is protected and their applications are configured correctly.
This can be accomplished using a cloud storage service that offers security features such as encryption or
access control.

2. Unauthorized Access: -
Unauthorized access to data is one of the top cloud security challenges businesses face. The cloud provides
a convenient way for businesses to store and access data, which can make data vulnerable to cyber threats.
Cloud security breaches can include unauthorized access to user data, theft of data, and malware attacks. To
protect their data from these types of threats, businesses must ensure that only authorized users have access
to it.
Another security measure businesses can implement is encrypting sensitive data in the cloud. This will help ensure
that only authorized users can access it.

3. Hijacking of Accounts: -
Hijacking of user accounts is one of the major cloud security hacks. Using cloud-based applications and
services will increase the risk of account hijacking. As a result, users must be vigilant about protecting their
passwords and other confidential information to stay secure in the cloud.
Users can protect themselves using strong passwords, security questions, and two-factor authentication to
access their accounts. They can also monitor their account activity and take steps to protect themselves
from unauthorized access or usage. This will help ensure that hackers cannot access their data or hijack
their accounts.

4. Lack of Visibility: -
Cloud computing has made it easier for businesses to access and store their data online, but this
convenience comes with risks. As a result, companies need to protect their data from unauthorized access
and theft. But cloud computing also poses security threats due to its reliance on remote servers. In order to
ensure that their systems are accessible only to authorized sources, businesses must implement security
measures such as strong authentication, data loss prevention (DLP), data breach detection, and data breach
response.

5. Data Privacy/Confidentiality: -
Data privacy and confidentiality are critical issues when it comes to cloud computing. With cloud
computing, businesses can access their data from anywhere in the world, which raises security concerns.
Companies don’t have control over who has access to their data, so they must ensure that only authorized
users can access it. Data breaches can happen when hackers gain access to company data. In coming years,
there will be even more data privacy and confidentiality issues due to the rise of big data and the increased
use of cloud computing in business.

6. External Sharing of Data: -


External data sharing is one of the leading cloud security challenges businesses face. This issue arises when
data is shared with third-party providers who have not been properly vetted and approved by the organization. As a
result, external data sharing can lead to the loss of critical business information, theft, and fraud. To
prevent these risks, companies must implement robust security measures, such as encryption, and data
management practices. In addition, it will help ensure that sensitive data remains secure and confidential.

7. Legal and Regulatory Compliance: -


A cloud is a powerful tool that can help organizations reduce costs and improve the efficiency of their
operations. However, cloud computing presents new security challenges that must be addressed to protect
data and ensure compliance with legal and regulatory requirements.
Organizations must ensure data security and comply with legal and regulatory requirements to ensure the
safety and integrity of their cloud-based systems. Cyber threats such as malware, data breach, and phishing
are just a few of the challenges organizations face when using cloud computing.

8. Unsecure Third-party Resources: -


Third-party resources are applications, websites, and services outside the cloud provider’s control. These
resources may have security vulnerabilities, and unauthorized access to your data is possible. Additionally,
unsecured third-party resources may allow hackers to access your cloud data. These vulnerabilities can put
your security at risk. Therefore, it is essential to ensure that only trusted and secure resources are used for
cloud computing. In addition, it will help ensure that only authorized individuals access data and reduce the
risk of unauthorized data loss or breach.

Cloud Computing Security Architecture

Security in cloud computing is a major concern. Proxy and brokerage services should be employed to restrict a
client from accessing the shared data directly. Data in the cloud should be stored in encrypted form.
❖ Security Planning: -
Before deploying a particular resource to the cloud, one should need to analyze several aspects of the resource,
such as:
• Select the resource that needs to move to the cloud and analyze its sensitivity to risk.
• Consider cloud service models such as IaaS, PaaS, and SaaS. These models require the customer to be
responsible for security at different service levels.
• Consider the cloud type, such as public, private, community, or hybrid.
• Understand the cloud service provider's system regarding data storage and its transfer into and out of the
cloud.
• The risk in cloud deployment mainly depends upon the service models and cloud types.

❖ Security Boundaries: -
The Cloud Security Alliance (CSA) stack model defines the boundaries between each service model and
shows how different functional units relate. A particular service model defines the boundary between the
service provider's responsibilities and the customer. The following diagram shows the CSA stack model:

❖ Key Points to CSA Model: -
• IaaS is the most basic level of service, with PaaS and SaaS being the next two levels of service above it.
• Moving upwards, each service inherits the capabilities and security concerns of the model beneath.
• IaaS provides the infrastructure, PaaS provides the platform development environment, and SaaS provides
the operating environment.
• IaaS has the lowest integrated functionality and security level, while SaaS has the highest.
• This model describes the security boundaries at which cloud service providers' responsibilities end and
customers' responsibilities begin.
• Any protection mechanism below the security boundary must be built into the system and maintained by the
customer.

❖ Understanding data security: -


Since all data is transferred using the Internet, data security in the cloud is a major concern. Here are the key
mechanisms to protect the data.
• access control
• auditing
• authentication
• authorization

Issues in Cloud Computing

Cloud computing is a new name for an old concept: the delivery of computing services from a remote location.
Cloud computing is Internet-based computing, where shared resources, software, and information are provided to
computers and other devices on demand.
These are major issues in Cloud Computing:
1. Privacy: The user data can be accessed by the host company with or without permission. The service provider
may access the data that is on the cloud at any point in time. They could accidentally or deliberately alter or even
delete information.
2. Compliance: There are many regulations in places related to data and hosting. To comply with regulations
(Federal Information Security Management Act, Health Insurance Portability and Accountability Act, etc.) the
user may have to adopt deployment modes that are expensive.
3. Security: Cloud-based services involve third-party for storage and security. Can one assume that a cloud-based
company will protect and secure one’s data if one is using their services at a very low or for free? They may share
users’ information with others. Security presents a real threat to the cloud.
4. Sustainability: This issue refers to minimizing the effect of cloud computing on the environment.
5. Abuse: While providing cloud services, it should be ascertained that the client is not purchasing the services of
cloud computing for a nefarious purpose.
6. Higher Cost: If you want to use cloud services without interruption, you need a powerful network with
higher bandwidth than ordinary internet connections, and if your organization is large, an ordinary
cloud service subscription won't suit it. Otherwise, you might face hassles in utilizing an ordinary
cloud service while working on complex projects and applications. This is a major problem for small
organizations and restricts them from diving into cloud technology for their business.
7. Recovery of lost data in contingency: Before subscribing to any cloud service provider, go through all the norms
and documentation and check whether their services match your requirements and whether they have a sufficient,
well-maintained resource infrastructure with proper upkeep. Once you subscribe to the service, you almost hand over
your data into the hands of a third party. If you are able to choose a proper cloud service, then in the future you
won't need to worry about the recovery of lost data in any contingency.
8. Upkeep (management) of the cloud: Maintaining a cloud is a herculean task because a cloud architecture
contains a large resource infrastructure along with other challenges and risks, such as user satisfaction. Users
usually pay for how much of the resources they have consumed, so it sometimes becomes hard to decide how much
should be charged when a user wants scalability and extended services.
9. Lack of resources/skilled expertise: One of the major issues that companies and enterprises are going through
today is the lack of resources and skilled employees. Every second organization seems interested or has
already moved to cloud services. That's why the workload in the cloud is increasing, so cloud service
hosting companies need continuous, rapid advancement. Due to these factors, organizations are having a tough
time keeping up to date with the tools. As new tools and technologies emerge every day, more
skilled/trained employees are needed. These challenges can only be minimized through additional training of
IT and development staff.
10. Pay-per-use service charges: Cloud computing services are on-demand services; a user can extend or
compress the volume of resources as per their needs, and pays for how much of the resources they have consumed. It
is difficult to define a certain pre-defined cost for a particular quantity of services. Such ups and downs
and price variations make the implementation of cloud computing very difficult and intricate. It is not easy for a
firm's owner to study consistent demand and fluctuations with the seasons and various events. So it is hard to
build a budget for a service that could consume several months of the budget in a few days of heavy use.

Business Continuity Within the Cloud Provider

When you deploy assets into the cloud, you can’t assume the cloud will always be there, or always work the way
you expect. A key point is that the very nature of virtualizing resources into pools typically creates less resiliency for
any single asset, like a virtual machine. On the other hand, abstracting resources and managing everything through
software opens up flexibility to more easily enable resiliency features like durable storage.
There is a huge range of options here, and not all providers or platforms are created equal, but you shouldn’t assume
that “the cloud” as a general term is more or less resilient than traditional infrastructure. This is why it is typically
best to re-architect deployments when you migrate them to the cloud.

❖ Some points to keep in mind: -


• Understand and leverage the platform’s BC/DR features before adding on any additional capabilities
through third-party tools.
• BC/DR must account for the entire logical stack, including meta-structure, infrastructure, infostructure, and
applistructure.
• When real-time switching isn’t possible, design your application to gracefully fail in case of a service
outage. There are many automation techniques to support this.
• Downtime is always an option. You don’t always need perfect availability, but if you do plan to accept an
outage, you should at least ensure you fail gracefully, with emergency downtime notification pages and
responses.

❖ Business Continuity for Loss of the Cloud Provider: -


It’s always possible that an entire cloud provider, or at least a major portion of its infrastructure, can go down.
Depending on the history of your provider, and their internal availability capabilities, accepting this risk is often
a legitimate option. Downtime may be another option, but it depends on your RTOs. Be wary of selecting a
secondary provider or service if said service may also be located or reliant on the same provider.
SaaS may often be the biggest provider outage concern, due to total reliance on the provider. Scheduled data
extraction and archiving may be your only BC option outside of accepting downtime. Extracting and archiving
to another cloud service, especially IaaS/PaaS, may be a better option than moving it to local/on-premises
storage.

❖ Business Continuity for Private Cloud: -


This is completely on the provider’s shoulders. RTOs and RPOs should be stringent, since if the cloud goes
down, everything goes down.
If you are providing services to others, be aware of contractual requirements, including data residency, when
building your BC plans. For example, failing over to a different geography in a different legal jurisdiction may
violate contracts or local laws.

Mitigate Cloud Security Risk

❖ What Is Cloud Security?


Cloud security refers to the collection of applications, practices, and policies created to protect your cloud
infrastructure and all of your data. Today, your business’ cloud systems are more susceptible to hacking and
threats than ever, putting the valuable data stored within them at risk.
Unfortunately, there is no foolproof way to eliminate all cloud security risks, but you can mitigate
them. Common examples of these risks include:
• Poor visibility of your network
• Ineffective protection against malware
• Inadequate security policy compliance
• Failure to conduct due diligence on third parties
• Insecure business application interfaces
Failing to mitigate and address the risks you find in your system can lead to severe consequences for your
business. This may include the loss of customer trust – especially considering the data you store and process
will likely be their confidential and private information. Nothing will tank a customer relationship faster than a
data security breach or loss of valuable data.

❖ Here are 5 ways to mitigate the risks associated with cloud security and protect your
most valuable data:
1. Assess the Risks in Your System:
The first way to assure the security of your cloud’s infrastructure is to consider assessing the risks specific to your
system. More specifically, this means performing a cybersecurity risk assessment on your system. To perform a
cybersecurity assessment, consider the following steps:
• Scope: You certainly have the option to assess your entire organization’s infrastructure to gain a
comprehensive view of your level of cybersecurity risk. On the other hand, you can also opt to prioritize
the indispensable components of your organization first instead of scanning your system in its entirety.
• Identify: The second step is to identify all of your assets, such as hardware and software systems, and note
the possible threats to each of them. Not only will this give you an overview of what assets may be at risk,
but it will also give you an idea of the problems that you can prepare for.
• Analyze: Third, you’ll need to determine how likely an incident or threat may occur. Furthermore, this
means understanding how it may impact your organization. This will help you figure out which assets to
prioritize for protection.
• Evaluate: Fourth, you evaluate alternative solutions to avoid or mitigate cybersecurity risks and select the
best options that fit your needs.
• Document: Lastly, it is important to document all of the risks you identified in the previous steps and your
approaches to mitigating them. You can also look into how others are dealing with potential risks and
compare them to the preventive measures you have put in place.
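The Analyze and Evaluate steps above are often implemented as a simple risk register that scores each asset/threat
pair by likelihood and impact. Below is a minimal sketch in Python; the assets, threats, and scores are hypothetical
illustrations, not recommendations.

```python
# Minimal risk-register sketch: risk score = likelihood x impact on a 1-5 scale.
# All assets, threats, and scores below are hypothetical illustrations.

risks = [
    # (asset, threat, likelihood 1-5, impact 1-5)
    ("customer database", "credential theft", 4, 5),
    ("public web API", "injection attack", 3, 4),
    ("employee laptops", "malware infection", 3, 3),
]

def score(likelihood: int, impact: int) -> int:
    return likelihood * impact

# Rank risks so the highest-scoring assets are protected first (Analyze/Evaluate).
ranked = sorted(risks, key=lambda r: score(r[2], r[3]), reverse=True)

for asset, threat, likelihood, impact in ranked:
    print(f"{asset:18} | {threat:18} | risk score = {score(likelihood, impact)}")
```

The printed ranking is the kind of artifact you would keep in the Document step and revisit as threats change.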

2. Monitor Third Parties:


Another effective way to mitigate cloud security risks is to monitor third parties that have access to your
infrastructure, regardless of whether they have full or limited access. There are several ways to ensure that service
providers are secure and reliable, such as:
• Shared responsibility model: This refers to a shared responsibility for risk mitigation between clients and
cloud providers. In this arrangement, the service provider is responsible for security of the cloud, while
the client is responsible for security in the cloud.
• Due diligence: To ensure that third parties don’t endanger your business’s cloud infrastructure,
consider conducting thorough due diligence on them before working with them. Part of this is also
knowing what their previous clients have to say about their services and performance.
• Validation or certification: Similar to the previous point, it’s also essential that you see to it that your
vendors are always validated and have the necessary certification. This can help ensure they have the
substantial knowledge, skills, and expertise to assist you in securing your cloud infrastructure.

3. Train Your Employees:
The next way to mitigate cloud security risk in your organization is to train your employees, set up security risk
mitigation policies, and enforce these. This approach helps ensure your staff is well informed and familiar with
various attacks that may infiltrate your cloud infrastructure. And with that in mind, your internal personnel can act
as a line of defence in keeping your system secure from being hijacked by external forces such as cybercriminals.

4. Set Up a Strong Security System:


Another surefire way to mitigate cloud security risk would be to set up a strong security system. But instead of only
relying on a strong password system, consider having a solid network monitoring system and backing up and
encrypting your data:
• Monitoring System: A network monitoring system means monitoring both the outbound and inbound traffic
to your network systems. Doing so enables you to see if there are any intrusion attempts into your system.
And from there, you can take immediate action before any more damage can be done. It will also help you
identify any leaks made by any rogue employees.
• Encrypting Your Data: Consider always keeping backups of your data and encrypting those backups (a
minimal sketch follows this list). This can help you mitigate the aftermath of data loss caused by breaches.
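As a concrete illustration of the backup-and-encrypt advice above, the sketch below encrypts backup contents before
they are copied to cloud storage. It assumes the third-party cryptography package is installed; the backup contents
and file name are placeholders.

```python
# Sketch: encrypt a backup before uploading it to cloud storage.
# Assumes `pip install cryptography`; backup contents and file names are placeholders.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # store this key in a secrets manager, never beside the backup
cipher = Fernet(key)

backup_bytes = b"-- hypothetical database dump --"   # stand-in for the real backup file's contents
ciphertext = cipher.encrypt(backup_bytes)

with open("backup.sql.enc", "wb") as f:
    f.write(ciphertext)              # this encrypted file is what gets uploaded to the cloud bucket

# Restore path: decrypting with the same key recovers the original bytes.
assert Fernet(key).decrypt(ciphertext) == backup_bytes
```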

5. Prepare for Incidents and Breaches:


Lastly, consider having an incident response plan to help you mitigate cloud security risks better and more
efficiently. Breaches are never impossible, no matter how advanced your systems are. Therefore, an incident
response plan (IRP) is worth having. An IRP sets out your organization’s protocols for responding if any cyber
incident occurs.

Cloud Security Threats

In general, the features that make cloud services easily accessible to employees and IT systems also make it difficult
for organizations to prevent unauthorized access. However, the security challenges introduced by cloud services
have not slowed the adoption of cloud computing or the decline of on-premises data centers. As a result,
organizations of all sizes need to rethink their network security protocols to mitigate the risk of unauthorized data
transfers, service disruptions and reputational damage.
Cloud services expose organizations to new security threats related to authentication and public APIs. Sophisticated
hackers use their expertise to target cloud systems and gain access. Hackers employ social engineering, account
takeover, lateral movement and detection-evasion tactics to maintain a long-term presence on the victim
organization’s network, often using the built-in tools of the cloud services themselves. Their goal is to transfer
sensitive information to systems under their control.

❖ Common Cloud Security Threats: -


Cloud services have transformed the way businesses store data and host applications while introducing new
security challenges.
1. Identity, authentication and access management – This includes the failure to use multi-factor
authentication, misconfigured access points, weak passwords, lack of scalable identity management
systems, and a lack of ongoing automated rotation of cryptographic keys, passwords and certificates.
2. Vulnerable public APIs – From authentication and access control to encryption and activity
monitoring, application programming interfaces must be designed to protect against both accidental
and malicious attempts to access sensitive data.
3. Account takeover – Attackers may try to eavesdrop on user activities and transactions, manipulate
data, return falsified information and redirect users to illegitimate sites.
4. Malicious insiders – A current or former employee or contractor with authorized access to an
organization’s network, systems or data may intentionally misuse the access in a manner that leads to a
data breach or affects the availability of the organization’s information systems.
5. Data sharing – Many cloud services are designed to make data sharing easy across organizations,
increasing the attack surface area for hackers who now have more targets available to access critical
data.
6. Denial-of-service attacks – The disruption of cloud infrastructure can affect multiple organizations
simultaneously and allow hackers to harm businesses without gaining access to their cloud services
accounts or internal network.

❖ Cloud Attack Lifecycle: -
Attackers have two avenues of attack to compromise cloud resources:
1. The first is through traditional means, which involves accessing systems inside the enterprise network
perimeter, followed by reconnaissance and privilege escalation to an administrative account that has
access to cloud resources.
2. The second involves bypassing all the above by simply compromising credentials from an
administrator account that has administrative capabilities or has cloud services provider (CSP)
administrative access.
When a main administrative account is compromised, it is far more detrimental to the security of the cloud
network. With access to an administrative account, the attacker does not need to escalate privileges or maintain
access to the enterprise network because the main administrative account can do all that and more.
This poses the question: How can the organization properly monitor misuse of CSP administrative privileges?
It is no longer enough to identify a suspicious login attempt to protect your cloud network. Modern,
sophisticated hackers are able to access an account through social engineering exploits such as phishing. It is
now essential to monitor the behaviour of accounts that are already logged in and detect any suspicious
activity.

Service level agreements

A Service Level Agreement (SLA) is the bond for the performance negotiated between a cloud service
provider and a client. Earlier in cloud computing, all service level agreements were negotiated between a customer
and the service provider. With the introduction of large utilities such as cloud computing providers, most service
level agreements are standardized until a customer becomes a large consumer of cloud services. Service level
agreements are also defined at different levels, which are mentioned below:
• Customer-based SLA
• Service-based SLA
• Multilevel SLA
Some service level agreements are enforceable as contracts, but most are agreements that are more in
line with an operating level agreement (OLA) and may not be constrained by law. It is wise to have a lawyer review
the documents before making any major agreement with a cloud service provider. Service level agreements usually
specify certain parameters, which are mentioned below:
• Availability of the Service (uptime)
• Latency or the response time
• Service component reliability
• Accountability of each party
• Warranties
If a cloud service provider fails to meet the specified minimum targets, the provider will have to pay a penalty
to the cloud service consumer as per the agreement. So, service level agreements are like insurance policies in which
the corporation has to pay as per the agreement if an accident occurs.

❖ What to look for in a cloud SLA: -


The cloud SLA should outline each party's responsibilities, acceptable performance parameters, a description of the
applications and services covered under the agreement, procedures for monitoring service levels, and a program for
remediation of outages. SLAs typically use technical definitions to measure service levels, such as mean time
between failures (MTBF) or mean time to repair (MTTR), which specify targets or minimum values for service-
level performance.
The defined levels of service must be specific and measurable so that they can be benchmarked and, if stipulated by
contract, trigger rewards or penalties accordingly.

Depending on the cloud model you choose, you can control much of the management of IT assets and services or let
cloud providers manage it for you.
A typical compute and cloud SLA expresses the exact levels of service and recourse or compensation that the User
is entitled to in case the Provider fails to provide the Service. Another important area is service availability, which
specifies the maximum time a read request can take, how many retries are allowed, and other factors.
The cloud SLA should also define compensation for users if the specifications are not met. A cloud service provider
typically offers a tiered service credit plan that gives credit to users based on the discrepancy between the SLA
specifications and the actual service tiers.
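To make the tiered-credit idea concrete, here is a small sketch that computes monthly uptime and maps it to a
service-credit percentage. The tier boundaries and credit values are hypothetical; real providers publish their own
credit tables in the SLA itself.

```python
# Hypothetical tiered service-credit calculation based on monthly uptime.

def monthly_uptime_percent(downtime_minutes: float, days_in_month: int = 30) -> float:
    total_minutes = days_in_month * 24 * 60
    return 100.0 * (total_minutes - downtime_minutes) / total_minutes

def service_credit_percent(uptime: float) -> int:
    # Illustrative tiers only -- check your provider's actual SLA table.
    if uptime >= 99.9:
        return 0
    if uptime >= 99.0:
        return 10
    if uptime >= 95.0:
        return 25
    return 100

uptime = monthly_uptime_percent(downtime_minutes=90)   # e.g. 90 minutes of downtime in the month
print(f"uptime = {uptime:.3f}%, credit = {service_credit_percent(uptime)}% of the monthly bill")
```

With these illustrative tiers, 90 minutes of downtime in a 30-day month gives roughly 99.79% uptime and a 10%
credit.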

UNIT – 5
AWS Tutorial

This AWS tutorial provides basic and advanced concepts and is designed for beginners and professionals.
AWS stands for Amazon Web Services, which uses distributed IT infrastructure to provide different IT resources on
demand.
The tutorial covers topics such as an introduction, the history of AWS, global infrastructure, features of AWS,
IAM, storage services, database services, etc.
❖ What is AWS?
• AWS stands for Amazon Web Services.
• The AWS service is provided by Amazon and uses distributed IT infrastructure to provide different IT
resources available on demand. It provides different services such as infrastructure as a service (IaaS),
platform as a service (PaaS) and packaged software as a service (SaaS).
• Amazon launched AWS, a cloud computing platform, to allow different organizations to take advantage
of reliable IT infrastructure.
❖ Uses of AWS: -
• A small manufacturing organization can use its expertise to expand its business by leaving its IT
management to AWS.
• A large enterprise spread across the globe can utilize AWS to deliver training to its distributed
workforce.
• An architecture consulting company can use AWS for high-compute rendering of construction
prototypes.
• A media company can use AWS to deliver different types of content, such as e-books or audio files, to
users worldwide.
❖ Pay-As-You-Go: -
AWS provides its services to customers on a Pay-As-You-Go basis, when required, without any prior commitment
or upfront investment. Pay-As-You-Go enables customers to procure services such as the following from AWS as
and when needed (an illustrative provisioning sketch follows the list):
• Computing
• Programming models
• Database storage
• Networking
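As an illustration of the on-demand model, the sketch below provisions a single compute instance with the AWS
SDK for Python (boto3) and terminates it when it is no longer needed, so charges stop accruing. It assumes boto3 is
installed and AWS credentials are configured; the region, AMI ID, and instance type are placeholders.

```python
# Pay-as-you-go sketch: start a compute instance on demand, then terminate it.
# Assumes `pip install boto3` and configured AWS credentials; IDs below are placeholders.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
instance = instances[0]
instance.wait_until_running()
print("Running:", instance.id)

# ...use the instance for the workload...

instance.terminate()                   # billing for the instance stops once it is terminated
instance.wait_until_terminated()
```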

❖ Advantages of AWS: -
1) Flexibility: -
• We can get more time for core business tasks due to the instant availability of new features and services in
AWS.
• It provides effortless hosting of legacy applications. AWS does not require learning new technologies, and
migrating applications to AWS provides advanced computing and efficient storage.
• AWS also offers the choice of whether to run applications and services together in the cloud, or to run a
part of the IT infrastructure in AWS and the remaining part in on-premises data centres.

2) Cost-effectiveness: -

AWS requires no upfront investment or long-term commitment and only minimal expense compared to traditional
IT infrastructure, which requires a huge investment.

3) Scalability/Elasticity: -
Through AWS auto scaling and elastic load balancing, resources are automatically scaled up or down when demand
increases or decreases respectively. These techniques make AWS ideal for handling unpredictable or very high
loads. For this reason, organizations enjoy the benefits of reduced cost and increased user satisfaction.

4) Security: -

• AWS provides end-to-end security and privacy to customers.


• AWS has a virtual infrastructure that offers optimum availability while managing full privacy and isolation
of their operations.
• Customers can expect a high level of physical security because of Amazon's several years of experience in
designing, developing and maintaining large-scale IT operation centers.
• AWS ensures the three aspects of security, i.e., Confidentiality, integrity, and availability of user's data.

Google App Engine (GAE)

Google Cloud Platform: - Google Cloud Platform (GCP), offered by Google, is a suite of cloud computing
services that runs on the same infrastructure that Google uses internally for its end-user products, such as Google
Search, file storage, and YouTube.
Google App Engine: - It is a platform for hosting web applications in Google-managed data centres. It is a cloud
computing technology that virtualizes applications across multiple servers and data centres.

❖ Key features: -
1. Popular programming languages: - Build your application in Node.js, Java, Ruby, C#, Go, Python, or
PHP, or bring your own language runtime. Google App Engine accepts a wide diversity of
programming languages.
2. Open & Flexible: - Custom runtimes in Google App Engine allow you to bring any library and
framework to App Engine by supplying a Docker container.
3. Fully Managed: - Google App Engine provides a fully managed environment that lets you focus
on code while App Engine manages the infrastructure concerns.

A scalable runtime environment, Google App Engine is mostly used to run web applications. These applications
scale dynamically as demand changes over time, thanks to Google’s vast computing infrastructure. Because it offers
a secure execution environment in addition to a number of services, App Engine makes it easier to develop scalable
and high-performance web apps. Applications on App Engine scale up and down in response to shifting demand.
The App Engine SDK facilitates the testing and refinement of applications by emulating the production
runtime environment and allowing developers to design and test applications on their own PCs. When an
application is finished, developers can quickly migrate it to App Engine, put in place quotas to
control the cost that is generated, and make the application available to everyone. Python, Java, and Go are
among the languages that are currently supported.
The development and hosting platform Google App Engine, which powers anything from web applications for
huge enterprises to mobile apps, uses the same infrastructure as Google’s large-scale internet services. It is a fully
managed PaaS (platform as a service) cloud computing platform that uses built-in services to run your apps. You
can start creating almost immediately after receiving the software development kit (SDK). You may immediately
access the Google app developer’s manual once you’ve chosen the language you wish to use to build your app.
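For illustration, a minimal App Engine standard-environment application in Python can be as small as the sketch
below: a Flask handler in main.py plus a one-line app.yaml. Flask is assumed to be listed in requirements.txt, and
the runtime version shown is only an example; pick a currently supported one.

```python
# main.py -- minimal sketch of a Google App Engine (standard environment) web app.
# Assumes Flask is listed in requirements.txt; deploy with `gcloud app deploy`.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello from App Engine!"

# app.yaml (separate file) can be as short as:
#   runtime: python39        # example runtime version; choose a currently supported one

if __name__ == "__main__":
    # Local testing only; in production App Engine serves the `app` object directly.
    app.run(host="127.0.0.1", port=8080, debug=True)
```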

❖ Features of App Engine: -


1. Runtimes and Languages:
To create an application for App Engine, you can use Go, Java, PHP, or Python. You can develop and
test an app locally using the SDK’s deployment toolkit. Each language has its own SDK and runtime.
Your program is run in a:
• Java Run Time Environment version 7
• Python Run Time Environment version 2.7
• PHP runtime’s PHP 5.4 environment
• Go runtime 1.2 environment
2. Generally Usable Features:
These are covered by the service-level agreement and deprecation policy of App Engine. The
implementation of such a feature is often stable, and any changes made to it are backward-compatible.
These include communications, process management, computing, data storage, retrieval, and search, as
well as app configuration and management. Features like the HRD migration tool, Google Cloud SQL,
logs, datastore, dedicated memcache, blobstore, memcache, and search are included in the categories
of data storage, retrieval, and search.
3. Features in Preview:
In a later iteration of the app engine, these functions will undoubtedly be made broadly accessible.
However, because they are in the preview, their implementation may change in ways that are backward-
incompatible. Sockets, MapReduce, and the Google Cloud Storage Client Library are a few of them.
4. Experimental Features:
These might or might not be made broadly accessible in later App Engine updates, and they may be
changed in backward-incompatible ways. The “trusted tester” features, however, are only
accessible to a limited user base and require registration in order to use them. The experimental
features include Prospective Search, PageSpeed, OpenID, OAuth, Datastore Admin/Backup/Restore,
Task Queue Tagging, MapReduce, the Task Queue REST API, and app metrics analytics.
5. Third-Party Services:
Because Google provides documentation and helper libraries to extend the capabilities of the App
Engine platform, your app can perform tasks that are not built into the core product. To do this, Google
collaborates with other organizations. Along with the helper libraries, the partners frequently provide
exclusive deals to App Engine users.

❖ Advantages of Google App Engine: -
The Google App Engine has a lot of benefits that can help you advance your app ideas. These include:

1. Infrastructure for Security: The Internet infrastructure that Google uses is arguably among the safest in
the world. Since the application data and code are hosted on extremely secure servers, there has rarely
been any kind of unauthorized access to date.
2. Faster Time to Market: For every organization, getting a product or service to market quickly is
crucial. When it comes to quickly releasing the product, encouraging the development and maintenance
of an app is essential. A firm can grow swiftly with Google Cloud App Engine’s assistance.
3. Quick to Start: You don’t need to spend a lot of time prototyping or deploying the app to users because
there is no hardware or product to buy and maintain.
4. Easy to Use: The tools that you need to create, test, launch, and update the applications are included in
Google App Engine (GAE).
5. Rich set of APIs & Services: A number of built-in APIs and services in Google App Engine enable
developers to create strong, feature-rich apps.
6. Scalability: This is one of the deciding variables for the success of any software. When using the
Google app engine to construct apps, you may access technologies like GFS, Big Table, and others that
Google uses to build its own apps.
7. Performance and Reliability: Among international brands, Google ranks among the top ones.
Therefore, you must bear that in mind while talking about performance and reliability.
8. Cost Savings: To administer your servers, you don’t need to employ engineers or even do it yourself.
The money you save might be put toward developing other areas of your company.
9. Platform Independence: Since the app engine platform only has a few dependencies, you can easily
relocate all of your data to another environment.

Microsoft Azure Tutorial

Microsoft Azure is a growing set of cloud computing services created by Microsoft that hosts your existing
applications, streamlines the development of new applications, and also enhances your on-premises applications. It
helps organizations in building, testing, deploying, and managing applications and services through Microsoft-
managed data centers.

❖ Azure Services: -
• Compute services: It includes the Microsoft Azure Cloud Services, Azure Virtual Machines, Azure
Website, and Azure Mobile Services, which processes the data on the cloud with the help of powerful
processors.
• Data services: This service is used to store data over the cloud that can be scaled according to the
requirements. It includes Microsoft Azure Storage (Blob, Queue Table, and Azure File services), Azure
SQL Database, and the Redis Cache.
• Application services: It includes services, which help us to build and operate our application, like the
Azure Active Directory, Service Bus for connecting distributed systems, HDInsight for processing big data,
the Azure Scheduler, and the Azure Media Services.
• Network services: It helps you to connect with the cloud and on-premises infrastructure, which includes
Virtual Networks, Azure Content Delivery Network, and the Azure Traffic Manager.

❖ How Azure works: -


It is essential to understand the internal workings of Azure so that we can design our applications on Azure effectively
with high availability, data residency, resilience, etc.

Microsoft Azure is completely based on the concept of virtualization. So, similar to other virtualized data centers, it
also contains racks. Each rack has a separate power unit and network switch, and each rack is integrated with
software called the Fabric Controller. The Fabric Controller is a distributed application that is responsible for
managing and monitoring the servers within the rack. If any server fails, the Fabric Controller recognizes it and
recovers it. Each of these Fabric Controllers is, in turn, connected to a piece of software called the Orchestrator.
The Orchestrator includes web services and a REST API to create, update, and delete resources.

When a request is made by the user, either using PowerShell or the Azure portal, it first goes to the Orchestrator,
which fundamentally does three things (a conceptual sketch follows the list):
1. Authenticate the User
2. It will Authorize the user, i.e., it will check whether the user is allowed to do the requested task.
3. It will look into the database for the availability of space based on the resources and pass the request to an
appropriate Azure Fabric controller to execute the request.
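The following is a purely conceptual sketch of the three-step flow just described; it illustrates how an
orchestrator-style component might authenticate, authorize, and dispatch a request to a fabric controller. It is not
Microsoft's actual Orchestrator or Fabric Controller code, and all names and values are hypothetical.

```python
# Conceptual sketch only -- it illustrates the authenticate/authorize/dispatch flow
# described above, not Microsoft's actual Orchestrator or Fabric Controller implementation.

FABRIC_CONTROLLERS = {"rack-1": lambda action, spec: f"rack-1 executed {action} for {spec}"}
USERS = {"token-abc": "alice"}                 # token -> user (stand-in for authentication)
PERMISSIONS = {("alice", "create_vm")}         # allowed (user, action) pairs (stand-in for authorization)
CAPACITY = {"rack-1": 10}                      # rack -> free slots (stand-in for the capacity database)

def handle_request(token, action, spec):
    user = USERS.get(token)                    # 1. authenticate the user
    if user is None:
        raise PermissionError("authentication failed")
    if (user, action) not in PERMISSIONS:      # 2. authorize the requested task
        raise PermissionError(f"{user} may not perform {action}")
    rack = next(r for r, free in CAPACITY.items() if free > 0)   # 3. find available space
    return FABRIC_CONTROLLERS[rack](action, spec)                #    and dispatch to its fabric controller

print(handle_request("token-abc", "create_vm", {"size": "Standard_B1s"}))
```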

Combinations of racks form a cluster. We have multiple clusters within a data center, and we can have multiple Data
Centers within an Availability zone, multiple Availability zones within a Region, and multiple Regions within a
Geography.
• Geographies: A geography is a discrete market, typically containing two or more regions, that preserves
data residency and compliance boundaries.
• Azure regions: A region is a collection of data centers deployed within a defined perimeter and
interconnected through a dedicated regional low-latency network.
Azure covers more global regions than any other cloud provider, which offers the scalability needed to bring
applications and users closer together around the world. It is globally available in more than 50 regions. Due to its
availability over many regions, it helps in preserving data residency and offers comprehensive compliance and
flexible options to customers.
• Availability Zones: These are physically separate locations within an Azure region. Each one of them
is made up of one or more data centers with independent power, cooling, and networking.

Aneka

Aneka includes an extensible set of APIs associated with programming models like MapReduce.
These APIs support different cloud models such as private, public, and hybrid clouds.
Manjrasoft focuses on creating innovative software technologies to simplify the development and deployment of
private or public cloud applications. Its product, Aneka, plays the role of an application platform as a service for
cloud computing.
• Aneka can be described in several ways:
• Aneka is a software platform for developing cloud computing applications.
• In Aneka, cloud applications are executed.
• Aneka is a pure PaaS solution for cloud computing.
• Aneka is a cloud middleware product.
• Aneka can be deployed over a network of computers, a multicore server, a data center, a virtual cloud
infrastructure, or a combination thereof.

❖ Services hosted in the Aneka container can be classified into three major categories:


1. Fabric Services:
Fabric Services define the lowest level of the software stack that represents the Aneka container. They provide
access to the resource-provisioning subsystem and to the monitoring features implemented in Aneka, and they
define the infrastructure management features of the system.
2. Foundation Services:
Foundation Services are the core services of the Aneka cloud. They are concerned with the logical management
of a distributed system built on top of the infrastructure and provide ancillary services for delivering
applications.
3. Application Services:
Application Services manage the execution of applications and constitute a layer that varies according to the
specific programming model used to develop distributed applications on top of Aneka.

❖ Architecture of Aneka: -

Aneka is a platform and framework for developing distributed applications on the Cloud. It harnesses desktop PCs
and their spare CPU cycles on demand, in addition to a heterogeneous network of servers or data centers. Aneka
provides a rich set of APIs for developers to transparently exploit such resources and express the business logic of
applications using preferred programming abstractions.
System administrators can leverage a collection of tools to monitor and control the deployed infrastructure. It can be
a public cloud available to anyone via the Internet or a private cloud formed by nodes with restricted access.
An Aneka-based computing cloud is a collection of physical and virtualized resources connected via a network,
either the Internet or a private intranet. Each resource hosts an instance of the Aneka container, which represents the
runtime environment where distributed applications are executed. The container provides the basic management
features of a single node and takes advantage of all the other functions through the services it hosts.
Services are divided into fabric, foundation, and execution services. Foundation services identify the core system
of the Aneka middleware, which provides a set of infrastructure features enabling Aneka containers to perform
specialized tasks. Fabric services interact directly with nodes through the Platform Abstraction Layer (PAL) and
perform hardware profiling and dynamic resource provisioning. Execution services deal directly with scheduling
and executing applications in the Cloud.
One of the key features of Aneka is its ability to provide a variety of ways to express distributed applications by
offering different programming models; execution services are mostly concerned with providing the middleware
with the implementation of these models. Additional services such as persistence and security are transversal to the
whole stack of services hosted by the container.

Cloud Platform

The meaning of a cloud platform can be stated in many ways, but in the simplest terms the operating system and
hardware of a server in an Internet-based data centre are referred to as a cloud platform. It enables software and
hardware products to coexist remotely and at large scale.
Businesses rent computing facilities such as servers, databases, storage, analytics, networking, applications, and
intelligence. As a result, they do not need to invest in data centres or computing facilities of their own; they simply
pay for the services they use.

❖ Types of Cloud Platforms: -


Cloud systems come in a range of shapes and sizes, and no single one is suitable for everyone. To meet the varying
needs of consumers, a range of models, forms, and services are available. They are as follows:
1. Public Cloud: Third-party providers that deliver computing services over the Internet are known as
public cloud platforms. A few good examples of trending and widely used cloud platforms are Google
Cloud Platform, AWS (Amazon Web Services), Microsoft Azure, Alibaba Cloud and IBM Bluemix.
2. Private Cloud: A private cloud is normally hosted by a third-party service provider or in an on-site data
centre. A private cloud platform is always dedicated to a single company and it is the key difference
between the public and private cloud.
Or we can say that a private cloud is a series of cloud computing services used primarily by one corporation
or organization.
3. Hybrid Cloud: The type of cloud architecture that combines both public and private cloud systems is
referred to as a hybrid cloud platform. Data and programs are easily migrated from one to the other. This
allows the company to be more flexible while still improving infrastructure, security, and compliance.
Organizations can use a cloud platform to develop cloud-native software, test and build applications, and store,
back up, and recover data. A cloud platform not only helps the company grow but also supports data analysis using
different algorithms, and the results can make a real difference to the business.
Streaming video and audio, embedding intelligence into operations, and providing applications on demand on a
global scale are all possibilities.
Simply stated, cloud computing is the distribution of computing services over the Internet ("the cloud") in order
to provide quicker innovation, more versatile resources, and economies of scale.
We usually only pay for the cloud services that we use, which helps us to cut costs, operate our infrastructure
more effectively, and scale as our company grows.

Protein Structure Prediction

Cloud computing is an emerging technology that provides various computing services on demand. It provides
convenient access to a shared pool of higher-level services and other system resources. Nowadays, cloud
computing has a great significance in the fields of geology, biology, and other scientific research areas.
Protein structure prediction is a good example of a research area that makes use of cloud applications for its
computation and storage.
A protein is composed of long chains of amino acids joined together by peptide bonds. The various structures of a
protein help in the design of new drugs, and predicting the three-dimensional structure of a protein from its
amino-acid sequence is known as protein structure prediction.
First the primary structure of a protein is determined, and then the secondary, tertiary and quaternary structures are
predicted from it. In this way predictions of protein structures are made. Protein structure prediction also makes use
of various other technologies such as artificial neural networks, artificial intelligence, machine learning and
probabilistic techniques, and it holds great importance in fields like theoretical chemistry and bioinformatics.
Various algorithms and tools exist for protein structure prediction. CASP (Critical Assessment of Protein Structure
Prediction) is a well-known community experiment that assesses prediction methods, including automated web
servers, and the results of such research work are placed on cloud-based servers such as CAMEO (Continuous
Automated Model Evaluation). These servers can be accessed by anyone, as per their requirements, from any place.
Some of the tools or servers used in protein structure prediction are Phobius, FoldX, LOMETS, Prime,
PredictProtein, SignalP, BBSP, EVfold, Biskit, HHpred, Phyre, and ESyPred3D. Using these tools, new structures
are predicted and the results are placed on cloud-based servers.

Cloud Computing and Data Analytics

Cloud Computing: Cloud computing is a technique in which a network of remote servers is hosted on the
Internet. These servers primarily store, manage, and process data, rather than a local server or a personal computer
doing so.
In short, the data is kept on the internet, eliminating the need for a local physical server.
Cloud computing does not depend on data analytics for anything.
Data Analytics: Data analysis is defined as a process in which data is inspected, cleaned, transformed, and
modelled. The primary aim of data analytics is to discover useful information, so that conclusions can be drawn
which further help in the decision-making process of a company.
In data analytics, data is measured and estimated from big data sources.
Data storage is done on the cloud, and data analytics involves the extraction of that data. Thus, data analytics
depends on cloud computing for data extraction.
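As a small illustration of analytics drawing its data from cloud storage, the sketch below reads a dataset directly
from an object-storage bucket and summarizes it with pandas. It assumes the pandas and s3fs packages are
installed; the bucket path and column names are placeholders.

```python
# Sketch: data analytics pulling its input from cloud object storage.
# Assumes `pip install pandas s3fs`; the bucket path and columns are placeholders.
import pandas as pd

# pandas can read straight from an S3-style URL when s3fs is installed.
df = pd.read_csv("s3://example-bucket/sales/2024.csv")   # hypothetical dataset stored in the cloud

# Inspect, clean, and summarize -- the core steps of data analysis.
df = df.dropna()
print(df.describe())
print(df.groupby("region")["revenue"].sum())             # hypothetical column names
```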

❖ Differences between Cloud Computing and Data Analytics:

1. Definition: Cloud computing is about data storage and retrieval from any place at any time; data analytics is a
process where data is inspected, cleaned, transformed and modelled.
2. Dependency: Cloud computing is independent of data analytics; data analytics is dependent on cloud computing.
3. Focus: Cloud computing offers solutions for data-intensive computing and does not focus on a particular
organization; data analytics works on the improvement of a particular organization.
4. Building blocks: Cloud computing involves SaaS (Software as a Service), PaaS (Platform as a Service) and IaaS
(Infrastructure as a Service); data analytics involves Python, SAS, Apache Spark, etc.
5. Categories and technologies: Cloud computing is further categorized as public cloud, private cloud, community
cloud and hybrid cloud; in data analytics, big data technologies such as Hadoop, MapReduce and HDFS are
important.
6. Providers: Cloud computing providers include Google, Amazon Web Services, Microsoft, Dell, Apple and IBM;
data analytics providers include Cloudera, Hortonworks, Apache and MapR.
7. Cost: Cloud computing is less costly; data analytics is costlier.
8. Roles: Roles related to cloud computing are cloud resource administrator, cloud service provider, cloud consumer,
cloud auditor, etc.; roles related to data analytics are data developer, data administrator, data analyst, data
scientist, etc.

Satellite Image Processing

Satellite image processing is an important field in research and development. It deals with images of the Earth taken
by means of artificial satellites. The photographs are taken in digital form and later processed by computers to
extract their information. Statistical methods are applied to the digital images, and after processing, the various
discrete surfaces are identified by analysing the pixel values.
Satellite imagery is widely used to plan infrastructure, to monitor environmental conditions, and to detect the
responses to upcoming disasters.
In broader terms, we can say that satellite image processing is a kind of remote sensing that works on pixel
resolutions to collect coherent information about the earth's surface.
Majorly there are four kinds of resolution associated with satellite imagery. These are:
• Spatial resolution –
Determined by the sensor's Instantaneous Field of View (IFoV), it is defined as the pixel size of
an image as measured on the ground. Because it describes the sensor's resolving power, i.e. its
ability to separate ground features, it is termed spatial resolution.
• Spectral resolution –
This resolution measures the wavelength interval size and determines the number of wavelength
intervals that the sensor measures.
• Temporal resolution –
The word temporal is associated with time, and temporal resolution is defined as the time that
passes between successive acquisitions of imagery of the same area (the revisit period).
• Radiometric resolution –
This resolution describes the actual information content of the image and is generally expressed in
bits. It gives the effective bit depth and records the various levels of brightness of the imaging
system (a small worked example follows this list).
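A small worked example for the last two resolutions: a radiometric resolution of n bits gives 2^n distinguishable
brightness levels, and temporal resolution is simply the revisit interval between successive acquisitions of the same
area. The numbers below are illustrative, not taken from a specific satellite.

```python
# Illustrative resolution arithmetic (values are examples, not from a specific satellite).

radiometric_bits = 8
brightness_levels = 2 ** radiometric_bits          # 8-bit imagery -> 256 brightness levels
print(f"{radiometric_bits}-bit radiometric resolution = {brightness_levels} brightness levels")

# Temporal resolution: days between successive images of the same ground area.
from datetime import date
revisit_days = (date(2024, 3, 17) - date(2024, 3, 1)).days
print(f"temporal resolution = {revisit_days} days between acquisitions")
```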
Thus, satellite image processing has a huge number of applications in research and development, in remote sensing,
in astronomy, and now, on a large scale, in cloud computing.

CRM

CRM stands for Customer Relationship Management and refers to all the tools, techniques, strategies, and
technologies used by an organization to build, retain, and acquire customer relationships and to manage customer
data.
Customer relationship management ensures the smooth storage of customer data such as demographics, purchase
behaviour, patterns, and history, together with every interaction with a customer, in order to build strong relations
and increase the sales and profits of an organization.

❖ CRM In Cloud Computing: -


CRM in cloud computing refers to CRM software delivered in a cloud-based form so that it is easily accessible to
customers over the internet. Nowadays many organizations use a CRM cloud so that customers can easily access
information via the internet.
Moreover, computing systems have become so capable that customers can easily access them via their phones. As
a result, easy access to information leads to quicker sales or conversions.
The CRM cloud provides facilities for information sharing, backup, storage, and access from any part of the
world.

❖ Mobile CRM: -
Mobile CRM is a tool that is becoming a necessity with the increasing use of and dependency on smartphones.
It is used to access CRM data via mobile devices and gives the sales force the ability to access customer data
through connected mobiles or CRM in cloud computing. Similarly, customers can also access data easily.
As mobile usage has overtaken the desktop, companies are becoming more flexible and adaptable toward mobile
compatibility. So, CRM in cloud computing takes another step towards mobile compatibility with mobile CRM.

❖ Differentiate Between Cloud CRM & Mobile CRM: -

1. Definition: CRM in cloud computing is software that allows users to access data from anywhere, at any time,
over the internet; mobile CRM is a tool designed to provide full CRM facilities on smartphones and tablets over
the internet.
2. Scope: The CRM cloud is the wider approach; mobile CRM is narrower and is one part of CRM in cloud
computing.
3. Access devices: CRM in cloud computing can be accessed through a desktop, laptop, mobile, tablet, etc.;
mobile CRM is only accessible on smartphones and tablets.
4. Connection: In the CRM cloud, users access the cloud server to gather information; a mobile CRM user is not
necessarily connected to the cloud.

❖ Types Of CRM Systems: -
An organization designs and sets up the entire cloud-based system for the functioning of CRM in the cloud, and to
achieve that the organization has two different options:
1. On-Premise/Traditional System: -
In the case of an on-premise/traditional system, an organization has to own and install its own servers,
networks, and systems to get its CRM in cloud computing to work.
On-premise refers to keeping the entire system of cloud computing in-house. This means the organization
builds an entire IT environment of servers, systems, and networks and maintains it on its own.
Moreover, the organization needs to maintain and keep its data safe on its own. So, organizations that are
required to keep their data safe and secure opt for on-premise CRM in cloud computing.
For example, banks have to keep their payment data secure, so they maintain their own servers.
2. On-Demand/Cloud-Based System: -
In the case of an on-demand/cloud-based system, an organization only needs a system and an internet
connection, because the servers, networks, databases, and infrastructure are all maintained by another
company.
Examples include Google, Salesforce, Cisco, etc.

❖ Benefits Of CRM In Cloud Computing: -


1. Access From Anywhere: -
One basic advantage of CRM in cloud computing is ease of access, as the organization builds a structure
in which customers and employees can use the information from anywhere.
2. Flexibility: -
CRM in cloud computing is flexible according to the organization's budget, size, and other requirements,
as the structure can be upgraded or downgraded as the needs of the organization change.
3. Ease Of Installation & Use: -
These days even small businesses require CRM in cloud computing, and there are different sectors with
different kinds of CRM, such as e-commerce customer relationship management. So, it is important to
keep the installation and working process very easy.
4. Data Backup: -
In the old days, before cloud computing, ensuring data backup was tricky and hectic work, as it carried
the risk of data loss and was time-consuming and costly. In the case of CRM in cloud computing, data
backup is easy, fast, and cheap.
5. Cost Savings: -
CRM in cloud computing has the biggest advantage of cost-saving to a company. Creating your server, and
maintaining and upgrading it from time to time can cost a lot to the company. So, organizations opt for
cloud computing facilities, so that organizations do not have to engage in such expenses.
6. Software Updates: -
Small and medium enterprises do not have enough resources to build an entire IT department that can
solely focus on the smooth functioning of IT infrastructure. So, these organizations opt for vendors who
can provide these facilities, including software updates.
7. Better Communication: -
CRM in cloud computing has made functioning and communication within an organization quick and
easy, as the organization needs to share data within and across departments.
8. Integrations With Tools: -
The right cloud CRM can be easily integrated into your existing workflow. This is great because you don’t
want to invest in software that doesn’t cooperate with the tools and services your team relies on to do their
job well.

