CCD Chapter 1.0


Chapter 1: Cloud Computing Fundamentals

Definition of cloud computing:


1) According to the definition given by Armbrust et al.:
Cloud computing refers to both the applications delivered as
services over the Internet and the hardware and system
software in the datacenters that provide those services.
2) According to the definition proposed by the U.S. National
Institute of Standards and Technology (NIST):
Cloud computing is a model for enabling ubiquitous,
convenient, on-demand network access to a shared pool of
configurable computing resources (e.g., networks, servers,
storage, applications, and services) that can be rapidly
provisioned and released with minimal management effort or
service provider interaction.

3) Rajkumar Buyya defined cloud computing based on the
nature of utility computing:
A cloud is a type of parallel and distributed system consisting
of a collection of interconnected and virtualized computers that
are dynamically provisioned and presented as one or more
unified computing resources based on service-level agreements
established through negotiation between the service provider
and consumers.
Q) What is the evolution of cloud computing?

Distributed Systems:
• A distributed system is a composition of multiple independent systems,
all of which are depicted as a single entity to the users. The purpose of
distributed systems is to share resources and use them effectively and
efficiently. Distributed systems possess characteristics such as
scalability, concurrency, continuous availability, heterogeneity, and
independence of failures. The main problem with such systems, however,
was that all the machines were required to be present at the same
geographical location.

Mainframe computing:
• Mainframes, which first came into existence in 1951, are highly
powerful and reliable computing machines. They are responsible for
handling large volumes of data and massive input-output operations.
Even today they are used for bulk processing tasks such as online
transactions. These systems have almost no downtime and high
fault tolerance. After distributed computing, they increased the
processing capabilities of systems.
Cluster computing:
• In the 1980s, cluster computing emerged as an alternative to mainframe
computing. Each machine in the cluster was connected to the others
by a high-bandwidth network. Clusters were far cheaper than
mainframe systems while being equally capable of high-performance
computation. Also, new nodes could easily be added to the cluster
when required.

Grid computing:
• In the 1990s, the concept of grid computing was introduced: different
systems placed at entirely different geographical locations, all
connected via the internet. These systems belonged to different
organizations, and thus the grid consisted of heterogeneous nodes.
Although it solved some problems, new problems emerged as the
distance between the nodes increased.

Virtualization:
• Virtualization was introduced nearly 40 years ago. It refers to the
process of creating a virtual layer over the hardware which allows the
user to run multiple instances simultaneously on that hardware. It is a
key technology used in cloud computing, and the base on which major
cloud computing services such as Amazon EC2, VMware vCloud,
etc., are built.

Web 2.0:
• Web 2.0 is the interface through which cloud computing services
interact with clients. It is because of Web 2.0 that we have
interactive and dynamic web pages. It also increases flexibility
among web pages. Popular examples of Web 2.0 include Google
Maps and Facebook.

Service orientation:
• It acts as a reference model for cloud computing. It supports low-cost,
flexible, and evolvable applications. Two important concepts were
introduced in this computing model.
Utility computing:
• It is a computing model that defines service provisioning techniques
for compute services along with other major services such as storage,
infrastructure, etc., which are provisioned on a pay-per-use basis.
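The pay-per-use idea can be sketched in a few lines of Python. The resource names and per-unit rates below are invented for illustration; real providers publish their own pricing models.

```python
from dataclasses import dataclass, field

# Hypothetical per-unit rates (illustration only).
RATES = {"compute_hours": 0.05, "storage_gb_months": 0.02, "egress_gb": 0.09}

@dataclass
class UsageMeter:
    """Accumulates metered usage and bills it on a pay-per-use basis."""
    usage: dict = field(default_factory=lambda: {k: 0.0 for k in RATES})

    def record(self, resource: str, amount: float) -> None:
        self.usage[resource] += amount

    def invoice(self) -> float:
        # Total charge = sum of (units consumed x unit rate); no flat fee.
        return round(sum(RATES[r] * v for r, v in self.usage.items()), 2)

meter = UsageMeter()
meter.record("compute_hours", 100)    # 100 VM-hours
meter.record("storage_gb_months", 50)
meter.record("egress_gb", 10)
print(meter.invoice())  # 100*0.05 + 50*0.02 + 10*0.09 = 6.9
```

The key property of utility computing is visible in `invoice()`: the bill is a pure function of consumption, with no up-front or fixed cost.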

What is Virtualization?

• Virtualization is technology that you can use to create virtual
representations of servers, storage, networks, and other physical
machines. Virtualization software mimics the functions of physical
hardware to run multiple virtual machines simultaneously on a
single physical machine.

• One of the main cost-effective, hardware-reducing, and
energy-saving techniques used by cloud providers is virtualization.
Virtualization allows sharing of a single physical instance of a
resource or an application among multiple customers and
organizations at one time.

• A hypervisor is software that creates and runs virtual machines.

The term virtualization is often synonymous with hardware
virtualization, which plays a fundamental role in efficiently delivering
Infrastructure-as-a-Service (IaaS) solutions for cloud computing.
Moreover, virtualization technologies provide a virtual environment
not only for executing applications but also for storage, memory, and
networking.
Advantages of Virtualization:
1. More flexible and efficient allocation of resources.
2. Enhance development productivity.
3. It lowers the cost of IT infrastructure.
4. Remote access and rapid scalability.
5. High availability and disaster recovery.
6. Pay-per-use of the IT infrastructure on demand.
7. Enables running multiple operating systems.
8. All Virtual Machines will work independently.

• Virtualization involves the creation of virtual versions of
computing resources, including virtual computer hardware, virtual
storage devices and virtual computer networks.
• Software called a hypervisor is used for hardware
virtualization. With the help of a hypervisor, virtual machine
software is incorporated into the server hardware component.
The role of the hypervisor is to control the physical hardware
that is shared between the client and the provider. Hardware
virtualization is done using a Virtual Machine Monitor (VMM)
to abstract the physical hardware. There are several processor
extensions which help to speed up virtualization activities and
increase hypervisor performance. When this virtualization is
done for the server platform, it is called server virtualization.
• A hypervisor creates an abstraction layer between the software
and the hardware in use. After a hypervisor is installed, software
works with virtual representations of the hardware, such as
virtual processors, rather than with the physical processors
directly. There are several popular hypervisors, including
ESXi-based VMware vSphere and Hyper-V.
FIGURE 1.14 Hardware Virtualization

• Instances of virtual machines are typically represented by
one or more files, which can be easily transported across
physical machines. In addition, they are also autonomous,
since they have no dependencies for their use other than
the virtual machine manager.

• A process virtual machine, sometimes known as an
application virtual machine, runs inside a host OS as an
ordinary application, supporting a single process. It is
created when the process starts and destroyed when the
process ends. Its aim is to provide a platform-independent
programming environment which abstracts away the details
of the underlying hardware or operating system and allows
the program to run in the same way on any platform. For
example, the Wine software on Linux helps you run Windows
applications.

• A process VM provides a high-level abstraction, that of a
high-level programming language (compared with the
low-level ISA abstraction of a system VM). Process VMs are
implemented by means of an interpreter; performance
comparable to compiled programming languages is achieved
through just-in-time compilation.

• This form of VM has become popular with the Java
programming language, which is executed by the Java
virtual machine. The .NET Framework, which runs on a VM
called the Common Language Runtime (CLR), is another
example.
FIGURE 1.15 process virtual machine design
Reference from “Mastering Cloud Computing Foundations and
Applications Programming” by Rajkumar Buyya
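Python itself is an everyday example of a process VM: source code is compiled to platform-independent bytecode, which the interpreter executes the same way on any platform. A quick way to see this:

```python
import dis

def add(a, b):
    return a + b

# The function is compiled to CPython bytecode, not native machine code;
# the interpreter (the process VM) executes these instructions on any
# platform where CPython runs.
print(type(add.__code__.co_code))   # the raw bytecode is a bytes object
dis.dis(add)                        # human-readable instruction listing
```

The exact instruction names vary between CPython versions, which is itself a reminder that the bytecode is an implementation detail of the process VM, not of the hardware.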

Q) Properties and characteristics of cloud computing?


 Characteristics and benefits
As cloud computing services mature both commercially and
technologically, it will be easier for companies to maximize their
potential benefits. However, it is equally important to know what
cloud computing is and what it does.

FIGURE 1.7 Features of Cloud Computing


Following are the characteristics of Cloud Computing:

1. Resource Pooling
This means that the cloud provider uses a multi-tenant
model to deliver computing resources to various
customers. Physical and virtual resources are dynamically
allocated and reassigned according to customer demand.
In general, the customer has no control over or knowledge
of the exact location of the provided resources, but may
specify location at a higher level of abstraction.

2. On-Demand Self-Service
This is one of the main and most useful advantages of cloud
computing: the user can provision and track server uptime,
capabilities, and network storage on an ongoing basis without
human interaction with the provider. The user can also
monitor computing functionalities with this feature.

3. Easy Maintenance
The servers are easy to maintain and downtime is low;
there is no downtime at all except in some cases. Cloud
computing offers frequent updates that continuously
enhance it. The updates are more system-friendly and
have bugs patched faster than older versions.

4. Large Network Access
The user may use any device with an Internet connection to
access the cloud data or upload data to the cloud from
anywhere. Such capabilities are accessible across the
network and through the internet.

5. Availability
The cloud's capabilities can be changed and expanded
according to usage. This allows the consumer to buy
additional cloud storage for a very small price, if
necessary.
6. Automatic System
Cloud computing automatically analyzes the data required
and supports metering capabilities at some level of service.
Usage can be tracked, managed, and reported, providing
accountability for both the host and the customer.

7. Economical
It is a one-time investment, since the company (host) buys
the storage, which can then be made available to many
companies; this saves those companies from monthly or
annual costs. Apart from basic maintenance and some
additional costs, the amounts spent are much smaller.

8. Security
Security is one of cloud computing's best features. The cloud
keeps snapshots of the stored data, so that the data is not lost
even if one of the servers is damaged. The information is
stored on storage devices which no other person can hack or
use. The storage service is fast and reliable.

9. Pay as you go
Users only have to pay for the services or the space they use
in cloud computing. There are no hidden or additional
charges. The service is economical, and some space is often
allocated free of charge.

10. Measured Service
The resources a company uses are monitored and recorded
by the cloud provider. Resource usage is analyzed with
charge-per-use capabilities: resource use can be measured
and reported by the service provider, for example per
virtual server instance running in the cloud. You pay
according to your actual consumption.
Q) Challenges and risk in cloud computing?
Everything has advantages and challenges. We have seen many cloud
features, and now it is time to identify the challenges of cloud
computing, along with tips and techniques for handling them. Let's
therefore start exploring cloud computing risks and challenges.
Nearly all companies use cloud computing because they need to
store data. Companies generate and store a tremendous amount of
data, and thus face many security issues. Companies therefore set up
processes to streamline and optimize operations and to improve
cloud computing management.

This is a list of all cloud computing threats and challenges:


• Security & Privacy
• Interoperability & Portability
• Reliability and flexibility
• Cost
• Downtime
• Lack of resources
• Dealing with Multi-Cloud Environments
• Cloud Migration
• Vendor Lock-In
• Privacy and Legal issues

Security and Privacy of Cloud


The cloud data store must be secure and confidential, as the
clients are heavily dependent on the cloud provider. In other
words, the cloud provider must take the security measures
necessary to secure customer data.
Security is also the customer's liability: customers must use
strong passwords, not share their passwords with others, and
update their passwords on a regular basis. If the data are
outside the firewall, certain problems may occur which the
cloud provider must eliminate.
Hacking and malware are also among the biggest problems,
because they can affect many customers. They can result in
data loss, disrupt the encrypted file system, and cause several
other issues.

Interoperability and Portability


Migration services into and out of the cloud should be provided
to the customer, and there should be no lock-in period, as that
can hamper customers. The cloud should be capable of
supplying on-premises facilities. Remote access is another
consideration: the customer should be able to access the cloud
from anywhere.

Reliable and Flexible


Reliability and flexibility are indeed a difficult task for cloud
customers. The provider must eliminate leakage of the data
entrusted to the cloud and prove trustworthy to customers. To
overcome this challenge, third-party services should be
monitored, and the performance, robustness, and dependability
of the supplying companies supervised.

Cost
Cloud computing is affordable, but adapting the cloud to
customer demand can sometimes be expensive. In addition,
altering the cloud as demand changes can sometimes cost more,
which can hinder small businesses. Furthermore, it is sometimes
costly to transfer data from the cloud back to the premises.

Downtime
Downtime is the most commonly cited cloud computing
challenge, as no cloud provider guarantees a platform free from
downtime. The internet connection also plays an important role:
a company with an untrustworthy internet connection will face
downtime.
Lack of resources
The cloud industry also faces a lack of resources and
expertise, which many businesses hope to overcome by
hiring new, more experienced employees. These employees
will not only help solve the challenges of the business but will
also train existing employees to benefit the company.
Currently, many IT employees are working to enhance their
cloud computing skills, and it is difficult for chief executives
because employees are still insufficiently qualified. It is claimed
that employees with exposure to the latest innovations and
associated technologies will become more valuable to
businesses.

Dealing with Multi-Cloud Environments


Today, hardly any business operates entirely on a single cloud.
According to the RightScale report, almost 84 percent of
enterprises adopt a multi-cloud approach and 58 percent have
hybrid cloud approaches mixing public and private clouds. On
average, organizations use five different public and private
clouds.

FIGURE 1.8 RightScale 2019 report revelation


IT infrastructure teams have great difficulty making long-term
predictions about the future of cloud computing technology.
Professionals have suggested top strategies to address this
problem, such as rethinking processes, training personnel,
adopting tools, managing vendor relations actively, and
conducting studies.
Cloud Migration
While it is very simple to release a new app in the cloud,
transferring an existing app to a cloud computing environment
is harder. According to the report, 62% said their cloud
migration projects were harder than they expected. In addition,
64% of migration projects took longer than expected and 55%
exceeded their budgets. In particular, organizations that
migrated their applications to the cloud reported migration
downtime (37%), data synchronization issues before cutover
(40%), migration tooling problems (40%), slow migration of
data (44%), security configuration issues (40%), and
time-consuming troubleshooting (47%).
To solve these problems, close to 42% of the IT experts said
that they wanted to see their budgets increased, around 45% of
them wanted to use an in-house professional, 50% wanted a
longer project timeline, and 56% wanted more pre-migration
testing.

Vendor lock-in
The problem of vendor lock-in in cloud computing involves
clients becoming reliant on (i.e. locked in to) the
implementation of a single cloud provider and being unable to
switch to another vendor in the future without significant costs,
regulatory restrictions, or technological incompatibilities. The
lock-in situation can be seen in applications built for specific
cloud platforms, such as Amazon EC2 or Microsoft Azure, that
are not easily transferred to any other cloud platform, leaving
users vulnerable to changes made by their providers,
particularly from the perspective of a software developer.
In fact, the issue of lock-in arises when, for example, a
company decides to change cloud providers (or perhaps to
integrate services from different providers) but cannot move
applications or data across the different cloud services, because
the semantics of the cloud providers' resources and services do
not correspond. This heterogeneity of cloud semantics and APIs
creates technological incompatibility, which in turn leads to
interoperability and portability challenges.
This makes it very complicated and difficult to interoperate,
cooperate, port, handle, and maintain data and services. For
these reasons, from the company's point of view it is important
to maintain the flexibility to change providers according to
business needs, or even to keep in-house certain components
which are less critical to safety, due to these risks.
The issue of vendor lock-in hinders interoperability and
portability between cloud providers. Addressing it is the way
for cloud providers and clients to become more competitive.

Privacy and Legal issues


The main problem regarding cloud privacy and data security is
the 'data breach.'

A data breach can be generically defined as the loss of
electronically stored personal information. A breach of
information could lead to a multitude of losses both for the
provider and for the customer: identity theft and debit/credit
card fraud for the customer; loss of credibility, future
prosecutions, and so on for the provider.
In the event of a data breach, American law requires
notification of the affected persons. Nearly every state in the
USA now requires data breaches to be reported to the affected
persons.
Problems arise when data are subject to several jurisdictions
whose data privacy laws differ. For example, the Data Privacy
Directive of the European Union explicitly states that data can
only leave the EU if it goes to a country that provides an
'adequate level of protection.' This rule, while simple to state,
limits the movement of data and thus decreases data capacity,
and the EU's regulations can be enforced.
Q) Explain hardware virtualization.
Hardware virtualization is the method used to create virtual
versions of physical desktops and operating systems. It uses a
virtual machine manager (VMM), called a hypervisor, to provide
abstracted hardware to multiple guest operating systems, which
can then share the physical hardware resources more efficiently.
Hardware virtualization, also known as platform virtualization,
is a technology that enables the creation and operation of virtual
machines (VMs) on a physical computing system. It allows
multiple operating systems and applications to run
simultaneously on a single hardware platform, as if they were
running on separate physical machines.
In hardware level virtualization, a software layer called a
hypervisor, also known as a virtual machine monitor (VMM), is
installed on the host machine. The hypervisor acts as an
intermediary between the physical hardware and the virtual
machines, managing the allocation of hardware resources such
as CPU, memory, storage, and network interfaces between those
machines.
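The hypervisor's allocation role described above can be sketched as a toy model in Python. Names such as `Hypervisor` and `create_vm` are invented for illustration; real hypervisors schedule resources dynamically rather than statically partitioning them as done here.

```python
class Hypervisor:
    """Toy model of a VMM partitioning host CPU and memory among guests."""

    def __init__(self, cpus: int, mem_gb: int):
        self.free_cpus, self.free_mem = cpus, mem_gb
        self.vms = {}

    def create_vm(self, name: str, cpus: int, mem_gb: int) -> bool:
        # Refuse the request if the physical host cannot back it.
        if cpus > self.free_cpus or mem_gb > self.free_mem:
            return False
        self.free_cpus -= cpus
        self.free_mem -= mem_gb
        self.vms[name] = {"cpus": cpus, "mem_gb": mem_gb}
        return True

host = Hypervisor(cpus=16, mem_gb=64)
print(host.create_vm("web", 4, 8))    # True
print(host.create_vm("db", 8, 32))    # True
print(host.create_vm("big", 8, 32))   # False: only 4 CPUs remain
print(host.free_cpus, host.free_mem)  # 4 24
```

The point of the sketch is the intermediary role: guests never touch `free_cpus` or `free_mem` directly; every allocation goes through the hypervisor.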
The hypervisor creates virtual instances of the underlying
hardware, including virtual CPUs, memory spaces, and disk
storage, which are then assigned to each virtual machine. This
enables each VM to operate independently, within its own
isolated environment, as if running on dedicated hardware.
Isolation: Hardware-based virtualization provides strong
isolation between virtual machines, which means that any
problems in one virtual machine will not affect other virtual
machines running on the same physical host.

Resource allocation: Hardware-based virtualization allows for
flexible allocation of hardware resources such as CPU, memory,
and I/O bandwidth to virtual machines.

Snapshot and migration: Hardware-based virtualization
allows for the creation of snapshots, which can be used for
backup and recovery purposes. It also allows for live migration
of virtual machines between physical hosts, which can be used
for load balancing and other purposes.
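A minimal sketch of the snapshot-and-restore idea in pure Python (no real hypervisor involved; the `memory` dictionary simply stands in for guest state, and the `hosts` dictionary for two physical machines):

```python
import copy

class VM:
    def __init__(self, name):
        self.name = name
        self.memory = {}     # simplified stand-in for guest state
        self.snapshots = []

    def snapshot(self):
        # A snapshot is a point-in-time copy of the VM's state.
        self.snapshots.append(copy.deepcopy(self.memory))

    def restore(self, index=-1):
        self.memory = copy.deepcopy(self.snapshots[index])

vm = VM("guest1")
vm.memory["app"] = "v1"
vm.snapshot()                 # capture state before an upgrade
vm.memory["app"] = "v2-broken"
vm.restore()                  # roll back after the upgrade fails
print(vm.memory["app"])       # v1

# "Migration" in this toy model: the whole state object moves to another host.
hosts = {"A": [vm], "B": []}
hosts["B"].append(hosts["A"].pop())
print([v.name for v in hosts["B"]])  # ['guest1']
```

Because the VM's entire state is just data, copying it yields a backup and moving it yields a migration; real hypervisors do the same thing with memory pages and disk images instead of a dictionary.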

Support for multiple operating systems: Hardware-based
virtualization supports multiple operating systems, which
allows for the consolidation of workloads onto fewer physical
machines, reducing hardware and maintenance costs.

Compatibility: Hardware-based virtualization is compatible
with most modern operating systems, making it easy to integrate
into existing IT infrastructure.
Advantages of hardware-based virtualization:
It reduces the maintenance overhead of paravirtualization, as it
reduces (ideally, eliminates) the modifications required in the
guest operating system.
It also makes it significantly easier to attain enhanced
performance. Practical benefits of hardware-based
virtualization have been reported by VMware engineers and
Virtual Iron.

Disadvantages of hardware-based virtualization:


Hardware-based virtualization requires explicit support in the
host CPU, which may not be available on all x86/x86_64
processors.
A “pure” hardware-based virtualization approach, including the
entire unmodified guest operating system, involves many VM
traps, and thus a rapid increase in CPU overhead occurs which
limits the scalability and efficiency of server consolidation.
This performance hit can be mitigated by the use of
paravirtualized drivers; the combination has been called “hybrid
virtualization”.
What are the different types of hardware virtualization?

• Full Virtualization
With full virtualization, one of the different hardware
virtualization types, VMs run their own operating systems and
applications, just as if they were on separate physical machines.
This allows for great flexibility and compatibility.
You can have VMs running different operating systems, like
Windows, Linux, or even exotic ones, all coexisting peacefully
on the same physical hardware.

Advantages:
One of the key advantages is isolation. Each VM operates in its
own virtual bubble, protected from the chaos that might arise
from other VMs sharing the same hardware.
Furthermore, full virtualization enables the migration of VMs
between physical hosts. Imagine the ability to move a running
VM from one physical server to another, like a teleportation
trick. This live migration feature allows for workload balancing,
hardware maintenance without downtime, and disaster
recovery.
Full virtualization also plays a vital role in testing and
development environments. It allows developers to create
different VMs for software testing, without the need for
dedicated physical machines. This helps them save a lot of
money, time, and efforts in the long run.
• Emulation Virtualization
Emulation virtualization, the next one in different types of
hardware virtualization, relies on a clever technique known as
hardware emulation. Through hardware emulation, a virtual
machine monitor, or hypervisor, creates a simulated hardware
environment within each virtual machine.
This simulated environment replicates the characteristics and
behaviour of the desired hardware platform, even if the
underlying physical hardware is different. It's like putting on a
digital costume that makes the virtual machine look and feel like
it's running on a specific type of hardware.

Advantages:
But how does this aid in enabling hardware virtualization? Well,
the main advantage of emulation virtualization lies in its
flexibility and compatibility. It enables virtual machines to run
software that may be tied to a specific hardware platform,
without requiring the exact hardware to be present.
This flexibility is particularly useful in scenarios where legacy
software or operating systems need to be preserved or migrated
to modern hardware. Emulation virtualization allows these
legacy systems to continue running on virtual machines,
ensuring their longevity and compatibility with new hardware
architectures.
It is a powerful tool in the virtualization magician's arsenal,
allowing us to transcend the limitations of physical hardware
and embrace a world of endless possibilities.

• Para-Virtualization
Unlike other types of hardware virtualization, paravirtualization
requires some special coordination between the virtual machine
and the hypervisor. The guest operating system running inside
the virtual machine undergoes slight modifications. These
modifications introduce specialised API calls, allowing the
guest operating system to communicate directly with the
hypervisor.

Advantages:
This direct communication eliminates the need for certain
resource-intensive tasks, such as hardware emulation, which is
required in full virtualization. By bypassing these tasks,
paravirtualization can achieve higher performance and
efficiency compared to other virtualization techniques.
Para-virtualization shines in scenarios where performance is
paramount. It's like having a race car driver and a skilled
navigator working together to achieve the fastest lap times. By
leveraging the direct communication between the guest
operating system and the hypervisor, para-virtualization
minimises the overhead and latency associated with traditional
virtualization approaches.
This performance boost is particularly beneficial for
high-performance computing, real-time systems, and
I/O-intensive workloads. It's like having a turbocharger that
boosts the virtual machine's performance, enabling it to handle
demanding tasks with efficiency and precision.
Advantages of Hardware Virtualization:

Improved Resource Utilisation:


With hardware virtualization, you can maximise the utilisation
of physical resources such as CPU, memory, and storage. By
running multiple virtual machines (VMs) on a single physical
server, you can effectively make use of the available resources.

Enhanced Scalability:
Hardware virtualization enables you to easily scale your
infrastructure to meet changing demands. Whether you need to
add more virtual machines or allocate additional resources to
existing VMs, virtualization allows for seamless scalability. It's
like having the ability to expand your stage and accommodate
more performers as the audience grows.

Increased Flexibility and Agility:


Virtualization offers flexibility by decoupling the software from
the underlying hardware.
You can run different operating systems and applications on the
same physical server, allowing for diverse workloads and
environments.

Cost Savings:
One of the major benefits of hardware virtualization is
significant cost savings. By consolidating multiple physical
servers into a virtualized environment, you reduce the need for
additional hardware, power consumption, and cooling costs. It
enables optimising your expenses by sharing resources
efficiently.
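The consolidation saving can be made concrete with a small sketch: first-fit-decreasing packing of VM memory demands onto identical hosts. This is an illustrative heuristic, not how any particular cloud scheduler works.

```python
def hosts_needed(demands_gb, host_capacity_gb):
    """Pack VM memory demands onto identical hosts, first-fit decreasing.
    Fewer hosts means lower hardware, power, and cooling spend."""
    hosts = []  # remaining free capacity of each host in use
    for d in sorted(demands_gb, reverse=True):
        for i, free in enumerate(hosts):
            if d <= free:
                hosts[i] -= d   # place the VM on an existing host
                break
        else:
            hosts.append(host_capacity_gb - d)  # bring up a new host
    return len(hosts)

demands = [8, 8, 4, 4, 2, 2, 2, 2]   # eight workloads, 32 GB total
print(hosts_needed(demands, 32))     # 1 host instead of 8 dedicated servers
```

Without virtualization each workload would typically occupy its own server; with it, the eight workloads above fit on a single 32 GB host.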
Improved Disaster Recovery and Business Continuity:

Virtualization provides robust disaster recovery capabilities.
With features like live migration and snapshots, you can easily
move virtual machines between physical hosts or create
point-in-time backups. In the event of hardware failure or a
disaster, you can quickly restore operations, minimising
downtime and ensuring business continuity. It's like having an
emergency plan that allows you to seamlessly switch venues and
continue with the work.

Simplified Testing and Development:


Virtualization simplifies the process of testing and development.
You can create isolated virtual environments to test new
software, configurations, or updates without impacting
production systems. This also can help you save a lot of time
you’d have invested in gathering all the hardware for different
machines.

Enhanced Security:
Hardware virtualization can improve security by isolating
virtual machines from each other. Even if one VM is
compromised, the others remain unaffected.
