Cloud Computing Notes For BSC and BCA

The document discusses cloud computing and compares it to traditional IT infrastructure. It covers the key aspects of cloud architecture including front end and back end components. The back end consists of servers, storage, networking hardware, virtualization software, and other resources. Cloud computing provides benefits like scalability, flexibility and lower costs compared to traditional IT which requires businesses to purchase and maintain their own physical hardware. The document analyzes the differences between the two approaches.

Uploaded by

Pvlsowjanya
Copyright © All Rights Reserved

Unit 1

Cloud Computing Overview – Origins of Cloud computing – Cloud components – Essential
characteristics – On-demand self-service, Broad network access, Location-independent
resource pooling, Rapid elasticity, Measured service

Unit II

Cloud scenarios – Benefits: scalability, simplicity, vendors, security.

Limitations – Sensitive information – Application development – Security concerns – Privacy
concerns with a third party – Security level of third party – Security benefits

Regulatory issues: Government policies

Unit III

Cloud architecture: Cloud delivery model – SPI framework, SPI evolution, SPI vs.
traditional IT model

Software as a Service (SaaS): SaaS service providers – Google App Engine, Salesforce.com
and Google platform – Benefits – Operational benefits – Economic benefits – Evaluating
SaaS

Platform as a Service (PaaS): PaaS service providers – RightScale – Salesforce.com –
Rackspace – Force.com – Services and Benefits

Unit IV

Infrastructure as a Service (IaaS): IaaS service providers – Amazon EC2, GoGrid –
Microsoft implementation and support – Amazon EC2 service level agreement – Recent
developments – Benefits

Cloud deployment model: Public clouds – Private clouds – Community clouds – Hybrid
clouds – Advantages of Cloud computing

Unit V

Virtualization: Virtualization and cloud computing – Need of virtualization – cost,
administration, fast deployment, reduced infrastructure cost – limitations
Types of hardware virtualization: Full virtualization – Partial virtualization – Para
virtualization
Desktop virtualization: Software virtualization – Memory virtualization – Storage
virtualization – Data virtualization – Network virtualization

Microsoft implementation: Microsoft Hyper-V – VMware features and infrastructure – Virtual
Box – Thin client
Unit-3
Cloud Architecture
Cloud computing architecture comprises many loosely coupled cloud components. We
can broadly divide the cloud architecture into two parts:

 Front End

 Back End

The two ends are connected through a network, usually the Internet. The
following diagram shows a graphical view of the cloud computing architecture:
Front End
The front end refers to the client part of a cloud computing system. It consists
of the interfaces and applications required to access cloud computing
platforms, for example a web browser.

Back End
The back end refers to the cloud itself. It consists of all the resources
required to provide cloud computing services: huge data storage, virtual
machines, security mechanisms, services, deployment models, servers, and so on.

Cloud Infrastructure
Cloud infrastructure consists of servers, storage devices, network, cloud
management software, deployment software, and platform virtualization.

Hypervisor
A hypervisor is firmware or a low-level program that acts as a virtual
machine manager. It allows a single physical instance of cloud resources
to be shared between several tenants.

Management Software
It helps to maintain and configure the infrastructure.

Deployment Software
It helps to deploy and integrate applications on the cloud.

Network
The network is a key component of cloud infrastructure: it connects cloud
services over the Internet. It is also possible to deliver the network as a
utility over the Internet, meaning the customer can customize the network
route and protocol.

Server
Servers perform the computation behind resource sharing and offer other
services such as resource allocation and de-allocation, resource monitoring,
and security.

Storage
The cloud keeps multiple replicas of stored data. If one storage resource
fails, the data can be retrieved from another replica, which makes cloud
computing more reliable.
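The replica-based reliability described above can be sketched in a few lines of Python. This is a hypothetical illustration (the class, method names, and replica count are invented for the example, not any provider's design): every write goes to all replicas, and a read falls back to a healthy replica when one store fails.

```python
# Hypothetical sketch of replicated cloud storage: writes fan out to every
# replica, and reads fall back to a healthy replica on failure.
class ReplicatedStorage:
    def __init__(self, num_replicas=3):
        # Each dict stands in for an independent storage resource.
        self.stores = [dict() for _ in range(num_replicas)]
        self.failed = set()

    def put(self, key, value):
        # Write the same value to every replica.
        for store in self.stores:
            store[key] = value

    def fail(self, index):
        # Simulate one storage resource going down.
        self.failed.add(index)

    def get(self, key):
        # Read from the first replica that is still healthy.
        for i, store in enumerate(self.stores):
            if i not in self.failed and key in store:
                return store[key]
        raise KeyError(key)

storage = ReplicatedStorage()
storage.put("report.docx", "contents")
storage.fail(0)                     # first replica goes down
print(storage.get("report.docx"))   # still served from another replica
```

Real cloud stores use the same idea at scale, typically placing replicas in physically separate zones so that one failure cannot take out all copies.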

Infrastructural Constraints
Fundamental constraints that cloud infrastructure should implement are
shown in the following diagram:

Transparency
Virtualization is the key to sharing resources in a cloud environment. But it is
not possible to satisfy demand with a single resource or server. Therefore,
there must be transparency in resources, load balancing and applications, so
that we can scale them on demand.

Scalability
Scaling an application delivery solution is not as easy as scaling an
application, because it involves configuration overhead or even re-architecting
the network. The application delivery solution therefore needs to be scalable,
which requires a virtual infrastructure in which resources can be provisioned
and de-provisioned easily.

Intelligent Monitoring
To achieve transparency and scalability, the application delivery solution
needs to be capable of intelligent monitoring.

Security
The mega data center in the cloud should be securely architected, and the
control node, the entry point into the mega data center, also needs to be secure.

SPI model vs. traditional IT model


Cloud is the new frontier of business computing and delivery of software and
applications, and is rapidly overtaking the traditional in-house system as a
reliable, scalable and cost-effective IT solution. However, many businesses
that have built their own robust data centres and traditional IT infrastructure
still rely heavily on this model for security and managerial reasons.

Choosing an IT model for your business is a very important decision. Every


company needs a safe and secure storage space, where data and
applications can be easily accessed and running costs are kept to a
minimum. If you’re thinking of migrating your data from traditional IT
infrastructure to cloud based platforms, read on to explore the differences
between the two, to better understand the benefits of such a move.

What is Traditional IT Infrastructure?


Traditional data centres consist of various pieces of hardware, such as
desktop computers, which are connected to a network via a remote server.
This server is typically installed on the premises and provides all employees
using the hardware with access to the business's stored data and applications.

Businesses with this IT model must purchase additional hardware and


upgrades in order to scale up their data storage and services to support
more users. Mandatory software upgrades are also required with traditional
IT infrastructure to ensure fail-safe systems are in place in case a
hardware failure occurs. For many businesses with IT data centres, an in-
house IT department is needed to install and maintain the hardware.

On the other hand, traditional IT infrastructures are considered to be one of


the most secure data hosting solutions and allow you to maintain full
control of your company’s applications and data on the local server. They are
a customised, dedicated system ideal for organisations that need to run
many different types of applications.

Cloud Computing vs Traditional IT infrastructure


Cloud computing is far more abstract as a virtual hosting solution. Instead of
being accessible via physical hardware, all servers, software and networks
are hosted in the cloud, off premises. It’s a real-time virtual environment
hosted between several different servers at the same time. So rather than
investing money into purchasing physical servers in-house, you can rent the
data storage space from cloud computing providers on a more cost effective
pay-per-use basis.
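The economic difference between the two models comes down to simple arithmetic. The figures below are purely illustrative assumptions invented for the example (no real vendor quotes): an upfront in-house server plus monthly upkeep, versus an hourly pay-per-use rate for only the hours actually consumed.

```python
# Hypothetical cost comparison; all prices are illustrative assumptions.
server_purchase = 12000.0       # upfront cost of an in-house server
maintenance_per_month = 300.0   # power, cooling, admin time

cloud_rate_per_hour = 0.10      # pay-per-use rate for an equivalent instance
hours_used_per_month = 400      # instance only runs when it is needed

months = 24
in_house = server_purchase + maintenance_per_month * months
cloud = cloud_rate_per_hour * hours_used_per_month * months

print(f"in-house over {months} months: ${in_house:,.0f}")  # $19,200
print(f"cloud over {months} months:    ${cloud:,.0f}")     # $960
```

The gap narrows if the workload runs around the clock, which is why pay-per-use favours bursty or variable demand most strongly.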

The main differences between cloud hosting and traditional web hosting are:

Resilience and Elasticity


The information and applications hosted in the cloud are evenly distributed
across all the servers, which are connected to work as one. Therefore, if one
server fails, no data is lost and downtime is avoided. The cloud also offers
more storage space and server resources, including better computing power.
This means your software and applications will perform faster.

Traditional IT systems are not so resilient and cannot guarantee a


consistently high level of server performance. They have limited capacity
and are susceptible to downtime, which can greatly hinder workplace
productivity.

Flexibility and Scalability


Cloud hosting offers an enhanced level of flexibility and scalability in
comparison to traditional data centres. The on-demand virtual space of cloud
computing offers effectively unlimited storage space and more server resources. Cloud
servers can scale up or down depending on the level of traffic your website
receives, and you will have full control to install any software as and when
you need to. This provides more flexibility for your business to grow.

With traditional IT infrastructure, you can only use the resources that are
already available to you. If you run out of storage space, the only solution is
to purchase or rent another server. If you hire more employees, you will
need to pay for additional software licences and have these manually
installed on your office hardware. This can be a costly venture, especially if
your business is growing quite rapidly.
Automation
A key difference between cloud computing and traditional IT infrastructure is
how they are managed. Cloud hosting is managed by the storage provider
who takes care of all the necessary hardware, ensures security measures are
in place, and keeps it running smoothly. Traditional data centres require
heavy administration in-house, which can be costly and time consuming for
your business. Fully trained IT personnel may be needed to ensure regular
monitoring and maintenance of your servers – such as upgrades,
configuration problems, threat protection and installations.

Running Costs
Cloud computing is more cost effective than traditional IT infrastructure due
to methods of payment for the data storage services. With cloud based
services, you only pay for what is used – similarly to how you pay for
utilities such as electricity. Furthermore, the decreased likelihood of
downtime means improved workplace performance and increased profits in
the long run.

With traditional IT infrastructure, you will need to purchase equipment and


additional server space upfront to adapt to business growth. If this slows,
you will end up paying for resources you don’t use. Furthermore, the value
of physical servers decreases year on year, so the return on investment in
traditional IT infrastructure is quite low.

Security
Cloud computing is an external form of data storage and software delivery,
which can make it seem less secure than local data hosting. Anyone with
access to the server can view and use the stored data and applications in the
cloud, wherever internet connection is available. Choosing a cloud service
provider that is completely transparent in its hosting of cloud platforms and
ensures optimum security measures are in place is crucial when transitioning
to the cloud.

With traditional IT infrastructure, you are responsible for the protection of


your data, and it is easier to ensure that only approved personnel can access
stored applications and data. Physically connected to your local network,
data centres can be managed by in-house IT departments on a round-the-
clock basis, but a significant amount of time and money is needed to ensure
the right security strategies are implemented and data recovery systems are
in place.
Software-as-a-Service
Software-as–a-Service (SaaS) model allows to provide software
application as a service to the end users. It refers to a software that is
deployed on a host service and is accessible via Internet. There are several
SaaS applications listed below:

 Billing and invoicing system

 Customer Relationship Management (CRM) applications

 Help desk applications

 Human Resource (HR) solutions

Some SaaS applications, such as the Microsoft Office suite, are not
customizable. However, many SaaS offerings provide an Application Programming
Interface (API) that allows developers to build customized applications.

Characteristics
Here are the characteristics of SaaS service model:

 SaaS makes the software available over the Internet.

 The software applications are maintained by the vendor.

 The license to the software may be subscription-based or usage-based, and it is
billed on a recurring basis.

 SaaS applications are cost-effective since they do not require any maintenance on
the end user's side.

 They are available on demand.

 They can be scaled up or down on demand.

 They are automatically upgraded and updated.

 SaaS offers a shared data model: multiple users can share a single instance of
the infrastructure, and functionality does not need to be hard-coded for
individual users.

 All users run the same version of the software.

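The shared data model from the list above can be sketched as a single application instance keyed by tenant ID. This is a minimal hypothetical illustration (the class and tenant names are invented, not any vendor's implementation): one instance serves every tenant, yet each tenant sees only its own records.

```python
# Minimal sketch of SaaS multitenancy: one application instance serves many
# tenants, with data partitioned by tenant ID rather than per-customer code.
class SaaSApp:
    def __init__(self):
        self.data = {}  # tenant_id -> that tenant's records

    def save(self, tenant_id, record):
        self.data.setdefault(tenant_id, []).append(record)

    def records(self, tenant_id):
        # Each tenant sees only its own slice of the shared instance.
        return self.data.get(tenant_id, [])

app = SaaSApp()                   # a single shared instance...
app.save("acme", {"invoice": 1})  # ...used by multiple tenants
app.save("globex", {"invoice": 7})
print(app.records("acme"))        # [{'invoice': 1}] -- isolated per tenant
```

Because isolation is enforced by the tenant key rather than by separate deployments, upgrading the one instance upgrades every customer at once, which is why all users run the same version of the software.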

Benefits
Using SaaS has proved to be beneficial in terms of scalability, efficiency and
performance. Some of the benefits are listed below:

 Modest software tools

 Efficient use of software licenses

 Centralized management and data

 Platform responsibilities managed by provider

 Multitenant solutions

Modest software tools


Deploying a SaaS application requires little or no client-side software
installation, which results in the following benefits:

 No requirement for complex software packages on the client side

 Little or no configuration risk on the client side

 Low distribution cost

Efficient use of software licenses


The customer can have a single license for multiple computers running at
different locations, which reduces the licensing cost. Also, there is no
requirement for license servers, because the software runs in the provider's
infrastructure.

Centralized management and data


The cloud provider stores data centrally. However, the cloud providers may
store data in a decentralized manner for the sake of redundancy and
reliability.

Platform responsibilities managed by providers


All platform responsibilities such as backups, system maintenance, security,
hardware refresh, power management, etc. are performed by the cloud
provider. The customer does not need to bother about them.
Multitenant solutions
Multitenant solutions allow multiple users to share a single instance of
different resources in virtual isolation. Customers can customize their
applications without affecting the core functionality.

Issues
There are several issues associated with SaaS, some of them are listed below:

 Browser based risks

 Network dependence

 Lack of portability between SaaS clouds

Browser based risks


If the customer visits a malicious website and the browser becomes infected,
subsequent access to a SaaS application might compromise the customer's
data.

To avoid such risks, the customer can use multiple browsers, dedicating a
specific browser to SaaS applications, or can use a virtual desktop while
accessing SaaS applications.

Network dependence
A SaaS application can be delivered only when the network is continuously
available. The network should also be reliable, but network reliability cannot
be guaranteed by either the cloud provider or the customer.

Lack of portability between SaaS clouds


Transferring workloads from one SaaS cloud to another is not easy, because
workflows, business logic, user interfaces and support scripts can be
provider-specific.

Service Providers
Shopify

Shopify owns four products. Its main product, Shopify, is an e-commerce


platform for online stores and retail POS. It was ranked 76 in market
presence and 99 in satisfaction, leaving it with an overall score of 94.
Google

Google owns 137 products focused on Internet-related services, such as
search, digital analytics, document creation, online advertising, and
more. It was ranked 92 in market presence and 94 in satisfaction, leaving it
with an overall score of 93.

Salesforce

Salesforce owns 58 products geared towards enterprise cloud
computing that help employees collaborate with their customers. It
was ranked 85 in market presence and 94 in satisfaction, leaving it
with an overall score of 92.
WordPress

WordPress owns three products. Its products are part of an Open


Source project to democratize writing and publishing. It was ranked
68 in market presence and 100 in satisfaction, leaving it with an
overall score of 92.
Adobe

Adobe owns 60 computer software products in digital media and


marketing, printing, and publishing. It was ranked 86 in market
presence and 92 in satisfaction, leaving it with an overall score of 91.
Zoom

Zoom owns three products, all dedicated to providing remote video


conferencing for online meetings and collaboration. It was ranked 66
in market presence and 98 in satisfaction, leaving it with an overall
score of 90.

Google App Engine


Google App Engine is Google's platform-as-a-service offering that allows
developers and businesses to build and run applications using Google's
advanced infrastructure. These applications must be written in one of a few
supported languages, namely Java, Python, PHP and Go. It also requires the
use of Google's query language, with Google Bigtable as the underlying
database. Applications must abide by these standards, so they must either be
developed with GAE in mind or be modified to meet the requirements.
GAE is a platform, so it provides all of the required elements to run and host
web applications, whether mobile or Web. Without this all-in-one feature,
developers would have to source their own servers, database software and
the APIs that would make all of them work properly together, not to mention
the entire configuration that must be done. GAE takes this burden off the
developers so they can concentrate on the app front end and functionality,
driving better user experience.
Advantages of GAE include:

 Readily available servers with no configuration requirement


 Scaling that goes all the way down to "free" when resource usage
is minimal
 Automated cloud computing tools
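The kind of application a platform like GAE hosts can be illustrated with a minimal WSGI app. This is an illustrative stand-in using only the Python standard library, not real App Engine code (actual GAE apps target its supported runtimes and configuration files); the point is that the developer writes only the request handler and the platform supplies the servers it runs on.

```python
# Illustrative stand-in for a PaaS-hosted web app: a minimal WSGI handler.
# The platform (not the developer) provides and scales the server behind it.
from wsgiref.util import setup_testing_defaults

def application(environ, start_response):
    # The developer concentrates on app functionality; hosting is the
    # platform's responsibility.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello from the cloud"]

# Exercise the handler locally with a fake request environment.
environ = {}
setup_testing_defaults(environ)
statuses = []
body = application(environ, lambda status, headers: statuses.append(status))
print(statuses[0], body[0].decode())  # 200 OK Hello from the cloud
```

In production the same handler would sit behind the platform's own web server, which is exactly the burden GAE takes off developers.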

Salesforce.com
Customer relationship management (CRM) is the key feature of the Salesforce
cloud vendor, whose offerings are built on CRM cloud software systems.
Salesforce.com is used to manage sales and has key products including
Chatter, Work.com, Service Cloud, Salesforce1 Platform, Salesforce
Communities, ExactTarget Marketing Cloud, Pardot, and Sales Cloud.

The most popular product from Salesforce.com is Sales Cloud. This is a CRM
system that allows you to manage opportunities for your business, contacts,
leads and customers; forecast projected revenue; track customer cases and
the status of deals; and record feedback, problems and resolutions. Salesforce
Sales Cloud is only a tool to manage your sales process; you need to develop
processes suited to your unique business needs in order for it to work.
Platform-as-a-Service
Platform-as-a-Service offers a runtime environment for applications. It
also offers the development and deployment tools required to build
applications. PaaS features point-and-click tools that enable non-developers
to create web applications.

Google App Engine and Force.com are examples of PaaS vendors. Developers
can log on to these websites and use the built-in APIs to create web-based
applications.

The disadvantage of using PaaS is that the developer is locked in to a
particular vendor. For example, an application written in Python against
Google's API, using Google App Engine, is likely to work only in that
environment.

The following diagram shows how PaaS offers an API and development tools
to the developers and how it helps the end user to access business
applications.
Benefits
Following are the benefits of PaaS model:

Lower administrative overhead


The customer need not worry about administration, because it is the
responsibility of the cloud provider.

Lower total cost of ownership


The customer need not purchase expensive hardware, servers, power, or data
storage.

Scalable solutions
It is very easy to scale resources up or down automatically, based on
demand.

More current system software


It is the responsibility of the cloud provider to maintain software versions and
patch installations.
Issues
Like SaaS, PaaS also places a significant burden on the customer's browser to
maintain reliable and secure connections to the provider's systems.
Therefore, PaaS shares many of the issues of SaaS. However, there are some
specific issues associated with PaaS as shown in the following diagram:

Lack of portability between PaaS clouds


Although standard languages are used, the implementations of platform
services may vary. For example, the file, queue, or hash table interfaces of
one platform may differ from another's, making it difficult to transfer
workloads from one platform to another.

Event based processor scheduling


PaaS applications are event-oriented, which poses resource constraints on
applications: they have to answer a request within a given interval of time.
Security engineering of PaaS applications
Since PaaS applications are dependent on network, they must explicitly use
cryptography and manage security exposures.

Characteristics
Here are the characteristics of PaaS service model:

 PaaS offers a browser-based development environment. It allows the
developer to create databases and edit application code either via an
Application Programming Interface or via point-and-click tools.

 PaaS provides built-in security, scalability, and web service interfaces.

 PaaS provides built-in tools for defining workflows, approval processes, and
business rules.

 It is easy to integrate PaaS with other applications on the same platform.

 PaaS also provides web service interfaces that allow us to connect to
applications outside the platform.

PaaS providers
Amazon Web Services – Elastic Beanstalk
Elastic Beanstalk is for deploying and scaling web applications developed in
Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker. These run on
Apache servers as well as Nginx, Passenger and IIS.

One of the big benefits is that AWS is constantly adding new tools, so you are always
likely to have the latest tools to hand.

A handy feature for IaaS users is that they can also use PaaS to build apps;
this is part of an ongoing trend to blur the line between the two.

Salesforce

The PaaS options from Salesforce allow developers to build multi-tenant
applications. With Force.com, development is performed using nonstandard,
purpose-built tools and a development language called Apex.
Heroku has been around for a while; originally supporting the Ruby
programming language, it has gone on to add support for Java, Node.js,
Scala, Clojure, Python and PHP.

One of the downsides is that the number of add-ons varies, and so do the
load requirements; this can lead to cost fluctuations which make it difficult
to plan ahead.

Rackspace
The Rackspace Cloud is a set of cloud computing products and services
billed on a utility computing basis by the US-based
company Rackspace. Offerings include web
application hosting or platform as a service ("Cloud Sites"), cloud
storage ("Cloud Files"), virtual private servers ("Cloud Servers"), load
balancers, databases, backup, and monitoring.

Rackspace Cloud

Cloud Files
Cloud Files is a cloud hosting service that provides "unlimited online storage
and CDN" for media (examples given include backups, video files and user
content) on a utility computing basis. It was originally launched as Mosso
CloudFS as a private beta release on May 5, 2008 and is similar to Amazon
Simple Storage Service. Unlimited files of up to 5 GB each can be uploaded,
managed via the online control panel or RESTful API, and optionally served out
via Akamai Technologies' Content Delivery Network.
API
In addition to the online control panel, the service can be accessed over
a RESTful API with open source client code available
in C#/.NET, Python, PHP, Java, and Ruby. Rackspace-owned Jungle
Disk allows Cloud Files to be mounted as a local drive within
supported operating systems (Linux, Mac OS X, and Windows).
Security
Redundancy is achieved by replicating three full copies of data across multiple
computers in multiple "zones" within the same data center, where "zones" are
physically (though not geographically) separate and supplied separate power
and Internet services. Uploaded files can be distributed via Akamai
Technologies to "hundreds of endpoints across the world" which provides an
additional layer of data redundancy.
The control panel and API are protected by SSL and the requests themselves
are signed and can be safely delivered to untrusted clients. Deleted data is
zeroed out immediately.
Force.com
Force.com is a Platform as a Service (PaaS) product designed to simplify the
development and deployment of cloud-based applications and websites.
Developers can create apps and websites through the cloud IDE (integrated
development environment) and deploy them quickly to Force.com's
multi-tenant servers. Force.com is owned by Software as a Service (SaaS)
vendor Salesforce.com, which calls the product a social and mobile app
development platform.

Unit-4
Infrastructure-as-a-service
Infrastructure-as-a-Service provides access to fundamental resources
such as physical machines, virtual machines, virtual storage, etc. Apart from
these resources, IaaS also offers:

 Virtual machine disk storage

 Virtual local area networks (VLANs)

 Load balancers

 IP addresses

 Software bundles

All of the above resources are made available to end users via server
virtualization. Moreover, customers access these resources as if they
owned them.
Benefits
IaaS allows the cloud provider to locate the infrastructure anywhere on the
Internet in a cost-effective manner. Some of the key benefits of IaaS are
listed below:

 Full control of the computing resources through administrative access to VMs.

 Flexible and efficient renting of computer hardware.

 Portability, interoperability with legacy applications.

Full control over computing resources through


administrative access to VMs
IaaS allows the customer to access computing resources through
administrative access to virtual machines in the following manner:

 The customer issues administrative commands to the cloud provider to run a
virtual machine or to save data on the cloud server.

 The customer issues administrative commands to the virtual machines they own
to start a web server or to install new applications.
Flexible and efficient renting of computer hardware
IaaS resources such as virtual machines, storage devices, bandwidth, IP
addresses, monitoring services, firewalls, etc. are made available to the
customers on rent. The payment is based upon the amount of time the
customer retains a resource. Also with administrative access to virtual
machines, the customer can run any software, even a custom operating
system.

Portability, interoperability with legacy applications


It is possible to move legacy applications and workloads between IaaS
clouds. For example, network applications such as a web server or e-mail
server that normally run on customer-owned server hardware can also run
from VMs in an IaaS cloud.

Issues
IaaS shares issues with PaaS and SaaS, such as network dependence and
browser-based risks. It also has some specific issues, which are shown in
the following diagram:
Compatibility with legacy security vulnerabilities
Because IaaS allows the customer to run legacy software in the provider's
infrastructure, it exposes customers to all of the security vulnerabilities of
such legacy software.

Virtual Machine sprawl


A VM can become out of date with respect to security updates, because
IaaS allows the customer to keep virtual machines in running, suspended or
off states. The provider can update such VMs automatically, but this
mechanism is hard and complex.

Robustness of VM-level isolation


IaaS offers an isolated environment to individual customers through the
hypervisor, a software layer that, with hardware support for virtualization,
splits a physical computer into multiple virtual machines.

Data erase practices


The customer uses virtual machines that in turn use common disk
resources provided by the cloud provider. When the customer releases a
resource, the cloud provider must ensure that the next customer to rent the
resource does not observe data residue from the previous customer.
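One common data-erase practice is to overwrite a block before it is handed to the next tenant. The sketch below is a hypothetical illustration (the function, block size, and disk layout are invented for the example, not any provider's mechanism): releasing a block zeros it out rather than merely marking it free.

```python
# Hypothetical sketch of a data-erase practice: before a disk block is
# reassigned, it is overwritten with zeros so no data residue survives.
BLOCK_SIZE = 16

def release_block(disk, index):
    # Zero the block on release instead of merely marking it free.
    disk[index] = bytes(BLOCK_SIZE)

# A block still holding the previous tenant's data, padded to block size.
disk = {0: b"tenant-secret".ljust(BLOCK_SIZE, b"\x00")}
release_block(disk, 0)
print(disk[0])  # 16 zero bytes -- nothing left for the next customer
```

Production systems may instead encrypt each tenant's blocks and simply destroy the key on release, which achieves the same guarantee without rewriting every byte.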

Characteristics
Here are the characteristics of IaaS service model:

 Virtual machines with pre-installed software.

 Virtual machines with pre-installed operating systems such as Windows, Linux,


and Solaris.

 On-demand availability of resources.

 Allows copies of particular data to be stored at different locations.

 The computing resources can be easily scaled up and down.

Amazon EC2
Amazon Elastic Compute Cloud (Amazon EC2) is a web service that
provides secure, resizable compute capacity in the cloud. It is designed to
make web-scale cloud computing easier for developers.
Amazon EC2’s simple web service interface allows you to obtain and
configure capacity with minimal friction. It provides you with complete
control of your computing resources and lets you run on Amazon’s proven
computing environment. Amazon EC2 reduces the time required to obtain
and boot new server instances to minutes, allowing you to quickly scale
capacity, both up and down, as your computing requirements change.
Amazon EC2 changes the economics of computing by allowing you to pay
only for capacity that you actually use. Amazon EC2 provides developers
the tools to build failure resilient applications and isolate them from
common failure scenarios.

Benefits

ELASTIC WEB-SCALE COMPUTING

Amazon EC2 enables you to increase or decrease capacity within minutes, not
hours or days. You can commission one, hundreds, or even thousands of server
instances simultaneously. You can also use Amazon EC2 Auto Scaling to
maintain availability of your EC2 fleet and automatically scale your fleet up and
down depending on its needs in order to maximize performance and minimize
cost. To scale multiple services, you can use AWS Auto Scaling.
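The scaling decision such a service makes can be sketched with a simple proportional rule. This is an illustrative sketch, not the actual AWS Auto Scaling API or algorithm; the function name, target utilisation, and bounds are assumptions invented for the example.

```python
import math

# Illustrative auto-scaling rule (not the real AWS Auto Scaling API):
# resize the fleet so average CPU per instance stays near a target.
def desired_instances(current, avg_cpu, target=50.0, minimum=1, maximum=100):
    # Proportional rule: total load is current * avg_cpu; divide it by the
    # target per-instance utilisation and round up.
    desired = math.ceil(current * avg_cpu / target)
    # Clamp to the fleet's configured bounds.
    return max(minimum, min(maximum, desired))

print(desired_instances(4, 90.0))   # 8 -> scale up under heavy load
print(desired_instances(8, 20.0))   # 4 -> scale back down when load drops
```

Real auto-scaling services apply the same idea with extra safeguards, such as cooldown periods between adjustments, so that brief load spikes do not cause the fleet size to oscillate.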

COMPLETELY CONTROLLED

You have complete control of your instances including root access and the ability
to interact with them as you would any machine. You can stop any instance
while retaining the data on the boot partition, and then subsequently restart the
same instance using web service APIs. Instances can be rebooted remotely
using web service APIs, and you also have access to their console output.

FLEXIBLE CLOUD HOSTING SERVICES

You have the choice of multiple instance types, operating systems, and software
packages. Amazon EC2 allows you to select a configuration of memory, CPU,
instance storage, and the boot partition size that is optimal for your choice of
operating system and application. For example, choice of operating systems
includes numerous Linux distributions and Microsoft Windows Server.
INTEGRATED

Amazon EC2 is integrated with most AWS services such as Amazon Simple
Storage Service (Amazon S3), Amazon Relational Database Service (Amazon
RDS), and Amazon Virtual Private Cloud (Amazon VPC) to provide a complete,
secure solution for computing, query processing, and cloud storage across a
wide range of applications.

RELIABLE

Amazon EC2 offers a highly reliable environment where replacement instances


can be rapidly and predictably commissioned. The service runs within Amazon’s
proven network infrastructure and data centers. The Amazon EC2 Service Level
Agreement commitment is 99.99% availability for each Amazon EC2 Region.

SECURE

Cloud security at AWS is the highest priority. As an AWS customer, you will
benefit from a data center and network architecture built to meet the
requirements of the most security-sensitive organizations. Amazon EC2 works in
conjunction with Amazon VPC to provide security and robust networking
functionality for your compute resources.

INEXPENSIVE

Amazon EC2 passes on to you the financial benefits of Amazon’s scale. You
pay a very low rate for the compute capacity you actually consume.
See Amazon EC2 Instance Purchasing Options for more details.
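Pay-as-you-go billing is easy to make concrete. The figures below are hypothetical, not actual EC2 rates: a workload that only needs 400 instance-hours in a 720-hour month is billed for exactly those hours, while an always-on server accrues the full month.

```python
def on_demand_cost(hours_used, hourly_rate):
    """Metered billing: you pay only for the instance-hours you consume."""
    return round(hours_used * hourly_rate, 2)

# Hypothetical rate of $0.10/hour over a 720-hour month.
burst_workload = on_demand_cost(400, 0.10)   # billed for 400 hours only
always_on = on_demand_cost(720, 0.10)        # billed for the whole month
```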

EASY TO START

There are several ways to get started with Amazon EC2. You can use
the AWS Management Console, the AWS Command Line Tools (CLI),
or AWS SDKs. AWS is free to get started.

GoGrid
GoGrid is a California company that has been providing IaaS since 2008.
It is a company with longevity and a healthy turnover - not a hyperscale
player, but not a niche player either.

They have three data centers packed with lots of Intel hardware, a layer
of Xen virtualization and a layer of automation tools for customers. GoGrid
partners with other providers of Internet services to round out the
package. Edgecast is behind the CDN, Salesforce is hooked into support
functions, and Equinix provides some data center grunt. This combination of
components seems to put GoGrid right in the middle of the IaaS field.

In getting to know GoGrid IaaS, first we'll go through the sign-up steps and
create your first new virtual machine. Then we'll look at some of the
characteristics that differentiate GoGrid from other IaaS providers.

Going for a spin on GoGrid

Sign up to GoGrid

 Open a web browser.


 Go to the URL http://www.gogrid.com/. The GoGrid home page appears,
with a big Sign Up button.
 Click the Sign Up button. You are redirected to GoGrid's secure
sign-up form.
 Fill in the pages of information. GoGrid wants to know about you and
how you will pay, and checks that it has a working contact for you.
 Check your e-mail. A welcome message is waiting for you.

This is where sign-up self-service ends and GoGrid's customer service starts.
Getting started is a chore, so having real people offer to help you is good.
Microsoft Azure
Microsoft Azure, formerly known as Windows Azure, is Microsoft's public cloud
computing platform. It provides a range of cloud services, including those for
compute, analytics, storage and networking. Users can pick and choose from
these services to develop and scale new applications, or run existing
applications, in the public cloud.

Microsoft Azure is widely considered both a Platform as a Service


(PaaS) and Infrastructure as a Service (IaaS) offering.

Azure products and services

Microsoft sorts Azure services into several broad types:

Compute -- These services enable a user to deploy and manage
virtual machines (VMs), containers and batch processing, as well as
support remote application access.

Web -- These services support the development and deployment of
web applications, and also offer features for search, content
delivery, application programming interface (API) management,
notification and reporting.

Data storage -- This category of services provides scalable cloud
storage for structured and unstructured data and also supports big
data projects, persistent storage (for containers) and archival
storage.

Analytics -- These services provide distributed analytics and
storage, as well as features for real-time analytics, big data
analytics, data lakes, machine learning, business intelligence (BI),
internet of things (IoT) data streams and data warehousing.

Networking -- This group includes virtual networks, dedicated
connections and gateways, as well as services for traffic
management and diagnostics, load balancing, domain name system
(DNS) hosting, and network protection against distributed
denial-of-service (DDoS) attacks.

Media and content delivery network (CDN) -- These services
include on-demand streaming, digital rights protection, encoding
and media playback and indexing.

Hybrid integration -- These are services for server backup, site
recovery and connecting private and public clouds.

Identity and access management (IAM) -- These offerings
ensure only authorized users can access Azure services, and help
protect encryption keys and other sensitive information in the cloud.
Services include support for Azure Active Directory and multifactor
authentication (MFA).

Amazon EC2 service level agreement


This Amazon Compute Service Level Agreement (this “SLA”) is a policy
governing the use of the Included Products and Services (listed below) by you
or the entity you represent (“you”) under the terms of the AWS Customer
Agreement (the “AWS Agreement”) between Amazon Web Services, Inc. and
its affiliates (“AWS”, “us” or “we”) and you. This SLA applies separately to
each account using the Included Products and Services. Unless otherwise
provided herein, this SLA is subject to the terms of the AWS Agreement and
capitalized terms will have the meaning specified in the AWS Agreement. We
reserve the right to change the terms of this SLA in accordance with the AWS
Agreement.

Included Products and Services

 Amazon Elastic Compute Cloud (Amazon EC2)

 Amazon Elastic Block Store (Amazon EBS)

 Amazon Elastic Container Service (Amazon ECS)


 Amazon Fargate for Amazon ECS (Amazon Fargate)

Service Commitment

AWS will use commercially reasonable efforts to make the Included Products
and Services each available with a Monthly Uptime Percentage (defined below)
of at least 99.99%, in each case during any monthly billing cycle (the “Service
Commitment”). In the event any of the Included Products and Services do not
meet the Service Commitment, you will be eligible to receive a Service Credit
as described below.
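The Monthly Uptime Percentage behind this commitment is a simple calculation, sketched below. The check only decides whether the 99.99% commitment was missed; the actual Service Credit tiers are defined in the SLA itself, and the minute counts here are illustrative.

```python
def monthly_uptime_percentage(downtime_minutes, minutes_in_month=30 * 24 * 60):
    """Percentage of the monthly billing cycle the service was available."""
    return 100.0 * (minutes_in_month - downtime_minutes) / minutes_in_month

def service_credit_eligible(downtime_minutes, commitment=99.99):
    """Eligible for a Service Credit when uptime falls below the commitment."""
    return monthly_uptime_percentage(downtime_minutes) < commitment
```

A 99.99% commitment over a 43,200-minute month tolerates only about 4.3 minutes of downtime, so 5 minutes of downtime already makes the account eligible for a credit.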

Deployment model
As cloud technology provides users with so many benefits, these benefits
must be categorized based on user requirements. A cloud deployment
model represents the exact category of cloud environment based on
proprietorship, size, and access, and also describes the nature and purpose
of the cloud. Most organizations implement cloud infrastructure to minimize
capital expenditure and regulate operating costs. To know which deployment
model matches your requirements, it is necessary for users as well as
learners to understand the four sub-categories of deployment models.

These are:

 Public Cloud Model


 Private Cloud Model
 Hybrid Cloud Model
 Community Cloud Model

Public cloud model


Public Cloud is a type of cloud hosting that makes systems and their
services easily accessible to clients and users. Some examples of companies
that provide public cloud facilities are IBM, Google, Amazon and Microsoft.
This cloud service is open for public use. This type of cloud computing is a
true specimen of cloud hosting, where service providers render services to
many clients. From a technical point of view, there is little difference
between private clouds and public clouds in structural design; only the
security level differs, depending on the service provider and the type of
clients the cloud serves. The public cloud is well suited to business purposes
such as managing load. This type of cloud is economical due to the decrease
in capital overheads.

The advantages of the Public cloud are:

 Flexible
 Reliable
 Highly scalable
 Low cost
 Location independence

This type also has some disadvantages, such as:

 Less secure
 Less customizable

Private cloud model


Private Cloud, also termed the 'Internal Cloud', allows the accessibility
of systems and services within a specific boundary or organization. The cloud
platform is implemented in a secure cloud-based environment that is guarded
by advanced firewalls under the surveillance of the IT department of a
particular organization. Private clouds permit only authorized users, giving
organizations greater control over data and its security. Business
organizations that have dynamic, critical, secured, management-demand-
based requirements should adopt the Private Cloud.

The advantages of using a private cloud are:

 Highly private and secured: Private cloud resource sharing is highly


secured.
 Control Oriented: Private clouds provide more control over its resources
than public cloud as it can be accessed within the organization's boundary.

The Private cloud has the following disadvantages:

 Poor scalability: Private clouds can be scaled only within the limits of
internally hosted resources.
 Costly: As it provides secured and more features, so it's more expensive
than a public cloud.
 Pricing: is inflexible; i.e., purchasing new hardware for up-gradation is
more costly.
 Restriction: It can be accessed locally within an organization and is
difficult to expose globally.

Hybrid cloud model


Hybrid Cloud is another cloud computing type, which is integrated, i.e., it
can be a combination of two or more cloud servers - private, public or
community - combined as one architecture, though they remain individual
entities. Non-critical tasks such as development and test workloads can be
done using the public cloud, whereas critical tasks handling sensitive data,
such as organizational data, are done using a private cloud. The benefits of
both deployment models, as well as of a community deployment model, are
possible in hybrid cloud hosting. It can cross isolation and overcome
boundaries set by the provider; hence, it cannot be simply categorized into
any of the three deployments - public, private or community cloud.
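The placement rule described above can be sketched as a small routing function. The field names and the rule itself are hypothetical, but they capture the hybrid idea: sensitive or critical work stays on the private side, development and test traffic goes public.

```python
def place_workload(name, handles_sensitive_data, is_critical):
    """Decide which side of a hybrid cloud should run a workload."""
    if handles_sensitive_data or is_critical:
        return "private"
    return "public"
```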

Advantages of Hybrid Cloud Computing are:

 Flexible
 Secure
 Cost Effective
 Rich Scalable

Disadvantages of Hybrid Cloud are:

 Complex networking problems
 Organizational security compliance

Community cloud model


Community Cloud is a type of cloud computing in which the cloud setup is
shared mutually among different organizations that belong to the same
community or area. An example of such a community is one where
organizations and firms operate alongside financial institutions and banks.
It is a multi-tenant setup, developed using the cloud, among different
organizations that belong to a particular community or group with similar
computing concerns. For joint business organizations, ventures, research
organizations and tenders, the community cloud is the appropriate solution.
Selection of the right type of cloud hosting is essential in this case. Thus,
community-based cloud users need to know and analyze the business
demand first.

Unit-5
Virtualization
The term 'Virtualization' can be used in many respects of computing. It is
the process of creating a virtual environment of something, which may
include hardware platforms, storage devices, operating systems, network
resources, etc. The cloud's virtualization mainly deals with server
virtualization: how it works and why it is termed so.

Defining Virtualization

Virtualization is the ability to share the physical instance of a


single application or resource among multiple organizations or users. This
is done by logically assigning a name to each physical resource and
providing a pointer to that physical resource on demand.

Over an existing operating system and hardware, we generally create a


virtual machine, and above it we run other operating systems or applications.
This is called Hardware Virtualization. The virtual machine provides a
separate environment that is logically distinct from its underlying hardware.
Here, the system or the machine is the host and the virtual machine is the
guest. This virtual environment is managed by firmware, which is
termed a hypervisor.
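The logical-naming idea in the definition above can be shown with a toy directory: guests refer to resources by logical name, and the mapping to a physical resource can be changed on demand without the guest noticing. This is an illustration of the concept (with made-up device names), not any real hypervisor's data structure.

```python
class ResourceDirectory:
    """Map logical resource names to whichever physical resource backs them."""

    def __init__(self):
        self._backing = {}

    def assign(self, logical_name, physical_resource):
        self._backing[logical_name] = physical_resource

    def resolve(self, logical_name):
        return self._backing[logical_name]

directory = ResourceDirectory()
directory.assign("vm1-disk", "san-array-03:lun7")  # hypothetical device
directory.assign("vm1-disk", "san-array-05:lun2")  # remapped on demand; the guest's name is unchanged
```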

Figure - The Cloud's Virtualization:


There are several approaches or ways to virtualize cloud servers.

These are:

 Grid Approach: where the processing workloads are distributed among


different physical servers, and their results are then collected as one.
 OS-Level Virtualization: Here, multiple instances of an application can run
in isolated form on a single OS.
 Hypervisor-based Virtualization: which is currently the most widely used
technique

With hypervisor-based virtualization, there are various sub-approaches to


fulfill the goal of running multiple applications and other loads on a single
physical host. One technique allows virtual machines to move from one host
to another without any need to shut down; this technique is termed "Live
Migration". Another technique actively load-balances among multiple hosts
to efficiently utilize the resources available to the virtual machines; this
concept is termed Distributed Resource Scheduling or Dynamic Resource
Scheduling.
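Dynamic Resource Scheduling boils down to a placement decision like the one below: put the next virtual machine on the host with the most free capacity. This is a hypothetical sketch; real schedulers also weigh memory, affinity rules and migration cost.

```python
def choose_host(hosts):
    """Pick the least-loaded host for the next virtual machine."""
    return min(hosts, key=lambda host: host["load"])

hosts = [
    {"name": "host-a", "load": 0.82},
    {"name": "host-b", "load": 0.31},
    {"name": "host-c", "load": 0.57},
]
```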
Types of Virtualization

The virtualization of cloud has been categorized into four different types
based on their characteristics. These are:

 Hardware Virtualization
o Full Virtualization
o Emulation Virtualization
o Para-virtualization
 Software Virtualization
 OS Virtualization
 Server Virtualization
 Storage Virtualization

How Virtualization Works in Cloud

Virtualization plays a significant role in cloud technology and its working


mechanism. Usually, in the cloud, users not only share the data located in
the cloud, like an application, but also share its infrastructure, with the
help of virtualization. Virtualization is mainly used to provide standard
versions of applications to cloud customers; when the latest version of an
application is released, the provider can efficiently deliver it to the cloud
and its users, and this is possible only using virtualization. Through this
virtualization concept, all the servers and software that cloud providers
require are maintained by a third party, whom the cloud provider pays on a
monthly or yearly basis.

In reality, most of today's hypervisors make use of a combination of


different types of hardware virtualization. Mainly, virtualization means
running multiple systems on a single machine while sharing all the hardware
resources, and it helps share IT resources to gain benefit in the business
field.
Difference Between Virtualization and Cloud

 Essentially there is a gap between these two terms, though cloud


technology requires the concept of virtualization. Virtualization is a
technology - it can also be treated as software that can manipulate
hardware. Whereas cloud computing is a service which is the result of the
manipulation.
 Virtualization is the foundation element of cloud computing whereas Cloud
technology is the delivery of shared resources as a service-on-demand via
the internet.
 Cloud is essentially built on the concept of virtualization.

Advantages of Virtualization

 The number of servers is reduced by the use of the virtualization concept


 Improves the capability of the technology
 Business continuity is also improved due to the use of virtualization
 It creates a mixed virtual environment
 Increases efficiency for development and test environments
 Lowers Total Cost of Ownership (TCO)

Features of Virtualization

 Partitioning: Multiple virtual servers can run on a physical server at the


same time
 Encapsulation of data: All data on the virtual server including boot disks is
encapsulated in a file format
 Isolation: The virtual servers running on the physical server are safely
separated and don't affect each other
 Hardware Independence: When a virtual server runs, it can be migrated to
a different hardware platform

Hardware virtualization
It is the abstraction of computing resources from the software that uses
cloud resources. It involves embedding virtual machine software into the
server's hardware components. That software is called the hypervisor. The
hypervisor manages the shared physical hardware resources between the
guest OS & the host OS. The abstracted hardware is represented as actual
hardware. Virtualization means abstraction & hardware virtualization is
achieved by abstracting the physical hardware part using Virtual Machine
Monitor (VMM) or hypervisor. Hypervisors rely on command set extensions
in the processors to accelerate common virtualization activities for boosting
the performance. The term hardware virtualization is used when VMM or
virtual machine software or any hypervisor gets directly installed on the
hardware system. The primary tasks of the hypervisor are process
monitoring and memory and hardware control. After hardware virtualization is
done, different operating systems can be installed, and various applications
can run on it. Hardware virtualization, when done for server platforms, is
also called server virtualization.

The benefits of hardware virtualization are a decrease in the overall cost
for cloud users and an increase in flexibility.
The advantages are:

 Lower Cost: Because of server consolidation, the cost decreases; it is now
possible for multiple operating systems to coexist on a single hardware
platform. This minimizes the amount of rack space, reduces the number of
servers and eventually drops the power consumption.
 Efficient resource utilization: Physical resources can be shared among
virtual machines. The unused resources allocated by one virtual machine
can be used by another virtual machine in case of any need.
 Increased IT flexibility: The quick deployment of hardware resources
became possible using virtualization, and the resources can also be
managed consistently.
 Advanced hardware virtualization features: With the advancement of
modern hypervisors, highly complex operations maximize the abstraction of
hardware and ensure maximum uptime; this technique helps to migrate an
ongoing virtual machine from one host to another dynamically.
Types of Hardware Virtualization

Hardware virtualization is of three kinds.


These are:

 Full Virtualization: Here the hardware architecture is completely


simulated. Guest software doesn't need any modification to run any
applications.
 Emulation Virtualization: Here the virtual machine simulates the
hardware & is independent. Furthermore, the guest OS doesn't require any
modification.
 Para-Virtualization: Here, the hardware is not simulated; instead, the
guest software runs its own isolated system.

Software virtualization
It is also called application virtualization and is the practice of running
software from a remote server. Software virtualization is similar to hardware
virtualization, except that it abstracts the software installation procedure
and creates virtual software installations. Installing and distributing many
applications became a typical task for IT firms and departments, and the
mechanism for installing each application differs. So virtualized software
was introduced: an application that is installed into its own self-contained
unit and provides software virtualization. Some examples are VirtualBox,
VMware, etc.

The DLL (Dynamic Link Library) redirects the entire virtualized program's
calls to the file system of the server. When the software is run from the
server in this procedure, no changes are required on the local system.

Advantages of Software Virtualization

 Ease of Client Deployment: Virtual software makes it easy to link a file in a


network or file copying to the workstation.
 Software Migration: Before the concept of virtualization, shifting from one
software platform to another was time-consuming and had a significant
impact on end users. The software virtualization environment makes
migration easier.
 Easy to Manage: Application updates become a simple task.

Server Virtualization
It is the division of a physical server into several virtual servers, done
mainly to improve the utilization of server resources. In other words, it is
the masking of server resources, including the number and identity of
processors, physical servers and operating systems. This division of one
physical server into multiple isolated virtual servers is done by a server
administrator using software. The virtual environment is sometimes called a
virtual private server.

In this process, the server resources are kept hidden from the user. This
partitioning of the physical server into several virtual environments results
in the dedication of one server to performing a single application or task.
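The partitioning described above can be sketched as bookkeeping over a fixed pool of CPU and RAM: each virtual server gets a dedicated slice, and a request exceeding the remaining physical capacity is refused. This is an illustration of the idea, not a real hypervisor.

```python
class PhysicalServer:
    """Carve one physical server into isolated virtual servers."""

    def __init__(self, cpus, ram_gb):
        self.free_cpus = cpus
        self.free_ram_gb = ram_gb
        self.virtual_servers = {}

    def carve(self, name, cpus, ram_gb):
        """Dedicate a slice of the physical resources to one virtual server."""
        if cpus > self.free_cpus or ram_gb > self.free_ram_gb:
            raise ValueError("insufficient physical capacity")
        self.free_cpus -= cpus
        self.free_ram_gb -= ram_gb
        self.virtual_servers[name] = {"cpus": cpus, "ram_gb": ram_gb}

server = PhysicalServer(cpus=16, ram_gb=64)
server.carve("web", cpus=4, ram_gb=8)   # one virtual server per task
server.carve("db", cpus=8, ram_gb=32)
```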

Usage of Server Virtualization

This technique is mainly used in web servers, as it reduces the cost of web-
hosting services. Instead of having a separate system for each web server,
multiple virtual servers can run on the same system/computer.

The primary uses of server virtualization are:

 To centralize the server administration


 Improve the availability of server
 Helps in disaster recovery
 Ease in development & testing
 Make efficient use of server resources.

Approaches To Virtualization:

For Server Virtualization, there are three popular approaches.


These are:

 Virtual Machine model


 Para-virtual Machine model
 Operating System (OS) layer Virtualization

Server virtualization can be viewed as part of an overall virtualization trend


in IT companies that includes network virtualization, storage virtualization
and workload management. This trend brings development in autonomic
computing. Server virtualization can also be used to eliminate server sprawl
(server sprawl is a situation in which many under-utilized servers take up
more space or consume more resources than can be justified by
their workload) and uses server resources efficiently.

 Virtual Machine model: is based on the host-guest paradigm, where each


guest runs on a virtual replica of the hardware layer. This technique of
virtualization allows the guest OS to run without modification. However, it
requires real computing resources from the host, and a hypervisor or VMM
is required to coordinate instructions to the CPU.
 Para-Virtual Machine model: is also based on the host-guest paradigm and
uses a virtual machine monitor. In this model the VMM modifies the guest
operating system's code, which is called 'porting'. Like the virtual machine
model, the para-virtual machine model is capable of executing multiple
operating systems. The para-virtual model is used by both Xen and UML.
 Operating System Layer Virtualization: Virtualization at the OS level
functions in a different way and is not based on the host-guest paradigm. In
this model the host runs a single operating system kernel as its core and
transfers its functionality to each of the guests. The guest must use the
same operating system as the host. This distributed nature of the
architecture eliminates system calls between layers and hence reduces CPU
overhead. Each partition must also remain strictly isolated from its
neighbors, so that any failure or security breach of one partition cannot
affect the others.
Advantages of Server Virtualization

 Cost Reduction: Server virtualization reduces cost because less hardware is


required.
 Independent Restart: Each server can be rebooted independently and that
reboot won't affect the working of other virtual servers.

Storage Virtualization
It pools the physical storage from different network storage devices and
makes it appear to be a single storage unit that is handled from a single
console. As we all know there has been a strong bond between physical host
& locally installed storage device; and with the change in paradigm, local
storage is no longer needed. More advanced storage has come to the market
with an increase in functionality. Storage virtualization is the significant
component of storage servers & facilitates management and monitoring of
storage in a virtualized environment.

Storage virtualization helps the storage administrator to back up, archive


and recover data more efficiently, in less time, by masking the actual
complexity of a SAN (Storage Area Network). Through the use of software or
hybrid appliances, the storage administrator can implement virtualization.
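At its core, storage virtualization is an address-translation layer: the administrator sees one logical block range, and the pool translates each logical block to a (device, offset) pair on some physical device. A minimal sketch with hypothetical device names:

```python
class StoragePool:
    """Present several physical devices as a single logical block space."""

    def __init__(self, devices):
        # devices: ordered list of (device_name, size_in_blocks)
        self.devices = devices
        self.total_blocks = sum(size for _, size in devices)

    def locate(self, logical_block):
        """Translate a logical block number to (device, physical offset)."""
        if not 0 <= logical_block < self.total_blocks:
            raise IndexError("block outside the pool")
        for name, size in self.devices:
            if logical_block < size:
                return name, logical_block
            logical_block -= size

pool = StoragePool([("disk-a", 100), ("disk-b", 50)])
```

Blocks 0-99 land on disk-a and blocks 100-149 continue on disk-b, yet the console sees one 150-block unit.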

Importance of Storage Virtualization

Storage virtualization is becoming more and more important in different


forms such as:

 Storage Tiering: Using the storage technique as a bridge or stepping


stone, this technique analyzes and selects the most commonly used data
and places it in the highest-performing storage pool, and the least used
data in the lowest-performing storage pool.
 WAN Environment: Instead of sending multiple copies of the same data
over the WAN, a WAN accelerator is used to cache the data locally and
present it at LAN speed, without impacting WAN performance.
 SAN Storage: SAN technology presents the storage as block-level storage,
and the storage is presented over the Ethernet network to the OS.
 File Server: OS writes the data to a remote server location to keep it
separate and secure from local users.

Benefits of Storage Virtualization

 Data is stored in a convenient location, so that a host failure does not


necessarily compromise the data.
 By using storage level abstraction, it becomes flexible how storage is
provided, protected, partitioned and used.
 Storage Devices are capable of performing advanced functions such as
disaster recovery, duplication, replication of data & re-duplication of data.

Operating system virtualization


As in cloud technology, where virtualization plays an important role in
making things easy and efficient, virtualization also needs to be done at the
OS level. With a virtualized OS, nothing needs to be pre-installed or
permanently loaded on the local storage device. Everything runs from the
network using a virtual disk; that virtual disk is a disk image (file) stored
remotely on a server, i.e., on a Storage Area Network (SAN) or Network-
Attached Storage (NAS).

Defining Operating System Virtualization

It is also called OS-level virtualization and is a type of virtualization


technology that works at the OS layer. Here the kernel of an OS allows more
than one isolated user-space instance to exist. Such instances are called
containers, software containers or virtualization engines. In other words,
the OS kernel runs a single operating system and replicates that operating
system's functionality on each of the isolated partitions.

Uses of OS Virtualization

 Used for virtual hosting environment.


 Used for the secure allocation of finite hardware resources among a large
number of mutually distrusting users.
 System administrators use it to consolidate server hardware by moving
services on separate hosts into one system.
 To improve security by separating several applications into several
containers.
 These forms of virtualization don't require hardware to work efficiently.

How OS Virtualization Works

The steps for how these virtualization works are listed below:

 Connect to OS Virtualization Server


 Connect to virtual disk
 Then connect this virtual disk to the client
 OS is streamed to the client
 If further additional streaming is required, it is done
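The steps above can be sketched as on-demand block fetching: the client boots from blocks it pulls over the network, caching each one so that further streaming only fetches what is still missing. A hypothetical model, not a real streaming protocol.

```python
def stream_blocks(virtual_disk, client_cache, needed_blocks):
    """Fetch only the OS image blocks the client does not already hold."""
    fetched = []
    for block_id in needed_blocks:
        if block_id not in client_cache:
            client_cache[block_id] = virtual_disk[block_id]
            fetched.append(block_id)
    return fetched

virtual_disk = {0: "bootloader", 1: "kernel", 2: "drivers"}  # illustrative image
cache = {}
first_boot = stream_blocks(virtual_disk, cache, [0, 1])
additional = stream_blocks(virtual_disk, cache, [1, 2])  # block 1 is already cached
```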

Advantages of OS Virtualization

 OS virtualization usually imposes little or no overhead.


 OS Virtualization is capable of live migration
 It can also use dynamic load balancing of containers between nodes and a
cluster.
 The file-level copy-on-write (CoW) mechanism is possible with OS
virtualization, which makes it easier to back up files and is more space-
efficient and simpler to cache than block-level copy-on-write schemes.

Virtual Disks in OS Virtualization

The client will be connected via the network to the virtual disk & will boot the
OS installed on the virtual disk. Two types of virtual disks are available for
implementation.

These are:

 Private Virtual Disk: is used by one client only, like a local hard disk.
Users can save information on the virtual disk based on the rights assigned.
So when the client restarts the system, the settings are retained, just like
working with a physical local hard disk.
 Shared/Common Virtual Disk: It is used by multiple clients at the same time.
The changes are saved in a special cache, and these caches get cleared as the
user restarts or shuts down the system. In other words, when a client is
booting up, it will use the default configuration available on the virtual disk.
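The difference between the two disk types is what happens to writes on restart, which the sketch below makes concrete: a private disk keeps the client's writes, while a shared disk discards its per-client cache and falls back to the default configuration. Class and key names here are hypothetical.

```python
class PrivateVirtualDisk:
    """One client; writes persist across restarts, like a local hard disk."""

    def __init__(self, defaults):
        self.data = dict(defaults)

    def write(self, key, value):
        self.data[key] = value

    def read(self, key):
        return self.data[key]

    def restart(self):
        pass  # nothing is discarded


class SharedVirtualDisk:
    """Many clients; writes go to a cache that is cleared on restart."""

    def __init__(self, defaults):
        self.defaults = dict(defaults)
        self.cache = {}

    def write(self, key, value):
        self.cache[key] = value

    def read(self, key):
        return self.cache.get(key, self.defaults[key])

    def restart(self):
        self.cache.clear()  # next boot sees the default configuration


private = PrivateVirtualDisk({"wallpaper": "default"})
shared = SharedVirtualDisk({"wallpaper": "default"})
private.write("wallpaper", "blue")
shared.write("wallpaper", "blue")
private.restart()
shared.restart()
```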

Desktop virtualization
Desktop virtualization provides a way for users to maintain their individual
desktops on a single, central server. The users may be connected to the
central server through a LAN, WAN or over the Internet.
Desktop virtualization has many benefits, including a lower total cost of
ownership (TCO), increased security, reduced energy costs, reduced
downtime and centralized management.
Limitations of desktop virtualization include difficulty in maintenance and set
up of printer drivers; increased downtime in case of network failures;
complexity and costs involved in VDI deployment and security risks in the
event of improper network management.
Network Virtualization
Network virtualization involves dividing available bandwidth into independent
channels, which are assigned, or reassigned, in real time to separate servers or
network devices.

Network virtualization is accomplished by using a variety of hardware and


software and combining network components. Software and hardware vendors
combine components to offer external or internal network virtualization. The
former combines local networks, or subdivides them into virtual networks, while
the latter configures single systems with containers, creating a network in a box.
Still other software vendors combine both types of network virtualization.

Data Virtualization
Many organizations run multiple types of database management systems,
such as Oracle and SQL servers, which do not work well with one another.
Therefore, enterprises face new challenges in data integration and storage of
huge amounts of data. With data virtualization, business users are able to
get real-time and reliable information quickly, which helps them to take
major business decisions.
The process of data virtualization involves abstracting, transforming,
federating and delivering data from disparate sources. The main goal of data
virtualization technology is to provide a single point of access to the data by
aggregating it from a wide range of data sources. This allows users to access
the applications without having to know their exact location.
The most recent implementation of the data virtualization concept is in cloud
computing technology.
Data virtualization software is often used in tasks such as:

 Data integration
 Business integration
 Service-oriented architecture data services
 Enterprise search

Some of the capabilities of data virtualization include:

 Abstraction of technical aspects of stored data, such as:


o Application programming interface
o Access language
o Location
o Storage structure
 Connection to disparate data sources and the ability to make data
accessible from a single place
 Data transformation, quality improvement and integration of data,
depending on the business requirements
 Ability to combine the data result sets across multiple sources (also
known as data federation)
 Ability to deliver the data as requested by users
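As a rough sketch of these capabilities, the following illustrative example federates result sets from two disparate sources (an in-memory stand-in for an application source and a SQLite table) behind a single point of access. All names here are hypothetical, not a real data-virtualization API:

```python
# Hedged sketch of data federation: combine rows from two sources and
# deliver them through one function, regardless of where the data lives.
import sqlite3

def fetch_from_api():
    # Stand-in for a remote application data source
    return [{"id": 3, "name": "Chen"}]

def fetch_from_sql():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
    conn.executemany("INSERT INTO customers VALUES (?, ?)",
                     [(1, "Anand"), (2, "Rao")])
    rows = conn.execute("SELECT id, name FROM customers").fetchall()
    conn.close()
    return [{"id": r[0], "name": r[1]} for r in rows]

def federated_query():
    """Single point of access: combine result sets across sources."""
    combined = fetch_from_sql() + fetch_from_api()
    return sorted(combined, key=lambda row: row["id"])

print(federated_query())
# [{'id': 1, 'name': 'Anand'}, {'id': 2, 'name': 'Rao'}, {'id': 3, 'name': 'Chen'}]
```

The caller never needs to know which rows came from which backing store, which is the "single point of access" idea described above.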

Memory Virtualization
Memory virtualization allows networked, and therefore distributed, servers
to share a pool of memory to overcome physical memory limitations, a
common bottleneck in software performance. With this capability integrated
into the network, applications can take advantage of a very large amount of
memory to improve overall performance and system utilization, increase
memory-usage efficiency, and enable new use cases.
Software on the memory pool nodes (servers) allows nodes to connect to
the memory pool to contribute memory, and store and retrieve data.
Management software and memory-overcommitment technologies manage shared
memory, data insertion, eviction and provisioning policies, and data
assignment to contributing nodes, and handle requests from client nodes.
The memory pool may be accessed at the
application level or operating system level. At the application level, the pool
is accessed through an API or as a networked file system to create a high-
speed shared memory cache. At the operating system level, a page cache
can utilize the pool as a very large memory resource that is much faster
than local or networked storage.
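The application-level pool described above can be sketched as a toy simulation: nodes contribute capacity, clients store and retrieve data, and an eviction policy (here, LRU) makes room when the pool is full. This is illustrative only, not a real memory-virtualization API:

```python
# Illustrative sketch of an application-level shared memory pool with
# contribution from nodes and least-recently-used (LRU) eviction.
from collections import OrderedDict

class MemoryPool:
    def __init__(self):
        self.capacity = 0             # total contributed slots
        self.cache = OrderedDict()    # key -> value, kept in LRU order

    def contribute(self, slots):
        """A node joins the pool and contributes memory (slot count)."""
        self.capacity += slots

    def put(self, key, value):
        if key in self.cache:
            self.cache.move_to_end(key)
        elif len(self.cache) >= self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        self.cache[key] = value

    def get(self, key):
        if key not in self.cache:
            return None
        self.cache.move_to_end(key)         # mark as recently used
        return self.cache[key]

pool = MemoryPool()
pool.contribute(1)          # two nodes each contribute one slot
pool.contribute(1)
pool.put("a", 1)
pool.put("b", 2)
pool.get("a")               # "a" becomes most recently used
pool.put("c", 3)            # pool full: evicts "b", the LRU entry
print(pool.get("b"))        # None
print(pool.get("a"))        # 1
```

A real implementation would distribute the data across the contributing nodes over the network; the sketch only shows the pooling and eviction logic.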
Memory virtualization implementations are distinguished from shared
memory systems. Shared memory systems do not permit abstraction of
memory resources, thus requiring implementation with a single operating
system instance (i.e. not within a clustered application environment).
Memory virtualization is also different from storage based on flash memory,
such as solid-state drives (SSDs): SSDs and similar technologies replace
hard drives (networked or otherwise), while memory virtualization replaces
or complements traditional main memory (RAM).

Microsoft Hyper V
Microsoft could not ignore the virtualization trend. It introduced Hyper-V
as a virtualization platform in 2008 and has continued to release new Hyper-V
versions with new Windows Server releases. So far there have been four
versions, shipped with Windows Server 2012 R2, Windows Server 2012, Windows
Server 2008 R2 and Windows Server 2008.
Since Hyper-V’s debut, it has always been a Windows Server feature, which could
be installed whenever a server administrator decided to do so. It’s also available
as a separate product called Microsoft Hyper-V Server. Basically, Microsoft
Hyper-V Server is a standalone, stripped-down version of Windows Server in
which Microsoft cut out everything irrelevant to virtualization, including
many services and the Graphical User Interface (GUI), to make the server as
small as possible. Without the bells and whistles, the server requires less
maintenance time and is less vulnerable because, for example, fewer
components mean less patching.
Hyper-V is a hybrid hypervisor: it is installed from within the OS (via the
Windows Add Roles wizard), but during installation it restructures the OS
architecture so that the hypervisor becomes a layer running directly on the
physical hardware.
VMware features and infrastructure

VMware is a virtualization and cloud computing software provider
based in Palo Alto, California. Founded in 1998, VMware is a subsidiary of
Dell Technologies: EMC Corporation acquired VMware in 2004, and EMC was in
turn acquired by Dell Technologies in 2016. VMware bases its virtualization
technologies on its bare-metal hypervisor ESX/ESXi for the x86 architecture.
With VMware server virtualization, a hypervisor is installed on the physical
server to allow multiple virtual machines (VMs) to run on the same physical
server. Each VM can run its own operating system (OS), which means multiple
OSes can run on one physical server. All of the VMs on the same physical
server share resources, such as networking and RAM.

VMware Infrastructure
VMware Infrastructure is a full infrastructure virtualization suite that
provides comprehensive virtualization, management, resource optimization,
application availability, and operational automation capabilities in an
integrated offering. VMware Infrastructure virtualizes and aggregates the
underlying physical hardware resources across multiple systems and provides
pools of virtual resources to the datacenter in the virtual environment.
In addition, VMware Infrastructure provides a set of distributed services
that enable fine-grained, policy-driven resource allocation, high
availability, and consolidated backup of the entire virtual datacenter.
These distributed services enable an IT organization to establish and meet
production Service Level Agreements with their customers in a cost-effective
manner.
The relationships among the various components of the VMware Infrastructure
are shown in Figure 1-1.
Figure 1-1. VMware Infrastructure
VMware Infrastructure includes the following components shown in Figure 1-
1:
VMware ESX Server – A robust, production-proven virtualization layer run
on physical servers that abstracts processor, memory, storage, and
networking resources into multiple virtual machines.
VirtualCenter Management Server (VirtualCenter Server) – The central
point for configuring, provisioning, and managing virtualized IT environments.
Virtual Infrastructure Client (VI Client) – An interface that allows users
to connect remotely to the VirtualCenter Server or individual ESX Servers from
any Windows PC.
Virtual Infrastructure Web Access (VI Web Access) – A Web interface
that allows virtual machine management and access to remote consoles.
VMware Virtual Machine File System (VMFS) – A high-performance
cluster file system for ESX Server virtual machines.
VMware Virtual Symmetric Multi-Processing (SMP) – Feature that
enables a single virtual machine to use multiple physical processors
simultaneously.
VMware VMotion – Feature that enables the live migration of running virtual
machines from one physical server to another with zero downtime, continuous
service availability, and complete transaction integrity.
VMware HA – Feature that provides easy-to-use, cost-effective high
availability for applications running in virtual machines. In the event of server
failure, affected virtual machines are automatically restarted on other
production servers that have spare capacity.
VMware Distributed Resource Scheduler (DRS) – Feature that allocates
and balances computing capacity dynamically across collections of hardware
resources for virtual machines.
VMware Consolidated Backup (Consolidated Backup) – Feature that
provides an easy-to-use, centralized facility for agent-free backup of virtual
machines. It simplifies backup administration and reduces the load on ESX
Servers.
VMware Infrastructure SDK – Feature that provides a standard interface
for VMware and third-party solutions to access the VMware Infrastructure.
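DRS-style dynamic balancing can be illustrated with a toy greedy placement heuristic. This is purely illustrative; it is not VMware's actual DRS algorithm, and all names are hypothetical:

```python
# Toy illustration of DRS-style placement: assign each VM to the host
# with the most spare capacity, largest VMs first.

def place_vms(hosts, vms):
    """hosts: {name: capacity}; vms: {name: demand}.
    Returns {vm: host}."""
    free = dict(hosts)
    placement = {}
    for vm, demand in sorted(vms.items(), key=lambda kv: -kv[1]):
        # Pick the host with the most free capacity that still fits.
        host = max(free, key=free.get)
        if free[host] < demand:
            raise RuntimeError(f"no host can fit {vm}")
        placement[vm] = host
        free[host] -= demand
    return placement

hosts = {"esx1": 16, "esx2": 16}
vms = {"web": 8, "db": 6, "cache": 4}
print(place_vms(hosts, vms))
# {'web': 'esx1', 'db': 'esx2', 'cache': 'esx2'}
```

Placing the largest VMs first and always choosing the least-loaded host keeps the load spread across the hardware pool, which is the balancing goal DRS automates.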

Virtual box
VirtualBox (VB) is a software virtualization package that installs on an
operating system as an application. VirtualBox allows additional operating
systems to be installed on it, as guest OSes, and run in a virtual
environment. In 2010, VirtualBox was the most popular virtualization
software application. Supported operating systems include Windows XP,
Windows Vista, Windows 7, macOS, Linux, Solaris, and OpenSolaris.
VirtualBox was originally developed by Innotek GmbH and released in 2007
as an open-source software package. The company was later purchased by
Sun Microsystems. Oracle Corporation now develops the software package
and titles it Oracle VM VirtualBox.
Thin Client
Thin clients are a useful addition to any organization with a cloud computing
setup. They can also allow for added security and control over corporate and
proprietary information. Thin clients can also be a fantastic tool to save
money. They do not require a full and robust machine for each user. What is
a thin client, and how does the cloud work with one?

What is a Thin Client?

Thin Client Defined

A thin client is a lightweight computer that is purpose-built for remoting
into a server, typically a cloud or desktop virtualization environment. It
depends heavily on another computer, the server, to fulfill its
computational roles.

Note that a thin client requires some form of cloud computing or desktop
virtualization environment.


Thin Clients For Business

A thin client can be a useful tool for any company with a cloud computing
setup. It offers businesses unique advantages in security, control, and
cost. The ability to deliver a desktop experience without storing data
locally is invaluable for business owners.

Advantages of Thin Clients

o Low Operational Costs


A single server unit can serve several workstations in an office, thereby
reducing operational costs. Thin clients are quick to set up and have a much
longer lifespan than desktop PCs, reducing costs further. Their energy
efficiency lowers costs even more.
o Increased Security
Users access the server only over network connections. Different users
can have different access levels, so users with lower access levels
cannot reach confidential company files. All files are stored and secured
on the server, which also protects the data in the event of a natural
disaster at a workstation.
o Lower Infection Risk
Getting malware onto the server is unlikely, because client input consists
only of keyboard and mouse events, and output consists of screen images.
The clients get their software and programs from the server itself, so
patches and software updates are applied centrally on the server. The
server also does the processing and stores the resulting information.
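The division of labor described above (clients send only input events; the server does all processing and returns screen images) can be sketched as follows. The class and method names are illustrative:

```python
# Hedged sketch of the thin-client model: all state and processing live
# on the server; the client only forwards input and displays output.

class Server:
    def __init__(self):
        self.document = ""          # all state lives on the server

    def handle_input(self, event):
        """Process an input event and return the updated screen image."""
        if event["type"] == "keypress":
            self.document += event["key"]
        return f"[screen] {self.document}"

class ThinClient:
    def __init__(self, server):
        self.server = server        # no local storage or processing

    def type_key(self, key):
        # Only the raw input event leaves the client.
        return self.server.handle_input({"type": "keypress", "key": key})

server = Server()
client = ThinClient(server)
client.type_key("h")
print(client.type_key("i"))   # [screen] hi
```

Because the client holds no data and runs no application code, compromising it exposes nothing, which is the security property the section describes.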

Disadvantages of Thin Clients

 Companies Using Thin Clients are Subject to Limitations


Rich media access is usually disabled, since thin clients do their
processing at the server. Performance suffers when many users access
multimedia simultaneously, and heavy applications such as video streaming
can slow the server. Companies that rely on video conferencing will see
presentations and video communication affected.
 Superior Network Connection Needed
A network with latency issues can greatly affect thin clients, even
rendering the units unusable, because the server cannot transmit processed
output to the clients smoothly.
