Cloud Computing Notes For BSC and BCA
Unit II
Unit III
Cloud architecture: Cloud delivery model – SPI framework, SPI evolution, SPI vs. traditional IT model
Software as a Service (SaaS): SaaS service providers – Google App Engine, Salesforce.com and the Google platform – Benefits – Operational benefits – Economic benefits – Evaluating SaaS
Unit IV
Cloud deployment model : Public clouds – Private clouds – Community clouds - Hybrid
clouds - Advantages of Cloud computing
Unit V
Front End
The front end refers to the client part of the cloud computing system. It consists of the interfaces and applications, such as web browsers, that are required to access the cloud platform.
Back End
The back end refers to the cloud itself. It consists of all the resources required to provide cloud computing services: huge data storage, virtual machines, security mechanisms, services, deployment models, servers, etc.
Cloud Infrastructure
Cloud infrastructure consists of servers, storage devices, network, cloud
management software, deployment software, and platform virtualization.
Hypervisor
A hypervisor is firmware or a low-level program that acts as a Virtual Machine Manager. It allows a single physical instance of cloud resources to be shared between several tenants.
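The sharing described above can be pictured with a small sketch. This is a toy model, not a real hypervisor: all class and method names below are illustrative assumptions, and real hypervisors schedule CPU time rather than permanently carving it up.

```python
# Toy sketch of a Virtual Machine Manager sharing one physical
# machine's resources between several tenants.

class Hypervisor:
    def __init__(self, total_cpus, total_ram_gb):
        self.free_cpus = total_cpus
        self.free_ram_gb = total_ram_gb
        self.guests = {}                       # tenant name -> (cpus, ram)

    def create_vm(self, tenant, cpus, ram_gb):
        """Carve a virtual machine out of the shared physical instance."""
        if cpus > self.free_cpus or ram_gb > self.free_ram_gb:
            raise RuntimeError("insufficient physical resources")
        self.free_cpus -= cpus
        self.free_ram_gb -= ram_gb
        self.guests[tenant] = (cpus, ram_gb)
        return self.guests[tenant]

hv = Hypervisor(total_cpus=16, total_ram_gb=64)
hv.create_vm("tenant-a", cpus=4, ram_gb=16)
hv.create_vm("tenant-b", cpus=8, ram_gb=32)
print(hv.free_cpus, hv.free_ram_gb)            # capacity left for more tenants
```

Each tenant sees only its own allocation, while the hypervisor tracks the shared physical pool.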
Management Software
It helps to maintain and configure the infrastructure.
Deployment Software
It helps to deploy and integrate the application on the cloud.
Network
It is the key component of cloud infrastructure, connecting cloud services over the Internet. It is also possible to deliver the network as a utility over the Internet, meaning the customer can customize the network route and protocol.
Server
Servers handle resource sharing and offer other services such as resource allocation and de-allocation, resource monitoring, and security.
Storage
The cloud keeps multiple replicas of stored data. If one of the storage resources fails, the data can be retrieved from another replica, which makes cloud computing more reliable.
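The replica idea can be sketched in a few lines: writes go to every copy, and a read falls back to another replica when one storage node fails. The names here are assumptions for illustration, not a real storage API.

```python
# Illustrative sketch of replica-based reliability in cloud storage.

class StorageNode:
    def __init__(self):
        self.data = {}
        self.healthy = True

    def read(self, key):
        if not self.healthy:
            raise IOError("storage node is down")
        return self.data[key]

def replicated_write(nodes, key, value):
    for node in nodes:                 # write every replica
        node.data[key] = value

def replicated_read(nodes, key):
    for node in nodes:                 # try replicas until one answers
        try:
            return node.read(key)
        except IOError:
            continue
    raise IOError("all replicas failed")

nodes = [StorageNode() for _ in range(3)]
replicated_write(nodes, "report.pdf", b"...bytes...")
nodes[0].healthy = False               # simulate a failed storage resource
print(replicated_read(nodes, "report.pdf"))   # still served from a replica
```

With three copies, the read succeeds even after a node failure, which is exactly the reliability property the paragraph describes.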
Infrastructural Constraints
Fundamental constraints that cloud infrastructure should address are the following:
Transparency
Virtualization is the key to sharing resources in a cloud environment, but it is not possible to satisfy all demand with a single resource or server. Therefore, there must be transparency in resources, load balancing, and applications, so that they can be scaled on demand.
Scalability
Scaling up an application delivery solution is not as easy as scaling up an application, because it involves configuration overhead or even re-architecting the network. The application delivery solution therefore needs to be scalable, which requires a virtual infrastructure in which resources can be provisioned and de-provisioned easily.
Intelligent Monitoring
To achieve transparency and scalability, application solution delivery will need
to be capable of intelligent monitoring.
Security
The mega data center in the cloud should be securely architected. The control node, the entry point into the mega data center, also needs to be secure.
The main differences between cloud hosting and traditional web hosting are:
Scalability
With traditional IT infrastructure, you can only use the resources that are already available to you. If you run out of storage space, the only solution is to purchase or rent another server. If you hire more employees, you will need to pay for additional software licences and have these manually installed on your office hardware. This can be a costly venture, especially if your business is growing rapidly.
Automation
A key difference between cloud computing and traditional IT infrastructure is
how they are managed. Cloud hosting is managed by the storage provider
who takes care of all the necessary hardware, ensures security measures are
in place, and keeps it running smoothly. Traditional data centres require
heavy administration in-house, which can be costly and time consuming for
your business. Fully trained IT personnel may be needed to ensure regular
monitoring and maintenance of your servers – such as upgrades,
configuration problems, threat protection and installations.
Running Costs
Cloud computing is more cost-effective than traditional IT infrastructure due to the method of payment for data storage services. With cloud-based services, you only pay for what you use, similarly to how you pay for utilities such as electricity. Furthermore, the decreased likelihood of downtime means improved workplace performance and increased profits in the long run.
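The utility-style billing above can be made concrete with a toy calculation. The rates and prices below are made-up figures for illustration, not any provider's real pricing.

```python
# Toy comparison: pay-as-you-go cloud billing vs. up-front in-house cost.

def cloud_bill(hours_used, rate_per_hour):
    # Like an electricity meter: charged only for hours actually consumed.
    return round(hours_used * rate_per_hour, 2)

def traditional_cost(server_price, months_of_ownership, monthly_admin):
    # With in-house hardware you pay up front, used or not,
    # plus ongoing administration.
    return server_price + months_of_ownership * monthly_admin

print(cloud_bill(hours_used=120, rate_per_hour=0.05))            # 6.0
print(traditional_cost(server_price=3000, months_of_ownership=1,
                       monthly_admin=200))                       # 3200
```

The point of the sketch is the shape of the two cost curves: the cloud bill scales with usage, while the traditional cost is paid regardless of how much the server is used.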
Security
Cloud computing is an external form of data storage and software delivery, which can make it seem less secure than local data hosting. Anyone with access to the server can view and use the stored data and applications in the cloud, wherever an internet connection is available. When transitioning to the cloud, it is therefore crucial to choose a cloud service provider that is completely transparent in its hosting of cloud platforms and ensures optimum security measures are in place.
Characteristics
Here are the characteristics of SaaS service model:
The license to the software may be subscription-based or usage-based, and it is billed on a recurring basis.
SaaS applications are cost-effective since they do not require any maintenance on the end-user side.
SaaS offers a shared data model, so multiple users can share a single instance of the infrastructure. It is not necessary to hard-code functionality for individual users.
Multitenant solutions
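The shared-instance, multitenant model described above can be sketched as a toy: one application instance serves several tenants, with records separated by a tenant id rather than by per-customer code. Everything here is an illustrative assumption.

```python
# Sketch of the SaaS shared data model: one instance, many tenants.

class SharedSaaSApp:
    def __init__(self):
        self.rows = []                       # one shared store for all tenants

    def save(self, tenant_id, record):
        self.rows.append({"tenant": tenant_id, **record})

    def query(self, tenant_id):
        # Every query is scoped to the calling tenant, so tenants
        # never see each other's data despite sharing the instance.
        return [r for r in self.rows if r["tenant"] == tenant_id]

app = SharedSaaSApp()
app.save("acme", {"invoice": 1})
app.save("globex", {"invoice": 7})
print(app.query("acme"))     # only acme's rows come back
```

Real multitenant systems enforce this isolation in the database and access-control layers, but the principle is the same: shared infrastructure, tenant-scoped data.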
Issues
There are several issues associated with SaaS, some of them are listed below:
Browser-based risks
To avoid such risks, the customer can use multiple browsers and dedicate a
specific browser to access SaaS applications or can use virtual desktop while
accessing the SaaS applications.
Network dependence
The SaaS application can be delivered only when the network is continuously available. The network should also be reliable, but network reliability cannot be guaranteed either by the cloud provider or by the customer.
Service Providers
Google
Google owns 137 products focused on Internet-related services, such as its search engine, digital analytics, document creation, online advertising, and more. It was ranked 92 in market presence and 94 in satisfaction, leaving it with an overall score of 93.
Salesforce
Customer relationship management (CRM) is the key feature of the Salesforce cloud offering; the term refers to CRM cloud software systems. Salesforce.com is used to manage sales and has key products including Chatter, Work.com, Service Cloud, Salesforce1 Platform, Salesforce Communities, ExactTarget Marketing Cloud, Pardot, and Sales Cloud.
The most popular product from Salesforce.com is Sales Cloud. This is a CRM system that allows you to manage business opportunities, contacts, leads and customers; forecast projected revenue; track customer cases and the status of deals; and record feedback, problems, resolutions, etc. Salesforce Sales Cloud is only a tool to manage your sales process; you need to develop processes according to your unique business needs in order for it to work.
Platform-as-a-Service
Platform-as-a-Service offers the runtime environment for applications. It also offers the development and deployment tools required to build applications. PaaS features point-and-click tools that enable non-developers to create web applications.
The disadvantage of using PaaS is that the developer may be locked in to a particular vendor. For example, an application written in Python against Google's APIs and running on Google App Engine is likely to work only in that environment.
The following diagram shows how PaaS offers an API and development tools
to the developers and how it helps the end user to access business
applications.
Benefits
Following are the benefits of PaaS model:
Scalable solutions
It is very easy to scale resources up or down automatically, based on demand.
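Demand-based scaling can be sketched as a simple rule: add an instance when average load is high, remove one when it is low. The thresholds and the simulated load trace below are illustrative assumptions, not any platform's real policy.

```python
# Minimal sketch of automatic scale-up / scale-down on demand.

def scale(instances, avg_cpu_percent, low=20, high=80):
    if avg_cpu_percent > high:
        return instances + 1          # scale up to meet demand
    if avg_cpu_percent < low and instances > 1:
        return instances - 1          # scale down to save cost
    return instances

fleet = 2
for load in [90, 95, 50, 10, 5]:      # simulated demand over time
    fleet = scale(fleet, load)
print(fleet)                          # back to 2 after demand subsides
```

Real PaaS autoscalers add cooldown periods and averaging windows so the fleet does not oscillate, but the core feedback loop is this one.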
Characteristics
Here are the characteristics of PaaS service model:
PaaS also provides web service interfaces that allow applications outside the platform to be connected.
PaaS providers
Amazon Web Services – Elastic Beanstalk
Elastic Beanstalk is for deploying and scaling web applications developed in Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker. These run on Apache servers as well as Nginx, Passenger, and IIS.
One of the big benefits is that AWS is constantly adding new tools, so you are always
likely to have the latest tools to hand.
A handy feature for IaaS users is that they can also use PaaS to build apps; this is part of an ongoing trend to blur the line between the two.
Salesforce
One of the downsides is that the number of add-ons varies, and so do the load requirements; this can lead to cost fluctuations which can make it difficult to plan ahead.
Rackspace
The Rackspace Cloud is a set of cloud computing products and services billed on a utility computing basis from the US-based company Rackspace. Offerings include web application hosting or platform as a service ("Cloud Sites"), cloud storage ("Cloud Files"), virtual private servers ("Cloud Servers"), load balancers, databases, backup, and monitoring.
Cloud Files
Cloud Files is a cloud hosting service that provides "unlimited online storage and CDN" for media (examples include backups, video files, and user content) on a utility computing basis. It was originally launched as Mosso CloudFS as a private beta release on May 5, 2008 and is similar to Amazon Simple Storage Service. An unlimited number of files of up to 5 GB each can be uploaded, managed via the online control panel or RESTful API, and optionally served out via Akamai Technologies' Content Delivery Network.
API
In addition to the online control panel, the service can be accessed over
a RESTful API with open source client code available
in C#/.NET, Python, PHP, Java, and Ruby. Rackspace-owned Jungle
Disk allows Cloud Files to be mounted as a local drive within
supported operating systems (Linux, Mac OS X, and Windows).
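To make the RESTful API idea concrete, here is what an object-upload request looks like when built by hand. The URL format and the `X-Auth-Token` header are modelled on the historical Cloud Files / OpenStack Swift style but should be treated as assumptions; the request is only constructed, never sent.

```python
# Sketch of a RESTful object-storage request (constructed, not sent).

import urllib.request

def build_upload_request(storage_url, container, name, body, token):
    # PUT /<container>/<object> stores an object; the auth token would
    # come from a prior authentication call in a real client.
    return urllib.request.Request(
        url=f"{storage_url}/{container}/{name}",
        data=body,
        method="PUT",
        headers={"X-Auth-Token": token,
                 "Content-Type": "application/octet-stream"},
    )

req = build_upload_request("https://storage.example.com/v1/account",
                           "backups", "db.dump", b"\x00\x01",
                           "hypothetical-token")
print(req.get_method(), req.full_url)
```

The same pattern (verb + URL + auth header) is what the C#/.NET, Python, PHP, Java, and Ruby client libraries wrap for the developer.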
Security
Redundancy is achieved by replicating three full copies of data across multiple computers in multiple "zones" within the same data center, where "zones" are physically (though not geographically) separate and supplied with separate power and Internet services. Uploaded files can be distributed via Akamai Technologies to "hundreds of endpoints across the world", which provides an additional layer of data redundancy.
The control panel and API are protected by SSL and the requests themselves
are signed and can be safely delivered to untrusted clients. Deleted data is
zeroed out immediately.
Force.com
Force.com is a Platform as a Service (PaaS) product designed to simplify the development and deployment of cloud-based applications and websites. Developers can create apps and websites through the cloud IDE (integrated development environment) and deploy them quickly to Force.com's multi-tenant servers. Force.com is owned by the Software as a Service (SaaS) vendor Salesforce.com, which calls the product a social and mobile app development platform.
Unit IV
Infrastructure-as-a-Service
Infrastructure-as-a-Service provides access to fundamental resources
such as physical machines, virtual machines, virtual storage, etc. Apart from
these resources, the IaaS also offers:
Load balancers
IP addresses
Software bundles
All of the above resources are made available to the end user via server virtualization. Moreover, these resources are accessed by customers as if they owned them.
Benefits
IaaS allows the cloud provider to freely locate the infrastructure over the
Internet in a cost-effective manner. Some of the key benefits of IaaS are
listed below:
Issues
IaaS shares issues with PaaS and SaaS, such as network dependence and browser-based risks. It also has some specific issues of its own:
Compatibility with legacy security vulnerabilities
Because IaaS allows the customer to run legacy software in the provider's infrastructure, it exposes the customer to all of the security vulnerabilities of such legacy software.
Characteristics
Here are the characteristics of IaaS service model:
Amazon EC2
Amazon Elastic Compute Cloud (Amazon EC2) is a web service that
provides secure, resizable compute capacity in the cloud. It is designed to
make web-scale cloud computing easier for developers.
Amazon EC2’s simple web service interface allows you to obtain and
configure capacity with minimal friction. It provides you with complete
control of your computing resources and lets you run on Amazon’s proven
computing environment. Amazon EC2 reduces the time required to obtain
and boot new server instances to minutes, allowing you to quickly scale
capacity, both up and down, as your computing requirements change.
Amazon EC2 changes the economics of computing by allowing you to pay
only for capacity that you actually use. Amazon EC2 provides developers
the tools to build failure resilient applications and isolate them from
common failure scenarios.
Benefits
Amazon EC2 enables you to increase or decrease capacity within minutes, not
hours or days. You can commission one, hundreds, or even thousands of server
instances simultaneously. You can also use Amazon EC2 Auto Scaling to
maintain availability of your EC2 fleet and automatically scale your fleet up and
down depending on its needs in order to maximize performance and minimize
cost. To scale multiple services, you can use AWS Auto Scaling.
COMPLETELY CONTROLLED
You have complete control of your instances including root access and the ability
to interact with them as you would any machine. You can stop any instance
while retaining the data on the boot partition, and then subsequently restart the
same instance using web service APIs. Instances can be rebooted remotely
using web service APIs, and you also have access to their console output.
You have the choice of multiple instance types, operating systems, and software
packages. Amazon EC2 allows you to select a configuration of memory, CPU,
instance storage, and the boot partition size that is optimal for your choice of
operating system and application. For example, choice of operating systems
includes numerous Linux distributions and Microsoft Windows Server.
INTEGRATED
Amazon EC2 is integrated with most AWS services such as Amazon Simple
Storage Service (Amazon S3), Amazon Relational Database Service (Amazon
RDS), and Amazon Virtual Private Cloud (Amazon VPC) to provide a complete,
secure solution for computing, query processing, and cloud storage across a
wide range of applications.
RELIABLE
SECURE
Cloud security at AWS is the highest priority. As an AWS customer, you will
benefit from a data center and network architecture built to meet the
requirements of the most security-sensitive organizations. Amazon EC2 works in
conjunction with Amazon VPC to provide security and robust networking
functionality for your compute resources.
INEXPENSIVE
Amazon EC2 passes on to you the financial benefits of Amazon’s scale. You
pay a very low rate for the compute capacity you actually consume.
See Amazon EC2 Instance Purchasing Options for more details.
EASY TO START
There are several ways to get started with Amazon EC2. You can use the AWS Management Console, the AWS Command Line Tools (CLI), or AWS SDKs. AWS is free to get started.
GoGrid
GoGrid is a California company that has been providing IaaS since 2008. It is a company with longevity and a healthy turnover: not a hyperscale player, but not a niche player either.
It has three data centers packed with lots of Intel hardware, a layer of Xen virtualization, and a layer of automation tools for customers. GoGrid partners with other providers of Internet services to round out the package: Edgecast is behind the CDN, Salesforce is hooked into support functions, and Equinix provides some data center grunt. This combination of components seems to put GoGrid right in the middle of the IaaS field.
In getting to know GoGrid IaaS, first we'll go through the sign-up steps and
create your first new virtual machine. Then we'll look at some of the
characteristics that differentiate GoGrid from other IaaS providers.
Sign up to GoGrid
This is where sign-up self-service ends and GoGrid's customer service starts. Getting started is a chore, so having real people offer to help you is good.
Microsoft Azure
Microsoft Azure, formerly known as Windows Azure, is Microsoft's public cloud
computing platform. It provides a range of cloud services, including those for
compute, analytics, storage and networking. Users can pick and choose from
these services to develop and scale new applications, or run existing
applications, in the public cloud.
Service Commitment
AWS will use commercially reasonable efforts to make the Included Products
and Services each available with a Monthly Uptime Percentage (defined below)
of at least 99.99%, in each case during any monthly billing cycle (the “Service
Commitment”). In the event any of the Included Products and Services do not
meet the Service Commitment, you will be eligible to receive a Service Credit
as described below.
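It helps to see what a 99.99% Monthly Uptime Percentage means in practice. The calculation below is straightforward arithmetic; the credit-eligibility rule is a simplified illustration, not AWS's actual service-credit schedule.

```python
# What a 99.99% monthly uptime commitment allows, in minutes of downtime.

def allowed_downtime_minutes(uptime_percent, days=30):
    total_minutes = days * 24 * 60              # 43,200 minutes in 30 days
    return total_minutes * (1 - uptime_percent / 100)

def credit_eligible(observed_uptime_percent, commitment=99.99):
    # Illustrative: below the commitment means eligible for some credit.
    return observed_uptime_percent < commitment

print(round(allowed_downtime_minutes(99.99), 2))   # about 4.32 minutes/month
print(credit_eligible(99.95))                      # below commitment -> True
```

So a 99.99% commitment permits only a few minutes of downtime per billing cycle, which is why uptime percentages that look similar on paper (99.9% vs. 99.99%) differ by an order of magnitude in allowed downtime.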
Deployment model
As cloud technology provides users with so many benefits, these benefits must be categorized based on user requirements. The cloud deployment model represents the exact category of cloud environment based on proprietorship, size, and access, and also describes the nature and purpose of the cloud. Most organizations implement cloud infrastructure to minimize capital expenditure and regulate operating costs. To know which deployment model matches your requirements, it is necessary for users as well as learners to understand the four sub-categories of deployment models. These are:
Flexible
Reliable
Highly scalable
Low cost
Location independent
Less secure
Poorly customizable
Flexible
Secure
Cost-effective
Highly scalable
Unit V
Virtualization
The term 'virtualization' can be used in many respects in computing. It is the process of creating a virtual environment of something, which may include hardware platforms, storage devices, operating systems, network resources, etc. Cloud virtualization mainly deals with server virtualization: how it works and why it is termed so.
Defining Virtualization
Cloud virtualization has been categorized into different types based on their characteristics. These are:
Hardware Virtualization
o Full Virtualization
o Emulation Virtualization
o Para-virtualization
Software Virtualization
OS Virtualization
Server Virtualization
Storage Virtualization
Advantages of Virtualization
Features of Virtualization
Hardware virtualization
It is the abstraction of computing resources from the software that uses
cloud resources. It involves embedding virtual machine software into the
server's hardware components. That software is called the hypervisor. The
hypervisor manages the shared physical hardware resources between the
guest OS & the host OS. The abstracted hardware is represented as actual
hardware. Virtualization means abstraction & hardware virtualization is
achieved by abstracting the physical hardware part using Virtual Machine
Monitor (VMM) or hypervisor. Hypervisors rely on command set extensions
in the processors to accelerate common virtualization activities for boosting
the performance. The term hardware virtualization is used when VMM or
virtual machine software or any hypervisor gets directly installed on the
hardware system. The primary tasks of the hypervisor are process monitoring and the control of memory and hardware. After hardware virtualization is done, different operating systems can be installed, and various applications can run on them. Hardware virtualization, when done for server platforms, is also called server virtualization.
Software virtualization
Software virtualization, also called application virtualization, is the practice of running software from a remote server. It is similar to hardware virtualization except that it abstracts the software installation procedure and creates virtual software installations. Distributing many applications became a typical task for IT firms and departments, and the mechanism for installing each application differs. So virtualized software was introduced: an application packaged into its own self-contained unit that provides software virtualization. Some examples are VirtualBox, VMware, etc.
The DLL (Dynamic Link Library) redirects the entire virtualized program's calls to the server's file system. When the software is run from the server in this way, no changes are required on the local system.
Server Virtualization
It is the division of a physical server into several virtual servers, done mainly to improve the utilization of server resources. In other words, it is the masking of server resources, including the number and identity of processors, physical servers, and the operating system. This division of one physical server into multiple isolated virtual servers is done by the server administrator using software. The virtual environments are sometimes called virtual private servers.
In this process, the server resources are kept hidden from the user. The partitioning of the physical server into several virtual environments allows one virtual server to be dedicated to a single application or task.
This technique is mainly used in web servers, where it reduces the cost of web-hosting services. Instead of having a separate system for each web server, multiple virtual servers can run on the same system.
Approaches To Virtualization:
Storage Virtualization
It pools physical storage from different network storage devices and makes it appear as a single storage unit handled from a single console. There has traditionally been a strong bond between the physical host and its locally installed storage devices; with this change in paradigm, local storage is no longer needed, and more advanced storage with increased functionality has come to market. Storage virtualization is a significant component of storage servers and facilitates the management and monitoring of storage in a virtualized environment.
Uses of OS Virtualization
The steps for how this virtualization works are listed below:
Advantages of OS Virtualization
The client connects via the network to the virtual disk and boots the OS installed on that disk. Two types of virtual disk are available for implementation. These are:
Private Virtual Disk: used by one client only, like a local hard disk. Users can save information on the virtual disk based on the rights assigned, so when the client restarts the system, the settings are retained, just as when working with a physical local hard disk.
Shared/Common Virtual Disk: used by multiple clients at the same time. Changes are saved in a special cache, and these caches are cleaned when the user restarts or shuts down the system. In other words, when a client boots up, it uses the default configuration available on the virtual disk.
Desktop virtualization
Desktop virtualization provides a way for users to maintain their individual
desktops on a single, central server. The users may be connected to the
central server through a LAN, WAN or over the Internet.
Desktop virtualization has many benefits, including a lower total cost of
ownership (TCO), increased security, reduced energy costs, reduced
downtime and centralized management.
Limitations of desktop virtualization include difficulty in maintenance and set
up of printer drivers; increased downtime in case of network failures;
complexity and costs involved in VDI deployment and security risks in the
event of improper network management.
Network Virtualization
Network virtualization involves dividing available bandwidth into independent
channels, which are assigned, or reassigned, in real time to separate servers or
network devices.
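The channel idea above can be sketched as a toy: a fixed amount of bandwidth is split into equal channels, and channels are assigned (or reassigned) to servers on the fly. The numbers and names are illustrative assumptions.

```python
# Toy sketch of network virtualization: bandwidth divided into
# independent channels that can be reassigned in real time.

class VirtualNetwork:
    def __init__(self, total_mbps, channels):
        self.channel_mbps = total_mbps // channels
        self.assignment = {c: None for c in range(channels)}

    def assign(self, channel, server):
        # Reassignment is just overwriting the current owner.
        self.assignment[channel] = server

    def bandwidth_of(self, server):
        return sum(self.channel_mbps
                   for owner in self.assignment.values() if owner == server)

net = VirtualNetwork(total_mbps=1000, channels=10)
net.assign(0, "web-server")
net.assign(1, "web-server")
net.assign(2, "db-server")
print(net.bandwidth_of("web-server"))   # 200
```

Reassigning channel 1 to "db-server" later would instantly shift 100 Mbps between the two servers, which is the "real time" property the paragraph describes.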
Data Virtualization
Many organizations run multiple types of database management systems, such as Oracle and SQL Server, which do not work well with one another.
Therefore, enterprises face new challenges in data integration and storage of
huge amounts of data. With data virtualization, business users are able to
get real-time and reliable information quickly, which helps them to take
major business decisions.
The process of data virtualization involves abstracting, transforming,
federating and delivering data from disparate sources. The main goal of data
virtualization technology is to provide a single point of access to the data by
aggregating it from a wide range of data sources. This allows users to access
the applications without having to know their exact location.
The most recent implementation of the data virtualization concept is in cloud
computing technology.
Data virtualization software is often used in tasks such as:
Data integration
Business integration
Service-oriented architecture data services
Enterprise search
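The single-point-of-access idea can be sketched with two plain dicts standing in for an Oracle-style and a SQL Server-style system. Everything here is illustrative; a real data virtualization layer would also transform and join data, not just look it up.

```python
# Sketch of data virtualization: one access point federating queries
# over disparate sources, hiding where the data actually lives.

oracle_like = {"cust-1": {"name": "Ada"}}
sqlserver_like = {"cust-2": {"name": "Grace"}}

class DataVirtualizationLayer:
    def __init__(self, sources):
        self.sources = sources                 # source locations abstracted away

    def get(self, key):
        for source in self.sources:            # federate: search every source
            if key in source:
                return source[key]
        return None

layer = DataVirtualizationLayer([oracle_like, sqlserver_like])
print(layer.get("cust-2"))   # caller never says which system holds it
```

The caller issues one query against one interface; the layer decides which underlying system answers it, which is the "single point of access" the section describes.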
Memory Virtualization
Memory virtualization allows networked, and therefore distributed, servers to share a pool of memory to overcome physical memory limitations, a common bottleneck in software performance. With this capability integrated into the network, applications can take advantage of a very large amount of memory to improve overall performance and system utilization, increase memory usage efficiency, and enable new use cases.
Software on the memory pool nodes (servers) allows nodes to connect to
the memory pool to contribute memory, and store and retrieve data.
Management software and memory overcommitment technologies manage shared memory, data insertion, eviction and provisioning policies, and data assignment to contributing nodes, and handle requests from client nodes. The memory pool may be accessed at the
application level or operating system level. At the application level, the pool
is accessed through an API or as a networked file system to create a high-
speed shared memory cache. At the operating system level, a page cache
can utilize the pool as a very large memory resource that is much faster
than local or networked storage.
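A toy model makes the pool mechanics concrete: several nodes contribute memory, and clients store and retrieve data through the pool without caring which node holds it. Eviction, overcommitment, and networking are omitted; all names are illustrative assumptions.

```python
# Toy model of a shared memory pool contributed by several nodes.

class PoolNode:
    def __init__(self, capacity):
        self.capacity = capacity          # entries this node can hold
        self.store = {}

class MemoryPool:
    def __init__(self, nodes):
        self.nodes = nodes

    def put(self, key, value):
        # Data assignment policy: first node with spare capacity.
        for node in self.nodes:
            if len(node.store) < node.capacity:
                node.store[key] = value
                return True
        return False                      # pool exhausted

    def get(self, key):
        # The client does not know (or care) which node holds the key.
        for node in self.nodes:
            if key in node.store:
                return node.store[key]
        raise KeyError(key)

pool = MemoryPool([PoolNode(capacity=1), PoolNode(capacity=1)])
pool.put("page-1", b"a")
pool.put("page-2", b"b")                  # spills over to the second node
print(pool.get("page-2"))
```

Note how "page-2" lands on the second node transparently; in a real implementation the management software would also handle eviction and rebalancing when nodes join or leave.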
Memory virtualization implementations are distinguished from shared
memory systems. Shared memory systems do not permit abstraction of
memory resources, thus requiring implementation with a single operating
system instance (i.e. not within a clustered application environment).
Memory virtualization is also different from storage based on flash memory such as solid-state drives (SSDs): SSDs and similar technologies replace hard drives (networked or otherwise), while memory virtualization replaces or complements traditional RAM.
Microsoft Hyper V
Microsoft could not ignore the virtualization trend. Microsoft introduced Hyper-V
as a virtualization platform in 2008, and it continued to release new Hyper-V
versions with new Windows server versions. So far, there are a total of four
versions, including Windows Server 2012 R2, Windows Server 2012, Windows
Server 2008 R2 and Windows Server 2008.
Since Hyper-V’s debut, it has always been a Windows Server feature, which could
be installed whenever a server administrator decided to do so. It’s also available
as a separate product called Microsoft Hyper-V Server. Basically, Microsoft Hyper-
V Server is a standalone and shortened version of Windows Server where
Microsoft cut out everything irrelevant to virtualization, services and Graphical
User Interface (GUI) to make the server as small as possible. Plus, without the
bells and whistles, the server requires less maintenance time and it is less
vulnerable, because, for example, fewer components mean less patching.
Hyper-V is a hybrid hypervisor, which is installed from the OS (via the Windows wizard for adding roles). However, during installation it redesigns the OS architecture and becomes, in effect, the next layer on the physical hardware.
VMware features and infrastructure
VMware Infrastructure
VMware Infrastructure is a full infrastructure virtualization suite that
provides comprehensive virtualization, management, resource optimization,
application availability, and operational automation capabilities in an
integrated offering. VMware Infrastructure virtualizes and aggregates the underlying physical hardware resources across multiple systems and provides pools of virtual resources to the datacenter in the virtual environment.
In addition, VMware Infrastructure brings a set of distributed services that enable fine-grained, policy-driven resource allocation, high availability, and consolidated backup of the entire virtual datacenter. These distributed services enable an IT organization to establish and meet production Service Level Agreements with its customers in a cost-effective manner.
The relationships among the various components of the VMware Infrastructure
are shown in Figure 1-1.
Figure 1-1. VMware Infrastructure
VMware Infrastructure includes the following components shown in Figure 1-
1:
VMware ESX Server – A robust, production-proven virtualization layer run
on physical servers that abstracts processor, memory, storage, and
networking resources into multiple virtual machines.
VirtualCenter Management Server (VirtualCenter Server) – The central
point for configuring, provisioning, and managing virtualized IT environments.
Virtual Infrastructure Client (VI Client) – An interface that allows users
to connect remotely to the VirtualCenter Server or individual ESX Servers from
any Windows PC.
Virtual Infrastructure Web Access (VI Web Access) – A Web interface
that allows virtual machine management and access to remote consoles.
VMware Virtual Machine File System (VMFS) – A high-performance
cluster file system for ESX Server virtual machines.
VMware Virtual Symmetric Multi-Processing (SMP) – Feature that
enables a single virtual machine to use multiple physical processors
simultaneously.
VMware VMotion – Feature that enables the live migration of running virtual
machines from one physical server to another with zero down time, continuous
service availability, and complete transaction integrity.
VMware HA – Feature that provides easy-to-use, cost-effective high
availability for applications running in virtual machines. In the event of server
failure, affected virtual machines are automatically restarted on other
production servers that have spare capacity.
VMware Distributed Resource Scheduler (DRS) – Feature that allocates
and balances computing capacity dynamically across collections of hardware
resources for virtual machines.
VMware Consolidated Backup (Consolidated Backup) – Feature that
provides an easy-to-use, centralized facility for agent-free backup of virtual
machines. It simplifies backup administration and reduces the load on ESX
Servers.
VMware Infrastructure SDK – Feature that provides a standard interface
for VMware and third-party solutions to access the VMware Infrastructure.
Virtual box
VirtualBox (VB) is a software virtualization package that installs on an operating system as an application. VirtualBox allows additional operating systems to be installed on it, as guest OSes, and run in a virtual environment. In 2010, VirtualBox was the most popular virtualization software application. Supported operating systems include Windows XP, Windows Vista, Windows 7, Mac OS X, Linux, Solaris, and OpenSolaris.
VirtualBox was originally developed by Innotek GmbH and released in 2007
as an open-source software package. The company was later purchased by
Sun Microsystems. Oracle Corporation now develops the software package
and titles it Oracle VM VirtualBox
Thin Client
Thin clients are a useful addition to any organization with a cloud computing
setup. They can also allow for added security and control over corporate and
proprietary information. Thin clients can also be a fantastic tool to save
money. They do not require a full and robust machine for each user. What is
a thin client, and how does the cloud work with one?
Note that a thin client REQUIRES the use of some form of cloud computing or desktop virtualization environment.
A thin client can be a useful tool for any company with a cloud computing setup. It comes with many unique advantages for businesses, such as security, control, and cost. The ability to deliver a desktop experience without storing data locally is an invaluable tool for business owners.