CC NOTES

Cloud computing is a technology that allows data and programs to be stored and accessed on remote servers via the internet, enabling operations like data storage, software delivery, and application development. It consists of various architectures, types of services (IaaS, PaaS, SaaS, FaaS), and characteristics such as scalability, cost efficiency, and reliability. Virtualization plays a key role in cloud computing by allowing multiple operating systems to run on the same hardware, enhancing resource allocation and reducing costs.

Uploaded by

bhargavii2409

What Is Cloud Computing

Cloud computing means storing and accessing data and programs
on remote servers hosted on the internet instead of on the
computer's hard drive or a local server. Cloud computing is also referred to
as internet-based computing: a technology in which resources are
provided as a service to the user over the internet. The data that is
stored can be files, images, documents, or any other storable content.
The following are some of the operations that can be performed with
cloud computing:
 Storage, backup, and recovery of data
 Delivery of software on demand
 Development of new applications and services
 Streaming videos and audio

Architecture Of Cloud Computing


Cloud computing architecture refers to the components and sub-
components required for cloud computing. These components typically
refer to:
1. Front end (fat clients, thin clients)
2. Back-end platforms (servers, storage)
3. Cloud-based delivery and a network (internet, intranet, intercloud)

1. Front End ( User Interaction Enhancement )


The user interface of cloud computing consists of two kinds of clients.
Thin clients use web browsers, giving portable and lightweight access,
while fat clients provide richer functionality for a stronger user
experience.
2. Back-end Platforms ( Cloud Computing Engine )
The core of cloud computing is built on back-end platforms: multiple
servers for processing and storage. Application logic is managed by the
servers, and effective data handling is provided by the storage. Together,
these back-end platforms supply the processing power and the capacity to
manage and store data behind the cloud.
3. Cloud-Based Delivery and Network
On-demand access to computing resources is provided over the
internet, an intranet, or an intercloud. The internet offers global
accessibility, an intranet supports internal communication of services
within an organization, and the intercloud enables interoperability across
different cloud services. This network connectivity is an essential
component of cloud computing architecture, guaranteeing easy access
and data transfer.

Types of Cloud Computing Services


The following are the types of Cloud Computing:
1. Infrastructure as a Service (IaaS)
2. Platform as a Service (PaaS)
3. Software as a Service (SaaS)
4. Function as a Service (FaaS)

1. Infrastructure as a Service ( IaaS )


 Flexibility and control: IaaS provides virtualized computing
resources such as VMs, storage, and networks, giving users control
over the operating system and applications.
 Reduced hardware expenses: IaaS saves businesses money by
eliminating the need for investments in physical infrastructure,
making it cost-effective.
 Scalability of resources: Hardware resources can be scaled up or
down on demand, balancing optimal performance with cost
efficiency.
2. Platform as a Service ( PaaS )
 Simplified development: PaaS supports application development by
keeping the underlying infrastructure as an abstraction. Developers
can focus entirely on application logic (code) while background
operations are fully managed by the platform.
 Enhanced efficiency and productivity: PaaS reduces the complexity
of infrastructure management, speeding up execution and bringing
updates to market faster by streamlining the development process.
 Automated scaling: PaaS manages resource scaling, guaranteeing
that a program's workload runs efficiently.
3. Software as a Service (SaaS)
 Collaboration and accessibility: SaaS lets users access applications
easily without requiring local installation. The software is fully
managed by the provider and works as a service over the internet,
encouraging effortless cooperation and ease of access.
 Automatic updates: SaaS providers handle software maintenance
and roll out the latest updates automatically, so users always have
the newest features and security patches.
 Cost efficiency: SaaS is a cost-effective solution because it reduces
IT support overhead and eliminates the need for individual software
licenses.
4. Function as a Service (FaaS)
 Event-driven execution: FaaS takes over maintenance of servers and
infrastructure, so users do not have to worry about it; developers
simply run code in response to events.
 Cost efficiency: FaaS follows a pay-per-execution principle, charging
only for the computing resources actually used.
 Scalability and agility: Serverless architectures scale effortlessly to
handle workloads, promoting agility in development and
deployment.

Characteristics of Cloud Computing


The following are the characteristics of Cloud Computing:
1. Scalability: With Cloud hosting, it is easy to grow and shrink the
number and size of servers based on the need. This is done by either
increasing or decreasing the resources in the cloud. This ability to alter
plans due to fluctuations in business size and needs is a superb benefit
of cloud computing, especially when experiencing a sudden growth in
demand.
2. Save Money: An advantage of cloud computing is the reduction in
hardware costs. Instead of purchasing in-house equipment, hardware
needs are left to the vendor. For companies that are growing rapidly,
new hardware can be large, expensive, and inconvenient. Cloud
computing alleviates these issues because resources can be acquired
quickly and easily. Even better, the cost of repairing or replacing
equipment is passed to the vendors. Along with purchase costs, off-site
hardware cuts internal power costs and saves space. Large data
centers can take up precious office space and produce a large amount
of heat. Moving to cloud applications or storage can help maximize
space and significantly cut energy expenditures.
3. Reliability: Rather than being hosted on a single physical server,
hosting is delivered on a virtual partition that draws its resources,
such as disk space, from an extensive network of underlying physical
servers. If one server goes offline, availability is unaffected, as the
virtual servers continue to pull resources from the remaining network
of servers.
4. Physical Security: The underlying physical servers are still housed
within data centers and so benefit from the security measures that
those facilities implement to prevent people from accessing or
disrupting them on-site.
5. Outsourced Management: While you run your business, someone
else manages your computing infrastructure. You do not need to
worry about maintenance or hardware degradation.
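The scalability characteristic above (growing and shrinking the number of servers with demand) can be sketched as a simple threshold rule. The capacity-per-server figure and the request rates are illustrative assumptions; real cloud autoscalers use comparable demand-driven rules.

```python
import math

def desired_servers(requests_per_sec, capacity_per_server,
                    min_servers=1, max_servers=20):
    """Scale the server count up or down with demand, within plan limits
    (illustrative only; not any specific provider's autoscaling policy)."""
    needed = math.ceil(requests_per_sec / capacity_per_server)
    return max(min_servers, min(needed, max_servers))

print(desired_servers(900, 100))   # normal load: 9 servers
print(desired_servers(5000, 100))  # sudden growth: capped at 20
print(desired_servers(30, 100))    # quiet period: shrinks to the minimum, 1
```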

Advantages of Cloud Computing


The following are main advantages of Cloud Computing:
1. Cost Efficiency: Cloud computing offers users flexible, pay-as-you-go
pricing. It reduces capital expenditure on infrastructure, particularly
for small and medium-sized businesses.
2. Flexibility and Scalability: Cloud services allow resources to be scaled
with demand, letting businesses handle varying workloads efficiently
without large hardware investments that would sit idle during periods
of low demand.
3. Collaboration and Accessibility: Cloud computing provides easy
access to data and applications from anywhere over the internet. This
encourages collaboration between teams in different locations
through shared documents and projects in real time, resulting in
higher-quality, more productive output.
4. Automatic Maintenance and Updates: The cloud provider takes care
of infrastructure management and keeps software up to date,
applying updates automatically as new versions appear. This
guarantees that companies always have access to the newest
technologies and can focus entirely on business operations and
innovation.
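The cost-efficiency advantage of pay-as-you-go pricing can be illustrated with a rough back-of-the-envelope comparison. All figures below are made-up assumptions for the sketch, not real provider rates:

```python
def pay_as_you_go_cost(hours_used, rate_per_hour):
    """Cloud model: bill only for consumed hours, no idle-capacity cost."""
    return hours_used * rate_per_hour

def owned_hardware_cost(hardware_price, monthly_power_cooling, months):
    """In-house model: pay the full purchase price plus running costs,
    whether the capacity is used or not."""
    return hardware_price + monthly_power_cooling * months

# Illustrative numbers (assumptions, not actual pricing):
cloud = pay_as_you_go_cost(hours_used=200, rate_per_hour=0.10)
owned = owned_hardware_cost(hardware_price=3000,
                            monthly_power_cooling=50, months=12)
print(cloud, owned)  # 20.0 3600 for a lightly used workload
```

For lightly used or bursty workloads the consumption-based bill stays small, while the owned-hardware cost is fixed regardless of use.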
Disadvantages Of Cloud Computing
The following are the main disadvantages of Cloud Computing:
1. Security Concerns: Storing sensitive data on external servers raises
security concerns, which is one of the main drawbacks of cloud
computing.
2. Downtime and Reliability: Even though cloud services are usually
dependable, they can suffer unexpected interruptions and downtime.
These may be caused by server problems, network issues, or
maintenance at the cloud provider, and they negatively affect
business operations by preventing users from accessing their
applications.
3. Dependency on Internet Connectivity: Cloud computing services rely
heavily on internet connectivity. Users need a stable, high-speed
connection to access and use cloud resources; in regions with limited
connectivity, they may struggle to reach their data and applications.
4. Cost Management Complexity: Pay-as-you-go pricing is one of the
main benefits of cloud services, but it also complicates cost
management. Without careful monitoring and resource optimization,
organizations can run up unexpected costs as their usage scales.
Understanding and controlling cloud usage requires ongoing
attention.

Virtualization in Cloud Computing and Types


Virtualization is used to create a virtual version of an underlying service.
With virtualization, multiple operating systems and applications can run
on the same machine and the same hardware at the same time,
increasing the utilization and flexibility of the hardware. It was initially
developed during the mainframe era.
It is one of the main cost-effective, hardware-reducing, and energy-saving
techniques used by cloud providers. Virtualization allows a single physical
instance of a resource or application to be shared among multiple
customers and organizations at the same time. It does this by assigning a
logical name to a physical resource and providing a pointer to that
physical resource on demand.
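The "logical name pointing to a physical resource" idea can be sketched in Python. The class name and resource paths below are hypothetical, purely to illustrate how one physical instance can be shared behind stable logical names:

```python
# Sketch of the logical-name-to-physical-pointer idea behind virtualization:
# tenants see stable logical names, while the mapping to physical storage
# can change (or be shared) without the tenants noticing.
class StorageVirtualizer:
    def __init__(self):
        self._pointers = {}  # logical name -> physical location

    def attach(self, logical_name, physical_location):
        """Assign a logical name to a physical resource."""
        self._pointers[logical_name] = physical_location

    def resolve(self, logical_name):
        """Return the physical resource the logical name currently points to."""
        return self._pointers[logical_name]

pool = StorageVirtualizer()
pool.attach("tenant-a/disk0", "/dev/array1/lun7")
pool.attach("tenant-b/disk0", "/dev/array1/lun7")  # same physical instance, shared
print(pool.resolve("tenant-a/disk0"))  # prints "/dev/array1/lun7"
```

Because tenants only ever hold the logical name, the provider can remap or consolidate the physical resources underneath on demand.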

Virtualization

The term virtualization is often synonymous with hardware virtualization,
which plays a fundamental role in efficiently delivering Infrastructure-as-a-
Service (IaaS) solutions for cloud computing. Moreover, virtualization
technologies provide a virtual environment not only for executing
applications but also for storage, memory, and networking.
 Host Machine: The machine on which the virtual machine is built is
known as the host machine.
 Guest Machine: The virtual machine itself is referred to as the guest
machine.
Working of Virtualization in Cloud Computing
Virtualization has a prominent impact on cloud computing. Users store
data in the cloud, but with virtualization they gain the extra benefit of
sharing the underlying infrastructure. Cloud vendors take care of the
required physical resources, and the charges for these services affect
every user and organization. Virtualization lets users and organizations
obtain the services they need through external (third-party) providers,
which helps reduce costs to the company. This is how virtualization works
in cloud computing.
Benefits of Virtualization
Here are some of the benefits of using Virtualization in Cloud Computing –
 More flexible and efficient allocation of resources.
 Enhanced development productivity.
 Lower cost of IT infrastructure.
 Remote access and rapid scalability.
 High availability and disaster recovery.
 Pay-per-use of the IT infrastructure, on demand.
 The ability to run multiple operating systems.
Drawbacks of Virtualization
 High Initial Investment: Clouds require a very high initial investment,
though they do help reduce companies' costs over time.
 Learning New Infrastructure: As companies shift from servers to the
cloud, they need highly skilled staff who can work with the cloud
easily; this means hiring new staff or training current staff.
 Risk to Data: Hosting data on third-party resources can put it at risk,
since it has a greater chance of being attacked by a hacker.
For more benefits and drawbacks, refer to the Pros and Cons of
Virtualization sections below.
Characteristics of Virtualization
 Increased Security: The ability to control the execution of a guest
program in a completely transparent manner opens new possibilities
for delivering a secure, controlled execution environment. All
operations of the guest programs are generally performed against the
virtual machine, which then translates and applies them to the host.
 Managed Execution: Sharing, aggregation, emulation, and isolation
are the most relevant features.
 Sharing: Virtualization allows the creation of separate computing
environments within the same host.
 Aggregation: Not only can physical resources be shared among
several guests; virtualization also allows aggregation, the opposite
process, in which several hosts are presented as a single virtual
resource.

Types of Virtualization
1. Application Virtualization
2. Network Virtualization
3. Desktop Virtualization
4. Storage Virtualization
5. Server Virtualization
6. Data virtualization

1. Application Virtualization: This lets you use an application on your
local device while it is actually hosted on a remote server. Your personal
data and the app's settings are stored on the server, but you can still run it
locally over the internet. It is useful if you need to work with multiple
versions of the same software. Common examples include hosted and
packaged apps.
2. Network Virtualization: This allows multiple virtual networks to run on
the same physical network, each operating independently. You can
quickly set up virtual switches, routers, firewalls, and VPNs, making
network management more flexible and efficient.

3. Desktop Virtualization: With desktop virtualization, your operating
system is stored on a server and can be accessed from anywhere on any
device. It’s great for users who need flexibility, as it simplifies software
updates and provides portability.
4. Storage Virtualization: This combines storage from different servers
into a single system, making it easier to manage. It ensures smooth
performance and efficient operations even when the underlying hardware
changes or fails.
5. Server Virtualization: This splits a physical server into multiple virtual
servers, each functioning independently. It helps improve performance,
cut costs and makes tasks like server migration and energy management
easier.


6. Data Virtualization: This brings data from different sources together in
one place without needing to know where or how it is stored. It creates a
unified view of the data, which can be accessed remotely via cloud
services. Companies like Oracle and IBM offer solutions for this.
Cloud Computing vs. Virtualization

1. Cloud computing is used to provide pooled, automated resources that
can be accessed on demand, while virtualization is used to create
various simulated environments from a physical hardware system.
2. Cloud computing setup is tedious and complicated, while virtualization
setup is simple by comparison.
3. Cloud computing is highly scalable, while virtualization is less scalable.
4. Cloud computing is very flexible, while virtualization is less flexible.
5. For disaster recovery, cloud computing relies on multiple machines,
while virtualization relies on a single peripheral device.
6. In cloud computing the workload is stateless, while in virtualization the
workload is stateful.
7. The total cost of cloud computing is higher than that of virtualization.
8. Cloud computing requires a lot of dedicated hardware, while in
virtualization a single dedicated hardware unit can do a great job.
9. Cloud computing provides practically unlimited storage space, while in
virtualization storage space depends on physical server capacity.
10. Cloud computing is of two types: public cloud and private cloud.
Virtualization is of two types: hardware virtualization and application
virtualization.
11. In cloud computing, configuration is image-based; in virtualization, it
is template-based.
12. In cloud computing we utilize the entire server capacity and the
servers are consolidated, while in virtualization servers are provided
on demand.
13. In cloud computing, pricing follows a pay-as-you-go model with billing
based on consumption; in virtualization, pricing depends entirely on
infrastructure costs.

Pros of Virtualization
 Efficient Hardware Utilization: With virtualization, hardware is used
efficiently by both the user and the cloud service provider. The user's
need for physical hardware decreases, which lowers cost; from the
service provider's point of view, hardware virtualization reduces the
amount of hardware needed on the vendor side.
 High Availability: One of the main benefits of virtualization is that it
provides advanced features that keep virtual instances available at
all times.
 Efficient and Easy Disaster Recovery: With virtualization, data
recovery, backup, and duplication become very easy. In the
traditional approach, if a server is damaged in a disaster, there is
little assurance of data recovery. With virtualization tools, real-time
backup, recovery, and mirroring become easy tasks and give much
stronger assurance against data loss.
 Energy Savings: Moving from physical servers to virtual servers
reduces the number of servers, so monthly power and cooling costs
decrease, saving money as well.
 Quick and Easy Setup: Traditionally, setting up physical systems and
servers is very time-consuming: purchase them in bulk, wait for
shipment, set up the machines, then spend more time installing the
required software. With virtualization, the entire process takes far
less time, making setup much more productive.
 Easy Cloud Migration: Companies that have already invested heavily
in servers often hesitate to shift to the cloud, but doing so is usually
more cost-effective: the data on their servers can be migrated easily
to a cloud server, saving on maintenance charges, power
consumption, cooling costs, the cost of a server maintenance
engineer, and so on.
 Resource Optimization: Virtualization allows efficient utilization of
physical hardware by running multiple virtual machines (VMs) on a
single physical server. This consolidation leads to cost savings in
hardware, power, cooling, and space.
Cons of Virtualization
 High Initial Investment: While virtualization reduces costs in the long
run, the initial setup costs for storage and servers can be higher than
for a traditional setup.
 Complexity: Managing virtualized environments can be complex,
especially as the number of VMs increases.
 Security Risks: Virtualization introduces additional layers, which may
pose security risks if not properly configured and monitored.
 Learning New Infrastructure: As an organization shifts from servers
to the cloud, it needs skilled staff who can work with the cloud easily.
It must either hire new IT staff with the relevant skills or train existing
staff, which increases the company's costs.
 Data at Risk: Running virtual instances on shared resources means
data is hosted on a third party's infrastructure, which leaves it
vulnerable; an attacker may target the data or attempt unauthorized
access. Without a security solution, the data is under threat.

Data Virtualization
Data virtualization is used to combine data from different sources
into a single, unified view without the need to move or store the
data anywhere else. It works by running queries across various
data sources and pulling the results together in memory.
To make things easier, it adds a layer that hides the complexity of
how the data is stored. This means users can access and analyze
data directly from its source in a seamless way, thanks to
specialized tools.
Working of Data Virtualization
Data virtualization works in the following manner:
1. Data Abstraction
The process starts by pulling data from different sources—like
databases, cloud storage or APIs—and combining it into a single
virtual layer. This layer makes everything look unified and easy to
access without worrying about where the data lives.
2. Data Integration
Instead of copying or moving data, the platform integrates it. It
combines data from various systems into a single view, so you
can work with it all in one place, even if it’s coming from
completely different sources.
3. Querying and Transformation
Users can query the data using familiar tools like SQL or APIs. The
platform handles any transformations or joins in real time, pulling
everything together seamlessly—even if the data comes from
multiple systems.
4. Real-time Access
One of the best things about data virtualization is that you get
real-time or near-real-time access to up-to-date information. You
don’t have to wait for batch processes to refresh the data
because the system fetches it directly from the source.
5. Data Governance and Security
All access is managed centrally, so it’s easy to control who can
see what. Security and compliance rules are applied across all
data sources, ensuring sensitive information is protected while
giving the right people access to what they need.
6. Performance Optimization
To keep things running smoothly, the platform uses techniques
like caching frequently used data, optimizing queries, and
creating virtual indexes. This ensures that even complex queries
are fast and don’t slow down the source systems.
7. User Access
Finally, the data is made available through familiar tools like
Tableau, Power BI, or even custom applications. Users don’t need
to worry about the data’s location or structure—they just get a
clean, unified view that’s ready to use.
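The steps above can be sketched with Python's built-in sqlite3 module, using two tables to stand in for two separate source systems and a view as the virtual, unified layer. This is a toy illustration of the idea, not a real data-virtualization platform:

```python
import sqlite3

# Two "sources" (tables standing in for separate systems). Nothing is
# copied into the unified layer; the view below is computed at query time,
# like the real-time join step described above.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (customer_id INTEGER, amount REAL)")
con.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
con.executemany("INSERT INTO sales VALUES (?, ?)", [(1, 120.0), (2, 75.5)])
con.executemany("INSERT INTO customers VALUES (?, ?)",
                [(1, "Asha"), (2, "Ravi")])

# The virtual, unified view users actually query -- the abstraction layer:
con.execute("""
    CREATE VIEW customer_sales AS
    SELECT c.name, s.amount
    FROM sales s JOIN customers c ON c.id = s.customer_id
""")

rows = con.execute(
    "SELECT name, amount FROM customer_sales ORDER BY name").fetchall()
print(rows)  # [('Asha', 120.0), ('Ravi', 75.5)]
```

Users query `customer_sales` with ordinary SQL and never need to know that the underlying rows live in two different places, which is the core promise of data virtualization.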
Features of Data Virtualization
 Accelerated time-to-market from data to final product:
Virtual data objects can be created considerably more quickly
than with existing ETL tools and databases, since they contain
integrated data. Customers can therefore get the information
they require more easily.
 One-stop security: The contemporary data architecture
makes it feasible to access data from a single location. Data
can be secured down to the row and column level thanks to the
virtual layer that grants access to all organizational data.
Authorizing numerous user groups on the same virtual dataset
is feasible using data masking, anonymization, and
pseudonymization.
 Explicitly combining data from different sources: The
virtual data layer makes it simple to incorporate distributed
data from data warehouses, big-data platforms, data lakes,
cloud solutions, and machine learning into the data objects
users require.
 Flexibility: Data virtualization makes it feasible to react
quickly to new developments in various sectors, up to ten
times faster than conventional ETL and data-warehousing
methods. By providing integrated virtual data objects, data
virtualization lets you respond instantly to fresh data requests.
This removes the need to copy data to various data layers; the
data simply becomes virtually accessible.

Layers of Data Virtualization


Following are the working layers in data virtualization
architecture.
1. Connection Layer
This layer is all about connecting the virtualization platform to the
different data sources you need. Whether the data is structured,
like databases, or unstructured, like files or APIs, this layer
handles it.
 It connects to databases like MySQL, Oracle and MongoDB, as
well as cloud storage services like AWS or Azure.
 It can also handle APIs (REST or SOAP) and even semi-
structured or unstructured data like JSON, XML or plain files.
 Basically, it builds bridges to all the places where your data
lives, so you don’t have to physically move or copy anything.
2. Abstraction Layer
This is where the magic happens. The abstraction layer creates a
virtual version of your data, making it look clean and unified, no
matter how messy or complex the sources are.
 Instead of showing you the raw data tables or formats, this
layer simplifies things by creating virtual views.
 For example, if your data is spread across multiple systems,
this layer can merge it into one logical view. Let’s say you have
sales data in one database and customer data in another—this
layer can create a virtual table that combines them, so it looks
like a single source.
 It doesn’t move or store the data—it just provides a seamless,
virtual representation.
3. Consumption Layer
This is the user-facing layer that provides access to the unified
data. It’s designed to make it easy for tools, applications and
people to work with the data.
 This layer makes the virtualized data available through tools
and methods that users are already familiar with.
 For instance, you can query the data using SQL or access it
programmatically through APIs like REST or SOAP.
 It also supports integration with tools like Tableau, Power BI, or
Excel so you can use the data for dashboards, reports, or
analytics.

Hardware Based Virtualization


A platform-virtualization approach that allows efficient full
virtualization with the help of hardware capabilities, primarily
from the host processor, is referred to in computing as hardware-
based virtualization. Full virtualization is used to simulate a
complete hardware environment, or virtual machine, in which an
unchanged guest operating system (using the same instruction
set as the host machine) executes in complete isolation.

This differs from the logical layering of operating-system-based
virtualization, in which the virtualization software is first installed
into a full host operating system and is subsequently used to
generate virtual machines.
Hardware-level virtualization is an abstract execution
environment, in terms of computer hardware, in which a guest OS
can run. Here an operating system represents the guest, the
physical computer hardware represents the host, its emulation
represents a virtual machine, and the hypervisor represents the
virtual machine manager. Hardware-based virtualization is
generally more efficient because the virtual machines can
interact with the hardware without any intermediary action by
the host operating system. A fundamental component of
hardware virtualization is the hypervisor, or virtual machine
manager (VMM).
Basically, there are two types of hypervisors, described below:

 Type-I hypervisors:
Type-I hypervisors run directly on top of the hardware. They
take the place of an operating system and interact directly
with the ISA interface exposed by the underlying hardware,
which they replicate so that guest operating systems can be
managed. Because it runs natively on hardware, this sort of
hypervisor is also known as a native virtual machine.
 Type-II hypervisors:
Type-II hypervisors require the assistance of an operating
system to deliver virtualization services. They are applications
managed by the operating system, interacting with it through
the ABI and simulating the ISA of virtual hardware for guest
operating systems. Because it is hosted within an operating
system, this form of hypervisor is also known as a hosted
virtual machine.
A hypervisor has a simple user interface and needs little storage
space. It exists as a thin layer of software and performs
hardware-management functions to establish a virtualization
management layer. Device drivers and support software are
optimized for provisioning virtual machines, while many standard
operating system functions are not implemented. Essentially, this
type of virtualization system is used to reduce the performance
overhead inherent in coordinating multiple VMs that interact with
the same hardware platform.
Hardware compatibility is another challenge for hardware-based
virtualization. The virtualization layer interacts directly with the
host hardware, which means that all the associated drivers and
support software must be compatible with the hypervisor. Device
drivers available to other operating systems may not be available
to hypervisor platforms. Moreover, host management and
administration features may not offer the range of advanced
functions common to operating systems.
Note: Hyper-V communicates with the underlying hardware
mostly through vendor-supplied drivers.

Features of hardware-based virtualization:

Isolation: Hardware-based virtualization provides strong isolation
between virtual machines, which means that any problems in one
virtual machine will not affect other virtual machines running on
the same physical host.
Security: Hardware-based virtualization provides a high level of
security as each virtual machine is isolated from the host
operating system and other virtual machines, making it difficult
for malicious code to spread from one virtual machine to another.
Performance: Hardware-based virtualization provides good
performance as the hypervisor has direct access to the physical
hardware, which means that virtual machines can achieve close
to native performance.
Resource allocation: Hardware-based virtualization allows for
flexible allocation of hardware resources such as CPU, memory,
and I/O bandwidth to virtual machines.
Snapshot and migration: Hardware-based virtualization allows
for the creation of snapshots, which can be used for backup and
recovery purposes. It also allows for live migration of virtual
machines between physical hosts, which can be used for load
balancing and other purposes.
Support for multiple operating systems: Hardware-based
virtualization supports multiple operating systems, which allows
for the consolidation of workloads onto fewer physical machines,
reducing hardware and maintenance costs.
Compatibility: Hardware-based virtualization is compatible with
most modern operating systems, making it easy to integrate into
existing IT infrastructure.
Advantages of hardware-based virtualization –
It reduces the maintenance overhead of paravirtualization because
it reduces (ideally, eliminates) the modifications required in the
guest operating system. It also makes it significantly easier to
attain enhanced performance. A practical benefit of hardware-based
virtualization has been cited by VMware engineers and Virtual Iron.
Disadvantages of hardware-based virtualization –
Hardware-based virtualization requires explicit support in the host
CPU, which may not be available on all x86/x86_64 processors. A
“pure” hardware-based virtualization approach, running an entirely
unmodified guest operating system, involves many VM traps, and the
resulting rapid increase in CPU overhead limits the scalability and
efficiency of server consolidation. This performance hit can be
mitigated by the use of para-virtualized drivers; the combination
has been called “hybrid virtualization”.
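On Linux, the CPU extensions this support depends on are advertised in `/proc/cpuinfo`: Intel VT-x appears as the `vmx` flag and AMD-V as the `svm` flag. A minimal sketch of such a check (the fallback flags string is purely illustrative):

```python
def virtualization_support(cpuinfo_text):
    """Return which hardware virtualization extension the CPU flags
    advertise: Intel VT-x ('vmx'), AMD-V ('svm'), or None if the CPU
    offers no hardware support and a software approach is needed."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            if "vmx" in flags:
                return "Intel VT-x (vmx)"
            if "svm" in flags:
                return "AMD-V (svm)"
    return None

if __name__ == "__main__":
    # On a Linux host, read the real CPU flags; otherwise use a sample.
    try:
        with open("/proc/cpuinfo") as f:
            text = f.read()
    except OSError:
        text = "flags\t: fpu vme de pse tsc msr sse2"
    print(virtualization_support(text))
```

Tools such as KVM refuse to start hardware-accelerated guests when neither flag is present, which is exactly the limitation described above.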

Server Virtualization


Server Virtualization is a core part of Cloud Computing. The term
"cloud computing" combines two words: cloud, meaning the Internet,
and computing, meaning solving problems with the help of computers.
In the digital world, computing is tied to CPU and RAM. Now consider
a situation: you are using macOS on your machine, but a particular
application for your project runs only on Windows. You can either buy
a new machine running Windows or create a virtual environment in
which Windows can be installed and used. The second option is better
because of its lower cost and easier implementation. This scenario is
called Virtualization: a virtual CPU, RAM, NIC, and other resources
are provided to the OS, which it needs in order to run. These
resources are virtually provided and controlled by an application
called a Hypervisor. The new OS running on the virtual hardware
resources is collectively called a Virtual Machine (VM).
Now migrate this concept to data centers, where many servers
(machines with fast CPUs, large RAM, and enormous storage) are
available. The enterprise owning the data center provides the
resources requested by customers as per their needs. The data center
holds all the resources, and on a user's request a particular amount
of CPU, RAM, NIC capacity, and storage with the preferred OS is
provided. This concept of virtualization, in which services are
requested and provided over the Internet, is called Server
Virtualization.

Figure – Server Virtualization


To implement Server Virtualization, a hypervisor is installed on the
server; it manages and allocates the host hardware to each virtual
machine. This hypervisor sits above the server hardware and regulates
the resources of each VM. A user can increase or decrease resources,
or delete an entire VM, as needed. A server with VMs created on it in
this way is said to be virtualized, and the concept of users
controlling these VMs over the Internet is called Cloud Computing.
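The allocation just described can be sketched as a toy resource manager. The class and method names (`Hypervisor`, `create_vm`, `delete_vm`) are invented for illustration and do not correspond to any vendor's API:

```python
class Hypervisor:
    """Toy model of a hypervisor carving host CPU and RAM into VMs."""

    def __init__(self, cpus, ram_gb):
        self.free = {"cpus": cpus, "ram_gb": ram_gb}
        self.vms = {}

    def create_vm(self, name, cpus, ram_gb):
        # Refuse the request if the host lacks spare capacity.
        if cpus > self.free["cpus"] or ram_gb > self.free["ram_gb"]:
            raise ValueError("insufficient host resources")
        self.free["cpus"] -= cpus
        self.free["ram_gb"] -= ram_gb
        self.vms[name] = {"cpus": cpus, "ram_gb": ram_gb}

    def delete_vm(self, name):
        # Deleting a VM returns its resources to the host pool.
        vm = self.vms.pop(name)
        self.free["cpus"] += vm["cpus"]
        self.free["ram_gb"] += vm["ram_gb"]

# A user requests resources, then releases them when no longer needed.
hv = Hypervisor(cpus=16, ram_gb=64)
hv.create_vm("web", cpus=4, ram_gb=8)
hv.create_vm("db", cpus=8, ram_gb=32)
print(hv.free)  # remaining host capacity after both VMs
hv.delete_vm("web")
print(hv.free)  # capacity restored after deleting one VM
```

A real hypervisor additionally schedules the VMs onto physical CPUs and mediates their I/O, but the accounting pattern is the same.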
Advantages of Server Virtualization:
 Each server in server virtualization can be restarted separately without
affecting the operation of other virtual servers.
 Server virtualization lowers the cost of hardware by dividing a single
server into several virtual private servers.
 One of the major benefits of server virtualization is disaster recovery. In
server virtualization, data may be stored and retrieved from any
location and moved rapidly and simply from one server to another.
 It enables users to keep their private information in the data centers.
Disadvantages of Server Virtualization:
 The major drawback of server virtualization is that all websites that are
hosted by the server will cease to exist if the server goes offline.
 The effectiveness of virtualized environments is difficult to measure.
 It consumes a significant amount of RAM.
 Setting it up and keeping it up are challenging.
 Virtualization is not supported for many essential databases and apps.
Types of Server Virtualization in Computer Network
Server Virtualization is the partitioning of a physical server into
a number of small virtual servers, each running its own operating
system. These operating systems are known as guest operating
systems. These are running on another operating system known
as the host operating system. Each guest running in this manner
is unaware of any other guests running on the same host.
Different virtualization techniques are employed to achieve this
transparency.
Types of Server virtualization:
1. Hypervisor –
A Hypervisor or VMM (virtual machine monitor) is a layer that
exists between the operating system and hardware. It provides
the necessary services and features for the smooth running of
multiple operating systems.
It identifies traps, responds to privileged CPU instructions, and
handles queuing, dispatching, and returning the hardware
requests. A host operating system also runs on top of the
hypervisor to administer and manage the virtual machines.
2. Para Virtualization –
It is based on the hypervisor. Much of the emulation and trapping
overhead of software-implemented virtualization is handled in this
model. The guest operating system is modified and recompiled
before installation into the virtual machine.
Due to the modification in the Guest operating system,
performance is enhanced as the modified guest operating system
communicates directly with the hypervisor and emulation
overhead is removed.
Example: Xen primarily uses Paravirtualization, where a
customized Linux environment is used to support the
administrative environment known as domain 0.

Advantages:
 Easier
 Enhanced Performance
 No emulation overhead
Limitations:
 Requires modification to a guest operating system
3. Full Virtualization –
It is very much similar to Para virtualization. It can emulate the
underlying hardware when necessary. The hypervisor traps the
machine operations used by the operating system to perform I/O
or modify the system status. After trapping, these operations are
emulated in software and the status codes are returned very
much consistent with what the real hardware would deliver. This
is why an unmodified operating system is able to run on top of the
hypervisor.
Example: VMWare ESX server uses this method. A customized
Linux version known as Service Console is used as the
administrative operating system. It is not as fast as Para
virtualization.
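The trap-and-emulate loop described above can be sketched in a few lines. The instruction names, the `PRIVILEGED` set, and the status codes are all invented for illustration:

```python
# Toy trap-and-emulate loop: privileged guest operations trap to the
# hypervisor, which emulates them in software and returns a status code
# consistent with what the real hardware would deliver. Everything else
# runs directly, which is why an unmodified guest OS can work.
PRIVILEGED = {"OUT_PORT", "HALT", "SET_CR3"}

def run_guest(instructions):
    log = []
    for op in instructions:
        if op in PRIVILEGED:
            # Trap: the hypervisor emulates the operation in software.
            log.append(("trapped+emulated", op, "OK"))
        else:
            # Unprivileged instructions execute directly on the CPU.
            log.append(("direct", op, "OK"))
    return log

for entry in run_guest(["ADD", "OUT_PORT", "MOV", "HALT"]):
    print(entry)
```

The cost of full virtualization comes from the "trapped+emulated" path: every privileged operation pays a software round trip, which is why it is slower than paravirtualization.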

Advantages:
 No modification to the Guest operating system is required.
Limitations:
 Complex
 Slower due to emulation
 Installation of the new device driver is difficult.
4. Hardware-Assisted Virtualization –
It is similar to Full Virtualization and Paravirtualization in terms of
operation except that it requires hardware support. Much of the
hypervisor overhead due to trapping and emulating I/O operations
and status instructions executed within a guest OS is dealt with
by relying on the hardware extensions of the x86 architecture.
Unmodified OS can be run as the hardware support for
virtualization would be used to handle hardware access requests,
privileged and protected operations, and to communicate with the
virtual machine.
Examples: AMD-V (codenamed Pacifica) and Intel VT (codenamed
Vanderpool) provide hardware support for virtualization.
Advantages:
 No modification to a guest operating system is required.
 Very less hypervisor overhead
Limitations:
 Hardware support Required
5. Kernel level Virtualization –
Instead of using a hypervisor, it runs a separate version of the
Linux kernel and sees the associated virtual machine as a user-
space process on the physical host. This makes it easy to run
multiple virtual machines on a single host. A device driver is used
for communication between the main Linux kernel and the virtual
machine.
Processor support is required for virtualization (Intel VT or AMD-
V). A slightly modified QEMU process is used as the display and
execution containers for the virtual machines. In many ways,
kernel-level virtualization is a specialized form of server
virtualization.
Examples: User-Mode Linux (UML) and Kernel-based Virtual
Machine (KVM)

Advantages:
 No special administrative software is required.
 Very less overhead
Limitations:
 Hardware Support Required
6. System Level or OS Virtualization –
Runs multiple but logically distinct environments on a single
instance of the operating system kernel. Also called shared kernel
approach as all virtual machines share a common kernel of host
operating system. It is based on the change-root ("chroot")
concept. chroot is used during bootup: the kernel uses an initial
root filesystem to load drivers and perform other early-stage
system initialization tasks; it then switches to another root
filesystem, using the chroot mechanism to mount an on-disk file
system as its final root filesystem, and continues system
initialization and configuration within that file system.
The chroot mechanism of system-level virtualization is an
extension of this concept. It enables the system to start virtual
servers with their own set of processes that execute relative to
their own filesystem root directories.
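The effect of chroot on path resolution can be illustrated with a small helper. The real system call is `chroot` (exposed in Python as `os.chroot`, root-only); `resolve_in_root` below is a hypothetical helper that only shows how paths inside a virtual server map onto the host filesystem:

```python
import posixpath

def resolve_in_root(new_root, guest_path):
    """Map a path seen inside a chroot'ed virtual server onto the host.

    After chroot(new_root), '/' inside the guest refers to new_root on
    the host, so the guest cannot name files outside its own subtree.
    (Illustrative only; the real mechanism is the chroot system call.)
    """
    # Strip the leading '/' so join() keeps new_root as the prefix.
    return posixpath.normpath(posixpath.join(new_root, guest_path.lstrip("/")))

# A virtual server rooted at /srv/vps1 sees its own /etc/passwd:
print(resolve_in_root("/srv/vps1", "/etc/passwd"))  # /srv/vps1/etc/passwd
```

Each virtual server gets its own `new_root` subtree, which is exactly the "processes that execute relative to their own filesystem root directories" idea above.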
The main difference between system-level and server
virtualization is whether different operating systems can be run
on different virtual systems. If all virtual servers must share the
same copy of the operating system it is system-level virtualization
and if different servers can have different operating systems
( including different versions of a single operating system) it is
server virtualization.
Examples: FreeVPS, Linux Vserver, and OpenVZ are some
examples.

Advantages:
 Significantly more lightweight than complete machines (which
include a kernel)
 Can host many more virtual servers
 Enhanced Security and isolation
 Virtualizing an operating system usually has little to no
overhead.
 Live migration is possible with OS Virtualization.
 It can also leverage dynamic container load balancing between
nodes and clusters.
 On OS virtualization, the file-level copy-on-write (CoW) method
is possible, making it easier to back up data, more space-
efficient, and easier to cache than block-level copy-on-write
schemes.

Network Virtualization in Cloud Computing


Prerequisite – Virtualization and its Types in Cloud Computing
Network Virtualization is a process of logically grouping physical
networks and making them operate as single or multiple
independent networks called Virtual Networks.
General Architecture Of Network Virtualization

Tools for Network Virtualization:


1. Physical switch OS –
The operating system of the physical switch, which must provide
network virtualization functionality.
2. Hypervisor –
A hypervisor that uses built-in networking features or third-party
software to provide network virtualization functionality.
The basic functionality of the OS is to give the application or the
executing process a simple set of instructions. System calls
generated by the OS and executed through the libc library are
comparable to the service primitives provided at the interface
between the application and the network through the SAP (Service
Access Point).
The hypervisor is used to create virtual switches and to configure
virtual networks on them. Third-party software can be installed onto
the hypervisor, replacing its native networking functionality. A
hypervisor allows us to have various VMs all working optimally on a
single piece of computer hardware.
Functions of Network Virtualization :
 It enables the functional grouping of nodes in a virtual network.
 It enables the virtual network to share network resources.
 It allows communication between nodes in a virtual network
without routing of frames.
 It restricts management traffic.
 It enforces routing for communication between virtual
networks.
Network Virtualization in Virtual Data Center :
1. Physical Network
 Physical components: Network adapters, switches, bridges,
repeaters, routers and hubs.
 Grants connectivity among physical servers running a
hypervisor, between physical servers and storage systems and
between physical servers and clients.
2. VM Network
 Consists of virtual switches.
 Provides connectivity to hypervisor kernel.
 Connects to the physical network.
 Resides inside the physical server.

Network Virtualization In VDC

Advantages of Network Virtualization :


Improves manageability –
 Grouping and regrouping of nodes are eased.
 Configuration of VM is allowed from a centralized management
workstation using management software.
Reduces CAPEX –
 The requirement to set up separate physical networks for
different node groups is reduced.
Improves utilization –
 Multiple VMs are enabled to share the same physical network
which enhances the utilization of network resource.
Enhances performance –
 Network broadcast is restricted and VM performance is
improved.
Enhances security –
 Sensitive data is isolated from one VM to another VM.
 Access to nodes is restricted in a VM from another VM.
Disadvantages of Network Virtualization :
 The network needs to be managed in the abstract, rather than as physical devices.
 It needs to coexist with physical devices in a cloud-integrated
hybrid environment.
 Increased complexity.
 Upfront cost.
 Possible learning curve.
Examples of Network Virtualization :
Virtual LAN (VLAN) –
 The performance and speed of busy networks can be improved
by VLAN.
 VLAN can simplify additions or any changes to the network.
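The VLAN idea above can be made concrete with a toy membership table: ports in the same VLAN share a broadcast domain and exchange frames directly, while traffic between VLANs must be routed. The port names and VLAN IDs below are invented:

```python
# Toy VLAN membership table: switch port -> VLAN ID.
port_vlan = {"p1": 10, "p2": 10, "p3": 20, "p4": 20}

def same_broadcast_domain(port_a, port_b):
    """Frames flow directly only between ports on the same VLAN;
    communication between different VLANs requires a router."""
    return port_vlan[port_a] == port_vlan[port_b]

print(same_broadcast_domain("p1", "p2"))  # True: both on VLAN 10
print(same_broadcast_domain("p1", "p3"))  # False: routing required
```

Real switches implement this by tagging each Ethernet frame with its VLAN ID (IEEE 802.1Q) and dropping frames at ports belonging to other VLANs.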
Network Overlays –
 A framework is provided by an encapsulation protocol called
VXLAN for overlaying virtualized layer 2 networks over layer 3
networks.
 The Generic Network Virtualization Encapsulation (GENEVE)
protocol provides a new approach to encapsulation, designed to
provide control-plane independence between the endpoints of
the tunnel.
Network Virtualization Platform: VMware NSX –
 VMware NSX Data Center transports the components of
networking and security such as switching, firewalling and
routing that are defined and consumed in software.
 It transports the operational model of a virtual machine (VM)
for the network.
Applications of Network Virtualization :
 Network virtualization may be used in the development of
application testing to mimic real-world hardware and system
software.
 It helps us to integrate several physical networks into a single
network or separate single physical networks into multiple
analytical networks.
 In the field of application performance engineering, network
virtualization allows the simulation of connections between
applications, services, dependencies, and end-users for
software testing.
 It helps us to deploy applications in a quicker time frame,
thereby supporting a faster go-to-market.
 Network virtualization helps the software testing teams to
derive actual results with expected instances and congestion
issues in a networked environment.

Operating system based Virtualization


Prerequisites – Types of Server Virtualization, Hardware based
Virtualization
Operating system-based Virtualization refers to an operating
system feature in which the kernel enables the existence of
multiple isolated user-space instances. The term also refers to
the installation of virtualization software on a pre-existing
operating system, which is then called the host operating system.
In this form of virtualization, a user installs the virtualization
software in the operating system of their machine like any other
program and uses this application to create and operate various
virtual machines. The virtualization software gives the user direct
access to any of the created virtual machines. Because the host OS
can provide the mandatory support for hardware devices, operating
system-based virtualization can avoid hardware compatibility issues
even when a hardware driver is not available to the virtualization
software.
Virtualization software is able to convert hardware IT resources
that require unique software for operation into virtualized IT
resources. As the host OS is a complete operating system in itself,
many OS-based services are available, and its organizational
management and administration tools can be utilized to manage the
virtualization host.

Some major operating system-based services are mentioned below:
1. Backup and Recovery.
2. Security Management.
3. Integration to Directory Services.
Various major operations of Operating System Based
Virtualization are described below:
1. Hardware capabilities can be employed, such as the network
connection and CPU.
2. Connected peripherals with which it can interact, such as a
webcam, printer, keyboard, or Scanners.
3. Data that can be read or written, such as files, folders, and
network shares.
The operating system may have the capability to allow or deny
access to such resources based on the program that requests them
and the user account in whose context it runs. The OS may also hide
these resources, so that when a computer program enumerates them,
they do not appear in the results. Nevertheless, from a programming
perspective, the computer program has interacted with those
resources and the operating system has mediated that interaction.
With operating-system-virtualization or containerization, it is
probable to run programs within containers, to which only parts of
these resources are allocated. A program that is expected to
perceive the whole computer, once run inside a container, can
only see the allocated resources and believes them to be all that
is available. Several containers can be formed on each operating
system, to each of which a subset of the computer’s resources is
allocated. Each container may include many computer programs.
These programs may run parallel or distinctly, even interrelate
with each other.
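The idea that a program inside a container "can only see the allocated resources and believes them to be all that is available" can be modelled in a few lines. The `Container` class and the resource names are illustrative, not a real container runtime:

```python
# The host's full resource set (illustrative values).
HOST_RESOURCES = {"cpus": 8, "ram_gb": 32,
                  "files": ["/data/a", "/data/b", "/secret"]}

class Container:
    """A container perceives only the subset of host resources
    allocated to it, and treats that subset as the whole machine."""

    def __init__(self, cpus, ram_gb, visible_files):
        self.view = {
            # A container can never see more than the host actually has.
            "cpus": min(cpus, HOST_RESOURCES["cpus"]),
            "ram_gb": min(ram_gb, HOST_RESOURCES["ram_gb"]),
            # Only the allocated files appear in the container's view.
            "files": [f for f in HOST_RESOURCES["files"]
                      if f in visible_files],
        }

c = Container(cpus=2, ram_gb=4, visible_files={"/data/a"})
print(c.view)  # the program inside sees 2 CPUs, 4 GB, and only /data/a
```

Real container runtimes achieve the same effect with kernel mechanisms: cgroups limit CPU and memory, while namespaces hide the filesystem, process table, and network of the host.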

Features of operating system-based virtualization are:

 Resource isolation: Operating system-based virtualization
provides a high level of resource isolation, which allows each
container to have its own set of resources, including CPU,
memory, and I/O bandwidth.
 Lightweight: Containers are lightweight compared to
traditional virtual machines as they share the same host
operating system, resulting in faster startup and lower
resource usage.
 Portability: Containers are highly portable, making it easy to
move them from one environment to another without needing
to modify the underlying application.
 Scalability: Containers can be easily scaled up or down based
on the application requirements, allowing applications to be
highly responsive to changes in demand.
 Security: Containers provide a high level of security by
isolating the containerized application from the host operating
system and other containers running on the same system.
 Reduced Overhead: Containers incur less overhead than
traditional virtual machines, as they do not need to emulate a
full hardware environment.
 Easy Management: Containers are easy to manage, as they
can be started, stopped, and monitored using simple
commands.
Operating system-based virtualization can raise demands and
problems related to performance overhead, such as:
1. The host operating system employs CPU, memory, and other
hardware IT resources.
2. Hardware-related calls from guest operating systems need to
traverse numerous layers to and from the hardware, which
degrades overall performance.
3. Licenses are frequently essential for host operating systems, in
addition to individual licenses for each of their guest operating
systems.
Advantages of Operating System-Based Virtualization:
 Resource Efficiency: Operating system-based virtualization
allows for greater resource efficiency as containers do not need
to emulate a complete hardware environment, which reduces
resource overhead.
 High Scalability: Containers can be quickly and easily scaled
up or down depending on the demand, which makes it easy to
respond to changes in the workload.
 Easy Management: Containers are easy to manage as they can be
managed through simple commands, which makes it easy to
deploy and maintain large numbers of containers.
 Reduced Costs: Operating system-based virtualization can
significantly reduce costs, as it requires fewer resources and
infrastructure than traditional virtual machines.
 Faster Deployment: Containers can be deployed quickly,
reducing the time required to launch new applications or
update existing ones.
 Portability: Containers are highly portable, making it easy to
move them from one environment to another without requiring
changes to the underlying application.
Disadvantages of Operating System-Based Virtualization:
 Security: Operating system-based virtualization may pose
security risks as containers share the same host operating
system, which means that a security breach in one container
could potentially affect all other containers running on the
same system.
 Limited Isolation: Containers may not provide complete
isolation between applications, which can lead to performance
degradation or resource contention.
 Complexity: Operating system-based virtualization can be
complex to set up and manage, requiring specialized skills and
knowledge.
 Dependency Issues: Containers may have dependency
issues with other containers or the host operating system,
which can lead to compatibility issues and hinder deployment.
 Limited Hardware Access: Containers may have limited
access to hardware resources, which can limit their ability to
perform certain tasks or applications that require direct
hardware access.

Cloud Based Services



Cloud Computing can be defined as the practice of using a network of


remote servers hosted on the Internet to store, manage, and process
data, rather than a local server or a personal computer. Companies
offering such kinds of cloud computing services are called cloud
providers and typically charge for cloud computing services based on
usage. Grids and clusters are the foundations for cloud computing.
Types of Cloud Computing
Most cloud computing services fall into five broad categories:
1. Software as a service (SaaS)
2. Platform as a service (PaaS)
3. Infrastructure as a service (IaaS)
4. Anything/Everything as a service (XaaS)
5. Function as a Service (FaaS)
These are sometimes called the cloud computing stack because they
are built on top of one another. Knowing what they are and how they
differ makes it easier to accomplish your goals. These abstraction
layers can also be viewed as a layered architecture in which
services of a higher layer are composed from services of the
underlying layer, e.g., SaaS can be built on top of PaaS or IaaS.

Software as a Service(SaaS)

Software-as-a-Service (SaaS) is a way of delivering services and


applications over the Internet. Instead of installing and maintaining
software, we simply access it via the Internet, freeing ourselves from the
complex software and hardware management. It removes the need to
install and run applications on our own computers or in the data centers
eliminating the expenses of hardware as well as software maintenance.
SaaS provides a complete software solution that you purchase on a pay-
as-you-go basis from a cloud service provider. Most SaaS applications
can be run directly from a web browser without any downloads or
installations required. The SaaS applications are sometimes called Web-
based software, on-demand software, or hosted software.
Advantages of SaaS
1. Cost-Effective: Pay only for what you use.
2. Reduced time: Users can run most SaaS apps directly from their web
browser without needing to download and install any software. This
reduces the time spent in installation and configuration and can reduce
the issues that can get in the way of the software deployment.
3. Accessibility: We can Access app data from anywhere.
4. Automatic updates: Rather than purchasing new software, customers
rely on a SaaS provider to automatically perform the updates.
5. Scalability: It allows the users to access the services and features on-
demand.
The various companies providing Software as a Service are Cloud9
Analytics, Salesforce.com, CloudSwitch, Microsoft Office 365,
BigCommerce, Eloqua, Dropbox, and CloudTran.
Disadvantages of Saas :
1. Limited customization: SaaS solutions are typically not as
customizable as on-premises software, meaning that users may have
to work within the constraints of the SaaS provider’s platform and may
not be able to tailor the software to their specific needs.
2. Dependence on internet connectivity: SaaS solutions are typically
cloud-based, which means that they require a stable internet
connection to function properly. This can be problematic for users in
areas with poor connectivity or for those who need to access the
software in offline environments.
3. Security concerns: SaaS providers are responsible for maintaining
the security of the data stored on their servers, but there is still a risk of
data breaches or other security incidents.
4. Limited control over data: SaaS providers may have access to a
user’s data, which can be a concern for organizations that need to
maintain strict control over their data for regulatory or other reasons.

Platform as a Service

PaaS is a category of cloud computing that provides a platform and


environment to allow developers to build applications and services over
the internet. PaaS services are hosted in the cloud and accessed by users
simply via their web browser.
A PaaS provider hosts the hardware and software on its own
infrastructure. As a result, PaaS frees users from having to install in-
house hardware and software to develop or run a new application. Thus,
the development and deployment of the application take
place independent of the hardware.
The consumer does not manage or control the underlying cloud
infrastructure including network, servers, operating systems, or storage,
but has control over the deployed applications and possibly configuration
settings for the application-hosting environment. To make it simple, take
the example of an annual day function, you will have two options either to
create a venue or to rent a venue but the function is the same.
Advantages of PaaS:
1. Simple and convenient for users: It provides much of the
infrastructure and other IT services, which users can access anywhere
via a web browser.
2. Cost-Effective: It charges for the services provided on a per-use basis
thus eliminating the expenses one may have for on-premises hardware
and software.
3. Efficiently managing the lifecycle: It is designed to support the
complete web application lifecycle: building, testing, deploying,
managing, and updating.
4. Efficiency: It allows for higher-level programming with reduced
complexity thus, the overall development of the application can be
more effective.
The various companies providing Platform as a service are Amazon Web
services Elastic Beanstalk, Salesforce, Windows Azure, Google App
Engine, cloud Bees and IBM smart cloud.
Disadvantages of Paas:
1. Limited control over infrastructure: PaaS providers typically manage
the underlying infrastructure and take care of maintenance and
updates, but this can also mean that users have less control over the
environment and may not be able to make certain customizations.
2. Dependence on the provider: Users are dependent on the PaaS
provider for the availability, scalability, and reliability of the platform,
which can be a risk if the provider experiences outages or other issues.
3. Limited flexibility: PaaS solutions may not be able to accommodate
certain types of workloads or applications, which can limit the value of
the solution for certain organizations.

Infrastructure as a Service

Infrastructure as a service (IaaS) is a service model that delivers


computer infrastructure on an outsourced basis to support various
operations. Typically IaaS is a service where infrastructure is provided as
outsourcing to enterprises such as networking equipment, devices,
database, and web servers.
It is also known as Hardware as a Service (HaaS). IaaS customers pay
on a per-user basis, typically by the hour, week, or month. Some
providers also charge customers based on the amount of virtual machine
space they use.
It simply provides the underlying operating systems, security, networking,
and servers for developing such applications, and services, and deploying
development tools, databases, etc.
Advantages of IaaS:
1. Cost-Effective: Eliminates capital expense and reduces ongoing cost
and IaaS customers pay on a per-user basis, typically by the hour,
week, or month.
2. Website hosting: Running websites using IaaS can be less expensive
than traditional web hosting.
3. Security: The IaaS Cloud Provider may provide better security than
your existing software.
4. Maintenance: There is no need to manage the underlying data center
or the introduction of new releases of the development or underlying
software. This is all handled by the IaaS Cloud Provider.
The various companies providing Infrastructure as a Service are Amazon
Web Services, BlueStacks, IBM, OpenStack, Rackspace, and VMware.
Disadvantages of laaS :
1. Limited control over infrastructure: IaaS providers typically manage
the underlying infrastructure and take care of maintenance and
updates, but this can also mean that users have less control over the
environment and may not be able to make certain customizations.
2. Security concerns: Users are responsible for securing their own data
and applications, which can be a significant undertaking.
3. Limited access: Cloud computing may not be accessible in certain
regions and countries due to legal policies.

Anything as a Service

It is also known as Everything as a Service. Most of the cloud service


providers nowadays offer anything as a service that is a compilation of all
of the above services including some additional services.
Advantages of XaaS:
1. Scalability: XaaS solutions can be easily scaled up or down to meet
the changing needs of an organization.
2. Flexibility: XaaS solutions can be used to provide a wide range of
services, such as storage, databases, networking, and software, which
can be customized to meet the specific needs of an organization.
3. Cost-effectiveness: XaaS solutions can be more cost-effective than
traditional on-premises solutions, as organizations only pay for the
services.
Disadvantages of XaaS:
1. Dependence on the provider: Users are dependent on the XaaS
provider for the availability, scalability, and reliability of the service,
which can be a risk if the provider experiences outages or other issues.
2. Limited flexibility: XaaS solutions may not be able to accommodate
certain types of workloads or applications, which can limit the value of
the solution for certain organizations.
3. Limited integration: XaaS solutions may not be able to integrate with
existing systems and data sources, which can limit the value of the
solution for certain organizations.
Function as a Service :
FaaS is a type of cloud computing service. It provides a platform for
its users or customers to develop, run, and deploy code or entire
applications as functions. It allows the user to develop and update
code at any time without worrying about maintaining the underlying
infrastructure; the developed code is executed in response to a
specific event. In this respect it is similar to PaaS.
FaaS is an event-driven execution model implemented in serverless
containers. When the application is fully developed, an event
triggers the code to execute: the triggered event activates the
servers that run it. These servers, whether Linux or otherwise, are
managed entirely by the vendor. Customers have no view of the servers
and do not need to maintain them, which is why this is called
serverless architecture.
Both PaaS and FaaS provide similar functionality, but there is still some
differentiation in terms of scalability and cost.
FaaS provides automatic scaling up and down depending on demand.
PaaS also provides scalability, but there the users have to configure the
scaling parameters depending upon the demand.
In FaaS, users pay only for the execution time actually consumed. In
PaaS, users pay for the provisioned resources on a pay-as-you-go basis,
regardless of how much or how little they use them.
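The billing difference can be made concrete with some arithmetic. The sketch below compares a pay-per-execution bill against an always-on instance bill; all prices, memory sizes, and request counts are made-up illustrative numbers, not any provider's actual rates.

```python
# Hypothetical cost comparison: FaaS pay-per-execution vs. an
# always-on PaaS instance. All figures are illustrative assumptions.

def faas_monthly_cost(invocations, avg_duration_s,
                      memory_gb=0.5,
                      price_per_gb_s=0.0000166667,
                      price_per_million_requests=0.20):
    """Bill only for actual execution time plus a small per-request fee."""
    compute = invocations * avg_duration_s * memory_gb * price_per_gb_s
    requests = (invocations / 1_000_000) * price_per_million_requests
    return compute + requests

def paas_monthly_cost(hourly_rate=0.05, hours=730):
    """An always-on instance bills for every hour, busy or idle."""
    return hourly_rate * hours

faas = faas_monthly_cost(invocations=100_000, avg_duration_s=0.2)
paas = paas_monthly_cost()
print(f"FaaS: ${faas:.2f}/month, PaaS: ${paas:.2f}/month")
```

At low, bursty traffic the per-execution model is far cheaper; as sustained traffic grows, the always-on instance eventually becomes the better deal, which matches the "expected vs. unexpected demand" distinction above.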
Advantages of FaaS :
 Highly Scalable: Auto-scaling is done by the provider depending upon
the demand.
 Cost-Effective: Pay only for the executions that actually run.
 Code Simplification: FaaS lets users write and deploy small,
independent functions instead of managing an entire application at
once.
 Maintaining the code is enough; there is no need to worry about the
servers.
 Functions can be written in many programming languages.
The various companies providing Function as a Service include AWS
Lambda, Google Cloud Functions, Azure Functions, IBM Cloud Functions
(built on Apache OpenWhisk), Oracle Fn, and OpenFaaS.
Disadvantages of FaaS :
1. Cold start latency: Since FaaS functions are event-triggered, the first
request to a new function may experience increased latency while the
function container is created and initialized.
2. Limited control over infrastructure: FaaS providers manage the
underlying infrastructure and take care of maintenance and updates,
but this also means that users have less control over the environment
and may not be able to make certain customizations.
3. Security concerns: Users are still responsible for securing their own
data and application code, which can be a significant undertaking.
4. Execution limits: FaaS platforms typically impose limits on execution
duration, memory, and concurrency, so long-running or very heavy
workloads may not fit the model well.
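The event-driven model described above can be sketched as a minimal handler in the style of serverless platforms. The event shape (a `"name"` key) and the return format here are illustrative assumptions, not any specific provider's contract; the point is that the user writes only the function, and the platform invokes it when an event arrives.

```python
# A minimal FaaS-style function: the platform calls `handler` with an
# event payload; there is no server code for the user to manage.
import json

def handler(event, context=None):
    """Triggered by an event, e.g. an HTTP request or a queue message."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Simulating how the platform would trigger the function:
response = handler({"name": "cloud"})
print(response["body"])  # → {"message": "Hello, cloud!"}
```

Note that the function holds no state between invocations; each trigger may land on a freshly created container, which is also where the cold-start latency mentioned above comes from.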
Business Drivers in Cloud Computing
Business Driver :
A business driver is a resource, process, or condition that is vital to
the growth and success of a business. Every business chooses its
own drivers according to its circumstances. Business drivers are the
key inputs that drive a business operationally and financially, and
businesses have been motivated to adopt such drivers to achieve
organizational goals.
Example –
Some common examples of business drivers are the quantity and
price of the products sold, units of production, number of
enterprises, salespeople, etc.
Business Drivers in Cloud Computing :
Business drivers have motivated organizations to adopt cloud
computing to meet and support the requirements of these drivers.
They have also motivated organizations to become providers of
the cloud environment. There are three types of Business Drivers
as follows.
1. Capacity Planning
2. Cost Reduction
3. Organizational Agility
Capacity Planning:
Capacity planning is the process in which an organization
estimates the production capacity needed for its products to cope
with the ever-changing demands of the market. This involves
estimating the storage, infrastructure, hardware and software,
availability of resources, etc. over a future period of time.
There are three major considerations in capacity planning as
follows.
1. Level of Demand
2. Cost of Production
3. Availability of Funds
Capacity Planning Strategies :
Taking these considerations into account, let us look at the
different capacity planning strategies that exist, one by one.
Lead Strategy –
This is a strategy where capacity is added beforehand, in
anticipation of a future increase in demand. This strategy keeps
existing customers satisfied and prevents competitors from luring
them away.
Lag Strategy –
This is a strategy where capacity is added only when it is
required, that is, only when the demand is actually observed
rather than anticipated. This strategy is more conservative, as it
reduces the risk of wastage, but at the same time it can result in
late delivery of goods if not planned carefully.
Match Strategy –
This strategy is where small amounts of capacity are added
gradually in required intervals of time, keeping in mind the
demand and the market potential of the product. This strategy is
said to improve performance in heterogeneous environments and
hybrid clouds.
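The three strategies can be contrasted with a small simulation against a demand forecast. The capacity units, demand numbers, and the fixed increment used for the match strategy are illustrative assumptions.

```python
# Sketch comparing the lead, lag, and match capacity-planning
# strategies against a demand forecast (units are arbitrary).

def plan_capacity(demand, strategy, step=50):
    """Return the capacity held in each period under a given strategy."""
    capacity, plan = 0, []
    for i, d in enumerate(demand):
        if strategy == "lead":
            # Provision ahead of the next period's expected demand.
            future = demand[i + 1] if i + 1 < len(demand) else d
            capacity = max(capacity, future)
        elif strategy == "lag":
            # Add capacity only once demand is actually observed.
            capacity = max(capacity, d)
        elif strategy == "match":
            # Add small fixed increments as demand approaches capacity.
            while capacity < d:
                capacity += step
        plan.append(capacity)
    return plan

demand = [100, 180, 260, 320]
for s in ("lead", "lag", "match"):
    print(s, plan_capacity(demand, s))
```

Running this shows the trade-off: lead always holds spare capacity ahead of demand (lower risk of shortage, higher cost), lag holds exactly the observed demand (cheapest, but late), and match steps upward in small increments between the two.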
Cost Reduction :
Cost reduction is the process by which organizations reduce
unnecessary costs in order to increase their profits in the
business. There is a direct alignment between the cost and the
growth of the company, which is why cost reduction is an
important factor in the organization’s productivity. The maximum
usage requirements should be kept in mind when dealing with the
performance of the organization.
Cost factor –
Two costs should be taken into account as follows.
 The cost of acquiring new infrastructure.
 The cost of its ongoing ownership.
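The two costs above can be put into a simple total-cost-of-ownership comparison. All figures below are hypothetical, chosen only to show the structure of the calculation.

```python
# Illustrative comparison: buying infrastructure (acquisition cost plus
# ongoing ownership cost) vs. renting cloud capacity. All figures are
# made-up assumptions.

def on_prem_tco(acquisition, annual_ownership, years):
    """Up-front acquisition cost plus the ongoing cost of ownership."""
    return acquisition + annual_ownership * years

def cloud_tco(monthly_fee, years):
    """No acquisition cost; only the recurring service fee."""
    return monthly_fee * 12 * years

print("On-prem:", on_prem_tco(acquisition=50_000, annual_ownership=12_000, years=3))
print("Cloud  :", cloud_tco(monthly_fee=1_800, years=3))
```

The comparison depends entirely on the inputs; the point is that cloud adoption converts the large acquisition cost into a recurring fee, which is exactly the cost-reduction driver described here.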
Tools and Techniques for cost reduction –
The following tools and techniques are commonly used to reduce
costs.
 Budgetary Control
 Standard Costing
 Simplification and Variety Reduction
 Planning and Control of Finance
 Cost-Benefit Analysis
 Value Analysis
Organizational Agility :
Organizational agility is the process by which an organization
adapts and evolves in response to sudden changes caused by
internal and external factors. It measures how quickly an
organization gets back on its feet in the face of problems. Agility
requires stability, and for an organization to achieve organizational
agility, it should build a stable foundation. In the IT field, an
organization should respond to business change by scaling its IT
resources; if the infrastructure itself is the problem, re-examining
business needs and prioritizing as per the circumstances should
be the solution.
Principles of Organizational Agility –
The five principles of Organizational Agility are as follows.
1. Frame your problems properly
2. Limit Change
3. Simplify Change
4. Subtract before you Add
5. Verify Outcomes
Scalability and Elasticity in Cloud Computing
Cloud Elasticity: Elasticity refers to the ability of a cloud to automatically
expand or shrink its infrastructural resources in response to sudden rises
and falls in requirements, so that the workload can be managed efficiently.
This elasticity helps to minimize infrastructure costs. It is not applicable
to all kinds of environments; it is helpful only in scenarios where the
resource requirements fluctuate up and down suddenly for a specific time
interval. It is not practical where a persistent resource infrastructure is
required to handle a heavy workload.
Elasticity is vital for mission-critical or business-critical applications,
where any compromise in performance may lead to huge business loss.
Thus, elasticity comes into the picture: additional resources are
provisioned for such applications to meet their performance requirements.
It works in such a way that when the number of client accesses increases,
applications are automatically provisioned with additional computing,
storage, and network resources such as CPU, memory, storage, or
bandwidth, and when there are fewer clients, it automatically releases
those resources as per requirement.
Elasticity in the cloud is a popular feature associated with scale-out
solutions (horizontal scaling), which allows resources to be dynamically
added or removed when required.
It is mostly associated with public cloud resources, which are usually
offered as pay-per-use or pay-as-you-go services.
Elasticity is the ability to grow or shrink infrastructure resources (such
as compute, storage, or network) dynamically, as needed, to adapt to
workload changes in the applications in an autonomic manner.
It maximizes resource utilization, which results in savings in
infrastructure costs overall.
Depending on the environment, elasticity is applied to resources in the
infrastructure, not limited to hardware, software, network, QoS, and
other policies.
Elasticity depends entirely on the environment, as in some cases it may
become a negative trait: certain applications require guaranteed
performance, which elastic provisioning may not ensure.
It is most commonly used in pay-per-use public cloud services, where IT
managers are willing to pay only for the duration for which they consumed
the resources.
Example: Consider an online shopping site whose transaction workload
increases during a festive season like Christmas. For this specific period
of time, the resources need to be spiked up. To handle this kind of
situation, we can go for a Cloud Elasticity service rather than Cloud
Scalability. As soon as the season is over, the extra deployed resources
can be released.
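The festive-season example above can be sketched as a toy autoscaling rule that adds instances during a spike and releases them afterwards. The thresholds, per-instance capacity, and bounds are assumed values, not a real cloud provider's policy.

```python
# Sketch of elasticity: scale out with load, then scale back in,
# within fixed bounds. All parameters are illustrative assumptions.
import math

def desired_instances(requests_per_sec, capacity_per_instance=100,
                      min_instances=2, max_instances=20):
    """Return how many instances are needed for the current load."""
    needed = math.ceil(requests_per_sec / capacity_per_instance)
    return max(min_instances, min(max_instances, needed))

# Traffic before, during, and after the festive season:
for load in (150, 1200, 150):
    print(f"{load} req/s -> {desired_instances(load)} instances")
```

The scale-out and the later scale-in both happen automatically from the same rule, which is what distinguishes elasticity from a one-time, manually requested capacity increase.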
Cloud Scalability: Cloud scalability is used to handle the growing
workload where good performance is also needed to work efficiently with
software or applications. Scalability is commonly used where the
persistent deployment of resources is required to handle the workload
statically.
Example: Consider that you are the owner of a company whose database
size was small in its early days, but as time passed your business grew
and the size of your database increased as well. In this case, you just
need to request that your cloud service vendor scale up your database
capacity to handle the heavier workload.
This is quite different from what you read above in Cloud Elasticity.
Scalability is used to fulfill the static needs of the organization, while
elasticity is used to fulfill its dynamic needs. Scalability is a similar kind
of cloud service in which customers pay per use. So, in conclusion, we
can say that scalability is useful where the workload remains high and
increases steadily.
Types of Scalability:
1. Vertical Scalability (Scale-up) –
In this type of scalability, we increase the power of the existing
resources in the working environment.
2. Horizontal Scalability (Scale-out) –
In this kind of scaling, resources are added alongside the existing
ones, in a horizontal row.
3. Diagonal Scalability –
It is a mixture of both horizontal and vertical scalability, where the
resources are added both vertically and horizontally.
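The vertical/horizontal distinction can be shown on a toy cluster model. The node counts and CPU sizes below are illustrative assumptions.

```python
# Sketch contrasting vertical (scale-up) and horizontal (scale-out)
# scaling on a toy cluster model. Sizes are illustrative assumptions.

class Cluster:
    def __init__(self, nodes=1, cpu_per_node=4):
        self.nodes = nodes
        self.cpu_per_node = cpu_per_node

    def scale_up(self, extra_cpu):
        """Vertical: make each existing node more powerful."""
        self.cpu_per_node += extra_cpu

    def scale_out(self, extra_nodes):
        """Horizontal: add more nodes of the same size."""
        self.nodes += extra_nodes

    @property
    def total_cpu(self):
        return self.nodes * self.cpu_per_node

c = Cluster()
c.scale_up(4)    # vertical:   1 node  x 8 CPUs
c.scale_out(2)   # horizontal: 3 nodes x 8 CPUs
# Diagonal scaling is simply applying both, as done above.
print(c.nodes, "nodes x", c.cpu_per_node, "CPUs =", c.total_cpu, "total CPUs")
```

Applying both operations together, as in the last two calls, is exactly the diagonal scaling described above.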
Difference Between Cloud Elasticity and Scalability :
1. Cloud Elasticity is used just to meet a sudden rise and fall in the
workload for a short period of time, whereas Cloud Scalability is used
to meet a static increase in the workload.
2. Elasticity is used to meet dynamic changes, where the resource needs
can increase or decrease, whereas Scalability is always used to
address a growing workload in an organization.
3. Elasticity is commonly used by small companies whose workload and
demand increase only for a specific period of time, whereas Scalability
is used by giant companies whose customer base grows persistently,
in order to carry out operations efficiently.
4. Elasticity is short-term planning, adopted just to deal with an
unexpected increase in demand or seasonal demands, whereas
Scalability is long-term planning, adopted to deal with an expected
increase in demand.