
Trends in Cloud Computing

Many cloud service providers, such as Google, Microsoft, IBM, and Amazon, are
working on these trends so that more cost-effective, highly efficient services
can be provided to users. The top 10 cloud computing trends are described
below.

1. AI and ML
Artificial Intelligence (AI) and Machine Learning (ML) are among the
technologies most closely tied to cloud computing. Because they require high
computational power and large amounts of storage for data collection and
training, the cloud makes them far more cost-effective to adopt. Major trends
expected to grow in this sector in the coming years are self-automation,
self-learning, personalized cloud services, and stronger data security and
privacy. Many cloud providers, such as Amazon, Google, and IBM, are investing
heavily in AI and ML. Amazon's AWS DeepLens camera and Google Lens are two
examples of their ML-based products.

2. Data Security
When it comes to data security, no business or organization wants to
compromise; protecting the organization's data is a top priority. Threats such
as data leaks, data deletion, and unauthorized amendments to data need to be
minimized. Certain steps can be taken to limit losses and ensure strong data
security: data breaches can be reduced with encryption and authentication, and
data loss can be limited through backups, regular reviews of privacy policies,
and data recovery systems. Security testing should be performed thoroughly to
detect loopholes and apply patches, and strong security measures should
protect data both in storage and in transit. Cloud service providers secure
data with many security protocols and data encryption algorithms.
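As a concrete illustration of the authentication point above, the short Python sketch below uses an HMAC tag (from the standard library's `hmac` module) so that unauthorized amendments to a stored record can be detected. The key handling is deliberately simplified; in practice the key would come from a key management service, not be generated in the application.

```python
import hashlib
import hmac
import os

# Illustrative key only; a real system would fetch this from a KMS.
SECRET_KEY = os.urandom(32)

def sign_record(payload: bytes, key: bytes) -> str:
    """Produce an HMAC-SHA256 tag so unauthorized amendments are detectable."""
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_record(payload: bytes, tag: str, key: bytes) -> bool:
    """Recompute the tag and compare in constant time."""
    expected = sign_record(payload, key)
    return hmac.compare_digest(expected, tag)

record = b'{"account": "1234", "balance": 100}'
tag = sign_record(record, SECRET_KEY)
assert verify_record(record, tag, SECRET_KEY)        # untouched data passes
tampered = b'{"account": "1234", "balance": 999}'
assert not verify_record(tampered, tag, SECRET_KEY)  # amendment is detected
```

Note that an HMAC provides integrity and authenticity, not confidentiality; encryption would be layered on top of this for data at rest and in transit.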

3. Multi and Hybrid Cloud Deployment


The use of multi-cloud and hybrid solutions is increasing. Many organizations,
such as banks and insurance companies, use hybrid cloud services that combine
private and public clouds to store their data.
Businesses now divide their workloads among multiple cloud service providers
in order to retain control over their data and resources and to exploit the
strengths of each provider. Multi-cloud reduces potential risks and single
points of failure while improving cost-effectiveness. With multi-cloud, you
can choose the particular service of a particular provider that best meets
your requirements instead of deploying your entire application on one cloud.
This also encourages cloud service providers to introduce new services.

4. Low Code and No Code Cloud Solutions


Gone are the days when users needed deep technical knowledge and hundreds of
lines of code to create applications that solve real-world problems. With
low-code and no-code cloud solutions, businesses can build applications and
make use of AI and its subdomains without writing much code, if any. These
solutions support the development of websites, apps, and services without
specialized technical knowledge, reducing the time and cost involved,
speeding up product development, and resulting in fewer errors. Tools such as
Figma and Zoho enable users to design and develop websites, apps, and
services without managing computing infrastructure or writing code.

5. Edge computing
Edge computing moves data storage, processing, and analytics geographically
closer to the source; in other words, computation and storage are brought
nearer to the sensors and devices that generate the data. It provides many
benefits, such as reduced latency, enhanced efficiency, increased privacy and
security, and high data transmission rates, and it enables real-time
processing of time-sensitive data. As 5G adoption grows, fast processing with
low latency becomes easier to achieve. Many telecom and IT organizations are
also partnering, contributing to the rise of edge computing. With the growth
of IoT devices, edge computing will play a major role in providing real-time
data and data analysis.
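The local-processing idea can be sketched in a few lines: a hypothetical edge node summarizes raw sensor readings and forwards only the summary and any anomalies upstream, rather than shipping every reading to the cloud. The threshold and field names here are illustrative.

```python
def aggregate_at_edge(readings, threshold=50.0):
    """Summarize raw sensor readings locally. Only this compact summary
    (count, mean, and readings above the anomaly threshold) would be sent
    to the cloud, cutting bandwidth and latency."""
    if not readings:
        return {"count": 0, "mean": None, "anomalies": []}
    anomalies = [r for r in readings if r > threshold]
    return {
        "count": len(readings),
        "mean": sum(readings) / len(readings),
        "anomalies": anomalies,
    }

# Four raw temperature readings become one small upstream payload.
summary = aggregate_at_edge([21.5, 22.0, 98.6, 21.8])
```

The design choice is the essence of edge computing: the expensive link to the data center carries a digest, while the raw stream stays near its source.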
6. IoT
The Internet of Things (IoT) is a trend that is growing in popularity day by
day. IoT involves many sensors that generate huge amounts of data, which are
stored on cloud servers. It makes use of sensors and actuators and analyzes
the collected data to yield results that support business decisions. IoT
involves connectivity among computers, networks, and servers, and it can
collect data remotely and communicate with devices.
IoT collects data from various sensors and devices and acts as an
intermediary between remote systems and smart device management. Smart
connectivity plays a major role in making IoT a trend in cloud computing.

7. Kubernetes and Docker


Kubernetes is an open-source orchestration platform that automates the
scaling, management, and deployment of applications, providing automation to
cloud network users. Organizations can choose a Kubernetes distribution based
on their requirements.
Docker is a platform on which developers can package applications and deploy
them anywhere in the form of containers.
Kubernetes and Docker are among the trending and evolving technologies in
cloud computing. These open-source platforms manage services and workloads
from a single location and bring scalability and efficiency to large-scale
deployments. As the use of cloud computing services increases, Kubernetes and
Docker play a major role in managing the cloud deployments of users and
organizations.
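As a minimal illustration of what an orchestrator consumes, the sketch below builds a Kubernetes Deployment manifest as a plain Python dict. Kubernetes accepts JSON as well as YAML, so the printed output could in principle be piped to `kubectl apply -f -`; the name and image used here are placeholders, and the fields follow the standard `apps/v1` schema.

```python
import json

def make_deployment(name: str, image: str, replicas: int) -> dict:
    """Build a minimal Kubernetes Deployment manifest as a dict,
    describing the desired state Kubernetes will maintain: `replicas`
    copies of a single-container pod running `image`."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

manifest = make_deployment("web", "nginx:1.25", replicas=3)
print(json.dumps(manifest, indent=2))
```

The manifest is declarative: rather than issuing imperative scaling commands, you state the desired replica count and Kubernetes converges the cluster toward it.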

8. Serverless architecture/computing
Serverless computing is a model in which the cloud provider supplies backend
services on demand. Developers do not need to manage servers while running
their code; code execution is handled by the cloud service provider. Cloud
users pay on a pay-as-you-go basis, meaning they are charged only when their
code runs rather than for a fixed server. There is no need to purchase
servers, since a third party manages the infrastructure. This reduces
infrastructure costs and enhances scalability.

Serverless workloads scale automatically with demand. Serverless architecture
offers many advantages, such as no need for system administration, low cost
and liability, easy management of operations, and an enhanced user
experience.
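A minimal sketch of the programming model, using an AWS-Lambda-style `handler(event, context)` signature: the provider invokes the function on each request and bills only for the execution time, and the same function can be exercised locally exactly as a provider would call it. The payload fields are illustrative.

```python
import json

def handler(event, context=None):
    """Serverless entry point: the provider calls this on every request.
    `event` carries the request payload; there is no server management
    anywhere in user code."""
    name = (event or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Local invocation to sanity-check the function before deploying:
response = handler({"name": "cloud"})
```

Because the unit of deployment is a single function, the platform can spin instances up and down per request, which is what makes the pay-per-execution billing model possible.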
9. DevSecOps
Cloud computing provides many benefits to its customers in managing their
data, but users also sometimes face security issues, including network
intrusion, Denial of Service (DoS) attacks, virtualization vulnerabilities,
and unauthorized use of data. These risks can be minimized with the help of
DevSecOps.

DevSecOps is the integration of security into the ongoing development
process. It embeds security checks throughout the workflow to ensure secure
task automation. Many cloud service providers offer tools and services that
help businesses apply DevSecOps methods, providing the security required to
deliver a safe system to users.

10. Disaster recovery and backup


Disaster recovery plays a crucial role in restoring critical data and systems
after any kind of disaster. Many organizations have suffered huge losses of
unsaved data due to server crashes. With the help of cloud computing, backups
of critical business data can be stored so that operations recover quickly
from disruptions such as data loss, power outages, natural disasters,
cyberattacks, or hardware failures. For any organization, a strong
cloud-based disaster recovery and backup plan can prevent major losses. Many
enterprises keep electronic records and files and have those documents
uploaded to an external cloud server automatically.
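A toy backup routine along these lines, using only the Python standard library: a file is copied to a backup location and a SHA-256 digest is recorded so a later restore can be verified bit for bit. A real cloud backup would upload to object storage rather than a local directory, but the verify-by-digest pattern is the same.

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def backup_file(src: Path, backup_dir: Path) -> str:
    """Copy a file into the backup directory and return its SHA-256
    digest, so a future restore can be checked for corruption."""
    backup_dir.mkdir(parents=True, exist_ok=True)
    dest = backup_dir / src.name
    shutil.copy2(src, dest)  # copy2 preserves timestamps as well
    return hashlib.sha256(dest.read_bytes()).hexdigest()

def verify_backup(src: Path, backup_dir: Path, expected_digest: str) -> bool:
    """Recompute the backup copy's digest and compare to the recorded one."""
    copy = backup_dir / src.name
    return hashlib.sha256(copy.read_bytes()).hexdigest() == expected_digest

# Demonstration in a temporary directory:
work = Path(tempfile.mkdtemp())
original = work / "records.txt"
original.write_text("critical business data")
digest = backup_file(original, work / "backups")
assert verify_backup(original, work / "backups", digest)
```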
Cloud Architecture
Cloud computing technology is used by both small and large organizations to
store information in the cloud and access it from anywhere, at any time, over
an internet connection.

Cloud computing architecture is a combination of service-oriented
architecture and event-driven architecture.

Cloud computing architecture is divided into the following two parts -

o Front End
o Back End

The diagram below shows the architecture of cloud computing -

Front End

The front end is used by the client. It contains the client-side interfaces
and applications required to access cloud computing platforms. The front end
includes web browsers (such as Chrome, Firefox, and Internet Explorer), thin
and fat clients, tablets, and mobile devices.

Back End

The back end is used by the service provider. It manages all the resources
required to provide cloud computing services. It includes large-scale data
storage, security mechanisms, virtual machines, deployment models, servers,
traffic control mechanisms, etc.

Components of Cloud Computing Architecture

Cloud computing architecture has the following components -

1. Client Infrastructure

Client Infrastructure is a front-end component. It provides a GUI (Graphical
User Interface) for interacting with the cloud.

2. Application

The application may be any software or platform that a client wants to access.

3. Service

The service component manages which type of cloud service you access,
according to the client's requirements.

Cloud computing offers the following three types of services:

i. Software as a Service (SaaS) – Also known as cloud application services.
Most SaaS applications run directly in the web browser, so there is no need
to download and install them. Some important examples of SaaS are given
below.

Example: Google Apps, Salesforce, Dropbox, Slack, HubSpot, Cisco WebEx.

ii. Platform as a Service (PaaS) – Also known as cloud platform services. It
is quite similar to SaaS, but the difference is that PaaS provides a platform
for software creation, whereas SaaS lets us access software over the internet
without needing any platform.

Example: Windows Azure, Force.com, Magento Commerce Cloud, OpenShift.


iii. Infrastructure as a Service (IaaS) – Also known as cloud infrastructure
services. It is responsible for managing application data, middleware, and
runtime environments.

Example: Amazon Web Services (AWS) EC2, Google Compute Engine (GCE), Cisco
Metapod.

4. Runtime Cloud

Runtime Cloud provides the execution and runtime environment for the virtual
machines.

5. Storage

Storage is one of the most important components of cloud computing. It
provides a huge amount of storage capacity in the cloud to store and manage
data.

6. Infrastructure

It provides services at the host, application, and network levels. Cloud
infrastructure includes hardware and software components, such as servers,
storage, network devices, and virtualization software, that are needed to
support the cloud computing model.

7. Management

Management is used to manage back-end components such as the application,
service, runtime cloud, storage, infrastructure, and security, and to
establish coordination between them.

8. Security

Security is a built-in back-end component of cloud computing. It implements
security mechanisms in the back end.

9. Internet

The Internet is the medium through which the front end and back end interact
and communicate with each other.
Pros and Cons of Virtualization:
Pros of Virtualization in Cloud Computing:

Efficient Utilization of Hardware –

With the help of virtualization, hardware is used efficiently by both the
user and the cloud service provider. The user's need for physical hardware
decreases, which lowers cost. From the service provider's point of view,
hardware virtualization reduces the amount of hardware that must be
provisioned for users. Before virtualization, companies and organizations had
to set up their own servers, which required extra space, engineers to monitor
performance, and additional hardware cost. Virtualization removes these
limitations: cloud vendors deliver the equivalent of physical services
without the customer setting up any physical hardware.

Availability Increases with Virtualization –

One of the main benefits of virtualization is that it provides advanced
features that keep virtual instances available at all times. It also allows a
virtual instance to be moved from one virtual server to another, a task that
is tedious and risky in a purely server-based system. Data safety is
maintained during migration from one server to another, and information can
be accessed from any location, at any time, from any device.

Disaster Recovery Is Efficient and Easy –

With the help of virtualization, data recovery, backup, and duplication
become very easy. In the traditional approach, if a server system is damaged
by a disaster, the chances of data recovery are low. With virtualization
tools, real-time data backup, recovery, and mirroring become straightforward
tasks that aim for near-zero data loss.

Virtualization Saves Energy –

Virtualization helps save energy: moving from physical servers to virtual
servers reduces the number of servers, which lowers monthly power and cooling
costs and saves money. Reduced cooling requirements also mean less carbon
output from devices, contributing to a cleaner, less polluted environment.

Quick and Easy Setup –

Setting up physical systems and servers the traditional way is very
time-consuming: hardware must be purchased in bulk, shipped, set up, and then
loaded with the required software, all of which takes considerable time. With
virtualization, the entire process is completed in far less time, resulting
in a productive setup.

Cloud Migration Becomes Easy –

Many companies that have already invested heavily in servers hesitate to
shift to the cloud. Yet moving to cloud services is often more cost-effective:
the data on their servers can be migrated to cloud servers fairly easily,
saving on maintenance charges, power consumption, cooling costs, server
maintenance engineers, and so on.

Cons of Virtualization:

Data Can Be at Risk –

Running virtual instances on shared resources means our data is hosted on
third-party infrastructure, which leaves it in a vulnerable position. An
attacker may target the data or attempt unauthorized access. Without a
security solution, the data is under constant threat.

Learning New Infrastructure –

As organizations shift from servers to the cloud, they need staff who can
work with the cloud comfortably. They must either hire new IT staff with the
relevant skills or train existing staff, which increases the company's costs.
Examples of Virtualization Technologies
Xen – Virtualization

A Xen-based system is controlled by the Xen hypervisor, which executes in the
most privileged mode and mediates the guest operating systems' access to the
underlying hardware. Guest operating systems run in domains, which represent
virtual machine instances.
In addition, special control software, which has privileged access to the
host and manages all other guest OSes, runs in a special domain called
Domain 0. This is the only domain loaded once the virtual machine manager has
fully booted; it hosts an HTTP server that serves requests for virtual
machine creation, configuration, and termination. This component constitutes
the primary instance of a shared virtual machine manager (VMM), a necessary
part of a cloud computing system delivering an Infrastructure-as-a-Service
(IaaS) solution.
The x86 architecture provides four privilege levels, called rings:

Ring 0,
Ring 1,
Ring 2,
Ring 3

Here, Ring 0 is the most privileged level and Ring 3 the least privileged.
Almost all commonly used operating systems, except OS/2, use only two levels:
Ring 0 for kernel code and Ring 3 for user applications and non-privileged OS
programs. This gives Xen the opportunity to implement paravirtualization,
which lets Xen leave the Application Binary Interface (ABI) unchanged and so
allows a simple transition to Xen-virtualized solutions from the
application's perspective.

Due to the structure of the x86 instruction set, some instructions allow code
executing in Ring 3 to switch to Ring 0 (kernel mode). Such operations are
performed at the hardware level, so within a virtualized environment they
lead to a TRAP or a silent fault, preventing the normal operation of the
guest OS, which is now running in Ring 1.

This condition is triggered by a subset of system calls. To eliminate it, the
operating system implementation must be modified and all sensitive system
calls re-implemented with hypercalls. Hypercalls are special calls exposed by
the virtual machine (VM) interface of Xen; through them, the Xen hypervisor
catches the execution of all sensitive instructions, manages them, and
returns control to the guest OS by means of a supplied handler.
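The trap-and-handle cycle described above can be modeled with a toy dispatcher: a guest instruction stream runs "deprivileged", sensitive instructions trap to a hypervisor object that emulates them, and control then returns to the guest. The instruction names and state fields here are illustrative, not real x86 opcodes or Xen hypercalls.

```python
# Conceptual model only: which instructions count as "sensitive" is
# defined by the (hypothetical) architecture being simulated.
SENSITIVE = {"write_cr3", "hlt", "out"}

class TinyHypervisor:
    def __init__(self):
        self.log = []  # records every instruction we had to emulate

    def hypercall(self, insn, guest_state):
        """Emulate a sensitive instruction on the guest's behalf, then
        hand the (possibly updated) state back to the guest."""
        self.log.append(insn)
        if insn == "hlt":
            guest_state["halted"] = True
        return guest_state

def run_guest(program, hypervisor):
    """Run a guest instruction stream; sensitive instructions 'trap'
    to the hypervisor instead of executing directly."""
    state = {"halted": False}
    for insn in program:
        if insn in SENSITIVE:
            state = hypervisor.hypercall(insn, state)  # trap to the VMM
        # non-sensitive instructions would execute directly on hardware
        if state["halted"]:
            break
    return state

hv = TinyHypervisor()
final = run_guest(["mov", "add", "write_cr3", "hlt"], hv)
```

In real paravirtualization the guest kernel is modified to issue these hypercalls itself rather than relying on hardware traps, which is exactly the codebase change the next paragraph discusses.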

Paravirtualization demands that the OS codebase be changed, and hence not all
operating systems can serve as guests in a Xen-based environment. This
limitation applies where hardware-assisted virtualization is not available;
hardware assistance allows the guest OS to run unmodified in Ring 0 while the
hypervisor runs at an even more privileged level. Xen therefore shows some
limitations in terms of legacy hardware and legacy operating systems.
Such legacy OSes cannot be modified to run safely in Ring 1, because their
codebase is not accessible, and at the same time the underlying hardware
offers no support for executing them in a mode more privileged than Ring 0.
Open-source OSes like Linux can easily be modified, since their code is
openly available, and Xen delivers full support for their virtualization,
while Windows components are generally not compatible with Xen unless
hardware-assisted virtualization is available. As new OS releases are
designed to be virtualization-friendly and new hardware supports x86
virtualization, this problem is gradually being resolved.

Pros:
a) XenServer is built on the open-source Xen hypervisor and uses a
combination of hardware-based virtualization and paravirtualization. This
tightly coupled collaboration between the operating system and the
virtualized platform enables a lighter, more flexible hypervisor that
delivers its functionality in an optimized manner.
b) Xen supports efficient balancing of large workloads across CPU, memory,
disk I/O, and network I/O. It offers two modes to handle such workloads:
performance enhancement and data-density handling.
c) It also ships with a special storage feature called Citrix StorageLink,
which allows a system administrator to use the features of storage arrays
from major vendors such as HP, NetApp, and Dell EqualLogic.
d) It also supports multiple processors, live migration from one machine to
another, physical-to-virtual and virtual-to-virtual conversion tools,
centralized multi-server management, and real-time performance monitoring on
Windows and Linux.
Cons:

a) Xen is more reliable on Linux than on Windows.
b) Xen relies on third-party components to manage resources such as drivers,
storage, backup, recovery, and fault tolerance.
c) A Xen deployment can become burdensome on your Linux kernel system as time
passes.
d) Xen may sometimes increase the load on your resources through high I/O
rates and may cause starvation of other VMs.

VMware – Full Virtualization

In full virtualization, the underlying hardware is replicated and made
available to the guest operating system, which runs unaware of this
abstraction and requires no modification. VMware's technology is based on
this key concept of full virtualization. VMware implements full
virtualization either in the desktop environment, through a type-II
hypervisor, or in the server environment, through a type-I hypervisor. In
both cases, full virtualization is achieved through direct execution of
non-sensitive instructions and binary translation of sensitive instructions
(or hardware traps), thus enabling the virtualization of architectures such
as x86.
Full Virtualization and Binary Translation –
VMware is widely used because it virtualizes x86 architectures, running
unmodified guests on top of its hypervisors. With the introduction of
hardware-assisted virtualization, full virtualization became achievable with
hardware support; before that, unmodified x86 guest operating systems could
run in a virtualized environment only by means of dynamic binary translation.

(Figure: Full Virtualization Reference Model)
The major benefit of this approach is that guests can run unmodified in a
virtualized environment, an important feature for operating systems whose
source code is not available. Binary translation also makes full
virtualization portable. However, translating instructions at runtime
introduces an additional overhead that does not exist in other methods such
as paravirtualization or hardware-assisted virtualization. On the other hand,
binary translation is applied only to a subset of the instruction set, while
the rest is managed through direct execution on the underlying hardware,
which somewhat reduces the performance impact of binary translation.
Advantages of Binary Translation –
• This kind of virtualization delivers the best isolation and security for
virtual machines.
• Numerous truly isolated guest OSes can execute concurrently on the same
hardware.
• It is the only implementation that requires no hardware or operating system
assistance to virtualize sensitive and privileged instructions.

Disadvantages of Binary Translation –

• It is time-consuming at run time.
• It incurs a large performance overhead.
• It employs a code cache to store the most used translated instructions to
improve performance, but this increases memory utilization and hardware
cost.
• The performance of full virtualization on the x86 architecture is typically
80 to 95 percent that of the host machine.
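The code-cache point above can be illustrated with a toy translator: a sensitive instruction is rewritten once, cached, and reused on later passes, while everything else "executes directly". This is a conceptual sketch of the trade-off (translation work versus cache memory), not VMware's actual mechanism, and the instruction names are illustrative.

```python
# Hypothetical set of sensitive instructions that cannot run directly
# in a deprivileged guest and must be rewritten.
SENSITIVE = {"cli", "sti", "popf"}

class Translator:
    def __init__(self):
        self.cache = {}        # code cache of already-translated forms
        self.translations = 0  # counts actual (slow) translation work

    def translate(self, insn):
        """Rewrite a sensitive instruction into a safe emulated form,
        consulting the code cache first so each form is built only once."""
        if insn not in self.cache:
            self.translations += 1
            self.cache[insn] = f"emulate_{insn}"
        return self.cache[insn]

    def execute(self, program):
        """Run a guest instruction stream: sensitive instructions go
        through translation, the rest execute as-is."""
        out = []
        for insn in program:
            if insn in SENSITIVE:
                out.append(self.translate(insn))  # translated path
            else:
                out.append(insn)                  # direct execution
        return out

t = Translator()
trace = t.execute(["mov", "cli", "add", "cli", "popf"])
# "cli" appears twice but is translated only once; the second hit
# comes from the cache, trading memory for repeated translation cost.
```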
