Cloud Computing

Cloud computing is a technology model that provides on-demand access to a shared pool of
computing resources, such as servers, storage, and applications, over the internet. Users can access
and use these resources as needed, paying only for what they consume, without the need for
upfront infrastructure investment.

The NIST Cloud Computing reference architecture defines five major actors:

Cloud Provider

Cloud Carrier

Cloud Broker

Cloud Auditor

Cloud Consumer

Each actor is an entity (a person or an organization) that participates in a transaction or
process and/or performs tasks in cloud computing. The five actors defined in the NIST
cloud computing reference architecture are described below:

1. Cloud Service Providers:

A person or organization that delivers cloud services to cloud consumers or end-users. It offers
the various components of cloud computing, and cloud consumers purchase a growing
variety of cloud services from such providers.
There are various categories of cloud-based services mentioned below:

IaaS Providers: In this model, the cloud service provider offers infrastructure components
that would otherwise exist in an on-premises data center. These components include servers,
networking, and storage, as well as the virtualization layer.

SaaS Providers: In Software as a Service (SaaS), vendors provide a wide range of business
applications, such as human resources management (HRM) and customer relationship
management (CRM) software, all of which the SaaS vendor hosts and delivers as a service
over the internet.

PaaS Providers: In Platform as a Service (PaaS), vendors offer cloud infrastructure together
with services that applications can use, mostly for software development. PaaS providers
offer more services than IaaS providers: on top of the underlying infrastructure, they
provide the operating system and middleware along with the application stack.

2. Cloud Carrier:

The intermediary that provides connectivity and transport of cloud services between cloud
service providers and cloud consumers. It allows access to cloud services through the
internet, telecommunication networks, and other access devices. Network and telecom
carriers or a transport agent can provide this distribution. A consistent level of service is
guaranteed when cloud providers set up Service Level Agreements (SLAs) with a cloud carrier. In
general, the carrier may be required to offer dedicated and encrypted connections.
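An SLA between a provider and a carrier is usually stated as an availability percentage. As a rough illustration (the targets below are assumed example figures, not any real carrier's terms), the percentage can be converted into allowed downtime per period:

```python
# Convert an SLA availability target into allowed downtime per period.
# The 99.9% / 99.99% figures are illustrative example targets only.

def allowed_downtime_minutes(availability_pct: float,
                             period_minutes: int = 30 * 24 * 60) -> float:
    """Minutes of downtime permitted in the period (default: a 30-day month)."""
    return period_minutes * (1 - availability_pct / 100)

print(round(allowed_downtime_minutes(99.9), 1))   # ~43.2 minutes per month
print(round(allowed_downtime_minutes(99.99), 1))  # ~4.3 minutes per month
```

Each extra "nine" in the target cuts the downtime budget by a factor of ten, which is why stricter SLAs typically require the dedicated connections mentioned above.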
3. Cloud Broker:

An organization or unit that manages the performance, use, and delivery of cloud services
by enhancing specific capabilities, and that offers value-added services to cloud consumers. It
combines and integrates various services into one or more new services, and provides
service arbitrage, which allows flexible and opportunistic choices.
There are three major services offered by a cloud broker:

Service Intermediation.
Service Aggregation.
Service Arbitrage.

Service intermediation is like having a helpful guide in the middle, connecting you to
different services. It makes sure information flows smoothly between different parts.

Service aggregation is like putting together different services in one place, making it
convenient for you. It's like having a menu with all your favourite dishes in one restaurant.

Service arbitrage is like finding the best deal among different services. It's like having a savvy
shopper who compares options to get you the most value for your needs.
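Service aggregation can be sketched in a few lines: the broker exposes one consumer-facing operation that fans out to several underlying services. `StorageService`, `BackupService`, and `CloudBroker` below are hypothetical stand-ins, not real provider APIs:

```python
# Sketch of service aggregation by a cloud broker. All class names
# are illustrative placeholders, not real cloud provider interfaces.

class StorageService:
    def save(self, key, data):
        return f"stored {key}"

class BackupService:
    def replicate(self, key, data):
        return f"replicated {key}"

class CloudBroker:
    """Aggregates storage and backup into a single value-added call."""
    def __init__(self):
        self.storage = StorageService()
        self.backup = BackupService()

    def save_with_backup(self, key, data):
        # One consumer-facing operation fans out to two provider services.
        return [self.storage.save(key, data),
                self.backup.replicate(key, data)]

broker = CloudBroker()
print(broker.save_with_backup("report.pdf", b"..."))
```

The consumer only ever talks to the broker; the combination of the two underlying services is the broker's added value.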

4. Cloud Auditor:

An entity that can conduct an independent assessment of cloud services, security,
performance, and information system operations of a cloud implementation, verifying that
the controls are implemented correctly, operating as planned, and producing the desired
outcome with respect to meeting the security requirements for the system.

There are three major roles of Cloud Auditor which are mentioned below:

Security Audit.
Privacy Impact Audit.
Performance Audit.

- A security audit is like a digital checkup to make sure computer systems are safe. It looks
for and fixes any weaknesses that could be exploited by unauthorized people.
- A privacy impact audit is like hiring someone to check whether personal information is
handled safely and respectfully, ensuring that sensitive data is protected.
- A performance audit is like a review to see how well computer systems are working. It
checks efficiency to make sure everything runs smoothly.
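A toy version of the security-audit role can be sketched as a rule-based scan over a deployment configuration. The config keys and rules below are invented for illustration; a real audit checks far more:

```python
# Toy security-audit pass: scan a hypothetical deployment config for
# common weaknesses an auditor would flag. Keys and rules are invented.

def security_audit(config: dict) -> list:
    findings = []
    if not config.get("encryption_at_rest", False):
        findings.append("data not encrypted at rest")
    if 23 in config.get("open_ports", []):
        findings.append("telnet port 23 open")
    if config.get("password_min_length", 0) < 12:
        findings.append("weak password policy")
    return findings

cfg = {"encryption_at_rest": True, "open_ports": [22, 23],
       "password_min_length": 8}
print(security_audit(cfg))  # ['telnet port 23 open', 'weak password policy']
```

An empty findings list corresponds to the audit confirming that controls are "operating as planned" for the rules checked.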

5. Cloud Consumer:

A cloud consumer is an end-user who uses services from Cloud Service Providers (CSPs) by
setting up contracts and paying for measured usage.
A cloud consumer is someone who uses services from companies that offer resources over the
internet, such as storing files or running software. They decide what services they need, make
agreements, and pay for what they use. To make sure things are safe and work well, they
check agreements called SLAs that cover things like quality and security. In a large market, they
can choose the company with the best prices and terms for what they want.
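The "pay for measured usage" idea can be made concrete with a small billing calculation. The unit rates below are made-up example prices, not any provider's tariff:

```python
# Pay-per-use billing sketch: the consumer is charged only for metered
# usage. Rates are invented example prices, not a real provider's tariff.

RATES = {"compute_hours": 0.05, "storage_gb_month": 0.02, "egress_gb": 0.09}

def monthly_bill(usage: dict) -> float:
    """Sum measured quantity * unit rate for each metered service."""
    return round(sum(RATES[item] * qty for item, qty in usage.items()), 2)

usage = {"compute_hours": 720, "storage_gb_month": 100, "egress_gb": 50}
print(monthly_bill(usage))  # 42.5
```

Nothing is billed up front; the total follows entirely from the meter readings, which is the defining contrast with owning infrastructure.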

(Examples 1, 2, and 3 are in the book.)

DESIGN CHALLENGES IN CLOUD COMPUTING


1. **Scalability:**
- *Challenge:* Designing a system that can handle a sudden increase in users or data
without slowing down.
- *Example:* An e-commerce platform must accommodate a surge in traffic during a
holiday sale without crashing or slowing down response times.
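One standard answer to this challenge is horizontal autoscaling: derive a replica count from the current request rate. The thresholds and capacity figure below are illustrative assumptions:

```python
# Minimal autoscaling rule: choose a replica count from the request
# rate, so a traffic surge adds capacity instead of overloading fixed
# servers. Capacity and bounds are illustrative assumptions.

import math

def replicas_needed(requests_per_sec: float,
                    capacity_per_replica: float = 100,
                    min_replicas: int = 2,
                    max_replicas: int = 50) -> int:
    needed = math.ceil(requests_per_sec / capacity_per_replica)
    # Clamp between a safety floor and a cost ceiling.
    return max(min_replicas, min(max_replicas, needed))

print(replicas_needed(150))    # 2  -> normal load, floor applies
print(replicas_needed(4200))   # 42 -> holiday-sale surge
print(replicas_needed(90000))  # 50 -> capped at the maximum
```

Real autoscalers (cloud-managed or Kubernetes-style) add smoothing and cooldowns on top of this basic proportional rule.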

2. **Data Security and Privacy:**

- *Challenge:* Ensuring that sensitive data, such as customer information or financial
records, is protected from unauthorized access.
- *Example:* A healthcare application storing patient records in the cloud must adhere to
strict privacy regulations like HIPAA to safeguard patient confidentiality.

3. **Data Migration:**

- *Challenge:* Moving large amounts of data seamlessly between on-premises systems and
the cloud.
- *Example:* A company transitioning from in-house servers to a cloud-based storage
solution must migrate its existing data without disrupting business operations.

4. **Interoperability:**

- *Challenge:* Ensuring that different cloud services and platforms work well together and
with existing on-premises systems.
- *Example:* Integrating a customer relationship management (CRM) system in the cloud
with an on-premises enterprise resource planning (ERP) system for seamless business
operations.

5. **Cost Management:**

- *Challenge:* Optimizing cloud resource usage to minimize costs without sacrificing
performance.
- *Example:* A startup using cloud services for its web application must carefully manage
resource allocation to stay within budget while accommodating user growth.
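A common cost-management tactic is rightsizing: flagging resources whose average utilization is low enough that a smaller, cheaper size would do. The fleet data and threshold below are invented for illustration:

```python
# Cost-management sketch: flag resources running far below capacity
# as candidates for a cheaper size. Fleet data is invented.

def rightsizing_candidates(resources, utilization_threshold=0.3):
    """Return names of resources below the utilization threshold."""
    return [r["name"] for r in resources
            if r["avg_util"] < utilization_threshold]

fleet = [
    {"name": "web-1",   "avg_util": 0.72},
    {"name": "batch-1", "avg_util": 0.12},  # mostly idle -> downsize
    {"name": "db-1",    "avg_util": 0.55},
]
print(rightsizing_candidates(fleet))  # ['batch-1']
```

Run periodically against real utilization metrics, a report like this is how a budget-constrained startup keeps pay-per-use spend proportional to actual need.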

6. **Reliability and Availability:**


- *Challenge:* Designing for continuous availability and minimizing downtime.
- *Example:* An online banking application must ensure that customers can access their
accounts and perform transactions 24/7 without interruptions.

7. **Network Performance:**

- *Challenge:* Overcoming issues related to latency and bandwidth limitations in a
distributed cloud environment.
- *Example:* A video streaming service must deliver content with low latency and high
quality to users worldwide without buffering delays.

8. **Compliance and Legal Considerations:**


- *Challenge:* Adhering to industry-specific regulations and legal requirements in the
design and operation of cloud solutions.
- *Example:* A financial institution moving its operations to the cloud must comply with
regulations like GDPR or SOX to protect customer data and ensure transparency.

9. **Vendor Lock-In:**

- *Challenge:* Avoiding dependence on a single cloud service provider to maintain
flexibility and avoid potential issues with changing providers.
- *Example:* A company using a specific cloud provider for computing services should
design its applications and data storage to be portable, enabling a switch to a different
provider if needed.
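The portability idea in the example can be sketched as coding against a neutral interface so the backing provider is swappable. `ObjectStore` and the two provider classes are hypothetical in-memory stand-ins, not real S3/GCS clients:

```python
# Vendor-lock-in mitigation sketch: the application depends only on an
# abstract interface; providers are interchangeable. Both backends here
# are in-memory stand-ins, not real cloud storage clients.

from abc import ABC, abstractmethod

class ObjectStore(ABC):
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class ProviderAStore(ObjectStore):
    def __init__(self): self._objects = {}
    def put(self, key, data): self._objects[key] = data
    def get(self, key): return self._objects[key]

class ProviderBStore(ObjectStore):
    def __init__(self): self._objects = {}
    def put(self, key, data): self._objects[key] = data
    def get(self, key): return self._objects[key]

def run_app(store: ObjectStore) -> bytes:
    # Application logic sees only the interface, never the provider.
    store.put("config", b"v1")
    return store.get("config")

# Identical behavior on either backend -> switching providers is a
# one-line change at the call site.
assert run_app(ProviderAStore()) == run_app(ProviderBStore()) == b"v1"
```

The cost of this pattern is that you restrict yourself to the feature intersection of the providers; the benefit is the credible exit option the challenge describes.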

10. **Complexity in Cloud Architectures:**

- *Challenge:* Managing the intricacies of multi-tiered and distributed cloud architectures.


- *Example:* A complex microservices-based architecture for an online retail platform
requires effective orchestration and coordination to ensure smooth communication between
services.
https://www.geeksforgeeks.org/an-overview-of-cluster-computing/
https://www.geeksforgeeks.org/what-is-p2p-peer-to-peer-process/
https://www.geeksforgeeks.org/grid-computing/
Cluster computing is a collection of tightly or loosely connected
computers that work together so that they act as a single entity. The
connected computers execute operations all together thus creating
the idea of a single system. The clusters are generally connected
through fast local area networks (LANs).

Types of Cluster computing :


1. High performance (HP) clusters :
HP clusters use computer clusters and supercomputers to solve advanced
computational problems. They are used to perform functions that need
nodes to communicate as they perform their jobs, and are designed to take
advantage of the parallel processing power of several nodes.
2. Load-balancing clusters :
Incoming requests are distributed for resources among several nodes running
similar programs or having similar content. This prevents any single node
from receiving a disproportionate amount of work. This type of distribution is
generally used in a web-hosting environment.
3. High Availability (HA) clusters :
HA clusters are designed to maintain redundant nodes that can act as backup
systems in case any failure occurs. They provide consistent computing services
such as business activities, complex databases, customer services like
e-commerce websites, and network file distribution, and are designed to give
customers uninterrupted data availability.
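The load-balancing cluster type above is easiest to see with round-robin, the simplest distribution policy: each incoming request goes to the next node in turn. Node names are illustrative:

```python
# Round-robin request distribution, the simplest load-balancing-cluster
# policy: rotate through the nodes so none gets a disproportionate share.

from itertools import cycle

nodes = ["node-a", "node-b", "node-c"]
next_node = cycle(nodes)  # endless rotation over the node list

assignments = [next(next_node) for _ in range(6)]
print(assignments)
# ['node-a', 'node-b', 'node-c', 'node-a', 'node-b', 'node-c']
```

Production balancers refine this with health checks and weighting, but the fairness idea is the same.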
Classification of Cluster :

1. Open Cluster :
Every node needs an IP address, and those are accessed only through the
internet or web. This type of cluster raises heightened security concerns.
2. Closed Cluster :
The nodes are hidden behind the gateway node, which provides increased
protection. They need fewer IP addresses and are good for computational
tasks.
Cluster Computing Architecture :
• It is designed with an array of interconnected individual computers
and the computer systems operating collectively as a single
standalone system.
• It is a group of workstations or computers working together as a
single, integrated computing resource connected via high speed
interconnects.
• A node – Either a single or a multiprocessor network having
memory, input and output functions and an operating system.
• Two or more nodes are connected on a single line or every node
might be connected individually through a LAN connection.
[Figure: Cluster Computing Architecture]

Components of a Cluster Computer :


1. Cluster Nodes
2. Cluster Operating System
3. The switch or node interconnect
4. Network switching hardware
Advantages of Cluster Computing :

1. High Performance :
The systems offer better performance than mainframe computer networks.
2. Easy to manage :
Cluster Computing is manageable and easy to implement.
3. Scalable :
Resources can be added to the clusters accordingly.
4. Expandability :
Computer clusters can be expanded easily by adding additional computers to
the network. Cluster computing is capable of combining several additional
resources or the networks to the existing computer system.
5. Availability :
If one node fails, the other nodes remain active and can stand in for the
failed node. This ensures enhanced availability.
6. Flexibility :
It can be upgraded to the superior specification or additional nodes can be
added.
Disadvantages of Cluster Computing :

1. High cost :
Cluster computing is not very cost-effective due to the cost of its hardware
and design.
2. Problem in finding fault :
It is difficult to find which component has a fault.
3. More space is needed :
More infrastructure is required as additional servers are needed to manage
and monitor the cluster.
Applications of Cluster Computing :
• Various complex computational problems can be solved.
• It can be used in the applications of aerodynamics, astrophysics and
in data mining.
• Weather forecasting.
• Image Rendering.
• Various e-commerce applications.
• Earthquake Simulation.
• Petroleum reservoir simulation.

A peer-to-peer network is a simple network of computers. It first
came into existence in the late 1970s. Each computer acts as a
node for file sharing within the formed network. Because every node
acts as a server, there is no central server in the network. This
allows the sharing of a huge amount of data. Tasks are divided
equally amongst the nodes, so each node connected in the network
shares an equal workload. For the network to stop working, all the
nodes need to stop working individually, because each node
works independently.

Types of P2P networks


1. Unstructured P2P networks: In this type of P2P network, each
device is able to make an equal contribution. This network is easy
to build as devices can be connected randomly in the network. But
being unstructured, it becomes difficult to find content. For
example, Napster, Gnutella, etc.
2. Structured P2P networks: These are designed using software that
creates a virtual layer in order to put the nodes into a specific
structure. They are not easy to set up but can give users easy
access to the content. For example, P-Grid, Kademlia, etc.
3. Hybrid P2P networks: These combine the features of both P2P
networks and client-server architecture. An example of such a
network is finding a node using the central server.

P2P Network Architecture

In the P2P network architecture, the computers connect with each other in a
workgroup to share files, internet access, and printers.
• Each computer in the network has the same set of responsibilities
and capabilities.
• Each device in the network serves as both a client and a server.
• The architecture is useful in residential areas, small offices, or
small companies where each computer acts as an independent
workstation and stores the data on its own hard drive.
• Each computer in the network has the ability to share data with
other computers in the network.
• The architecture is usually composed of workgroups of 12 or more
computers.
How Does P2P Network Work?

Let’s understand the working of the peer-to-peer network through an
example. Suppose a user wants to download a file through the peer-to-peer
network; the download will be handled in this way:
• If the peer-to-peer software is not already installed, the user
first has to install the peer-to-peer software on his computer.
• This creates a virtual network of peer-to-peer application users.
• The user then downloads the file, which is received in bits that
come from multiple computers in the network that already have
that file.
• The data is also sent from the user’s computer to other computers
in the network that ask for data that exists on the user’s
computer.
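The chunked download described above can be sketched as reassembling a file from pieces held by different peers. The peers here are plain dicts, not a real P2P protocol implementation:

```python
# P2P download sketch: a file arrives as chunks held by several peers
# and is reassembled locally. Peer data is illustrative, not a protocol.

peers = [
    {"name": "peer-1", "chunks": {0: b"Hel"}},
    {"name": "peer-2", "chunks": {1: b"lo "}},
    {"name": "peer-3", "chunks": {2: b"P2P"}},
]

def download(total_chunks: int) -> bytes:
    parts = {}
    for peer in peers:  # ask every peer for whatever chunks it holds
        for idx, data in peer["chunks"].items():
            parts.setdefault(idx, data)
    assert len(parts) == total_chunks, "some chunks unavailable"
    return b"".join(parts[i] for i in range(total_chunks))

print(download(3))  # b'Hello P2P'
```

Because no single peer holds the whole file and each serves what it has, the download survives the loss of any replica-holding peer, which is the independence property described above.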
• Grid computing can be defined as a network of computers working
together to perform a task that would be difficult for a single
machine. All machines on that network work under the same protocol
to act as a virtual supercomputer. The tasks they work on may
include analyzing huge datasets or simulating situations that require
high computing power. Computers on the network contribute
resources like processing power and storage capacity to the network.
• Grid computing is a subset of distributed computing.
Working:
A grid computing network mainly consists of three types of machines:
1. Control Node: A computer, usually a server or a group of servers,
which administers the whole network and keeps account of the
resources in the network pool.
2. Provider: A computer that contributes its resources to the network
resource pool.
3. User: A computer that uses the resources on the network.
When a computer makes a request for resources to the control node, the
control node gives the user access to the resources available on the network.
When a computer is not in use, it should ideally contribute its resources to the
network. Hence a normal computer on the network can alternate between being
a user and a provider based on its needs. The nodes may consist of machines
with similar platforms running the same OS, called homogeneous networks, or
machines with different platforms running various OSs, called heterogeneous
networks. This is what distinguishes grid computing from other distributed
computing architectures.
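The control-node bookkeeping described above can be sketched as a small resource allocator: providers register spare capacity, and user requests are granted from whichever provider has enough free. All names and numbers are illustrative:

```python
# Grid-computing sketch: a control node tracks providers in a resource
# pool and grants capacity to users on request. Names are illustrative.

class ControlNode:
    def __init__(self):
        self.pool = {}  # provider name -> free CPU cores

    def register_provider(self, name: str, cores: int):
        self.pool[name] = cores

    def request(self, cores_needed: int):
        """Grant cores from the first provider with enough free capacity."""
        for name, free in self.pool.items():
            if free >= cores_needed:
                self.pool[name] -= cores_needed
                return name
        return None  # no provider can satisfy the request

grid = ControlNode()
grid.register_provider("lab-pc-1", 4)
grid.register_provider("lab-pc-2", 8)
print(grid.request(6))  # 'lab-pc-2' -> only provider with 6 free cores
print(grid.request(4))  # 'lab-pc-1'
```

Note that the same machine could both register as a provider and issue requests, which is exactly the user/provider alternation the text describes.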
