
Cloud Computing Module I

This document provides an introduction to cloud computing. It defines cloud computing as manipulating, configuring, and accessing hardware and software resources remotely via the internet. The three major deployment models are public clouds, private clouds, and hybrid clouds. Cloud computing provides advantages like mobility, scalability, and reduced costs. The key technologies that enabled cloud computing are distributed systems, virtualization, Web 2.0, service-oriented computing, and utility-oriented computing.


Module 1-Introduction

What is Cloud?

The term Cloud refers to a network or the Internet. In other words, we can say that the Cloud is something which is present at a remote location. The Cloud can provide services over public and private networks, i.e., WAN, LAN or VPN.

Applications such as e-mail, web conferencing, and customer relationship management (CRM) execute on the cloud.

What is Cloud Computing?

Cloud Computing refers to manipulating, configuring, and accessing hardware and software resources remotely. It offers online data storage, infrastructure, and applications.

Cloud computing offers platform independence, as the software is not required to be installed locally on the PC. Hence, cloud computing is making our business applications mobile and collaborative.

● It is a technology that uses remote servers on the internet to store, manage, and access data online rather than on local drives. The data can be anything, such as files, images, documents, audio, video, and more.
● System developers can concentrate on the business logic rather than dealing with the complexity of infrastructure management and scalability.
● Small enterprises and startups can afford to translate their ideas into business results more quickly, without excessive upfront costs.
● Cloud computing is helping enterprises, governments, and public and private institutions.
● Large enterprises can offload some of their activities to cloud-based systems.
● End users can have their documents accessible from anywhere and on any device.

Basic Concepts
Deployment models define the type of access to the cloud, i.e., how the cloud is located and who can access it.

Department of CSA Sreekala C



The three major models for deployment and accessibility of cloud computing environments are:
1. Public clouds
2. Private/enterprise clouds
3. Hybrid clouds

Public clouds are the most common deployment model, in which the necessary IT infrastructure is established by a third-party service provider who makes it available to any consumer on a subscription basis. The public cloud allows systems and services to be easily accessible to the general public. A public cloud may be less secure because of its openness.
The private cloud (operated by a single organization) allows systems and services to be accessible within an organization. It is more secure because of its private nature. The use of cloud-based in-house solutions is also driven by the need to keep confidential information within the organization's premises. Institutions such as government agencies and banks, with high security, privacy, and regulatory concerns, prefer to build and use their own private or enterprise cloud.

Whenever private cloud resources are unable to meet users' quality-of-service requirements, such as deadlines, hybrid computing systems are created to meet the organization's needs. The hybrid cloud is a mixture of public and private cloud, in which the critical activities are performed using the private cloud while the non-critical activities are performed using the public cloud.

The community cloud allows systems and services to be accessible by a group of organizations.


CLOUD COMPUTING REFERENCE MODEL

Cloud computing is based on service models. These are categorized into three basic service models:

● Infrastructure-as–a-Service (IaaS)
● Platform-as-a-Service (PaaS)
● Software-as-a-Service (SaaS)


IaaS is also known as Hardware as a Service (HaaS). It is a computing infrastructure managed over the internet. The main advantage of using IaaS is that it helps users avoid the cost and complexity of purchasing and managing physical servers. It allows customers to outsource their IT infrastructure, such as servers, networking, processing, storage, virtual machines, and other resources. Customers access these resources over the Internet using a pay-per-use model.

Users of IaaS are IT administrators. IaaS providers include Amazon Web Services and Microsoft Azure.

PaaS provides the runtime environment for applications, development and deployment tools, etc. PaaS provides cloud platforms and runtime environments for developing, testing, and managing applications. Users of PaaS are software developers.

SaaS delivers complete software applications to end users over the Internet, typically on a subscription basis; SalesForce.com is a well-known example. Users of SaaS are end users.

Characteristics of Cloud Computing


● No upfront commitments


● On-demand access - cloud computing services do not require any human administrators; users themselves are able to provision, monitor, and manage computing resources as needed.
● Nice pricing (users pay only for the resources they actually use)
● Simplified application acceleration and scalability - compute, storage, networking, and other assets can be added or removed as needed.
● Efficient resource allocation
● Energy efficiency
● Seamless creation and use of third-party services

Benefits of cloud computing

1) Back-up and restore data

Once the data is stored in the cloud, it is easier to get back-up and restore
that data using the cloud.

2) Improved collaboration

Cloud applications improve collaboration by allowing groups of people to quickly and easily share information in the cloud via shared storage.

3) Excellent accessibility

Cloud allows us to quickly and easily access stored information anywhere, anytime in the whole world, using an internet connection. An internet cloud infrastructure increases organization productivity and efficiency by ensuring that our data is always accessible.

4) Increased economic return due to reduced maintenance costs and operational costs related to IT software and infrastructure.

5) Mobility

Cloud computing allows us to easily access all cloud data via mobile.


6) Services in the pay-per-use model

Cloud computing offers Application Programming Interfaces (APIs) that users invoke to access services on the cloud, paying charges as per the usage of the service.

7) Unlimited storage capacity

Cloud offers us a huge amount of storage capacity for storing our important
data such as documents, images, audio, video, etc. in one place.

8) Data security
Data security is one of the biggest advantages of cloud computing. Cloud
offers many advanced features related to security and ensures that data is
securely stored and handled

Challenges
● Technical challenges arise for cloud service providers in the management of large computing infrastructures.
● Security, in terms of confidentiality, secrecy, and protection of data in a cloud environment.
● Legal issues may arise when cloud computing infrastructures span diverse geographical locations (different countries may potentially create disputes about what rights third parties have over your data).


HISTORICAL DEVELOPMENTS

Five core technologies played an important role in the realization of cloud computing: distributed systems, virtualization, Web 2.0, service-oriented computing, and utility-oriented computing.
1. Distributed systems
A distributed system is a collection of independent computers that appears
to its users as a single coherent system.
Three major milestones have led to cloud computing: mainframe computing,
cluster computing, and grid computing.
2. Virtualization
Virtualization is essentially a technology that allows the creation of different computing environments. These environments are called virtual because they simulate the interface that is expected by a guest.
Ex: Hardware virtualization, storage virtualization, and network virtualization.


3. Web 2.0
➔Web is the primary interface through which cloud computing delivers
its services
➔ A set of technologies and services that facilitate interactive
information sharing, collaboration, user-centered design, and
application composition.
➔ Web 2.0 brings interactivity and flexibility into Web pages, providing
enhanced user experience by gaining Web-based access to all the
functions that are normally found in desktop applications.
➔ These capabilities are obtained by integrating a collection of standards
and technologies such as XML, Asynchronous JavaScript and XML
(AJAX), Web Services, and others
➔ Ex: Google Documents, Google Maps, Flickr, Facebook, Twitter, YouTube, Blogger, and Wikipedia.

4. Service-oriented computing
➔ Service orientation is the core reference model for cloud computing systems.
➔ This approach adopts the concept of services as the main building
blocks of application and system development.
➔ Service-oriented computing (SOC) supports the development of
rapid, low-cost, flexible, interoperable, and evolvable applications
and systems.
➔ A service is an abstraction representing a self-describing and
platform-agnostic component that can perform any function—
anything from a simple function to a complex business process
➔A service is supposed to be loosely coupled, reusable, programming
language independent, and location transparent.
5. Utility-oriented computing
➔ Utility computing is a vision of computing that defines a service-provisioning model for compute services in which resources such as storage, compute power, applications, and infrastructure are packaged and offered on a pay-per-use basis.


➔ With the advent of cloud computing, the idea of providing computing as a utility, like natural gas, water, power, and telephone connections, has become a reality today.
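The utility analogy can be made concrete with a toy billing sketch: under pay-per-use, the monthly bill depends only on metered consumption, just as with gas or power. The rates below are made up purely for illustration.

```python
def monthly_bill(hours_used, rate_per_hour, storage_gb, rate_per_gb):
    # Pay-per-use: the bill depends only on consumption, like a utility meter.
    return hours_used * rate_per_hour + storage_gb * rate_per_gb

# Hypothetical rates, for illustration only: $0.05 per compute hour,
# $0.02 per GB of storage per month.
bill = monthly_bill(hours_used=200, rate_per_hour=0.05, storage_gb=50, rate_per_gb=0.02)
print(round(bill, 2))  # 11.0
```

Real providers meter many more dimensions (network egress, requests, reserved capacity), but the principle is the same: no upfront cost, only usage.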

Building cloud computing environments


The creation of cloud computing environments encompasses both the development of applications and systems that leverage cloud computing solutions and the creation of frameworks, platforms, and infrastructures delivering cloud computing services.
1. Application development
Web applications, enterprise applications, and resource-intensive applications: data-intensive and compute-intensive applications (e.g., scientific applications).
2. Infrastructure and system development
Distributed computing, virtualization, service orientation, and Web 2.0 form
the core technologies enabling the provisioning of cloud services from
anywhere on the globe.

Computing platforms and technologies


1. Amazon web services (AWS)
AWS offers comprehensive cloud IaaS services ranging from virtual
compute, storage, and networking to complete computing stacks.
EX: Elastic Compute Cloud (EC2)
Simple Storage Service (S3)
2. Google AppEngine
Google AppEngine is a scalable runtime environment mostly devoted to
executing Web applications.
The languages currently supported are Python, Java, and Go.
3. Microsoft Azure
★ It provides a scalable runtime environment for Web applications
and distributed applications
★ Applications in Azure are organized around the concept of roles, which identify a distribution unit for applications and embody the application's logic. Currently, there are three types of roles: Web role, worker role, and virtual machine role.
★ The Web role is designed to host a Web application.
★ The worker role is used to perform workload processing.
★ The virtual machine role provides a virtual environment in which the computing stack can be fully customized, including the operating system.
4. Hadoop
★ Apache Hadoop is an open-source framework that is suited for processing large data sets on commodity hardware.
★ Hadoop is an implementation of MapReduce.
★ MapReduce is a programming model that provides two fundamental operations for data processing:
➢ map
➢ reduce
★ Map transforms and synthesizes the input data provided by the user.
★ Reduce aggregates the output obtained by the map operations.
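The two operations can be sketched in plain Python; this illustrates only the programming model, not the Hadoop API. Map emits intermediate key/value pairs, the framework groups them by key (the "shuffle"), and reduce aggregates each group. Word counting is the classic demonstration.

```python
from collections import defaultdict

def map_phase(documents):
    # Map: emit a (word, 1) pair for every word in every input document.
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def shuffle(pairs):
    # Group intermediate pairs by key, as the framework does between phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: aggregate the list of counts emitted for each word.
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["the quick brown fox", "the lazy dog", "the fox"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts["the"])  # 3
print(counts["fox"])  # 2
```

In Hadoop the same logic runs distributed: map tasks process separate input splits in parallel, and the shuffle moves intermediate pairs across the cluster to the reduce tasks.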

5. Force.com and Salesforce.com


★ Force.com is a cloud computing platform for developing social
enterprise applications.
★ SalesForce.com is a Software-as-a-Service solution for customer
relationship management.
6. Manjrasoft Aneka
★ Manjrasoft Aneka is a cloud application platform for rapid creation
of scalable applications and their deployment on various types of
clouds in a seamless and elastic manner.
★It supports a collection of programming abstractions
★ Developers can choose different abstractions to design their
application


Principles of parallel and distributed computing


Cloud computing is a new technological trend that supports better utilisation of IT infrastructure, services, and applications. It adopts a service delivery model based on a pay-per-use approach: users do not own the infrastructure, platforms, and applications but use them for the time they need them. These IT assets are owned and maintained by service providers who make them accessible through the internet.

Eras of Computing
● The two fundamental and dominant models of computing are sequential and parallel.
● The four key elements of computing developed during these eras were architectures, compilers, applications, and problem-solving environments.


Parallel Vs Distributed Computing


● The term parallel computing refers to a model in which the computation is divided among several processors sharing the same memory, and together they are considered a single computer.
● The architecture of a parallel computing system is often characterized by the homogeneity of components: each processor is of the same type and has the same capability as the others. The shared memory has a single address space that is accessible to all the processors.
● The term distributed implies that the location of the computing elements is not the same, and such elements might be heterogeneous in terms of hardware and software features.
● The architecture of distributed computing is characterized by the execution of the different units of a computation on heterogeneous processors or nodes.

ELEMENTS OF PARALLEL COMPUTING

● What is parallel processing?

Processing of multiple tasks simultaneously on multiple processors is called parallel processing. A given task is divided into multiple subtasks using a divide-and-conquer technique, and each of them is processed on a different CPU. Programming on a multiprocessor system using the divide-and-conquer technique is called parallel programming.
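A minimal sketch of this divide-and-conquer idea in Python: the input is split into chunks, each chunk is summed by a separate worker, and the partial results are combined. A thread pool is used here for brevity; a real CPU-bound workload would typically use separate processes to run on multiple cores.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # Each subtask computes the sum of its own slice of the data.
    return sum(chunk)

def parallel_sum(data, workers=4):
    # Divide: split the input into roughly one chunk per worker.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # Conquer: process every chunk on a separate worker, then combine results.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

print(parallel_sum(list(range(1, 101))))  # 5050
```

The structure (split, process in parallel, combine) is the same regardless of whether the workers are threads, processes, or machines in a cluster.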

● Hardware architectures for parallel processing


Based on the number of instruction and data streams that can be processed simultaneously, computing systems are classified into the following four categories:
1. SINGLE INSTRUCTION SINGLE DATA(SISD)
2. SINGLE INSTRUCTION MULTIPLE DATA(SIMD)
3. MULTIPLE INSTRUCTION SINGLE DATA(MISD)
4. MULTIPLE INSTRUCTION MULTIPLE DATA(MIMD)


Single Instruction Single Data

● An SISD computing system is a uniprocessor machine capable of executing a single instruction operating on a single data stream.
● In an SISD machine, instructions are processed sequentially; hence computers adopting this model are called sequential computers.
● Examples: IBM PC, Macintosh, etc.

SINGLE INSTRUCTION MULTIPLE DATA


● An SIMD computing system is a multiprocessor machine capable of executing the same instruction on all the CPUs but operating on different data streams.

MULTIPLE INSTRUCTION SINGLE DATA

● A MISD computing system is a multiprocessor machine capable of executing different instructions on different processing elements, but all of them operating on the same data set.

MULTIPLE INSTRUCTION MULTIPLE DATA(MIMD)

● A MIMD computing system is a multiprocessor machine capable of executing multiple instructions on multiple data sets. Each PE (processing element) in the MIMD model has separate instruction and data streams; hence machines built using this model are suited for any kind of application.


➔ MIMD machines are broadly categorized into shared-memory MIMD and distributed-memory MIMD based on how the processing elements are coupled to the main memory.
➔ Shared-memory MIMD machine - In the shared-memory MIMD model, all the PEs are connected to a single global memory and they all have access to it. Systems based on this model are also called tightly coupled multiprocessor systems. The communication between processing elements takes place through the shared memory; a modification of the data stored in memory by one processing element is visible to all other processing elements.

➔ Distributed-memory MIMD machine - In the distributed-memory MIMD model, processing elements have their own local memory. Systems based on this model are also called loosely coupled multiprocessor systems. The communication between processing elements in this model takes place through the interconnection network.
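The shared-memory communication style can be sketched with Python threads, which share a single address space the way tightly coupled PEs share global memory. This is only an analogy, not a model of real multiprocessor hardware: each thread updates a shared counter, and a lock makes every modification safely visible to all the others.

```python
import threading

counter = 0                 # data in the shared "global memory"
lock = threading.Lock()

def worker(increments):
    # Each processing element updates the shared counter; the lock serializes
    # the updates so every modification is visible to all other threads.
    global counter
    for _ in range(increments):
        with lock:
            counter += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 4000
```

In a distributed-memory system no such shared variable exists: the same result would have to be assembled by exchanging messages over the interconnection network.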

The most prominent parallel programming approaches are:

1. Data parallelism: In the case of data parallelism, the divide-and-conquer technique is used to split the data into multiple sets, and each data set is processed on a different processing element using the same instruction.
2. Process parallelism: In the case of process parallelism, a given operation has multiple (but distinct) activities, which can be processed on multiple processors.
3. Farmer-and-worker model: In this model a job-distribution approach is used; one processor is configured as the master and all the other remaining processing elements are designated as slaves; the master assigns jobs to the slave processing elements, and on completion they inform the master, which in turn collects the results.
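A minimal sketch of the farmer-and-worker model using Python threads and queues: the main thread plays the farmer (master), distributing jobs through a queue, and the workers report results back through a second queue. The job itself, squaring a number, is a made-up placeholder.

```python
import queue
import threading

def worker(jobs, results):
    # Each worker repeatedly takes a job from the master's queue,
    # processes it, and reports the result back to the master.
    while True:
        job = jobs.get()
        if job is None:                # sentinel: no more work for this worker
            break
        results.put((job, job * job))  # the "job" here is just squaring a number

jobs, results = queue.Queue(), queue.Queue()
workers = [threading.Thread(target=worker, args=(jobs, results)) for _ in range(3)]
for w in workers:
    w.start()

for n in range(10):                    # the farmer (master) assigns the jobs
    jobs.put(n)
for _ in workers:                      # one shutdown sentinel per worker
    jobs.put(None)
for w in workers:
    w.join()

collected = dict(results.get() for _ in range(10))  # master collects the results
print(collected[4])  # 16
```

The same pattern scales from threads on one machine to processes on a cluster; only the transport for the job and result queues changes.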

LEVELS OF PARALLELISM
Levels of parallelism are decided based on the lumps of code (grain size) that can be potential candidates for parallelism.


Parallelism within an application can be detected at several levels:


● Large grain (or task level)
● Medium grain (or control level)
● Fine grain (data level)
● Very fine grain (multiple-instruction issue)


ELEMENTS OF DISTRIBUTED COMPUTING

DEFINITION:
A distributed system is a collection of independent computers connected through a network, communicating and coordinating their actions only by passing messages. It appears to its users as a single coherent system.

Components of a distributed system

The different layers that are involved in providing the services of a distributed system:


● At the very bottom layer, computer and network hardware constitute the physical infrastructure; these components are directly managed by the operating system, which provides the basic services for interprocess communication, process scheduling and management, and resource management.
● The middleware layer provides services to build a uniform environment for the development and deployment of distributed applications.
● The top of the distributed system stack is represented by the applications and services designed and developed to use the middleware. It is accessible through the Internet via a browser.

Architectural styles for distributed computing


Architectural styles help in understanding and classifying the organization of software systems in distributed computing. The architectural styles can be organized into two major classes:
1. Software architectural styles - relate to the logical organization of the software.
2. System architectural styles - include all those styles that describe the physical organization of distributed software systems.

● Components and Connectors


A component represents a unit of software that encapsulates a function or feature of the system, e.g., programs and objects.
A connector is a communication mechanism that allows cooperation and coordination among components.

● Software architectural styles

➔ Based on the logical arrangement of software components.
➔ Data-centered architectures - These architectures identify the data as the fundamental element of the software system, and access to shared data is the core characteristic of the data-centered architectures.
➔ Data-flow architectures - In the case of data-flow architectures, it is the availability of data that controls the computation. With respect to the data-centered styles, in which the access to data is the core feature, data-flow styles explicitly incorporate the pattern of data flow.
➔ Virtual machine architectures - The virtual machine class of architectural styles is characterized by the presence of an abstract execution environment (generally referred to as a virtual machine) that simulates features that are not available in the hardware or software.
➔ Call-and-return architectures - This category identifies all systems that are organized into components mostly connected together by method calls.
➔ Independent components - This class of architectural styles models systems in terms of independent components that have their own life cycles and interact with each other to perform their activities.

System architectural styles


● System architectural styles cover the physical organization of
components and processes over a distributed infrastructure.
● They provide a set of reference models for the deployment of such
systems

1. Client/server - This architecture is very popular in distributed computing and is suitable for a wide variety of applications. The client/server model features two major components: a server and a client. These two components interact with each other through a network connection using a given protocol. The communication is unidirectional: the client issues a request to the server, and after processing the request the server returns a response.
● For the client design, two major models are:
1. Thin-client model. In this model, the load of data processing
and transformation is put on the server side, and the client
has a light implementation that is mostly concerned with
retrieving and returning the data it is being asked for, with no
considerable further processing.
2. Fat-client model. In this model, the client component is also
responsible for processing and transforming the data before
returning it to the user, whereas the server features a
relatively light implementation that is mostly concerned with
the management of access to the data.

● The three major components in the client/server model are presentation, application logic, and data storage. In the thin-client model, the client embodies only the presentation component, while the server absorbs the other two. In the fat-client model, the client encapsulates presentation and most of the application logic, and the server is principally responsible for data storage and maintenance.

Fig: Client/server architectural styles.


● Presentation, application logic, and data maintenance can be seen
as conceptual layers, which are more appropriately called tiers.
● Two major classes exist:
1. Two-tier architecture-
● This architecture partitions the systems into two tiers,
which are located one in the client component and the
other on the server. The client is responsible for the
presentation tier by providing a user interface; the server
concentrates the application logic and the data store into a
single tier.
● This architecture is suitable for systems of limited size and
suffers from scalability issues.


● Another limitation is caused by the dimension of the data to maintain, manage, and access, which might be prohibitive for a single computation node or too large for serving the clients with satisfactory performance.
2. Three-tier architecture / N-tier architecture
➔ The three-tier architecture separates the presentation of data, the application logic, and the data storage into three tiers. These systems are also more complex to understand and manage.
➔ A classic example of three-tier architecture is a medium-size Web application that relies on a relational database management system for storing its data.

2. Peer-to-peer
● The peer-to-peer model introduces a symmetric architecture in which all the components, called peers, play the same role and incorporate both the client and server capabilities of the client/server model.
● Each peer acts as a server when it processes requests from other peers and as a client when it issues requests to other peers.
● With respect to the client/server model, which partitions the responsibilities of the IPC between server and clients, the peer-to-peer model attributes the same responsibilities to each component.


Models for interprocess communication


● IPC is a fundamental aspect of distributed systems design and implementation. IPC is used either to exchange data and information or to coordinate the activity of processes.
● There are several different models in which processes can interact with each other.

● Sockets are the most popular IPC primitive for implementing communication channels between distributed processes. Sockets provide the basic capability of transferring a sequence of bytes.
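A minimal sketch of socket-based IPC in Python: a server thread accepts one connection on the loopback interface and echoes back the bytes a client sends. The port is chosen by the OS, and the payload is illustrative.

```python
import socket
import threading

def echo_server(sock):
    # Server side: accept one connection and echo back whatever it receives.
    conn, _ = sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)

# Bind to an ephemeral port on the loopback interface.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
t = threading.Thread(target=echo_server, args=(server,))
t.start()

# Client side: connect, send a sequence of bytes, read the reply.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello cloud")
    reply = client.recv(1024)
t.join()
server.close()
print(reply.decode())  # hello cloud
```

The same two endpoints could live on different machines: only the address passed to `create_connection` changes, which is what makes sockets the basic building block of distributed communication.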

