CLOUD COMPUTING
Malla Reddy Engineering College.
CLOUD COMPUTING
• Computing Paradigms: High-Performance Computing, Parallel Computing,
Distributed Computing, Cluster Computing, Grid Computing, Cloud
Computing, Biocomputing, Mobile Computing, Quantum Computing, Optical
Computing, Nanocomputing.
• Textbook: 1. “Essentials of Cloud Computing”, K. Chandrasekaran, CRC Press, 2014.
• References:
• 1. “Cloud Computing: Principles and Paradigms”, Rajkumar Buyya, James Broberg, and Andrzej M. Goscinski, Wiley, 2011.
• 2. “Distributed and Cloud Computing”, Kai Hwang, Geoffrey C. Fox, and Jack J. Dongarra, Elsevier, 2012.
• 3. “Cloud Security and Privacy: An Enterprise Perspective on Risks and Compliance”, Tim Mather, Subra Kumaraswamy, and Shahed Latif, O’Reilly, SPD, rp2011.
HIGH-PERFORMANCE COMPUTING
• In high-performance computing (HPC) systems, a pool of processors (CPUs) is networked together with other resources such as memory, storage, and input/output devices, and the deployed software runs across the entire system of connected components.
• The processor machines can be homogeneous or heterogeneous.
• In its legacy meaning, high-performance computing referred to supercomputers; examples of HPC now range from a small cluster of desktop or personal computers (PCs) to the fastest supercomputers.
• HPC systems are normally used in application areas that solve large scientific problems.
• Most of the time, the challenge in working with these kinds of problems is to perform a suitable simulation study, and this can be accomplished by HPC without difficulty.
• Scientific examples such as protein folding in molecular biology and studies on developing models and applications based on nuclear fusion are worth noting as potential applications for HPC.
HIGH-PERFORMANCE COMPUTING
• HPC has three main components: compute, network, and storage.
• To build an HPC architecture, compute servers are networked together into a cluster. Software programs and algorithms run simultaneously on the servers in the cluster. The cluster is networked to the data storage to capture the output. Together, these components operate seamlessly to complete a diverse set of tasks.
HIGH-PERFORMANCE COMPUTING: APPLICATIONS
• Research labs: HPC is used to help scientists find sources of renewable
energy, understand the evolution of our universe, predict and track storms,
and create new materials.
• Media and entertainment: HPC is used to edit feature films, render mind-
blowing special effects, and stream live events around the world.
• Oil and gas: HPC is used to more accurately identify where to drill for new
wells and to help boost production from existing wells.
• Artificial intelligence and machine learning: HPC is used to detect credit
card fraud, provide self-guided technical support, teach self-driving
vehicles, and improve cancer screening techniques.
• Financial services: HPC is used to track real-time stock trends and automate
trading.
• HPC is used to design new products, simulate test scenarios, and forecast the weather.
• HPC is used to help develop cures for diseases like diabetes and cancer and to
enable faster, more accurate patient diagnosis.
Fig 1 High Performance Computing
PARALLEL COMPUTING
• In parallel computing, a set of processors works cooperatively to solve a computational problem. These processor machines or CPUs are mostly of a homogeneous type.
• Like HPC, parallel computing is exemplified by supercomputers.
• We can distinguish between conventional computers and parallel computers by the way applications are executed.
• In serial or sequential computers, the following apply:
• A program runs on a single computer/processor machine having a single CPU.
• A problem is broken down into a discrete series of instructions.
• Instructions are executed one after another.
PARALLEL COMPUTING
• In parallel computing, since multiple processor machines are used simultaneously, the following apply:
• A program runs using multiple processors (multiple CPUs).
• A problem is broken down into discrete parts that can be solved concurrently.
• Each part is further broken down into a series of instructions.
• Instructions from each part are executed simultaneously on different processors.
• An overall control/coordination mechanism is employed.
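The steps above can be sketched in a few lines of Python (an illustrative example, not from the textbook; the summation problem and the part sizes are invented). A problem is split into discrete parts, the parts are solved concurrently by a pool of processors, and a final coordination step combines the results:

```python
from multiprocessing import Pool

def part_sum(bounds):
    """Solve one discrete part: sum the integers in [lo, hi)."""
    lo, hi = bounds
    return sum(range(lo, hi))

if __name__ == "__main__":
    # Break the problem (sum of 0..999999) into 4 discrete parts.
    parts = [(i * 250_000, (i + 1) * 250_000) for i in range(4)]
    with Pool(processes=4) as pool:          # multiple CPUs
        partial = pool.map(part_sum, parts)  # parts run concurrently
    total = sum(partial)                     # overall coordination step
    print(total)                             # same answer as the serial sum
```

On a machine with at least four cores the four parts genuinely run at the same time; with fewer cores, the operating system time-slices them.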
Fig 2 Parallel Computing
DISTRIBUTED COMPUTING
• Distributed computing is also a computing system, consisting of multiple computers or processor machines connected through a network, which can be homogeneous or heterogeneous, but which run as a single system.
• The connectivity can be such that the CPUs in a distributed system are physically close together and connected by a local network (in a single room or building), or they can be geographically distant and connected by a wide area network.
• The heterogeneity in a distributed system allows any number of possible configurations of processor machines, such as mainframes, PCs, workstations, and minicomputers.
• Distributed computing systems are advantageous over centralized systems because they support the following characteristic features:
Fig 3 Distributed Computing
DISTRIBUTED COMPUTING
1. Scalability: the ability of the system to be easily expanded by adding more machines as needed (and, conversely, shrunk) without affecting the existing setup.
2. Redundancy or replication: several machines can provide the same services, so that even if one is unavailable (or fails), work does not stop, because similar computing support is available elsewhere.
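A toy sketch of the redundancy idea (invented for illustration, not from the textbook): several replicas offer the same service, and a client simply moves on to the next one when a replica has failed. The replica names and the failure set are made up.

```python
def call_with_failover(replicas, failed, request):
    """Try replicas in order; use the first one that is available."""
    for name in replicas:
        if name in failed:
            continue                      # this machine is down; try another
        return f"{name} handled {request}"
    raise RuntimeError("all replicas failed")

# Three machines provide the same service; one of them is down.
replicas = ["node-a", "node-b", "node-c"]
print(call_with_failover(replicas, failed={"node-a"}, request="job-1"))
```

Work stops only in the (much less likely) event that every replica is down at once.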
CLUSTER COMPUTING
• A cluster computing system consists of a set of the same or similar type of processor machines (homogeneous) connected using a dedicated network infrastructure.
• All computers work on the same task, and the computers are connected closely.
• All processor machines share resources such as a common home directory and have software such as a message passing interface (MPI) implementation installed, to allow programs to run across all nodes simultaneously.
• The individual computers in a cluster are referred to as nodes.
• A cluster can also be considered HPC, because the individual nodes can work together to solve a problem larger than any single computer can easily solve.
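MPI itself requires a real cluster, but its point-to-point send/receive pattern can be imitated on a single machine with Python's multiprocessing pipes. This is only an illustrative stand-in for MPI, not MPI itself; the node count and per-node workload are invented. Each "node" computes a partial result and sends it back to a master process, which gathers and combines them:

```python
from multiprocessing import Process, Pipe

def node(rank, conn):
    """Work done by one cluster node: compute a partial result, send it back."""
    partial = sum(range(rank * 100, (rank + 1) * 100))
    conn.send((rank, partial))   # analogous to a send to the master node
    conn.close()

if __name__ == "__main__":
    procs, parents = [], []
    for rank in range(4):                    # 4 nodes in the "cluster"
        parent, child = Pipe()
        p = Process(target=node, args=(rank, child))
        p.start()
        procs.append(p)
        parents.append(parent)
    results = dict(parent.recv() for parent in parents)  # gather step
    for p in procs:
        p.join()
    print(sum(results.values()))             # combined answer from all nodes
```

In a real cluster, each `node` call would run on a separate machine and the pipes would be MPI messages over the dedicated network.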
Fig 4 Cluster Computing
GRID COMPUTING
• The computing resources in most organizations are underutilized but are necessary for certain operations.
• The computers in a grid can be either homogeneous or heterogeneous.
• Each node is autonomous, meaning it can run any task.
• The idea of grid computing is to let needy organizations make use of such unutilized computing power, so that the return on investment (ROI) on computing can be increased.
• Thus, grid computing is a network of computing or processor machines managed with software such as middleware, in order to access and use the resources remotely.
• The managing activity of grid resources through the middleware is called grid services. Grid services provide access control, security, access to data including digital libraries and databases, and access to large-scale interactive and long-term storage facilities.
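The matchmaking role of the middleware can be caricatured in a short sketch (invented for illustration; this is not a real grid service): jobs are handed to whichever registered machines currently have the most idle capacity.

```python
def schedule(jobs, nodes):
    """Assign each job to the registered node with the most idle capacity."""
    assignment = {}
    for job in jobs:
        # Pick the node with the largest remaining idle capacity.
        name = max(nodes, key=nodes.get)
        if nodes[name] <= 0:
            raise RuntimeError("no idle capacity left in the grid")
        assignment[job] = name
        nodes[name] -= 1      # one unit of that node's capacity is now used
    return assignment

# Underutilized machines across organizations, with units of idle capacity.
idle = {"pc-lab": 2, "workstation": 1, "server-farm": 3}
print(schedule(["sim-1", "sim-2", "sim-3"], idle))
```

Real middleware adds exactly what the slide lists on top of this matchmaking core: access control, security, and data access.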
Fig 5 Grid Computing
GRID COMPUTING
Grid computing is popular for the following reasons:
• Its ability to make use of unused computing power makes it a cost-effective solution (reducing investments, with only recurring costs).
• It is a way to solve problems in line with any HPC-based application.
ELECTRIC POWER GRID VS GRID COMPUTING
• Electric power grid: We do not bother where the electricity we are using comes from, i.e., whether it is from coal in Australia, from wind power in the United States, or from a nuclear plant in France.
• Grid computing: We do not bother where the computing power we are using comes from, i.e., whether it is from a supercomputer in Germany, a computer farm in India, or a laptop in New Zealand.
• Electric power grid: We can simply plug the electrical appliance into the wall-mounted socket, and it will get the electrical power that we need to operate the appliance.
• Grid computing: We can simply plug the computer into the Internet, and it will get the application execution done.
• Electric power grid: The infrastructure that makes this possible is called the power grid. It links together many different kinds of power plants with our homes, through transmission stations, power stations, transformers, power lines, and so forth.
• Grid computing: The infrastructure that makes this possible is called the computing grid. It links together computing resources, such as PCs, workstations, servers, and storage elements, and provides the mechanism needed to access them via the Internet.
ELECTRIC POWER GRID VS GRID COMPUTING
• Electric power grid: The power grid is pervasive (spread/distributed): electricity is available essentially everywhere, and one can simply access it through a standard wall-mounted socket.
• Grid computing: The grid is also pervasive (spread/distributed), in the sense that the remote computing resources are accessible from different platforms, including laptops and mobile phones, and one can simply access the grid's computing power through a web browser.
• Electric power grid: The power grid is a utility: we ask for electricity and we get it, and we pay for what we get.
• Grid computing: Grid computing is also a utility: we ask for computing power or storage capacity and we get it, and we pay for what we get.
CLOUD COMPUTING
• The computing trend moved toward the cloud from the concept of grid computing, particularly when large computing resources are required to solve a single problem, using the idea of computing power as a utility and other allied concepts.
• A key difference between grid and cloud is that grid computing leverages several computers in parallel to solve a particular application, while cloud computing leverages multiple resources, including computing resources, to deliver a unified service to the end user.
• In cloud computing, the IT and business resources, such as servers, storage, network, applications, and processes, can be dynamically provisioned according to user needs and workload.
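Dynamic provisioning can be illustrated with a deliberately simple control rule (invented for this example; real clouds use far richer policies): the required capacity is computed from the current workload, so servers are added when load rises and released when it falls.

```python
import math

def servers_needed(load, per_server=100):
    """Dynamically provision: enough servers to cover the load, at least one."""
    return max(1, math.ceil(load / per_server))

# Workload fluctuates; provisioned capacity follows it up and back down.
for load in [50, 250, 900, 120]:
    print(f"load={load:>3} -> {servers_needed(load)} servers")
```

The `per_server` capacity of 100 requests is an arbitrary assumption; the point is only that capacity tracks workload in both directions, which is the "dynamic" part of dynamic provisioning.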
BIOCOMPUTING
• Biocomputing is defined as the process of building computers that use biological materials, mimic biological organisms, or are used to study biological organisms.
• Neural networks and genetic algorithms are being used in biological computing.
• A genetic algorithm (GA) is a metaheuristic inspired by the process of natural selection that belongs to the larger class of evolutionary algorithms (EAs). Genetic algorithms are commonly used to generate high-quality solutions to optimization and search problems by relying on biologically inspired operators such as mutation, crossover, and selection. Some examples of GA applications include optimizing decision trees for better performance, solving sudoku puzzles, and hyperparameter optimization.
• Neural networks are mathematical models that use learning algorithms inspired by the brain to store information. Since neural networks are used in machines, they are collectively called ‘artificial neural networks.’
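A minimal genetic algorithm of the kind described above, using selection, crossover, and mutation; the OneMax toy fitness function (count the 1s in a bit string) and all parameters here are invented for illustration.

```python
import random

def onemax(bits):               # fitness: number of 1s (maximum = len(bits))
    return sum(bits)

def evolve(n_bits=20, pop_size=30, generations=60, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population.
        pop.sort(key=onemax, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_bits)          # crossover point
            child = a[:cut] + b[cut:]
            i = rng.randrange(n_bits)               # mutation: flip one bit
            child[i] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=onemax)

best = evolve()
print(onemax(best))   # should be at or near the optimum of 20
```

Because the fitter half of each generation survives intact, the best fitness never decreases; crossover and mutation supply the variation that pushes it upward.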
Fig 7 Biocomputing
BIOCOMPUTING
• Similar to the brain, neural networks are built up of many neurons with many
connections between them.
• Examples of successful applications of neural networks are classification of handwritten digits, speech recognition, and the prediction of stock prices. Moreover, neural networks are increasingly used in medical applications.
• Biocomputing systems use the concepts of biologically derived or simulated
molecules (or models) that perform computational processes in order to solve a
problem. The biologically derived models aid in structuring the computer programs
that become part of the application.
• Biocomputing provides the theoretical background and practical tools for scientists to
explore proteins and DNA.
• DNA and proteins are nature’s building blocks, but these building blocks are not
exactly used as bricks; the function of the final molecule rather strongly depends on
the order of these blocks.
• Thus, the biocomputing scientist works on inventing an order of these blocks suitable for various applications mimicking biology. Biocomputing should, therefore, lead to a better understanding of life and of the molecular causes of certain diseases.
MOBILE COMPUTING
• Mobile computing is a technology that allows us to transmit data, audio, and video via devices that are not connected by any physical link. The key features of mobile computing are that the computing devices are portable and connected over a network.
• The key components of mobile computing are mobile communication, mobile hardware, and mobile software.
• In mobile computing, the processing (or computing) elements are small (i.e., handheld devices), and the communication between the various resources takes place using wireless media.
Fig 8 Mobile Computing
MOBILE COMPUTING
• Mobile communication for voice applications (e.g., the cellular phone) is widely established throughout the world and is witnessing very rapid growth in all its dimensions, including the number of subscribers of the various cellular networks.
• An extension of this technology is the ability to send and receive data across various cellular networks using small devices such as smartphones.
• There can be numerous applications based on this technology; for example, video calling or conferencing is one of the important applications that people prefer to use in place of existing voice-only communication on mobile phones.
QUANTUM COMPUTING
• Quantum computing is an area of computing focused on developing computer
technology based on the principles of quantum theory.
• Computers used today can only encode information in bits that take the value
of 1 or 0. Quantum computing, on the other hand, uses quantum bits or
qubits. It harnesses the unique ability of subatomic particles that allows
them to exist in more than one state (i.e., a 1 and a 0 at the same time).
• Superposition and entanglement are two features of quantum physics on which these machines are based. For certain problems, they empower quantum computers to handle operations exponentially faster than conventional computers, and at much lower energy consumption.
• Quantum computers harness this unique behavior of quantum physics (superposition, entanglement, and quantum interference) and apply it to computing, which introduces new concepts to traditional programming methods.
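The bit-versus-qubit point can be made concrete with a tiny state-vector simulation in plain Python (an illustrative sketch, not a real quantum device): a Hadamard gate puts a qubit that starts as a definite 0 into an equal superposition of 0 and 1.

```python
import math

# A qubit state is a pair of amplitudes (alpha, beta) for |0> and |1>.
def hadamard(state):
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def probabilities(state):
    a, b = state
    return (abs(a) ** 2, abs(b) ** 2)   # chance of measuring 0 or 1

qubit = (1.0, 0.0)          # classical-like start: definitely |0>
qubit = hadamard(qubit)     # superposition: "a 1 and a 0 at the same time"
p0, p1 = probabilities(qubit)
print(round(p0, 3), round(p1, 3))   # 0.5 0.5
```

A classical bit can only ever be in one of the two states; the qubit above carries both amplitudes at once until it is measured.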
Fig 9 Quantum Computing
QUANTUM COMPUTING
• The quantum in "quantum computing" refers to the quantum mechanics that the
system uses to calculate outputs. In physics, a quantum is the smallest
possible discrete unit of any physical property. It usually refers to properties of
atomic or subatomic particles, such as electrons, neutrons, and photons.
• Manufacturers of computing systems say that there is a limit to cramming more and more transistors into smaller and smaller spaces on integrated circuits (ICs) and thereby doubling the processing power about every 18 months.
• This problem will have to be overcome by a new quantum computing-based solution, wherein the dependence is on quantum information, the rules that govern the subatomic world.
• Quantum computers promise to be millions of times faster than even our most powerful supercomputers today for some tasks. However, since quantum computing works differently at the most fundamental level from current technology, and although there are working prototypes, these systems have not so far proved to be alternatives to today's silicon-based machines.
Fig 10 Optical Computing
OPTICAL COMPUTING
• An optical computing system uses the photons in visible light or infrared beams, rather than electric current, to perform digital computations.
• The speed of computation depends on two factors: how fast information can be transferred and how fast that information can be processed, that is, data computation.
• An electric current flows at only about 10% of the speed of light. This limits the rate at which data can be exchanged over long distances and is one of the factors that led to the evolution of optical fiber.
• By applying some of the advantages of visible and/or IR networks at the device and component scale, a computer might be developed that can perform operations 10 or more times faster than a conventional electronic computer.
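The 10%-of-light-speed figure above can be turned into a quick worked example (the 300 km distance is invented, and the 10% figure is taken at face value for illustration): over that link, a signal traveling at one-tenth of c takes ten times longer to arrive than light would.

```python
C = 299_792_458.0          # speed of light in vacuum, m/s

def propagation_delay(distance_m: float, fraction_of_c: float) -> float:
    """Time in seconds for a signal to cover distance_m at fraction_of_c * c."""
    return distance_m / (fraction_of_c * C)

distance = 300_000.0       # a 300 km link
t_optical = propagation_delay(distance, 1.0)   # photons
t_electric = propagation_delay(distance, 0.1)  # electric current per the slide
print(f"optical : {t_optical * 1e3:.2f} ms")
print(f"electric: {t_electric * 1e3:.2f} ms")
```

The ratio is exactly the factor of 10 the slide attributes to electric current, which is why long-distance links moved to optical fiber.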
NANO COMPUTING
• Nanocomputing refers to computing systems that are constructed from nanoscale components. The silicon transistors in traditional computers may be replaced by transistors based on carbon nanotubes.
• The successful realization of nanocomputers depends on the scale and integration of these nanotubes or components.
• The issues of scale relate to the dimensions of the components; they are, at most, a few nanometers in at least two dimensions.
• The issues of integration of the components are twofold: first, the manufacture of complex arbitrary patterns may be economically infeasible, and second, nanocomputers may include massive quantities of devices. Researchers are working on all these issues to make nanocomputing a reality.
NETWORK COMPUTING
• Network computing is a way of designing systems to take advantage of the latest technology and to maximize its positive impact on business solutions and their ability to serve customers, using a strong underlying network of computing resources.
• In any network computing solution, the client component of the networked architecture or application resides with the customer (the client or end user), and in modern days it provides the essential set of functionality necessary to support the appropriate client functions at minimum cost and maximum simplicity.
NETWORK COMPUTING
• Unlike conventional PCs, such clients do not need to be individually configured and maintained according to their intended use. At the other end of the client component in the network architecture is a typical server environment that pushes the services of the application to the client end. Almost all the computing paradigms discussed earlier are of this nature.
• Even in the future, if anyone invents a totally new computing paradigm, it will likely be based on a networked architecture, without which it is impossible to realize the benefits for any end user.