
Bangladesh University of Professionals

Assignment on

CLOUD COMPUTING

Assignment -1

Submitted By:
TASFIA BINTY SERAJ TANIA
Roll: 1904022

Submitted To:
Dr. Khondokar Habibul Kabir
Head, Energy and Environmental Centre (EEC)
Associate Professor, EEE

Submission Date: 29th July, 2019


Requirements to Transform IT to a Service:

Connectivity: - A generic term for connecting devices to each other in order to transfer data back and forth. It often refers to network connections, encompassing bridges, routers, switches and gateways as well as backbone networks. It may also refer to connecting a home or office to the Internet, or connecting a digital camera to a computer or printer.

Interactivity: - Interactive designs are complete experiences that engage users with web pages while they move through the information displayed on a website. Interactive elements woven creatively into website content are what make such designs successful in capturing user attention.

Reliability: - The quality of being able to be trusted or believed because of working or behaving well. The system should continue to work correctly (performing the correct function at the desired level of performance) even in the face of adversity (hardware or software faults, and even human error).

Performance: - How well a person, machine, etc. does a piece of work or an activity; for a service, this is typically judged by how quickly and smoothly requests are handled.

Pay-as-you-go: - Pay As You Go (PAYG) is a utility computing billing method that is implemented in cloud computing and geared toward organizations and end users. A PAYG user is billed for the computing resources actually consumed, rather than for capacity procured in advance. The PAYG mechanism is derived from utility computing.
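
As a rough illustration of the idea (the resource names and unit rates below are hypothetical, not taken from any particular provider), a pay-as-you-go bill is simply metered usage multiplied by a unit rate:

# Minimal sketch of pay-as-you-go billing: charge only for metered usage.
# The resource names and rates are made-up examples, not a real price list.
RATES = {
    "vm_hours": 0.05,          # price per VM-hour
    "storage_gb_month": 0.02,  # price per GB stored per month
    "egress_gb": 0.09,         # price per GB of outbound traffic
}

def monthly_bill(usage: dict) -> float:
    """Return the total charge for the metered usage of one billing period."""
    return sum(RATES[item] * amount for item, amount in usage.items())

# Example: 720 VM-hours, 50 GB stored, 10 GB of egress -> 37.90
print(monthly_bill({"vm_hours": 720, "storage_gb_month": 50, "egress_gb": 10}))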

Programmability: - Used to describe a computer or machine that is able to accept instructions to do a range of tasks rather than just one; it is the capability within hardware and software to change, to accept a new set of instructions that alter its behavior. Programmability generally refers to program logic (business rules), but it also refers to designing the user interface, which includes the choices of menus, buttons and dialogs.

Manage Large Amounts of Data: - Very large sets of data are produced by people using the internet, and they can only be stored, understood, and used with the help of special tools and methods. Data scientists, engineers and analysts are needed to ensure that an organization takes maximum advantage of big data to support better business decisions, create new sources of competitive advantage and build new and sustainable revenue streams.

Efficiency: - A situation in which a system or machine works well and quickly. Efficiency is important in keeping costs down, reducing dependence on government subsidies and freeing resources for investment in expansion and maintenance. The parameters measuring the efficiency of a service can be defined in terms of quality of service (QoS).

Scalability & Elasticity: - Scalability is the ability of a business or system to grow larger; elasticity is the ability to stretch and shrink as needed. It is the ability of a system to grow in its capacity to meet the rising demand for the services it offers. System scalability and elasticity criteria could include the ability to accommodate an increasing number of service requests when required, as sketched below.
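
A minimal, hedged sketch of the elasticity idea (the per-worker capacity and pool limits below are assumed figures, not measurements) scales the number of workers up and down with demand:

# Sketch: grow or shrink the worker count so capacity tracks demand.
import math

REQUESTS_PER_WORKER = 100         # assumed capacity of a single worker
MIN_WORKERS, MAX_WORKERS = 1, 50  # assumed floor and ceiling for the pool

def desired_workers(requests_per_second: float) -> int:
    """Return how many workers are needed for the current request rate."""
    needed = math.ceil(requests_per_second / REQUESTS_PER_WORKER)
    return max(MIN_WORKERS, min(MAX_WORKERS, needed))

print(desired_workers(40))    # light load -> 1 worker
print(desired_workers(1200))  # heavy load -> 12 workers
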
Technologies That Combine to Form Cloud Computing

Internet:
A means of connecting a computer to any other computer anywhere in the world via dedicated routers
and servers. The origins of the Internet date back to research commissioned by the federal government
of the United States in the 1960s to build robust, fault-tolerant communication with computer
networks. The primary precursor network, the ARPANET, initially served as a backbone for
interconnection of regional academic and military networks in the 1980s.

The Internet is the global system of interconnected computer networks that use the Internet protocol suite (TCP/IP) to link devices worldwide. It is a network of networks that consists of private, public, academic, business, and government networks of local to global scope, linked by a broad array of electronic, wireless, and optical networking technologies. More than 190 countries are linked into exchanges of data, news and opinions.

According to Internet Live Stats, as of August 12, 2016 there were an estimated 3,432,809,100 Internet users worldwide. The number of Internet users represents nearly 40% of the world's population. The largest number of Internet users by country is in China, followed by the United States and India. In September 2014, the total number of websites with a unique hostname online exceeded 1 billion, up from a single website (info.cern.ch) in 1991. The number of Internet users worldwide first reached one billion in 2005.

Web 2.0:

Web 2.0 is the current state of online technology as it compares to the early days of the Web, characterized by greater user interactivity and collaboration, more pervasive network connectivity and enhanced communication channels.

One of the most significant differences between Web 2.0 and the traditional World Wide Web (WWW, retroactively referred to as Web 1.0) is greater collaboration among Internet users, content providers and enterprises. Originally, data was posted on Web sites, and users simply viewed or downloaded the content. Increasingly, users have more input into the nature and scope of Web content and in some cases exert real-time control over it.

So the major difference between Web 1.0 and Web 2.0 is that Web 2.0 websites enable users to create, share, collaborate on and communicate their work with others, without the need for any web design or publishing skills. These capabilities were not present in the Web 1.0 environment. Nowadays, the way web users get information has drastically changed: users consume content they are specifically interested in, often using Web 2.0 tools.

Fault Tolerance:
Fault tolerance is the way in which an operating system (OS) responds to a hardware or software
failure. The term essentially refers to a system’s ability to allow for failures or malfunctions, and this
ability may be provided by software, hardware or a combination of both. To handle faults gracefully,
some computer systems have two or more duplicate systems.

A fault-tolerant design enables a system to continue its intended operation, possibly at a reduced
level, rather than failing completely, when some part of the system fails. The term is most commonly
used to describe computer systems designed to continue more or less fully operational with, perhaps,
a reduction in throughput or an increase in response time in the event of some partial failure. That is,
the system as a whole is not stopped due to problems either in the hardware or the software.
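
As a small, hedged sketch of this idea (the lookup functions below are invented placeholders, not a real API), a caller can fail over from a primary component to a duplicate so the service keeps running when one part fails:

# Sketch of fault tolerance through redundancy: try the primary component,
# fall back to a duplicate so the system degrades instead of stopping.
def call_with_failover(primary, backup, *args):
    try:
        return primary(*args)
    except Exception:
        # The primary failed; the duplicate keeps the service available,
        # possibly at reduced performance.
        return backup(*args)

# Hypothetical placeholder services, used only for illustration.
def primary_lookup(key):
    raise ConnectionError("primary node down")

def replica_lookup(key):
    return f"value-for-{key} (served from the replica)"

print(call_with_failover(primary_lookup, replica_lookup, "user42"))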

Parallel & Distributed Computing:


a. Parallel Computing:
Parallel computing is the term used to refer to high-performance computing (HPC). It is a type of computing architecture in which several processors execute or process an application or computation simultaneously. Parallel computing helps in performing large computations by dividing the workload between more than one processor, all of which work through the computation at the same time. Parallel computing is also known as parallel processing.
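
A minimal sketch of this workload division, using Python's standard multiprocessing module purely as an illustration, splits one large computation across four processes that work at the same time:

# Sketch: divide one large computation among several processors.
from multiprocessing import Pool

def partial_sum(chunk):
    """Work assigned to one process: sum the squares of one slice of the data."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::4] for i in range(4)]         # split the workload four ways
    with Pool(processes=4) as pool:
        total = sum(pool.map(partial_sum, chunks))  # all chunks are processed in parallel
    print(total)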

b. Distributed Computing:

A distributed system is a network of autonomous computers that communicate with each other in order to achieve a goal. The computers in a distributed system are independent and do not physically share memory or processors. They communicate with each other using messages, pieces of information transferred from one computer to another over a network. Messages can communicate many things: computers can tell other computers to execute a procedure with particular arguments, they can send and receive packets of data, or they can send signals that tell other computers to behave a certain way.

Computers in a distributed system can have different roles. A computer's role depends on the goal of the
system and the computer's own hardware and software properties. There are two predominant ways
of organizing computers in a distributed system. The first is the client-server architecture, and the
second is the peer-to-peer architecture.
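
The message passing described above, where one computer asks another to execute a procedure with particular arguments, can be sketched with Python's standard xmlrpc library (chosen here only for illustration); one machine plays the server role and another the client role:

# Sketch of the client-server architecture: the server waits for messages
# that ask it to run a procedure; the client sends such messages over the network.
from xmlrpc.server import SimpleXMLRPCServer

def add(a, b):
    return a + b

server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True)
server.register_function(add, "add")
server.serve_forever()  # the server computer now waits for incoming messages

# On a different computer (the client), the same goal is reached by messaging:
#   from xmlrpc.client import ServerProxy
#   proxy = ServerProxy("http://localhost:8000")
#   print(proxy.add(2, 3))  # the request travels over the network; 5 comes back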

Utility Computing:

Utility computing is the process of providing computing service through an on-demand, pay-per-use
billing method. Utility computing is one of the most popular IT service models, primarily because of
the flexibility and economy it provides. This model is based on that used by conventional utilities such
as telephone services, electricity and gas. The principle behind utility computing is simple.

The consumer has access to a virtually unlimited supply of computing solutions over the Internet or a virtual private network, which can be sourced and used whenever required. Management and delivery of the back-end infrastructure and computing resources are governed by the provider. Utility computing solutions can include virtual servers, virtual storage, virtual software, backup and most IT solutions. Cloud computing, grid computing and managed IT services are based on the concept of utility computing.

Programming Model:
A programming model refers to a style of programming where execution is invoked by making what appear to be library calls. For example, the C programming language has no execution model for thread behavior, but thread behavior can be invoked from C syntax by making what appears to be a call to a normal C library.
What distinguishes a programming model from a normal library is that the behavior of the call cannot
be understood in terms of the language the program is written in.
Programming models are typically focused on achieving increased developer productivity, performance,
and portability to other system designs. The rapidly changing nature of processor architectures and the
complexity of designing an exascale platform provide significant challenges for these goals. Several other
factors are likely to impact the design of future programming models. In particular, the representation and
management of increasing levels of parallelism, concurrency and memory hierarchies, combined with the ability to maintain a progressive level of interoperability with today's applications, are of significant concern.
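
The same distinction can be illustrated in Python, used here only as an analogue of the C example above: the core language says nothing about threads, yet plain calls into the standard threading library change how the program executes:

# Thread behavior is invoked through what look like ordinary library calls;
# the language syntax alone does not explain the concurrent execution that follows.
import threading

def worker(name):
    print(f"{name} running in its own thread")

threads = [threading.Thread(target=worker, args=(f"task-{i}",)) for i in range(3)]
for t in threads:
    t.start()  # an ordinary-looking method call that starts concurrent execution
for t in threads:
    t.join()   # wait for all threads to finish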

Storage Technologies:
Computer data storage, often called storage or memory, is a technology consisting of computer
components and recording media that are used to retain digital data. The central processing unit (CPU)
of a computer is what manipulates data by performing computations.
There are basically two types of storage: static or non-volatile storage, and dynamic or volatile storage. There are basically three kinds of data storage technologies:
1. Magnetic Storage
2. Optical Storage
3. Solid State Storage

Virtualization:
In computing, virtualization refers to the act of creating a virtual (rather than actual) version of something, including virtual computer hardware platforms, storage devices, and computer network resources.
We can define virtualization as the art and science of making the function of an object or resource
simulated or emulated in software identical to that of the corresponding physically realized object. In
other words, we use an abstraction to make software look and behave like hardware, with
corresponding benefits in flexibility, cost, scalability, reliability, and often overall capability and
performance, and in a broad range of applications. Virtualization, then, makes “real” that which is not,
applying the flexibility and convenience of software-based capabilities and services as a transparent
substitute for the same realized in hardware.

A virtual machine is nothing but a data file on a physical computer that can be moved and copied to
another computer, just like a normal data file. The computers in the virtual environment use two types
of file structures: one defining the hardware and the other defining the hard drive. The virtualization
software, or the hypervisor, offers caching technology that can be used to cache
changes to the virtual hardware or the virtual hard disk for writing at a later time. This technology
enables a user to discard the changes done to the operating system, allowing it to boot from a known
state.
