Bangladesh University of Professionals: Cloud Computing
Assignment on
CLOUD COMPUTING
Assignment 1

Submitted By:
TASFIA BINTY SERAJ TANIA
Roll: 1904022

Submitted To:
Dr. Khondokar Habibul Kabir
Head, Energy and Environmental Centre (EEC)
Associate Professor, EEE
Connectivity: A generic term for connecting devices to each other in order to transfer data back and
forth. It often refers to network connections, which encompass bridges, routers, switches, and gateways,
as well as backbone networks. It may also refer to connecting a home or office to the Internet, or
connecting a digital camera to a computer or printer.
Interactivity: Interactive designs are complete experiences that engage users with web pages as they
move through the information displayed on a website. This is what makes interactive sites stand out:
interactivity woven creatively into website content makes them successful in capturing user attention.
Reliability: The quality of being able to be trusted or relied upon because of working or behaving well.
The system should continue to work correctly (performing the correct function at the desired level of
performance) even in the face of adversity (hardware or software faults, and even human error).
Performance: How well a person, machine, or system carries out a piece of work or an activity, ideally
seamlessly.
Pay-as-you-go: Pay As You Go (PAYG) is a utility computing billing method that is implemented in cloud
computing and geared toward organizations and end users. A PAYG user is billed for actual, rather
than procured (pre-provisioned), computing resources. The PAYG mechanism is derived from utility
computing.
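As a minimal sketch of the billing arithmetic behind PAYG, the following computes a bill from metered
usage; the resource names and unit rates are hypothetical examples, not any real provider's price list.

# PAYG billing sketch: the customer is charged only for metered usage,
# not for capacity provisioned up front. Rates below are hypothetical.
RATES = {
    "vm_hours": 0.05,           # $ per VM-hour actually run
    "storage_gb_months": 0.02,  # $ per GB-month actually stored
    "egress_gb": 0.09,          # $ per GB actually transferred out
}

def payg_bill(usage):
    """Sum metered usage times unit rate for each billable resource."""
    return sum(RATES[resource] * amount for resource, amount in usage.items())

# A month in which VMs ran 600 hours in total, 50 GB was stored,
# and 10 GB of traffic left the network:
usage = {"vm_hours": 600, "storage_gb_months": 50, "egress_gb": 10}
print(f"Bill: ${payg_bill(usage):.2f}")  # Bill: $31.90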
Manage Large Amounts of Data: Big data consists of very large sets of data, produced by people using
the Internet, that can only be stored, understood, and used with the help of special tools and methods.
Data scientists, engineers, and analysts work to ensure that an organization takes maximum advantage
of big data to support better business decisions, create new sources of competitive advantage, and build
new and sustainable revenue streams.
Efficiency: The state in which a system or machine works well and quickly. Efficiency is important in
keeping costs down, reducing dependence on government subsidies, and freeing resources for investment
in expansion and maintenance. For a cloud service, the parameters measuring efficiency can be defined
in terms of Quality of Service (QoS).
Scalability & Elasticity: - The ability of a business or system to grow larger and the ability to stretch. It is
the ability of a system to grow in its capacity to meet the rising demand for its services offered. System
scalability and elasticity criteria could include the ability to accommodate increasing number of service
when required.
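To make the elasticity criterion concrete, here is a toy autoscaling rule in Python; the thresholds,
capacity figure, and function name are illustrative assumptions, not any specific cloud's API.

import math

def desired_instances(current_instances, requests_per_sec,
                      capacity_per_instance=100,
                      scale_out_at=0.8, scale_in_at=0.3):
    """Toy elasticity rule: grow when utilization is high, shrink when low.
    Real autoscalers also smooth the signal over time to avoid flapping."""
    utilization = requests_per_sec / (current_instances * capacity_per_instance)
    if utilization > scale_out_at or utilization < scale_in_at:
        # Resize so each instance sits near the scale-out threshold.
        return max(1, math.ceil(requests_per_sec /
                                (capacity_per_instance * scale_out_at)))
    return current_instances  # within the comfort band: do nothing

print(desired_instances(current_instances=4, requests_per_sec=900))  # 12 (scale out)
print(desired_instances(current_instances=4, requests_per_sec=60))   # 1 (scale in)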
Technologies That Combine to Form Cloud Computing
Internet:
A means of connecting a computer to any other computer anywhere in the world via dedicated routers
and servers. The origins of the Internet date back to research commissioned by the federal government
of the United States in the 1960s to build robust, fault-tolerant communication with computer
networks. The primary precursor network, the ARPANET, initially served as a backbone for
interconnection of regional academic and military networks in the 1980s.
According to Internet Live Stats, as of August 12, 2016 there were an estimated 3,432,809,100 Internet
users worldwide, representing nearly 40% of the world's population. The
largest number of Internet users by country is China, followed by the United States and India. In
September 2014, the total number of websites with a unique hostname online exceeded 1 billion. This
is an increase from a single website (info.cern.ch) in 1991. The worldwide total of one billion Internet
users was first reached in 2005.
Web 2.0:
The major difference between Web 1.0 and Web 2.0 is that Web 2.0 websites enable users to create,
share, collaborate on, and communicate their work with others, without any need for web design or
publishing skills. These capabilities were not present in the Web 1.0 environment. Nowadays, the way
web users get information has changed drastically: users seek out content they are specifically
interested in, often using Web 2.0 tools.
Fault Tolerance:
Fault tolerance is the way in which an operating system (OS) responds to a hardware or software
failure. The term essentially refers to a system’s ability to allow for failures or malfunctions, and this
ability may be provided by software, hardware or a combination of both. To handle faults gracefully,
some computer systems have two or more duplicate systems.
A fault-tolerant design enables a system to continue its intended operation, possibly at a reduced
level, rather than failing completely, when some part of the system fails. The term is most commonly
used to describe computer systems designed to remain more or less fully operational, with perhaps
a reduction in throughput or an increase in response time, in the event of a partial failure. That is,
the system as a whole is not stopped due to problems either in the hardware or the software.
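A minimal sketch of the "two or more duplicate systems" idea in Python: the caller only sees a failure
if every replica fails. The replica functions below are stand-ins for real redundant servers, not a real API.

def primary(request):
    raise ConnectionError("primary is down")  # simulate a hardware fault

def backup(request):
    return f"handled '{request}' on backup"

def fault_tolerant_call(request, replicas):
    """Try each duplicate system in turn, degrading gracefully."""
    errors = []
    for replica in replicas:
        try:
            return replica(request)  # first healthy replica wins
        except ConnectionError as exc:
            errors.append(exc)       # note the fault, try the next replica
    raise RuntimeError(f"all {len(replicas)} replicas failed: {errors}")

print(fault_tolerant_call("GET /index", [primary, backup]))
# handled 'GET /index' on backup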
Distributed Computing:
Computers in a distributed system can have different roles. A computer's role depends on the goal of the
system and the computer's own hardware and software properties. There are two predominant ways
of organizing computers in a distributed system. The first is the client-server architecture, and the
second is the peer-to-peer architecture.
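As a minimal illustration of the client-server architecture, here is a toy echo server and client using
Python's standard socket module; the loopback address and port are arbitrary choices for the demo.

import socket
import threading
import time

HOST, PORT = "127.0.0.1", 50007  # arbitrary local endpoint for the demo

def server():
    # The server role: wait for a request, then serve a response.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen()
        conn, _addr = srv.accept()
        with conn:
            data = conn.recv(1024)
            conn.sendall(b"echo: " + data)

threading.Thread(target=server, daemon=True).start()
time.sleep(0.2)  # give the server a moment to start listening

# The client role: send a request and read the reply.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))
    cli.sendall(b"hello")
    print(cli.recv(1024).decode())  # echo: hello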
Utility Computing:
Utility computing is the process of providing computing service through an on-demand, pay-per-use
billing method. Utility computing is one of the most popular IT service models, primarily because of
the flexibility and economy it provides. This model is based on that used by conventional utilities such
as telephone services, electricity and gas. The principle behind utility computing is simple.
The consumer has access to a virtually unlimited supply of computing solutions over the Internet or a
virtual private network, which can be sourced and used whenever it is required. The management and
delivery of the back-end infrastructure and computing resources are governed by the provider.
Utility computing solutions can include virtual servers, virtual storage, virtual software, backup and
most IT solutions. Cloud computing, grid computing and managed IT services are based on the concept
of utility computing.
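To show the "sourced and used whenever it is required" idea in code, here is a sketch of a hypothetical
provider that hands out servers on demand and meters how long each one is held; the class, method
names, and rate are invented for illustration.

import time

class UtilityProvider:
    """Hypothetical provider: resources on demand, metered per second held."""

    RATE_PER_SECOND = 0.0001  # illustrative price, not a real tariff

    def __init__(self):
        self._leases = {}

    def acquire_server(self, name):
        self._leases[name] = time.monotonic()  # metering starts on demand

    def release_server(self, name):
        elapsed = time.monotonic() - self._leases.pop(name)
        return elapsed * self.RATE_PER_SECOND  # pay only for time held

provider = UtilityProvider()
provider.acquire_server("batch-job-1")
time.sleep(1.0)  # ... the consumer uses the resource ...
print(f"charge: ${provider.release_server('batch-job-1'):.6f}")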
Programming Model:
A programming model refers to a style of programming where execution is invoked by making what
appear to be library calls. For example, the C programming language has no execution model for
thread behavior, but thread behavior can be invoked from C syntax by making what appears to be a
call to a normal C library.
What distinguishes a programming model from a normal library is that the behavior of the call cannot
be understood in terms of the language the program is written in.
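The text's example is C and its thread libraries; the same idea can be sketched compactly with Python's
threading module, where concurrent execution is likewise invoked through what looks like an ordinary
library call even though the language's syntax says nothing about threads.

# Plain-looking library calls below invoke a whole execution model
# (concurrent threads) that the host language's syntax does not describe.
import threading

def worker(name):
    print(f"{name} running in thread {threading.current_thread().name}")

threads = [threading.Thread(target=worker, args=(f"task-{i}",))
           for i in range(3)]
for t in threads:
    t.start()  # an ordinary-looking call that spawns concurrent execution
for t in threads:
    t.join()   # another ordinary call that synchronizes the threads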
Programming models are typically focused on achieving increased developer productivity, performance,
and portability to other system designs. The rapidly changing nature of processor architectures and the
complexity of designing an exascale platform provide significant challenges for these goals. Several other
factors are likely to impact the design of future programming models. In particular, the representation and
management of increasing levels of parallelism, concurrency
and memory hierarchies, combined with the ability to maintain a progressive level of interoperability
with today's applications, are of significant concern.
Storage Technologies:
Computer data storage, often called storage or memory, is a technology consisting of computer
components and recording media that are used to retain digital data. The central processing unit (CPU)
of a computer is what manipulates data by performing computations.
There are basically two types of storage: static (non-volatile) storage and dynamic (volatile) storage.
There are three main kinds of data storage technologies:
1. Magnetic Storage
2. Optical Storage
3. Flash (Solid-State) Storage
Virtualization:
In computing, virtualization refers to the act of creating a virtual (rather than actual) version of
something, including virtual computer hardware platforms, storage devices, and computer network
resources.
We can define virtualization as the art and science of making the function of an object or resource
simulated or emulated in software identical to that of the corresponding physically realized object. In
other words, we use an abstraction to make software look and behave like hardware, with
corresponding benefits in flexibility, cost, scalability, reliability, and often overall capability and
performance, and in a broad range of applications. Virtualization, then, makes “real” that which is not,
applying the flexibility and convenience of software-based capabilities and services as a transparent
substitute for the same realized in hardware.
A virtual machine is nothing but a data file on a physical computer that can be moved and copied to
another computer, just like a normal data file. The computers in the virtual environment use two types
of file structures: one defining the hardware and the other defining the hard drive. The virtualization
software, or the hypervisor, offers caching technology that can be used to cache
changes to the virtual hardware or the virtual hard disk for writing at a later time. This technology
enables a user to discard the changes done to the operating system, allowing it to boot from a known
state.
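As one concrete way to exercise the "boot from a known state" capability described above, here is a
sketch using the libvirt Python bindings; it assumes a libvirt-managed hypervisor and an existing VM
named testvm, and the snapshot name is arbitrary.

# Snapshot-and-revert sketch with the libvirt Python bindings
# (pip install libvirt-python). The hypervisor URI and the VM name
# "testvm" are assumptions for this example.
import libvirt

conn = libvirt.open("qemu:///system")  # connect to the local hypervisor
dom = conn.lookupByName("testvm")      # the virtual machine

# Record a known-good state of the virtual hardware and disk.
snapshot_xml = "<domainsnapshot><name>clean-state</name></domainsnapshot>"
snap = dom.snapshotCreateXML(snapshot_xml, 0)

# ... changes are made inside the guest OS ...

# Discard those changes: the VM returns to the recorded state.
dom.revertToSnapshot(snap, 0)
conn.close()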