Unit 1 ECM Notes
UNIT I
Cloud computing
Definition:
The cloud is a large group of interconnected computers. These computers can be personal
computers or network servers; they can be public or private.
Cloud computing is a technology that uses the internet and central remote servers to maintain data
and applications.
E.g., Yahoo Mail, Gmail, etc.
Cloud Carrier
Cloud carriers provide the connectivity and transport of cloud services from cloud providers to
cloud consumers.
A cloud provider participates in and arranges for two unique service level agreements (SLAs), one
with a cloud carrier (e.g. SLA2) and one with a cloud consumer (e.g. SLA1).
A cloud provider may request the cloud carrier to provide dedicated and encrypted connections to
ensure the cloud services are consumed at a consistent level according to the contractual
obligations with the cloud consumers.
In this case, the provider may specify its requirements on capability, flexibility and functionality in
SLA2 in order to provide essential requirements in SLA1.
For example, data stored in the cloud is replicated on other computers in the cloud. If one
computer goes offline, the cloud’s programming automatically redistributes that computer’s data to
a new computer in the cloud.
Examples of cloud computing applications: Google Docs & Spreadsheets, Google Calendar, Gmail,
Picasa.
PROS AND CONS OF CLOUD COMPUTING
Advantages
● Lower-Cost Computers for Users and Lower IT Infrastructure Costs
● Improved Performance and Increased Computing Power
● Instant Software Updates and Lower Software Costs
● Unlimited Storage Capacity
● Easier Group Collaboration and Device Independence
Disadvantages
● Requires a Constant Internet Connection
● Doesn’t Work Well with Low-Speed Connections
● Can Be Slow
● Features Might Be Limited
● Stored Data Might Not Be Secure
● Problems Arise If Data Loss Occurs
HISTORICAL DEVELOPMENTS
In the client/server model, all the software applications, data, and control resided on huge
mainframe computers, known as servers.
If a user wanted to access specific data or run a program, he had to connect to the mainframe,
gain appropriate access, and then do his business.
Users connected to the server via a computer terminal, called a workstation or client.
Access was not immediate, nor could two users access the same data at the same time; when
multiple people shared a single computer, everyone had to wait their turn.
So the client/server model, while providing similar centralized storage, differed from cloud
computing in that it did not have a user-centric focus. It was not a user-enabling environment.
P2P computing defines a network architecture in which each computer has equivalent capabilities
and responsibilities.
In the P2P environment, every computer is a client and a server; there are no masters and slaves.
There is no need for a central server, because any computer can function in that capacity when
called on to do so.
P2P was a decentralizing concept. Control is decentralized, with all computers functioning as
equals. Content is also dispersed among the various peer computers.
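To make this symmetry concrete, here is a hypothetical sketch in Python in which every node runs the same code: a listening thread answers requests (the server role) and a function queries other peers (the client role). The port, message format, and echo behaviour are placeholder assumptions, not a real P2P protocol.

import socket
import threading
import time

def serve(port):
    # Server role: answer any peer that connects.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("127.0.0.1", port))
        srv.listen()
        while True:
            conn, _ = srv.accept()
            with conn:
                conn.sendall(b"peer echo: " + conn.recv(1024))

def request(port, message):
    # Client role: ask another peer for data.
    with socket.create_connection(("127.0.0.1", port)) as cli:
        cli.sendall(message.encode())
        return cli.recv(1024).decode()

# Every node runs the same code: it starts the server role in the
# background and may act as a client toward any other peer.
threading.Thread(target=serve, args=(9001,), daemon=True).start()
time.sleep(0.2)   # give the listener a moment to start
print(request(9001, "hello from an equal peer"))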
Distributed Computing: Providing More Computing Power
Distributed computing taps idle PCs across a network or the Internet to provide computing power
for large, processor-intensive projects.
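A minimal sketch of the idea, with local worker processes standing in for idle PCs on a network: a processor-intensive job (counting primes, an illustrative choice) is split into chunks, farmed out to workers, and the partial results combined. The chunk sizes are assumptions made for the example.

from multiprocessing import Pool

def count_primes(bounds):
    # One CPU-heavy chunk of the overall job: count primes in [lo, hi).
    lo, hi = bounds
    def is_prime(n):
        return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))
    return sum(1 for n in range(lo, hi) if is_prime(n))

if __name__ == "__main__":
    # Split the full range into chunks, one per worker "machine".
    chunks = [(2, 25_000), (25_000, 50_000), (50_000, 75_000), (75_000, 100_000)]
    with Pool(processes=4) as pool:
        partials = pool.map(count_primes, chunks)   # run chunks in parallel
    print("primes below 100,000:", sum(partials))   # combine partial results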
Enabling multiple users to work simultaneously on the same computer-based project is called
collaborative computing.
The goal was to enable multiple users to collaborate on group projects online, in real time.
To collaborate on any project, users must first be able to talk to one another.
Most collaboration systems offer a complete range of audio/video options for full-featured,
multiple-user video conferencing.
In addition, users must be able to share files and have multiple users work on the same document
simultaneously.
Real-time white boarding is also common, especially in corporate and education environments.
With the growth of the Internet, there was no need to limit group collaboration to a single
enterprise’s network environment.
Users from multiple locations within a corporation, and from multiple organizations, desired to
collaborate on projects that crossed company and geographic boundaries.
To do this, projects had to be housed in the “cloud” of the Internet, and accessed from any
Internet-enabled location.
DISTRIBUTED SYSTEM
A distributed system contains multiple nodes that are physically separate but linked together using
the network.
All the nodes in this system communicate with each other and handle processes in tandem.
Each of these nodes contains a small part of the distributed operating system software.
The nodes in a distributed system can be arranged as client/server systems or peer-to-peer
systems. Details about these are as follows:
Client/Server Systems
In client/server systems, the client requests a resource and the server provides that resource.
A server may serve multiple clients at the same time while a client is in contact with only one
server.
Both the client and server usually communicate via a computer network and so they are a part of
distributed systems.
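A minimal sketch of this model, assuming Python's standard socketserver module: one threaded server answers many clients concurrently, while each client talks only to that one server. The port and echo behaviour are placeholder assumptions.

import socket
import socketserver
import threading

class EchoHandler(socketserver.BaseRequestHandler):
    def handle(self):
        # Serve the resource the client asked for (here: echo it back).
        data = self.request.recv(1024)
        self.request.sendall(b"server reply: " + data)

def client(message, host="127.0.0.1", port=9090):
    # A client contacts only this one server.
    with socket.create_connection((host, port)) as conn:
        conn.sendall(message.encode())
        return conn.recv(1024).decode()

if __name__ == "__main__":
    # ThreadingTCPServer spawns a thread per connection, so the one
    # server can serve several clients at the same time.
    server = socketserver.ThreadingTCPServer(("127.0.0.1", 9090), EchoHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    print(client("hello"))
    server.shutdown()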
VIRTUALIZATION
Hypervisor
The hypervisor is a firmware or low-level program that acts as a Virtual Machine Manager.
Google App Engine: Launched in 2008, it is a Platform as a Service (PaaS) offering: a scalable
runtime environment mostly devoted to executing web applications.
App Engine manages the underlying infrastructure and provides a development platform to create
apps, leveraging Google's infrastructure as a hosting platform.
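A rough sketch of what this looks like in practice, assuming the Python standard-environment runtime and the Flask framework (both assumptions, not the only options): App Engine serves the WSGI object named app from main.py, with an app.yaml beside it declaring the runtime, and the app is deployed with gcloud app deploy.

# main.py -- minimal App Engine (standard environment) sketch.
# Assumes a one-line app.yaml beside it ("runtime: python39"); the
# runtime version is an assumption. Deployed via `gcloud app deploy`.
from flask import Flask

app = Flask(__name__)   # App Engine's default entrypoint serves main:app

@app.route("/")
def hello():
    # Google's infrastructure handles hosting, scaling, and load
    # balancing; the developer supplies only the application code.
    return "Hello from App Engine"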
Microsoft Azure: It is also a scalable runtime environment for web and distributed applications.
It provides additional services such as support for storage (relational data and blobs),
networking, caching, content delivery, and others.
A cloud application, or cloud app, is a software program in which cloud-based and local components
work together. This model relies on remote servers for processing logic that is accessed through a
web browser with a continual internet connection.
Cloud application servers typically are located in a remote data center operated by a third-
party cloud services infrastructure provider. Cloud-based application tasks may encompass email,
file storage and sharing, order entry, inventory management, word processing, customer relationship
management (CRM), data collection, or financial accounting features.
Fast response to business needs. Cloud applications can be updated, tested and deployed quickly,
providing enterprises with fast time to market and agility. This speed can lead to culture shifts in
business operations.
Simplified operation. Infrastructure management can be outsourced to third-party cloud providers.
Instant scalability. As demand rises or falls, available capacity can be adjusted.
API use. Third-party data sources and storage services can be accessed with an application
programming interface (API). Cloud applications can be kept smaller by using APIs to hand data to
applications or API-based back-end services for processing or analytics computations, with the
results handed back to the cloud application. Vetted APIs impose passive consistency that can speed
development and yield predictable results. (A minimal sketch of this pattern appears after this
list of benefits.)
Gradual adoption. Refactoring legacy, on-premises applications to a cloud architecture in steps
allows components to be implemented on a gradual basis.
Reduced costs. The size and scale of data centers run by major cloud infrastructure and service
providers, along with competition among providers, has led to lower prices. Cloud-based
applications can be less expensive to operate and maintain than equivalent on-premises installations.
Improved data sharing and security. Data stored on cloud services is instantly available to
authorized users. Due to their massive scale, cloud providers can hire world-class security experts
and implement infrastructure security measures that typically only large enterprises can obtain.
Centralized data managed by IT operations personnel is more easily backed up on a regular
schedule and restored should disaster recovery become necessary.
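As referenced under "API use" above, here is a minimal, hypothetical sketch of a small cloud app handing data to an API-based back-end service for processing. The endpoint URL, request payload, and response shape are placeholder assumptions; Python's standard urllib is used so the snippet is self-contained.

import json
import urllib.request

def summarize(records):
    # The cloud app stays small: it only sends inputs to a back-end
    # service over HTTP and reads back the computed result.
    payload = json.dumps({"records": records}).encode()
    req = urllib.request.Request(
        "https://analytics.example.com/v1/summarize",  # hypothetical endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Usage (would require a real endpoint):
# totals = summarize([{"item": "widget", "qty": 3}])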
Microsoft Azure
● Microsoft Azure is a Cloud operating system and a platform in which users can develop the
applications in the cloud.
● Azure provides a set of services that support storage, networking, caching, content delivery,
and others.
Hadoop
● Apache Hadoop is an open source framework that is appropriate for processing large data
sets on commodity hardware.
● Hadoop is an implementation of MapReduce, an application programming model developed by
Google.
● This model provides two fundamental operations for data processing: map and reduce.
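To make the two operations concrete, here is a minimal single-machine sketch of the classic word-count example in plain Python. This is an illustration only: on Hadoop, the map and reduce functions run distributed across many nodes, with the framework handling the grouping step in between.

from itertools import groupby
from operator import itemgetter

def map_phase(document):
    # map: emit an intermediate (key, value) pair for every word
    for word in document.split():
        yield (word.lower(), 1)

def reduce_phase(pairs):
    # group the intermediate pairs by key (the "shuffle" step),
    # then reduce: combine all the values recorded for each key
    for word, group in groupby(sorted(pairs), key=itemgetter(0)):
        yield (word, sum(count for _, count in group))

docs = ["the cloud is a large group of computers",
        "the cloud is elastic"]
intermediate = [pair for doc in docs for pair in map_phase(doc)]
print(dict(reduce_phase(intermediate)))
# e.g. {'a': 1, 'cloud': 2, 'computers': 1, 'elastic': 1, ...}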