Cloud Computing Notes

This document discusses cloud computing over 4 units: Unit 1 covers the origins, basic concepts, goals, risks, roles, characteristics, delivery models, and deployment models of cloud computing. It also discusses enabling technologies like virtualization. Unit 2 discusses common standards and emerging programming environments for cloud applications. It also covers moving applications to the cloud using platforms from Microsoft, Google, and Amazon. Unit 3 addresses cloud security mechanisms, issues, and trends in supporting ubiquitous computing. Unit 4 explores enabling technologies for the Internet of Things and innovative IoT applications. It also discusses how the cloud will change operating systems and future cloud-based technologies and applications.


Cloud Computing
- Sohail Khan

Unit 1:
Origins and Influences, Basic Concepts and Terminology, Goals and Benefits, Risks and
Challenges, Roles and Boundaries, Cloud Characteristics, Cloud Delivery Models, Cloud
Deployment Models, Federated Cloud/Intercloud, Types of Clouds. Cloud-Enabling Technology:
Broadband Networks and Internet Architecture, Data Center Technology, Virtualization
Technology, Web Technology, Multitenant Technology, Service Technology.
Implementation Levels of Virtualization, Virtualization Structures/Tools and Mechanisms, Types
of Hypervisors, Virtualization of CPU, Memory, and I/O Devices, Virtual Clusters and Resource
Management, Virtualization for Data-Center Automation.

Unit 2:
Common Standards: The Open Cloud Consortium, Open Virtualization Format, Standards for
Application Developers: Browsers (Ajax), Data (XML, JSON), Solution Stacks (LAMP and
LAPP), Syndication (Atom, Atom Publishing Protocol, and RSS), Standards for Security.
Features of Cloud and Grid Platforms, Programming Support of Google App Engine,
Programming on Amazon AWS and Microsoft Azure, Emerging Cloud Software Environments,
Understanding Core OpenStack Ecosystem. Applications: Moving application to cloud, Microsoft
Cloud Services, Google Cloud Applications, Amazon Cloud Services, Cloud Applications (Social
Networking, E-mail, Office Services, Google Apps, Customer Relationship Management).

Unit 3:
Basic Terms and Concepts, Threat Agents, Cloud Security Threats and Attacks, Additional
Considerations. Cloud Security Mechanisms: Encryption, Hashing, Digital Signature, Public Key
Infrastructure (PKI), Identity and Access Management (IAM), Single Sign-On (SSO), Hardened
Virtual Server Images.
Cloud Issues: Stability, Partner Quality, Longevity, Business Continuity, Service-Level
Agreements, Agreeing on the Service of Clouds, Solving Problems, Quality of Service, Regulatory
Issues and Accountability. Cloud Trends in Supporting Ubiquitous Computing, Performance of
Distributed Systems and the Cloud.
Unit 4:
Enabling Technologies for the Internet of Things (RFID, Sensor Networks and ZigBee
Technology, GPS), Innovative Applications of the Internet of Things (Smart Buildings and Smart
Power Grid, Retailing and Supply-Chain Management, Cyber-Physical System), Online Social and
Professional Networking.
How the Cloud Will Change Operating Systems, Location-Aware Applications, Intelligent Fabrics,
Paints, and More, The Future of Cloud TV, Future of Cloud-Based Smart Devices, Faster Time to
Market for Software Applications, Home-Based Cloud Computing, Mobile Cloud, Autonomic
Cloud Engine, Multimedia Cloud, Energy Aware Cloud Computing, Jungle Computing, Docker at
a Glance: Process Simplification, Broad Support and Adoption, Architecture, Getting the Most
from Docker, The Docker Workflow.
Unit 1
Origins and Influences:

The last decades have reinforced the idea that information processing can be done more efficiently
centrally, on large farms of computing and storage systems accessible via the Internet. When computing
resources in distant data centers are used rather than local computing systems, we talk about
network-centric computing and network-centric content. Advancements in networking and other areas
are responsible for the acceptance of the two new computing models and led to the grid computing
movement in the early 1990s and, since 2005, to utility computing and cloud computing.
In utility computing the hardware and software resources are concentrated in large data centers and
users can pay as they consume computing, storage, and communication resources. Utility computing
often requires a cloud-like infrastructure, but its focus is on the business model for providing the
computing services. Cloud computing is a path to utility computing embraced by major IT companies
such as Amazon, Apple, Google, HP, IBM, Microsoft, Oracle, and others.
Cloud computing delivery models, deployment models, defining attributes, resources, and organization
of the infrastructure discussed in this chapter are summarized in Figure 1.1. There are three cloud
delivery models: Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), and Infrastructure-as-a-
Service (IaaS), deployed as public, private, community, and hybrid clouds.

The defining attributes of the new philosophy for delivering computing services are as follows:
• Cloud computing uses Internet technologies to offer elastic services. The term elastic computing
refers to the ability to dynamically acquire computing resources and support a variable workload. A
cloud service provider maintains a massive infrastructure to support elastic services.
• The resources used for these services can be metered and the users can be charged only for the
resources they use.
• Maintenance and security are ensured by service providers.
• Economy of scale allows service providers to operate more efficiently due to specialization and
centralization.
• Cloud computing is cost-effective due to resource multiplexing; lower costs for the service provider
are passed on to the cloud users.
• The application data is stored closer to the site where it is used in a device- and location-independent
manner; potentially, this data storage strategy increases reliability and security and, at the same time,
it lowers communication costs.

Cloud computing is a technical and social reality and an emerging technology. At this time, one can only speculate how the infrastructure for this new paradigm will evolve and what applications will migrate to it. The economic, social, ethical, and legal implications of this shift in technology, in which users rely on services provided by large data centers and store private data and software on systems they do not control, are likely to be significant. Data is increasingly stored and processed in the cloud, where the applications run; this insight leads us to believe that several new classes of cloud computing applications could emerge in the next few years [25].
As always, a good idea has generated a high level of excitement that has translated into a flurry of publications – some of scholarly depth, others with little merit or even bursting with misinformation. In this book we attempt to sift through the large volume of information and dissect the main ideas related to cloud computing. We first discuss applications of cloud computing and then analyze the infrastructure for the technology.
Several decades of research in parallel and distributed computing have paved the way for cloud
computing. Through the years we have discovered the challenges posed by the implementation, as well
as the algorithmic level, and the ways to address some of them and avoid the others. Thus, it is
important
to look back at the lessons we learned from this experience through the years; for this reason we start
our discussion with an overview of parallel computing and distributed systems.

Basic Concepts and Terminology:

With the rise in adoption of computing all over the world, people have started using desktops, laptops, and tablets in their day-to-day life for various types of needs. Alongside the increase in the use of computers, the internet has spread rapidly into nearly every person's life. As desktops and the internet go hand in hand, users can perform many functions for whatever needs they might have. These two technologies give birth to 'data', which is created through multiple functions and applications. Large volumes of data are generated as files in databases grow continuously. The main concern for any individual then becomes data storage. As data plays an important role in every field, it is essential to store it in the most appropriate manner so that it can be referred to whenever needed. Individuals and organizations generate data on a daily basis, and the question that arises is how this data can be kept safe. One might suggest options such as hard drives or other storage devices, but rapidly evolving technology offers a better alternative.
'Cloud Computing' is a type of computing service that provides functions such as storage, databases, servers, networking, and more. Through the internet, an individual can connect to the cloud and make use of all these services. Companies that offer cloud services are called Cloud Service Providers, and they charge for cloud-related services based on the type of service selected. Organizations traditionally spend a fortune to maintain their data, whereas a cloud service charges based on the resources actually consumed. Not all organizations can afford to spend heavily on IT infrastructure, hardware, and databases, so cloud computing is an ideal, lower-cost choice for them. Earlier, companies weren't so sure about this technology, but as the years have gone by they have started seeing the real benefits of cloud services and how profitable they can be for their business. Organizations have different motives and uses for the data they generate daily, and many are turning to cloud services.
There are different types of cloud models, offered to customers based on their requirements. Cloud service providers have mastered this technology and have created different models for different types of business needs, each of which is cost-effective and accommodates various application and storage needs. Let's learn about the main types of cloud models:
1. Private Cloud: In a private cloud, the computing resources are deployed for one single customer by the cloud service provider. The resources are owned, governed, and operated by that customer.
2. Community Cloud: This type of cloud service is offered to an entire community or group of businesses.
3. Public Cloud: Here the cloud service is open to the general public and operates in a virtualized environment, offering services such as applications and storage.
4. Hybrid Cloud: A hybrid cloud is a mix of public and private clouds, where the user can switch between the two to manage loads or costs.

The main uses of cloud services are:

1. Storage, data backup and recovery
2. Creating new applications
3. Hosting websites
4. Streaming services
5. Data analysis

The benefits of cloud computing include:

1. No hardware maintenance required on the client's side
2. Low IT costs for clients
3. Improved and faster performance
4. Constant software updates
5. Higher data security
6. Great performance and scalability

We have discussed the many functions that revolve around cloud computing; we should also look at the advantages it provides.

1. Opportunity to grow
With cloud computing, businesses no longer need to spend on building data centers or managing them with dozens of IT personnel, which can take up almost half of their IT budget. Cloud services allow organizations to reduce their data center footprint altogether by shifting to the cloud. A lot of money can be saved through this transition, and organizations can focus on their core business functions.
2. Total flexibility in costing

Cloud models offer a pay-per-use option, charging only for the resources actually consumed by a customer rather than for the overall resources allotted. Traditional computing charges for the resources allotted, not only for those consumed. The cloud is therefore very cost-effective, as it does not charge any extra amount for resources that were not used.
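To make the pay-per-use idea concrete, here is a minimal, hedged Python sketch comparing a metered bill with a flat allotment-based charge. The hourly rate and usage hours are made-up example values, not real provider prices.

```python
# Illustrative comparison of pay-per-use billing vs. paying for allotted capacity.
# All rates and hours below are made-up example values.

HOURLY_RATE = 0.10          # assumed price per VM-hour (USD)
ALLOTTED_HOURS = 24 * 30    # a VM reserved for the whole month
CONSUMED_HOURS = 8 * 22     # actually used 8 hours/day on 22 working days

pay_per_use_bill = CONSUMED_HOURS * HOURLY_RATE
allotment_bill = ALLOTTED_HOURS * HOURLY_RATE

print(f"Pay-per-use bill : ${pay_per_use_bill:.2f}")
print(f"Allotment bill   : ${allotment_bill:.2f}")
print(f"Savings          : ${allotment_bill - pay_per_use_bill:.2f}")
```

Under these assumed numbers the metered bill is $17.60 versus $72.00 for the full allotment, which is the kind of gap that makes pay-per-use attractive.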

3. Always available

Cloud services are up at almost all times because the servers are continuously monitored and managed. Cloud service providers typically promise an uptime guarantee of around 99% throughout the year, so that the business rarely stops. With an internet connection to the cloud service, a customer can access every type of service with minimal downtime.
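As a rough worked example (the 99% figure echoes the guarantee quoted above rather than any specific provider's SLA), the sketch below converts an uptime percentage into the maximum downtime it permits per year.

```python
# Convert an uptime guarantee into the maximum downtime it allows per year.
HOURS_PER_YEAR = 365 * 24

def max_downtime_hours(uptime_percent: float) -> float:
    """Return the yearly downtime budget implied by an uptime percentage."""
    return HOURS_PER_YEAR * (1 - uptime_percent / 100)

for uptime in (99.0, 99.9, 99.99):
    print(f"{uptime}% uptime allows about {max_downtime_hours(uptime):.1f} hours of downtime per year")
```

A 99% guarantee still allows roughly 87.6 hours of downtime per year, which is why many providers advertise 99.9% or higher.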

4. Remote functions
The mobility offered by the cloud is one of its best features, because an individual can access applications or storage from any location. An internet connection is, once again, essential to connect to the cloud servers from the current location.

Goals and Benefits:


The common benefits associated with adopting cloud computing are explained in this section.
The following sections make reference to the terms “public cloud” and “private cloud.” These terms are
described in the Cloud Deployment Models section.
• Reduced Investments and Proportional Costs
• Increased Scalability
• Increased Availability and Reliability

Cloud computing operates on a similar principle as web-based email clients, allowing users to access all
of the features and files of the system without having to keep the bulk of that system on their own
computers. In fact, most people already use a variety of cloud computing services without even realising
it. Gmail, Google Drive, TurboTax, and even Facebook and Instagram are all cloud-based applications.
For all of these services, users are sending their personal data to a cloud-hosted server that stores the
information for later access. And as useful as these applications are for personal use, they're even more
valuable for businesses that need to be able to access large amounts of data over a secure, online
network connection.
For example, employees can access customer information via cloud-based CRM software
like Salesforce from their smartphone or tablet at home or while traveling, and can quickly share that
information with other authorised parties anywhere in the world.
Still, there are leaders who remain hesitant about committing to cloud computing solutions for their organisations. So, we'd like to take a few minutes to share 12 business advantages of cloud computing.

1. Cost Savings
2. Security
3. Flexibility
4. Mobility
5. Insight
6. Increased Collaboration
7. Quality Control
8. Disaster Recovery
9. Loss Prevention
10. Automatic Software Updates
11. Competitive Edge
12. Sustainability
Top Challenges and Risks:
#1. Data Security and Privacy
The biggest concern with cloud computing is data security and privacy. As organizations adopt the cloud
on a global scale, the risks have become more grave than ever, with lots of consumer and business data
available for hackers to breach. According to Statista, 64% of respondents in a survey conducted in 2021
said data loss or leakage was their biggest challenge with cloud computing, and 62% cited data privacy as their second biggest challenge. The problem with cloud computing is that the user cannot see where their data is being processed or stored. If data is not handled correctly during cloud management or implementation, risks such as data theft, leaks, breaches, compromised credentials, hacked APIs, authentication breaches, and account hijacking can arise.
#2. Compliance Risks
Compliance rules are getting more stringent due to the increased cyberattacks and data privacy issues.
Regulations such as HIPAA and GDPR require organizations to comply with applicable state or federal rules to maintain data security and privacy for their business and customers. However,
compliance is another big challenge for organizations adopting the cloud. In the same survey by Statista,
compliance is the third most significant challenge for 44% of respondents. The issues arise for anyone
using cloud storage or backup services. When organizations move their data from on-premises to the
cloud, they must comply with the local laws. For example, every healthcare institution in the US must comply with HIPAA. If they fail to do so, they could face penalties that can tarnish their reputation and cost them money and customer trust.
#3. Reduced Visibility and Control
Cloud computing offers the benefit of not having to manage the infrastructure and resources like servers
to keep the systems working. Although it saves time, expenses, and effort, the users end up having
reduced control and visibility into their software, systems, applications, and computing assets. As a
result, organizations find it challenging to verify how efficient the security systems are due to no access
to the data and security tools on the cloud platform. They also can’t implement incident response
because they don’t have complete control over their cloud-based assets. In addition, organizations can’t
have complete insight into their services, data, and users to identify abnormal patterns that can lead to
a breach.
#4. Cloud Migration
Cloud migration means moving your data, services, applications, systems, and other information or
assets from on-premises (servers or desktops) to the cloud. This process enables computing capabilities
to take place on the cloud infrastructure instead of on-premise devices. When an organization wants to
embrace the cloud, it can face many challenges while moving all its legacy or traditional systems to the
cloud. The overall process can consume a lot of time and resources, and organizations often have little experience dealing with expert cloud providers who have been in business for years. Similarly, when they want to migrate from one cloud provider to another, they have to go through the process all over again, without knowing how well the next provider will serve them. They face challenges such as extensive troubleshooting, speed, security, application downtime, complexity, and expense. All of these are troublesome for organizations and their users, and can ultimately lead to a poor user experience and affect the organization in various ways.
#5. Incompatibility
While moving your workload to the cloud from on-premises, incompatibility issues may arise between
the cloud services and the on-premises infrastructure. This is a big challenge that may require organizations to invest in making the two compatible or in creating a new service altogether. Either way, it brings additional trouble and expenditure for organizations.
#6. Improper Access Controls and Management

Improper or inadequate cloud access controls and management can expose an organization to various risks. Cybercriminals leverage web apps, steal credentials, and carry out data breaches. Organizations with a large or distributed workforce are especially prone to access management issues. In addition, organizations can face password fatigue and other problems such as inactive users remaining signed in for long periods, poorly protected credentials, weak passwords, multiple admin accounts, and mismanagement of passwords, certificates, and keys. As a result of poor access controls and management, organizations can be vulnerable to attacks, and their business information and user data can be exposed. Ultimately, this can cause reputation damage and unnecessary expense.
#7. Lack of Expertise
Cloud technologies are rapidly advancing, and more and more services and applications are being
released to cater to different needs. However, it’s also becoming difficult for organizations to find skilled
professionals to maintain the cloud systems. It’s also costly for small and medium-sized businesses to
hire expert cloud professionals. The reason is that the cloud is still a relatively new concept for many organizations, and not everyone on a team will be familiar with cloud technologies. Hence, IT staff must also be trained to use cloud technologies efficiently, which again incurs a high cost and is a burden for organizations with a limited budget: they have to pay for instructors and invest in recruiting and onboarding cloud professionals.
#8. Downtime
Another frustrating aspect of the cloud for many organizations can be downtime due to a poor internet connection. With a consistent, high-speed internet connection, you can make the most of cloud services; without one, you may face repeated downtime, lag, and errors. This not only frustrates users but also reduces their productivity. Organizations with poor internet connectivity are therefore likely to face disruption in their business operations and will not be able to access their data whenever they want, leading to inefficiencies, missed deadlines, and other problems. All of these can create bottlenecks for business operations and lead to reduced sales, revenue, and profit margins.
#9. Insecure APIs
Using application programming interfaces (APIs) in cloud infrastructure enables you to implement better controls for your systems and applications. APIs are built into mobile apps or web applications to allow employees and users to access the systems. However, if the external APIs you use are insecure, they can invite a lot of trouble in terms of security. These issues can provide an entry point for attackers to hack into your confidential data, manipulate services, and do other harm. Insecure APIs can lead to broken authentication, security misconfiguration, broken function-level authorization, data exposure, and mismanagement of resources and assets.

Roles and Boundaries:


The upcoming sections cover introductory topic areas pertaining to the fundamental models used to
categorize and define clouds and their most common service offerings, along with definitions of
organizational roles and the specific set of characteristics that collectively distinguish a cloud.
Organizations and humans can assume different types of pre-defined roles depending on how they relate to and/or interact with a cloud and its hosted IT resources. Each of the upcoming roles participates in and carries out responsibilities in relation to cloud-based activity. For example, the cloud provider is the organization that provides cloud-based IT resources and is responsible for making cloud services available to cloud consumers, as per agreed-upon SLA guarantees. The following sections define these roles and identify their main interactions.
This section covers the following topics:
• Cloud Provider
• Cloud Consumer
• Cloud Service Owner
• Cloud Resource Administrator
• Additional Resources
• Organizational Boundaries
• Trust Boundaries

Characteristics of Cloud Computing:


There are many characteristics of cloud computing; here are a few of them:
1. On-demand self-service: Cloud computing services do not require human administrators; users themselves can provision, monitor, and manage computing resources as needed.
2. Broad network access: Computing services are generally provided over standard networks and to heterogeneous devices.
3. Rapid elasticity: Computing services should have IT resources that can scale out and in quickly, on an as-needed basis. Whenever users require a service it is provided to them, and it is scaled back in as soon as the requirement is over.
4. Resource pooling: IT resources (e.g., networks, servers, storage, applications, and services) are shared across multiple applications and tenants in an uncommitted manner, with multiple clients served from the same physical resources.
5. Measured service: Resource utilization is tracked for each application and tenant, providing both the user and the resource provider with an account of what has been used. This is done for various reasons, such as monitoring, billing, and effective use of resources.
6. Multi-tenancy: Cloud computing providers can support multiple tenants (users or
organizations) on a single set of shared resources.
7. Virtualization: Cloud computing providers use virtualization technology to abstract
underlying hardware resources and present them as logical resources to users.
8. Resilient computing: Cloud computing services are typically designed with redundancy
and fault tolerance in mind, which ensures high availability and reliability.
9. Flexible pricing models: Cloud providers offer a variety of pricing models, including pay-
per-use, subscription-based, and spot pricing, allowing users to choose the option that
best suits their needs.
10. Security: Cloud providers invest heavily in security measures to protect their users’ data
and ensure the privacy of sensitive information.
11. Automation: Cloud computing services are often highly automated, allowing users to
deploy and manage resources with minimal manual intervention.
12. Sustainability: Cloud providers are increasingly focused on sustainable practices, such as
energy-efficient data centers and the use of renewable energy sources, to reduce their
environmental impact.

Cloud Delivery Models:


A cloud delivery model represents a specific, pre-packaged combination of IT resources offered by a
cloud provider. Three common cloud delivery models have become widely established and formalized:

• Infrastructure-as-a-Service (IaaS)
• Platform-as-a-Service (PaaS)
• Software-as-a-Service (SaaS)

These three models are interrelated in how the scope of one can encompass that of another, as
explored in the Combining Cloud Delivery Models section later in this chapter. All three models serve
different purposes based on a company's IT requirements and budget. However, they all provide
significantly more flexibility for businesses compared to onsite hosting. Each of these service models can
be used independently or in conjunction with another, depending on business needs.
Cloud computing is a cornerstone of the IT industry, with big players like Amazon, Microsoft, Google, and others providing cloud service offerings.
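As an illustration of how the IaaS model is consumed in practice, the hedged sketch below requests a virtual server from Amazon EC2 using the boto3 SDK. The region, AMI ID, and instance type are placeholder assumptions, and valid AWS credentials must already be configured for this to run.

```python
# Minimal IaaS example: requesting a virtual server from Amazon EC2 with boto3.
# The region, AMI ID, and instance type below are placeholders, not recommendations.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical machine image ID
    InstanceType="t3.micro",          # small instance size chosen for illustration
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched IaaS virtual server: {instance_id}")
```

The point of the sketch is that with IaaS the consumer receives raw infrastructure (a virtual machine) and remains responsible for everything installed on top of it, whereas PaaS and SaaS move progressively more of that responsibility to the provider.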

Note: Many specialized variations of the three base cloud delivery models have emerged, each comprised of a distinct combination of IT resources. Some examples include:

• Storage-as-a-Service
• Database-as-a-Service
• Security-as-a-Service
• Communication-as-a-Service
• Integration-as-a-Service
• Testing-as-a-Service
• Process-as-a-Service
Note also that a cloud delivery model can be referred to as a cloud service delivery model because each
model is classified as a different type of cloud service offering.
This section covers the following topics:
• Infrastructure-as-a-Service (IaaS)
• Platform-as-a-Service (PaaS)
• Software-as-a-Service (SaaS)
• Comparing Cloud Delivery Models
• Combining Cloud Delivery Models

Cloud Deployment Models:


In cloud computing, we have access to a shared pool of computer resources (servers, storage,
programs, and so on) in the cloud. You simply need to request additional resources when you require
them. Getting resources up and running quickly is a breeze thanks to the cloud. It is possible to
release resources that are no longer necessary. This method allows you to just pay for what you use.
Your cloud provider is in charge of all upkeep.
What is a Cloud Deployment Model?
Cloud Deployment Model functions as a virtual computing environment with a deployment
architecture that varies depending on the amount of data you want to store and who has access to
the infrastructure.

Types of Cloud Computing Deployment Models:


The cloud deployment model identifies the specific type of cloud environment based on ownership,
scale, and access, as well as the cloud’s nature and purpose. The location of the servers you’re
utilizing and who controls them are defined by a cloud deployment model. It specifies how your cloud
infrastructure will look, what you can change, and whether you will be given services or will have to
create everything yourself. Relationships between the infrastructure and your users are also defined
by cloud deployment types. Different types of cloud computing deployment models are described
below.

- Public Cloud
- Private Cloud
- Hybrid Cloud
- Community Cloud
- Multi-Cloud
Public Cloud: The public cloud makes it possible for anybody to access systems and services. The public cloud may be less secure, as it is open to everyone. In the public cloud, infrastructure and services are provided over the internet to the general public or to major industry groups. The infrastructure in this cloud model is owned by the entity that delivers the cloud services, not by the consumer. It is a type of cloud hosting that allows customers and users to easily access systems and services, with service providers supplying services to a variety of customers. In this arrangement, storage, backup, and retrieval services are offered for free, by subscription, or on a per-user basis; Google App Engine is one example.

Public Cloud
Advantages of the Public Cloud Model
• Minimal Investment: Because it is a pay-per-use service, there is no substantial upfront fee, making it excellent for enterprises that require immediate access to resources.
• No setup cost: The entire infrastructure is fully subsidized by the cloud service providers, so there is no need to set up any hardware.
• No infrastructure management required: Using the public cloud does not necessitate infrastructure management.
• No maintenance: The maintenance work is done by the service provider (not users).
• Dynamic Scalability: To fulfill your company's needs, on-demand resources are accessible.
Disadvantages of the Public Cloud Model
• Less secure: The public cloud is less secure because its resources are shared publicly, so there is no guarantee of high-level security.
• Low customization: It is accessed by the general public, so it cannot be customized to individual requirements.
Private Cloud: The private cloud deployment model is the exact opposite of the public cloud
deployment model. It’s a one-on-one environment for a single user (customer). There is no need to
share your hardware with anyone else. The distinction between private and public clouds is in how
you handle all of the hardware. It is also called the “internal cloud” & it refers to the ability to access
systems and services within a given border or organization. The cloud platform is implemented in a
cloud-based secure environment that is protected by powerful firewalls and under the supervision of
an organization’s IT department. The private cloud gives greater flexibility of control over cloud
resources.

Private Cloud
Advantages of the Private Cloud Model
• Better Control: You are the sole owner of the resources. You gain complete command over service integration, IT operations, policies, and user behavior.
• Data Security and Privacy: It is suitable for storing corporate information to which only authorized staff have access. By segmenting resources within the same infrastructure, improved access control and security can be achieved.
• Supports Legacy Systems: This approach is designed to work with legacy systems that are unable to access the public cloud.
• Customization: Unlike a public cloud deployment, a private cloud allows a company to tailor its solution to meet its specific needs.
Disadvantages of the Private Cloud Model
• Less scalable: Private clouds scale only within a certain range, as there are fewer clients.
• Costly: Private clouds are more costly, as they provide personalized facilities.

Hybrid Cloud: By bridging the public and private worlds with a layer of proprietary software, hybrid
cloud computing gives the best of both worlds. With a hybrid solution, you may host the app in a safe
environment while taking advantage of the public cloud’s cost savings. Organizations can move data
and applications between different clouds using a combination of two or more cloud deployment
methods, depending on their needs.
Hybrid Cloud
Advantages of the Hybrid Cloud Model
• Flexibility and control: Businesses gain more flexibility and can design personalized solutions that meet their particular needs.
• Cost: Because public clouds provide scalability, you only pay for extra capacity when you actually require it.
• Security: Because data is properly separated, the chances of data theft by attackers are considerably reduced.
Disadvantages of the Hybrid Cloud Model
• Difficult to manage: Hybrid clouds are difficult to manage because they combine public and private clouds, which makes them complex.
• Slow data transmission: Data transmission in the hybrid cloud takes place through the public cloud, so latency can occur.

Community Cloud: It allows systems and services to be accessible to a group of organizations. It is a distributed system created by integrating the services of different clouds to address the specific needs of a community, industry, or business. The infrastructure of the community cloud can be shared between organizations that have common concerns or tasks. It is generally managed by a third party or by a combination of one or more organizations in the community.

Community Cloud
Advantages of the Community Cloud Model
• Cost Effective: It is cost-effective because the cloud is shared by multiple organizations or communities.
• Security: A community cloud provides better security.
• Shared resources: It allows you to share resources, infrastructure, etc. with multiple organizations.
• Collaboration and data sharing: It is suitable for both collaboration and data sharing.
Disadvantages of the Community Cloud Model
• Limited Scalability: A community cloud is relatively less scalable, as many organizations share the same resources according to their collaborative interests.
• Rigid customization: Because data and resources are shared among different organizations according to their mutual interests, an organization that wants changes to suit its own needs cannot make them, since the changes would affect the other organizations.

Multi-Cloud: We’re talking about employing multiple cloud providers at the same time under this
paradigm, as the name implies. It’s similar to the hybrid cloud deployment approach, which combines
public and private cloud resources. Instead of merging private and public clouds, multi-cloud uses
many public clouds. Although public cloud providers provide numerous tools to improve the reliability
of their services, mishaps still occur. It’s quite rare that two distinct clouds would have an incident at
the same moment. As a result, multi-cloud deployment improves the high availability of your services
even more.

Advantages of the Multi-Cloud Model

• Best-of-breed services: You can mix and match the best features of each cloud provider's services to suit the demands of your apps, workloads, and business by choosing different cloud providers.
• Reduced Latency: To reduce latency and improve user experience, you can choose cloud regions and zones that are close to your clients.
• High availability of service: It is quite rare that two distinct clouds would have an incident at the same moment, so a multi-cloud deployment improves the availability of your services.
Disadvantages of the Multi-Cloud Model
• Complex: The combination of many clouds makes the system complex, and bottlenecks may occur.
• Security issues: Due to the complex structure, there may be loopholes that a hacker can exploit, making data insecure.

What is the Right Choice for Cloud Deployment Model?


As of now, there is no one-size-fits-all approach to picking a cloud deployment model; we always choose the best cloud deployment model as per our requirements. Here are some factors that should be considered before choosing a deployment model.
• Cost: Cost is an important factor, as it determines how much you are willing to pay for the deployment.
• Scalability: Scalability concerns the current activity level and how much it can be scaled.
• Ease of use: This concerns how well trained your staff are and how easily you can manage the model.
• Compliance: Compliance concerns the laws and regulations that impact the implementation of the model.
• Privacy: Privacy concerns what data you gather for the model.
Each model has some advantages and some disadvantages, and the best one is selected only on the basis of your requirements. If your requirements change, you can switch to another model.
Federated Cloud/Intercloud:
Cloud Federation, also known as Federated Cloud is the deployment and management of several
external and internal cloud computing services to match business needs. It is a multi-national cloud
system that integrates private, community, and public clouds into scalable computing platforms.
Federated cloud is created by connecting the cloud environment of different cloud providers using a
common standard.

The architecture of Federated Cloud:

The architecture of Federated Cloud consists of three basic components:


1. Cloud Exchange
The cloud exchange acts as a mediator between the cloud coordinator and the cloud broker. The demands of the cloud broker are mapped by the cloud exchange to the services made available by the cloud coordinator. The cloud exchange keeps a record of the current costs, demand patterns, and available cloud providers, and this information is periodically refreshed by the cloud coordinator.
2. Cloud Coordinator
The cloud coordinator assigns cloud resources to remote users based on the quality of service they demand and the credits they have in the cloud bank. The cloud enterprises and their membership are managed by the cloud controller.
3. Cloud Broker
The cloud broker interacts with the cloud coordinator and analyzes the service-level agreements and the resources offered by the several cloud providers in the cloud exchange. The cloud broker then finalizes the most suitable deal for its client (a simplified sketch follows).
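The following hedged Python sketch models, in a very simplified way, how a cloud broker might compare offers published in the cloud exchange and pick the most suitable deal for a client. The offer data, provider names, and selection rule (cheapest offer meeting the SLA) are illustrative assumptions, not part of any real federation protocol.

```python
# Simplified model of a cloud broker choosing among offers in a cloud exchange.
# Offer data and the selection rule (cheapest offer meeting the SLA) are assumptions.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Offer:
    provider: str
    price_per_hour: float   # cost of the requested resources
    uptime_percent: float   # availability promised in the SLA

def choose_offer(offers: List[Offer], min_uptime: float) -> Optional[Offer]:
    """Return the cheapest offer whose SLA meets the client's uptime demand."""
    eligible = [o for o in offers if o.uptime_percent >= min_uptime]
    return min(eligible, key=lambda o: o.price_per_hour) if eligible else None

exchange_offers = [
    Offer("ProviderA", 0.12, 99.9),
    Offer("ProviderB", 0.09, 99.0),
    Offer("ProviderC", 0.15, 99.99),
]

best = choose_offer(exchange_offers, min_uptime=99.5)
print(f"Broker selects: {best.provider if best else 'no suitable offer'}")
```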
Properties of Federated Cloud:
1. In the federated cloud, the users can interact with the architecture either centrally or in a
decentralized manner. In centralized interaction, the user interacts with a broker to
mediate between them and the organization. Decentralized interaction permits the user
to interact directly with the clouds in the federation.
2. A federated cloud can serve various niches, both commercial and non-commercial.
3. The visibility of a federated cloud helps the user understand how the several clouds in the federated environment are organized.
4. Federated cloud can be monitored in two ways. MaaS (Monitoring as a Service) provides
information that aids in tracking contracted services to the user. Global monitoring aids in
maintaining the federated cloud.
5. The providers who participate in the federation publish their offers to a central entity. The
user interacts with this central entity to verify the prices and propose an offer.
6. Marketed objects such as infrastructure, software, and platforms have to pass through the federation when consumed in the federated cloud.
Benefits of Federated Cloud:
1. It minimizes the consumption of energy.
2. It increases reliability.
3. It minimizes the time and cost of providers due to dynamic scalability.
4. It connects various cloud service providers globally. The providers may buy and sell
services on demand.
5. It provides easy scaling up of resources.
Challenges in Federated Cloud:
1. In a cloud federation, it is common to have more than one provider processing the incoming demands. In such cases, there must be a scheme to distribute the incoming demands fairly among the cloud service providers.
2. The growing demand in cloud federations has resulted in a more heterogeneous infrastructure, making interoperability an area of concern. It becomes a challenge for cloud users to select relevant cloud service providers, and this tends to tie them to a particular cloud service provider.
3. A federated cloud means constructing a seamless cloud environment that can interact
with people, different devices, several application interfaces, and other entities.

Cloud‐Enabling Technology:
Enabling technologies:
1. Broadband networks and internet architecture
2. Data center technology
3. Virtualization technology
4. Web technology
5. Multitenant technology

Broadband networks & Internet architecture:

• All clouds must be connected to a network


• The Internet's largest backbone networks, established and deployed by ISPs (internet service providers), are interconnected by core routers.

Two fundamental components:

1. Connectionless packet switching
• End-to-end (sender-receiver pair) data flows are divided into packets of a limited size (see the sketch after the figure below).
• Packets are processed through network switches and routers, then queued and forwarded from one intermediary node to the next.
2. Router-based interconnectivity
• A router is a device that is connected to multiple networks, through which it forwards packets.
• Each packet is individually processed.
• Multiple alternative network routes can be used.

Packets travelling through Internet
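To illustrate the first component, connectionless packet switching, here is a minimal Python sketch that divides a data flow into packets of a limited size, each carrying enough header information to be forwarded independently. The 1,500-byte limit mirrors a typical Ethernet MTU and is an illustrative assumption.

```python
# Divide an end-to-end data flow into packets of limited size, as in packet switching.
# Each packet carries enough header information to be forwarded independently.
MAX_PAYLOAD = 1500  # bytes per packet, an assumed MTU-like limit

def packetize(data: bytes, flow_id: int) -> list:
    """Split a byte stream into individually routable packets."""
    packets = []
    for seq, start in enumerate(range(0, len(data), MAX_PAYLOAD)):
        packets.append({
            "flow_id": flow_id,          # identifies the sender-receiver pair
            "seq": seq,                  # lets the receiver reassemble in order
            "payload": data[start:start + MAX_PAYLOAD],
        })
    return packets

message = b"x" * 4000
print(f"{len(packetize(message, flow_id=1))} packets for a {len(message)}-byte flow")
```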

Data Center Technology:

A data center is a facility used to house computer systems and associated components, such as
telecommunications and storage systems.
• Virtualization: Data center virtualization is the transfer of physical data centers into digital data
centers (i.e., virtual) using a cloud software platform, enabling companies to remotely access
information and applications. Virtualization is a process of converting a physical IT resource into
a virtual IT resource
• Standardization and Modularity: Data centers are built upon standardized commodity
hardware and designed with modular architectures, aggregating multiple identical building
blocks of facility infrastructure and equipment to support scalability, growth, and speedy
hardware replacements. Modularity and standardization are key requirements for reducing
investment and operational costs as they enable economies of scale for the procurement,
acquisition, deployment, operation, and maintenance processes. Common virtualization
strategies and the constantly improving capacity and performance of physical devices both favor
IT resource consolidation, since fewer physical components are needed to support complex
configurations. Consolidated IT resources can serve different systems and be shared among
different cloud consumers.
• Automation: Data center automation is the process in which routine tasks of data center
operations such as new equipment provisioning, auditing, monitoring, and reporting are
completed with little to no manual effort.
• Remote Operation and Management: Most of the operational and administrative tasks of IT resources in data centers are commanded through the network's remote consoles and management
management systems. Technical personnel are not required to visit the dedicated rooms that
house servers, except to perform highly specific tasks, such as equipment handling and cabling
or hardware-level installation and maintenance.

Virtualization technology:

Virtualization simulates the interface to a physical object by any one of four means:
1. Multiplexing. Create multiple virtual objects from one instance of a physical object. For example, a
processor is multiplexed among a number of processes or threads.
2. Aggregation. Create one virtual object from multiple physical objects. For example, a number of
physical disks are aggregated into a RAID disk.
3. Emulation. Construct a virtual object from a different type of physical object. For example, a physical
disk emulates a random access memory.
4. Multiplexing and emulation. Examples: Virtual memory with paging multiplexes real memory and disk,
and a virtual address emulates a real address; TCP emulates a reliable bit pipe and multiplexes a
physical communication channel and a processor. Virtualization abstracts the underlying resources and
simplifies their use, isolates users from one another, and supports replication, which, in turn, increases
the elasticity of the system.
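As a toy illustration of the multiplexing case (item 1 above), the hedged sketch below simulates a single processor being time-sliced among several processes in round-robin fashion. The time-slice length and the per-process workloads are made-up values.

```python
# Toy round-robin simulation: one physical processor multiplexed among processes.
# Time-slice length and remaining-work values are illustrative assumptions.
from collections import deque

TIME_SLICE = 2  # units of work a process may run before being preempted

def multiplex_cpu(workloads: dict) -> list:
    """Return the order in which processes receive the (single) processor."""
    ready = deque(workloads.items())
    schedule = []
    while ready:
        name, remaining = ready.popleft()
        schedule.append(name)
        remaining -= TIME_SLICE
        if remaining > 0:
            ready.append((name, remaining))  # process still needs more CPU time
    return schedule

print(multiplex_cpu({"P1": 4, "P2": 3, "P3": 2}))  # ['P1', 'P2', 'P3', 'P1', 'P2']
```

Each process sees what appears to be its own processor, while in reality a single physical CPU is shared; a hypervisor applies the same idea to whole virtual machines.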
Virtualization is a critical aspect of cloud computing, equally important to the providers and consumers
of cloud services, and plays an important role in:
• System security because it allows isolation of services running on the same hardware.
• Performance and reliability because it allows applications to migrate from one platform to another.
• The development and management of services offered by a provider.
• Performance isolation.
Virtualization has been used successfully since the late 1950s. A virtual memory based on paging was
first implemented on the Atlas computer at the University of Manchester in the United Kingdom in
1959. In a cloud computing environment a VMM runs on the physical hardware and exports hardware
level abstractions to one or more guest operating systems. A guest OS interacts with the virtual
hardware in the same way it would interact with the physical hardware, but under the watchful eye of
the VMM which traps all privileged operations and mediates the interactions of the guest OS with the
hardware. For example, a VMM can control I/O operations to two virtual disks implemented as two
different sets of tracks on a physical disk. New services can be added without the need to modify an
operating system.

Web technology:

Cloud computing relies on the internet. Web technology is generally used as both the implementation medium and the management interface for cloud services.

Basic web technology


1. Uniform resource locator (URL): Commonly informally referred to as a web address. A reference
to a web resource that specifies its location on a computer network and a mechanism for
retrieving it.
Example: http://www.example.com/index.html
2. Hypertext transfer protocol (HTTP): Primary communication protocol used to exchange content.

3. Markup languages (HTML, XML): Express Web‐centric data and metadata

Web applications: Applications that run in a web browser rely on the browser for the presentation of their user interfaces.
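Since HTTP is the primary protocol used to exchange content, the minimal sketch below retrieves a web resource by URL using only Python's standard library. The URL is a simplified form of the example address shown above, and network access is assumed.

```python
# Fetch a web resource over HTTP using only the Python standard library.
from urllib.request import urlopen

URL = "http://www.example.com/"  # simplified form of the example URL above

with urlopen(URL) as response:   # issues an HTTP GET request
    status = response.status     # numeric HTTP status code, e.g. 200
    body = response.read()       # raw bytes of the HTML document

print(f"GET {URL} -> {status}, {len(body)} bytes received")
```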

Multitenant technology:

The multitenant application design was created to enable multiple users (tenants) to access the same
application logic simultaneously. Each tenant has its own view of the application that it uses,
administers, and customizes as a dedicated instance of the software while remaining unaware of other
tenants that are using the same application. Multitenant applications ensure that tenants do not have
access to data and configuration information that is not their own. Tenants can individually customize
features of the application, such as:
• User Interface – Tenants can define a specialized "look and feel" for their application interface.
• Business Process – Tenants can customize the rules, logic, and workflows of the business processes that are implemented in the application.
• Data Model – Tenants can extend the data schema of the application to include, exclude, or rename fields in the application data structures.
• Access Control – Tenants can independently control the access rights for users and groups.

Multitenant application architecture is often significantly more complex than that of single-tenant
applications. Multitenant applications need to support the sharing of various artifacts by multiple users
(including portals, data schemas, middleware, and databases), while maintaining security levels that
segregate individual tenant operational environments.

Common characteristics of multitenant applications include:


• Usage Isolation – The usage behavior of one tenant does not affect the application availability and performance of other tenants.
• Data Security – Tenants cannot access data that belongs to other tenants.
• Recovery – Backup and restore procedures are separately executed for the data of each tenant.
• Application Upgrade – Tenants are not negatively affected by the synchronous upgrading of shared software artifacts.
• Scalability – The application can scale to accommodate increases in usage by existing tenants and/or increases in the number of tenants.
• Metered Usage – Tenants are charged only for the application processing and features that are actually consumed.
• Data Tier Isolation – Tenants can have individual databases, tables, and/or schemas isolated from other tenants. Alternatively, databases, tables, and/or schemas can be designed to be intentionally shared by tenants (one common shared-table approach is sketched below).
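To show one common way the data-tier isolation described above can be enforced in a shared-table design, here is a hedged Python sketch in which every row carries a tenant identifier and every query filters on it. The table schema, column names, and tenant names are illustrative assumptions.

```python
# Shared-table multitenancy sketch: rows are tagged with a tenant_id and every
# query filters on it, so one tenant can never read another tenant's data.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE invoices (tenant_id TEXT, number TEXT, amount REAL)")
db.executemany(
    "INSERT INTO invoices VALUES (?, ?, ?)",
    [("tenant_a", "INV-1", 120.0), ("tenant_b", "INV-7", 55.5)],
)

def invoices_for(tenant_id: str) -> list:
    """Return only the rows belonging to the given tenant."""
    cur = db.execute(
        "SELECT number, amount FROM invoices WHERE tenant_id = ?", (tenant_id,)
    )
    return cur.fetchall()

print(invoices_for("tenant_a"))  # [('INV-1', 120.0)]; tenant_b's data is not visible
```

Real multitenant platforms add further safeguards (per-tenant encryption keys, separate schemas or databases), but the filter-on-tenant-id pattern captures the basic idea of data isolation on shared infrastructure.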
Implementation Levels of Virtualization:
In the world of computing, using just one software instance is often not enough anymore: professionals want to test their programs or software on multiple platforms, but practical constraints make this difficult. The solution is virtualization, which lets users create multiple platform instances, such as operating systems and applications, on the same physical machine.

The Five Levels of Implementing Virtualization

1. Instruction Set Architecture Level (ISA)


2. Hardware Abstraction Level (HAL)
3. Operating System Level
4. Library Level
5. Application Level

Virtualization has been present since the 1960s, when it was introduced by IBM. Yet it has only recently gained widespread traction, owing to the influx of cloud-based systems.

Instruction Set Architecture Level (ISA): At the ISA level, virtualization works through ISA emulation. This is helpful for running large amounts of legacy code that was originally written for a different hardware configuration; such code can run on the virtual machine through ISA emulation. Binary code that might otherwise need additional layers to run can execute on an x86 machine or, with some tweaking, even on x64 machines. ISA emulation thus makes the virtual machine hardware-agnostic. Basic emulation, though, requires an interpreter, which interprets the source instructions and converts them into a format the hardware can process.
Hardware Abstraction Level (HAL): As the name suggests, this level performs virtualization at the hardware level. It uses a bare-metal hypervisor, which forms the virtual machine and manages the hardware through virtualization. It enables virtualization of each hardware component, such as I/O devices, processors, and memory, so that multiple users can use the same hardware with numerous virtualization instances at the same time. IBM pioneered this approach on its mainframe VM systems, such as the VM/370, in the 1960s and early 1970s. It is well suited to cloud-based infrastructure, so it is no surprise that Xen hypervisors use hardware-level virtualization to run Linux and other OSes on x86-based machines.

Operating System Level: At the operating system level, the virtualization model creates an abstraction layer between the applications and the OS. Each instance is like an isolated container on the physical server and operating system that utilizes its hardware and software, and each of these containers functions like a separate server.
When the number of users is high and no one is willing to share hardware, this level of virtualization comes in handy: every user gets their own virtual environment with dedicated virtual hardware resources, so no conflicts arise.

Library Level: OS system calls are lengthy and cumbersome, which is why applications often rely on APIs from user-level libraries instead. Most of the APIs provided by systems are well documented, which makes library-level virtualization attractive in such scenarios. Library-interface virtualization is made possible by API hooks, which control the communication link from the system to the applications. Tools available today, such as vCUDA and WINE, have successfully demonstrated this technique.

Application Level: Application-level virtualization comes in handy when you wish to virtualize only an application rather than an entire platform or environment. On an operating system, an application runs as one process, so this is also known as process-level virtualization. It is generally used for running virtual machines that support high-level languages. Here, the application sits on top of the virtualization layer, which in turn resides on the operating system. Programs written in high-level languages and compiled for an application-level virtual machine can run fluently in this setup.

Virtualization Structures/Tools and Mechanisms:


• Before virtualization, the OS manages the hardware.
• After virtualization, a virtualization layer is inserted between the hardware and the OS.
• The virtualization layer converts portions of the real hardware into virtual hardware.
• Thus, different operating systems such as Linux and Windows can run on the same physical machine simultaneously.
• Depending on the position of the virtualization layer, there are several classes of VM architectures:
a. the hypervisor architecture
b. paravirtualization
c. host-based virtualization

In general, there are three common categories of VM architecture. Figure 3.1 shows the architecture of a machine before and after virtualization. Prior to virtualization, the operating system controls the hardware. After virtualization, a virtualization layer is inserted between the hardware and the operating system. In such a case, the virtualization layer is responsible for converting parts of the real hardware into virtual hardware. Therefore, different operating systems such as Linux and Windows can operate on the same physical machine simultaneously. Depending on the position of the virtualization layer, there are several categories of VM structures, namely hypervisor architecture, para-virtualization, and host-based virtualization. The hypervisor is also known as the VMM (Virtual Machine Monitor); both terms refer to the same virtualization layer.

Hypervisor:

• A hardware virtualization technique that allows multiple OSes, called guests, to run on a host machine. The hypervisor is also called the Virtual Machine Monitor (VMM).
• Supports hardware-level virtualization on bare-metal devices like the CPU, memory, disk, and network interfaces.
• Sits directly between the physical hardware and its OS.
• Provides hypercalls for the guest OSes and applications.
• May assume a micro-kernel architecture like Microsoft Hyper-V, which:
1. Includes only the basic and unchanging functions (such as physical memory management and processor scheduling).
2. Keeps the device drivers and other changeable components outside the hypervisor.
• Or it can assume a monolithic hypervisor architecture like VMware ESX for server virtualization, which implements all of the above functions, including those of the device drivers.
• So the size of the hypervisor code of a micro-kernel hypervisor is smaller than that of a monolithic hypervisor.
• A hypervisor must be able to convert physical devices into virtual resources dedicated for the deployed VM to use.

The Xen Architecture:

The hypervisor supports hardware-level virtualization (see Figure 3.1(b)) on bare-metal devices such as
the CPU, memory, disk, and network interfaces. The hypervisor software sits directly between the
physical hardware and its OS. This virtualization layer is referred to as either the VMM or the hypervisor.
The hypervisor provides hypercalls for the guest OSes and applications. Depending on the functionality,
a hypervisor can assume a micro-kernel architecture like Microsoft Hyper-V, or a monolithic hypervisor
architecture like the VMware ESX for server virtualization.
A micro-kernel hypervisor includes only the basic and unchanging functions (such as physical memory
management and processor scheduling). The device drivers and other changeable components are
outside the hypervisor. A monolithic hypervisor implements all of the above functions, including those
of the device drivers. Therefore, the hypervisor code size of a micro-kernel hypervisor is smaller than
that of a monolithic hypervisor. In essence, a hypervisor must be able to convert physical devices into
virtual resources dedicated for the deployed VM to use. Xen itself is an open-source micro-kernel
hypervisor: the hypervisor implements only the core mechanisms, while device drivers and management
tools run in a privileged guest domain (Domain 0).
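
As a concrete illustration of how management software talks to a hypervisor, the sketch below uses the
libvirt Python bindings, a common hypervisor-neutral management API; the connection URI and the
assumption that libvirt-python is installed are ours, not the text's. It connects to a local KVM/QEMU or
Xen host and lists its guest domains.

# Minimal sketch: querying a hypervisor through the libvirt management API.
# Assumes the libvirt-python package is installed and a local hypervisor is
# running; the URI below is an assumption (use "xen:///system" for Xen).
import libvirt

def list_guests(uri: str = "qemu:///system") -> None:
    conn = libvirt.open(uri)                     # connect to the hypervisor
    if conn is None:
        raise RuntimeError("failed to connect to %s" % uri)
    try:
        for dom in conn.listAllDomains():        # every defined guest domain
            state, _reason = dom.state()
            running = "running" if state == libvirt.VIR_DOMAIN_RUNNING else "stopped"
            print(dom.name(), running)
    finally:
        conn.close()

if __name__ == "__main__":
    list_guests()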

What Are the Different Types of Hypervisors in Cloud Computing?

There are mainly two types of hypervisors in cloud computing: the Type 1 or bare-metal hypervisor, and
the Type 2 or hosted hypervisor. In this section, we describe each type, discuss its advantages and
disadvantages, consider what to weigh when choosing one over the other, and review the benefits of
hypervisors in general.

Different Types of Hypervisors in Cloud Computing


Hypervisor technology is one of the key enablers of cloud infrastructure. By understanding the different
types of hypervisors in cloud computing, you’ll better grasp the inner workings of cloud environments.
This will help you make informed decisions in your cloud initiatives.
Regardless of which hypervisor you choose, the basic functionality is the same. All types of hypervisors
enable you to create virtual machines (VMs). Each VM will have its own allocation of resources from the
underlying infrastructure as well as its own OS. A VM’s OS is called a guest OS. The resource allocations,
as well as the guest OS, can vary from one VM to another. All characteristics and capabilities of each VM
are made possible by the hypervisor.
As mentioned earlier, there are two different types of hypervisors in cloud computing. We’ll discuss
their differences in detail later, but let’s go over the basics of each one now.

Type 1 Hypervisors
A Type 1 hypervisor runs directly on a physical host. That’s why it’s also known as a bare metal
hypervisor. Basically, you would install a Type 1 hypervisor before anything else on a physical host, so it
sort of acts like that host’s operating system.
Consequently, a Type 1 hypervisor has direct access to the underlying physical host’s resources—e.g.,
CPU, RAM, storage, and network interface. Most cloud service providers use Type 1 hypervisors for
reasons we’ll discuss soon. The most commonly used Type 1 hypervisors are VMware ESXi and Microsoft
Hyper-V.

Type 2 Hypervisors
A Type 2 hypervisor runs on top of a host OS. For this reason, it’s also known as a hosted hypervisor. So,
you would have to install a host OS on your physical host before you can install a Type 2 hypervisor.
When a Type 2 hypervisor needs to communicate with the underlying hardware or access hardware
resources, it must go through the host OS first. Type 2 hypervisors are usually easier to set up and use.
Hence, they’re more common among end users. VirtualBox and Parallels® Desktop, the most popular
solution for running Windows on Macs, are Type 2 hypervisors.

Advantages and Disadvantages of Type 1 Hypervisors


Let’s now discuss the advantages and disadvantages of Type 1 hypervisors.

Advantages of Type 1 Hypervisors


Some of the advantages of Type 1 Hypervisors are that they are:
 Generally faster than Type 2. This is because Type 1 hypervisors have direct access to the
underlying physical host’s resources such as CPU, RAM, storage, and network interfaces. For
this reason, Type 1 hypervisors have lower latency compared to Type 2.
 More resource-rich. A Type 1 hypervisor doesn’t have to share the underlying resources
with a host OS. Thus, it can access a greater amount of CPU, RAM, storage, and network
bandwidth. This attribute also contributes to the performance of a Type 1 hypervisor.
 More secure. Since a host OS doesn’t exist in a Type 1 hypervisor deployment, that
deployment’s attack surface is much smaller than that of Type 2. In turn, this means threat
actors will have substantially fewer vulnerabilities to exploit.
 More stable. The absence of a host OS also eliminates host OS-related issues that may
affect the performance and availability of the virtual machines running on top of the
hypervisor.

Disadvantages of Type 1 Hypervisors


Type 1 hypervisors also have some disadvantages. They can be:
 Harder to set up and administer. With a Type 1 hypervisor, you must start from scratch, as
you’ll be dealing with a bare-metal server. Although a Type 2 deployment still requires
installing a host OS, IT administrators (even junior administrators) are already familiar with
OS installation and configuration. Hence, they might find a Type 1 hypervisor more
challenging to administer.
 Dependent on an external administrative interface. Normally, Type 1 hypervisors are
managed through an external administrative interface. That means you’ll need a separate
system/computer to set up a Type 1 hypervisor.

Advantages and Disadvantages of Type 2 Hypervisors


Let’s now move on to the advantages and disadvantages of Type 2 hypervisors.

Advantages of Type 2 Hypervisors


Type 2 hypervisors have the advantages of being:
 More affordable. The superior capabilities of Type 1 hypervisors from a reliability, security,
and efficiency standpoint come at a cost. They’re naturally more expensive. Conversely,
Type 2 hypervisors are more affordable. That’s why, although they can theoretically be used
in enterprise use cases, the target market of Type 2 hypervisors is normally end users.
 Easier to use. Affordability isn’t the only reason why Type 2 hypervisors are more suitable
for end users. Type 2 hypervisors are also usually easier to use. Hence, they’re more
appropriate for the less technical crowd.

Disadvantages of Type 2 Hypervisors


Some disadvantages when using Type 2 hypervisors are they’re:
 Slower than Type 1. Having a layer, the host OS, between a Type 2 hypervisor and the
underlying physical host adds latency. Hence, Type 2 hypervisors are generally slower than
their Type 1 counterparts.
 More limited access to host resources. Since a Type 2 hypervisor shares the CPU, RAM, storage, and
network bandwidth of the underlying physical infrastructure with a host OS, the amount
of resources it has access to is limited compared to that of a Type 1.
 Less secure. The presence of a host OS increases the attack surface of the entire system.
This means threat actors have more vulnerabilities to exploit.
 Less stable. Any performance and availability issues in the host OS certainly affect the Type
2 hypervisor, and its VMs, running on top of it.
What Is CPU Virtualization?

CPU virtualization is a technology that allows multiple virtual machines to run on a single physical server,
each with its own operating system and applications. It is a key component of cloud computing because
it enables the efficient use of computing resources and the ability to quickly scale capacity up or down as
needed.

Benefits of CPU Virtualization

CPU virtualization offers a number of benefits for cloud computing. Because multiple virtual machines,
each with its own operating system and applications, can be run on a single physical server, hardware is
used far more efficiently. It also allows computing capacity to be scaled up or down quickly, since virtual
machines can be created or removed without changing the physical hardware.

How CPU Virtualization Works

CPU virtualization works by allowing multiple virtual machines to run on a single physical server. This is
done by using a hypervisor, a software layer that sits between the physical server and the virtual
machines. The hypervisor is responsible for managing the resources of the physical server and allocating
them to the virtual machines as they need them.


Disadvantages of CPU Virtualization

CPU virtualization also has some disadvantages. It can be difficult to manage multiple virtual machines
on a single physical server, as the hypervisor must be configured correctly in order for the virtual
machines to run properly. Additionally, CPU virtualization can be resource-intensive, as the hypervisor
must manage the resources of the physical server and allocate them to the virtual machines. This can
lead to increased costs for cloud computing.
Virtualization of CPU, Memory, and I/O Devices:

To support virtualization, processors such as the x86 employ a special running mode and instructions,
known as hardware-assisted virtualization. In this way, the VMM and guest OS run in different modes
and all sensitive instructions of the guest OS and its applications are trapped in the VMM. To save
processor states, mode switching is completed by hardware. For the x86 architecture, Intel and AMD
have proprietary technologies for hardware-assisted virtualization.

1. Hardware Support for Virtualization

Modern operating systems and processors permit multiple processes to run simultaneously. If there is
no protection mechanism in a processor, all instructions from different processes will access the
hardware directly and cause a system crash. Therefore, all processors have at least two modes, user
mode and supervisor mode, to ensure controlled access of critical hardware. Instructions running in
supervisor mode are called privileged instructions. Other instructions are unprivileged instructions. In a
virtualized environment, it is more difficult to make OSes and applications run correctly because there
are more layers in the machine stack. Example 3.4 discusses Intel’s hardware support approach.

At the time of this writing, many hardware virtualization products were available. The VMware
Workstation is a VM software suite for x86 and x86-64 computers. This software suite allows users to set
up multiple x86 and x86-64 virtual computers and to use one or more of these VMs simultaneously with
the host operating system. The VMware Workstation assumes the host-based virtualization. Xen is a
hypervisor for use in IA-32, x86-64, Itanium, and PowerPC 970 hosts. Actually, Xen modifies Linux as the
lowest and most privileged layer, or a hypervisor.
One or more guest OS can run on top of the hypervisor. KVM (Kernel-based Virtual Machine) is a Linux
kernel virtualization infrastructure. KVM can support hardware-assisted virtualization and
paravirtualization by using Intel VT-x or AMD-V and the VirtIO framework, respectively. The VirtIO
framework includes a paravirtual Ethernet card, a disk I/O controller, a balloon device for adjusting
guest memory usage, and a VGA graphics interface using VMware drivers.

Hardware-Assisted CPU Virtualization

This technique attempts to simplify virtualization because full virtualization and paravirtualization are
complicated. Intel and AMD add an additional privilege level (often called Ring -1) to x86
processors. Therefore, operating systems can still run at Ring 0 while the hypervisor runs at Ring -1. All
the privileged and sensitive instructions are trapped by the hypervisor automatically. This technique
removes the difficulty of implementing binary translation for full virtualization. It also lets the operating
system run in VMs without modification.

Example 3.5 Intel Hardware-Assisted CPU Virtualization

Although x86 processors were not originally designed to be virtualizable, great effort has been taken to
virtualize them because, unlike RISC processors, the large base of x86 legacy systems cannot be discarded
easily. Virtualization of x86 processors is detailed in the following sections. Intel’s VT-x technology is an
example of hardware-assisted virtualization, as shown in Figure 3.11. Intel calls the privilege level of x86
processors the VMX Root Mode. In order to control the start and stop of a VM and allocate a memory
page to maintain the
CPU state for VMs, a set of additional instructions is added. At the time of this writing, Xen, VMware,
and the Microsoft Virtual PC all implement their hypervisors by using the VT-x technology.
Generally, hardware-assisted virtualization should have high efficiency. However, since the transition
from the hypervisor to the guest OS incurs high-overhead switches between processor modes, it
sometimes cannot outperform binary translation. Hence, virtualization systems such as VMware now
use a hybrid approach, in which a few tasks are offloaded to the hardware but the rest is still done in
software. In addition, para-virtualization and hardware-assisted virtualization can be combined to
improve the performance further.

3. Memory Virtualization

Virtual memory virtualization is similar to the virtual memory support provided by modern operating
systems. In a traditional execution environment, the operating system maintains mappings of virtual
memory to machine memory using page tables, which is a one-stage mapping from virtual memory to
machine memory. All modern x86 CPUs include a memory management unit (MMU) and a translation
lookaside buffer (TLB) to optimize virtual memory performance. However, in a virtual execution
environment, virtual memory virtualization involves sharing the physical system memory in RAM and
dynamically allocating it to the physical memory of the VMs.
That means a two-stage mapping process should be maintained by the guest OS and the VMM,
respectively: virtual memory to physical memory and physical memory to machine memory.
Furthermore, MMU virtualization should be supported, which is transparent to the guest OS. The guest
OS continues to control the mapping of virtual addresses to the physical memory addresses of VMs. But
the guest OS cannot directly access the actual machine memory. The VMM is responsible for mapping
the guest physical memory to the actual machine memory. Figure 3.12 shows the two-level memory
mapping procedure.
Since each page table of the guest OSes has a separate page table in the VMM corresponding to it, the
VMM page table is called the shadow page table. Nested page tables add another layer of indirection to
virtual memory. The MMU already handles virtual-to-physical translations as defined by the OS. Then
the physical memory addresses are translated to machine addresses using another set of page tables
defined by the hypervisor. Since modern operating systems maintain a set of page tables for every
process, the shadow page tables will get flooded. Consequently, the performance overhead and cost of
memory will be very high.
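
The two-stage mapping can be pictured with a toy model. In the sketch below (all page numbers are
invented for illustration), one dictionary stands in for a guest page table and another for the VMM's
physical-to-machine table, so a guest virtual page is translated first to a guest-physical page and then to
a machine page; a shadow page table simply caches the composed mapping so the MMU needs only one
lookup.

# Toy model of two-stage memory mapping in a virtualized host.
# All page numbers below are invented for illustration only.

# Stage 1: maintained by the guest OS (virtual page -> guest-physical page)
guest_page_table = {0x10: 0x02, 0x11: 0x07}

# Stage 2: maintained by the VMM (guest-physical page -> machine page)
vmm_p2m_table = {0x02: 0x1A, 0x07: 0x33}

def translate(virtual_page: int) -> int:
    """Walk both tables, the work a shadow/nested page table collapses into one lookup."""
    guest_physical = guest_page_table[virtual_page]   # guest OS mapping
    machine = vmm_p2m_table[guest_physical]           # VMM mapping
    return machine

# A shadow page table caches the composed mapping for the MMU:
shadow_page_table = {vp: vmm_p2m_table[gp] for vp, gp in guest_page_table.items()}

assert translate(0x10) == shadow_page_table[0x10] == 0x1A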

Virtual Clusters and Resource Management:


As with traditional physical servers, virtual machines (VMs) can also be clustered. A VM cluster starts
with two or more physical servers; we'll call them Server A and Server B. In simple deployments if Server
A fails, its workloads restart on Server B

Virtual Cluster features


• HA (High Availability): Virtual machines can be restarted on another host if the host where the virtual
machine is running fails.
• DRS (Distributed Resource Scheduler): Virtual machines can be load balanced so that no host in the
cluster is overloaded or left nearly idle.
• Live migration: Virtual machines can be moved from one host to another while running.
In a traditional VM initialization, the administrator manually writes the configuration information or
specifies the configuration sources. With many VMs, an inefficient configuration always causes problems
with overloading or underutilization.
Amazon’s EC2 provides elastic computing power in a cloud. EC2 permits customers to create VMs and to
manage user accounts over the time of their use (resizable capacity).
XenServer and VMware ESXi Server support a bridging mode which allows all domains to appear on the
network as individual hosts.
With this mode VMs can communicate with one another freely through the virtual network interface
card and configure the network automatically.
Virtual clusters are built with VMs installed at distributed servers from one or more physical clusters.
The VMs in a virtual cluster are interconnected logically by a virtual network across several physical
networks.

Provisioning of VMs in Virtual Clusters: The provisioning of VMs to a virtual cluster is done dynamically
to have some interesting properties:

The virtual cluster nodes can be either physical or virtual machines. Multiple VMs running with different
OSes can be deployed on the same physical node. A VM runs with a guest OS, which is often different
from the host OS. The purpose of using VMs is to consolidate multiple functionalities on the same
server. This will greatly enhance server utilization and application flexibility. VMs can be colonized
(replicated) in multiple servers for the purpose of promoting distributed parallelism, fault tolerance,
disaster recovery. The size of a virtual cluster can grow or shrink dynamically. The failure of any physical
nodes may disable some VMs installed on the failing nodes. But the failure of VMs will not pull down the
host system.

Virtual Clusters Management: It is necessary to effectively manage VMs running on virtual clusters and
consequently build a high-performance virtualized computing environment. This involves – virtual
cluster deployment, – monitoring and management over large-scale clusters, resource scheduling, load
balancing, – server consolidation, fault tolerance, and other techniques. Since large number of VM
images might be present, the most important thing is to determine how to store those images in the
system efficiently. Apart from it there are common installations for most users or applications, such as
OS or user-level programming libraries. These software packages can be preinstalled as templates
(called template VMs).

Deployment:
There are four steps to deploy a group of VMs onto a target cluster: – preparing the disk image, –
configuring the VMs, – choosing the destination nodes, and – executing the VM deployment command
on every host. Many systems use templates to simplify the disk image preparation process. A template is
a disk image that includes a preinstalled operating system with or without certain application software.
Users choose a proper template according to their requirements and make a duplicate of it as their own
disk image. Templates could implement the COW (Copy on Write) format. A new COW backup file is very
small and easy to create and transfer. Therefore, it definitely reduces disk space consumption. In
addition, VM deployment time is much shorter than that of copying the whole raw image file. Each VM is
configured with a name, a disk image, network settings, and allocated CPU and memory. One needs to
record each VM configuration into a file; however, this method is inefficient when managing a large
group of VMs. VMs with the same configuration can use pre-edited profiles to simplify the process. In
this scenario, the system configures the VMs according to the chosen profile. Most configuration items
use the same settings, while other items, such as the UUID, VM name, and IP address, are assigned
automatically calculated values.
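
The profile-driven configuration step can be sketched as follows; the field names and the IP-assignment
rule are assumptions rather than those of any particular cluster manager. Shared settings come from the
chosen profile, while per-VM items such as the UUID, name, and IP address are calculated automatically.

# Sketch of profile-based VM configuration (field names are illustrative).
import uuid

profile = {                      # settings shared by every VM in the group
    "template_image": "ubuntu-base.qcow2",
    "vcpus": 2,
    "memory_mb": 2048,
    "network": "br0",
}

def make_vm_configs(count: int, subnet: str = "10.0.0."):
    """Generate per-VM records: UUID, name, and IP are computed automatically."""
    configs = []
    for i in range(count):
        vm = dict(profile)                       # copy the shared settings
        vm["uuid"] = str(uuid.uuid4())
        vm["name"] = "vm-%03d" % i
        vm["ip"] = subnet + str(10 + i)          # assumed allocation rule
        configs.append(vm)
    return configs

for cfg in make_vm_configs(3):
    print(cfg["name"], cfg["ip"], cfg["uuid"])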

Copy-on-write:
An optimization strategy in which, if multiple callers ask for resources that are initially
indistinguishable, they are all given pointers to the same resource. This illusion of separate copies can be
maintained until a caller tries to modify its "copy" of the resource, at which point a true private copy is
created so that the changes do not become visible to everyone else. All of this happens transparently to
the callers. The primary advantage is that if a caller never makes any modifications, no private copy ever
needs to be created. All changes are recorded in a separate file, preserving the original image. Several
COW files can point to the same image to test several configurations simultaneously without
jeopardizing the base system. Unlike a snapshot, copy-on-write uses multiple files and allows multiple
instances of the base machine to run simultaneously.
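
The idea can be shown with a small sketch (a toy in-memory model, not how qcow2 or any real COW
format stores data): readers share the base image until the first write, at which point only the modified
blocks go into a caller's private overlay.

# Toy copy-on-write overlay: reads fall through to the shared base image,
# writes are recorded only in the caller's private overlay.
class CowImage:
    def __init__(self, base_blocks):
        self.base = base_blocks      # shared, never modified
        self.overlay = {}            # private changes of this instance only

    def read(self, block_no):
        # Prefer the private copy if this block was ever written.
        return self.overlay.get(block_no, self.base[block_no])

    def write(self, block_no, data):
        # The true private copy is created only on first modification.
        self.overlay[block_no] = data

base = {0: b"boot", 1: b"kernel", 2: b"data"}
vm_a = CowImage(base)
vm_b = CowImage(base)            # several COW files can share one base image
vm_a.write(2, b"changed by A")

assert vm_a.read(2) == b"changed by A"
assert vm_b.read(2) == b"data"   # B still sees the original, unmodified base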

VIRTUALIZATION FOR DATA-CENTER AUTOMATION:

Data centers have grown rapidly in recent years, and all major IT companies are pouring their resources
into building new data centers. In addition, Google, Yahoo!, Amazon, Microsoft, HP, Apple, and IBM are
all in the game. All these companies have invested billions of dollars in data-center construction and
automation. Data-center automation means that huge volumes of hardware, software, and database
resources in these data centers can be allocated dynamically to millions of Internet users
simultaneously, with guaranteed QoS and cost-effectiveness.

This automation process is triggered by the growth of virtualization products and cloud computing
services. A 2007 IDC report tracked the growth of virtualization and its market distribution in major IT
sectors from 2006 to 2011. In 2006, virtualization had a market share of $1,044 million in business and
enterprise opportunities, the majority of which came from production consolidation and software
development. Virtualization is moving toward enhancing mobility, reducing planned downtime (for
maintenance), and increasing the number of virtual clients.

The latest virtualization developments highlight high availability (HA), backup services, workload
balancing, and further increases in client bases. IDC projected that automation, service orientation,
policy-based management, and variable-cost models would shape the virtualization market, with total
business opportunities possibly increasing to $3.2 billion by 2011. The major market share moves to the
areas of HA, utility computing, production consolidation, and client bases. In what follows, we will
discuss server consolidation, virtual storage, OS support, and trust management in automated
data-center designs.

1. Server Consolidation in Data Centers

In data centers, a large number of heterogeneous workloads can run on servers at various times. These
heterogeneous workloads can be roughly divided into two categories: chatty workloads and
noninteractive workloads. Chatty workloads may burst at some point and return to a silent state at some other
point. A web video service is an example of this, whereby a lot of people use it at night and few people
use it during the day. Noninteractive workloads do not require people’s efforts to make progress after
they are submitted. High-performance computing is a typical example of this. At various stages, the
requirements for resources of these workloads are dramatically different. However, to guarantee that a
workload will always be able to cope with all demand levels, the workload is statically allocated enough
resources so that peak demand is satisfied.
Therefore, it is common that most servers in data centers are underutilized. A large amount of
hardware, space, power, and management cost of these servers is wasted. Server consolidation is an
approach to improve the low utility ratio of hardware resources by reducing the number of physical
servers. Among several server consolidation techniques such as centralized and physical consolidation,
virtualization-based server consolidation is the most powerful. Data centers need to optimize their
resource management. Yet these techniques are performed with the granularity of a full server
machine, which makes resource management far from well optimized. Server virtualization enables
smaller resource allocation than a physical machine.

2.Virtual Storage Management

The term “storage virtualization” was widely used before the renaissance of system virtualization. Yet
the term has a different meaning in a system virtualization environment. Previously, storage
virtualization was largely used to describe the aggregation and repartitioning of disks at very coarse time scales
for use by physical machines. In system virtualization, virtual storage includes the storage managed by
VMMs and guest OSes. Generally, the data stored in this environment can be classified into two
categories: VM images and application data. The VM images are special to the virtual environment,
while application data includes all other data which is the same as the data in traditional OS
environments.

The most important aspects of system virtualization are encapsulation and isolation. Traditional
operating systems and the applications running on them can be encapsulated in VMs. Only one
operating system runs in each VM, while many applications can run in that operating system. System
virtualization allows multiple VMs to run on a physical machine, and the VMs are completely isolated
from one another. To achieve encapsulation and isolation, both the system software and the hardware
platform, such as CPUs and chipsets, are being rapidly updated. However, storage is lagging behind, and
storage systems have become the main bottleneck of VM deployment.

3. Cloud OS for Virtualized Data Centers

Data centers must be virtualized to serve as cloud providers. Table 3.6 summarizes
four virtual infrastructure (VI) managers and OSes. These VI managers and OSes are specially tailored
for virtualizing data centers which often own a large number of servers in clusters. Nimbus, Eucalyptus,
and OpenNebula are all open source software available to the general public. Only vSphere 4 is a
proprietary OS for cloud resource virtualization and management over data centers.

These VI managers are used to create VMs and aggregate them into virtual clusters as elastic resources.
Nimbus and Eucalyptus support essentially virtual networks. OpenNebula has additional features to
provision dynamic resources and make advance reservations. All three public VI managers apply Xen and
KVM for virtualization. vSphere 4 uses the hypervisors ESX and ESXi from VMware. Only vSphere 4
supports virtual storage in addition to virtual networking and data protection. We will study Eucalyptus
and vSphere 4 in the next two examples.
Unit 2: Common Standards
The Open Cloud Consortium:

It is an organization that supports the development of standards for cloud computing and for
interoperating with the various frameworks.

OCC working groups perform these functions: They develop benchmarks for measuring cloud
computing performance; their benchmark and data generator for measuring large data clouds is called
MalStone. They provide testbeds that vendors can use to test their applications, including the Open
Cloud Testbed and the Intercloud Testbed. They support the development of open-source reference
implementations for cloud computing (for example, implementations of MapReduce, Google's patented
software framework that supports large distributed data sets organized by the Google File System (GFS)
and accessed by clusters of computers). They support the management of cloud computing
infrastructure for scientific research.

Moving Applications to the Cloud: Some applications benefit from being moved, fully or partly, from a
local or on-premises installation to the cloud, and the cloud enhances some of their features. The process
for determining whether, what, and when to move your applications to the cloud involves an analysis of
which critical features of the application need to be supported. A particular cloud service provider is
chosen based on how well it supports the application's critical features.

Application porting methods: Physical hardware is eliminated by moving the entire application to the
cloud. A system is essentially cloned to the cloud. Factors such as access to data, latencies, data security
etc. limit application porting abilities. When you move an application to the cloud, you must use the APIs
of your particular cloud service provider. There are APIs for each of the types of cloud services:
infrastructure, software services, and applications. These APIs are generally not interoperable.

Applications in the Clouds: Applications in the cloud must account for system abstraction and
redirection, scalability, a whole new set of application and system APIs, LAN/WAN latencies, and other
factors that are specific to one cloud platform or another. Any application can run either completely or
partially in the cloud. A developer should analyze whether the application's function is best served by
cloud or local deployment. This depends on which of the application's attributes have to be preserved or
enhanced, and on how locating those services in the cloud impacts those attributes. The location of an
application or service plays a fundamental role in how the application must be written. An application or
process that runs on a desktop or server is executed coherently, as a unit, under the control of an
integrated program. An action triggers a program call, code executes, and a result is returned and may
be acted upon. Taken as a unit, "Request => Process => Response" is an atomic transaction.

ACID Principle: The properties necessary to guarantee a reliable transaction in databases and other
applications and the technologies necessary to achieve them. The acronym stands for:
a. Atomicity: Defines a transaction as something that cannot be subdivided and must be completed or
abandoned as a unit.
b. Consistency: States that the system must go from one known state to another and that the system
integrity must be maintained.
c. Isolation: States that the system cannot have other transactions operate on data that is currently
being processed by a transaction.
d. Durability: States that the system must have a mechanism to recover from committed transactions
when required.

An application that runs as a service on the Internet has a client portion that makes a request and a
server portion that responds to that request. The request has been decoupled from the response
because the transaction is executing in two or more places. In order to create a stateful system in a
distributed architecture, a transaction manager or broker must be added. When applications are moved
to the cloud, the physical systems become virtualized, and the place where program execution occurs
can be different every time.

What is Open Virtualization Format (OVF)?

Open Virtualization Format (OVF) is an open source standard for packaging and distributing software
applications and services for virtual machines (VMs). As the adoption of virtual infrastructure increases,
there is a greater need for an open, standard, portable and platform-independent metadata format to
distribute virtual systems onto and between virtualization platforms. OVF provides such a packaging and
distribution format to facilitate the mobility of VMs. The standard also describes multiple VMs with their
relationships. These VMs can be wrapped up in a single virtual appliance file to enable broader
distribution.

Open Virtualization Format explained

OVF is not a specification describing a virtual disk. Rather, it is a standard representation of VM


metadata. This VM metadata includes the following:

 name
 configured memory
 CPU
 storage settings
 network

In addition to describing the above attributes of virtual hardware, OVF also allows virtual appliance
vendors to add comments about the VM and other characteristics, such as an end-user license
agreement (EULA), boot parameters and minimum requirements. They can also encrypt, compress and
digitally sign their content. OVF, which is specified by the Distributed Management Task Force (DMTF)
and published by the International Organization for Standardization (ISO) as ISO/IEC 17203, is independent of
any particular processor or hypervisor architecture. It leverages DMTF's Common Information Model
(CIM) to allow management software to understand and map resource properties by using the OVF
open standard. As a packaging format for virtual appliances, OVF enables the mobility of virtual
machines across multiple platforms by facilitating the distribution of enterprise software in a flexible,
secure and efficient manner.

Consequently, both vendors and users can follow OVF specifications to deploy a VM on any
virtualization platform. They can take full advantage of virtualization's benefits, including the following:
 enhanced flexibility
 portability
 verification
 version control
 signing
 better licensing terms

Features of Open Virtualization Format

Key features of OVF are as follows:

Validation support. OVF supports the validation of every VM and the complete package.
Supports single and multiple VM configurations. OVF supports both single VM packages and complex
multi-tier package services involving more than one interdependent VM.
Content verification support. Depending on the industry-standard public key infrastructure (PKI), OVF
enables integrity checking and content verification.
Licensing support. OVF supports management and software licensing strategies.
Platform-independent. OVF was designed to be platform-independent, whether it's a guest OS, host
platform or virtualization platform.
Extensible. OVF can support new technological advancements in virtualization and virtual appliances.
Portable packaging. OVF allows vendors to add platform-specific enhancements to their appliances and
software.
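
For a rough feel of what OVF metadata looks like to a tool, the sketch below parses a descriptor with
Python's standard xml.etree module; the file name is a placeholder and only the VirtualSystem id, Name,
and Info fields are read, whereas real descriptors also carry hardware items, EULAs, and signatures.

# Sketch: reading basic VM metadata from an OVF descriptor ("package.ovf" is
# an assumed file name; only a few fields of the envelope are inspected).
import xml.etree.ElementTree as ET

OVF_NS = "{http://schemas.dmtf.org/ovf/envelope/1}"

def describe(ovf_path: str = "package.ovf") -> None:
    root = ET.parse(ovf_path).getroot()          # the <Envelope> element
    for vs in root.iter(OVF_NS + "VirtualSystem"):
        vm_id = vs.get(OVF_NS + "id")            # the ovf:id attribute
        name = vs.findtext(OVF_NS + "Name", default="(unnamed)")
        info = vs.findtext(OVF_NS + "Info", default="")
        print("VirtualSystem:", vm_id, "|", name, "|", info)

if __name__ == "__main__":
    describe()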

Standards for Application Developers:

The purpose of application development standards is to ensure uniform, consistent, high-quality


software solutions. Programming standards help to improve the readability of the software, allowing
developers to understand new code more quickly and thoroughly. Commonly used application
standards are available for the Internet in browsers, for transferring data, sending messages, and
securing data.

Browsers (Ajax):
AJAX (Asynchronous JavaScript and XML), is a group of interrelated web development techniques used
to create interactive web applications or rich Internet applications.
Using Ajax, web applications can retrieve data from the server asynchronously, without interfering with
the display and behavior of the browser page currently being displayed to the user.
The use of Ajax has led to an increase in interactive animation on web pages.
Using Ajax, a web application can request only the content that needs to be updated in the web pages.
This greatly reduces networking bandwidth usage and page load times. Sections of pages can be
reloaded individually. An Ajax framework helps developers to build dynamic web pages on the client
side. Data is sent to or from the server using requests, usually written in JavaScript. ICEfaces is an open
source Ajax framework developed as a Java product and maintained by http://icefaces.org.
ICEfaces Ajax Application Framework: ICEfaces is an integrated Ajax application framework
that enables Java EE application developers to easily create and deploy thin-client rich Internet
applications in pure Java. To run ICEfaces applications, users need to download and install the
following products:

 Java 2 Platform, Standard Edition
 Ant
 Tomcat
 ICEfaces
 Web browser (if you don’t already have one installed)

Security Features in ICEfaces Ajax Application Framework:

1. ICEfaces is one of the most secure Ajax solutions available.


2. It is compatible with the SSL (Secure Sockets Layer) protocol.
3. It prevents cross-site scripting, malicious code injection, and unauthorized data mining.
4. ICEfaces does not expose application logic or user data.
5. It is effective in preventing fake form submits and SQL (Structured Query Language) injection
attacks.

Data (XML, JSON):

1. Extensible Markup Language (XML) allows users to define their own markup elements.


2. Its purpose is to enable the sharing of structured data.
3. XML is often used to describe structured data and to serialize objects.
4. XML provides a basic syntax that can be used to share information among different kinds of
computers, different applications, and different organizations without needing to be converted
from one format to another.

JSON (JavaScript Object Notation) is a lightweight computer data
interchange format. It is a text-based, human-readable format for representing simple data
structures and associative arrays (called objects). The JSON format is often used for transmitting
structured data over a network connection in a process called serialization. Its main application is
in Ajax web application programming, where it serves as an alternative to the XML format.
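
A quick sketch of the two formats side by side, using Python's standard json and xml.etree modules (the
record itself is made up): the same structured data is serialized to JSON, as an Ajax response might be,
and to XML for systems that expect markup.

# Serializing the same (made-up) record as JSON and as XML.
import json
import xml.etree.ElementTree as ET

record = {"id": 42, "name": "cloud-notes", "tags": ["iaas", "paas"]}

# JSON: compact, text-based, maps directly onto objects and arrays.
payload = json.dumps(record)
assert json.loads(payload)["name"] == "cloud-notes"

# XML: markup elements defined by us to describe the same structure.
root = ET.Element("record", id=str(record["id"]))
ET.SubElement(root, "name").text = record["name"]
tags = ET.SubElement(root, "tags")
for t in record["tags"]:
    ET.SubElement(tags, "tag").text = t

print(payload)
print(ET.tostring(root, encoding="unicode"))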

Solution Stacks (LAMP and LAPP):


What is LAMP?

LAMP is an open-source Web development platform that uses Linux as the operating system, Apache as
the Web server, MySQL as the relational database management system and PHP/Perl/Python as the
object-oriented scripting language. Sometimes LAMP is referred to as a LAMP stack because the
platform has four layers. Stacks can be built on different operating systems. LAMP is an example of a web
service stack, named as an acronym. The LAMP components are largely interchangeable and not limited
to the original selection. LAMP is suitable for building dynamic web sites and web applications. Since its
creation, the LAMP model has been adapted to other component choices, though these typically still
consist of free and open-source software. Developers that use these tools with a Windows operating
system instead of Linux are said to be using WAMP, with a Macintosh system MAMP, and with a Solaris
system SAMP.

Linux, Apache, MySQL and PHP, all of them add something unique to the development of high-
performance web applications. Originally popularized from the phrase Linux, Apache, MySQL, and PHP,
the acronym LAMP now refers to a generic software stack model. The modularity of a LAMP stack may
vary. Still, this particular software combination has become popular because it is sufficient to host a
wide variety of website frameworks, such as Joomla, Drupal, and WordPress.

The components of the LAMP stack are present in the software repositories of most Linux
distributions. The LAMP bundle can be combined with many other free and open-source software
packages, such as the following:
o netsniff-ng for security testing and hardening
o Snort, an intrusion detection (IDS) and intrusion prevention (IPS) system
o RRDtool for diagrams
o Nagios, Cacti, or collectd for monitoring

LAPP - WEB STACK

The LAPP stack is an open-source web platform that can be used to run dynamic web sites and servers. It
is considered by many to be a powerful alternative to the more popular LAMP stack and includes Linux,
Apache, PostgreSQL (instead of MySQL) and PHP, Python and Perl.

Syndication (Atom, Atom Publishing Protocol, and RSS):


In general, syndication is the supply of material for reuse and integration with other material, often
through a paid service subscription. The most common example of syndication is in newspapers, where
such content as wire-service news, comics, columns, horoscopes, and crossword puzzles are usually
syndicated content. Newspapers receive the content from the content providers, reformat it as
required, integrate it with other copy, print it, and publish it.

Atom: Atom is an XML-based document format that describes lists of related information known as
"feeds". Feeds are composed of a number of items, known as "entries", each with an extensible set of
attached meta-data. For example, each entry has a title. The primary use case that Atom addresses is
the syndication of Web content such as web logs and news headlines to Web sites as well as directly to
user agents.
Atom Publishing Protocol: The Atom Publishing Protocol (AtomPub) is an application level protocol for
publishing and editing Web resources. The protocol is based on HTTP transfer of Atom-formatted
representations. The Atom format is documented in the Atom Syndication Format.

Protocol: The protocol supports the creation of Web Resources and provides facilities for:
a. Collections: Sets of Resources, which can be retrieved in whole or in part.
b. Services: Discovery and description of Collections.
c. Editing: Creating, editing, and deleting Resources.

RSS: RSS stands for Really Simple Syndication. RSS allows you to syndicate your site content. RSS defines
an easy way to share and view headlines and content. RSS files can be automatically updated. RSS allows
personalized views for different sites. RSS is written in XML.

Why use RSS?

RSS was designed to show selected data. Without RSS, users will have to check your site daily for new
updates. This may be too time-consuming for many users. With an RSS feed (RSS is often called a News
feed or RSS feed) they can check your site faster using an RSS aggregator (a site or program that gathers
and sorts out RSS feeds). Since RSS data is small and fast-loading, it can easily be used with services like
cell phones or PDA's. Web-rings with similar information can easily share data on their web sites to
make them better and more useful.
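
To make the format concrete, here is a small sketch that reads an RSS 2.0 feed with Python's standard
library; the feed URL is a placeholder, and only the channel and item titles and links are extracted, which
is essentially what an aggregator does.

# Sketch of a tiny RSS reader: fetch a feed and print item titles and links.
# The URL is a placeholder; any RSS 2.0 feed has the same channel/item layout.
import urllib.request
import xml.etree.ElementTree as ET

def read_feed(url: str = "https://example.com/feed.rss") -> None:
    with urllib.request.urlopen(url) as resp:
        root = ET.fromstring(resp.read())        # <rss><channel>...</channel></rss>
    channel = root.find("channel")
    print("Feed:", channel.findtext("title"))
    for item in channel.findall("item"):         # each syndicated entry
        print("-", item.findtext("title"), "->", item.findtext("link"))

if __name__ == "__main__":
    read_feed()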

Cloud Security Standards:

Cloud-based services are now a crucial component of many businesses, with technology providers
adhering to strict privacy and data security guidelines to protect the privacy of user information.
Cloud security standards assist and guide organizations in ensuring secure cloud operations.
What are Cloud Security Standards?
Due to the various security dangers facing the cloud, it was essential to establish guidelines for how work
is done in the cloud. Cloud security standards offer a thorough framework for how cloud security is
upheld with regard to both the user and the service provider.
 Cloud security standards provide a roadmap for businesses transitioning from a traditional
approach to a cloud-based approach by providing the right tools, configurations, and
policies required for security in cloud usage.
 It helps to devise an effective security strategy for the organization.
 It also supports organizational goals like privacy, portability, security, and interoperability.
 Certification with cloud security standards increases trust and gives businesses a
competitive edge.

Need for Cloud Security Standards


1. Ensure cloud computing is an appropriate environment: Organizations need to make sure
that cloud computing is the appropriate environment for the applications as security and
mitigating risk are the major concerns.
2. To ensure that sensitive data is safe in the cloud: Organizations need a way to make sure
that the sensitive data is safe in the cloud while remaining compliant with standards and
regulations.
3. No existing clear standard: Cloud security standards are essential as earlier there were no
existing clear standards that can define what constitutes a secure cloud environment.
Thus, making it difficult for cloud providers and cloud users to define what needs to be
done to ensure a secure environment.
4. Need for a framework that addresses all aspects of cloud security: There is a need for
businesses to adopt a standard framework that addresses all aspects of cloud security,
rather than a patchwork of partial guidelines.

Lack of Cloud Security Standards


 Due to the lack of adequate cloud security standards, enterprises and CSPs have been forced to
rely on a patchwork of auditing needs, regulatory requirements, industry mandates, and
data-center standards for direction on protecting their cloud environments.
 Because of this, cloud security is more difficult to get right than it first appears, and such a
fragmented strategy does not meet the criteria for "good security".

Top 5 Cloud Computing Features:

When users evaluate a cloud service provider, they need to make sure that it offers these five cloud
computing features:

Feature 1: Advanced Perimeter Firewall


Most firewalls are simple because they inspect only the source and destination of packets. However,
some more advanced firewalls perform stateful packet inspection: they check a packet's integrity before
approving or rejecting it.
Top-of-the-line firewalls, for example Palo Alto Networks' perimeter firewall, also examine the data
carried in the packet, including the file type, source, destination, and integrity. This granularity is
necessary to prevent the most advanced persistent threats.

Features 2: Intrusion Detection Systems with Event Logging


IT security compliance standards require businesses to have a means of tracking and recording all types
of intrusion attempts. Thus, IDS event logging solutions are necessary for all companies that want to
meet compliance standards such as PCI and HIPAA.
Some cloud providers offer an IDS monitoring service and update the security rules of their firewalls to
counter the threat signals and malicious IP addresses detected for all of their cloud users.

Features 3: Internal Firewalls for Each Application & Databases


Even a top-of-the-line perimeter firewall blocks only external cyber attacks; internal attacks are still a
significant danger. An infrastructure without internal firewalls to restrict access to sensitive data and
applications cannot be considered secure. For example, a compromised employee user account can allow
hackers to bypass the perimeter firewall altogether.

Feature 4: Data-at-Rest Encryption


Data encryption is one of the most effective methods for keeping sensitive data stored in the cloud
infrastructure safe from unauthorized use. Moreover, strong encryption minimizes the chance that stolen
data can be put to any use, and it buys time to alert affected users so they can take steps to protect their
identity.
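
As a minimal sketch of client-side data-at-rest encryption, assuming the third-party cryptography
package is available: data is encrypted with a symmetric key before it is written to cloud storage, so a
stolen object is useless without the key (key management itself is outside the scope of this sketch).

# Sketch: encrypt data before storing it in the cloud (client-side encryption).
# Assumes the third-party "cryptography" package; key storage/rotation omitted.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # keep this key outside the cloud store
cipher = Fernet(key)

plaintext = b"customer record: account=1234, balance=99.50"
ciphertext = cipher.encrypt(plaintext)       # what actually lands on disk

# Later, after fetching the object back from storage:
assert cipher.decrypt(ciphertext) == plaintext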

Feature 5: Tier IV Data Centers with Strong Physical Security


The last avenue of attack for hackers and industrial spies is the physical hardware used to run the cloud
environment. If attackers get direct access to the devices that run the cloud, they have free rein to take
data or upload malware directly to the local machines. Thus, a provider should use Tier IV data centers
that protect the cloud environment and restrict access to the physical systems. A secure Tier IV data
center uses measures such as:
 24X7 CCTV monitoring
 Controlled access checkpoints via biometric security controls
 Armed security patrols
These security measures are essential for keeping unauthorized users away from directly accessing the
hardware through which the cloud is running.

What is grid computing?

Grid computing is a computing infrastructure that combines computer resources spread over different
geographical locations to achieve a common goal. All unused resources on multiple computers are
pooled together and made available for a single task. Organizations use grid computing to perform large
tasks or solve complex problems that are difficult to do on a single computer.

For example, meteorologists use grid computing for weather modeling. Weather modeling is a
computation-intensive problem that requires complex data management and analysis. Processing
massive amounts of weather data on a single computer is slow and time consuming. That’s why
meteorologists run the analysis over geographically dispersed grid computing infrastructure and
combine the results.

Why is grid computing important?

Organizations use grid computing for several reasons.

Efficiency

With grid computing, you can break down an enormous, complex task into multiple subtasks. Multiple
computers can work on the subtasks concurrently, making grid computing an efficient computational
solution.

Cost

Grid computing works with existing hardware, which means you can reuse existing computers. You can
save costs while accessing your excess computational resources. You can also cost-effectively access
resources from the cloud.
Flexibility

Grid computing is not constrained to a specific building or location. You can set up a grid computing
network that spans several regions. This allows researchers in different countries to work collaboratively
with the same supercomputing power.

Programming Support of Google App Engine:


The GAE programming model supports two languages: Java and Python. The client environment includes
an Eclipse plug-in for Java that allows you to debug your GAE application on your local machine, and the
Google Web Toolkit is available for Java web application developers. Python is used with frameworks
such as Django and CherryPy, but Google also provides a webapp Python environment. There are several
powerful constructs for storing and accessing data. The data store is a NoSQL data management system
for entities. Java offers the Java Data Objects (JDO) and Java Persistence API (JPA) interfaces implemented
by the DataNucleus Access platform, while Python has a SQL-like query language called GQL.
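
For a feel of the Python side, here is a hedged sketch in the style of the classic App Engine Python runtime
(google.appengine.ext.db with a GQL query); the Greeting model and its fields are invented, and the code
only runs inside the App Engine SDK environment.

# Hedged sketch in the style of the classic GAE Python runtime: a datastore
# entity plus a GQL query. The Greeting model and its fields are invented.
from google.appengine.ext import db

class Greeting(db.Model):
    content = db.StringProperty()
    date = db.DateTimeProperty(auto_now_add=True)

def post_and_list(text):
    Greeting(content=text).put()                 # store an entity in the data store
    q = db.GqlQuery("SELECT * FROM Greeting ORDER BY date DESC LIMIT 10")
    return [g.content for g in q]                # SQL-like query over entities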

The performance of the data store can be enhanced by in-memory caching using the memcache, which
can also be used independently of the data store. Recently, Google added the blobstore which is
suitable for large files as its size limit is 2 GB. There are several mechanisms for incorporating external
resources. The Google SDC Secure Data Connection can tunnel through the Internet and link your
intranet to an external GAE application. The URL Fetch operation provides the ability for applications to
fetch resources and communicate with other hosts over the Internet using HTTP and HTTPS requests.

An application can use Google Accounts for user authentication. Google Accounts handles user account
creation and sign-in, and a user that already has a Google account (such as a Gmail account) can use that
account with your app. GAE provides the ability to manipulate image data using a dedicated Images
service which can resize, rotate, flip, crop, and enhance images. A GAE application is configured to
consume resources up to certain limits or quotas. With quotas, GAE ensures that your application won’t
exceed your budget, and that other applications running on GAE won’t impact the performance of your
app. In particular, GAE use is free up to certain quotas. Google File System (GFS) GFS is a fundamental
storage service for Google’s search engine. GFS was designed for Google applications, and Google
applications were built for GFS.

There are several concerns in GFS. The first is the component failure rate: as servers are composed of
inexpensive commodity components, it is the norm rather than the exception that concurrent failures
will occur all the time. Another concerns the file size in GFS. GFS typically will hold a large number of huge
files, each 100 MB or larger, with files that are multiple GB in size quite common. Thus, Google has chosen
its file data block size to be 64 MB instead of the 4 KB used in typical traditional file systems. The I/O
pattern in Google applications is also special. Files are typically written once, and the write operations
often append data blocks to the end of files. Multiple append operations might be concurrent. A
customized API can simplify the problem and let it focus on Google applications.

Big Table
BigTable was designed to provide a service for storing and retrieving structured and semistructured
data. BigTable applications include storage of web pages, per-user data, and geographic locations. The
database needs to support very high read/write rates and the scale might be millions of operations per
second. Also, the database needs to support efficient scans over all or interesting subsets of data, as well
as efficient joins of large one-to-one and one-to-many data sets. The application may need to examine
data changes over time. The BigTable system is scalable, which means the system has thousands of
servers, terabytes of in-memory data, petabytes of disk-based data, millions of reads/writes per second,
and efficient scans. BigTable is used in many projects, including Google Search, Orkut, and Google
Maps/Google Earth, among others. The BigTable system is built on top of an existing Google cloud
infrastructure. BigTable uses the following building blocks:
1. GFS: stores persistent state
2. Scheduler: schedules jobs involved in BigTable serving
3. Lock service: master election, location bootstrapping
4. MapReduce: often used to read/write BigTable data
BigTable provides a simplified data model called Web Table, compared to traditional database systems.
Figure (a) shows the data model of a sample table. Web Table stores the data about a web page. Each
web page can be accessed by the URL. The URL is considered the row index.
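
A toy picture of the Web Table data model (purely illustrative, not the Bigtable API): each row is keyed by
the page URL, columns are grouped into families, and every cell keeps timestamped versions so changes
can be examined over time.

# Toy sketch of the Web Table data model: row key = URL, column families
# hold timestamped cell versions. Purely illustrative, not the Bigtable API.
web_table = {
    "com.example.www/index.html": {            # row key (reversed-domain URL)
        "contents": {                          # column family
            "": [(1699999999, "<html>new</html>"),
                 (1690000000, "<html>old</html>")],   # versions by timestamp
        },
        "anchor": {                            # another family: incoming links
            "cnnsi.com": [(1699999999, "CNN")],
        },
    },
}

def latest(row, family, column):
    versions = web_table[row][family][column]
    return max(versions)[1]                    # newest timestamp wins

print(latest("com.example.www/index.html", "contents", ""))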

Programming on Amazon AWS:


AWS platform has many features and offers many services

Features:

1. Relational Database Service (RDS) with a messaging interface


2. Elastic MapReduce capability
3. NOSQL support in SimpleDB

Capabilities:

1. Auto-scaling enables you to automatically scale your Amazon EC2 capacity up or down according to
conditions.
2. Elastic load balancing automatically distributes incoming application traffic across multiple Amazon
EC2 instances.
3. CloudWatch is a web service that provides monitoring for AWS cloud resources, operational
performance, and overall demand patterns—including metrics such as CPU utilization, disk reads
and writes, and network traffic.

Amazon provides several types of preinstalled VMs. Instances are often called Amazon Machine Images
(AMIs), which are preconfigured with operating systems based on Linux or Windows, plus additional
software. AMIs are the templates for instances, which are running VMs. The AMIs are formed from the
virtualized compute, storage, and server resources.
Private AMI: Images created by you, which are private by default. You can grant access to other users to
launch your private images.
Public AMI: Images created by users and released to the AWS community, so anyone can launch
instances based on them.
Paid AMI: You can create images providing specific functions that can be launched by anyone willing to
pay you per each hour of usage.
Amazon Simple Storage Service (S3)

Amazon S3 provides a simple web services interface that can be used to store and retrieve any amount
of data, at any time, from anywhere on the web. S3 provides the object-oriented storage service for
users. Users can access their objects through Simple Object Access Protocol (SOAP) with either browsers
or other client programs which support SOAP. SQS is responsible for ensuring a reliable message service
between two processes.
The fundamental operation unit of S3 is called an object. Each object is stored in a bucket and retrieved
via a unique, developer-assigned key. In other words, the bucket is the container of the object. Besides
unique key attributes, the object has other attributes such as values, metadata, and access control
information. Through the key-value programming interface, users can write, read, and delete objects
containing from 1 byte to 5 gigabytes of data each. There are two types of web service interface for the
user to access the data stored in Amazon clouds. One is a REST (web 2.0) interface, and the other is a
SOAP interface.
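
The key-value flavor of the S3 interface can be sketched with the boto3 SDK; the bucket name and key are
placeholders, and credentials plus an existing bucket are assumed. An object is written under a
developer-assigned key and then read back.

# Sketch of S3's key-value interface using the boto3 SDK.
# Assumes AWS credentials are configured and the bucket already exists;
# "my-example-bucket" and the key are placeholders.
import boto3

s3 = boto3.client("s3")

s3.put_object(Bucket="my-example-bucket",
              Key="reports/2024/summary.txt",
              Body=b"hello from the cloud")

obj = s3.get_object(Bucket="my-example-bucket", Key="reports/2024/summary.txt")
print(obj["Body"].read())        # b'hello from the cloud'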

Here are some key features of S3:


 Redundant through geographic dispersion.
 Designed to provide 99.999999999% durability and 99.99% availability of objects over a given year,
with a cheaper reduced redundancy storage (RRS) option that offers lower durability.
 Authentication mechanisms to ensure that data is kept secure from unauthorized access. Objects
can be made private or public, and rights can be granted to specific users.
 Per-object URLs and ACLs (access control lists).
 Default download protocol of HTTP

Amazon Elastic Block Store (EBS) and SimpleDB

The Elastic Block Store (EBS) provides the volume block interface for saving and restoring the virtual
images of EC2 instances. The status of EC2 can now be saved in the EBS system after the machine is shut
down. Users can use EBS to save persistent data and mount it to running EC2 instances. (S3, by contrast, is
“Storage as a Service” with a messaging interface.) Multiple volumes can be mounted to the same
instance. These storage volumes behave like raw, unformatted block devices, with user-supplied device
names and a block device interface.

Amazon SimpleDB Service

SimpleDB provides a simplified data model based on the relational database data model. Structured
data from users must be organized into domains. Each domain can be considered a table. The items are
the rows in the table. A cell in the table is recognized as the value for a specific attribute (column name)
of the corresponding row. It is possible to assign multiple values to a single cell in the table. This is not
permitted in a traditional relational database. SimpleDB, like Azure Table, could be called “LittleTable” as
they are aimed at managing small amounts of information stored in a distributed table.
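The sketch below is a conceptual illustration of the SimpleDB data model in plain Python (it is not an AWS API call): a domain acts like a table, items are rows, and a single attribute (cell) may hold multiple values, unlike a traditional relational column. All names and values are made up.

# Conceptual sketch of SimpleDB's domain/item/attribute model.
domain_users = {                      # domain ~ table
    "item-001": {                     # item name ~ row key
        "name": ["Alice"],            # attribute ~ column, values in a list
        "email": ["a@example.com", "alice@example.org"],  # multi-valued cell
    },
    "item-002": {
        "name": ["Bob"],
        "email": ["bob@example.com"],
    },
}

# "Query": collect every e-mail address stored for item-001.
print(domain_users["item-001"]["email"])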

Programming on Microsoft Azure:


The Windows Azure Platform stack consists of a few distinct pieces, one of which (Windows Azure) is
examined in detail throughout this book. However, before beginning to examine Windows Azure, you
should know what the other pieces do, and how they fit in. The Windows Azure Platform is a group of
cloud technologies to be used by applications running in Microsoft’s data centers, on-premises and on
various devices. The first question people have when seeing its architecture is “Do I need to run my
application on Windows Azure to take advantage of the services on top?” The answer is “no.” You can
access Azure AppFabric services and SQL Azure, as well as the other pieces from your own data center or
the box under your desk, if you choose to.

This is not represented by a typical technology stack diagram—the pieces on the top don’t necessarily
run on the pieces on the bottom, and you’ll find that the technology powering these pieces is quite
different. For example, the authentication mechanism used in SQL Azure is different from the one used
in Windows Azure. A diagram showing the Windows Azure platform merely shows Microsoft’s vision in
the cloud space. Some of these products are nascent, and you’ll see them converge over time.

Now, let’s take a look at some of the major pieces.

Azure AppFabric

Azure AppFabric services provide typical infrastructure services required by both on-premises and cloud
applications. These services act at a higher level of the “stack” than Windows Azure (which you’ll learn
about shortly). Most of these services can be accessed through a public HTTP REST API, and hence can
be used by applications running on Windows Azure, as well as your applications running outside
Microsoft’s data centers. However, because of networking latencies, accessing these services from
Windows Azure might be faster because they are often hosted in the same data centers. Since this is a
distinct piece from the rest of the Windows Azure platform, we will not cover it in this book.

Following are the components of the Windows Azure AppFabric platform:

Service Bus: Hooking up services that live in different networks is tricky. There are several issues to work
through: firewalls, network hardware, and so on. The Service Bus component of Windows Azure
AppFabric is meant to deal with this problem. It allows applications to expose Windows Communication
Foundation (WCF) endpoints that can be accessed from “outside” (that is, from another application not
running inside the same location). Applications can expose service endpoints as public HTTP URLs that
can be accessed from anywhere. The platform takes care of such challenges as network address
translation, reliably getting data across, and so on.

Access Control: This service lets you use federated authentication for your service based on a claims-
based, RESTful model. It also integrates with Active Directory Federation Services, letting you integrate
with enterprise/on-premises applications.

SQL Azure

In essence, SQL Azure is SQL Server hosted in the cloud. It provides relational database features, but
does it on a platform that is scalable, highly available, and load-balanced. Most importantly, unlike SQL
Server, it is provided on a pay-as-you-go model, so there are no capital fees upfront (such as for
hardware and licensing). As you’ll see shortly, there are several similarities between SQL Azure and the
table services provided by Windows Azure. They both are scalable, reliable services hosted in Microsoft
data centers. They both support a pay-for-usage model. The fundamental differences come down to
what each system was designed to do.
Windows Azure tables were designed to provide low-cost, highly scalable storage. They don’t have any
relational database features—you won’t find foreign keys or joins or even SQL. SQL Azure, on the other
hand, was designed to provide these features. We will examine these differences in more detail later in
this book in the discussions about storage.

Windows Azure

Windows Azure is Microsoft’s platform for running applications in the cloud. You get on-demand
computing and storage to host, scale, and manage web applications through Microsoft data centers.
Unlike other versions of Windows, Windows Azure doesn’t run on any one machine—it is distributed
across thousands of machines. There will never be a DVD of Windows Azure that you can pop in and
install on your machine. Before looking at the individual features and components that make up
Windows Azure, let’s examine how Microsoft got to this point, and some of the thinking behind the
product.

Emerging Cloud Software Environments:

Innovative technologies like Edge computing, Containers, Artificial Intelligence (AI), Machine Learning
(ML), and Serverless computing are transforming the cloud big time. It has greatly enhanced the way
businesses function especially with enterprises learning to use cloud capabilities tactfully and
strategically. With the eruption of digital revolution, the cloud is an imperative enabler in optimizing
costs, providing flexibility, reliability, and automation at scale.

1.Containers

A container is a technology used to bundle software along with all its library and config files that are
required to run applications in a more flexible, reliable and accelerated manner on the cloud. They can
be easily moved and run on any operating system in any context in portable cloud environments.
Containerization is a virtualization technique that plays a vital role in modernizing application
development. It allows the developers to streamline and simplify the software development process
through container orchestration. This enables automated provisioning, deployment, management,
scaling, load balancing and networking of operations needed to run the packaged workloads and
services as container units.
The adoption of containerization is on the rise as many tech-oriented organizations see them as an
alternative to traditional virtual machines (VMs). Kubernetes is the most popular and widely used
orchestration platform which is also provided by most of the cloud service providers.
Most companies have developed cloud-native applications leveraging containerization to –
 Build once but portably run it anywhere
 Bring in significant savings on resources and operations
 Enable quicker software development
 Accomplish robust yet seamless horizontal scaling
Its limitations
 Containers provide lightweight isolation from the host OS as well as from other containers on the
same system, leading to weaker security boundaries when compared to virtual
machines.
 Containers share the host operating system, so they run well when everything targets a single
operating system, but this becomes a disadvantage if you need to use them across other
operating systems. You can run previous versions of the same operating system using lightweight
virtual machines.
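The "build once, run anywhere" idea above can be demonstrated with the Docker SDK for Python. This is a hedged sketch: it assumes the docker package is installed and a Docker daemon is running locally, and the image name is just a small public example.

# Hedged sketch: run a throwaway container from a portable image.
import docker

client = docker.from_env()

# Pull a small public image and run a container from it. The same image could
# be run unchanged on a laptop, a VM, or a Kubernetes node.
output = client.containers.run("alpine:latest", ["echo", "hello from a container"],
                               remove=True)
print(output.decode())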
2.Serverless

Serverless is a function-as-service model that handles critical hardware and software maintenance,
provisioning and scaling to accelerate efficient solutions on the cloud. It allows developers to build and
run services without managing the underlying infrastructure. Serverless computing is reviving the trend
towards pay-as-you-go and pay-as-you-use computing models, addressing much of the software
burden.
Serverless still uses physical servers, but they are managed by external vendors. The responsibility for
scaling, scheduling, patching, provisioning, and other routine infrastructure management tasks is
offloaded to cloud service providers and tools under the serverless model (also known as the cloud computing
application execution model). Thus, it enables developers to focus more on specific business logic for
their apps or processes.
Few benefits of serverless include –
 Easy scalability of code
 Pay-as-you-go model enabling cost-effectiveness
 Reduced time-to-market
 Innovation and flexibility
Its limitations
 Each time a serverless instance starts, it creates a new version of itself. This means it’s
difficult to replicate the created environment to verify how the code would work. Also,
developers do not have the visibility of backend processes making debugging a big
challenge, as the applications are compartmentalized into distinct, smaller functions.
 Building serverless functions on one platform can make it difficult to migrate to another.
For example, moving from AWS to Azure or Google Cloud requires code rewrites, APIs
that exist on one platform may not exist on another. It usually requires additional
manpower and costs as well.

3.Microservices

Microservices is a cloud-native architecture where large, monolithic applications are
compartmentalized into smaller components to simplify and fast-track software delivery. Here, a single
application comprises independent smaller services or modules that can be deployed easily. This
modular approach facilitates multiple small teams within an enterprise to deploy loosely coupled
broken-down modules, regardless of the actual size of the application. This enables continuous delivery
of the latest updated software, ultimately resulting in faster app delivery cycles. With Microservices –
 Code can be updated with ease
 New functionalities can be incorporated without disturbing the entire application
 Developers can use discrete stacks & programming languages for different components
 Broken-down modules can be scaled independently
The microservice architecture enables enhanced productivity, better resiliency and greater business
agility as a whole.
Its limitations
 Microservices are typically written in multiple programming languages, use different
technology stacks and have limited ability to reuse code. This can lead to increased
development time and costs. Also, sharing code between microservices can become a
challenge as well.
 Microservices are designed to be self-contained, and they rely extensively on the
network to communicate with each other. This can result in increased network traffic
and slower response times (network latency). Additionally, it is a challenge to track
down errors when several microservices are interacting with each other.
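The sketch below shows what one such self-contained service might look like: a single, independently deployable component exposing a small REST interface with Flask (assumed installed). The service name, routes, and port are illustrative; other services would run as separate processes or containers and call it over the network.

# Hedged sketch of a minimal microservice with Flask.
from flask import Flask, jsonify

app = Flask(__name__)


@app.route("/health")
def health():
    # A typical per-service health check used by orchestrators and load balancers.
    return jsonify(status="ok", service="inventory")


@app.route("/items/<item_id>")
def get_item(item_id):
    # In a real service this would query the service's own datastore.
    return jsonify(id=item_id, name="sample item")


if __name__ == "__main__":
    app.run(port=5001)   # illustrative port; each microservice gets its own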

4.Internet of Things (IoT):

According to the market research platform – IoT Analytics, the number of IoT-connected devices is
expected to grow by 22% to 27 billion devices by 2025.
The Internet of Things (IoT) describes the network of physical objects— “things”—that are embedded
with internet connectivity, sensors, software, and other hardware to connect and exchange data with
other devices and controls via the web. IoT can be present and utilized in ordinary household devices as
well as in sophisticated industrial tools. Devices and objects embedded with sensors connect to an
Internet of Things platform that integrates data from a variety of devices, applies analytics and shares
the most important information with applications designed for specific needs.
These powerful IoT platforms can identify which information is useful and which can safely be ignored. We
can use this information to identify patterns, make recommendations, and identify potential problems
before they occur.
Benefits of the Internet of Things –
 Effective use of resources
 Reduction of human efforts
 Decreases cost and improves productivity
 Enhances customer experience
Its limitations
 IoT systems are interconnected and communicate through networks. Therefore, despite
all security measures, the system remains largely uncontrollable and is highly prone to
different types of network attack.
 There is a high dependency on the internet, as IoT cannot function effectively without it.
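A typical pattern behind the description above is a device publishing sensor readings to a broker that an IoT platform subscribes to. The sketch below is a hedged example using the paho-mqtt helper (assumed installed); the broker hostname, device ID, and topic are placeholders.

# Hedged sketch: an IoT "thing" publishing a sensor reading over MQTT.
import json
import random

import paho.mqtt.publish as publish

reading = {"device_id": "sensor-42",
           "temperature_c": round(random.uniform(18, 30), 1)}

publish.single(
    topic="home/livingroom/temperature",
    payload=json.dumps(reading),
    hostname="broker.example.com",   # placeholder broker address
)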

5.Edge Computing

With the anticipated surge in data volumes, there is a common concern that enterprises will struggle to
reduce latency and inefficiency in data processing. This is where edge computing comes into play. Edge
computing allows enterprises to optimize their systems by offloading data processing to the source
where the data was created, rather than relying on the data center to process and analyze data.
Edge computing architecture moves critical processing tasks from a central location to servers and
devices at the “edge” of a network. In this framework, much of the data collected from edge endpoints
are never returned to the network core for processing and analysis. Instead, this data is processed
almost instantly by on-premise computing resources, allowing edge devices and applications to respond
rapidly to the changing condition and needs.
To put it simply, edge computing is computing that runs at or near data sources to reduce latency and
enhance bandwidth.
Advantages of Edge Computing –
 Bandwidth relaxation
 Improved data management
 Improved security
 Enhanced reliability & resilience
Its limitations
 Ensuring adequate security is often a challenge in edge distributed environments.
Because data processing takes place on the edge of the network, the risk of identity
theft and cyber security breaches is high. Also, with each new IoT device added, the
chances of an attacker breaking into the device increases.
 Implementing edge infrastructure can be costly and complex. It requires additional
equipment and resources that are expensive and need high maintenance. Edge also
requires additional local hardware for IoT devices to function properly, which poses an
additional investment.
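The core idea of processing near the data source can be shown without any cloud SDK at all. The dependency-free sketch below aggregates a burst of raw readings at the edge and forwards only a small summary (or an alert flag) to the cloud; the sample values and threshold are invented for illustration.

# Hedged sketch of edge-side filtering/aggregation before sending to the cloud.
from statistics import mean


def summarize_at_edge(samples, alert_threshold=80.0):
    """Reduce a burst of raw sensor samples to a compact summary."""
    return {
        "count": len(samples),
        "avg": round(mean(samples), 2),
        "max": max(samples),
        "alert": max(samples) > alert_threshold,   # react locally, instantly
    }


raw_samples = [71.2, 73.8, 90.5, 69.9]           # e.g. one second of readings
payload_for_cloud = summarize_at_edge(raw_samples)
print(payload_for_cloud)                          # only this summary leaves the edge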
6.Artificial Intelligence

Artificial intelligence is the next-gen of technology solutions that portray an all-together new perspective
towards the world of digitization. With solutions featuring machine intelligence independent of human
support, AI gains significant market dominance among existing tools today.
However, building AI applications is complex for many enterprises. With abundant computing and
storage options, the cloud plays a key role in offering deep learning tools. Cloud-based AI solutions are
powerful and popular technologies that help with the automation of tasks, increase business
productivity and enable improved decision-making. AI on the cloud supports the building of models and
apps, as well as operating, monitoring, and sharing them through machine learning algorithms.
Benefits of Artificial Intelligence –
 Improved monitoring and insights
 Intelligent automation
 Higher productivity and cost-efficiency
 Enhanced data analytics and management
Its limitations
 In the absence of human intelligence, machines can only perform the tasks they were
designed or programmed to do. When given an unscripted command,
machines fail to provide appropriate results, leading to dissatisfying experiences.
 Being complex, AI comes with a huge price. Apart from the cost of installation, their
repair and maintenance also add up significantly to the costs. In addition, software
programs are updated frequently to meet the needs of a changing environment, leading
to a continuous rise in expenses.

Understanding Core OpenStack Ecosystem:

OpenStack gives you a modular cloud infrastructure that runs off of standard hardware—letting you
deploy the tools you need, when you need them, all from one place.
OpenStack is an open source platform that uses pooled virtual resources to build and
manage private and public clouds. The tools that comprise the OpenStack platform, called "projects,"
handle the core cloud-computing services of compute, networking, storage, identity, and image services.
More than a dozen optional projects can also be bundled together to create unique, deployable clouds.

In virtualization, resources such as storage, CPU, and RAM are abstracted from a variety of vendor-
specific programs and split by a hypervisor before being distributed as needed. OpenStack uses a
consistent set of application programming interfaces (APIs) to abstract those virtual resources one step
further into discrete pools used to power standard cloud computing tools that administrators and users
interact with directly.
How does OpenStack work:

OpenStack is essentially a series of commands known as scripts. Those scripts are bundled into packages
called projects that relay tasks that create cloud environments. In order to create those environments,
OpenStack relies on 2 other types of software:
 Virtualization that creates a layer of virtual resources abstracted from hardware
 A base operating system (OS) that carries out commands given by OpenStack scripts
Think about it like this: OpenStack itself doesn't virtualize resources, but rather uses them to build
clouds. OpenStack also doesn’t execute commands, but rather relays them to the base OS. All 3
technologies—OpenStack, virtualization, and the base OS—must work together. That interdependency
is why so many OpenStack clouds are deployed using Linux®, which was the inspiration
behind RackSpace and NASA’s decision to release OpenStack as open source software.
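The "consistent set of APIs" mentioned above is what client libraries expose. The sketch below uses the openstacksdk client (assumed installed) to talk to the compute, image, and network projects of an OpenStack cloud; "mycloud" refers to a hypothetical entry in the operator's clouds.yaml configuration.

# Hedged sketch: one client, several OpenStack projects.
import openstack

conn = openstack.connect(cloud="mycloud")

# Compute project (Nova): list running servers.
for server in conn.compute.servers():
    print("server:", server.name, server.status)

# Image project (Glance): list available images.
for image in conn.image.images():
    print("image:", image.name)

# Network project (Neutron): list networks.
for network in conn.network.networks():
    print("network:", network.name)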

What is cloud migration? An introduction to moving to the cloud:


Cloud migration is the procedure of transferring applications, data, and other types of business
components to any cloud computing platform. There are several parts of cloud migration an
organization can perform. The most common model is the transfer of applications and data from an on-
premises, local data center to a public cloud. However, a cloud migration can also entail moving
applications and data from one cloud environment or provider to another, a model
called cloud-to-cloud migration. A third type, known as reverse cloud migration, cloud exit,
or cloud repatriation, moves applications or data from the cloud back to the local data center.

Pros of Cloud Migration: Organizations migrate to a cloud for various reasons, but, normally when faced
with many challenges of developing IT infrastructure within the most secure and cost-effective way
possible.

Some of the advantages of migrating to a cloud are as follows:


o Flexibility: No organization experiences the same level of demand from the same number of
users all the time. If our apps face fluctuations in traffic, cloud infrastructure permits us to
scale up and down to meet the demand. Hence, we use only those resources we require.
o Scalability: As the organization grows, so do its databases, analytics, and other escalating
workloads. The cloud provides the ability to expand existing infrastructure. Therefore,
applications have room to grow without impacting performance.
o Agility: Part of development is remaining elastic enough to respond to rapid
changes in technology resources. Cloud adoption offers this by drastically decreasing the time
it takes to procure new storage and computing inventory.
o Productivity: Our cloud provider could handle the complexities of our infrastructure so we can
concentrate on productivity. Furthermore, the remote accessibility and simplicity of most of the
cloud solutions define that our team can concentrate on what matters such as growing our
business.
o Security: The cloud can provide better security than many other data centers by storing data centrally.
Also, most cloud providers offer built-in security features, including cross-enterprise visibility,
periodic updates, and security analytics.
o Profitability: The cloud follows a pay-per-use model. There is no requirement to pay
extra charges or to invest continually in building, maintaining, updating, and training staff for
rooms full of physical servers.

Cloud Migration Strategy Types: Migrating to the cloud can be a good investment for our business. Like
many companies, we might be wondering where to start. Gartner specifies several options that are widely
called "the six Rs of migration", defined as follows:

1. Rehosting (lift-and-shift)
The most common path is rehosting (or lift-and-shift), which works just as it sounds. It takes our
application and drops it into the new hosting platform without changing the architecture or code
of the app. It is a common approach for enterprises unfamiliar with cloud computing, which benefit from the
speed of deployment without having to spend money or time on planning for expansion. However, by
migrating our existing infrastructure as-is, we are using the cloud just like another data center. For some
enterprises it pays to make better use of the cloud services available, for example by adding scalable
functions to the application to improve the experience for a growing segment of users.

2. Re-platforming
Re-platforming is also called "lift-tinker-and-shift". It involves making a few cloud optimizations without
modifying the app's core architecture. It is a good strategy for enterprises that are not yet ready for a full
re-architecture and expansion, or those enterprises that wish to build trust in the cloud first.

3. Re-factoring
It means rebuilding our applications from scratch to leverage cloud-native capabilities that we could not
otherwise use, such as serverless computing or auto-scaling. A potential disadvantage is vendor lock-in, as we are re-
creating the application on the provider's cloud infrastructure. It is the most expensive and time-consuming route, as we may
expect. But it is also the most future-proof choice for enterprises that wish to take advantage of more advanced cloud
features.
These first three are the most common approaches for migrating existing infrastructure.

4. Re-purchasing
It means replacing our existing applications with a new SaaS-based, cloud-native platform
(for example, replacing a homegrown CRM with Salesforce). The difficulty is losing the familiarity our team
has with the existing code and training them on a new platform. However, the benefit is avoiding the cost of
development.
Re-purchasing is the most cost-effective approach when moving away from a highly customized legacy
landscape and reducing the number of apps and services we have to manage. Once we have assessed the
nature and size of our application portfolio, we may find that cloud migration is not right for every application.

5. Retiring
When we find that an application is no longer useful, we can simply turn it off. The
resulting savings may strengthen the business case for migrating the applications we do
decide to move.

6. Re-visiting
Re-visiting (or retaining) means that some or all of our applications should remain in-house for now and
can be revisited for cloud migration at a later date. We should migrate only what makes sense for the business.

Cloud Migration Tools: Third-party vendors and cloud providers facilitate a lot of automated, cloud-
based, and open-source services and tools designed to:
o Certify post-migration success
o Manage and monitor its progress
o Help prepare for cloud migration

Microsoft Cloud Services:

Microsoft has a very widespread cloud computing offering which is under active development. Various efforts
are being made by Microsoft to add capabilities to its existing tools. Microsoft's approach is
to make progress towards "software plus services." Cloud services run completely on the cloud and can
be accessed using browsers via standard Service Oriented Architecture (SOA) protocols.

Microsoft's Cloud
Microsoft named their Cloud operating system Windows Azure Platform. It is the infrastructure created
by Microsoft for building, deploying and managing services and applications through the network
worldwide and is managed by the Data-centers of Microsoft. Microsoft Azure was announced in October
2008 and released on 1st February 2010.
It provides a wide variety of services which cloud users can use without purchasing their own hardware.
It enables rapid development of solutions and provides the resources for that environment. Without
the need to worry about assembling physical infrastructure, Azure's compute,
network, storage and application services allow users to focus on building great solutions.

Azure Services
Azure includes various services in its cloud technology. These are:

 Compute Services: This holds MS Azure services such as: Azure VM, Azure Website, Mobile
services etc.
 Data Services: It includes MS Azure Storage, Azure SQL database, etc.
 Application Services: It Includes those services that helps users to build and operate applications
such as Azure Active Directory, Service bus for connecting distributed systems, processing big-
data etc.
 Network Services: It includes Virtual network, content delivery network and Traffic manager of
Azure.

There are other services such as:

 BizTalk
 Big Compute
 Identity
 Messaging
 Media
 CDN etc...
The starting area for Microsoft's Cloud technology efforts may be found at Microsoft.com/cloud. It has a
vast range of cloud technology products and some leading Web-applications of the industry. Microsoft
Messenger became the market leader after America Online Instant Messenger (AIM). Gradually with the
rise of e-facility and marketing arena, Microsoft sees its future as giving best Web experience for
different types of devices such as PCs, desktops, laptops, tablets, smart-phones, etc.

Azure Virtual Machines


It is one of the central features of the IaaS capability of MS Azure, together with virtual networks. Azure
VMs support the deployment of Windows Server (or Linux) VMs in MS Azure's datacenters, where you
have complete control over the virtual machine's configuration. An Azure VM has three possible states:

 Running
 Stopped
 Stopped (De-allocated)

The VM acquires the stop (de-allocated) state by default when it is stopped in the Azure Management
Portal. If we want to keep it stopped but still allocated, we have to use a PowerShell cmdlet with the
following command:
> Stop-AzureVM -Name "az-essential" -ServiceName "az-essential" -StayProvisioned
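The same distinction between the two stop states is explicit in the modern Azure SDK for Python. The sketch below is a hedged equivalent using azure-identity and azure-mgmt-compute (assumed installed); the subscription, resource group, and VM names are placeholders, and note that it targets Azure Resource Manager rather than the classic management model used by the Stop-AzureVM cmdlet above.

# Hedged sketch: "stopped" versus "stopped (de-allocated)" via the Azure SDK.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

compute = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Stopped (still allocated, compute capacity still reserved):
compute.virtual_machines.begin_power_off("my-rg", "az-essential").wait()

# Stopped (de-allocated, compute billing stops):
compute.virtual_machines.begin_deallocate("my-rg", "az-essential").wait()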

Elements of Microsoft Azure


There are 6 main elements that form Windows Azure. These are:

 Compute
 Storage
 Application
 Fabric
 VM (Virtual Machines)
 Config (Configuration)

Amazon Cloud Services:

AWS tutorial provides basic and advanced concepts. Our AWS tutorial is designed for beginners and
professionals.
AWS stands for Amazon Web Services which uses distributed IT infrastructure to provide different IT
resources on demand.
Our AWS tutorial includes all the topics such as introduction, history of AWS, global infrastructure,
features of AWS, IAM, Storage services, Database services, etc.

What is AWS?
o AWS stands for Amazon Web Services.
o The AWS service is provided by the Amazon that uses distributed IT infrastructure to provide
different IT resources available on demand. It provides different services such as infrastructure
as a service (IaaS), platform as a service (PaaS) and packaged software as a service (SaaS).
o Amazon launched AWS, a cloud computing platform to allow the different organizations to take
advantage of reliable IT infrastructure.

Uses of AWS
o A small manufacturing organization can use its expertise to expand its business by leaving its
IT management to AWS.
o A large enterprise spread across the globe can utilize AWS to deliver training to its
distributed workforce.
o An architecture consulting company can use AWS to get high-compute rendering of
construction prototypes.
o A media company can use AWS to deliver different types of content, such as e-books or audio
files, to users worldwide.

Pay-As-You-Go
Based on the concept of Pay-As-You-Go, AWS provides the services to the customers.
AWS provides services to customers when required without any prior commitment or upfront
investment. Pay-As-You-Go enables customers to procure services from AWS such as:
o Computing
o Programming models
o Database storage
o Networking

Advantages of AWS
1) Flexibility
o We can get more time for core business tasks due to the instant availability of new features and
services in AWS.
o It provides effortless hosting of legacy applications. AWS does not require learning new
technologies, and migrating applications to AWS provides advanced computing and
efficient storage.
o AWS also offers a choice that whether we want to run the applications and services together or
not. We can also choose to run a part of the IT infrastructure in AWS and the remaining part in
data centres.

2) Cost-effectiveness
AWS requires no upfront investment, long-term commitment, and minimum expense when compared
to traditional IT infrastructure that requires a huge investment.

3) Scalability/Elasticity
Through AWS auto-scaling and elastic load balancing, capacity is automatically scaled up or down
as demand increases or decreases. These techniques are ideal for handling unpredictable
or very high loads. Due to this reason, organizations enjoy the benefits of reduced cost and increased
user satisfaction.
4) Security
o AWS provides end-to-end security and privacy to customers.
o AWS has a virtual infrastructure that offers optimum availability while managing full privacy and
isolation of their operations.
o Customers can expect high-level of physical security because of Amazon's several years of
experience in designing, developing and maintaining large-scale IT operation centers.
o AWS ensures the three aspects of security, i.e., Confidentiality, integrity, and availability of
user's data.

Cloud Applications:
Cloud service providers provide various applications in the field of art, business, data storage and
backup services, education, entertainment, management, social networking, etc.

Role of Cloud Computing In Social Networking

Cloud computing has become an integral part of modern social networking platforms. It is a model that
enables users to access shared computing resources, such as servers, storage, applications, and
services, over the Internet. Social networking platforms have evolved from simple text-based forums to
complex platforms that support multimedia content, real-time messaging, and social gaming. As these
platforms have grown in complexity, the need for scalable and reliable computing resources has
become increasingly important. Cloud computing has emerged as a viable solution to this problem,
providing the necessary computing power and storage space to support large-scale social networking
applications.
In this article, we will explore the role of cloud computing in social networking, discussing its benefits
and challenges and how it shapes the future of social networking.

Advantages of Cloud Computing in Social Networking: Cloud computing comes with various advantages
that can help in social networking. Some of its advantages include the following –

Scalability And Flexibility


One of the biggest benefits of cloud computing in social networking is its ability to scale quickly and
easily. Cloud computing platforms allow users to scale up or down depending on the demand for their
services. It means that social networking sites can handle large amounts of traffic during peak usage
periods without experiencing any downtime or slow loading times.
Cloud computing allows for flexibility in terms of data storage and processing. With the ability to easily
add or remove computing resources as needed, social networking sites can quickly adapt to changes in
user demand or unexpected events. This scalability and flexibility are crucial for social networking sites
to keep up with their users' ever-changing needs and maintain a competitive edge in the market. They
also enable social networking platforms to expand their reach and capabilities without worrying about
infrastructure constraints.

Cost-Efficient
Cloud computing is cost-effective. With cloud computing, social networking platforms can save much
money as they don't need to invest in expensive hardware or software; they only need to pay for what
they use. Cloud computing also eliminates the need for maintaining and managing physical servers,
reducing the costs of IT infrastructure and maintenance. As a result, social networking platforms can
redirect their resources towards enhancing user experience and developing innovative features.

Improved Collaboration
Cloud computing has revolutionized collaboration in social networking by enabling users to work on the
same project or document in real time, regardless of their location. Cloud-based tools such as Google
Docs and Dropbox allow multiple users to edit and share files simultaneously, which has made it easier
for remote teams to work together seamlessly. This has significantly improved productivity and reduced
turnaround time, making it a valuable asset for businesses and individuals.

Data Security
One of the primary concerns with cloud computing is that data is stored on remote servers and may be
accessed by unauthorized users. This can happen due to vulnerabilities in the software or infrastructure
or due to insider threats. However, cloud providers typically have robust security measures to protect
against these risks, such as encryption, access controls, and monitoring. Users can also take steps to
protect their data, such as implementing strong passwords and two-factor authentication and regularly
reviewing their security settings.
While there is a risk of data breaches in cloud computing, it is important to weigh this against the many
benefits that it provides. By taking appropriate security measures and partnering with a reputable cloud
provider, organizations can safely and securely leverage the power of the cloud to enhance their social
networking capabilities.

Role of Cloud Computing in Social Networking Platforms:

Cloud-based social networking platforms like Facebook, Twitter, LinkedIn, and Instagram are built on
cloud computing technology. These platforms use cloud computing to store and manage vast amounts
of user-generated data. These social networking platforms use cloud computing in various ways.

Role of Cloud Computing in E-mails:


Arguably, one of the most game-changing outcomes of the rise of cloud computing has been the rise
of "as a Service" (aaS) offerings. No longer do you need to break the bank to purchase physical hardware
to run computer systems in-house, or be beholden to rigid and overpriced equipment lease agreements
with vendors; "as a Service" offerings allow companies to commission a cloud-hosted software platform without
building or managing it themselves, as well as scale their platform easily on demand.
From Software as a Service (SaaS) and Security as a Service (SECaaS), to Infrastructure as a Service (IaaS)
and Platform as a Service (PaaS), to Analytics as a Service (AnaaS) and Data as a Service (DaaS), there are
almost as many Anything (or Everything) as a Service (XaaS) plays as there are Uber for Anything (Uber
for X) offerings. But that’s the beauty of cloud computing; if you can dream it, you can build and offer it…
without the physical hardware or in-house expertise.

Email as a Service: Email as a Service (EaaS) is an offering that provides access to an external, cloud-
hosted, email application, so that you do not need to build or power your email platform in-house.
It’s ideal for telcos and service providers who provide a consumer or SMB email platform for their
customers but know that email is not their core business specialty, so they prefer not to consume
valuable internal technical resources on email, when that investment is better spent elsewhere.
By subscribing to an Email as a Service, companies can enjoy the benefits of a managed email service,
without the associated long-term investments in capital infrastructure and human resources that are
typically required to deliver and maintain large email platforms.

Role of Cloud Computing in Customer Relationship Management:

CRM in cloud computing is a tool that is used to build a strong relationship with the customer. It is a
cloud-based software that allows access from any part of the world. This section explains
the meaning, types, benefits, and examples of customer relationship management software, plus its
relation to and significance for mobile CRM. CRM stands for Customer Relationship Management and
refers to all the tools, techniques, strategies, and technologies which are used by an organization to
acquire and retain customer relationships and customer data. Customer relationship management
ensures the smooth storage of customer data such as demographics, purchase behavior,
patterns, history, etc., as well as every interaction with a customer, to build strong relations and increase the
sales and profits of an organization.
CRM in cloud computing refers to CRM software delivered in a cloud-based form that is easily accessible
to customers over the internet. Nowadays many organizations use cloud CRM so that the customer can
easily access information via the internet. Moreover, computing systems have become so capable that
customers can easily access them via their phones. As a result, easy access to information leads to quicker sales
and conversions. In addition, the CRM cloud provides facilities for information sharing, backup, storage,
and access from any part of the world.
Virtualization Structures/Tools and Mechanisms:

In general, there are three common categories of VM architecture. Figure 3.1 shows the architectures of a
machine before and after virtualization. Prior to virtualization, the operating system controls the
hardware. After virtualization, a virtualization layer is inserted between the hardware and the operating
system. In such a case, the virtualization layer is responsible for converting portions of the real hardware
into virtual hardware. Therefore, different operating systems such as Linux and Windows can run on the
same physical machine simultaneously. Depending on the structure of the virtualization layer, there are
several categories of VM architecture, namely the hypervisor architecture, para-virtualization, and
host-based virtualization. The hypervisor is also known as the VMM (Virtual Machine Monitor); both
terms refer to the same virtualization layer.

1. Hypervisor and Xen Architecture
The hypervisor supports hardware-level virtualization (see Figure 3.1(b)) directly on bare-metal hardware
such as CPU, memory, disk, and network interfaces. The hypervisor software sits directly between the
physical hardware and its OS. This virtualization layer is referred to as either the VMM or the hypervisor.
The hypervisor provides hypercalls for the guest OSes and applications. Depending on the functionality, a
hypervisor can assume a micro-kernel architecture, like Microsoft Hyper-V, or a monolithic hypervisor
architecture, like VMware ESX for server virtualization.


Unit 3:

Basic Terms and Concepts:

Information security is a complex ensemble of techniques, technologies, regulations, and behaviors that
collaboratively protect the integrity of and access to computer systems and data. IT security measures
aim to defend against threats and interference that arise from both malicious intent and unintentional
user error. The upcoming sections define fundamental security terms relevant to cloud computing and
describe associated concepts.
Confidentiality: Confidentiality is the characteristic of something being made accessible only to
authorized parties (Figure 6.1). Within cloud environments, confidentiality primarily pertains to
restricting access to data in transit and storage.
Integrity: Integrity is the characteristic of not having been altered by an unauthorized party (Figure 6.2).
An important issue that concerns data integrity in the cloud is whether a cloud consumer can be
guaranteed that the data it transmits to a cloud service matches the data received by that cloud service.
Integrity can extend to how data is stored, processed, and retrieved by cloud services and cloud-based
IT resources.

Authenticity: Authenticity is the characteristic of something having been provided by an authorized


source. This concept encompasses non-repudiation, which is the inability of a party to deny or challenge
the authentication of an interaction. Authentication in non-repudiable interactions provides proof that
these interactions are uniquely linked to an authorized source. For example, a user may not be able to
access a non-repudiable file after its receipt without also generating a record of this access.
Availability: Availability is the characteristic of being accessible and usable during a specified time
period. In typical cloud environments, the availability of cloud services can be a responsibility that is
shared by the cloud provider and the cloud carrier. The availability of a cloud-based solution that
extends to cloud service consumers is further shared by the cloud consumer.
Threat: A threat is a potential security violation that can challenge defenses in an attempt to breach
privacy and/or cause harm. Both manually and automatically instigated threats are designed to exploit
known weaknesses, also referred to as vulnerabilities. A threat that is carried out results in an attack.
Vulnerability: A vulnerability is a weakness that can be exploited either because it is protected by
insufficient security controls, or because existing security controls are overcome by an attack. IT
resource vulnerabilities can have a range of causes, including configuration deficiencies, security policy
weaknesses, user errors, hardware or firmware flaws, software bugs, and poor security architecture.
Risk: Risk is the possibility of loss or harm arising from performing an activity. Risk is typically measured
according to its threat level and the number of possible or known vulnerabilities. Two metrics that can
be used to determine risk for an IT resource are:
• the probability of a threat occurring to exploit vulnerabilities in the IT resource
• the expectation of loss upon the IT resource being compromised
Details regarding risk management are covered later in this chapter.
Security Controls: Security controls are countermeasures used to prevent or respond to security threats
and to reduce or avoid risk. Details on how to use security countermeasures are typically outlined in the
security policy, which contains a set of rules and practices specifying how to implement a system,
service, or security plan for maximum protection of sensitive and critical IT resources.
Security Mechanisms: Countermeasures are typically described in terms of security mechanisms, which
are components comprising a defensive framework that protects IT resources, information, and services.
Security Policies: A security policy establishes a set of security rules and regulations. Often, security
policies will further define how these rules and regulations are implemented and enforced. For example,
the positioning and usage of security controls and mechanisms can be determined by security policies.
Summary of Key Points
• Confidentiality, integrity, authenticity, and availability are characteristics that can be associated with
measuring security.
• Threats, vulnerabilities, and risks are associated with measuring and assessing insecurity, or the lack of
security.
• Security controls, mechanisms, and policies are associated with establishing countermeasures and
safeguards in support of improving security.

Threat Agents:
A threat agent is an entity that poses a threat because it is capable of carrying out an attack. Cloud
security threats can originate either internally or externally, from humans or software programs.
Corresponding threat agents are described in the upcoming sections. Figure 6.3 illustrates the role a
threat agent assumes in relation to vulnerabilities, threats, and risks, and the safeguards established by
security policies and security mechanisms.
Anonymous Attacker: An anonymous attacker is a non-trusted cloud service consumer without
permissions in the cloud (Figure 6.4). It typically exists as an external software program that launches
network-level attacks through public networks. When anonymous attackers have limited information on
security policies and defenses, it can inhibit their ability to formulate effective attacks. Therefore,
anonymous attackers often resort to committing acts like bypassing user accounts or stealing user
credentials, while using methods that either ensure anonymity or require substantial resources for

prosecution. Figure 6.4. The notation used for an anonymous attacker.

Malicious Service Agent: A malicious service agent is able to intercept and forward the network traffic
that flows within a cloud (Figure 6.5). It typically exists as a service agent (or a program pretending to be
a service agent) with compromised or malicious logic. It may also exist as an external program able to
remotely intercept and potentially corrupt message contents. Figure 6.5. The notation used for a

malicious service agent.

Trusted Attacker: A trusted attacker shares IT resources in the same cloud environment as the cloud
consumer and attempts to exploit legitimate credentials to target cloud providers and the cloud tenants
with whom they share IT resources (Figure 6.6). Unlike anonymous attackers (which are nontrusted),
trusted attackers usually launch their attacks from within a cloud’s trust boundaries by abusing
legitimate credentials or via the appropriation of sensitive and confidential information. Trusted
attackers (also known as malicious tenants) can use cloud-based IT resources for a wide range of
exploitations, including the hacking of weak authentication processes, the breaking of encryption, the
spamming of e-mail accounts, or to launch common attacks, such as denial of service campaigns.

Malicious Insider: Malicious insiders are human threat agents acting on behalf of or in relation to the
cloud provider. They are typically current or former employees or third parties with access to the cloud
provider’s premises. This type of threat agent carries tremendous damage potential, as the malicious
insider may have administrative privileges for accessing cloud consumer IT resources. Note A notation
used to represent a general form of human-driven attack is the workstation combined with a lightning
bolt (Figure 6.7). This generic symbol does not imply a specific threat agent, only that an attack was
initiated via a workstation. Figure 6.7. The notation used for an attack originating from a workstation.

The human symbol is optional

Summary of Key Points:


• An anonymous attacker is a non-trusted threat agent that usually attempts attacks from outside of a
cloud’s boundary.
• A malicious service agent intercepts network communication in an attempt to maliciously use or
augment the data.
• A trusted attacker exists as an authorized cloud service consumer with legitimate credentials that it
uses to exploit access to cloud-based IT resources.
• A malicious insider is a human that attempts to abuse access privileges to cloud premises.

Cloud Security Threats:


This section introduces several common threats and vulnerabilities in cloud-based environments and
describes the roles of the aforementioned threat agents. Security mechanisms that are used to counter
these threats are covered in Chapter 10.

Traffic Eavesdropping: Traffic eavesdropping occurs when data being transferred to or within a cloud
(usually from the cloud consumer to the cloud provider) is passively intercepted by a malicious service
agent for illegitimate information gathering purposes (Figure 6.8). The aim of this attack is to directly
compromise the confidentiality of the data and, possibly, the confidentiality of the relationship between
the cloud consumer and cloud provider. Because of the passive nature of the attack, it can more easily
go undetected for extended periods of time.
Malicious Intermediary: The malicious intermediary threat arises when messages are intercepted and
altered by a malicious service agent, thereby potentially compromising the message’s confidentiality
and/or integrity. It may also insert harmful data into the message before forwarding it to its destination.
Figure 6.9 illustrates a common example of the malicious intermediary attack. Note: While not as
common, the malicious intermediary attack can also be carried out by a malicious cloud service
consumer program.

Denial of Service: The objective of the denial of service (DoS) attack is to overload IT resources to the
point where they cannot function properly. This form of attack is commonly launched in one of the
following ways:
• The workload on cloud services is artificially increased with imitation messages or repeated
communication requests.
• The network is overloaded with traffic to reduce its responsiveness and cripple its performance.
• Multiple cloud service requests are sent, each of which is designed to consume excessive memory and
processing resources.
Successful DoS attacks produce server degradation and/or failure, as illustrated in Figure 6.10.
Insufficient Authorization: The insufficient authorization attack occurs when access is granted to an
attacker erroneously or too broadly, resulting in the attacker getting access to IT resources that are
normally protected. This is often a result of the attacker gaining direct access to IT resources that were
implemented under the assumption that they would only be accessed by trusted consumer programs
(Figure 6.11).

A variation of this attack, known as weak authentication, can result when weak passwords or shared
accounts are used to protect IT resources. Within cloud environments, these types of attacks can lead to
significant impacts depending on the range of IT resources and the range of access to those IT resources
the attacker gains.
Virtualization Attack: Virtualization provides multiple cloud consumers with access to IT resources that
share underlying hardware but are logically isolated from each other. Because cloud providers grant
cloud consumers administrative access to virtualized IT resources (such as virtual servers), there is an
inherent risk that cloud consumers could abuse this access to attack the underlying physical IT
resources. A virtualization attack exploits vulnerabilities in the virtualization platform to jeopardize its
confidentiality, integrity, and/or availability. This threat is illustrated in Figure 6.13, where a trusted
attacker successfully accesses a virtual server to compromise its underlying physical server. With public
clouds, where a single physical IT resource may be providing virtualized IT resources to multiple cloud
consumers, such an attack can have significant repercussions.

Overlapping Trust Boundaries: If physical IT resources within a cloud are shared by different cloud
service consumers, these cloud service consumers have overlapping trust boundaries. Malicious cloud
service consumers can target shared IT resources with the intention of compromising cloud consumers
or other IT resources that share the same trust boundary. The consequence is that some or all of the
other cloud service consumers could be impacted by the attack and/or the attacker could use virtual IT
resources against others that happen to also share the same trust boundary. Figure 6.14 illustrates an
example in which two cloud service consumers share virtual servers hosted by the same physical server
and, resultantly, their respective trust boundaries overlap.
Summary of Key Points
• Traffic eavesdropping and malicious intermediary attacks are usually carried out by malicious service
agents that intercept network traffic.
• A denial of service attack occurs when a targeted IT resource is overloaded with requests in an attempt
to cripple or render it unavailable.
• The insufficient authorization attack occurs when access is granted to an attacker erroneously or too
broadly, or when weak passwords are used.
• A virtualization attack exploits vulnerabilities within virtualized environments to gain unauthorized
access to underlying physical hardware.
• Overlapping trust boundaries represent a threat whereby attackers can exploit cloud-based IT resources
shared by multiple cloud consumers.

Additional Considerations:

This section provides a diverse checklist of issues and guidelines that relate to cloud security.
The listed considerations are in no particular order.

Security Policy Disparity:

When a cloud consumer places IT resources with a public cloud provider, it may need to accept
that its traditional information security approach may not be identical or even similar to that of
the cloud provider. This incompatibility needs to be assessed to ensure that any data or other IT
assets being relocated to a public cloud are adequately protected. Even when leasing raw
infrastructure-based IT resources, the cloud consumer may not be granted sufficient
administrative control or influence over security policies that apply to the IT resources leased
from the cloud provider. This is primarily because those IT resources are still legally owned by
the cloud provider and continue to fall under its responsibility. Furthermore, with some public
clouds, additional third parties, such as security brokers and certificate authorities, may introduce
their own distinct set of security policies and practices, further complicating any attempts to
standardize the protection of cloud consumer assets.

Encryption:
Google Cloud, for example, encrypts data in transit between its facilities and at rest, which ensures the data
can be accessed only by authorized roles and services with audited access to the encryption keys.

Encryption at rest: Cloud Storage always encrypts your data on the server side, before it is written to
disk, at no additional charge. Besides this standard, Google-managed behavior, there are additional
ways to encrypt your data when using Cloud Storage.

Encryption in transit: Encryption in transit protects your data if communications are intercepted while
data moves between your site and the cloud provider or between two services. This protection is
achieved by encrypting the data before transmission, authenticating the endpoints, and decrypting and
verifying the data on arrival.

Encryption in use: Encryption in use protects your data when it is being used by servers to run
computations. Using Confidential Computing, Google Cloud encrypts data in use with Confidential VMs
and Confidential Google Kubernetes Engine Nodes.
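Beyond provider-managed encryption, data can also be encrypted on the client side before it ever leaves the consumer's environment. The following minimal sketch uses the cryptography package's Fernet recipe (assumed installed); the sample plaintext is illustrative.

# Hedged sketch: client-side symmetric encryption before storing data at rest.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice, keep this in a key manager
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"confidential cloud record")
# ...ciphertext could now be written to object storage...

plaintext = cipher.decrypt(ciphertext)
assert plaintext == b"confidential cloud record"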
What is a digital signature?

A digital signature is a mathematical technique used to validate the authenticity and integrity of a digital
document, message or software. It's the digital equivalent of a handwritten signature or stamped seal,
but it offers far more inherent security. A digital signature is intended to solve the problem of tampering
and impersonation in digital communications.
Digital signatures can provide evidence of origin, identity and status of electronic documents,
transactions or digital messages. Signers can also use them to acknowledge informed consent. In many
countries, including the U.S., digital signatures are considered legally binding in the same way as
traditional handwritten document signatures.

How do digital signatures work?


Digital signatures are based on public key cryptography, also known as asymmetric cryptography. Using
a public key algorithm -- such as Rivest-Shamir-Adleman, or RSA -- two keys are generated, creating a
mathematically linked pair of keys: one private and one public.
Digital signatures work through public key cryptography's two mutually authenticating cryptographic
keys. For encryption and decryption, the person who creates the digital signature uses a private key to
encrypt signature-related data. The only way to decrypt that data is with the signer's public key.
If the recipient can't open the document with the signer's public key, that indicates there's a problem
with the document or the signature. This is how digital signatures are authenticated.
Digital certificates, also called public key certificates, are used to verify that a public key genuinely belongs to the signer (the certificate's subject). Digital certificates contain the public key, information about its owner, expiration dates and the digital signature of the certificate's issuer. Digital certificates are issued by trusted third-party certificate authorities (CAs), such as DocuSign or GlobalSign. The party sending the document and the person signing it must agree to use a given CA.
Digital signature technology requires all parties to trust that the person who created the signature has kept the private key secret. If someone else has access to the private signing key, that party could create fraudulent digital signatures in the name of the private key holder.
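As a rough illustration of the sign-and-verify flow described above, here is a minimal sketch in Python using the third-party cryptography package (an assumed dependency). Real deployments would use long-lived keys and CA-issued certificates rather than keys generated on the fly, and the message content is hypothetical.

# Signing and verifying a message with RSA (assumes: pip install cryptography).
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.exceptions import InvalidSignature

# Generate a mathematically linked key pair (the private key must stay secret).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"Approved: purchase order #1234"   # hypothetical document content

# The signer hashes the message and signs the digest with the private key.
signature = private_key.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Anyone holding the public key can verify origin and integrity.
try:
    public_key.verify(
        signature,
        message,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )
    print("Signature valid: document is authentic and unmodified.")
except InvalidSignature:
    print("Signature invalid: document or signature has been tampered with.")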

What is Hashing?

Hashing is an algorithm that calculates a fixed-size bit string value from a file. A file basically contains
blocks of data. Hashing transforms this data into a far shorter fixed-length value or key which represents
the original string. The hash value can be considered the distilled summary of everything within that file.
A good hashing algorithm would exhibit a property called the avalanche effect, where the resulting hash
output would change significantly or entirely even when a single bit or byte of data within a file is
changed. A hash function that does not do this is considered to have poor randomization, which would
be easy to break by hackers.
A hash is usually a hexadecimal string of several characters. Hashing is also a unidirectional process so
you can never work backwards to get back the original data.
A good hash algorithm should be complex enough such that it does not produce the same hash value
from two different inputs. If it does, this is known as a hash collision. A hash algorithm can only be
considered good and acceptable if it can offer a very low chance of collision.
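A quick way to see the avalanche effect and the one-way nature of hashing is the short sketch below, which uses only Python's standard-library hashlib module; the input strings are arbitrary examples.

# Avalanche effect: a one-character change produces a completely different digest.
import hashlib

a = hashlib.sha256(b"cloud computing").hexdigest()
b = hashlib.sha256(b"cloud computinG").hexdigest()   # only the final character differs

print(a)        # a fixed-length 64-character hexadecimal string
print(b)        # bears no resemblance to the first digest
print(a == b)   # False -- a single changed character flips roughly half the output bits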
What are the benefits of Hashing?
One main use of hashing is to compare two files for equality. Without opening two document files to
compare them word-for-word, the calculated hash values of these files will allow the owner to know
immediately if they are different.
Hashing is also used to verify the integrity of a file after it has been transferred from one place to
another, typically in a file backup program like SyncBack. To ensure the transferred file is not corrupted,
a user can compare the hash value of both files. If they are the same, then the transferred file is an
identical copy.
In some situations, an encrypted file may be designed to never change the file size nor the last
modification date and time (for example, virtual drive container files). In such cases, it would be
impossible to tell at a glance if two similar files are different or not, but the hash values would easily tell
these files apart if they are different.

Types of Hashing
There are many different types of hash algorithms such as RipeMD, Tiger, xxHash and more, but the most common types of hashing used for file integrity checks are MD5, SHA-2 and CRC32.
MD5 – An MD5 hash function takes a string of information and condenses it into a 128-bit fingerprint. MD5 is often used as a checksum to verify data integrity. However, due to its age, MD5 is known to suffer from extensive hash collision vulnerabilities, yet it is still one of the most widely used algorithms in the world.
SHA-2 – SHA-2, developed by the National Security Agency (NSA), is a cryptographic hash function. SHA-
2 includes significant changes from its predecessor, SHA-1. The SHA-2 family consists of six hash
functions with digests (hash values) that are 224, 256, 384 or 512 bits: SHA-224, SHA-256, SHA-384, SHA-
512, SHA-512/224, SHA-512/256.
CRC32 – A cyclic redundancy check (CRC) is an error-detecting code often used for detection of
accidental changes to data. Encoding the same data string using CRC32 will always result in the same
hash output, thus CRC32 is sometimes used as a hash algorithm for file integrity checks. These days,
CRC32 is rarely used outside of Zip files and FTP servers.
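The sketch below computes all three checksums mentioned above for a local file, using only Python's standard library (hashlib for MD5 and SHA-256, zlib for CRC32); the file path shown is a placeholder.

# Compute MD5, SHA-256 and CRC32 of a file for an integrity check (standard library only).
import hashlib
import zlib

def file_checksums(path: str, chunk_size: int = 64 * 1024) -> dict:
    md5, sha256, crc = hashlib.md5(), hashlib.sha256(), 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):   # stream the file in chunks
            md5.update(chunk)
            sha256.update(chunk)
            crc = zlib.crc32(chunk, crc)
    return {
        "md5": md5.hexdigest(),        # 128-bit digest, collision-prone but still common
        "sha256": sha256.hexdigest(),  # SHA-2 family member, preferred for integrity checks
        "crc32": format(crc & 0xFFFFFFFF, "08x"),   # error-detecting code, not cryptographic
    }

# Comparing the results for the source file and the transferred copy reveals any difference.
# print(file_checksums("backup-2024-01.zip"))     # hypothetical path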

Public Key Infrastructure (PKI):


The most distinct feature of Public Key Infrastructure (PKI) is that it uses a pair of keys to achieve the underlying security service. The key pair comprises a private key and a public key.
Since the public keys are in open domain, they are likely to be abused. It is, thus, necessary to establish
and maintain some kind of trusted infrastructure to manage these keys.

Key Management
It goes without saying that the security of any cryptosystem depends upon how securely its keys are
managed. Without secure procedures for the handling of cryptographic keys, the benefits of the use of
strong cryptographic schemes are potentially lost.
It is observed that cryptographic schemes are rarely compromised through weaknesses in their design.
However, they are often compromised through poor key management.
There are some important aspects of key management which are as follows −
 Cryptographic keys are nothing but special pieces of data. Key management refers to the secure administration of cryptographic keys.
 Key management deals with the entire key lifecycle (generation, distribution, storage, rotation and destruction).
 There are two specific requirements of key management for public key cryptography.
o Secrecy of private keys. Throughout the key lifecycle, private keys must remain secret from all parties except their owners and those authorized to use them.
o Assurance of public keys. In public key cryptography, public keys are in the open domain and seen as public pieces of data. By default there are no assurances of whether a public key is correct, with whom it can be associated, or what it can be used for. Thus key management of public keys needs to focus much more explicitly on assurance of the purpose of public keys.
The most crucial requirement, assurance of public keys, can be achieved through the public-key infrastructure (PKI), a key management system for supporting public-key cryptography.

Public Key Infrastructure (PKI)


PKI provides assurance of public keys. It provides the identification of public keys and their distribution. The anatomy of a PKI comprises the following components.
 Public Key Certificate, commonly referred to as ‘digital certificate’.
 Private Key tokens.
 Certification Authority.
 Registration Authority.
 Certificate Management System.
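To connect these components to something tangible, the sketch below parses a PEM-encoded public key certificate with the third-party cryptography package (an assumed dependency) and prints the fields a relying party would check; the certificate file name is a placeholder.

# Inspecting a digital certificate: subject, issuer (the CA), validity period and public key.
# Assumes: pip install cryptography, and a PEM file exported from a browser or issued by a CA.
from cryptography import x509

with open("server-cert.pem", "rb") as f:          # hypothetical certificate file
    cert = x509.load_pem_x509_certificate(f.read())

print("Subject :", cert.subject.rfc4514_string())   # whose public key this is
print("Issuer  :", cert.issuer.rfc4514_string())    # the certification authority that signed it
print("Valid   :", cert.not_valid_before, "->", cert.not_valid_after)
print("Serial  :", cert.serial_number)
public_key = cert.public_key()                      # the key the certificate binds to the subject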

Identity and Access Management:

IAM is a cloud service that controls the permissions and access for users and cloud resources. IAM
policies are sets of permission policies that can be attached to either users or cloud resources to
authorize what they access and what they can do with it.
The concept “identity is the new perimeter” goes as far back as the ancient times of 2012, when AWS
first announced their IAM service. We’re now seeing a renewed focus on IAM due to the rise of
abstracted cloud services and the recent wave of high-profile data breaches.
Services that don’t expose any underlying infrastructure rely heavily on IAM for security. For example,
consider an application that follows this flow: a Simple Notification Service (SNS) topic triggers a Lambda
function, which in turn puts an item in a DynamoDB table. In this type of application, there is no network
to inspect, so identity and permissions become the most significant aspects of security.

Figure 1: Example application flow


As an example of the impact of a strict (or over-permissive) IAM profile, let’s consider the Lambda
function. The function is only supposed to put items in the DynamoDB table. What happens if the
function has full DynamoDB permissions? If the function is compromised for whatever reason, the
DynamoDB table is immediately compromised as well, since the function could be leveraged to exfiltrate
data.
If the IAM profile follows the “least-privilege” principle and only allows the function to put items in the
table, the blast radius will be greatly reduced in the case of an incident. A hands-on example of this can
be found in this CNCF webinar.
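As an illustration of the least-privilege idea, the snippet below builds an AWS-style IAM policy document as a Python dictionary that allows only dynamodb:PutItem on a single table. The table name, region and account ID are hypothetical placeholders, and in practice the policy would be attached to the function's execution role rather than printed.

# A least-privilege policy sketch: the function may only put items into one table.
import json

least_privilege_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowPutItemOnly",
            "Effect": "Allow",
            "Action": ["dynamodb:PutItem"],   # no Scan, Query or DeleteItem
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/orders",  # placeholder ARN
        }
    ],
}

print(json.dumps(least_privilege_policy, indent=2))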
Managing a large number of privileged users with access to an ever-expanding set of services is
challenging. Managing separate IAM roles and groups for these users and resources adds yet another
layer of complexity. Cloud providers like AWS and Google Cloud help customers solve these problems
with tools like the Google Cloud IAM recommender (currently in beta) and the AWS IAM access advisor.
These tools attempt to analyze the services last accessed by users and resources, and help you find out
which permissions might be over-privileged.
These tools indicate that cloud providers recognize these access challenges, which is definitely a step in
the right direction. However, there are a few more challenges we need to consider.

Introduction of Single Sign On (SSO):


Single Sign On (SSO) is an authentication scheme where users can securely authenticate and gain
access to multiple applications and websites by only logging in with a single username and password.
For example, logging in to your Google account once will allow you to access Google applications such
as Google Docs, Gmail, and Google Drive.

Without an SSO solution, the website maintains its own database of login credentials – usernames and passwords. Each time the user logs in to the website, it checks the user's credentials against this database and authenticates the user.
With the SSO solution, the website does not store login credentials in its database. Instead, SSO
makes use of a shared cluster of authentication servers where users are only required to enter their
login credentials once for authentication. With this feature of one login and multiple access, it is
crucial to protect login credentials in SSO systems.
Hence it is highly recommended to integrate SSO with other strong authentication means such as
smart tokens or one-time passwords to achieve multi-factor authentication.
How does SSO work?

1. The user enters login credentials on the website, and the website checks whether the user has already been authenticated by the SSO solution. If so, the SSO solution gives the user access to the website. Otherwise, it presents the user with the SSO solution for login.
2. The user enters username and password on the SSO solution.
3. The user’s login credentials are sent to SSO solution.
4. The SSO solution seeks authentication from the identity provider, such as an Active
Directory, to verify the user’s identity. Once the user’s identity is verified, the identity
provider sends a verification to the SSO solution.
5. The authentication information is passed from the SSO solution to the website where the
user will be granted access to the website.
6. Upon successful login with SSO, the website passes authentication data in the form of tokens as verification that the user is authenticated as the user navigates to a different application or web page (a token-verification sketch follows this list).
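Step 6 hinges on each application being able to verify the token it receives. The sketch below uses the third-party PyJWT package as an assumed stand-in for whatever token format a real SSO solution issues; the claim names, issuer and secret are illustrative only.

# Issuing and verifying a signed SSO-style token (assumes: pip install PyJWT).
import datetime
import jwt

SHARED_SECRET = "replace-with-a-real-secret"    # in practice, asymmetric keys are preferred

def issue_token(username: str) -> str:
    """Called by the SSO solution after the identity provider confirms the user."""
    claims = {
        "sub": username,
        "iss": "sso.example.com",               # hypothetical issuer
        "exp": datetime.datetime.utcnow() + datetime.timedelta(minutes=30),
    }
    return jwt.encode(claims, SHARED_SECRET, algorithm="HS256")

def verify_token(token: str) -> dict:
    """Called by each application the user navigates to."""
    return jwt.decode(token, SHARED_SECRET, algorithms=["HS256"], issuer="sso.example.com")

token = issue_token("alice")
print(verify_token(token)["sub"])   # "alice" -- an expired or tampered token raises an exception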

Advantages of SSO:
These advantages apply to both users and businesses.
For Users –
 The risk from compromised third-party sites is mitigated, as the website's database does not store the user's login credentials.
 Increased convenience for users, as they only need to remember and key in login information once.
 Increased security assurance for users, as website owners do not store login credentials.

Disadvantages of SSO:
 Increased security risk if login credentials are not securely protected and are exposed or
stolen as adversaries can now access many websites and applications with a single
credential.
 Authentication systems must have high availability as loss of availability can lead to denial
of service for applications using a shared cluster of authentication systems.

Hardened Virtual Server Image:

A virtual server is created from a template configuration called a virtual server image or virtual machine image. Hardening is the process of stripping unnecessary software from a system to limit potential vulnerabilities that can be exploited by attackers. Removing redundant programs, closing unnecessary server ports, and disabling unused services, internal root accounts, and guest access are all examples of hardening.
A hardened virtual server image is a template for virtual server instance creation that has been subjected to such a hardening process. The resulting virtual server template is significantly more secure than the original standard image.
Hardened virtual server images help counter the denial of service, insufficient authorization, and overlapping trust boundaries threats.

Applied Security Policies:


 Close unused/unnecessary server ports
 Disable unused/unnecessary services
 Disable unnecessary internal root accounts
 Disable guest access to system directories
 Uninstall redundant software
 Establish memory quotas
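One informal way to audit a candidate image against a checklist like the one above is to enumerate which TCP ports are actually listening and flag anything unexpected. The following is a minimal Python sketch that assumes the third-party psutil package is installed; the allow-list of ports is purely illustrative.

# Auditing listening ports on a candidate server image (assumes: pip install psutil).
# Any listening port not on the allow-list is flagged for the hardening process.
import psutil

ALLOWED_PORTS = {22, 443}     # illustrative allow-list: SSH and HTTPS only

def unexpected_listeners():
    flagged = []
    for conn in psutil.net_connections(kind="inet"):
        if conn.status == psutil.CONN_LISTEN and conn.laddr.port not in ALLOWED_PORTS:
            flagged.append((conn.laddr.ip, conn.laddr.port, conn.pid))
    return flagged

for ip, port, pid in unexpected_listeners():
    print(f"Unexpected listener {ip}:{port} (pid {pid}) -- close the port or disable the service")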

Cloud Issues:
Cloud Computing is a new name for an old concept: the delivery of computing services from a remote location. Cloud Computing is Internet-based computing, where shared resources, software, and information are provided to computers and other devices on demand.

Stability:
These are major issues related to Stability in Cloud Computing:
1. Privacy: The user data can be accessed by the host company with or without permission. The
service provider may access the data that is on the cloud at any point in time. They could accidentally
or deliberately alter or even delete information.
2. Compliance: There are many regulations in places related to data and hosting. To comply with
regulations (Federal Information Security Management Act, Health Insurance Portability and
Accountability Act, etc.) the user may have to adopt deployment modes that are expensive.
3. Security: Cloud-based services involve third-party for storage and security. Can one assume that a
cloud-based company will protect and secure one’s data if one is using their services at a very low or
for free? They may share users’ information with others. Security presents a real threat to the cloud.
4. Sustainability: This issue refers to minimizing the effect of cloud computing on the environment. Citing the environmental impact of servers, countries where the climate favors natural cooling and renewable electricity is readily available, such as Finland, Sweden, and Switzerland, are trying to attract cloud computing data centers. But beyond nature's favors, would these countries have enough technical infrastructure to sustain high-end clouds?

Partner Quality:
When it comes to migrating or optimizing your cloud environment, you may be weighing the pros and
cons of leveraging your internal staff or an external partner. For businesses that are looking to leverage
their internal team, consider these questions.
 Does your staff have the skills to manage a technical cloud migration?
 Does the staff have insight into the various kinds of cloud providers, and what would work best
for your business?
 Does your staff currently spend more than 50% of its time managing systems and infrastructure
instead of worrying about strategic initiatives?
A cloud partner may be able to help you avoid strain on your resources, all while ensuring you have a
cloud environment that meets the needs of your business. Below are four core areas to look for in a
cloud partner.
 Strategy: As we learned in a previous blog, an effective cloud strategy considers people,
processes, technology, data, and governance. Without a roadmap, you run the risk of missing
critical use cases, department misalignment, decreased process efficiency, and added cost. Look
for a partner that can provide a strategic assessment of your environment and evaluate your
long term and short-term goals across the business.
 Architecture: Cloud is more than storage. By leveraging an effective cloud service architecture, you can efficiently access connected applications and data resources. When looking for a cloud partner, consider their ability to lay out the cloud infrastructure itself and the relationships between each database, application, and piece of software.
 Migration: The term "lift and shift" refers to the idea of taking your existing on-premise environment and moving it to the cloud. It's essential that your cloud partner optimizes the cloud environment to meet your business's needs, not just moves your data from one place to another. Effective cloud partners consider whether your business needs a public, private, hybrid, or multi-cloud environment.
 Optimization: In a RightScale survey of technology executives, 58% say optimizing cloud costs is
a top initiative, up from 53% a year ago. Optimizing your cloud environment considers cloud
management, network, connectivity, security, and storage costs. When searching for a cloud
partner, ensure they are proactive in evaluating security threats and have a methodology for
data governance to avoid a costly breach.

Longevity:
Cloud computing has revolutionized the way businesses and individuals store, manage, and access data
and applications. With its scalability, flexibility, and cost-efficiency, the cloud has become an integral
part of modern technology infrastructure. In this, we will explore the longevity of the cloud and why it
continues to be a driving force in the digital landscape.

1. Evolution and Adaptability:


One of the key reasons behind the longevity of the cloud is its ability to evolve and adapt to changing
technology trends. Cloud providers continually enhance their services, introducing new features and
functionalities to meet the evolving needs of users. From infrastructure-as-a-service (IaaS) to platform-
as-a-service (PaaS) and software-as-a-service (SaaS), the cloud has expanded its offerings, enabling
organizations to leverage cutting-edge technologies such as artificial intelligence, machine learning, and
big data analytics. This adaptability ensures that the cloud remains relevant and continues to deliver
value to businesses over the long term.

2. Scalability and Elasticity:


Scalability is a crucial aspect of the cloud that contributes to its longevity. Cloud environments allow
businesses to scale their resources up or down based on demand, ensuring optimal performance and
cost-efficiency. Whether an organization experiences sudden spikes in traffic or needs to expand its
operations, the cloud provides the necessary scalability to accommodate changing requirements. This
elasticity ensures that businesses can grow and adapt without the limitations imposed by traditional on-
premises infrastructure.

3. Cost-Efficiency and Flexibility:


The cost-efficiency of the cloud is another factor that contributes to its long-term viability. By shifting
from capital expenditures (CapEx) to operational expenditures (OpEx), businesses can reduce upfront
costs associated with hardware and infrastructure. Cloud services offer pay-as-you-go models, allowing
organizations to pay only for the resources they consume, optimizing cost management. Additionally,
the flexibility of the cloud enables businesses to easily scale resources, deploy new applications, and
experiment with innovative solutions without significant upfront investments.

Business Continuity:
The cloud offers responsiveness and resilience to business continuity. However, you can’t simply put
your data into the cloud and expect to be covered. Whatever problems can befall your physical assets
can also happen to cloud setups.
Cloud service providers suffer downtime and outages. Natural disasters can knock out the power grids that feed data centers or facilities. Hackers target data. IT people make mistakes. Old servers fail. Data gets overwritten, which is possible with SaaS systems transacting large amounts of data daily.

When thinking about a plan for the cloud, consider these cloud-specific issues:
 Loss of Control: When you choose a public or third-party cloud, you're adding layers between
you and your data and relinquishing some control. “If I have my own servers, I can install some
disaster recovery software, and I can quickly recover to another device, or spin that server up
somewhere else,” shares Fraser. “But now, if you abstract away any of the underlying hardware,
you no longer have control over the uptime. It becomes vitally important to think about the
vendors you're engaging with and their overall business continuity and disaster recovery plan. If
they go down, what is the impact on your company?”
 Integrations Bring Dependencies: When platforms communicate, that means they have
dependencies. “Think about using Google accounts as your authentication login and single-sign-
on method,” advises Strawser. “If Google breaks, now you can't get into your SaaS application or
your cloud-based system. When we start to layer on these integrations, we don't always think
about what happens if that integration breaks.” From a security standpoint, integrations
increase the number of items to track and increase the risk of a data breach or leak. For more
ideas on cloud security, see our cloud security essential guide.
 Critical Functions: It’s vital to understand which SaaS programs you need the most. “You need
to gauge different services, depending on their importance to your business,” says Fraser. “If
FreshBooks or QuickBooks online goes down, your risk tolerance is probably a bit greater (you
can weather the outage more easily) than if your email service goes down. You can always
reconcile your books. It's not catastrophic to your company, unlike if business productivity apps
that multiple users access daily go down.”
 Due Diligence: It’s up to you to perform due diligence to understand how providers operate and
cover disruptions. Don’t assume all data is always protected and always backed up. As Strawser
suggests, “Always be looking for the backup.” You should also back up your SaaS systems.
“Certain vendors have done a pretty good job at allowing you to back up your data and have it
tied to products like Box or Dropbox,” states Fraser. “It gets a little more difficult in the business-
line apps where you really have to search for an FAQ. You may have to tell the vendor point-
blank, ‘I’d really like to have a backup of my data.’” He says that with the General Data
Protection Regulation (GDPR), many platforms now offer a way to export data.
 Shadow IT and Data: With the focus on maintaining productivity, remote employees may want
to download free and paid tools without the IT department’s permission. This so-called shadow
IT presents many problems, not least of which is security. Also, some freeware platforms own
your data and don’t delete the work when you close an account. In addition, learn where your
data and documents reside. It’s easy for employees to keep local copies of documents, which
they then forget or misplace. Fraser gives the example of managing IT for homeowners
associations. He established backup systems for the management company and encouraged
board members to create shared backup copies of documents for when they left the board,
rather than keeping local copies. “Technically, that's not your personal information,” Fraser
explains.
 The Migration Plan: Consider migration plans for moving data, and negotiate with your service
provider for returning any recovered data in-house. “Even if you have a copy of your data, what
is the process to get that data into another system or restored into the existing system?” asks
Fraser. “Re-importing can be more arduous than you'll want to handle. In some cases, you may
have to hire an IT company to help.”

Service level agreements in Cloud computing:

A Service Level Agreement (SLA) is the bond for performance negotiated between the cloud services
provider and the client. Earlier in cloud computing, all Service Level Agreements were negotiated between a client and the service provider. Nowadays, with the rise of large utility-like cloud computing providers, most Service Level Agreements are standardized until a client becomes a large consumer of cloud services. Service level agreements are also defined at different levels, which are mentioned below:
 Customer-based SLA
 Service-based SLA
 Multilevel SLA
Few Service Level Agreements are enforceable as contracts; most are agreements more along the lines of an Operating Level Agreement (OLA) and may not carry the force of law. It is wise to have an attorney review the documents before making a major commitment to a cloud service provider.
Service Level Agreements usually specify some parameters which are mentioned below:
1. Availability of the Service (uptime)
2. Latency or the response time
3. Service component’s reliability
4. Accountability of each party
5. Warranties
In any case, if a cloud service provider fails to meet the stated targets of minimums, then the provider
has to pay the penalty to the cloud service consumer as per the agreement. So, Service Level
Agreements are like insurance policies in which the corporation has to pay as per the agreements if
any casualty occurs. Microsoft publishes the Service Level Agreements linked with the Windows Azure
Platform components, which is demonstrative of industry practice for cloud service vendors. Each
individual component has its own Service Level Agreements.
Agreeing on the Service of Clouds:
Below are two major Service Level Agreements (SLA) described:

1. Windows Azure SLA – Windows Azure has different SLAs for compute and storage. For compute, there is a guarantee that when a client deploys two or more role instances in separate fault and upgrade domains, the client's internet-facing roles will have external connectivity at least 99.95% of the time. Moreover, all of the client's role instances are monitored, and there is a guarantee that 99.9% of the time it will be detected when a role instance's process is not running properly, so that corrective action can be initiated.
2. SQL Azure SLA – SQL Azure clients will have connectivity between the database and the internet gateway of SQL Azure. SQL Azure will maintain a "Monthly Availability" of 99.9% within a month. The Monthly Availability Proportion for a particular tenant database is the ratio of the time the database was available to customers to the total time in a month. Time is measured in intervals of minutes over a 30-day monthly cycle, and availability is always calculated for a complete month. A portion of time is marked as unavailable if the customer's attempts to connect to a database are denied by the SQL Azure gateway.
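To make these percentages concrete, the short calculation below converts an availability target into the maximum downtime it permits over a 30-day month; the targets shown match the 99.95% and 99.9% figures quoted above, and the 50-minute outage is a made-up example.

# Converting SLA availability targets into allowed downtime per 30-day month.
MINUTES_PER_MONTH = 30 * 24 * 60      # 43,200 minutes in a 30-day cycle

for target in (0.9995, 0.999):        # 99.95% (compute connectivity) and 99.9% (detection / SQL Azure)
    allowed_downtime = MINUTES_PER_MONTH * (1 - target)
    print(f"{target:.2%} availability allows about {allowed_downtime:.0f} minutes of downtime per month")

# Monthly Availability Proportion for a tenant database, as defined above:
#   availability = available_minutes / total_minutes_in_month
available_minutes = MINUTES_PER_MONTH - 50        # hypothetical: 50 minutes of unavailability
print(f"Monthly availability: {available_minutes / MINUTES_PER_MONTH:.4%}")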

Service Level Agreements are based on the usage model. Frequently, cloud providers charge for their pay-as-you-use resources at a premium and deploy standard Service Level Agreements only for that purpose. Clients can also subscribe at different levels that guarantee access to a particular amount of purchased resources. The Service Level Agreements (SLAs) attached to a subscription often carry different terms and conditions. If a client requires access to a particular level of resources, the client needs to subscribe to a service; a usage model alone may not deliver that level of access under peak load conditions.

Solving Problems:

According to a recent Intel Security report, 93% of a sample of 1,400 IT security professionals claim that
they use some type of hybrid (Public/Private) cloud service for their business operations. The cloud is
rapidly becoming a popular resource for businesses from all backgrounds, and for good reason. If your
organization hasn’t yet tapped into the power of the cloud, here are some detailed benefits of hybrid
cloud computing technology that are worth considering.

1. Importance of data and where it is stored (GDPR)


Your business should have a clear concept of the value (and sensitive nature) of the data that is critical
for operations. The inherent need to undertake efforts to assess risks and costs involved with current
data storage practices is real (GDPR). Especially in an international business organization, deciding
where to house data is a complex question that is largely determined by how that data will be utilized.
Many CIOs prefer to keep their companies’ data relatively nearby, and some of them will only work with
companies that house data domestically. That is often difficult for large companies with offices in
multiple locations, so it’s important to look at what you’re using your data for to decide where it should
(legally) be stored. Businesses have access to more data than ever but storing it can be tricky. While
some businesses choose to only store their data on local servers, using a hybrid approach (using both
dedicated servers as well as cloud services) can provide a more flexible option for storing data.
2. Hosting
When you’re not sure where to host data, a cloud platform is a great way to minimize uncertainty. A
hybrid cloud portfolio can support locally hosted options in either the UK or elsewhere in the EU, and
cost-effective cloud options will help mitigate the risks associated with long-term investments or
expensive migrations. Global adoption of cloud is likely to increase. Companies can expect the demand
for cloud computing to continue to rise in a post-Brexit and post-COVID Europe.

3. Security
Cloud technology has advanced greatly and now it is actually more secure and reliable than traditional
on-premise solutions. In fact, 64% of enterprises report that the cloud is more secure than their previous
legacy systems, and 90% of businesses in the USA are currently utilizing a hybrid cloud infrastructure.
Many business owners who are accustomed to using local servers hesitate to transition to the cloud for
fear of security risks. They worry that having their information “out there” on the cloud will make it
more susceptible to hackers. As scary as these fears are, however, they are unlikely to happen. In fact,
your data is just as secure in the cloud as it is in dedicated servers. Because cloud hosting has become so
popular, it has quickly progressed to the advanced stages of security. In other words, because so many
businesses are using cloud hosting in some form, it has been forced to maintain high levels of security to
meet all the demand.

4. Vulnerability to disasters
If you’re only storing your data on local servers, you may be more susceptible to having your data
affected by a natural disaster. Certain precautions may help alleviate this risk — such as backing up data,
for example — but utilizing the cloud can provide even greater protection. While the cloud is not
without its risks — after all, the cloud is essentially a few servers united on a software level — it does
create another layer of protection in the event of a disaster. Leaseweb provides access to our partners
industry leading solutions, companies that specialize in these areas, so for backup solutions on
Dedicated servers, VPS, Apache CloudStack we have partnered together with Acronis & to offer backup
solutions for VMware & Private Cloud offerings, Leaseweb have partnered together with Veeam.

5. Benefit for disaster recovery


Hosting systems and storing documents on the cloud provides a smart safeguard in case of an
emergency. Man-made and natural disasters can damage equipment, shut off power and incapacitate
critical IT functions. Supporting disaster recovery efforts is one of the important advantages of cloud
computing for companies. These improvements in security can also come with an attractive reduction in
cost.

6. Increased long-term costs


Not moving to the cloud could cost your company money in the long run. While you still pay for capacity in the cloud, costs are often more flexible because you can pay as you go, on demand, depending on how much storage space you need. Using a hybrid approach that combines cloud services and local dedicated servers, you can ensure you're not paying for more storage than you need.

7. Boosts cost efficiency


Cloud computing reduces or eliminates the need for businesses to purchase equipment and build out and operate data centers. This presents significant savings on hardware, facilities, utilities and other expenses required by traditional computing. Reducing the need for on-site servers, software and staff can trim the IT budget further.
8. Provides flexible pay options
Most cloud computing programs and applications, ranging from ERP and CRM to creativity and
productivity suites, use a subscription-based model. This allows businesses to scale up or down
according to their needs and budget. It also eliminates the need for major upfront capital expenditures
(OPEX vs CAPEX).

QUALITY OF SERVICE (QOS):


The number of Internet users is increasing day by day, and network requirements also increase in order to achieve good performance. Many online services therefore need very large bandwidth and strong network performance. Network performance is the element that concerns both users and service providers. Internet service providers should bring in new technologies to provide the best services before competitors overtake them.
Quality of Service refers to the ability of networks to attain maximum bandwidth and handle other network elements like latency, error rate and uptime. Quality of Service includes the management of other network resources by allocating priorities to specific types of data (audio, video and file). A basic implementation of QoS needs three major components:
a. QoS within one network element.
b. QoS policy and management functions to control end-to-end traffic across the network.
c. Identification techniques for coordinating QoS end-to-end between network elements.

There are major issues associated with the management of cloud services that can result in catastrophe. With the increasing adoption of cloud services, it becomes more difficult to investigate QoS for the cloud. The main issues associated with cloud servers are:
 Managing and ensuring application in QoS
 Cost
 Increasing services for users
 Slow applications when hosted on servers with more errors
 Guaranteeing their own SLAs
 No Data limits
 Performance of the applications
 System backlog

Accountability Issues in Cloud Computing:

Ensuring Accountability by cloud data controllers means that privacy is also ensured. Cloud security is
often compromised by the lack or absence of a few key features, particularly privacy.

This will consist of the following main topics.

Accountability Obligations for Cloud Service Providers: Accountability imposes transparency and confidentiality obligations on cloud data controllers and their partners in the service distribution chain. Accountability provides restrictions and control mechanisms for cloud data controllers and others in the service supply chain, including the obligation of each to act as a responsible custodian of others' personal information and to take responsibility for its protection, proper processing, and use.

Is Accountability a friend or foe: In practice, the application of Accountability in the clouds raises various
challenging issues that concern cloud privacy and security researchers. First of all, Accountability has the
power to increase cloud trust (Doiphod and Channe 2015) but its implementation may have different
consequences for different actors. Second, accountability aspects such as regulations can hinder
innovation and, if not implemented effectively, can hinder the desired increase in cloud trust. (Pearson
2011). Accountability may have positive results for the cloud provider, but it may have undesirable
consequences for the party providing services over the cloud and third parties whose data is kept under
protection.

Who are the Main Actors:


1. Client: Individuals or legal entities that have large data files to be stored in the cloud and utilize
cloud computing systems for data maintenance and computation.
2. Cloud Storage Server (CSS): Assets managed by a Cloud Service Provider (CSP) with significant
storage and computational resources to protect customers’ data.
3. Third-Party Auditor: In this case, the 3rd party with expertise and capabilities that customers do
not have is assigned to assess the risk of cloud storage services on behalf of customers upon
request. In this way, when the customer puts their data on remote servers, they can get rid of a
huge data storage burden. However, in this case, the customer will not be able to access their
data locally, so some additional measures must be taken to verify the integrity of their data.

What are the Problems: Structural characteristics of the cloud computing environment are the main
reasons for security problems. First, the nodes involved in computing are varied, sparsely scattered, and
often cannot be effectively controlled. Second, there is a risk that the cloud service provider (CSP) will
reveal confidentiality in the data transportation, processing, and storage process.
Since cloud computing is based on technology, vulnerabilities of current technologies will be directly
transferred to a cloud computing platform and even greater security threats will arise.

What Should Cloud Service Providers do:


1. According to the GDPR, personal data should not be stored for longer than necessary for the predefined
purpose.
2. In cases where personal data are processed outside the European Economic Area (EEA), data controllers
and data processors are expected to take appropriate technical and administrative measures unless an
adequate decision is made for the country in question.
3. You must have a contract that shows your agreement on how you process personal data.
4. The GDPR requires a Data Protection Impact Assessment (DPIA) and a security assessment to identify risks
that may occur.
5. The “privacy by design principle” needs to be integrated into the cloud architecture and law-making
process.

Cloud trends in supporting Ubiquitous Computing:

Ubiquitous computing (or “ubicomp”) is a concept in software engineering and computer science
where computing is made to appear anytime and everywhere. In contrast to
desktop computing, ubiquitous computing can occur using any device, in any location, and in any
format.
Enabling Technologies with the Internet of things
The Internet of Things is simply "a network of Internet-connected objects able to collect and exchange data." It is commonly abbreviated as IoT. … To put it simply, you have "things" that sense and collect data and send it to the internet. This data can then be accessed by other "things" too.

ZigBee Technologies
Zigbee Technology is a Wireless Communication Standard that defines a set of protocols for use in low
data rate, short to medium range wireless networking devices like sensors and control networks.

Zigbee Architecture
Although a full treatment of the Zigbee standard's architecture is beyond the scope of these notes, it is worth taking a brief look at the Zigbee architecture, often called the Zigbee stack.
The PHY (Physical) and MAC (Medium Access Control) layers are defined by the IEEE 802.15.4 standard. On this foundation, the Zigbee Alliance provides the NWK (Network Layer) and the framework for the application layer.
Application Support (APS) Sub-layer, Zigbee Device Objects (ZDO) and the application objects for
manufacturers are all part of the Application Framework, which is under control of the Zigbee Alliance.
The Zigbee stack does not map exactly onto the OSI networking model. The bottom three layers, i.e. the Physical, Data Link and Network layers, are present in the Zigbee stack in the form of PHY, MAC and NWK.
The remaining four layers, i.e. the Transport, Session, Presentation and Application layers, are covered by the Application Support Sub-layer (APS) and the Zigbee Device Object (ZDO).

Physical (PHY) Layer


As mentioned earlier, the lowest two layers i.e. the PHY and the MAC are defined by the IEEE 802.15.4
Specification. The PHY layer is closest to the hardware and it directly controls and communicates with
the Zigbee radio. The PHY layer translates the data packets in to over-the-air bits for transmission and
vice-versa during reception.

MAC Layer
The MAC layer is responsible for interface between PHY and NWK layers. The MAC Layer is also
responsible for providing PAN ID and also network discovery through beacon requests.

Network (NWK) Layer


The Network Layer (NWK) acts as an interface between MAC Layer and the Application Layer. It is also
responsible for mesh networking (network formation and routing). In addition to the above tasks, the
NWK Layer provides security to the Zigbee Networks i.e. the entire data in the NWK Frame is encrypted.

Application Layer
The Application Layer in the Zigbee Stack is the highest protocol layer and it consists of Application
Support (APS) sub-layer and Zigbee Device Object (ZDO). It contains the manufacturer defined
applications. The APS Sub-layer is responsible for discovery and binding services.
Zigbee Device Object (ZDO) looks over the local and over-the-air management of the network. The
Application Framework consists of Application Objects that control and manage the protocol layers in a
Zigbee device. The Application Framework can contain up to 240 Application Objects.
Zigbee Devices
The IEEE 802.15.4 Specification defines two types of devices: FFD or Full-Function Devices and RFD or
Reduced-Function Devices. An FFD Device can literally do it all. It can perform all the tasks described in
the IEEE 802.15.4 Standard and can take up any role in the network.
An RFD Device, as the name suggests, has limited capabilities; the number of tasks an RFD device can perform is limited.
An FFD Device can communicate with any device in the network and must be active and always listening on the network. An RFD Device can communicate only with an FFD Device and is intended for simple applications like turning a switch on or off.
The FFD and RFD devices in an IEEE 802.15.4 Network can take three different roles: Coordinator, PAN
Coordinator and Device. Coordinator and PAN Coordinator are FFD Devices and the Device can be either
an FFD or an RFD Device.

Performance of Distributed Systems and the Cloud:


In the last several decades, there have been tremendous improvements in computers and computer
network technologies. With the emergence of the Internet, computers and their networking have
exhibited tremendous development, such as today’s theme – distributed computing and cloud
computing. The terms Distributed Systems and Cloud Computing Systems relate to distinct entities,
although the principle behind both is the same. To better grasp the ideas for each of them, a strong
understanding of the distributed systems and knowledge of how they vary from the centralized
computer is essential.
You may observe the use of cloud computing in most organizations nowadays, either directly or
indirectly. For example, if we use Google or Amazon’s services, we immediately access the resources
housed in Google or Amazon’s Cloud environment. Twitter is another instance of our tweets in the
Twitter cloud. Faster data processing and computer networking may be seen as necessary for these new
computing technologies to develop. Read more about the architecture of cloud computing.

Distributed Computing:

When numerous autonomous devices connect via a network to achieve a shared objective, we speak of distributed computing. Distributed computing solves the difficulty of using distributed autonomous machines that communicate with each other over a network; it is a computing method in which numerous machines interact to solve a single problem.
Distributed computing is a much faster way to carry out computing activities than using one computer. Its defining feature is that a single job is divided amongst machines that carry out the work simultaneously, using remote calls for distributed calculations.

Distributed Computing System Examples

 World Wide Web


 Hadoop’s Distributed File System (HDFS)
 ATM
 Facebook
 Google Indexing Server
 Google Web Server
 Cloud Network Systems
 Google Bots
Benefits of Distributed Computing

 Compared to a centralized computer, distributed computing systems provide a superior price/performance ratio, as adding microprocessors is more economical than adding mainframes.
 The processing capacity of distributed computing systems is higher than that of centralized systems. Distributed computer systems allow expansion, adding software and computing capacity as and when business demands increase.

Cloud Computing:

Cloud computing is a service provided over a network of computers. To contrast the two models: distributed computing might be 10,000 people processing SETI data on their PCs via a screensaver over a dedicated computing network, whereas cloud computing might be one million Apple customers keeping all their MP3s on iCloud instead of on their own PCs.
In cloud computing, IT resources and services like servers, storage, databases, networks, analytics,
software, etc. are provided over the Internet. It is a computer technology that offers its users/customers
host services through the Internet.
Cloud computing delivers services, including hardware, software, and Internet networking resources.
Cloud computer features include pooled computer resources, on-demand service, per-use payments,
service providers’ services, etc. Read more about Cloud computing and the different types of services in
cloud computing here.

Cloud deployments are divided into four separate kinds:


 Private Cloud
 Public Cloud
 Hybrid Cloud
 Community Cloud

Examples of Cloud computing


 YouTube is the most acceptable cloud storage example hosting millions of video files uploaded,
streamed, and downloaded.
 Picasa and Flickr are hosting millions of digital photos that enable their users to build online
photo albums by uploading images to servers of their services.
 Google Docs is another type of cloud computing that enables users to hook up their server
presentations, text documents, and slides. Google Docs allows users to change and publish
other people’s work for viewing or changing.

Benefits of Cloud computing:

Cloud computing has numerous advantages; the most relevant are:
 Cloud computing gives organizations an excellent, reasonably priced way to connect to companies' cloud offerings.
 In place of conventional e-mail and file-sharing, companies can use their cloud solutions to exchange information with workers.
Unit 4:

Enabling Technologies for the Internet of Things

Internet of Things (IoT) primarily exploits standard protocols and networking technologies. However, the
major enabling technologies and protocols of IoT are RFID, Sensor Networks and ZigBee Technology,
GPS. These technologies support the specific networking functionality needed in an IoT system in
contrast to a standard uniform network of common systems. As well as these enabling technologies, the
IoT also relies on other technologies to maximize the opportunities that are created by the IoT.

RFID (Radio Frequency Identification):


Radio Frequency Identification (RFID) is a form of wireless communication that incorporates the use
of electromagnetic or electrostatic coupling in the radio frequency portion of the electromagnetic
spectrum to uniquely identify an object, animal or person. It uses radio frequency to search for, identify, track and communicate with items and people. It is a method used to track or identify an object by radio transmission. Data are digitally encoded in an RFID tag, which can be read by a reader: the tag or label stores data that the reader retrieves and records in a database, much as with traditional barcodes and QR codes. Unlike barcodes, however, RFID tags, whether passive or active, can often be read without line of sight.

RFID provides a simple, low energy, and versatile option for identity and access tokens, connection
bootstrapping, and payments. RFID technology employs 2-way radio transmitter-receivers to identify
and track tags associated with objects.
Application of RFID technology in IoT is extremely broad and diverse. RFID tags are primarily used to
make everyday objects communicate with each other and the main hub and report their status. Retail,
manufacturing, logistics, smart warehousing, and banking are among the major industries using RFID
Internet of Things solutions.

There are two types of RFID :


1. Passive RFID: Passive RFID tags do not have their own power source; they draw power from the reader. The tag is not attached to a power supply and is energized by the signal emitted from the reader's antenna. Passive tags operate at specific frequencies: 125–134 kHz (low frequency), 13.56 MHz (high frequency) and 856–960 MHz (ultra-high frequency).

2. Active RFID: Active RFID tags are attached to a power source, typically a battery, and emit their own signal, which is received by the reader's antenna. Because an active tag has its own power source, it does not require power from the reader.

Features of RFID :
 An RFID tag consists of two parts: a microcircuit (chip) and an antenna.
 The tag is covered by protective material which acts as a shield against outer environmental effects.
 A tag may be active or passive; passive RFID tags are the most widely used.

Application of RFID :
 Tracking shipping containers, trucks, railroad cars and automobiles.
 Asset tracking.
 Credit-card-shaped tags for access applications.
 Personnel tracking.
 Controlling access to restricted areas.
 ID badging.
 Supply chain management.
 Counterfeit prevention (e.g., in the pharmaceutical industry).

Advantages of RFID :
 It provides data access and real-time information without taking too much time.
 RFID tags can store a large amount of information.
 RFID is a non-line-of-sight technology.
 It improves the efficiency and traceability of production.
 Hundreds of RFID tags can be read in a short time.

Disadvantages of RFID :
 It takes longer to program RFID devices.
 RFID signals can be intercepted relatively easily, even when encrypted.
 Two or three layers of ordinary household foil are enough to block the radio waves in an RFID system.
 There are privacy concerns about RFID devices: anybody with a reader may be able to access information about tagged items.
 Active RFID can be costlier due to the battery.

SENSOR NETWORKS IN CLOUD COMPUTING:


Cloud computing is a model for enabling on demand network access to a shared pool of configurable
computing resources (networks, servers, storage, applications and services) that can be rapidly
provisioned and released with minimal management effort or service provider interaction. Cloud
computing offers two major benefits in IT market: efficiency, which is achieved through the highly
scalable hardware and software resources, and agility, which is achieved through parallel batch
processing, using real-time mobile interactive applications, most of them based on intensive use of
wireless sensor networks (WSN).
Due to the increasing demand of sensor network applications and their support in cloud computing for a
number of services, a new type of service architecture named Sensor Cloud (SC) was introduced as an
integration of cloud computing into the WSN. A sensor cloud is composed of virtual sensors built on top
of physical wireless sensors, which users automatically and dynamically can utilize on the basis of
applications demands.

The main advantages of this approach are:


1) it enables better sensor management capability;
2) data captured by WSNs can be shared among multiple users;
3) it reduces the overall cost of data collection for both the system and user;
4) the system is transparent regarding the types of sensors used. On the other hand, for enterprises to
be convinced to migrate the services that they offer to the cloud, there are a number of strict
requirements that any cloud system must fulfill.

In this way, the cloud software services must be highly reliable, scalable, and autonomic, supporting ubiquitous access, dynamic discovery and composability. Moreover, an SC must offer the required service level indicated by users through QoS (Quality of Service) parameters. The main objective of this
paper is to develop a framework for supporting cost-effective cloud computing for intensive sensor
utilization in a SC that must have QoS guarantees related to efficient resource utilization and execution
timeliness. The proposed framework will work on providing a cloud computing platform capable of
providing real-time services for environmental sensing, monitoring, and process control systems. For
this purpose, a QoS-aware cloud model is provided, able to satisfy different QoS levels regarding cost-
effective resource management, timely service provisioning and high reliability.

Zigbee Technology-The smart home protocol:


Zigbee is one of the wireless personal area network (WPAN) specifications. It is designed to meet low-
power and low data rate applications and is developed under IEEE 802.15.4 standard by The Zigbee
Alliance. Typically, Zigbee is used in establishing a smart home for devices from different manufacturers
to communicate with each other to enable automation. Hence, it is also called the smart home protocol.
For example, the light system can be linked to the security cameras, and the coffee maker can be linked
to the alarm system so that the coffee is ready for you when you wake up.

If we want a network for short-range communication for streaming audio, we opt for Bluetooth. For
streaming videos and larger files, we use Wi-Fi. But, we need a network using which we can connect a
large number of battery-powered devices. We can't go for Wi-Fi because of its high energy
requirements, and we can't choose Bluetooth even though it consumes less power, as only a small
number of devices can be connected using Bluetooth. We need a network that can connect many
battery-powered devices, and the main aspect here is low energy/ power consumption. This is the
whole purpose of developing Zigbee Technology. Zigbee revolves around control and sensor networks.
Hence, it is one of the most common standards and applications for the Internet of Things (IoT).

Zigbee networks use Mesh Topology, which gives a separate link for every device pair in the Network.
Even if one link fails, the Network can utilize another alternate path/ link for communication. Hence, it
is reliable. Wi-Fi and Bluetooth networks, on the other hand, use Star topology, where each node/
device is connected to a central node that can be a hub, router, or switch. If the link to a device fails, the
device gets disconnected from the Network. If the central node fails, the whole Network will be down.
Architecture and Working
In a Zigbee network, there will be three types of devices:
1. Zigbee Coordinator
2. End device
3. Zigbee Router

GPS:

Cloud computing transforms data recording, storage, processing, and the associated costs. It moves data storage and processing to remote servers reached over the internet, and it is a key part of a long-term strategy that can improve the functioning and security of data management. GPS and location services are essentially a way for your company to answer some basic questions:

 Where are our routers right now?
 Are they where they belong?
 Are they still there?
 Are they safe?
 What is their performance?

Because GPS devices rely on software to function, it was only natural that the cloud could host many of their functions. This includes code patches, updates to maps and waypoints, and even entertainment content. What are the main reasons to choose a cloud-based GPS? There are abundant benefits of using cloud computing with GPS; here we demonstrate the top four.

Tracking
 Some companies expect their routers to move, while others expect them to stay in one place. They need to be able to track whether the location changes, and IT personnel can track the company's resources and infrastructure using GPS and location services. It doesn't matter if your company tracks one unit or thousands; it's vital to determine if they have been moved from the location you've chosen (and perhaps paid for).
 A good real-life example would be DVD [2] rental companies that pay a premium to have their
product placed in a particular location, such as at the main entrance to a shopping mall or outside
the grocery store’s front doors. The rental company can use GPS and NCM location services to check
if there has been a sudden drop in sales at that location.
 A construction company may have routers on its trailers at their worksites, but these trailers often move every few months. IT administrators can easily determine which routers are located at which site without having to rely on reports from on-site workers each time a trailer moves to a new site. If you have mastered cloud computing and have a background in cloud computing engineering, tracking through GPS and NCM location services will be easy to understand.

Dispatching
 Businesses that dispatch emergency response, delivery, and other vehicles must know where
they are at all times. Location services and GPS allow for units to be quickly and accurately
tracked, even when the driver or crew is too busy with their duties to reach out via telephone.
 An ambulance company that previously required its emergency vehicles to verbally report their location to dispatch can now save crucial time by letting its in-ambulance router transmit its location automatically. Dispatchers can quickly locate each ambulance at any given moment and notify hospitals about their arrival times (a small nearest-unit lookup is sketched after this list).
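
A minimal sketch of how a dispatcher could pick the nearest unit from GPS positions, assuming haversine great-circle distance and entirely hypothetical coordinates:

    import math

    def haversine_km(lat1, lon1, lat2, lon2):
        # Great-circle distance between two GPS coordinates, in kilometres.
        lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
        dlat, dlon = lat2 - lat1, lon2 - lon1
        a = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
        return 2 * 6371 * math.asin(math.sqrt(a))

    def nearest_unit(units, incident):
        # Pick the unit whose last reported GPS position is closest to the incident.
        return min(units, key=lambda u: haversine_km(u["lat"], u["lon"], incident[0], incident[1]))

    # Hypothetical positions reported by in-ambulance routers.
    ambulances = [
        {"id": "AMB-1", "lat": 19.076, "lon": 72.877},
        {"id": "AMB-2", "lat": 19.110, "lon": 72.860},
    ]
    print(nearest_unit(ambulances, (19.080, 72.880))["id"])   # AMB-1

A real dispatch system would also account for road routing and unit availability; this only illustrates the location lookup.
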

Troubleshooting
 IT teams in large organizations often lose track of where key components are located within the
network infrastructure. IT administrators can save time, money, and headaches with GPS
tracking.
 They can track and locate the problem and then fix it on site. Instead of spending time trying to pinpoint the exact location of an issue, you can spend your time resolving it.
 For instance, if an IT department at a multisite construction company receives a call from a worker complaining about the network being down, IT wouldn't normally be able to identify the router being used by the worker or point out where to begin troubleshooting.
 IT staff can now use GPS and location services to ask the worker where they are located. They
can then find the router that services them, locate its physical location and remotely run
diagnostics. The worker doesn’t have to spend time looking for the router or reading numbers
on its bottom, and the IT staff can start troubleshooting immediately.

Physical security (Geo-fencing):


 Sometimes companies and organizations want to know if one of their routers or the vehicles in
which they are housed has left a certain area. NCM and GPS can instantly alert the appropriate
personnel (IT and security teams) when a router crosses a predetermined geographical
boundary.
 This feature is safer for company equipment and protects employees and contractors. For example, a city's police department can use GPS and location services to create a geo-fence for its patrol vehicles in order to protect its officers.
 The router transmits an immediate alert to headquarters if a patrol car leaves the designated area, and a response team can be dispatched to investigate because internal IT teams can track the GPS coordinates of the unit. This also helps if the officer cannot be reached by radio or any other communication method (a minimal boundary check is sketched after this list). To learn more about Geo-fencing and other cloud computing features, you can go through a PG program in cloud computing.
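
A minimal sketch of the boundary check, assuming the geofence is a simple latitude/longitude bounding box with hypothetical coordinates (production systems typically use polygon fences and map services):

    # Hypothetical geofence for a patrol precinct, expressed as a lat/lon bounding box.
    PRECINCT = {"lat_min": 40.700, "lat_max": 40.760, "lon_min": -74.020, "lon_max": -73.960}

    def inside_fence(lat, lon, fence=PRECINCT):
        return (fence["lat_min"] <= lat <= fence["lat_max"]
                and fence["lon_min"] <= lon <= fence["lon_max"])

    def check_position(unit_id, lat, lon):
        # Raise an alert the moment a unit's reported position leaves the fence.
        if not inside_fence(lat, lon):
            print(f"ALERT: {unit_id} left the designated area at ({lat}, {lon})")

    check_position("PATROL-7", 40.730, -74.000)   # inside, no alert
    check_position("PATROL-7", 40.780, -74.000)   # outside, alert raised
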

Innovative Applications of the Internet of Things:


The Internet of Things (IoT) is defined as a network of devices that feed data into a platform to enable
communication and automated control. IoT connects machines to other machines as well as people. It
connects physical devices to digital interfaces.

Smart Cities:
A smart city is an urban area that uses sensors and cellular or wireless technology placed in ubiquitous locations such as lamp posts and antennae. There are multiple facets in which one can incorporate IoT into the functioning of a city:

 Traffic management: Sensors on roads and traffic signals send data to the IoT systems. This data,
accumulated over time, allows officials to analyze traffic patterns and peak hours. It also helps
create solutions for bottlenecks.
Commuters can use this information to determine which areas are congested and what alternate
routes can be used. A version of this already exists in third-party map services such as Google Maps.
 Pollution monitoring: A pressing problem faced by every country in the world is air pollution. With existing sensors, one can easily measure parameters such as temperature, CO2 levels, smoke, and humidity. Smart cities leverage this to gather data about air quality and develop mitigation methods (a small threshold-check sketch follows this list).
 Resource management: The biggest factors in deciding a city’s livability are waste, water, and
electricity management.
With water management, sensors are attached internally or externally to water meters. These
sensors provide information to understand consumption patterns. They detect faults in supply and
automatically begin the necessary course of action. Trends in water wastage can be used to develop
an efficient water recycling system.
IoT-enabled waste management systems produce a geographical mapping of waste production.
These systems trigger the clearance process themselves; for example, by generating alerts when a
trash bin is full. They also provide more insights into waste segregation and how people can improve
waste processing.
Electricity management comes in the form of a smart grid, covered in detail in the next section.
 Parking solutions: Parking woes, while sounding insignificant, play a big part in traffic management.
Smart parking solutions provide drivers with real-time information about empty spaces available.
 Infrastructure management: Public infrastructure such as street lamps, roads, parks, and gas supply
lines cost a lot to maintain. Repair work in any of these causes disruptions to everyday functioning.
IoT-based maintenance and monitoring systems look out for signs of wear and tear while analyzing patterns. This proactive approach can save a city a lot of money.
 Disaster management: The Internet of Things can be used to hook up disaster-prone areas to a
notification system. A forest fire, for example, can be detected and curbed before it grows beyond
control.
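
A minimal sketch of the pollution-monitoring threshold check referenced above, with entirely hypothetical sensor names, readings, and limits:

    from statistics import mean

    # Hypothetical CO2 limit; real deployments would follow local air-quality standards.
    CO2_LIMIT_PPM = 1000
    READINGS = {
        "sensor-lamp-post-12": [980, 1040, 1020, 1090],
        "sensor-lamp-post-47": [560, 540, 610, 590],
    }

    for sensor, ppm_values in READINGS.items():
        avg = mean(ppm_values)
        if avg > CO2_LIMIT_PPM:
            print(f"{sensor}: average CO2 {avg:.0f} ppm exceeds the limit - trigger mitigation")
        else:
            print(f"{sensor}: average CO2 {avg:.0f} ppm within the limit")

In a real smart-city deployment the readings would stream into the cloud platform continuously, and the mitigation step would be an automated workflow rather than a print statement.
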

Installing smart grids:


Utility companies are turning to IoT to make energy provision more efficient. Appropriate sensors are
installed in energy meters, transmission lines, production plants, and distribution points. This IoT system
is called a smart grid.
Smart grids leverage the Internet of Things for many use cases:
 They create alerts in case of failure at any point during power transmission.
 Sensors are used to identify abnormalities in the line.
 They monitor energy consumption and peak usage statistics.
 They gather consumption data at a geographic, organizational, and individual level.
 They identify lossy nodes during transmission.
 They can pinpoint the exact location of inefficiency.
Everyday users can analyze their energy usage and reduce their carbon footprints. Smart metering also helps cut costs when energy prices peak, as they did across Europe because of the Ukraine-Russia war. Energy can be generated at traditional power plants as well as solar and wind plants, and smart grids allow seamless switching between these different power sources while ensuring that the correct parameters, such as voltage, are maintained. As with every other IoT system, smart grids enable predictive maintenance, which cuts down costs considerably (a simplified transmission-loss check is sketched below).
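
A minimal sketch of flagging lossy segments from smart-grid meter data, with hypothetical node names, readings, and threshold:

    # Hypothetical per-segment meter readings (kWh) gathered by smart-grid sensors.
    segments = [
        {"node": "feeder-A", "energy_in": 1200.0, "energy_out": 1185.0},
        {"node": "feeder-B", "energy_in": 950.0, "energy_out": 870.0},
    ]

    LOSS_THRESHOLD = 0.05   # flag anything losing more than 5% in transit

    for seg in segments:
        loss_ratio = (seg["energy_in"] - seg["energy_out"]) / seg["energy_in"]
        if loss_ratio > LOSS_THRESHOLD:
            print(f"{seg['node']}: {loss_ratio:.1%} loss - possible lossy node, schedule inspection")
        else:
            print(f"{seg['node']}: {loss_ratio:.1%} loss - within the expected range")
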
Retailing and Supply-Chain Management:
Supply chain management (SCM) is a process that streamlines the flow of goods and services from raw
material procurement to the customers. It involves inventory management, fleet management, vendor
relationships, and scheduled maintenance. During the pandemic, many businesses were affected by
supply chain issues, especially when it caused a global shutdown in early 2020. As operations switched
to being remote, it made sense for organizations to consider integrating IoT into their SCM processes.
 The Internet of Things is used at multiple layers in the SCM process. Shipping companies
use trackers to keep an eye on assets. They also analyze shipping routes to figure out the fastest and
most fuel-efficient routes. Other parameters such as container temperature and humidity can also
be monitored and controlled using IoT.
 The IoT system allows managers to overhaul the supply chain process by enabling smart routing
choices. This means that businesses can be confident in supply chain resilience.
 Real-time and remote management of fleets ensures a smooth experience for managers and
customers. Any delay or issues with transportation can automatically notify the appropriate
personnel.
IoT in fleet management provides end-to-end connectivity between the vehicles and the managers, as
well as the vehicles and the drivers. Besides asset management, IoT also takes care of vehicle health,
ensuring regulations, such as those for pollution emissions, are followed.
Cyber-Physical System:
A Cyber-Physical System (CPS) is a type of system that integrates physical and computational components to monitor and control physical processes in a seamless manner. In other words, a cyber-physical system consists of a collection of computing devices communicating with one another and interacting with the physical world via sensors and actuators in a feedback loop. These systems combine sensing, actuation, computation, and communication capabilities, and leverage them to improve the overall performance, safety, and reliability of the physical systems.
Examples of CPS include self-driving cars and the STARMAC, a small quadrotor aircraft.

Key Features of Cyber-Physical System:

The key features of a cyber-physical system are classified as follows (a minimal feedback-loop sketch follows this subsection):

 Reactive Computation
 Concurrency
 Feedback Control of the Physical World
 Real-Time Computation
 Safety-Critical Application
The Internet of Things (IoT) describes the network of physical objects embedded with sensors, software, and other technologies. Each object is given a unique identifier (UID), and data can be transmitted over the network without human interaction, allowing the objects to collect and exchange data.
Example: object-detection devices, electronic vehicle systems.
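
A minimal sketch of the sense, compute, and actuate feedback loop mentioned above, using a hypothetical thermostat-style controller and simulated readings:

    def controller(measured_temp, setpoint=22.0, band=0.5):
        # Decide an actuator command from a sensor reading (the 'compute' step of the loop).
        if measured_temp < setpoint - band:
            return "HEAT_ON"
        if measured_temp > setpoint + band:
            return "HEAT_OFF"
        return "HOLD"

    # One pass of the feedback loop per simulated sensor reading.
    for reading in (20.8, 21.9, 23.1):
        print(reading, "->", controller(reading))

A real CPS adds real-time constraints and safety checks around this loop, but the structure of sensing the physical world, computing a decision, and driving an actuator is the same.
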

Online Social and Professional Networking:

Computer use has become very prevalent during the last few years. While 20 years ago only a few
people knew how to use a computer, today even little children have access and are already very
knowledgeable about how to use computers.
Together with this development, many networking sites have popped up offering users a chance at
interacting with other people from different places either for friendship, business, or for sharing
common interests. This is called social networking: the process of utilizing websites and computer networks to connect with family and friends, share opinions and information with other people, meet new friends, and interact with each other, sometimes even finding a mate.
Social networking allows individuals a chance to find friendship and build relationships. It has been a
very important tool for families who live far from each other to connect and keep in touch with each
other. Through social networking, people are able to find long-lost friends and family as well as find new
friends and meet people who share the same interests. It has also been a tool for people to earn money
especially if one uses a professional networking site.

While most social networking sites such as Facebook and Yahoo Messenger are used for personal
interactions, professional networking sites such as LinkedIn and WiseStep.com are used for
building business and professional relationships and interactions.
Professional networking sites allow users to look for jobs using connections to find one that will suit
one’s qualifications. Other users and possible employers can view one’s profile and share
recommendations. They also allow professionals from different fields of interest to share opinions and
knowledge while allowing users to ask questions which can be answered by one’s connections and
providing answers to questions which one is knowledgeable about. While social networking sites also
provide a means by which people can promote products and services, they are usually used for finding
new friends and building social relationships rather than for professionals to find jobs and for businesses
to find new employees.

Professional networking sites, on the other hand, allow businesses to post information about their companies, which every member can view, helping them find new employees. They are also a tool for professional job seekers to post their profiles and find a job.

Summary:

1.Social networking is the process of using websites to connect with other people especially friends and
family and to share information and opinions while professional networking is the process of using
websites to connect with businesses and other professionals to look for employment or to share one’s
knowledge.
2.Although social networking sites are also used to promote products and services, they are mostly used
for personal interactions while professional networking sites are used for professional and business
interactions.
3.In a professional networking site, users can post their profile which can be viewed by possible
employers which also post their company’s profile while in a social networking site users post pictures,
send and receive instant messages, and post updates about their lives.

How the Cloud Will Change Operating Systems:


Cloud computing is more an evolution than a revolution. Keeping the traditions is important for
adoption. One of these traditions is the operating system. Cloud computing is a technology deployment
approach that has the potential to help organizations better use IT resources to increase flexibility and
performance.
 One of the most important ways to support the underlying complexity of well-managed cloud
computing resources is through the operating system.
 An operating system such as Linux supports important standards that enhance portability and
interoperability across cloud environments.
 Operating system platforms are designed to hide much of the complexity required to support
applications running in complex and federated environments. Much of the functionality
required for the efficient operation of many applications is built in to the operating system.
 The operating system implements the level of security and quality of service to ensure that
applications are able to access the resources needed to deliver an acceptable level of
performance.
 An operating system exists to allow users to run programs and to store and retrieve data from one user session to the next.
 One of the most significant requirements for companies adopting cloud computing is the need
to adopt a hybrid approach to computing. To do so, most organizations will continue to maintain
their traditional data center to support complex mixed workloads.
 For example, an organization may choose a public cloud environment for development and test
workloads, a private cloud for customer-facing web environments that deal with personal
information, and a traditional data center for legacy billing and financial workloads.
 Virtualization requires some level of workload isolation since virtualized applications are stored
on the same physical server. However, cloud computing adds the concept of multi-tenancy.
 Multi-tenancy is the sharing of resources by multiple organizations, which requires that each customer's data and applications be stored and managed separately from other customers' data and applications (a minimal tenant-scoping sketch follows this list).
 Both virtualization and multi-tenancy support have to be implemented in a secure manner. As
virtualization and multi-tenancy become the norm in cloud environments, it is critical that
security be built in at the core.
 When servers are virtualized, new images can be created with very little effort.
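
A minimal sketch of the tenant-scoping idea, using a hypothetical in-memory store (real SaaS platforms enforce this at the database and access-control layers):

    # Hypothetical records belonging to two different customer organizations.
    RECORDS = [
        {"tenant_id": "org-a", "doc": "invoice-17"},
        {"tenant_id": "org-b", "doc": "invoice-09"},
    ]

    def fetch_documents(tenant_id):
        # Every query is scoped by tenant_id so one customer's data never leaks to another.
        return [r["doc"] for r in RECORDS if r["tenant_id"] == tenant_id]

    print(fetch_documents("org-a"))   # ['invoice-17']
    print(fetch_documents("org-b"))   # ['invoice-09']
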

Location-aware Applications:
Location-aware applications use the geographical position of a mobile worker or an asset to execute a
task. Position is detected mainly through satellite technologies, such as a GPS, or through mobile
location technologies in cellular networks and mobile devices. Examples include fleet management
applications with mapping, navigation and routing functionalities, government inspections and
integration with geographic information system applications. Location-aware computing defines an environment that utilizes information about the current location of the person using location-aware devices. Ideally, the information provided should be both location-specific and personalized based on the personal profile of the user. To get a clearer understanding of location-aware computing, let's recall the two different location abstraction levels.

• Location-transparent: This abstraction level completely hides the effects of mobility from applications and users. Network services and resources can be transparently accessed by means of a resource and service broker function that maps the application's service type requests onto adequate service provider instances. Thus, applications operating at this level of abstraction have higher priority.
• Location-tolerant: This abstraction-level allows applications and users to tolerate those effects of
mobility that cannot be hidden by the platform. Reasons can be congestion of radio cells, degradation of
radio link qualities or change of terminals in case of user mobility.

Location is a fundamental aspect of the new, exciting world of mobile web-enabled services. The
usefulness of many of today's most popular mobile applications and services is determined by one key
factor: where you are at the exact moment when you're using the service. A location-based service is a service where:
1. The user is able to determine their location.
2. The information provided is spatially related to the user's location.
3. The user is offered dynamic or two-way interaction with the location information or content.
Components of location-based services are as follows:
1. Mobile device
2. Content provider
3. Communication network
4. Positioning component

Intelligent Fabrics, Paints, and More:

 The term “fabric" is used by different vendors, analysts, and IT groups to describe different
things.
 A set of compute, storage, memory and I/O components joined through a fabric interconnect
and the software to configure and manage them.
 A fabric thus provides the capability to reconfigure all system components - server, network,
storage, and specialty engines – at the same time, the flexibility to provide resources within the
fabric to workloads as needed, and the capability to manage systems holistically.
 Services provided by intelligent fabrics are as follows:
1. It automatically adjusts the room temperature when body temperature changes.
2. It monitors body functions such as blood pressure, sugar level, etc.
Cloud-based smart fabrics and paints – The ability to connect devices to the cloud from any place, at any time, will open the door to a wide range of cutting-edge applications. Devices that once had to be read by utility or city employees, such as electric meters and parking meters, will connect to the web and report automatically. Intelligence will be built into the fabrics of our clothes, bedding, and furniture. These intelligent fabrics will provide a wide range of services, including the following:
 Automatically adjust room temperature when body temperature becomes too warm or
too cold.
 Notify rooms when we enter or leave so that lights, music, and other devices are
automatically controlled.
 Monitor body functions such as blood pressure, blood sugar levels, stress, and more, and
notify person and adjust environment to affect those functions.
 Notify others when elderly person has fallen.
 Provide deterrence against mosquitoes and other insects.

The Future of Cloud TV:

 Today, consumers watch video on a variety of connected devices. New Over-The-Top (OTT)
providers such as Netflix are offering direct-to-consumer services with low prices, advanced user
interfaces and easy access to multi-screen video.
 Changing usage patterns brought on by subscriber desire to watch content at the time, location
and on the device of their choosing are increasing content distribution costs.
 Pay TV providers are particularly susceptible to these trends and need to adapt their traditional
TV delivery architectures to offer innovative services that attract and retain customers.
 The traditional Set-Top Box (STB) will disappear. The functions of today's STB hardware will be
carried out in the network and by the connected device itself, eliminating the cost and
complexity of managing home-based STBs.
 Traffic will be all unicast. Over time, device format fragmentation, time-shifting viewing habits
and service personalization will erode broadcast and multicast efficiencies.
 Ultimately, every end user will be served with a unique stream. Services will be deployed in the
cloud.
 Dedicated video platforms will migrate to cloud-based services, reducing costs and accelerating
time to market.
 Operators will move from vertically integrated middleware stacks to more open architectures
with best-of-breed components.
 Cloud DVR technology makes all TV content available on demand, on any device and in any
location.

Cloud-based smart devices:


The cloud's ability to provide internet access from any place, at any time, makes such processing a reality. Some devices may initially be intelligent only with respect to their ability to regulate power consumption, possibly avoiding power use during peak times and costs. Using the cloud for communication, devices can coordinate activities. For example, your car may notify your home automation system that you are down the block and instruct it to light the house, turn on your favorite music, and prompt the refrigerator for a list of ready-to-cook meals.

Home-based Cloud Computing: Today most households have wireless network capabilities that allow family members to connect to the Web and access the sites and content they desire. With the arrival of smart devices, intelligent fabrics, and greater use of radio-frequency identification (RFID) devices, families will expect on-demand, personalized technology solutions. Families will use cloud devices to customize their environments and experiences. Within such an environment, families will want to restrict processing to within the home, meaning that they will not want neighbors to receive signals generated by their devices and clothing. That implies the ability to encrypt a wide range of signals within the home. To that end, you should expect to see cloud-based in-home devices that store family files, maintain appliance settings, download and store movies and TV shows, and more.
Modular Software: With cloud computing, companies no longer have to raise the capital required to fund a large data center. Instead, they can leverage a PaaS solution. Furthermore, companies no longer have to pay expensive licensing fees for various software tools such as database management systems; instead, they can leverage pay-on-demand solutions. Hence developers will release software solutions at a faster rate, bringing solutions to a market that expects high functionality and demands lower cost. 85% of software developed since 2012 is cloud-enabled, and the increase in future data requirements will enable more services through the cloud. Every state and the central government will have their own cloud platform for providing basic services in health, agriculture, social welfare, etc. The Aadhaar card is a major example of a cloud computing project, and banking platforms are moving towards serving the 7 billion people in the world. Stock exchanges will also have to move towards cloud computing to provide efficient, real-time stock details.
Conclusion: Cloud computing is beginning to transform the way enterprises buy and use technology resources and will become even more prominent in the coming years. In the next generation, cloud computing will play an integral role in the life of every human being, because the cloud is the one place where all software, hardware, and devices can connect.

Define time to market. What is meant by faster time to market for a software application?

 Time To Market (TTM) is the length of time it takes from a product being conceived until it is available for sale.
 TTM is important in industries where products are outmoded quickly. A common assumption is
that TTM matters most for first-of-a-kind products, but actually the leader often has the luxury
of time, while the clock is clearly running for the followers.
 Nowadays software companies clearly understand that time costs money and that they need all
possible tools to get their products to market as fast as possible with no compromise to quality.
 So, they expect a wider range of features, a variety of services, scalability, high performance and
flexible pricing out-of-the-box from their cloud providers.
 This motivates hosting vendors to expand their offerings with PaaS and CaaS solutions, and
migrate their current users from commodity VPS to the advanced platforms.
 The bottom line is that success in the mobile market can be driven as much by who is there first
as much as it may be driven by the quality of the applications being delivered; as such,
minimizing the time to market is paramount.
 With so many cloud-based offerings available that can help speed up everything from
development to deployment to runtime operations, it's no wonder that those who are serious
about mobile development are leaning hard on the various PaaS, SaaS and IaaS offerings
available on the market today.

Home-Based Cloud Computing:

 Cloud computing has evolved as a key computing platform for sharing resources and services. People should have a relatively convenient environment for handling home appliances.
 Existing home-appliance control systems do not provide complete control over home appliances and are also difficult to operate from distant places.
 The framework is composed of mobile users, home appliances, and the cloud environment. The mobile device that the user is going to use must have internet connectivity.
 A mobile user can use a smartphone with an internet connection to control and handle home appliances through Web 2.0 blog-based interfaces on a Web 2.0 platform.
 Mobile users can control the home appliances using the Device Profile for Web Services in the cloud environment, and can control them completely, not only switching them on and off but also changing device settings, even from far-away places.
 Home-based healthcare could enable the care recipients to live independently at home.
Healthcare providers could monitor the patients based on their shared daily health data, and
provide some clinical suggestions, as well as giving feedback through reports of medical
examinations that the patients have undergone.
 Cloud computing services can support almost any type of medical software applications for
healthcare organizations. Cloud computing can offer practical solutions as in the new clinical
information management system called "Collaborative Care Solution" that was developed in
November 2010 by IBM and Active Health Management.
 It was beneficial for patients who were suffering from chronic conditions to connect with their
physicians and follow up their prescribed medications.
 Management of data was more efficient in regards to the growing numbers of patients' data and
information through electronic and personal health records.
 This could be viewed from the perspective of data storage and the number of servers needed to cope with these enormous amounts of data.
 What facilitates the function of cloud computing is the usage of smart phones and tablets that
support medical staff and patients to access healthcare services.
 Data storage services can help build a healthcare information integration platform to integrate different healthcare providers. Thus, necessary medical information resources will be shared between healthcare providers and recipients.

Explain in brief mobile cloud computing with architecture. Also give examples, advantages and disadvantages:

 One of the main benefits of cloud computing is reducing downtime and wasted
expenditure for servers and other computer equipment. A given company is required to
purchase the minimum amount of hardware necessary to handle the maximum points of
stress on their system.
 Given situations where the strain and traffic are highly variable this leads to wasted
money. For example, Amazon.com, a pioneer in cloud computing, at times used as little
as 10% of their capacity so that they would have enough capacity to deal with those rarer
high strain times. Mobile Cloud Computing (MCC) at its simplest, refers to an
infrastructure where both the data storage and data processing happen outside of the
mobile device.
 Mobile cloud applications move the computing power and data storage away from mobile
phones and into the cloud, bringing applications and mobile computing to not just smart
phone users but a much broader range of mobile subscribers".
 Mobile cloud applications move the computing power and data storage away from the
mobile devices and into powerful and centralized computing platforms located in clouds,
which are then accessed over the wireless connection based on a thin native client.
Mobile devices face many resource challenges (battery life, storage, bandwidth etc.).
Cloud computing offers advantages to users by allowing them to use infrastructure,
platforms and software by cloud providers at low cost and elastically in an on-demand
fashion.
 Mobile cloud computing provides mobile users with data storage and processing services
in clouds, obviating the need to have a powerful device configuration (e.g. CPU speed,
memory capacity), as all resource-intensive computing can be performed in the cloud.
 In mobile cloud computing, the mobile network and cloud computing are combined, thereby providing optimal services for mobile clients. Unlike traditional mobile computing, where tasks and data are kept on the individual devices, applications run on a remote server and the results are then sent to the client.
 Here the mobile devices are connected to the mobile networks through the base stations;
they will establish and control the connections (air interface) and functional interfaces
between the mobile networks and mobile devices.
 Mobile users send service requests to the cloud through a web browser or desktop application. The information is transmitted to the central processors that are connected to the servers providing mobile network services. Here, services like AAA (Authentication, Authorization and Accounting) can be provided to the users based on the Home Agent (HA) and subscriber data stored in databases.
 Mobile devices are connected to the mobile networks via base stations that establish and
control the connections and functional interfaces between the networks and mobile
devices.
 Mobile users' requests and information are transmitted to the central processors that are
connected to servers providing mobile network services.
 The subscribers' requests are delivered to a cloud through the Internet. In the cloud, cloud
controllers process the requests to provide mobile users with the corresponding cloud
services.

Advantages:
1. Saves battery power
2. Makes execution faster
3. Improves data storage capacity and processing power
4. Improves reliability and availability: Keeping data and applications in the cloud reduces the chance of data loss on the mobile device.
5. Dynamic provisioning: Dynamic on-demand provisioning of resources on a fine-grained,
self-service basis

Disadvantages:
1. The program state (data) must be sent to the cloud server.
2. Network latency can lead to execution delay (a simple offload-or-not check is sketched below).
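
A minimal sketch of that trade-off, deciding whether to offload a task based on estimated transfer time, round-trip latency, and execution times (all numbers are hypothetical):

    def should_offload(local_exec_s, cloud_exec_s, payload_mb, bandwidth_mbps, rtt_s):
        # Offload only if shipping the program state plus remote execution beats running locally.
        transfer_s = (payload_mb * 8) / bandwidth_mbps   # time to send the program state
        remote_total_s = transfer_s + rtt_s + cloud_exec_s
        return remote_total_s < local_exec_s

    # Hypothetical numbers: 12 s locally vs a 5 MB state shipped over a 20 Mbps link.
    print(should_offload(local_exec_s=12.0, cloud_exec_s=1.5,
                         payload_mb=5, bandwidth_mbps=20, rtt_s=0.15))   # True: offloading wins
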

What do you mean by autonomic cloud computing? Explain the system architecture for autonomic cloud management:

 Autonomic Computing is the ability of a distributed system to manage its resources with little or no human intervention. It involves intelligently adapting to the environment and to user requests in such a way that the user does not even notice.
 Autonomic monitoring is mostly implemented on specific layers of the cloud computing
architecture.
 Fig. below shows the high-level architecture enabling autonomic management of SaaS
applications on Clouds.
 SaaS Application Portal : This component hosts the SaaS application using a Web
Service-enabled portal system.
 Autonomic Management System and PaaS Framework: This layer serves as a Platform as a Service. Its architecture comprises autonomic management components integrated at the PaaS level, along with modules enforcing security and energy efficiency.
 Infrastructure as a Service: This layer comprises distributed resources provided by private
(enterprise networks) and public Clouds.
 SaaS is described as a software application deployed as a hosted service and accessed
over the Internet.
 In order to manage the SaaS applications in large scale, the PaaS layer has to coordinate
the Cloud resources according to the SaaS requirements, which is ultimately the user
QoS.
 Application Scheduler: The scheduler is responsible for assigning each task in an application to resources for execution, based on user QoS parameters and the overall cost for the service provider (a simplified version is sketched below).
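
A minimal sketch of such a scheduler, assuming each resource advertises a speed and a price and each task carries a deadline as its QoS parameter (all names and numbers are hypothetical):

    def schedule(task, resources):
        # Assign the task to the cheapest resource that can still meet its QoS deadline.
        feasible = [r for r in resources
                    if task["workload_units"] / r["speed_units_per_s"] <= task["deadline_s"]]
        if not feasible:
            return None   # QoS cannot be met; the manager could scale out instead
        return min(feasible, key=lambda r: r["cost_per_s"])

    # Hypothetical resources advertised by the IaaS layer.
    resources = [
        {"name": "small-vm", "speed_units_per_s": 10, "cost_per_s": 0.01},
        {"name": "large-vm", "speed_units_per_s": 40, "cost_per_s": 0.05},
    ]
    task = {"workload_units": 600, "deadline_s": 30}
    print(schedule(task, resources)["name"])   # large-vm: the small VM would need 60 s and miss the deadline
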

Write short note on Multimedia Cloud Computing:

 Due to the invention of cloud computing, users can nowadays easily access multimedia content over the internet at any time. A user can efficiently store multimedia content of any type and of any size in the cloud after subscribing to it, without difficulty.
 Not only can media content like audio, video and images be stored, but it can also be processed within the cloud, since processing media data locally requires significant computation time and complex hardware.
 After processing, the processed data can easily be received from the cloud through a client, without any need to install complex hardware.
 Fig. below shows fundamental concept of multimedia cloud.
 Thus, Multimedia cloud computing is the processing, accessing and storing of multimedia
contents like audio, video and image using the services and applications available in the
cloud without physically acquiring them.
 Currently many companies' clouds, such as Amazon EC2, Google Music, Dropbox and SkyDrive, provide a content management system within the cloud network.
 The users of these clouds can access the multimedia content for example; the user can view a
video anywhere in the world at any time using their computers, tablets or smart phones.
 Cloud media is, a cloud which has the multimedia content of the owner of that particular
cloud. The media content can be accessed through the multimedia signaling protocols in the
cloud and can be streamed to clients present in computers, tablets, cars and smart phones.
 Fig. below shows relation between cloud media and media cloud.
 Not only processing, but the media content can be shared between clouds using the streaming
protocols like TCP/IP, UDP, RTP, HTTP etc.
 Streaming of media content involves, loading or buffering media data, coding, mixing, rating
and rendering over the service providers.
 Other profiling, packetizing, tokenizing of media contents will be done by the cloud based on
the streaming protocols used and it will be streamed to the client system.
 Cloud media technology offers a number of key benefits to its service providers as well as its users, through reduced implementation time, efficient data storage capacity, and less computation and cost.
 It created a striking impact in the multimedia content processing like editing, storing,
encrypting and decrypting, gaming, streaming, compressing etc.

Energy Aware Cloud Computing:


Energy efficiency is increasingly important for information and communication technologies. The reasons are the increased use of advanced technologies, increased energy costs and the need to reduce GHG (greenhouse gas) emissions. These reasons call for energy-efficient technologies that tend to decrease the overall energy consumption in terms of computation, communication and storage. Cloud computing has recently attracted attention as a promising approach for delivering these advanced technology services by utilizing data center resources. The emerging cloud computing model
facilitates access to computing resources for end users through the internet. Cloud computing is a model
that enables on-demand access to the shared pool of customizable computing resources (e.g. servers,
storage, networks, and applications) and services (Mell and Grance, 2011). These resources can be
rapidly deployed with minimal management efforts and marginal interactions with the service providers.
Providing dynamic computing resources in the cloud computing paradigm enables corporations to scale
up/down the provided services, considering their clients’ demand and the cost of the leveraged
resources that contribute to the operational cost of the information technology (IT) facilities. The
scalability of the cloud services enables smaller businesses to benefit from different categories of
expensive computing-intensive services that were once exclusively available to large enterprises. Cloud
computing remedies the IT barriers, especially for small and medium-sized enterprises, and provides
efficient and economical IT solutions as the cloud providers develop tools and skills to exclusively focus
on handling the computational and IT challenges. With marvelous effects of cloud computing on the IT
industry, large enterprises such as Google, Amazon, and Microsoft endeavor to establish more powerful,
reliable, and economically efficient cloud computing platforms.

Structure of cloud computing: The clouds are divided into four major models with distinct operational
schemes: the public cloud, private cloud, hybrid cloud, and community cloud. In public clouds, the cloud
computing resources are available to the public at the cost of lower security and privacy for the end
users. Such models are not utilized by enterprises with high reliability and security requirements. In
private clouds, the cloud computing resources are exclusively available to a single organization where
the highest degree of control over performance, reliability, and security is offered to the end users at
considerable operational cost. In this model, end users benefit from most features of cloud computing.
In the hybrid cloud model, private IT resources dedicated to an enterprise are integrated with the public
cloud. In this architecture, public and private infrastructure systems operate independently and
communicate over the encrypted connections. This architecture enables the companies to store the
protected data on private clouds while leveraging the computational resources in the public cloud to run
applications that rely on the stored data.

What is Jungle Computing? What are the reasons for using Jungle Computing?

 Jungle computing is a distributed computing paradigm that combines heterogeneous compute resources.


 A Jungle Computing System consists of all compute resources available to end-users, which includes
clusters, clouds, grids, desktop grids, supercomputers, as well as stand-alone machines and even
mobile devices.
 Reasons for using Jungle Computing Systems:
a) An application may require more compute power than is available in any one system a user has access to.
b) Different parts of an application may have different computational requirements, with no single system that meets all requirements.
 From a high-level view, all resources in a Jungle Computing System are in some way equal, all consisting of some amount of processing power, memory and possibly storage.
 End-users perceive these resources as just that: a compute resource to run their application on.

Jungle computing systems:

 When grid computing was introduced over a decade ago, its foremost visionary aim was to provide efficient and transparent "wall-socket" computing over a distributed set of resources.
 Many other distributed computing paradigms have been introduced, including peer-to-peer
computing, volunteer computing and more recently cloud computing.
 These paradigms all share many of the goals of grid computing, eventually aiming to provide end-
users with access to distributed resources with as little effort as possible.
 These new distributed computing paradigms have led to a diverse collection of resources available
to research scientists, which include stand-alone machines, cluster systems, grids, clouds, desktop
grids, etc.
 With clusters, grids and clouds thus being equipped with multi-core processors and many-core 'add-
ons', systems available to scientists are becoming increasingly hard to program and use.
 Despite the fact that the programming and efficient use of many-cores is known to be hard, this is
not the only problem. With the increasing heterogeneity of the underlying hardware, the efficient
mapping of computational problems onto the 'bare metal' has become vastly more complex. Now
more than ever, programmers must be aware of the potential for parallelism at all levels of
granularity.

Docker at a Glance:

 Docker is quickly changing the way that organizations are deploying software at scale.
 Docker is a tool that promises to easily encapsulate the process of creating a distributable artifact
for any application, deploying it at scale into any environment, and streamlining the workflow and
responsiveness of agile software organizations.
 Benefits:
1. Packaging software in a way that leverages the skills developers already have.
2. Bundling application software and required OS file systems together in a single standardized
image format
3. Abstracting software applications from the hardware without sacrificing resources

What is Process Simplification? Explain workflow with and without docker

 Docker can simplify both workflows and communication, and that usually starts with the
deployment story. Fig. below shows workflow with and without docker.
1. Application developers request resources from operations engineers.
2. Resources are provisioned and handed over to developers.
3. Developers script and tool their deployment.
4. Operations engineers and developers tweak the deployment repeatedly.
5. Additional application dependencies are discovered by developers.
6. Operations engineers work to install the additional requirements.
7. Steps 5 and 6 are repeated as needed.
8. The application is deployed.

Fig. Q.18.1(b) shows Docker deployment workflow


1. Developers build the Docker image and ship it to the registry.
2. Operations engineers provide configuration details to the container and provision resources.
3. Developers trigger deployment (a command-level sketch of these steps follows).
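
A minimal sketch of that workflow driven from Python via the standard docker CLI; the image name, registry, and run options are hypothetical placeholders:

    import subprocess

    IMAGE = "registry.example.com/team/web-app:1.0"   # hypothetical registry and tag

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # 1. Developers build the Docker image and ship it to the registry.
    run(["docker", "build", "-t", IMAGE, "."])
    run(["docker", "push", IMAGE])

    # 2-3. Operations provide configuration (an env var and a port here) and deployment is triggered.
    run(["docker", "run", "-d", "--name", "web-app",
         "-e", "APP_ENV=production", "-p", "8080:80", IMAGE])

The same three steps could equally be scripted in a CI/CD pipeline; the point is that the image built by developers is exactly what operations runs.
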

Broad Support and Adoption:

Broad network access is the ability of network infrastructure to connect with a wide variety of
devices, including thin and thick clients, such as mobile phones, laptops, workstations, and tablets, to
enable seamless access to computing resources across these diverse platforms. It is a key
characteristic of cloud technology.
The term broad network access can be traced back to the early days of cloud computing, when accessing
resources was a complex and costly affair. Resources were finite and, for the most part, extremely
limited as devices could only access networking and storage systems that were hosted locally. The cloud
introduced a radical shift by democratizing access to compute, storage, and network resources. Broad
network access is a defining characteristic of the cloud, without which the private and public cloud
services we know today would not exist.
In fact, the National Institute of Standards and Technology (NIST), U.S., lays down five clear traits that
make a cloud a cloud:
 On-demand self-service
 Broad network access
 Resource pooling
 Rapid elasticity
 Measured service

Broad network access is what makes the cloud available to any device from any location. A cloud
provider must ensure that it provides its customers with broad network access capabilities. Otherwise,
one would be able to use the cloud service only from a limited set of platforms.

What is Cloud Adoption?


Cloud adoption refers to moving to or implementing cloud computing in an organization. This can
involve transitioning from on-premises infrastructure to the cloud or using the cloud in addition to on-
premises infrastructure. Here are a few examples of how organizations may adopt cloud computing:

a) Infrastructure as a Service (IaaS): This cloud adoption involves using cloud-based infrastructure, such
as servers, storage, and networking, to host applications and workloads. An example of this would
be an organization that uses Amazon Web Services (AWS) to host their website.
b) Platform as a Service (PaaS): This type of cloud adoption involves using cloud-based platforms, such
as Heroku or Azure, to develop and deploy applications. An example would be an organization that
uses PaaS to build and deploy a new customer relationship management (CRM) system.
c) Software as a Service (SaaS): This type of cloud adoption involves using cloud-based software
applications, such as Google Workspace or Salesforce. An example of this would be an organization
that switches from using Microsoft Office to Google Workspace for its productivity needs.
d) Hybrid cloud: This type of cloud adoption involves using a combination of on-premises infrastructure
and cloud services. An example of this would be an organization that uses on-premises servers for
certain workloads, such as their accounting software, and uses the cloud for other workloads, such
as their CRM system.

Cloud computing vendors are increasing the scope and number of services they provide via the cloud,
and popular public cloud environments like Microsoft Azure, Amazon Web Services, and Google Cloud
Platform are becoming more popular than ever.

Getting the Most from Docker:

To know what Docker is in cloud computing, we first need to understand what containers are in the cloud environment. A container is an executable unit of software that packages application code together with the libraries and dependencies it needs to run anywhere, be it on a desktop or in the cloud. Containers in cloud computing are used as building blocks, which help produce operational efficiency, developer productivity, and environmental consistency. Containers are small, fast, and portable. Because of this, the user is assured of reliability, consistency, and quickness regardless of the environment.

Reason for using Docker:

Listed below are some of the benefits of Docker container;


1. Tailor-made: Most industries want a purpose-built solution. Docker in cloud computing enables its clients to use Docker to organize their software infrastructure.
2. Accessibility: As Docker is a cloud framework, it is accessible from anywhere, at any time, with high efficiency.
3. Operating system support: Containers take up less space than virtual machines; they are lightweight, and several containers can operate simultaneously.
4. Performance: Containers have better performance as they are hosted in a single docker engine.
5. Speed: No requirement for OS to boot. Applications are made online in seconds. As the business
environment is constantly changing, technological up-gradation needs to keep pace for
smoother workplace transitions. Docker helps organizations with the speedy delivery of service.
6. Flexibility: They are a very agile container platform. It is deployed easily across clouds, providing
users with an integrated view of all their applications across different environments. Easily
portable across different platforms.
7. Scalable: It helps create immediate impact by saving on recoding time, reducing costs, and
limiting the risk of operations. Containerization helps scale easily from the pilot stage to large-
scale production.
8. Automation: Docker works on Software as a service and Platform as a service model, which
enables organizations to streamline and automate diverse applications. Docker improves the
efficiency of operations as it works with a unified operating model.
9. Space Allocation: Data volumes can be shared and reused among multiple containers.

Even though there are a lot of benefits associated with docker, it has some limitations as well, which are
as follows:

1. Missing features: Many features, like container self-registration and self-inspection, are still in progress.
2. Cross-platform compatibility: One of the issues with Docker is that if an application is designed to run in a Windows container, it cannot run on other operating systems.

Docker Workflow:

In this section, we will focus on the Docker Engine as well as its different components. This will help us better understand how Docker works before we move on to Docker architecture. The Docker Engine is the power that enables developers to perform various functions using this container-based app. You can use the components listed below to build, package, ship, and run applications.

1. Docker Daemon
It is the background process that continuously works to help you manage images, storage volumes,
networks, and containers. It is always looking for Docker API requests to process them.
2. Docker CLI
It is an interface client that interacts with Docker Daemon. It helps developers simplify the process of
managing container instances. It is one of the primary reasons why developers prefer Docker over
other similar applications.

3. Docker Engine REST API


It facilitates interactions between Docker daemon and applications. An HTTP client is usually
required to access these APIs.

Docker Architecture:
Docker architecture is a client-server-based architecture. It has four major components, which are mentioned below:

a) Docker host
b) Docker client
c) Docker registry
d) Docker objects

In the initial phase, the Docker client interacts with the daemon, which is responsible for performing much of the work that goes into developing, running, and distributing Docker containers. The Docker daemon and the client can either run on a single system, or the developer can use a remote daemon and connect it to a local Docker client. A REST API is used to establish communication between the Docker daemon and the client, either over a network interface or over UNIX sockets (a one-line query against the daemon's UNIX socket is sketched below).
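
A minimal sketch of calling the Docker Engine REST API over its default UNIX socket, assuming curl is installed and the daemon listens on /var/run/docker.sock (the socket path can differ per installation):

    import subprocess

    # /version is a standard Docker Engine API endpoint returning daemon and API version info.
    result = subprocess.run(
        ["curl", "--silent", "--unix-socket", "/var/run/docker.sock", "http://localhost/version"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout)   # JSON describing the daemon version, API version, OS, etc.

This is the same API the Docker CLI itself uses; higher-level SDKs simply wrap these HTTP endpoints.
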

Basic Terms and Concepts:


Virtualization Structures/Tools and Mechanisms:

 Before virtualization, the OS manages the hardware.
 After virtualization, a virtualization layer is inserted between the hardware and the OS.
 The virtualization layer converts portions of the real hardware into virtual hardware.
 Thus different operating systems, such as Linux and Windows, can run on the same physical machine simultaneously.
 Depending on the position of the virtualization layer, there are several classes of VM architectures:
1. the hypervisor architecture
2. paravirtualization
3. host-based virtualization

Hypervisor:
 A hardware virtualization technique allowing multiple operating systems, called guests, to run on a host machine.
 Also called the Virtual Machine Monitor (VMM).
 Supports hardware-level virtualization on bare-metal devices like CPU, memory, disk and network interfaces.
 Sits directly between the physical hardware and its OS.
 Provides hypercalls for the guest OSes and applications.
 It may assume a micro-kernel architecture like Microsoft Hyper-V, which includes only the basic and unchanging functions (such as physical memory management and processor scheduling); the device drivers and other changeable components are outside the hypervisor.
 Or it can assume a monolithic hypervisor architecture like VMware ESX for server virtualization, which implements all the above functions, including those of the device drivers.
 So the size of the hypervisor code of a micro-kernel hypervisor is smaller than that of a monolithic hypervisor.
 A hypervisor must be able to convert physical devices into virtual resources dedicated to the deployed VM to use.
