CCD Chapter 1.0
Distributed Systems:
• A distributed system is a composition of multiple independent
systems that appear to users as a single entity. The purpose of
distributed systems is to share resources and to use them effectively
and efficiently. Distributed systems possess characteristics such as
scalability, concurrency, continuous availability, heterogeneity, and
independence of failures. The main problem with this model, however,
was that all the systems had to be present at the same geographical
location.
Mainframe computing:
• Mainframes, which first came into existence in 1951, are highly
powerful and reliable computing machines. They are responsible for
handling large workloads with massive input-output operations. Even
today they are used for bulk processing tasks such as online
transaction processing. These systems have almost no downtime and
offer high fault tolerance. After distributed computing, they
increased the processing capability of systems, but they were very
expensive.
Cluster computing:
• In the 1980s, cluster computing emerged as an alternative to
mainframe computing. Each machine in a cluster was connected to the
others by a high-bandwidth network. Clusters were far cheaper than
mainframe systems while being capable of comparable computation, and
new nodes could easily be added to a cluster when required.
Grid computing:
• In the 1990s, the concept of grid computing was introduced: systems
placed at entirely different geographical locations were connected
via the internet. These systems belonged to different organizations,
so the grid consisted of heterogeneous nodes. Although this solved
some problems, new ones emerged as the distance between the nodes
increased.
Virtualization:
• Virtualization was introduced nearly 40 years ago. It refers to the
process of creating a virtual layer over the hardware that allows a
user to run multiple instances simultaneously on the same physical
hardware. It is a key technology of cloud computing and the base on
which major cloud computing services such as Amazon EC2 and VMware
vCloud are built.
Web 2.0:
• Web 2.0 is the interface through which cloud computing services
interact with clients. It is because of Web 2.0 that we have
interactive and dynamic web pages. It also increases flexibility
among web pages. Popular examples of Web 2.0 include Google Maps and
Facebook.
Service orientation:
• Service orientation acts as a reference model for cloud computing.
It supports low-cost, flexible, and evolvable applications. Two
important concepts were introduced in this computing model: Quality
of Service (QoS) and Software as a Service (SaaS).
Utility computing:
• Utility computing is a computing model that defines service
provisioning techniques for services such as compute, storage, and
infrastructure, all of which are provisioned on a pay-per-use basis.
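To make the pay-per-use idea concrete, the following minimal billing
sketch shows how metered usage might be turned into a charge; the
resource names and unit rates are invented for illustration and do not
reflect any provider's actual pricing.

```python
# Hypothetical pay-per-use billing sketch: charges accrue only for
# the resources actually consumed, metered per hour / per GB.
RATES = {
    "compute_hours": 0.05,      # assumed $/hour for a small instance
    "storage_gb_month": 0.02,   # assumed $/GB-month of storage
    "egress_gb": 0.09,          # assumed $/GB of outbound traffic
}

def monthly_bill(usage: dict) -> float:
    """Sum each metered quantity multiplied by its unit rate."""
    return round(sum(RATES[item] * amount for item, amount in usage.items()), 2)

if __name__ == "__main__":
    usage = {"compute_hours": 720, "storage_gb_month": 50, "egress_gb": 10}
    # 720*0.05 + 50*0.02 + 10*0.09 = 37.9
    print(monthly_bill(usage))
```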
Characteristics of Cloud Computing:
1. Resource Pooling
This means that the cloud provider uses a multi-tenant model to
deliver computing resources to various customers. Physical and
virtual resources are dynamically allocated and reassigned according
to customer demand. In general, the customer has no control over or
knowledge of the exact location of the resources provided, but may be
able to choose a location at a higher level of abstraction, such as a
country or region. (A toy pooling sketch appears after this list.)
2. On-Demand Self-Service
This is one of the most important and useful features of cloud
computing: a user can provision and track computing capabilities such
as server uptime, capacity, and network storage on an ongoing basis,
without requiring human interaction with the provider.
3. Easy Maintenance
The servers are easy to maintain and downtime is low; in most cases
there is no downtime at all. Cloud services are updated regularly,
and each update tends to be more system friendly and to fix bugs
faster than the previous one.
5. Availability
The capabilities of the cloud can be modified and extended according
to usage, which lets the consumer buy additional cloud storage for a
very small price if necessary.
6. Automatic System
Cloud computing automatically analyses the data needed and supports
metering capability at some level of service. Usage can be tracked,
managed, and reported, which provides accountability for both the
host and the customer.
7. Economical
It is a one-time investment, since the company (host) buys the
storage once and can then make it available to many companies, saving
those companies from monthly or annual costs. Only the amounts spent
on basic maintenance and a few additional costs remain, and these are
much smaller.
8. Security
Cloud security is one of the best features of cloud computing.
Providers keep snapshots of the stored data, so even if one of the
servers is damaged the data is not lost. The data is kept on storage
devices that unauthorised users cannot access or misuse, and the
storage service itself is fast and reliable.
9. Pay as you go
In cloud computing, users pay only for the services or storage space
they actually use, with no hidden or additional charges. The service
is economical, and some space is often allocated free of charge.
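Returning to Resource Pooling (item 1 above), here is a toy sketch of
the idea; the class and tenant names are invented for illustration and
are not any real cloud API. A shared pool of vCPUs is handed out to
tenants on demand and reclaimed when released.

```python
# Toy illustration of resource pooling: one shared pool of vCPUs is
# allocated to multiple tenants on demand and reclaimed on release.
class ResourcePool:
    def __init__(self, total_vcpus: int):
        self.free = total_vcpus
        self.allocations = {}  # tenant name -> vCPUs currently held

    def allocate(self, tenant: str, vcpus: int) -> bool:
        if vcpus > self.free:
            return False  # pool exhausted; a real cloud would queue or scale
        self.free -= vcpus
        self.allocations[tenant] = self.allocations.get(tenant, 0) + vcpus
        return True

    def release(self, tenant: str) -> None:
        self.free += self.allocations.pop(tenant, 0)

pool = ResourcePool(total_vcpus=16)
pool.allocate("tenant-a", 4)
pool.allocate("tenant-b", 8)
print(pool.free)      # 4 vCPUs still unallocated
pool.release("tenant-a")
print(pool.free)      # 8 after tenant-a's resources are reclaimed
```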
Challenges of Cloud Computing:
Cost
Cloud computing is affordable, but adapting the cloud to a customer's
specific demands can sometimes be expensive, which can hinder small
businesses, since tailoring the cloud as demand changes may cost
more. Furthermore, transferring data from the cloud back to
on-premises systems can also be costly.
Downtime
Downtime is the most commonly cited cloud computing challenge, since
no cloud provider guarantees a platform free from downtime. The
internet connection also plays an important role: a company with an
unreliable internet connection will face downtime whenever it tries
to reach its cloud services.
Lack of resources and expertise
The cloud industry also faces a shortage of resources and expertise,
which many businesses hope to overcome by hiring new, more
experienced employees. These employees will not only help solve the
company's challenges but also train existing staff to benefit the
company. Currently, many IT employees are working to improve their
cloud computing skills, and it is difficult for executives when staff
are under-qualified; employees with exposure to the latest
innovations and the associated technology will be increasingly
valuable to businesses.
Vendor lock-in
Vendor lock-in in cloud computing means that clients become reliant
on (i.e. locked in to) the implementation of a single cloud provider
and cannot switch to another vendor in the future without significant
costs, regulatory restrictions, or technological incompatibilities.
The lock-in situation can be seen in applications built for specific
cloud platforms, such as Amazon EC2 or Microsoft Azure, which are not
easily transferred to any other cloud platform, leaving users
vulnerable to changes made by their providers.
In practice, the issue of lock-in arises when, for example, a company
decides to change cloud providers (or to integrate services from
different providers) but cannot move its applications or data across
the different cloud services, because the semantics of the providers'
resources and services do not correspond. This heterogeneity of cloud
semantics and APIs creates technological incompatibility, which in
turn makes interoperability and portability a challenge.
This makes it very complicated and difficult to interoperate,
cooperate, port, handle, and maintain data and services. For these
reasons, from the company's point of view, it is important to retain
the flexibility to change providers according to business needs, or
even to keep certain less security-critical components in-house
because of these risks.
Vendor lock-in therefore hinders interoperability and portability
between cloud providers, and overcoming it is the way for cloud
providers and clients to become more competitive.
Types of Hardware Virtualization:
• Full Virtualization
With full virtualization, one of the main types of hardware
virtualization, VMs run their own operating systems and applications
just as if they were on separate physical machines. This allows for
great flexibility and compatibility: you can have VMs running
different operating systems, such as Windows, Linux, or even more
exotic ones, all coexisting peacefully on the same physical hardware.
Advantages:
One of the key advantages is isolation. Each VM operates in its
own virtual bubble, protected from the chaos that might arise
from other VMs sharing the same hardware.
Furthermore, full virtualization enables the migration of VMs
between physical hosts. Imagine the ability to move a running
VM from one physical server to another, like a teleportation
trick. This live migration feature allows for workload balancing,
hardware maintenance without downtime, and disaster
recovery.
Full virtualization also plays a vital role in testing and
development environments. It allows developers to create different
VMs for software testing without needing dedicated physical machines,
which saves them a lot of money, time, and effort in the long run.
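As a concrete, hedged illustration of full virtualization in practice,
the sketch below assumes a Linux host running KVM/QEMU with the
libvirt Python bindings installed; it simply lists the VMs defined on
that host and whether each one is running.

```python
# Minimal sketch: enumerate VMs on a KVM/QEMU host via libvirt.
# Assumes the libvirt-python package and a local qemu:///system hypervisor.
import libvirt

conn = libvirt.openReadOnly("qemu:///system")  # read-only hypervisor connection
try:
    for dom in conn.listAllDomains():
        # Each domain is a full VM with its own guest OS and virtual hardware.
        state = "running" if dom.isActive() else "shut off"
        print(f"{dom.name():20s} {state}")
finally:
    conn.close()
```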
• Emulation Virtualization
Emulation virtualization, the next of the hardware virtualization
types, relies on a clever technique known as hardware emulation.
Through hardware emulation, a virtual machine monitor, or hypervisor,
creates a simulated hardware environment within each virtual machine.
This simulated environment replicates the characteristics and
behaviour of the desired hardware platform, even if the
underlying physical hardware is different. It's like putting on a
digital costume that makes the virtual machine look and feel like
it's running on a specific type of hardware.
Advantages:
But how does this aid in enabling hardware virtualization? Well,
the main advantage of emulation virtualization lies in its
flexibility and compatibility. It enables virtual machines to run
software that may be tied to a specific hardware platform,
without requiring the exact hardware to be present.
This flexibility is particularly useful in scenarios where legacy
software or operating systems need to be preserved or migrated
to modern hardware. Emulation virtualization allows these
legacy systems to continue running on virtual machines,
ensuring their longevity and compatibility with new hardware
architectures.
It is a powerful tool in the virtualization magician's arsenal,
allowing us to transcend the limitations of physical hardware
and embrace a world of endless possibilities.
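To show the core idea of emulation in miniature, here is a toy
interpreter for a made-up two-instruction guest architecture;
everything in it is invented for illustration and bears no relation to
any real hypervisor's internals.

```python
# Toy software emulator: the host interprets instructions written for a
# made-up guest architecture, so guest code never runs natively on the CPU.
def emulate(program):
    registers = {"r0": 0, "r1": 0}
    for op, *args in program:
        if op == "LOAD":        # LOAD reg, immediate
            reg, value = args
            registers[reg] = value
        elif op == "ADD":       # ADD dst, src  (dst += src)
            dst, src = args
            registers[dst] += registers[src]
        else:
            raise ValueError(f"unknown guest instruction: {op}")
    return registers

# "Guest" program for the imaginary architecture.
guest_program = [("LOAD", "r0", 2), ("LOAD", "r1", 40), ("ADD", "r0", "r1")]
print(emulate(guest_program))   # {'r0': 42, 'r1': 40}
```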
• Para-Virtualization
Unlike other types of hardware virtualization, para-virtualization
requires special coordination between the virtual machine and the
hypervisor. The guest operating system running inside the virtual
machine undergoes slight modifications. These modifications introduce
specialised API calls (hypercalls) that allow the guest operating
system to communicate directly with the hypervisor.
Advantages:
This direct communication eliminates the need for certain
resource-intensive tasks, such as the hardware emulation required in
full virtualization. By bypassing these tasks, para-virtualization
can achieve higher performance and efficiency than other
virtualization techniques.
Para-virtualization shines in scenarios where performance is
paramount. It's like having a race car driver and a skilled
navigator working together to achieve the fastest lap times. By
leveraging the direct communication between the guest
operating system and the hypervisor, para-virtualization
minimises the overhead and latency associated with traditional
virtualization approaches.
This performance boost is particularly beneficial for high
performance computing, real-time systems, and I/O-intensive
workloads. It's like having a turbocharger that boosts the virtual
machine's performance, enabling it to handle demanding tasks
with efficiency and precision.
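The toy sketch below contrasts the two code paths in spirit; all names
are invented for illustration and are not a real hypervisor interface.
An emulated guest traps and simulates privileged operations one at a
time, while a para-virtualized guest issues a single direct hypercall
for the same work.

```python
# Conceptual contrast (names are illustrative, not a real hypervisor API):
# full virtualization emulates each privileged instruction, while
# para-virtualization lets the modified guest call the hypervisor directly.
class Hypervisor:
    def __init__(self):
        self.disk = []

    def emulate_privileged_instruction(self, instruction):
        # Trap-and-emulate path: decode and simulate one instruction at a time.
        opcode, payload = instruction
        if opcode == "OUT_DISK_BYTE":
            self.disk.append(payload)

    def hypercall_write_block(self, data):
        # Para-virtual path: one explicit call transfers a whole block.
        self.disk.extend(data)

hv = Hypervisor()

# Unmodified guest: every byte of a write becomes a trapped instruction.
for byte in b"hello":
    hv.emulate_privileged_instruction(("OUT_DISK_BYTE", byte))

# Para-virtualized guest: the modified driver issues a single hypercall.
hv.hypercall_write_block(b"hello")

print(bytes(hv.disk))   # b'hellohello'
```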
Advantages of Hardware Virtualization:
Enhanced Scalability:
Hardware virtualization enables you to easily scale your
infrastructure to meet changing demands. Whether you need to
add more virtual machines or allocate additional resources to
existing VMs, virtualization allows for seamless scalability. It's
like having the ability to expand your stage and accommodate
more performers as the audience grows.
Cost Savings:
One of the major benefits of hardware virtualization is
significant cost savings. By consolidating multiple physical
servers into a virtualized environment, you reduce the need for
additional hardware, power consumption, and cooling costs, and
sharing resources efficiently in this way helps you optimise your
expenses.
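As a back-of-the-envelope illustration, the figures below are invented
assumptions rather than benchmark data; the sketch compares the yearly
cost of many under-utilised physical servers with the cost of a few
consolidated virtualization hosts.

```python
# Rough consolidation estimate with made-up numbers: N lightly loaded
# physical servers are consolidated onto fewer virtualization hosts.
physical_servers = 20
cost_per_physical = 3000      # assumed yearly cost per server (power, space, upkeep)
vms_per_host = 10             # assumed consolidation ratio
cost_per_host = 5000          # assumed yearly cost per larger virtualization host

hosts_needed = -(-physical_servers // vms_per_host)   # ceiling division -> 2 hosts
before = physical_servers * cost_per_physical          # 60000
after = hosts_needed * cost_per_host                   # 10000
print(f"hosts needed: {hosts_needed}, yearly saving: {before - after}")
```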
Improved Disaster Recovery and Business Continuity:
Because virtual machines are essentially files on storage, they can
be snapshotted, backed up, and restarted on a different physical
host, which shortens recovery times and helps keep services running
after a hardware failure.
Enhanced Security:
Hardware virtualization can improve security by isolating
virtual machines from each other. Even if one VM is
compromised, the others remain unaffected.