CC NOTES
Cloud Computing means storing and accessing data and programs on
remote servers hosted on the Internet instead of on the computer’s
hard drive or a local server. Cloud computing is also referred to as
Internet-based computing: it is a technology in which resources are
provided as a service to the user over the Internet. The stored data
can be files, images, documents, or any other kind of storable content.
The following are some of the operations that can be performed with
Cloud Computing:
Storage, backup, and recovery of data
Delivery of software on demand
Development of new applications and services
Streaming videos and audio
Virtualization
Types of Virtualization
1. Application Virtualization
2. Network Virtualization
3. Desktop Virtualization
4. Storage Virtualization
5. Server Virtualization
6. Data virtualization
3. Desktop Virtualization: With desktop virtualization, your operating
system is stored on a server and can be accessed from anywhere on any
device. It’s great for users who need flexibility, as it simplifies software
updates and provides portability.
4. Storage Virtualization: This combines storage from different servers
into a single system, making it easier to manage. It ensures smooth
performance and efficient operations even when the underlying hardware
changes or fails.
5. Server Virtualization: This splits a physical server into multiple virtual
servers, each functioning independently. It helps improve performance and
cut costs, and makes tasks like server migration and energy management
easier.
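The storage-pooling idea in point 4 can be sketched in a few lines of Python. This is a toy model, not any real product's API: the class name and disk sizes are invented for illustration.

```python
# Toy sketch of storage virtualization: several physical "disks"
# (simulated here as bytearrays) are pooled into one logical address
# space, so callers never see where their bytes actually live.

class VirtualVolume:
    def __init__(self, disk_sizes):
        self.disks = [bytearray(size) for size in disk_sizes]

    def _locate(self, offset):
        # Map a logical offset to a (disk, local offset) pair.
        for disk in self.disks:
            if offset < len(disk):
                return disk, offset
            offset -= len(disk)
        raise IndexError("offset beyond volume capacity")

    def write(self, offset, data):
        for i, byte in enumerate(data):
            disk, local = self._locate(offset + i)
            disk[local] = byte

    def read(self, offset, length):
        out = bytearray()
        for i in range(length):
            disk, local = self._locate(offset + i)
            out.append(disk[local])
        return bytes(out)

volume = VirtualVolume([4, 4])   # two 4-byte "disks", 8 bytes total
volume.write(2, b"abcd")         # spans the disk boundary transparently
print(volume.read(2, 4))         # b'abcd'
```

The caller writes across the boundary between the two backing disks without knowing it, which is exactly the transparency the paragraph above describes.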
Pros of Virtualization
Efficient Hardware Utilization: With the help of virtualization, hardware
is used efficiently by both the user and the cloud service provider. The
user needs less physical hardware, which reduces cost; from the service
provider’s point of view, hardware virtualization reduces the amount of
hardware that must be procured from vendors.
High Availability: One of the main benefits of virtualization is that it
provides advanced features that allow virtual instances to be available
at all times.
Disaster Recovery is Efficient and Easy: With the help of virtualization,
data recovery, backup, and duplication become very easy. In the
traditional approach, if a server is damaged in a disaster, there is
little assurance that the data can be recovered. With virtualization
tools, real-time backup, recovery, and mirroring become straightforward
tasks and greatly reduce the risk of data loss.
Virtualization Saves Energy: Moving from physical servers to virtual
servers reduces the number of servers, which lowers monthly power and
cooling costs and therefore saves money.
Quick and Easy Setup: In traditional methods, setting up physical
systems and servers is very time-consuming: hardware must be purchased,
shipped, installed, and then loaded with the required software. With
virtualization, the entire process takes far less time, resulting in a
productive setup.
Cloud Migration Becomes Easy: Companies that have already invested
heavily in servers often hesitate to shift to the cloud. It is usually
more cost-effective, though, because the data on their servers can
easily be migrated to cloud servers, saving on maintenance charges,
power consumption, cooling costs, server-maintenance engineers, and so on.
Resource Optimization: Virtualization allows efficient utilization of
physical hardware by running multiple virtual machines (VMs) on a
single physical server. This consolidation leads to cost savings in terms
of hardware, power, cooling, and space.
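As a rough illustration of the consolidation saving, here is a back-of-the-envelope sketch. Every number in it (per-workload load, utilization target) is an assumption chosen for the example, not a benchmark.

```python
# Back-of-the-envelope consolidation arithmetic with made-up numbers:
# 40 lightly loaded physical servers, each averaging 15% utilization,
# consolidated onto virtualized hosts kept below 80% utilization.
import math

workloads = 40
load_per_workload = 0.15   # fraction of one host each workload uses (assumed)
target_utilisation = 0.80  # headroom kept per host (assumed)

vms_per_host = math.floor(target_utilisation / load_per_workload)  # 5
hosts_needed = math.ceil(workloads / vms_per_host)                 # 8

print(f"{workloads} physical servers -> {hosts_needed} virtualized hosts")
```

Under these assumptions, 40 servers collapse to 8 hosts, which is where the hardware, power, cooling, and space savings come from.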
Cons of Virtualization
High Initial Investment: While virtualization reduces costs in the long
run, the initial setup costs for storage and servers can be higher than a
traditional setup.
Complexity: Managing virtualized environments can be complex,
especially as the number of VMs increases.
Security Risks: Virtualization introduces additional layers, which may
pose security risks if not properly configured and monitored.
Learning New Infrastructure: As organizations shift from on-premises
servers to the cloud, they need staff who can work with the cloud
comfortably. They must either hire new IT staff with the relevant skills
or train existing staff, which increases the company’s costs.
Data Can Be at Risk: Running virtual instances on shared resources
means that data is hosted on third-party infrastructure, which leaves it
in a vulnerable position. An attacker may target the data or attempt
unauthorized access; without a security solution, the data is under
threat.
Data Virtualization
Data virtualization is used to combine data from different sources
into a single, unified view without the need to move or store the
data anywhere else. It works by running queries across various
data sources and pulling the results together in memory.
To make things easier, it adds a layer that hides the complexity of
how the data is stored. This means users can access and analyze
data directly from its source in a seamless way, thanks to
specialized tools.
Working of Data Virtualization
Data virtualization works in the following manner:
1. Data Abstraction
The process starts by pulling data from different sources—like
databases, cloud storage or APIs—and combining it into a single
virtual layer. This layer makes everything look unified and easy to
access without worrying about where the data lives.
2. Data Integration
Instead of copying or moving data, the platform integrates it. It
combines data from various systems into a single view, so you
can work with it all in one place, even if it’s coming from
completely different sources.
3. Querying and Transformation
Users can query the data using familiar tools like SQL or APIs. The
platform handles any transformations or joins in real time, pulling
everything together seamlessly—even if the data comes from
multiple systems.
4. Real-time Access
One of the best things about data virtualization is that you get
real-time or near-real-time access to up-to-date information. You
don’t have to wait for batch processes to refresh the data
because the system fetches it directly from the source.
5. Data Governance and Security
All access is managed centrally, so it’s easy to control who can
see what. Security and compliance rules are applied across all
data sources, ensuring sensitive information is protected while
giving the right people access to what they need.
6. Performance Optimization
To keep things running smoothly, the platform uses techniques
like caching frequently used data, optimizing queries, and
creating virtual indexes. This ensures that even complex queries
are fast and don’t slow down the source systems.
7. User Access
Finally, the data is made available through familiar tools like
Tableau, Power BI, or even custom applications. Users don’t need
to worry about the data’s location or structure—they just get a
clean, unified view that’s ready to use.
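The steps above can be sketched as a toy data-virtualization layer in Python. The two "sources", the class, and the field names are all invented for illustration; a real platform would speak SQL to actual databases, but the shape of the idea is the same: fetch on demand, cache, and join in memory without copying data anywhere.

```python
# Toy data-virtualization layer: two "sources" are exposed through one
# interface and joined in memory (step 3), with a small result cache
# (step 6). Sources are callables, standing in for live systems.

class VirtualLayer:
    def __init__(self, sources):
        self.sources = sources      # name -> callable returning rows
        self.cache = {}             # caches frequently used results

    def fetch(self, name):
        # Pull rows straight from the source on first access (step 4).
        if name not in self.cache:
            self.cache[name] = self.sources[name]()
        return self.cache[name]

    def join(self, left, right, key):
        # Combine rows from two sources into one unified view (step 2).
        index = {row[key]: row for row in self.fetch(right)}
        return [{**row, **index[row[key]]}
                for row in self.fetch(left) if row[key] in index]

layer = VirtualLayer({
    "crm":     lambda: [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Alan"}],
    "billing": lambda: [{"id": 1, "balance": 120.0}],
})
print(layer.join("crm", "billing", "id"))
# [{'id': 1, 'name': 'Ada', 'balance': 120.0}]
```

The user sees one unified row combining both sources, while neither source's data was moved or duplicated into a separate store.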
Features of Data Virtualization
Time-to-market acceleration from data to final product:
Virtual data objects can be created considerably more quickly
than with existing ETL tools and databases, since the data they
contain is already integrated. Customers can therefore obtain the
information they require more easily.
One-Stop Security: The contemporary data architecture
makes it feasible to access data from a single location. Data
can be secured down to the row and column level thanks to the
virtual layer that grants access to all organizational data.
Authorizing numerous user groups on the same virtual dataset
is feasible by using data masking, anonymization, and
pseudonymization.
Combine data easily from different sources: The virtual
data layer makes it simple to incorporate distributed data from
data warehouses, big data platforms, data lakes, cloud solutions,
and machine learning into user-required data objects.
Flexibility: Data virtualization makes it feasible to react quickly
to new developments in various sectors, up to ten times faster
than conventional ETL and data-warehousing methods. By providing
integrated virtual data objects, data virtualization enables you to
respond instantly to fresh data requests. This eliminates the need
to copy data across data layers; the data is simply made virtually
accessible.
Type-I hypervisors:
Type-I hypervisors run directly on top of the hardware. As a
result, they take the place of operating systems and communicate
directly with the ISA interface offered by the underlying
hardware, which they replicate to allow guest operating
systems to be managed. Because it runs natively on hardware,
this sort of hypervisor is also known as a native virtual
machine.
Type-II hypervisors:
To deliver virtualization services, Type II hypervisors require
the assistance of an operating system. This means they’re
operating system-managed applications that communicate
with it via the ABI and simulate the ISA of virtual hardware for
guest operating systems. Because it is housed within an
operating system, this form of hypervisor is also known as a
hosted virtual machine.
A hypervisor has a simple user interface and needs only some storage
space. It exists as a thin layer of software that performs hardware
management functions in order to establish a virtualization
management layer. Device drivers and support software are optimized
for provisioning virtual machines, while many standard operating-system
functions are not implemented. Essentially, this type of virtualization
system is used to minimize the performance overhead inherent in
coordinating multiple VMs on the same hardware platform.
Hardware compatibility is another challenge for hardware-based
virtualization. Because the virtualization layer interacts directly
with the host hardware, all the associated drivers and support
software must be compatible with the hypervisor. Device drivers
available to other operating systems may not be available to
hypervisor platforms. Moreover, host management and administration
features may not offer the range of advanced functions that are
common in operating systems.
Note: Hyper-V communicates with the underlying hardware
mostly through vendor-supplied drivers.
Server Virtualization
2. Para Virtualization –
Advantages:
Easier
Enhanced Performance
No emulation overhead
Limitations:
Requires modification to a guest operating system
3. Full Virtualization –
It is very similar to para virtualization. The hypervisor can emulate
the underlying hardware when necessary. It traps the machine
operations the operating system uses to perform I/O or modify the
system status; these operations are then emulated in software, and
the status codes returned are consistent with what the real hardware
would deliver. This is why an unmodified operating system is able to
run on top of the hypervisor.
Example: VMware ESX Server uses this method. A customized
Linux version known as the Service Console is used as the
administrative operating system. It is not as fast as para
virtualization.
Advantages:
No modification to the Guest operating system is required.
Limitations:
Complex
Slower due to emulation
Installing a new device driver is difficult.
4. Hardware-Assisted Virtualization –
It is similar to full virtualization and paravirtualization in terms of
operation, except that it requires hardware support. Much of the
hypervisor overhead due to trapping and emulating I/O operations
and status instructions executed within a guest OS is handled by
the hardware extensions of the x86 architecture. An unmodified OS
can run because the hardware support for virtualization handles
hardware access requests, privileged and protected operations, and
communication with the virtual machine.
Examples: AMD-V (Pacifica) and Intel VT (Vanderpool) provide
hardware support for virtualization.
Advantages:
No modification to a guest operating system is required.
Very little hypervisor overhead
Limitations:
Hardware support required
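On Linux, one common way to check whether these hardware extensions are present is to look for the vmx (Intel VT-x) or svm (AMD-V) flags in /proc/cpuinfo. A hedged sketch; the function name is invented, and it returns None on systems where the file is unavailable:

```python
# Check /proc/cpuinfo for the CPU flags that indicate hardware
# virtualization support: "vmx" for Intel VT-x, "svm" for AMD-V.
# Returns True/False on Linux, None where /proc/cpuinfo is missing.

def hardware_virtualization_supported(cpuinfo_path="/proc/cpuinfo"):
    try:
        with open(cpuinfo_path) as f:
            text = f.read()
    except OSError:
        return None                      # e.g. non-Linux systems
    flags = set()
    for line in text.splitlines():
        if line.startswith("flags"):     # one "flags" line per logical CPU
            flags.update(line.split(":", 1)[1].split())
    return "vmx" in flags or "svm" in flags

print(hardware_virtualization_supported())
```

Note that a False result can also mean the extensions exist but are disabled in firmware, so this check is a first approximation rather than a definitive answer.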
5. Kernel level Virtualization –
Instead of using a hypervisor, it runs a separate version of the
Linux kernel and sees the associated virtual machine as a user-
space process on the physical host. This makes it easy to run
multiple virtual machines on a single host. A device driver is used
for communication between the main Linux kernel and the virtual
machine.
Processor support is required for virtualization (Intel VT or
AMD-V). A slightly modified QEMU process is used as the display and
execution container for the virtual machines. In many ways,
kernel-level virtualization is a specialized form of server
virtualization.
Examples: User-Mode Linux (UML) and the Kernel-based Virtual
Machine (KVM)
Advantages:
No special administrative software is required.
Very little overhead
Limitations:
Hardware support required
6. System Level or OS Virtualization –
Runs multiple, logically distinct environments on a single
instance of the operating-system kernel. This is also called the
shared-kernel approach, since all virtual machines share a common
kernel of the host operating system. It is based on the change-root
concept, “chroot”.
chroot starts during bootup. The kernel uses a root filesystem to
load drivers and perform other early-stage system initialization
tasks. It then switches to another root filesystem using the chroot
command, mounting an on-disk filesystem as its final root
filesystem, and continues system initialization and configuration
within that filesystem.
The chroot mechanism of system-level virtualization is an
extension of this concept. It enables the system to start virtual
servers with their own sets of processes that execute relative to
their own filesystem root directories.
The main difference between system-level and server
virtualization is whether different operating systems can be run
on different virtual systems. If all virtual servers must share the
same copy of the operating system it is system-level virtualization
and if different servers can have different operating systems
( including different versions of a single operating system) it is
server virtualization.
Examples: FreeVPS, Linux Vserver, and OpenVZ are some
examples.
Advantages:
Significantly more lightweight than complete machines (including a
kernel)
Can host many more virtual servers
Enhanced Security and isolation
Virtualizing an operating system usually has little to no
overhead.
Live migration is possible with OS Virtualization.
It can also leverage dynamic container load balancing between
nodes and clusters.
With OS virtualization, the file-level copy-on-write (CoW) method
is possible, making it easier to back up data, more space-efficient,
and easier to cache than block-level copy-on-write schemes.
Software as a Service (SaaS)
Platform as a Service (PaaS)
Infrastructure as a Service (IaaS)
Anything as a Service (XaaS)
Lag Strategy –
In this strategy, capacity is added only when it is required, that is,
only when demand is actually observed rather than anticipated. This
strategy is more conservative, as it reduces the risk of wastage, but
it can result in late delivery of goods if not planned carefully.
Match Strategy –
This strategy is where small amounts of capacity are added
gradually in required intervals of time, keeping in mind the
demand and the market potential of the product. This strategy is
said to improve performance in heterogeneous environments and
hybrid clouds.
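The contrast between the lag and match strategies can be made concrete with a tiny simulation. The demand curve and step size below are made-up numbers chosen only to show the shapes of the two policies.

```python
# Tiny simulation contrasting the lag and match capacity strategies
# against a hypothetical demand curve. All values are illustrative.

demand = [10, 14, 19, 25, 32]          # required capacity per period

def lag_capacity(demand):
    # Lag: add capacity only after demand is observed, so capacity
    # always trails demand by one period.
    capacity, out = 0, []
    for d in demand:
        out.append(capacity)            # serve with last period's capacity
        capacity = max(capacity, d)     # then catch up
    return out

def match_capacity(demand, step=8):
    # Match: add small fixed increments whenever demand would exceed
    # current capacity.
    capacity, out = 0, []
    for d in demand:
        while capacity < d:
            capacity += step
        out.append(capacity)
    return out

print(lag_capacity(demand))    # [0, 10, 14, 19, 25]  -> always behind
print(match_capacity(demand))  # [16, 16, 24, 32, 32] -> small steps ahead
```

The lag policy never wastes capacity but is always one period behind demand, while the match policy stays slightly ahead at the cost of modest over-provisioning, which is exactly the trade-off described above.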
Cost Reduction:
Cost reduction is the process by which organizations cut
unnecessary costs in order to increase their business profits.
Cost is directly tied to the growth of the company, which is why
cost reduction is an important factor in an organization’s
productivity. Maximum usage requirements should be kept in mind
when dealing with the performance of the organization.
Cost factor –
Two costs should be taken into account:
The cost of acquiring new infrastructure.
The cost of its ongoing ownership.
Tools and Techniques for Cost Reduction –
The following tools and techniques are used to reduce costs:
Budgetary Control
Standard Costing
Simplification and Variety Reduction
Planning and Control of Finance
Cost-Benefit Analysis
Value Analysis
Organizational Agility:
Organizational agility is the process by which an organization
adapts and evolves in response to sudden changes caused by internal
and external factors. It measures how quickly an organization gets
back on its feet in the face of problems. Agility requires stability,
and for an organization to reach organizational agility, it should
build a stable foundation. In the IT field, one should respond to
business change by scaling IT resources. If infrastructure is the
bottleneck, changing business needs and prioritizing according to
the circumstances should be the solution.
Principles of Organizational Agility –
The five principles of Organizational Agility are as follows.
1. Frame your problems properly
2. Limit Change
3. Simplify Change
4. Subtract before you Add
5. Verify Outcomes
3. Diagonal Scalability –
It is a mixture of horizontal and vertical scalability, where
resources are added both vertically and horizontally.
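A simple way to picture diagonal scaling is as a rule that grows each node vertically up to a ceiling, then adds nodes horizontally. The per-node limit below is a hypothetical number, not a property of any real platform.

```python
# Sketch of a diagonal scaling rule: scale up (bigger nodes) until a
# per-node ceiling is hit, then scale out (more nodes). The ceiling
# is an assumed value for illustration.

MAX_CPUS_PER_NODE = 16   # vertical ceiling per node (assumed)

def diagonal_scale(total_cpus_needed):
    nodes = []
    remaining = total_cpus_needed
    while remaining > 0:
        size = min(remaining, MAX_CPUS_PER_NODE)  # grow vertically first
        nodes.append(size)                        # then add nodes as needed
        remaining -= size
    return nodes

print(diagonal_scale(10))   # [10]         -> vertical scaling only
print(diagonal_scale(40))   # [16, 16, 8]  -> vertical plus horizontal
```

Small demands are met by a single larger node (purely vertical), while larger demands spill over into additional nodes (horizontal), combining both axes of scalability.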