The document discusses the technological drivers for cloud computing, emphasizing the role of Service-Oriented Architecture (SOA), virtualization, and web services. It explains how SOA enables the development of reusable software components that can be accessed over the internet, highlighting the benefits of reliability, scalability, and interoperability. Additionally, it compares REST and SOAP as architectural styles for web services, outlining their characteristics, use cases, and performance differences.

Technological Drivers for Cloud Computing
Risala Tasin Khan
Professor
IIT, JU
Introduction

• Cloud computing enables service providers to offer various resources such as infrastructure, platform and software as services to the requesting users on a pay-as-you-go model.
• The cloud service consumers (CSCs) benefit from the cost reduction in procuring resources and from the quality of service (QoS) that cloud service providers promise.
• The success of cloud computing can be closely associated with the
technological enhancements in various areas such as Service-Oriented
Architecture (SOA), Virtualization, Multicore Technology, Networking
Technology etc.
What is SOA
• Service-Oriented Architecture or SOA is an architectural approach for designing and developing a web
application.
• In this approach, an application uses services available over the network via communication calls or
requests.
• A service is a self-contained unit of software that performs a specific task.
• It has three components: an interface, a contract, and implementation.
• The interface defines how a service provider will accept requests from a service consumer, the
contract defines how the service provider and the service consumer should interact, and the
implementation is the actual service code itself.
• Because the interface of a service is separate from its implementation, a service provider can
execute a request without the service consumer knowing how it does so; the service consumer
only worries about consuming services.
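The separation of interface, contract, and implementation can be sketched in Python. This is a minimal illustration only, not a real SOA toolkit; the names (`CreditCheckService`, `approve_loan`) and the 600-point credit threshold are invented for the example:

```python
from abc import ABC, abstractmethod

# Interface: defines how a consumer may call the service,
# but says nothing about how the work is done.
class CreditCheckService(ABC):
    @abstractmethod
    def check_credit(self, customer_id):
        ...

# Implementation: the actual service code behind the interface.
# (The 600-point threshold is an invented example value.)
class SimpleCreditCheck(CreditCheckService):
    def __init__(self, scores):
        self._scores = scores       # stand-in for a real data source

    def check_credit(self, customer_id):
        return self._scores.get(customer_id, 0) >= 600

# The consumer is written purely against the interface.
def approve_loan(service, customer_id):
    return "approved" if service.check_credit(customer_id) else "rejected"

service = SimpleCreditCheck({"alice": 720, "bob": 510})
print(approve_loan(service, "alice"))   # approved
print(approve_loan(service, "bob"))     # rejected
```

Because `approve_loan` depends only on the interface, the provider could swap in a different implementation (e.g., one backed by a remote call) without the consumer noticing.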
• SOA, or service-oriented architecture, defines a way to make software components reusable
and available via service interfaces.
• Services use common interface standards and an architectural pattern so they can be rapidly
incorporated into new applications.
SOA(Cont..)
• Each service in an SOA contains the code and data required to execute a complete,
discrete function/operation (e.g. checking a customer’s credit, calculating a
monthly loan payment, or processing a mortgage application).
• The service interfaces provide loose coupling, meaning they can be called by the consumer with little or no knowledge of how the service is implemented underneath.
• Generally, SOA is used by enterprise applications, while cloud computing delivers various Internet-based services in an SOA fashion.
• Different companies or service providers may offer various services such as financial services,
health-care services, HR services etc.
• Various users can acquire and use the services through the Internet.
• Cloud computing is a service delivery model in which shared services and resources are
consumed by the users across the Internet.
SOA(Cont..)

• The service interface acts as a service contract between the service provider and the service consumer.
• Applications behind the service interface can be written in Java, Microsoft .Net,
Cobol or any other programming language.
• Service interfaces are frequently defined using the Web Services Description Language (WSDL), a standard tag structure based on XML (Extensible Markup Language).
• The services are exposed using standard network protocols—such as SOAP
(simple object access protocol)/HTTP or Restful HTTP (JSON/HTTP)—to send
requests to read or change data.
• At the appropriate stage the services are published in a registry that enables
developers to quickly find them and reuse them to develop new applications
or business processes.
Internal Structure of SOA
• In the Service-Oriented Architecture, the entire backend system is divided into three major parts: the Service Provider, the Service Broker/Registry/Repository, and the Service Consumer/Requester.
• Service Provider:
• The service provider is the maintainer of the service and the organization that makes one or more services available for others to use.
• The service provider creates a web service and provides information
about this service to the service registry.
• It has to decide the service category and trading partner agreements
that are required to use the services.
Cont…

• Service Broker, Service Registry or Service Repository:


• The main purpose of a service broker, service registry or service repository is to make the web service information available to potential requesters.
• Whoever administers the broker decides its scope.
• While public brokers can be accessed from anywhere, private brokers are only accessible to a limited number of users.
• Service Consumer:
• Service consumers locate entries in the broker registry through various find operations and bind to the service provider in order to invoke one of its services.
• The consumer develops the components needed for clients to bind to and use the services.
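The provider/broker/consumer interaction above can be sketched with an in-memory registry. This is a toy illustration of the publish/find/bind pattern, not a real broker such as UDDI; all service names and categories are invented:

```python
# Toy service broker: publish/find/bind with an in-memory registry.
# (Invented names; a real registry would be UDDI or similar.)
registry = {}

def publish(name, category, endpoint):
    """Provider side: make a service description available."""
    registry[name] = {"category": category, "endpoint": endpoint}

def find(category):
    """Consumer side: locate services by category."""
    return [n for n, info in registry.items() if info["category"] == category]

def invoke(name, *args):
    """Consumer side: bind to the provider and call the service."""
    return registry[name]["endpoint"](*args)

publish("payroll", "hr-services", lambda emp: f"payslip for {emp}")
publish("tax-calc", "financial-services", lambda amount: round(amount * 0.15, 2))

print(find("financial-services"))   # ['tax-calc']
print(invoke("tax-calc", 1000))     # 150.0
```

The consumer never touches the provider's code directly: it discovers the service through the broker and binds to it by name.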
Benefits of SOA

• SOA enables mutual data exchange between programs of different vendors without the need for
additional programming or changes to the services.
• The services should be independent and they should have standard interfaces that can be called
to perform their tasks in a standard way.
• Also, a service need not have prior knowledge of the calling application and the application does
not need to have knowledge about how the tasks are performed by a service.
The Various Benefits of SOA
• Reliability:
• With small and independent services in the SOA, it becomes easier to
test and debug the applications instead of debugging the massive code
chunks, which makes the service-oriented architecture highly reliable.
• Location Independence:
• Services are located through the service registry and accessed through a Uniform Resource Locator (URL); therefore they can change their location over time without disrupting the consumer's experience, making SOA location-independent.
• Scalability:
• Services of the service-oriented architecture operate on different servers within an environment, which increases its scalability.
The Various Benefits of SOA
• Reuse of Services:
• Various services can be reused by different applications.
• Having reusable services readily available also results in quicker time to
market.
Technologies Used by SOA
• Web Services:
• Web services are the prominent technology for implementing SOA systems and
applications.
• They use Internet technologies and standards for building distributed systems.
• Several aspects make Web services the technology of choice for SOA.
• First, they allow for interoperability across different platforms and programming
languages.
• Second, they are based on well-known and vendor-independent standards such as HTTP, SOAP, XML, and WSDL.
• Third, they provide an intuitive and simple way to connect heterogeneous software
systems, enabling the quick composition of services in a distributed environment.
• Finally, they provide the features required by enterprise business applications to be
used in an industrial environment.
Web Services (Cont…)
• System architects develop a Web service with their
technology of choice and deploy it in compatible Web
or application servers.
• The service description document (interface), expressed by means of the Web Services Description Language (WSDL), can be either uploaded to a global registry or attached as metadata to the service itself.
• Service consumers can look up and discover services in
global catalogs using Universal Description Discovery
and Integration (UDDI) or, most likely, directly retrieve
the service metadata by interrogating the Web service
first.
• The Web interface allows service consumers to
automatically generate clients for the given service and
embed them in their existing application.
Technologies used by Web Services/SOA
• SOAP(Simple Object Access Protocol):
• SOAP (Simple Object Access Protocol) is a protocol used for
exchanging structured information in the implementation of
web services.
• It relies on XML (Extensible Markup Language) as its message
format and typically works over standard transport protocols
such as HTTP, SMTP, or TCP.
• SOAP enables communication between applications running
on different operating systems, with different technologies,
and across network boundaries.
Key Components of SOAP Architecture
•SOAP Envelope:
•Defines the structure of the message and what it contains.
•Consists of two parts: the Header (optional, for meta-information like authentication or transaction management)
and the Body (mandatory, containing the actual message payload).
•SOAP Encoding Rules:
•Specifies data types and how data is serialized and deserialized.
•Ensures that different platforms can understand and process the data correctly.
•SOAP Message:
•An XML document that contains the data being exchanged.
•Follows a strict structure with elements such as <Envelope>, <Header>, and <Body>.
•SOAP Transport Protocol Binding:
•Defines how SOAP messages are transported. While HTTP is the most common, SOAP can also use other protocols like SMTP, FTP, or TCP.
•SOAP Faults:
•Provides error handling through the <Fault> element, which contains error codes, descriptions, and details about issues
encountered during processing.
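A minimal sketch of these components, built and parsed with Python's standard library. The message content (`AuthToken`, `GetPrice`) is invented for illustration; the namespace URI is the standard SOAP 1.1 envelope namespace:

```python
import xml.etree.ElementTree as ET

# SOAP 1.1 envelope namespace (the message content below is invented).
NS = "http://schemas.xmlsoap.org/soap/envelope/"

message = f"""<soap:Envelope xmlns:soap="{NS}">
  <soap:Header>
    <AuthToken>secret-123</AuthToken>
  </soap:Header>
  <soap:Body>
    <GetPrice><Item>coffee</Item></GetPrice>
  </soap:Body>
</soap:Envelope>"""

root = ET.fromstring(message)
header = root.find(f"{{{NS}}}Header")   # optional meta-information
body = root.find(f"{{{NS}}}Body")       # mandatory payload

print(header.find("AuthToken").text)    # secret-123
print(body.find("GetPrice/Item").text)  # coffee
```

Note how the envelope elements live in the SOAP namespace while the application payload inside the Body uses its own element names.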
How SOAP Works
• A SOAP message is sent from a client to a server as a request, containing
the necessary information (e.g., parameters for a service).
• The server processes the request and returns a response in the form of
another SOAP message.
• The communication is independent of the programming language or
platform, making SOAP a language-agnostic and platform-independent
protocol.
• USE CASE OF SOAP:
• Financial services requiring high security.
• Telecommunication systems.
• Enterprise-level applications where atomic transactions and robustness are
critical.
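The request/response exchange described above can be sketched as a toy dispatcher. Namespaces and the HTTP transport are omitted for brevity, and the `Add` operation is invented; a real SOAP stack would use WSDL-generated stubs:

```python
import xml.etree.ElementTree as ET

# Toy "Add" service: reads the parameters from the request Body and
# wraps the result in a response envelope. Namespaces and transport
# are omitted for brevity; the operation name is invented.
def handle_soap_request(request_xml):
    body = ET.fromstring(request_xml).find("Body")
    a = int(body.find("Add/a").text)
    b = int(body.find("Add/b").text)
    return f"<Envelope><Body><AddResponse>{a + b}</AddResponse></Body></Envelope>"

request = "<Envelope><Body><Add><a>2</a><b>40</b></Add></Body></Envelope>"
response = handle_soap_request(request)

result = ET.fromstring(response).find("Body/AddResponse").text
print(result)   # 42
```

Both sides only agree on the XML structure, which is what makes the exchange language- and platform-independent.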
Technologies used by Web Services/SOA
• REST(Representational State Transfer):
• Despite the fact that XML documents are easy to produce and process in any
platform or programming language, SOAP has often been considered quite
inefficient because of the excessive use of markup that XML imposes for organizing
the information into a well-formed document.
• Therefore, lightweight alternatives to the SOAP/XML pair have been proposed to
support Web services.
• The most relevant alternative is REST, which provides a model for designing network-based software systems using the client/server model, and leverages the facilities provided by HTTP for inter-process communication without additional burden.
• REST (Representational State Transfer) is an architectural style used for building
scalable web services.
• REST is not a protocol itself but a set of guidelines that use HTTP and other web
standards for building lightweight, stateless services that are easy to consume and
develop.
• In a RESTful system, a client sends a request over HTTP using the
standard HTTP methods (PUT, GET, POST, and DELETE), and the
server issues a response that includes the representation of the
resource.
• By relying on this minimal support, it is possible to provide whatever is needed to replace the basic and most important functionality provided by SOAP, which is method invocation.
• The GET, PUT, POST, and DELETE methods constitute a minimal set
of operations for retrieving, adding, modifying, and deleting data.
• Twitter, Yahoo! (search APIs, maps, photos, etc), Flickr, and
Amazon.com all leverage REST.

Key Characteristics of REST


• Stateless:
• REST services are stateless, meaning each client request must contain all the necessary information to
understand and process the request. The server does not store any client state between requests, making it
easier to scale and manage.
• Uniform Interface:
• REST relies on a standardized set of methods provided by HTTP, such as GET, POST, PUT, DELETE, and PATCH.
These methods correspond to CRUD (Create, Read, Update, Delete) operations, ensuring a uniform interface
across different services
• Client-Server Architecture:
• REST adheres to a client-server model where the client (consumer of the service) and the server (provider of
the service) are separate entities.
• This separation of concerns enables flexibility in the development and scalability of each component.
• Resource Based:
• RESTful services treat everything as a resource, which can be identified and accessed using a Uniform Resource
Identifier (URI). Each resource can be represented in various formats such as JSON, XML, or plain text.
▪ Stateless Communication:
▪ Each request from the client to the server must include all the information the
server needs to fulfill the request, ensuring that each interaction is stateless
and independent.
▪ Cacheable:
▪ Responses from the server can be marked as cacheable or non-cacheable,
allowing clients to reuse responses for future requests and reduce server
load.
▪ Layered System:
▪ REST allows for a layered architecture, meaning a client might connect to an
intermediary (such as a load balancer or cache) rather than directly to the
server. This abstraction promotes scalability and flexibility.
RESTful Operations
(HTTP Methods)

•GET: Retrieves a resource or collection of resources.
•POST: Creates a new resource on the server.
•PUT: Updates an existing resource or creates it if it does not exist.
•DELETE: Deletes a specified resource.
•PATCH: Partially updates an existing resource.
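A sketch of how these methods map onto CRUD operations over an in-memory resource store. The handler, URIs, and data are invented for illustration; the status codes follow common HTTP conventions:

```python
# Toy in-memory REST resource store: each HTTP method maps onto a
# CRUD operation over resources identified by a URI-like key.
books = {}

def handle(method, uri, payload=None):
    if method == "POST":                     # create
        books[uri] = payload
        return 201, payload
    if method == "GET":                      # read
        return (200, books[uri]) if uri in books else (404, None)
    if method == "PUT":                      # update, or create if absent
        books[uri] = payload
        return 200, payload
    if method == "PATCH" and uri in books:   # partial update
        books[uri].update(payload)
        return 200, books[uri]
    if method == "DELETE":                   # delete
        return 204, books.pop(uri, None)
    return 405, None                         # method not allowed

print(handle("POST", "/books/1", {"title": "SOA Basics"}))
print(handle("PATCH", "/books/1", {"year": 2024})[1])
print(handle("DELETE", "/books/1")[0])       # 204
print(handle("GET", "/books/1"))             # (404, None)
```

Note the stateless flavor: every call carries the method, the resource URI, and any payload needed to process it.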
REST vs SOAP
• REST (Representational State Transfer) and SOAP (Simple Object Access Protocol) are two distinct architectural styles and protocols used for
building web services.
• Both enable communication between different applications over the internet, but they have different design principles, standards, and use
cases.

• Architecture:
• REST: An architectural style that uses standard HTTP methods (GET, POST, PUT, DELETE, etc.) to perform operations on resources.
• SOAP: Uses XML-based messages with a strict message format and built-in error handling.
• Data Format:
• REST: Primarily uses JSON and XML, but can support other formats like HTML, plain text, or CSV.
• SOAP: Uses only the XML format for request and response messages, making it more verbose.
• Message Structure:
• REST: Simple, lightweight messages that use URIs to identify resources and HTTP headers to convey meta-information.
• SOAP: Complex message structure with an envelope containing a header and body, making it more rigid.
REST vs SOAP
• Transport Protocol:
• REST: Typically uses HTTP/HTTPS, but can work with other protocols like FTP.
• SOAP: Can use multiple protocols such as HTTP, SMTP, FTP, and more, making it more versatile.
• Performance:
• REST: Faster due to lightweight messages and minimal overhead; best suited for scenarios requiring low latency and high performance.
• SOAP: Slower due to the overhead of XML parsing and larger message sizes; better suited for scenarios where performance is not the primary concern.
• Stateless vs Stateful:
• REST: Stateless; each request from a client to the server must contain all the information needed to understand and process the request.
• SOAP: Can be either stateless or stateful, providing flexibility in maintaining session state when needed.
• Error Handling:
• REST: Limited error handling through HTTP status codes (e.g., 200 OK, 404 Not Found, 500 Internal Server Error).
• SOAP: Rich, standardized error handling through the <Fault> element, which includes error codes, messages, and additional details.
REST vs SOAP
• Security:
• REST: Relies on HTTPS and can use additional protocols like OAuth for authentication and authorization.
• SOAP: Has built-in security standards like WS-Security for handling authentication, authorization, and message integrity; suitable for complex security requirements.
• Use Cases:
• REST: Web and mobile applications; microservices architectures; public APIs for accessing resources; simple CRUD operations.
• SOAP: Enterprise-level applications requiring high security and reliability; payment gateways, banking services, and financial transactions; applications requiring complex, multi-step operations.
• Message Size:
• REST: Smaller due to less verbose JSON or XML formats, leading to better performance in terms of network bandwidth.
• SOAP: Larger message sizes due to the XML format, making it less efficient for high-volume data exchanges.
Technologies used by Web Services/SOA
• WSDL (Web Services Description Language):
• WSDL is an XML-based language for the description of Web services.
• It is used to define the interface of a Web service in terms of the methods to be called and the types and structures of the required parameters and return values.
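A minimal sketch of reading operation names out of a WSDL document with Python's standard library. The fragment is a heavily stripped-down, invented example; a real WSDL also declares types, bindings, and a service endpoint:

```python
import xml.etree.ElementTree as ET

# WSDL 1.1 namespace; the fragment below is a heavily stripped-down,
# invented example of a service description.
WSDL_NS = "http://schemas.xmlsoap.org/wsdl/"

wsdl = f"""<definitions xmlns="{WSDL_NS}">
  <message name="GetPriceRequest"/>
  <message name="GetPriceResponse"/>
  <portType name="PriceService">
    <operation name="GetPrice"/>
  </portType>
</definitions>"""

root = ET.fromstring(wsdl)

# The portType lists the operations a consumer may call.
ops = [op.get("name") for op in root.iter(f"{{{WSDL_NS}}}operation")]
print(ops)   # ['GetPrice']
```

Tooling typically consumes a WSDL like this to generate client stubs automatically, so the consumer never has to parse it by hand.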
Virtualization
• Virtualization is a large umbrella of technologies and concepts that are meant to provide
an abstract environment—whether virtual hardware or an operating system—to run
applications.
• The term virtualization is often synonymous with hardware virtualization, which plays a
fundamental role in efficiently delivering Infrastructure-as-a-Service (IaaS) solutions for
cloud computing.

History of Virtualization
• The first step toward consistent adoption of virtualization technologies was made with the widespread diffusion of virtual machine-based programming languages:
• In 1995 Sun released Java, which soon became popular among developers.
• The ability to integrate small Java applications, called applets, made Java a
very successful platform, and with the beginning of the new millennium Java
played a significant role in the application server market segment, thus
demonstrating that the existing technology was ready to support the
execution of managed code for enterprise-class applications.
Virtualization (Cont..)
• In 2002 Microsoft released the first version of .NET Framework, which was
Microsoft’s alternative to the Java technology.
• Based on the same principles as Java, able to support multiple programming
languages, and featuring complete integration with other Microsoft technologies,
.NET Framework soon became the principal development platform for the Microsoft
world and quickly became popular among developers.
• In 2006, two of the three “official languages” used for development at
Google, Java and Python, were based on the virtual machine model.
• This trend of shifting toward virtualization from a programming language
perspective demonstrated an important fact:
• The technology was ready to support virtualized solutions without a significant
performance overhead.
Reasons for Virtualization
• Virtualization technologies have gained renewed interest recently due to the confluence
of several phenomena:
• Increased performance and computing capacity:
• Nowadays, the average end-user desktop PC is powerful enough to meet almost all
the needs of everyday computing, with extra capacity that is rarely used.
• Almost all these PCs have enough resources to host a virtual machine manager and execute a virtual machine with acceptable performance.
Reasons for Virtualization (Cont…)
• Underutilized hardware and software resources:
• Computers today are so powerful that in most cases only a fraction of their capacity is used
by an application or the system.
• Moreover, if we consider the IT infrastructure of an enterprise, many computers are only
partially utilized whereas they could be used without interruption on a 24/7/365 basis.
• For example, desktop PCs mostly devoted to office automation tasks and used by
administrative staff are only used during work hours, remaining completely unused
overnight.
• Using these resources for other purposes after hours could improve the efficiency of the IT
infrastructure.
• To transparently provide such a service, it would be necessary to deploy a completely
separate environment, which can be achieved through virtualization.
Reasons for Virtualization (Cont…)
• Lack of space:
• The continuous need for additional capacity, whether storage or compute power,
makes data centers grow quickly.
• Companies such as Google and Microsoft expand their infrastructures by building
data centers as large as football fields that are able to host thousands of nodes.
• Although this is viable for IT giants, in most cases enterprises cannot afford to build
another data center to accommodate additional resource capacity.
• This condition, along with hardware underutilization, has led to the diffusion of a
technique called server consolidation for which virtualization technologies are
fundamental.
Reasons for Virtualization (Cont…)
• Greening initiatives
• Recently, companies are increasingly looking for ways to reduce the amount of
energy they consume and to reduce their carbon footprint.
• Data centers are one of the major power consumers; they contribute consistently to
the impact that a company has on the environment.
• Maintaining a data center operation not only involves keeping servers on, but a great
deal of energy is also consumed in keeping them cool.
• Infrastructures for cooling have a significant impact on the carbon footprint of a data
center.
• Hence, reducing the number of servers through server consolidation will definitely
reduce the impact of cooling and power consumption of a data center.
• Virtualization technologies can provide an efficient way of consolidating servers.
Reasons for Virtualization (Cont…)
• Rise of administrative costs:
• Power consumption and cooling costs have now become higher than the cost of IT
equipment.
• Moreover, the increased demand for additional capacity, which translates into more servers
in a data center, is also responsible for a significant increment in administrative costs.
• Computers—in particular, servers—do not operate all on their own, but they require care
and feeding from system administrators.
• Common system administration tasks include hardware monitoring, defective hardware
replacement, server setup and updates, server resources monitoring, and backups.
• These are labour-intensive operations, and the higher the number of servers that have to be
managed, the higher the administrative costs.
• Virtualization can help reduce the number of required servers for a given workload, thus
reducing the cost of the administrative personnel.
Components of Virtualized Environment
• In a virtualized environment there are three major
components: guest, host, and virtualization layer.
• The Host:
• The host machine is the physical hardware upon which
the virtualization takes place.
• This machine runs the virtualization software that allows
virtual machines to exist.
• Its physical components such as memory, storage, and
processor ultimately handle the needs of the virtual
machines.
• These resources are typically hidden or masked from the
guest machines.
• To produce this effect, virtualization software, such as a hypervisor, is installed on the actual physical hardware.
• The purpose of the host machine is to provide the
physical computing power to the virtual machines in the
form of CPU, memory, storage, and network connection.

Cont..
• The Guest:
• The software-only machine (Guest) runs on the host machine within the created virtual environment.
• There can be multiple virtual machines running on a single host.
• A virtual machine need not be a computer.
• It is possible to virtualize various types of storage, databases, and other systems as well.
• A virtual machine runs its own environment.
• A VM can act as a single piece of physical hardware such as a desktop computer or server.
• However, everything is passed through the hypervisor, which makes the actual requests to the real
hardware.
• The hardware returns any data or feedback to the hypervisor, which passes it on to the virtual
machine.
• Each virtual machine runs separately from all other virtual machines. In fact, each virtual machine
believes it is the only system running on the hardware.
• Hypervisor
• Sometimes called a virtual machine manager (VMM), the Hypervisor is the
software that exists to run, create and manage the virtual machines.
• The hypervisor is what makes virtualization possible and creates a virtual
environment in which the guest machines run.
• To the guest machine, the hypervisor’s virtual machine is the only one that
exists, even if there are numerous virtual machines running on the same
physical hardware.
Types of hypervisors
• Type-1, or bare-metal hypervisors, are installed directly onto the physical hardware. As
such, they must contain their own operating systems for booting, running the hardware,
and connecting to the network.
• Popular Type-1 hypervisors include Microsoft Hyper-V and VMware ESXi.
• Type-2, or hosted hypervisors, run on an operating system that is directly installed on the
hardware.
• In this case, a copy of Windows, or a Unix-based system must be installed to boot the system and access
the hardware.
• Once the operating system is running, the hosted hypervisor can launch.
• Type-2 hypervisors are often used to run multiple operating systems on a single machine, rather than to
emulate numerous running systems on the hardware.
• Popular Type-2 hypervisors include VMware Workstation, VirtualBox, and Parallels, which emulates a Windows operating system while running on a Mac-based computer.
Characteristics of Virtualized Environment
• In the case of hardware virtualization, the guest is represented by a system image comprising an
operating system and installed applications.
• These are installed on top of virtual hardware that is controlled and managed by the virtualization layer, also called the virtual machine manager (VMM).
• The host is instead represented by the physical hardware, and in some cases the operating
system, that defines the environment where the virtual machine manager is running.
• In the case of virtual storage, the guest might be client applications or users that interact with the
virtual storage management software deployed on top of the real storage system.
• The case of virtual networking is also similar: the guest (applications and users) interacts with a virtual network, such as a virtual private network (VPN), which is managed by specific software (a VPN client) using the physical network available on the node.
• VPNs are useful for creating the illusion of being within a different physical network and thus
accessing the resources in it, which would otherwise not be available.
Characteristics of Virtualized
Environment(Cont..)
• The main common characteristic of all these different
implementations is the fact that the virtual environment is created by
means of a software program.
• The technologies of today allow profitable use of virtualization and
make it possible to fully exploit the advantages that come with it.
• Such advantages have always been characteristics of virtualized
solutions.
Increased Security
(Advantage of Virtualization)

• The ability to control the execution of a guest in a completely transparent manner opens new possibilities for
delivering a secure, controlled execution environment.
• The virtual machine represents an emulated environment in which the guest is executed.
• All the operations of the guest are generally performed against the virtual machine, which then translates and
applies them to the host.
• This level of indirect access allows the virtual machine manager (VMM) to control and filter the activity of the
guest, thus preventing some harmful operations from being performed.
• Resources exposed by the host can then be hidden or simply protected from the guest.
• Moreover, sensitive information that is contained in the host can be naturally hidden without the need to install
complex security policies.
• Increased security is a requirement when dealing with untrusted code.
Managed execution
(Advantage of Virtualization)

• Virtualization of the execution environment not only allows increased security, but a wider range of features also can be implemented.
• In particular, sharing, aggregation, emulation, and isolation are the most relevant features.
• Sharing:
• Virtualization allows the creation of separate computing environments within the same host.
• In this way it is possible to fully exploit the capabilities
of a powerful host, which would otherwise be
underutilized.
• Sharing is a particularly important feature in
virtualized data centers, where this basic feature is
used to reduce the number of active servers and limit
power consumption.
Managed execution (Cont..)
• Aggregation:
• Not only is it possible to share physical resource
among several guests, but virtualization also allows
aggregation, which is the opposite process.
• A group of separate hosts can be tied together
and represented to guests as a single virtual host.
• This function is naturally implemented in
middleware for distributed computing, with a
classical example represented by cluster
management software, which harnesses the
physical resources of a homogeneous group of
machines and represents them as a single
resource.
Managed execution (Cont..)
• Emulation:
• Guest programs are executed within an environment that is
controlled by the virtualization layer, which ultimately is a
program.
• This allows for controlling and tuning the environment that is
exposed to guests.
• This feature becomes very useful for testing purposes, where a
specific guest has to be validated against different platforms or
architectures and the wide range of options is not easily
accessible during development.
Managed execution (Cont..)
• Isolation:
• Virtualization allows providing guests—whether they
are operating systems, applications, or other
entities—with a completely separate environment, in
which they are executed.
• The guest program performs its activity by interacting
with an abstraction layer, which provides access to the
underlying resources.
• Isolation brings several benefits; for example, it allows
multiple guests to run on the same host without
interfering with each other.
• Second, it provides a separation between the host and
the guest.
• The virtual machine can filter the activity of the guest
and prevent harmful operations against the host.
Portability
(Advantage of Virtualization)
• The concept of portability applies in different ways according to the specific
type of virtualization considered.
• In the case of programming-level virtualization, as implemented by the
JVM or the .NET runtime, the binary code representing application
components (jars or assemblies) can be run without any recompilation on
any implementation of the corresponding virtual machine.
• This makes the application development cycle more flexible and application
deployment very straightforward: One version of the application, in most cases, is
able to run on different platforms with no changes.
• Finally, portability allows having your own system always with you and
ready to use as long as the required virtual machine manager is available.
Taxonomy of virtualization techniques
• Virtualization covers a wide range of emulation techniques that are applied to different areas of computing.
• Virtualization is mainly used to emulate execution environments, storage, and networks.
• Among these categories, execution virtualization constitutes the oldest, most popular, and most developed area.
• Therefore, it deserves major investigation and a further categorization.
• In particular we can divide these execution virtualization techniques into two major categories by considering the type of
host they require.
• Process-level techniques (Type II) are implemented on top of an existing operating system, which has full control of the
hardware.
• System-level techniques (Type I) are implemented directly on hardware and do not require (or require only minimal
support from) an existing operating system.
• Within these two categories we can list various techniques that offer the guest a different type of virtual computation
environment: bare hardware, operating system resources, low-level programming language, and application libraries.
Execution Virtualization
• Execution virtualization is a concept that abstracts the
underlying hardware and creates a virtual execution
environment for software applications.
• This abstraction allows multiple operating systems or
applications to run concurrently on a single physical
machine, utilizing its resources more efficiently and
providing isolation between different processes or users.
• Execution virtualization defines the interfaces between the
level of abstractions which hides implementation details.
• A virtualization technique actually replaces one of the layers
and intercepts the calls that are directed to it.
• For example, if you implement OS-level virtualization, it
would replace the OS layer.
• Therefore, execution virtualization can be implemented
directly on top of the hardware by the operating system or
libraries dynamically or statically linked to an application
image.
Machine reference model for Virtualization
• Virtualizing an execution environment at different levels of the computing stack requires a reference model.
• At the bottom layer Hardware is situated
which can be accessed through some
instruction sets.
• Therefore, the model is expressed in terms
of the Instruction Set Architecture (ISA),
which defines the instruction set for the
processor, registers, memory, and interrupt
management.
• ISA is the interface between hardware and
software, and it is important to the operating
system (OS) developer (System ISA) and
developers of applications that directly
manage the underlying hardware (User ISA).
• It consists of three parts:
• Instruction sets
• Emulator
• Mapping of instructions
ISA (Cont..)
The ISA defines:
• Instructions: The commands the processor can execute.
• Registers: Small storage locations within the CPU.
• Data types: The types of data the processor can handle.
• Addressing modes: How the processor accesses data in memory.
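To make these four elements concrete, here is a toy instruction-set interpreter in Python. The three registers, the instruction names, and the immediate addressing mode are all invented for illustration; real ISAs such as x86 or ARM define these same kinds of elements in hardware:

```python
# A toy ISA: three registers, three instructions, immediate addressing.
REGISTERS = {"R0": 0, "R1": 0, "R2": 0}

def execute(program):
    regs = dict(REGISTERS)
    for op, *args in program:            # each tuple is one instruction
        if op == "LOADI":                # load an immediate into a register
            regs[args[0]] = args[1]
        elif op == "ADD":                # regs[dst] = regs[src1] + regs[src2]
            regs[args[0]] = regs[args[1]] + regs[args[2]]
        elif op == "MUL":                # regs[dst] = regs[src1] * regs[src2]
            regs[args[0]] = regs[args[1]] * regs[args[2]]
        else:
            raise ValueError(f"illegal instruction: {op}")
    return regs

final = execute([
    ("LOADI", "R0", 6),
    ("LOADI", "R1", 7),
    ("MUL", "R2", "R0", "R1"),
])
print(final["R2"])    # 42
```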
Machine reference model for Execution
Virtualization
• ABI, which stands for Application Binary Interface, is a
set of rules and conventions that dictate how application
software components interact at the binary level.
• It defines the low-level details of how functions are
called, data is organized, and system resources are
accessed in compiled programs.
• The application binary interface (ABI) separates the
operating system layer from the applications and
libraries, which are managed by the OS.
• System calls are defined at this level.
• This interface distinguishes between privileged
and non-privileged instructions.
• A compiler targeting the ABI converts our function code into byte code or machine code that follows these conventions.
• Bytecode is a low-level representation of a program that is
intermediate between source code and machine code.
• When you write code in a high-level programming
language, like Java or Python, the source code is first
compiled into bytecode before it is executed. The bytecode
contains instructions that the interpreter or virtual machine
can understand and execute.
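CPython makes this intermediate representation directly visible: the standard-library `dis` module disassembles a function into the byte-code instructions that its virtual machine executes (the exact opcode names vary between Python versions):

```python
import dis

def area(w, h):
    return w * h

# Disassemble the function into the byte-code instructions that the
# CPython virtual machine interprets at run time.
dis.dis(area)

instructions = [i.opname for i in dis.get_instructions(area)]
print(instructions)   # e.g. names like LOAD_FAST, BINARY_OP, RETURN_VALUE
```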
ABI(cont..)
ABI defines:
• Calling conventions: How functions receive parameters and return values.
• Data types: Sizes and alignments of data types.
• System calls: How applications request services from the operating system.
• Binary format: The format of executable files and libraries
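A small way to see the ABI at work from Python is the `ctypes` module, which calls C functions through the platform's binary calling convention. The sketch below assumes a POSIX system (Linux or macOS) where the C library is already loaded into the process:

```python
import ctypes
import os

# ctypes must honour the platform ABI: calling convention, argument
# sizes, and return type. Assumes a POSIX system with libc available.
libc = ctypes.CDLL(None)             # handle to the already-loaded C library
libc.getpid.restype = ctypes.c_int   # declare the ABI-level return type

# The same kernel service, reached through the C library's binary
# interface and through Python's own wrapper, must agree.
assert libc.getpid() == os.getpid()
```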
Machine reference model (Cont..)
• The highest level of abstraction is represented
by the application programming interface
(API), which is the interface between
applications and libraries and/or the
underlying operating system.
• For any operation to be performed at the
application level, the API, ABI, and ISA are together
responsible for making it happen.
• The high-level abstraction is converted into
machine-level instructions to perform the
actual operations supported by the processor.
• The machine-level resources, such as
processor, registers and main memory
capacities, are used to perform the operation
at the hardware level of the central processing
unit (CPU).
Privileged and Non Privileged Instructions
• Privileged instructions are CPU instructions that can only be executed in a privileged
mode, also known as kernel mode or supervisor mode.
• In this mode, the executing process has full access to all system resources and can perform
sensitive operations that could potentially affect the stability and security of the entire
system.
• Privileged instructions often involve direct control over hardware resources, memory
management, interrupt handling, and system configuration.
• Examples of privileged instructions include:
• Enabling and disabling interrupts
• Accessing certain control registers that configure CPU behavior
• Modifying memory protection settings
• Initiating input/output (I/O) operations
• Loading or modifying the page tables for virtual memory management
• Only the operating system kernel and certain trusted components run in privileged mode.
User-level applications run in a non-privileged mode, which restricts their direct access to
hardware and sensitive system resources
Cont..
• Non-privileged instructions are CPU instructions that can be
executed by user-level processes running in a non-privileged
mode.
• These instructions allow user-level programs to perform regular
computations and interact with the system in a controlled and
isolated manner, without direct access to privileged operations.
• Examples of non-privileged instructions include:
• Arithmetic and logical operations
• Memory read and write operations (within the process's allocated
memory space)
• User-level I/O operations
• Basic program control flow instructions (e.g., branching, function calls)
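The split can be felt even from user-space Python: the arithmetic and in-process memory writes below run directly as non-privileged instructions, while `os.write` and `os.read` must issue system calls so the kernel performs the privileged I/O on the program's behalf:

```python
import os

# Non-privileged work: arithmetic and memory operations run directly
# on the CPU in user mode, with no kernel involvement.
total = sum(i * i for i in range(10))      # pure computation
buffer = bytearray(b"hello")               # memory within our own space
buffer[0:5] = b"HELLO"

# Privileged work: I/O touches hardware, so user code cannot do it
# directly. os.write issues the `write` system call, which traps into
# the kernel; the kernel executes the privileged I/O on our behalf.
read_end, write_end = os.pipe()
written = os.write(write_end, bytes(buffer))   # syscall -> kernel mode
echoed = os.read(read_end, written)            # another syscall
os.close(write_end)
os.close(read_end)

print(total, echoed)
```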
How Execution Virtualization Works
1. Initialization:
When a VM is started, the hypervisor allocates the necessary resources (CPU, memory, I/O devices) and
sets up the virtual environment. The VM runs as a process on the host OS (for Type 2 hypervisors) or
directly on hardware (for Type 1 hypervisors).
2. Instruction Execution:
Guest operating systems and applications execute instructions as if they were running on physical
hardware. The hypervisor intercepts privileged instructions, handles them, and translates them into actions
on the physical hardware.
3. I/O Operations:
I/O operations, such as disk reads/writes or network access, are handled by the hypervisor. The
hypervisor can present virtual devices to the guest OS, while mapping these operations to physical
devices.
How Execution Virtualization Works(Cont..)
•Context Switching:
•The hypervisor manages context switching between multiple VMs.
This involves saving and restoring the state of VMs, such as CPU
registers and memory mappings, to ensure isolation and fairness.
•Resource Management:
•The hypervisor controls resource allocation to ensure that one VM
doesn’t monopolize physical resources. It can dynamically allocate
CPU, memory, and I/O bandwidth based on the needs of each VM.
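A toy model of the context-switching step, with the "CPU state" reduced to a small dictionary of registers that the hypervisor saves and restores (all names here are invented for illustration):

```python
# The hypervisor saves one VM's CPU state and restores another's,
# much like an OS scheduler but one level down.
class VM:
    def __init__(self, name):
        self.name = name
        self.registers = {"pc": 0, "acc": 0}   # saved state while descheduled

def context_switch(running_vm, next_vm, cpu):
    running_vm.registers = dict(cpu)   # save the outgoing VM's state
    cpu.clear()
    cpu.update(next_vm.registers)      # restore the incoming VM's state
    return next_vm

cpu = {"pc": 0, "acc": 0}
vm1, vm2 = VM("vm1"), VM("vm2")

current = vm1
cpu["acc"] = 99                        # vm1 computes something
current = context_switch(current, vm2, cpu)
assert cpu["acc"] == 0                 # vm2 sees its own clean state
current = context_switch(current, vm1, cpu)
assert cpu["acc"] == 99                # vm1's state was preserved
```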
Hardware-level virtualization
• Hardware-level virtualization is a virtualization technique that provides an
abstract execution environment in terms of computer hardware on top of
which a guest operating system can be run.
• In this model, the guest is represented by the operating system, the host
by the physical computer hardware, the virtual machine by its emulation,
and the virtual machine manager by the hypervisor (see Figure 3.6).
• The hypervisor is generally a program or a combination of software and
hardware that allows the abstraction of the underlying physical hardware.
• Hardware-level virtualization is also called system virtualization, since it
provides ISA to virtual machines, which is the representation of the
hardware interface of a system.
• This is to differentiate it from process virtual machines, which expose ABI to
virtual machines.
Hypervisors
• A fundamental element of hardware virtualization is the hypervisor,
or virtual machine manager (VMM).
• It recreates a hardware environment in which guest operating
systems are installed.
Organization of VMM
• A virtual machine manager is internally organized as described in
Figure.
• Three main modules, dispatcher, allocator, and interpreter,
coordinate their activity in order to emulate the underlying
hardware.
• The dispatcher constitutes the entry point of the monitor and
reroutes the instructions issued by the virtual machine instance
to one of the two other modules.
• The allocator is responsible for deciding the system resources to
be provided to the VM: whenever a virtual machine tries to
execute an instruction that results in changing the machine
resources associated with that VM, the allocator is invoked by
the dispatcher.
• The interpreter module consists of interpreter routines. These
are executed whenever a virtual machine executes a privileged
instruction: a trap is triggered and the corresponding routine is
executed.
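The three modules can be sketched as a toy monitor in Python; the instruction format and resource model are invented for illustration only:

```python
# Toy VMM: the dispatcher routes each instruction either to the
# allocator (resource-changing instructions) or to the interpreter
# (privileged instructions that trap).
class ToyVMM:
    def __init__(self):
        self.resources = {"memory_mb": 0}
        self.log = []

    def allocator(self, instr):
        # decide and record the resources granted to the VM
        self.resources["memory_mb"] += instr["amount"]
        self.log.append(("allocated", instr["amount"]))

    def interpreter(self, instr):
        # routine executed when a privileged instruction traps
        self.log.append(("emulated", instr["op"]))

    def dispatcher(self, instr):
        # entry point of the monitor: reroute to the proper module
        if instr["op"] == "grow_memory":
            self.allocator(instr)
        elif instr.get("privileged"):
            self.interpreter(instr)

vmm = ToyVMM()
vmm.dispatcher({"op": "grow_memory", "amount": 256})
vmm.dispatcher({"op": "disable_interrupts", "privileged": True})
assert vmm.resources["memory_mb"] == 256
```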
Criteria Required to be a VMM
• The design and architecture of a virtual machine manager, together with
the underlying hardware design of the host machine, determine the full
concept of hardware virtualization, where a guest operating system can be
transparently executed on top of a VMM as though it were run on the
underlying hardware.
• A VMM must satisfy the following two properties:
1. Equivalence: A guest running under the control of a virtual machine
manager should exhibit the same behaviour as when it is executed
directly on the physical host.
2. Resource control: The virtual machine manager should be in complete
control of virtualized resources.
H/W Virtualization Techniques
• Full Virtualization:
• Full virtualization refers to the ability to run a program, most likely an operating
system, directly on top of a virtual machine and without any modification, as though
it were run on the raw hardware.
• To make this possible, virtual machine managers are required to provide a complete
emulation of the entire underlying hardware.
• The moment the guest tries to access the resources of the host machine, a trap is sent
to the VMM, which emulates the hardware resources that the guest
wants to access.
• In full virtualization, the hypervisor completely emulates the hardware. The
guest operating system does not know that it is running on a virtual machine
and uses hardware instructions to interact with the emulated hardware.
• The principal advantages of full virtualization are:
• complete isolation, which leads to enhanced security,
• ease of emulation of different architectures and
• coexistence of different systems on the same platform.
H/W Virtualization Techniques
• Paravirtualization:
• In Paravirtualization, the hypervisor provides an API that the guest operating
system can use to access the hardware.
• Here, the guest operating system knows that it is running on a virtual
machine and, instead of using hardware instructions, uses the hypervisor API
to interact with the hardware.
• Typically, paravirtualization simplifies the operation of the hypervisor and
supports a better performance of the virtual machines.
• As a drawback, not all operating systems support paravirtualization on all
hypervisors.
• Recently, Ubuntu has included drivers for paravirtualization on VirtualBox, KVM
and Hyper-V, so no additional software is needed for these hypervisors.
However, this is not the same for all operating systems and hypervisors.
• Windows, for instance, has exceptional support for Hyper-V, but not for all
the other hypervisors.
Example
• As an example, consider the use of a graphic card by a virtual
machine.
• In full virtualization, the guest operating system uses a driver that
behaves as though it were running on real hardware. The programs, e.g.
Microsoft Office, invoke driver functions using an API; the driver
interacts with the emulated hardware using hardware interrupts,
I/O operations and data memory manipulation. The hypervisor must
understand all the hardware emulations and translate the behaviour
onto the real hardware.
• In paravirtualization, the guest operating system uses a driver that
interacts with the drivers in the host with the help of the VMM. The
programs, e.g. Microsoft Office, invoke driver functions in the guest
using an API, the guest driver invokes the host driver using another
API, and the host driver executes the operation on the real hardware.
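The two driver paths in the example can be contrasted with a toy sketch: in the full-virtualization path the guest issues a "hardware" operation that the hypervisor traps and emulates, while in the paravirtualization path the guest invokes a hypercall API directly. The port number and method names below are hypothetical:

```python
# Toy contrast between trap-and-emulate and a hypercall API.
class Hypervisor:
    def __init__(self):
        self.screen = []

    # Full-virtualization path: the guest issues a "hardware"
    # instruction; the VMM traps it and emulates the device behaviour.
    def trap(self, port, data):
        if port == 0x3D4:                 # hypothetical VGA-like port
            self.screen.append(data)      # emulate the graphics card

    # Paravirtualization path: the guest calls the hypervisor API directly.
    def hypercall_draw(self, data):
        self.screen.append(data)          # no emulation layer needed

hv = Hypervisor()
hv.trap(0x3D4, "pixel-A")     # unmodified guest: thinks it writes to hardware
hv.hypercall_draw("pixel-B")  # paravirtualized guest: uses the hypervisor API
assert hv.screen == ["pixel-A", "pixel-B"]
```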
Programming language-level virtualization
• Programming language-level virtualization is mostly used to achieve ease of
deployment of codes, managed execution, and portability across different
platforms and operating systems.
• It consists of a virtual machine executing the byte code of a program, which
is the result of the compilation process.
• Compilers implementing this technology produce a binary
format representing the machine code for an abstract architecture.
• The characteristics of this architecture vary from implementation to
implementation.
• At runtime, the byte code can be either interpreted or compiled against the
underlying hardware instruction set.
Programming language-level
virtualization(Cont..)
• The main advantage of programming-level virtual machines, also called process virtual machines, is
the ability to provide a uniform execution environment across different platforms.
• Programs compiled into byte code can be executed on any operating system and platform for which
a virtual machine able to execute that code has been provided.
• From a development lifecycle point of view, this simplifies the development and deployment efforts
since it is not necessary to provide different versions of the same code.
• The implementation of the virtual machine for different platforms is still a costly task, but it is done
once, not for every application.
• Moreover, process virtual machines allow for more control over the execution of programs since
they do not provide direct access to the memory.
• Implementations of this model are also called high-level virtual machines, since high-level
programming languages are compiled to a conceptual ISA, which is further interpreted or
dynamically translated against the specific instruction of the hosting platform.
How ABI works in virtualized environment
• Here's how ABI works in virtualization:
• Guest and Host ABI Compatibility:
• Virtualization involves running multiple guest operating systems on a single physical
host. Each guest OS has its own ABI, which is designed to work with its specific
hardware and software environment.
• However, the host OS or hypervisor may have a different ABI.
• To enable communication between the guest and host environments, there needs
to be a level of ABI compatibility. This compatibility ensures that guest applications
can make system calls and access resources provided by the host.
• ABI Translation and Emulation:
• In some cases, the guest and host ABIs might be different due to architectural
differences between the virtualized environment and the physical hardware.
• In such situations, ABI translation or emulation is employed.
• This involves intercepting system calls and other low-level interactions made by the
guest OS, translating them into a form that the host ABI understands, and then
carrying out the requested action in the host environment.
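A minimal sketch of that translation layer, assuming an invented guest syscall numbering and host handler table:

```python
# Toy ABI translation: the guest issues syscalls by its own numbering;
# a translation layer maps them onto the host's handlers.
GUEST_SYSCALLS = {1: "write", 2: "open"}          # hypothetical guest ABI
HOST_HANDLERS = {
    "write": lambda arg: f"wrote {arg}",          # host-side implementations
    "open": lambda arg: f"opened {arg}",
}

def translate_and_run(guest_number, arg):
    name = GUEST_SYSCALLS[guest_number]           # intercept and translate
    return HOST_HANDLERS[name](arg)               # carry out in the host

assert translate_and_run(1, "log.txt") == "wrote log.txt"
assert translate_and_run(2, "cfg.ini") == "opened cfg.ini"
```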
Application-level virtualization
• Application virtualization, also called application service virtualization, refers to
running an application on a thin client: a terminal or a network workstation with
few resident programs, which accesses most programs residing on a connected
server.
• The thin client runs in an environment separate from, sometimes referred to as
being encapsulated from, the operating system where the application is located.
• Application virtualization fools the computer into working as if the application is
running on the local machine, while in fact it is running on a virtual machine (such
as a server) in another location, using its operating system (OS), and being
accessed by the local machine.
• Incompatibility problems with the local machine’s OS, or even bugs or poor
quality code in the application, may be overcome by running virtual applications.
• Any user from any location can use the same application over the internet, as the
original application is located on a central server.
Benefits of Application Virtualization
• It allows you to install the application on the server and make it virtual
so that it can be used by multiple users over the network, reducing the
need to install the desired applications on individual systems and
saving the cost of installing and licensing the software for every
machine.
• This has made the deployment of applications an easy task for
clients or partners. You can easily deliver the executable file of
the desired application to your clients, making deployments
easier.
• You can place the virtualized applications anywhere on the server or
make the required copies, saving them in different locations. Such
applications can then be used on any type of endpoint, whether Windows,
macOS, iOS, or Android. This provides portability: if any endpoint has
been compromised, you are still able to use your application from
another endpoint.
• You can easily remove unwanted virtual applications directly, thus
saving you from uninstalling the applications from individual systems.
• Application virtualization saves you from conflicts
among various applications running on the system due to any
compatibility issue.
• It allows you to update the application in a single place,
preventing you from making individual application updates for every
system.
• The virtualized application relies on the operating system of the host
where it is installed. One need not worry about the OS installed at
the endpoint or any other compatibility issue.
Storage Virtualization
• Storage virtualization in cloud computing is the pooling of physical
storage from multiple storage devices into what appears to be a single
storage device.
• It can also be described as a group of available storage devices
managed from a central console. This virtualization provides
numerous benefits such as easy backup, archiving, and recovery of
the data.
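The core idea can be sketched as a toy logical volume in Python: several small physical "devices" are concatenated into one linear address space, and reads and writes that cross a device boundary are mapped transparently:

```python
# Toy storage virtualization: several physical "devices" (byte arrays)
# presented as one contiguous logical volume managed centrally.
class LogicalVolume:
    def __init__(self, device_sizes):
        self.devices = [bytearray(size) for size in device_sizes]

    def _locate(self, offset):
        for dev in self.devices:          # map logical offset -> device
            if offset < len(dev):
                return dev, offset
            offset -= len(dev)
        raise IndexError("offset beyond logical volume")

    def write(self, offset, data):
        for i, byte in enumerate(data):
            dev, off = self._locate(offset + i)
            dev[off] = byte

    def read(self, offset, length):
        out = bytearray()
        for i in range(length):
            dev, off = self._locate(offset + i)
            out.append(dev[off])
        return bytes(out)

vol = LogicalVolume([4, 4])               # two 4-byte physical devices
vol.write(2, b"abcd")                     # this write spans the boundary
assert vol.read(2, 4) == b"abcd"          # the caller never notices
assert vol.devices[0][2:4] == b"ab" and vol.devices[1][0:2] == b"cd"
```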
Disadvantage of Virtualization
• Performance degradation:
• Performance is definitely one of the major concerns in using virtualization technology.
• It is true that virtualization allows the optimum use of all resources. However, it is
also a challenge when you need that additional boost sometimes, but it is not
available.
• Resources in virtualization are shared. The same resources that a single user
might have consumed are now shared among three or four users.
• The overall available resources might not be shared equally or may be shared in
some ratio depending upon the tasks being run.
• As the complexity of tasks increases, so does the need for performance from the
system. This can result in a substantially longer time to complete the task.
Disadvantage of Virtualization (Cont..)
• Inefficiency and degraded user experience:
• Virtualization can sometimes lead to an inefficient use of the host.
• In particular, some of the specific features of the host cannot be exposed by the
abstraction layer and then become inaccessible.
• In the case of hardware virtualization, this could happen for device drivers: the
virtual machine can sometimes simply provide a default graphic card that maps only a
subset of the features available in the host.
• In the case of programming-level virtual machines, some of the features of the
underlying operating systems may become inaccessible unless specific libraries are
used.
Disadvantage of Virtualization (Cont..)
• Security holes and new threats :
• Virtualization opens the door to a new and unexpected form of phishing.
• The capability of emulating a host in a completely transparent manner led the way to
malicious programs that are designed to extract sensitive information from the
guest.
• In the case of hardware virtualization, malicious programs can preload themselves
before the operating system and act as a thin virtual machine manager toward it.
• The operating system is then controlled and can be manipulated to extract sensitive
information of interest to third parties.