Ch7slides Partb
Communication and Invocation
• An invocation
– A construct (e.g., RMI, RPC or event notification) whose purpose is
to bring about an operation on a resource in a different address
space
Pros?
Cons?
Communication and Invocation (cont.)
• Protocols and Openness
– Goal: Standardized protocols that enable interworking
between middleware implementations on different
platforms
– An open design allows the protocol
• to be expandable (when new features are needed), and
• to be integrable with other protocols
Figure 7.11 Invocations between address spaces
Communication and Invocation (cont.)
• Invocation Performance (cont.)
– Invocation over the network
• A null RPC or null RMI (no parameters, execute a null procedure,
return no result): carries system data but no user data.
• A null RPC over a LAN takes ~0.1 milliseconds [Bridges et al. 2007];
cf. a null local procedure call, which takes a fraction of a
microsecond (~0.000001 to 0.0001 milliseconds).
– For a null RPC, about 100 bytes are passed over the network, so at a
raw 100 Mbit/s, the network transfer time is only ~0.01 milliseconds.
Q: What may have cost the other ~0.09 milliseconds?
• Null invocations (null RPCs or null RMIs) are important since
they measure a fixed overhead (latency) w/o user data.
• Of course the overall overhead increases when data is
involved.
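The cost breakdown above can be checked with a few lines of arithmetic. A minimal sketch (Python, with the slide's figures plugged in) computes the raw transfer time and the remaining software overhead:

```python
def network_transfer_ms(num_bytes: int, bandwidth_bps: float) -> float:
    """Raw network transfer time in milliseconds."""
    return num_bytes * 8 / bandwidth_bps * 1000

# Figures from the slide: ~100 bytes over a raw 100 Mbit/s link.
transfer = network_transfer_ms(100, 100e6)     # 0.008 ms, i.e. ~0.01 ms
measured_total = 0.1                           # measured null-RPC time (ms)
software_overhead = measured_total - transfer  # ~0.09 ms: marshalling,
                                               # system calls, scheduling,
                                               # protocol processing, ...
```

The gap between the measured time and the raw transfer time is the fixed invocation overhead that null RPCs are designed to expose.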
Communication and Invocation (cont.)
Communication and Invocation (cont.)
• Other performance factors:
– Use of shared memory (to reduce memory copying overheads)
• Shared regions may be used for rapid communication between a user
process and the kernel, or between user processes.
– Choice of protocol (TCP/UDP)
• When implementing request-reply interactions on top of a protocol
such as TCP, TCP’s buffering behavior can hinder good performance,
and its connection overheads put it at a disadvantage compared with
UDP.
• UNLESS a TCP connection is used for multiple requests.
– Buffering of data before being dispatched over the network
• OS’s default buffering is to collect several small messages (in the
buffer) and then send them together in a single packet.
• This behavior may cause unnecessary delays.
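The buffering delay mentioned above can often be avoided at the socket level. A minimal sketch (Python; the socket option is standard, the surrounding request-reply code is omitted):

```python
import socket

# TCP socket for request-reply traffic.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# TCP_NODELAY disables the OS's small-message coalescing (Nagle's
# algorithm), so each small request is dispatched immediately rather
# than being held in the buffer waiting for more data.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
nodelay = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)

# ... connect, exchange requests and replies, then:
sock.close()
```

Whether disabling coalescing helps depends on the workload: for many small, latency-sensitive request-reply messages it usually does, while bulk transfers benefit from the default behavior.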
Communication and Invocation (cont.)
• Invocation within a computer
• [Bershad et al. 1990] found that most cross-address-space invocations
took place within a single computer and not, as might be expected in a
client-server installation, between computers.
– Data are cached at a local server and retrieved from the remote server
only when necessary.
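A minimal cache-aside sketch of that idea (Python; fetch_remote is a hypothetical stand-in for a real remote invocation):

```python
cache = {}
remote_calls = 0

def fetch_remote(key):
    # Hypothetical stand-in for an RPC to the remote server.
    global remote_calls
    remote_calls += 1
    return f"value-for-{key}"

def lookup(key):
    if key not in cache:              # go remote only on a cache miss
        cache[key] = fetch_remote(key)
    return cache[key]

lookup("x")   # miss: one remote invocation
lookup("x")   # hit: served by the local server, no network traffic
```

Once the local server holds the data, subsequent invocations stay within the computer, which is why cross-address-space performance matters.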
Figure 7.13
A lightweight remote procedure call
LRPC vs. RPC

Processing of arguments:
– LRPC: Arguments are copied once, when they are marshalled onto the
A-stack.
– RPC: Arguments are copied four times:
• from the client stub’s stack onto a message,
• from the message to a kernel buffer,
• from the kernel buffer to a server message, and
• from the message to the server stub’s stack.
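The single-copy case relies on a memory region shared between client and server. As a rough stand-in for the A-stack idea, the sketch below uses Python's multiprocessing.shared_memory to show data written once and then read through another attachment without further copies (both attachments live in one process here, purely for illustration):

```python
from multiprocessing import shared_memory

# "Client" writes the argument data once into a shared region.
region = shared_memory.SharedMemory(create=True, size=64)
region.buf[:5] = b"hello"

# A "server" attaching to the same region by name sees the same bytes;
# nothing is copied through kernel buffers or message structures.
view = shared_memory.SharedMemory(name=region.name)
data = bytes(view.buf[:5])

view.close()
region.close()
region.unlink()
```

This is the essence of LRPC's argument handling: the data crosses the protection boundary by mapping, not by repeated copying.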
Communication and Invocation (cont.)
- Asynchronous Operation
• In the Internet environment, the effects of relatively high
latencies, low throughput and high server loads may
outweigh any benefits that the OS can provide to reduce
invocation overheads.
– Network may not be available all the time.
• disconnected and then reconnected
– Servers may be overloaded or have crashed.
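One common way to cope is to make the invocation asynchronous, so the caller is not blocked while the network or server is slow. A minimal sketch using a thread pool (Python; remote_call is a hypothetical stand-in for the actual remote operation):

```python
from concurrent.futures import ThreadPoolExecutor

def remote_call(x):
    # Hypothetical stand-in for a slow remote invocation.
    return x * 2

with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(remote_call, 21)  # returns immediately
    # ... the caller can do useful work while the call is in flight ...
    result = future.result()  # block only when the reply is actually needed
```

Decoupling request from reply in this way hides latency and tolerates temporarily unavailable servers, at the cost of more complex client logic.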
Communication and Invocation - Asynchronous Operation (cont.)
Operating System Architecture
• Examine the architecture of a kernel suitable for a distributed
system
• The requirements of openness in kernel architecture
• An open distributed system should make it possible to:
– Run, at each computer, only the system software necessary for it to
carry out its particular role.
• Loading redundant or unneeded modules wastes memory resources.
• Ideally, the kernel would provide only the most basic mechanisms upon which
the general resource management tasks at a node are carried out.
– Allow the software implementing a service to be changed
independently of other facilities.
– Allow for alternatives of the provided services.
– Introduce new services without harming the existing ones.
• Server modules would be dynamically loaded as required, to implement the
required resource management policies for the currently running applications.
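The dynamic-loading idea at the end of the list can be sketched in a few lines (Python; the standard json module stands in for a dynamically loaded server module):

```python
import importlib

def load_service(name):
    # Load a service module only when this node's role requires it,
    # instead of linking every service into the system up front.
    return importlib.import_module(name)

codec = load_service("json")          # loaded on demand
payload = codec.dumps({"ok": True})   # the service is now usable
```

Loading services by name, on demand, is what lets an open system add or replace a facility without rebuilding or restarting everything else.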
Operating System Architecture (cont.)
• Two kernel design approaches: Monolithic kernels vs.
Microkernels
– They differ in what functionality belongs to the kernel and what is
left for the server processes that can be dynamically loaded to run
on top of the kernel.
– Monolithic design is ‘massive’.
• performs all the basic OS functions and takes up in the order of megabytes
of code and data
• undifferentiated – It is coded in a non-modular way.
• intractable – altering individual software components is difficult.
– Microkernel design has a smaller kernel.
• The kernel provides only the most basic abstractions – address spaces,
threads, and local IPC.
• All other system services are provided by servers that are
dynamically loaded when needed.
Figure 7.15
Monolithic kernel and microkernel
Figure 7.16 The role of the microkernel
• If performance is the main goal, then middleware may use the facilities of
the microkernel directly.
• If portability is the main goal, then middleware uses a language runtime
support subsystem, or a higher-level operating system interface provided
by an operating system emulation subsystem.
Operating System Architecture (cont.)
• Comparison:
– Microkernel OSs are extensible.
• Modularity is enforced behind memory protection boundaries.
• Small kernels are relatively free of bugs, which eases software
maintenance.
– Monolithic Kernels
• Hard to maintain.
• Efficient at invoking operations:
– System calls can be expensive, but they are still cheaper than
invocations of a separate user-level address space.
• Lack of structure can be mitigated by software development
techniques such as layering.
– Hybrid Approaches:
• Mach and Chorus both began as microkernels.
• Because of performance problems, they were eventually changed to
allow servers to be loaded dynamically into either the kernel address
space or a user-level address space.
Virtualization at the OS level
• We have seen virtualization already in overlay networks.
– a virtual or logical network that is created on top of an existing
physical network
• System virtualization: virtualization of an OS
– Examples: VirtualBox, Parallels, VMWare, …
– Goal: provide multiple virtual machines (hardware images) over the
underlying physical machine. Each VM may be running a different OS
instance.
– Modern computers can do this without a significant performance hit.
• System virtualization enables the processor and other resources
to be shared between multiple tasks running on behalf of one
or several users.
Virtualization
• Motivation through different uses of virtualization:
– Resource sharing
– Flexible server allocation: Unlike processes, VMs can be migrated
quite simply to other physical machines, adding flexibility in
managing the server infrastructure.
– It is very relevant to cloud computing.
• Infrastructure as a service, Platform as a service, and Software as a
service
• In particular, VMs can be used to enable infrastructure as a service.
– Dynamic resource allocation: Some distributed applications need to
create and destroy VMs with very little overhead.
– Convenient access to different OS environments on a single
desktop computer.
Virtualization (cont.)
• Implementation:
– Virtualization is implemented as a thin layer of software
on top of the underlying physical architecture (known as
“virtual machine monitor” or “hypervisor”).
• The monitor provides an interface based closely on the underlying
physical architecture.
• “Full Virtualization”
– The virtual machine monitor provides an interface identical to the
underlying physical architecture.
– Advantage of Virtualization:
• Applications can run in virtualized environments without
being rewritten or recompiled.
• The Xen virtual machine monitor (aka the Xen hypervisor)
– Provides VMs with a virtualization of the hardware, giving the
appearance that each VM has its own (virtualized) physical machine,
and multiplexes the virtual resources onto the underlying physical
resources.
Summary
• how the operating system supports the middleware
layer in providing invocations upon shared resources
– Clients invoke operations upon shared resources.
• Concurrent access to shared resources
• Protection of resources and processes
• Processes, Execution environment, Threads
• Communication between remote processes
• Kernel architectures
• Virtualization