Operating Systems: Memory or Message Passing

An operating system acts as an intermediary between the user and computer hardware. It allocates resources, controls execution of programs and I/O devices, and aims to execute user programs efficiently while making the system convenient to use. The OS provides services like program execution, I/O operations, file manipulation, communications, and error detection via system calls. Multiprogramming allows multiple programs to run apparently simultaneously by switching rapidly between them, improving CPU utilization during I/O waits. Paging and segmentation are memory management techniques that support logical views of memory and allow non-contiguous allocation. The UNIX file system has a hierarchical structure with directories, treats everything as a file, uses permissions, and provides commands such as mkdir, rmdir, and cd.


1. Operating Systems:

A program that acts as an intermediary between a user of a computer and the computer hardware.
• Resource allocator – manages and allocates resources.
• Control program – controls the execution of user programs and operations of I/O devices.
• Kernel – the one program running at all times (all else being application programs).

Operating system goals:


• Execute user programs and make solving user problems easier.
• Make the computer system convenient to use.
• Use the computer hardware in an efficient manner.

Operating System Services:

• Program execution – system capability to load a program into memory and to run it.
• I/O operations – since user programs cannot execute I/O operations directly, the operating
system must provide some means to perform I/O.
• File-system manipulation – program capability to read, write, create, and delete files.
• Communications – exchange of information between processes executing either on the same
computer or on different systems tied together by a network. Implemented via shared
memory or message passing.
• Error detection – ensure correct computing by detecting errors in the CPU and memory
hardware, in I/O devices, or in user programs.

System Calls:

System calls provide the interface between a running program and the operating system.
• Generally available as assembly-language instructions.
• Languages defined to replace assembly language for systems programming allow system
calls to be made directly (e.g., C, C++)
Three general methods are used to pass parameters between a running program and the operating system (a sketch follows the list):
• Pass parameters in registers.
• Store the parameters in a table in memory, and the table address is passed as a parameter in
a register.
• Push (store) the parameters onto the stack by the program, and pop off the stack by
operating system.
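
As a concrete illustration of the first method, the sketch below (assuming a Linux system, where the C library's write() wrapper and the raw syscall() interface are both available) performs the same system call twice; placing the call number and parameters in registers is handled by the wrapper and by syscall() respectively.

#define _GNU_SOURCE          /* exposes syscall() in glibc */
#include <unistd.h>          /* write() wrapper */
#include <sys/syscall.h>     /* SYS_write call number */
#include <string.h>

int main(void) {
    const char *msg = "hello via system call\n";

    /* High-level wrapper: the C library places the call number and
       parameters in registers as the kernel's ABI requires. */
    write(1, msg, strlen(msg));

    /* Raw form: syscall() takes the call number and parameters
       explicitly; on x86-64 Linux they end up in CPU registers. */
    syscall(SYS_write, 1, msg, strlen(msg));
    return 0;
}

Both forms trap into the kernel in the same way; the table and stack methods from the list serve cases where more parameters are passed than registers can comfortably hold.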

Types of System Calls

System calls can be classified into the following types:


• Process control
• File management
• Device management
• Information maintenance
• Communications
2. Multiprogramming

Multiprogramming is a feature of an OS which allows running multiple programs simultaneously on one CPU. So, say, you may be typing in Word and listening to music while in the background IE is downloading some file and an anti-virus program is scanning. These all appear to happen simultaneously. Actually the programs do not run simultaneously: the OS divides time among them according to their priorities. When a program's turn comes it runs; after its stipulated time is over, the next program runs, and so on. This switching is so fast that the programs appear to run simultaneously.

Multiprogramming makes efficient use of the CPU by overlapping the demands for the CPU and its
I/O devices from various users. It attempts to increase CPU utilization by always having something for
the CPU to execute.

The prime reason for multiprogramming is to give the CPU something to do while waiting for I/O to
complete. If there is no DMA, the CPU is fully occupied doing I/O, so there is nothing to be gained (at
least in terms of CPU utilization) by multiprogramming. No matter how much I/O a program does, the
CPU will be 100% busy. This of course assumes the major delay is the wait while data is copied. A
CPU could do other work if the I/O were slow for other reasons (arriving on a serial line, for instance).

Advantages of Multiprogramming

• Increased CPU utilization: the CPU always has something to execute instead of idling during I/O waits.
• Increased throughput: more jobs are completed in a given time.
• To the user, several programs appear to run at once.
3. Paging and Segmentation:

Paging:

The logical address space of a process can be noncontiguous; a process is allocated physical memory wherever such memory is available.
• Divide physical memory into fixed-sized blocks called frames (size is power of 2, between
512 bytes and 8192 bytes).
• Divide logical memory into blocks of same size called pages.
• Keep track of all free frames.
• To run a program of size n pages, need to find n free frames and load program.
• Set up a page table to translate logical to physical addresses.
• Internal fragmentation may occur, since the last page of a process is rarely full.

Example: (figure omitted: the free-frame list before and after allocating frames to a new process's pages)
Implementation of Page Table:
• Page table is kept in main memory.
• Page-table base register (PTBR) points to the page table.
• Page-table length register (PTLR) indicates size of the page table.
• In this scheme every data/instruction access requires two memory accesses. One for the
page table and one for the data/instruction.
• The two-memory-access problem can be solved by the use of a special fast-lookup hardware
cache called associative memory or translation look-aside buffers (TLBs).
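
As a rough sketch of the translation a page table performs (not any particular system's implementation; the page size and table contents below are made up), a logical address is split into a page number and an offset, and the page number is replaced by a frame number:

#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096u              /* power of 2 */
#define NUM_PAGES 16

/* Toy page table: page_table[p] is the frame holding logical page p. */
static uint32_t page_table[NUM_PAGES] = { 5, 9, 2, 7 /* rest 0 */ };

static uint32_t translate(uint32_t logical) {
    uint32_t page   = logical / PAGE_SIZE;   /* high-order bits */
    uint32_t offset = logical % PAGE_SIZE;   /* low-order bits  */
    uint32_t frame  = page_table[page];      /* the extra memory access,
                                                unless the TLB hits */
    return frame * PAGE_SIZE + offset;
}

int main(void) {
    /* Page 1, offset 0x234 -> frame 9, same offset. */
    printf("0x%x -> 0x%x\n", 0x1234u, (unsigned)translate(0x1234u));
    return 0;
}

With a TLB, the page_table lookup is skipped on a hit, avoiding the second memory access described above.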

Segmentation:

Segmentation is a memory-management scheme that supports the user view of memory: a program is a collection of segments. A segment is a logical unit such as: main program, procedure, function, method, object, local variables, global variables, common block, stack, symbol table, arrays.

Segmentation Architecture
• Logical address consists of a two tuple: <segment-number, offset>.
• Segment table – maps two-dimensional logical addresses to one-dimensional physical addresses; each table entry has:
o Base – contains the starting physical address where the segments reside in memory.
o Limit – specifies the length of the segment.
• Segment-table base register (STBR) points to the segment table’s location in memory.
• Segment-table length register (STLR) indicates the number of segments used by a program;
segment number s is legal if s < STLR.
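
A minimal sketch of segment translation under this architecture (the table contents are hypothetical): the segment number indexes the segment table, the offset is checked against the limit, and the base is added:

#include <stdint.h>
#include <stdio.h>

struct segment_entry {            /* one row of the segment table */
    uint32_t base;                /* starting physical address    */
    uint32_t limit;               /* length of the segment        */
};

static struct segment_entry seg_table[] = {
    { 0x4000, 0x1000 },           /* segment 0 */
    { 0x9000, 0x0400 },           /* segment 1 */
};
static const uint32_t STLR = 2;   /* number of segments in use */

/* Translate <s, offset>; returns -1 on an illegal reference (trap). */
static int64_t translate(uint32_t s, uint32_t offset) {
    if (s >= STLR) return -1;                     /* s must be < STLR */
    if (offset >= seg_table[s].limit) return -1;  /* beyond the limit */
    return (int64_t)seg_table[s].base + offset;
}

int main(void) {
    printf("<1, 0x10> -> 0x%llx\n", (long long)translate(1, 0x10));
    return 0;
}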

4. UNIX File System:

UNIX (officially a trademark as UNIX) is a computer operating system initially developed in 1969 by a group of AT&T employees at Bell Labs including Douglas McIlroy, Ken Thompson and Dennis Ritchie. Presently UNIX systems are separated into different branches, developed by AT&T as well as several commercial vendors and non-profit organizations.

Features of UNIX operating systems

• The UNIX system consists of numerous components that are usually packaged together, including the development environment, documentation, libraries, and portable, modifiable source code for all of these components, along with the kernel of the operating system; UNIX was a self-contained software system.
• This is regarded as one of the main reasons it emerged as a significant teaching and learning tool and has had such a broad influence.
• The original V7 UNIX distribution, containing copies of all of the compiled binaries plus all of the source code and documentation, occupied less than 10 MB and arrived on a single 9-track magnetic tape.
• The printed documentation, typeset from the on-line sources, was contained in two volumes.
• Kernel: the kernel source code was composed of numerous sub-components, such as:
• dev: device drivers for control of hardware and some pseudo-hardware
• conf: configuration and machine-dependent parts, along with boot code
• h: header files, defining key structures within the system and important system-specific constants
• sys: the operating system "kernel" proper, handling process scheduling, memory management, and system calls
• Development environment: early versions of UNIX contained a development environment sufficient to reconstruct the complete system from source code:
• as: machine-language assembler for the machine
• cc: C language compiler
• ld: linker, for combining object files
• lib: object-code libraries
• make: build manager, for effectively automating the build process

A file system is a logical method for organizing and storing large amounts of information in a way which makes it easy to manage. The file is the smallest unit in which information is stored. The UNIX file system has several important features.

• Different types of file


• Structure of the file system
• Your home directory
• Your current directory
• Pathnames
• Access permissions

The file system refers to the way in which UNIX implements files and directories. In UNIX, a file system has the following features:

• hierarchical structure (support for directories)


• files are expandable (may grow as required)
• files are treated as byte streams (can contain any characters)
• security rights are associated with files and directories (read/write/execute privilege for owner/group/others)
• files may be shared (concurrent access)
• hardware devices are treated just like files
The following commands are available; a sketch of the underlying system calls follows the list:

• mkdir - makes a directory: mkdir directory_name


A directory is an entry in the parent directory telling it that this child node will hold other files/directories. All directories are created with two files in them, "." and "..", pronounced dot and dot-dot. These files are hidden files, as is any other file beginning with a ".". The dot file is the actual directory entry; it is like a placeholder. It points to a record on disk which holds information about the directory, including the ownership, permissions, and date and time stamps of the directory. The dot-dot file points to the parent's record.
• rmdir - removes an empty directory (one containing no files or directories):
rmdir directory_name
• cd: Allows you to change which folder(directory) you are currently in.
cd new_directory
The point of cd is convenience. You can reference every file by its full path. For instance, suppose you wanted to make the following hierarchy: the root folder contains a folder called classes, which contains a folder for cs1007. The mkdir commands would be mkdir /classes, followed by mkdir /classes/cs1007. However, you could instead have typed mkdir classes, cd classes, mkdir cs1007. You can access all files with either an absolute path (i.e. a path beginning with a /) or a relative path.
• pwd (print working directory)
When a user logs in to a UNIX system, they are located in their own directory space. The
pwd (print working directory) command displays the pathname of the current directory you
are in. This is helpful when you want to know exactly where you are.
pwd
• touch
Creates an empty file
touch filename
There are many ways to create a file: you can use a text editor, use shell redirection, or copy a file. Here only the touch command is illustrated. The command creates an empty file. The point of it is to illustrate files and what role they play in the directory structure.
• mv
Move a file/directory to a new location or alternatively rename a file/directory
mv old_directory_name new_directory_name

A file and a directory are very similar in their attributes. Both have a name attribute. The mv
command takes the file or directory and places it in its new location. Sometimes the location is the same and the only difference is the name of the file/directory, so this command acts both as a rename and as a move command.
• cp (copy)

This command stands for copy, and is used for copying one file to another.
cp .profile temp2
The above makes a copy of the .profile file and names it temp2
• rm (remove)

The rm utility is used for erasing files and directories. rm temp2


This removes the file. Once a file is removed, it cannot be restored. To cover situations where
mistakes might occur, a switch -i appended to this command will request a Yes or No
response before deleting the file.
• ls (list files)
This command is similar to DIR in DOS; it displays a list of all the files in the directory. A new
user does not have many files in their home directory.
Just like DOS, UNIX systems support hidden files. A hidden file in UNIX is any file beginning with a "."
UNIX systems extend the power of commands by using special flags or switches. These
switches are one of the most powerful features of UNIX systems. Switches are preceded with
a "-" symbol.
The two most common switches that you will use are -a and -l. The -l switch stands for long listing, and the -a switch is for all files, including directories and hidden files.
• chmod
Changes the permissions on the file
chmod permission filename
For example, chmod 755 filename gives the owner read, write and execute permission, and the group and others read and execute permission.
• chgrp
Changes the group associated with the filename
chgrp group filename
• exit
Exits the shell session
exit
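
Most of the commands above are thin wrappers around system calls of the same name (see the system calls section earlier). As a rough sketch, assuming a POSIX system and omitting error checks, the cd example above could be reproduced in C as:

#include <sys/stat.h>   /* mkdir(), chmod() */
#include <unistd.h>     /* chdir(), getcwd(), rmdir() */
#include <stdio.h>

int main(void) {
    char cwd[256];

    mkdir("classes", 0755);          /* mkdir classes */
    chdir("classes");                /* cd classes    */
    if (getcwd(cwd, sizeof cwd))     /* pwd           */
        printf("%s\n", cwd);
    chmod(".", 0700);                /* chmod 700 .   */
    chdir("..");                     /* cd ..         */
    rmdir("classes");                /* rmdir classes */
    return 0;
}

Compiled and run from an empty directory, this prints the absolute path of the classes directory and leaves the file system as it found it.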
5. Process Control Block:

A Process Control Block (PCB, also called Task Controlling Block or Task Struct) is a data structure
in the operating system kernel containing the information needed to manage a particular process. The
PCB is "the manifestation of a process in an operating system".

The PCB contains important information about the specific process including

• The current state of the process i.e., whether it is ready, running, waiting, or whatever.
• Unique identification of the process in order to track "which is which" information.
• A pointer to parent process.
• Similarly, a pointer to child process (if it exists).
• The priority of process (a part of CPU scheduling information).
• Pointers to locate memory of processes.
• A register save area.
• The processor it is running on.
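
As a sketch only (field names are hypothetical; real kernels such as Linux use far larger structures like task_struct), the list above might translate into a C structure along these lines:

/* A hypothetical PCB; real kernels carry many more fields. */
enum proc_state { READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int              pid;           /* unique process identifier       */
    enum proc_state  state;         /* current state of the process    */
    struct pcb      *parent;        /* pointer to parent process       */
    struct pcb      *child;         /* pointer to child (if it exists) */
    int              priority;      /* CPU-scheduling information      */
    void            *page_table;    /* locates the process's memory    */
    unsigned long    registers[32]; /* register save area              */
    int              cpu;           /* processor it is running on      */
    struct pcb      *next;          /* links PCBs into the ready or
                                       blocked queues described later  */
};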

The PCB is a central store of information that allows the operating system to locate key information about a process. Thus, the PCB is the data structure that defines a process to the operating system.

Each process has a status associated with it. Many processes consume no CPU time until they get
some sort of input. For example, a process might be waiting for a keystroke from the user. While it is
waiting for the keystroke, it uses no CPU time. While it's waiting, it is "suspended". When the
keystroke arrives, the OS changes its status. When the status of the process changes, from pending
to active, for example, or from suspended to running, the information in the process control block must
be used like the data in any other program to direct execution of the task-switching portion of the
operating system.
This process swapping happens without direct user interference, and each process gets enough CPU
cycles to accomplish its task in a reasonable amount of time. Trouble can begin if the user tries to
have too many processes functioning at the same time. The operating system itself requires some
CPU cycles to perform the saving and swapping of all the registers, queues and stacks of the
application processes. If enough processes are started, and if the operating system hasn't been
carefully designed, the system can begin to use the vast majority of its available CPU cycles to swap
between processes rather than run processes. When this happens, it's called thrashing, and it
usually requires some sort of direct user intervention to stop processes and bring order back to the
system.
One way that operating-system designers reduce the chance of thrashing is by reducing the need for
new processes to perform various tasks. Some operating systems allow for a "process-lite," called
a thread that can deal with all the CPU-intensive work of a normal process, but generally does not
deal with the various types of I/O and does not establish structures requiring the extensive process
control block of a regular process. A process may start many threads or other processes, but a thread
cannot start a process.
So far, all the scheduling we've discussed has concerned a single CPU. In a system with two or more
CPUs, the operating system must divide the workload among the CPUs, trying to balance the
demands of the required processes with the available cycles on the different CPUs. Asymmetric
operating systems use one CPU for their own needs and divide application processes among the
remaining CPUs. Symmetric operating systems divide themselves among the various CPUs,
balancing demand versus CPU availability even when the operating system itself is all that's running.

Even if the operating system is the only software with execution needs, the CPU is not the only resource to be scheduled. Memory management is the next crucial step in making sure that all processes run smoothly.

The role of the PCBs is central in process management: they are accessed and/or modified by most
OS utilities, including those involved with scheduling, memory and I/O resource access and
performance monitoring. It can be said that the set of the PCBs defines the current state of the
operating system. Data structuring for processes is often done in terms of PCBs. For example, pointers to other PCBs inside a PCB allow the creation of those queues of processes in various scheduling states ("ready", "blocked", etc.) that we previously mentioned.

In modern sophisticated multitasking systems the PCB stores many different items of data, all needed
for correct and efficient process management. Though the details of these structures are obviously
system-dependent, we can identify some very common parts, and classify them in three main
categories:

• Process identification data;


• Processor state data;
• Process control data;

Process identification data always include a unique identifier for the process (almost invariably an integer number) and, in a multiuser-multitasking system, data like the identifier of the parent process, the user identifier, the user group identifier, etc. The process id is particularly relevant, since it is often used to cross-reference the OS's internal tables, e.g. making it possible to identify which process is using which I/O devices, or memory areas.

Processor state data are those pieces of information that define the status of a process when it is suspended, allowing the OS to restart it later and still execute correctly. These always include the contents of the CPU general-purpose registers, the CPU process status word, and the stack and frame pointers.
6. (a) Services of Operating Systems:

The following are five services provided by an operating system for the convenience of its users.

Program Execution:

The purpose of computer systems is to allow the user to execute programs, so the operating system provides an environment where the user can conveniently run programs. The user does not have to worry about memory allocation or multitasking or anything of the sort; these things are taken care of by the operating system. Running a program involves allocating and deallocating memory, and CPU scheduling in the case of multiprocessing. These functions cannot be given to user-level programs, so user-level programs cannot help the user to run programs independently without help from the operating system.

I/O Operations

Each program requires input and produces output. This involves the use of I/O. The operating system hides from the user the details of the underlying hardware for the I/O. All the user sees is that the I/O has been performed, without any details. So the operating system, by providing I/O, makes it convenient for users to run programs. For efficiency and protection, users cannot control I/O directly, so this service cannot be provided by user-level programs.

File System Manipulation

The output of a program may need to be written into new files, or input taken from some files. The operating system provides this service. The user does not have to worry about secondary storage management: the user gives a command for reading or writing to a file and sees the task accomplished. Thus operating systems make it easier for user programs to accomplish their task. This service involves secondary storage management. The speed of I/O that depends on secondary storage management is critical to the speed of many programs, and hence it is best relegated to the operating system rather than giving individual users control of it. It is not difficult for user-level programs to provide these services, but for the above-mentioned reasons it is best if this service is left with the operating system.

Communications

There are instances where processes need to communicate with each other to exchange information. This may be between processes running on the same computer or running on different computers. By providing this service the operating system relieves the user of the worry of passing messages between processes. In cases where messages need to be passed to processes on other computers through a network, this can be done by user programs. The user program may be customized to the specifics of the hardware through which the message transits while providing the service interface of the operating system.

Error Detection

An error in one part of the system may cause malfunctioning of the complete system. To avoid such a situation the operating system constantly monitors the system for errors. This relieves the user of the worry of errors propagating to various parts of the system and causing malfunctioning. This service cannot be allowed to be handled by user programs, because it involves monitoring and, in some cases, altering areas of memory, deallocating the memory of a faulty process, or relinquishing the CPU from a process that goes into an infinite loop. These tasks are too critical to be handed over to user programs. A user program, if given these privileges, could interfere with the correct (normal) operation of the operating system.

6 (b). File Systems and Device Drivers:

A file system is a method of storing and organizing computer files and their data. Essentially, it organizes these files into a database for storage, organization, manipulation, and retrieval by the computer's operating system.

File systems are used on data storage devices such as hard disks or CD-ROMs to maintain
the physical location of the files. Beyond this, they might provide access to data on a file server by
acting as clients for a network protocol (e.g., NFS, SMB, or 9P clients), or they may be virtual and
exist only as an access method for virtual data (e.g., procfs). It is distinguished from a directory
service and registry.

Unix-like operating systems create a virtual file system, which makes all the files on all the
devices appear to exist in a single hierarchy. This means, in those systems, there is one root
directory, and every file existing on the system is located under it somewhere. Unix-like systems can
use a RAM disk or network shared resource as their root directory.

1. In many situations, file systems other than the root need to be available as soon as the
operating system has booted. All Unix-like systems therefore provide a facility for mounting
file systems at boot time. System administrators define these file systems in the configuration
file fstab or vfstab in Solaris Operating Environment, which also indicates options and mount
points.
2. In some situations, there is no need to mount certain file systems at boot time, although their
use may be desired thereafter. There are some utilities for Unix-like systems that allow the
mounting of predefined file systems upon demand.
3. Removable media have become very common with microcomputer platforms. They allow
programs and data to be transferred between machines without a physical connection.
Common examples include USB flash drives, CD-ROMs, and DVDs. Utilities have therefore
been developed to detect the presence and availability of a medium and then mount that
medium without any user intervention.
4. Progressive Unix-like systems have also introduced a concept called super mounting; see,
for example, the Linux supermount-ng project. For example, a floppy disk that has been super
mounted can be physically removed from the system. Under normal circumstances, the disk
should have been synchronized and then unmounted before its removal. Provided
synchronization has occurred, a different disk can be inserted into the drive. The system
automatically notices that the disk has changed and updates the mount point contents to
reflect the new medium. Similar functionality is found on Windows machines.
5. A similar innovation preferred by some users is the use of autofs, a system that, like super
mounting, eliminates the need for manual mounting commands. The difference from super
mount, other than compatibility in an apparent greater range of applications such as access to
file systems on network servers, is that devices are mounted transparently when requests to
their file systems are made, as would be appropriate for file systems on network servers,
rather than relying on events such as the insertion of media, as would be appropriate for
removable media.

Device Drivers:

A device driver or software driver is a computer program allowing higher-level computer programs
to interact with a hardware device. A driver typically communicates with the device through the
computer bus or communications subsystem to which the hardware connects. When a calling
program invokes a routine in the driver, the driver issues commands to the device. Once the device
sends data back to the driver, the driver may invoke routines in the original calling program. Drivers
are hardware-dependent and operating-system-specific. They usually provide the interrupt handling
required for any necessary asynchronous time-dependent hardware interface.

Purpose: A device driver simplifies programming by acting as a translator between a hardware device and the applications or operating systems that use it. Programmers can write the higher-level
application code independently of whatever specific hardware device it will ultimately control, because
code and device can interface in a standard way, regardless of the software superstructure or of
underlying hardware. Every version of a device, such as a printer, requires its own hardware-specific
specialized commands. In contrast, most applications utilize devices (such as sending a file to a printer) by means of high-level device-generic commands such as PRINTLN (print a line). The device driver
accepts these generic high-level commands and breaks them into a series of low-level device-specific
commands as required by the device being driven. Furthermore, drivers can provide a level of security
as they can run in kernel-mode, thereby protecting the operating system from applications running in
user-mode.
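
A minimal sketch of the "standard interface" idea described above (the structure and names are hypothetical, though Linux's file_operations works the same way): the kernel calls the device through a table of function pointers, and each driver supplies its own hardware-specific routines.

#include <stddef.h>

/* Hypothetical driver-interface table: the kernel calls through these
   function pointers without knowing which device is behind them. */
struct driver_ops {
    int  (*open) (int minor);
    int  (*close)(int minor);
    long (*read) (int minor, char *buf, size_t len);
    long (*write)(int minor, const char *buf, size_t len);
};

/* A device-specific driver fills the table with its own routines. */
static long printer_write(int minor, const char *buf, size_t len) {
    (void)minor; (void)buf;
    /* ...translate the generic write into printer-specific commands... */
    return (long)len;
}

static struct driver_ops printer_driver = {
    .write = printer_write,   /* unimplemented operations stay NULL */
};

An application-level write eventually reaches printer_write without the caller knowing anything printer-specific.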

Because of the diversity of modern hardware and operating systems, drivers operate in many different
environments. Drivers may interface with:
• printers, video adapters, network cards, sound cards
• local buses of various sorts — in particular, for bus mastering on modern systems
• low-bandwidth I/O buses of various sorts (for pointing devices such as mice, keyboards, USB, etc.)
• computer storage devices such as hard disk, CD-ROM and floppy disk buses (ATA, SATA, SCSI)
• implementing support for different file systems
• image scanners, digital cameras
7a. Producer-Consumer Problem:

The producer-consumer problem (also known as the bounded-buffer problem) is a classical example of a multi-process synchronization problem. The problem describes two processes, the
producer and the consumer, who share a common, fixed-size buffer. The producer's job is to generate
a piece of data, put it into the buffer and start again. At the same time the consumer is consuming the
data (i.e. removing it from the buffer) one piece at a time. The problem is to make sure that the
producer won't try to add data into the buffer if it's full and that the consumer won't try to remove data
from an empty buffer.

The solution for the producer is to either go to sleep or discard data if the buffer is full. The next time
the consumer removes an item from the buffer, it notifies the producer who starts to fill the buffer
again. In the same way, the consumer can go to sleep if it finds the buffer to be empty. The next time
the producer puts data into the buffer, it wakes up the sleeping consumer. The solution can be
reached by means of inter-process communication, typically using semaphores. An inadequate
solution could result in a deadlock where both processes are waiting to be awakened. The problem
can also be generalized to have multiple producers and consumers.

Inadequate implementation

To solve the problem, a careless programmer might come up with the solution shown below; it contains a race condition. In the solution two library routines are used, sleep and wakeup. When sleep is called, the caller is blocked until another process wakes it up by using the wakeup routine. itemCount is the number of items in the buffer.

int itemCount = 0

procedure producer() {
    while (true) {
        item = produceItem()

        if (itemCount == BUFFER_SIZE) {
            sleep()
        }

        putItemIntoBuffer(item)
        itemCount = itemCount + 1

        if (itemCount == 1) {
            wakeup(consumer)    // lost if the consumer is not yet asleep
        }
    }
}

procedure consumer() {
    while (true) {
        if (itemCount == 0) {
            sleep()             // may be interrupted just before this call
        }

        item = removeItemFromBuffer()
        itemCount = itemCount - 1

        if (itemCount == BUFFER_SIZE - 1) {
            wakeup(producer)
        }

        consumeItem(item)
    }
}

The problem with this solution is that it contains a race condition that can lead into a deadlock.
Consider the following scenario:

1. The consumer has just read the variable itemCount, noticed it's zero and is just about to
move inside the if-block.
2. Just before calling sleep, the consumer is interrupted and the producer is resumed.
3. The producer creates an item, puts it into the buffer, and increases itemCount.
4. Because the buffer was empty prior to the last addition, the producer tries to wake up the
consumer.
5. Unfortunately the consumer wasn't yet sleeping, and the wakeup call is lost. When the
consumer resumes, it goes to sleep and will never be awakened again. This is because the
consumer is only awakened by the producer when itemCount is equal to 1.
6. The producer will loop until the buffer is full, after which it will also go to sleep.

Since both processes will sleep forever, we have run into a deadlock. This solution therefore is
unsatisfactory.

An alternative analysis is that if the programming language does not define the semantics of
concurrent accesses to shared variables (in this case itemCount) without use of synchronization, then
the solution is unsatisfactory for that reason, without needing to explicitly demonstrate a race
condition.

Using semaphores

Semaphores solve the problem of lost wakeup calls. In the solution below we use two semaphores,
fillCount and emptyCount, to solve the problem. fillCount is incremented and emptyCount
decremented when a new item has been put into the buffer. If the producer tries to decrement
emptyCount while its value is zero, the producer is put to sleep. The next time an item is consumed,
emptyCount is incremented and the producer wakes up. The consumer works analogously.

semaphore fillCount = 0
semaphore emptyCount = BUFFER_SIZE

procedure producer() {
    while (true) {
        item = produceItem()
        down(emptyCount)
        putItemIntoBuffer(item)
        up(fillCount)
    }
}

procedure consumer() {
    while (true) {
        down(fillCount)
        item = removeItemFromBuffer()
        up(emptyCount)
        consumeItem(item)
    }
}

The solution above works fine when there is only one producer and consumer. Unfortunately, with
multiple producers or consumers this solution contains a serious race condition that could result in two
or more processes reading or writing into the same slot at the same time. To understand how this is
possible, imagine how the procedure putItemIntoBuffer() can be implemented. It could contain two
actions, one determining the next available slot and the other writing into it. If the procedure can be
executed concurrently by multiple producers, then the following scenario is possible:

1. Two producers decrement emptyCount
2. One of the producers determines the next empty slot in the buffer
3. Second producer determines the next empty slot and gets the same result as the first producer
4. Both producers write into the same slot

To overcome this problem, we need a way to make sure that only one producer is executing
putItemIntoBuffer() at a time. In other words we need a way to execute a critical section with mutual
exclusion. To accomplish this we use a binary semaphore called mutex. Since the value of a binary
semaphore can be only either one or zero, only one process can be executing between down(mutex)
and up(mutex). The solution for multiple producers and consumers is shown below.

semaphore mutex = 1
semaphore fillCount = 0
semaphore emptyCount = BUFFER_SIZE

procedure producer() {
    while (true) {
        item = produceItem()
        down(emptyCount)
        down(mutex)              // enter critical section
        putItemIntoBuffer(item)
        up(mutex)                // leave critical section
        up(fillCount)
    }
}

procedure consumer() {
    while (true) {
        down(fillCount)
        down(mutex)
        item = removeItemFromBuffer()
        up(mutex)
        up(emptyCount)
        consumeItem(item)
    }
}

Notice that the order in which different semaphores are incremented or decremented is essential:
changing the order might result in a deadlock.
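
For comparison with the pseudocode, here is a runnable version of the multiple-producer solution using POSIX threads and semaphores (assuming Linux; compile with cc -pthread; the buffer indices in and out are an implementation detail of putItemIntoBuffer/removeItemFromBuffer not shown in the pseudocode):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define BUFFER_SIZE 8

static int   buffer[BUFFER_SIZE];
static int   in = 0, out = 0;      /* next slot to fill / to empty */
static sem_t fillCount, emptyCount, mutex;

static void *producer(void *arg) {
    (void)arg;
    for (int item = 0; item < 100; item++) {
        sem_wait(&emptyCount);          /* down(emptyCount) */
        sem_wait(&mutex);               /* down(mutex)      */
        buffer[in] = item;              /* putItemIntoBuffer(item) */
        in = (in + 1) % BUFFER_SIZE;
        sem_post(&mutex);               /* up(mutex)        */
        sem_post(&fillCount);           /* up(fillCount)    */
    }
    return NULL;
}

static void *consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < 100; i++) {
        sem_wait(&fillCount);           /* down(fillCount)  */
        sem_wait(&mutex);
        int item = buffer[out];         /* removeItemFromBuffer() */
        out = (out + 1) % BUFFER_SIZE;
        sem_post(&mutex);
        sem_post(&emptyCount);
        printf("consumed %d\n", item);  /* consumeItem(item) */
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&fillCount, 0, 0);
    sem_init(&emptyCount, 0, BUFFER_SIZE);
    sem_init(&mutex, 0, 1);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}

Here sem_wait corresponds to down and sem_post to up, and the ordering constraint from the text is preserved: emptyCount is decremented before mutex is acquired.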

Using monitors

The following pseudocode shows a solution to the producer-consumer problem using monitors. Since mutual exclusion is implicit with monitors, no extra effort is necessary to protect the critical section. In
other words, the solution shown below works with any number of producers and consumers without
any modifications. It is also noteworthy that using monitors makes race conditions much less likely
than when using semaphores.

monitor ProducerConsumer {
    int itemCount = 0
    condition full
    condition empty

    procedure add(item) {
        while (itemCount == BUFFER_SIZE) {
            wait(full)
        }

        putItemIntoBuffer(item)
        itemCount = itemCount + 1

        if (itemCount == 1) {
            notify(empty)
        }
    }

    function remove() {
        while (itemCount == 0) {
            wait(empty)
        }

        item = removeItemFromBuffer()
        itemCount = itemCount - 1

        if (itemCount == BUFFER_SIZE - 1) {
            notify(full)
        }

        return item
    }
}

procedure producer() {
    while (true) {
        item = produceItem()
        ProducerConsumer.add(item)
    }
}

procedure consumer() {
    while (true) {
        item = ProducerConsumer.remove()
        consumeItem(item)
    }
}

Note the use of while statements in the above code, both when testing if the buffer is full and when testing if it is empty. With multiple consumers, there is a race condition where one consumer gets notified that an item has been put into the buffer but another consumer is already waiting on the monitor and removes it from the buffer instead. If the while were instead an if, too many items might be put into the buffer, or a remove might be attempted on an empty buffer.

7b. Deadlock

A deadlock is a situation wherein two or more competing actions are each waiting for the other to finish, and thus neither ever does. Deadlock refers to a specific condition when two or more processes are each waiting for the other to release a resource, or more than two processes are waiting for resources in a circular chain. Deadlock is a common problem in multiprocessing where many processes share a specific type of mutually exclusive resource known as a software lock or soft lock. Computers intended for the time-sharing and/or real-time markets are often equipped with a hardware lock (or hard lock) which guarantees exclusive access to processes, forcing serialized access. Deadlocks are particularly troubling because there is no general solution for avoiding (soft) deadlocks.

This situation may be likened to two people who are drawing diagrams, with only one pencil and one
ruler between them. If one person takes the pencil and the other takes the ruler, a deadlock occurs
when the person with the pencil needs the ruler and the person with the ruler needs the pencil to
finish his work with the ruler. Neither request can be satisfied, so a deadlock occurs.

The telecommunications description of deadlock is weaker than Coffman deadlock because processes can wait for messages instead of resources. Deadlock can be the result of corrupted
messages or signals rather than merely waiting for resources. For example, a dataflow element that
has been directed to receive input on the wrong link will never proceed even though that link is not
involved in a Coffman cycle.

An example of a deadlock which may occur in database products is the following. Client applications
using the database may require exclusive access to a table, and in order to gain exclusive access
they ask for a lock. If one client application holds a lock on a table and attempts to obtain the lock on a
second table that is already held by a second client application, this may lead to deadlock if the
second application then attempts to obtain the lock that is held by the first application.

Necessary conditions

There are four necessary conditions, known as the Coffman conditions, which must all hold for a deadlock to occur.

1. Mutual exclusion condition: a resource that cannot be used by more than one process at a
time
2. Hold and wait condition: processes already holding resources may request new resources
3. No preemption condition: No resource can be forcibly removed from a process holding it,
resources can be released only by the explicit action of the process
4. Circular wait condition: two or more processes form a circular chain where each process waits
for a resource that the next process in the chain holds

Prevention

• Removing the mutual exclusion condition means that no process may have exclusive access
to a resource. This proves impossible for resources that cannot be spooled, and even with
spooled resources deadlock could still occur. Algorithms that avoid mutual exclusion are
called non-blocking synchronization algorithms.
• The "hold and wait" conditions may be removed by requiring processes to request all the
resources they will need before starting up (or before embarking upon a particular set of
operations); this advance knowledge is frequently difficult to satisfy and, in any case, is an
inefficient use of resources. Another way is to require processes to release all their resources
before requesting all the resources they will need. This too is often impractical. (Such
algorithms, such as serializing tokens, are known as the all-or-none algorithms.)
• A "no preemption" (lockout) condition may also be difficult or impossible to avoid as a process
has to be able to have a resource for a certain amount of time, or the processing outcome
may be inconsistent or thrashing may occur. However, inability to enforce preemption may
interfere with a priority algorithm. (Note: Preemption of a "locked out" resource generally
implies a rollback, and is to be avoided, since it is very costly in overhead.) Algorithms that
allow preemption include lock-free and wait-free algorithms and optimistic concurrency
control.
• The circular wait condition: Algorithms that avoid circular waits include "disable interrupts
during critical sections", and "use a hierarchy to determine a partial ordering of resources"
(where no obvious hierarchy exists, even the memory address of resources has been used to
determine ordering) and Dijkstra's solution.
Circular wait prevention

Circular wait prevention consists of allowing processes to wait for resources, but ensuring that the waiting cannot be circular. One approach is to assign a precedence to each resource and force processes to request resources in order of increasing precedence. That is to say, if a process holds some resources and the highest precedence of these resources is m, then this process cannot request any resource with precedence smaller than m. This forces resource allocation to follow a particular and non-circular ordering, so circular wait cannot occur. Another approach is to allow holding only one resource per process; if a process requests another resource, it must first free the one it is currently holding (that is, disallow hold-and-wait).
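
A minimal sketch of the "order resources by precedence" idea using POSIX mutexes, with the resources ordered by address (the helper names are hypothetical):

#include <pthread.h>
#include <stdint.h>

/* Acquire two mutexes in a fixed global order (here, by address) so
   that no two threads can ever wait on each other in a cycle. */
static void lock_pair(pthread_mutex_t *a, pthread_mutex_t *b) {
    if ((uintptr_t)a > (uintptr_t)b) {   /* impose the ordering */
        pthread_mutex_t *t = a; a = b; b = t;
    }
    pthread_mutex_lock(a);               /* lower precedence first */
    pthread_mutex_lock(b);               /* then the higher one    */
}

static void unlock_pair(pthread_mutex_t *a, pthread_mutex_t *b) {
    pthread_mutex_unlock(a);
    pthread_mutex_unlock(b);
}

Any code that takes both locks through lock_pair acquires them in the same global order, so the circular wait condition can never arise between them.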

Avoidance

Deadlock can be avoided if certain information about processes is available in advance of resource
allocation. For every resource request, the system sees if granting the request will mean that the
system will enter an unsafe state, meaning a state that could result in deadlock. The system then only
grants requests that will lead to safe states. In order for the system to be able to figure out whether
the next state will be safe or unsafe, it must know in advance at any time the number and type of all
resources in existence, available, and requested. One known algorithm that is used for deadlock
avoidance is the Banker's algorithm, which requires the maximum resource usage of each process to be known in advance.
However, for many systems it is impossible to know in advance what every process will request. This
means that deadlock avoidance is often impossible.

Two other algorithms are Wait/Die and Wound/Wait, each of which uses a symmetry-breaking
technique. In both these algorithms there exists an older process (O) and a younger process (Y).
Process age can be determined by a timestamp at process creation time. Smaller time stamps are
older processes, while larger timestamps represent younger processes.

                                    Wait/Die    Wound/Wait
O needs a resource held by Y        O waits     Y dies
Y needs a resource held by O        Y dies      Y waits

It is important to note that a process may be in an unsafe state without this resulting in a deadlock. The notion of safe/unsafe states refers only to the ability of the system to enter a deadlock state or not. For example, if a process requests A, which would result in an unsafe state, but releases B, which would prevent circular wait, then the state is unsafe but the system is not in deadlock.
Detection

Often, neither avoidance nor deadlock prevention may be used. Instead deadlock detection and
process restart are used by employing an algorithm that tracks resource allocation and process
states, and rolls back and restarts one or more of the processes in order to remove the deadlock.
Detecting a deadlock that has already occurred is easily possible since the resources that each
process has locked and/or currently requested are known to the resource scheduler or OS.

Detecting the possibility of a deadlock before it occurs is much more difficult and is, in fact, generally
undecidable, because the halting problem can be rephrased as a deadlock scenario. However, in
specific environments, using specific means of locking resources, deadlock detection may be
decidable. In the general case, it is not possible to distinguish between algorithms that are merely
waiting for a very unlikely set of circumstances to occur and algorithms that will never finish because
of deadlock.

Deadlock detection techniques include, but are not limited to, model checking. This approach constructs a finite-state model on which it performs a progress analysis and finds all possible terminal sets in the model; each of these then represents a deadlock. For single-instance resources, a deadlock that has already occurred can also be found by searching the wait-for graph for a cycle, as sketched below.
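
A sketch of that search (the graph representation is hypothetical): a depth-first search that finds a back edge has found a circular wait.

#include <stdbool.h>

#define MAX_PROC 64

/* waits_for[i][j] is true when process i waits on a resource held by
   process j; the resource scheduler knows this information. */
static bool waits_for[MAX_PROC][MAX_PROC];
static int  color[MAX_PROC];    /* 0 = unvisited, 1 = on stack, 2 = done */

static bool dfs(int p, int n) {
    color[p] = 1;
    for (int q = 0; q < n; q++) {
        if (!waits_for[p][q]) continue;
        if (color[q] == 1) return true;          /* back edge: a cycle */
        if (color[q] == 0 && dfs(q, n)) return true;
    }
    color[p] = 2;
    return false;
}

/* True when the current state contains a circular wait, i.e. a
   deadlock among single-instance resources. */
bool deadlock_detected(int n) {
    for (int i = 0; i < n; i++) color[i] = 0;
    for (int i = 0; i < n; i++)
        if (color[i] == 0 && dfs(i, n)) return true;
    return false;
}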
8(a). Virtual Memory Management Systems

Virtual memory is a memory management technique developed for multitasking kernels; this
technique virtualizes computer architecture's various hardware memory devices (such as RAM
modules and disk storage drives), allowing a program to be designed as though:

• there is only one hardware memory device and this "virtual" device acts like a RAM module.
• the program has, by default, sole access to this virtual RAM module as the basis for a
contiguous working memory (an address space).

Systems that employ virtual memory:

• Use hardware memory more efficiently than systems without virtual memory.
• Make the programming of applications easier by:
o Hiding fragmentation.
o Delegating to the kernel the burden of managing the memory hierarchy; there is no
need for the program to handle overlays explicitly.
o Obviating the need to relocate program code or to access memory with relative
addressing.

Virtual memory can be divided into two types:

Paged virtual memory

Segmented virtual memory

Paged virtual memory: Virtual memory divides the virtual address space of an application program
into pages; a page is a block of contiguous virtual memory addresses. Pages are usually at
least 4 KiB (4×1024 bytes) in size, and systems with large virtual address ranges or large
amounts of real memory generally use larger page sizes.

Page tables

Almost all implementations use page tables to translate the virtual addresses seen by the application
program into physical addresses (also referred to as "real addresses") used by the hardware to
process instructions. Each entry in the page table contains a mapping for a virtual page to either the
real memory address at which the page is stored, or an indicator that the page is currently held in a
disk file.
Dynamic address translation

If, while executing an instruction, a CPU fetches an instruction located at a particular virtual address,
or fetches data from a specific virtual address or stores data to a particular virtual address, the virtual
address must be translated to the corresponding physical address. This is done by a hardware
component, sometimes called a memory management unit, which looks up the real address (from the
page table) corresponding to a virtual address and passes the real address to the parts of the CPU
which execute instructions.

Paging supervisor

This part of the operating system creates and manages page tables. If the address translation hardware raises a page fault exception, the paging supervisor accesses secondary storage, reads in the page containing the required virtual address, updates the page tables to reflect the physical location of the virtual address, and finally tells the dynamic address translation mechanism to restart the request. When all physical memory is already in use, as is typical, the paging supervisor must free an area in primary storage to hold the swapped-in page. Freeing memory minimally requires updating the page table to say that the page is in secondary storage. The supervisor saves time by not re-swapping pages that are already present in secondary storage.

Paging supervisors generally choose the page that has been least recently used, guessing that such pages are less likely to be requested. Every time the dynamic address translation hardware matches a virtual address with a real physical memory address, it time-stamps the page table entry for that virtual address. A toy simulation of this policy follows.
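
The sketch below simulates these ideas (all names are hypothetical and the disk transfers are stubbed out): each page access either hits the page table or triggers a "fault" that evicts the least recently used frame, and every successful access time-stamps the frame.

#include <stdint.h>

#define NUM_FRAMES 4
#define NUM_PAGES  16
#define NO_FRAME   (-1)

static int      page_table[NUM_PAGES];    /* page -> frame, or NO_FRAME */
static int      frame_owner[NUM_FRAMES];  /* frame -> page it holds     */
static uint64_t last_used[NUM_FRAMES];    /* per-frame LRU time stamp   */
static uint64_t now = 0;

/* Stand-ins for the disk transfers a real paging supervisor performs. */
static void page_in (int page, int frame) { (void)page; (void)frame; }
static void page_out(int page, int frame) { (void)page; (void)frame; }

void init_vm(void) {
    for (int p = 0; p < NUM_PAGES;  p++) page_table[p]  = NO_FRAME;
    for (int f = 0; f < NUM_FRAMES; f++) frame_owner[f] = NO_FRAME;
}

/* Return the frame holding `page`, handling a page fault if needed. */
int access_page(int page) {
    if (page_table[page] == NO_FRAME) {            /* page fault */
        int victim = 0;                            /* least recently used */
        for (int f = 1; f < NUM_FRAMES; f++)
            if (last_used[f] < last_used[victim]) victim = f;
        if (frame_owner[victim] != NO_FRAME) {
            page_out(frame_owner[victim], victim); /* free the frame   */
            page_table[frame_owner[victim]] = NO_FRAME;
        }
        page_in(page, victim);                     /* swap the page in */
        page_table[page] = victim;
        frame_owner[victim] = page;
    }
    last_used[page_table[page]] = ++now;  /* time-stamp on every access */
    return page_table[page];
}

Call init_vm() once; repeated access_page() calls then exhibit exactly the fault-and-evict behaviour described above.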

8b. Basics of OS Security and Protection

9. Explain the Features of Process Scheduling and Memory Management in UNIX
