OS Unit-2

The document explains the concept of processes in operating systems, detailing their states, memory layout, and the role of the Process Control Block (PCB). It discusses process scheduling, concurrency principles, and challenges such as race conditions, deadlocks, and starvation, along with solutions like semaphores and mutual exclusion. Additionally, it covers interprocess communication methods, including shared memory and message passing.


The Process

A process is a program that is currently executing.


When a program stored on disk is loaded into memory and starts running, it becomes a
process.
In short:
Program = passive entity (stored on disk)
Process = active entity (a program in execution)

The memory layout of a process is typically divided into multiple sections, and is shown in
Figure.

• Text section: the executable code
• Data section: global variables
• Heap section: memory that is dynamically allocated during program run time
• Stack section: temporary data storage when invoking functions (such as function parameters, return addresses, and local variables)

Notice that the sizes of the text and data sections are fixed, as their sizes do not change
during program run time. However, the stack and heap sections can shrink and grow
dynamically during program execution.

Process State

As a process executes, it changes state. The state of a process is defined in part by the
current activity of that process. A process may be in one of the following states:

• New: The process is being created.


• Ready: The process is waiting to be assigned to a processor.

• Running: Instructions are being executed.

• Waiting: The process is waiting for some event to occur (such as an I/O completion or
reception of a signal).

• Terminated: The process has finished execution.
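The states and the transitions between them can be sketched as a small table; the transition names in the comments (admit, dispatch, and so on) are the conventional ones from the five-state model, added here for illustration:

```python
# Legal state transitions in the five-state process model.
TRANSITIONS = {
    "new": {"ready"},                               # admit
    "ready": {"running"},                           # dispatch
    "running": {"ready", "waiting", "terminated"},  # preempt / block / exit
    "waiting": {"ready"},                           # event completion
    "terminated": set(),
}

def can_move(src, dst):
    """Return True if a process may go directly from src to dst."""
    return dst in TRANSITIONS.get(src, set())

print(can_move("running", "waiting"))  # True: a running process may block on I/O
print(can_move("waiting", "running"))  # False: it must pass through ready first
```

Note that a waiting process never goes straight back to running: the event moves it to ready, and the scheduler dispatches it later.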

Process Control Block (PCB)

The operating system stores process information in a data structure called the PCB.
Each process is represented in the operating system by a process control block (PCB)—also
called a task control block.

• Process state. The state may be new, ready, running, waiting, halted, and so on.
• Program counter. The counter indicates the address of the next instruction to be executed for this process.

• CPU registers. The registers vary in number and type, depending on the
computer architecture. They include accumulators, index registers, stack pointers, and
general-purpose registers, plus any condition-code information. Along with the program
counter, this state information must be saved when an interrupt occurs, to allow the process
to be continued correctly afterward when it is rescheduled to run.

• CPU-scheduling information. This information includes a process priority, pointers to scheduling queues, and any other scheduling parameters. (Chapter 5 describes process scheduling.)

• Memory-management information. This information may include such items as the value
of the base and limit registers and the page tables, or the segment tables, depending on the
memory system used by the operating system (Chapter 9).

• Accounting information. This information includes the amount of CPU and real time
used, time limits, account numbers, job or process numbers,
and so on.

• I/O status information. This information includes the list of I/O devices
allocated to the process, a list of open files, and so on.

In brief, the PCB simply serves as the repository for all the data needed to start, or restart, a
process, along with some accounting data.
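In code, the PCB can be sketched as a plain record; the field names below are illustrative only, not any real kernel's layout:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Toy process control block; field names are illustrative only."""
    pid: int                                        # process identifier
    state: str = "new"                              # new/ready/running/waiting/terminated
    program_counter: int = 0                        # next instruction address
    registers: dict = field(default_factory=dict)   # saved CPU registers
    priority: int = 0                               # CPU-scheduling information
    base: int = 0                                   # memory-management information
    limit: int = 0
    cpu_time_used: float = 0.0                      # accounting information
    open_files: list = field(default_factory=list)  # I/O status information

pcb = PCB(pid=42)
pcb.state = "ready"
print(pcb.pid, pcb.state)  # 42 ready
```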
Process Scheduling

The objective of multiprogramming is to have some process running at all times so as to maximize CPU utilization.

The number of processes currently in memory is known as the degree of multiprogramming.

An I/O-bound process is one that spends more of its time doing I/O than it spends doing
computations. A CPU-bound process, in contrast, generates I/O requests infrequently, using
more of its time doing computations.
Scheduling Queues

Context Switch

Switching the CPU core to another process requires performing a state save of the current
process and a state restore of a different process. This task is known as a context switch
and is illustrated in Figure. When a context switch occurs, the kernel saves the context of the
old process in its PCB and loads the saved context of the new process scheduled to run.
Context switch time is pure overhead, because the system does no useful work while
switching.
Principle of Concurrency
Concurrency in operating systems is the ability to manage multiple tasks simultaneously,
allowing processes to overlap in execution to improve CPU utilization, responsiveness, and
throughput. It enables efficient resource sharing—like memory and CPU time—but requires
strict synchronization to prevent issues like deadlocks, race conditions, and resource
starvation.

Key Principles and Core Concepts

1. Process Interaction: Processes can be independent or cooperate, requiring mechanisms for communication (e.g., message passing or shared memory).

2. Mutual Exclusion: Only one process at a time should access critical, non-shareable resources (e.g., printers, shared variables) to prevent conflicts.

3. Synchronization: Coordinating the execution order of processes to ensure consistent data and avoid race conditions (where the output depends on an unexpected sequence).

4. Deadlock Handling: Preventing scenarios where multiple processes are blocked indefinitely, each waiting for a resource held by another.

5. Starvation Prevention: Ensuring that no process is perpetually denied access to resources, allowing all tasks to eventually progress.

6. Resource Allocation: Managing CPU scheduling and memory to prevent any single process from monopolizing system resources.

Problems Caused by Concurrency


Concurrency introduces several challenges.

1. Race Condition
Occurs when two processes access shared data at the same time and the result depends on
execution order.
Example: Two processes updating the same bank account balance.

2. Deadlock
Occurs when processes wait for each other indefinitely.
Example: Process A waits for resource held by Process B and vice versa.

3. Starvation
A process may never get CPU time because higher priority processes keep executing.

4. Data Inconsistency
If multiple processes modify shared data simultaneously, incorrect results may occur.
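The bank-account race from point 1 can be replayed deterministically by splitting each update into its read and write steps; interleaving them by hand shows the lost update:

```python
# Deterministic replay of a race: two "processes" each try to add 100
# to a shared balance, but each update is a separate read, then a write.
balance = 0

# Process A reads the balance...
a_local = balance          # A sees 0
# ...then process B reads it before A has written back.
b_local = balance          # B also sees 0

balance = a_local + 100    # A writes 100
balance = b_local + 100    # B overwrites with 100: A's deposit is lost

print(balance)  # 100, not the expected 200
```

The final value depends entirely on the order of the reads and writes, which is exactly what defines a race condition.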

Producer/ Consumer Problem

The Producer–Consumer Problem is a classic problem in Operating Systems that illustrates issues related to process synchronization and concurrency.

The Producer–Consumer problem describes a situation where:


One or more processes produce data → called producers
One or more processes consume data → called consumers
Both share a common memory buffer.

The challenge is how to coordinate them so that they do not access the buffer incorrectly at the same time.

Basic Idea
The producer creates data and puts it into a shared buffer; the consumer takes data from the buffer and uses it. The problem is that they must not access the buffer at the same time, so to solve this synchronization problem we use mutex locks or semaphores.

Producer → Buffer → Consumer

Three variables are usually used to write the algorithm for the producer and consumer:
mutex → ensures mutual exclusion (a binary value, i.e., 0 or 1)
empty → number of empty buffer slots (a non-negative integer)
full → number of filled buffer slots (a non-negative integer)

Producer Algorithm

wait(empty)
wait(mutex)

add item to buffer

signal(mutex)
signal(full)

Explanation:
1.​ Wait until buffer has empty space
2.​ Lock the buffer
3.​ Insert item
4.​ Unlock the buffer
5.​ Update count

Consumer Algorithm

wait(full)
wait(mutex)

remove item from buffer

signal(mutex)
signal(empty)
Explanation:
1.​ Wait until buffer has items
2.​ Lock the buffer
3.​ Remove item
4.​ Unlock the buffer
5.​ Update count
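The two algorithms above can be run directly with Python threads, using threading.Semaphore for mutex, empty, and full (a sketch; the buffer capacity N = 3 is an arbitrary choice):

```python
import threading
from collections import deque

N = 3                                # buffer capacity (arbitrary for the demo)
buffer = deque()
mutex = threading.Semaphore(1)       # mutual exclusion on the buffer
empty = threading.Semaphore(N)       # number of empty slots
full = threading.Semaphore(0)        # number of filled slots

def producer(items):
    for item in items:
        empty.acquire()      # wait(empty): wait for an empty slot
        mutex.acquire()      # wait(mutex): lock the buffer
        buffer.append(item)  # add item to buffer
        mutex.release()      # signal(mutex): unlock the buffer
        full.release()       # signal(full): one more filled slot

def consumer(n, out):
    for _ in range(n):
        full.acquire()       # wait(full): wait for an item
        mutex.acquire()      # lock the buffer
        out.append(buffer.popleft())
        mutex.release()      # unlock the buffer
        empty.release()      # signal(empty): one more empty slot

out = []
t1 = threading.Thread(target=producer, args=(range(10),))
t2 = threading.Thread(target=consumer, args=(10, out))
t1.start(); t2.start(); t1.join(); t2.join()
print(out)  # items arrive in order: [0, 1, ..., 9]
```

Note the ordering of the waits: acquiring mutex before empty/full would risk deadlock, since a process could hold the buffer lock while waiting for a slot that only the other process can create.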

Mutual Exclusion
Mutual exclusion (Mutex) in OS is a synchronization mechanism ensuring that only one
process or thread accesses a shared resource or enters a critical section at a time. It
prevents race conditions and data inconsistency by locking resources during use. Common
techniques include mutex locks, semaphores, and disabling interrupts.

Critical Section
A critical section is the portion of a program where a process accesses shared resources
and therefore must not be executed by more than one process at the same time.
Shared resources may include:
●​ shared variables
●​ shared files
●​ shared memory
●​ shared data structures

Critical Section Problem


The critical section problem is the problem of designing a protocol that allows processes to
cooperate safely when accessing shared data.
The goal is to ensure that:
●​ only one process executes the critical section at a time
●​ shared data remains consistent

Requirements for a Solution (according to the Galvin book)


A correct solution to the critical section problem must satisfy three conditions.
1. Mutual Exclusion
Only one process can be in the critical section at a time. If one process is executing in the critical section, other processes must wait.
2. Progress
If no process is in the critical section, the selection of the next process should not be delayed
indefinitely.
Only processes that want to enter the critical section should participate in the decision.
3. Bounded Waiting
There must be a limit on the number of times other processes can enter the critical section
before a waiting process gets its turn.
************************************************************
But if we need to check whether a given solution is valid or not, we check it against the points below:

1. Mutual exclusion

a. One process in critical section, another process tries to enter → Show that second
process will block in entry code.

b. Two (or more) processes are in the entry code → Show that at most one will enter critical
section

2. Progress (== absence of deadlock)

a. No process in critical section, P1 arrives → Show that P1 enters.

b. Two (or more) processes are in the entry code → Show that at least one will enter the critical section.

3. Bounded Waiting (== fairness)

One process is in the critical section, another process is waiting to enter → Show that if the first process exits the critical section and attempts to re-enter, it can re-enter at most a finite number of times before the waiting process gets its turn.

************************************************************

Format for writing a solution to the critical section problem:

Now we will try to write a solution to the critical section problem for two processes. Let's start with our first attempt.

Attempt 1:

int turn = 1; // global variable

code for process0 | code for process1

while (turn!=0); | while (turn!=1);

Critical Section | Critical Section

turn=1; | turn=0;

When you apply the above three conditions (mutual exclusion, progress, and bounded waiting) to check whether this is a valid solution, you will find that it satisfies M.E. and B.W. but not progress.

Why? (I already discussed this with you in class.)

Answer: Check and apply the conditions according to the points highlighted above. Attempt 1 forces strict alternation: if turn = 1 and process 1 never wants to enter the critical section, process 0 is blocked even though the critical section is empty, so progress fails.

As we know, all three conditions must be satisfied individually for a solution to be valid. Because this attempt does not satisfy one of them, it is not a valid solution.

M.E. ✅. Progress ❌. Bounded waiting ✅
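The progress failure can be seen without even running two threads: with turn left at the other process's value and the critical section empty, process 0's entry test never opens. A minimal sketch:

```python
# Attempt 1 forces strict alternation. Suppose P0 has just exited the
# critical section and set turn = 1, but P1 never wants to enter.
turn = 1

def p0_may_enter(turn):
    """P0's entry test: it spins while turn != 0."""
    return turn == 0

# The critical section is empty, yet P0 cannot enter: progress violated.
print(p0_may_enter(turn))  # False
```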


Now, let's make another attempt 2.

Attempt 2:

int want[2]= { False, False }; // global variable

code for process0 | code for process1

want[0]=True; | want[1]=True;

while (want[1]); | while (want[0]);

Critical Section | Critical Section

want[0]=False; | want[1]=False;

Again, the above solution satisfies M.E. and B.W. but not progress: if both processes set their want flags before either tests the other's, both spin in their while loops forever (deadlock).

M.E. ✅. Progress ❌. Bounded waiting ✅


Peterson's Solution
A combination of Attempt 1 and Attempt 2.

int want[2]= { False, False }; // global variable


int turn = 0; // global variable

code for process0 | code for process1

want[0]=True; | want[1]=True;

turn=1; | turn=0;

while (want[1] && turn==1); | while (want[0] && turn==0);

Critical Section | Critical Section

want[0]=False; | want[1]=False;

In the above solution, the turn variable removes the deadlock that occurred in Attempt 2 and thereby satisfies the progress condition. So Peterson's solution fulfills all three requirements according to the points highlighted above:

Mutual Exclusion ✅
Progress ✅
Bounded Waiting ✅
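Peterson's algorithm can be run directly with two Python threads. This demonstration relies on CPython's interpreter executing loads and stores in a sequentially consistent order, which holds under the GIL; a C version on real hardware would additionally need memory barriers.

```python
import sys
import threading

sys.setswitchinterval(0.0005)  # switch threads often so busy waits stay short

want = [False, False]
turn = 0
counter = 0                    # shared data protected by the critical section
ROUNDS = 500

def process(i):
    global turn, counter
    other = 1 - i
    for _ in range(ROUNDS):
        want[i] = True                     # I want to enter
        turn = other                       # but you may go first
        while want[other] and turn == other:
            pass                           # busy wait
        counter += 1                       # critical section
        want[i] = False                    # exit section

t0 = threading.Thread(target=process, args=(0,))
t1 = threading.Thread(target=process, args=(1,))
t0.start(); t1.start(); t0.join(); t1.join()
print(counter)  # 1000: no increment was lost
```

If the entry protocol were removed, increments could interleave and be lost; with it, every one of the 2 × 500 critical-section entries is serialized.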

Test&Set()
Many modern computer systems provide special hardware instructions that allow us either to
test and modify the content of a word or to swap the contents of two words atomically— that
is, as one uninterruptible unit.

Boolean Lock = false; // global variable

Boolean TestAndSet(Boolean *target)
{
    Boolean rv = *target;
    *target = true;
    return rv;
}

while (TestAndSet(&Lock)); // busy wait
/* critical section */
Lock = false;
/* remainder section */



M.E. ✅
Progress ✅
B.W. ❌ (a waiting process can lose the race for the lock indefinitely, so bounded waiting is not guaranteed)
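The same spin lock can be sketched in Python, simulating the atomic instruction with an internal lock (real hardware performs TestAndSet as one uninterruptible instruction, with no helper lock):

```python
import sys
import threading

sys.setswitchinterval(0.0005)  # switch threads often so busy waits stay short

_atomic = threading.Lock()     # stands in for the hardware's atomicity
lock_flag = False              # the "Lock" variable from the pseudocode
counter = 0

def test_and_set():
    """Atomically return the old value of lock_flag and set it to True."""
    global lock_flag
    with _atomic:
        rv = lock_flag
        lock_flag = True
        return rv

def worker():
    global counter, lock_flag
    for _ in range(500):
        while test_and_set():  # busy wait until the old value was False
            pass
        counter += 1           # critical section
        lock_flag = False      # release the lock; remainder section follows

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 2000: every increment was protected
```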

Semaphores

A semaphore is a signaling mechanism used to control access to resources using a counter value. A semaphore is an integer variable that can be accessed only through the following two atomic operations:
1. wait() / P()
2. signal() / V()

wait(S) { | signal(S) {
while (S <= 0); | S++;
// busy wait. | }
S--; |
} |

Here wait(S) is a decrement operation: it blocks while S ≤ 0 and then decrements S by 1. signal(S) is an increment operation: it increments S by 1. S is the semaphore, which is an integer value.

It can allow multiple processes to access resources.


It is more general and flexible than a mutex.

Types of semaphores:
1.​ Binary semaphore (0 or 1, similar to mutex)
2.​ Counting semaphore (0 to N, allows multiple access)

Let's understand how the semaphore works as a solution for the critical section problem. Assume we have two processes P1 and P2, and take a binary semaphore S = 1.

S = 1 // global variable

P1                | P2

while(True)       | while(True)
{                 | {
wait(S);          | wait(S);
C.S.              | C.S.
signal(S);        | signal(S);
}                 | }

When P1 runs, it decrements S by 1 so that S becomes 0, and P1 enters its critical section. If at this time P2 wants to enter its critical section, it cannot: the while-loop condition in P2's wait function is true, so P2 is trapped in that loop until S is incremented by P1's signal function. When P1 is done with its critical section and runs its signal function, it sets S = 1 again; now P2 can pass its wait function and enter its critical section.

Note: if we use a binary semaphore, only one process at a time can execute its critical section, which means mutual exclusion holds.
But if we use a counting semaphore, the number of processes that can execute their critical sections at the same time equals the value of S.

Example: if S = 6, then 6 processes can run at the same time, which means mutual exclusion is not satisfied.

That's why we use counting semaphores when we have a multi-resource system (example: a multicore CPU) and a binary semaphore when we have a single resource (example: a single CPU).
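The difference is easy to see with Python's threading.Semaphore; using non-blocking acquires makes the demonstration deterministic:

```python
import threading

# Counting semaphore with S = 2: two holders fit, the third is refused.
s = threading.Semaphore(2)
print(s.acquire(blocking=False))  # True  - first process enters
print(s.acquire(blocking=False))  # True  - second process enters too
print(s.acquire(blocking=False))  # False - third would block: S is 0
s.release()                       # signal(S): one holder leaves
print(s.acquire(blocking=False))  # True  - a waiter can now enter

# Binary semaphore (S = 1) admits only one holder: mutual exclusion.
b = threading.Semaphore(1)
print(b.acquire(blocking=False))  # True
print(b.acquire(blocking=False))  # False
```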
Question:
A shared variable x, initialized to zero, is operated on by four concurrent processes W, X, Y, Z as follows. Each of the processes W and X reads x from memory, increments it by 2, stores it to memory, and then terminates. Each of the processes Y and Z reads x from memory, decrements it by 3, stores it to memory, and then terminates. Each process, before reading x, invokes the P operation (i.e., wait) on a counting semaphore S, and invokes the V operation (i.e., signal) on S after storing x to memory. Semaphore S is initialized to two. What is the maximum possible value of x after all processes complete execution?
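One interleaving that attains the maximum can be replayed step by step. Because S = 2 lets two processes be inside at once, a stale value can be re-read and a write can be overwritten; the sketch below reaches x = 4, the commonly cited answer to this exercise:

```python
x = 0

# W enters (S: 2 -> 1) and reads x, then stalls before writing.
w_local = x            # W sees 0
# Y enters (S: 1 -> 0), does its whole update, and leaves (S -> 1).
y_local = x            # Y sees 0
x = y_local - 3        # x = -3
# W resumes, writes, and leaves (S -> 2), wiping out Y's decrement.
x = w_local + 2        # x = 2
# Z enters (S: 2 -> 1) and reads x, then stalls.
z_local = x            # Z sees 2
# X enters (S: 1 -> 0) and reads the same stale value.
x_local = x            # X sees 2
# Z writes first...
x = z_local - 3        # x = -1
# ...and X's write wipes out Z's decrement.
x = x_local + 2        # x = 4

print(x)  # 4: both decrements were lost, both increments survived
```

Note that at most two processes are ever inside P...V at once, so the interleaving is legal for S = 2.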

Interprocess Communication
Processes executing concurrently in the operating system may be either
independent processes or cooperating processes. A process is
independent if it does not share data with any other processes executing
in the system. A process is cooperating if it can affect or be affected by
the other processes executing in the system. Clearly, any process that
shares data with other processes is a cooperating process.

Cooperating processes require an interprocess communication (IPC)


mechanism that will allow them to exchange data— that is, send data to
and receive data from each other. There are two fundamental models of
interprocess communication: shared memory and message passing.

Shared memory interprocess communication


Shared Memory IPC is a method where two or more processes
communicate by accessing the same memory area. Processes share a
common memory space and read/write data directly from it.
How it works
1. One process creates a shared memory segment.
2. Other processes attach to this memory.
3. All processes can:
   ● read data
   ● write data
4. Processes must coordinate access to avoid conflicts.
Example: the Producer–Consumer Problem
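A minimal sketch of steps 1–3 using Python's multiprocessing.shared_memory module; both "processes" are shown in one script for brevity, where a real consumer would be a separate process given the segment's name:

```python
from multiprocessing import shared_memory

# Step 1: one process creates a shared memory segment.
seg = shared_memory.SharedMemory(create=True, size=16)
seg.buf[:5] = b"hello"          # write data directly into the segment

# Step 2: another process attaches to the same segment by name.
view = shared_memory.SharedMemory(name=seg.name)

# Step 3: read the data directly; no copy through the kernel.
data = bytes(view.buf[:5])
print(data)  # b'hello'

# Step 4: real programs must coordinate access (e.g., with a semaphore)
# and clean up when done.
view.close()
seg.close()
seg.unlink()
```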

Message passing IPC

Message Passing is a method of interprocess communication (IPC) in which processes communicate by sending and receiving messages instead of sharing memory. Processes talk to each other by exchanging messages.

Message passing uses two main operations:


1. Send(): Used to send a message to another process.
2. Receive(): Used to receive a message from another process.

How It Works
1.​ A process creates a message
2.​ It sends the message using send()
3.​ The OS delivers the message
4.​ The receiving process uses receive() to get it
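The steps above can be sketched with a multiprocessing.Pipe; both endpoints are shown in one script for brevity, where a real system would give one end to each process:

```python
from multiprocessing import Pipe

# A Pipe gives one connection endpoint to each communicating process.
receiver_end, sender_end = Pipe()

# Steps 1-2: a process creates a message and sends it with send().
sender_end.send({"type": "greeting", "body": "hello"})

# Steps 3-4: the OS delivers it; the receiver calls recv() to get it.
msg = receiver_end.recv()
print(msg["body"])  # hello
```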

Prepare on your own: Dining Philosophers Problem, Sleeping Barber Problem
