Operating Systems
• OS is a control program
– Controls execution of programs to prevent errors and
improper use of the computer
Types of operating systems
• A computer system can be organized in a number
of different ways.
• According to the number of general-purpose
processors used.
– Single-Processor Systems
– Multiprocessor Systems
– Clustered Systems
SINGLE-PROCESSOR SYSTEM
• These systems have only one main CPU
• All tasks are executed by this single CPU.
• They handle one process at a time
– Simpler and easier to design.
• Less powerful for multitasking compared to
multi-processor systems.
• More efficient for single-threaded applications.
MULTIPROCESSOR SYSTEMS
Multiprocessor systems are computing architectures that
utilize two or more processors to perform tasks
simultaneously.
❏ Enhances performance by enabling parallel processing
❏ Improves throughput and responsiveness.
❏ Symmetric multiprocessor (SMP) systems - all processors
share the same memory and resources, each processor
performs all tasks ( Tightly Coupled Systems)
❏ Asymmetric multiprocessor systems - processors have
different roles and memory access patterns - each
processor is assigned a specific task.
❏ ( commonly used in servers, high-performance
computing, and applications requiring significant
computational power)
MULTIPROCESSOR SYSTEMS
• Parallel systems/tightly-coupled systems
– Advantages include:
1. Increased throughput
2. Economy of scale
3. Increased reliability – graceful degradation or fault tolerance
Clustered Systems
Symmetric clustering – Advantages
● Scalability: Easily add more nodes to increase capacity without major reconfigurations.
● Flexibility: Any node can handle any task, providing versatility in operations.
● High Availability: Continuous operation is maintained as no single point of failure exists.
Disadvantages
● Complex Management: Coordinating equal nodes can be more complex, especially as the number of nodes
increases.
● Resource Contention: Shared resources may lead to contention issues if not managed properly.
Clustered Systems
Asymmetric clustering (master/slave)
Advantages
● Simpler Coordination: Clear roles simplify the coordination and management of tasks within the cluster.
● Efficient Resource Use: Resources can be optimized based on node roles, ensuring efficient utilization.
● Easier Maintenance: Maintenance and updates can be managed more straightforwardly by focusing on the master
node.
Disadvantages
● Single Point of Failure: If the master node fails, the entire cluster may be affected unless failover mechanisms are in
place.
● Limited Scalability: Adding more slave nodes may offer limited performance gains compared to symmetric clustering.
● Potential Bottleneck: The master node can become a performance bottleneck if it handles too many tasks.
Multiprogramming
• Multiprogramming is a technique for executing a number of
programs concurrently on a single processor.
– In multiprogramming, a number of processes reside in main
memory; the OS picks one of the jobs in main memory and begins
executing it. If a process enters an I/O wait, the CPU switches
from that job to another job.
Advantages:
•Efficient memory utilization
•Increased throughput
•The CPU is never idle, so performance increases.
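The switch-on-I/O-wait behaviour can be sketched as a toy simulation; the job names and the number of CPU bursts per job are made-up values, and a real OS of course interleaves at a much finer grain:

```python
from collections import deque

def multiprogram(jobs):
    """Toy multiprogramming loop: one CPU picks a resident job, runs its
    next CPU burst, and switches away whenever the job issues an I/O wait."""
    ready = deque(jobs)            # jobs resident in main memory: (name, bursts)
    schedule = []
    while ready:
        name, bursts = ready.popleft()
        schedule.append(name)      # CPU executes this job's next CPU burst
        bursts -= 1
        if bursts > 0:             # job issued an I/O wait: requeue, switch jobs
            ready.append((name, bursts))
    return schedule

print(multiprogram([("J1", 2), ("J2", 1), ("J3", 2)]))
# ['J1', 'J2', 'J3', 'J1', 'J3']
```

The CPU is never idle as long as some resident job has a CPU burst left, which is the point of keeping several jobs in memory at once.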
Multiprogramming
[Figure: main-memory layout with Job 1 through Job 5 resident simultaneously]
S.No. Asymmetric Multiprocessing | Symmetric Multiprocessing
1. The processors are not treated equally. | All the processors are treated equally.
2. No communication between processors, as they are controlled by the master processor. | All processors communicate with one another through shared memory.
3. It takes less time for job processing. | It takes more time to process the jobs.
4. In this, more than one process can be executed at a time. | In this, one process can be executed at a time.
5. It is economical. | It is costlier.
1. Main Thread: The primary thread of the application - responsible for managing the user
interface and handling user interactions
● It receives and processes user input (clicking buttons or selecting songs).
● Terminated: The thread has finished its execution and will not
run again.
● Initially, all threads are in the "New" state.
● When the application starts, the main thread enters the "Runnable"
state and begins executing.
● If the user selects a song, the main thread may pass the request to
the playback thread, which transitions from "New" to "Runnable."
● The playback thread loads the audio file and enters the "Blocked"
state while waiting for the file to load.
● Once the audio file is loaded, the playback thread transitions to the
"Runnable" state, and the operating system scheduler allows it to
start executing.
● The playback thread continuously updates the playback status and
remains in the "Running" state while the song is playing.
● If the user interacts with the GUI, the GUI thread transitions from
"Runnable" to "Running" to handle the user input and update the
display.
● When the user closes the application, all threads eventually reach the
"Terminated" state.
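The lifecycle described above can be observed with Python's threading module; the song title and the sleep-based "loading" step are hypothetical stand-ins for real audio I/O:

```python
import threading
import time

def play_song(title):
    # Simulates loading the audio file (the thread waits on "I/O" here),
    # then a short playback loop while the song is "playing".
    time.sleep(0.05)   # "Blocked" while the file loads
    time.sleep(0.05)   # "Running" while the song plays

# New: the thread object exists but has not started executing.
playback = threading.Thread(target=play_song, args=("song.mp3",))
assert not playback.is_alive()

# Runnable/Running: start() hands the thread to the OS scheduler.
playback.start()
assert playback.is_alive()

# Terminated: join() blocks until the thread finishes; it will not run again.
playback.join()
assert not playback.is_alive()
```

Note that Python exposes only alive/not-alive; the finer New/Runnable/Blocked/Running distinctions live inside the OS scheduler.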
Process vs Thread?
• The primary difference is that threads
within the same process run in a
shared memory space, while
processes run in separate memory
spaces.
1. The process takes more time to terminate. The thread takes less time to terminate.
2. It takes more time for creation. It takes less time for creation.
3. It also takes more time for context switching. It takes less time for context switching.
4. The process is called a heavyweight process. A thread is lightweight, as each thread in a process shares code, data, and resources.
5. If one process is blocked, it does not affect the execution of other processes. If a user-level thread is blocked, then all other user-level threads are blocked.
6. Processes do not share data with each other. Threads share data with each other.
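A short experiment with Python's threading and multiprocessing modules makes the shared-versus-separate memory point concrete (a sketch; the `demo` and `increment` names are chosen here for illustration):

```python
import threading
import multiprocessing

counter = 0

def increment():
    global counter
    counter += 1

def demo():
    global counter
    counter = 0
    # A thread shares the process's address space: its update is visible here.
    t = threading.Thread(target=increment)
    t.start(); t.join()
    after_thread = counter          # incremented to 1
    # A child process runs in a separate address space: it increments its
    # own copy, and the parent's counter is unchanged.
    p = multiprocessing.Process(target=increment)
    p.start(); p.join()
    after_process = counter         # still 1 in the parent
    return after_thread, after_process

if __name__ == "__main__":
    print(demo())  # (1, 1)
```

The `__main__` guard matters because on platforms that spawn (rather than fork) child processes, the module is re-imported in the child.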
Process State
• As a process executes, it changes state
– new: The process is being created
– running: Instructions are being executed
– waiting: The process is waiting for some event to occur
– ready: The process is waiting to be assigned to a processor
– terminated: The process has finished execution
New: When a user launches the word processing application, a new
process is created by the OS to handle the application's execution.
Ready: The word processing process is loaded into memory and is waiting to
be assigned the CPU.
Running: The process is executing commands and performing operations
based on user input. (typing, formatting text, Editing or saving documents
Blocked: While the word processing process is running, it may encounter
situations where it needs to wait for certain events. If the user initiates a file
open operation and the file is large or stored on a slow storage device, the
process may be blocked while waiting for the file to load.
Terminated: When the user decides to close the word processing application
or when the task is completed, the process enters the terminated state. It
releases any system resources it was using, such as memory and file
handles, and exits
Throughout the lifecycle of the word processing process, it can transition
between these states based on user actions, system events, and the
completion of tasks.
The OS manages the process scheduling, memory allocation, and I/O
operations to ensure smooth operation of the word processing application
State Transitions
• Valid
• New to ready
• Ready to running
• Running to exit
• Running to ready
• Running to blocked
• Blocked to ready
• Ready to exit
• Blocked to exit
Process Control Block (PCB)
CPU Scheduling – Examples
• Process set: P1 (burst time 24), P2 (3), P3 (3)
• [Gantt chart: schedule boundaries at 0, 3, 9, 16, 24]
• Now we add the concepts of varying arrival times and preemption to the analysis
• [Gantt chart: preemptive schedule boundaries at 0, 1, 5, 10, 17, 26]
• [Gantt chart: schedule boundaries at 0, 1, 6, 16, 18, 19]
• Round Robin Gantt chart: P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1, with
boundaries at 0, 4, 7, 10, 14, 18, 22, 26, 30
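The round-robin schedule for P1 = 24, P2 = 3, P3 = 3 with a quantum of 4 can be reproduced with a short simulation (a sketch assuming all processes arrive at time 0):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate round-robin scheduling; returns (gantt, boundaries)."""
    remaining = dict(bursts)
    ready = deque(name for name, _ in bursts)
    gantt, boundaries, t = [], [0], 0
    while ready:
        name = ready.popleft()
        run = min(quantum, remaining[name])   # run one quantum or finish early
        t += run
        remaining[name] -= run
        gantt.append(name)
        boundaries.append(t)
        if remaining[name] > 0:               # preempted: back of the queue
            ready.append(name)
    return gantt, boundaries

gantt, times = round_robin([("P1", 24), ("P2", 3), ("P3", 3)], quantum=4)
print(gantt)  # ['P1', 'P2', 'P3', 'P1', 'P1', 'P1', 'P1', 'P1']
print(times)  # [0, 4, 7, 10, 14, 18, 22, 26, 30]
```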
• Three queues:
– Q0 – RR with time quantum 8 milliseconds
– Q1 – RR time quantum 16 milliseconds
– Q2 – FCFS
• Scheduling
– A new job enters queue Q0 which is served FCFS
• When it gains CPU, job receives 8 milliseconds
• If it does not finish in 8 milliseconds, job is moved to queue Q1
– At Q1 job is again served FCFS and receives 16 additional
milliseconds
• If it still does not complete, it is preempted and moved to queue Q2
Multilevel Feedback Queues
Example:
• Consider a system that has a CPU-bound process, which requires a
burst time of 40 seconds. The multilevel feedback queue
scheduling algorithm is used; the first queue's time quantum is 2
seconds, and at each level the quantum is incremented by 5 seconds.
How many times will the process be interrupted, and in which queue
will it terminate its execution?
• Solution:
• Process P needs 40 Seconds for total execution.
• At Queue 1 it is executed for 2 seconds and then interrupted and
shifted to queue 2.
• At Queue 2 it is executed for 7 seconds and then interrupted and
shifted to queue 3.
• At Queue 3 it is executed for 12 seconds and then interrupted
and shifted to queue 4.
• At Queue 4 it is executed for 17 seconds and then interrupted
and shifted to queue 5.
• At Queue 5 it executes for the remaining 2 seconds and then completes.
• Hence the process is interrupted 4 times and terminates its execution in Queue 5.
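The worked example above can be checked with a few lines of Python (a sketch that models only the quantum growth, not a full multi-process scheduler):

```python
def mlfq_interruptions(burst, first_quantum=2, increment=5):
    """Count preemptions of one process in an MLFQ where each successive
    queue's time quantum grows by `increment` seconds."""
    quantum, queue, interruptions = first_quantum, 1, 0
    while burst > quantum:
        burst -= quantum          # runs the full quantum, then is preempted
        interruptions += 1
        quantum += increment      # demoted to the next queue (larger quantum)
        queue += 1
    return interruptions, queue   # finishes within the final queue's quantum

print(mlfq_interruptions(40))  # (4, 5): interrupted 4 times, ends in Queue 5
```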
Process Synchronization
• Process Synchronization is a way to coordinate
processes that use shared data. It occurs in an
operating system among cooperating processes.
• Cooperating processes are processes that share
resources.
• While executing many concurrent processes,
process synchronization helps to maintain data
consistency and cooperating process execution.
Race Condition
from threading import Thread

balance = 100  # shared variable

def withdraw(amount):
    global balance
    if balance >= amount:      # check
        balance -= amount      # update (check-then-update is not atomic)

# Create two threads
thread_A = Thread(target=withdraw, args=(50,))
thread_B = Thread(target=withdraw, args=(70,))
The initial `balance` is set to 100. `thread_A` attempts to withdraw 50 units, and
`thread_B` attempts to withdraw 70 units.
if `thread_A` executes first, the final balance will be 50. Conversely, if `thread_B`
executes first, the final balance will be 30.
If the threads are interleaved, the final balance will depend on the order and timing
of their operations. For instance, if both threads pass the balance check before
either subtraction takes effect, `thread_A` withdraws 50 and `thread_B` withdraws
70, leaving a final balance of -20 (an overdraft), because neither check saw the
other thread's update.
if `thread_A` and `thread_B` both read the initial balance of 100 simultaneously,
they may proceed to withdraw their amounts without considering each other's
changes. As a result, both threads may update the balance to negative values,
leading to an inconsistent result.
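One standard fix (a sketch, not the only possible one) is to make the check and the update a single critical section guarded by a mutex lock:

```python
from threading import Thread, Lock

balance = 100
balance_lock = Lock()

def safe_withdraw(amount):
    global balance
    # The lock makes check-and-update one atomic critical section.
    with balance_lock:
        if balance >= amount:
            balance -= amount

threads = [Thread(target=safe_withdraw, args=(a,)) for a in (50, 70)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(balance)  # 50 or 30 depending on which thread ran first, never negative
```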
• A process must acquire a lock before entering a critical section, and it releases
the lock when it exits the critical section.
• The acquire() acquires the lock, and release() releases the lock.
• A mutex lock has a boolean variable available whose value indicates if the lock
is available or not.
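Python's `threading.Lock` exposes exactly this acquire()/release() interface, with `locked()` playing the role of the `available` flag (a usage sketch):

```python
from threading import Lock

lock = Lock()
assert not lock.locked()   # lock is available

lock.acquire()             # enter the critical section
assert lock.locked()       # lock is no longer available
# ... critical section ...
lock.release()             # exit the critical section
assert not lock.locked()   # lock is available again
```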
Semaphore
• Synchronization tool to control access to shared resources by multiple processes or
threads in a concurrent system, that does not require busy waiting
• Semaphore S – integer variable
• Two standard operations modify S: wait() and signal() (called P() and V())
– When a process wants to access a shared resource, it must perform a wait operation
on the semaphore.
– If the semaphore's value is greater than zero indicating that there are resources
available, the process decrements the semaphore's value and proceeds.
– If the semaphore's value is zero indicating that all resources are currently in use ,
the process may have to wait until a resource becomes available.
– When a process finishes using a shared resource, it must perform a signal operation
on the semaphore and increments the semaphore's value, signals that a resource
has been released and is now available for use by another process.
wait(S) {
    while (S <= 0)
        ;   // busy wait (no-op)
    S--;
}
signal(S) {
    S++;
}
(This is the classical busy-waiting definition; blocking implementations instead put
the waiting process to sleep and wake it on signal.)
Semaphore
There are two types of semaphores :
1)Binary Semaphores
2)Counting Semaphores
Binary Semaphores : They can only be either 0 or 1.
– They are also known as mutex locks, as the locks can provide mutual
exclusion.
– All the processes can share the same mutex semaphore that is
initialized to 1.
– A process has to wait until the semaphore's value becomes 1.
– Then, the process sets the mutex semaphore to 0 and starts its
critical section.
– When it completes its critical section, it resets the value of the mutex
semaphore to 1, and some other process can enter its critical
section.
• Counting Semaphores : They can have any value and are
not restricted over a certain domain
• They can be used to control access to a resource that has a
limit on the number of simultaneous accesses.
• The semaphore can be initialized to the number of
instances of the resource.
• Whenever a process wants to use that resource, it checks if
the number of remaining instances is more than zero, i.e.,
the process has an instance available.
• Then, the process can enter its critical section thereby
decreasing the value of the counting semaphore by 1. After
the process is over with the use of the instance of the
resource, it can leave the critical section thereby adding 1
to the number of available instances of the resource
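The instance-counting behaviour maps directly onto Python's `threading.Semaphore`; in this sketch the "resource" has 3 assumed instances and the sleep stands in for actually using one:

```python
import threading
import time

pool = threading.Semaphore(3)   # resource with 3 instances
guard = threading.Lock()        # protects the bookkeeping counters
active = 0
peak = 0

def use_resource():
    global active, peak
    with pool:                  # wait(): decrements the count, blocks at 0
        with guard:
            active += 1
            peak = max(peak, active)
        time.sleep(0.01)        # hold one instance briefly
        with guard:
            active -= 1
    # leaving the with-block performs signal(): the instance is released

threads = [threading.Thread(target=use_resource) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak)  # never exceeds 3
```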
• A counting semaphore S is initialized to 10.
• Then, 6 P operations and 4 V operations are performed on S.
• What is the final value of S?
P operation means wait operation decrements the value of semaphore variable by 1.
V operation also called as signal operation increments the value of semaphore variable by 1.
Thus,
Final value of semaphore variable S
= 10 – (6 x 1) + (4 x 1)
= 10 – 6 + 4=8
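The same bookkeeping written out as code, where P and V are just the integer updates from the definition above:

```python
S = 10          # initial value of the counting semaphore
S -= 6 * 1      # six P (wait) operations, each decrements S by 1
S += 4 * 1      # four V (signal) operations, each increments S by 1
print(S)  # 8
```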
Processes can run in many ways; below is one of the cases in which x attains its
maximum value.
If x = 0 and semaphore S is initialized to 2:
• Process W executes: S = 1, x = 1, but it does not yet update the x variable.
• Then process Y executes: S = 0; it decrements x, so x = -2, and signals the
semaphore, S = 1.
• Now process Z executes: S = 0, x = -4; signal semaphore, S = 1.
monitor alarm
{
    condition c;

    void delay(int ticks)
    {
        int begin_time = read_clock();
        while (read_clock() < begin_time + ticks)
            c.wait();
    }

    void tick()
    {
        c.broadcast();
    }
}
Suppose we want to synchronize two concurrent processes A and B to display
11001100110011..….. The code for A and B is shown below
Process A:
while (1) {
W:
    print '1';
    print '1';
X:
}
Process B:
while (1) {
Y:
    print '0';
    print '0';
Z:
}
Process P:
while (1) {
    wait(S);
    print '1';
    print '1';
    signal(T);
}
Process Q:
while (1) {
    wait(T);
    print '0';
    print '0';
    signal(S);
}
With semaphore S initialized to 1 and T initialized to 0, P prints the first "11" and
the two processes then alternate, producing 110011001100…
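A quick way to check the semaphore solution is to run it with Python threads; the loop bound of 3 replaces while(1), and the initial values S = 1, T = 0 are an assumption the slide leaves implicit:

```python
import threading

S = threading.Semaphore(1)   # P may print first
T = threading.Semaphore(0)   # Q must wait for P's signal
out = []

def proc_P():
    for _ in range(3):       # three rounds in place of while(1)
        S.acquire()          # wait(S)
        out.append('1'); out.append('1')
        T.release()          # signal(T)

def proc_Q():
    for _ in range(3):
        T.acquire()          # wait(T)
        out.append('0'); out.append('0')
        S.release()          # signal(S)

tp = threading.Thread(target=proc_P)
tq = threading.Thread(target=proc_Q)
tp.start(); tq.start()
tp.join(); tq.join()
print(''.join(out))  # 110011001100
```

The two semaphores enforce strict alternation: P can only run after Q's signal(S), and vice versa, so no other interleaving is possible.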
Consider the methods used by processes P1 and P2 for accessing their critical
sections
whenever needed, as given below. The initial values of shared Boolean variables
S1 and S2
are randomly assigned.
Method Used by P1
while (S1 == S2);
Critical Section
S1 = S2;
Method Used by P2
while (S1 != S2);
Critical Section
S2 = not (S1);
Check whether all the critical-region properties are satisfied, and justify your
answer.
Principle of Mutual Exclusion: No two processes may be simultaneously present
in the critical section; if one process is in the critical section, the other must not
be allowed in. P1 can enter the critical section only if S1 is not equal to S2, and
P2 can enter the critical section only if S1 is equal to S2. Therefore mutual
exclusion is satisfied.
Progress: No process running outside the critical section should block another
interested process from entering the critical section whenever the critical section
is free. Suppose P1, after executing its critical section, wants to execute it again,
while P2 does not want to enter the critical section at all; in that case P1 has to
wait unnecessarily for P2. Hence progress is not satisfied.
Computer System Organization
• Computer-system operation
– One or more CPUs, device controllers connect
through common bus providing access to shared
memory
– Concurrent execution of CPUs and devices competing
for memory cycles
Storage Structure
• Main memory – only large storage media that the CPU can access directly
– Random access
– Typically volatile
• Secondary storage – extension of main memory that provides large nonvolatile
storage capacity
• Hard disks – rigid metal or glass platters covered with magnetic recording
material
– Disk surface is logically divided into tracks, which are subdivided into
sectors
– The disk controller determines the logical interaction between the device
and the computer
• Solid-state disks – faster than hard disks, nonvolatile
– Various technologies
– Becoming more popular
Storage systems are organized in a hierarchy according to:
• Speed
• Cost
• Volatility
• I/O devices and the CPU can execute concurrently
• Device controller informs CPU that it has finished its operation by causing an interrupt.
• Interrupt transfers control to the interrupt service routine generally, through the interrupt
vector, which contains the addresses of all the service routines
• Cache memory is an extremely fast memory type that acts as a buffer between
RAM and the CPU. It holds frequently requested data and instructions so that
they are immediately available to the CPU when needed.