
Operating System Concepts

❏ Module III: Operating Systems

❏ Introduction to Operating Systems: Definition and functions of


an operating system.
❏ Types of operating systems: batch, interactive, real-time, etc.
❏ Processes and threads. Process states and life cycle,
Context switching and multitasking.
❏ Virtual memory and paging.
❏ Memory allocation methods.
❏ Memory protection and addressing techniques.
❏ Structure and organization of file systems. Disk scheduling
and storage optimization techniques.
❏ Differentiate between command-line interfaces (CLI) and
graphical user interfaces (GUI).
❏ User management and security considerations.
What is an Operating System?

• A well-organized program that controls hardware (H/W)


• A program that acts as an intermediary
between a user of a computer and the
computer hardware.
OS - Resource Manager
• OS manages files, memory, processes, handles input
and output, and controls peripheral devices like disk
drives and printers, among other things.
• It is in charge of managing the hardware (processors,
memory, I/O devices, and communication
devices)
Goals of Operating System

• Operating system goals:


– Execute user programs and make solving user
problems easier
– Make the computer system convenient to use
– Use the computer hardware in an efficient manner
• Computer System Components:
– Hardware – provides basic computing resources
• CPU, memory, I/O devices
– Operating system
• Controls and coordinates use of hardware among various applications and users
– Application programs – define the ways in which the system resources
are used to solve the computing problems of the users
• Word processors, compilers, web browsers, database systems, video games
– Users
• People, machines, other computers
Operating System Definition

• No universally accepted definition


• The one program running at all times on the
computer is the OS kernel.
• Everything else is either
– a system program (ships with the operating
system) , or
– an application program.
Operating System Functions
• File Management - Manages storage, retrieval,
and organization of files on different storage
devices -hard drives, SSDs, USB drives
– When you save a Word document or download a movie, the OS organizes
these files in folders, keeps track of their location, and makes sure they
can be opened and edited when needed.
– When you save a picture in a "Pictures" folder, the OS records the path to the
file, e.g., "C:\Users\Pictures", which tells it where to find the file when you
need it.
– If you open a document from your "Documents" folder, the OS knows
exactly which part of the hard drive to look in to retrieve that document.
– When you double-click a Word document, the OS makes sure that
Microsoft Word (or another word processor) is used to open the file,
allowing you to read or edit it.
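This path lookup can be sketched from a program's point of view in a few lines of Python (the file name and path here are hypothetical, for illustration only; every call simply asks the OS's file-management service to do the real work):

import os

# Hypothetical path; the OS, not Python, maps this name to the
# file's actual location on disk.
path = os.path.join("C:\\", "Users", "Pictures", "holiday.jpg")

if os.path.exists(path):            # the OS looks the path up in the file system
    with open(path, "rb") as f:     # the OS locates the file's blocks and opens it
        header = f.read(8)          # read the first few bytes
    print(path, "->", header)
else:
    print(path, "not found")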
Operating System Functions
• Memory Management - ensuring that each program
running has enough memory and doesn't interfere
with others. It also frees up memory when a program
is closed.
• Process Management - manages all the running
processes
• Device Management - OS manages communication
between your computer and its peripherals
– When you plug in a USB flash drive or connect a wireless mouse, the OS
installs the necessary drivers and makes the device usable almost
instantly
Operating System Functions
• User Interface - Allows users to interact with the
computer through either a GUI (icons, windows) or a CLI.
– where you can click on icons and drag files. Linux provides both a GUI and
CLI, like typing commands in the terminal to perform tasks.

• Input/Output Management - Handles input from


devices(keyboards/Mouse) and output to screens/
printers.
– When you press a key on the keyboard, the OS detects it and displays the
corresponding character on the screen in a text editor.

• System Performance Monitoring - constantly


monitors the system’s performance and provides
reports to users.
Operating System Functions
• Networking - manages network connections,
allowing computers to communicate with each other
and share resources like files and printers over local
and wide area networks.
– When you connect to Wi-Fi and browse the web, the OS manages the
connection, enabling communication between your device and remote
servers (like Google’s).
– Open Task Manager (Ctrl + Shift + Esc), then go to the Performance tab.
– At the bottom, click on Open Resource Monitor.
– In Resource Monitor, go to the Network tab to see real-time information about your network activity, including
which apps are sending/receiving data, and how much data is being transferred.
Operating System Functions
• System Performance Monitoring
– When you press Ctrl + Shift + Esc or right-click the taskbar and select Task
Manager, the OS displays a performance report. It shows:
■ CPU usage: How much of the processor's capacity is being used by
each program.
■ Memory usage: How much of the computer's RAM is in use.
■ Disk activity: How much data is being read from or written to the
disk.
■ Network usage: The current network traffic.
– This allows users to monitor system performance in real-time, identify
programs that are slowing down the computer, and close unnecessary
tasks if needed.
Operating System Functions
Memory Management

• Memory management refers to management of Primary


Memory or Main Memory
• An Operating System does the following activities for memory
management −
– Keeps track of primary memory, i.e., which parts of it are in use and by
whom, and which parts are not in use.

– In multiprogramming, the OS decides which process will get


memory when and how much.

– Allocates the memory when a process requests it to do so.

– De-allocates the memory when a process no longer needs it or has


been terminated.
Device Management

• An Operating System manages device


communication via their respective drivers.
• It does the following activities for device
management −
– Keeps track of all devices. The program responsible
for this task is known as the I/O controller.
– Decides which process gets the device, when, and
for how much time.
– Allocates the device in an efficient way.
– De-allocates devices.
File Management

• A file system is normally organized into directories for easy


navigation and usage. These directories may contain files.

• An Operating System does the following activities for file


management −

• Keeps track of information, location, uses, status etc. The


collective facilities are often known as file system.

• Decides who gets the resources.

• Allocates the resources.

• De-allocates the resources.


• Bootstrap program is loaded at power-up or reboot
– Typically stored in ROM or EPROM, generally known as
firmware
– Loaded into RAM and executed
– Initializes all aspects of system and loads operating
system kernel and starts execution

• OS can take control of the system and start managing


the computer's resources ( start providing services to
the system and its users)
• Once the system is fully booted, the system waits for some event to
occur.
• The occurrence of an event is usually signaled by an interrupt from
either the hardware or the software.
• Hardware may trigger an interrupt at any time by sending a signal to
the CPU.
• If you press a key on the keyboard (to input a text), keyboard
generates an interrupt signal ( IRQ) to the CPU to indicate that a key
has been pressed.
• The CPU stops its current task, saves its state, and transfers control to
a specific interrupt handler for keyboard input.
• The interrupt handler reads the input from the keyboard, updates the
system's input buffer, and performs any necessary tasks related to
handling keyboard input. After the interrupt handler completes its task,
the CPU restores its saved state and resumes the task it was working
on before the interrupt occurred.
• Software may trigger an interrupt by executing a special operation called
a system call.
• System calls are software interrupts that allow a program to
interact with the operating system, e.g., to perform file I/O (fopen to open a
file).
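As a sketch of the idea (not the C fopen interface itself), Python's os module exposes thin wrappers around these system calls; the file name below is illustrative:

import os

fd = os.open("demo.txt", os.O_CREAT | os.O_WRONLY)  # system call: open
os.write(fd, b"hello, kernel\n")                    # system call: write
os.close(fd)                                        # system call: close

fd = os.open("demo.txt", os.O_RDONLY)
data = os.read(fd, 100)                             # system call: read
os.close(fd)
print(data)

Each of these calls traps into the kernel, which performs the I/O on the process's behalf and returns the result.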
System View:
• OS is a resource allocator
– Distributes and Manages all system resources
– Decides between conflicting requests for efficient and
fair resource use
■ Decides which program/ task gets the CPU at any given time.
■ Manages the memory(make sure each program has the necessary
space to run efficiently).
■ Deciding where to store data and how to retrieve it when needed.
■ OS schedules tasks to maximize overall system performance.

• OS is a control program
– Controls execution of programs to prevent errors and
improper use of the computer
Types of operating systems
• A computer system can be organized in a number
of different ways.
• According to the number of general-purpose
processors used.

– Single-Processor Systems
– Multiprocessor Systems
– Clustered Systems
SINGLE-PROCESSOR SYSTEM
• These systems have only one main CPU
• Executes all tasks by this CPU.
• They handle one process at a time
– Simpler and easier to design.
• Less powerful for multitasking compared to
multi-processor systems.
• More efficient for single-threaded applications.
MULTIPROCESSOR SYSTEMS
Multiprocessor systems are computing architectures that
utilize two or more processors to perform tasks
simultaneously.
❏ Enhances performance by enabling parallel processing
❏ Improves throughput and responsiveness.
❏ Symmetric multiprocessor (SMP) systems - all processors
share the same memory and resources, each processor
performs all tasks ( Tightly Coupled Systems)
❏ Asymmetric multiprocessor systems - processors have
different roles and memory access patterns - each
processor is assigned a specific task.
❏ ( commonly used in servers, high-performance
computing, and applications requiring significant
computational power)
MULTIPROCESSOR SYSTEMS
• Parallel systems/tightly-coupled systems
– Advantages include:
1. Increased throughput
2. Economy of scale
3. Increased reliability – graceful degradation or fault tolerance
Clustered Systems

❏ Clustered systems are groups of interconnected


computers
❏ Work together to perform tasks as if they were a
single system.
❏ Each computer in the cluster(node) can operate
independently but collaborates to achieve the task
❏ Enhance performance, reliability, and availability
Clustered Systems
• Multiple systems working together
– Usually sharing storage via a storage-area network (SAN)
– Provides a high-availability service which survives failures
• Asymmetric clustering (master-slave clustering) involves nodes
with distinct roles.
● One node acts as the Master (or primary) that manages
tasks,
● Other nodes serve as slaves (or secondary) that handle
specific functions or act as backups.
● Distributed lock manager (DLM) - Ensures concurrent
access to shared resources safely to avoid conflicts or data
corruption, enhances data integrity, and maintains
performance and reliability
• Symmetric clustering (peer-based clustering) has multiple nodes.
● Treats all nodes in the cluster as equals.
● Each node has the same role, capabilities, and access to
shared resources - high-performance computing (HPC)
○ Applications must be written to use parallelization
Clustered Systems
Symmetric clustering - Characteristics
● Equal Roles: Every node can perform the same functions, and there's no designated
master or primary node.
● Shared Resources: Resources like storage and memory are shared or accessible to
all nodes uniformly.
● Load Balancing: Workloads are distributed evenly across all nodes, enhancing
performance and efficiency.
● Fault Tolerance: If one node fails, others can seamlessly take over its tasks without
significant disruption.

Advantages
● Scalability: Easily add more nodes to increase capacity without major reconfigurations.
● Flexibility: Any node can handle any task, providing versatility in operations.
● High Availability: Continuous operation is maintained as no single point of failure exists.

Disadvantages
● Complex Management: Coordinating equal nodes can be more complex, especially as the number of nodes
increases.
● Resource Contention: Shared resources may lead to contention issues if not managed properly.
Clustered Systems
Symmetric clustering - Examples

● High-Performance Computing (HPC): Scientific simulations, data analysis, and


other compute-intensive tasks.
● Web Server Farms: Distributing web traffic across multiple servers to ensure quick
response times and reliability.
● Database Clusters: Enhancing database performance and availability by
distributing queries across multiple database servers.
Clustered Systems
Asymmetric clustering - Characteristics
● Designated Roles: Clear distinction between master and slave nodes, with the master controlling the
cluster's operations.
● Hierarchical Structure: A top-down approach where the master node oversees and delegates tasks to
slave nodes.
● Resource Allocation: Resources may be allocated based on the node's role, with the master handling
management tasks and slaves handling processing or storage.
● Simplified Management: Easier to manage due to the clear separation of responsibilities.

Advantages

● Simpler Coordination: Clear roles simplify the coordination and management of tasks within the cluster.
● Efficient Resource Use: Resources can be optimized based on node roles, ensuring efficient utilization.
● Easier Maintenance: Maintenance and updates can be managed more straightforwardly by focusing on the master
node.

Disadvantages

● Single Point of Failure: If the master node fails, the entire cluster may be affected unless failover mechanisms are in
place.
● Limited Scalability: Adding more slave nodes may offer limited performance gains compared to symmetric clustering.
● Potential Bottleneck: The master node can become a performance bottleneck if it handles too many tasks.
Multiprogramming
• Multiprogramming is a technique to execute a number of
programs simultaneously on a single processor.
– In multiprogramming, a number of processes reside in main
memory; the OS picks and begins to execute one of the jobs in
the main memory. If any I/O wait happens in a process, the
CPU switches from that job to another job.
Advantages:
•Efficient memory utilization
•Throughput increases
•CPU is never idle, so performance increases.
Multiprogramming

The layout of a multiprogramming system: main memory holds the
OS plus five jobs (Job 1 … Job 5) at a time, and the CPU executes
them one by one.
Asymmetric Multiprocessing vs. Symmetric Multiprocessing

1. Asymmetric: the processors are not treated equally. Symmetric: all
processors are treated equally.
2. Asymmetric: tasks of the OS are done by the master processor.
Symmetric: tasks of the OS are done by the individual processors.
3. Asymmetric: no communication between processors, as they are
controlled by the master processor. Symmetric: all processors
communicate with one another through shared memory.
4. Asymmetric: the process scheduling approach used is master-slave.
Symmetric: processes are taken from the ready queue.
5. Asymmetric: systems are cheaper. Symmetric: systems are costlier.
6. Asymmetric: systems are easier to design. Symmetric: systems are
complex to design.
7. Asymmetric: processors can exhibit different architectures.
Symmetric: the architecture of each processor is the same.
8. Asymmetric: simple, as the master processor has access to the data,
etc. Symmetric: complex, as the processors must be synchronized to
maintain the load balance.
9. Asymmetric: if the master processor malfunctions, a slave processor
is turned into the master and continues the execution; if a slave
processor fails, other processors take over its task. Symmetric: in
case of processor failure, there is a reduction in the system's
computing capacity.
Multiprocessing vs. Multiprogramming

1. Multiprocessing: the availability of more than one processor per
system, which can execute several sets of instructions in parallel.
Multiprogramming: the concurrent execution of more than one
program held in main memory.
2. Multiprocessing: the number of CPUs is more than one.
Multiprogramming: the number of CPUs is one.
3. Multiprocessing: takes less time for job processing.
Multiprogramming: takes more time to process the jobs.
4. Multiprocessing: more than one process can be executed at a time.
Multiprogramming: one process can be executed at a time.
5. Both are considered economical.
6. Multiprocessing: the number of users can be one or more than one.
Multiprogramming: the number of users is one at a time.
7. Multiprocessing: throughput is maximum. Multiprogramming:
throughput is less.
8. Multiprocessing: efficiency is maximum. Multiprogramming:
efficiency is less.

Types of Operating System

• Batch operating system


• Time-sharing operating systems
• Distributed operating System
• Network operating system.
• Real-time system
Batch Processing
❏ A collection of similar jobs or tasks is processed in groups or
batches without manual intervention during execution.
❏ The system collects jobs or data over a period of time and
processes them all at once.
❏ This method is typically used when the tasks do not require
immediate action and can be delayed.
❏ Example: Payroll processing - At the end of the month, all
employee records are collected, and their salaries are
calculated in one batch process.
Batch Processing
• This type of operating
system does not interact
with the computer directly.
• There is an operator who
takes similar jobs having
the same requirements and
groups them into batches.
• It is the responsibility of
the operator to sort jobs
with similar needs.
• Advantages of Batch Operating System:
• The processor of a batch system knows how long a job
will take while it is in the queue
• Multiple users can share the batch systems
• The idle time for the batch system is very low
• It is easy to manage large work repeatedly in batch
systems
• Disadvantages of Batch Operating System:
• The computer operators should be familiar with
batch systems
• Batch systems are hard to debug
• It is sometimes costly
• The other jobs will have to wait for an unknown time if
any job fails.
Time Sharing Systems
• Time sharing, or multitasking, is a logical
extension of multiprogramming.
• Multiple jobs are executed by switching the CPU
between them.
• In this, the CPU time is shared by different
processes, so it is called as Time sharing
Systems.
• Time slice is defined by the OS, for sharing CPU
time between processes.
• Examples: Multics, Unix, etc.,
Time Sharing Systems
• Each task is given some time to
execute so that all the tasks work
smoothly.
• Each user gets the time of CPU as they
use a single system. These systems are
also known as Multitasking Systems.
• The time that each task gets to execute
is called quantum.
• After this time interval is over OS
switches over to the next task.
• Advantages of Time-Sharing OS:
• Each task gets an equal opportunity
• CPU idle time can be reduced.
• Minimizing response time.

• Disadvantages of Time-Sharing OS:


• One must take care of the security and integrity of user
programs and data
• Data transmission rates are very high in comparison to other
methods.
Distributed operating System
• Autonomous interconnected computers communicate with
each other using a shared communication network.
• Independent systems possess their own memory unit and CPU.
(loosely coupled systems or distributed systems).
• It is always possible that one user can access the files or software
which are not actually present on his system but some other
system connected within this network
• i.e., remote access is enabled within the devices connected in
that network.
• Processors in a distributed system may vary in size and function.
• These processors are referred to as sites, nodes, computers, and so
on.
Examples of Distributed Operating System are- LOCUS, etc.
The advantages of distributed systems are as
follows:
• Resource sharing - a user at one site may be
able to use the resources available at another.
• If one site fails in a distributed system, the remaining
sites can potentially continue operating.
• Better service to the customers.
• Reduction of the load on the host computer.
• Reduction of delays in data processing.
• Disadvantages of Distributed Operating
System:
• Failure of the main network will stop the entire
communication
• These types of systems are not readily
available as they are very expensive.
• Also, the underlying software is highly
complex and not yet well understood
Network Operating System
• These systems run on a server and provide the
capability to manage data, users, groups, security,
applications, and other networking functions.
• These types of operating systems allow shared access
of files, printers, security, applications, and other
networking functions over a small private network.
• All the users are well aware of the underlying
configuration, of all other users within the network,
and of their individual connections (tightly coupled
systems).
Examples of Network Operating System are: Microsoft Windows
Server 2003, Microsoft Windows Server 2008, UNIX, Linux, Mac OS
X, Novell NetWare, and BSD, etc.
• Advantages of Network Operating System:
• Highly stable centralized servers
• Security concerns are handled through servers
• New technologies and hardware up-gradation are
easily integrated into the system
• Server access is possible remotely from different
locations and types of systems
• Disadvantages of Network Operating
System:
• Servers are costly
• User has to depend on a central location for most
operations
• Maintenance and updates are required regularly
A real-time system
• A data processing system in which the time interval required
to process and respond to inputs is so small that it controls the
environment.
• Real-time systems are used when there are time requirements
that are very strict like missile systems, air traffic control
systems, robots, etc.
• The time taken by the system to respond to an input and
display of required updated information is termed as
the response time.
• So in this method, the response time is very short compared
to online processing.
• A real-time operating system must have well-defined, fixed
time constraints, otherwise the system will fail.
• Two types of Real-Time Operating System which are
as follows:

• Hard Real-Time Systems:


• These OSs are meant for applications where time constraints are
very strict and even the shortest possible delay is not acceptable.
• These systems are built for saving life like automatic parachutes
or airbags which are required to be readily available in case of
any accident. Virtual memory is rarely found in these systems.

• Soft Real-Time Systems:

• These OSs are for applications where the time constraint is less
strict.
• Advantages of RTOS:
• Maximum Consumption: Maximum utilization of
devices and system, thus more output from all the
resources
• Task Shifting: The time assigned for shifting tasks in
these systems is very short. For example, in older
systems, it takes about 10 microseconds to shift from one
task to another, and in the latest systems, it takes 3
microseconds.
• Focus on Application: Focus on running applications and
less importance to applications which are in the queue.
• Error Free: These types of systems are error-free.
• Memory Allocation: Memory allocation is best managed
in these types of systems.
• Disadvantages of RTOS:
• Limited Tasks: Very few tasks run at the same time, and the
system concentrates on very few applications to avoid errors.
• Use heavy system resources: Sometimes the system resources
are not so good and they are expensive as well.
• Complex Algorithms: The algorithms are very complex and
difficult for the designer to write.
• Device driver and interrupt signals: It needs specific device
drivers and interrupt signals to respond to interrupts as early as possible.
• Thread Priority: It is not good to set thread priority, as these
systems rarely switch tasks.

• Examples of Real-Time Operating Systems are: Scientific


experiments, medical imaging systems, industrial control
systems, weapon systems, robots, air traffic control systems, etc.
Processor Management
• In multiprogramming environment, the OS decides which process
gets the processor when and for how much time. This function is
called process scheduling.

• An Operating System does the following activities for processor


management −

– Keeps tracks of processor and status of process. The program


responsible for this task is known as traffic controller.

– Allocates the processor (CPU) to a process.

– De-allocates processor when a process is no longer required.


PROCESS MANAGEMENT
Process Concept
• Process – a program in execution; process execution must progress in
sequential fashion
• Multiple parts
– The program code, also called text section
– Execution context - Current activity including program counter,
processor registers,memory allocation
– Stack containing temporary data
• Function parameters, return addresses, local variables
– Data section containing global variables
– Heap containing memory dynamically allocated during run time
• Program is passive entity stored on disk (executable file), process is active
– Program becomes process when executable file loaded into memory
• Execution of program started via GUI mouse clicks, command line entry of its name, etc
• OS allocates necessary resources such as memory space, file descriptors and CPU for the
processes
Thread
• A thread is a segment of a process
– A process can have multiple threads, and these multiple
threads are contained within a process.
• A thread has the states: New, Runnable (Ready), Running,
Blocked, and Terminated.
• A thread takes less time to terminate than a process,
but unlike processes, threads are not isolated from
one another.
A music player application (runs as a single process). This application has multiple
threads to handle different tasks simultaneously.

1. Main Thread: The primary thread of the application - responsible for managing the user
interface and handling user interactions
● It receives and processes user input (clicking buttons or selecting songs).

2. Playback Thread: Playing audio files.


● When the user selects a song to play, the main thread passes the request to the
playback thread. The playback thread then loads the audio file, decodes it, and sends
the audio data to the sound card for playback.
● While playing, it continuously updates the playback status, such as the current
position and volume level.

3. GUI Thread: Handles GUI rendering.


● It updates the screen to display the current song information, playback controls, and
visualizations like album art or equalizer animations.
● The GUI thread continuously refreshes the display to provide a responsive and
interactive user interface.
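A minimal sketch of this structure in Python (the names and timings are invented; the point is that both threads live in one process and share the request queue):

import threading, time, queue

requests = queue.Queue()              # shared by all threads of the process

def playback(q):
    while True:
        song = q.get()                # the thread blocks here until work arrives
        if song is None:              # sentinel: time to terminate
            break
        print("playing", song)
        time.sleep(0.1)               # stands in for decoding/sound output

player = threading.Thread(target=playback, args=(requests,))
player.start()                        # New -> Runnable

# The main thread handles "user input" and hands songs to the playback thread.
requests.put("song1.mp3")
requests.put("song2.mp3")
requests.put(None)                    # ask the playback thread to finish
player.join()                         # wait for it to reach Terminated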
● New: The thread has been created, but it has not yet started
executing.

● Runnable: The thread is ready to run, but it may or may not be


currently executing.
○ The operating system scheduler determines when the thread
gets CPU time.

● Running: The thread is currently executing its instructions.

● Blocked: The thread is temporarily unable to run, often waiting


for a resource or input/output operation to complete.
○ The playback thread might be blocked while waiting for the
audio file to load.

● Terminated: The thread has finished its execution and will not
run again.
● Initially, all threads are in the "New" state.
● When the application starts, the main thread enters the "Runnable"
state and begins executing.
● If the user selects a song, the main thread may pass the request to
the playback thread, which transitions from "New" to "Runnable."
● The playback thread loads the audio file and enters the "Blocked"
state while waiting for the file to load.
● Once the audio file is loaded, the playback thread transitions to the
"Runnable" state, and the operating system scheduler allows it to
start executing.
● The playback thread continuously updates the playback status and
remains in the "Running" state while the song is playing.
● If the user interacts with the GUI, the GUI thread transitions from
"Runnable" to "Running" to handle the user input and update the
display.
● When the user closes the application, all threads eventually reach the
"Terminated" state.
Process vs Thread?
• The primary difference is that threads
within the same process run in a
shared memory space, while
processes run in separate memory
spaces.

• Threads are not independent of one


another like processes are, and as a
result threads share with other threads
their code section, data section, and
OS resources (like open files and
signals).
• But, like process, a thread has its own
program counter (PC), register set,
and stack space.
Advantages of Thread over Process
• Responsiveness: If the process is divided into multiple
threads, if one thread completes its execution, then its output
can be immediately returned.
• Faster context switch: Context switch time between threads is
lower compared to process context switch. Process context
switching requires more overhead from the CPU.
• Effective utilization of multiprocessor system: If we have
multiple threads in a single process, then we can schedule
multiple threads on multiple processor. This will make process
execution faster.
• Resource sharing: Resources like code, data, and files can
be shared among all threads within a process.
(The stack and registers can't be shared among threads;
each thread has its own stack and registers.)
• Communication: Communication between multiple
threads is easier, as the threads shares common address
space. while in process we have to follow some specific
communication technique for communication between two
process.
• Enhanced throughput of the system: If a process is
divided into multiple threads, and each thread function is
considered as one job, then the number of jobs completed
per unit of time is increased, thus increasing the
throughput of the system.
Process vs. Thread

1. A process is any program in execution. A thread is a segment of a
process.
2. A process takes more time to terminate; a thread takes less time.
3. A process takes more time to create; a thread takes less time.
4. A process takes more time for context switching; a thread takes
less time.
5. A process is less efficient in terms of communication; a thread is
more efficient.
6. Multiprogramming holds the concept of multiple processes. We
don't need multiple programs for multiple threads, because a single
process consists of multiple threads.
7. Processes are isolated. Threads share memory.
8. A process is called a heavyweight process. A thread is lightweight,
as each thread in a process shares code, data, and resources.
9. Process switching uses an interface in the operating system.
Thread switching does not require calling the operating system and
causing an interrupt to the kernel.
10. If one process is blocked, it does not affect the execution of other
processes. If a user-level thread is blocked, all other user-level
threads are blocked.
11. A process has its own Process Control Block, stack, and address
space. A thread has its parent's PCB, its own Thread Control Block
and stack, and a common address space.
12. Changes to the parent process do not affect child processes. Since
all threads of the same process share the address space and other
resources, any changes to the main thread may affect the behavior
of the other threads of the process.
13. A system call is involved in creating a process. No system call is
involved in creating a thread; it is created using APIs.
14. Processes do not share data with each other. Threads share data
with each other.
Process State
• As a process executes, it changes state
– new: The process is being created
– running: Instructions are being executed
– waiting: The process is waiting for some event to occur
– ready: The process is waiting to be assigned to a processor
– terminated: The process has finished execution
New: When a user launches the word processing application, a new
process is created by the OS to handle the application's execution.
Ready: The word processing process is loaded into memory and waiting for
I/P.
Running: The process is executing commands and performing operations
based on user input. (typing, formatting text, Editing or saving documents
Blocked: While the word processing process is running, it may encounter
situations where it needs to wait for certain events. If the user initiates a file
open operation and the file is large or stored on a slow storage device, the
process may be blocked while waiting for the file to load.
Terminated: When the user decides to close the word processing application
or when the task is completed, the process enters the terminated state. It
releases any system resources it was using, such as memory and file
handles, and exits
Throughout the lifecycle of the word processing process, it can transition
between these states based on user actions, system events, and the
completion of tasks.
The OS manages the process scheduling, memory allocation, and I/O
operations to ensure smooth operation of the word processing application
State Transitions
• Valid
• New to ready
• Ready to running
• Running to exit
• Running to ready
• Running to blocked
• Blocked to ready
• Ready to exit
• Blocked to exit
Process Control Block (PCB)

• A Process Control Block(PCB) is a data


structure maintained by the OS(for every
process).
• The PCB is identified by an integer process ID
(PID).
A PCB keeps all the information needed to keep
track of a process
Process Control Block (PCB)
Information associated with each process
(also called Task Control Block)
• Process state – running, waiting, etc
• Program counter – location of instruction to next
execute
• CPU registers – contents of all process-centric
registers
• CPU scheduling information- priorities,
scheduling queue pointers
• Memory-management information – memory
allocated to the process
• Accounting information – CPU used, clock time
elapsed since start, time limits
• I/O status information – I/O devices allocated to
process, list of open files
CPU Switch From Process to Process
Context Switch

• When the scheduler switches the CPU from


one executing process to execute another
process, the state from the current running
process is stored into the process control
block.
• After this, the state for the process to run next
is loaded from its own PCB and used to set the
PC, registers, etc. At that point, the second
process can start executing.
• When the process is switched, the following
information is stored for later use.
– Program Counter
– Scheduling information
– Currently used register
– Changed State
– I/O State information
– memory-management information
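A toy model of this save/restore cycle, with each PCB as a plain dictionary (real context switching happens inside the kernel on actual registers; this only illustrates the bookkeeping described above):

def context_switch(cpu, old_pcb, new_pcb):
    # 1. Save the running process's state into its PCB.
    old_pcb["pc"], old_pcb["regs"] = cpu["pc"], dict(cpu["regs"])
    old_pcb["state"] = "ready"
    # 2. Load the next process's saved state from its PCB into the CPU.
    cpu["pc"], cpu["regs"] = new_pcb["pc"], dict(new_pcb["regs"])
    new_pcb["state"] = "running"

cpu = {"pc": 104, "regs": {"r0": 7}}
pcb1 = {"pid": 1, "pc": 104, "regs": {"r0": 7}, "state": "running"}
pcb2 = {"pid": 2, "pc": 200, "regs": {"r0": 0}, "state": "ready"}
context_switch(cpu, pcb1, pcb2)
print(cpu)    # the CPU now holds process 2's program counter and registers
print(pcb1)   # process 1's state is preserved for a later switch back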
Scheduling Terminology
• CPU utilization – keep the CPU as busy as possible
• Throughput – no: of processes that complete their
execution per time unit
• Arrival Time(AT): when a process enters in a ready state
• Burst Time(BT): Time required for a process for execution
• Completion Time (CT): It is the finishing or completion
time of the process in the system
• Turnaround Time (TA): It is the time difference between
Completion Time and Arrival Time - TA = CT - AT
• Waiting Time (WT): It is the time difference between
Turnaround Time and Burst Time - WT = TA - BT
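Applying these definitions to a single process (the values are made up):

# A process arrives at t=2, needs 5 units of CPU, and finishes at t=12.
AT, BT, CT = 2, 5, 12
TA = CT - AT        # turnaround time = 12 - 2 = 10
WT = TA - BT        # waiting time   = 10 - 5 = 5
print(TA, WT)       # 10 5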
Scheduling Algorithm Optimization Criteria

• Max CPU utilization


• Max throughput
• Min turnaround time
• Min waiting time
• Min response time
Process Scheduling
• The process scheduling is the activity of the process manager
that handles the removal of the running process from the CPU
and the selection of another process on the basis of a particular
strategy.
• Process scheduling is an essential part of a Multiprogramming
operating systems.
• These OS allow more than one process to be loaded into the
executable memory at a time and the loaded process shares
the CPU using time multiplexing.

Process Queues
The Operating system manages various types of queues for each of the
process states.
• The PCB related to the process is also stored in the queue of the same
state.
• If the Process is moved from one state to another state then its PCB is
also unlinked from the corresponding queue and added to the other state
queue in which the transition is made.

• Maintains scheduling queues of processes


– Job queue – set of all processes in the system
– Ready queue – set of all processes residing in main
memory, ready and waiting to execute
– Device queues – set of processes waiting for an I/O
device
– Processes migrate among the various queues
Representation of Process Scheduling

● Queueing diagram represents queues, resources,


flows
SCHEDULERS
• Schedulers are special system software which
handle Process scheduling in various ways.
• Their main task is to select the jobs to be
submitted into the system and to decide
which process to run.
• Schedulers are of three types
– Long-Term Scheduler
– Short-Term Scheduler
– Medium-Term Scheduler
SCHEDULERS
Long-term scheduler (or job scheduler) – selects which processes should be brought
into the ready queue ( Determines which programs are admitted to the system for
processing) (Process loads into the memory for CPU scheduling.)

• Long-term scheduler is invoked infrequently (seconds, minutes) ⇒ (may be


slow)
• The long-term scheduler controls the degree of multiprogramming
• It selects processes from the queue and loads them into memory for
execution.

Processes can be described as either:


• I/O-bound process – spends more time doing I/O than computations, many
short CPU bursts
• CPU-bound process – spends more time doing computations; few very long
CPU bursts
Long-term scheduler strives for good process mix
Contd…
• Short-term scheduler (or CPU scheduler) – selects which
process should be executed next and allocates CPU
– Sometimes the only scheduler in a system
– Short-term scheduler is invoked frequently (milliseconds) ⇒
(must be fast)

• CPU scheduler selects a process among the processes that are


ready to execute and allocates CPU to one of them.

• Dispatchers make the decision of which process to execute


next. Short-term schedulers are faster than long-term
schedulers.
Dispatcher
• Dispatcher module gives control of the CPU
to the process selected by the short-term
scheduler; this involves:
– switching context
– jumping to the proper location in the user
program to restart that program
• Dispatch latency – time it takes for the
dispatcher to stop one process and start
another running
● Medium-term scheduler can be added if degree of
multiple programming needs to decrease
● Remove process from memory, store on disk, bring
back in from disk to continue execution: swapping
• A Process Scheduler schedules different
processes to be assigned to the CPU based on
particular scheduling algorithms. There are six
popular process scheduling algorithms
– First-Come, First-Served (FCFS) Scheduling
– Shortest-Job-Next (SJN) Scheduling
– Priority Scheduling
– Shortest Remaining Time
– Round Robin(RR) Scheduling
– Multiple-Level Queues Scheduling
• These algorithms are either non-preemptive or
preemptive.
• Non-preemptive algorithms -Once a process
enters the running state, it cannot be preempted
until it completes its allotted time.
• Preemptive scheduling is based on priority where
a scheduler may preempt a low priority running
process anytime when a high priority process
enters into a ready state
First Come First Serve (FCFS)
• Jobs are executed on first come, first serve
basis.
• It is a non-preemptive :once the CPU has been
allocated to a process, the process keeps the
CPU till it finishes or it requests the I/O
• Its implementation is based on FIFO queue.
• Poor in performance as average wait time is
high.
First- Come, First-Served (FCFS) Scheduling
Process   Burst Time
P1        24
P2        3
P3        3

• Suppose that the processes arrive in the order: P1, P2, P3
• The Gantt chart for the schedule is:

| P1 (0-24) | P2 (24-27) | P3 (27-30) |

• Waiting time for P1 = 0; P2 = 24; P3 = 27
• Average waiting time: (0 + 24 + 27)/3 = 17
FCFS Scheduling (Cont.)

Suppose that the processes arrive in the order: P2, P3, P1
● The Gantt chart for the schedule is:

| P2 (0-3) | P3 (3-6) | P1 (6-30) |

● Waiting time for P1 = 6; P2 = 0; P3 = 3
● Average waiting time: (6 + 0 + 3)/3 = 3
● Much better than previous case
● Convoy effect - short process behind long process
● Consider one CPU-bound and many I/O-bound processes
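The two cases above can be checked with a small FCFS calculator (a sketch; all jobs are assumed to arrive at time 0 in the order given):

def fcfs(jobs):
    time, waits = 0, {}
    for name, burst in jobs:            # serve strictly in arrival order
        waits[name] = time              # a job waits for all earlier jobs
        time += burst
    return waits, sum(waits.values()) / len(waits)

print(fcfs([("P1", 24), ("P2", 3), ("P3", 3)]))   # waits 0/24/27, average 17.0
print(fcfs([("P2", 3), ("P3", 3), ("P1", 24)]))   # waits 0/3/6,   average 3.0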
Shortest-Job-First (SJF) Scheduling
❏ Associate with each process the length of its next CPU
burst.
❏ Use these lengths to schedule the process with the
shortest time.
❏ Two variants
❏ Non preemptive: Once CPU given to the process it cannot be
preempted until completes its CPU burst.
❏ Preemptive : If a new process arrives with CPU burst length
less than remaining time of current executing process,
preempt the executing process. This scheme is known as the
Shortest-Remaining-Time-First (SRTF).
❏ SJF is optimal
❏ Gives minimum average waiting time for a given set of
processes
• It is practically infeasible, as the Operating System may not
know burst times and therefore may not sort them. While
it is not possible to predict execution time, several
methods can be used to estimate the execution time for a
job, such as a weighted average of previous execution
times.
• SJF can be used in specialized environments where
accurate estimates of running time are available.
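One such estimate is the exponential (weighted) average of previous bursts, tau(n+1) = alpha * t(n) + (1 - alpha) * tau(n); the initial guess, alpha, and burst history below are made up for the sketch:

def predict_burst(measured, tau0=10.0, alpha=0.5):
    tau = tau0                       # initial prediction
    for t in measured:               # fold in each observed burst length
        tau = alpha * t + (1 - alpha) * tau
    return tau

print(predict_burst([6, 4, 6, 4]))   # prediction for the next CPU burst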
Example of SJF
Process   Arrival Time   Burst Time
P1        0.0            6
P2        2.0            8
P3        4.0            7
P4        5.0            3

• SJF scheduling chart:

| P4 (0-3) | P1 (3-9) | P3 (9-16) | P2 (16-24) |

• Average waiting time = (3 + 16 + 9 + 0) / 4 = 7


Example of Shortest-remaining-time-first

• Now we add the concepts of varying arrival times and preemption to the analysis

Process   Arrival Time   Burst Time
P1        0              8
P2        1              4
P3        2              9
P4        3              5

• Preemptive SJF Gantt chart:

| P1 (0-1) | P2 (1-5) | P4 (5-10) | P1 (10-17) | P3 (17-26) |

• Average waiting time = [(10-1) + (1-1) + (17-2) + (5-3)]/4 = 26/4 = 6.5 msec


Priority Scheduling
• A priority number (integer) is associated with each process
• The CPU is allocated to the process with the highest priority
(smallest integer ≡ highest priority)
– Preemptive
– Nonpreemptive
• SJF is priority scheduling where priority is the inverse of
predicted next CPU burst time
• Problem ≡ Starvation – low priority processes may never
execute
• Solution ≡ Aging – as time progresses increase the priority of
the process
Example of Priority Scheduling
Process   Burst Time   Priority
P1        10           3
P2        1            1
P3        2            4
P4        1            5
P5        5            2

• Priority scheduling Gantt chart:

| P2 (0-1) | P5 (1-6) | P1 (6-16) | P3 (16-18) | P4 (18-19) |

• Average waiting time = (6 + 0 + 16 + 18 + 1)/5 = 8.2 msec


Round Robin (RR)
• Each process gets a small unit of CPU time (time quantum q),
usually 10-100 milliseconds. After this time has elapsed, the
process is preempted and added to the end of the ready
queue.
• If there are n processes in the ready queue and the time
quantum is q, then each process gets 1/n of the CPU time in
chunks of at most q time units at once. No process waits
more than (n-1)q time units.
• Timer interrupts every quantum to schedule next process
• Performance
– q large ⇒ FIFO
– q small ⇒ q must be large with respect to context switch,
otherwise overhead is too high
Example of RR with Time Quantum = 4

Process   Burst Time
P1        24
P2        3
P3        3

• The Gantt chart is:

| P1 (0-4) | P2 (4-7) | P3 (7-10) | P1 (10-14) | P1 (14-18) | P1 (18-22) | P1 (22-26) | P1 (26-30) |

• Typically, higher average turnaround than SJF, but better response


• q should be large compared to context switch time
• q usually 10ms to 100ms, context switch < 10 usec
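The Gantt chart above can be reproduced with a short round-robin simulation (a sketch; all processes are assumed to arrive at time 0):

from collections import deque

def round_robin(jobs, q=4):
    ready = deque(jobs)                      # (name, remaining burst)
    time, slices = 0, []
    while ready:
        name, rem = ready.popleft()
        run = min(q, rem)                    # run for at most one quantum
        slices.append((name, time, time + run))
        time += run
        if rem > run:
            ready.append((name, rem - run))  # preempted: back of the queue
    return slices

print(round_robin([("P1", 24), ("P2", 3), ("P3", 3)]))
# P1 0-4, P2 4-7, P3 7-10, then P1 10-14, 14-18, 18-22, 22-26, 26-30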
Time Quantum and Context Switch Time
Turnaround Time Varies With the Time Quantum

• Rule of thumb: 80% of CPU bursts should be shorter than q
Multiple-Level Queues Scheduling

• Multiple-level queues are not an independent scheduling


algorithm.
• They make use of other existing algorithms to group and
schedule jobs with common characteristics.
• Multiple queues are maintained for processes with common
characteristics.
• Each queue can have its own scheduling algorithms.
• Priorities are assigned to each queue.
• For example,
• CPU-bound jobs can be scheduled in one queue and
• All I/O-bound jobs in another queue. The Process Scheduler
then alternately selects jobs from each queue and assigns them
to the CPU based on the algorithm assigned to the queue.
Multilevel Queue
• Ready queue is partitioned into separate queues, eg:
– foreground (interactive)
– background (batch)
• Process permanently in a given queue
• Each queue has its own scheduling algorithm:
– foreground – RR
– background – FCFS
• Scheduling must be done between the queues:
– Fixed priority scheduling; (i.e., serve all from foreground then
from background). Possibility of starvation.
– Time slice – each queue gets a certain amount of CPU time
which it can schedule amongst its processes; i.e., 80% to
foreground in RR
– 20% to background in FCFS
Multilevel Queue Scheduling
• System Processes: The CPU itself has its own process to run
which is generally termed as System Process.

• Interactive Processes: An Interactive Process is a type of
process that requires interaction with the user.

• Batch Processes: Batch processing is generally a technique in


the Operating system that collects the programs and data
together in the form of the batch before the processing starts
Example Problem
• Consider below table of four processes under Multilevel queue
scheduling. Queue number denotes the queue of the process.

• Priority of queue 1 is greater than queue 2. queue 1 uses


Round Robin (Time Quantum = 2) and queue 2 uses FCFS.
Working:
• At the start, both queues have processes, so the processes in queue 1
(P1, P2) run first (because of their higher priority) in round-robin
fashion and complete after 7 units.
• Then the process in queue 2 (P3) starts running (as there is no
process in queue 1), but while it is running P4 arrives in queue 1,
interrupts P3, and runs for 5 seconds.
• After its completion, P3 takes the CPU and completes its
execution.
Multilevel Feedback Queue

• Multi level Feedback Queue Scheduling: It


allows the process to move in between
queues.
• The idea is to separate processes according to
the characteristics of their CPU bursts. If a
process uses too much CPU time, it is moved
to a lower-priority queue.
Multilevel Feedback Queue
• A process can move between the various queues;
aging can be implemented this way.
• In a multilevel queue-scheduling algorithm,
processes are permanently assigned to a
queue on entry to the system and processes
are not allowed to move between queues.
• As the processes are permanently assigned
to the queue, this setup has the advantage of
low scheduling overhead,
• But on the other hand disadvantage of being
inflexible.
Example of Multilevel Feedback Queue

• Three queues:
– Q0 – RR with time quantum 8 milliseconds
– Q1 – RR time quantum 16 milliseconds
– Q2 – FCFS

• Scheduling
– A new job enters queue Q0 which is served FCFS
• When it gains CPU, job receives 8 milliseconds
• If it does not finish in 8 milliseconds, job is moved to queue Q1
– At Q1 job is again served FCFS and receives 16 additional
milliseconds
• If it still does not complete, it is preempted and moved to queue Q2
Multilevel Feedback Queues
Example:
• Consider a system that has a CPU-bound process, which requires a
burst time of 40 seconds. The multilevel Feed Back Queue
scheduling algorithm is used and the queue time quantum ‘2’
seconds and in each level it is incremented by ‘5’ seconds. Then
how many times the process will be interrupted and in which queue
the process will terminate the execution?
• Solution:
• Process P needs 40 Seconds for total execution.
• At Queue 1 it is executed for 2 seconds and then interrupted and
shifted to queue 2.
• At Queue 2 it is executed for 7 seconds and then interrupted and
shifted to queue 3.
• At Queue 3 it is executed for 12 seconds and then interrupted
and shifted to queue 4.
• At Queue 4 it is executed for 17 seconds and then interrupted
and shifted to queue 5.
• At Queue 5 it executes for 2 seconds and then it completes.
• Hence the process is interrupted 4 times and completes its
execution in the fifth queue.
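The worked example can be verified with a few lines of Python:

burst, quantum, level, interruptions = 40, 2, 1, 0
while burst > 0:
    burst -= min(quantum, burst)     # run for one quantum (or less, if finishing)
    if burst > 0:                    # not done: interrupted and demoted
        interruptions += 1
        level += 1
        quantum += 5                 # each level's quantum grows by 5 seconds
print(level, interruptions)          # 5 4: finishes in queue 5 after 4 interruptions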
Process Synchronization
• Process Synchronization is a way to coordinate
processes that use shared data. It occurs in an
operating system among cooperating processes.
• Cooperating processes are processes that share
resources.
• While executing many concurrent processes,
process synchronization helps to maintain data
consistency and cooperating process execution.
Race Condition

• Processes have to be scheduled to ensure that


concurrent access to shared data does not create
inconsistencies. Data inconsistency can result in what is
called a race condition
• When more than one process is executing the same code,
or accessing the same memory or a shared variable, there is
a possibility that the output or the value of the shared
variable is wrong. All the processes are racing to claim that
their output is correct; this condition is known as a race
condition.
• Race condition: the situation where several
processes access and manipulate shared data
concurrently; the final value of the shared data
depends upon which process finishes last.
• It is a competition among processes to enter the
critical section.
Consider two threads, Thread A and Thread B, and a shared variable called `balance`.

from threading import Thread

balance = 100

def withdraw(amount):
    global balance
    if balance >= amount:    # check ...
        balance -= amount    # ... then update: the two steps are not atomic

# Create two threads
thread_A = Thread(target=withdraw, args=(50,))
thread_B = Thread(target=withdraw, args=(70,))

# Start the threads
thread_A.start()
thread_B.start()

# Wait for both threads to finish
thread_A.join()
thread_B.join()

# Print the final balance
print(balance)
`thread_A` and `thread_B` execute the `withdraw` function, which checks if the
`balance` is sufficient to withdraw the specified `amount` and updates the `balance`
accordingly.

The initial `balance` is set to 100. `thread_A` attempts to withdraw 50 units, and
`thread_B` attempts to withdraw 70 units.

if `thread_A` executes first, the final balance will be 50. Conversely, if `thread_B`
executes first, the final balance will be 30.
If the threads are interleaved, the final balance will depend on the order and timing
of their operations. For instance, if both threads check the balance before either
updates it, both checks see 100 and both withdrawals proceed, leaving a final
balance of -20 (an overdraft).

if `thread_A` and `thread_B` both read the initial balance of 100 simultaneously,
they may proceed to withdraw their amounts without considering each other's
changes. As a result, both threads may update the balance to negative values,
leading to an inconsistent result.

To prevent this, the threads must synchronize their access to the shared variable.


• The order of execution of instruction
influences the result produced .
• To prevent race condition ,concurrent process
must be synchronized
• On the basis of synchronization, processes are
categorized as one of the following two types:
• Independent Process: Execution of one
process does not affect the execution of other
processes.
• Cooperative Process (also called coordinating,
dependent, or communicating process):
Execution of one process affects the execution
of other processes.
Producer-Consumer problem
• The Producer-Consumer problem is a classical multi-
process synchronization problem.
• In the producer-consumer problem there is one Producer
that is producing some items, whereas
there is one Consumer that is consuming the items
produced by the Producer.
• The same memory buffer is shared by both producers
and consumers which is of fixed-size.
• The task of the Producer is to produce the item, put it
into the memory buffer, and again start producing
items.
• Whereas the task of the Consumer is to consume the
item from the memory buffer.
Contd…
• The producer should produce data only when the
buffer is not full. In case it is found that the buffer is
full, the producer is not allowed to store any data into
the memory buffer.
• Data can only be consumed by the consumer if and
only if the memory buffer is not empty. In case it is
found that the buffer is empty, the consumer is not
allowed to use any data from the memory buffer.
• Accessing memory buffer should not be allowed to
producer and consumer at the same time.
Critical Section

• A critical section is a code segment that


can be accessed by only one process at a
time.
• The critical section contains shared
variables that need to be synchronized to
maintain the consistency of data variables.
• Each process must request permission to enter its
critical section. The section of code implementing this
is called entry section.
• The entry to the critical section is handled by
wait() / P() (from the Dutch proberen, "to try").
• The exit from a critical section is controlled by
signal() / V() (from the Dutch verhogen, "to increase").
• Only one process can be executed inside the critical
section at a time. Other processes waiting to
execute their critical sections have to wait until the
current process finishes executing its critical section.
• Any solution to the critical section problem
must satisfy three requirements:
• Mutual Exclusion
• Progress
• Bounded Waiting
• Mutual Exclusion : If a process is executing in
its critical section, then no other process is
allowed to execute in the critical section.
• No two process simultaneously present
inside the critical section
• Progress: if a process does not need to
execute in the critical section, it should
not stop other processes from getting
into the critical section.
• Bounded Waiting : A bound must exist on the
number of times that other processes are
allowed to enter their critical sections after a
process has made a request to enter its critical
section and before that request is granted
Mutex locks.

• Is a software tool to solve Critical section problem

• A process must acquire a lock before entering a critical section, and it releases the
lock when it exits the critical section.

• The acquire() acquires the lock, and release() releases the lock.

• A mutex lock has a boolean variable available whose value indicates if the lock
is available or not.
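Revisiting the earlier withdraw race condition, here is a minimal sketch of a mutex fix using Python's threading.Lock: acquire() is the entry section, release() the exit section, and the check-and-update pair becomes one indivisible critical section.

from threading import Thread, Lock

balance = 100
lock = Lock()                        # the mutex protecting the shared balance

def withdraw(amount):
    global balance
    lock.acquire()                   # entry section
    try:
        if balance >= amount:        # critical section: check and update
            balance -= amount        # now execute as one unit
    finally:
        lock.release()               # exit section

threads = [Thread(target=withdraw, args=(50,)),
           Thread(target=withdraw, args=(70,))]
for t in threads: t.start()
for t in threads: t.join()
print(balance)                       # always 50 or 30, never -20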
Semaphore
• Synchronization tool to control access to shared resources by multiple processes or
threads in a concurrent system, that does not require busy waiting
• Semaphore S – integer variable
• Two standard operations modify S: wait() and signal() (called P() and V())
– When a process wants to access a shared resource, it must perform a wait operation
on the semaphore.
– If the semaphore's value is greater than zero indicating that there are resources
available, the process decrements the semaphore's value and proceeds.
– If the semaphore's value is zero indicating that all resources are currently in use ,
the process may have to wait until a resource becomes available.
– When a process finishes using a shared resource, it must perform a signal operation
on the semaphore and increments the semaphore's value, signals that a resource
has been released and is now available for use by another process.
wait(S) {
    while (S <= 0)
        ;   // no-op (busy wait)
    S--;
}

signal(S) {
    S++;
}
Semaphore
There are two types of semaphores :
1)Binary Semaphores
2)Counting Semaphores
Binary Semaphores: They can only be either 0 or 1.
– They are also known as mutex locks, as the locks can provide mutual
exclusion.
– All the processes can share the same mutex semaphore that is
initialized to 1.
– A process has to wait until the semaphore's value becomes 1.
– Then, the process sets the mutex semaphore to 0 and starts its
critical section.
– When it completes its critical section, it resets the value of the mutex
semaphore to 1, and some other process can enter its critical
section.
• Counting Semaphores : They can take any value and are not restricted to the domain {0, 1}.
• They can be used to control access to a resource that has a limit on the number of simultaneous accesses.
• The semaphore is initialized to the number of instances of the resource.
• Whenever a process wants to use the resource, it checks whether the number of remaining instances is greater than zero, i.e., whether an instance is available.
• The process then enters its critical section, decreasing the value of the counting semaphore by 1. When the process is finished with the instance of the resource, it leaves the critical section, adding 1 to the number of available instances (see the sketch below).
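As a sketch of this idea (my illustration; the resource with three instances is hypothetical), a counting semaphore initialized to 3 lets at most three of five threads hold an instance at once:

#include <stdio.h>
#include <unistd.h>
#include <pthread.h>
#include <semaphore.h>

#define INSTANCES 3   /* hypothetical resource with 3 identical instances */

sem_t pool;           /* counting semaphore, initialized to INSTANCES */

void *user(void *arg) {
    sem_wait(&pool);                          /* blocks once all 3 instances are taken */
    printf("thread %ld got an instance\n", (long)arg);
    sleep(1);                                 /* simulate using the resource */
    sem_post(&pool);                          /* return the instance to the pool */
    return NULL;
}

int main(void) {
    pthread_t t[5];
    sem_init(&pool, 0, INSTANCES);
    for (long i = 0; i < 5; i++)
        pthread_create(&t[i], NULL, user, (void *)i);
    for (int i = 0; i < 5; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&pool);
    return 0;
}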
• A counting semaphore S is initialized to 10.
• Then, 6 P operations and 4 V operations are performed on S.
• What is the final value of S?
A P (wait) operation decrements the value of the semaphore variable by 1.
A V (signal) operation increments the value of the semaphore variable by 1.
Thus, the final value of semaphore variable S
= 10 – (6 × 1) + (4 × 1)
= 10 – 6 + 4 = 8

• A counting semaphore S is initialized to 7.
• Then, 20 P operations and 15 V operations are performed on S.
• What is the final value of S?
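Applying the same rule as in the previous example: final value of S = 7 – (20 × 1) + (15 × 1) = 2.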
• A shared variable x, initialized to zero, is operated on by four concurrent processes W, X, Y, Z as follows.
• Processes W and X read x from memory, increment it by one, store it to memory, and then terminate.
• Processes Y and Z read x from memory, decrement it by two, store it to memory, and then terminate.
• Each process invokes the P operation on a counting semaphore S before reading x, and invokes the V operation on S after storing x to memory.
• Semaphore S is initialized to two.
• What is the maximum possible value of x after all processes complete execution?
Processes can interleave in many ways; below is one interleaving in which x attains its maximum value.
Initially x = 0 and semaphore S is initialized to 2.
Process W performs P (S = 1) and reads x = 0; it computes x = 1 but does not yet store it to memory.
Process Y performs P (S = 0), reads x = 0, decrements it by two, stores x = -2, and signals (S = 1).
Process Z performs P (S = 0), reads x = -2, decrements it by two, stores x = -4, and signals (S = 1).
Now process W stores its computed value, x = 1, and signals (S = 2).
Finally, process X performs P, reads x = 1, increments it, and stores x = 2.
Hence the maximum possible value of x is 2.
Disadvantages of Semaphores
Deadlocks and Starvation
• The implementation of a semaphore with a
waiting queue may result in a situation where
two or more processes are waiting indefinitely
for an event that can be caused only by one of
the waiting processes.
• The event in question is the execution of a
signal operation. When such a state is reached,
these processes are said to be deadlocked.
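A standard illustration of such a deadlock (a sketch, not taken from the slides above): let S and Q be two semaphores, each initialized to 1.

Process P0:             Process P1:
    wait(S);                wait(Q);
    wait(Q);                wait(S);
    ...                     ...
    signal(S);              signal(Q);
    signal(Q);              signal(S);

If P0 executes wait(S) and P1 then executes wait(Q), P0 blocks on wait(Q) and P1 blocks on wait(S); each now waits forever for a signal that only the other (blocked) process can perform.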
• Write a monitor that implements an alarm clock that enables a calling program to delay itself for a specified number of time units (ticks). You may assume the existence of a real hardware clock that invokes a procedure tick in your monitor at regular intervals.
Solution -

monitor alarm
{
    condition c;

    void delay(int ticks)
    {
        int begin_time = read_clock();   /* record when the delay started */
        while (read_clock() < begin_time + ticks)
            c.wait();                    /* sleep until the next tick wakes us */
    }

    void tick()                          /* invoked by the hardware clock */
    {
        c.broadcast();                   /* wake all delayed processes */
    }
}
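A note on the design (an observation, not stated in the slides): tick() uses c.broadcast() rather than c.signal() because several processes may be delayed with different wake-up times. Each tick wakes every delayed process, each one re-evaluates its own while condition, and those whose delay has not yet elapsed simply call c.wait() again.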
Suppose we want to synchronize two concurrent processes A and B to display 110011001100... The code for A and B is shown below.

Process A:
while (1) {
    W:
    print '1';
    print '1';
    X:
}

Process B:
while (1) {
    Y:
    print '0';
    print '0';
    Z:
}

Semaphore calls can be inserted only at points W, X, Y and Z.

What is the minimum number of semaphore variables needed, and what should be their initial values?
Insert synchronization statements (one each) at points W, X, Y and Z to achieve the above output. Write the new code in such a manner that it ensures mutual exclusion and prevents starvation, deadlocks and race conditions.
Two binary semaphore variables (S and T) are required. S should be initialized to 1, while T should be initialized to 0.
The new code is as follows:

Process A:
while (1) {
    wait (S);      // point W
    print '1';
    print '1';
    signal (T);    // point X
}

Process B:
while (1) {
    wait (T);      // point Y
    print '0';
    print '0';
    signal (S);    // point Z
}
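For reference, a runnable C translation of this answer using POSIX semaphores (my illustration, not part of the original solution; the infinite loops are bounded to five rounds so the program terminates). Compile with -pthread:

#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

sem_t S, T;   /* S starts at 1 so A prints first; T starts at 0 */

void *A(void *arg) {
    for (int i = 0; i < 5; i++) {
        sem_wait(&S);     /* point W */
        printf("11");
        sem_post(&T);     /* point X: hand the turn to B */
    }
    return NULL;
}

void *B(void *arg) {
    for (int i = 0; i < 5; i++) {
        sem_wait(&T);     /* point Y */
        printf("00");
        sem_post(&S);     /* point Z: hand the turn back to A */
    }
    return NULL;
}

int main(void) {
    pthread_t ta, tb;
    sem_init(&S, 0, 1);
    sem_init(&T, 0, 0);
    pthread_create(&ta, NULL, A, NULL);
    pthread_create(&tb, NULL, B, NULL);
    pthread_join(ta, NULL);
    pthread_join(tb, NULL);
    printf("\n");   /* output: 11001100110011001100 */
    return 0;
}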
Consider the methods used by processes P1 and P2 for accessing their critical sections whenever needed, as given below. The initial values of the shared Boolean variables S1 and S2 are randomly assigned.

Method used by P1:
while (S1 == S2) ;
    Critical Section
S1 = S2;

Method used by P2:
while (S1 != S2) ;
    Critical Section
S2 = not (S1);

Check whether all the critical-section properties are satisfied, and justify your answer.
Principle of Mutual Exclusion: No two processes may be simultaneously present in the critical section at the same time; if one process is present in the critical section, the other must not be allowed in.
P1 can enter its critical section only if S1 is not equal to S2, and P2 can enter its critical section only if S1 is equal to S2. These two conditions cannot hold at the same time, so Mutual Exclusion is satisfied.
Progress: No process running outside the critical section should block another interested process from entering the critical section whenever it is free.
Suppose P1, after executing its critical section, wants to execute it again, while P2 does not want to enter the critical section at all. In that case P1 has to wait unnecessarily for P2. Hence Progress is not satisfied.
Computer System Organization
• Computer-system operation
– One or more CPUs, device controllers connect
through common bus providing access to shared
memory
– Concurrent execution of CPUs and devices competing
for memory cycles
Storage Structure
• Main memory – the only large storage medium that the CPU can access directly
– Random access
– Typically volatile
• Secondary storage – extension of main memory that provides large nonvolatile
storage capacity
• Hard disks – rigid metal or glass platters covered with magnetic recording
material
– Disk surface is logically divided into tracks, which are subdivided into
sectors
– The disk controller determines the logical interaction between the device
and the computer
• Solid-state disks – faster than hard disks, nonvolatile
– Various technologies
– Becoming more popular
Storage systems are organized in a hierarchy by:
• Speed
• Cost
• Volatility
• I/O devices and the CPU can execute concurrently
• A device controller is in charge of each device type
• Each device controller has a local buffer
• The CPU moves data from/to main memory to/from the local buffers
• I/O is from the device to the local buffer of the controller
• The device controller informs the CPU that it has finished its operation by causing an interrupt
• An interrupt transfers control to the interrupt service routine, generally through the interrupt vector, which contains the addresses of all the service routines
• The interrupt architecture must save the address of the interrupted instruction
• A trap or exception is a software-generated interrupt caused either by an error or by a user request
• An operating system is interrupt driven


Caching
• An important principle, performed at many levels in a computer (in hardware, in the operating system, and in software)
• Cache memory is an extremely fast memory type that acts as a buffer between RAM and the CPU. It holds frequently requested data and instructions so that they are immediately available to the CPU when needed.
• Information in use is copied from slower to faster storage temporarily
• The faster storage (cache) is checked first to determine if the information is there
– If it is, the information is used directly from the cache (fast)
– If not, the data is copied to the cache and used there
• The cache is smaller than the storage being cached
– Cache management is an important design problem
– Cache size and replacement policy
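To make the check-the-fast-storage-first idea concrete, here is a small runnable C sketch of a direct-mapped software cache in front of a slow backing store (entirely illustrative; real CPU caches implement this in hardware with cache lines, tags, and more elaborate replacement policies):

#include <stdio.h>
#include <stdbool.h>

#define CACHE_SLOTS 8          /* cache is much smaller than the backing store */

int backing_store[64];         /* stands in for the slow storage being cached */

struct slot { bool valid; int key; int value; };
struct slot cache[CACHE_SLOTS];

/* Check the fast cache first; on a miss, copy from slow storage into the cache. */
int cached_read(int key) {
    struct slot *s = &cache[key % CACHE_SLOTS];   /* direct-mapped placement */
    if (s->valid && s->key == key)
        return s->value;       /* hit: served directly from the cache */
    s->valid = true;           /* miss: evict whatever occupied this slot */
    s->key = key;
    s->value = backing_store[key];
    return s->value;
}

int main(void) {
    for (int i = 0; i < 64; i++)
        backing_store[i] = i * i;   /* populate the slow store */
    printf("%d %d %d\n", cached_read(5), cached_read(5), cached_read(13));
    return 0;
}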
