
Chapter: Threads

· Overview
· Benefits of Threads vs Processes
· Single and Multithreaded Processes
· Types of Threads
· Multithreading Models
· Thread Libraries
Process Characteristics
The concept of a process has two facets.
▪ A process is:
• A unit of resource ownership:
▪ a virtual address space that holds the process image
▪ control of some resources (files, I/O devices, ...)
• A unit of execution: the process is an execution path
through one or more programs
▪ execution may be interleaved with other processes
▪ it has an execution state (Ready, Running, Blocked, ...)
and a dispatching priority
These two characteristics are treated separately:
• The unit of resource ownership is usually referred to as a
process or task
• The unit of execution is usually referred to as a thread or
a “lightweight process”
Process and Thread
(figure: a single-threaded process vs. a multithreaded process)
Threads
▪ A thread is a single sequential flow of execution of the tasks of a process
▪ A thread (or lightweight process) is a basic unit of CPU utilization;
it consists of:
▪ program counter
▪ thread id
▪ register set
▪ stack space
▪ A thread shares with its peer threads its:
▪ code section
▪ data section
▪ operating-system resources (a minimal Pthreads example follows)
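
A minimal sketch in C, assuming POSIX Pthreads as the thread library, to
illustrate the point above: each thread has its own id, registers and stack
(the local variable), while the code and data sections are shared.

#include <pthread.h>
#include <stdio.h>

static int shared_value = 42;                    /* data section: shared by all threads */

static void *thread_body(void *arg) {
    long id = (long)arg;                         /* local variable: on this thread's own stack */
    printf("thread %ld sees shared_value = %d\n", id, shared_value);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;                            /* each gets its own id, registers and stack */
    pthread_create(&t1, NULL, thread_body, (void *)1L);
    pthread_create(&t2, NULL, thread_body, (void *)2L);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}

Both threads print the same shared_value because it lives in the shared data
section; their id variables are private because each thread has its own stack.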
Single and Multithreaded Processes
Threads contd.
• In a multithreaded task, while one thread is blocked and waiting,
a second thread in the same task can run (sketched below).
• In a web browser, one thread can display images, another render
text, and another fetch data from the network.
• In a word processor, separate threads can handle graphics, respond
to keystrokes, and run the spell checker.
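
A hedged sketch of the browser example, again assuming Pthreads: sleep()
stands in for a blocking network fetch, and the rendering thread keeps
running while the fetch thread is blocked.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *fetch_data(void *arg) {
    printf("fetch thread: waiting on the network...\n");
    sleep(2);                                    /* blocks only this thread */
    printf("fetch thread: data arrived\n");
    return NULL;
}

static void *render_text(void *arg) {
    for (int i = 0; i < 4; i++) {                /* keeps running while the fetch blocks */
        printf("render thread: drawing section %d\n", i);
        usleep(300000);
    }
    return NULL;
}

int main(void) {
    pthread_t fetcher, renderer;
    pthread_create(&fetcher, NULL, fetch_data, NULL);
    pthread_create(&renderer, NULL, render_text, NULL);
    pthread_join(fetcher, NULL);
    pthread_join(renderer, NULL);
    return 0;
}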
Multithreading and Single-threading
▪ Multithreading: the OS supports multiple threads of execution
within a single process
▪ Single-threading: the OS does not recognize the separate concept
of a thread
• MS-DOS supports a single user process and a single thread
• Traditional UNIX supports multiple user processes but only one
thread per process
• Solaris and Windows 2000 support multiple threads
In a Multithreaded Environment
▪ The process has:
▪ a virtual address space which holds the process image
▪ protected access to processors, other processes (inter-process
communication), files, and other I/O resources
▪ Each thread has:
▪ an execution state (Running, Ready, etc.)
▪ a saved thread context (e.g. the program counter) when not running
▪ private storage for local variables and an execution stack
▪ shared access to the address space and resources (files, etc.)
of its process
• when one thread alters (non-private) data, all other threads
of the process can see this
• threads communicate via shared variables (sketched below)
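
A minimal sketch (Pthreads assumed) of communication via shared variables:
both threads update the same counter in the shared data section, with a
mutex serializing access.

#include <pthread.h>
#include <stdio.h>

static long counter = 0;                         /* shared: lives in the data section */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);
        counter++;                               /* update is visible to every peer thread */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);          /* 200000: both threads wrote the same variable */
    return 0;
}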
Single-Threaded and Multithreaded Process Models
▪ The Thread Control Block contains a register image, the thread
priority, and thread state information
Benefits of Threads vs Processes
▪ Less time to create a new thread than a process, because the newly
created thread uses the current process's address space (a rough
timing sketch follows this list)
▪ Less time to terminate a thread than a process
▪ Less time to switch between two threads of the same process,
again because they share the current process's address space
▪ Less communication overhead:
▪ threads share everything, including the address space
▪ so data produced by one thread is immediately available to
all the other threads
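
A rough, hedged sketch of the first benefit, comparing the cost of fork()
against pthread_create() on a POSIX system. Absolute numbers vary widely by
machine and OS; this only illustrates the direction of the claim.

#include <pthread.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>

static void *noop(void *arg) { return NULL; }    /* thread body that does nothing */

static double elapsed_ms(struct timespec a, struct timespec b) {
    return (b.tv_sec - a.tv_sec) * 1e3 + (b.tv_nsec - a.tv_nsec) / 1e6;
}

int main(void) {
    struct timespec t0, t1;

    /* 100 process creations: fork() sets up a new process image, then wait for each child. */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < 100; i++) {
        pid_t pid = fork();
        if (pid == 0) _exit(0);                  /* child exits immediately */
        waitpid(pid, NULL, 0);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("100 x fork()+waitpid():        %.2f ms\n", elapsed_ms(t0, t1));

    /* 100 thread creations: the new threads reuse the existing address space. */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < 100; i++) {
        pthread_t t;
        pthread_create(&t, NULL, noop, NULL);
        pthread_join(t, NULL);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("100 x pthread_create()+join(): %.2f ms\n", elapsed_ms(t0, t1));
    return 0;
}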
Application benefits of threads
▪ Consider an application that consists of several independent
parts that do not need to run in sequence
▪ Each part can be implemented as a thread
▪ Whenever one thread is blocked waiting for I/O, execution
could switch to another thread of the same application (instead
of switching to another process)
▪ Example 1: File Server on a LAN
• Needs to handle many file requests over a short period
• Threads can be created (and later destroyed) for each
request
• If multiple processors: different threads could execute
simultaneously on different processors
▪ Example 2: Spreadsheet on a single processor machine:
• One thread displays menu and reads user input while the
other executes the commands and updates display
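
A hedged sketch of Example 1 (a thread per request), assuming Pthreads;
handle_request() and struct request are illustrative placeholders rather
than any real server API.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct request { int id; };                      /* stand-in for one file request */

static void *handle_request(void *arg) {
    struct request *req = arg;
    printf("worker thread serving request %d\n", req->id);  /* real file I/O goes here */
    free(req);
    return NULL;                                 /* worker thread is destroyed here */
}

int main(void) {
    for (int i = 0; i < 5; i++) {                /* stand-in for the request-accept loop */
        struct request *req = malloc(sizeof *req);
        req->id = i;

        pthread_t worker;
        pthread_create(&worker, NULL, handle_request, req);
        pthread_detach(worker);                  /* resources reclaimed when it finishes */
    }
    pthread_exit(NULL);                          /* let outstanding workers finish */
}

On a multiprocessor, several of these worker threads can run simultaneously
on different processors, as noted in the example.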
Thread States:
▪ Three key states: Running, Ready, Blocked
▪ No Suspend state because all threads
within the same process share the same
address space (same process image)
• Suspending implies swapping out the whole
process, suspending all threads in the
process
▪ Termination of a process terminates all
threads within the process
• Because the process is the environment the
thread runs in
Types of Threads
▪ User-Level Threads (ULT):
▪ threads of a user application process
▪ ULTs are supported above the kernel and managed without kernel
support
▪ they are implemented in user space and managed by a user-level
thread library
▪ the kernel is not aware of the existence of these threads
▪ a user-level library (e.g. Pthreads) is used for thread creation,
scheduling and management
▪ ULTs can be implemented on any operating system, because no kernel
services are required to support them (a minimal context-switching
sketch follows)
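
A minimal sketch of the user-level idea, assuming the POSIX <ucontext.h>
primitives (available on Linux/glibc, though marked obsolescent in newer
POSIX): two execution contexts are created and switched entirely in user
space, so the kernel never sees more than one thread.

#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, ult_ctx;
static char ult_stack[64 * 1024];                /* private stack for the user-level thread */

static void ult_body(void) {
    printf("user-level thread: running\n");
    swapcontext(&ult_ctx, &main_ctx);            /* yield back to the main context */
    printf("user-level thread: resumed\n");
}                                                /* returning follows uc_link back to main */

int main(void) {
    getcontext(&ult_ctx);
    ult_ctx.uc_stack.ss_sp = ult_stack;
    ult_ctx.uc_stack.ss_size = sizeof ult_stack;
    ult_ctx.uc_link = &main_ctx;                 /* where to continue when ult_body returns */
    makecontext(&ult_ctx, ult_body, 0);

    swapcontext(&main_ctx, &ult_ctx);            /* dispatch the user-level thread */
    printf("main: user thread yielded\n");
    swapcontext(&main_ctx, &ult_ctx);            /* resume it */
    printf("main: user thread finished\n");
    return 0;
}

All of the "scheduling" here is ordinary user-space function calls; this is
why ULT switching needs no mode switch, but also why a blocking system call
in any context would block the whole process.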
Kernel Role for ULTs (None!)
▪ The kernel is not aware of thread activity
▪ it only manages processes
▪ If a thread makes an I/O call, the whole process
is blocked
• Note: in the thread library that thread is still in
“running” state, and will resume execution when the
I/O is complete
▪ So thread states are independent of process
states
Advantages and disadvantages of ULT

Advantages
▪ Thread switching does not involve the kernel: no mode
switching
▪ Therefore fast
▪ Scheduling can be application specific: choose the best
algorithm for the situation.
▪ Can run on any OS. We only need a thread library
Disadvantages
▪ Most system calls are blocking for processes. So all threads
within a process will be implicitly blocked
▪ The kernel can only assign processors to processes. Two
threads within the same process cannot run simultaneously on
two processors
Kernel Level Threads
▪ All thread management is done by kernel
▪ No thread library; system call used for kernel thread facility
▪ Kernel maintains context information for the process and the
threads
▪ Switching between threads requires the kernel
▪ Kernel does Scheduling on a thread basis
Advantages
▪ The kernel can schedule multiple threads of the same process on
multiple processors
▪ Blocking is done at the thread level, not the process level: if a
thread blocks, the CPU can be assigned to another thread in the same process
Disadvantages
▪ Thread switching always involves the kernel. This means 2 mode
switches per thread switch
▪ So it is slower compared to User Level Threads

Combined ULT/KLT Approaches
▪ Thread creation done in the user space
▪ Bulk of thread scheduling and synchronization done in user
space
▪ ULT’s mapped onto KLT’s
• The programmer may adjust the number of KLTs
▪ KLT’s may be assigned to processors
▪ Combines the best of both approaches
(e.g. Solaris)
Solaris
▪ Process includes the user’s address space, stack, and
process control block
▪ User-level threads (threads library)
• invisible to the OS and are the interface for application parallelism
▪ Kernel threads
• the unit that can be dispatched on a processor
▪ Lightweight processes (LWP)
• each LWP supports one or more ULTs and maps to exactly one
KLT
Solaris (figure: user-level threads, LWPs, and kernel-level threads)
Solaris: Kernel-Level Threads
▪ The only objects scheduled within the system
▪ May be multiplexed on the CPUs or tied to a specific CPU
▪ Each LWP is tied to exactly one kernel-level thread
Solaris: User-Level Threads
▪ Share the execution environment of the task
• same address space, instructions, data, and files (if any thread
opens a file, all threads can read it)
▪ Can be tied to an LWP or multiplexed over multiple LWPs
▪ Represented by data structures in the address space of the task,
but the kernel knows about them indirectly via the LWPs
Solaris: versatility
▪ We can use ULTs when logical parallelism does not need
to be supported by hardware parallelism (we save mode
switching)
• Ex: Multiple windows but only one is active at any one
time
▪ If ULT threads can block then we can add more LWPs to
avoid blocking the whole application
▪ Note the versatility of Solaris, which can operate like Windows NT
or like conventional UNIX

Solaris: Light-Weight Processes
▪ A UNIX process consists mainly of an address space and a set of
LWPs that share that address space
▪ Each LWP is like a virtual CPU, and the kernel schedules the LWP
via the kernel-level thread that it is attached to
Combination of ULT and KLT
▪ The run-time library (RTL) ties user-level threads and LWPs together
▪ Multiple user threads are handled by the RTL
▪ If one thread makes a blocking system call, its LWP makes the call
and blocks; all threads tied to that LWP will block
▪ Any other thread in the same task (on another LWP) will not block

Multithreading Models
▪ One-to-One
▪ Many-to-One
▪ Many-to-Many
One-to-One Model
▪ Each user-level thread maps to one kernel thread
▪ Advantage: facilitates running multiple threads in parallel,
giving greater concurrency
▪ Drawback: creating each new user thread requires creating a
corresponding kernel thread (see the sketch below)
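
A small sketch, assuming Linux, where the Pthreads implementation (NPTL)
follows the one-to-one model: each pthread maps to its own kernel thread,
which shows up as a distinct kernel thread id from the gettid system call.

#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

static void *show_ids(void *arg) {
    /* pthread_self() is the library-level thread id; SYS_gettid returns the
       kernel's thread id. In a one-to-one model each pthread has its own tid. */
    printf("pthread %lu -> kernel tid %ld (pid %d)\n",
           (unsigned long)pthread_self(), (long)syscall(SYS_gettid), (int)getpid());
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, show_ids, NULL);
    pthread_create(&t2, NULL, show_ids, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    show_ids(NULL);                              /* the main thread has its own tid too */
    return 0;
}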
Many-to-One Model
▪ The many-to-one model maps all user-level threads of a process to a
single kernel-level thread.
▪ User-level threads follow the many-to-one threading model.
▪ This means multiple threads are managed by a library in user space,
but the kernel is only aware of a single thread for the process
owning those threads.
Many-to-Many Model
▪ In this model, developers can create as many user threads as
necessary, and the corresponding kernel threads can run in parallel
on a multiprocessor machine.
▪ This model gives good concurrency: when a thread performs a blocking
system call, the kernel can schedule another thread for execution.
▪ A number of user-level threads are mapped to an equal or smaller
number of kernel-level threads.
▪ Allows many user-level threads to be mapped to many kernel threads.
Thread Cancellation
▪ Thread cancellation is the task of terminating a thread before it
has completed.
▪ For example, if multiple threads are concurrently searching through
a database and one thread returns the result, the remaining threads
might be cancelled (see the sketch below).
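
A hedged sketch of the search example using pthread_cancel(); search_part()
and the sleep()-based "search" are illustrative stand-ins only.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define NWORKERS 4

static pthread_t workers[NWORKERS];

static void *search_part(void *arg) {
    long id = (long)arg;
    /* Simulated search: worker 2 "finds" the result first. */
    sleep(id == 2 ? 1 : 5);                      /* sleep() is a cancellation point */
    printf("worker %ld finished its part\n", id);
    return (void *)id;
}

int main(void) {
    for (long i = 0; i < NWORKERS; i++)
        pthread_create(&workers[i], NULL, search_part, (void *)i);

    pthread_join(workers[2], NULL);              /* wait for the winning thread */

    for (long i = 0; i < NWORKERS; i++)          /* cancel and reap the rest */
        if (i != 2) {
            pthread_cancel(workers[i]);
            pthread_join(workers[i], NULL);
        }
    printf("remaining searches cancelled\n");
    return 0;
}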
Scheduler Activation
▪ Scheduler activations are a threading mechanism that, when
implemented in an operating system's process scheduler,
provide kernel-level thread functionality with user-level thread
flexibility and performance.
▪ Technique for communication between the user-thread library
and the kernel is known as scheduler activation.
▪ It works as follows: the kernel provides the application with a set
of virtual processors (LWPs), and the application can schedule user
threads onto an available virtual processor.
▪ Moreover, the kernel must inform an application about certain
events. This procedure is known as an upcall.
▪ Upcalls are handled by the thread library with an upcall
handler, and upcall handlers must run on a virtual
processor.
Scheduler Activation
▪ One event that triggers an upcall occurs when an application
thread is about to block.
▪ In this scenario, the kernel makes an upcall to the
application informing it that a thread is about to block and
identifying the specific thread.
▪ The kernel then allocates a new virtual processor to the
application. The application runs an upcall handler on this new
virtual processor, which saves the state of the blocking thread
and relinquishes the virtual processor on which the blocking
thread is running.
▪ The upcall handler then schedules another thread that is eligible
to run on the new virtual processor. When the event that the
blocking thread was waiting for occurs, the kernel makes another
upcall to the thread library informing it that the previously
blocked thread is now eligible to run.
Scheduler Activation
▪ The upcall handler for this event also requires a virtual processor;
the kernel may allocate a new virtual processor, or preempt one of
the user threads and run the upcall handler on its virtual processor.
▪ After marking the unblocked thread as eligible to run, the
application schedules an eligible thread to run on an available
virtual processor.
