
Threads

• We distinguish two ideas:


– Process: Ownership of memory, files, other resources
– Thread: Unit of execution we use to dispatch
• Multithreading
– Allow multiple threads per process
• Thread
– Has individual execution state
– Each thread has a control block, with a state
(Running/Blocked/etc.), saved registers, instruction
pointer
– Separate stack
– Shares memory and files with other threads that are in
that process
– Faster to create a thread than a process
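
A minimal sketch (not from the original slides) of the last bullets, assuming POSIX threads: both threads run inside one process, so they see the same global variable, while each has its own stack variable, and no new address space has to be built, which is why spawning a thread is cheaper than fork(). The names shared and worker are illustrative only.

#include <pthread.h>
#include <stdio.h>

int shared = 0;                        /* one copy per process: visible to every thread     */

void *worker(void *arg)
{
    int local = (int)(long)arg;        /* one copy per thread: lives on this thread's stack */
    shared += local;
    printf("thread %d sees shared = %d\n", local, shared);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    /* Creating a thread reuses the existing address space, open files, etc. */
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_join(t1, NULL);
    pthread_create(&t2, NULL, worker, (void *)2L);
    pthread_join(t2, NULL);            /* t2 saw t1's update because memory is shared */

    printf("main sees shared = %d\n", shared);
    return 0;
}

(Compile with -pthread.)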
Using Threads
• Using multiple threads within a single process
– The OS keeps separate control blocks for the process
and each thread
– Can quickly switch between threads
– Can communicate without invoking the kernel
• Four Examples of multiple threads
– Foreground/Background
– Asynchronous Processing – Backing up in
background
– Faster Execution – Read one set of data while
processing another set
– Organization – For a word processing program, may
allow one thread for each file being edited
Threads
• Some thread operations include
– Spawn – Creating a new thread
– Block – Waiting for an event
– Unblock – Event happened, thread moves back to the ready state
– Finish – This thread is completed
• Generally a thread can block without blocking
the remaining threads in the process
– Allow the process to start two operations at once,
each thread blocks on the appropriate event
• OS must handle synchronization between
threads
– System calls or local subroutines
– Thread is generally responsible for getting/releasing
locks, etc.
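
A minimal sketch of these operations, assuming POSIX threads (the slides do not prescribe an API); spawned and counter are illustrative names. Each thread acquires and releases the lock around the shared counter itself, as the last bullet describes.

#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;

static void *spawned(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&counter_lock);     /* thread takes the lock itself ...    */
        counter++;                             /* ... touches the shared state ...    */
        pthread_mutex_unlock(&counter_lock);   /* ... and releases it itself          */
    }
    return NULL;                               /* Finish: this thread is completed    */
}

int main(void)
{
    pthread_t t[2];
    for (int i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, spawned, NULL);  /* Spawn                          */
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);                    /* main waits until each finishes */
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}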
Managing Threads
• User-level threads
– Application creates/manages all threads using a library
– System schedules the process as a unit
• Scheduling, etc. is all in user space (faster)
• Scheduling can be application specific
• Does not require O.S. support
– Has problems with blocking system calls – one call can
block every thread in the process
• Can avoid by polling a non-blocking call (see the sketch
after this slide)
– Cannot take advantage of multiprocessing
• Kernel-level threads
– Kernel handles/manages threads
– Easier to support multiple processors
– Kernel itself may be multithreaded
– Need user/kernel mode switch to change threads
• Other Approaches
– Each kernel thread may have multiple user threads
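
A sketch of the polling workaround from the user-level bullets above, assuming POSIX fcntl()/read(); ult_read is a made-up name, and the usleep() call only stands in for the point where a real user-level thread library would switch to another ready thread instead of blocking the whole process.

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

ssize_t ult_read(int fd, void *buf, size_t len)
{
    int flags = fcntl(fd, F_GETFL, 0);
    fcntl(fd, F_SETFL, flags | O_NONBLOCK);    /* make the descriptor non-blocking   */

    for (;;) {
        ssize_t n = read(fd, buf, len);
        if (n >= 0)
            return n;                          /* data (or end of file) is available */
        if (errno != EAGAIN && errno != EWOULDBLOCK)
            return -1;                         /* a real error                       */
        /* Nothing ready yet: here a ULT library would run another ready thread,
           so one thread's "blocking" call never stalls the whole process.       */
        usleep(1000);
    }
}

int main(void)
{
    char buf[128];
    ssize_t n = ult_read(STDIN_FILENO, buf, sizeof buf);
    if (n > 0)
        write(STDOUT_FILENO, buf, (size_t)n);
    return 0;
}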
Threads and Processes
• 1 to 1
– Each process has one thread (Unix)
• Many to 1
– Each process can have multiple threads (NT, Solaris,
OS/2, Mach)
– Very common
• 1 to Many
– Seen in some distributed operating systems
– Thread can move between machines and address
spaces
• Many to Many
– A thread can switch to a different domain (e.g., one that
handles I/O) to perform key operations
– Helps create a structured program

Thread Example (Win95)
Echo program - get data from a socket, echo it back to the sender
• Note: Some details not directly associated with threads have been omitted

#include <winsock2.h>                     // Winsock sockets (link with ws2_32.lib)
#include <process.h>                      // _beginthread
#include <stdio.h>
#include <string.h>
#include <errno.h>

#define WSVERS  MAKEWORD(2, 0)            // assumed Winsock version; defined in the original course headers
#define BUFSIZE 4096                      // assumed buffer size; defined in the original course headers

// Helper routines from the course's support library (bodies not shown):
SOCKET passiveTCP(const char *service, int qlen);   // create a listening TCP socket
void   errexit(const char *format, ...);            // print an error message and exit

int TCPechod(SOCKET fd);                  // per-connection thread routine (below)

SOCKET msock, ssock;                      // master (listening) & slave (per-client) sockets

int main(int argc, char *argv[])          // main - concurrent TCP server for the ECHO service
{
    char *service = "echo";               // service name or port number
    struct sockaddr_in fsin;              // the address of a client
    int alen;                             // length of client's address
    WSADATA wsadata;

    if (WSAStartup(WSVERS, &wsadata) != 0)
        errexit("WSAStartup failed\n");
    msock = passiveTCP(service, 5);       // listening socket with a backlog of 5

    while (1) {
        alen = sizeof(fsin);
        ssock = accept(msock, (struct sockaddr *)&fsin, &alen);
        if (ssock == INVALID_SOCKET)
            errexit("accept: error %d\n", GetLastError());
        // One new thread per connection: 8000-byte stack, socket handle as the argument
        if (_beginthread((void (*)(void *))TCPechod, 8000, (void *)ssock) == -1)
            errexit("_beginthread: %s\n", strerror(errno));
    }
    return 1;                             // not reached
}

int TCPechod(SOCKET fd)                   // TCPechod - echo data until end of file
{
    char buf[BUFSIZE];
    int cc;

    cc = recv(fd, buf, sizeof buf, 0);
    while (cc != SOCKET_ERROR && cc > 0) {            // 0 means the client closed the connection
        if (send(fd, buf, cc, 0) == SOCKET_ERROR)
            break;
        cc = recv(fd, buf, sizeof buf, 0);
    }
    if (cc == SOCKET_ERROR)
        fprintf(stderr, "echo recv error: %d\n", GetLastError());
    closesocket(fd);
    return 0;
}
Parallel Processors
• SISD – Single instruction, single data (standard
processor)
• SIMD – Single instruction, multiple data (vector
processor, MMX)
• MISD – Multiple instruction, single data (never
implemented)
• MIMD – Multiple instruction, multiple data
– Tightly coupled (shared memory)
• Master/Slave
– Kernel, I/O on one processor
– Others do user programs
• Symmetric Multiprocessor
– Processors share Kernel, I/O
– Loosely coupled (distributed mem.)
• Cluster
SMP Organization
• Generally each processor has its own cache; all
processors share memory and I/O
• Design issues
– Simultaneous concurrent processes or threads
• Kernel routines must be reentrant to allow multiple
threads to execute them
– Scheduling
• Must avoid conflicts
• May be able to run threads concurrently
– Synchronization
• Mutual exclusion, event ordering
– Memory management
• Have a unified paging scheme
– Reliability and fault tolerance
• Solutions similar to normal case
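
A small illustration (not from the slides) of the reentrancy requirement in the first design issue: the static-buffer version keeps hidden shared state that two threads running it at once would trample, while the version that writes into a caller-supplied buffer can be executed by any number of threads simultaneously. The function names are made up.

#include <stdio.h>

char *format_id_unsafe(int id)                   /* NOT reentrant                          */
{
    static char buf[32];                         /* one shared copy for all callers        */
    snprintf(buf, sizeof buf, "id-%d", id);
    return buf;                                  /* another thread may overwrite it        */
}

void format_id_safe(int id, char *buf, size_t len)   /* reentrant                          */
{
    snprintf(buf, len, "id-%d", id);             /* touches only caller-owned memory       */
}

int main(void)
{
    char mine[32];
    format_id_safe(7, mine, sizeof mine);
    puts(mine);
    puts(format_id_unsafe(8));                   /* fine single-threaded; unsafe on an SMP */
    return 0;
}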
Microkernels
• Popularized by use in Mach O.S.
• Monolithic O.S.
– Built as a single large program, any routine can call
any other routine
– Used in most early systems
• Layered O.S.
– Based on modular programming
– Major changes still had wide-spread effects on other
layers
• Microkernel
– Only essential functions in the kernel
– File System, Device drivers, etc., are now external
subsystems/processes
– Processes interact through messages passed through
the kernel
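
A purely illustrative sketch of the message-passing structure just described: the message layout, send_msg(), recv_msg(), and FILE_SERVER_PORT are all made-up names (no real microkernel API is quoted), and the stub bodies at the bottom exist only so the sketch compiles.

#include <stdio.h>
#include <string.h>

enum { MSG_OPEN = 1, MSG_REPLY = 2 };

struct message {
    int  dest_port;                    /* which server process should receive this      */
    int  type;                         /* requested operation or reply code             */
    char payload[64];                  /* request arguments or reply data               */
};

#define FILE_SERVER_PORT 5             /* made-up port for the file-server process      */

/* Hypothetical kernel primitives: the kernel only moves messages between
   address spaces; the file system itself runs as an ordinary process.     */
int send_msg(const struct message *m);
int recv_msg(struct message *m);

/* A user process "opens a file" purely by exchanging messages with the server. */
int open_via_messages(const char *path)
{
    struct message req = { FILE_SERVER_PORT, MSG_OPEN, "" };
    strncpy(req.payload, path, sizeof req.payload - 1);
    send_msg(&req);                    /* user process -> kernel -> file-server process */

    struct message reply;
    recv_msg(&reply);                  /* the server's answer comes back the same way   */
    return reply.type == MSG_REPLY ? 0 : -1;
}

/* Trivial stand-ins so this compiles; a real kernel would copy the message
   into the destination process and block the caller until a reply arrives.  */
static struct message mailbox;
int send_msg(const struct message *m) { mailbox = *m; mailbox.type = MSG_REPLY; return 0; }
int recv_msg(struct message *m)       { *m = mailbox; return 0; }

int main(void)
{
    printf("open result: %d\n", open_via_messages("/etc/motd"));
    return 0;
}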
Microkernel Benefits
• Uniform Interface
– Same message for user/system services
• Extensibility
– Easy to add new services
– Modifications need only change directly affected
components
– Could have multiple file services
• Flexibility
– Can customize system by omitting services
• Portability
– Isolate nearly all processor-specific code in the
kernel
– Changes tend to be in logical areas
Microkernel Benefits 2
• Reliability
– Easy to rigorously test kernel
– Fewer system calls to master
– Less interaction with other components
• Distributed System Support
– Just as easy to send a message to another machine as
to a process on the same machine
• Need system-wide unique Ids
– Processes don’t have to know where a service
resides
• Object-Oriented O.S.
– Lends discipline to the kernel
– Some systems (NT) incorporate OO principles into
the design
Performance
• Sending a message generally slower than simple
kernel call
• Depends on size of the microkernel
– First generation systems slower
– Later designs moved critical system services back into the
kernel (Mach)
• Fewer user/system mode switches
• Lose some microkernel benefits
– Trying approach of very small kernel
• L4 - 12K code, 7 system calls
• Speed seems to match Unix
Microkernel Design
• Vary according to the system
• Primitive Memory Management
– Kernel handles the virtual-to-physical mapping; the rest is
done by a user-mode process
• V.M. module can decide what pages to move to/from disk
• Module can allocate memory
– Three microkernel operations
• Grant – Grant pages to someone else
– Grantor gives up access to pages
• Map – Map pages in another space
– Both processes can access page
• Flush – Reclaim pages granted or mapped
• Interprocess Communication
– Based on messages
• I/O and Interrupts
– Handle interrupts as messages
Win 2000 Threads
• Characteristics of Processes
– Implemented as objects
– May contain one or more threads
– Both processes and threads have built-in
synchronization capabilities
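
A small Win32 sketch (not code from the slides) of the built-in synchronization just mentioned: a thread object becomes signaled when the thread terminates, so another thread can wait directly on its handle.

#include <windows.h>
#include <stdio.h>

static DWORD WINAPI worker(LPVOID arg)
{
    (void)arg;
    printf("worker running\n");
    return 0;                                    /* terminating signals the thread object */
}

int main(void)
{
    DWORD tid;
    HANDLE h = CreateThread(NULL, 0, worker, NULL, 0, &tid);
    if (h == NULL)
        return 1;

    WaitForSingleObject(h, INFINITE);            /* wait on the thread object itself      */
    printf("worker finished\n");
    CloseHandle(h);
    return 0;
}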
Process/Thread Objects
• Each process must contain at least one thread
• Multithreading
– Threads in the same process can execute
concurrently
• SMP Support
– Any thread (including kernel threads) can run on
any processor
– Soft affinity – Try to reschedule a thread on the same
processor
– Hard affinity – Restrict a thread to certain
processors
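
A sketch of the Win32 calls corresponding to these two bullets (the mask and processor number are arbitrary choices, not from the slides): SetThreadAffinityMask() imposes hard affinity, while SetThreadIdealProcessor() records the soft preference the scheduler tries to honour.

#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE me = GetCurrentThread();

    /* Hard affinity: this thread may now run only on processors 0 and 1. */
    if (SetThreadAffinityMask(me, 0x3) == 0)
        printf("SetThreadAffinityMask failed: %lu\n", GetLastError());

    /* Soft affinity: ask the scheduler to prefer processor 0 when possible. */
    if (SetThreadIdealProcessor(me, 0) == (DWORD)-1)
        printf("SetThreadIdealProcessor failed: %lu\n", GetLastError());

    printf("affinity hints set\n");
    return 0;
}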
Win 2000 Threads
• Thread States
– Ready – Able to run
– Standby – Scheduled to run
– Running
– Waiting – Blocked or suspended
– Transition – Not blocked, but can’t run (paged out
of memory)
– Terminated
• Support for O.S. Subsystem
– Process creation
• Begins with request from application
• Goes to protected subsystem
• Passed to executive, returns handle
• Win32, OS/2 use handle to create thread
• Return process/thread information
– Win2000 - client requests a new thread
• Thread inherits limits, etc. from parent
Solaris Threads
• Four thread-related concepts
– Process – Normal Unix process
– User-level Thread – Thread library
– Lightweight Process – Mapping between ULTs
and Kernel Threads
– Kernel Thread – Fundamental kernel scheduling
object
• Also used for system functions
– Figure 4.15, page 185
Threads 2
• Motivation
– ULTs provide logical parallelism with the speed of user-
level thread management
– LWPs handle the problem of blocking system calls
– ULTs can share an LWP or each have their own (see the
thr_create sketch at the end of this slide)
• Process Structure
– Process has:
• Signal dispatch table
• Memory Map
• File Descriptors
• List of LWPs for this process
– Each LWP has
• LWP ID
• Saved Registers
• Priority
• Kernel Stack
• Signal Mask
• Resource usage
• Kernel Thread Ptr
• Process Ptr
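
A sketch of how these pieces fit together in code, assuming the Solaris libthread interface in <thread.h> (link with -lthread): an ordinary ULT is multiplexed over the process's pool of LWPs, while the THR_BOUND flag binds the new ULT permanently to its own LWP (and hence its own kernel thread).

#include <thread.h>
#include <stdio.h>

void *work(void *arg)
{
    printf("running as a %s thread\n", (char *)arg);
    return NULL;
}

int main(void)
{
    thread_t unbound, bound;

    /* Unbound ULT: shares whatever LWPs the process already has. */
    thr_create(NULL, 0, work, "unbound", 0, &unbound);

    /* Bound ULT: gets an LWP of its own for its whole lifetime.  */
    thr_create(NULL, 0, work, "bound", THR_BOUND, &bound);

    thr_join(unbound, NULL, NULL);
    thr_join(bound, NULL, NULL);
    return 0;
}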
Thread Execution
• ULTs have four states:
– Active – Has an LWP assigned
– Sleeping
– Stopped
– Runnable
• Events that could happen:
– Wait due to synchronization
– Suspension – Until woken up by another thread
– Preemption – Higher-priority thread able to run
– Yielding – Yield to same-priority thread (if one
exists)
More on Threads
• LWP States
– Running
– Runnable
– Stopped
– Blocked – As far as the ULT library is concerned, the
thread is still active
• Interrupts
– Converted into kernel threads, each with its own
priority, context, stack
– Control access to data structures using
synchronization primitives
– Given higher priorities than normal threads
– Interrupted thread pinned to that processor, will stay
on that processor
– Reduce time spent with interrupts off.
Linux Threads
• Task structure maintained for each
process/thread
– State (executing, ready, zombie, etc.)
– Scheduling information
– Process, user, group identifiers
– Interprocess communication info
– Links to parent, siblings, children
– Timers (time used, interval timer)
– File system – Pointers to open files
– Virtual memory
– Processor-specific context
• Threads are implemented as processes that share
files, virtual memory, signals, etc.
– “Clone” system call to create a thread
– <pthread.h> library provides more user-friendly
thread support
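
A sketch of the clone() route mentioned above (glibc wrapper, Linux-specific; the particular flag combination is an assumption chosen to mirror the bullet): the child runs in the same address space as its parent, so it updates the parent's variables directly. <pthread.h>'s pthread_create() asks clone() for essentially this sharing, with a friendlier interface.

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

static int shared_counter = 0;               /* visible to the child because of CLONE_VM    */

static int worker(void *arg)
{
    (void)arg;
    shared_counter++;                        /* updates the parent's memory directly        */
    printf("child sees counter = %d\n", shared_counter);
    return 0;
}

int main(void)
{
    const size_t STACK_SIZE = 64 * 1024;
    char *stack = malloc(STACK_SIZE);        /* the one thing threads do not share: a stack */
    if (stack == NULL) { perror("malloc"); return 1; }

    /* Share the address space, filesystem info, open files and signal handlers;
       clone() takes the top of the child's stack, since stacks grow downward.   */
    int pid = clone(worker, stack + STACK_SIZE,
                    CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND | SIGCHLD,
                    NULL);
    if (pid == -1) { perror("clone"); return 1; }

    waitpid(pid, NULL, 0);                   /* SIGCHLD lets the parent wait as for a child */
    printf("parent sees counter = %d\n", shared_counter);
    free(stack);
    return 0;
}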
