Operating Systems Cheat Sheet

The document is a comprehensive cheat sheet on operating systems, covering key concepts such as processes, threads, multiprogramming, and resource management. It explains the roles of the operating system, process states, scheduling, and the producer-consumer problem, along with synchronization mechanisms. Additionally, it discusses threading models, process control blocks, and the challenges of multicore programming.


Operating Systems Cheat Sheet by makahoshi1

Introduction

What is an Operating System?
A program that acts as an intermediary between a user and the hardware; it sits between the application programs and the hardware.

What are the three main purposes of an operating system?
1.) to provide an environment in which a computer user can execute programs
2.) to allocate and separate the resources of the computer as needed
3.) to serve as a control program: supervising the execution of user programs and managing the operation and control of I/O devices

What does an Operating System do?
Resource Allocator: manages all of the resources and decides between competing requests to make resource use efficient and fair.
Control Program: controls the execution of programs to prevent errors and improper use of the computer.

Goals of the operating system
Execute programs and make solving problems easier, make the computer system easy to use, and use the computer hardware in an efficient manner.

What happens when you start your computer?
The bootstrap program is loaded at power-up or reboot. This program is usually stored in ROM or EPROM, generally known as firmware. It loads the operating-system kernel and starts its execution. The one program running at all times is the kernel.

Processes

Objective of multiprogramming
To have some process running at all times, in order to maximize CPU utilization.

How does multiprogramming work?
Several processes are stored in memory at one time; when one process has to wait, the OS takes the CPU away from that process and gives it to another process.

Benefits of multiprogramming
Higher throughput (the amount of work accomplished in a given time interval) and increased CPU utilization.

What is a process?
A process is a program in execution.

What do processes need?
A process needs CPU time, memory, files, and I/O devices.

What are the Process States?
New: the process is being created.
Ready: the process is waiting to be assigned to a processor.
Waiting: the process is waiting for some event to occur.
Running: instructions are being executed.
Terminated: the process has finished execution.

Why is the operating system good for resource allocation?
Because it acts as the interface between hardware and software.

Threads

What is a thread?
A basic unit of CPU utilization, consisting of a program counter, a stack, and a set of registers. Threads form the basis of multithreading.

Benefits of multithreading
Responsiveness: threads may provide rapid response while other threads are busy.
Resource Sharing: threads share common code, data, and other resources, which allows multiple tasks to be performed simultaneously in a single address space.
Economy: creating and managing threads is much faster than performing the same tasks for processes.
Scalability: a single-threaded process can only run on one CPU, whereas the execution of a multithreaded application may be split amongst the available processors.

Multicore Programming Challenges
Dividing Tasks: examining applications to find activities that can be performed concurrently.
Balance: finding tasks to run concurrently that provide equal value, i.e. not wasting a thread on trivial tasks.
Data Splitting: splitting data so that the threads do not interfere with one another.
Data Dependency: if one task is dependent upon the results of another, the tasks need to be synchronized to assure access in the proper order.
Testing and Debugging: inherently more difficult in parallel processing situations, as race conditions become much more complex and difficult to identify.
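The "resource sharing" benefit listed above can be seen in a few lines: a minimal sketch (not from the cheat sheet; the dictionary and worker names are invented) in which two threads, each with its own stack and program counter, read and write the same data because they live in one address space.

```python
# Sketch: two threads sharing the data of one process, illustrating the
# "resource sharing" benefit of multithreading.
import threading

results = {}  # shared data: all threads of a process share one address space

def worker(name, n):
    # each thread has its own stack and program counter,
    # but writes into the same shared dictionary
    results[name] = sum(range(n))

t1 = threading.Thread(target=worker, args=("a", 10))
t2 = threading.Thread(target=worker, args=("b", 100))
t1.start(); t2.start()
t1.join(); t2.join()   # one thread waits for the others to finish
```

After both joins, `results` holds the output of both threads without any copying between processes.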
What are interrupts and how are they used?
An interrupt is an electronic signal. Interrupts serve as a mechanism for process cooperation and are often used to control I/O; a program issues an interrupt to request operating-system support. The hardware raises an interrupt and transfers control to the interrupt handler; when the handler finishes, the interrupt ends.

The Operating System Structure
The operating system utilizes multiprogramming. Multiprogramming organizes jobs so that the CPU always has something to do, leaving no wasted time. One job is selected and run via job scheduling; when that job has to wait, the OS switches to another job.

How does the operating system run a program? What does it need to do?
1.) reserve machine time
2.) manually load the program into memory
3.) load the starting address and begin execution
4.) monitor and control the execution of the program from the console

What is a process?
A process is a program in execution; a process is active, while a program is passive. The program becomes a process when it is running. The process needs resources to complete its task, so it waits for them. A process includes a program counter, a stack, and a data section.

What does a Process include?
1.) a program counter
2.) stack: contains temporary data
3.) data section: contains global variables

What is the Process Control Block?
Processes are represented in the operating system by a PCB. The process control block includes:
1.) process state
2.) program counter
3.) CPU registers
4.) CPU-scheduling information
5.) memory-management information
6.) accounting information
7.) I/O status information

Why is the PCB created?
A process control block is created so that the operating system has the information it needs about the process.

What happens when a program enters the system?
When a program enters the system it is placed in a queue by the queueing routine; the scheduler then takes the program from the queue and loads it into memory.

Why are queues and schedulers important?
They determine which program is loaded into memory after one program finishes and space becomes available.

What is a CPU switch and how is it used?
When the OS performs a switch it stops one process from executing (idling it) and allows another process to use the processor.

Multithreading Models
Many-to-One: many user-level threads are all mapped onto a single kernel thread.
One-to-One: creates a separate kernel thread to handle each user thread. Most implementations of this model place a limit on how many threads can be created.
Many-to-Many: allows many user-level threads to be mapped to many kernel threads, so a process can be split across multiple processors. Allows the OS to create a sufficient number of kernel threads.

Thread Libraries
Provide programmers with an API for creating and managing threads, implemented either in user space or in kernel space.
User space: the API functions are implemented solely within user space, with no kernel support.
Kernel space: involves system calls and requires a kernel with thread-library support.
Three main thread libraries:
POSIX Pthreads: provided as either a user or kernel library, as an extension to the POSIX standard.
Win32 threads: provided as a kernel-level library on Windows systems.
Java threads: the implementation is based upon whatever OS and hardware the JVM is running on, i.e. either Pthreads or Win32 threads depending on the system.
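The PCB fields listed above can be sketched as a plain data record. This is an illustration only: the field names follow the cheat sheet's list, not any real kernel's PCB struct.

```python
# Sketch of a process control block as a data record; fields mirror the
# seven items listed above, with invented Python names.
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int
    state: str = "new"             # new / ready / waiting / running / terminated
    program_counter: int = 0
    registers: dict = field(default_factory=dict)   # CPU registers
    priority: int = 0              # CPU-scheduling information
    memory_limits: tuple = (0, 0)  # memory-management information
    cpu_time_used: float = 0.0     # accounting information
    open_files: list = field(default_factory=list)  # I/O status information

p = PCB(pid=1)
p.state = "ready"  # the OS updates the PCB as the process changes state
```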
What is process management?
The operating system is responsible for managing processes. The OS:
1.) creates and deletes user and system processes
2.) suspends and resumes processes
3.) provides mechanisms for process synchronization
4.) provides mechanisms for process communication
5.) provides mechanisms for deadlock handling

What is process scheduling?
The process scheduler selects, from among the available processes, the next one to execute on the CPU.

Queues
Generally the first program in the queue is loaded first, but there are situations with multiple queues:
1.) Job queue: when processes enter the system they are put into the job queue.
2.) Ready queue: the processes that are ready and waiting to execute are kept on a list (the ready queue).
3.) Device queues: the processes that are waiting for a particular I/O device (each device has its own device queue).

How does the operating system decide which queue the program goes to?
It is based on what resources the program needs; the program is placed in the corresponding queue.

What are the types of Schedulers?
1.) Long-term scheduler: selects which processes should be brought into the ready queue.
2.) Short-term scheduler: selects which process should be executed next and then allocates the CPU.

What is a context switch?
A context switch is needed so that the CPU can switch to another process; during the switch the system saves the state of the old process (and restores the saved state of the new one).

Pthreads
* The POSIX standard defines the specification for Pthreads, not the implementation.
* Global variables are shared amongst all threads.
* One thread can wait for the others to rejoin before continuing.
* Available on Solaris, Linux, Mac OS X, Tru64, and via public-domain shareware for Windows.

Java Threads
* Managed by the JVM.
* Implemented using the threads model provided by the underlying OS.
* Threads are created by extending the Thread class or by implementing the Runnable interface.

Thread Pools
A solution that creates a number of threads when a process first starts and places them into a thread pool, to avoid inefficient thread use.
* Threads are allocated from the pool as needed, and returned to the pool when no longer needed.
* When no threads are available in the pool, the process may have to wait until one becomes available.
* The maximum number of threads available in a thread pool may be determined by adjustable parameters, possibly dynamically in response to changing system loads.
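The job, ready, and device queues described above can be modeled as FIFO queues. This is a toy sketch: the process names, the `admit`/`load_into_memory`/`dispatch` helpers, and the single "disk" device are all invented for illustration.

```python
# Toy model of the job, ready, and device queues and the two schedulers.
from collections import deque

job_queue = deque()                # processes entering the system
ready_queue = deque()              # processes ready to execute
device_queue = {"disk": deque()}   # one queue per I/O device

def admit(pid):
    job_queue.append(pid)                    # process enters the system

def load_into_memory():
    ready_queue.append(job_queue.popleft())  # long-term scheduler

def dispatch():
    return ready_queue.popleft()             # short-term scheduler picks next

admit("P1"); admit("P2")
load_into_memory(); load_into_memory()
running = dispatch()                   # P1 gets the CPU first (FIFO order)
device_queue["disk"].append(running)   # P1 now waits in the disk's device queue
```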
Problems that Processes Run Into
Processes run concurrently: no two processes can be running simultaneously (at the same instant) on a single CPU, but they can run concurrently, with the CPU multitasking between them.

How are processes created?
The parent creates a child, which can in turn create more processes. The child process is a duplicate of the parent process.

fork()
fork() creates a new process. It returns twice: in the child process it returns 0, and in the parent it returns the process ID of the child.

execve()
The execve() system call is used to load a new program into a child. It is used after fork() to replace the process's memory space with a new program.

Process Creation
Every process has a process ID, used to identify the process and for process management. When a process is created with fork(), only the shared memory segments are shared between the parent process and the child process; copies of the stack and the heap are made for the new child.

Process Creation, Continued
When a process creates a new process, the parent can continue to run concurrently with the child, or the parent can wait until all of its children terminate.

Threading Issues
The fork() and exec() system calls
Q: If one thread forks, is the entire process copied, or is the new process single-threaded?
A: System dependent. If the new process execs right away, there is no need to copy all the other threads; if it doesn't, then the entire process should be copied. Many versions of UNIX provide multiple versions of the fork call for this purpose.

Signal Handling
Used to process signals, by generating a particular event, delivering it to a process, and handling it.
Q: When a multithreaded process receives a signal, to what thread should that signal be delivered?
A: There are four major options:
* Deliver the signal to the thread to which the signal applies.
* Deliver the signal to every thread in the process.
* Deliver the signal to certain threads in the process.
* Assign a specific thread to receive all signals for the process.

Thread Cancellation
Can be done in one of two ways:
* Asynchronous cancellation: cancels the thread immediately.
* Deferred cancellation: sets a flag indicating the thread should cancel itself when it is convenient. It is then up to the cancelled thread to check this flag periodically and exit cleanly when it sees the flag set.

Scheduler Activations
Provide upcalls, a communication mechanism from the kernel to the thread library. This communication allows an application to maintain the correct number of kernel threads.
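The fork() behavior described above (0 in the child, the child's PID in the parent, and a parent that waits) can be sketched in Python on a POSIX system. This is a POSIX-only illustration, not the cheat sheet's own code; the exit status 7 is an arbitrary example value.

```python
# POSIX-only sketch of fork(): the child sees return value 0, the parent
# sees the child's PID and waits for the child's exit status.
import os

pid = os.fork()
if pid == 0:
    # child process: a duplicate of the parent; an execve() or exit goes here
    os._exit(7)                      # report status 7 back to the parent
else:
    # parent process: pid holds the child's process ID
    _, status = os.waitpid(pid, 0)   # wait() for the child to terminate
    child_status = os.WEXITSTATUS(status)
```

On Windows (no `os.fork`), the equivalent pattern uses the `multiprocessing` or `subprocess` modules instead.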
The Producer-Consumer Problem
Among cooperating processes the producer-consumer problem is common: a producer process produces information that is consumed by a consumer process.

Producer and Consumer Explained
The Producer relies on the Consumer to make space in the data area so that it may insert more information, whilst at the same time the Consumer relies on the Producer to insert information into the data area so that it may remove that information.

Examples of the Producer-Consumer Problem
The client-server paradigm: the client is the consumer and the server is the producer.

Solution to the Producer-Consumer Problem
The consumer and producer processes must run concurrently; to allow this there needs to be an available buffer of items that can be filled by the producer and emptied by the consumer. The producer can produce one item while the consumer is consuming another item. The producer and consumer must be synchronized, so that the consumer does not try to consume an item that has not yet been produced.

Two types of buffers can be used
Unbounded buffer: no limit on the size of the buffer.
Bounded buffer: there is a fixed buffer size; in this case the consumer must wait if the buffer is empty and the producer must wait if the buffer is full.

How are processes terminated?
A process terminates when it is done executing its last statement. When a child terminates it may return data back to the parent through an exit status, using the exit() system call.

Can a process terminate if it is not done?
Yes, the parent may terminate (abort) the child if:
* the child has exceeded its usage of some of the resources it has been allocated, or
* the task assigned to the child is no longer needed.

wait() and waitpid()
These are the system calls used by a parent to wait for the termination of a child.

Cascading Termination
Some operating systems do not allow children to stay alive if the parent has died; in this case, if the parent is terminated, then the children must also terminate. This is known as cascading termination.

Processes may be either Cooperating or Independent
Cooperating: a process is cooperating if it can affect or be affected by the other processes executing in the system. Characteristics of cooperating processes include: state is shared, the result of execution is nondeterministic, and the result of execution cannot be predicted.
Independent: a process is independent if it cannot affect or be affected by the other processes. Characteristics of independent processes include: state is not shared, execution is deterministic and depends only on the input, execution is reproducible and will always be the same, and execution can be stopped and restarted.

Advantages of Process Cooperation
Information sharing, computation speed-up, modularity, and convenience.
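The bounded-buffer behavior described above (producer waits when full, consumer waits when empty) can be sketched with a thread-safe queue. The buffer size of 4 and the count of 10 items are arbitrary example values.

```python
# Sketch of the producer/consumer pair sharing a bounded buffer;
# queue.Queue handles the waiting and synchronization internally.
import threading, queue

buffer = queue.Queue(maxsize=4)   # bounded buffer: put() blocks when full
consumed = []

def producer():
    for item in range(10):
        buffer.put(item)          # waits if the buffer is full

def consumer():
    for _ in range(10):
        consumed.append(buffer.get())  # waits if the buffer is empty

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
```

With one producer and one consumer the items arrive in production order, and neither side ever reads an unproduced item or overruns the buffer.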
Bounded Buffer Solution
The bounded buffer can be used to enable processes to share memory. In the example code, the variable in points to the next free position in the buffer and out points to the first full position. The buffer is empty when in == out; when (in + 1) % buffer_size == out, the buffer is full.

What is Interprocess Communication?
Cooperating processes need interprocess communication (IPC), a mechanism that allows them to exchange data and information. There are two models of IPC: shared memory and message passing.

What is Shared Memory?
A region of memory that is shared by the cooperating processes is established; processes can exchange information by reading and writing to the shared region.
Benefits of shared memory: allows maximum speed and convenience of communication, and is faster than message passing.

What is Message Passing?
Message passing is a mechanism for processes to communicate and to synchronize their actions. Processes communicate with each other without sharing variables.
Benefits of message passing: it is easier to implement for inter-computer communication and is useful for smaller amounts of data.
Message passing can be either blocking or non-blocking.

Synchronization

Background
Concurrent access to shared data may result in data inconsistency. Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes.

Race Condition
A situation where several processes access and manipulate the same data concurrently, and the outcome of the execution depends on the particular order in which the accesses take place.

Critical Section
Each process has a critical-section segment of code. When one process is in its critical section, no other process may be in its critical section.

Parts of the Critical Section
Each process must ask permission to enter its critical section in the entry section; the critical section may be followed by an exit section, and then the remainder section.

Requirements for a solution to the Critical-Section Problem
A solution must satisfy three requirements: mutual exclusion, progress, and bounded waiting.

Mutual Exclusion
If process Pi is executing in its critical section, then no other process can be executing in its critical section.

Progress
If no process is executing in its critical section and there exist some processes that wish to execute their critical sections, then the selection of the process that will enter the critical section next cannot be postponed indefinitely.
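The in/out pointer scheme from the Bounded Buffer Solution above translates directly into code. A minimal sketch (class and method names invented): note that with this scheme one slot is sacrificed, so a buffer of size 3 holds at most 2 items.

```python
# Circular-array bounded buffer using the in/out pointers described above:
# empty when in == out, full when (in + 1) % size == out.
class BoundedBuffer:
    def __init__(self, size):
        self.size = size
        self.items = [None] * size
        self.in_ = 0    # next free position
        self.out = 0    # first full position

    def empty(self):
        return self.in_ == self.out

    def full(self):
        return (self.in_ + 1) % self.size == self.out

    def put(self, item):
        assert not self.full()
        self.items[self.in_] = item
        self.in_ = (self.in_ + 1) % self.size

    def get(self):
        assert not self.empty()
        item = self.items[self.out]
        self.out = (self.out + 1) % self.size
        return item

b = BoundedBuffer(3)      # holds at most 2 items with this scheme
b.put("x"); b.put("y")    # buffer is now full
```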
The message-passing facility provides two operations
send(message): the message size may be fixed or variable.
receive(message)

How do processes P and Q communicate?
For two processes to communicate they must:
1.) send messages to and receive messages from each other
2.) establish a communication link between them; this link can be implemented in a variety of ways.

Implementations of the communication link include
1.) physical (e.g. shared memory, hardware bus)
2.) logical (direct/indirect, synchronous/asynchronous, automatic/explicit buffering)

Direct vs. Indirect Communication Links
Direct communication link: processes must name each other explicitly; they must state where they are sending the message and where they are receiving it. This can be either symmetric, where both name each other, or asymmetric, where only the sender names the recipient.
Indirect communication link: messages are sent to and received from mailboxes or ports.

Properties of a Direct Communication Link
1.) Links are established automatically.
2.) A link is associated with one pair of communicating processes.
3.) Between each pair there exists exactly one link.
4.) The link may be unidirectional or bidirectional (usually bidirectional).

Bounded Waiting
A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.

Peterson's Solution
A two-process solution. Assume that the LOAD and STORE instructions are atomic, that is, they cannot be interrupted. The two processes share two variables: int turn and boolean flag[2]. turn indicates whose turn it is to enter the critical section; the flag array is used to indicate whether a process is ready to enter its critical section.

Synchronization Hardware
Many systems provide hardware support for critical-section code. Modern machines provide special atomic hardware instructions.

Semaphore
A semaphore is a synchronization tool that does not require busy waiting. A semaphore may be counting or binary. Semaphores provide mutual exclusion.

Semaphore Implementation
When implementing semaphores you must guarantee that no two processes can execute wait() and signal() on the same semaphore at the same time.
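Peterson's solution described above can be sketched with two Python threads standing in for the two processes. This relies on CPython's interpreter lock making loads and stores effectively sequentially consistent; on real hardware, modern CPUs reorder memory operations and additional memory barriers would be required.

```python
# Sketch of Peterson's two-process solution; threads play the processes,
# and globals play the shared flag[2] and turn variables.
import sys, threading

sys.setswitchinterval(1e-4)  # switch threads often so busy-waits stay short

flag = [False, False]   # flag[i]: process i wants to enter its critical section
turn = 0                # whose turn it is to defer
count = 0               # shared variable the critical section protects

def process(i):
    global turn, count
    other = 1 - i
    for _ in range(2000):
        flag[i] = True                       # entry section
        turn = other
        while flag[other] and turn == other:
            pass                             # busy-wait until safe to enter
        count += 1                           # critical section
        flag[i] = False                      # exit section

threads = [threading.Thread(target=process, args=(i,)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
```

Because mutual exclusion holds, the non-atomic `count += 1` never races, and the final count equals the total number of critical-section entries.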
CPU Scheduling

What is CPU scheduling?
The basis of multiprogrammed operating systems.

What is the basic concept of CPU scheduling?
To have a process running at all times, in order to maximize CPU utilization. The operating system takes the CPU away from a process that is waiting and gives the CPU to another process.

What is a CPU-I/O Burst Cycle?
The process execution cycle, in which the process alternates between CPU execution and I/O wait. It begins with a CPU burst, then an I/O burst, then another CPU burst, and so on. The final CPU burst ends with a system request to terminate execution.

What is a CPU Scheduler? (also called the short-term scheduler)
Carries out a selection process that picks a process in the ready queue to be executed when the CPU becomes idle, and then allocates the CPU to that process.

When might a CPU-scheduling decision happen?
1) A process switches from the running to the waiting state.
2) A process switches from the running to the ready state.
3) A process switches from the waiting to the ready state.
4) A process terminates.
The scheduling under 1 and 4 is nonpreemptive (or cooperative); otherwise it is preemptive.
Preemptive: gives priority to high-priority processes.
Nonpreemptive: the running task is executed to completion and cannot be interrupted.

Properties of an Indirect Communication Link
1.) A link is established only if the processes share a mailbox.
2.) A link may be associated with many processes.
3.) Each pair of processes may share several communication links.
4.) A link may be unidirectional or bidirectional.

Message-Passing Synchronization
Message passing may be either blocking or non-blocking.
Blocking is considered synchronous: the sender blocks until the message is received, and the receiver blocks until a message is available.
Non-blocking is considered asynchronous: the sender sends the message and resumes operation; the receiver retrieves either a valid message or null.

Buffering
In both direct and indirect communication, exchanged messages are placed in a temporary queue. These queues are implemented in three ways:
1.) Zero capacity: the queue has a maximum length of 0, so the link cannot have any messages waiting in it; the sender blocks until the recipient receives the message.
2.) Bounded capacity: the queue has finite length n, so at most n messages can be placed in it; the sender must wait if the link is full.
3.) Unbounded capacity: the queue's length is potentially infinite, so any number of messages can wait in it; the sender never blocks.

Deadlock
Deadlock is when two or more processes are waiting indefinitely for an event that can only be caused by one of the waiting processes.

Starvation
Indefinite blocking: a process may never be removed from the semaphore queue in which it is suspended.

Bounded Buffer Problem
The problem describes two processes, the producer and the consumer, who share a common, fixed-size buffer used as a queue. The producer's job is to generate a piece of data, put it into the buffer, and start again. At the same time, the consumer is consuming the data (i.e., removing it from the buffer) one piece at a time. The problem is to make sure that the producer won't try to add data into the buffer if it's full and that the consumer won't try to remove data from an empty buffer.

Readers-Writers Problem
The problem is that you want multiple readers to be able to read at the same time, but only one single writer can access the shared data at a time.

Dining Philosophers Problem
Philosophers around a shared table alternate between thinking and eating, each needing the two chopsticks adjacent to them; the problem illustrates deadlock and starvation among processes competing for shared resources.

Monitors
A high-level abstraction: an abstract data type whose internal variables are accessible only by the code within its procedures. Only one process may be active within the monitor at a given time.
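The readers-writers constraint above (many concurrent readers, one exclusive writer) can be sketched with two locks and a reader count. This is the classic first-readers-preference scheme, with invented names; it is an illustration, not the cheat sheet's own solution.

```python
# Sketch of first-readers-preference readers-writers: the first reader
# locks out writers, the last reader lets them back in.
import threading

data = 0
read_count = 0
mutex = threading.Lock()        # protects read_count
write_lock = threading.Lock()   # held by a writer, or by the reader group

def reader(out):
    global read_count
    with mutex:
        read_count += 1
        if read_count == 1:
            write_lock.acquire()    # first reader blocks writers
    out.append(data)                # many readers may read concurrently here
    with mutex:
        read_count -= 1
        if read_count == 0:
            write_lock.release()    # last reader lets writers in

def writer(value):
    global data
    with write_lock:                # only one writer, and no readers
        data = value

writer(42)
seen = []
readers = [threading.Thread(target=reader, args=(seen,)) for _ in range(3)]
for t in readers: t.start()
for t in readers: t.join()
```

Note that this variant can starve writers if readers keep arriving, which is one reason writers-preference and fair variants exist.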
Potential issues with preemptive scheduling?
1) Processes that share data: while one process is in the middle of updating its data, another process may be given priority to run but cannot read the data from the first process.
2) The operating-system kernel: another process might be given priority while the kernel is being used by a different process. The kernel might be going through important data changes, leaving it in a vulnerable state. A possible solution is to wait for the kernel to return to a consistent state before starting another process.

What is the dispatcher?
It is the module that gives control of the CPU to the process selected by the CPU scheduler. This involves:
a) switching context
b) switching to user mode
c) jumping to the proper location in the user program to restart that program.
It is invoked on every process switch; the time the dispatcher takes to stop one process and start another is the dispatch latency.

Other strategies for communication
Some other ways for processes to communicate include sockets, remote procedure calls, and pipes.

Sockets
A socket is defined as an endpoint for communication; a pair of sockets is needed, one for each process. A socket is identified by an IP address concatenated with a port number.

Remote Procedure Calls
A way to abstract the procedure-call mechanism for use between systems with network connections. The RPC scheme is useful in implementing a distributed file system.

Pipes
A pipe acts as a conduit allowing two processes to communicate. Pipes were one of the first IPC mechanisms and provide one of the simpler ways for processes to communicate with one another; there are, however, limitations.

Ordinary Pipes
Allow communication in standard producer-consumer style. Ordinary pipes are unidirectional, and an ordinary pipe cannot be accessed from outside the process that created it. Typically a parent process creates a pipe and uses it to communicate with a child process; the pipe exists only while the processes are communicating.

Describe the Scheduling Criteria
Various criteria are used when comparing CPU-scheduling algorithms:
a) CPU utilization: keep the CPU as busy as possible. Utilization ranges from 0 to 100%, and usually ranges from 40% (lightly loaded system) to 90% (heavily loaded system).
b) Throughput: measures the number of processes that are completed per time unit.
c) Turnaround time: the amount of time to execute a particular process; it is the sum of the time spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O.
d) Waiting time: the time spent waiting in the ready queue.
e) Response time: the amount of time it takes to produce a response after the submission of a request; generally limited by the speed of the output device.
It is best to maximize CPU utilization and throughput and to minimize turnaround time, waiting time, and response time, but this can still vary depending on the task.

Named Pipes
More powerful than ordinary pipes: communication can be bidirectional, and no parent-child relationship is required. Once a named pipe is established, several processes can use it for communication.

Describe the First-Come, First-Served scheduling algorithm
The process that requests the CPU first is allocated the CPU first. A Gantt chart illustrates the schedule of start and finish times of each process. The average waiting time is heavily dependent on the order of arrival of the processes: if processes with longer burst times arrive first, the entire process order will have a longer average wait time. This effect is called the convoy effect.

Describe the Shortest-Job-First scheduling algorithm
Associates with each process the length of its next CPU burst and gives CPU priority to the process with the smallest next CPU burst. If the next CPU bursts of multiple processes are the same, first-come, first-served scheduling is used. Although SJF is optimal compared with FCFS, it is difficult to know the length of the next CPU request.
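The convoy effect and SJF's optimality above can be shown with a small worked comparison. The burst times 24, 3, 3 are an invented example (all processes assumed to arrive at time 0), not taken from the cheat sheet.

```python
# Average waiting time under FCFS vs. SJF for one set of CPU bursts,
# assuming all processes arrive at time 0.
def avg_waiting_time(bursts):
    waits, elapsed = [], 0
    for b in bursts:
        waits.append(elapsed)   # each process waits for all earlier bursts
        elapsed += b
    return sum(waits) / len(waits)

bursts = [24, 3, 3]                     # long job arrives first: convoy effect
fcfs = avg_waiting_time(bursts)         # run in arrival order
sjf = avg_waiting_time(sorted(bursts))  # run shortest next burst first
```

Here FCFS averages 17 time units of waiting while SJF averages only 3, because the short jobs no longer sit behind the long one.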
What is exponential averaging?
Uses the lengths of previous CPU bursts to predict future bursts. The formula is τ(n+1) = α·t(n) + (1 − α)·τ(n), where t(n) is the length of the nth CPU burst and τ(n+1) is the predicted value of the next burst. α is a value from 0 to 1. If α = 0, recent history has no effect and current conditions are assumed to persist; if α = 1, only the most recent CPU burst matters. Most commonly α = 1/2, where recent and past history are equally weighted. In the shortest-remaining-time-first variant of the example, the previous bursts are lined up in ascending order of burst time instead.

Example: if τ1 = 10, α = 0.5, and the previous bursts (sorted ascending) are 4, 7, 8, 16:
τ2 = 0.5(4 + 10) = 7
τ3 = 0.5(7 + 7) = 7
τ4 = 0.5(8 + 7) = 7.5
τ5 = 0.5(16 + 7.5) = 11.75

What is priority scheduling?
A priority number is assigned to each process based on its CPU burst: a higher burst gets a lower priority, and vice versa.
Internally defined priorities use measurable qualities such as average I/O burst, time limits, memory requirements, etc.
Externally defined priorities are criteria set not by the OS but by human factors, such as the type of work, the importance of the process to the business, the amount of funds being paid, etc.

Semaphore
A generalization of a spin lock, and a more complex kind of mutual exclusion. The semaphore provides mutual exclusion in a protected region for groups of processes, not just one process at a time.

Why are semaphores used?
Semaphores are used in cases where you have n processes in a critical-section problem.

How do you initialize a semaphore?
You initialize a semaphore with semaphore_init; the shared int sem holds the semaphore identifier.

Wait and Signal
A simple way to understand the wait (P) and signal (V) operations:
wait: decrements the semaphore value by 1. If the value is now negative, the process executing wait is blocked (i.e., added to the semaphore's queue) until the value becomes non-negative again; otherwise, the process continues execution, having claimed a unit of the resource.
signal: increments the semaphore value by 1. If the pre-increment value was negative (meaning there are processes waiting for the resource), it transfers a blocked process from the semaphore's waiting queue to the ready queue.

How do we get rid of the busy-waiting problem?
Rather than busy waiting, the process can block itself: the block operation places the process into a waiting queue and its state is changed to waiting.
business, amount of funds being paid, etc. waiting queue and the state is changed to waiting

Signal
Preemptive priority will ask the CPU if the newly
A process that is blocked can be waked up which
arrived process is higher priority than the currently
is done with the signal operation that removes
running process. A nonpreemptive priority will
one process from the list of waiting processes, the
simply put the new process at the head of the
wakeup resumes the operation of the blocked
queue.
process.
Potential problems with priority scheduling? removes the busy waiting from the entry of the
indefinite blocking (also called starvation): A critical section of the application
process that is ready to run is left waiting indefi- in busy waiting there may be a negative
nitely because the computer is constantly getting semaphore value
higher-priority processes. Aging is a solution
Deadlocks and Starvation
where the priority of waiting processes are
increased as time goes on.
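The blocking wait/signal behavior described for semaphores can be sketched with POSIX mutexes and condition variables. This is a minimal illustration, not a standard API: the `csem` type and function names are invented for this sketch (POSIX itself provides `sem_init`/`sem_wait`/`sem_post`). A negative `value` counts the waiting threads, as described above.

```c
#include <pthread.h>

typedef struct {
    int value;              /* negative: |value| = number of waiters */
    int wakeups;            /* pending wakeups; guards against spurious wakeup */
    pthread_mutex_t lock;
    pthread_cond_t  cond;   /* stands in for the semaphore's waiting queue */
} csem;

void csem_init(csem *s, int v) {
    s->value = v;
    s->wakeups = 0;
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->cond, NULL);
}

/* wait (P): decrement; if now negative, block instead of busy waiting */
void csem_wait(csem *s) {
    pthread_mutex_lock(&s->lock);
    if (--s->value < 0) {
        do {
            pthread_cond_wait(&s->cond, &s->lock);
        } while (s->wakeups < 1);
        s->wakeups--;
    }
    pthread_mutex_unlock(&s->lock);
}

/* signal (V): increment; if anyone was waiting, wake one up */
void csem_signal(csem *s) {
    pthread_mutex_lock(&s->lock);
    if (++s->value <= 0) {
        s->wakeups++;
        pthread_cond_signal(&s->cond);
    }
    pthread_mutex_unlock(&s->lock);
}
```

Compile with `cc -pthread`. The `wakeups` counter ensures a woken thread proceeds exactly once even if the condition variable wakes spuriously.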

Describe the Round-Robin scheduling algorithm

Similar to first-come-first-serve, but each process is given a unit of time called the time quantum (usually between 10 and 100 milliseconds); the CPU is handed to the next process when the current process's time quantum expires, regardless of whether the process is finished. If the process is interrupted, it is preempted and put back in the ready queue. Depending on the size of the time quantum, the RR policy can look like a first-come-first-serve policy or like processor sharing, which creates the appearance that each process has its own processor because the CPU switches from one process to the next so quickly.

Turnaround time depends on the size of the time quantum: the average turnaround time does not always improve as the quantum grows, but it does improve when most processes finish their CPU burst in a single time quantum. A rule of thumb is that 80% of CPU bursts should be shorter than the time quantum in order to keep context switches low.

Describe the multilevel queue scheduling

It is a scheduling method that separates processes into priority queues based on process type, in this order:
1) system processes
2) interactive processes
3) interactive editing processes
4) batch processes
5) student processes

Each queue also has its own scheduling algorithm, so system processes could use FCFS while student processes use RR. Each queue has absolute priority over lower-priority queues, but it is possible to time-slice among queues so each queue gets a certain portion of CPU time.

Describe a multilevel feedback queue scheduler

Works similarly to the multilevel queue scheduler, but can move processes between queues based on their CPU bursts. Its parameters are:
the number of queues
the scheduling algorithm for each queue
the method used to determine when to upgrade a process
the method used to determine when to demote a process
the method used to determine which queue a process will enter when it needs service

It is by definition the most general CPU-scheduling algorithm, but it is also the most complex.

Describe thread scheduling

User-level threads: managed by a thread library that the kernel is unaware of; each is mapped to an associated kernel-level thread and runs on an available lightweight process (LWP). This is called process-contention scope (PCS), since threads belonging to the same process compete for the CPU. Priority is set by the programmer and not adjusted by the thread library.

Kernel-level threads: scheduled by the operating system, which uses system-contention scope (SCS) to schedule kernel threads onto a CPU; competition for the CPU takes place among all threads in the system. PCS is typically done according to priority.

Describe Pthread scheduling

The POSIX Pthread API allows specifying either PCS or SCS at thread creation: PTHREAD_SCOPE_PROCESS schedules threads using PCS, while PTHREAD_SCOPE_SYSTEM uses SCS. On systems with the many-to-many model, the PTHREAD_SCOPE_PROCESS policy schedules user-level threads onto available LWPs, whereas PTHREAD_SCOPE_SYSTEM creates and binds an LWP for each user-level thread. Linux and Mac OS X only allow PTHREAD_SCOPE_SYSTEM.

Describe how multiple-processor scheduling works

Multiple processes are balanced across multiple processors through load sharing. One approach is asymmetric multiprocessing, where one processor acts as the master: it does all the scheduling, controls all activities, and runs all kernel code, while the remaining processors run only user code. In symmetric multiprocessing (SMP), each processor schedules itself, using either a common ready queue or a separate ready queue per processor. Almost all modern OSes support SMP.

Processors contain cache memory, and if a process were switched from one processor to another, the cached data would be invalidated and have to be reloaded. SMP tries to keep a process on the same processor through processor affinity. Soft affinity attempts to keep processes on the same processor but makes no guarantees. Hard affinity specifies that a process is not to be moved between processors.

Load balancing tries to spread the work across processors so that none sits idle while another is overloaded. Push migration uses a separate process that runs periodically and moves processes from heavily loaded processors onto less loaded ones. Pull migration makes idle processors take processes from busy ones. Push and pull migration are not mutually exclusive, and can counteract processor affinity if not carefully managed.

To remedy stalls (e.g., a thread waiting on memory), modern hardware designs implement multithreaded processor cores in which two or more hardware threads are assigned to each core, so that if one hardware thread stalls, the core can switch to another.

Metadata
Cheatographers: makahoshi1, Yemariam
Languages: English
Published: 21st October, 2015
Rated: 5 stars based on 16 ratings