Operating Systems Finals Revision

The document provides a comprehensive overview of operating systems, focusing on processes, threads, synchronization, scheduling policies, memory management, and virtual machines. Key concepts include thread creation and termination, task classification, concurrency challenges, and semaphore operations. It also discusses memory mapping, virtual memory, and the role of hypervisors in virtualization.


Lecture 4
Once a process is defined, its user code is executed by creating a thread
A thread is the basic unit of a process, defining an execution path of a task to be passed to the CPU to execute its instructions
 The OS can create multiple threads that all share the process's resources

TO CREATE A THREAD
int pthread_create(pthread_t *thread, const pthread_attr_t *attr,
void *(*start_routine)(void *), void *arg);
thread: pointer through which the ID of the new thread is returned
attr: pointer to a structure used to define thread attributes (NULL for defaults)
start_routine: pointer to the subroutine that is executed by the thread
arg: pointer to void containing the argument passed to start_routine

TO TERMINATE A THREAD
void pthread_exit(void *retval);
retval: parameter that stores the return status of the terminated thread
TO WAIT FOR TERMINATION
int pthread_join(pthread_t th, void **thread_return);
th: thread ID of the thread the current thread waits for
thread_return: pointer to the location where the exit status of the thread th is stored

TO GET THREAD ID OF CURRENT THREAD


pthread_t pthread_self(void);

Task classification
Aperiodic: unpredictable one-shot tasks / soft or no deadlines
Sporadic: unpredictable one-shot tasks / hard deadlines
Periodic: tasks repeated after a period of time / hard deadlines

Calculations
Release time: time instant at which a task is ready to be executed
Completion time: time by which execution of the task must be completed
Relative deadline = completion time - release time
Execution time = time required for the processor to execute the task
Response time = end of execution - release time
Wait time = in-between delays in the execution of a task caused by the scheduler
Timeslice = time allocated to a process to execute on the CPU
Useful CPU work = ei / (ei + wi)
OS must:
1. Preempt  interrupt and save current context
2. Schedule  run scheduler to choose the next to be executed
task
3. Dispatch  dispatch task and switch into its context

Ready-queue
1. Ready Queue
2. Job pool: all processes in the system
3. Device Queue

Preemptive scheduling
The currently running process may be interrupted and moved to
the ready state by the OS

Short-term scheduling policies


 First Come First Served (FCFS)
 Shortest Job First (SJF)/Shortest Process Next (SPN)
 Shortest Remaining Time (SRT)
 Highest Response Rate Next (HRRN): Ratio = (wi + ei) / ei
Lecture 6

OS manages processes and threads

Concurrent: different parts of the program conceptually execute simultaneously on a single piece of hardware
Parallel: different parts of the program physically execute simultaneously on distinct hardware

Multiprogramming: multiple processes within a uniprocessor system
Multiprocessing: multiple processes executing on multiple cores
Distributed processing: multiple processes executing on multiple distributed computer systems

Short Term Scheduling Policies

Normal schedulers
 First Come First Served
 Shortest Job First
 Shortest Remaining Time
 Highest Response Rate Next

Clock-driven schedulers: Round Robin

Normal and clock-driven schedulers: Mutex, Semaphores, Monitors, Condition Variables

Priority-driven schedulers
 Static priority
 Multilevel Feedback Queue

Clock/Priority-driven schedulers: Multilevel Queue Scheduling with Priority Levels

Clock and Priority-driven schedulers:
1. Non-preemptive Critical Section Protocol
2. Priority Inheritance Protocol
a. When a lower priority task blocks a higher priority task,
it inherits the priority of the blocked higher priority task
b. After execution the task returns to its original priority
level
3. Priority Ceiling Protocol

Concurrency Challenges
 Race conditions
o Occur when the outcome of a program depends on the
interleaving of execution of multiple threads or
processes
 Deadlocks
o When 2 or more threads or processes are blocked
indefinitely
 Livelocks
o When multiple threads/processes continuously change
their states in response to each other’s actions,
without making progress.
 Starvation
o When thread/process is unable to access a resource it
needs for an extended period due to resource
contention
 Priority Inversion
o When low-priority thread holds a resource required by
a high-priority thread, causing the high-priority thread
to wait longer than expected.

Critical section= time between locking and unlocking

Lecture 8

Process Synchronization
Coordination of execution of multiple processes in a multi-process
system to ensure they access shared resources in a
controlled/predictable manner
Aims to resolve problem of race conditions

Types of processes
1. Independent
2. Cooperative (where process synchronization problems arise)

Concurrency challenges P2
 Race conditions: Multiple threads read/write the same
variable
 Critical sections: Segment of code that is executed by
multiple concurrent threads/processes
 Mutual exclusion: Property of process synchronization that
states that “no 2 processes can exist in the critical section at
any given point of time”
 Deadlocks
 Livelocks
 Starvation
 Priority Inversion: when low priority thread holds a
resource required by high priority thread

Semaphores
Primitives for synchronization that are used to manage how many
processes can use a single shared resource
Keep track of a count that shows how many resources are
available

Advantages:
1. Effective process synchronization
2. Flexible resource management
3. Simple and frequently used

Disadvantages:
1. Implementation is difficult
2. Potential deadlock and livelock
3. Synchronization overhead

Operations:
1. sem_init() : Initialization
a. int sem_init(sem_t *sem, int pshared, unsigned int value);
b. returns 0 on success
c. pshared = 0: the semaphore can be used only by threads within this process
d. pshared != 0: the semaphore can be shared with other processes
2. sem_wait() : Wait
a. int sem_wait(sem_t *sem);
b. returns 0 on success
c. the wait operation decrements the semaphore count, commonly referred to as the "down" operation
3. sem_post() : Signal
a. int sem_post(sem_t *sem);
b. returns 0 on success
c. the signal operation increments the semaphore count, also referred to as the "up" operation

Semaphore Types:
1. Counting
2. Binary
Producer/Consumer problem
 Producer generating some type of data and placing in buffer
 Consumer taking items out of buffer and using them one at a
time
 Producer should not add elements if buffer is full
 Consumer should not remove elements if buffer is empty
 Assume: buffer infinite: producer can add to buffer at any
time

MUTEX
Special type of Binary Semaphore
Lock operation: pthread_mutex_lock()
Unlock operation: pthread_mutex_unlock()

Lecture 9/10

Memory mapping and allocation


 Data segment
o Data  nonzero initialized global and static data
o .bss  zero initialized/uninitialized global and static data
o Rodata  constant data
o Heap  dynamic data at runtime
o Stack  temporary data like local variables
 Code segment
o Intvects  interrupts
o Text  read-only data
o Const  text
o Cinit  boot loader
o Pinit
o Unused flash

The virtual memory
 Virtual, as the addresses do not have to correspond to actual locations in physical memory
 It is not necessary that all the data stored in the virtual memory has a correspondence in physical memory, and vice versa
 Main memory is abstractly composed of the OS space and user space
 PCB is stored in the OS space
 User processes are stored in the user space
 Secondary memory (disk) holds the VM
 Main physical memory holds the PCB
 PCB carries information about the VM
 PCB is moved to the cache before going to the CPU
 At the end, the CPU has information about the process address space
 ISA dictates the size of the VM

The goal, when running the process, is to know how to map a process address within the loaded VM to an actual location in memory
OS should be able to:
1. Allocate physical memory
2. Arbitrate how it is being addressed

Memory management mechanisms should satisfy (achieved using HW support):
1. Relocation
2. Sharing
3. Protection
4. Logical/Physical organization

The CPU contains a Memory Management Unit (MMU). The CPU issues virtual addresses to the MMU, and the MMU is responsible for converting them into physical addresses.

Most MMUs incorporate a small cache of virtual-to-physical address translations (the Translation Lookaside Buffer, TLB).

Fixed partitioning: memory is divided into a number of predefined-size frames
1. Equal-size partitions
2. Unequal-size partitions

Dynamic partitioning: memory starts empty, and partition sizes are determined based on the loaded process size

What is the VM?
 Storage allocation scheme in which secondary memory can be addressed as though it were part of main memory
 Size of the virtual storage is limited by the addressing scheme of the computer system (ISA) and by the amount of secondary memory available

2 main mechanisms used, based on partitioning:
1. Page-based memory management (fixed)
a. Allocation  pages mapped to page frames
b. Arbitration  page table
2. Segment-based memory management (dynamic)
a. Allocation  segments
b. Arbitration  segment registers

A virtual address consists of a page number and an offset (which determines the position within the mapped frame)

Page table fields
 Page tables are stored inside main memory
 On a context switch, the process page table is loaded into the MMU, where the arbitration takes place
 A single page table entry will be stored in CPU registers
 An entry should have a word size equivalent to the CPU register
Page replacement policies
 A page replacement algorithm is needed
o The VM manager uses the algorithm to select a page in memory for replacement
o It accesses the page table entry of the selected page to mark it "not present"
o It initiates a page-out operation for it if the modified bit of its page entry indicates that it is a dirty page
o FIFO
o Optimal: replace the page that would not be used for the longest duration of time in the future
o Least Recently Used (LRU)
 Replacement is necessary when a fault occurs and there are no free page frames in memory
 Another fault could occur if the replaced page is referenced again
 It is important to replace a page that is not likely to be referenced in the immediate future

Lecture 11

Virtual Machine
 Compute resource that uses software instead of a physical
computer to run programs/deploy apps
 Virtual “guest” machine runs on physical “host” machine
 Each VM runs its own OS and functions separately from the other VMs, even if they are all running on the same host
 Commonly used with servers, where there is large physical hardware and many OSs are required

Hypervisor
 Virtual Machine Monitor
 Software that creates/ runs virtual machines
 Allows one host computer to support multiple VMs by
virtually sharing its resources
 Virtualization allows concurrent execution of multiple OSs and their apps on the same physical machine

Characteristics:
1. Provides environment essentially identical with original
machine
2. Minor decrease in speed
3. VMM is in complete control of the system resources

Types:
1. Bare metal: acts like a lightweight OS and runs directly on the host's hardware
2. Hosted: runs as a software layer on an OS like other
computer programs
VM is configured with characteristics of a real machine:
1. Number of processors
2. RAM amount
3. HDD Network ports
4. OS/Apps

Types of virtualizations:
1. Hardware
2. Software
3. Storage
4. Network
5. Desktop

Tutorial 6/7

Counting semaphore

struct semaphore {
    int count;
    queueType queue;
};

void semWait(semaphore s) {
    s.count--;
    if (s.count < 0) {
        /* place this process in s.queue */
        /* block this process */
    }
}

void semSignal(semaphore s) {
    s.count++;
    if (s.count <= 0) {
        /* remove a process P from s.queue and place it on ready list */
    }
}

Binary Semaphore

struct binary_semaphore {
    enum {zero, one} value;
    queueType queue;
};

void semWaitB(binary_semaphore s) {
    if (s.value == one)
        s.value = zero;
    else {
        /* place this process in s.queue */
        /* block this process */
    }
}

void semSignalB(binary_semaphore s) {
    if (s.queue.isEmpty())
        s.value = one;
    else {
        /* remove a process P from s.queue and place it on ready list */
    }
}

MUTEX

struct mutex {
    enum {zero, one} value;
    queueType queue;
    int ownerID;
};

void semWaitB(mutex m) {
    if (m.value == one) {
        m.ownerID = getProcessID();
        m.value = zero;
    } else {
        /* place this process in m.queue */
        /* block this process */
    }
}

void semSignalB(mutex m) {
    if (m.ownerID == getProcessID()) {
        if (m.queue.isEmpty())
            m.value = one;
        else {
            /* remove a process P from m.queue and place it on ready list */
            /* update ownerID to be equal to process P's ID */
        }
    }
}

Tutorial 8

Monitor
 Programming language construct that provides equivalent
functionality to that of semaphores and that is easier to
control
 Process enters monitor by invoking one of its procedures
 One process executing in the monitor at a time
 Monitor supports synchronization by the use of condition
variables (accessible only within the monitor)
cwait(c): Suspend execution of calling process on condition c
csignal(c): Resume execution of some process blocked after cwait
on the same condition

Tutorial 9

Using the following page table, give the physical address corresponding to each of the following virtual addresses:
a) 20
b) 4100
c) 8300
Method:
1. The page size is 4096
2. Divide the virtual address by the page size to get the virtual page number
3. Offset = apply the modulo operator to get the remainder
4. Given the virtual page number, go to the corresponding physical page and append the calculated offset to get the physical mapping in bytes
