
Unit-4 CPU Scheduling and Algorithms

Chapter Outcomes:
• Justify the need and objectives of job scheduling with a relevant example.
• Explain, with an example, the procedure for allocating the CPU to a given process using the specified OS.
• Calculate the turnaround time and average waiting time for a given scheduling algorithm.
• Explain the functioning of a given necessary condition leading to deadlock.

Learning Objectives:
• To understand Basic Concept of CPU Scheduling
• To study Types of Scheduling
• To learn Types of Scheduling Algorithms
• To study Basic Concepts of Deadlock
Scheduling Concept
• Scheduling is an important function of an operating system. The process scheduling is the
activity of the process manager that handles the removal of the running process from the
CPU and the selection of another process on the basis of a particular strategy.
• The scheduler is the kernel component (module/program) responsible for deciding which program should be executed on the CPU. A scheduler selects the job/task that is to be submitted next for execution.
• When it receives control, the scheduler suspends the currently running program and gives the CPU to another program. The function of deciding which program should be given the CPU, and for how long, is called scheduling.
Scheduling Model
• CPU scheduling is the process of selecting a process from the ready queue and assigning the CPU to it. A CPU scheduling algorithm decides which of the processes in the ready queue is to be allocated the CPU.

Every process in OS that requests CPU service carries out the following sequence of actions:
• 1. In the first action, it joins the ready queue and waits for CPU service.
• 2. In the second action, it executes (receives CPU service) for the duration of the current CPU burst or for the duration of the time slice (timeout).
• 3. In the third action, it joins the I/O queue to wait for I/O service, or returns to the ready queue to wait for more CPU service.
• 4. In the fourth action, it terminates and exits if service is completed, i.e., if there are no more CPU or I/O bursts; if more service is required, it returns to the ready queue to wait for more CPU service.
Scheduling Model
The CPU scheduler is the part of the operating system that selects the next process to which
the CPU will be allocated, de-allocates the CPU from the process currently executing, and
allocates the CPU to the newly selected process.
CPU and I/O Burst Cycles
• CPU scheduling is greatly affected by how a
process behaves during its execution. Almost all
the processes continue to switch between CPU
(for processing) and I/O devices (for performing
I/O) during their execution.
• Processes alternate back and forth between
these two states. Process execution begins with
a CPU burst. It is followed by an I/O burst, which
is followed by another CPU burst, then another
I/O burst, and so on. Eventually, the last CPU
burst ends with a system request to terminate
execution, rather than with another I/O burst.
CPU and I/O Burst Cycles
CPU Bound Process: A process which spends more time in computation (with the CPU) and very rarely with the I/O devices is called a CPU-bound process.

I/O Bound Process: A process which spends more time on I/O operations than on computation (time spent with the CPU) is called an I/O-bound process.
Scheduling Criteria
CPU Utilization: CPU utilization is defined as the percentage of time the CPU is busy executing processes. For higher utilization, the CPU must be kept as busy as possible; that is, there must be some process running at all times. CPU utilization may range from 0 to 100 percent. In a real system it should range from 40 percent to 90 percent.

Throughput: If the CPU is busy executing processes, then work is being done. One measure of work is the number of processes completed per time unit, called throughput. For long processes, this rate may be one process per hour; for short transactions, throughput might be ten processes per second. Throughput is defined as the total number of processes that a system can execute per unit of time.
Scheduling Criteria
Turnaround Time (TAT): It is the difference between the time a process enters the system and
the time it exits the system. From the point of view of a particular process, the important
criterion is how long it takes to execute that process. The interval from the time of submission
of a process to the time of completion is the turnaround time. Turnaround time is the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O. Turnaround time is defined as the amount of time that elapses from the creation of a process to its termination.

Waiting Time: The CPU scheduling algorithm does not affect the amount of time during which a process executes or does I/O; it affects only the amount of time that a process spends waiting in the ready queue. Waiting time is the sum of the periods spent waiting in the ready queue. Waiting time is defined as the time spent by a process while waiting in the ready queue.
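
The worked examples later in this unit compute these two criteria from each process's completion time (CT), arrival time (AT), and burst time (BT). The following relations, assumed here as the usual convention, follow directly from the definitions above:

Turnaround Time (TAT) = CT - AT
Waiting Time (WT) = TAT - BT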
Scheduling Criteria
Response Time: Response time is defined as the time elapsed between the moment a user initiates a request and the instant the system starts responding to that request. In an interactive system, TAT (turnaround time) may not be the best criterion: often a process can produce some output early and continue computing new results while previous results are being output to the user. Thus another measure is the time from the submission of a request until the first response is produced. This measure, called response time, is the amount of time it takes to start responding, not the time it takes to output the response. The turnaround time is generally limited by the speed of the output device.

Balanced Utilization: Balanced utilization is defined as the percentage of time all the system resources are busy. It considers not only CPU utilization but also the utilization of I/O devices, memory, and all other resources. To get more work done by the system, the CPU and I/O devices must be kept busy simultaneously. For this, it is desirable to load a mixture of CPU-bound and I/O-bound processes in memory.
Preemptive Scheduling
• Pre-emptive scheduling allows a higher-priority process to replace a currently running process, even if its time slice has not expired and it has not requested any I/O.
• If a higher-priority process enters the system, the currently running process is stopped and the CPU transfers control to the higher-priority process.
• The currently running process may be interrupted and moved to the ready state by the operating system. Windows 95 introduced pre-emptive scheduling, and all subsequent versions of the Windows operating system have used pre-emptive scheduling.
• The Mac OS X operating system for the Macintosh also uses pre-emptive scheduling.
• Pre-emptive scheduling algorithms are priority based: the scheduler may preempt a low-priority running process at any time when a high-priority process enters the ready state.
• The advantage of pre-emptive scheduling algorithms is that they allow real multiprogramming.
• The disadvantages of pre-emptive scheduling algorithms are that they are complex and can lead the system to race conditions.
Non-Preemptive Scheduling
• In non-preemptive scheduling, once the CPU is assigned to a process, it is not released until that process completes. A running process keeps control of the CPU and its allocated resources until its normal termination.
• In other words, once the CPU has been allocated to a process, the process keeps the CPU until it releases the CPU either by terminating or by switching to the waiting state. This scheduling method was used by Microsoft Windows 3.x.
• In the Windows 3.x case, once a process is in the running state, it continues to execute until it terminates or blocks itself to wait for I/O or for some operating system service; even if a higher-priority process enters the system, the running process cannot be forced to release control.
Non-Preemptive Scheduling
• Non-preemptive algorithms are designed so that once a process enters the running state, it is not preempted until it completes.
• The advantages of non-preemptive scheduling algorithms are that they are simple and cannot lead the system to race conditions.
• The disadvantage of non-preemptive scheduling algorithms is that they do not allow real multiprogramming.
Difference between pre-emptive and Non-Pre-emptive algorithm
• Preemptive: Even if the CPU has been allocated to one process, it can be preempted and given to another process that has a higher priority or meets some other criterion. Non-preemptive: Once the CPU has been allocated to a process, the process keeps the CPU until it releases it, either by terminating or by switching to the waiting state.
• Preemptive: Throughput is lower. Non-preemptive: Throughput is higher.
• Preemptive: Suitable for real-time systems. Non-preemptive: Not suitable for real-time systems.
• Preemptive: Algorithm design is complex. Non-preemptive: Algorithm design is simple.
• Preemptive: Higher-priority processes are scheduled first. Non-preemptive: Processes of any priority can get scheduled.
• Preemptive: Does not treat all processes as equal. Non-preemptive: Treats all processes as equal.
• Preemptive: Has high overhead. Non-preemptive: Has low overhead.
• Preemptive: Examples are Round Robin and priority algorithms. Non-preemptive: An example is the FCFS algorithm.
• Preemptive: A running process may be interrupted by another process that needs to execute. Non-preemptive: The processor executes a process until termination without any interruption.
• Preemptive: System resources are used efficiently. Non-preemptive: System resources are not used as efficiently.
• Circumstances for preemptive scheduling: a process switches from the running to the ready state, or from the waiting to the ready state. Circumstances for non-preemptive scheduling: a process switches from the running to the waiting state, or a process terminates.
Dispatcher
• The dispatcher is a component involved in CPU scheduling. The dispatcher is the module that actually gives control of the CPU to the process selected by the short-term scheduler.
• The module of the operating system that performs the function of setting up the execution of the selected process on the CPU is known as the dispatcher.
• The CPU scheduler only selects the process to be executed next on the CPU; it does not itself assign the CPU to the selected process.
The following functions are performed by the dispatcher:
1. Switching context (loading the registers of the selected process).
2. Switching the operating system to user mode.
3. Jumping to the proper location in the user program to restart that program.

The dispatcher needs to be as fast as possible, as it is run on every context switch. The time taken by the dispatcher to stop one process and start another running is called the dispatch latency.
First Come First Serve (FCFS) Scheduling Algorithm
• It is the simplest type of algorithm in which the process that requests the CPU first is
allocated the CPU first.
• In FCFS scheduling algorithm processes are scheduled in the order they are received. The
FCFS algorithm is easily implemented by using a queue data structure for the ready queue.
• Jobs are processed in the order of their arrival in the ready queue. It can be implemented with a First In First Out (FIFO) queue.
• The FCFS scheduling algorithm is non-pre-emptive. Once the CPU has been allocated to a process, that process keeps the CPU until it releases the CPU, either by terminating or by requesting I/O.
• It is not useful in time-sharing systems, because a process will hold the CPU until it finishes or changes its state to the waiting state.
• In FCFS, once a process is given the CPU, it keeps it until execution completes.
First Come First Serve (FCFS) Scheduling Algorithm
• Performance of FCFS is often very poor, because a process with a long CPU burst will hold up other processes. Moreover, it can reduce overall throughput, since I/O for processes in the waiting state may complete while a CPU-bound process is still running.
• The average waiting time under FCFS is not minimal, and it varies substantially if the processes' CPU burst times vary greatly.
First Come First Serve (FCFS) Scheduling Algorithm
Example 1: Consider four jobs scheduled for execution, all arriving at the same time. Find the average waiting time and the average turnaround time. (The job table and the resulting Gantt chart are not reproduced in this text version.)
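
Since the job table is not reproduced here, the following minimal Python sketch shows how an FCFS schedule and its averages would be computed. The process names and burst times are assumed purely for illustration:

```python
# FCFS scheduling sketch: all jobs arrive at time 0 and run in arrival order.
# The burst times below are assumed for illustration only.
jobs = [("P1", 5), ("P2", 24), ("P3", 16), ("P4", 10)]

clock = 0
total_wait = total_tat = 0
for name, burst in jobs:
    waiting = clock                 # time spent waiting in the ready queue
    clock += burst                  # runs to completion (non-preemptive)
    turnaround = clock              # completion time minus arrival time (arrival = 0)
    total_wait += waiting
    total_tat += turnaround
    print(f"{name}: waiting={waiting}, turnaround={turnaround}")

print(f"Average waiting time    = {total_wait / len(jobs):.2f}")
print(f"Average turnaround time = {total_tat / len(jobs):.2f}")
```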
First Come First Serve (FCFS) Scheduling Algorithm
Example 2: Consider four jobs scheduled for execution. Find the average waiting time and the average turnaround time. (The job table and Gantt chart are not reproduced in this text version.)
First Come First Serve (FCFS) Scheduling Algorithm
Advantages of FCFS Scheduling Algorithm:
• FCFS is easy to understand and implement, as processes are simply added at the end of the queue and removed from the front. No process in the middle of the queue ever needs to be accessed.
• FCFS is well suited for batch systems, where the longer time periods for each process are often acceptable.

Disadvantages of FCFS Scheduling Algorithm:


• Average waiting time is very large.
• FCFS is not an attractive alternative on its own for a single-processor system.
• Another difficulty is that FCFS tends to favor processor-bound processes over I/O-bound processes, which may result in inefficient use of both the processor and the I/O devices.
Shortest Job First (SJF) Scheduling Algorithm
• The SJF scheduling algorithm, also known as the Shortest Process Next (SPN) or Shortest Request Next (SRN) scheduling algorithm, schedules processes according to the length of the CPU burst they require.
• In the SJF algorithm, the process with the shortest expected processing time is assigned to the CPU; hence the name shortest job first.
• Jobs or processes are processed in ascending order of their CPU burst times. Each time, the job with the smallest CPU burst time is selected from the ready queue.
• If two processes have the same CPU burst time, they are scheduled according to the FCFS algorithm. The performance of this algorithm is very good in comparison to FCFS.
• The SJF scheduling algorithm is provably optimal: it gives the minimal average waiting time for a given set of processes. Moving a short process ahead of a long one decreases the waiting time of the short process more than it increases the waiting time of the long process; consequently, the average waiting time decreases.
Shortest Job First (SJF) Scheduling Algorithm
• In SJF scheduling algorithm the process in the ready queue with the shortest expected
processing time is assigned to CPU next.

SJF can be evaluated in two different manners:


• Non-pre-emptive SJF: In this method, if the CPU is executing a job, it is not stopped before completion.
• Pre-emptive SJF: In this method, while the CPU is executing a job, if a new job arrives with a smaller burst time, the current job is pre-empted (sent back to the ready queue) and the new job is executed. It is also called Shortest Remaining Time First (SRTF).
Shortest Job First (SJF) Scheduling Algorithm
Example: Consider the following table; find the turnaround time and waiting time for each process. (The process table is not reproduced in this text version.)
Non-preemptive SJF and preemptive SJF (SRTF) solutions: the Gantt charts are not reproduced in this text version.
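
Since the process table is missing, here is a minimal Python sketch of the preemptive variant (SRTF), simulated one time unit at a time; the arrival and burst times are assumed for illustration only:

```python
# Preemptive SJF (SRTF) sketch: at each time unit, run the arrived process
# with the smallest remaining burst time. Arrival/burst values are assumed.
procs = {"P1": (0, 8), "P2": (1, 4), "P3": (2, 9), "P4": (3, 5)}  # name: (arrival, burst)

remaining = {p: burst for p, (_, burst) in procs.items()}
completion = {}
t = 0
while remaining:
    ready = [p for p in remaining if procs[p][0] <= t]
    if not ready:                               # CPU idle until the next arrival
        t += 1
        continue
    p = min(ready, key=lambda q: remaining[q])  # shortest remaining time wins
    remaining[p] -= 1
    t += 1
    if remaining[p] == 0:
        completion[p] = t
        del remaining[p]

for p, (arrival, burst) in procs.items():
    tat = completion[p] - arrival
    print(f"{p}: turnaround={tat}, waiting={tat - burst}")
```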
Shortest Job First (SJF) Scheduling Algorithm
Advantages of SJF Scheduling Algorithm:
• Overall performance improves significantly in terms of response time.
• The SJF algorithm reduces the variance in waiting and turnaround times.

Disadvantages of SJF Scheduling Algorithm:


• There is a risk of starvation of longer processes.
• It is very difficult to know the length of next CPU burst.
Priority Scheduling Algorithm
• A priority is associated with each process and the CPU is allocated to the process with the
highest priority, hence it is called Priority Scheduling.
• In priority scheduling algorithm, a priority is associated with each process and the
scheduler always picks up the highest priority process for execution from the ready queue.
Equal priority processes are scheduled in FCFS order.
• Priority scheduling can be either pre-emptive or non-pre-emptive. When a process enters the ready queue, its priority is compared with the priority of the currently running process.
• In a pre-emptive priority scheduling algorithm, if the priority of a newly arrived process is higher than the priority of the currently running process, the running process is interrupted and returned to the ready queue, and the higher-priority job starts executing.
Priority Scheduling Algorithm
Example 1: Consider the following set of processes, assumed to have arrived at time 0 in the order P1, P2, P3, ..., P5, with the length of the CPU burst time given in milliseconds. Calculate the average turnaround time and waiting time. (The burst-time/priority table is not reproduced in this text version.)
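
As the table itself is not reproduced, the sketch below shows non-preemptive priority scheduling in Python. The burst times and priorities are assumed for illustration, with a smaller number meaning a higher priority; since Python's sort is stable, equal-priority processes keep their FCFS order:

```python
# Non-preemptive priority scheduling: run the highest-priority process first.
# All processes arrive at time 0; a lower number means a higher priority.
# The (name, burst, priority) data below is assumed for illustration only.
procs = [("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4), ("P4", 1, 5), ("P5", 5, 2)]

clock = total_wait = total_tat = 0
for name, burst, prio in sorted(procs, key=lambda p: p[2]):  # priority order
    waiting = clock
    clock += burst
    total_wait += waiting
    total_tat += clock            # turnaround = completion time (arrival = 0)
    print(f"{name} (priority {prio}): waiting={waiting}, turnaround={clock}")

print(f"Average waiting time    = {total_wait / len(procs):.2f}")
print(f"Average turnaround time = {total_tat / len(procs):.2f}")
```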
Priority Scheduling Algorithm
Advantages of Priority Scheduling Algorithm:
• It is simple in use.
• Important processes are never made to wait because of the execution of less important
processes.
• Suitable for applications with varying time and resource requirements.

Disadvantages of Priority Scheduling Algorithm:


• It suffers from the problem of starvation of lower-priority processes, since the continuous arrival of higher-priority processes can prevent lower-priority processes from acquiring the CPU indefinitely.
• Priority scheduling can leave some low-priority processes waiting indefinitely for the CPU.
Round Robin (RR) Scheduling Algorithm
• The Round Robin (RR) scheduling algorithm is designed especially for time sharing systems.
RR is the pre-emptive process scheduling algorithm.
• Round robin scheduling is a preemptive version of First Come First Serve (FCFS) scheduling. Processes are dispatched in a First In First Out (FIFO) sequence, but each process is allowed to run for only a limited amount of time.
• It is similar to FCFS scheduling but pre-emption is added to switch between processes. A
small unit of time called a Time Quantum or Time Slice is defined.
• A time quantum is generally from 10 to 100 milliseconds. The ready queue is treated as a
circular queue. The CPU scheduler goes around the ready queue, allocating CPU to each
process for a time interval of up to 1 time quantum.
• In RR scheduling processes are dispatched FIFO but are given a limited amount of processor
time called a time slice or a quantum.
• If a process does not complete before its quantum expires, the system preempts it and
gives the processor to the next waiting process. The system then places the preempted
process at the back of the ready queue.
Round Robin (RR) Scheduling Algorithm
• Process P1 is dispatched to a processor, where it executes either until completion, in which case it exits the system, or until its time slice expires, at which point it is preempted and placed at the tail of the ready queue. The scheduler then dispatches process P2.
• To implement RR scheduling, we keep the ready queue as a FIFO queue of processes. New processes are added to the tail of the ready queue. The average waiting time under the RR policy, however, is often quite long.
Round Robin (RR) Scheduling Algorithm
Example 1: Consider the following set of processes that arrive at time 0, with the length of the CPU burst time given in milliseconds and a time quantum of 4 ms. (The burst-time table and Gantt chart are not reproduced in this text version.)
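
Because the burst-time table is missing, the Python sketch below simulates Round Robin with the stated 4 ms quantum; the three burst times are assumed for illustration only:

```python
from collections import deque

QUANTUM = 4
# All processes arrive at time 0; the burst times are assumed for illustration.
bursts = {"P1": 24, "P2": 3, "P3": 3}

queue = deque(bursts)             # FIFO ready queue of process names
remaining = dict(bursts)
completion = {}
t = 0
while queue:
    p = queue.popleft()
    run = min(QUANTUM, remaining[p])
    t += run
    remaining[p] -= run
    if remaining[p] > 0:
        queue.append(p)           # quantum expired: back of the ready queue
    else:
        completion[p] = t

for p, burst in bursts.items():
    tat = completion[p]           # arrival time is 0
    print(f"{p}: turnaround={tat}, waiting={tat - burst}")
```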
Round Robin (RR) Scheduling Algorithm
Advantages of RR Scheduling Algorithm:
• Round robin is particularly effective in a general purpose time sharing system or
transaction processing system.
• It is efficient for time sharing systems where the CPU time is divided among the competing
processes.
• RR increases the fairness among the processes.
• In RR scheduling algorithm overhead on processor is low.

Disadvantages of RR Scheduling Algorithm:


• In RR, processes (even short ones) may take a long time to execute, which decreases system throughput.
• RR requires some extra hardware support, such as a timer, to cause an interrupt after each time-out.
• In RR scheduling, care must be taken in choosing the quantum value.
• Throughput in the RR scheduling algorithm is low if the time quantum is too small.
Multilevel Priority Queue (MLQ) Scheduling Algorithm
• Multilevel queue scheduling is based on response-time requirements: some processes require a quick response from the processor, while other processes can wait.
• A multilevel queue scheduling algorithm partitions the ready queue into separate queues. Processes are permanently assigned to one queue, depending on properties such as memory size, process type, or process priority. Each queue follows a separate scheduling algorithm.
• In multilevel queue scheduling, processes are classified into different groups such as system processes, interactive processes, interactive editing processes, batch processes, and user processes.
• The interactive processes are known as foreground processes and the batch processes are known as background processes. These two types of processes have different response-time requirements and so may have different scheduling needs.
Multilevel Priority Queue (MLQ) Scheduling Algorithm

Figure: Multilevel priority queue scheduling, with a separate scheduling algorithm for each queue (diagram not reproduced in this text version).
Multilevel Priority Queue (MLQ) Scheduling Algorithm
Advantages of MLQ Scheduling Algorithm:
• In MLQ, processes are permanently assigned to their respective queues and do not move between queues. This results in low scheduling overhead.
• In MLQ, one can apply different scheduling algorithms to different classes of processes.
• Processes that cannot all be handled sensibly in one single queue can be placed in different queues, each scheduled appropriately.

Disadvantages of MLQ Scheduling Algorithm:
• Processes in the lower-priority queues may starve for the CPU if processes keep arriving in the higher-priority queues.
• In MLQ, a process cannot move from one queue to another.
Multilevel Feedback Queue (MLFQ) Scheduling Algorithm
• The MLFQ scheduling also known as multilevel adaptive
scheduling is an improved version of multilevel queue
scheduling algorithm.
• In multilevel queue scheduling, processes cannot move from one queue to another because they do not change their foreground or background nature. In multilevel feedback queue scheduling, however, processes are not permanently assigned to a queue on entry to the system.
• Instead, they are allowed to move between queues. The
idea is to separate processes with different CPU burst
characteristics. If a process uses too much CPU time, it
will be moved to a lower priority queue.
• Similarly, a process that waits too long in a low priority
queue will be moved to a higher priority queue. This
form of aging prevents starvation.
Multilevel Feedback Queue (MLFQ) Scheduling Algorithm
A multilevel feedback queue scheduler is defined by the following parameters:
• The number of queues.
• The scheduling algorithm for each queue.
• The method used to determine when to upgrade a process to a higher priority queue.
• The method used to determine when to demote a process to a lower priority queue.
• The method used to determine which queue a process will enter when that process needs
service.
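
As a concrete illustration of these parameters, here is a minimal Python sketch with three round-robin queues and demotion on quantum expiry. The number of queues, the quanta, the demotion rule, and the burst times are all assumptions made for this sketch (it also omits the upgrading/aging rule for brevity):

```python
from collections import deque

# Three-level feedback queue; higher levels get shorter quanta (assumed values).
QUANTA = [4, 8, 16]                        # quantum per level, level 0 = highest
queues = [deque(), deque(), deque()]
remaining = {"P1": 30, "P2": 6, "P3": 12}  # assumed burst times

for name in remaining:                     # new processes enter the top queue
    queues[0].append(name)

t = 0
while any(queues):
    level = next(i for i, q in enumerate(queues) if q)  # highest non-empty queue
    p = queues[level].popleft()
    run = min(QUANTA[level], remaining[p])
    t += run
    remaining[p] -= run
    if remaining[p] == 0:
        print(f"{p} finished at t={t}")
    else:                                  # used its full quantum: demote one level
        queues[min(level + 1, len(queues) - 1)].append(p)
```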
Multilevel Feedback Queue (MLFQ) Scheduling Algorithm
Advantages of Multilevel Feedback Queue (MLFQ) Scheduling Algorithm:
• The multilevel feedback queue scheduling speeds up the flow of task execution.
• It prevents starvation by moving a lower priority process to a higher priority queue if it has
been waiting for too long.
• It is fair to I/O bound (short) processes as these processes need not wait too long and are
executed quickly.
• It improves the overall performance of the system.

Disadvantages of Multilevel Feedback Queue (MLFQ) Scheduling Algorithm:


• The turnaround time for long processes may increase significantly.
• It is the most complex scheduling algorithm.
• In this algorithm, moving the processes between queues causes a number of context
switches which results in an increased overhead.
Deadlock
• Deadlock is a situation in which two or more processes become blocked and cannot proceed further because of their interdependency. In the real world, a deadlock can arise when two persons each wait for a phone call from the other.

• Deadlock is defined as, "a situation where a set of processes are blocked because each
process is holding a resource and waiting for another resource acquired by some other
process".

• Processes involved in a deadlock remain blocked permanently, and this affects OS performance indices such as throughput and resource efficiency.
Necessary Conditions to Deadlock
Mutual Exclusion: At least one resource is held in a
non-sharable mode, that is only one process at a
time can use the resource. If another process
requests that resource, the requesting process must
be delayed until the resource has been released.
Each resource is either currently assigned to exactly
one process or is available.

Hold and Wait: There must exist a process that is holding at least one resource and is waiting to acquire additional resources that are currently being held by other processes. A process holding resources granted earlier can request new resources.
Necessary Conditions to Deadlock
No Pre-emption: Resources cannot be pre-empted; i.e., a resource can be released only voluntarily by the process holding it, after that process has completed its task. Resources previously granted cannot be forcibly taken away from a process; they must be explicitly released by the process holding them.

Circular Wait: There exists a set {P0, P1, ..., Pn} of waiting processes such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, ..., Pn-1 is waiting for a resource held by Pn, and Pn is waiting for a resource held by P0. Thus there must be a circular chain of two or more processes, each of which is waiting for a resource held by the next member of the chain.
Deadlock Handling
A deadlock in operating system can be handled in following four different ways:

1. Adopt methods for avoiding the deadlock.
2. Prevent the deadlock from occurring (use a protocol).
3. Ignore the deadlock.
4. Allow the deadlock to occur, detect it, and recover from it.
Deadlock Prevention
• Eliminating Mutual Exclusion Condition
• Eliminating Hold and Wait Condition
• Eliminating No Preemption Condition
• Eliminating Circular Wait Condition

Deadlock Prevention
• A deadlock can be prevented by eliminating any one of the four necessary conditions for deadlock, although doing so can result in inefficient use of resources. Deadlock avoidance approaches ensure that deadlocks cannot arise in a system by not allowing the conditions for deadlock to hold simultaneously.
• Deadlock avoidance requires that the operating system be given information in advance regarding the resources a process will request and use. The operating system uses this information to schedule the allocation of resources so that no process waits indefinitely for a resource.
• Deadlock prevention prevents deadlocks by restraining how requests can be made. The restraints ensure that at least one of the necessary conditions for deadlock cannot occur and, hence, that a deadlock cannot arise.
Safe State
• A state is safe if the system can allocate all resources
requested by all processes without entering a
deadlock state.
• A state is safe if there exists a safe sequence of processes (P0, P1, P2, ..., Pn) such that, for each Pi, the resources that Pi may still request can be satisfied by the currently available resources plus the resources held by all processes Pj with j < i.
• If a safe sequence does not exist, then the system is in an unsafe state, which may lead to deadlock. (All safe states are deadlock-free, but not all unsafe states lead to deadlock.)

Bankers Algorithm
Dijkstra was the first person to propose an algorithm for deadlock avoidance, in 1965. It is known as the Banker's algorithm because it models the way a banking system might determine whether a loan can safely be granted.

The Banker's algorithm allows the operating system to know in advance, before a resource is allocated to a process, whether the allocation can lead to a deadlock (an "unsafe state") or whether deadlock can certainly be avoided (a "safe state").
Safety Algorithm
This algorithm is used to find out whether a system is in a safe state or not. It can be described as follows:
1. Let Work and Finish be vectors of length m and n, respectively.
   Initialize Work = Available and
   Finish[i] = false for i = 0, 1, ..., n - 1.
2. Find an index i such that both
   (i) Finish[i] == false
   (ii) Need_i ≤ Work
   If no such i exists, go to step 4.
3. Work = Work + Allocation_i;
   Finish[i] = true
   Go to step 2.
4. If Finish[i] == true for all i, then the system is in a safe state.

This algorithm may require on the order of m × n² operations to determine whether a state is safe.
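
A direct Python transcription of the safety algorithm above may help make the steps concrete; this is a sketch, with the Available, Allocation, and Need values supplied by the caller:

```python
def is_safe(available, allocation, need):
    """Banker's safety algorithm: return (is_safe, safe_sequence_or_None)."""
    n = len(allocation)                  # number of processes
    m = len(available)                   # number of resource types
    work = list(available)               # step 1: Work = Available
    finish = [False] * n                 # step 1: Finish[i] = false for all i
    sequence = []
    progress = True
    while progress:
        progress = False
        for i in range(n):
            # Step 2: find i with Finish[i] == false and Need_i <= Work.
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                # Step 3: Work = Work + Allocation_i; Finish[i] = true.
                work = [work[j] + allocation[i][j] for j in range(m)]
                finish[i] = True
                sequence.append(i)
                progress = True
    # Step 4: the state is safe iff Finish[i] == true for all i.
    return (True, sequence) if all(finish) else (False, None)
```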
Example
A system has five processes, P0 through P4, and three resource types A, B, and C. Resource type A has ten instances, resource type B has five instances, and resource type C has seven instances. At time T0 the resources are allocated to the processes as shown in the allocation table (not reproduced in this text version).

The Need matrix is defined as Max - Allocation.
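
Since the Allocation and Max tables are not reproduced here, the values below are the widely circulated textbook figures that match the stated instance counts (A = 10, B = 5, C = 7); treat them as assumed data. Feeding them to the is_safe() sketch from the previous slide:

```python
# Assumed textbook data for the five-process, three-resource example.
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
max_demand = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
available  = [3, 3, 2]   # total instances (10, 5, 7) minus allocated columns

# Need = Max - Allocation, as defined above.
need = [[max_demand[i][j] - allocation[i][j] for j in range(3)] for i in range(5)]

safe, seq = is_safe(available, allocation, need)
print("Safe state:", safe)
if safe:
    print("Safe sequence:", " -> ".join(f"P{i}" for i in seq))
# With these assumed values the state is safe, e.g. P1 -> P3 -> P4 -> P0 -> P2.
```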
Deadlock Recovery
Process Termination:
• Abort all deadlocked processes.
• Abort one process at a time until the deadlock cycle is eliminated.

Resource Preemption:
• Select a process for pre-emption.
• Roll back the process.
• Prevent starvation.
Thank You

Vijay Patil
Department of Computer Engineering (NBA Accredited)
Vidyalankar Polytechnic
Vidyalankar College Marg, Wadala(E), Mumbai 400 037
E-mail: [email protected]