AOS-UNIT-5
LINUX
Linux is one of the popular versions of the UNIX operating system. It is open source, as its source
code is freely available, and it is free to use. Linux was designed with UNIX compatibility in mind,
and its functionality is quite similar to that of UNIX.
Kernel − The kernel is the core part of Linux. It is responsible for all major activities of this
operating system. It consists of various modules and it interacts directly with the underlying
hardware. The kernel provides the required abstraction to hide low-level hardware details from
system or application programs.
System Library − System libraries are special functions or programs through which
application programs or system utilities access the kernel's features. These libraries
implement most of the functionality of the operating system and do not require kernel
module code access rights.
System Utility − System utility programs are responsible for performing specialized,
individual-level tasks.
Kernel component code executes in a special privileged mode called kernel mode, with full
access to all resources of the computer. This code represents a single process, executes in a single
address space, and does not require any context switch, and hence it is very efficient and fast. The
kernel runs each process, provides system services to processes, and gives processes protected
access to hardware.
Support code which is not required to run in kernel mode is placed in the system libraries. User
programs and other system programs work in user mode, which has no direct access to system
hardware or kernel code. User programs and utilities use system libraries to ask the kernel to carry
out the system's low-level tasks.
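As a rough illustration of this division, the short C program below (a sketch for Linux, assuming glibc and the standard headers shown) writes a message twice: once through the system library wrapper write(), and once by entering the kernel directly through the raw syscall() interface.

#include <unistd.h>
#include <sys/syscall.h>

/* Sketch: a user program normally reaches the kernel through a system
 * library routine (glibc's write() below), which wraps the kernel's
 * write system call; syscall() performs the same kernel entry without
 * the library wrapper. Linux-specific, assumes glibc. */
int main(void)
{
    const char lib_msg[] = "hello via the C library wrapper\n";
    write(STDOUT_FILENO, lib_msg, sizeof lib_msg - 1);              /* library call */

    const char raw_msg[] = "hello via a raw system call\n";
    syscall(SYS_write, STDOUT_FILENO, raw_msg, sizeof raw_msg - 1); /* direct kernel entry */
    return 0;
}

In both cases the actual work happens in kernel mode; the library simply provides the convenient user-mode entry point.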
Basic Features
Portable − Portability means the software can work on different types of hardware in the same
way. The Linux kernel and application programs support installation on any kind of
hardware platform.
Open Source − Linux source code is freely available and it is a community-based
development project. Multiple teams work in collaboration to enhance the capability of the
Linux operating system and it is continuously evolving.
Multi-User − Linux is a multi-user system, meaning multiple users can access system
resources like memory, RAM, and application programs at the same time.
Multiprogramming − Linux is a multiprogramming system, meaning multiple applications
can run at the same time.
Hierarchical File System − Linux provides a standard file structure in which system files and
user files are arranged.
Shell − Linux provides a special interpreter program which can be used to execute
commands of the operating system. It can be used to perform various types of operations,
call application programs, etc.
Security − Linux provides user security using authentication features like password
protection, controlled access to specific files, and encryption of data.
Architecture
The architecture of a Linux system consists of the following layers −
Hardware layer − Hardware consists of all peripheral devices (RAM, HDD, CPU, etc.).
Kernel − It is the core component of the operating system; it interacts directly with the hardware
and provides low-level services to the upper-layer components.
Shell − An interface to the kernel that hides the complexity of the kernel's functions from users. The
shell takes commands from the user and executes the kernel's functions.
Utilities − Utility programs that provide the user with most of the functionality of an operating
system.
The process scheduler is responsible for choosing which processes run and for how long. The
scheduler is a basic part of a multitasking operating system like Linux.
A multitasking operating system gives the illusion that multiple tasks are running at once when
in fact there is only a limited set of processors. There are two kinds of multitasking operating
systems: preemptive and cooperative.
Linux is a preemptive operating system. Preemptive operating systems decide when to stop
executing a process and which new process should begin running. The amount of time a process
runs is usually determined before it is scheduled; this is called the timeslice, and it is effectively a
slice of the processor's time.
In cooperative operating systems the scheduler relies on the process to explicitly tell the
scheduler that it is ready to stop (this is often called yielding). Cooperative operating systems
have a problem where tasks that don't yield can bring down the entire operating system. The last
mainstream cooperative OSes were Mac OS 9 and Windows 3.1.
The Linux scheduler has gone through several iterations. The latest scheduler, CFS (the
Completely Fair Scheduler), uses the concept of fair scheduling from queueing theory.
Scheduling policies
Scheduling policies are the rules the scheduler follows to determine what should run and when.
An effective scheduling policy needs to consider both kinds of processes: I/O-bound processes
and CPU-bound processes.
I/O-bound processes spend most of their time waiting for I/O operations, like a network request
or keyboard operation, to complete. GUI applications are usually I/O-bound because they spend
most of their time waiting on user input. I/O-bound processes often run for a short time because
they block while waiting for I/O operations to complete.
CPU-bound processes spend most of their time executing code. CPU-bound processes are often
preempted because they don't block on I/O requests very often. An example of a CPU-bound
task would be one that performs a lot of mathematical calculation, like MATLAB.
Some processes are I/O-bound and CPU-bound at different times. For example, a word processor
is normally waiting for user input, but there might be regular CPU-intensive operations like
spellchecking.
a) Process priority
One type of scheduling algorithm is priority scheduling, which gives different tasks a priority
based on their need to be processed. Higher-priority tasks are run before lower-priority tasks, and
processes with the same priority are scheduled round-robin style.
The kernel uses two separate priority values: a nice value and a real-time priority value.
The nice value is a number from -20 to +19 with a default of 0. The larger the nice value, the
lower the priority (processes are being nice by letting other processes run in their place).
Processes with a lower nice value receive a larger portion of a system's processor time; processes
with a higher nice value receive a smaller portion. Nice values are the standard priority range for
Unix systems, although the value is used differently across OSes. In OS X, the nice value
controls the absolute timeslice allotted to a process. In Linux, the nice value controls the
proportion of the timeslice.
The real-time priority value can range from 0 to 99, although the range is configurable. The
real-time value behaves the opposite of the nice value: a higher value means higher priority. “All
real-time processes are at a higher priority than normal processes; that is, the real-time values and
nice values are in disjoint value spaces.”
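The sketch below (Linux-specific, assuming glibc) prints the calling process's nice value, lowers its priority with nice(), and then prints the separate priority range used by the SCHED_FIFO real-time policy, illustrating that the two ranges are disjoint.

#include <stdio.h>
#include <errno.h>
#include <unistd.h>
#include <sched.h>
#include <sys/resource.h>

/* Sketch of the two Linux priority ranges described above: the nice value
 * (default 0, range -20..+19) and the real-time priority range. */
int main(void)
{
    printf("current nice value: %d\n", getpriority(PRIO_PROCESS, 0));

    errno = 0;
    if (nice(5) == -1 && errno != 0)      /* nice(5) makes the process "nicer" (lower priority) */
        perror("nice");
    printf("nice value after nice(5): %d\n", getpriority(PRIO_PROCESS, 0));

    /* Real-time priorities live in a separate, higher-priority range. */
    printf("SCHED_FIFO priority range: %d..%d\n",
           sched_get_priority_min(SCHED_FIFO),
           sched_get_priority_max(SCHED_FIFO));
    return 0;
}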
b) Timeslice
The timeslice value represents how long a process can run before it is preempted. The scheduler
policy must decide on a default timeslice. The default timeslice is important: too long and the
system will seem unresponsive, too short and the system becomes less efficient as the processor
spends more time performing context switches between processes.
A common default timeslice value is 10 ms, but Linux works differently. Instead of an absolute
time, the CFS algorithm assigns a proportion of the processor, so the amount of processor time
depends on the current load. The assigned proportion is affected by the nice value, which acts as a
weight: a process with a lower nice value gets a higher weighting, and one with a higher nice value
gets a lower weighting.
When a process becomes eligible to run, the decision of whether to run it or not
depends on how much of its proportion of the processor the newly runnable process has
consumed. If it has run for a smaller proportion than the currently executing process then it will be
run; otherwise it will be scheduled to run later.
Imagine a machine that is running only two processes: a video encoder and a text editor. The
video encoder is CPU-bound, whereas the text editor is I/O-bound, because it spends much of its
time waiting for user input.
The text editor should respond instantly when it receives a key press, but the video encoding can
afford some latency. It doesn’t matter to the user if there’s a half second delay encoding the
video, whereas a half second delay on the text editor would be noticeably laggy.
If both processes have the same nice value they will be allocated 50% of the processor. The text
editor will not use much of its allocated processor time because it will spend so much time
blocked, waiting for I/O. The video encoder will be able to use more than its 50% of processor
time. However, when the text editor wakes up in response to user input, CFS will see that the
text editor has used less than its allotted 50% and therefore less time than the video encoder. It
will then preempt the video encoder and run the text editor, enabling the text editor to respond
quickly to user input.
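The toy C program below sketches this behaviour; it is not the kernel's CFS code, but it mimics the idea of charging each task a "virtual runtime" weighted by its nice-derived weight and always picking the task that has consumed the smallest proportion so far (the constant 1024 stands in for the weight of a nice-0 task, and the task names mirror the example above).

#include <stdio.h>

struct task {
    const char *name;
    int         weight;    /* derived from the nice value: lower nice -> larger weight */
    double      vruntime;  /* weighted share of CPU time consumed so far */
};

/* Charge 'ran_ms' of real CPU time to a task, scaled inversely by its weight
 * (1024 plays the role of the weight of a nice-0 task). */
static void account(struct task *t, double ran_ms)
{
    t->vruntime += ran_ms * 1024.0 / t->weight;
}

/* Pick the runnable task that has consumed the smallest proportion so far. */
static struct task *pick_next(struct task *tasks, int n)
{
    struct task *best = &tasks[0];
    for (int i = 1; i < n; i++)
        if (tasks[i].vruntime < best->vruntime)
            best = &tasks[i];
    return best;
}

int main(void)
{
    /* Both tasks have nice 0, so each is entitled to roughly 50% of the CPU. */
    struct task tasks[2] = {
        { "video-encoder", 1024, 0.0 },
        { "text-editor",   1024, 0.0 },
    };

    account(&tasks[0], 400.0);  /* the encoder ran 400 ms while the editor was blocked */
    account(&tasks[1],  20.0);  /* the editor ran only 20 ms handling keystrokes */

    /* The editor wakes up: its vruntime is smaller, so it preempts the encoder. */
    printf("next to run: %s\n", pick_next(tasks, 2)->name);
    return 0;
}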
In the Linux operating system, we have mainly two types of processes, namely real-time
processes and normal processes. Let us learn more about them in detail.
Realtime Process
Real-time processes are processes that cannot be delayed in any situation; they are referred to as
urgent processes. Linux provides two scheduling policies for real-time processes:
SCHED_FIFO
SCHED_RR
A real-time process will try to preempt any other running process that has a lower priority.
For example, the migration process, which is responsible for distributing processes across
the CPUs, is a real-time process. Let us learn briefly about the different scheduling policies used to
deal with real-time processes.
SCHED_FIFO
FIFO in SCHED_FIFO means First In First Out. Hence, the SCHED_FIFO policy schedules the
processes according to the arrival time of the process.
SCHED_RR
RR in SCHED_RR means Round Robin. The SCHED_RR policy schedules the processes by
giving them a fixed amount of time for execution. This fixed time is known as time quantum.
Note: Real-time processes have a priority ranging between 1 and 99. Hence, the SCHED_FIFO and
SCHED_RR policies deal with processes having a priority higher than 0.
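A minimal sketch of requesting a real-time policy on Linux is shown below. It uses the standard sched_setscheduler() call with SCHED_FIFO and priority 10 (any value in 1..99 would do) and normally needs root privileges or the CAP_SYS_NICE capability to succeed.

#include <stdio.h>
#include <sched.h>

/* Sketch: request the SCHED_FIFO real-time policy for the calling process.
 * Without sufficient privileges, sched_setscheduler() fails with EPERM. */
int main(void)
{
    struct sched_param param = { .sched_priority = 10 };   /* real-time range 1..99 */

    if (sched_setscheduler(0, SCHED_FIFO, &param) == -1) {
        perror("sched_setscheduler");                      /* typically EPERM if unprivileged */
        return 1;
    }
    printf("now running under SCHED_FIFO, priority %d\n", param.sched_priority);
    return 0;
}

SCHED_RR would be requested the same way; the only difference is that the kernel rotates between equal-priority SCHED_RR tasks after each time quantum.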
Normal Process
Normal Processes are the opposite of real-time processes. Normal processes will execute or stop
according to the time assigned by the process scheduler. Hence, a normal process can suffer
some delay if the CPU is busy executing other high-priority processes. Let us learn about
different scheduling policies used to deal with the normal processes in detail.
Batch (SCHED_BATCH)
As the name suggests, the SCHED_BATCH policy is used for executing a batch of processes.
This policy is somewhat similar to the normal policy. The SCHED_BATCH policy deals with
non-interactive processes and is useful for optimizing CPU throughput.
The SCHED_BATCH scheduling policy is used for a group of processes having priority 0.
Idle (SCHED_IDLE)
The SCHED_IDLE policy deals with processes having extremely low priority. Low-priority
tasks are tasks that are executed only when there are absolutely no other tasks to be executed.
The SCHED_IDLE policy is designed for the lowest-priority tasks of the operating system.
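The short sketch below queries which of these policies the calling process is currently running under via sched_getscheduler(). SCHED_BATCH and SCHED_IDLE are Linux-specific, so the example assumes glibc with _GNU_SOURCE defined.

#define _GNU_SOURCE
#include <stdio.h>
#include <sched.h>

/* Sketch: report the scheduling policy of the calling process. Normal
 * processes use SCHED_OTHER, SCHED_BATCH, or SCHED_IDLE (all with static
 * priority 0); real-time ones use SCHED_FIFO or SCHED_RR. */
int main(void)
{
    int policy = sched_getscheduler(0);   /* 0 = the calling process */

    switch (policy) {
    case SCHED_OTHER: printf("policy: SCHED_OTHER (normal)\n");    break;
    case SCHED_BATCH: printf("policy: SCHED_BATCH (batch)\n");     break;
    case SCHED_IDLE:  printf("policy: SCHED_IDLE (idle)\n");       break;
    case SCHED_FIFO:  printf("policy: SCHED_FIFO (real-time)\n");  break;
    case SCHED_RR:    printf("policy: SCHED_RR (real-time)\n");    break;
    default:          printf("policy: %d\n", policy);              break;
    }
    return 0;
}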
To allow the accumulation of requests in the device queue, a plug is used. When a
request comes in and the device queue is empty, the plug is put at the head of the
device queue, and a task comprising the unplug function is registered in the disk
task queue. The requests thus keep accumulating for some time, and then the task
queue executes the unplug routine, which removes the plug and calls
request_fn() to service the requests.
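The user-space C sketch below only mirrors the description above; it is not the kernel's block-layer code, and the names device_queue, add_request, unplug, and request_fn are illustrative stand-ins. Requests accumulate while the queue is plugged, and the unplug routine hands them all to request_fn() for servicing.

#include <stdio.h>

#define MAX_REQ 16

/* Conceptual device queue: requests accumulate behind a "plug" while the
 * queue is plugged; unplug() removes the plug and services the backlog. */
struct device_queue {
    int plugged;            /* 1 while the plug sits at the head of the queue */
    int requests[MAX_REQ];  /* pending block numbers (illustrative) */
    int count;
};

static void request_fn(struct device_queue *q)
{
    for (int i = 0; i < q->count; i++)
        printf("servicing request for block %d\n", q->requests[i]);
    q->count = 0;
}

static void add_request(struct device_queue *q, int block)
{
    if (q->count == 0)
        q->plugged = 1;              /* queue was empty: insert the plug */
    if (q->count < MAX_REQ)
        q->requests[q->count++] = block;
}

static void unplug(struct device_queue *q)
{
    q->plugged = 0;                  /* remove the plug ... */
    request_fn(q);                   /* ... and service what accumulated */
}

int main(void)
{
    struct device_queue q = {0};
    add_request(&q, 7);              /* requests accumulate while plugged */
    add_request(&q, 3);
    add_request(&q, 9);
    unplug(&q);                      /* the task queue later runs the unplug routine */
    return 0;
}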
IOS
IOS ARCHITECTURE
iOS is a mobile operating system developed by Apple Inc. for iPhones, iPads, and
other Apple mobile devices. iOS is the second most popular and most used mobile operating
system after Android.
The structure of the iOS operating system is layer based. Applications do not communicate with
the hardware directly; the layers between the Application layer and the Hardware layer handle this
communication. The lower layers give basic services on which all applications rely, and the
higher-level layers provide graphics and interface-related services. Most of the system interfaces
come in a special package called a framework.
A framework is a directory that holds dynamic shared libraries (.dylib files), header files, images,
and helper apps that support the library. Each layer has a set of frameworks that are helpful for
developers.
Architecture of IOS
CORE OS Layer
All the IOS technologies are built under the lowest level layer i.e. Core OS layer. These
technologies include:
MEDIA Layer
With the help of the Media layer, we enable all the graphics, video, and audio technology of the
system. This is the second layer in the architecture. The different frameworks of the Media layer
are:
1. UIKit Graphics-
This framework provides support for designing images and animating the view content.
2. Core Graphics Framework-
This framework supports 2D vector and image-based rendering and is the native drawing
engine for iOS.
3. Core Animation-
This framework helps in optimizing the animation experience of apps in iOS.
4. Media Player Framework-
This framework provides support for playing playlists and enables the user to use their
iTunes library.
5. AVKit-
This framework provides various easy-to-use interfaces for video presentation, recording,
and playback of audio and video.
6. OpenAL-
This framework is an industry-standard technology for providing audio.
7. Core Image-
This framework provides advanced support for still images.
8. GLKit-
This framework manages advanced 2D and 3D rendering with hardware-accelerated
interfaces.
COCOA TOUCH
Cocoa Touch is also known as the application layer, which acts as an interface for the user to
work with the iOS operating system. It supports touch and motion events and many more
features. The Cocoa Touch layer provides the following frameworks:
1. EventKit Framework-
This framework shows a standard system interface, using view controllers for viewing and
changing events.
2. GameKit Framework-
This framework provides support for users to share their game-related data online using
Game Center.
3. MapKit Framework-
This framework gives a scrollable map that you can include in the user interface of your
app.
4. PushKit Framework-
This framework provides registration support for receiving push notifications.
Advantages of iOS
1. The iOS operating system is the commercial operating system of Apple Inc. and is popular
for its security.
2. The iOS operating system comes with pre-installed apps developed by Apple, like
Mail, Maps, TV, Music, Wallet, Health, and many more.
3. The Swift programming language is used for developing apps that run on the iOS
operating system.
4. In the iOS operating system we can multitask, for example chatting while surfing
the Internet.
Disadvantages of iOS
1. More costly.
2. Less user friendly as compared to the Android operating system.
3. Not flexible, as it supports only iOS devices.
4. Battery performance is poor.
IOS filesystem
The iOS file system is geared toward apps running on their own. To keep the system
simple, users of iOS devices do not have direct access to the file system and apps are expected to
follow this convention.
On iOS, each app's files are contained in a so-called sandbox, which keeps the app's files separate
and protects the app's data from other apps. Within the sandbox, the files are organized into different
containers, such as the Bundle Container, Data Container, and iCloud Container.
The sandbox directory
When it comes to reading and writing files, each iOS application has its own sandbox directory.
For security reasons, every interaction of the iOS app with the file system is limited to this
sandbox directory. Exceptions are access requests to user data like photos, music, contacts, etc.
The system additionally creates the Documents/Inbox directory which we can use to access files
that our app was asked to open by other applications. We can read and delete files in this
directory but cannot edit or create new files.
The Library directory contains standard subdirectories we can use to store app support files. The
most used subdirectories are:
Library/Application Support/ - to store any files the app needs that should not be exposed to the
user, for example configuration files, templates etc.
Library/Caches/ - to cache data that can be recreated and needs to persist longer than files in
the tmp directory. The system may delete the directory on rare occasions to free up disk space.
*****************************************************************************
PROCESS SCHEDULING
Process scheduling is the activity of the process manager that handles the removal of the
running process from the CPU and the selection of another process on the basis of a particular
strategy.
Categories of Scheduling
1. Non-preemptive: Here the resources cannot be taken away from a process until the process
completes execution. The switching of resources occurs only when the running process
terminates and moves to a waiting state.
2. Preemptive: Here the OS allocates the resources to a process for a fixed amount of time.
During resource allocation, a process may be switched from the running state to the ready state
or from the waiting state to the ready state. This switching occurs because the CPU may give
priority to other processes and replace the running process with a higher-priority one.
The OS maintains all Process Control Blocks (PCBs) in Process Scheduling Queues. The OS
maintains a separate queue for each of the process states and PCBs of all processes in the same
execution state are placed in the same queue. When the state of a process is changed, its PCB is
unlinked from its current queue and moved to its new state queue.
The Operating System maintains the following important process scheduling queues −
Job queue − This queue keeps all the processes in the system.
Ready queue − This queue keeps a set of all processes residing in main memory, ready
and waiting to execute. A new process is always put in this queue.
Device queues − The processes which are blocked due to unavailability of an I/O device
constitute this queue.
The OS can use different policies to manage each queue (FIFO, Round Robin, Priority, etc.). The
OS scheduler determines how to move processes between the ready queue and the run queue, which
can only have one entry per processor core on the system.
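A minimal sketch of this bookkeeping is shown below (illustrative only, not taken from any real kernel): each state has its own linked-list queue of PCBs, and a state change unlinks the PCB from its current queue and appends it to the queue for the new state.

#include <stdio.h>
#include <stdlib.h>

/* Minimal sketch of moving a PCB between per-state queues: when a process
 * changes state, its PCB is unlinked from the current queue and appended
 * to the queue for the new state. */

enum state { READY, RUNNING, WAITING, N_STATES };

struct pcb {
    int pid;
    enum state state;
    struct pcb *next;
};

static struct pcb *queues[N_STATES];      /* one FIFO queue per process state */

static void enqueue(struct pcb *p, enum state s)
{
    p->state = s;
    p->next = NULL;
    struct pcb **q = &queues[s];
    while (*q)                            /* walk to the tail of the list */
        q = &(*q)->next;
    *q = p;
}

static void unlink_pcb(struct pcb *p)
{
    struct pcb **q = &queues[p->state];
    while (*q && *q != p)
        q = &(*q)->next;
    if (*q)
        *q = p->next;
}

/* Change a process's state: unlink from the old queue, link into the new one. */
static void change_state(struct pcb *p, enum state new_state)
{
    unlink_pcb(p);
    enqueue(p, new_state);
}

int main(void)
{
    struct pcb p1 = { .pid = 1 };
    enqueue(&p1, READY);                  /* a new process enters the ready queue */
    change_state(&p1, RUNNING);           /* dispatched onto the CPU */
    change_state(&p1, WAITING);           /* blocks on I/O: moved to a device queue */
    printf("pid %d is now in state %d\n", p1.pid, p1.state);
    return 0;
}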
Two-state process model refers to the running and not-running states, which are described below −
1. Running − When a new process is created, it enters the system in the running state.
2. Not Running − Processes that are not running are kept in a queue, waiting for their turn to
execute. Each entry in the queue is a pointer to a particular process, and the queue is
implemented using a linked list. The dispatcher works as follows: when a
process is interrupted, that process is transferred to the waiting queue. If the
process has completed or aborted, the process is discarded. In either case, the
dispatcher then selects a process from the queue to execute.
Schedulers
Schedulers are special system software which handle process scheduling in various ways. Their
main task is to select the jobs to be submitted into the system and to decide which process to run.
Schedulers are of three types −
Long-Term Scheduler
Short-Term Scheduler
Medium-Term Scheduler
The primary objective of the long-term scheduler, also called the job scheduler, is to provide a
balanced mix of jobs, such as I/O-bound and processor-bound jobs. It also controls the degree of
multiprogramming. If the degree of multiprogramming is stable, then the average rate of process
creation must be equal to the average departure rate of processes leaving the system.
On some systems, the long-term scheduler may not be available or may be minimal. Time-sharing
operating systems have no long-term scheduler. When a process changes state from new to
ready, there is use of the long-term scheduler.
Short-term schedulers, also known as dispatchers, make the decision of which process to execute
next. Short-term schedulers are faster than long-term schedulers.
A running process may become suspended if it makes an I/O request. A suspended process
cannot make any progress towards completion. In this condition, to remove the process from
memory and make space for other processes, the suspended process is moved to secondary
storage. This is called swapping, and the process is said to be swapped out or rolled out.
Swapping may be necessary to improve the process mix; handling the swapped-out processes is
the job of the medium-term scheduler.
Long-Term Scheduler − It is a job scheduler. It is almost absent or minimal in a time-sharing
system.
Short-Term Scheduler − It is a CPU scheduler. It is also minimal in a time-sharing system.
Medium-Term Scheduler − It is a process-swapping scheduler. It is a part of time-sharing
systems.
Context Switching
Context switching is the mechanism of storing and restoring the state, or context, of a CPU in the
Process Control Block so that a process's execution can be resumed from the same point at a later
time. Using this technique, a context switcher enables multiple processes to share a single CPU.
Context switching is an essential feature of a multitasking operating system.
When the scheduler switches the CPU from executing one process to executing another, the state
of the currently running process is stored into its process control block. After this, the state of
the process to run next is loaded from its own PCB and used to set the PC, registers, etc. At that
point, the second process can start executing.
Context switches are computationally intensive, since register and memory state must be saved
and restored. To reduce the amount of context-switching time, some hardware systems employ
two or more sets of processor registers. When the process is switched, the following information
is stored for later use:
Program Counter
Scheduling information
Base and limit register value
Currently used registers
Changed State
I/O State information
Accounting information
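As a rough sketch of what this saved information amounts to in code, the C structure below groups these fields the way a PCB might store them; the field names are illustrative assumptions, not an actual kernel structure.

#include <stdint.h>

/* Illustrative saved-context fields a PCB might hold for a context switch
 * (names are assumptions for illustration, not a real kernel structure). */
struct saved_context {
    uint64_t program_counter;     /* Program Counter: where execution resumes  */
    int      priority;            /* Scheduling information                    */
    uint64_t base_register;       /* Base and limit register values            */
    uint64_t limit_register;
    uint64_t registers[16];       /* Currently used registers                  */
    int      process_state;       /* Changed state (ready, waiting, ...)       */
    int      pending_io_requests; /* I/O state information                     */
    long     cpu_time_used;       /* Accounting information                    */
};

/* On a switch, the outgoing process's CPU state is copied into its
 * saved_context, and the incoming process's saved_context is loaded
 * back into the CPU before that process resumes. */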