Chapter 9 Complete
Operating Systems
Introduction to Operating Systems
Learning Outcomes
Types of Software
1. System Software (Operating System)
2. Application Software
Operating System (System Software)
An operating system (OS) is the most fundamental
software that runs on a computer. It acts as an
intermediary between the computer hardware and the
user, as well as other software applications. The OS
manages the computer's resources, such as the central
processing unit (CPU), memory, storage, and input/output
devices, and provides services for other programs to run.
Commonly Used Operating Systems
Operating systems vary depending on the type of device:
For Desktops and Laptops: Microsoft Windows, macOS, and Linux.
Tasks of Operating Systems
2. Memory Management:
a. Allocating and deallocating memory to processes.
b. Keeping track of used and free memory.
3. File Management:
a. Providing operations for creating, reading, writing, and deleting files.
b. Managing file permissions and access control.
c. Keeping track of file attributes (size, date, etc.).
4. Device Management:
a. Managing input/output operations with various devices (keyboard, mouse, monitor, etc.).
b. Providing device drivers for hardware interaction.
c. Allocating and deallocating devices to processes.
Tasks of Operating Systems
5. User Interface:
a. Providing a graphical user
interface (GUI) or command-line
interface (CLI)
b. Managing user input and output
c. Displaying information and error
messages
Introduction to Operating Systems
SLO: 9.1.3
Objectives:
Students will be able to:
1. Differentiate between command line interface and graphical user interface of an operating system;
2. Identify the core advantages and disadvantages of each interface type.
How many of you prefer using
apps with icons and windows
versus typing commands?
For example, instead of clicking a folder icon to open it, you might type
"open folder_name" in a CLI. While it might seem old-fashioned compared
to fancy graphics, it's actually very powerful for technical tasks and can be
faster for experienced users.
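As a rough illustration (not any real shell), the short Python sketch below shows the idea behind a CLI: a typed line such as "open folder_name" is split into a command and its argument, and the matching operation is carried out. The command names used here are made up for the example.

```python
import os

def run_command(line):
    """Toy CLI: parse a typed line and perform the matching action."""
    parts = line.split()
    command, args = parts[0], parts[1:]
    if command == "open":            # e.g. "open folder_name" lists that folder's contents
        print(os.listdir(args[0]))
    elif command == "pwd":           # show the current working directory
        print(os.getcwd())
    else:
        print(f"Unknown command: {command}")

run_command("pwd")
run_command("open .")                # "." refers to the current folder
```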
Types of operating systems (Desktop OS):
• Time-sharing Operating System
• Real-time Operating System
• Multiprocessor Operating System
• Parallel Processing Operating System
• Embedded Operating System
1. Simple Batch System
Batch processing is a mode of operation in which similar jobs are grouped into a batch. In a batch processing operating system, multiple jobs of similar types are collected into a batch and sent to the CPU for execution.
Simple batch systems were among the earliest operating systems used in
computers, particularly in the 1950s and 1960s. These systems were designed
to execute jobs in a sequential, non-interactive manner.
Some early systems that were based on simple batch processing include:
• IBM 701 (1952)
• IBM 1401 (1959)
• UNIVAC I (1951)
Key Features
• No Direct User Interaction: Users don't interact with the computer directly. Instead,
they submit their jobs (programs or tasks) to an operator, often using punched cards or
tapes.
• Batch Processing: The operator collects similar jobs into batches and submits them to
the computer. The jobs within a batch are executed sequentially without any user
intervention.
• Monitor Program: A special program called the monitor manages the execution of
jobs. It loads each job into memory, runs it, and then moves on to the next job in the
batch.
• No Multiprogramming: Only one job is in memory and executed at a time. This
means the CPU might sit idle while waiting for slow input/output operations to
complete.
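A minimal sketch of the monitor idea, assuming each job is simply a function to be run: the jobs in a batch are executed one after another, with no user interaction and only one job active at a time.

```python
# Toy monitor program for a simple batch system.
def payroll_job():
    print("Running payroll calculation")

def report_job():
    print("Printing monthly report")

def monitor(batch):
    for job in batch:   # load and run each job in submission order
        job()           # nothing else runs until this job finishes

monitor([payroll_job, report_job])
```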
2. Multiprogramming Batch System
A multiprogramming batch system is an extension of simple batch systems that
improves CPU utilization by executing multiple jobs simultaneously. It was introduced to
overcome the inefficiencies of simple batch systems, where the CPU would often remain
idle while waiting for I/O operations to complete.
Example:
Multiprogramming batch systems were common in large mainframes in the 1960s and
1970s, such as IBM's OS/360 and DEC's TOPS-10. These systems ran multiple jobs
concurrently, significantly improving performance over simple batch systems.
Key Features
• Process Management: Each running program is a process, with its own memory space and
resources. The operating system keeps track of all active processes.
• Time Sharing: The CPU (the brain of your computer) is rapidly shared among the processes. Each
process gets a small slice of time to execute, then the CPU switches to another process. This
switching happens so quickly that it creates the illusion of simultaneous execution.
• Scheduling: The operating system uses a scheduler to determine which process gets to use the
CPU and for how long. This can be based on priority, resource needs, or other factors.
• Context Switching: When switching between processes, the operating system saves the current
state of one process (its register values, program counter, etc.) and restores the state of the next
process. This allows the processes to resume execution seamlessly.
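The sketch below (illustrative only, not real kernel code) simulates time sharing with a fixed time slice: the scheduler gives each process a short turn, saves how much work it still has left, and moves on to the next one.

```python
from collections import deque

# Each entry is (process name, remaining execution time in time units).
ready_queue = deque([("A", 5), ("B", 2), ("C", 1)])
TIME_SLICE = 2

while ready_queue:
    name, remaining = ready_queue.popleft()     # restore this process's saved state
    run_for = min(TIME_SLICE, remaining)
    remaining -= run_for
    print(f"{name} runs for {run_for} unit(s)")
    if remaining > 0:
        ready_queue.append((name, remaining))   # context switch: save state, go to the back
    else:
        print(f"{name} has finished")
```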
Real-Time Operating System (RTOS)
For example, the engine management system within a car uses a real-time
operating system to react to feedback from sensors placed throughout the engine.
The OS will then immediately inform the driver if action is to be taken (e.g., oil needs topping up or brake pads need changing). Another example would be a point-of-sale system in a shop that reacts quickly to sales, warning if stock of any item sold is getting low and reordering it.
The main goal of an RTOS is to perform critical tasks on time. It ensures that certain
processes are finished within strict deadlines, making it perfect for situations where
timing is very important. It is also good at handling multiple tasks at once.
Real-Time Operating System
Uses:
• Defense systems like RADAR.
• Air traffic control system.
• Stock trading applications.
• Used to control nuclear centrifuges.
Real-Time Operating System
Characteristics:
1. Precise Timing: It ensures that critical tasks, such as adjusting the speed
of the centrifuge or monitoring its state, happen exactly when they are
supposed to, with minimal delay.
2. Safety: The RTOS prioritizes essential safety-related functions, ensuring
that if something goes wrong, the system responds immediately to prevent
accidents.
3. Reliability: In an environment like a nuclear facility, the system must be
highly reliable, predictable, and capable of handling multiple tasks
simultaneously without errors.
Parallel Processing Operating System:
A Parallel Processing Operating System helps manage multiple processors
working together at the same time. Its main goal is to split big tasks into smaller
pieces, so each processor can work on a part of the task simultaneously. This
speeds up the overall process and makes the system more efficient.
Uses:
• Supercomputing: Used for tasks like scientific research and climate modeling
where large amounts of data are processed quickly.
• Graphics Processing: Helps in rendering images and videos by working on
many pixels at the same time.
• Machine Learning: Speeds up training of complex models by processing
large datasets in parallel.
Features of Parallel Processing:
1. Faster Execution: By splitting a task across multiple
processors, the system can complete tasks more quickly.
2. Efficiency: It allows for better use of resources, especially in
systems with many processors or cores.
3. Scalability: As more processors are added, the system can
handle larger and more complex tasks.
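A minimal sketch of splitting one task across several processors, using Python's standard multiprocessing module; the task itself (squaring a list of numbers) is chosen only for illustration.

```python
from multiprocessing import Pool

def square(n):
    return n * n

if __name__ == "__main__":
    data = range(10)
    with Pool(processes=4) as pool:        # four worker processes run in parallel
        results = pool.map(square, data)   # the work is divided among the workers
    print(results)
```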
Embedded Operating System
Examples: embedded operating systems run inside devices such as washing machines, digital cameras, smart TVs, and car infotainment systems.
1. Process Management
The OS keeps track of what each program is doing. The program that does this is called the Scheduler. It gives the CPU to a program when needed and takes it back when the program is finished.
Process Management Example
In this example, three processes A, B, and C, with execution times of 5 ms, 2 ms, and 1 ms respectively, are ready for execution. The OS will manage the CPU time as follows:
Case 1: When the 3 processes become ready in the order ABC, the average completion time will be:
τ = (5 + 7 + 8) / 3 ≈ 6.67 ms
Case 2: When the 3 processes become ready in the order BCA, the average completion time will be:
τ = (2 + 3 + 8) / 3 ≈ 4.33 ms
In this example, the OS manages the processes more efficiently in Case 2: the average completion time in Case 2 is lower than in Case 1.
2. Memory Management
The operating system controls the computer's main memory, which is the
quick-access storage used by the CPU. When a program needs to run, it has
to be loaded into this main memory first. The operating system handles the
job of giving memory to different programs and taking it back when they're
done, making sure that one program doesn’t use the memory reserved for
another.
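Real memory managers are far more involved, but the toy sketch below captures the bookkeeping described above: it records which program owns each block of memory, hands out free blocks on request, and takes them back when the program is done.

```python
# Toy memory manager: 8 fixed-size blocks, each free (None) or owned by one program.
memory = [None] * 8

def allocate(program, blocks_needed):
    free = [i for i, owner in enumerate(memory) if owner is None]
    if len(free) < blocks_needed:
        return False                  # not enough free memory for this request
    for i in free[:blocks_needed]:
        memory[i] = program           # reserve these blocks for the program
    return True

def deallocate(program):
    for i, owner in enumerate(memory):
        if owner == program:
            memory[i] = None          # return the blocks to the free pool

allocate("editor", 3)
allocate("browser", 2)
deallocate("editor")
print(memory)   # the editor's blocks are free again; the browser keeps its own
```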
I/O System Management
Input and output devices, which are also known as peripherals, are
hardware devices connected to a computer, such as a screen,
printer, keyboard or camera.
Imagine you are typing a document and decide to print it. Here's how
I/O system management works in this scenario:
Input: Your keyboard inputs the text you type, which the OS processes
and displays on the screen.
Output: When you click "Print," the OS sends the document data to
the printer using a specific driver.
Buffering: If the printer is slow, the OS may use a buffer to store the
document until the printer is ready.
Error Handling: If the printer runs out of ink, the OS will alert you and
pause the printing process until you replace the ink cartridge.
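The buffering step can be pictured with a simple queue, as in the sketch below: the application hands pages to the OS quickly, and the slower printer takes them from the buffer whenever it is ready (simplified here to run one after the other).

```python
from queue import Queue

print_buffer = Queue()

# The application produces pages faster than the printer can print them.
for page in ["page 1", "page 2", "page 3"]:
    print_buffer.put(page)        # stored in the buffer; the application moves on

# The printer drains the buffer at its own pace.
while not print_buffer.empty():
    print(f"Printer outputs {print_buffer.get()}")
```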
Secondary Storage refers to non-volatile storage, meaning the data is
retained even when the computer is turned off. Examples include
hard drives, SSDs, CDs, USB drives, etc.
The OS uses a specific set of instructions, called the Interrupt Service Routine
(ISR), to address the interrupt. The ISR is a special function designed to handle
particular types of interrupts.
When an interrupt occurs, the OS saves the current state of the CPU, including
the contents of registers and the program counter. This is called context saving.
After handling the interrupt, the OS restores the saved state so the CPU can
resume the interrupted task as if nothing happened.
Example
Imagine you’re typing on your keyboard. Each time you press a key,
the keyboard sends an interrupt to the CPU to process the input. The
OS interrupts whatever task the CPU was performing, processes the
key press using the ISR, and then returns to the original task.
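Purely as an illustration of the save-state, run-ISR, restore-state sequence (not how an OS is actually coded), the sketch below "interrupts" a running task, handles a key press in an ISR, and then resumes the task from its saved state.

```python
def keyboard_isr(key):
    """Interrupt Service Routine: handle a single key press."""
    print(f"ISR: processed key '{key}'")

def main_task():
    saved_state = {"program_counter": 42}   # context saving: remember where we were
    print("CPU: running the main task")
    keyboard_isr("a")                       # interrupt arrives, ISR runs
    print(f"CPU: resuming at instruction {saved_state['program_counter']}")

main_task()
```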
Command Interpreter
The command interpreter (shell):
1. Receives input from the user, usually in the form of text commands.
2. Analyzes the input to identify the command and its associated
arguments.
3. Locates the corresponding program or executable file on the system.
4. Loads the program into memory and starts the execution of the program.
5. Detects and reports errors that may occur during command execution.
6. Maintains information about the current environment, such as the working directory.
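A minimal command-interpreter sketch following steps 1–6 above: it takes a text command, splits off the arguments, locates the executable on the system, runs it, and reports errors. The example command ("echo hello") is only for illustration.

```python
import os, shlex, shutil, subprocess

def interpret(line):
    parts = shlex.split(line)                        # 2. split into command and arguments
    if not parts:
        return
    program = shutil.which(parts[0])                 # 3. locate the executable file
    if program is None:
        print(f"command not found: {parts[0]}")      # 5. report errors
        return
    result = subprocess.run([program] + parts[1:])   # 4. load the program and run it
    if result.returncode != 0:
        print(f"exited with error code {result.returncode}")

print("working directory:", os.getcwd())             # 6. current environment information
interpret("echo hello")                              # 1. input as a text command
```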
9.3 Process Management
SLO 9.3.1
Learning Objective:
After the lecture students will be able to:
1. Determine the sequence of execution of processes to get
the minimum execution time; (Application)
Determine The Sequence Of Execution Of Processes To Get The
Minimum Execution Time
In this example, three processes A, B, and C, with execution times of 5 ms, 2 ms, and 1 ms respectively, are ready for execution. The OS will manage the CPU time as follows:
Case 1: When the 3 processes become ready in the order ABC, the average completion time will be:
τ = (5 + 7 + 8) / 3 ≈ 6.67 ms
Case 2: When the 3 processes become ready in the order BCA, the average completion time will be:
τ = (2 + 3 + 8) / 3 ≈ 4.33 ms
In this example, the OS manages the processes more efficiently in Case 2: the average completion time in Case 2 is lower than in Case 1.
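The figures can be checked with a short script, assuming execution times of 5 ms for A, 2 ms for B, and 1 ms for C (the values implied by the calculations above): the average completion time for an ordering is the mean of the times at which each process finishes. Running the shortest jobs first (B, C, A) gives the minimum average.

```python
burst = {"A": 5, "B": 2, "C": 1}   # execution time of each process in milliseconds

def average_completion_time(order):
    clock, finish_times = 0, []
    for p in order:
        clock += burst[p]           # each process finishes after all the earlier ones
        finish_times.append(clock)
    return sum(finish_times) / len(finish_times)

print(average_completion_time("ABC"))   # (5 + 7 + 8) / 3 ≈ 6.67 ms
print(average_completion_time("BCA"))   # (2 + 3 + 8) / 3 ≈ 4.33 ms
```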
9.3 Process Management
SLO 9.3.2
Learning Objective:
After the lecture students will be able to:
1. Explain the process state diagram including new, running,
waiting/ blocked, ready and terminated states of a process;
(Understanding)
States of Process
A process is an instance of a program in execution, and as it runs,
it transitions through different states depending on the conditions
and actions it encounters. The process state diagram is a
fundamental concept in operating systems that illustrates the
various states a process can be in during its lifecycle.
States of Process
1. New State: When a process is first created, it enters the New state. This
happens when the operating system recognizes the need to create a
process, such as when a user launches a program. The process remains in
this state while the operating system prepares it for execution. This
preparation might involve allocating memory.
2. Ready State: In the Ready state, the process is fully prepared and
waiting to be assigned to the CPU. It has all the necessary resources
except for the CPU. When the CPU becomes available, the scheduler
selects one of the ready processes to execute next.
3. Running State: The Running state is when the process is actively
executing instructions on the CPU. It's the only state during which the
process is doing actual work. A process will remain in the Running state
until one of the following events occurs:
1. The process completes its task and transitions to the Terminated state.
2. The process needs to perform an input/output (I/O) operation and thus moves to
the Waiting/Blocked state.
4. Waiting/Blocked State: A process enters the Waiting (or Blocked) state
when it cannot proceed until some external event occurs, such as the
completion of an I/O operation or the availability of a resource. While in
this state, the process is not using the CPU. Instead, it waits for the event
to complete.
5. Terminated State: The Terminated state is the final state of a process.
When a process finishes its execution, it enters this state. In this state, the
operating system deallocates the resources associated with the process,
such as memory and process control blocks.
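The state diagram can also be written down as data, as in the sketch below; only the transitions described above are included, and the function simply refuses any move that does not follow an allowed arrow.

```python
# Allowed transitions between process states, as described above.
TRANSITIONS = {
    "New": ["Ready"],
    "Ready": ["Running"],
    "Running": ["Waiting", "Terminated"],
    "Waiting": ["Ready"],
    "Terminated": [],
}

def move(current, target):
    if target not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition: {current} -> {target}")
    print(f"{current} -> {target}")
    return target

state = "New"
for nxt in ["Ready", "Running", "Waiting", "Ready", "Running", "Terminated"]:
    state = move(state, nxt)
```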
9.3 Process Management
SLO 9.3.3
Learning Objective:
After the lecture students will be able to:
1. Differentiate between thread and process; (Understanding)
Differentiate Between Thread And Process
Process:
1. A process is an independent program in execution.
2. Each process has its own memory space and operating system resources (like file handles, network connections, etc.).
3. Any change in a process does not change the behavior of other processes.
4. Processes are best for independent tasks that require isolation and do not need to share data.
Thread:
1. A thread is a smaller, lightweight unit of execution within a process.
2. Threads share the memory space and resources of the process they belong to but can execute independently within that shared environment.
3. Any change in a thread may change the behavior of other threads in the same process.
4. Threads are best for tasks that can be divided into smaller, parallel subtasks that need to share common data.
Example
Process: Running a word processor and a web browser
simultaneously involves two separate processes, each with its own
memory space and resources.
Multiple threads of the same process can run in parallel, especially on multi-
core CPUs.
Threads share the same memory space within a process, making them
lightweight.
Used in scenarios where tasks within the same program can be divided into
smaller, parallelizable components, like in web browsers, where different
tabs may be handled by different threads.
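A small sketch of this in practice with Python's standard threading module: two threads inside one process update the same counter variable, something two separate processes could not do directly because each would have its own copy of the data.

```python
import threading

counter = 0                          # shared memory: visible to every thread in this process
lock = threading.Lock()

def worker():
    global counter
    for _ in range(100_000):
        with lock:                   # shared data, so access must be coordinated
            counter += 1

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)                       # 200000: both threads updated the same variable
```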
Multitasking
Multitasking is the ability of an operating system to execute multiple tasks (or
processes) at the same time. These tasks might be part of different programs
or the same program.
The CPU switches rapidly between tasks, giving the illusion that they are
running simultaneously.
The OS decides when the CPU should switch from one task to another to
give them proper time.
Common in modern operating systems (like Windows, macOS, Linux) where
users can run multiple applications at the same time (e.g., a web browser,
text editor, and music player).
Multiprocessing
Multiprocessing refers to the use of two or more CPUs (or cores) within a single computer system to execute multiple processes simultaneously.
With multiple CPUs or cores, several processes can be executing at the same instant of time.
Common in high-performance computing, servers, and modern desktop
systems where workloads can be distributed across multiple processors for
faster performance.
Multiprogramming
Multiprogramming is an approach where multiple programs are loaded into
memory at the same time, and the CPU switches between them to maximize
resource utilization.