
Operating System Notes By Muhammad Faisal Kamran

Topic 1: What is Operating System


 A program that acts as a link between the hardware and the user.
 Every computer and computer-like device (for example, mobile phones and
TVs) has an operating system.
 In technical terms it is known as a resource manager, since it manages
the resources of the computer internally.
 The resources of a computer are the processor, memory, storage, I/O
devices, and files.

Topic 2: Computer System Organization


 A modern computer system consists of one or more CPUs and multiple device controllers
 All are connected via a common bus to shared memory.
 Each device controller manages a specific type of device (e.g., disk drives, audio, video).
 The CPU and controllers operate in parallel, with a memory controller ensuring
synchronized access.
 When powered on or rebooted, the system runs a bootstrap program stored in ROM or
EEPROM (firmware).
 This program initializes the system, sets up hardware components, and loads the
operating system for execution.
 To accomplish this, the bootstrap program must locate the operating-system
kernel and load it into memory.

Topic 3: System Boot and Interrupts


 After loading, the kernel starts system processes (daemons) to provide services. On
UNIX, the "init" process launches other daemons.
 Once booted, the system waits for events and is usually triggered by interrupts from
hardware or software.
 Hardware interrupts come via the system bus, while software interrupts occur through
system calls. When interrupted, the CPU pauses execution, jumps to an interrupt service
routine (ISR) via an interrupt vector table, processes the request, then resumes the
previous task.
 Modern systems store the return address on the system stack. If the ISR modifies
processor state, it must save and restore it before returning to ensure smooth execution.
Topic 4: Storage Structure
 The CPU executes instructions only from main memory (RAM), mostly DRAM.
 General-purpose programs are stored on hard drives and SSDs, while
firmware such as the bootstrap program is stored in ROM or EEPROM.
 The CPU loads data from memory into registers to operate on it.
 Memory consists of bytes, each with a unique address, accessed through
load and store instructions. The CPU also automatically fetches
instructions from memory for execution.
 Storage systems are organized in a hierarchy based on speed and cost.
 Faster storage (e.g., registers and cache) is more expensive and volatile
 Slower storage (e.g., magnetic disks and tapes) is cheaper and nonvolatile.
 Solid-state drives (SSDs) are faster than magnetic disks and nonvolatile.
 Flash memory and NVRAM provide nonvolatile storage, with flash being slower than
DRAM and NVRAM using batteries for backup power.
 The hierarchy balances speed, cost, and reliability.

Topic 5: Computer Structure


 A computer system can be divided into four components:
1. Hardware: provides basic computing resources. Includes CPU, memory, I/O
devices
2. Operating system: Controls and coordinates use of hardware among various
applications and users
3. Application programs: define the ways in which the system resources are used to
solve the computing problems of the users. Word processors, compilers, web
browsers, database systems, video games
4. Users: People, machines, other computers
 I/O devices in a computer are managed by device controllers and device drivers.
Controllers handle specific devices and transfer data to their local buffers, while drivers
provide a standard interface for the operating system.
 For I/O operations, the driver sets up the controller, which performs the data transfer. The
controller signals completion via an interrupt. For large data transfers, Direct Memory
Access (DMA) allows controllers to transfer entire blocks of data directly to or from
memory with minimal CPU involvement, reducing overhead and improving efficiency.
 Components of CPU include
1. Program Counter (PC): Holds the memory address of the next instruction
2. Instruction Register (IR): Holds the instruction currently being executed
3. General Registers (Reg. 1..n): Hold variables and temporary results
4. Arithmetic and Logic Unit (ALU): Performs arithmetic functions and logic
operations
5. Stack Pointer (SP): Holds memory address of a stack with a frame for each active
procedure’s parameters & local variables
6. Processor Status Word (PSW): Contains various control bits including the mode
bit which determines whether privileged instructions can be executed
7. Memory Address Register (MAR): Contains address of memory to be loaded
from/stored to
8. Memory Data Register (MDR): Contains memory data loaded or to be stored

Topic 6: View
 There are two main views of an operating system: the user view and the system view.
 User view
1. Refers to the interface being used.
2. The system is designed for one user to monopolize its resources, to
maximize the work that the user is performing.
3. In these cases, the operating system is designed mostly for ease of use,
with some attention paid to performance and none paid to resource
utilization.
 System View
1. The operating system is also viewed as a resource allocator.
2. A computer system consists of many resources like hardware and software that
must be managed efficiently.
3. The operating system acts as the manager of the resources, decides between
conflicting requests, controls execution of programs etc

Topic 7: Single User System


 A system that can be used by only one user at a time.
 This type of system was mostly found in early computers.
 Responsive, interactive and convenient
 Types of single user system include
1. Single user single task: Only has to deal with one person at a time, running one
user application at a time. Example mobile phones
2. Single user multiple task: Designed mainly with a single user in mind, but it can
deal with many applications running at the same time. Example surfing the
internet
Topic 8: Multi Program System
 Multiprogramming is a technique in which a single processor executes a
number of programs concurrently.
 Several processes reside in main memory at a time.
 The OS picks and begins to execute one of the jobs in main memory.
 If a process has to wait for I/O, the CPU switches from that job to
another job, so the CPU is never idle.

Topic 9: Batch Operating System


 In this type of system, there is no direct interaction between the user and the computer.
 The user has to submit a job written on cards or tape to a computer operator.
 Each job is prepared on an off-line device like punch cards
 The OS defines a job as a predefined sequence of commands, programs, and
data treated as a single unit. It also keeps a number of jobs in memory and
executes them without any manual intervention.
 Jobs are processed in the order of submission
 When job completes its execution, its memory is released and the output for the job gets
copied into an output spool for later printing or processing.

Topic 10: Time Sharing System


 Logical extension of multiprogramming.
 Multiple jobs are executed by switching the CPU between them.
 CPU time is shared by different processes.

Topic 11: Real Time System


 Intended to serve real-time application by processing data as it comes in, typically
without buffering delays.
 Often used as a control device in a dedicated application such as controlling scientific
experiments, medical imaging systems, industrial control systems, and some display
systems.
 If it does not produce output in a defined time then the output will be useless.
Topic 12: Operating System Services
 Operating system provides an environment for the execution of the programs
 It provides certain services to the programs and to the users of those programs
 These services include
1. User interface: Is something that allows the user to interact with the operating
system. Types are
 Command-line interface: The user types commands at a keyboard (for
example, cmd). Preferred by system administrators and advanced users
because it is faster, more powerful, and allows automation through
shell scripts.
 Graphical user interface: Users interact with windows, menus, and icons
using a mouse and keyboard. Easier for general users, such as those on
Windows and Mac, who rarely use the command line.
 Batch interface: Commands are written in files and executed
automatically. Example bat files
2. Program execution: The system must be able to load a program into memory
and run that program. The program must be able to end its execution, either
normally or abnormally (with an error indication)
3. I/O Operations: A running program often needs to perform input or output
(I/O), such as reading a file or using a device like a printer. Users cannot
control the hardware directly, so they access it through the operating system
4. File System Manipulation: The file system helps programs manage files and
folders. Programs can create, delete, read, and write files, search for specific files,
and view file details
5. Communications: Sometimes, programs need to share information with each
other. This can happen on the same computer or between different computers
connected by a network. There are two main ways programs can communicate:
 Shared Memory: Two or more programs can read and write data in the
same memory space.
 Message Passing: Programs send and receive messages in a specific
format through the operating system.
6. Error detection: The operating system constantly checks for errors to keep things
running smoothly such as hardware issues, i/o issue and program issues
7. Resource allocation: When multiple users or jobs are running at the same time,
the operating system makes sure each one gets the resources it needs. These
resources can include CPU time, memory, and storage. For example, the OS
uses CPU scheduling to decide which jobs get time on the CPU, considering
factors like speed and available memory
8. Accounting: Accounting in an operating system keeps track of how much and
what kind of resources each user is using. This information can be used for things
like billing or to collect usage statistics to improve the system
9. Protection and security: Protection and security ensure that users and processes
can't interfere with each other's data or the system itself. Protection makes sure
only authorized users can access certain system resources. Security focuses on
preventing outsiders from breaking into the system by requiring things like
passwords for access.

Topic 13: System Calls


 System calls allow programs to request services from the kernel
(privileged mode) of the operating system
 They provide the interface to the services made available by the
operating system
 They are usually written in C or C++, with some low-level tasks written
in assembly language
 To simplify their use, programs typically invoke them through an
application programming interface (API)
 Types of system calls include
1. Process control: Performs task of process creation and process termination.
Functions include
 End/abort
 Load/execute
 Create and Terminate
 Wait and signal
 Allocate and free memory
2. File management: Handles all the file manipulation jobs. Functions include
 Create a file
 Delete file
 Open and close file
 Read, write, and reposition
 Get and set file attributes
3. Device management: Handles all the device manipulation jobs. Functions include
 Request and release device
 Attach and detach device
 Get and set device attributes
4. Information maintenance: Handles information and transfer between OS and user.
Functions include
 Get or set time and date
 Get process and device attributes
5. Communications: Used for inter process communications. Functions include
 Create, delete communications connections
 Send, receive message
 Help OS to transfer status information
 Attach or detach remote devices
 There are three general methods for passing parameters to a system call:
1. Parameters can be pushed onto the stack by the program and popped off by
the operating system.
2. Parameters can be passed in registers.
3. When there are more parameters than registers, they can be stored in a
block in memory, and the block's address passed as a parameter in a register.
 Important System Calls Used in OS
1. wait()
 In some systems, a process needs to wait for another process to complete
its execution. This type of situation occurs when a parent process creates a
child process, and the execution of the parent process remains suspended
until its child process executes.
 The suspension of the parent process automatically occurs with a wait()
system call. When the child process ends execution, the control moves
back to the parent process.
2. fork()
 Processes use this system call to create processes that are a copy of
themselves. With the help of this system Call parent process creates a
child process, and the execution of the parent process will be suspended
till the child process executes.
3. exec()
 This system call runs an executable file in the context of an already
running process, replacing the previous executable. The original process
identifier remains, since no new process is created, but the process's
stack, heap, and data segments are replaced by those of the new program.
4. kill():
 The kill() system call sends a signal to a process, most commonly a
termination signal urging the process to exit. Despite its name, kill()
does not necessarily terminate the process; different signals have
different meanings.
5. exit():
 The exit() system call is used to terminate program execution. In a
multi-threaded environment, this call indicates that the calling
thread's execution is complete. The OS reclaims the resources that were
used by the process after the exit() call.
Topic 14: Process
 A process can be thought of as a program in execution.
 A thread is the unit of execution within a process. A process can have
one thread or many. A thread is also called a lightweight process.
 As a process executes, it changes state. The state of a process is defined in part by the
current activity of that process. There can be 5 states
1. New: New process created
2. Ready: waiting to be assigned to a processor
3. Running: Instructions are being executed
4. Waiting: Waiting for some event to occur
5. Terminated: Process has finished
 Each process is represented in the operating system by a process control block (PCB) also
called a task control block.
 It contains many pieces of information associated with a specific process, including
these:
1. Process state: discussed above
2. Program counter: Indicates the address of the next instruction to be executed for
this process
3. CPU registers: Small, high-speed storage locations within a CPU that temporarily
hold data and instructions
4. CPU-scheduling information: Includes a process priority, pointers to scheduling
queues, and any other scheduling parameters
5. Memory-management information: Include such items as the value of the base
and limit registers and the page tables, or the segment tables, depending on the
memory system used by the operating system
6. Accounting information: Includes the amount of CPU and real time used, time
limits, account numbers, job or process numbers
7. I/O status information: Information includes the list of I/O devices allocated to the
process
 Process scheduling determines the order and timing of process execution
and is managed by the process manager.
 Process Scheduling
1. Ensures CPU is utilized efficiently.
2. In multiprogramming, some process must be running at all times.
3. In time-sharing, CPU switches rapidly between processes.
 Scheduling Queues
1. Job queue: All processes in the system.
2. Ready queue: Processes in memory waiting for the CPU.
3. Device queues: Processes waiting for I/O devices.
4. Queues are often represented using a queueing diagram.
 Schedulers
1. Schedulers decide which processes run and when.
2. Long-term scheduler (job scheduler): Selects processes from disk and
loads them into memory.
3. Short-term scheduler (CPU scheduler): Chooses which ready process gets
the CPU next.
 Context Switch
1. When CPU switches from one process to another:
2. Saves the state of the old process.
3. Loads the state of the new process.
4. It’s overhead, meaning it doesn't perform useful work but is necessary.
 Operations on Processes
1. Processes can be created and terminated.
2. Parent process creates child processes, forming a process tree.
3. Resources may be shared or split.
 Interprocess Communication (IPC)
1. For cooperating processes to communicate.
2. Benefits:
 Information sharing
 Speedup (using multiple cores)
 Modularity
 Convenience (multitasking)
 IPC Mechanisms
1. Shared Memory:
 Processes share a region of memory.
 Fast, but synchronization is programmer’s responsibility.
2. Message Passing:
 Processes send and receive messages.
 Easier in distributed systems.
 Producer–Consumer Problem
1. A classic IPC example:
 Producer creates data (e.g., compiler).
 Consumer uses data (e.g., assembler).
 Requires a buffer:
 Unbounded buffer: Producer never waits.
 Bounded buffer: Producer waits if full; consumer waits if empty.
 Message-Passing Systems
1. Used when shared memory is not feasible.
2. Must support two operations: send(message) and receive(message).
3. Messages can be fixed or variable size.
 Fixed size: easier system-level implementation.
 Variable size: easier programming but more complex system support.

Topic 15: Process Synchronization


 Process synchronization is the coordination of concurrent processes that share resources
to ensure consistency and correct execution.
 It is a key aspect of operating systems, particularly in multiprocessing or multithreading
environments
 Purpose of it is to ensure when multiple processes or threads execute simultaneously and
share data their execution must be managed in such a way that:
1. No two processes modify the same data simultaneously.
2. Data remains consistent after all operations.
3. Race conditions are avoided.
 Need for synchronization
1. Race Conditions: Occur when multiple processes access shared data and try to
change it at the same time.
2. Data Consistency: Shared variables may be corrupted if two processes access and
update them concurrently.
3. Atomicity: Operations on shared data must be indivisible to prevent interference.

Topic 16: Critical Section Problem

Definition:
A Critical Section is a segment of code where shared resources (like variables, memory, files)
are accessed or modified. The Critical Section Problem arises in concurrent programming when
processes must access shared resources without interference.

Objective:
The objective of solving the Critical Section Problem is to design a synchronization
mechanism so that only one process can enter its critical section at a time.

Structure of a Process Using Critical Section:


- Entry Section
- Critical Section
- Exit Section
- Remainder Section

Requirements of a Good Solution:


1. Mutual Exclusion: Only one process can be in the critical section at a time.
2. Progress: If no process is in the critical section, the decision of which process enters next
should not be postponed indefinitely.
3. Bounded Waiting: There must be a bound on the number of times other
processes can enter their critical sections after a process has requested
entry and before that request is granted.

Challenges:
- Writing correct synchronization code without deadlocks or starvation.
- Maintaining performance by minimizing busy waiting.

Approaches to Solve:
- Software-based algorithms: Peterson’s, Dekker’s, Lamport’s bakery algorithm.
- Hardware support: Test-and-Set, Compare-and-Swap instructions.
- High-level constructs: Semaphores, Monitors, Mutexes.

Conclusion:
The Critical Section Problem is fundamental in process synchronization and solving it
correctly ensures the consistency and reliability of shared data in concurrent systems.

Topic 17: Peterson’s Solution

Definition:
Peterson’s Solution is a classical software-based algorithm for solving the critical section
problem for two processes. It ensures mutual exclusion, progress, and bounded waiting without
using hardware instructions.

Assumptions:
- Two processes: P0 and P1.
- Uses two shared variables:
1. flag[2]: Boolean array, where flag[i] = true means Pi wants to enter the critical section.
2. turn: An integer variable indicating whose turn it is to enter the critical section.

Algorithm:
The algorithm works as follows for each process Pi (i = 0 or 1):
- A process sets its flag[i] = true to indicate intent to enter the critical section.
- Then it sets the turn variable to give priority to the other process.
- If the other process also wants to enter (flag[1 - i] = true) and it is the other process’s turn (turn = 1 - i), the process waits.
- Otherwise, it enters the critical section.
- After exiting, it sets flag[i] = false to allow the other process to proceed.
Satisfies All Conditions:
1. Mutual Exclusion: Only one process can enter at a time.
2. Progress: A process not interested in entering does not block the other.
3. Bounded Waiting: Each process gets a fair chance to enter after some time.

Limitations:
- Works only for two processes.
- Relies on busy-waiting, which wastes CPU cycles.
- Not safe on modern CPUs that reorder instructions unless proper memory
barriers or atomic operations are used; marking the variables volatile alone
is not sufficient.

Conclusion:
Peterson’s Algorithm is a simple and elegant solution to the critical section problem for two
processes. Though mostly educational, it illustrates the key principles of mutual exclusion using
only shared memory and basic instructions.
