
OPERATING SYSTEM

AND DATABASE
MANAGEMENT SYSTEM
Operating System
• An Operating System (OS) is a collection of programs that acts as an
interface between a user of a computer and the computer hardware.
The purpose of an operating system is to provide an environment in
which a user can execute programs. Operating systems are viewed as resource managers.
• Primary Goal: Convenience
• Secondary Goal: Efficiency
Abstract view of components of
computer system
Operating system functions
• 1. Should act as a command interpreter by providing a user-friendly environment.
• 2. Should facilitate communication with other users.
• 3. Facilitate the directory/file creation along with the security option.
• 4. Provide routines that handle the intricate details of I/O programming.
• 5. Provide access to compilers to translate programs from high-level languages to machine language.
• 6. Provide a loader program to move the compiled program code to the computer’s memory for
execution.
• 7. Assure that when there are several active processes in the computer, each will get fair and non-
interfering access to the central processing unit for execution.
• 8. Take care of storage and device allocation.
• 9. Provide for long term storage of user information in the form of files.
• 10. Permit system resources to be shared among users when appropriate, and be protected from
unauthorized or mischievous intervention as necessary.
Operations and Functions of OS
• The main operations and functions of an operating system are as
follows:
• Process Management
• Memory Management
• Secondary Storage Management
• I/O Management
• File Management
• Protection
Process Management
• The creation and deletion of both user and system processes
• The suspension and resumption of processes.
• The provision of mechanisms for process synchronization
• The provision of mechanisms for deadlock handling.
Memory Management
• Keep track of which parts of memory are currently being used and by
whom.
• Decide which processes are to be loaded into memory when memory
space becomes available.
• Allocate and deallocate memory space as needed.
Secondary Storage Management
• Free space management
• Storage allocation
• Disk scheduling.
I/O Management
• A buffer caching system
• To activate a general device driver code
• To run the driver software for specific hardware devices as and when
required.
File Management
• The creation and deletion of files.
• The creation and deletion of directories.
• The support of primitives for manipulating files and directories.
• The mapping of files onto disk storage.
• Backup of files on stable (non volatile) storage.
• Protection and security of the files.
Protection
• The various processes in an operating system must be protected from each other’s activities. For this purpose, the operating system provides mechanisms to ensure that the files, memory segments, CPU and other resources can be operated on only by those processes that have gained proper authorization from the operating system.
Evolution of OS
• First Generation (1940s): Card readers / punch cards: no OS
• Second Generation (1950s-60s): Batch processing (SPOOLING), magnetic tapes: no OS
• Third Generation (1960s-70s): Disk technology: the operating system was introduced. Examples of early operating systems include:
1. GM-NAA I/O, developed by General Motors for the IBM 704 in the 1950s.
2. Unix, created by Bell Labs in 1969, which formed the basis for the Unix family of operating systems.
3. MS-DOS, released by Microsoft in 1981, which became the dominant operating system for personal computers in the 1980s.
4. OS/360, developed for the IBM System/360 mainframe computers.
SPOOLING
• Spooling is a process in which data is temporarily held to be
used and executed by a device, program, or system. Data is
sent to and stored in memory or other volatile storage until
the program or computer requests it for execution.
• SPOOL is an acronym for Simultaneous Peripheral Operations On-Line. Generally, the spool is maintained in the computer's physical memory, in buffers, or on I/O device-specific storage. The spool is processed in ascending order, working on a FIFO (first-in, first-out) basis.
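As a rough illustration of this FIFO behaviour, the sketch below models a spool as a simple first-in, first-out queue in C; the type and function names (spool_t, spool_enqueue, spool_dequeue) are hypothetical and the capacity is an arbitrary example, not part of any real spooler.

```c
#include <stdio.h>
#include <string.h>

#define SPOOL_CAPACITY 8               /* illustrative fixed-size spool buffer */

typedef struct {
    char jobs[SPOOL_CAPACITY][32];     /* queued job names */
    int head, tail, count;             /* FIFO bookkeeping */
} spool_t;

/* Add a job at the tail of the spool (returns 0 on success, -1 if full). */
int spool_enqueue(spool_t *s, const char *job) {
    if (s->count == SPOOL_CAPACITY) return -1;
    strncpy(s->jobs[s->tail], job, sizeof s->jobs[0] - 1);
    s->jobs[s->tail][sizeof s->jobs[0] - 1] = '\0';
    s->tail = (s->tail + 1) % SPOOL_CAPACITY;
    s->count++;
    return 0;
}

/* Remove the oldest job from the head of the spool (FIFO order). */
int spool_dequeue(spool_t *s, char *out, size_t outlen) {
    if (s->count == 0) return -1;
    strncpy(out, s->jobs[s->head], outlen - 1);
    out[outlen - 1] = '\0';
    s->head = (s->head + 1) % SPOOL_CAPACITY;
    s->count--;
    return 0;
}

int main(void) {
    spool_t s = { .head = 0, .tail = 0, .count = 0 };
    spool_enqueue(&s, "job1.txt");
    spool_enqueue(&s, "job2.txt");

    char job[32];
    while (spool_dequeue(&s, job, sizeof job) == 0)
        printf("printing %s\n", job);  /* job1.txt is printed before job2.txt */
    return 0;
}
```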
Types of Operating System
• Batch Processing Operating System
• Time Sharing Operating System
• Real-time Operating System
• Multiprogramming Operating System
• Multiprocessing Operating System
• Networking Operating System
• Distributed Operating System
• Embedded Operating System
Batch Processing Operating System
• In a batch processing operating system environment users submit jobs
to a central place where these jobs are collected into a batch, and
subsequently placed on an input queue at the computer where they will
be run. In this case, the user has no interaction with the job during its processing, and the computer’s response time is the turnaround time: the time from submission of the job until execution is complete and the results are ready for return to the person who submitted the job.
Time Sharing Operating System
• Another mode for delivering computing services is provided by time
sharing operating systems. In this environment a computer provides
computing services to several or many users concurrently on-line.
Here, the various users are sharing the central processor, the memory,
and other resources of the computer system in a manner facilitated,
controlled, and monitored by the operating system. The user, in this
environment, has nearly full interaction with the program during its
execution, and the computer’s response time may be expected to be no more than a few seconds.
• The CPU switches between tasks so frequently that the user can interact with each program while it is running. A time-shared operating system allows multiple users to share a computer simultaneously. Since each user issues only one action or command at a time, only a small amount of CPU time is needed for each user. As the system rapidly switches from one user to another, each user is given the impression that the entire computer system is dedicated to their use, even though it is being shared among multiple users.
• This short period of time during which a user gets the attention of the CPU is known as a time slice, time slot, or quantum.
Multiprogramming Operating
System
• A multiprogramming operating system is a system that allows more
than one active user program (or part of user program) to be stored in
main memory simultaneously. Thus, it is evident that a time-sharing
system is a multiprogramming system, but note that a
multiprogramming system is not necessarily a time-sharing system.
Multiprocessing System
• A multiprocessing system is a computer hardware configuration that
includes more than one independent processing unit. The term
multiprocessing is generally used to refer to large computer hardware
complexes found in major scientific or commercial applications.
Networking Operating System
• A networking operating system (NOS) is a specialized software that runs on a
server and enables the server to manage data, users, groups, security,
applications, and other networking functions. It is designed for the sole
purpose of supporting workstations, database sharing, application sharing,
and file and printer access sharing among multiple computers in a network.
A network operating system can also refer to a specialized operating system
for a network device such as a router, switch, or firewall.
• Microsoft Windows Server (2000, 2003, 2008)
• UNIX
• Linux
• Mac OS X
Distributed Operating System
• A distributed computing system consists of a number of computers that are connected and managed so that they automatically share the job processing load among the constituent computers, or assign the job load, as appropriate, to specially configured
processors. Such a system requires an operating system which, in addition to the typical
stand-alone functionality, provides coordination of the operations and information flow
among the component computers. The networked and distributed computing
environments and their respective operating systems are designed with more complex
functional capabilities.
• In a network operating system, the users are aware of the existence of multiple
computers, and can log in to remote machines and copy files from one machine to
another. Each machine runs its own local operating system and has its own user (or
users). In a true distributed system, users should not be aware of where their programs
are being run or where their files are located; that should all be handled automatically
and efficiently by the operating system.
System calls
• A system call is a request made by a user-level program to the
operating system’s kernel for performing a specific task or accessing a
system resource. These tasks could involve interacting with hardware,
managing processes, or controlling files.
Purpose:
• Resource Access: User applications often need to access hardware
resources or perform operations that require higher privileges than
those available in user space.
• Abstraction: System calls abstract the complexities of hardware
interactions, allowing applications to use standardized APIs.
• Security: They provide a controlled mechanism to perform privileged
operations, ensuring that user programs cannot directly manipulate
hardware or critical system structures, which could lead to system
instability or security vulnerabilities.
Mechanism of System Calls
• Transition from User Mode to Kernel Mode:
• User Mode: Applications run in user mode with restricted privileges, meaning they cannot directly
execute privileged operations or access hardware.
• Kernel Mode: The kernel runs in a privileged mode with full access to hardware and system
resources. When a system call is made, the system switches from user mode to kernel mode to
perform the requested operation.
• System Call Invocation:
• Software Interrupt or Trap: To invoke a system call, a user application triggers a software interrupt or
a trap, which is a mechanism to switch the processor’s state from user mode to kernel mode.
• System Call Number: The application specifies the type of system call by passing a unique identifier
(system call number) to the kernel.
• Arguments: The application provides any necessary arguments for the system call, such as file names
or memory addresses.
• Execution and Return:
• Kernel Processing: The kernel performs the requested operation, which might involve interacting with
hardware, managing files, or handling inter-process communication.
• Result Return: Once the operation is complete, the kernel returns the result (success, error code, or
data) to the user application.
• Context Switch: The system switches back to user mode, restoring the application’s state.
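As a concrete illustration (assuming a Linux system with glibc), the sketch below performs the same write operation twice: once through the ordinary library wrapper write(), and once by passing the system call number SYS_write to syscall(). Both paths trap into kernel mode and return to user mode with a result.

```c
#define _GNU_SOURCE
#include <unistd.h>       /* write(), syscall() */
#include <sys/syscall.h>  /* SYS_write system call number */
#include <string.h>

int main(void) {
    const char *msg = "hello via write()\n";

    /* Library wrapper: it places the system call number and arguments
       where the kernel expects them, then traps into kernel mode. */
    write(STDOUT_FILENO, msg, strlen(msg));

    /* Direct invocation: we supply the system call number and arguments
       ourselves; the kernel returns the number of bytes written (or -1). */
    const char *msg2 = "hello via syscall(SYS_write, ...)\n";
    long n = syscall(SYS_write, STDOUT_FILENO, msg2, strlen(msg2));
    (void)n;
    return 0;
}
```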
Types of System Calls
File Management:
• Creation: open(), create()
• Reading/Writing: read(), write()
• Modification: lseek(), ftruncate()
• Deletion: unlink(), remove()
• Directory Operations: mkdir(), rmdir()
Process Control:
• Creation: fork(), exec()
• Termination: exit(), kill()
• Status: wait(), getpid(), getppid()
Inter-process Communication (IPC):
• Signals: signal(), kill()
• Pipes and FIFOs: pipe(), mkfifo()
• Message Queues: msgget(), msgsnd(), msgrcv()
Memory Management:
• Allocation: mmap(), brk()
• Protection: mprotect()
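A minimal sketch using the POSIX file-management calls listed above; the file name demo.txt is an arbitrary example.

```c
#include <fcntl.h>    /* open() and its flags */
#include <unistd.h>   /* read(), write(), lseek(), close(), unlink() */
#include <stdio.h>

int main(void) {
    /* Creation: open the file, creating and truncating it if necessary. */
    int fd = open("demo.txt", O_RDWR | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    /* Writing, then repositioning back to the start of the file. */
    write(fd, "hello\n", 6);
    lseek(fd, 0, SEEK_SET);

    /* Reading back what was just written. */
    char buf[16];
    ssize_t n = read(fd, buf, sizeof buf);
    if (n > 0) write(STDOUT_FILENO, buf, (size_t)n);

    close(fd);

    /* Deletion: remove the directory entry. */
    unlink("demo.txt");
    return 0;
}
```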
Performance Considerations
• Overhead:
• Context Switch: Switching between user mode and kernel mode
incurs overhead, including saving and restoring processor state.
• Latency: The time taken to transition modes and perform the system
call can impact performance, especially if system calls are frequent.
Security and Reliability
Security:
• System calls are carefully designed to prevent user programs from
causing harm or accessing unauthorized resources. The kernel
validates requests to ensure they are safe and permissible.
Reliability:
• Error Handling: The kernel can handle errors gracefully and return
appropriate error codes to user applications, maintaining system
stability.
Process Management
• Process: A process is an abstract model of a sequential program in execution.
The operating system can schedule a process as a unit of work.
• The term “process” was first used by the designers of MULTICS in the 1960s. Since then, the term “process” has been used somewhat interchangeably with ‘task’ or ‘job’. The process has been given many definitions, as mentioned below:
• 1. A program in Execution.
• 2. An asynchronous activity.
• 3. The ‘animated spirit’ of a procedure in execution.
• 4. The entity to which processors are assigned.
• 5. The ‘dispatchable’ unit.
• A process is more than a program code. A process is an ‘active’ entity, as opposed to a program, which is considered a ‘passive’ entity. A program is an algorithm expressed with the help of a programming language.
• Process, on the other hand, includes:
• 1. Current value of Program Counter (PC)
• 2. Contents of the processor’s registers
• 3. Value of the variables
• 4. The process stack (SP), which typically contains temporary data such as subroutine parameters, return addresses, and temporary variables.
• 5. A data section that contains global variables.
• 6. A process is the unit of work in a system.
PCB (Process Control Blocks)
• The operating system groups all information that it needs about a
particular process into a data structure called a process descriptor or
a Process Control Block (PCB). Whenever a process is created
(initialized, installed), the operating system creates a corresponding
process control block to serve as its run-time description during the
lifetime of the process. When the process terminates, its PCB is
released to the pool of free cells from which new PCBs are drawn.
• Information stored in a PCB typically includes some or all of the following:
• 1. Process name (ID)
• 2. Priority
• 3. State (ready, running, suspended)
• 4. Hardware state.
• 5. Scheduling information and usage statistics
• 6. Memory management information (registers, tables)
• 7. I/O Status (allocated devices, pending operations)
• 8. File management information
• 9. Accounting information.
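As a rough illustration, a PCB can be pictured as a C structure like the sketch below; the field names and sizes are assumptions for illustration only, not the layout used by any real kernel.

```c
#include <stdint.h>

typedef enum { READY, RUNNING, SUSPENDED } proc_state_t;

/* Illustrative Process Control Block; real kernels store far more detail. */
typedef struct pcb {
    int           pid;            /* 1. process identifier               */
    int           priority;       /* 2. scheduling priority              */
    proc_state_t  state;          /* 3. ready / running / suspended      */
    uint64_t      pc;             /* 4. saved program counter            */
    uint64_t      regs[16];       /* 4. saved general-purpose registers  */
    uint64_t      cpu_time_used;  /* 5. usage statistics                 */
    void         *page_table;     /* 6. memory-management information    */
    int           open_files[16]; /* 7/8. I/O and file management state  */
    struct pcb   *next;           /* link in a scheduler queue           */
} pcb_t;
```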
Operation on Processes
• Modern operating systems execute processes concurrently. Although there is a single Central Processing Unit (CPU), which executes the instructions of only one program at a time, the operating system rapidly switches the processor between different processes.
• Some of these resources (such as memory) are simultaneously shared by all processes; such resources are used in parallel by all running processes on the system. Other resources must be used by one process at a time and so must be carefully managed so that all processes get access to the resource.
• The most important example of a shared resource is the CPU, although most of the I/O devices
are also shared. For many of these shared resources the operating system distributes the time a
process requires of the resource to ensure reasonable access for all processes. Consider the
CPU: the operating system has a clock which sets an alarm every few hundred microseconds. At
this time the operating system stops the CPU, saves all the relevant information that is needed
to re-start the CPU exactly where it last left off (this will include saving the current instruction being executed, the contents of the CPU’s registers, and other data), and removes the process from the use of the CPU.
• The operating system then selects another process to run, returns the state of the CPU to what
it was when it last ran this new process, and starts the CPU again.
• 1. Processes Creation:

• 1. Name: The name of the program which is to run as the new process must be known.
• 2. Process ID and Process Control Block: The system creates a new process control
block, or locates an unused block in an array. This block is used to follow the execution
of the program through its course, keeping track of its resources and priority. Each
process control block is labeled by its PID or process identifier.
• 3. Locate the program to be executed on disk and allocate memory for the code
segment in RAM.
• 4. Load the program into the code segment and initialize the registers of the PCB with
the start address of the program and appropriate starting values for resources.
• 5. Priority: A priority must be computed for the process, using a default for the type of
process and any value which the operating system specified.
• 6. Schedule the process for execution.
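On UNIX-like systems, these steps are typically triggered from user space with the fork(), exec, and wait() system calls. The sketch below is a minimal illustration; the program /bin/ls is just an example.

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>     /* fork(), execlp() */
#include <sys/wait.h>   /* waitpid() */

int main(void) {
    pid_t pid = fork();              /* create a new process (new PCB, new PID) */
    if (pid < 0) { perror("fork"); return 1; }

    if (pid == 0) {
        /* Child: load the named program into the new process image. */
        execlp("/bin/ls", "ls", "-l", (char *)NULL);
        perror("execlp");            /* reached only if exec fails */
        _exit(127);
    }

    /* Parent: wait until the child terminates and collect its status. */
    int status;
    waitpid(pid, &status, 0);
    printf("child %d exited with status %d\n", (int)pid, WEXITSTATUS(status));
    return 0;
}
```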
2. Process State Transitions
3. Process Termination
• 1. Normal Termination occurs by a return from main or when
requested by an explicit call to exit.
• 2. Abnormal Termination occurs as the default action of a signal or
when requested by abort.
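A tiny sketch of the two cases in C; the `failed` flag is just an illustrative condition.

```c
#include <stdlib.h>

int main(void) {
    int failed = 0;                 /* illustrative condition */

    if (failed)
        abort();                    /* abnormal termination: raises SIGABRT */

    exit(EXIT_SUCCESS);             /* normal termination (same as returning from main) */
}
```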
Process Scheduling
• Process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process on the basis of a particular strategy.
• There are three types of process schedulers:
• Long term or Job Scheduler
• Short term or CPU Scheduler
• Medium-term Scheduler
• Objectives of Process Scheduling Algorithm
• Utilization of CPU at maximum level. Keep CPU as busy as
possible.
• Allocation of CPU should be fair.
• Throughput should be maximum, i.e. the number of processes that complete their execution per time unit should be maximized.
• Minimum turnaround time, i.e. time taken by a process to finish
execution should be the least.
• There should be a minimum waiting time and the process should
not starve in the ready queue.
• Minimum response time, i.e. the time at which a process produces its first response should be as low as possible.
• Terminologies Used in CPU Scheduling
• Arrival Time: Time at which the process arrives in the ready
queue.
• Completion Time: Time at which process completes its
execution.
• Burst Time: Time required by a process for CPU execution.
• Turn Around Time: Time Difference between completion time
and arrival time.
• Turn Around Time = Completion Time – Arrival Time
• Waiting Time(W.T): Time Difference between turn around time
and burst time.
• Waiting Time = Turn Around Time – Burst Time
Different Types of CPU
Scheduling Algorithms
FCFS
• CT: Completion Time
• TAT = CT - AT
• WT = TAT - BT
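As a small illustration of how these formulas are applied under FCFS, the sketch below computes CT, TAT, and WT for three made-up processes; the arrival and burst times are assumed example values, and the processes are assumed to be listed in arrival order.

```c
#include <stdio.h>

/* Illustrative FCFS calculation: processes are served in arrival order. */
int main(void) {
    int at[] = {0, 1, 2};                 /* arrival times (example data) */
    int bt[] = {4, 3, 1};                 /* burst times   (example data) */
    int n = 3, time = 0;
    float total_tat = 0, total_wt = 0;

    for (int i = 0; i < n; i++) {
        if (time < at[i]) time = at[i];   /* CPU idles until the process arrives */
        time += bt[i];                    /* completion time (CT) */
        int tat = time - at[i];           /* TAT = CT - AT */
        int wt  = tat - bt[i];            /* WT  = TAT - BT */
        printf("P%d: CT=%d TAT=%d WT=%d\n", i + 1, time, tat, wt);
        total_tat += tat;
        total_wt  += wt;
    }
    printf("avg TAT=%.2f avg WT=%.2f\n", total_tat / n, total_wt / n);
    return 0;
}
```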
SJF
FCFS
SRTF (Preemptive SJF)
| Process | Arrival Time | Burst Time |
|---------|--------------|------------|
| P1      | 2            | 1          |
| P2      | 1            | 5          |
| P3      | 4            | 1          |
| P4      | 0            | 6          |
| P5      | 2            | 3          |
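One possible SRTF schedule for this data is sketched below; it assumes that on a tie in remaining time the currently running process keeps the CPU, so the chart may differ slightly under other tie-breaking rules.

• Gantt chart: | P4 | P1 | P5 | P3 | P5 | P4 | P2 |
  0    2    3    4    5    7    11   16

| Process | Completion Time | Turn Around Time | Waiting Time |
|---------|-----------------|------------------|--------------|
| P1      | 3               | 1                | 0            |
| P2      | 16              | 15               | 10           |
| P3      | 5               | 1                | 0            |
| P4      | 11              | 11               | 5            |
| P5      | 7               | 5                | 2            |

• Average Turn Around Time = (1 + 15 + 1 + 11 + 5) / 5 = 6.6; Average Waiting Time = (0 + 10 + 0 + 5 + 2) / 5 = 3.4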
Priority (Non-preemptive)
Scheduling
Priority (preemptive) Scheduling
Round Robin
• Time Quantum (TQ) = 4
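Since the example data for this slide is not included, the following is a minimal sketch of Round Robin with a time quantum of 4; the burst times are made-up illustration values and all processes are assumed to arrive at time 0.

```c
#include <stdio.h>

/* Minimal Round Robin simulation with time quantum TQ = 4.
   Example data only: three processes, all arriving at time 0. */
int main(void) {
    const int TQ = 4;
    int bt[]  = {5, 8, 3};                 /* burst times (example data) */
    int rem[] = {5, 8, 3};                 /* remaining CPU time per process */
    int n = 3, time = 0, done = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (rem[i] == 0) continue;
            int slice = rem[i] < TQ ? rem[i] : TQ;   /* run for at most one quantum */
            time += slice;
            rem[i] -= slice;
            if (rem[i] == 0) {
                done++;
                /* AT = 0 for all processes, so TAT = CT and WT = TAT - BT. */
                printf("P%d: CT=%d TAT=%d WT=%d\n",
                       i + 1, time, time, time - bt[i]);
            }
        }
    }
    return 0;
}
```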
Highest Response Ratio
Next(HRRN)
• Response Ratio = (W + S) / S, where W is the waiting time and S is the service (burst) time.
• The criterion for HRRN is the response ratio, and the mode is non-preemptive.
• HRRN is considered a modification of Shortest Job First that reduces the problem of starvation.
• In comparison with SJF, during the HRRN scheduling
algorithm, the CPU is allotted to the next process which
has the highest response ratio and not to the process
having less burst time.
| Process | Arrival Time | Burst Time |
|---------|--------------|------------|
| P1      | 0            | 8          |
| P2      | 1            | 4          |
| P3      | 2            | 9          |
| P4      | 3            | 5          |

• Response Ratio = (Waiting Time + Burst Time) / Burst Time

• At time 0: Only P1 is available, so execute P1. Waiting Time for P1 = 0, Response Ratio for P1 = (0 + 8) / 8 = 1.0.

• At time 8 (P1 finishes):
  - P2: Waiting Time = 7, Response Ratio = (7 + 4) / 4 = 2.75
  - P3: Waiting Time = 6, Response Ratio = (6 + 9) / 9 = 1.67
  - P4: Waiting Time = 5, Response Ratio = (5 + 5) / 5 = 2.0
  - Execute P2 (highest response ratio).

• At time 12 (P2 finishes):
  - P3: Waiting Time = 10, Response Ratio = (10 + 9) / 9 ≈ 2.11
  - P4: Waiting Time = 9, Response Ratio = (9 + 5) / 5 = 2.8
  - Execute P4 (highest response ratio).

• At time 17 (P4 finishes): Only P3 remains, so execute P3.

• Gantt chart: | P1 | P2 | P4 | P3 |
  0    8    12   17   26

• Completion Times: P1 = 8, P2 = 12, P4 = 17, P3 = 26
• Waiting Times (CT - AT - BT): P1 = 0, P2 = 7, P4 = 9, P3 = 15
• Average Waiting Time: (0 + 7 + 9 + 15) / 4 = 7.75
• 1. FCFS (First-Come, First-Served)
  - Allocation: Processes are allocated the CPU based on their arrival time.
  - Complexity: Simple and easy to implement.
  - Average Waiting Time (AWT): Large.
  - Preemption: No.
  - Starvation: No.
  - Performance: Slow.

• 2. SJF (Shortest Job First)
  - Allocation: The CPU is allocated to the process with the shortest CPU burst time.
  - Complexity: More complex than FCFS.
  - AWT: Smaller than FCFS.
  - Preemption: No.
  - Starvation: Yes (longer jobs may wait indefinitely).
  - Performance: Minimum average waiting time.

• 3. LJFS (Longest Job First)
  - Allocation: The CPU is allocated to the process with the longest CPU burst time.
  - Complexity: More complex than FCFS.
  - AWT: Varies (depends on factors like arrival time).
  - Preemption: No.
  - Starvation: Yes (shorter jobs may starve).
  - Performance: Longer turnaround time.

• 4. LRTF (Longest Remaining Time First)
  - Allocation: Similar to LJFS but preemptive (can interrupt shorter jobs).
  - Complexity: More complex than FCFS.
  - AWT: Varies (depends on factors like arrival time).
  - Preemption: Yes.
  - Starvation: Yes (shorter jobs may starve).
  - Performance: Preference for longer jobs.

• 5. SRTF (Shortest Remaining Time First)
  - Allocation: Similar to SJF but preemptive.
  - Complexity: More complex than FCFS.
  - AWT: Varies (depends on factors like arrival time).
  - Preemption: Yes.
  - Starvation: Yes (longer jobs may starve).
  - Performance: Preference for shorter jobs.

• 6. RR (Round Robin)
  - Allocation: Processes are assigned the CPU in the order they arrive, with a fixed time quantum.
  - Complexity: Depends on the size of the time quantum.
  - AWT: Large compared to SJF and priority scheduling.
  - Preemption: Yes.
  - Starvation: No (each process gets a fixed time slice).
  - Performance: Fairly balanced.

• 7. Priority (Preemptive)
  - Allocation: Processes are scheduled based on priority; higher-priority tasks execute first.
  - Complexity: Less complex than non-preemptive priority scheduling.
  - AWT: Smaller than FCFS.
  - Preemption: Yes.
  - Starvation: Yes (lower-priority jobs may starve).
  - Performance: Generally good.

• 8. Priority (Non-Preemptive)
  - Allocation: Similar to preemptive priority but does not interrupt running jobs for higher-priority ones.
  - Complexity: Less complex than preemptive.
  - AWT: Smaller than FCFS.
  - Preemption: No.
  - Starvation: Yes (lower-priority jobs may starve).
  - Performance: Beneficial for batch systems.

• 9. MLQ (Multi-Level Queue)
  - Allocation: Processes are assigned to queues based on priority.
  - Complexity: More complex than simple priority scheduling.
  - AWT: Smaller than FCFS.
  - Preemption: No.
  - Starvation: Yes (lower-priority queues may starve).
  - Performance: Good overall.

• 10. MLFQ (Multi-Level Feedback Queue)
  - Allocation: Similar to MLQ but allows processes to move between queues based on their behaviour and needs.
  - Complexity: Most complex; depends on the time quantum size.
  - AWT: Smaller than other types in many cases.
  - Preemption: No.
  - Starvation: No.
  - Performance: Generally good.
