
Dr Shalini Bhartiya Operating System-Unit 1 and 2 VIPS

What is an Operating System?

An Operating System (OS) is a background program that starts automatically the moment a
computer is switched on and remains active until the machine is switched off. It controls and
manages all the hardware, services and applications of a computer. The components broadly
managed by the OS include memory, CPU, disk space, peripheral devices, software,
applications, processes, files, communication between components, security, networking, etc.

The OS is the medium through which we communicate with a computer or mobile device;
without it, these devices are just dumb machines. Operating systems originated back in the
1950s, before which processing on digital computers was done with the help of a dedicated
computer operator. The operator would provide the resources needed to execute programs,
one at a time. Over the years, many types of OS evolved alongside technological advancements
in hardware and software. Various types of OS are available and can be chosen as per
individual or institutional need. Windows, macOS, iOS, and Android are a few examples of
operating systems.

Figure: General Architecture of Operating System Source: cis2.oc.ctc.edu

An operating system generally follows a layered approach, where the outer layer has the least
privileges and complexity. The outermost layer is the 'Application' layer, through which the
user makes requests and receives responses from the system. Below it is the 'Shell', an
interpreter: it converts the user's request from a specific language into machine language and
provides it to the kernel for further processing. The 'Kernel' lies at the next level and acts as a
bridge between the hardware and the user. The kernel provides the resources required to
process each incoming request.

Architectures of OS

Monolithic OS Architecture: The oldest architecture used for operating systems is the
Monolithic Architecture. The kernel manages every component of the operating system and
can directly control, communicate with and access any component without restrictions or
constraints. The entire operating system works in kernel space. Each application has its own
address space, but all user services and kernel services are implemented in the same address
space. OS functions like process scheduling and memory management live together in the
kernel, which makes the design simple, fast and efficient, as all information is available in one
place. The downside of this design is the larger size of the kernel.

It is difficult to isolate errors, if any. Also, it is at a higher risk of damage due to intrusion and
malicious attacks.

Examples of Monolithic Operating Systems: CP/M, DOS, BSD, Linux

Figure: Monolithic OS Architecture (Source: https://www.technologyuk.net)

Layered OS Architecture: It was introduced by Dijkstra during the 1960s. The OS is divided
into layers such that each layer performs different functions. The privileges of the layers
increase from top to bottom: the User Application layer has the least privilege, whereas
Processor Scheduling has the highest. The highest-privilege layer deals with interrupt handling
and context switching and interacts with the hardware. The other layers manage memory
allocation, file storage and retrieval, device drivers, user management, etc.
Each layer can communicate only with the layer directly above and the layer directly below
it. This is a modular approach where modifications or additions at one layer can be made
without affecting the other layers. The architecture is more consistent and simpler to debug
and test. The downsides are performance degradation due to multiple layers, the separation of
inter-related tasks into different layers, and no communication between non-adjacent layers.

Examples of Layered Operating System: Multics, Microsoft Windows NT



Figure: Layered OS Architecture (Source: https://www.technologyuk.net)

Microkernel OS Architecture: This is the most recent of these architectures. The kernel
shrinks to contain only a very small number of services, such as process synchronization,
inter-process communication, and memory and virtual memory management. The rest of the
services live in system application programs, which interact with the kernel using
message passing. A microkernel is small, scalable, highly modular, portable and secure. The
downside is performance degradation due to increased inter-module communication and more
complex code.

Examples of Microkernel Operating Systems: Mach, QNX, Symbian, L4Linux

Figure: Microkernel OS Architecture (Source: https://www.technologyuk.net)



Hybrid OS Architecture: Hybrid architecture combines the advantages of the previously
described designs. It is easy to manage, highly scalable, secure, and expandable. Borrowing
the modular approach of the microkernel and the layered approach of the Layered OS, it
manages all operations in just three layers:

• Hardware abstraction layer - acts as an interface between the kernel and the hardware
• Microkernel layer - responsible for process scheduling, memory management and
inter-process communication
• Application layer - acts as an interface between user space and the kernel layer. All the
functionality for running user applications lies in this layer.

Components of OS

Though there are different types of operating systems, most of them share similar
components. These components bind all the parts of the computer system so that they work in unison.

Kernel: The kernel is the most privileged program in the system. It is the brain of the
operating system and lies at its core, controlling and managing all the hardware, software and
other functions of the OS. On booting, the kernel is loaded first; it then loads the other
essential components that deliver OS services to end-users. There are various types of kernels:

• Monolithic kernel
• Microkernels
• Exokernels
• Hybrid kernels

Shell: The shell is an interpreter that converts commands from the user into machine
instructions. It takes input from the user, interprets it, passes it to the kernel for execution and
displays the output back to the user. It lies between the user application interface and the
kernel. A shell can be a command-line shell or a graphical shell. Various shells are available:
Korn Shell, C Shell, Bourne Shell, to name a few.
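As a sketch of the shell's role, the toy interpreter below maps typed commands onto stand-in "kernel services". The function names and commands here are hypothetical, purely for illustration of the dispatch idea, not the API of any real shell:

```python
# Stand-ins for kernel services (hypothetical names, for illustration only).
def kernel_list_files():
    # Pretend this asks the kernel for directory entries.
    return ["notes.txt", "report.pdf"]

def kernel_echo(*args):
    # Pretend this asks the kernel to write text to the terminal.
    return " ".join(args)

# The shell's core job: map user-typed command names onto kernel services.
COMMAND_TABLE = {
    "ls": kernel_list_files,
    "echo": kernel_echo,
}

def shell_interpret(line):
    """Parse one command line and dispatch it to the kernel layer."""
    parts = line.split()
    if not parts:
        return ""
    cmd, args = parts[0], parts[1:]
    if cmd not in COMMAND_TABLE:
        return f"{cmd}: command not found"
    return COMMAND_TABLE[cmd](*args)

print(shell_interpret("echo hello world"))  # hello world
print(shell_interpret("ls"))
```

A real shell additionally handles quoting, pipes, environment variables and job control, but the loop of read, parse, dispatch, display is the same.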

Application Program Interface (API): An API is a program or piece of code that enables
communication between two programs running on the same machine or network. It allows the
services of the operating system to be exposed consistently in different contexts and over
multiple channels.

File Systems: The file system manages the data stored on the computer. It is an indexed
mechanism for storing, retrieving and manipulating data. Different types of file systems can
be deployed, such as FAT, FAT32, NTFS, Ext2, Ext3, etc., depending on the OS installed. The
file system is responsible for maintaining the consistency, security and fast access of the data.

Interrupts: The operating system monitors the operations of the computer and performs
troubleshooting when needed. Interrupts are signals or events that tell the OS to suspend
execution of the current process because something requires attention, for example an error or
a hardware event. The Interrupt Service Routine (ISR) is responsible for finding the cause of
the interrupt and handling it. Interrupts can be hardware or software interrupts.
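Conceptually, the OS keeps a table mapping interrupt numbers to their ISRs. The toy dispatch below illustrates that idea; the interrupt numbers and ISR behaviours are invented for the example and do not correspond to any real architecture:

```python
# Illustrative ISRs: each one names what the OS would do for that event.
def isr_divide_by_zero():
    return "abort process: divide by zero"

def isr_keyboard():
    return "read scancode from keyboard buffer"

def isr_timer():
    return "run scheduler: time slice expired"

# A toy interrupt-vector table mapping interrupt numbers to ISRs.
INTERRUPT_VECTOR = {
    0: isr_divide_by_zero,   # software (exception) interrupt
    1: isr_keyboard,         # hardware interrupt
    2: isr_timer,            # hardware interrupt
}

def handle_interrupt(irq):
    """Suspend current work, look up the ISR for this interrupt, run it."""
    isr = INTERRUPT_VECTOR.get(irq)
    if isr is None:
        return "spurious interrupt: ignored"
    return isr()

print(handle_interrupt(2))  # run scheduler: time slice expired
```

On real hardware the vector table lives at a fixed location or behind a controller, and the CPU saves the interrupted context before jumping to the ISR; the table-lookup-and-dispatch structure is the common core.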

Device Drivers: A device driver is a group of files that enables one or more hardware devices
to communicate with the computer's operating system. Without drivers, the computer could not
correctly send data to and receive data from hardware devices such as a printer.

Functions of OS:

Figure: Functions of Operating System. Source(scaler.com)

Memory Management: Computer memory is divided into cache, primary and secondary
memory. The OS allocates memory space to data and programs for further processing, and
manages memory shared by multiple users in a secure manner.

Process Management: From main memory, a process goes to the CPU for execution.
Selecting the order of processes, allocating processor time to each process and tracking the
status of processes under execution are some of the tasks of process management.

File Management: Files are stored on permanent storage. Storing, retrieving, editing, deleting,
updating files and transferring them from one device to another is the job of File Management.

Device Management: Various hardware components such as the hard disk, printer, USB
drives, ports, memory etc. need to communicate from time to time to accomplish tasks. The OS
enables the devices to communicate and decides the allocation and deallocation of these
devices as per the requests generated by users.

I/O Management: Input and output devices are managed by the OS. Whenever the user
wishes to input data into the computer, the OS signals the input devices and buses to get
ready to accept it, and does the same for the output devices when the system needs to
display information.

Security: The OS uses firewalls to protect data from outside attacks. It has robust
authentication modules that grant access only to registered and valid users. Among users, it
can allocate different rights and privileges based on each user's role in the organization.

Command Interpretation: The OS acts as a bridge between high-level and low-level
languages. The computer understands only binary; the OS interprets instructions written in a
high-level language and provides the resources required to process them.

Networking: Handling requests from multiple clients on a network and providing smooth,
secure communication between remote devices are some of the services offered by the
operating system.

Communication: Processes, user and kernel services, and the various peripherals need to
communicate in order to complete the tasks assigned to them. The OS enables this
communication through various techniques and acts as a mediator between the user and the
hardware.

Job Scheduling: The OS decides the order in which processes are scheduled for execution.
It sets job priorities, pre-emption policies, and scheduling techniques for multiple processes
to be executed simultaneously.
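One common technique is priority scheduling: ready jobs sit in a priority queue and the dispatcher repeatedly picks the most urgent one. The sketch below illustrates the idea with a heap; job names and the "lower number = higher priority" convention are assumptions for the example:

```python
import heapq

def schedule(jobs):
    """Return job names in dispatch order.

    jobs: list of (name, priority) pairs; lower number = higher priority.
    """
    # heapq orders tuples by their first element, so (priority, name)
    # pops the most urgent job first.
    heap = [(priority, name) for name, priority in jobs]
    heapq.heapify(heap)
    order = []
    while heap:
        _, name = heapq.heappop(heap)
        order.append(name)
    return order

jobs = [("backup", 3), ("pager-alert", 1), ("report", 2)]
print(schedule(jobs))  # ['pager-alert', 'report', 'backup']
```

Real schedulers refine this with pre-emption, aging to avoid starvation, and per-class policies, but the ready queue ordered by priority is the central data structure.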

Monitoring Health and Error Detection: The OS monitors activities at run time and sends
timely messages to users and administrators on error detection. It prevents illegal access or
activity using interrupts and system abort mechanisms.

Advantages of Operating System

User Friendly: Operating systems became user friendly after the introduction of Windows. It
was among the first to offer a Graphical User Interface (GUI) with point-and-click and
drag-and-drop features, sparing users from memorizing commands for simple tasks like
opening a folder or moving a file. Communication between the machine and the user became
dramatically faster and easier.

Multitasking: An operating system can handle several tasks simultaneously, allowing users to
carry out different tasks at the same time. For example, a user can listen to their favourite song
while working in MS Office or writing code.

Versatile: An operating system is versatile as it can be installed on several other computers.

Provides authenticated access: The OS employs authentication to determine whether or not
the user running a programme is authorised to execute it, so external software is not needed to
secure data. Username and password, biometric verification, and OTPs are some of the
methods used to authenticate users.

Resource Sharing: Operating systems allow resources to be shared between users. Printers,
modems and fax machines are a few devices shared by users working on a network. Beyond
hardware, apps, images and media files can also be shared with other devices through the OS.

Timely Updates: Updates improve security, fix bugs, and add features to the machine. With
the advent of the Internet, we are 'connected' all the time, and the chance of threats and
exploits rises if the system is not updated in time to keep the software and its users safe.
Ensure that your computer, mobile phone, or tablet is running the latest version of its OS to
protect your devices and data from cybersecurity issues. On receiving an OS update
notification from Apple, Microsoft, or any Linux distro, read up on what it fixes and then
install the update within a reasonable amount of time.

Disadvantages of Operating System

Expensive: Unless it is open source, an operating system is typically costly. Although free
operating systems are an option, they are not always the best choice because they may lack
functionality. The cost rises even further if the operating system has GUI capabilities.

Need for Technical Expertise: Although an OS is easy to use, in the case of a crash or when
troubleshooting, the user needs some knowledge and skill with both the operating system and
the computer. Technical expertise, analytical skills and continuous learning are needed to
perform such OS tasks and functions.

Difficult to Install: An operating system is a complex program that keeps the hardware and
software components of a computer system coordinated and functioning, which can make its
installation and configuration non-trivial.

Prone to Threats: When you use an operating system, the risk of getting a virus is always
present. Users may unknowingly download malicious programs, visit malicious websites, or
open email attachments that contain viruses; all of these can make a computer vulnerable.
Beyond virus attacks, the OS may crash for several reasons, such as installation of faulty
software or fatal errors in code. This can result in functions failing to execute and permanent
loss of data.

Complex: Operating systems are difficult to comprehend, since the languages used to develop
them are unfamiliar to most users. As a result, if a complex problem occurs that requires
knowledge of the underlying language, users may be unable to understand and resolve it on
their own.

Non-Transferrable: Operating systems are non-transferrable, meaning you can't simply take
the OS on a hard disk and install it on another computer. Doing so requires uninstalling the OS
from the previous system.

Types of OS

The most popular operating systems that evolved over time include:

Batch Operating System: In the early 1950s, General Motors Research Laboratories
(GMRL) introduced the first single-stream batch processing systems. A batch system executes
only one job at a time, and data is submitted in batches or groups. Jobs with similar
requirements are grouped together and termed a batch. The jobs are first stored on a disk to
form a pool of jobs, queued in the order they are received, and executed on a
First-Come-First-Served basis. Older batch OSes were non-interactive; today, the user can
schedule a job and receive notifications at its start and end.

Processes that require minimal human intervention and can run on their own are combined
and provided to the processor for execution. These lengthy, time-consuming processes usually
run in the background, allowing users to work on other tasks in the foreground. Backups, data
entry and payroll systems are a few examples of batch workloads.
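The First-Come-First-Served discipline above can be sketched numerically: each job waits for every job ahead of it in the queue, then runs to completion without pre-emption. The burst times below are invented for illustration:

```python
def fcfs(burst_times):
    """Simulate First-Come-First-Served batch execution.

    burst_times: CPU time of each job, listed in arrival order.
    Returns per-job waiting and turnaround times.
    """
    clock, stats = 0, []
    for burst in burst_times:
        waiting = clock            # time spent queued before starting
        clock += burst             # job runs to completion, no pre-emption
        stats.append({"waiting": waiting, "turnaround": clock})
    return stats

print(fcfs([5, 3, 8]))
# job 1 waits 0 (turnaround 5), job 2 waits 5 (turnaround 8),
# job 3 waits 8 (turnaround 16)
```

The simulation also shows the "infinite wait" disadvantage discussed below: if one burst never finishes, every job behind it waits forever.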

Advantages

• Optimum utilization of the processing unit: CPU idle time is reduced to a great extent, as
queued jobs can be sent for execution automatically the moment the previous job completes
and terminates. Jobs can also be scheduled to coincide with the CPU's idle time, say at
midnight.
• Higher execution speed: A batch OS can work offline, reducing the load on the processor,
and large, repetitive tasks can be managed easily. To speed up processing, a batch can be
partitioned into a number of processes and executed in several stages.
• No user interaction: A batch OS performs job processing repetitively, without user
interaction most of the time.

Disadvantages

• Difficult to track errors: Program errors, especially logical errors, cannot be detected
during execution, and debugging the programs takes more time.
• Chance of indefinite waits: Jobs are not bound to any time interval, so failure or
non-completion of one job can leave the other jobs waiting in the queue indefinitely.
• Expensive: Reducing human intervention and optimizing processor usage can sometimes
be costly. If an issue arises at a certain point, the entire run has to be repeated, since batch
processing is irreversible.

Types of Batch OS

Simple Batch OS: The basic operation of a batch OS is to create a pool of jobs ready for
execution and send them one by one to the processor. In a simple batch OS, the user prepares
a job that includes the program, control information and data. There is no interaction between
the user and the system; the job is submitted to the computer operator, who creates a batch of
similar jobs on an input device. A special monitor program then manages the programs in the
batch. For example, bank transactions, payroll systems, etc.

Figure: Simple Batch Operating System (jobs 1 to n are grouped into batches and passed
through the operating system to the CPU)

Multi-Programmed Batch OS: Programs under execution have different processing needs:
input-output operations, computation, and storage in memory or on disk are all frequently
administered during job execution. In a multi-programmed batch OS, jobs are grouped in a
manner that best utilizes the CPU. For example, when the currently executing process enters a
waiting state, the CPU is reclaimed and given another task to process.

In the figure below, program A runs for some time and then enters a waiting state. In the
meantime, program B begins its execution, so the CPU does not waste its resources and gives
program B an opportunity to run.

Source: geeksforgeeks.com

Figure: Multiprogramming Batch Operating System (main memory holds the operating
system, jobs 1 to n, and empty space)
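The benefit of hiding I/O waits behind computation can be made concrete. The sketch below compares the two-job scenario from the text under simplified assumptions (one CPU, I/O proceeds in parallel with computation, and the specific durations are invented):

```python
# Job A: compute 4, wait on I/O for 6, compute 2. Job B: pure compute 5.
A_CPU1, A_IO, A_CPU2 = 4, 6, 2
B_CPU = 5

# Uniprogramming: one job at a time; the CPU idles during A's I/O wait.
uniprogrammed = (A_CPU1 + A_IO + A_CPU2) + B_CPU

# Multiprogramming: B runs inside A's I/O window.
# A computes 0-4, does I/O 4-10; B computes 4-9; A resumes at 10, ends at 12.
b_start = A_CPU1
a_resume = max(A_CPU1 + A_IO, b_start + B_CPU)  # CPU free AND I/O finished
multiprogrammed = a_resume + A_CPU2

print(uniprogrammed, multiprogrammed)  # 17 12
```

Both jobs finish at time 12 instead of 17 because program B's entire burst is absorbed into time the CPU would otherwise have spent idle.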

Multiprocessing OS: Two or more processors simultaneously process different parts of a
program. A program under execution is divided into multiple active processes, and these parts
are provided to CPUs that communicate closely with the other peripherals of the computer,
such as memory and the system bus. This is a tightly coupled system in which a master
processor controls the system. Two types of multiprocessing OS are designed:

Symmetric Multiprocessing: When the processors share computer memory, the arrangement
is termed symmetric multiprocessing. All processors are treated equally rather than dedicating
specific tasks to each processor. The OS keeps track of the tasks assigned to the processors so
that the result of each transaction can be combined on completion.

Asymmetric Multiprocessing: Here the processors are assigned specific operations; for
example, system maintenance and application processing are two separate tasks, each of which
may be assigned to a processor that works independently of the others. Little or no
communication and data sharing usually occurs in such a system.

Another way to share a processor is 'time-sharing', more often called 'multitasking'. The
computing resources are divided and shared among multiple users. Each user's process
executes in its allocated time interval, after which the processor switches to the next user's
process in the queue. Processes that could not complete within their allocated interval wait for
execution in the next cycle. This type of OS lets shorter processes finish with very low waiting
time. Time-sharing carries the extra overhead of maintaining the states of incomplete
processes, creating virtual spaces and swapping between main memory and virtual memory
from time to time. Another overhead of a time-sharing OS is ensuring that processes do not
overlap or interfere, and tracking memory usage.
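The quantum-then-requeue cycle described above is round-robin scheduling, and can be sketched in a few lines. The process names, burst times and quantum are illustrative:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate round-robin time-sharing.

    bursts: {process name: remaining CPU time needed}.
    Returns process names in order of completion.
    """
    queue = deque(bursts.items())
    finished = []
    while queue:
        name, remaining = queue.popleft()
        remaining -= quantum               # process runs for one time slice
        if remaining > 0:
            queue.append((name, remaining))  # swapped out; waits for next cycle
        else:
            finished.append(name)            # done within this slice
    return finished

# Short process P2 finishes first despite being submitted second.
print(round_robin({"P1": 10, "P2": 3, "P3": 6}, quantum=4))  # ['P2', 'P3', 'P1']
```

This shows the property claimed in the text: short processes exit quickly, while long ones cycle through the queue several times, each pass paying the state-saving overhead.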

Multithreading: Applications designed to exploit multiprocessing are threaded: a program is
divided into 'threads', each executed independently and concurrently. On completion, all the
threads join back together in the correct order or sequence of the program. This concept is
termed 'multithreading'. A single processor's computing power is used to its optimum: the
processor is virtually divided into multiple processors, each executing a thread independently
of the others.

Advantages

• Less idle time: The CPU's idle time is reduced, and processes get an equal opportunity to
execute irrespective of their size and order.
• Quick response: Processes ready for execution wait a shorter time for their share of CPU
time.
• Parallel Computing: With increase in the number of processors, more work is done in
less time. This increases the efficiency, speed and throughput of the processes.
• Reliability: In a multiprocessing environment, if one processor fails, the load can be
shifted to other microprocessors. As a result, the processing speed might get affected
but the complete breakdown is avoided.
• Continuity of work: Processes execute in parallel, and if any process or thread takes longer
to finish, the other processes and threads continue to execute without dependencies.

Disadvantages

• Larger memory need: Since multiple processes are handled at once, a larger memory space
is needed.
• Security and integrity of data: Complex algorithms and techniques are required to ensure
data remains confidential and private to its specific process, especially in shared memory
regions.
• Process Communication: The inter-process communication raises various issues like
scheduling, Deadlock occurrence, overlapping of memory addresses, data loss etc., if
not handled intelligently and accurately.

• Expensive: The setup cost related to hardware and the maintenance of a multiprocessing
OS is very high.

Real-Time OS: A disadvantage of time-sharing is the delayed processing of longer processes.
Lengthy but critical processes that must complete in real time require priority-based
processing. Real-Time Operating Systems (RTOS) are designed to execute processes within a
particular time interval: it is mandatory to complete the task and generate correct results
within the predefined deadline. RTOS are concurrent and can respond to several processes at
one time. Examples of RTOS applications: air traffic control systems, heart pacemakers,
robots, etc.

The RTOS is of three types: Hard RTOS, Soft RTOS and Firm RTOS.

RTOS

Hard RTOS Soft RTOS Firm RTOS

Hard RTOS: Critical tasks must be completed within a defined range of time, and Hard
RTOS is designed to guarantee this. It manages the time allocated to a process judiciously,
i.e., the task is neither completed before time nor delayed. E.g., medical critical-care systems,
aircraft systems.

Soft RTOS: In a Soft RTOS, a deadline is defined for each task and response time matters,
but it is not critical: tasks completed with a short delay are acceptable. E.g., online
transaction systems, price quotation systems.

Firm RTOS: Firm RTOS applies to tasks that are less critical, where an occasional missed
deadline does not largely impact the overall outcome, although a late result carries no value.
E.g., multimedia applications.
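The three flavours differ chiefly in how a missed deadline is treated, which the sketch below encodes directly. The labels follow the text; the return strings and time units are illustrative:

```python
def assess(kind, finish_time, deadline):
    """Classify a task result under hard / soft / firm real-time rules."""
    if finish_time <= deadline:
        return "result accepted"
    if kind == "hard":
        return "system failure"             # a miss is catastrophic
    if kind == "firm":
        return "result discarded"           # late result has no value
    return "result accepted with degraded value"  # soft: small delay tolerated

print(assess("hard", 12, 10))  # system failure
print(assess("soft", 12, 10))  # result accepted with degraded value
print(assess("firm", 12, 10))  # result discarded
```

A pacemaker (hard) cannot tolerate a late pulse at all; a ticket-booking system (soft) degrades gracefully; a video player (firm) simply drops a frame that arrives too late to display.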

Advantages of RTOS

• Resource utilization: Devices and their capabilities are utilized to their maximum,
minimizing resource wastage. Critical processes are given higher priority and separated
from non-critical processes.
• Reduced Task Shifting Time: Switching between tasks is set to minimum as RTOS
can be entirely event-driven.
• Focus on active application: Applications under execution are given the highest priority
for resource sharing; tasks waiting in the queue are considered thereafter. An RTOS lets
developers focus more on application development than on resource management.
• RTOS in Embedded System: Due to the small size of programs, RTOS is successfully
embedded in systems like transport, smart devices etc.
• Error free: RTOS, especially Hard RTOS, are designed to operate as error free as possible;
interrupts are handled without delay and within a definite time period.
• Memory Allocation: RTOS manages memory allocation in the most efficient manner.

Disadvantages of RTOS:

• Higher waiting time: The task under execution is given full attention so that it completes
successfully without errors. Meanwhile, other tasks waiting to execute may wait an
indefinite period before they get to run.
• Heavy use of system resources: An RTOS uses a great many resources to manage multiple
tasks, which increases procurement cost considerably.
• Complex algorithms: The algorithms that let an RTOS meet its objectives of accuracy,
timely delivery of results and error-free execution are highly complex. Writing such
algorithms requires deep technical knowledge and expertise.
• Device drivers and interrupt signals: An RTOS requires specific device drivers and
interrupt signals for fast response at run time. Since time-critical processes and operations
must be processed within interrupt handlers, these drivers and signals enable more
deterministic interrupt behaviour.
• Thread priority: Since task switching is kept to a minimum in an RTOS, thread
prioritization is comparatively weak.

Distributed OS: A distributed OS, also described as 'loosely coupled', is a system of
interconnected computers holding different components of the OS. Each computer has its own
local processor and memory, possibly of varying size and functionality. These machines
coordinate and communicate on demand from end-users; to the users, it appears as a single
coherent operating system responding to their requests. Applications are distributed and run
on multiple computers, thereby sharing resources and reducing overload. This enables faster
response times, greater computational capacity, and handling of heterogeneity in hardware
and software. Solaris, OSF/1, MICROS and LOCUS are a few examples of distributed OS.

A distributed OS moves data over high-speed buses or telephone lines. A process under
execution may take input from computer X and show output on computer Y, and processing
can be divided between processors on different machines. Moreover, the hardware and
applications used for processing may come from any third device connected to the network.
The distributed OS takes care of the connectivity of the computers and manages the entire
network.

Types of Distributed OS: Apart from Client-Server and Peer-to Peer OS systems, two more
types of Distributed OS are available.

Three-tier: The client’s information is stored in the intermediate tier instead at the client’s
side. This simplifies the development and speeds up the accessibility of information. Mostly,
web applications are developed using Three-tier system.

N-tier: When a server or application has to relay requests to other enterprise services on the
network, n-tier systems are used.

Advantages

• Virtualization: It appears that multiple programs run simultaneously with dedicated
resources until execution terminates. In reality, with a distributed OS, programs run within
containers or virtual spaces, to which only specific resources are allocated. Several such
containers can be created on each operating system, and each container may hold many
programs.

• Scalability: With addition of nodes in the distributed network system, the system’s
performance should not be depleted. Distributed OS manages additions of nodes and
balances the workload distribution without affecting the throughput of the network.
• Fault tolerant: Distributed OS is fault tolerant due to its extensive features of multiple
processors, resources and systematic communication links and message passing
mechanisms.
• Efficient: The programs are modularized into small programs and processes and
provided to parallel processors for computing. It helps to cut down on the time needed
to solve the otherwise complex problems.
• Data & Resource Sharing: Distributed OS manages secured sharing of resources
among multiple processes. It performs the process scheduling activities to allocate
resources between sharable and non-sharable processes. Synchronization of tasks and
inter-process communication are other roles of Distributed OS.
• Failure Resistant: Due to multiple instances of components, like, CPU, disk, memory,
interfaces, any failure of components does not bring the system down as other instances
take over the load. Distributed systems deal with both hardware and software errors.

Disadvantages

• Scheduling complexities: Designing a system where multiple processes execute
simultaneously and require resources at different intervals of time is complex. In addition,
the systems are distributed, each with individual resources that must communicate in a
timely way to complete the assigned tasks. Complexities in maintaining the states of
processes under execution, handling priorities, avoiding deadlocks etc. cannot be avoided.
• Inverse effect on throughput: Unless factors like load balancing, concurrency in
computation and minimized overheads are continually improved, the throughput and
efficiency of the system suffer.
• Complex and costly security measures: Processes share common resources and memory
spaces, so there exists a chance of data intermixing and of access to restricted memory.
Beyond data, network security is also a major concern. Robust, fool-proof security
measures are highly complex and carry huge cost.
• Expensive setup: Deployment cost, for both hardware and software, is very high, and the
programming environments used to build such systems are still evolving.
• Higher maintenance requirements: As the peripherals of a distributed OS are remotely
located, maintaining the system is difficult and expensive.

Network OS: A Network Operating System (NOS) is designed to manage multiple
autonomous computers placed at multiple sites. Each autonomous system has its own local
storage, operating system and processing unit. The NOS, installed on a highly powerful
computer, provides sharing of resources, memory and applications, and secured access, to the
connected computers. A NOS provides protocol support, security support, web services and
remote access support to end-users. Examples of NOS: Novell NetWare, Microsoft Windows
Server (2000, 2003, 2008, 2012, 2016, 2019, 2022), Unix, Linux, Mac OS X, etc.

The NOS is of two types: Peer-to-Peer and Client-Server

Peer-to-Peer NOS: A peer-to-peer NOS manages a group of computers or nodes with equal
operational and functional rights. Each system is capable of performing similar kinds of tasks
and possesses its own local storage and memory. There is no central managing unit to control
and manage the operations of these computers; the peer-to-peer NOS enables communication
between nodes and the sharing of data and other resources. A hub or switch joins the
computers to form the network. Operating systems such as Macintosh OSX, Linux, and
Windows XP can function as peer-to-peer network operating systems.

Advantages

• Less Initial Cost: The setup cost is not very high.
• Easy to Install: It is very easy to configure. Integration of new technology and hardware is easy and simple.
• Negligible Technical Dependency: The NOS is a connection of various nodes having their individual resources. Technically, each node administers its own resources and no dedicated management of resources is needed.
• No System Crash: A Peer-to-Peer NOS is an interconnection of independently managed computer nodes. Failure of one node will not make the entire system crash.

Disadvantages

• Compromise on Performance: Users can access any node in a Peer-to-Peer network, which can slow down the performance of the computer nodes at peak load.
• No Centralized Control: The system has neither centralized control over the network nor centralized storage.
• Vulnerable to Security Breach: Since the data moves over the network, it is prone to various vulnerabilities and security breaches.
• No Backup Facility: Backups of data and important notifications cannot be stored centrally. On occurrence of a fatal error, recovery of data is impossible.

Client-Server NOS: The OS model where the computer nodes are managed and controlled by a central managing unit is termed a Client-Server NOS. All the nodes are referred to as 'Clients' and the central managing unit is referred to as the 'Server'. A Client OS runs on each client machine, and the NOS is installed on the server. The server is much more powerful in terms of processing, storage and memory. It receives requests from the client machines and responds by providing the required resources and services. It allows multiple users to simultaneously share multiple instances of a resource at one time. Example: Windows Server 2000.

Advantages

• Centralized Control: The network administrator has full control of the system.
Validations, security patches, updates and similar other functionalities are easily
implemented. All the required information is available at a central location.
• Scalable: The server can be expanded anytime adding more elements or components
without making any changes to the programs accessing the peripheries.
• Flexible: Due to centrally controlled OS, adding up of new technologies is easier.

• Interoperability: Client nodes have varied types of hardware and software. Communication protocols and integration algorithms enable smooth interaction and linking of client, network and server.
• Remote Access: The client-server NOS is not bound with the location to access the
resources. Any user with valid credentials can log into the system from anywhere.
• Backup Facility: The server takes systematic and periodical backup of the data and
files thus enabling recovery of data at the time of need.
• Reliability: Client-Server NOS is reliable as the data is centrally managed and can be
accessed only by the legitimate users.
• Security: The data is secured due to its centralized architecture. Robust and dynamic
authentication and authorization standards can be imposed for secured and valid access
to the resources.

Disadvantages

• Enhanced Setup Cost: A centralized server is needed first to set up the network.
• Dependency on Technical Experts: A specialized network operating system is needed, engaging continuous technical expertise to scale up the system.
• Dedicated Administrator: A dedicated and skilled administrator is required to manage the services of the NOS.
• Server Overload: A large network and voluminous work may impact the server's output.
• System Crash: A Client-Server NOS works around a centrally located system. Failure at this level will bring the entire network down.

Mobile OS: A Mobile Operating System is a software program that allows mobile devices and other smart devices, like smart watches, to run applications and programs. The Mobile OS varies from device to device; e.g., the iPhone runs on iOS and the Google Pixel runs on Android. Whichever device you use, it contains a built-in cellular modem and a SIM tray for telephone and internet connections. Every setting in the mobile phone, say establishing a connection with the network, security and privacy features, data usage, the gallery, backups, etc., is managed and controlled by the mobile OS.

Features of Mobile OS:

• Easy to use: Since mobiles are used by people with different levels of technical skill, the interfaces must be kept simple and easy to understand.
• Good app store: Simple apps, easy installation, and clear understanding and usage are the salient features of a good app store.
• Good battery life: Battery power is required for the various components of the OS to function. The processing unit, storage, internet, etc. all require some amount of power to function. Hence, a good battery backup is essential.
• Data usage and organization: Setting alarms and reminders, managing notifications and setting events in the calendar are a few functionalities of smart phones. The OS fulfils these needs and also organizes the data securely.

Platforms of Mobile OS

Symbian OS: Developed by Symbian Ltd. in 1998 and first used by Nokia. It provides high-level integration and communication. The OS itself is written largely in C++, although it also supported Java applications.

Android OS: Developed by Google, it is an open-source operating system, first launched in 2008. It is based on the Linux kernel, and its early versions were named after desserts, like 'Cupcake', 'KitKat', 'Éclair', etc.

iPhone OS/iOS: Developed by Apple Inc. and first released in 2007, it is highly secure and specifically designed to run on Apple devices such as the iPhone and iPad.

Bada OS: Developed by Samsung and launched in 2010. It includes features like 3-D graphics,
application installation, multipoint touch etc.

Blackberry OS: Blackberry OS, developed by Research In Motion (RIM), was specifically designed for Blackberry devices. The target users were business and corporate users. It had enhanced security features and provided synchronization with Microsoft Exchange, Novell GroupWise email, Lotus Domino, and other business software.

Windows Mobile OS: Developed by Microsoft, it was made for the pocket PCs and smart
mobiles.

Harmony OS: Developed by Huawei and designed specifically for IoT devices.

Palm OS: Developed by Palm Ltd. for use in Personal Digital Assistants (PDA).

WebOS (Palm/HP): Developed by Palm Ltd. It is based on Linux and used by HP in its mobile
devices and touchpads.

Mac OS
The Mac OS was designed and developed in 1984 by Apple Inc. to be installed and operated on the Apple series of desktops. It is a graphical-user-interface operating system. The features of the Mac OS operating system are:
• iCloud users can sync and access their content via different devices.
• Users can communicate with each other using Messages and FaceTime.

Linux-OS
Linux is an open-source operating system built around the Linux kernel, one of the most popular and widely used kernels. The features of the Linux OS are:
• Free of cost.
• Portable to any platform.
• Linux is scalable.
• Linux OS and Linux applications have very short debugging time.

Working in the CLE (Command-Line Environment)

Shell- A Shell is a command-line interpreter. It processes commands and generates output. It is a layer between the user and the kernel. The user enters commands, which are interpreted by the shell and passed to the kernel for execution. The shell communicates with the kernel using system calls. The user can work in more than one shell. Examples of shells: bash, csh, tcsh, zsh, etc.

Command Prompt- Generated after the OS loads; it allows the user to provide input and view the output of the actions performed by the computer. It is a combination of certain characters and symbols, followed by a cursor that blinks constantly until the user gives a command to execute.

• Unix and Linux Command Prompt: $ (for users) and # (for Admin/Root user)
• DOS Command Prompt: C:\> (assuming drive C is currently active)

Commands- Keywords defined in the OS or an application, typed by the user at the prompt to initiate a job. Commands are of two types: Internal Commands and External Commands. Internal Commands are recognized and processed internally by the shell, and their binary files are available at the time of installation of the OS, whereas External Commands are executable files from third parties. External commands, when typed at the prompt, require the added step of searching the disk for the executable file before processing begins.
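This internal/external distinction can be observed directly in a shell. A minimal sketch, assuming a POSIX-style shell such as bash or dash:

```shell
# 'cd' is an internal command: the shell recognizes and runs it itself,
# so 'type' reports it as a shell builtin
type cd

# 'ls' is an external command: the shell must first locate its
# executable file on disk (searching the directories listed in $PATH)
command -v ls
```

Here `command -v` prints the full path of the executable the shell would run, which is exactly the "search the disk" step that internal commands avoid.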

Scripts- Scripts are used to automate the configuration of users' machines, setting or changing security settings, updating software, building code, etc. Mostly, such scripts run on their own in the background.

Parameters- Variables passed to scripts, commands and files are termed parameters. The variables can be user-defined or system-defined. There are different ways of passing parameters on the CLE. Certain special characters, sometimes called 'switches', can also be passed along with a command to perform a specific function. E.g., if you want to list the files and directories on a Linux OS, you use the command 'ls'; but if you also want to view hidden files, you pass the command with the switch: 'ls -a'.
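A small sketch of the '-a' switch in action; the directory /tmp/switch_demo and its file names are made up for illustration:

```shell
# Create a scratch directory with one visible and one hidden file
mkdir -p /tmp/switch_demo
touch /tmp/switch_demo/visible.txt /tmp/switch_demo/.hidden.txt

# Plain 'ls' shows only the visible file
ls /tmp/switch_demo

# The '-a' switch also shows hidden entries (names starting with '.')
ls -a /tmp/switch_demo
```

The first listing contains only visible.txt; the second additionally shows .hidden.txt along with the special entries '.' and '..'.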

Editor

A blank yet powerful canvas where you can write anything, starting from a simple line to complex programming code, is called an editor. Technically, editors are software programs that allow users to create and edit files of various types, regardless of which OS you are using. Editors can be text editors or graphical editors. An editor should be simple and user-friendly and include various formatting options. Editors can also be categorized by the people using them: some are ideal for expert users, whereas others are aimed at novices or less experienced users. Notepad, Notepad++, Vi, Sublime and Atom are a few examples of editors.

Features of the useful editors:

• Easy to code in and free of formatting restrictions
• Easy to navigate
• Able to customize the features such as, fonts, font-size, color schemes etc.
• Plugin mechanisms
• Ability to handle UTF encoded text
• Syntax highlighting, spell check, code completion for easy and error free development
of programs
• Search and replace text, whenever required

• Show an indicator for the line ending type so that when the file is saved, it has
consistent line endings for quick retrieval.

Types of Editors

Operating systems have their default editors. For example, Notepad comes installed with Windows, vi with Linux/Unix, and TextEdit with Mac OS. Apart from these, here is a list of other feature-rich editors that can be installed on these operating systems for a better experience.

Brackets – Compatible with Windows, Linux and Mac. It is developed by Adobe and is easily customizable. An exclusive feature, 'Extract', allows the user to pick colors, measurements, fonts, gradients, etc. from a Photoshop file and convert them into ready-for-web CSS.

Microsoft Visual Studio Code- Compatible with Windows, Linux and Mac. It is developed by Microsoft and is one of the most popular open-source editors among Python users. It has a variety of features, such as syntax checking, a built-in terminal, auto-complete and free extensions. A notable feature is its use of Artificial Intelligence: the code or text written can be read directly by the software, resulting in auto-suggestions for code based on essential modules and variable types.

Notepad++ - Runs on Windows (and on Linux and Mac via compatibility layers). It is a free, open-source editor developed by Don Ho, with extensive features that have made it one of the favourites of developers worldwide. Simplicity of use, impressive execution speed, support for more than 50 programming, scripting and markup languages, syntax highlighting and code folding are a few of its robust features.

Vim - Compatible with Windows, Linux and Mac. Vim was released in 1991 as an improved clone of the older Vi editor and is still one of the most popular editors among old-school developers. Completely keyboard-driven, it is fast and efficient but requires practice to learn the commands and shortcuts that operate it. It supports working on multiple files at the same time, multiple file formats, extensive plug-in support and an exceptionally low memory footprint.

Sublime Text- Compatible with Windows, Linux and Mac. It is lightweight and highly responsive. The developer must install extra plug-ins to get the best out of this editor. Sublime can edit multiple files simultaneously, is extremely extendable, and supports split editing and auto-indentation.

Atom – Compatible with Windows, Linux and Mac. It is developed by a community of developers and is designed to be hackable, allowing developers to customize their own editor. Developers can add packages and themes, making its improvement a collaborative effort. The features of Atom include cross-platform editing, a built-in package manager, smart autocompletion, a file-system browser, multiple panes, and find and replace.

BBEdit- This editor is compatible with Mac and has a user-friendly interface. It supports 44 programming languages, including Python, JavaScript, CSS and Perl. It provides support for comparing versions of text files and integrates well into existing workflows.

Interpreter

The Operating System has an interpreter that interprets commands and executes them. An interpreter is a set of instructions that takes the user's command and processes it. The user may provide interactive input. The interpreter executes the commands without converting them into machine code. In a program with multiple lines of code, the interpreter performs execution one line at a time. In some operating systems the interpreter is termed the Shell. Various programming languages have their own interpreters; for example, Java uses HotSpot, Python uses PyPy (an alternative to the default CPython interpreter) and Ruby uses YARV.
The interpreter can work in three ways:

• Execute the source code directly and produce the output.
• Translate the source code into some intermediate code and then execute that code.
• Use an internal compiler to produce precompiled code, then execute the precompiled code.

Interpreters are memory-efficient and fast at debugging, but take more time to execute as compared to precompiled code.

Types of Shell

Every time a user logs in to the operating system, a shell is started. As multiple shells are available, it is purely the user's choice which shell to work in. Each shell has its own distinct features to choose from. Shells are of two types: command-line shells and graphical shells (GUI). A special program, called Terminal in Linux/Mac OS or Command Prompt in Windows, is provided for the user to type in commands as required. By default, bash (a command-line shell) is loaded in Linux/Unix; Windows, by contrast, displays the Desktop, a GUI, by default.

Shells in Linux/Unix

Bourne Shell (sh) – The Bourne shell, denoted sh, was developed at AT&T Bell Labs by Steve Bourne and was the first shell made for the Unix OS. Being very compact, with high-speed operation, it was a popular shell of its time. The path name for the binary of sh is /bin/sh or /sbin/sh. The Solaris OS made it its default shell. The Bourne shell has some key limitations:

• It doesn't have built-in functionality to handle logical and arithmetic operations.
• Unlike most other types of shells in Linux, the Bourne shell cannot recall previously used commands.
• It also lacks comprehensive features to offer proper interactive use.

Bourne-Again Shell (bash) – It is compatible with the Bourne shell and includes features from the Korn and C shells. It is an advancement over the Bourne shell and allows navigating and recalling previously run commands using the arrow keys. The path name for the binary of bash is /bin/bash. The default prompt for a user is $ and for the root user is #. The shell prefixes 'bash-x.xx' to the prompt, where 'x.xx' refers to the version number of the shell.

C Shell (csh) – Developed at the University of California, Berkeley by Bill Joy, it includes built-in support for arithmetic operations and a syntax similar to the C programming language. Aliases, command history, filename completion and job control are prominent features of the C shell. It is denoted csh and its binary is found at /bin/csh.

Korn Shell (ksh)- Developed at AT&T Bell Labs by David Korn and denoted ksh. It is a superset of the Bourne shell. Ksh includes interactive features such as C-like arrays, functions, string-manipulation routines and compatibility with C-shell scripts. The full path name for ksh is /bin/ksh.

TC Shell (tcsh)- Pronounced 'tee-see-shell', it was developed by Ken Greer at Carnegie Mellon University. The "t" in tcsh refers to TENEX, a DEC operating system featuring command completion, which inspired Greer to create a shell with a similar feature. It is an enhanced version of the C shell. It includes a command-line editor, a spelling checker and corrector, a command history, job control, and syntax matching that of the C language. The full path name is /bin/tcsh.

Z Shell (zsh) – The Z shell is more interactive and includes all the features of the previously mentioned shells. It is denoted zsh. Features like interactive tab completion, automated file searching, regex integration and inline wildcard expansion are available in zsh. The path to find zsh is /usr/bin/zsh.

Shell name                  Complete path name     Prompt for root user   Prompt for non-root user
Bourne shell (sh)           /bin/sh and /sbin/sh   #                      $
Bourne-Again shell (bash)   /bin/bash              bash-VersionNumber#    bash-VersionNumber$
C shell (csh)               /bin/csh               #                      %
Korn shell (ksh)            /bin/ksh               #                      $
Tc shell (tcsh)             /bin/tcsh              #                      $
Z shell (zsh)               /bin/zsh               <hostname>#            <hostname>%
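To see which shell you are currently using, and which shells a system offers as login shells, the following sketch may help (output varies from system to system, and /etc/shells may be absent on minimal installations):

```shell
# The SHELL environment variable records the current user's login shell
echo "Login shell: ${SHELL:-unknown}"

# /etc/shells lists the valid login shells installed on the system,
# one full path per line (the error is suppressed if the file is absent)
cat /etc/shells 2>/dev/null
```

On a typical Linux system the first line prints something like "Login shell: /bin/bash", matching one of the path names in the table above.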

Linux/Unix Command-line Editor- Vi (Visual Editor)


Vi is the old, default editor of Unix systems. It is a free-form editor for creating and editing text files. The Vi editor provides strong functionality to help developers, but many new users avoid Vi because its tedious, manual nature and variety of features overwhelm them. It is a full-screen editor and has three modes of operation:

• Command mode: The editor performs actions based on the user's request, like saving, deleting, replacing, etc.
• Insert mode: The mode that allows the user to insert new text into the document.
• Ex command mode: This mode is used to enter commands into the command line located at the bottom of the Vi editor. Press the Escape key and then type : (colon) to enter Ex command mode. For example, type 'wq' (write and quit) after the colon (:) to save the contents and exit the Vi editor.

By default, Vi opens in command mode. To activate insert mode, press 'i'; pressing the 'Esc' key returns the editor from insert mode to command mode.

Figure: Sample Format of Vi (in Ex Command mode)

To open Vi, type one of the following commands on the command line:

$ vi filename (opens filename if it already exists, or creates it otherwise)
or
$ vi (opens the editor in command mode, with no file name)
or
$ vi -r filename (recovers the filename last edited when the system crashed)

Example: On Linux Terminal, type the following to open a Vi editor.

Terminal
$ vi myfirstscript

Terminal
~
~
~
~
~
~
~
“myfirstscript”[New File]

Figure: Vi Editor

Commands in Vi Editor

Insert Text- To insert text in Vi, you must be in insert mode.

Command   Inserts text
i         before the cursor
a         after the cursor
A         at the end of the line
o         open a line below the current line
O         open a line above the current line
r         replace the current character
R         replace characters until <ESC> (overwrite)

Move Cursor: To move the cursor, you must be in command mode.

Command                          Moves the cursor
SPACE, l (el), or right arrow    one space to the right
h or left arrow                  one space to the left
j or down arrow                  down one line
k or up arrow                    up one line
w                                one word to the right
b                                one word to the left
$                                to the end of the line
0 (zero)                         to the beginning of the line
e                                to the end of the word to the right
-                                to the beginning of the previous line
)                                to the end of the sentence
(                                to the beginning of the sentence
}                                to the end of the paragraph
{                                to the beginning of the paragraph

Each command can be applied to multiple lines or words by prefixing the command with a number; this prefix is called the repeat factor.

e.g.: to move the cursor down by 4 lines: 4j

Delete Text: To delete text, you must be in command mode.

Command Action
d0 delete to beginning of line
dw delete to end of word
d3w delete to end of third word
db delete to beginning of word
dW delete to end of blank delimited word
dB delete to beginning of blank delimited word
dd delete current line
5dd delete 5 lines starting with the current line
dL delete through the last line on the screen
dH delete through the first line on the screen
d) delete through the end of the sentence
d( delete through the beginning of the sentence
x delete the current character
nx delete the number of characters specified by n.
nX delete n characters before the current character

Yanking and Putting Text: You can yank (copy) and put (paste) text anywhere in the editor. By default, these commands act at the current position of the cursor, but they can be applied to a number of lines, words or sentences.

In the following list M is a Unit of Measure that you can precede with a Repeat Factor, n.

Command Effect
yM yank text specified by M
y3w yank 3 words
nyy yank n lines
Y yank to the end of the line
P put text above current line
p put text below current line

Example: 4yy will yank (copy) 4 lines; p will then put the 4 lines just yanked below the line holding the cursor.

To Scroll the Screen

^f   Move forward one screen
^b   Move backward one screen
^d   Move down (forward) one half-screen
^u   Move up (back) one half-screen
^l   Redraw the screen
^r   Redraw the screen, removing deleted lines

Saving and Exiting Vi: To save the text or exit from Vi, you have to be in Ex command mode. You can save and exit, exit without saving, and several other options. If you have not provided the name of the file on opening the editor, you can provide it on exit.

Command Effect
:w Save the contents of the file
:q Quit from Vi
:q! quit without saving changes
ZZ save and quit
:wq save and quit
:w filename saves to filename (allows you to change the name of the file)

Search Text in Vi

In command mode, press /. The cursor moves to the bottom of the screen. Type the expression or word after it (without a space), e.g., /story. Vi will then search for an occurrence of the word story in the document.

Command Effect
/string search forward for occurrence of string in text
?string search backward for occurrence of string in text
n move to next occurrence of search string
N move to next occurrence of search string in opposite direction

Creating programs/scripts using CLE



Batch files: A batch file contains a set of instructions that execute automatically in sequence without user intervention. The batch file can contain a variety of commands, say, to open a program, change settings, take a backup, etc. Batch files have an extension of either '.bat' or '.cmd' and can be written using any simple text editor like Notepad. To execute the file in, say, Windows OS, type the following command at the command prompt:

<BATCH-FILE NAME.EXTENSION>

Example:

To create a basic batch file on Windows 10, use these steps:

1. Open Start.
2. Search for Notepad and open the text editor.
3. Type the following lines in the text file to create a batch file:

@echo OFF
echo Hello World! Your first batch file was printed on the screen successfully.

4. Click the File menu and select the Save As option.
5. Provide a name for the script, say, My_First_Batchfile.bat

To run this file in Windows 10, follow the steps:

1. Open Start.
2. Search for Command Prompt, right-click the top result, and select the Run as
administrator option.
3. Type the following command to run a Windows 10 batch file and
press Enter: C:\PATH\TO\FOLDER\BATCH-NAME.bat

Make sure to specify the path and name of the script correctly. Assume the file 'My_First_Batchfile.bat' was created under the folder 'C:\Administrator\Myfiles\'. The command to execute it would be:

C:\> C:\Administrator\Myfiles\My_First_Batchfile.bat

Once completed, the output 'Hello World! Your first batch file was printed on the screen successfully.' is displayed on the console, and the session remains open until the user switches back to GUI mode.

Linux/Unix Scripts: A programmable file in Linux/Unix is termed a 'script'. It is similar to a batch file in Windows. Scripts are written for a shell and hence are called shell scripts. A shell script is an open-source program run by a Linux/Unix shell. A shell script may comprise multiple commands and can be interactive, including expressions, filters and iterations if needed. There are various ways to create a shell script and similarly various ways to execute it.

Example:

To create a shell script in Linux/Unix, open the editor at the command prompt. Let's assume the editor is Vi and the shell is Bash.

1. $ vi My_First_Script.sh
2. In the editor, press 'i' to go into insert mode and type the following commands:

echo "Hello World"
echo "Welcome to the World of Linux!"

3. Press 'Esc' to exit insert mode and then type ':w' to save your script.
4. Type ':q' to exit from the editor.

To execute the shell script type the command:

$ sh My_First_Script.sh

It will display two lines of output:

Terminal
$ sh My_First_Script.sh
Hello World
Welcome to the World of Linux!
$_
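Building on the two-line script above, the following is a sketch of a slightly fuller shell script; the name My_Second_Script.sh and the variable names are illustrative, not part of the original example:

```shell
#!/bin/sh
# My_Second_Script.sh: a variable with a default taken from the
# first positional parameter, followed by a simple for loop

name="${1:-World}"            # use the first argument, or "World"
echo "Hello, $name!"

# Loop over a fixed list of shell names and print each one
for shellname in bash csh ksh zsh; do
    echo "Try the shell: $shellname"
done
```

Running `sh My_Second_Script.sh` prints "Hello, World!" followed by four "Try the shell:" lines; running `sh My_Second_Script.sh Linux` greets Linux instead, because the positional parameter $1 overrides the default.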

Multiple ways to execute a shell script:

1. Mark the script as executable by setting the proper bits on the file with chmod, then run it by its path, e.g.:

$ chmod +x file.sh
$ ./file.sh

– Here the './' prefix refers to the current directory; it tells the shell exactly where to find the script. You must have execute permission on the file to run it this way.
– A related form is the dot (.) command: typing '. file.sh' tells the current shell to execute the commands in the script in the same shell, i.e., without loading another shell into memory.
– To run './file.sh', you need to be in the same directory where you created your script; if you are in a different directory, this form will not find it.
– To overcome this problem:
• either give the complete path, or
• create a directory 'bin' in your home directory (it is typically included in your PATH) and place the script there.

2. A shell script can also be run directly in a shell, without acquiring file-execute permissions, by invoking an interpreter, i.e., placing the interpreter's name directly before the shell script:

sh file.sh
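Both methods can be tried end-to-end with a throwaway script; in this sketch the path /tmp/demo.sh is an arbitrary choice for illustration:

```shell
# Create a throwaway one-line script
printf '#!/bin/sh\necho "script ran"\n' > /tmp/demo.sh

# Method 1: grant execute permission, then run the file by its path
chmod +x /tmp/demo.sh
/tmp/demo.sh            # prints: script ran

# Method 2: no execute bit needed; pass the file to an interpreter
sh /tmp/demo.sh         # prints: script ran
```

Both invocations produce the same output; the difference is only in whether the kernel launches the file directly (which requires the execute bit) or the named interpreter reads it as input.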

Linux/Unix Commands

An application or utility that runs from the command line is known as a Linux command. A command line is an interface that receives lines of text and converts them into instructions for your computer. A graphical user interface (GUI) is just an abstraction over such command-line operations; for instance, a command is carried out every time you click the "X" to close a window.
A flag is the way we pass options to the command we run. Most Linux commands have a help page that can be called with the flag -h. Most of the time, flags are optional. The input we provide to a command to enable appropriate operation is known as an argument or parameter. An argument can be anything you write in the terminal, although it is typically a file path. Flags are introduced with hyphens (-) or double hyphens (--), and the command processes arguments in the order you pass them.
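As a concrete sketch of flags versus arguments (the sample file /tmp/colors.txt is made up): grep takes a pattern and a file path as arguments, while the -c flag changes how grep reports its result:

```shell
# Build a small sample file (three lines, two of which contain 'red')
printf 'red\ngreen\nred\n' > /tmp/colors.txt

# 'red' and '/tmp/colors.txt' are arguments (pattern and file path);
# the -c flag switches grep from printing matches to counting them
grep red /tmp/colors.txt      # prints the two matching lines
grep -c red /tmp/colors.txt   # prints: 2
```

The arguments say what to operate on; the flag says how to present the result.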
To execute the commands, open the Terminal in Linux or Unix using Ctrl + Alt + T. If this doesn't work, look for 'terminal' in your application panel. The examples described in this chapter were executed on a single-user Linux OS. You can run most of the commands in regular user mode; for others, you will require the root account. Also, this is not an exhaustive list of commands; only the most commonly used commands are explained here.

Shell Prompt

The shell issues the $ prompt, often known as the command prompt. You can type a command while the prompt is shown. After you press Enter, the shell reads what you entered. By examining the first word of your input, it determines which command you wish to perform. A word is a continuous string of characters; words are divided by spaces and tabs. The shell prompt can differ depending on the type of user and the shell in which the user is working. The symbol for the prompt may be $, % or #.

Help Commands in Linux

Every computer operating system, software program and application we use has a help tab that includes an internal user guide, which is beneficial to users. Similar to Windows, Linux, which is built around command-line programs, offers a few useful options. One can always utilise the help options to get answers when using Linux for the first time or when experiencing any difficulty.

a) Man- It is the built-in manual for Linux commands. The man page (short for manual page) displays the command description, options, flags, examples and other informative sections. The man page is interactive: the user can enter a pattern or option to search for within a command's page. After running the man command, press H to see the help section and a table of possible keystrokes for navigating the output. To exit, press Q.
Syntax - man [option] [section number] [command name]

• option – modifies the search-result output.
• section number – the section (1 to 9) in which to look for the man page.
• command name – the name of the command whose man page you want to see.
Example:
Example:

The -f option displays all man pages that match the specified command name and states the
sections in which the given command is present.

Use the syntax: man -f [command name]

The output is a list of results that match the search criteria. With multiple matches, the
number next to the search result indicates the section.

Using a specific section: man [section number] [command name]

Example: # man 3 sleep

The output shows only the page from section 3 of the manual.

b) Help - The easiest approach to learn more about a built-in shell command is to use the help command. It gives you access to the internal documentation of the shell. It accepts a text string as a command-line argument and searches the shell's documents for the supplied string, so you don't have to spend time reading through all the material. The help command is useful when you wish to know the options available with a command.
Syntax- help [-dms]

Options -

-d  display only a brief description of the specified command.
-m  organize the available information just as the man command does.
-s  display only the syntax of the specified command.

Example: # help pwd



c) Apropos- Searches the names and descriptions of all man pages for a user-specified keyword. The apropos command returns a list of all commands whose man-page descriptions match the keyword, or which are somehow related to the keyword given in the argument.

Syntax: apropos [options] keyword ...

Options-

-r           Interpret each keyword as a regular expression
-w           Interpret each keyword as a pattern containing shell-style wildcard characters
-e           Match each keyword exactly against the page names and descriptions
-d           Print debugging messages
-v           Print verbose warning messages
-a, --and    Display only items that match all the supplied keywords
-h, --help   Print a help message and exit

Example: apropos who

The output will contain all the man pages that match the keyword ‘who’.


Example: apropos -e who

The output will match exact keyword ‘who’ in all the man pages.


d) whatis- This command will display a one-line description from the man page for the
user-specified keyword.
Syntax:

whatis [-dlv?V] [-r|-w] [-s list] [-m system[, …]] [-M path] [-L locale] [-C file] name

Options-

-d Emit debugging messages


-v Print verbose warning messages
-r Interpret each keyword as Regular expression
-w The keyword(s) containing wildcard characters
-l Do not trim output to terminal width
-?, --help, -h Prints a help message and exit
Example: whatis cp

Will display a brief description of cp command

Basic Commands
Let us explore some basic and frequently used commands in Linux OS.

a) pwd – Prints the name of current/working directory.


The "pwd" command allows you to find out which directory you are currently in. It
prints the absolute path, i.e., the path starting from the root directory, which is
represented by a forward slash (/). Typically, a user's home directory looks like
"/home/username".
Syntax - pwd [OPTION]...
Options –
-L Prints the value of $PWD if it names the current working directory.
-P Prints the physical directory, without any symbolic links.
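A quick illustration of pwd (the /tmp/pwddemo path below is only an example, not from the text):

```shell
# Create and enter a nested directory, then ask where we are.
mkdir -p /tmp/pwddemo/sub
cd /tmp/pwddemo/sub
pwd    # prints the absolute path: /tmp/pwddemo/sub
```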

b) date – Displays the current time in the given format, or sets the system date. 'date' with
no arguments prints the current time and date, in the format of the %c directive
(described below). If given an argument that starts with a +, date prints the current time
and date (or the time and date specified by the --date option, see below) in the format
defined by that argument, which is the same as in the strftime function. Except for
directives, which start with %, characters in the format string are printed unchanged.
Syntax – date [OPTION]… [+FORMAT]
or: date [-u|--utc|--universal] [MMDDhhmm[[CC]YY][.ss]]
Options –
-d, -- date = STRING Displays time described by STRING, not ‘now’.
-- debug Annotate the parsed date, and warn about questionable usage to stderr.
-f, -- file = Like –date; once for each line of DATEFILE.
DATEFILE
-s, -- set = STRING Sets time described by STRING.
-- help Displays this help and exit.
-- version Outputs version information and exit.

%A locale's full weekday name, variable length (Sunday..Saturday)


%b locale's abbreviated month name (Jan..Dec)
%B locale's full month name, variable length (January..December)
%d Day of month (e.g., 01)
%D Date; same as %m/%d/%y
%c locale's date and time (e.g., Thu Mar 3 23:05:25 2005)
Examples

1. Print the date of the day before yesterday:


$ date --date='2 days ago'

2. Rename a file with the current date and time


$ STAMPME=$HOME/demo_file_$(date +%Y%m%d-%H%M).txt

$ mv $HOME/demo_file $STAMPME

3. Show the time on the west coast of the US (use tzselect(1) to find TZ)
$ TZ='America/Los_Angeles'

$ date

4. Show the local date/time for 9AM next Friday on the west coast of the US
$ date --date='TZ="America/Los_Angeles" 09:00 next Fri'

5. Print the date of the day three months and one day hence:
$ date --date='3 months 1 day'

c) cal – Displays a calendar and the date of Easter.


Syntax - cal [-31jy] [-A number] [-B number] [-d yyyy-mm] [[month] year]
Options -
-h Turns off highlighting of today.
-J Display Julian Calendar; if combined with the -o option, it displays the date of
Orthodox Easter according to the Julian Calendar.
-e Display date of Easter (for western churches).
-j Display Julian days (days one-based, numbered from January 1).
-m Display the specified month. If month is specified as a decimal number,
appending ‘f’ or ‘p’ displays the same month of the following or previous
year respectively.

d) arch–Prints machine architecture.


Syntax -arch [OPTION]...
Options –
-- help Displays this help and exit.
-- version Outputs version information and exit.

e) hostname – This command can get or set the hostname or the NIS domain name.
Syntax -hostname [-a|-A|-d|-f|-i|-I|-s|-y]
Options –
-a, -- alias Alias names
-A, -- all-fqdns All long host names (FQDNs).
-b, -- boot Sets default hostname if none available.
-d, -- domain DNS domain name.
-f, -- fqdn, -- long Long host name (FQDN).
-F, -- file Reads host name or NIS domain name from given file.
-i, -- ip-address Addresses for the host name.
-I, -- all-ip-addresses All addresses for the host.
-s, -- short Short host name.
-y, -- yp, -- nis NIS/YP domain name.

f) uname – Prints certain system information.


Syntax - uname [OPTION]...
Options -
-a, -- all Prints all information, in the order below, omitting -p and -i if unknown.
-s, --kernel-name Prints the kernel name.
-n, -- nodename Prints the network node hostname.
-r, --kernel-release Prints the kernel release.
-v, --kernel-version Prints the kernel version.
-m, --machine Prints the machine hardware name.

-p, --processor Prints the processor type (non-portable).


-i, --hardware-platform Prints the hardware platform (non-portable).
-o, --operating-system Prints the operating system.
-- help Displays this help and exit.

-- version Outputs version information and exit.

g) passwd-Modify a user password.


Syntax- passwd [Options...] [LOGIN]
The passwd command changes passwords for user accounts. A normal user may only
change the password for their own account, while the superuser may change the
password for any account. passwd also changes the account or associated password
validity period.

The user is first prompted for their old password, if one is present. This password is
then encrypted and compared against the stored password. The user has only one chance
to enter the correct password. The superuser is permitted to bypass this step so that
forgotten passwords may be changed.
After the password has been entered, password aging information is checked to see if
the user is permitted to change the password at this time. If not, passwd refuses to
change the password and exits.
The user is then prompted twice for a replacement password. The second entry is
compared against the first, and both are required to match in order for the password to
be changed.
Then, the password is tested for complexity. As a general guideline, passwords should
consist of 6 to 8 characters including one or more characters from each of the following
sets:
· lower case alphabetics
· digits 0 through 9
· punctuation marks
Care must be taken not to include the system default erase or kill characters. passwd
will reject any password which is not suitably complex.
Exit Values
The passwd command exits with the following values:

0 Success.
1 Permission denied.
2 Invalid combination of options.
3 Unexpected failure, nothing done.
4 Unexpected failure, passwd file missing.
5 Passwd file busy, try again.
6 Invalid argument to option.

h) bc – Basic calculator. bc starts by processing code from all the files listed on the
command line in the order listed. After all files have been processed, bc reads from the
standard input. All code is executed as it is read. If a file contains a command to halt
the processor, bc will never read from the standard input.
Syntax – bc[options] [file ...]
Options –
-h, -- help Prints this usage and exit.
-i, -- interactive Force interactive mode.
-l, -- mathlib Use the predefined math routines.
-q, -- quiet Don't print initial banner.
-s, -- standard Non-standard bc constructs are errors.
-w, -- warn Warn about non-standard bc constructs.
-v, -- version Prints version information and exit.

i) echo – Display message on screen, writes each given STRING to standard output,
with a space between each and a newline after the last one.
Syntax – echo [SHORT-OPTION]... [STRING]...
Options –
-n Does not output the trailing newline.
-e Enable interpretation of backslash escapes.
-E Disables interpretation of backslash escapes (default).
-- help Displays this help and exit.
-- version Outputs version information and exit.
If -e is in effect, the following sequences are recognized:

\\ backslash
\a alert (BEL)
\b backspace
\c produce no further output
\e escape
\f form feed
\n new line
\r carriage return
\t horizontal tab
\v vertical tab

The echo -e option enables interpretation of backslash-escaped characters; for example,
the newline character \n is interpreted by echo -e.
Example: # echo -e "Hello\tWorld\nHello How are You…"
Output:
Hello World
Hello How are You…

j) ls –This command lists files and directories. Some versions may support color-coding. The
names in blue represent the names of directories.

Syntax – ls [OPTION]... [FILE]...


Options –
-a, -- all Does not ignore entries starting with ‘.’ (hidden files)
-d, -- directory Lists directories themselves, not their contents
-h, -- human-readable With -l and -s, print sizes like 1K 234M 2G etc.
-r, -- reverse Reverse the order while sorting.
-- help Displays this help and exit.
-- version Outputs version information and exit.
-l Long list of directories and files. It provides detail information of
each file and directory.
Examples:
1. To list directory contents
$ls

2. To display one file per line


$ls -1

3. To display total information about files/directories


$ls -l

Field Explanation
1) The first character displays the type of file.
* “-” Normal file
* “d” Directory
* “l” link file
The next 9 characters specify the permissions. Each group of 3 characters corresponds
to one set of permissions: the first three are the user (owner) permissions, the next 3
are for the group, and the last 3 are for others. Here r stands for read, w for write and x
for execute.
2) Second field specifies the number of links for that file.
3) Third field specifies owner of the file.
4) Fourth field specifies the group of the file.
5) Fifth field specifies the size of file.
6) Sixth field specifies the last modified date and time of the file.
7) Seventh Field specifies the name of the file/directory itself.

4. To show hidden files


$ls -a
5. To list files recursively
$ls -R
6. To list all subdirectories and their contents
$ls *
7.To display file inode number
$ls -i
8. To list only directories
$ls -d */

k) cat – Concatenate files to standard output. The cat command (short for “concatenate”) is one
of the most frequently used commands in Linux. Cat command allows you to create single or
multiple files, view contents of files, concatenate files (combining files), and redirect output
in terminal or files.
Syntax – cat [OPTION]... [FILE]...
Options –
-A, -- show-all Equivalent to –vET.
-b, -- number- Number nonempty output lines, overrides –n.
nonblank
-e Equivalent to –vE.
-E, -- show-ends Displays $ at end of each line.
-n, --number Number all output lines.
-- help Displays this help and exit.
-- version Outputs version information and exit.
Examples:
1. Display a file:
$ cat myfile.txt
2. Display all .txt files:

$ cat *.txt
3. Concatenate two files:
$ cat File1.txt File2.txt > union.txt
4. Put the contents of a file into a variable
$ my_variable=`cat File3.txt`
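The create-and-concatenate workflow above can be tried end to end (the /tmp filenames are illustrative):

```shell
# Create two small files, then concatenate them into a third.
printf 'one\ntwo\n' > /tmp/f1.txt
printf 'three\n'    > /tmp/f2.txt
cat /tmp/f1.txt /tmp/f2.txt > /tmp/union.txt
cat /tmp/union.txt      # one, two, three on separate lines
cat -n /tmp/union.txt   # -n numbers every output line
```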

l) paste- Merge lines of files, write to standard output lines consisting of sequentially
corresponding lines of each given file, separated by a TAB character.
Syntax- paste [options]... [file]...
Options
-s, --serial Paste the lines of one file at a time rather than one line from each
file.
-d DELIM-LIST, --delimiters DELIM-LIST Consecutively use the characters in
DELIM-LIST instead of TAB to separate merged lines. When DELIM-LIST is
exhausted, start again at its beginning.
The following special characters can also be used in DELIM-LIST:
• \n newline character
• \\ backslash character
• \t tab character
• \0 Empty string (not a null character).
Any other character preceded by a backslash is equivalent to the character itself. Standard
input is used for a file name of - or if no input files are given.

Examples
Combines the lines from two files:
$ paste file1.txt file2.txt > result.txt
List the files in the current directory in three columns:
ls | paste - - -
Combine pairs of lines from a file into single lines:
paste -s -d '\t\n' myfile
Number the lines in a file, similar to nl:
sed = myfile | paste -s -d '\t\n' - -
Create a colon-separated list of directories named bin, suitable for use in the PATH
environment variable:
find / -name bin -type d | paste -s -d : -
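A small, self-contained demonstration of paste (file names under /tmp are illustrative):

```shell
printf 'a\nb\nc\n' > /tmp/names.txt
printf '1\n2\n3\n' > /tmp/nums.txt
paste /tmp/names.txt /tmp/nums.txt          # a<TAB>1, b<TAB>2, c<TAB>3
paste -d ':' /tmp/names.txt /tmp/nums.txt   # a:1, b:2, c:3
printf 'x\ny\nz\n' | paste -s -d ',' -      # serial mode: x,y,z
```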

m) touch – Changes file timestamps. If the file doesn’t exist, it will create the empty file and if
it exists, it will change its timestamp. You can provide multiple files to ‘touch’ command.
E.g., if you want to create 3 empty files, there are two ways:
a. # touch file1 file2 file3 // will create three empty files: file1, file2, file3
b. # touch file{1,2,3} // brace expansion; note there must be no spaces inside the braces
Syntax – touch [OPTION]... [FILE]...
Options –
-a Change only the access time.
-c, -- no-create Does not create any files.
-d, -- date = STRING Parse STRING and use it instead of current time.

-h, -- no-dereference Affect each symbolic link instead of any referenced file (useful
only on systems that can change the timestamps of a symlink).
-m Change only the modification time.
-r, -- reference = Use this file's times instead of current times.
FILE
-t STAMP Use [[CC]YY]MMDDhhmm[.ss] instead of current time.
-- help Displays this help and exit.
-- version Outputs version information and exit.
Examples:
1. The following touch command creates an empty (zero byte) new file called abc.
$ touch abc
2. By using touch command, you can also create more than one single file. For example
the following command will create 3 files named, abc,def,ghi.
$ touch abc def ghi
3. To change or update the last access and modification times of a file called abc, use the -
a option as follows. The following command sets the current time and date on a file. If the
abc file does not exist, it will create the new empty file with the name.
$ touch -a abc
4. Using -c option with touch command avoids creating new files. For example the
following command will not create a file called abc if it does not exists.
$ touch -c abc
5. If you like to change the only modification time of a file called abc, then use the -m
option with touch command.
$ touch -m abc
6. You can explicitly set the time using -c and -t option with touch command. The
following command sets the access and modification date and time to a file abc as 17:30
(17:30 p.m.) December 10 of the current year (2012).
$ touch -c -t 12101730 abc
7. The following touch command with -r option, will update the time-stamp of file abc
with the time-stamp of def file. So, both the file holds the same time stamp.
$ touch -r abc def
8. If you would like to create a file with specified time other than the current time, then the
below command touch command with -t option will gives the abc file a time stamp of
18:30:55 p.m. on December 10, 2012.
$ touch -t 201212101830.55 abc
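The -t behaviour can be verified by reading the timestamp back with date -r (a GNU coreutils option that shows a file's modification time; the /tmp path is illustrative):

```shell
# Create a file with an explicit timestamp, then read it back.
touch -t 202212101830.55 /tmp/stamp.txt
date -r /tmp/stamp.txt '+%Y-%m-%d %H:%M:%S'   # 2022-12-10 18:30:55
```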

n) who – Prints information about the users who are currently logged in.
Syntax – who [OPTION]... [ FILE | ARG1 ARG2 ]
Options –
-a, -- all Same as -b -d --login -p -r -t -T –u.
-b, -- boot Time of last system boot.
-d, -- dead Print dead processes.
-q, -- count All login names and number of users logged in.
-r, -- runlevel Prints current run-level.
-- help Displays this help and exit.
-- version Outputs version information and exit.

o) whoami – Prints the username associated with the current effective user ID.
Syntax – whoami[OPTION]...
Options –
-- help Displays this help and exit.
-- version Outputs version information and exit.

wc – Prints line, word, and byte counts. E.g., # wc file1.text // would display: 3 20 120 file1.text
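The counts wc reports (lines, words, bytes) can be checked with a small file (the /tmp filename is illustrative):

```shell
printf 'one two three\nfour five\nsix\n' > /tmp/words.txt
wc /tmp/words.txt        # lines, words, bytes, filename
wc -l < /tmp/words.txt   # 3 (lines only)
wc -w < /tmp/words.txt   # 6 (words only)
```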

p) alias and unalias Command- The alias command lets you define temporary aliases in your
shell session. When creating an alias, you instruct your shell to replace a word with a series
of commands. aliases allow a string to be substituted for a word when it is used as the first
word of a simple command.
Syntax-alias [-p] [name[=value] ...]

Options-

-p Print the current values of aliases


-a (with unalias) Remove all aliases

For example, to set ls to have color without typing the --color flag every time, you would
use:
alias ls="ls --color=auto"

As you can see, the alias command takes one key-value pair parameter: alias
NAME="VALUE". Note that the value must be inside quotes. If you want to list all the aliases
you have in your shell session, you can run the alias command without argument.
Without arguments or with the -p option, alias prints the list of aliases on the standard output
in a form that allows them to be reused as input.
The value of an alias can be set to an expression including spaces, command options and/or
variables, the expression must be quoted, either with 'Single quotes' that will be evaluated
dynamically each time the Alias is used, or "Double quotes" which will be evaluated once when
the alias is created.
The value cannot contain any positional parameters ($1 etc), if you need to do that use a shell
function instead.

Unalias- As the name suggests, the unalias command aims to remove an alias from the already
defined aliases. Unalias can be used to remove each name from the list of defined aliases. To
remove the previous ls alias, you can use:
$ unalias ls
Make an alias permanent
Create a file called ~/.bash_aliases and type the alias commands into it. On many
distributions, .bash_aliases is sourced from ~/.bashrc at login (or you can source it
manually with: . ~/.bash_aliases).

Expand Multiple aliases


If the last character of the alias value is a space or tab character, then the next command word
following the alias is also checked for alias expansion.
Details of alias replacement
The first word of each simple command, if unquoted, is checked to see if it has an alias. If so,
that word is replaced by the text of the alias. The alias name and the replacement text can
contain any valid shell input, including shell metacharacters, with the exception that the alias
name cannot contain '='.
The first word of the replacement text is tested for aliases, but a word that is identical to an
alias being expanded is not expanded a second time. This means that one can alias ls to "ls -
F", for instance, and Bash does not try to recursively expand the replacement text. Aliases are
not expanded when the shell is not interactive, unless the expand_aliases shell option is set
using shopt .
The rules concerning the definition and use of aliases are somewhat confusing. Bash always
reads at least one complete line of input before executing any of the commands on that line.
Aliases are expanded when a command is read, not when it is executed. Therefore, an alias
definition appearing on the same line as another command does not take effect until the next
line of input is read. The commands following the alias definition on that line are not affected
by the new alias. This behavior is also an issue when functions are executed. Aliases are
expanded when a function definition is read, not when the function is executed, because a
function definition is itself a compound command. As a consequence, aliases defined in a
function are not available until after that function is executed. To be safe, always put alias
definitions on a separate line, and do not use alias in compound commands.
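The behaviour described above can be sketched in bash. Note the expand_aliases caveat: in a non-interactive script, aliases only expand after that option is set (the alias names here are made up for illustration):

```shell
shopt -s expand_aliases      # scripts need this; interactive shells have it on
alias greet='echo hello'     # substitute "greet" with "echo hello"
greet                        # runs: echo hello
alias -p                     # list current aliases in reusable form
unalias greet                # remove the alias again
```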

q) expr- Evaluate expressions, evaluates an expression and writes the result on standard output.
A blank line below separates increasing precedence groups.

Syntax- expr expression

expr option

Options:

--help Display help and exit

--version output version information and exit

Examples:
$ expr 5 + 3 #will return 8
$ expr 5+3 # will return 5+3; without spaces between the operands the
argument is treated as a string.
$ expr 10 - 6 # will return 4
$ expr 7 \* 9 #will return 63 (here * is a shell wildcard, so it must be
escaped with a backslash.)
$ expr length linux # will return 5
$ expr '10' = '20' #compares values; prints 1 if the two are equal, 0
otherwise (here: 0)
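The examples above can be run directly in a shell; a minimal, verifiable set:

```shell
expr 5 + 3           # 8  (operands and operators must be separate arguments)
expr 7 \* 9          # 63 (the * is escaped so the shell does not expand it)
expr length linux    # 5
expr 10 = 10         # 1  (true; a false comparison prints 0)
```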

Other Supporting Commands

a) Wildcard Characters – Wildcard characters are a set of building blocks that allow you
to create a pattern defining a set of files or directories.
Syntax – [command] [wildcard-characters][object]
Options –
* Represents zero or more characters.
? Represents a single character.
[] Represents a range of characters.
{} Represents an array.

Examples:

1. A command such as ls l* matches all files with names starting with l (the prefix)
and ending with zero or more occurrences of any character.

2. Another use of * is to copy all filenames prefixed with users-0 and ending with
zero or more occurrences of any character, e.g., cp users-0* followed by a
destination directory.

3. A command such as ls l?st.sh matches all files with names beginning with l
followed by any single character and ending with st.sh (the suffix).

4. A pattern of the form l[…]st.sh matches all files with names starting with l
followed by any one of the characters in the square brackets and ending with st.sh.

5. The pattern [clst]* matches filenames starting with any of the characters c, l, s, t
and ending with zero or more occurrences of any character.
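The wildcard patterns above can be tried against a scratch directory (the /tmp/wcdemo path and filenames are made up for illustration):

```shell
# Set up a scratch directory with a few files to match against.
mkdir -p /tmp/wcdemo && cd /tmp/wcdemo
touch list.sh last.sh lost.sh cast.sh
ls l?st.sh      # last.sh list.sh lost.sh  (? = exactly one character)
ls l[ao]st.sh   # last.sh lost.sh          ([] = one character from the set)
ls *.sh         # all four files           (* = zero or more characters)
```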

b) Redirection Operator – Helps in input/output in the terminal. A less-than sign (<)
represents input redirection. On the other hand, a greater-than sign (>) is used for
output redirection. “<” and “>” are also called angled brackets.
Syntax – [command] [redirection-operators][object]
Options –
> Write output to file. If file doesn’t exist, it is created. If it exists, it will be
truncated.

>> Write output to file. If file doesn’t exist, it is created. If it exists, output
will be appended.
< Used for command that read their input from the terminal. Input is read
from file, instead of terminal.
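All three operators in one short sequence (the /tmp filename is illustrative):

```shell
echo "first"  > /tmp/redir.txt    # > creates the file (or truncates it)
echo "second" >> /tmp/redir.txt   # >> appends instead of overwriting
wc -l < /tmp/redir.txt            # < feeds the file to stdin: prints 2
```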

c) tr – Translate, squeeze, and/or delete characters from standard input, writing to standard
output.
Syntax – tr[OPTION]… SET1 [SET2]
Options –
-c, -C, -- complement Use the complement of SET1.
-d, -- delete Delete characters in SET1, do not translate.
[:alnum:] All letters and digits.
[:alpha:] All letters.
[=CHAR=] All characters which are equivalent to CHAR.
[:space:] All horizontal or vertical whitespace.
[:upper:] All upper case letters.
\b Backspace
\f Form feed
\n New line
\r Return
\t Horizontal tab
\v Vertical tab
-- help Display this help and exit.
-- version Outputs version information and exit.
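Translation and deletion with tr, runnable as-is:

```shell
echo "hello world" | tr '[:lower:]' '[:upper:]'   # HELLO WORLD
echo "banana" | tr -d 'a'                          # delete characters: bnn
echo "hello world" | tr ' ' '\t'                   # replace spaces with tabs
```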

d) tee – Copy standard input to each file, and also to standard output. The tee command reads
standard input, then writes the output of a program to standard output and simultaneously
copies it into the specified file or files.
Syntax – tee [OPTION]… [FILE]
Options –
-a, -- append Append to the given FILEs, do not overwrite.
-i, -- ignore-interrupts Ignore interrupt signals.
-p Diagnose errors writing to non-pipes.
-- output-error[=MODE] Set behaviour on write error. See MODE below.
-- help Displays this help and exit.
-- version Outputs version information and exit.
Examples:
1.The following command (with the help of tee command) writes the output both to the
screen (stdout) and to the file.
$ ls | tee file1.txt
2. You can instruct the tee command to append to the file using the option -a as shown below
$ ls | tee -a file1.txt
3. You can also write the output to multiple files as shown below.
$ ls | tee file1 file2 file3
4. Write the output to two commands. You can also use tee command to store the output of a
command to a file and redirect the same output as an input to another command.
The following command will long list directory contents, store it in abc.txt and then count the
no. of lines and echo it
$ ls -l | tee abc.txt | wc -l
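Writing to several files while still producing standard output can be checked like this (the /tmp filenames are illustrative):

```shell
printf 'alpha\nbeta\n' | tee /tmp/t1.txt /tmp/t2.txt   # writes both files AND stdout
echo 'gamma' | tee -a /tmp/t1.txt > /dev/null          # -a appends to t1.txt
wc -l < /tmp/t1.txt                                    # now 3 lines
```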

e) more- Display output one screen at a time.


Syntax- more [-dlfpcsu] [-num] [+/ pattern] [+ linenum] [file ...]

Options

-num Specifies an integer which is the screen size (in lines).
-d More will prompt the user with the message "[Press space to continue, 'q'
to quit.]" and will display "[Press 'h' for instructions.]” instead of ringing
the bell when an illegal key is pressed.

-l more usually treats ^L (form feed) as a special character and pauses after any
line containing it; the -l option prevents this behavior.
-f Causes more to count logical, rather than screen lines (i.e., long lines are
not folded).
-p Do not scroll. Instead, clear the whole screen and then display the text.
-c Do not scroll. Instead, paint each screen from the top, clearing the
remainder of each line as it is displayed.
-s Squeeze multiple blank lines into one.
-u Suppress underlining.
+/ The +/ option specifies a string that will be searched for before each file is
displayed.

less- Page through text one screenful at a time, Search through output, Edit the command
line. less provides more emulation plus extensive enhancements such as allowing backward
paging through a file as well as forward movement.
Syntax- less [options]
command | less [options]

Print commands

lpr- Print files. Send a print job to the default system queue.
Syntax- lpr [-Pprinter] [-#num] [-C class] [-J job] [-T title] [-U user] [-i [numcols]]
[-1234 font] [-wnum] [-cdfghlnmprstv] [name ...]
Options-
-P Force output to a specific printer. Normally, the default printer is used (site
dependent), or the value of the environment variable PRINTER is used.
-h Suppress the printing of the burst page.
-m Send mail upon completion
-r Remove the file upon completion of spooling. Cannot be used with the -s
option, due to security concerns.
-s Use symbolic links. Usually files are copied to the spool directory. The -s
option will use symlink(2) to link data files rather than trying to copy them so
large files can be printed. This means the files should not be modified or
removed until they have been printed
-#num: The quantity num is the number of copies desired of each file named. For example,
lpr -#3 foo.c bar.c more.c
would result in 3 copies of the file foo.c, followed by 3 copies of the file bar.c, etc. On the
other hand,
cat foo.c bar.c more.c | lpr -#3
will give three copies of the concatenation of the files. Often a site will disable this feature to
encourage use of a photocopier instead.

lpc- Line printer control program.


Syntax- lpc [command [argument ...]]
lpc is used by the system administrator to control the operation of the line printer system.
For each line printer configured in /etc/printcap, lpc can be used to:
• Disable or enable a printer
• Disable or enable a printer's spooling queue
• Rearrange the order of jobs in a spooling queue

• Find the status of printers, and their associated spooling queues and printer daemons.
Without any arguments, lpc will prompt for commands from the standard input. If arguments
are supplied, lpc interprets the first argument as a command and the remaining arguments as
parameters to the command. The standard input can be redirected causing lpc to read
commands from file. Commands can be abbreviated;
Options-
? [command ...]
help [command ...] Print a short description of each command specified in the
argument list, or, if no arguments are given, a list of the recognized
commands.
abort { all | printer } Terminate an active spooling daemon on the local host immediately
and then disable printing (preventing new daemons from being
started by lpr) for the specified printers.
clean { all | printer } Remove any temporary files, data files, and control files that cannot
be printed (i.e., do not form a complete printer job) from the
specified printer queue(s) on the local machine
disable { all | printer } Turn the specified printer queues off. This prevents new
printer jobs from being entered into the queue by lpr.
down { all | printer } message ... Turn the specified printer queue off, disable
printing and put message in the printer status file. The message doesn't need to
be quoted; the remaining arguments are treated like echo(1). This is normally
used to take a printer down and let others know why (lpq(1) will indicate the
printer is down and print the status message).
enable { all | printer } Enable spooling on the local queue for the listed printers.
This will allow lpr(1) to put new jobs in the spool queue.
exit, quit Exit from lpc.

File and Directory Commands

a) cd – Change the shell working directory. Note that paths use a forward slash (/) as the separator.
Syntax – cd [-L|[-P [-e]] [-@]] [dir]
Options –
Option Command Description
/ cd / Go to root directory.
.. cd .. Move one directory back.
- cd - cd into the previous directory.
/parent/child/ cd /home/student Go to the child directory directly.
Here, the user will be directed to
‘student’ directory under the home
directory
~ cd ~ Go to the home directory of the current user (cd with no
argument does the same).
./ cd ./ Stay in the current working directory.
Examples
1. Move to the sybase folder:
$ cd /usr/local/sybase
$ pwd

/usr/local/sybase
2. Change to another folder:
$ cd /var/log
$ pwd
/var/log
3. Quickly get back:
$ cd -
$ pwd
/usr/local/sybase
4. Move up one folder:
$ cd ..
$ pwd
/usr/local/
$ cd (Back to your home folder)
5. Change to the directory fred inside the current directory:
$ cd ./fred
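The `..` and `-` shortcuts can be exercised in one short session (the /tmp/cddemo path is illustrative):

```shell
mkdir -p /tmp/cddemo/child
cd /tmp/cddemo/child
pwd     # /tmp/cddemo/child
cd ..   # move one directory up
pwd     # /tmp/cddemo
cd -    # jump back to the previous directory (cd - also prints it)
pwd     # /tmp/cddemo/child
```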

b) mkdir –Create the directories, if they do not exist. Just specify the new folder’s name, ensure
it doesn’t exist, and you’re ready to go.
Syntax – mkdir [OPTION]… DIRECTORY…
Options –
-m, -- mode = MODE Set file mode (as in chmod), not a=rwx - umask.
-p, -- parents No error if existing, make parent directories as needed.
-v, -- verbose Print a message for each created directory.
-- help Displays this help and exit.
-- version Outputs version information and exit.
-p To create subdirectories

Examples:
1. To create a directory
$ mkdir dir_name

2. To create multiple directories in current location


$mkdir dir1_name dir2_name dir3_name

3. To control permissions of new directories


$mkdir -m 777 sample
4. To create specified intermediate directories for a new directory if they do not
already exist (mkdir -p path):
$mkdir folder1/folder2/sample

mkdir: cannot create directory `folder1/folder2/sample': No such file or directory

$mkdir -p folder1/folder2/sample

$ls folder1/folder2/

sample
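The -p and -m options together, in a form you can paste into a shell (the /tmp/mkdemo path is illustrative):

```shell
rm -rf /tmp/mkdemo                            # clean slate for the demo
mkdir -p /tmp/mkdemo/folder1/folder2/sample   # -p creates the whole chain
ls /tmp/mkdemo/folder1/folder2                # sample
mkdir -m 700 /tmp/mkdemo/private              # -m sets permissions at creation
```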

c) rmdir –Remove the directories, if they exist and are empty.


Syntax – rmdir [OPTION]... DIRECTORY...
Options –
-- ignore-fail-on-non- Ignore each failure that is solely because a directory is non-empty.
empty
-p, -- parents Remove DIRECTORY and its ancestors; e.g., 'rmdir -p a/b/c' is
similar to 'rmdir a/b/c a/b a'
-v, -- verbose Print a message for each removed directory.
-- help Displays this help and exit.
-- version Outputs version information and exit.

d) cp –Copy source to destination, or multiple sources to directories.


Syntax – cp [OPTION]... [-T] SOURCE DEST
or: cp [OPTION]... SOURCE... DIRECTORY
or: cp [OPTION]... -t DIRECTORY SOURCE...
Options –
-a, -- archive Same as -dR --preserve=all.
-d Same as --no-dereference --preserve=links.
-f, -- force If an existing destination file cannot be opened, remove it and try
again (this option is ignored when the -n option is also used).
-R, -r, -- recursive Copy directories recursively.
-v, -- verbose Print a message for each created directory.
-- help Displays this help and exit.
-- version Outputs version information and exit.

Examples:
1. Copy demofile to demofile.bak :
$ cp demofile demofile.bak

or
$ cp demofile{,.bak}
2. With variables make sure you quote everything:
$ cp "$SOURCE" "$DEST"
3. Copy demofile.txt to demofile.bak :
$ FILE="demofile.txt"
$ cp "$FILE" "${FILE%.*}.bak"
4. Copy floppy to home directory:
$ cp -f /mnt/floppy/* ~

e) rm –This command is used to remove files in a directory or the directory itself. A


directory cannot be removed if it is not empty.
If a file is unwritable, the standard input is a tty, and the -f or --force option is not given,
rm prompts the user for whether to remove the file. If the response is not affirmative,
the file is skipped.
Syntax – rm [OPTION]... [FILE]...
Options –
-d, --dir Remove empty directories.
-f, --force Ignore non-existent files and arguments, never prompt.
-R, -r, --recursive Remove directories and their contents recursively.
-v, --verbose Explain what is being done as each file is removed.
--help Display this help and exit.
--version Output version information and exit.

1. To remove the file “accounts.txt” in the current directory you would type
$ rm accounts.txt
2. To delete a directory named “cases” with all its contents you would enter
$ rm -r cases
This assumes that the directory “cases” is a subdirectory of the current directory.
3. In order to delete a file that is not in the current directory you can specify the full
path. For example,
$ rm /home/newuser/info
would delete the file “info” in the directory “/home/newuser/”.
4. You can selectively delete a subset of files using the wildcard character “*”. For
example,
$ rm *.txt
would remove all files that end with “.txt”.
Note: rm -r removes all the contents in a directory and the directory as well.

f) mv – Rename source to destination, or move sources to directories. mv (short for move)


is a Unix command that moves one or more files or directories from one place to
another. Since it can “move” files from one filename to another, it is also used to rename
files.
Using mv requires the user to have write permission for the directories the file will
move between. This is because mv changes the file’s location by editing the file list of
each directory.

Syntax – mv [OPTION]... [-T] SOURCE DEST


or: mv [OPTION]... SOURCE... DIRECTORY

or: mv [OPTION]... -t DIRECTORY SOURCE...


Options –
-n, --no-clobber Do not overwrite an existing file.
-f, --force Do not prompt before overwriting.
-v, --verbose Explain what is being done as each file is moved.
--help Display this help and exit.
--version Output version information and exit.
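Both uses of mv, rename and relocate, can be sketched with throwaway files (the names below are illustrative):

```shell
echo "hello" > old_name.txt
mv old_name.txt new_name.txt   # rename: same inode, new directory entry
mkdir -p archive
mv -v new_name.txt archive/    # move into a directory, announcing the action
cat archive/new_name.txt       # prints "hello"
```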

Security and System Commands

a) df - Show information about the file system on which each FILE resides, or all file
systems by default. With no arguments, 'df' reports the space used and available on all
currently mounted filesystems (of all types). Otherwise, 'df' reports on the filesystem
containing each argument file.

Syntax - df [OPTION]... [FILE]...


Options -
-T, --print-type Print file system type.
-i, --inodes List i-node information instead of block usage.
-h, --human-readable Print sizes in powers of 1024 (e.g., 1023M).
-a, --all Include pseudo, duplicate, inaccessible file systems.
--help Display this help and exit.
--version Output version information and exit.
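A quick sketch of typical invocations; the output differs per machine, so none is reproduced here:

```shell
df -h .       # space on the filesystem holding the current directory, human-readable
df -hT /tmp   # same for /tmp, with the filesystem type as an extra column (GNU df)
```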

b) du (Disk Usage) - Summarize disk usage of the set of FILEs, recursively for directories.
It reports the amount of disk space used by the specified files and for each subdirectory.
With no arguments, 'du' reports the disk space for the current directory. Normally the
disk space is printed in unit of 1024 bytes, but this can be overridden

Syntax - du [OPTION]... [FILE]...


Options -
-a, --all Write counts for all files, not just directories.
-h, --human-readable Print sizes in human readable format (e.g., 1K 234M
2G).
-c, --total Produce a grand total.
-x, --one-file-system Skip directories on different file systems.
--help Display this help and exit.
--version Output version information and exit.
-k Print sizes in 1024-byte blocks, overriding the default
block size
-l Count the size of all files, even if they have appeared
already (as a hard link).
-s Display only a total for each argument
--max-depth=MAX Show the total for each directory (and file, with --all) that is
at most MAX levels down from the root of the
hierarchy. The root is at level 0, so 'du --max-depth=0' is
equivalent to 'du -s'.

1. List the total file sizes for everything 1 directory (or less) below the current
directory ( . )

$ du -hc --max-depth=1 .
400M ./data1

1.0G ./data2
1.3G .
1.3G total
2. List the 10 largest subdirectories in the current directory:
$ du -hs */ | sort -hr | head
3. Display the 10 largest subdirectories of the current folder, each with its human
readable size:
$ du -k * | sort -nr | cut -f2 | xargs -d '\n' du -sh | head -n 10
4. Display folder sizes, to a depth of 2, starting from the home directory (~):
$ du -ch --max-depth=2 ~

c) dd- Data Duplicator, convert and copy a file, write disk headers, boot records, create a
boot floppy. dd can make an exact clone of an (unmounted) disk, this will include all
blank space so the output destination must be at least as large as the input. dd can
copy a smaller drive to a larger one, but can’t copy a larger drive to a smaller one.
Syntax: dd [options]

Options:

if=FILE Input file : Read from FILE instead of standard input.


of=FILE Output file : Write to FILE instead of standard output. Unless
'conv=notrunc' is given, 'dd' truncates FILE to zero bytes (or the
size specified with 'seek=').
ibs=BYTES Read BYTES bytes at a time.
obs=BYTES Write BYTES bytes at a time
bs=BYTES Block size, both read and write BYTES bytes at a time. This
overrides 'ibs' and 'obs'.
cbs=BYTES Convert BYTES bytes at a time.
skip=BLOCKS Skip BLOCKS 'ibs'-byte blocks in the input file before copying.
seek=BLOCKS Skip BLOCKS 'obs'-byte blocks in the output file before copying.
count=BLOCKS Copy BLOCKS 'ibs'-byte blocks from the input file, instead of
everything until the end of the file.
Examples:

1. Clone the drive sda onto drive sdb:


$ dd if=/dev/sda of=/dev/sdb

2. Clone the drive hda onto an image file:


$ dd if=/dev/hda of=/image.img
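Drive cloning is destructive, but the same block-copy mechanics can be tried harmlessly on ordinary files; the filenames below are illustrative:

```shell
# Create a 4 KiB file from /dev/zero: 4 blocks of 1024 bytes each.
dd if=/dev/zero of=zeros.img bs=1024 count=4 2>/dev/null
# Copy it while skipping the first two input blocks, leaving 2 KiB.
dd if=zeros.img of=tail2k.img bs=1024 skip=2 2>/dev/null
ls -l zeros.img tail2k.img
```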

d) dmesg-Print kernel (and driver) messages, control the kernel ring buffer. The dmesg
program allows you to print system messages from the buffer (mostly kernel and
drivers loading at bootup) and can also be used to configure the kernel ring buffer.
Syntax: dmesg [ -c ] [ -n level ] [ -s bufsize ]

Options -
-c clear the ring buffer contents after printing.

-s bufsize Use a buffer of size bufsize to query the kernel ring buffer. This is
16392 by default.
-n level Set the level at which logging of messages is done to the console.
For example, -n 1 prevents all messages, except panic messages,
from appearing on the console. All levels of messages are still
written to /proc/kmsg, so syslogd(8) can still be used to control
exactly where kernel messages appear. When the -n option is used,
dmesg will not print or clear the kernel ring buffer.
When both options are used, only the last option on the command line will have an
effect. As it's a ring-buffer, it will automatically cycle out old information when the
buffer is full.

Examples

1. Print all the bootup messages to a file:


$ sudo dmesg > messages.txt

2. Print recent messages:


$ sudo dmesg | tail -f

e) find - Search for files in a directory hierarchy.

Syntax -find [-H] [-L] [-P] [-Olevel] [-D debugopts] [path...] [expression]
Options -
-used n File was last accessed n days after its status was last changed.
-name PATTERN Base of the file name matches shell pattern PATTERN.
--help Display this help and exit.
--version Output version information and exit.
Examples:

1. List filenames ending in .mp3, searching in the current folder and all subfolders, find
can typically list over 100,000 files per second:
$ find . -name "*.mp3"

2. List filenames matching the name Alice or ALICE (case insensitive), search in the
current folder (.) only:
$ find . -maxdepth 1 -iname "alice" -print0

3. List filenames ending in .mp3, searching in the music folder and subfolders:
$ find ./music -name "*.mp3"

4. List all files that belong to the user Maude:


$ find . -user Maude -print0

5. List all files in sub-directories (but not the directory names)


$ find . -type f

6. Find files that are over a gigabyte in size:


$ find ~/Movies -size +1024M

7. Find all .gif files, pipe to xargs to get the size and then pipe into tail to display only
the grand total:
$ find . -iname "*.gif" -print0 | xargs -0 du -ch | tail -1

f) netstat - Print network connections, routing tables, interface statistics, masquerade


connections, and multicast memberships
Syntax - netstat [options]
Options -
-c, --continuous This will cause netstat to print the selected information every second
continuously.
-p, --program Show the PID and name of the program to which each socket
belongs.
-l, --listening Display listening server sockets.
--numeric, -n Show numerical addresses instead of trying to determine symbolic
host, port or user names.
-v, --verbose Tell the user what is going on by being verbose.
--help Display this help and exit.
--version Output version information and exit.

g) locate - Find files by name. It searches a prepared database (maintained by updatedb)
instead of scanning the filesystem, so it is much faster than find.


Syntax - locate [OPTION]... [PATTERN]...
Options -
-A, --all Only print entries that match all patterns.
-b, --basename Match only the base name of path names.
-c, --count Only print number of found entries.
-d, --database DBPATH use DBPATH instead of default database (which is
/var/lib/mlocate/mlocate.db).
-e, --existing Only print entries for currently existing files.

-w, --wholename Match whole path name (default).


--help Display this help and exit.
--version Output version information and exit.

h) strace-Trace system calls and signals. It intercepts and records any syscalls made by a
command. Additionally, it also records any Linux signal sent to the process. We can then use
this information to debug or diagnose a program. It’s especially useful if the source code of
the command is not readily available.
Syntax-strace [OPTIONS] command

Examples:

1. Using strace with pwd command


$ strace pwd

2. Redirect strace output to a file


$ strace -o pwd-log.txt pwd

3. Use strace to print system calls summary instead of regular output.


$ strace -c pwd

4. Trace particular system calls using strace


$ strace -e trace=write pwd

5. Strace command to print timestamp of each system call.


$ strace -r pwd

i) stat-Display file or file system status. Mandatory arguments to long options are mandatory
for short options too.
Syntax: stat [OPTION]... FILE...
Examples
1. List the file permissions for file1.sh:

$ stat -c%A file1.sh

-rw-r--r--

2. Display permissions in octal form (%a) and the filename (%n) for all files in the
directory:
$ stat -c "%a %n" *

or with owner group size in different forms:

$ stat -c "%a %A %G:%U %g:%u %n %s" *

3. Display file system (directory) information for /etc:

$ stat -f /etc
File: "/etc"
ID: 0 Namelen: 255 Type: reiserfs
Block size: 4096 Fundamental block size: 4096
Blocks: Total: 1977922 Free: 1272318 Available: 1272318
Inodes: Total: 0 Free: 0

j) quota- Display disk usage and limits, by default only the user quotas are printed.
Syntax-
quota [ -guv | q ]
quota [ -uv | q ] user
quota [ -gv | q ] group
Options
g Print group quotas for the group of which the user is a member.
u Print user quotas (this is the default)
v Verbose, will display quotas on filesystems where no storage is allocated.
q Print a more terse message, containing only information on filesystems
where usage is over quota.
Specifying both -g and -u displays both the user quotas and the group quotas (for the user).
Only the super-user can use the -u flag and the optional user argument to view the limits of
other users. Non-super-users can use the -g flag and optional group argument to view only
the limits of groups of which they are members.
The -q flag takes precedence over the -v flag. Quota reports the quotas of all the filesystems
listed in /etc/fstab. For filesystems that are NFS-mounted a call to the rpc.rquotad on the server
machine is performed to get the information. If quota exits with a non- zero status, one or more
filesystems are over quota.
Files
• quota.user located at the filesystem root with user quotas
• quota.group located at the filesystem root with group quotas
• /etc/fstab to find filesystem names and locations

k) ping- Test a network connection.


Syntax- ping [options] [ hop ...] destination_host

When using ping for fault isolation, it should first be run on the local host, to verify that the
local network interface is up and running. Then, hosts and gateways further and further away
should be 'pinged'.
Ping is intended for use in network testing, measurement and management. Because of the load
it can impose on the network, it is unwise to use ping during normal operations or from
automated scripts.
If ping does not receive any reply packets at all it will exit with code 1. If a packet count and
deadline are both specified, and fewer than count packets are received by the time the deadline
has arrived, it will also exit with code 1. On other error it exits with code 2. Otherwise it exits
with code 0. This makes it possible to use the exit code to see if a host is alive or not.
Ping response times below 10 milliseconds often have low accuracy. A time of 10 milliseconds
is roughly equal to a distance of 1860 Miles, travelling a straight line route at the speed of
light, (or a round trip of 2 × 930 miles). From this you can see that Ping response times only
give a very rough estimate of the distance to a remote host.
Examples
1. Ping example.com, waiting for 5 seconds before sending the next packet:
$ ping -i 5 example.com

2. Ping example.com, waiting only 0.1 second between pings, may require super user:
$ ping -i 0.1 example.com

3. Ping example.com, giving an audible beep when the peer is reachable:


$ ping -a example.com

4. Ping example.com, printing the full network route ECHO_REQUEST sent and
ECHO_REPLY received:
$ ping -R example.com

Process Commands
a) ps - Report a snapshot of the current processes. Process status, information about
processes running in memory. If you want a repetitive update of this status, use top.
Syntax - ps [options]
Options -
-e, -ef, -eF, -ely To see every process on the system using standard syntax.
-A Select all processes. Identical to -e.
-a Select all processes except both session leaders and processes
not associated with a terminal.
-d Select all processes except session leaders.
-e All processes. Identical to -A.
r Restrict the selection to only running processes.
-f Full format listing
--version Output version information and exit.

-x Lift the BSD-style "must have a tty" restriction, which is
imposed upon the set of all processes when some BSD-style
(without "-") options are used or when the ps personality setting
is BSD-like.

"ps -aux" is distinct from "ps aux". The POSIX and UNIX standards require that "ps -aux" print
all processes owned by a user named "x", as well as printing all processes that would be selected
by the -a option. If the user named "x" does not exist, this ps may interpret the command as "ps
aux" instead and print a warning. This behavior is intended to aid in transitioning old scripts
and habits. It is fragile, subject to change, and thus should not be relied upon.
By default, ps selects all processes with the same effective user ID (euid=EUID) as the current
user and associated with the same terminal as the invoker. It displays the process ID (pid=PID),
the terminal associated with the process (tname=TTY), the cumulated CPU time in
[DD-]hh:mm:ss format (time=TIME), and the executable name (ucmd=CMD). Output is unsorted
by default.
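Two common invocations, sketched here; the exact columns depend on the ps build:

```shell
ps -ef | head -5            # full-format listing of all processes, trimmed to the first lines
ps -p $$ -o pid,ppid,comm   # only the current shell's entry, selected by its PID ($$)
```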

b) pstree - Display a tree of processes.

Syntax - pstree [-acglpsStuZ] [ -h | -H PID ] [ -n | -N type ]


[ -A | -G | -U ] [ PID | USER ]
Options -
-a, --arguments Show command line arguments.
-A, --ascii Use ASCII line drawing characters.
-c, --compact Don't compact identical subtrees.
-h, --highlight-all Highlight current process and its ancestors.
-H PID, --highlight-pid PID highlight this process and its ancestors.
-p, --show-pids Show PIDs; implies -c.
-s, --show-parents Show parents of the selected process.
--version Output version information and exit.
-l, --long Don't truncate long lines.
-n, --numeric-sort Sort output by PID.

-U, --unicode Use UTF-8 (Unicode) line drawing characters.
-u, --uid-changes Show uid transitions.

c) top - Display CPU-intensive programs currently running. Top displays per-process CPU
usage (not total server load) it is useful for seeing how much work the machine is doing now
compared to some point in the past. At the top of the display output there are three numbers
representing the number of processes waiting for the CPU now, an average for the past five
minutes, and an average for the past fifteen minutes. These three numbers are the "load
average". Top should only be used to compare the load average on two different machines if
they have an identical configuration (both hardware and software.)

Syntax - top -hv|-bcEHiOSs1 -d secs -n max -u|U user -p pid -o fld -w[cols]
Options –
-h | -v Help/Version Show library version and the usage prompt, then quit.
-E k|m|g|t|p|e Extend-Memory-Scaling: instructs top to force summary-area memory to be
scaled as kibibytes, mebibytes, gibibytes, tebibytes, pebibytes, or exbibytes.
-n Number-of-iterations limit Specifies the maximum number of iterations, or frames,
as: -n number top should produce before ending.
-s Secure-mode operation Starts top with secure mode forced, even for root. This
mode is far better controlled through a system
configuration file
-b Batch-mode operation Starts top in Batch mode, which could be useful for
sending output from top to other programs or to a file. In
this mode, top will not accept input and runs until the
iterations limit you've set with the `-n' command-line
option or until killed.
-c Command-line/Program- Starts top with the last remembered `c' state reversed.
name toggle Thus, if top was displaying command lines, now that
field will show program names, and vice versa. See the
`c' interactive command for additional information.

-u | -U number-or-name User-filter mode: display only processes with a user id or user
name matching that given. The `-u' option matches on effective
user whereas the `-U' option matches on any user (real,
effective, saved, or filesystem).
-p Monitor-PIDs mode, as: -pN1 -pN2 ... or -pN1,N2,N3 ... Monitor only
processes with specified process IDs. This option can be given up to
20 times, or you can provide a comma-delimited list with up to 20
pids. Co-mingling both approaches is permitted.
Top Output - (sample screenshot omitted)

d) nice - Run a program with modified scheduling priority. Priority can be adjusted by
'nice' over the range of -20 (the highest priority) to 19 (the lowest). Default = 10. If no
arguments are given, 'nice' prints the current scheduling priority, which it inherited.
Otherwise, 'nice' runs the given Command with its scheduling priority adjusted.
Syntax - nice [OPTION] [COMMAND [ARG]...]
Options –
-n, --adjustment=N Add integer N to the niceness (default 10).
--help Display this help and exit.
--version Output version information and exit.
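The default +10 adjustment is easy to observe by running nice under itself:

```shell
nice                      # no command: print the inherited niceness (usually 0)
nice nice                 # the inner nice reports the default +10 adjustment
nice -n 19 sh -c 'nice'   # run at the lowest priority; the child reports 19
```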

e) renice - Alter priority of running processes.


Syntax - renice [-n] priority [-g | -p | -u] identifier...
Options -
-n, --priority Specify the scheduling priority to be used for the process, process
group, or user. Use of the option -n or --priority is optional, but when
used it must be the first argument.

-g, --pgrp Interpret the succeeding arguments as process group IDs.


-p, --pid Interpret the succeeding arguments as process IDs (the default).
-u, --user Interpret the succeeding arguments as usernames or UIDs.
--help Display this help and exit.
--version Output version information and exit.
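A sketch using a disposable background job; note that non-root users may only raise a niceness, never lower it:

```shell
sleep 30 &                 # start a disposable background job; $! holds its PID
pid=$!
renice -n 5 -p "$pid"      # raise its niceness to 5 (from the usual 0)
ps -o ni= -p "$pid"        # confirm the new value
kill "$pid"                # clean up
```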

f) kill - Send a signal to a process. It’s annoying when a program is unresponsive, and you
can’t close it by any means. Fortunately, the kill command solves this kind of problem.
Simply put, kill sends a TERM or kill signal to a process that terminates it.
You can kill processes by entering either the PID (processes ID) or the program’s
binary name:
Syntax - kill [options] <pid> [...]
Options -
kill -9 -1 Kill all processes you are permitted to kill.
-l, --list [signal] List signal names. With an optional argument, convert a signal
number to a signal name, or the other way round.
-s, --signal <signal> Specify the signal to be sent. The signal can be specified by
name or number.
--help Display this help and exit.
Example:
$ kill 2456 # sends the terminate signal to the process with PID 2456.
1.List the running process
$ ps

PID TTY TIME CMD


1293 pts/5 00:00:00 MyProgram
$ kill 1293
[2]+ Terminated MyProgram
2.To run a command and then kill it after 5 seconds:
$ my_command & sleep 5
$ kill -0 $! && kill $!

g) last - Show a listing of last logged in users.


Syntax -last [options] [<username>...] [<tty>...]
Options -
-<number> How many lines to show.
-a, --hostlast Display hostnames in the last column.
-d, --dns Translate the IP number back into a hostname.

-f, --file <file> Use a specific file instead of /var/log/wtmp.


-F, --fulltimes Print full login and logout times and dates.
-i, --ip Display IP numbers in numbers-and-dots notation.
-n, --limit <number> How many lines to show.
-R, --nohostname Don't display the hostname field.
-s, --since <time> Display the lines since the specified time.
-t, --until <time> Display the lines until the specified time.
-p, --present <time> Display who were present at the specified time.
-w, --fullnames Display full user and domain names.
--help Display this help and exit.
--version Output version information and exit.

h) tail- Print the last 10 lines of each FILE to standard output. With more than one FILE,
precede each with a header giving the file name.
Syntax - tail [OPTION]... [FILE]...
Options –
-c, --bytes=[+]NUM Output the last NUM bytes; or use -c +NUM to output
starting with byte NUM of each file.
-f, --follow[={name|descriptor}] Output appended data as the file grows; an absent
option argument means 'descriptor'.
-v, --verbose Always output headers giving file names.
-F Same as --follow=name --retry.
-n, --lines=[+]NUM Output the last NUM lines, instead of the last 10; or use -n
+NUM to output starting with line NUM.
-q, --quiet, --silent Never output headers giving file names.
--retry Keep trying to open a file if it is inaccessible.
-z, --zero-terminated Line delimiter is NUL, not newline.

--help Display this help and exit.


--version Output version information and exit.
Examples

1. Extract the last 85 lines from a file:


$ tail -85 file.txt

2. Output the newly appended lines of a file instantly:


$ tail -f /var/log/wifi.log

3. Output newly appended lines, and keep trying if the file is temporarily inaccessible:
$ tail -f /var/log/wifi.log --retry

or
$ tail -F /var/log/wifi.log

4. Extract lines 40-50 from a file, first using head to get the first 50 lines then tail to get
the last 10:
$ head -50 file.txt | tail -10

i) head - Print the first 10 lines of each FILE to standard output. With more than one FILE,
precede each with a header giving the file name.
Syntax - head [OPTION]... [FILE]...
Options –
-n, --lines=[+]NUM Print the first NUM lines instead of the first 10; with the leading
'-', print all but the last NUM lines of each file.
-q, --quiet, --silent Never output headers giving file names.
-v, --verbose Always print headers giving file names.
-z, --zero-terminated Line delimiter is NUL, not newline.
--help Display this help and exit.
--version Output version information and exit.
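A small demonstration with a generated file:

```shell
seq 1 20 > numbers.txt     # a 20-line sample file
head -3 numbers.txt        # first three lines: 1, 2, 3
head -n -15 numbers.txt    # all but the last 15 lines, i.e. lines 1-5 (GNU head)
```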

j) jobs- Print currently running jobs and their status. On systems that supports this feature,
jobs will print the CPU usage of each job since the last command was executed. The CPU
usage is expressed as a percentage of full CPU activity. Note that on multiprocessor
systems, the total activity can be more than 100%.

Syntax- jobs [OPTIONS] [PID]


Options-
-c, --command Print the command name for each process in jobs.
-g, --group Only print the group id of each job.
-h, --help Display a help message and exit.
-l, --last Only the last job to be started is printed.
-p, --pid Print the process id for each process in all jobs.
Example: $ jobs # Refer to the next screenshots

k) bg- Move jobs to the background. When you enter an ampersand (&) symbol at the end
of a command line, the command runs without occupying the terminal window. The
shell prompt is displayed immediately after you press Return. bg takes a “job ID”
available from jobs, not a PID.
Syntax - bg [option]
Options –
%Number Use the job number such as %1 or %2.
%String Use the string whose name begins with suspended command
such as %commandNameHere or %ping.
%+ OR %% Refers to the current job.
%- Refers to the previous job.

In the figure above the job with job id 1 is executed in the background. To start a new
process in the background refer to the last command, ‘sleep 500 &’.

l) fg- Move jobs to the foreground. When you enter a command in a terminal window, the
command occupies that terminal window until it completes. This is a foreground job. fg
takes a “job ID” available from jobs, not a PID. The return value is that of the command
placed into the foreground, or failure if run when job control is disabled.
Syntax - fg [option]
Options –
%Number Use the job number such as %1 or %2.
%String Use the string whose name begins with suspended command such
as %commandNameHere or %ping.
%+ OR %% Refers to the current job.
%- Refers to the previous job.

In the figure above, fg %1 bring the process with job id 1 into the foreground.

Comparison Commands

a) cmp- Compare two files, and if they differ, tells the first byte and line number where they
differ. You can use the 'cmp' command to show the offsets and line numbers where two files
differ. 'cmp' can also show all the characters that differ between the two files, side by side.
Syntax: cmp options... FromFile [ToFile]
Options
-c Output differing bytes as characters.
-i N Ignore differences in the first N bytes of input.
-l Write the byte number (decimal) and the differing bytes (octal) for each
difference.
-s Write nothing for differing files; return exit statuses only.
-v Output version info.
Return values:
0 — files are identical
1 — files differ
2 — inaccessible or missing argument
'cmp' reports the differences between two files character by character, instead of line by line.
As a result, it is more useful than 'diff' for comparing binary files. For text files, 'cmp' is useful
mainly when you want to know only whether two files are identical.
For files that are identical, 'cmp' produces no output. When the files differ, by default, 'cmp'
outputs the byte offset and line number where the first difference occurs. You can use the '-s'
option to suppress that information, so that 'cmp' produces no output and reports whether the
files differ using only its exit status.
Unlike 'diff', 'cmp' cannot compare directories; it can only compare two files
Examples:
Consider two files:
$ cat file2.txt
My name is Mewat
$ cat file1.txt
My name is Mewat Trehaan
1. Compare file1 to file2 and outputs results
$ cmp file1.txt file2.txt

Output: file1.txt file2.txt differ: byte 17, line 1


2. Skip same number of initial bytes from both input files
$ cmp -i 5 file1.txt file2.txt
Output: file1.txt file2.txt differ: byte 12, line 1
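The three return values can be exercised with throwaway files; -s suppresses the report, so only the exit status is visible:

```shell
printf 'alpha\n' > a.txt
printf 'alpha\n' > b.txt
cmp -s a.txt b.txt && echo "identical (exit 0)"                   # prints "identical (exit 0)"
printf 'beta\n' > b.txt
cmp -s a.txt b.txt || echo "differ (exit $?)"                     # prints "differ (exit 1)"
cmp -s a.txt no_such_file 2>/dev/null || echo "trouble (exit $?)" # prints "trouble (exit 2)"
```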

b) comm- Compare two sorted files line by line. Output the lines that are common, plus the
lines that are unique.
Syntax: comm [options]... File1 File2
Options:
-1 Suppress lines unique to file1
-2 Suppress lines unique to file2
-3 Suppress lines that appear in both files
• With no options, 'comm' produces three text columns as output. The utility reads file1
and file2, which should be sorted lexically.
• This will output:
o Lines only in file1; Lines only in file2; Lines in both files.
• If printing of a column is suppressed, the output will be padded with TAB characters.
• The options -1, -2, and -3 suppress printing of the corresponding columns.
• The filename '-' means standard input.
Each column will have a number of tab characters prepended to it equal to the number of lower
numbered columns that are being printed. For example, if column number two is being
suppressed, lines printed in column number one will not have any tabs preceding them, and
lines printed in column number three will have one.
Before 'comm' can be used, the input files must be sorted using the collating sequence specified
by the 'LC_COLLATE' locale, with trailing newlines significant. If an input file ends in a non-
newline character, a newline is silently appended. The 'sort' command with no options always
outputs a file that is suitable input to 'comm'.
Unlike some other comparison utilities, 'comm' has an exit status that does not depend on the
result of the comparison. Upon normal completion 'comm' produces an exit code of zero. If
there is an error it exits with nonzero status.
Examples
1. Return the unique lines in the file words.txt that don't exist in countries.txt
$ comm -23 <(sort words.txt | uniq) <(sort countries.txt | uniq)
2. Return the lines that are in both words.txt and countries.txt:
$ comm -12 <(sort words.txt | uniq) <(sort countries.txt | uniq)
3. Return the files that are in the directory 'march' but not in the directory 'april':
$ comm -23 <(ls march) <(ls april)
4. Return the files that are in the directory 'april' but not in the directory 'march':
$ comm -13 <(ls march) <(ls april)

c) diff- Display the differences between two files, or each corresponding file in two
directories.
Each set of differences is called a "diff" or "patch". For files that are identical, diff normally
produces no output; for binary (non-text) files, diff normally reports only that they are different.
Syntax: diff [options] FILES
Options:
-a Treat all files as text.

-B Ignore changes whose lines are all blank.


-i Ignore case differences in file contents.
-l Pass the output through 'pr' to paginate it.
-q Output only whether files differ.
-r Recursively compare any subdirectories found.

Multiple single letter options (unless they take an argument) can be combined into a single
command line word: so '-ac' is equivalent to '-a -c'.
Example
$ diff -q <(sort file1.txt | uniq) <(sort file2.txt | uniq)
The command above will return 0 if file1.txt = file2.txt and will return 1 if file1.txt ≠ file2.txt
Note the files have to be sorted first (the order matters) and if the files could contain duplicate
values, then the output of sort has to be run through the uniq command to eliminate any
duplicate elements.
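A short session illustrating diff's normal output format (the file names are illustrative):

```shell
# Two small files that differ only on line 2 (names are illustrative).
printf 'apple\nbanana\ncherry\n' > old.txt
printf 'apple\nblueberry\ncherry\n' > new.txt

# Normal diff output: "2c2" means line 2 changed; "<" marks the line from
# the first file, ">" the line from the second. diff exits with status 1
# because the files differ.
diff old.txt new.txt
```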

d) diff3- Show differences among three files. When two people have made independent
changes to a common original, 'diff3' can report the differences between the original and the
two changed versions, and can produce a merged file that contains both persons' changes
together with warnings about conflicts. The files to compare are MYFILE, OLDFILE, and
YOURFILE. At most one of these three file names can be '-', which tells 'diff3' to read the
standard input for that file.
Syntax: diff3 [options] MYFILE OLDFILE YOURFILE
Options:
-a, --text          Treat all files as text and compare them line-by-line, even if they
                    do not appear to be text.
-A, --show-all      Output all changes from OLDFILE to YOURFILE into MYFILE,
                    surrounding all conflicts with bracket lines.
-e, --ed            Output unmerged changes from OLDFILE to YOURFILE into MYFILE.
-E, --show-overlap  Output unmerged changes from OLDFILE to YOURFILE into MYFILE,
                    bracketing conflicts.
-x, --overlap-only  Output unmerged changes from OLDFILE to YOURFILE into MYFILE,
                    output only overlapping changes.
-X                  Output unmerged changes from OLDFILE to YOURFILE into MYFILE,
                    output only overlapping changes, bracketing them.
-3, --easy-only     Output unmerged changes from OLDFILE to YOURFILE into MYFILE,
                    output only nonoverlapping changes.

'diff3' normally compares three input files line by line, finds groups of lines that differ, and
reports each group of differing lines. Its output is designed to make it easy to inspect two
different sets of changes to the same file.
• If 'diff3' thinks that any of the files it is comparing is binary (a non-text file), it
normally reports an error, because such comparisons are usually not useful. As with
'diff', you can force 'diff3' to consider all files to be text files and compare them line
by line by using the '-a' or '--text' options.
• Multiple single letter options (unless they take an argument) can be combined into a
single command line argument.
• An exit status of 0 means diff3 was successful, 1 means some conflicts were found,
and 2 means trouble.
e) sdiff- Compares two files and displays the differences in a side-by-side format. The sdiff
command reads the files specified by the File1 and File2 parameters, uses the diff command to
compare them, and writes the results to standard output in a side-by-side format. The sdiff
command displays each line of the two files with a series of spaces between them if the lines
are identical. It displays a < (less than sign) in the field of spaces if the line only exists in the
file specified by the File1 parameter, a > (greater than sign) if the line only exists in the file
specified by the File2 parameter, and a | (vertical bar) for lines that are different.
When you specify the -o flag, the sdiff command merges the files specified by the File1 and
File2 parameters and produces a third file.
Syntax- sdiff file1 file2
Options:
-l Displays only the left side when lines are identical.
-o OutFile Creates a third file, specified by the OutFile variable, by a controlled
line-by-line merging of the two files specified by the File1 and the
File2 parameters.
e           Starts the ed command with an empty file.
e b or e |  Starts the ed command with both sides.
e l or e <  Starts the ed command with the left side.
e r or e >  Starts the ed command with the right side.
l           Adds the left side to the output file.
r           Adds the right side to the output file.
s           Stops displaying identical lines.
Example:
1. To print a comparison of two files, enter
$ sdiff file1.txt file2.txt
abc abc
def <
ghi ghi
> klm

2. To combine and edit two files, staff.jan and staff.apr, and write the results to the staff.year
file, perform the steps indicated
The staff.jan file contains the following lines:
Members of the Accounting Department
Andrea
George
Karen
Sam
Thomas

The staff.apr file contains the following lines:


Members of the Accounting Department
Andrea
Fred
Mark
Sam
Wendy
Enter the following command:
$ sdiff -o staff.year staff.jan staff.apr
The sdiff command will begin to compare the contents of the staff.jan and staff.apr files and
write the results to the staff.year file. The sdiff command displays the following:
Members of the Accounting Department Members of the Accounting Department
Andrea Andrea
George | Fred
%
The % (percent sign) is the command prompt.

f) uniq- Report or filter out repeated lines in a file. Reads standard input comparing adjacent
lines, and writes a copy of each unique input line to the standard output. The second and
succeeding copies of identical adjacent input lines are not written.
Syntax- uniq [options]... [InputFile [OutputFile]]
By default, uniq prints the unique lines of a sorted file: it discards all but one of identical
successive input lines, so that the output contains no repeated adjacent lines. uniq will only
compare lines that appear successively in the input. Repeated lines in the input will not be
detected if they are not adjacent, so it may be necessary to sort the files first. If an InputFile
of - (or nothing) is given, then uniq will read from standard input. If no OutputFile is
specified, uniq writes to standard output.

Examples
Print the file demo.txt omitting any duplicate lines:
$ sort demo.txt | uniq
Print only the unique numbers given the input 1, 1, 2, 3
$ printf "%s\n" 1 1 2 3 | uniq -u
2
3
Count the frequency of some words:
echo "one two three one three" | tr -cs "A-Za-z" "\n" | sort | uniq -c | sort -n -r

Linking Commands

a) Hard Link -Hard links point, or reference, to a specific space on the hard drive. You can
have multiple files hard linked to the same place in the hard drive, but if you change the data
on one of those files, the other files will also reflect that change.
Syntax - ln [OPTION]... [-T] TARGET LINK_NAME
ln [OPTION]... TARGET
ln [OPTION]... TARGET... DIRECTORY
ln [OPTION]... -t DIRECTORY TARGET....
Options –
--backup[=CONTROL]        Make a backup of each existing destination file.
-b                        Like --backup but does not accept an argument.
-d, -F, --directory       Allow the superuser to attempt to hard link directories (note:
                          will probably fail due to system restrictions, even for the
                          superuser).
-f, --force               Remove existing destination files.
-i, --interactive         Prompt whether to remove destinations.
-n, --no-dereference      Treat LINK_NAME as a normal file if it is a symbolic link to
                          a directory.
-P, --physical            Make hard links directly to symbolic links.
-S, --suffix=SUFFIX       Override the usual backup suffix.
-t, --target-directory=DIRECTORY
                          Specify the DIRECTORY in which to create the links.
-T, --no-target-directory Treat LINK_NAME as a normal file always.
-v, --verbose             Print name of each linked file.
--help                    Display this help and exit.
--version                 Output version information and exit.
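A minimal sketch of hard-link behaviour (file names are illustrative); ls -i prints inode numbers, which are identical for hard-linked names:

```shell
# Create a file and a hard link to it; both names refer to one inode.
echo "hello" > original.txt
ln original.txt hardlink.txt

# Both directory entries show the same inode number.
ls -i original.txt hardlink.txt

# Data changed through one name is visible through the other.
echo "world" >> hardlink.txt
cat original.txt        # shows both lines
```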

b) Soft Link - A symbolic link still points to a specific point on the hard drive, but if you create a second
file, this second file does not point to the hard drive, but instead, to the first file.

Syntax - ln -s [OPTION]... [-T] TARGET LINK_NAME


ln -s [OPTION]... TARGET
ln -s [OPTION]... TARGET... DIRECTORY
ln -s [OPTION]... -t DIRECTORY TARGET....
Options -
-r, --relative            Create symbolic links relative to link location.
-s, --symbolic            Make symbolic links instead of hard links.
--backup[=CONTROL]        Make a backup of each existing destination file.
-b                        Like --backup but does not accept an argument.
-d, -F, --directory       Allow the superuser to attempt to hard link directories (note:
                          will probably fail due to system restrictions, even for the
                          superuser).
-f, --force               Remove existing destination files.
-i, --interactive         Prompt whether to remove destinations.
-n, --no-dereference      Treat LINK_NAME as a normal file if it is a symbolic link to
                          a directory.
-P, --physical            Make hard links directly to symbolic links.
-S, --suffix=SUFFIX       Override the usual backup suffix.
-t, --target-directory=DIRECTORY
                          Specify the DIRECTORY in which to create the links.
-T, --no-target-directory Treat LINK_NAME as a normal file always.
-v, --verbose             Print name of each linked file.
--help                    Display this help and exit.
--version                 Output version information and exit.
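A short sketch of symbolic-link behaviour, including what happens when the target is removed (file names are illustrative):

```shell
# Create a file and a symbolic link pointing at it.
echo "data" > target.txt
ln -s target.txt softlink.txt

# ls -l marks the link type with an arrow: softlink.txt -> target.txt
ls -l softlink.txt

# Reads follow the link to the target; deleting the target leaves the
# link dangling, so reads through it then fail.
cat softlink.txt
rm target.txt
cat softlink.txt 2>/dev/null || echo "dangling link"
```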

Compressed Commands

a) tar- Store, list or extract files in an archive.


Syntax-

tar [[-]function] [options] filenames...

tar [[-]function] [options] -C directory-name...

Command-line arguments that specify files to add to, extract from, or list from an archive
can be given as shell pattern matching strings. You can specify an argument for --file (or -f
) whenever you use tar; this option determines the name of the archive file that 'tar' will
work on.
Options-
-c --create Create a new archive (or truncate an old one) and write the named files
to it.
-d, --diff, --compare  Find differences between files in the archive and
                       corresponding files in the file system.
--delete               Delete named files from the archive. (Does not work on
                       quarter-inch tapes).
-r, --append           Append files to the end of an archive.
-t, --list             List the contents of an archive; if filename arguments are
                       given, only those files are listed, otherwise the entire
                       table of contents is listed.
--test-label           Test the archive volume label and exit.
-u, --update           Append the named files if the on-disk version has a
                       modification date more recent than their copy in the archive
                       (if any). Does not work on quarter-inch tapes.
-x --extract Extract files from an archive. The owner, modification time, and file
--get permissions are restored, if possible. If no file arguments are given,
extract all the files in the archive. If a filename argument matches the
name of a directory on the tape, that directory and its contents are
extracted (as well as all directories under that directory). If the archive
contains multiple entries corresponding to the same file (see the --
append command above), the last one extracted will overwrite all earlier
versions.
-v, --verbose verbosely list files processed.
-z, --gzip filter the archive through gzip
Examples:

To create a tar file, enter:

$ tar -cvf filename.tar directory/file

In this example, filename.tar represents the file you are creating and directory/file represents
the directory and file you want to put in the archived file.

You can tar multiple files and directories at the same time by listing them with a space
between each one:

tar -cvf filename.tar /home/vips/bca1 /home/vips/bca6

The above command places all the files in the bca1 and the bca6 subdirectories of
/home/vips in a new file called filename.tar in the current directory.

To list the contents of a tar file, enter:


tar -tvf filename.tar

To extract the contents of a tar file, enter:


tar -xvf filename.tar
This command does not remove the tar file, but it places copies of its unarchived contents
in the current working directory, preserving any directory structure that the archive file used.

Remember, the tar command does not compress the files by default. To create a tarred and
bzipped compressed file, use the -j option:

tar -cjvf filename.tbz file

tar files compressed with bzip2 are conventionally given the extension .tbz; however,
sometimes users archive their files using the tar.bz2 extension.

The above command creates an archive file and then compresses it as the file filename.tbz.
If you uncompress the filename.tbz file with the bunzip2 command, the filename.tbz file is
removed and replaced with filename.tar.

You can also expand and unarchive a bzip tar file in one command:

tar -xjvf filename.tbz

To create a tarred and gzipped compressed file, use the -z option:

tar -czvf filename.tgz file


tar files compressed with gzip are conventionally given the extension .tgz.

This command creates the archive file filename.tar and compresses it as the file
filename.tgz. (The file filename.tar is not saved.) If you uncompress the filename.tgz file
with the gunzip command, the filename.tgz file is removed and replaced with filename.tar.

You can expand a gzip tar file in one command:

tar -xzvf filename.tgz

b) zip and unzip-Package and compress (archive) files. zip is a compression and file packaging
utility for Unix, VMS, MSDOS, OS/2, Windows 9x/NT/XP, Minix, Atari, Macintosh,
Amiga, and Acorn RISC OS. It is analogous to a combination of the Unix commands tar
and compress and is compatible with PKZIP (Phil Katz's ZIP for MSDOS systems).
Syntax:

zip [-aABcdDeEfFghjklLmoqrRSTuvVwXyz!@$] [--longoption ...]

[-b path] [-n suffixes] [-t date] [-tt date]

[zipfile [file ...]] [-xi list]

$ zip -r filename.zip filesdir


In the above example, first argument is the name of the compressed filename with extension
.zip and the remaining arguments are the name of files as well as directories to be
compressed.
The -r option specifies that you want to include all files contained in the filesdir directory
recursively.
To extract the contents of a zip file, enter the following command:

unzip filename.zip

You can use zip to compress multiple files and directories at the same time by listing them
with a space between each one:

zip -r filename.zip file1 file2 file3 /usr/work/school

The above command compresses file1, file2, file3, and the contents of the /usr/work/school/
directory (assuming this directory exists) and places them in a file named filename.zip.

c) gzip and gunzip- Compress or decompress named file(s). 'gunzip' can currently decompress
files created by 'gzip', 'zip', 'compress' or 'pack'. The detection of the input format is
automatic.
Syntax- gzip options ...

To use gzip to compress a file, enter the following command at a shell prompt:

$ gzip filename

The file is compressed and saved as filename.gz.

To expand the compressed file, enter the following command:

$ gunzip filename.gz

The filename.gz compressed file is deleted and replaced with filename.

Unlike zip or tar, gzip does not combine multiple files into a single archive. If you list
several files, it compresses each one separately, in place:

$ gzip file1 file2 file3

The above command compresses file1, file2, and file3, replacing them with file1.gz,
file2.gz, and file3.gz. With the -r option, gzip descends into a directory and compresses
each file it finds individually; to bundle a whole directory tree into one compressed file,
use tar with the -z option instead.
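A sketch contrasting per-file gzip compression with a single tar-plus-gzip bundle (file names are illustrative):

```shell
# gzip compresses each named file in place, producing one .gz per file.
echo "one" > a.txt
echo "two" > b.txt
gzip a.txt b.txt
ls a.txt.gz b.txt.gz     # the originals are replaced by .gz files

# To get ONE compressed archive of several files, combine tar and gzip.
gunzip a.txt.gz b.txt.gz
tar -czf pair.tgz a.txt b.txt
ls pair.tgz
```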

d) bzip2 and bunzip2- To use bzip2 to compress a file, enter the following command at a shell
prompt:
$ bzip2 filename

The file is compressed and saved as filename.bz2.

To expand the compressed file, enter the following command:


$ bunzip2 filename.bz2

The filename.bz2 compressed file is deleted and replaced with filename.

Like gzip, bzip2 compresses each file separately rather than combining them into one
archive. Listing several files compresses each one in place:

$ bzip2 file1 file2 file3

The above command compresses file1, file2, and file3, replacing them with file1.bz2,
file2.bz2, and file3.bz2. To pack a directory tree into a single compressed file, use tar
with the -j option.

e) rar and unrar- Archive files with compression and extract files from a rar archive.
Syntax-

rar command [-switch_1 -switch_N] archive [files...]

unrar command [-switch_1 -switch_N] archive [files...] [path...]

Options-

a Add files to archive.


c    Add archive comment. Comment length is limited to 62000 bytes.
cf   Add files comment. File comments are displayed when the 'v' command is
     given. File comment length is limited to 32767 bytes.
cw   Write archive comment to a specified file.
d    Delete files from archive.
e    Extract files to current directory. Does not create any subdirectories.
f    Freshen files in archive. Updates those files changed since they were packed
     to the archive. This command will not add new files to the archive.
k    Lock archive. Any command which intends to change the archive will be
     ignored.
Commands for unrar

e Extract files to current directory.


p Print file to stdout.
v Verbosely list archive.
l List archive content.
t Test archive files.
Examples

1. Create a new rar archive archive.rar containing file1.dat, file2.dat, file3.dat:


$ rar a archive.rar file1.dat file2.dat file3.dat

2. Create a new rar archive musicmp4.rar containing the directory music:


$ rar a musicmp4.rar music/

3. Create a rar archive that splits the file/files into multiple parts of equal size (50MB):
$ rar a -v50M -R musicmp4.rar music/


4. Extract the rar archive musicmp4.rar:
$ unrar e musicmp4.rar

5. List the content of a rar file without uncompressing it:


$ unrar l musicmp4.rar

f) cpio: Copy files to and from archives. The following archive formats are supported: binary,
old ASCII, new ASCII, crc, HPUX binary, HPUX old ASCII, old tar, and POSIX.1 tar. The
tar format is provided for compatibility with the tar program. By default, cpio creates binary
format archives, for compatibility with older cpio programs. When extracting from archives,
cpio automatically recognizes which kind of archive it is reading and can read archives
created on machines with a different byte-order.
The cpio command is traditionally used to copy file hierarchies in conjunction with the find
command. When creating an archive in copy-out mode give find the -depth option to
minimize problems with permissions on directories that are unreadable.
The -depth option forces find to print all the entries in a directory before printing the
directory itself, which limits the effects of restrictive directory permissions.
Examples

1. Archive the content of a single directory, -ov = Copy-out + view all files:
ls | cpio -ov > directory.cpio
2. Copy all files from src to dest, the find command can provide the file list to cpio,
-pmud = Set file modification time + Unconditionally overwrite + Create
directories as necessary:
find src | cpio -pmud dest
3. Take the contents of the archive tree.cpio and extract it to the current directory
-idv = Copy-In + Create directories + verbose

cpio -idv < tree.cpio


4. Copy files from src to dest that are more than 2 days old and which contain the word
'foobar':
find src -mtime +2 | xargs grep -l foobar | cpio -pdmu dest
5. Archive an entire directory tree:
find . -print -depth | cpio -ov > tree.cpio

By carefully selecting options to the find command and combining it with other standard
utilities, it is possible to exercise very fine control over which files are copied. This next
example copies files from src to dest that are more than 2 days old and whose names
match a particular pattern:
find src -mtime +2 | grep foo[bar] | cpio -pdmu dest

g) pipes (unnamed) – A pipe is a unidirectional data channel that can be used for inter-process
communication. Pipes are commonly used to chain commands together to filter, sort, and
display text.
Syntax – [command-1] [pipes][command-2][pipes][command]...
Options –
| First command's output goes to the second command as input.
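A small pipeline sketch: each | hands one command's standard output to the next command's standard input. Here the chain counts word frequencies in a short piece of text.

```shell
# sort groups identical lines together, uniq -c counts each group, and
# sort -rn orders the counts from most to least frequent.
printf 'beta\nalpha\nbeta\n' | sort | uniq -c | sort -rn
```

The most frequent line ('beta', count 2) is printed first.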

h) mkfifo(named pipes): While an unnamed Linux pipe is valid for one process only,
a named Linux pipe will ensure that the command is valid the entire time until you shut
down the system or delete it.
Syntax: mkfifo <named-pipe>

Or

mknod p <named-pipe>

Imagine you have a process running in one terminal that produces output. Now you
want to pipe that output to a second terminal. This is where a named pipe is a big help.
To redirect a standard output of any command to another process, use the “>” symbol.
To redirect a standard input of any command, use the “<” symbol. In the following
example, you name the pipe in the first terminal.

$ mkfifo named-pipe

$ ls > named-pipe

Now, in the second terminal, enter the following code to see the output.

$ cat < named-pipe

Shell Programming

The shell, or command interpreter, is a program started by the login process when the user's
session opens. The user interacts with the shell through a terminal emulation window. This
can be one in the workstation's Graphical User Interface, such as mate-terminal on Linux.
Alternatively, it can be an application such as an SSH secure shell client or PuTTY on a
Windows PC that is logged into Linux over the network.

The shell remains active until the occurrence of the <EOT> character, which signals a request
for termination of execution and informs the operating system kernel of that fact. The
kernel creates a new shell instance whenever a user signs into the system or starts a console
window. The core of any operating system is the kernel. It is responsible for controlling,
managing, and executing processes, as well as ensuring optimum utilisation of system
resources.

Each user has a default shell, which will be launched upon opening of a command prompt. The
default shell is specified in the configuration file /etc/passwd in the last field of the line
corresponding to the user. It is possible to change the shell during a session by simply executing
the corresponding executable file, for example: /bin/bash.

It is important to remember that the shell is a program that accepts lines of ASCII text one at a
time and interprets them, whether entered one by one from the command prompt or one by one
from a script file. The shell incorporates a programming language, making it considerably more
than just a user interface. The shell's programming language is sufficiently complex to
accomplish a wide range of tasks that would typically be handled by a language like C, but it
is interpretive rather than compiled. It is generally simpler to complete a systems function in
the shell using the shell programming capability.

The shell interpreter works based on the following scenario:

1. displays a prompt,
2. waits for text to be entered from the keyboard,
3. analyses the command line and finds a command,
4. submits the command to the kernel for execution,
5. accepts an answer from the kernel and again waits for user input.
Features of Shell

• File name completion: Ability to automatically finish typing file names in command
lines.

• Alias command: Lets you rename commands, automatically include command
options, or abbreviate long command lines.

• Restricted shells: A security feature providing a controlled environment with limited


capabilities.

• Job control: Tools for tracking and accessing processes that run in the background.

• Line editing: Ability to modify the current or previous command lines with a text
editor.

• Command history: Allows commands to be stored in a buffer, then modified and


reused.

• File name substitution: also called globbing, e.g., cat abc.??, rm *.c, etc.

• Key bindings that allow you to set up customized editing key sequences.
• Integrated programming features: the functionality of several external UNIX


commands, including test, expr, getopt, and echo, has been integrated into the
shell itself, enabling common programming tasks to be done more cleanly and
efficiently.
• Control structures, especially the select construct, which enables easy menu
generation.
• One dimensional arrays that allow easy referencing and manipulation of lists of
data.
• Dynamic loading of built-ins, plus the ability to write your own and load them
into the running shell.
• Command Substitution allows the command's output to replace the command
itself. Bash expands the command by running it in a subshell environment and
replacing the command substitution with the command's standard output, with any
trailing newlines removed. Although embedded newlines are not lost, they may be
removed during word splitting.

A comparison of shells based on some mostly used features is given in the below table.

Features                Bourne (sh)  C (csh)  TC (tcsh)  Korn (ksh)  Bash (bash)
Programming language    Yes          Yes      Yes        Yes         Yes
Shell variables         Yes          Yes      Yes        Yes         Yes
Command alias           No           Yes      Yes        Yes         Yes
Command history         No           Yes      Yes        Yes         Yes
Filename completion     No           Yes*     Yes        Yes*        Yes
Command line editing    No           No       Yes        Yes*        Yes
Job Control             No           Yes      Yes        Yes         Yes

* Not the default setting for this shell

Advantages of Shell Programming

• To automate the frequently performed operations


• To run sequence of commands as a single command
• Easy to use
• Portable
• Interactive Debugging

Disadvantages of Shell Programming

• Speed: Slow execution speed compared to any programming languages

• Quality: The shell script is efficient only for small programs and is not suitable for large
and complex programs.
• Error Prone: Prone to costly errors, a single mistake can change the command which
might be harmful

• Efficiency: A new process is launched for almost every shell command executed. Also,
there are design flaws within the language syntax.

Here Document

A here document allows you to place into a shell program lines that are redirected to be the
input to a command in that program. By using a here document, you can provide input to a
command in a shell program without using a separate file. The notation consists of the
redirection symbol "<<" and a delimiter that specifies the beginning and end of the lines of
input. The delimiter can be one character or a string of characters; "!" is often used. Format
of here document:

command <<delimiter

. . . input lines . . .

delimiter
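Following that format, a minimal here document feeding two lines to cat, with EOF as the delimiter:

```shell
# Everything between <<EOF and the closing EOF becomes cat's standard
# input, so no separate file is needed.
cat <<EOF
line one
line two
EOF
```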

Basic Shell Script Functions

Operation Command
To Print echo “Hello World” (Prints Hello World in the screen)
To read or take user input read n (Stores the content entered by user in variable n)
To make a single line Comment # This is a comment. // single line comment
To make a multi-line Comment << 'MULTILINE-COMMENT'
Everything inside the Here Document body is a multiline
comment
MULTILINE-COMMENT
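The operations in the table above can be combined into one small script sketch (the prompt text is illustrative):

```shell
#!/bin/sh
# Print a prompt, read a value into a variable, and echo it back.
echo "What is your name?"
read name            # user input is stored in $name
# This is a single-line comment.
echo "Hello, $name"
```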

Shell Scripts

Shell commands can be saved in a file for later use (Linux calls these files shell scripts;
Windows calls them batch files). This flexibility enables users to conduct complex processes
with relative ease, frequently by issuing brief commands, or to develop elaborate programmes
that do highly complex functions with remarkably little effort.
Shell scripts are usually used to avoid tedious tasks. Instead of typing in the commands one
after the other n times, you can construct a script to automate a series of instructions to be run
one after the other.
The shell reads and executes the file's lines one at a time. These programmes are interpreted
rather than compiled. Before being executed, compiled programmes are translated into machine
code. As a result, shell programmes are typically slower than binary executables, although they
are easy to build and are mostly used to automate simple activities. Shell applications can also
be written interactively at the command line, which is the quickest option for relatively simple
operations. However, for more complicated programming, writing scripts in an editor is
preferable.
She-bang (#!): You can write shell programs by creating scripts containing a series of shell
commands. The first line of the script should start with #!(she-bang) which indicates to the
kernel that the script is directly executable. It is a header that points to the interpreter to be
used. You immediately follow this with the name of the shell, or program (spaces are allowed),
to execute, using the full path name. So, to set up a Bourne shell script the first line would be:
#! /bin/sh

The first line identifies the file as a shell script and tells the shell how to execute the script. It
instructs the shell to pass the script to /bin/sh for execution, where /bin/sh is the shell program
itself. By forcing the shell script to run using /bin/sh, you ensure that the script will run under
a Bourne-syntax shell.
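A minimal end-to-end sketch: write a script whose first line is the she-bang, make it executable, and run it (the file name is illustrative):

```shell
# Write a two-line script; the here document keeps this self-contained.
cat > hello.sh <<'EOF'
#!/bin/sh
echo "Hello from a shell script"
EOF

# Make it executable, then run it; the kernel hands the file to /bin/sh.
chmod +x hello.sh
./hello.sh
```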
When you enter a command, the shell does several things.
• First, it checks the command to see if it is internal to the shell. (That is, a command
which the shell knows how to execute itself.) The shell also checks to see if the
command is an alias, or substitute name, for another command.
• If neither of these conditions apply, the shell looks for a program, on disk, having the
specified name.
• If successful, the shell runs the program, sending the arguments specified on the
command line.

Shell initialization scripts


A login shell is a shell invoked when you log in. If you "shell out" of another program like vi,
you start another instance of the shell. Whenever you run a shell script, you automatically start
another instance of the shell to execute the script. The initialization files used by bash are:
• /etc/profile (set up by the system administrator and executed by all bash users at login
time),
• $HOME/.bash_profile (executed by a login bash session), and
• $HOME/.bashrc (executed by all non-login instances of bash). If .bash_profile is not
present, .profile is used instead.
Exit Status

By default in Linux, when a particular command is executed, it returns a value indicating
whether the command was successful or not. The value can be either 0 or any non-zero
number. This value is known as the Exit Status of that command.
• If return value is zero (0), command is successful,

• if return value is nonzero (>0), command is not successful or some sort of error
executing command/shell script.

To determine the exit status, we use the $? variable of the shell. E.g.,

$ ls
$ echo $?
will print 0 to indicate command is successful.
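A short sketch of testing $? for both a successful and a failing command (the nonexistent path is illustrative):

```shell
# $? holds the exit status of the most recently executed command.
ls / > /dev/null
echo $?                 # 0: the command succeeded

# A failing command yields a nonzero status, which scripts can branch on.
if ls /no/such/dir 2>/dev/null; then
    echo "unexpected success"
else
    echo "previous command failed (nonzero exit status)"
fi
```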
Setting up the Shell

Administrator defines a shell for a user. Most shell variables needed to accomplish tasks are
located in the /etc/passwd file. The file also keeps the user's full name along with the type of shell
that the user prefers. The user can change his/her shell using the command: chsh.

Data types and Variables in Linux

Datatypes
In the case of Bash, however, you do not need to describe the data type of any variable when
declaring it. Bash variables are untyped, which means that you may just type the variable name
followed by its value and it will automatically consider the data type. As a result, if you assign
any numeric value to the variable, it will perform as an integer, and if you assign a character
value to the variable, it will function as a String.
age=20
department=Sales
You have to use the echo command to print the data by prefixing a $ symbol to the variable
like:
echo $age
echo $department

Variables

The shell supports two types of variables: local and environment variables. The variables
contain information used to customise the shell as well as information necessary by other
programmes to function effectively. A variable can be thought of as a container holding items
of a similar nature. A variable provides a user-friendly name for the memory address location
where the actual data is stored. The developer uses this name to manipulate the data stored in
the memory.

Local variables are only available to the shell in which they are produced and are not shared
with any processes that are launched from that shell. The built-in set command displays local
variables for the C and TC shells, as well as both local and environment variables for the
Bourne, Bash, and Korn shells. All shell variables store strings. Syntax and Example of local
variable:
set variable_name = value
set name = "Sales"
In sh and bash, the Variables can be declared as:
name=value
name=Sales
The variable name should start with a letter but can contain numbers and underscores. The
value of a variable can be returned/used by adding a prefix $. When you are declaring variables,
make sure there are no spaces before and after =. (a=12 is correct; a = 12 throws an error.)
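The assignment rules above can be seen in a short sketch (the variable names here are just examples):

```shell
name=Sales            # correct: no spaces around '='
dept="$name Dept"     # double quotes keep the embedded space in the value
echo "$dept"          # prints: Sales Dept
```

Writing `a = 12` instead of `a=12` makes the shell treat `a` as a command name, which is why the spaces cause an error.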
Environment Variables: They are pre-defined in the Linux system and are generally defined
in upper case. Environment variables are transmitted from parent to child process, and so on.
The login shell inherits some environment variables from the /bin/login programme. Others are
generated by user initialization files, scripts, or the command line. When an environment
variable is set in the child shell, it is not returned to the parent shell. The environment variables

will be displayed using the env shell command. Environment variables are also called global
variables. They go out of scope when the script ends or when the shell where they are defined exits.
The environment is the set of variables that are accessible by all commands that you execute.
We can view the environment variables through set command. The set command will display
all the global functions written by the user. We can reassign values for the variables either
temporarily or permanently:
• Temporary- Type varname=value at the command prompt

• Permanent- Type varname=value in .bashrc at the root directory. Use the export
command to export a variable to the environment. We only need to export a variable
once.
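The effect of export can be sketched in a couple of lines: an exported variable is inherited by child processes, while an unexported one is not (GREETING and LOCAL_ONLY are illustrative names):

```shell
LOCAL_ONLY=abc                  # not exported: invisible to children
export GREETING=hello           # exported: copied into each child's environment
sh -c 'echo "child sees: $GREETING$LOCAL_ONLY"'   # prints: child sees: hello
```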

Environmental Variable   Description                                                Example

BASH                     represents the shell name                                  BASH=/usr/bin/bash
BASH_VERSION             specifies the shell version which Bash holds               BASH_VERSION=4.2.46(2)
COLUMNS                  specifies the number of columns for the screen             COLUMNS=80
HOME                     specifies the home directory for the user                  HOME=/home/ellie
LOGNAME                  specifies the login user name                              LOGNAME=ellie
OSTYPE                   tells the type of OS                                       OSTYPE=linux-gnu
USERNAME                 specifies the name of the currently logged-in user         USERNAME=ellie
PWD                      represents the current working directory                   PWD=/home/folder1
PATH                     sets the path to search for commands                       PATH=/usr/bin:/sbin:/bin:/usr/sbin
PAGER                    specifies the command used to display manual pages         PAGER=less
                         one screen at a time
PS1                      defines the main shell prompt                              PS1=[\u@\h \W]\$

The list of available environment variables can be printed using set, env and printenv
commands. A glimpse of the environment variables for a user using the Bash shell on Solaris
5.9 is shown in the table below.

$ env
PWD=/home/ellie
TZ=US/Pacific
PAGER=less
HOSTNAME=artemis
LD_LIBRARY_PATH=/lib:/usr/lib:/usr/local/lib:/usr/dt/lib:/usr/openwin/lib
MANPATH=/usr/local/man:/usr/man:/usr/openwin/man:/opt/httpd/man
USER=ellie
TCL_LIBRARY=/usr/local/lib/tcl8.0
EDITOR=vi
LOGNAME=ellie
SHLVL=1
SHELL=/bin/bash
HOSTTYPE=sparc
OSTYPE=solaris2.9
HOME=/home/ellie
TERM=xterm
TK_LIBRARY=/usr/local/lib/tk8.0
PATH=/bin:/usr/bin:/usr/local/bin:/usr/local/etc:/usr/ccs/bin:/usr/etc:/usr/ucb:/usr/
local/X11:/usr/openwin/bin:/etc:.
SSH_TTY=/dev/pts/10
_=/bin/env

Naming Conventions to create a Variable


• Variable name must begin with Alphanumeric character or underscore character (_),
followed by one or more Alphanumeric character. For e.g. Valid shell variable are as
follows:
HOME
SYSTEM_VERSION
vech
no
• Don't put spaces on either side of the equal sign when assigning value to variable. For
e.g.. In following variable declaration there will be no error
$ no=10
• Variables are case-sensitive, just like filename in Linux. For e.g.

$ no=10
$ No=11
$ NO=20
$ nO=2
• You can define NULL variable as follows (NULL variable is variable which has no
value at the time of definition) For e.g.
$ vech=
$ vech=""

• Do not use special characters such as ? or * in variable names.

Single Quotes and Backslash

Linux shell allows the use of characters that have special meanings in the Linux shell. The
shell substitutes variable names and wildcard characters before executing the command -
sometimes this is undesirable. The mechanisms to use these special characters can be done
through the following:

Escape Character or Backslash


An escape sequence is introduced with a backslash \. When a character follows a \, it takes on a
special meaning. Common escape sequences include:

Escape character Meaning


\n This adds a new line.
\e This is an equivalent to ESC.
\b This means backspace.
\t This adds a horizontal tab.
\v This adds a vertical tab
\f This represents form feed.
\r This is equivalent to carriage return.
\\ This means a literal backslash.
\' This represents a single quote.
\" This represents a double quote.

Examples:
Command           Output
var1=im1.nii.gz
echo $var1        im1.nii.gz

echo \$var1       $var1

echo '$var1'      $var1

Single Quotes
Single quotes (') allow the user to keep the literal value of the input as a string. Any
special characters within the single quotes are handled as normal characters and lose their
special role. However, a single quote cannot appear within single quotes. If the
input string contains many special characters, use single quotes.

Double Quotes
To group several strings together as one argument it is necessary to use double quotes ("). Double
quotes preserve the literal value of all characters except $, ` (backtick), \ and !. A backslash
inside double quotes has special meaning only when it is followed by one of these special
characters.
Examples:
Command                      Output
v=Hello World; echo $v       error: without quotes, 'World' is treated as a command

v="Hello World"; echo $v     Hello World

echo "*"                     *

echo "\$Student"             $Student

Backquotes
Back quotes execute commands. The text enclosed by backquotes is regarded as a separate
command. They instruct the shell to execute the string between the back quotes as a system
command. The result can then be used in any way you like. This is very useful for setting
variables.
Examples:
Command         Output
echo `date`     Fri May 20 18:30:12 GMT 2022
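A deterministic sketch of command substitution with backquotes (the variable names are illustrative):

```shell
msg=`echo hello world`      # the command runs; its output becomes the value
echo "captured: $msg"       # prints: captured: hello world

files=`ls /tmp | wc -l`     # the output of a whole pipeline can be captured too
```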

Command Line Arguments

An argument, often known as a command-line argument, is input sent to the command line to be
processed with the help of the given command. Arguments are also known as positional
parameters in Linux.

Arguments are entered in the terminal or console after typing the command. Several arguments
can be written at the same time; they will be processed in the order in which they are written.

Command-line arguments help make shell scripts interactive for the users. They help a script
identify the data it needs to operate on. Hence, command-line arguments are an essential part
of any practical shell scripting uses. Arguments can be passed to a script from the command
line. Two methods can be used to receive their values from within the script: positional
parameters and the getopts function.

Positional Parameters

The bash shell has special variables reserved to point to the arguments which we pass through
a shell script. Bash saves these variables numerically ($1, $2, $3, … $n)

Here, the first command-line argument in our shell script is $1, the second $2 and the third is
$3. This goes on till the 9th argument. The variable $0 stores the name of the script or the
command itself.

Example:

$ sh hello.sh how do you do

Here $0 would be assigned hello.sh

$1 would be assigned how

$2 would be assigned do

And so on …
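The assignments above can be verified with a tiny script (the file name /tmp/greet.sh is just an illustration):

```shell
# Write a two-line script that echoes its own name and first two arguments.
cat > /tmp/greet.sh <<'EOF'
#!/bin/sh
echo "Script name (\$0): $0"
echo "First (\$1): $1, Second (\$2): $2"
EOF

sh /tmp/greet.sh hello how
# prints:
#   Script name ($0): /tmp/greet.sh
#   First ($1): hello, Second ($2): how
```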

Example: Using set variable to assign values to positional parameters

set how do you do


echo $1 $2
how do

Here, “how” was assigned to $1 and “do” was assigned to $2 and so on.

There are also some special parameters whose function is closely tied to the command-line
arguments.

Positional   Description
Parameters

$0           The filename of the current script.

$n           These variables correspond to the arguments with which a script was
             invoked. Here n is a positive decimal number corresponding to the
             position of an argument (the first argument is $1, the second
             argument is $2, and so on).

$#           The number of arguments supplied to a script.

$*           All the arguments as a single word. When double quoted, "$*"
             expands to "$1 $2 …" (one word).

$@           All the arguments, individually quoted. When double quoted, "$@"
             expands to "$1" "$2" … (separate words).

$$           The process number of the current shell. For shell scripts, this is the
             process ID under which they are executing.

$!           The process number of the last background command.

$?           The exit status of the last command executed.
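A few of these special parameters can be observed directly at a prompt (set -- assigns the positional parameters by hand, as the document shows later):

```shell
set -- a b c          # sets $1, $2 and $3
echo "count: $#"      # prints: count: 3
true
echo "status: $?"     # prints: status: 0 (exit status of true)
echo "pid: $$"        # process ID of the current shell (value varies)
```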

Using ‘$@’ for reading command-line arguments

The command-line arguments can be read without using argument variables or getopts
options. Using "$@" inside parentheses is another way to read all command-line argument
values into an array.

Example: Reading command line argument values without variable

Create a bash file with the following script to read the argument values without any named
argument variables and calculate the sum of three command-line argument values. "$@" has been
used inside parentheses here to read all argument values into an array. Next, the sum of the first
three array values will be printed.

#!/bin/bash
# Read all arguments values
argvals=("$@")
# Check the total number of arguments
if [ $# -gt 2 ]
then
# Calculate the sum of three command line arguments
sum=$((${argvals[0]}+${argvals[1]}+${argvals[2]}))
echo "The sum of 3 command line arguments is $sum"
fi

The following output will appear after executing the above script for the argument values 12,
20, and 90. The sum of these numbers is 122.

Example: Sending three numeric values in the command line arguments

Create a bash file with the following script. The script will receive three argument values and
store them in the $1, $2, and $3 variables. It will count the total number of arguments, print the
argument values individually and then again using a loop. The sum of all argument values will
be printed last.

#!/bin/bash
# Counting total number of arguments
echo "Total number of arguments: $#"

# Reading argument values individually


echo "First argument value : $1"
echo "Second argument value : $2"
echo "Third argument value : $3"

# Reading argument values using loop


for argval in "$@"
do
echo -n "$argval "
done

# Adding argument values


sum=$(($1+$2+$3))

# print the result


echo -e "\nResult of sum = $sum"

The following output will appear after executing the script file with three numeric argument
values, 50, 35, and 15.

Source: linuxhint.com

Example: $*

for word in $*

do

echo $word

done

Save as words.sh and execute the script


$ bash words.sh I am having fun today
Output:
I
am
having
fun
today

Shift

This command is used to shift the positions of the positional parameters: $2 is shifted to $1, all
the way to the tenth parameter being shifted to $9. Note that if there are more than 9
parameters, this mechanism can be used to read beyond the 9th.

Example:

set hello good morning how do you do welcome to Linux tutorial.

Here, ‘hello’ is assigned to $1, ‘good’ to $2 and so on to ‘to’ being assigned to $9. Now the
shift command can be used to shift the parameters ‘N’ places.

Example:
shift 2
echo $1
Now $1 will be "morning", and so on, with $8 being "Linux" and $9 being "tutorial".
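The same shifting behaviour in a self-contained sketch:

```shell
set -- one two three four
shift 2               # discard the first two positional parameters
echo "$1 $2"          # prints: three four
echo "$#"             # prints: 2  (shift also reduces the argument count)
```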

Getopts function
If you want to store data in the database or any file or create a report in a particular format
based on command line arguments values, then the getopts function is the best option to do the
task. It is a built-in Linux function. So, you can easily use this function in your script to read
formatted data from the command line.
Example: Reading arguments by getopts function

Create a bash file with the following script to understand the use of the getopts function.
The getopts function is used with a while loop to read the command-line options and their
argument values. Here, four options are used: i, n, m and e. The case statement is
used to match the particular option and store the argument value in a variable. Finally, the
values of the variables are printed.
#!/bin/bash
while getopts ":i:n:m:e:" arg; do
case $arg in
i) ID=$OPTARG;;
n) Name=$OPTARG;;
m) Manufacturing_date=$OPTARG;;
e) Expire_date=$OPTARG;;
esac
done
echo -e "\n$ID $Name $Manufacturing_date $Expire_date\n"

Run the file with the following options and argument values. Here, p100 is the value of the -i
option, "Hot Cake" is the value of the -n option, "01-01-2021" is the value of the -m option and
"06-01-2021" is the value of the -e option.

When you need to send simple values to a script, it is better to use argument variables.
But if you want to send data in a formatted way, it is better to use the getopts function to retrieve
the argument values. The uses of both argument variables and getopts options are shown in the
examples above.

Scope of Variables

Programmers accustomed to other languages may be surprised by the scope rules for shell functions.
Basically, there is no scoping, other than the parameters ($1, $2, $@, etc.).
Taking the following simple code segment:

#!/bin/sh

myfunc()
{
echo "I was called as : $@"
x=2
}

### Main script starts here

echo "Script was called with $@"


x=1
echo "x is $x"
myfunc 1 2 3
echo "x is $x"

The script, when called as scope.sh a b c, gives the following output:


Script was called with a b c
x is 1
I was called as : 1 2 3
x is 2
The $@ parameters are changed within the function to reflect how the function was called.
The variable x, however, is effectively a global variable - myfunc changed it, and that change
is still effective when control returns to the main script.
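For contrast, bash (and ksh) do provide a `local` builtin that confines a variable to a function. This is a bash-specific sketch, not portable POSIX sh:

```shell
#!/bin/bash

myfunc()
{
    local x=2                   # 'local' keeps this x inside the function
    echo "inside: x is $x"
}

x=1
myfunc
echo "outside: x is still $x"   # prints: outside: x is still 1
```

With `local`, the assignment inside myfunc no longer leaks back to the caller, unlike the global behaviour shown above.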

Shell keywords
Keywords are words whose meaning has already been defined to the shell. Keywords cannot be
used as variable names because they are reserved words carrying reserved meanings.

echo   read    set       unset   readonly   shift   export   if     exec
fi     else    while     do      done       for     until    case   ulimit
esac   break   continue  exit    return     trap    wait     eval   umask

Control Structures
There are four types of control instructions in shell:
(a) Sequence control instruction- The sequence control instruction ensures that the
instructions are executed in the same order in which they appear in the program. The first
command in the sequence executes first and when it is complete, the second command
executes, and so on. The presence of functions in the code does not negate sequential execution;
we can still follow the sequential flow of the instructions.
(b) Case control instruction
(c) Selection or Decision control instruction - Decision and case control instructions allow
the computer to decide which instruction is to be executed next.

(d) Repetition or Loop control instruction - The loop control instruction helps the computer
execute a group of statements repeatedly.
Bourne shell offers four decision making instructions. They are:
1. The if-then-fi statement
2. The if-then-else-fi statement
3. The if-then-elif-else-fi statement (Nested if else)
4. The case-esac statement
If-then, if-then-else, if-then-elif-else-then
The simplest control structure is the if statement that checks the return status of a command (or
a list of commands) and only runs another command (or list of commands) if the return status
of the original command is successful (zero).

if <condition>
then
<if-part>
elif <condition>
then
<elif-part>
else
<else-part>
fi

• If specified condition is not true in if part then else part will be executed.

• To use multiple conditions in one if-else block, then the elif keyword is used in shell. If
the condition is true then it executes elif-part, and this process continues. If none of the
conditions is true then it processes else-part.

• The elif part can be omitted if a strictly if-then-else structure is needed. The else part can
also be omitted to obtain a strictly if-then construction.

Example: if-then-fi

echo Enter source and target file

read source target

if cp $source $target; then

echo file copied successfully

fi

• The structure starts with if and ends with fi. The fi is if spelled backwards.

• There is a semicolon separating the end of the cp command from the keyword then.

• The fi keyword is lined up directly under the if keyword.

• Only the exit status of the last command in a pipeline or list is checked by the shell.

Example: Use of elif

echo "Enter marks in five subjects: \c"
read m1 m2 m3 m4 m5
per=`expr \( $m1 + $m2 + $m3 + $m4 + $m5 \) / 5`
if [ $per -ge 60 ]
then
echo First division
elif [ $per -ge 45 -a $per -lt 60 ]
then
echo Second division
elif [ $per -ge 33 -a $per -lt 45 ]
then
echo Third division
else
echo Fail
fi

case……esac: You can use multiple if...elif statements to perform a multiway branch.
However, this is not always the best solution, especially when all of the branches depend on
the value of a single variable. Shell supports case...esac statement which handles exactly this
situation, and it does so more efficiently than repeated if...elif statements.
case <test-value> in
<case1-val>)
<commands>
;;
<case2-val>)
<commands>
;;
...
*)    # default case that catches anything else
<commands>
;;
esac

Example

echo “Enter a number from 1 to 3”

read num

case $num in

1) echo you entered 1

;;

2) echo you entered 2

;;

3) echo you entered 3

;;

*) echo I said 1 to 3

;;

esac

The identifier for each case must be on its own line, and each case terminates when a double
semicolon ";;" is encountered.
Loops
You will use different loops based on the situation.
• The while loop is followed by an expression enclosed in parentheses, a block of
statements, and terminated with the end keyword. As long as the expression is true, the
looping continues.
• The until loop executes until a given condition becomes true.
• The foreach loop is followed by a variable name and a list of words enclosed in
parentheses, a block of statements, and terminates with the end keyword. The foreach
loop iterates through a list of words, processing a word and then shifting it off, then
moving to the next word. When all words have been shifted from the list, it ends.
For loops
Various range of for loops can be declared in Bash.
Numerical Ranges
This type of for loop is characterized by counting. The range is specified by a beginning
number (e.g., 1) and an ending number (e.g., 5). The for loop executes a sequence of commands
for each member in a list of items.
Syntax 1

for VARIABLE in 1 2 3 4 5 .. N
do
command1
command2
commandN
done

Syntax 2
for VARIABLE in file1 file2 file3
do
command1
command2
commandN
done

Syntax 3
for OUTPUT in $(Linux-Or-Unix-Command-Here)
do
command1 on $OUTPUT
command2 on $OUTPUT
commandN
done

Variations of for loop

#!/bin/bash
for i in 1 2 3 4 5
do
echo "Welcome $i times". # will print Welcome 5 times
done

#!/bin/bash
for i in {1..5}
do
echo "Welcome $i times"
done

#!/bin/bash
echo "Bash version ${BASH_VERSION}..."
for i in {0..10..2}   # {start..end..increment}: the loop starts at 0, goes till 10, incrementing by 2 on each iteration
do

echo "Welcome $i times"


done

Use of the seq command to count and increment numbers in the for loop
#!/bin/bash
for i in $(seq 1 2 20)
do
echo "Welcome $i times"
done

Three-expression for loop


#!/bin/bash
# set counter 'c' to 1 and condition
# c is less than or equal to 5
for (( c=1; c<=5; c++ ))
do
echo "Welcome $c times"
done

Example of infinite loop


#!/bin/bash
for (( ; ; ))
do
echo "infinite loops [ hit CTRL+C to stop]"
done

foreach loop: The foreach loop is a powerful control structure. The parameter name must be a
variable name; if this variable does not exist, it is created. The parameter list is a list of strings
separated by spaces. The shell begins by assigning the first string in list to the variable name.
It then runs the commands once. Then the shell assigns the next string in list to name, and
repeats the commands. The shell runs the commands once for each string in list.

Syntax:
foreach variable ( word list )
block of statements
end

Example:
foreach color (red green blue)
echo $color
end

Output:
red
green
blue

while loop: The while loop evaluates an expression, and as long as the expression is true
(nonzero), the commands below the while command will be executed until the end statement

is reached. Control will then return to the while expression, the expression will be evaluated,
and if still true, the commands will be executed again, and so on. When the while expression
is false, the loop ends and control starts after the end statement.

while <condition>
do
<loop-body>
done
Example
count=1
while [ $count -le 3 ]
do
echo "Enter the values of P, n, and r: \c"
read P n r
si=`echo $P \* $n \* $r / 100 | bc`
echo Simple interest = Rs $si
count=`expr $count + 1`
done

Until Loop: Until loop is used to execute a block of code until the expression is evaluated to
be false. This is exactly the opposite of a while loop. While loop runs the code block while
the expression is true and until loop does the opposite.

Syntax of until loop:

until <condition>

do

statement(s)

done

Example

i=1

until [ $i -gt 10 ]

do

echo $i

i=`expr $i + 1`

done

There are minor differences between the working of while and until loops.
• The while loop executes till the exit status of the control command is true and terminates
when the exit status becomes false.
• Unlike this the until loop executes till the exit status of the control command is false and
terminates when this status becomes true
Break and Continue
The break and continue keywords can be used with the same meaning as in other programming
languages: break is used to stop the execution of a loop, and continue is used to skip the rest of
the loop body and move on to the next iteration.

count=0
until false
do
((count++))
if [[ $count -eq 5 ]]
then
continue
elif [[ $count -ge 10 ]]
then
break
fi
echo "Counter = $count"
done
Here when the count is equal to five continue statement will jump over to the next iteration
skipping the rest of the loop body. Similarly, the loop breaks when the count is equal to or
greater than 10.
Nesting Loops: All the loops support nesting, which means you can put one loop inside
another loop of the same or a different kind. Loops can be nested as deeply as your
requirement demands. Structure of a nested loop:

while command1 ; # this is loop1, the outer loop


do
Statement(s) to be executed if command1 is true

while command2 ; # this is loop2, the inner loop


do
Statement(s) to be executed if command2 is true
done

Statement(s) to be executed if command1 is true


done

Example of nested loop using for:


#!/bin/bash
# A shell script to print each number five times.
for (( i = 1; i <= 5; i++ )) ### Outer for loop ###
do

for (( j = 1 ; j <= 5; j++ )) ### Inner for loop ###


do
echo -n "$i "
done

echo "" #### print the new line ###


done

Save and close the file as nestedfor.sh. Run it as follows:

chmod +x nestedfor.sh
./nestedfor.sh

Operators

Each shell supports various basic operations. The most popular shell is the Bourne Again
Shell, or bash, which comes as a default with most Linux distributions.

There are five basic operations that one must know to use the bash shell:

1. Arithmetic Operators
2. Relational Operators
3. Boolean Operators
4. Bitwise Operators
5. File Test Operators

Arithmetic Operators

Operator        Description                                                                 Example

Addition        Adds the operands on either side                                            expr $a + $b
Subtraction     Subtracts the right-hand operand from the left-hand operand                 expr $a - $b
Multiplication  Multiplies two operands                                                     expr $a \* $b
Division        Divides the left-hand operand by the right-hand operand and                 expr $a / $b
                returns the quotient
Modulus         Divides the left-hand operand by the right-hand operand and                 expr $a % $b
                returns the remainder
Increment       Unary operator to increment an operand by one                               echo $((++a))
Decrement       Unary operator to decrement an operand by one                               echo $((--a))

Note: with expr, the * must be escaped (\*) to stop the shell from expanding it as a wildcard.

Example:
a=4
b=5
echo "a + b = $((a + b))"
echo "a - b = $((a - b))"
echo "a * b = $((a * b))"
echo "a / b = $((a / b))"
echo "a % b = $((a % b))"
echo "++a = $((++a))"
echo "--b = $((--b))"

Output:
a + b = 9
a - b = -1
a * b = 20
a / b = 0
a % b = 4
++a = 5
--b = 4

Relational operators

Bash supports relational operators that work on variables with numeric values or strings that
are numeric. Relational operators do not work on strings if their values are not numeric.

Relational operators either give true or false depending on the relation. Below are two tables
representing the relational operators used in shell programming. The operators are dissimilar
in their symbols but the functionality is same for each of them. E.g., you can use -gt for greater
than similar to using > symbol.

Operator  Description                                                                     Example

==        Compares two operands and returns true if they are equal;                      (( $a == $b ))
          otherwise, it returns false

!=        Compares two operands and returns true if they are not equal;                  (( $a != $b ))
          otherwise, it returns false

<         Less than operator; returns true if the left-hand operand is less              (( $a < $b ))
          than the right-hand operand

<=        Less than or equal to operator; returns true if the left-hand                  (( $a <= $b ))
          operand is less than or equal to the right-hand operand

>         Greater than operator; returns true if the left-hand operand is                (( $a > $b ))
          greater than the right-hand operand

>=        Greater than or equal to operator; returns true if the left-hand               (( $a >= $b ))
          operand is greater than or equal to the right-hand operand

These symbolic forms are evaluated inside (( )) arithmetic expressions in bash.

Operator  Description                                                                     Example

-eq       Checks if the values of two operands are equal; if yes, then the               [ $a -eq $b ] is not true.
          condition becomes true.

-ne       Checks if the values of two operands are not equal; if they are not            [ $a -ne $b ] is true.
          equal, then the condition becomes true.

-gt       Checks if the value of the left operand is greater than the value of           [ $a -gt $b ] is not true.
          the right operand; if yes, then the condition becomes true.

-lt       Checks if the value of the left operand is less than the value of the          [ $a -lt $b ] is true.
          right operand; if yes, then the condition becomes true.

-ge       Checks if the value of the left operand is greater than or equal to            [ $a -ge $b ] is not true.
          the value of the right operand; if yes, then the condition becomes true.

-le       Checks if the value of the left operand is less than or equal to the           [ $a -le $b ] is true.
          value of the right operand; if yes, then the condition becomes true.

It is very important to understand that all conditional expressions should be placed inside
square brackets with spaces around them. For example, [ $a -le $b ] is correct, whereas
[$a -le $b] is incorrect.
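The spacing rule can be seen directly: [ is itself a command, so each operand must be a separate argument.

```shell
a=5
if [ $a -le 10 ]      # correct: spaces around the brackets and operands
then
    echo "a is small"
fi
# prints: a is small

# [$a -le 10] would fail: the shell looks for a command literally
# named '[5' instead of '[' with separate arguments.
```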
Example 1
a=100
b=10
if(( $a == $b ))
then
echo "a and b are equal"
fi
if(( $a != $b ))
then
echo "a and b are not equal"
fi
if(( $a > $b ))
then
echo "a is greater than b"
else
echo "a is not greater than b"
fi
if(( $a >= $b ))
then
echo "a is greater or equal to than b"
else
echo "a is not greater or equal to than b"
fi

Output:
a and b are not equal
a is greater than b
a is greater or equal to than b

Example 2

#!/bin/sh

a=10
b=20

if [ $a -eq $b ]
then
echo "$a -eq $b : a is equal to b"
else
echo "$a -eq $b: a is not equal to b"
fi

if [ $a -ne $b ]
then
echo "$a -ne $b: a is not equal to b"
else
echo "$a -ne $b : a is equal to b"
fi

if [ $a -gt $b ]
then
echo "$a -gt $b: a is greater than b"
else
echo "$a -gt $b: a is not greater than b"
fi

if [ $a -lt $b ]
then
echo "$a -lt $b: a is less than b"
else
echo "$a -lt $b: a is not less than b"
fi

if [ $a -ge $b ]
then
echo "$a -ge $b: a is greater or equal to b"
else
echo "$a -ge $b: a is not greater or equal to b"
fi

if [ $a -le $b ]
then
echo "$a -le $b: a is less or equal to b"
else
echo "$a -le $b: a is not less or equal to b"
fi

Output:

10 -eq 20: a is not equal to b


10 -ne 20: a is not equal to b
10 -gt 20: a is not greater than b
10 -lt 20: a is less than b
10 -ge 20: a is not greater or equal to b
10 -le 20: a is less or equal to b

Boolean operators

The following boolean operators are supported by bash:


Operator    Description                      Example
!           The logical negation operator    [ ! false ]
-o ( || )   The logical OR operator          [ $a -lt 20 -o $b -gt 30 ]
-a ( && )   The logical AND operator         [ $a -lt 20 -a $b -gt 30 ]

Example
echo "LOGICAL_AND = $((1 && 0))"
echo "LOGICAL_OR = $((0 || 1))"
echo "LOGICAL_Neg = $((!0))"

Output:
LOGICAL_AND = 0
LOGICAL_OR = 1
LOGICAL_Neg = 1

Bitwise operators
Bitwise operators are used to perform bitwise operations on bit fields.
Operator   Description   Example

& Performs the binary AND operation bit by bit on the echo "$((0x4 & 0x1))"
arguments

| Performs the binary OR operation bit by bit on the echo "$((0x4 | 0x1))"
arguments

^ Performs the binary XOR operation bit by bit on the echo "$((0x4 ^ 0x1))"
arguments

~ Performs the binary NOT operation bit by bit on the echo "$((~0x4))"
arguments

<< Shifts the bit field to the left by the number of times echo "$((0x4 << 1))"
specified by the right hand operand

>> Shifts the bit field to the right by the number of times echo "$((0x4 >> 1))"
specified by the right hand operand

Example:
echo "0x4 & 0x0 = $((0x4 & 0x0))"
echo "0x4 | 0x1 = $((0x4 | 0x1))"
echo "0x1 ^ 0x1 = $((0x1 ^ 0x1))"
echo "0x1 << 4 = $((0x1 << 4))"
echo "0x4 >> 2 = $((0x4 >> 2))"

Output:
0x4 & 0x0 = 0
0x4 | 0x1 = 5
0x1 ^ 0x1 = 0
0x1 << 4 = 16
0x4 >> 2 = 1

File Test Operators


Bash provides operators to test for various properties of files; these operators are known
as file test operators.

Operator   Description   Example

-b   Checks whether a file is a block special file or not          [ -b $FileName ]
-c   Checks whether a file is a character special file or not      [ -c $FileName ]
-d   Checks if a given directory exists or not                     [ -d $FileName ]
-e   Checks whether a given file exists or not                     [ -e $FileName ]
-r   Checks if the given file has read access or not               [ -r $FileName ]
-w   Checks if the given file has write access or not              [ -w $FileName ]
-x   Checks if the given file has execute access or not            [ -x $FileName ]
-s   Checks if the given file has a size greater than zero         [ -s $FileName ]

Example:

read -p "Enter filename " FileName


if [ -e $FileName ]
then
echo "$FileName exists"
if [ -r $FileName ]
then
echo "$FileName has read access"
else
echo "$FileName does not have read access!"
fi
if [ -w $FileName ]
then
echo "$FileName has write access"
else
echo "$FileName does not have write access!"
fi
if [ -x $FileName ]
then
echo "$FileName has execute access"
else
echo "$FileName does not have execute access!"
fi

if [ -s $FileName ]
then
echo "$FileName size is non-zero"
else
echo "$FileName is empty"
fi
else
echo "$FileName not found!"
fi

Output:
Enter filename test.txt
test.txt exists
test.txt has read access
test.txt has write access
test.txt has execute access
test.txt is empty

Arrays

Variables store single data elements. Arrays, on the other hand, can store a virtually unlimited
number of data elements. When working with a large amount of data, individual variables can
prove to be very inefficient, and it is very helpful to get hands-on with arrays.

Initialization of Arrays

Newer versions of bash support one-dimensional arrays. An array can be explicitly
declared with the declare shell builtin.
declare -a var
But it is not necessary to declare array variables as above. We can insert individual elements
into an array directly as follows.
var[XX]=<value>
where XX denotes the array index. To dereference array elements, use the curly bracket
syntax, i.e.
${var[XX]}
Note: Array indexing always start with 0.

Another convenient way of initializing an entire array is by using a pair of parentheses as
shown below.
var=( element1 element2 element3 . . . elementN )
There is yet another way of assigning values to arrays. This way of initialization is a variant
of the previously explained method.
array=( [XX]=<value> [XX]=<value> . . . )
We can also read/assign values to array during the execution time using the read shell-builtin.
read -a array
Now upon executing the above statement inside a script, it waits for some input. We need to
provide the array elements separated by space (and not carriage return). After entering the
values press enter to terminate.
To traverse through the array elements we can also use for loop.
for i in "${array[@]}"
do
#access each element as $i. . .
done

The following script summarizes the contents of this particular section.



#!/bin/bash

array1[0]=one
array1[1]=1
echo ${array1[0]}
echo ${array1[1]}

array2=( one two three )


echo ${array2[0]}
echo ${array2[2]}

array3=( [9]=nine [11]=11 )


echo ${array3[9]}
echo ${array3[11]}
read -a array4
for i in "${array4[@]}"
do
echo $i
done
exit 0

Various Operations on Arrays

Many of the standard string operations work on arrays. Look at the following sample script,
which implements some operations on arrays (including string operations).

#!/bin/bash

array=( apple bat cat dog elephant frog )


#print first element
echo ${array[0]}
echo ${array:0}
#display all elements
echo ${array[@]}
echo ${array[@]:0}
#display all elements except first one
echo ${array[@]:1}
#display elements in a range
echo ${array[@]:1:4}
#length of first element
echo ${#array[0]}
echo ${#array}
#number of elements
echo ${#array[*]}
echo ${#array[@]}
#replacing substring
echo ${array[@]//a/A}
exit 0

Following is the output produced on executing the above script.

apple
apple
apple bat cat dog elephant frog
apple bat cat dog elephant frog
bat cat dog elephant frog
bat cat dog elephant
5
5
6
6
Apple bAt cAt dog elephAnt frog

Command Substitution with Arrays

Command substitution assigns the output of a command or multiple commands into another
context. Here in this context of arrays we can insert the output of commands as individual
elements of arrays. Syntax is as follows.

array=( $(command) )

By default, the whitespace-separated tokens in the command's output are plugged into the
array as individual elements. The following script lists the files in a directory that have
755 permissions.

#!/bin/bash

ERR=27
EXT=0

if [ $# -ne 1 ]; then
echo "Usage: $0 <path>"
exit $ERR
fi

if [ ! -d "$1" ]; then
echo "Directory $1 does not exist"
exit $ERR
fi

temp=( $(find "$1" -maxdepth 1 -type f) )

for i in "${temp[@]}"
do
perm=$(ls -l $i)
if [ `expr ${perm:0:10} : "-rwxr-xr-x"` -eq 10 ]; then
echo ${i##*/}
fi
done
exit $EXT
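The script above parses the output of ls -l, which is fragile. As a rough alternative sketch (assuming GNU find, whose -printf option is not in POSIX), the permission bits can be tested directly:

```shell
#!/bin/bash
# Alternative sketch: let find test the mode bits instead of parsing `ls -l`.
# -perm 755 matches regular files whose permissions are exactly rwxr-xr-x.
dir=${1:-.}
find "$dir" -maxdepth 1 -type f -perm 755 -printf '%f\n'
```

Either approach prints only the basenames of the matching files.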

Simulating Two-dimensional Arrays

We can easily represent a 2-dimensional matrix using a 1-dimensional array. In row-major
order representation, the elements of each row of the matrix are stored progressively in
sequential array indexes. For an m×n matrix, the formula can be written as:

matrix[i][j]=array[n*i+j]

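As a quick sanity check of the formula, element [1][2] of a 2x3 matrix should land at flat index 3*1+2 = 5:

```shell
#!/bin/bash
# Row-major flat index for an m x n matrix: index = n*i + j
n=3; i=1; j=2
echo $(( n*i + j ))   # prints 5
```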
Look at another sample script for adding 2 matrices and printing the resultant matrix.

#!/bin/bash

read -p "Enter the matrix order [mxn] : " t


m=${t:0:1}
n=${t:2:1}

echo "Enter the elements for first matrix"


for i in `seq 0 $(($m-1))`
do
for j in `seq 0 $(($n-1))`
do
read x[$(($n*$i+$j))]
done
done

echo "Enter the elements for second matrix"


for i in `seq 0 $(($m-1))`
do
for j in `seq 0 $(($n-1))`
do
read y[$(($n*$i+$j))]
z[$(($n*$i+$j))]=$((${x[$(($n*$i+$j))]}+${y[$(($n*$i+$j))]}))
done
done

echo "Matrix after addition is"


for i in `seq 0 $(($m-1))`
do
for j in `seq 0 $(($n-1))`
do
echo -ne "${z[$(($n*$i+$j))]}\t"
done
echo -e "\n"
done

exit 0

Even though there are limitations to implementing arrays in shell scripting, they become
useful in a handful of situations, especially when combined with command substitution.
From an administrative point of view, the concept of arrays paved the way for the
development of many background scripts in GNU/Linux systems.

Sorting using array



Generate random number array with 10 elements and then sort:


#!/bin/bash
#arr=(one two three four)
arr=($(for i in {0..9};do echo $((RANDOM%100));done))
echo "initial array:"
echo "${arr[@]}"
echo "sorted array:"
echo ${arr[@]} | tr " " "\n"| sort -n | tr "\n" " "

Output:
initial array:
76 7 66 15 23 7 9 78 83 93
sorted array:
7 7 9 15 23 66 76 78 83 93
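The pipeline above only prints the sorted values. To capture them back into an array, bash's mapfile builtin can be used, as in this small sketch:

```shell
#!/bin/bash
arr=(76 7 66 15 23)
# Print one element per line, sort numerically, and read the result
# back into a new array variable.
mapfile -t sorted < <(printf '%s\n' "${arr[@]}" | sort -n)
echo "${sorted[@]}"   # prints: 7 15 23 66 76
```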

Functions

Shell functions typically do not return a result to the calling code directly. Instead, global
variables or output streams are used to communicate the result. The special variable $? is
often used to check whether a command ran successfully or not. Many commands also print
their result to the stdout stream so that the caller can read it into a variable.

Syntax for defining functions:


function_name()
{

<statements>

}

To invoke a function, simply use the function name as a command.


$ function_name

To pass parameters to the function, add space-separated arguments like other commands.
$ function_name $arg1 $arg2 $arg3

The passed parameters can be accessed inside the function using the standard positional
variables, i.e. $1, $2, $3, etc. (note that $0 still expands to the script's name, not the function's).

Example:

function_name()   # function declaration
{

c=$(($1 + $2))   # positional parameters

}

function_name 1 2   # call to the function; variable c will hold the value 3

Functions can return values using any one of the three methods:

1) Change the state of a variable or variables.

Example: a shell script to demonstrate manipulation of variable inside a function

a=1
increment(){
a=$((a+1))
return
}
increment
echo "$a"

Output:
2

2) Use the return command to end the function and return the supplied value to the calling
section of the shell script.

Example:
function_name()
{
echo "hello $1"
return 1
}
Output: Running the function with a single parameter will echo the value.
$ function_name ram
hello ram

Exit status: Capturing the return value (stored in $?) as follows:


$ echo $?
1

3) Capture the output echoed to the stdout.

Example:

$ var=`function_name ram`


$ echo $var
hello ram

Return values from a function



Bash provides the return statement, using which we can return a value to the caller. Let us
understand this with an example:

function func_return_value {
return 10
}
The above function returns the value 10 to its caller. Let us execute it:

$ func_return_value
$ echo "Value returned by function is: $?"

Output:
Value returned by function is: 10

NOTE: In bash we have to use $? to capture the return value of a function.
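A caveat worth remembering: the return status is stored in a single byte, so values outside 0-255 wrap around modulo 256. A small sketch:

```shell
#!/bin/bash
big() { return 300; }
big
echo $?   # prints 44, because 300 % 256 = 44
```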

Calling Functions from other Functions in Shell Scripts

Similar to how we called the function from outside, we can call another function from within
our function. In the example below, function1 calls the second function (function2) with the
argument that was passed to function1, incremented by 1.

#!/bin/bash
function1() {
echo "this is function 1 $1"
function2 $(($1+1))
}

function2() {
echo "This is function 2 $1"
}

function1 5

Output:
this is function 1 5
This is function 2 6

Variable Scope Within Bash Functions

By default, a declared variable is accessible globally. To make a variable accessible only
within a function, many programming and scripting languages provide scope keywords.
Let's have a look at the local keyword in bash.

#!/bin/bash

A="This is global value for A"


function1() {
local A="This is local value for A"
B="This is global and local value for B"
echo $A
echo $B
}
function1
echo $A
echo $B

Output:
This is local value for A
This is global and local value for B
This is global value for A
This is global and local value for B

With the use of the "local" keyword in the above script, the local value of A is completely
different from its global value, even though the variable name is the same.

Debugging
The most basic step while debugging a script is "echo". You can echo the commands and
variables you are using so that you can check in the output whether they are taking the
right values or not.

Debugging using the set command


Using the set Bash built-in you can run in normal mode those portions of the script of which
you are sure they are without fault, and display debugging information only for troublesome
zones.
Short Notation  Long Notation   Result
set -f          set -o noglob   Disables file name generation using metacharacters (globbing).
set -v          set -o verbose  Prints shell input lines as they are read.
set -x          set -o xtrace   Prints command traces before executing each command.
set -n          set -o noexec   Activates syntax-checking mode without executing commands.

set -x or set -o xtrace



The most common one is the -x option; it runs the script in debug mode. set -x shows each
command as well as its output on the terminal, so that you know which command produced
which output.

There are two ways to use the -x option,

Example1: bash -x script.sh ( while running the script )

Example2: #!/bin/bash -x ( adding -x option in shebang )
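The '+' prefix seen in the trace output comes from the PS4 variable, which can be customized; for instance, including ${LINENO} tags each traced command with its line number. A sketch, assuming bash:

```shell
#!/bin/bash
# PS4 is expanded and printed before every traced command.
PS4='+ line ${LINENO}: '
set -x
x=1
echo "$x"
set +x
```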

Similarly, if you want to debug only a particular part of the script, you can use the set
built-in inside the script.

#!/bin/bash

set -x #Enabling the Debugging


echo "foo"
set +x #Disabling the Debugging

echo "bar"

Output:

root@localhost$ ./script.sh
+ echo foo
foo
+ set +x
bar

set -u or set -o nounset

Sometimes a variable is used somewhere in the script without ever being assigned. By default,
bash silently expands a non-existent variable to an empty string. With set -u, the script
reports an error instead of continuing the execution silently.

#!/bin/bash
set -u
echo $foo
echo "bar"

Output:

root@localhost$ ./script.sh

/script.sh: line 3: foo: unbound variable

set -e

By default, bash reports an error for any failing command in the script, yet it continues to
execute the remaining part. Often we do not want bash to accumulate errors; instead, it
should stop executing the script right away on the first error, which is what set -e does.

#!/bin/bash
set -e
foo
echo "bar"

Output:

root@localhost$ ./script.sh
./script.sh: line 3: foo: command not found

set -e determines whether the script passes or fails based on return values. However, some
commands return a non-zero value without indicating failure, or you may want the script to
continue running even if a command fails. For such cases, you can turn off set -e
temporarily and enable it again after the command ends.

#!/bin/bash
set +e
command1
command2
set -e
Here set +e turns the -e option off, and set -e turns it back on.
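An alternative to toggling the option around a whole region is to mask a single command's failure with || true, as in this sketch:

```shell
#!/bin/bash
set -e
false || true     # the failure of `false` is masked, so set -e does not abort
echo "still running"
```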

Debugging using Trap

When the DEBUG signal is trapped, the command given in the arguments of the trap command
is executed before each subsequent statement in the script.

#! /bin/bash
trap 'echo "Line- ${LINENO}: five_val=${five_val}, two_val=${two_val}, total=${total}" '
DEBUG
five_val=5
two_val=2
total=$((five_val+two_val))
echo "Total is: $total"
total=0 && echo "Resetting Total"

In this example, we specified the echo command to print the values of


variables five_val, two_val, and total. Subsequently, we passed this echo statement to
the trap command with the DEBUG signal. In effect, prior to the execution of every
command in the script, the values of variables get printed.

Output
$ ./trap_debug.sh
Line- 3: five_val=, two_val=, total=
Line- 4: five_val=5, two_val=, total=

Line- 5: five_val=5, two_val=2, total=


Line- 6: five_val=5, two_val=2, total=7
Total is: 7
Line- 7: five_val=5, two_val=2, total=7
Line- 7: five_val=5, two_val=2, total=0
Resetting Total

Process Management

A program under execution is termed a Process. In other words, when the CPU reads and
executes the instructions written in a program, a process comes into existence. The process
requires various computer resources to complete its task. A distinction is sometimes drawn as
follows: a Task is a process executing in supervisor mode, while a Process is a program in
execution in user mode.

Multiple processes can exist in the computer memory each requiring same resources to execute
and most often at the same time. Hence, the operating system provides its services and manages
the allocation and release of resources to the processes. The basic objective of OS is to maintain
the throughput of the CPU keeping the latency time to minimum while allocating CPU to
multiple processes. Other major tasks of the OS are: avoiding deadlocks, enabling inter-process
communication, applying process scheduling techniques, and maintaining consistency of
process states.

Processes can be I/O-bound or processor-bound. An I/O-bound process spends most of its time
submitting and waiting on input/output activities. Such a process is frequently runnable, but
only for a short time, because it will eventually block waiting on more I/O (this includes any
type of I/O, such as keyboard activity, and not simply disk I/O).

Processor-bound processes, on the other hand, spend the majority of their time executing code;
their execution time is determined by the CPU speed. They tend to run until they are preempted,
since they do not frequently block on I/O requests. A scheduling policy tends to run processor-
bound tasks less frequently but (optimally) for longer periods. The ultimate example of a
processor-bound process is one that executes an infinite loop. Some processes fit both
categories: a word processor generally sits waiting for keystrokes (I/O-bound) but can
suddenly peg the processor in a frantic spell-checking spree.
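The distinction can be observed from the shell with the time keyword: a busy loop accumulates CPU (user) time, while sleep mostly just waits. A rough sketch; exact timings vary by machine:

```shell
#!/bin/bash
# Processor-bound: a tight loop burns CPU time.
time for i in $(seq 1 100000); do :; done
# I/O-bound in spirit (pure waiting): real time passes, almost no CPU is used.
time sleep 0.2
```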

The process memory is divided into four sections:


1. Program: comprises the program code, the program counter value, and the contents of the
processor's registers representing the current activity.
2. Data: consists of both static and global variables.
3. Stack: Stack stores temporary data such as local variables, return addresses, and
method/function calls.
4. Heap: A heap is a type of dynamic memory that is allocated during the execution of a process.

Figure: Process memory layout — from the MAX address downward: Stack, free space, Heap,
Data, and Program (text) at address 0. The stack grows downward and the heap grows upward.

The stack and heap begin at opposite ends of the process's free space and advance towards one
another. If they collide, a stack overflow error will occur, or a request for new heap memory
will fail due to inadequate space in the heap region.

Process Control Block (PCB)

The attributes or parameters of a process are also called the context of the process. These
attributes are stored in a Process Control Block (PCB) for each process. The PCB is a data
structure that holds the information in following attributes:

• Process ID: A process ID is a unique ID that is assigned to the process when it is


created.
• Program Counter: The program counter stores the address of the process's most recent
instruction.
• Process State: The process state can be one of the following: running, ready, waiting,
terminate, and so on.
• Priority: Each process in memory has a different priority. The process with the highest priority
among the processes receives the CPU first.
• General Purpose Registers: These registers are used to hold data created during process
operation.
• List of Open Files: A list of open files includes some files that must be present in main memory
while the process is running.
• List of Open Devices: The list of open devices contains the devices that are in use when the
process is running.
• Memory Management information: page tables or segment tables.
• Accounting information: user and kernel CPU time consumed, account numbers, limits, etc.

Figure: Role of PCB Source: enjoyalgorithms.com

The operating system performs numerous operations when creating a process and uses a PCB to
track each process's execution status. It assigns a process ID (PID) to each process to uniquely
identify it and keep track of all processes. As previously stated, it also keeps a number of
other crucial details in the PCB. As a process moves from one state to another, the operating
system updates the information in its PCB. To facilitate frequent access, the operating system
stores pointers to each process's PCB in a process table.
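On Linux, part of this per-process bookkeeping is exposed under /proc; for instance, a shell can inspect its own PID and state. A Linux-specific sketch:

```shell
#!/bin/bash
echo "shell PID: $$"
# Name, state, and PID fields from the kernel's per-process records.
grep -E '^(Name|State|Pid):' /proc/$$/status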

Referring to the figure above, assume the OS is controlling two processes, P0 and P1; at the
moment P0 is active and P1 is inactive. Both processes' PCBs are stored in RAM. Since P0 is
running, the CPU registers hold values reflecting P0's current state. If the OS decides to
interrupt P0 after some time, it saves all the state information of P0, including the CPU
registers, into PCB0. The OS then loads the CPU registers from PCB1 and begins executing P1.
If P1 is interrupted after some time, PCB1 is saved, the CPU registers are reloaded from PCB0,
and P0 resumes execution from the exact point where it was previously halted by the operating
system. Each such switch is called a context switch.

Context Switching

The operating system uses this method to change the context of execution from one process to
another. In a multitasking operating system, process switching is crucial, and only the
scheduler can switch a process. Context switching consists of two steps:
• Switching the address space, i.e. the page table, to that of the new process.
• Switching the kernel-mode stack and the hardware context.
Due to the number of instructions required to save and restore the values of PCB fields from
memory, context switching is a costly operation. Additionally, much of the data used by a
process during execution is kept in the CPU cache, because accessing the cache is
significantly faster than accessing memory. A cache is said to be hot when the desired data is
present in it. When the CPU switches the context from, say, process P1 to process P2, the
cached data of P1 is gradually evicted and replaced by that of P2. Therefore, the next time the
context switches back from P2 to P1, P1 will not find its data in the cache and will need to
access it from memory, resulting in cache misses. This situation is described as a cold cache.
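Linux keeps per-process counters of these switches, visible in /proc (a Linux-specific sketch):

```shell
#!/bin/bash
# voluntary: the process gave up the CPU (e.g. blocked on I/O);
# nonvoluntary: the scheduler preempted it.
grep ctxt_switches /proc/$$/status
```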

Process States

A process undergoes state change while it runs. The ongoing activity of a process contributes
to defining some aspects of its state. Each process could be in one of the following
conditions:

• New: The process is being created. A new process comes into existence when a program is
loaded from secondary memory (the hard drive) into primary memory (RAM). In essence, it
marks the moment a process is started.
• Ready: The process has been loaded into main memory and is prepared for execution, waiting
to be allocated the CPU.
• Waiting: The process is suspended to allow other processes to run; it waits until CPU time
and other resources become available for its execution, e.g. the process is awaiting an event
such as the completion of I/O or the handling of a signal.
• Running: It refers to the period of time when a CPU is actually running a process. When in this
condition, commands are being carried out.
• Blocked: It identifies the length of time that a process must wait for an event, such as the
completion of input or output activities.
• Suspended: It defines the moment at which a process is prepared for execution but has not
yet been added to the operating system's ready queue.
• Terminated: The process is now in this state after having completed its execution. It identifies
the moment at which all of a process's used resources and memory are free. This occurs when
a process is terminated or ended.

Figure: Process States Source: includehelp.com

The process is originally in the new state when it is created, as seen in the above diagram.
When it is loaded into RAM or main memory, its state changes to ready. While the process
waits for CPU time to be allocated, its status is waiting. As soon as CPU time and other
resources are allotted to the process, it enters the running state. When a process completes
its execution, its state is changed to terminated.
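These states can be observed from the shell: ps reports a one-letter state code (R running, S sleeping, T stopped, Z zombie). A small sketch:

```shell
#!/bin/bash
sleep 5 &                      # background process; it will mostly be in state S
ps -o pid,stat,comm -p $!
kill "$!"                      # clean up the background process
```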

Threads

A thread is a stream of code execution within a process. It has a program counter that keeps
track of which instruction to run next, as well as registers that hold its current working
variables. The term "lightweight process" also applies to threads. Several threads can run in
the same process, each performing a different task. Threads use parallelism as a means of
enhancing the performance of an application. Each thread of the same process has its own
program counter, stack of activation records, and control block.

Figure: Single-Thread and Multi-Thread Processes Source: studytonight.com

Let's use a web browser as an example, where one thread might be used to show a web page
while another thread performs payment transaction on the network. Another example is a word
processor, which might contain threads for displaying the user interface or graphics, responding
to keystrokes entered by the user, and verifying spelling and grammar in the background. In
some circumstances, a single application could be needed to carry out a number of related tasks.
The operating system's threads have a number of advantages and enhance system
performance. The operating system needs threads for a number of reasons, including:
• The operational cost across threads is low because they share the same data and
programming code.
• Compared to starting or ending a process, creating and terminating a thread is fast. In contrast
to processes, context switching occurs more quickly in threads.

Difference between Process and Thread

Process                                                Thread
A process simply means any program in execution.       A thread simply means a segment of a process.
The process consumes more resources.                   The thread consumes fewer resources.
The process requires more time for creation.           The thread requires comparatively less time for creation.
The process is a heavyweight entity.                   The thread is known as a lightweight process.
The process takes more time to terminate.              The thread takes less time to terminate.
Processes have independent data and code segments.     A thread shares the data segment, code segment, files, etc. with its peer threads.
The process takes more time for context switching.     The thread takes less time for context switching.
Communication between processes needs more time.       Communication between threads needs less time.
If a process gets blocked, the remaining processes     If a user-level thread gets blocked, all of its
can continue their execution.                          peer threads also get blocked.

Source: scaler.com

The above diagram shows how the resources are shared in two different processes vs two
threads in a single process.

Multithreading: Instead of starting a completely new process, multithreading works by


splitting an existing one up into several threads. To achieve parallelism and enhance the speed
of the programs, multithreading is used since it is quicker in many ways. The benefits of
multithreaded programming can be divided into four categories.

• Responsiveness: Multithreading is an interactive approach for an application that allows a


program to continue operating even while a part of it is blocked or performing a lengthy
function, which improves user responsiveness.
• Sharing of resources: Typically, threads share the memory and resources of any process they
are a part of. The benefit of sharing code is that it enables numerous threads of operation to
coexist within the same address space.
• Cost Effective: Allocating memory and resources for process creation is an expensive
operation in the operating system. Since threads share the resources of the process to which
they belong, it is more cost-effective to create threads and to switch context between them.

• Utilization of multiprocessor architectures: The benefits of multithreading can be


considerably increased in a multiprocessor design where threads can run concurrently on
many processors.

Multithreading Models
One of the following methods must be used to map user threads to kernel threads:

Many-to-One Model
• A large number of user-level threads are all mapped onto a single kernel thread in the many-
to-one architecture.
• The user space thread library is particularly effective at handling thread management.
• If one thread makes a blocking system call, the entire process blocks, even though the
other user threads would otherwise be free to continue.
• The many-to-one approach prevents individual processes from being distributed across
several CPUs because a single kernel thread can only run on a single CPU.
• A few systems still use the many-to-one paradigm today, such as Green Threads for Solaris
and GNU Portable Threads.

Figure: Many-to-One Model — many user threads mapped onto a single kernel thread.

One-to-One Model
• Each user thread is handled by a distinct kernel thread under the one-to-one approach.
• Blocking system calls and the division of tasks across several CPUs are issues that are
resolved by the one-to-one approach.
• The overhead associated with running the one-to-one model, however, is more significant,
adding to the overhead and slowing the system.
• The number of threads that can be produced is typically limited in this model's
implementations.
• The one-to-one thread paradigm is used by Linux and Windows versions 95 through XP.

Figure: One-to-One Model — each user thread mapped to its own kernel thread.

Many-to-Many Model
• The many-to-many model combines the best aspects of the one-to-one and many-to-one
models by multiplexing any number of user threads onto a smaller or equal number of kernel
threads.
• There are no limitations on how many threads users can establish.
• The entire process is not blocked when a kernel system call is blocked.
• Multiple processors can be used to divide up processes.
• Depending on the number of CPUs present and other criteria, different kernel threads may be
assigned to different processes.

Figure: Many-to-Many Model — user threads multiplexed onto a smaller or equal number of
kernel threads.

Two-tier Model (Variation of Many-to-Many Model)


One popular variation of the many-to-many model is the two-tier model, which allows
either many-to-many or one-to-one operation. IRIX, HP-UX, and Tru64 UNIX use the
two-tier model, as did Solaris prior to Solaris 9.

Figure: Two-tier Model — many-to-many multiplexing with the option of binding a user
thread one-to-one to a kernel thread.

Types of Threads

User Thread
User-level threads are not acknowledged by the OS. They are implemented by the user and are
simple to implement. If one user-level thread performs a blocking operation, the entire
process is blocked. The kernel has no knowledge of user-level threads. These are the threads
that application programs would put into their programs.
Advantages:
1. The OS is unaware of user-level threads because they are implemented using user-level
libraries, so no kernel intervention is needed.
2. In comparison to kernel-level threads, user-level threads can be created and managed more
quickly.
3. It is quicker to switch contexts between user-level threads.
Examples: Java threads, POSIX threads, etc.
Disadvantages:
1. The coordination between the threads and the kernel is missing in user-level threads.
2. If only one user-level thread performs a blocking task, the entire process becomes stalled.
3. A page fault caused by a thread stops the entire process.

Kernel Thread
Kernel threads are recognized by the operating system. For each thread and process, the
system keeps a thread control block and a process control block at the kernel level.
Kernel-level threads are implemented by the operating system. The kernel is aware of and
manages all threads, and it provides system calls for creating and managing threads from
user space.
Advantages:
• Kernel-level threads are implemented via system calls and are acknowledged by the
operating system.
• Even if one kernel-level thread performs a blocking action, it has no impact on the other
threads of the process.
• The kernel can schedule threads of the same process on multiple processors.
Examples: Windows threads (Win32), Solaris threads.
Disadvantages:
• Creating and managing kernel-level threads is slower than user-level threads, and context
switching at the kernel level is slower.
• All threads must be controlled and scheduled by the kernel, which makes the implementation
more complex than that of user-level threads.

Difference between User-Level and Kernel-Level Threads


User-Level Threads                                     Kernel-Level Threads
These threads are implemented by users                 These threads are implemented by the
(via a thread library).                                operating system.
These threads are not recognized by the                These threads are recognized by the
operating system.                                      operating system.
Context switching requires no hardware support.        Context switching needs hardware support.
These threads are mainly designed as                   These threads are mainly designed as
dependent threads.                                     independent threads.
If one user-level thread performs a blocking           If one kernel-level thread performs a blocking
operation, the entire process is blocked.              operation, another thread can continue execution.
Examples: Java threads, POSIX threads.                 Examples: Windows threads, Solaris threads.
Implementation is done by a thread library             Implementation is done by the operating
and is easy.                                           system and is complex.
Generic in nature; can run on any                      Specific to the operating system.
operating system.

Challenges of Threads

System calls fork() and exec(): A duplicate child process is created using the fork() call. The
question that arises during a fork() call in a multithreaded program is whether the entire
process (all threads) should be duplicated or only the thread that invoked fork(). The exec()
call substitutes a new program for the whole process that invoked it, including all of its
threads.
Thread cancellation: When a thread is terminated before it has finished, it is referred to as
thread cancellation, and the thread that was cancelled is known as the target thread. There are
two types of thread cancellation:
• Asynchronous Cancellation: In this type of cancellation, the source thread ends the
target thread immediately.
• Deferred Cancellation: The target thread periodically checks whether it should be
terminated, allowing it to end itself at a safe point.

Signal handling: In Linux systems, a signal is a notification that a specific event has occurred
to a process. The following categories of signal processing are based on the signal's source:
• Asynchronous Signal: A signal that is produced independently of the receiving
process.
• Synchronous Signal: A signal delivered to the same process whose operation caused it,
such as an illegal memory access or division by zero.

Process Scheduling

An integral part of multiprogramming operating systems is process scheduling. It is the process


of choosing another task to be processed while removing the currently executing task from the
processor. During its lifetime, a process moves through various states, including ready, waiting, and running.
The primary goals of the process scheduling system are to maintain constant CPU activity and
to ensure that all programs respond with the shortest possible delay. The scheduler must use
the proper rules for switching processes in and out of CPU. In a uniprocessor, only a single
process is active. Throughout its existence, a process moves between different scheduling

queues. The process of selecting processes from these queues is performed by a scheduler. It
uses scheduling algorithms to decide which process it must allot to the CPU.
The process scheduler is the kernel component responsible for selecting the next process to
execute. The process scheduler is a subsystem of the kernel that divides the system's limited
processor time resource among the executable processes. By determining which processes can
execute, the scheduler is responsible for optimising system utilisation and creating the illusion
that numerous processes are running concurrently.

Categories of Scheduling

Non-Preemptive: The resource cannot be removed from a process in non-preemptive mode


until the process has finished running. When the running process ends and moves to a waiting
state, the swapping of resources takes place.

Preemptive: The OS allots resources to a process during preemptive scheduling for a


predetermined period of time. The process switches from running state to ready state or from
waiting state to ready state during resource allocation. This switching happens because the CPU
may assign other processes precedence and substitute the currently active process for the higher
priority process.

Criteria of Process Scheduling

Since multiple scheduling algorithms are available to send processes for execution, the need
exists to understand the criteria that help in comparing these algorithms so as to achieve
optimum multiprogramming. Certain commonly used terminologies and criteria are described
below in the context of process scheduling.

● Arrival Time: Time at which the process arrives in the ready queue.
● Completion Time: Time at which process completes its execution.
● Burst Time: Time required by a process for CPU execution.
● CPU Utilization: Keep the CPU as busy as possible. Utilization ranges from 0 to 100%; in
practice, it ranges from about 40% to 90%.
● Throughput: Throughput is the rate at which processes are completed per unit of time.
● Turnaround time: The total time a process takes from submission to completion. It is
calculated as the time gap between the submission of a process and its completion.
Turn Around Time = Completion Time – Arrival Time
● Waiting time: Waiting time is the sum of the time periods spent in waiting in the ready
queue.
Waiting Time = Turn Around Time – Burst Time
● Response time: Response time is the time it takes to start responding from submission
time. It is calculated as the amount of time it takes from when a request was submitted
until the first response is produced.

Response Time = Time at which the process gets the CPU for the first time - Arrival time

● Fairness: Each process should have a fair share of CPU.
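The formulas above can be checked with a quick calculation. The numbers below are hypothetical, chosen only to illustrate the relationships between the metrics:

```python
# Hypothetical process: arrives at t=2, first gets the CPU at t=5,
# needs 4 ms of CPU time, and completes at t=11.
arrival, first_run, burst, completion = 2, 5, 4, 11

turnaround = completion - arrival    # 11 - 2 = 9
waiting = turnaround - burst         # 9 - 4 = 5
response = first_run - arrival       # 5 - 2 = 3

print(turnaround, waiting, response)  # 9 5 3
```

Note that waiting time counts all the time in the ready queue, while response time only measures how long the process waited before its first run.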

Process Scheduling Queues



The process control blocks (PCB) of all processes are kept in separate queues by the OS for
each state. When the state of a process changes, the PCB is unlinked from its current queue and
moved to a new state queue. These process scheduling queues are:

1. Job queue: Holds the set of all processes in the system.


2. Ready queue: This stores a set of all processes in main memory, ready and waiting for
execution. The ready queue stores any new process.
3. Device queue: This queue consists of the processes blocked due to the unavailability of an
I/O device.

The OS manages each queue using a distinct set of policies, and the OS scheduler chooses how
to transfer processes between the ready queue and the run queue (which only permits one entry
per processor core on the system).

Types of Process Schedulers

Figure: Process Schedulers Source: https://www.researchgate.net

Long-term Scheduler: A long-term scheduler, which is also called a job scheduler, decides
which program should be allowed to run on the system. The operating system's (i.e.,
processor's) overhead for keeping long lists, context switching, and dispatching increases if the
number of ready processes in the ready queue rises significantly. Therefore, only a limited
number of processes are admitted into the ready queue.

The processes are picked out of the pool (secondary memory) and kept in the primary memory's
ready queue. The level of multiprogramming is mostly managed by the long-term scheduler.
Among the jobs in the pool, the long-term scheduler's job is to select the ideal ratio of CPU-
and IO-bound processes.

When a job is admitted, it becomes a process and is placed in the queue for the short-term
scheduler. In some systems, a newly created process begins in a swapped-out state, in which
case it is assigned to a queue for the medium-term scheduler. The schedulers manage these
queues to minimise queueing delay and optimise performance.

The CPU will spend most of its time idle if the job scheduler selects mostly I/O-bound
processes, since all of the jobs may end up blocked on I/O at the same time; this lessens
multiprogramming. Conversely, selecting too many CPU-bound processes leaves the I/O devices
idle and produces a lengthy queue of ready jobs waiting behind long CPU bursts, the so-called
''convoy effect.'' As a result, the long-term scheduler's job is extremely important and can
have a long-term impact on the system.

The degree of multiprogramming is stable when the average rate at which new processes are
created equals the average rate at which processes leave the system. The long-term scheduler
is invoked only when a process changes state from "new" to "ready"; time-sharing operating
systems, for example, often have no long-term scheduler at all and simply admit every new
process.

Short-term Scheduler: A short-term scheduler, also called a CPU scheduler, boosts the
performance of the system based on the set of criteria that was chosen. This is when the process
goes from the ready state to the running state. It chooses one of the ready processes to run and
provides the CPU to it. It works faster than the long-term scheduler; the module that actually
hands control of the CPU to the selected process is called the dispatcher. The short-term
scheduler is executed whenever an event occurs that may cause the currently executing process
to be interrupted, for instance clock interrupts, I/O interrupts, requests to the operating
system, signals, etc.

The short-term scheduler's work can be crucial because if it chooses a job with a high CPU
burst time, all subsequent jobs will have to wait in the ready queue for a very long time.
Starvation is a condition that can happen if the short-term scheduler chooses the job
incorrectly.

Medium-term Scheduler: This scheduler works in close conjunction with the long-term
scheduler. Medium-term scheduling is a part of swapping that gets rid of processes from
memory. When an I/O request is made by a process that is already running, the process is
suspended, which means it can't finish. So, in order to get the process out of memory and make
room for others, it is sent to the secondary storage. This is called "swapping," and a process
that goes through swapping is called "rolled out" or "swapped out."
It reduces the degree of multiprogramming. The swapping is necessary to have a perfect mix
of processes in the ready queue.

Comparison of Process Schedulers

Long-Term Scheduler Short-Term Scheduler Medium-Term Scheduler


A job scheduler A CPU scheduler A process swapping scheduler
Slowest speed Fastest Speed Speed is between the other two
Controls the degree of Provides less control over the Reduces the degree of
multiprogramming degree of multiprogramming multiprogramming
Absent or minimal in a time- Minimal in a time-sharing OS Part of a time-sharing OS
sharing OS
Selects a process from pool and Selects a process that is ready for Re-introduces processes into
loads it into memory for execution execution memory for continued execution

Scheduling Algorithms

Scheduling algorithms determine which of the tasks in the ready queue will be allotted to the
CPU based on the scheduling policy's preemptiveness or non-preemptiveness. Arrival and
service times will also play a part in scheduling. The diagram below shows various types of
process scheduling algorithms.

Figure: Types of Process Scheduling Algorithms Source: geeksforgeeks.com

First-Come First-Serve Scheduling (FCFS)

FCFS is considered the simplest of all scheduling methods for operating systems. FCFS can
be understood by visualizing real-life scenarios like customers waiting in line at the bank,
the post office, or a copying machine.

It specifies that the CPU is allotted to the task that requested it first and is implemented
using a First-in First-out (FIFO) queue. FCFS is a non-preemptive technique: once the CPU has
been allocated to a process, the process keeps it until it terminates or blocks for I/O.
Unfortunately, FCFS can yield some very long average wait times, particularly if the first
process to arrive takes a long time.

For example, consider the following four processes, all arriving at time 0:

Process Service Time (ts) Turnaround Time (tt) Waiting Time (tw)
P1 8 8 0
P2 2 10 8
P3 5 15 10
P4 3 18 15
Average (in 12.75ms 8.25ms
milliseconds)

Gantt Chart for FCFS Scheduling:

P1 P2 P3 P4

0 8 10 15 18

Advantages of FCFS

● Easy to implement
● First come, first serve method

Disadvantages of FCFS

● FCFS suffers from Convoy effect.


● The average waiting time is much higher than the other algorithms.
● Because of its very simplicity, FCFS is not particularly efficient.
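As a sketch of how the FCFS numbers above are obtained (illustrative code, not from the text; the function name is an assumption), the following reproduces the table for service times 8, 2, 5 and 3 ms, all arriving at time 0:

```python
def fcfs(bursts):
    """Non-preemptive FCFS for processes that all arrive at t=0.

    Returns (turnaround, waiting) lists in arrival order."""
    t, turnaround, waiting = 0, [], []
    for burst in bursts:          # processes run strictly in arrival order
        waiting.append(t)         # time already spent in the ready queue
        t += burst
        turnaround.append(t)      # completion time - arrival time (arrival = 0)
    return turnaround, waiting

tt, wt = fcfs([8, 2, 5, 3])        # P1..P4 from the table above
print(tt, wt)                      # [8, 10, 15, 18] [0, 8, 10, 15]
print(sum(tt) / 4, sum(wt) / 4)    # 12.75 8.25
```

The averages match the table: 12.75 ms turnaround and 8.25 ms waiting.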

Shortest-job First (SJF)

The scheduling method known as "shortest job first" (SJF) or "shortest job next" (SJN)
chooses the waiting process with the shortest execution time to run first. Preemption may or
may not be used in this scheduling strategy. SJF significantly reduces the average waiting
time of the processes. It is easy to implement in batch systems, where the required
CPU time is known in advance, but hard to implement in interactive systems, where the
required CPU time is not known. The time a process would take to complete its task must
therefore be known in advance.

Allocation of CPU to the processes would depend on the nature of the process. For a preemptive
process scheduling, as soon as the process with short burst time arrives in the ready queue, the
current process is preempted. In a non-preemptive process scheduling, the process arriving with
shorter burst time would have to wait for the current process to finish execution. This leads to
the problem of Starvation, where a shorter process has to wait for a long time until the current
longer process gets executed. This happens if shorter jobs keep coming, but this can be solved
using the concept of aging.

Example:

Consider the above table and assume that all the processes arrive at time 0 and the scheduling is
non-preemptive. The Gantt chart below specifies the order of execution of each process. The
calculated average waiting time is:
Process Service Time (ts) Turnaround Time (tt) Waiting Time (tw)

P1 8 18 10
P2 2 2 0
P3 5 10 5
P4 3 5 2
Average (in 8.75ms 4.25ms
milliseconds)

P2 P4 P3 P1

0 2 5 10 18
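The non-preemptive SJF order can be reproduced with a small sketch (illustrative code, not from the source): the jobs are simply served in order of increasing burst time.

```python
def sjf(bursts):
    """Non-preemptive SJF, all processes arriving at t=0.

    Returns {process_index: (turnaround, waiting)}."""
    t, result = 0, {}
    # serve the jobs in order of increasing burst time
    for i in sorted(range(len(bursts)), key=lambda j: bursts[j]):
        result[i] = (t + bursts[i], t)   # turnaround, waiting (arrival = 0)
        t += bursts[i]
    return result

res = sjf([8, 2, 5, 3])                  # P1..P4 from the table above
# execution order: P2, P4, P3, P1
print([res[i] for i in range(4)])        # [(18, 10), (2, 0), (10, 5), (5, 2)]
```

The per-process (turnaround, waiting) pairs match the table above.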

For the same example, consider the scheduling as preemptive. Here, let us add a column to
show the arrival time of each process. Waiting time is calculated as (Completion time -
Service time - Arrival time):

Process Burst time/Service Time (ts) Arrival Time (ta) Turnaround Time (tt) Waiting Time (tw)
P1 8 0 (18-0)=18 (18-8-0)=10
P2 2 1 (3-1)=2 (3-2-1)=0
P3 5 2 (11-2)=9 (11-5-2)=4
P4 3 3 (6-3)=3 (6-3-3)=0
Average (in milliseconds) 8.00ms 3.50ms

Gantt Chart for Preemptive SJF Scheduling:

P1 P2 P4 P3 P1

0 1 3 6 11 18

● At ( t =0ms ), P1 arrives. It’s the only process so CPU starts executing it.
● At ( t = 1ms ), P2 has arrived . At this time, P1 (remaining time ) = 7 ms . P2 has 2ms ,
so as P2 is shorter, P1 is preempted and P2 process starts executing.
● At ( t = 2ms ), P3 has arrived. At this time, P1 (remaining time) = 7 ms, P2 (remaining
time) = 1 ms, P3 = 5 ms. Since P2 has the least remaining time, P2 continues executing.
● At ( t = 3ms ), P4 arrives. At this time, P1 = 7 ms, P2 = 0 ms (finished), P3 = 5 ms,
P4 = 3 ms. Since P4 has the shortest remaining time, P4 starts executing.
● At ( t= 6ms ),P4 is finished . Now, remaining tasks are P1 = 7ms, P3 = 5ms. As ,P3 has
short burst time, so P3 continues to execute.
● At ( t = 11ms ),P1 gets executed till it finishes.

Advantages

● As SJF reduces the average waiting time, it is better than the first-come first-serve
scheduling algorithm.
● SJF is generally used for long-term scheduling.
Disadvantages

● One of the demerits of SJF is starvation.
● It is often complicated to predict the length of the upcoming CPU request.

Shortest Remaining Job First (SRJF)


Dr Shalini Bhartiya Operating System-Unit 1 and 2 VIPS

This scheduling approach is the preemptive version of the SJF algorithm. The operating
system assigns the processor to the task closest to completion, but this job may be preempted
if another job with a shorter remaining completion time becomes available. It is utilised in
batch environments where short jobs are prioritised, and cannot easily be implemented in
interactive systems where CPU time requirements are unpredictable. With each unit of
processing, the short-term scheduler checks whether any process in the ready queue has a
lower remaining burst time than the current process.

Once all processes are in the ready queue, no further preemption is performed and the
algorithm operates in the same way as SJF scheduling. When a process is preempted, its
context is stored in its Process Control Block (PCB) and the next process is scheduled; on
the subsequent run of the preempted process, its PCB is used to restore that context.

If context-switch overhead is ignored, the SRJF algorithm completes jobs faster than the SJF
method. In practice, however, context switching occurs significantly more frequently in SRJF
than in SJF and consumes CPU processing time, which increases total processing time and
lowers its speed advantage.

Example:

Process Burst time/Service Time (ts) Arrival Time (ta) Turnaround Time (tt) (Completion time - Arrival time) Waiting Time (tw) (Turnaround time - Burst time)
P1 8 0 18 10
P2 2 1 2 0
P3 5 2 9 4
P4 3 3 3 0
Average (in milliseconds) 8.00ms 3.50ms

Gantt Chart:

P1 P2 P4 P3 P1

0 1 3 6 11 18

● At the 0th unit of the CPU, there is only one process that is P1, so P1 gets executed for
the 1 time unit.
● At the 1st unit of the CPU, Process P2 arrives. Now, the P1 needs 7 more units to be
executed, and the P2 needs only 2 units. So, P2 is executed first by preempting P1.
● At the 2nd unit of time, process P3 arrives. The burst time of P3 (5 units) is more than
the remaining time of P2 (1 unit), so P2 continues its execution.

● At the 3rd unit of time, process P2 has terminated and process P4 arrives. The burst time
of P4 (3 units) is less than the burst time of P3 (5 units), so P4 is executed first.
● Now after the completion of P4, the burst time of P3 is 5 units and the remaining burst
time of P1 is 7 units. That means P3 will be executed first and P1 thereafter.
● P3 gets completed at time unit 11 and P1 is sent for execution, and it gets completed at
the 18th unit.
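The step-by-step trace above can be automated with a millisecond-by-millisecond simulation. This is an illustrative sketch (function name and structure are assumptions, not from the text), using the same arrival and burst times:

```python
def srtf(processes):
    """Preemptive SJF (shortest remaining time first), simulated one
    millisecond at a time. processes = [(arrival, burst), ...].
    Returns (turnaround, waiting) per process."""
    remaining = [b for _, b in processes]
    completion = [0] * len(processes)
    t = 0
    while any(r > 0 for r in remaining):
        # ready = arrived processes that still need CPU time
        ready = [i for i, (a, _) in enumerate(processes)
                 if a <= t and remaining[i] > 0]
        if not ready:
            t += 1                  # CPU idles until something arrives
            continue
        i = min(ready, key=lambda j: remaining[j])  # shortest remaining time
        remaining[i] -= 1           # run the chosen process for 1 ms
        t += 1
        if remaining[i] == 0:
            completion[i] = t
    return [(c - a, c - a - b) for (a, b), c in zip(processes, completion)]

print(srtf([(0, 8), (1, 2), (2, 5), (3, 3)]))
# [(18, 10), (2, 0), (9, 4), (3, 0)]
```

The output lists (turnaround, waiting) for P1 through P4, giving averages of 8.00 ms and 3.50 ms.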

Advantages

● In SRJF the short processes are handled very fast.


● The system also requires very little overhead since it only makes a decision when a
process completes or a new process is added.

Disadvantages

● Like the shortest job first, it also has the potential for process starvation.
● Long processes may be held off indefinitely if short processes are continually added.

Longest Job First (LJF):

Longest Job First (LJF) scheduling belongs to the non-preemptive scheduling algorithm
category. This algorithm primarily keeps track of the Burst time of all available processes at
the time of arrival, and then assigns the processor to the process with the longest burst time.
This scheduling algorithm is comparable to that of SJF. In contrast, this scheduling strategy
gives precedence to the process with the longest burst time.

Once a process begins its execution in this algorithm, it cannot be halted during its processing.
Only after the allocated process has completed its processing and terminated may any other
process be executed. In the event that two processes have the same burst time, the tie is
broken using first-come, first-served (FCFS): the process that arrived first is executed
first.

Process of LJF

● Firstly, the algorithm sorts the processes according to their Arrival Times in ascending
order.
● In the second stage, it will select the process with the highest Burst Time among all
arriving processes.
● The selected process then runs for its allotted burst period; LJF additionally
monitors the arrival of further processes until this one has completed its execution.
● Finally, repeat all the above steps until all the processes are executed.

Example: In this example, four processes P1, P2, P3 and P4 are given along with their Burst
time and arrival time.
Process Service Time (ts) Arrival Time (ta) Turnaround Time (tt) Waiting Time (tw)
P1 8 0 8 0
P2 2 1 17 15
P3 5 2 11 6
P4 3 3 13 10
Average (in milliseconds) 12.25ms 7.75ms

Gantt Chart:

P1 P3 P4 P2

0 8 13 16 18

Explanation:

● At t = 0, there is one process that is available having 8 units of burst time. So, select P1
and execute it for 8 ms.
● At t = 8 i.e. after P1 gets executed, The Available Processes are P2, P3 and P4. As you
can see the burst time of P3 is more than P2 and P4. So, select P3 and execute it for
5ms.
● At t = 13 i.e. after the execution of P3, the Available Processes are P2, P4. As you can
see the burst time of P4 is more than P2. So, select P4 and execute it for 3ms.
● Finally, after the completion of P4 execute the process P2 for 2 ms.
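A minimal sketch of non-preemptive LJF (illustrative code, not from the source) for the same processes:

```python
def ljf(processes):
    """Non-preemptive Longest Job First. processes = [(arrival, burst), ...].
    Ties are broken by arrival order (FCFS). Returns completion times."""
    n = len(processes)
    done, completion, t = set(), [0] * n, 0
    while len(done) < n:
        ready = [i for i, (a, _) in enumerate(processes)
                 if a <= t and i not in done]
        if not ready:
            t += 1                  # CPU idles until something arrives
            continue
        # longest burst first; earlier arrival wins a tie
        i = max(ready, key=lambda j: (processes[j][1], -processes[j][0]))
        t += processes[i][1]        # run the chosen process to completion
        completion[i] = t
        done.add(i)
    return completion

print(ljf([(0, 8), (1, 2), (2, 5), (3, 3)]))   # [8, 18, 13, 16]
```

The completion times reproduce the Gantt chart order P1, P3, P4, P2.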

Advantages

● No process can complete until the longest job also reaches its completion.
● All the processes approximately finishes at the same time.

Disadvantages

● The waiting time is high.


● Processes with smaller burst time may starve for CPU.

Round-Robin (RR)

In contrast to FCFS scheduling, round robin scheduling assigns CPU bursts with time quantum
constraints. A timer is started for whatever value has been set for a time quantum when a
process is given access to the CPU. If the process completes its burst before the time
quantum timer expires, it is swapped off the CPU just as in the standard FCFS mechanism; if
the timer expires first, the process is switched out of the CPU and put at the tail end of
the ready queue. Because the ready queue is kept as a circular queue, once every process has
taken a turn the scheduler gives the first process another turn, and so on. Even though the
average wait time with RR scheduling can be greater than with other scheduling algorithms,
the effect of all processes sharing the CPU equally is achieved.

The time quantum that is chosen affects how well RR performs. In the case of a sufficiently
big quantum, RR reduces to the FCFS algorithm; in the case of a very small quantum, each
process receives 1/nth of the CPU and shares it equally.

However, a realistic system incurs overhead for each context switch: the smaller the time
quantum, the more context switches there are. Most modern systems have context-switch times
of about 10 microseconds and time quanta between 10 and 100 milliseconds, so the overhead is
negligible in comparison to the time quantum. Turnaround time also varies with quantum time
in a non-obvious manner.

Turnaround time is generally reduced if the majority of processes complete their subsequent
CPU bursts inside one time quantum. For example, with three processes of 10 ms bursts each,
the average turnaround time for 1 ms quantum is 29, and for 10 ms quantum it reduces to 20.
If it is increased too much, though, RR just transforms into FCFS. A rule of thumb is that 80%
of CPU bursts should be smaller than the time quantum.

Example:

Consider the same set of processes (service times 8, 2, 5 and 3 ms, all arriving at time 0),
keeping the time quantum at 2 ms. The Gantt chart would look like the figure below.

P1 P2 P3 P4 P1 P3 P4 P1 P3 P1

0 2 4 6 8 10 12 13 15 16 18

Waiting time for each process:

Process Burst time/Service Time (ts) Turnaround Time (tt) Waiting Time (tw) (Turnaround time - Service time)
P1 8 18 (18-8)=10
P2 2 4 (4-2)=2
P3 5 16 (16-5)=11
P4 3 13 (13-3)=10
Average (in milliseconds) 12.75ms 8.25ms
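The round-robin behaviour above can be sketched with a circular ready queue (illustrative code, not from the text; all processes are assumed to arrive at time 0):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Round-robin with all processes arriving at t=0.
    Returns per-process turnaround times."""
    remaining = list(bursts)
    turnaround = [0] * len(bursts)
    queue = deque(range(len(bursts)))   # circular ready queue
    t = 0
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])
        t += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)             # quantum expired: back to the tail
        else:
            turnaround[i] = t           # completion - arrival (arrival = 0)
    return turnaround

tt = round_robin([8, 2, 5, 3], quantum=2)
print(tt)                                # [18, 4, 16, 13]
```

The turnaround times correspond to the completion points in the Gantt chart (P2 at 4, P4 at 13, P3 at 16, P1 at 18).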

Advantages

● Round-robin is effective in a general-purpose, time-sharing or transaction-processing
system.
● Doesn’t face the issues of starvation or convoy effect.
● Use Context switching to save states of preempted processes.
● Fair treatment for all the processes.

● Overhead on processor is low.


● If the total number of processes on the run queue is known, the worst-case response time
for a process can also be estimated.
● Good response time for short processes.

Disadvantages

● Care must be taken in choosing quantum value.


● Processing overhead is there in handling clock interrupt.
● Throughput is low if time quantum is too small.
● Round-robin scheduling doesn’t give special priority to more important tasks.
● Average turnaround time can be higher than with SJF-based algorithms.

Priority-based Scheduling

The priority of a process is often the inverse of the CPU burst time in the Shortest Job First
scheduling technique, meaning that the higher the burst time, the lower the priority of that
process. The scheduling is done based on the priority of the process, with the most urgent
process being processed first, followed by the ones with lower priority in order. In the case of
priority scheduling, the priority is not always set as the opposite of the CPU burst time, but
rather it can be internally or externally set.

Processes with the same priority are carried out using FCFS. When a process is internally
defined, its priority may be determined by factors such as memory needs, time constraints, the
number of open files, the ratio of I/O bursts to CPU bursts, etc. In contrast, external priorities
are determined based on factors other than the operating system, such as the significance of the
process, the cost of using the computer's resources etc.

A process's priority can be set using a Static or Dynamic Priority algorithm. With the static
priority algorithm, a process's priority is fixed at creation time and does not alter
throughout the process's existence. The dynamic priority algorithm, however, assigns
priorities to processes during runtime based on their execution parameters, such as
approaching deadlines. Priority-based scheduling can be preemptive or non-preemptive.

● Preemptive Priority Scheduling: If the new process added to the ready queue has a higher
priority than the currently running process, the CPU is preempted: the running process is
stopped and the incoming higher-priority process receives the CPU for its execution.
● Non-Preemptive Priority Scheduling: If a new process enters with a higher priority
than the one that is now running, the non-preemptive priority scheduling algorithm
places the incoming process at the head of the ready queue, meaning it will be executed
after the current process has finished.

Aging with Priority Scheduling

A serious issue with priority scheduling is indefinite blocking, often known as starvation, in
which a low-priority activity may be forced to wait indefinitely because there are always some
other tasks nearby with a higher priority.

To prevent starvation of any process, we can apply the idea of aging, in which the priority
of a low-priority process is continually increased based on its waiting time. For instance,
suppose the aging factor is 0.5 per day of waiting and that lower numbers denote higher
priority. If a process enters the ready queue with priority 20 (a relatively low priority),
then after one day of waiting its priority is raised to 19.5, after two days to 19, and so
forth. By doing so, we can ensure that no process must wait indefinitely for CPU time.

Example:

Process Burst time/Service Time (ts) Arrival Time Priority Turnaround Time (tt) (Exit time - Arrival time) Waiting Time (tw) (Turnaround time - Service time)
P1 8 0 3 11 (11-8)=3
P2 2 1 2 12 (12-2)=10
P3 5 2 1 16 (16-5)=11
P4 3 3 4 3 (3-3)=0
Average (in milliseconds) 10.5ms 6ms

(In this example, a larger priority number means a higher priority.)

Gantt Chart for Preemptive Priority-based Scheduling:

P1 P1 P1 P4 P1 P2 P3

0 1 2 3 6 11 13 18

Process Burst time/Service Time (ts) Arrival Time Priority Turnaround Time (tt) (Exit time - Arrival time) Waiting Time (tw) (Turnaround time - Service time)
P1 8 0 3 8 (8-8)=0
P2 2 1 2 12 (12-2)=10
P3 5 2 1 16 (16-5)=11
P4 3 3 4 8 (8-3)=5
Average (in milliseconds) 11ms 6.5ms

Gantt Chart for Non Preemptive Priority-based Scheduling:

P1 P4 P2 P3

0 8 11 13 18
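A sketch of the non-preemptive variant (illustrative code, not from the source; note that in this example a larger priority number means a higher priority):

```python
def priority_np(processes):
    """Non-preemptive priority scheduling.
    processes = [(arrival, burst, priority)]; a LARGER number means a
    HIGHER priority here. Returns completion times."""
    n = len(processes)
    done, completion, t = set(), [0] * n, 0
    while len(done) < n:
        ready = [i for i, (a, _, _) in enumerate(processes)
                 if a <= t and i not in done]
        if not ready:
            t += 1                  # CPU idles until something arrives
            continue
        i = max(ready, key=lambda j: processes[j][2])  # highest priority
        t += processes[i][1]        # run the chosen process to completion
        completion[i] = t
        done.add(i)
    return completion

# P1..P4 from the table: (arrival, burst, priority)
print(priority_np([(0, 8, 3), (1, 2, 2), (2, 5, 1), (3, 3, 4)]))
# [8, 13, 18, 11]
```

The completion times correspond to the Gantt chart (P1 at 8, P4 at 11, P2 at 13, P3 at 18).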

Advantages

● High-priority processes do not have to wait behind the currently running process for their
chance to execute.
● We are able to define the relative importance / priority of processes.
● It is useful in applications in which the requirements of time and resources fluctuate.

Disadvantages

● The processes with low priorities may become starved because we only operate on those
with high priority. The phenomenon known as "starvation" occurs when a procedure is
perpetually delayed because no resources are ever allotted to it since other processes
are carried out first.
● All of the low priority processes will be lost if the system crashes because they are all
kept in RAM.

Multilevel Queues

There are times when several of the processes in the ready queue are similar to one another.
Comparable how? For example, the processes could be interactive or batch, and due to their
differing response-time requirements, they would have different scheduling requirements. It
seems sensible to classify these related processes into groups.

This is referred to as multilevel queue scheduling: we have distinct queues for various
processes with distinct scheduling requirements. For example, we can have a separate queue
for system processes that schedules them using the FCFS approach, while interactive processes
have their own queue and can be scheduled using SJF or other methods. These queues also have
their own priorities, and the arrangement is known as multilevel queue scheduling.

Figure: Processes in the Priority Queues Source: geeksforgeeks.com



System Processes: System processes are the most important. If a system interrupt is triggered,
the operating system suspends all processes and handles the interrupt. Different reaction time
requirements necessitate distinct scheduling algorithms for the process.
Interactive Processes: Interactive process has medium priority. If we are using VLC player,
we directly interact with the application. So all these processes will come in an interactive
process. These queues will use scheduling algorithm according to requirement.
Batch Processing: The priority of batch processes is the lowest. The processes that operate in
the background automatically are categorised as batch processes. Also referred to as
background processes.

Multilevel queue scheduling is usually implemented as fixed preemptive priority scheduling
between the queues, while the queues may differ in how their processes are scheduled
internally. Consider the situation where the system and interactive queues are empty and the
batch queue is executing its processes. If a new process now arrives in the system or
interactive queue, the queues above the batch queue must be given higher priority, so the
batch process is preempted in favour of the newly arrived one.

Example: Consider the following table representing the Queues based on Priorities such that
Queue1>Queue2>Queue3

Process Burst time/Service Time (ts) Arrival Time Queue Number
P1 8 0 1
P2 2 0 2
P3 5 0 3
P4 3 12 1

Gantt Chart for Multilevel Queue Scheduling:

P1 P2 P3 P4 P3

0 8 10 12 15 18

In this example, the processes P1, P2, and P3 arrive at t=0, but still, P1 runs first as it belongs
to queue number 1, which has a higher priority. After the P1 process is over, the P2 process
runs due to higher priority than P3, and then P3 runs. While the P3 process is running P4
process belonging to queue 1 of higher priority comes. So, the process P3 is stopped, and P4 is
run. After P4 is run and completed, then P3 is resumed.

Advantages:
● It enables us to use various scheduling techniques for various operations.
● It has a small scheduling overhead, meaning that the dispatcher takes little time to
move a process from the ready state to the running state.

Disadvantages:
● For lower priority activities, there is a possibility of starvation. If higher priority
processes keep arriving, the lower priority processes won't have a chance to enter the
operating state.
● Scheduling for multilevel queues is rigid.

Multilevel Feedback Queue Scheduling (MFQS)

With one minor exception, multilevel feedback queues are identical to multilevel queues. In
multilevel queue scheduling, a process that is part of Queue 1 cannot switch and enter
another queue; here it can.
track of how long processes take to complete and switches between them in different queues
as needed. It continually analyses the behaviour (execution time) of processes and modifies its
priority accordingly. To change the priority, we must consider our workload, which consists of
a mix of interactive tasks that are short-running (and may regularly release the CPU) and some
lengthy "CPU-bound" tasks that require a lot of CPU time but where response time is not
crucial.

Basic Rules of MFQS:


• Rule 1: If Priority(A) > Priority(B), A runs (B doesn’t).
• Rule 2: If Priority(A) = Priority(B), A & B run in RR.
● Rule 3: When a job enters the system, it is placed at the highest priority (the topmost
queue).
● Rule 4a: If a job uses up an entire time slice while running, its priority is reduced
(i.e., it moves down one queue).
● Rule 4b: If a job gives up the CPU before the time slice is up, it stays at the same
priority level.
● Rule 5: After some time period S, move all the jobs in the system to the topmost
queue.
Compared to standard multilevel queue scheduling, this type of scheduling is far more versatile and improves response times. If a process uses too much CPU time, it is moved to a lower-priority queue. Similarly, a process that waits too long in a lower-priority queue may be moved to a higher-priority queue. This form of aging prevents starvation. MFQS therefore tries to minimize both the turnaround time and the response time of the processes under execution.

Example:
Consider creating three queues such that:
Queue 1 (Q1): Time Quantum=9ms
Queue 2 (Q2): Time Quantum=18ms
Queue 3 (Q3): FCFS

Figure: Working of Priority Queues Source: geeksforgeeks.com

● When a process begins execution, it enters Queue 1 first.
● In Queue 1, a process executes for 9 ms. If it completes, or yields the CPU for I/O, within these 9 ms, its priority does not change; when it returns to the ready queue, it resumes execution in Queue 1.
● If a process in Queue 1 does not finish within 9 ms, its priority is decreased and it is moved to Queue 2.
● The same rules apply to processes in Queue 2, except that the time quantum is 18 ms. In general, if a process does not complete within its time quantum, it is moved to a queue of lower priority.
● In the final queue, processes are scheduled first-come, first-served (FCFS).
● The scheme described above can vary; for instance, the final queue could instead use Round-Robin scheduling.
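As an illustrative sketch only (not the author's implementation), the queue set-up above — Q1 with a 9 ms quantum, Q2 with 18 ms, and a final FCFS queue — together with Rules 3, 4a and 5 can be simulated in a few lines of Python. The job names, arrival times, burst times, and the 100 ms boost period are made-up values, and I/O is ignored, so Rule 4b never fires here:

```python
from collections import deque

def mfqs(jobs, quanta=(9, 18), boost=100):
    """Toy MFQS simulator. jobs: list of (name, arrival, burst) tuples.
    The top queues are round-robin with the given quanta; the bottom
    queue is FCFS. Returns a list of (completion_time, name) pairs."""
    queues = [deque() for _ in range(len(quanta) + 1)]
    remaining = {name: burst for name, _, burst in jobs}
    pending = sorted(jobs, key=lambda j: j[1])   # jobs not yet arrived
    time, log, next_boost = 0, [], boost

    def admit(now):
        # Rule 3: a newly arrived job enters the topmost queue
        while pending and pending[0][1] <= now:
            queues[0].append(pending.pop(0)[0])

    admit(0)
    while any(queues) or pending:
        if not any(queues):                      # CPU idle: jump to next arrival
            time = pending[0][1]
            admit(time)
        # Rules 1 and 2: serve the highest-priority non-empty queue, RR within it
        level = next(i for i, q in enumerate(queues) if q)
        name = queues[level].popleft()
        if level < len(quanta):
            slice_ = min(quanta[level], remaining[name])
        else:
            slice_ = remaining[name]             # bottom queue: FCFS, run to completion
        time += slice_
        remaining[name] -= slice_
        admit(time)
        if remaining[name] == 0:
            log.append((time, name))
        else:
            # Rule 4a: the job used its entire slice -> drop one priority level
            queues[min(level + 1, len(quanta))].append(name)
        if time >= next_boost:
            # Rule 5: every `boost` ms, move all waiting jobs back to the top queue
            for q in queues[1:]:
                while q:
                    queues[0].append(q.popleft())
            next_boost += boost
    return log

print(mfqs([("A", 0, 20), ("B", 0, 5), ("C", 12, 3)]))
# -> [(14, 'B'), (17, 'C'), (28, 'A')]
```

Job A burns its full 9 ms quantum and is demoted to Q2, while the short jobs B and C finish inside Q1 and overtake it, which is exactly the behaviour the rules are designed to produce.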
Advantages

● MFQS is a flexible scheduling algorithm.
● It allows processes to move between queues.
● It prevents CPU starvation.

Disadvantages

● It is the most complex scheduling algorithm.
● Choosing good values for its parameters (number of queues, time quanta, boost period) requires some other means of tuning.
● Moving processes between queues may add CPU overhead.

Highest Response Ratio Next (HRRN) Scheduling

Highest Response Ratio Next Scheduling is a non-preemptive scheduling algorithm. It offers the advantages of the Shortest Job First algorithm while eliminating its main limitation, the starvation of long jobs. Among all scheduling algorithms, it is one of the most efficient. In HRRN, the scheduling of jobs is determined by an extra parameter known as the Response Ratio: we compute the response ratio for every ready process, and the process with the highest response ratio is given the highest priority.
The formula to calculate the Response Ratio is:

Response Ratio=(W+S)/S
Where,
W is the Waiting Time and
S is the Service Time or Burst Time
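The formula can be turned into a two-line helper; as a sketch (not part of any standard API), the waiting and service times plugged in here are the ones that arise at time = 10 in the worked example that follows:

```python
def response_ratio(waiting, service):
    """Response Ratio = (W + S) / S; it is 1 for a job that has not waited
    at all and grows the longer the job sits in the ready queue."""
    return (waiting + service) / service

# Ratios at time = 10 in the worked example below (waiting = 10 - arrival):
ready = {"P3": (10 - 4, 5), "P4": (10 - 6, 2), "P5": (10 - 8, 4)}
ratios = {name: response_ratio(w, s) for name, (w, s) in ready.items()}
print(ratios)                       # {'P3': 2.2, 'P4': 3.0, 'P5': 1.5}
print(max(ratios, key=ratios.get))  # P4 is scheduled next
```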

Example:
Process   Burst/Service   Arrival   Turnaround Time (tt)         Waiting Time (tw)
          Time (ts)       Time      = Exit time - Arrival time   = Turnaround time - Service time
P1        3               0         3                            0
P2        7               2         8                            1
P3        5               4         13                           8
P4        2               6         6                            4
P5        4               8         13                           9
Average (in milliseconds)           8.6 ms                       4.4 ms

Gantt Chart:

P1 P2 P4 P3 P5

0 3 10 12 17 21

● Initially, at time=0, only process P1 is in the ready queue, so P1 completes its execution first.
● After P1, at time=3, only process P2 has arrived, so P2 is executed because the operating system has no other option.
● At time=10, processes P3, P4, and P5 are in the ready queue. To schedule the next process after P2, we calculate the response ratio for each of them.

Response Ratio = (W + S)/S

RR(P3) = [(10-4) + 5]/5 = 2.2
RR(P4) = [(10-6) + 2]/2 = 3
RR(P5) = [(10-8) + 4]/4 = 1.5

As process P4 has the highest response ratio, P4 is scheduled after P2.
● Then two processes, P3 and P5, remain in the ready queue, so we again calculate their response ratios, now at time=12:
RR(P3) = [(12-4) + 5]/5 = 2.6
RR(P5) = [(12-8) + 4]/4 = 2
● Process P3 has the higher response ratio, so P3 is executed next.

● After process P3 completed its execution, only process P5 was left in the ready queue, so P5 was executed last.
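The whole walkthrough can be checked with a small non-preemptive HRRN simulator (a sketch written for this example, not a library routine); it reproduces the Gantt chart above:

```python
def hrrn(jobs):
    """Non-preemptive HRRN: at every scheduling point, run the ready job
    with the highest response ratio (W + S) / S to completion.
    jobs: list of (name, arrival_time, burst_time) tuples."""
    time, order, done = 0, [], set()
    while len(done) < len(jobs):
        ready = [j for j in jobs if j[0] not in done and j[1] <= time]
        if not ready:                       # CPU idle: jump to the next arrival
            time = min(j[1] for j in jobs if j[0] not in done)
            continue
        # response ratio of job j at the current time: (waiting + burst) / burst
        name, _, burst = max(ready, key=lambda j: (time - j[1] + j[2]) / j[2])
        time += burst                       # run the chosen job to completion
        order.append((name, time))          # record (process, exit time)
        done.add(name)
    return order

jobs = [("P1", 0, 3), ("P2", 2, 7), ("P3", 4, 5), ("P4", 6, 2), ("P5", 8, 4)]
print(hrrn(jobs))  # [('P1', 3), ('P2', 10), ('P4', 12), ('P3', 17), ('P5', 21)]
```

The completion times 3, 10, 12, 17, 21 match the boundaries of the Gantt chart, confirming the hand calculation.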

Advantages

● HRRN scheduling performs better than Shortest Job First scheduling.
● HRRN scheduling decreases job wait times while still favouring shorter jobs.
● It increases throughput.

Disadvantages

● Because the burst time of every process cannot be determined in advance, HRRN scheduling cannot be implemented exactly in practice; burst times must be estimated.
● Computing the response ratio for every ready process at each scheduling decision adds processor overhead.
