1. INTRODUCTION TO OS
An Operating System performs the following major functions:
1) Memory Management
2) Processor Management
3) Device Management
4) File Management
5) Security
6) Control over system performance
7) Job accounting
8) Error detecting aids
9) Coordination between other software and users
1. MEMORY MANAGEMENT
Memory management refers to the management of primary memory, or main memory. Main
memory is a large array of words or bytes, where each word or byte has its own address.
Main memory provides fast storage that can be accessed directly by the CPU. For a program to
be executed, it must be in main memory. An Operating System does the following activities for
memory management:
Keeps track of primary memory, i.e., which parts of it are in use and by whom, and which
parts are not in use.
In multiprogramming, the OS decides which process will get memory when and how
much.
Allocates the memory when a process requests it to do so.
De-allocates the memory when a process no longer needs it or has been terminated.
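The bookkeeping above can be sketched with a toy first-fit allocator. This is only an illustration of the idea; the class and method names are made up, not taken from any real kernel:

```python
# A toy first-fit allocator: a sketch of how an OS might track which
# parts of primary memory are in use and by whom. All names here are
# illustrative, not from any real operating system.

class MemoryManager:
    def __init__(self, size):
        # Free list of (start, length) holes; initially one big hole.
        self.free = [(0, size)]
        self.allocated = {}          # pid -> (start, length)

    def allocate(self, pid, length):
        """Give `pid` the first hole large enough (first fit)."""
        for i, (start, hole) in enumerate(self.free):
            if hole >= length:
                self.free[i] = (start + length, hole - length)
                if self.free[i][1] == 0:
                    del self.free[i]
                self.allocated[pid] = (start, length)
                return start
        raise MemoryError("no hole large enough")

    def deallocate(self, pid):
        """Return a process's block to the free list when it terminates."""
        start, length = self.allocated.pop(pid)
        self.free.append((start, length))
        self.free.sort()             # keep holes ordered by address

mm = MemoryManager(100)
mm.allocate("p1", 30)    # p1 gets addresses 0..29
mm.allocate("p2", 50)    # p2 gets addresses 30..79
mm.deallocate("p1")      # addresses 0..29 become a hole again
```

A real memory manager would also merge adjacent free holes and handle protection; this sketch only shows the keep-track / allocate / de-allocate cycle described above.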
2. PROCESS MANAGEMENT
In a multiprogramming environment, the OS decides which process gets the processor, when, and
for how much time. This function is called process scheduling. An Operating System does the
following activities for processor management:
Keeps track of the processor and the status of each process. The program responsible for this
task is known as the traffic controller.
Allocates the processor (CPU) to a process.
De-allocates the processor when the process no longer requires it.
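The allocate-run-deallocate cycle can be sketched as a simple round-robin scheduling loop. The process names, burst times, and quantum below are made-up values for illustration:

```python
# A sketch of round-robin process scheduling: the scheduler picks the
# next ready process; each process runs for at most one time quantum
# before the CPU is taken away and given to the next one.

from collections import deque

def round_robin(bursts, quantum):
    """bursts: dict mapping process name -> CPU time needed.
    Returns the order in which (process, time_run) slices execute."""
    ready = deque(bursts)            # FIFO ready queue
    remaining = dict(bursts)
    timeline = []
    while ready:
        pid = ready.popleft()        # allocate the CPU to a process
        run = min(quantum, remaining[pid])
        remaining[pid] -= run
        timeline.append((pid, run))
        if remaining[pid] > 0:
            ready.append(pid)        # preempted: back of the queue
        # else: the process finished and the CPU is de-allocated
    return timeline

print(round_robin({"A": 5, "B": 3, "C": 1}, quantum=2))
# A runs 2, B runs 2, C runs 1, A runs 2, B runs 1, A runs 1
```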
3. DEVICE MANAGEMENT
An Operating System manages device communication via their respective drivers. It does the
following activities for device management:
Keeps track of all devices. The program responsible for this task is known as the
I/O controller.
Decides which process gets the device when and for how much time.
Allocates the device in the most efficient way.
De-allocates devices.
4. File Management
A file system is normally organized into directories/folders for easy navigation and usage. These
directories may contain files and other directories.
An Operating System does the following activities for file management:
Keeps track of information: location, uses, status, etc. These collective facilities are often
known as the file system.
Decides who gets the resources.
Allocates the resources.
De-allocates the resources.
Other Important Activities
Following are some of the important activities that an Operating System performs:
5. SECURITY -- By means of passwords and similar other techniques, it prevents unauthorized
access to programs and data.
6. CONTROL OVER SYSTEM PERFORMANCE -- Recording delays between request for
a service and response from the system.
7. JOB ACCOUNTING -- Keeping track of time and resources used by various jobs and users.
8. ERROR DETECTING AIDS -- Production of dumps, traces, error messages, and other
debugging and error detecting aids.
9. COORDINATION BETWEEN OTHER SOFTWARE AND USERS -- Coordination and
assignment of compilers, interpreters, assemblers and other software to the various users of
the computer systems.
GENERATIONS OF OPERATING SYSTEMS
Early computing systems had no operating system. Users had complete access to the machine
language and hand-coded all instructions. (In the 1940s there were no operating systems.)
The second generation saw the beginning of batch processing systems, in which jobs were
gathered into groups or batches.
Third generation operating systems were multimode systems. Some of them simultaneously
supported the following:
Batch processing
Time sharing
Real-time processing
Multiprocessing
These systems were large and expensive.
Fourth generation operating systems support networking.
They are user friendly: menu-driven systems guide the user through various options expressed
in simple English.
The concept of virtual machines has become widely used.
Database systems have gained central importance.
The concept of distributed data processing has become firmly entrenched.
Spooling –
Spooling stands for Simultaneous Peripheral Operation On-Line. A spool is similar to a
buffer in that it holds jobs for a device until the device is ready to accept them. Spooling
treats the disk as a huge buffer that can store as many jobs as needed until the
output devices are ready to accept them.
Buffering –
Main memory has an area called a buffer that is used to hold data temporarily while it is
being transmitted, either between two devices or between a device and an application.
Buffering is the act of storing data temporarily in this buffer. It helps match the speed of
the data stream between the sender and receiver: if the sender transmits faster than the
receiver can accept, a buffer in the receiver's main memory accumulates the bytes received
from the sender, and vice versa.
The basic difference between spooling and buffering is that spooling overlaps the
input/output of one job with the execution of another job, while buffering overlaps the
input/output of one job with the execution of the same job.
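Buffering between a sender and a receiver can be sketched with a small bounded buffer in memory. The buffer size and the data below are arbitrary choices for illustration:

```python
# A minimal sketch of buffering: a sender and a receiver communicate
# through a fixed-size in-memory buffer, so neither has to wait for
# the other byte-by-byte. The buffer size and data are arbitrary.

import queue
import threading

buf = queue.Queue(maxsize=4)         # the in-memory buffer

def sender():
    for byte in b"spooling":
        buf.put(byte)                # blocks if the buffer is full
    buf.put(None)                    # sentinel: end of stream

received = []

def receiver():
    while True:
        byte = buf.get()             # blocks if the buffer is empty
        if byte is None:
            break
        received.append(byte)

t1 = threading.Thread(target=sender)
t2 = threading.Thread(target=receiver)
t1.start(); t2.start()
t1.join(); t2.join()
print(bytes(received))               # b'spooling'
```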
8. Semaphore - A semaphore is simply a variable. This variable is used to solve the critical
section problem and to achieve process synchronization in a multiprocessing environment.
The two most common kinds of semaphores are counting semaphores and
binary semaphores.
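A binary semaphore guarding a critical section can be sketched with Python's threading.Semaphore; the shared counter and the iteration counts below are arbitrary values for illustration:

```python
# A binary semaphore protecting a critical section: two threads
# increment a shared counter, and acquire()/release() play the roles
# of the classic wait()/signal() operations.

import threading

mutex = threading.Semaphore(1)       # binary semaphore: 1 = free
counter = 0

def worker():
    global counter
    for _ in range(100_000):
        mutex.acquire()              # wait(): enter critical section
        counter += 1                 # only one thread at a time here
        mutex.release()              # signal(): leave critical section

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)                       # 200000 -- no lost updates
```

A counting semaphore works the same way but is initialized to a value greater than 1, allowing that many threads into the protected region at once.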
9. The main function of the dispatcher (the portion of the process scheduler) is assigning a
ready process to the CPU. The key difference between the scheduler and the dispatcher is that
the scheduler selects a process, out of several processes, to be executed, while
the dispatcher allocates the CPU to the process selected by the scheduler.
10. Firmware - is data stored in a computer or other hardware device's ROM (read-only
memory) that provides instructions on how that device should operate.
11. Wild card - A special symbol that stands for one or more characters. Many operating
systems and applications support wildcards for identifying files and directories. This enables
you to select multiple files with a single specification. For example, in DOS and Windows,
the asterisk (*) is a wild card that stands for any combination of letters. The file specification
m*
therefore refers to all files that begin with m. Similarly, the specification
m*.doc
refers to all files that start with m and end with .doc.
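The behaviour of the * wild card can be demonstrated with Python's fnmatch module, which implements the same shell-style pattern matching. The file names below are made up for illustration:

```python
# Demonstrating the '*' wild card with shell-style glob matching.
# fnmatch.filter returns the names that match the given pattern.

import fnmatch

files = ["memo.txt", "minutes.doc", "map.doc", "notes.doc", "m.doc"]

print(fnmatch.filter(files, "m*"))
# all files beginning with m: ['memo.txt', 'minutes.doc', 'map.doc', 'm.doc']

print(fnmatch.filter(files, "m*.doc"))
# files beginning with m and ending in .doc: ['minutes.doc', 'map.doc', 'm.doc']
```

Note that * also matches the empty string, which is why m.doc matches the pattern m*.doc.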
Question: What is the difference between a keyword search using the '*' (asterisk)
versus a keyword search using the '%' (percent)? Both work in the catalog, but return
different sets. Why?
Answer: A wildcard is a character (*, ?, %, .) that can be used to represent one or more
characters in a word. Two of the wildcard characters that can be used in Koha
searches are the asterisk ('*') and the percent sign ('%'). However, these two
characters act differently when used in searching.
The '*' forces a more exact search of the first few characters you enter prior
to the '*'. The asterisk allows for an infinite number of characters in the search as
long as the first few characters designated by your search remain the same. For
example, searching for authors using the term Smi* will return a list that may
include Smith, Smithers, Smithfield, Smiley, etc., depending on the authors in your
database.
The '%' treats the words you enter in terms of "is like". So a search of Smi%
will search for words like Smi. This results in a much more varied results list. For
example, a search on Smi% will return a list containing Smothers, Smith, Smelley,
Smithfield, and many others, depending on what is in your database.
The bottom line in searching with wildcards: '*' is more exact, while '%' searches for
like terms.
1) MONOLITHIC OPERATING SYSTEM
A monolithic kernel is an operating system architecture in which the entire operating system
runs in kernel space and is alone in supervisor mode. The operating system runs in a privileged
processor mode with access to system data and to the hardware; applications run in
non-privileged mode with a limited set of interfaces available and with limited access to
system data.
Its drawbacks: there is no protection between kernel components, it is not (safely or easily)
extensible, and the overall structure becomes complicated (no clear boundaries between
modules).
2. It allows good maintainability: you can make changes within a layer without affecting the
layer interfaces.
3) Back-up in a client-server set-up can be managed centrally and efficiently, while in
peer-to-peer computing a back-up has to be taken at every workstation.
4) Upgrading and scalability in a client-server set-up: changes can be made easily by just
upgrading the server. New resources and systems can also be added by making the necessary
changes on the server.
5) Accessibility: the server can be accessed remotely from various platforms in the network.
6) As new information is uploaded to the database, each workstation need not have its own
storage capacity increased (as may be the case in peer-to-peer systems). All changes are made
only on the central computer that hosts the server database.
7) Security: Rules defining security and access rights can be defined at the time of set-up of
server.
The disadvantages of client-server computing are as follows:
1) Congestion in network: too many requests from the clients may lead to congestion, which
rarely takes place in a P2P network. Overload can lead to servers breaking down. In
peer-to-peer, the total bandwidth of the network increases as the number of peers increases.
2) Client-server architecture is not as robust as P2P: if the server fails, the whole network
goes down. Also, if you are downloading a file from the server and the transfer is abandoned
due to some error, the download stops altogether. In a P2P network, other peers could have
supplied the missing parts of the file.
3) Cost: It is very expensive to install and manage this type of computing.
4) You need professional IT people to maintain the servers and other technical details of
network.
To speed up processing, jobs with similar needs are batched together and run as a group. The
programmers leave their programs with the operator and the operator then sorts the programs
with similar requirements into batches.
The processors communicate with one another through various communication lines (such as
high-speed buses or telephone lines). These are referred to as loosely coupled systems or
distributed systems.
The advantages of distributed systems are as follows:
With resource sharing facility, a user at one site may be able to use the resources
available at another.
Speeds up the exchange of data via electronic mail.
If one site fails in a distributed system, the remaining sites can potentially continue
operating.
Better service to the customers.
Reduction of the load on the host computer.
Reduction of delays in data processing.
Examples of network operating systems include Microsoft Windows Server 2003, Microsoft
Windows Server 2008, UNIX, Linux, Mac OS X, Novell NetWare, and BSD.
A real-time system is defined as a data processing system in which the time interval required to
process and respond to inputs is so small that it controls the environment. The time taken by the
system to respond to an input and display the required updated information is termed the
response time. So, in this method, the response time is very small compared to online
processing.
There are two types of real-time operating systems.
Hard real-time systems
Hard real-time systems guarantee that critical tasks complete on time. In hard real-time systems,
secondary storage is limited or missing and the data is stored in ROM. In these systems, virtual
memory is almost never found.
Soft real-time systems
Soft real-time systems are less restrictive. A critical real-time task gets priority over other tasks
and retains that priority until it completes. Soft real-time systems have more limited utility than
hard real-time systems.
JOB CONTROL
a) Command languages
b) Job control languages
c) System messages
1. COMMAND LANGUAGES
A computer programming language composed chiefly of a set of commands or operators, used
especially for communicating with the operating system of a computer.
2. JOB CONTROL LANGUAGES
This is a scripting language used on IBM mainframe operating systems to instruct the system on
how to run a batch job or a subsystem.
The purpose of job control language is to say which programs run, using which files or devices
for input or output.
3. SYSTEM MESSAGES
System messages are a form of communication between objects, processes, or other resources,
used in object-oriented programming, inter-process communication, and parallel computing.