unit1_OPERATING_SYSTEM
Kernel functions are used constantly by the system, so they are always kept in memory. Non-kernel
functions are stored on the hard disk and retrieved whenever required.
Views of OS
Operating System can be viewed from two viewpoints–
User views & System views
1. User Views:-
The user’s view of the operating system depends on the type of user.
i. If the user is using a standalone system, then the OS is designed for ease of use
and high performance. Here resource utilization is not given importance.
ii. If the users are at workstations connected to networks and servers, then each
user has a system unit of their own and shares resources and files
with other systems. Here the OS is designed for both ease of use and resource
availability (files).
iii. Users of handheld systems expect the OS to be designed for ease of use
and performance per amount of battery life.
iv. Other systems, like embedded systems used in home devices (such as washing machines)
and automobiles, have little or no user interaction. Some LEDs show the
status of the work.
2. System Views:-
Operating system can be viewed as a resource allocator and control program.
i. Resource allocator - The OS acts as a manager of hardware and software resources. CPU
time, memory space, file-storage space, I/O devices, shared files, etc. are the different
resources required during the execution of a program. There can be conflicting requests for these
resources from different programs running on the same system. The OS assigns the resources to the
requesting programs depending on priority.
ii. Control Program – The OS is a control program that manages the execution of
user programs to prevent errors and improper use of the computer.
When the system is switched on, the ‘bootstrap’ program is executed. It is the initial program to run in
the system. This program is stored in read-only memory (ROM) or in electrically erasable
programmable read-only memory (EEPROM). It initializes the CPU registers, memory, device
controllers, and other initial setups. The program also locates the OS kernel and loads it into
memory. The OS then starts the first process to be executed (i.e., the ‘init’ process) and waits
for interrupts to occur.
When the CPU is interrupted, it stops what it is doing and immediately transfers execution
to a fixed location. The fixed location (Interrupt Vector Table) contains the starting address where
the service routine for the interrupt is located. After the execution of interrupt service routine, the
CPU resumes the interrupted computation.
Interrupts are an important part of computer architecture. Each computer design has its own
interrupt mechanism, but several functions are common. The interrupt must transfer control to the
appropriate interrupt service routine.
[Figure: on an interrupt, the processor uses the Interrupt Vector Table (IVT), stored at a fixed location, to transfer control to the appropriate interrupt service routine.]
Storage Structure
Computer programs must be in main memory (RAM) to be executed. Main memory is the large
memory that the processor can access directly. It commonly is implemented in a semiconductor
technology called dynamic random-access memory (DRAM). Computers provide Read Only
Memory (ROM), whose data cannot be changed.
All forms of memory provide an array of memory words. Each word has its own address.
Interaction is achieved through a sequence of load or store instructions to specific memory
addresses.
A typical instruction-execution cycle, as executed on a system with a Von Neumann
architecture, first fetches an instruction from memory and stores that instruction in the instruction
register. The instruction is then decoded and may cause operands to be fetched from memory and
stored in some internal register. After the instruction on the operands has been executed, the result
may be stored back in memory.
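To make the cycle concrete, here is a toy fetch-decode-execute loop in C. It is a sketch only: the four opcodes, the two-word instruction format, and the 16-byte shared program/data memory are invented for this illustration and do not model any real instruction set.

```c
#include <stdio.h>
#include <stdint.h>

/* Invented opcodes for a toy von Neumann machine. */
enum { OP_HALT = 0, OP_LOAD = 1, OP_ADD = 2, OP_STORE = 3 };

int main(void) {
    /* Program and data share one memory, as in a von Neumann design.
       Each instruction is two words: opcode, operand address. */
    uint8_t mem[16] = {
        OP_LOAD, 10,   /* acc = mem[10]  */
        OP_ADD, 11,    /* acc += mem[11] */
        OP_STORE, 12,  /* mem[12] = acc  */
        OP_HALT, 0,
        0, 0, 5, 7, 0, 0, 0, 0
    };
    uint8_t pc = 0, acc = 0;

    for (;;) {
        uint8_t ir = mem[pc];        /* fetch into the instruction register */
        uint8_t addr = mem[pc + 1];  /* fetch the operand address           */
        pc += 2;
        switch (ir) {                /* decode and execute                  */
        case OP_LOAD:  acc = mem[addr];  break;  /* operand fetched from memory */
        case OP_ADD:   acc += mem[addr]; break;
        case OP_STORE: mem[addr] = acc;  break;  /* result stored back          */
        case OP_HALT:  printf("mem[12] = %d\n", mem[12]); return 0;
        }
    }
}
```

Running it fetches 5 and 7 from data memory, adds them, and stores 12 back, mirroring the fetch, decode, operand-fetch, execute, and store-back steps described above.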
Ideally, we want the programs and data to reside in main memory permanently. This
arrangement usually is not possible for the following two reasons:
1. Main memory is usually too small to store all needed programs and data permanently.
2. Main memory is a volatile storage device that loses its contents when power is turned off.
Thus, most computer systems provide secondary storage as an extension of main memory.
The main requirement for secondary storage is that it will be able to hold large quantities of data
permanently.
Solid-state drive (SSD) is a solid-state storage device that uses integrated circuit assemblies
as memory to store data. SSD is also known as a solid-state disk although SSDs do not have physical
disks. There are no moving mechanical components in SSD. This makes them different from
conventional electromechanical drives such as Hard Disk Drives (HDDs) or floppy disks, which
contain movable read/write heads and spinning disks. SSDs are typically more resistant to physical
shock, run silently, and have quicker access times and lower latency than electromechanical
devices.
The most common secondary-storage device is a magnetic disk, which provides storage for
both programs and data. Most programs are stored on a disk until they are loaded into
memory. Many programs then use the disk as both a source and a destination of the information for
their processing.
The wide variety of storage systems in a computer system can be organized in a hierarchy
as shown in the figure, according to speed, cost and capacity. The higher levels are expensive,
but they are fast. As we move down the hierarchy, the cost per bit generally decreases, whereas the
access time and the capacity of storage generally increase.
In addition to differing in speed and cost, the various storage systems are either volatile
or nonvolatile. Volatile storage loses its contents when the power to the device is removed. In
the absence of expensive battery and generator backup systems, data must be written to
nonvolatile storage for safekeeping. In the hierarchy shown in figure, the storage systems above
the electronic disk are volatile, whereas those below are nonvolatile.
I/O Structure
A large portion of operating system code is dedicated to managing I/O, both because of
its importance to the reliability and performance of a system and because of the varying nature of
the devices.
Every device has a device controller, which maintains a local buffer and a set of special-purpose
registers. The device controller is responsible for moving data between the peripheral device it
controls and its local buffer. The operating system has a device driver for each device controller.
To start an I/O operation, the device driver loads the appropriate registers within the device
controller. The device controller examines the contents of these registers to determine
what action to take (such as "read a character from the keyboard"). The controller starts
the transfer of data from the device to its local buffer. Once the transfer of data is complete,
the device controller informs the device driver (OS) via an interrupt that it has finished its operation.
The device driver then returns control to the operating system, along with the data.
For other operations, the device driver returns status information.
This form of interrupt-driven I/O is fine for moving small amounts of data, but
it produces high overhead for bulk data movement. To solve this problem, direct memory access
(DMA) is used.
• DMA is used for high-speed I/O devices, able to transmit information at close to
memory speeds
• Device controller transfers blocks of data from buffer storage directly to main
memory without CPU intervention
• Only one interrupt is generated per block, rather than the one interrupt per byte
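As an illustration of the driver's side of DMA, the sketch below programs one block transfer on a hypothetical memory-mapped controller. The register layout, the DMA_START flag, and the function names are all invented for this example; real controllers define their own registers, but the pattern (set source, destination, and count, start the transfer, then handle one interrupt for the whole block) is the same.

```c
#include <stdint.h>

/* Hypothetical memory-mapped DMA controller registers; the layout is
   invented for illustration only. */
struct dma_regs {
    volatile uint64_t src;     /* device buffer address          */
    volatile uint64_t dst;     /* destination in main memory     */
    volatile uint32_t count;   /* number of bytes to transfer    */
    volatile uint32_t control; /* writing DMA_START begins DMA   */
};

#define DMA_START 0x1u

/* A device driver would program one block transfer like this. The
   controller then moves the whole block without CPU involvement and
   raises a single interrupt when `count` bytes have been copied. */
static void dma_start_read(struct dma_regs *dma, uint64_t dev_buf,
                           void *mem_buf, uint32_t nbytes) {
    dma->src = dev_buf;
    dma->dst = (uint64_t)(uintptr_t)mem_buf;
    dma->count = nbytes;
    dma->control = DMA_START;   /* one interrupt per block, not per byte */
}

int main(void) {
    static struct dma_regs fake;  /* stand-in for real MMIO registers */
    char buf[512];
    dma_start_read(&fake, 0x1000, buf, sizeof buf);
    return 0;
}
```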
Computer System Architecture
Categorized roughly according to the number of general-purpose processors used –
Single-Processor Systems –
Most systems use a single processor. The variety of single-processor systems
ranges from PDAs through mainframes. On a single-processor system, there is one main
CPU capable of executing instructions from user processes. Such systems may also contain
special-purpose processors, in the form of device-specific processors, for devices such as disk,
keyboard, and graphics controllers.
All special-purpose processors run limited instruction sets and do not run user
processes. They are managed by the operating system, which sends
them information about their next task and monitors their status.
For example, a disk-controller processor implements its own disk queue and
scheduling algorithm, thus reducing the load on the main CPU. A special processor in the
keyboard converts the keystrokes into codes to be sent to the CPU.
Multiprocessor systems have three main advantages:
1. Increased throughput - By increasing the number of processors, more work is
done in less time.
2. Economy of scale - Multiprocessor systems can cost less than an equivalent number
of single-processor systems. As multiprocessor systems share
peripherals, mass storage, and power supplies, the cost of implementing such a
system is economical. If several processes are working on the same data, the data
can also be shared among them.
3. Increased reliability - In multiprocessor systems, functions are shared among
several processors. If one processor fails, the system is not halted; it only slows
down. The job of the failed processor is taken up by the other processors.
Two techniques to maintain ‘increased reliability’ are graceful
degradation and fault tolerance.
Graceful degradation – As there are multiple processors, when one
processor fails the other processors take up its work, and the system
degrades gradually rather than failing outright. Fault tolerance – When one
processor fails, its operations are stopped; the failure is then detected,
diagnosed, and, if possible, corrected.
The HP NonStop system uses both hardware and software duplication to
ensure continued operation despite faults. The system consists of multiple pairs
of CPUs. Both processors in a pair execute the same instructions and compare the
results. If the results differ, then one CPU of the pair is at fault, and both are
halted. The process that was being executed is then moved to another pair of
CPUs, and the instruction that failed is restarted. This solution is expensive,
since it involves special hardware and considerable hardware duplication.
In symmetric multiprocessing (SMP), there is no master-slave relationship. Each processor has its
own registers and cache; only memory is shared.
The benefit of this model is that many processes can run simultaneously. N
processes can run if there are N CPUs, without causing a significant
deterioration of performance. Operating systems like Windows, Windows XP,
Mac OS X, and Linux now provide support for SMP.
Multicore Systems:
A recent trend in CPU design is to include multiple computing cores on a single chip.
Such systems are called multicore systems. They are more efficient than multiple
chips with single cores, since communication between cores on one chip is
faster than communication between two separate processors.
Clustered Systems
Clustered systems are two or more individual systems connected together via a network
and sharing software resources. Clustering provides high availability of resources and
services. The service will continue even if one or more systems in the cluster fail. High
availability is generally obtained by keeping a copy of files (software resources) on other
systems in the cluster.
Other forms of clusters include parallel clusters and clustering over a wide-area network
(WAN). Parallel clusters allow multiple hosts to access the same data on shared
storage. Cluster technology is changing rapidly with the help of storage-area
networks (SANs). Using a SAN, resources can be shared among dozens of systems in a cluster,
even systems that are separated by miles.
Operating-System Structure
One of the most important aspects of operating systems is the ability to
multiprogram. A single user cannot keep either the CPU or the I/O devices busy at all
times.
Multiprogramming increases CPU utilization by organizing jobs so that the CPU
always has one job to execute.
The operating system keeps several jobs in memory simultaneously, as shown in the figure. This
set of jobs is a subset of the jobs kept in the job pool (in secondary memory), since the number
of jobs that can be kept simultaneously in memory is usually smaller than the number of jobs
that can be kept in the job pool. The operating system picks and begins to execute
one of the jobs in memory. Eventually, the job may have to wait for some task, such as an
I/O operation, to complete. In a non-multiprogrammed system, the CPU would sit idle. In a
multiprogrammed system, the operating system simply switches to, and executes, another
job. When that job needs to wait, the CPU is switched to another job, and so on. Eventually,
the first job finishes waiting and gets the CPU back. Thus the CPU is never idle.
[Figure: the job pool on disk, a subset of which is loaded into main memory.]
Multiprocessor Systems:
A multiprocessor system is a computer system having two or more CPUs within a single
computer, each sharing main memory and peripherals. Multiple programs are
executed by multiple processors in parallel.
Distributed Systems
Individual systems that are connected and share the resources available in a network are
called a distributed system. Access to a shared resource increases computation speed,
functionality, data availability, and reliability.
A network is a communication path between two or more systems. Distributed
systems depend on networking for their functionality. Networks vary by the protocols
used, the distances between nodes, and the transport media. TCP/IP is the most common
network protocol. Most operating systems support TCP/IP.
Networks are characterized based on the distances between their nodes. A local-
area network (LAN) connects computers within a room, a floor, or a building. A wide-
area network (WAN) usually links buildings, cities, or countries. A global company
may have a WAN to connect its offices worldwide. A metropolitan-area network
(MAN) links buildings within a city. A small-area network connects systems within a
few feet using wireless technology, e.g., Bluetooth and 802.11.
The media that carry networks also vary: copper wires, fiber strands, and
wireless transmissions between satellites, microwave dishes, and radios.
a) Dual-Mode Operation
Since the operating system and the user programs share the hardware and software
resources of the computer system, it has to be ensured that an error in a user
program cannot cause problems for other programs or for the operating system running on
the system.
The approach taken is to use hardware support that allows us to differentiate
among various modes of execution.
A hardware bit of the computer, called the mode bit, is used to indicate the current mode:
kernel (0) or user (1). With the mode bit, we are able to distinguish between a task that
is executed by the operating system and one that is executed by the user.
When the computer system is executing a user application, the system is in user
mode. When a user application requests a service from the operating system (via a system
call), the transition from user to kernel mode takes place.
At system boot time, the hardware starts in kernel mode. The operating system is then
loaded and starts user applications in user mode. Whenever a trap or interrupt occurs, the
hardware switches from user mode to kernel mode (that is, changes the mode bit from 1
to 0). Thus, whenever the operating system gains control of the computer, it is in kernel
mode.
The dual mode of operation provides us with the means for protecting the
operating system from errant users—and errant users from one another.
The hardware allows privileged instructions to be executed only in kernel mode.
If an attempt is made to execute a privileged instruction in user mode, the hardware
does not execute the instruction but rather treats it as illegal and traps it to the operating
system. The instruction to switch to kernel mode is an example of a privileged
instruction.
Initial control is within the operating system, where instructions are executed in
kernel mode. When control is given to a user application, the mode is set to user mode.
Eventually, control is switched back to the operating system via an interrupt, a trap, or a
system call.
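A minimal user-mode illustration, assuming a POSIX system: even an innocuous library call such as getpid() crosses the user/kernel boundary described above.

```c
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* getpid() is a thin library wrapper around a system call: it issues
       a trap, the hardware flips the mode bit to kernel (0), the kernel's
       service routine runs, and control returns here in user mode (1). */
    pid_t pid = getpid();
    printf("pid %ld obtained via a user-to-kernel-to-user mode switch\n",
           (long)pid);
    return 0;
}
```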
b) Timer
The operating system uses a timer to maintain control of the CPU. A user program cannot
be allowed to hold the CPU for too long; the timer prevents this.
A timer can be set to interrupt the computer after a specified period. The period
may be fixed (for example, 1/60 second) or variable (for example, from 1 millisecond
to 1 second).
Fixed timer – After a fixed time, the process under execution is interrupted.
Before switching to user mode, the operating system ensures that the timer is set to
interrupt. When the timer interrupts, control transfers automatically to the operating system,
which may treat the interrupt as a fatal error or may give the program more time.
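A user-level analogy, assuming a POSIX system: setitimer() arms a timer that delivers a SIGALRM to the process, much as the hardware timer interrupts the CPU and hands control back to the operating system.

```c
#include <signal.h>
#include <sys/time.h>
#include <unistd.h>

/* Handler that runs when the timer "interrupt" (SIGALRM) fires. */
static void on_timer(int sig) {
    (void)sig;
    write(STDOUT_FILENO, "timer fired\n", 12);  /* async-signal-safe */
}

int main(void) {
    struct itimerval t = {
        .it_interval = { .tv_sec = 1, .tv_usec = 0 },  /* re-arm every second */
        .it_value    = { .tv_sec = 1, .tv_usec = 0 }   /* first expiry in 1 s */
    };
    signal(SIGALRM, on_timer);
    setitimer(ITIMER_REAL, &t, NULL);
    for (int i = 0; i < 3; i++)
        pause();  /* wait; each SIGALRM interrupts the wait */
    return 0;
}
```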
Process Management
A program under execution is a process. A process needs resources like CPU time,
memory, files, and I/O devices for its execution. These resources are given to the process
when it is created or at run time. When the process terminates, the operating system
reclaims the resources.
The program stored on a disk is a passive entity and the program under
execution is an active entity. A single-threaded process has one program counter
specifying the next instruction to execute. The CPU executes one instruction of the
process after another, until the process completes. A multithreaded process has
multiple program counters, each pointing to the next instruction to execute for a given
thread.
The operating system is responsible for the following activities in connection with
process management:
• Scheduling process and threads on the CPU
• Creating and deleting both user and system processes
• Suspending and resuming processes
• Providing mechanisms for process synchronization
• Providing mechanisms for process communication
Memory Management
Main memory is a large array of words or bytes. Each word or byte has its own
address. Main memory is the storage device which can be easily and directly accessed
by the CPU. As the program executes, the central processor reads instructions and also
reads and writes data from main memory.
To improve both the utilization of the CPU and the speed of the computer's
response to its users, general-purpose computers must keep several programs in
memory, creating a need for memory management.
Storage Management
There are three types of storage management i) File system management ii)
Mass-storage management iii) Cache management.
File-System Management
File management is one of the most visible components of an operating system.
Computers can store information on several different types of physical media. Magnetic
disk, optical disk, and magnetic tape are the most common. Each of these media has its
own characteristics and physical organization. Each medium is controlled by a device,
such as a disk drive or tape drive, that also has its own unique characteristics.
A file is a collection of related information defined by its creator. Commonly,
files represent programs and data. Data files may be numeric, alphabetic, alphanumeric,
or binary. Files may be free-form (for example, text files), or they may be formatted
rigidly (for example, fixed fields).
The operating system implements the abstract concept of a file by managing mass
storage media. Files are normally organized into directories to make them easier to use.
When multiple users have access to files, it may be desirable to control by whom and in
what ways (read, write, execute) files may be accessed.
The operating system is responsible for the following activities in connection with file
management:
• Creating and deleting files
• Creating and deleting directories to organize files
• Supporting primitives for manipulating files and directories
• Mapping files onto secondary storage
• Backing up files on stable (nonvolatile) storage media
Mass-Storage Management
As the main memory is too small to accommodate all data and programs, and as
the data that it holds are erased when power is lost, the computer system must provide
secondary storage to back up main memory. Most modern computer systems use disks
as the storage medium for both programs and data.
Most programs—including compilers, assemblers, word processors, editors, and
formatters—are stored on a disk until loaded into memory and then use the disk as both
the source and destination of their processing. Hence, the proper management of disk
storage is of central importance to a computer system. The operating system is
responsible for the following activities in connection with disk management:
• Free-space management
• Storage allocation
• Disk scheduling
As the secondary storage is used frequently, it must be used efficiently. The entire
speed of operation of a computer may depend on the speeds of the disk. Magnetic tape
drives and their tapes, and CD and DVD drives and platters, are tertiary storage devices. The
functions that operating systems provide include mounting and unmounting media in
devices, allocating and freeing the devices for exclusive use by processes, and migrating
data from secondary to tertiary storage.
Caching
Caching is an important principle of computer systems. Information is normally kept
in some storage system (such as main memory). As it is used, it is copied into a faster
storage system— the cache—as temporary data. When a particular piece of information
is required, first we check whether it is in the cache. If it is, we use the information
directly from the cache; if it is not in cache, we use the information from the source,
putting a copy in the cache under the assumption that we will need it again soon.
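The caching principle can be sketched in a few lines of C. The direct-mapped table, the slot count, and the slow_fetch stand-in below are all invented for this illustration.

```c
#include <stdio.h>

#define CACHE_SLOTS 64

/* A toy direct-mapped cache over a slow backing store. */
struct slot { int key; int value; int valid; };
static struct slot cache[CACHE_SLOTS];

static int slow_fetch(int key) {
    return key * 2;   /* stand-in for reading from the slow source */
}

static int cached_read(int key) {
    struct slot *s = &cache[key % CACHE_SLOTS];
    if (s->valid && s->key == key)
        return s->value;                       /* hit: serve from the cache */
    int v = slow_fetch(key);                   /* miss: go to the source... */
    s->key = key; s->value = v; s->valid = 1;  /* ...and keep a copy        */
    return v;
}

int main(void) {
    int a = cached_read(7);  /* miss: fetched from the slow source */
    int b = cached_read(7);  /* hit: served directly from the cache */
    printf("%d %d\n", a, b);
    return 0;
}
```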
I/O Systems
One of the purposes of an operating system is to hide the peculiarities of specific
hardware devices from the user. The I/O subsystem consists of several components:
• A memory-management component that includes buffering, caching, and spooling
• A general device-driver interface
• Drivers for specific hardware devices
Only the device driver knows the peculiarities of the specific device to which it is assigned.
Computing Environments
The different computing environments are –
Traditional Computing
The current trend is toward providing more ways to access these computing
environments. Web technologies are stretching the boundaries of traditional computing.
Companies establish portals, which provide web accessibility to their internal servers.
Network computers are essentially terminals that understand web-based computing.
Handheld computers can synchronize with PCs to allow very portable use of company
information. Handheld PDAs can also connect to wireless networks to use the company's
web portal. Fast data connections allow home computers to serve up web
pages and to use networks. Some homes even have firewalls to protect their networks.
In the latter half of the previous century, computing resources were scarce. Years before,
systems were either batch or interactive. Batch systems processed jobs in bulk, with
predetermined input (from files or other sources of data). Interactive systems waited for
input from users. To optimize the use of the computing resources, multiple users shared
time on these systems. Time-sharing systems used a timer and scheduling algorithms to
rapidly cycle processes through the CPU, giving each user a share of the resources.
Today, traditional time-sharing systems are used everywhere. The same scheduling
technique is still in use on workstations and servers, but frequently the processes are
all owned by the same user (or a single user and the operating system). User processes,
and system processes that provide services to the user, are managed so that each
frequently gets a slice of computer time.
Mobile Computing
Mobile computing refers to computing on handheld smartphones and tablet computers. These devices
share the distinguishing physical features of being portable and lightweight. Today, mobile systems are
used not only for e-mail and web browsing but also for playing music and video, reading digital books,
taking photos, and recording high-definition video.
Many developers are now designing applications that take advantage of the unique features of mobile
devices, such as global positioning system (GPS) chips, accelerometers, and gyroscopes. An embedded
GPS chip allows a mobile device to use satellites to determine its precise location on earth.
An accelerometer allows a mobile device to detect its orientation with respect to the ground and to
detect certain other forces, such as tilting and shaking. In several computer games that employ
accelerometers, players interface with the system not by using a mouse or a keyboard but rather by
tilting, rotating, and shaking the mobile device!
Perhaps a more practical use of these features is found in augmented-reality applications, which overlay
information on a display of the current environment. It is difficult to imagine how equivalent
applications could be developed on traditional laptop or desktop computer systems.
To provide access to on-line services, mobile devices typically use either IEEE standard 802.11 wireless
or cellular data networks.
Two operating systems currently dominate mobile computing: Apple iOS and Google Android. iOS
was designed to run on Apple iPhone and iPad mobile devices. Android powers smartphones and tablet
computers available from many manufacturers.
Distributed Systems
A distributed system is a collection of systems that are networked to provide the users
with access to the various resources in the network. Access to a shared resource increases
computation speed, functionality, data availability, and reliability.
Networks are characterized based on the distances between their nodes. A local-
area network (LAN) connects computers within a room, a floor, or a building. A wide-
area network (WAN) usually links buildings, cities, or countries. A global company
may have a WAN to connect its offices worldwide. These networks may run one protocol
or several protocols. A metropolitan-area network (MAN) connects buildings within a city.
Bluetooth and 802.11 devices use wireless technology to communicate over a distance of
several feet, in essence creating a small-area network such as might be found in a home.
The transportation media to carry networks are also varied. They include copper
wires, fiber strands, and wireless transmissions between satellites, microwave dishes,
and radios. When computing devices are connected to cellular phones, they create a
network.
Client-Server Computing
Designers have shifted away from centralized system architecture, in which terminals were
connected to centralized systems. As a result, many of today’s systems act as server systems to
satisfy requests generated by client systems. This form of specialized distributed
system is called a client-server system.
Server systems can be broadly categorized as compute servers and file servers:
• The compute-server system provides an interface to which a client can send a request
to perform an action (for example, read data); in response, the server executes the
action and sends back the results to the client. A server running a database that
responds to client requests for data is an example of such a system.
• The file-server system provides a file-system interface where clients can create,
update, read, and delete files. An example of such a system is a web server that
delivers files to clients running the web browsers.
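A minimal sketch of the compute-server request/response pattern, assuming POSIX sockets. The port number and the echo "action" are arbitrary choices for this sketch, and error handling is omitted for brevity.

```c
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

/* Minimal compute-server loop: accept a request, perform the "action"
   (here, just echoing the request back), and return the result. */
int main(void) {
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(5000);              /* arbitrary port          */
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(srv, (struct sockaddr *)&addr, sizeof addr);
    listen(srv, 8);
    for (;;) {
        int client = accept(srv, NULL, NULL); /* client sends a request  */
        char req[256];
        ssize_t n = read(client, req, sizeof req);
        if (n > 0)
            write(client, req, n);            /* server returns a result */
        close(client);
    }
}
```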
Peer-to-Peer Computing
In this model, clients and servers are not distinguished from one another; here, all nodes
within the system are considered peers, and each may act as either a client or a server,
depending on whether it is requesting or providing a service.
In a client-server system, the server is a bottleneck, because all the services
must be served by the server. But in a peer-to-peer system, services can be provided
by several nodes distributed throughout the network.
To participate in a peer-to-peer system, a node must first join the network of peers. Once
a node has joined the network, it can begin providing services to—and requesting
services from—other nodes in the network. Determining what services are
available is accomplished in one of two general ways:
• When a node joins a network, it registers its service with a centralized lookup
service on the network. Any node desiring a specific service first contacts this
centralized lookup service to determine which node provides the service. The
remainder of the communication takes place between the client and the service
provider.
• Alternatively, a peer acting as a client can discover which node provides a desired service
by broadcasting a request for the service to all other nodes in the network. The
node (or nodes) providing that service responds to the peer making the request.
To support this approach, a discovery protocol must be provided that allows peers
to discover services provided by other peers in the network.
Skype is an example of peer-to-peer computing. It allows clients to make voice calls and video calls
and to send text messages over the Internet using a technology known as voice over IP (VoIP). Skype
uses a hybrid peer-to-peer approach. It includes a centralized login server, but it also incorporates
decentralized peers and allows two peers to communicate.
Virtualization:
Virtualization is a technology that allows operating systems to run as applications within other operating systems.
Virtualization is one member of a class of software that also includes emulation. Emulation is used when the
source CPU type is different from the target CPU type.
With virtualization, in contrast, an operating system that is natively compiled for a particular CPU architecture
runs within another operating system also native to that CPU.
Running multiple virtual machines allows many users to run tasks on a system designed for a single user.
VMM allows the user to install multiple operating systems for exploration or to run applications written for
operating systems other than the native host. For example, an Apple laptop running Mac OS X on the x86 CPU
can run a Windows guest to allow execution of Windows applications.
Cloud Computing
Cloud computing is a type of computing that delivers computing, storage, and even applications as a service
across a network.
It’s a logical extension of virtualization, because it uses virtualization as a base for its functionality. For
example, the Amazon Elastic Compute Cloud (EC2) facility has thousands of servers, millions of virtual
machines, and petabytes of storage available for use by anyone on the Internet. Users pay per month based on
how much of those resources they use.
Real-Time Embedded Systems
Embedded systems almost always run real-time operating systems. A real-time system is used when rigid time
requirements have been placed on the operation of a processor or the flow of data; thus, it is often used as a
control device in a dedicated application.
A real-time system has well-defined, fixed time constraints. Processing must be done within the defined
constraints, or the system will fail.
Operating-System Services
• Program Execution - The OS must be able to load a program into RAM, run the program,
and terminate the program, either normally or abnormally.
• I/O Operations - The OS is responsible for transferring data to and from I/O devices,
including keyboards, terminals, printers, and files. For specific devices, special functions
are provided (device drivers) by OS.
• File-System Manipulation – Programs need to read and write files or directories. The
services required to create or delete files, search for a file, list the contents of a file and
change the file permissions are provided by OS.
• Communications - Inter-process communication (IPC), either between processes running on
the same processor or between processes running on separate processors or separate
machines. It may be implemented using OS services such as shared memory or message
passing.
• Error Detection - Both hardware and software errors must be detected and handled
appropriately by the OS. Errors may occur in the CPU and memory hardware (such as power
failure and memory error), in I/O devices (such as a parity error on tape, a connection failure
on a network, or lack of paper in the printer), and in the user program (such as an arithmetic
overflow, an attempt to access an illegal memory location).
• Resource Allocation – Resources like CPU cycles, main memory, storage space, and I/O
devices must be allocated to multiple users and multiple jobs at the same time.
• Accounting – There are services in the OS to keep track of system activity and resource usage,
either for billing purposes or for statistical record keeping that can be used to optimize future
performance.
Command Interpreter
Command Interpreters are used to give commands to the OS. There are multiple command
interpreters known as shells. In UNIX and Linux systems, there are several different shells, like the
Bourne shell, C shell, Bourne-Again shell, Korn shell, and others.
The main function of the command interpreter is to get and execute the user-specified
command. Many of the commands manipulate files: create, delete, list, print, copy, execute, and
so on.
The commands can be implemented in two general ways:
1) The command interpreter itself contains the code to execute the command. For example, a
command to delete a file may cause the command interpreter to jump to a particular section of its
code that sets up the parameters and makes the appropriate system call.
2) The code to implement the command is in a function in a separate file. The interpreter searches
for the file, loads it into memory, and executes it, passing the parameters. Thus new commands
can be added easily to the interpreter without modifying it.
Most modern systems allow individual users to select their desired interface, to customize its
operation, and to switch between different interfaces as needed.
Because a mouse is impractical for most mobile systems, smartphones and handheld tablet computers typically
use a touchscreen interface. Here, users interact by making gestures on the touchscreen—for example, pressing
and swiping fingers across the screen. Figure 2.3 illustrates the touchscreen of the Apple iPad. Whereas earlier
smartphones included a physical keyboard, most smartphones now simulate a keyboard on the touchscreen.
Various GUI interfaces are available, however. These include the Common Desktop Environment (CDE) and
X-Windows systems, which are common on commercial versions of UNIX, such as Solaris and IBM’s AIX
system. In addition, there has been significant development in GUI designs from various open-source projects,
such as K Desktop Environment (or KDE) and the GNOME desktop by the GNU project. Both the KDE and
GNOME desktops run on Linux and various UNIX systems and are available under open-source licenses,
which means their source code is readily available for reading and for modification under specific license terms.
System Calls
• System calls provide a means to access the services of the operating system.
• They are generally written in C or C++, although some are written in assembly for optimal
performance.
The figure below illustrates the sequence of system calls required to copy the contents of
one file (the input file) to another file (the output file).
A number of system calls are needed to finish this task. The first system call writes a message
on the screen (monitor); another accepts the input file name. Then another system call writes a
message on the screen, and another accepts the output file name. When the program tries to open the input
file, it may find that there is no file of that name or that the file is protected against access. In these
cases, the program should print a message on the console (another system call) and then terminate
abnormally (another system call). If a file with the output name already exists, the program may
abort, or it may delete the existing file and create a new one (more system calls).
Now that both files are opened, we enter a loop that reads from the input file (another
system call) and writes to the output file (another system call).
Finally, after the entire file is copied, the program may close both files (another system call),
write a message to the console or window (system call), and finally terminate normally (the final
system call).
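The whole sequence can be written directly against the POSIX system-call interface, as in the sketch below; the buffer size and permission bits are arbitrary choices.

```c
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* The copy loop described above, using open, read, write, and close. */
int main(int argc, char *argv[]) {
    if (argc != 3) {
        fprintf(stderr, "usage: %s input output\n", argv[0]); /* console message      */
        exit(1);                                              /* abnormal termination */
    }
    int in = open(argv[1], O_RDONLY);
    if (in < 0) { perror("open input"); exit(1); }
    int out = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (out < 0) { perror("create output"); exit(1); }

    char buf[4096];
    ssize_t n;
    while ((n = read(in, buf, sizeof buf)) > 0)  /* read system call  */
        write(out, buf, n);                      /* write system call */

    close(in);    /* close both files */
    close(out);
    return 0;     /* normal termination */
}
```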
• Most programmers do not use the low-level system calls directly, but instead use an
"Application Programming Interface", API.
• Using an API instead of direct system calls provides greater program portability between
different systems. The API then makes the appropriate system calls through the system-call
interface, using a system-call table to access specific numbered system calls, as shown in
Figure 2.6.
• Each system call is assigned a number. The system-call table (consisting of system call
numbers and the addresses of the corresponding service routines) invokes the particular service
routine for a given system call.
• The caller need know nothing about how the system call is implemented or what it does
during execution.
Three general methods are used to pass parameters to the OS: in registers; in a block or table in
memory, whose address is passed in a register; or pushed onto the stack by the program and
popped off by the OS.
The system calls can be grouped roughly into the following major categories:
• Process control
◦ end, abort
◦ load, execute
◦ create process, terminate process
◦ get process attributes, set process attributes
◦ wait for time
◦ wait event, signal event
◦ allocate and free memory
• File management
◦ create file, delete file
◦ open, close
◦ read, write, reposition
◦ get file attributes, set file attributes
• Device management
◦ request device, release device
◦ read, write, reposition
◦ get device attributes, set device attributes
◦ logically attach or detach devices
• Information maintenance
◦ get time or date, set time or date
◦ get system data, set system data
◦ get process, file, or device attributes
◦ set process, file, or device attributes
• Communications
◦ create, delete communication connection
◦ send, receive messages
◦ transfer status information
◦ attach or detach remote devices
a) Process Control
• Process control system calls include end, abort, load, execute, create process, terminate
process, get/set process attributes, wait for time or event, signal event, and allocate and
free memory.
• Processes must be created, launched, monitored, paused, resumed, and eventually stopped.
• When one process pauses or stops, then another must be launched or resumed
• Process attributes like process priority, max. allowable execution time etc. are set and
retrieved by OS.
• After creating a new process, the parent process may have to wait for a time (wait time), or wait
for an event to occur (wait event). The process sends back a signal when the event has occurred
(signal event).
o In DOS, the command interpreter is loaded first. It then loads the process and transfers
control to it. The interpreter does not resume until the process has completed, as shown
in the figure.
o Because UNIX is a multi-tasking system, the command interpreter remains
completely resident when executing a process, as shown in Figure below
▪ The user can switch back to the command interpreter at any time, and can place
the running process in the background even if it was not originally launched as a
background process.
▪ In order to do this, the command interpreter first executes a "fork" system call,
which creates a second process that is an exact duplicate (clone) of the
original command interpreter. The original process is known as the parent, and
the cloned process is known as the child, with its own unique process ID and
parent ID.
▪ The child process then executes an "exec" system call, which replaces its
code with that of the desired process.
▪ The parent (command interpreter) normally waits for the child to complete
before issuing a new command prompt, but in some cases it can also issue a
new prompt right away, without waiting for the child process to complete.
(The child is then said to be running "in the background", or "as a
background process".)
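The fork/exec/wait sequence just described might look like this in C, assuming a POSIX system; running "ls -l" is an arbitrary example command.

```c
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

/* The parent/child sequence described above: fork clones the shell,
   exec replaces the child's code with the requested program, and the
   parent waits before printing the next prompt. */
int main(void) {
    pid_t pid = fork();               /* create an exact duplicate      */
    if (pid == 0) {
        execlp("ls", "ls", "-l", (char *)NULL);  /* child becomes "ls"  */
        perror("exec failed");        /* reached only if exec fails     */
        return 1;
    }
    waitpid(pid, NULL, 0);            /* parent waits for child to exit */
    printf("prompt> ");               /* then issues a new prompt       */
    return 0;
}
```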
b) File Management
• File management system calls include create file, delete file, open, close, read, write,
reposition, get file attributes, and set file attributes.
• After creating a file, the file is opened, and data is read from or written to the file.
• The file pointer may need to be repositioned to a given point in the file.
• The file attributes like filename, file type, permissions, etc. are set and retrieved using
system calls.
• These operations may also be supported for directories as well as ordinary files.
c) Device Management
• Device management system calls include request device, release device, read, write,
reposition, get/set device attributes, and logically attach or detach devices.
• When a process needs a resource, it issues a request for the resource. If the resource is
available, control is granted to the process; if the requested resource is already attached to
some other process, the requesting process has to wait.
• In multiprogramming systems, after a process uses the device, it has to be returned to OS,
so that another process can use the device.
• Devices may be physical ( e.g. disk drives ), or virtual / abstract ( e.g. files, partitions, and
RAM disks ).
d) Information Maintenance
• Information maintenance system calls include calls to get/set the time, date, system data,
and process, file, or device attributes.
• These system calls are used to transfer information between the user and the OS. Information
like the current time and date, the number of current users, the version number of the OS, the
amount of free memory or disk space, etc. is passed from the OS to the user.
e) Communication
• Communication system calls create and delete communication connections, send and
receive messages, transfer status information, and attach or detach remote devices.
• The two common models of interprocess communication are message passing and shared
memory.
f) Protection
• Protection provides mechanisms for controlling which users / processes have access to
which system resources.
• System calls allow the access mechanisms to be adjusted as needed and allow non-privileged
users to be granted elevated access permissions under carefully controlled, temporary
circumstances.
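For example, on a POSIX system a single system call adjusts a file's access mechanism. The file name below is hypothetical.

```c
#include <stdio.h>
#include <sys/stat.h>

/* Restrict "notes.txt" (a hypothetical file) to owner read/write only. */
int main(void) {
    if (chmod("notes.txt", S_IRUSR | S_IWUSR) != 0) {
        perror("chmod");
        return 1;
    }
    puts("permissions now rw for the owner only");
    return 0;
}
```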
System Programs
A collection of programs that provide a convenient environment for program development and
execution (apart from the OS itself) is called system programs or system utilities.
o File management - programs to create, delete, copy, rename, print, list, and
generally manipulate files and directories.
o Status information - Utilities to check on the date, time, number of users, processes
running, data logging, etc. System registries are used to store and recall
configuration information for particular applications.
o File modification - e.g. text editors and other tools which can change file contents.
o Program loading and execution - loaders, dynamic loaders, overlay loaders, etc.,
as well as interactive debuggers.
o Background services - All general-purpose systems have methods for launching certain
system-program processes at boot time. Some of these processes terminate after completing
their tasks, while others continue to run until the system is halted. Constantly running system-
program processes are known as services, subsystems, or daemons.
Operating-System Design and Implementation
Design Goals
Any system to be designed must have its own goals and specifications. Similarly, the OS
to be built will have its own goals, depending on the type of system in which it will be used,
the type of hardware used in the system, and so on.
• Requirements define properties which the finished system must have, and they are a necessary
step in designing any large, complex system. The requirements fall into two basic groups:
o User requirements are features that users care about and understand, such as a system
that is convenient to use, easy to learn, reliable, safe, and fast.
o System requirements are written for the developers, i.e., the people who design the OS.
Their requirements include a design that is easy to implement and maintain, as well as
flexible, reliable, error-free, and efficient.
Implementation
OS structure must be carefully designed. The task of the OS is divided into small components
that are then interfaced to work together.
Simple Structure
Many operating systems do not have well-defined structures. They started as small, simple, and
limited systems and then grew beyond their original scope. Ex: MS-DOS.
In MS-DOS, the interfaces and levels of functionality are not well separated. Application
programs can access basic I/O routines to write directly to the display and disk drives. Such freedom
leaves MS-DOS vulnerable to errant programs: the entire system can crash when user programs fail.
Monolithic Structure
UNIX OS consists of two separable parts: the kernel and the system programs. The kernel is further
separated into a series of interfaces and device drivers. The kernel provides the file system, CPU scheduling,
memory management, and other operating-system functions through system calls.
Layered Approach
• The OS is broken into a number of layers (levels). Each layer rests on the layer below it and
relies on the services provided by the next lower layer.
• The bottom layer (layer 0) is the hardware, and the topmost layer is the user interface.
• A typical layer consists of data structures and routines that can be invoked by higher-level
layers.
The layers are selected so that each uses functions and services of only lower-level layers. This
simplifies debugging and system verification: the layers are debugged one by one from the
lowest, and if a layer doesn’t work, the error must be in that layer, as the lower layers are
already debugged. Thus the design and implementation are simplified.
A layer need not know how its lower-level layers are implemented; this hides those operations
from higher layers (abstraction).
• The various layers must be appropriately defined, as a layer can use only lower level
layers.
• Less efficient than other approaches, because any interaction with layer 0 requested from the
top layer must pass through all the intermediate layers. A system call therefore traverses every
layer before reaching layer 0, which is an overhead.
Microkernels
• The basic idea behind microkernels is to remove all non-essential services from the kernel,
thus making the kernel as small and efficient as possible.
• The removed services are implemented as system applications.
• Most microkernels provide basic process and memory management, and message passing
between other services.
• Benefit of microkernel - System expansion can also be easier, because it only involves
adding more system applications, not rebuilding a new kernel.
• Mach was the first and most widely known microkernel, and it now forms a major component
of Mac OS X.
• The disadvantage of the microkernel approach is reduced performance due to increased
system-function overhead.
Modules
• Modern OS development is object-oriented, with a relatively small core kernel and a set of
modules which can be linked in dynamically.
• Modules are similar to layers in that each subsystem has clearly defined tasks and interfaces,
but any module is free to contact any other module, eliminating the problems of going
through multiple intermediary layers.
• The kernel is relatively small in this architecture, similar to microkernels, but the kernel
does not have to implement message passing since modules are free to contact each other
directly. E.g., Solaris, Linux, and Mac OS X.
• The Mac OS X architecture relies on the Mach microkernel for basic system management
services and on the BSD kernel for additional services. Application services and dynamically
loadable modules (kernel extensions) provide the rest of the OS functionality.
• Resembles layered system, but a module can call any other module.
• Resembles microkernel, the primary module has only core functions and the knowledge of
how to load and communicate with other modules.
The main function of the microkernel is to provide communication between the client program and the
various services that are also running in user space. Communication is provided through message passing.
The Solaris operating system structure, shown in Figure 2.15, is organized around a core kernel with
seven types of loadable kernel modules:
1. Scheduling classes
2. File systems
3. Loadable system calls
4. Executable formats
5. STREAMS modules
6. Miscellaneous
7. Device and bus drivers
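To make dynamically loadable modules concrete, here is a minimal sketch of a Linux kernel module; it assumes the kernel headers and kbuild build system are available, and it does nothing but log at load and unload.

```c
#include <linux/init.h>
#include <linux/module.h>

/* Minimal loadable kernel module: the init hook runs at insertion
   (insmod) and the exit hook runs at removal (rmmod). Messages go
   to the kernel log. */
static int __init hello_init(void)
{
    pr_info("hello: module loaded\n");
    return 0;
}

static void __exit hello_exit(void)
{
    pr_info("hello: module unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);
MODULE_LICENSE("GPL");
```

Such a module is built out of tree with kbuild, inserted with insmod, and removed with rmmod, extending the running kernel without rebuilding it.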
Hybrid Systems
In practice, very few operating systems adopt a single, strictly defined structure. Instead, they combine different
structures, resulting in hybrid systems that address performance, security, and usability issues.
We explore three hybrid systems here: the Apple Mac OS X operating system and the two most prominent mobile
operating systems, iOS and Android.
Mac OS X
The Apple Mac OS X operating system uses a hybrid structure. It is a layered system.
The top layers include the Aqua user interface and a set of application environments and services. Notably, the
Cocoa environment specifies an API for the Objective-C programming language, which is used for writing Mac
OS X applications. Below these layers is the kernel environment, which consists primarily of the
Mach microkernel and the BSD UNIX kernel. Mach provides memory management; support for remote
procedure calls (RPCs) and interprocess communication (IPC) facilities, including message passing; and thread
scheduling.
The BSD component provides a BSD command-line interface, support for networking and file systems, and an
implementation of POSIX APIs, including Pthreads.
In addition to Mach and BSD, the kernel environment provides an I/O kit for development of device drivers and
dynamically loadable modules (which Mac OS X refers to as kernel extensions). The BSD application
environment can make use of BSD facilities directly.
iOS
iOS is a mobile operating system designed by Apple to run its smartphone, the iPhone, as well as its tablet
computer, the iPad. iOS is structured on the Mac OS X operating system, with added functionality pertinent to
mobile devices, but does not directly run Mac OS X applications. The structure of iOS appears in Figure 2.17.
Cocoa Touch is an API for Objective-C that provides several frameworks for developing applications that run
on iOS devices. The fundamental difference between Cocoa, mentioned earlier, and Cocoa Touch is that the
latter provides support for hardware features unique to mobile devices, such as touch screens. The media
services layer provides services for graphics, audio, and video.
The core services layer provides a variety of features, including support for cloud computing and databases.
The bottom layer represents the core operating system, which is based on the kernel environment.
Android
The Android operating system was designed by the Open Handset Alliance (led primarily by Google) and was
developed for Android smartphones and tablet computers. Whereas iOS is designed to run on Apple mobile
devices and is closed-source, Android runs on a variety of mobile platforms and is open source.
Android is similar to iOS in that it is a layered stack of software that provides a rich set of frameworks for
developing mobile applications. At the bottom of this software stack is the Linux kernel, although it has been
modified by Google and is currently outside the normal distribution of Linux releases.
The Android runtime environment includes a core set of libraries as well as the Dalvik virtual machine.
Software designers for Android devices develop applications in the Java language. However, rather than using
the standard Java API, Google has designed a separate Android API for Java development. The Java class files
are first compiled to Java bytecode and then translated into an executable file that runs on the Dalvik virtual
machine.
The set of libraries available for Android applications includes frameworks for developing web browsers
(webkit), database support (SQLite), and multimedia. The libc library is similar to the standard C library but is
much smaller and has been designed for the slower CPUs that characterize mobile devices.
Virtual Machines
The fundamental idea behind a virtual machine is to abstract the hardware of a single computer
(the CPU, memory, disk drives, network interface cards, and so forth) into several different
execution environments, thereby creating the illusion that each separate execution environment is
running its own private computer.
Virtualization creates the illusion that each process has its own processor with its own memory. The host
OS is the main OS installed on the system, and the other OSes installed on the system are called guest OSes.
Benefits:
• Able to share the same hardware and run several different execution environments (OS).
• Host system is protected from the virtual machines and the virtual machines are protected
from one another. A virus in a guest OS will corrupt that OS but will not affect the other guest
systems or the host system.
• Even though the virtual machines are separated from one another, software resources can
be shared among them. Two ways of sharing software resources for communication are:
(a) sharing a file-system volume, and (b) providing a virtual communication network
between the virtual machines.
• Normally, the operating system runs on and controls the entire machine. Therefore,
the current system must be stopped and taken out of use while changes
are made and tested. This period is commonly called system-development
time. With virtual machines, this problem is eliminated: user programs are
executed in one virtual machine, while system development is done in
another.
• Multiple OSes can run on the developer’s system concurrently.
This helps in rapid porting and testing of the programmer’s code in different
environments.
• System consolidation – two or more systems are made to run in a single system.
Operating-System Generation
It is possible to design, code, and implement an operating system specifically for one
machine at one site. More commonly, however, operating systems are designed to run on
any of a class of machines at a variety of sites with a variety of peripheral configurations.
The system must then be configured or generated for each specific computer site, a
process sometimes known as system generation (SYSGEN).
System Boot
After an operating system is generated, it must be made available for use by the hardware. But
how does the hardware know where the kernel is or how to load that kernel?
The procedure of starting a computer by loading the kernel is known as booting the system. On
most computer systems, a small piece of code known as the bootstrap program or bootstrap
loader locates the kernel, loads it into main memory, and starts its execution.
The bootstrap program can perform a variety of tasks. Usually, one task is to run diagnostics to
determine the state of the machine. If the diagnostics pass, the program can continue with the
booting steps. It can also initialize all aspects of the system, from CPU registers to device
controllers and the contents of main memory. Sooner or later, it starts the operating system.