CSC 201 Course Material

This document provides an introduction and overview of the CSC 201 course, Introduction to Operating Systems I. The course is divided into two modules of five units each, studied over ten weeks. Module 1 covers an overview of operating systems, including the role, functionality, mechanisms to support different devices, and design issues of operating systems. Module 2 covers operating system principles such as structuring methods, abstractions, processes, resources, APIs, device organization, and interrupts. The course is assessed through attendance, assignments, quizzes, tests, and a final examination worth 60% of the total marks.


INTRODUCTION

CSC 201 – Introduction to Operating Systems I is a three (3) credit unit course of ten (10) study
units.

This course is divided into two modules. Module 1 is made up of five (5) units while module 2 is
made up of five (5) units.

Module 1 which is Overview of Operating Systems includes Role and purpose of the operating
system, Functionality of a typical operating system, Mechanisms to support client-server models
and hand-held devices, Design issues (efficiency, robustness, flexibility, portability, security,
compatibility) and Influences of security, networking, multimedia, windowing systems.

Module 2 which is Operating System Principles includes Structuring methods (monolithic, layered,
modular, micro-kernel models), Abstractions, processes, and resources, Concepts of application
program interfaces (APIs), Device organization, Interrupts: methods and implementations.

This Course Guide gives you a brief overview of the course contents, course duration, and course
materials.

STUDY UNITS
There are ten study units in this course:

Module 1: Overview of Operating Systems


Unit 1 : Role and purpose of the operating system
Unit 2 : Functionality of a typical operating system
Unit 3 : Mechanisms to support client-server models, hand-held devices
Unit 4 : Design issues (efficiency, robustness, flexibility, portability, security, compatibility)
Unit 5 : Influences of security, networking, multimedia, windowing systems

Module 2: Operating System Principles


Unit 1 : Structuring methods (monolithic, layered, modular, micro-kernel models)
Unit 2 : Abstractions, processes, and resources
Unit 3 : Concepts of application program interfaces (APIs)
Unit 4 : Device organization
Unit 5 : Interrupts: methods and implementations

EXAMINATION AND GRADING


The final examination for the course will carry 60 per cent of the total marks available for this
course. The examination will cover every aspect of the course, so you are advised to revise
everything before the examination.

COURSE MARKING SCHEME


The table below shows how the actual course marking is broken down.

Assessment Marks
Attendance 10
Assignment 5
Quiz 10
Test 1-3 ( 5 Marks each) 15
Final Examination 60
Total 100

Table 1: Course Marking Scheme

Course Overview

Unit  Title of Work                                                             Weeks    Assessment Activity (End of Unit)

      Course Guide

Module 1: Overview of Operating Systems
1     Role and purpose of the operating system                                  Week 1
2     Functionality of a typical operating system                               Week 2
3     Mechanisms to support client-server models, hand-held devices             Week 3
4     Design issues (efficiency, robustness, flexibility, portability,
      security, compatibility)                                                  Week 4
5     Influences of security, networking, multimedia, windowing systems         Week 5

Module 2: Operating System Principles
1     Structuring methods (monolithic, layered, modular, micro-kernel models)   Week 6
2     Abstractions, processes, and resources                                    Week 7
3     Concepts of application program interfaces (APIs)                         Week 8
4     Device organization                                                       Week 9
5     Interrupts: methods and implementations                                   Week 10

Module 1: Overview of Operating Systems

Unit 1 : Role and purpose of the operating system

Table of Contents
1.0 Introduction
2.0 Objectives
3.0 Main Body
3.1
3.2
4 Conclusion/Summary
5 Study Questions

1.0 Introduction
Having just read through the Course Guide, you are now to go through this first unit of
the course which is very fundamental to the understanding of what an Operating system
is and the role it plays in the whole computer system.
Now let us go through your study objectives for this unit.

An operating system (OS) is a program that manages a computer’s resources, especially the allocation of
those resources among other programs. Typical resources include the central processing unit (CPU),
computer memory, file storage, input/output (I/O) devices, and network connections. The purpose
of an operating system is to manage computer memory, processes, and the operation of all hardware
and software. The operating system is the most important software on a computer, as it enables the
computer hardware to communicate effectively with all other computer software.
1. Program Execution: The operating system is responsible for the execution of all types of programs,
whether user programs or system programs, and uses the available resources to run them efficiently.
2. Handling Input/Output Operations: The operating system is responsible for handling all sorts of
input, e.g., from the keyboard, mouse, and display. Because peripheral devices such as a mouse or
keyboard differ in nature, the operating system interfaces with each device in the most appropriate
manner and moves data between them.
3. Manipulation of the File System: The operating system makes the decisions regarding the storage
of all types of data or files, e.g., on a floppy disk, hard disk, or pen drive, and decides how the data
should be manipulated and stored.
4. Error Detection and Handling: The operating system is responsible for detecting any errors or
bugs that can occur while any task is running. A well-secured OS also acts as a countermeasure
against any breach of the computer system from an external source and, where possible, handles it.
5. Resource Allocation: The operating system ensures the proper use of all the available resources
by deciding which resource is to be used by whom and for how long. All such decisions are taken
by the operating system.
6. Accounting: The operating system keeps an account of all the activity taking place in the
computer system at any time. Details such as the types of errors that occurred are recorded by the
operating system.
7. Information and Resource Protection: The operating system is responsible for using all the
information and resources available on the machine in the most protected way. The operating
system must foil any attempt by an external party to tamper with any data or information.
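Responsibilities 5 and 6 above (resource allocation and accounting) can be sketched as bookkeeping: the OS records which process holds which resource, grants or denies requests accordingly, and logs every event. The following toy sketch is purely illustrative; `ToyAllocator` and its method names are invented for this example and are nothing like a real kernel's data structures.

```python
# Toy sketch of resource allocation and accounting (not a real OS):
# the "OS" records which process owns which resource, and logs events.

class ToyAllocator:
    def __init__(self):
        self.owner = {}   # resource name -> owning process id
        self.log = []     # accounting trail: (event, resource, pid)

    def allocate(self, resource, pid):
        """Grant the resource only if no other process currently holds it."""
        if resource in self.owner:
            self.log.append(("denied", resource, pid))
            return False
        self.owner[resource] = pid
        self.log.append(("allocated", resource, pid))
        return True

    def release(self, resource, pid):
        """Only the owning process may release a resource."""
        if self.owner.get(resource) == pid:
            del self.owner[resource]
            self.log.append(("released", resource, pid))
            return True
        return False

alloc = ToyAllocator()
alloc.allocate("printer", 1)   # granted to process 1
alloc.allocate("printer", 2)   # denied: process 1 still owns it
alloc.release("printer", 1)
alloc.allocate("printer", 2)   # now granted to process 2
```

The accounting log doubles as the record an OS would use for billing or error reports.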

Module 1: Overview of Operating Systems


Unit 2 : Functionality of a typical operating system

Table of Contents
1.0 Introduction
2.0 Objectives
3.0 Main Body
3.1
3.2
4 Study Questions
5 Conclusion/Summary

An operating system has three main functions:

(1) manage the computer’s resources, such as the central processing unit, memory, disk drives, and
printers;
(2) establish a user interface; and
(3) execute and provide services for application software. Thus, the operating system both
establishes a user interface and executes software. In any computer, the operating system:
• Controls the backing store and peripherals such as scanners and printers.
• Deals with the transfer of programs in and out of memory.
• Organizes the use of memory between programs.
• Organizes processing time between programs and users.
• Maintains security and access rights of users.
The figure below shows how the operating system works.
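As a toy illustration of "organizing the use of memory between programs", here is a hypothetical first-fit allocator sketch. The class name, block sizes, and addresses are invented for the example; real memory managers are far more elaborate.

```python
# Hypothetical first-fit sketch of organizing memory between programs:
# the "memory" is a list of free (start, size) holes; each program asks
# for a block and receives the start address of the first hole that fits.

class FirstFitMemory:
    def __init__(self, size):
        self.free = [(0, size)]   # one big free hole initially

    def allocate(self, size):
        """Return the start address of the first hole big enough, or None."""
        for i, (start, hole) in enumerate(self.free):
            if hole >= size:
                if hole == size:
                    del self.free[i]          # hole consumed exactly
                else:
                    self.free[i] = (start + size, hole - size)
                return start
        return None                           # no hole fits

mem = FirstFitMemory(100)
a = mem.allocate(30)   # first program: placed at address 0
b = mem.allocate(50)   # second program: placed right after, at 30
c = mem.allocate(40)   # denied: only 20 units remain free
```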
Module 1: Overview of Operating Systems

Unit 3 : Mechanisms to support client-server models, hand-held devices


Table of Contents
1.0 Introduction
2.0 Objectives
3.0 Main Body
3.1 The Meaning of Handheld (What Does Handheld Mean?)
3.2
4 Study Questions
5 Conclusion/Summary

Introduction

In the client-server architecture, when the client computer sends a request for data to the server
through the internet, the server accepts the request, processes it, and delivers the requested data
packets back to the client. Clients do not share any of their resources.

Figure 6: client server model
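The request/response exchange described above can be sketched with Python's standard `socket` and `threading` modules. The echo "protocol" is invented purely for illustration; binding to port 0 lets the OS pick a free port so the sketch is self-contained.

```python
# Minimal client-server sketch: the server accepts one request and
# sends a reply; the client connects, sends a request, reads the reply.
import socket
import threading

def serve_once(server_sock):
    conn, _ = server_sock.accept()       # wait for one client
    data = conn.recv(1024)               # the client's request
    conn.sendall(b"echo: " + data)       # deliver the response
    conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))            # port 0: OS picks a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve_once, args=(server,)).start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(b"hello")                 # the request
reply = client.recv(1024)                # the server's response
client.close()
server.close()
```

Note that the client holds no shared state; all the data lives on the server side, matching the model in the text.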

The Meaning of Handheld (What Does Handheld Mean?)

A handheld is any portable device that can be carried and held in one’s palm. A handheld can be any
computing or electronic device that is compact and portable enough to be held and used in one or
both hands. A handheld may contain cellular communication, but this category can also include
other computing devices. A handheld is primarily designed to provide a suite of computing,
communication and informational tools in a device about the size of standard palm. Typically,
handheld devices are not as powerful as a computer, but modern handhelds increasingly include
powerful dual-core processors, RAM, SD storage capacity, an operating system, and native and
add-on applications. They are often powered by a dry cell lithium or similar battery. Moreover,
these types of devices increasingly use a touch screen interface. Personal digital assistants (PDA),
tablet PCs and portable media players are all considered handheld devices.
A mobile device (or handheld computer) is a computer small enough to hold and operate in the
hand. Typically, any handheld computer device will have an LCD.

Module 1: Overview of Operating Systems

Unit 4 : Design issues (efficiency, robustness, flexibility, portability, security, compatibility)

Table of Contents
1.0 Introduction
2.0 Objectives
3.0 Main Body
3.1 Design issues : efficiency
3.2 Design issues : robustness
3.3 Design issues : flexibility
3.4 Design issues : portability
3.5 Design issues : security
3.6 Design issues : compatibility
4 Study Questions
5 Conclusion/Summary

Efficiency:
Operating system efficiency is characterized by the amount of useful work accomplished by system
compared to the time and resources used.

An OS allows the computer system’s resources to be used efficiently.

Ability to Evolve: An OS should be constructed in such a way as to permit the effective
development, testing, and introduction of new system functions without interfering with service.
Efficiency: Most I/O devices are slow compared to main memory (and the CPU).
▪ Multiprogramming allows some processes to wait on I/O while another process executes.
▪ Often, I/O still cannot keep up with processor speed.
▪ Swapping may be used to bring in additional Ready processes, at the cost of more I/O operations.
✓ Optimize I/O efficiency, especially disk and network I/O.
✓ The quest for generality/uniformity:
o Ideally, handle all I/O devices in the same way, both in the OS and in user applications.
o Problem:
▪ Diversity of I/O devices.
▪ In particular, different access methods (random access versus stream based) as well as vastly
different data rates.
▪ Generality often compromises efficiency!
o Hide most of the details of device I/O in lower-level routines so that processes and upper levels
see devices in general terms such as read, write, open, close, lock, unlock.
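The "generality" idea above can be sketched by hiding two very different devices behind the same open/write/read/close interface, so upper layers use them identically. Both device classes here are hypothetical stand-ins, not real drivers.

```python
# Sketch of a uniform device interface: upper layers call open, write,
# read, close on any device without knowing what it really is.

class LoopbackDevice:
    """A 'device' that returns whatever was written to it."""
    def __init__(self):
        self.buffer = b""
    def open(self):
        self.buffer = b""
    def write(self, data):
        self.buffer += data
    def read(self, n):
        out, self.buffer = self.buffer[:n], self.buffer[n:]
        return out
    def close(self):
        pass

class NullDevice:
    """A 'device' that discards writes and always reads empty."""
    def open(self): pass
    def write(self, data): pass
    def read(self, n): return b""
    def close(self): pass

def copy_through(device, data, n):
    """An 'upper level' that treats every device the same way."""
    device.open()
    device.write(data)
    out = device.read(n)
    device.close()
    return out

echoed = copy_through(LoopbackDevice(), b"abc", 3)
silent = copy_through(NullDevice(), b"abc", 3)
```

`copy_through` never inspects the device type; that is exactly the generality the text describes, and also why per-device efficiency can suffer.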

Robustness
In computer science, robustness is the ability of a computer system to cope with errors during
execution and with erroneous input. The word robust, when used with regard to computer
software, refers to an operating system or other program that performs well not only under
ordinary conditions but also under unusual conditions that stress its designers' assumptions. A major
feature of Unix-like operating systems is their robustness. That is, they can operate for prolonged
periods (sometimes years) without crashing (i.e., stopping operating) or requiring rebooting (i.e.,
restarting). And although individual application programs sometimes crash, they almost always do
so without affecting other programs or the operating system itself. Robustness can encompass many
areas of computer science, such as robust programming, robust machine learning, and Robust
Security Network. Formal techniques, such as fuzz testing, are essential to showing robustness, since
this type of testing involves invalid or unexpected inputs. Alternatively, fault injection can be used
to test robustness. Various commercial products perform robustness testing of software.
A distributed system may suffer from various types of hardware failure. The failure of a link, the
failure of a site, and the loss of a message are the most common types. To ensure that the system is
robust, we must detect any of these failures, reconfigure the system so that computation can
continue, and recover when a site or a link is repaired.
In general, building robust systems that encompass every point of possible failure is difficult
because of the vast quantity of possible inputs and input combinations. Since testing all inputs and
input combinations would require too much time, developers cannot run through all cases
exhaustively. Instead, the developer tries to generalize such cases. For example, imagine
inputting some integer values. Some selected inputs might consist of a negative number, zero, and a
positive number. When using these numbers to test software in this way, the developer generalizes
the set of all integers into three representatives. This is a more efficient and manageable method, but
more prone to failure. Generalizing test cases is just one technique to deal with failure,
specifically failure due to invalid user input. Systems may also fail for other reasons,
such as disconnecting from a network.
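The negative/zero/positive example above can be made concrete: one representative per equivalence class stands in for the whole set of integers. The function under test here is deliberately trivial and invented for the sketch.

```python
# Sketch of generalized test cases: instead of testing every integer,
# test one representative from each class (negative, zero, positive).

def absolute(x):
    """The code under test: a deliberately simple integer function."""
    return -x if x < 0 else x

# One representative per equivalence class stands in for all integers.
representatives = [-7, 0, 5]
results = [absolute(x) for x in representatives]
```

The risk the text mentions is visible here: if `absolute` misbehaved only for, say, the most negative machine integer, these three representatives would never reveal it.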
Regardless, complex systems should still handle any errors encountered gracefully. There are many
examples of such successful systems. Some of the most robust systems are evolvable and can be
easily adapted to new situations.

Portability
Portability is the ability of an application to run properly on a platform different from the one it was
designed for, with little or no modification. Portability in high-level computer programming is the
usability of the same software in different environments. When software with the same functionality
is produced for several computing platforms, portability is a key issue for reducing development
costs.

Compatibility
Compatibility is the capacity of two systems to work together without having to be altered to do so.
Compatible software applications use the same data formats. For example, if two word processor
applications are compatible, the user should be able to open their document files in either product.
Compatibility issues come up when users use the same type of software for a task, such as
word processors, that cannot communicate with each other. This can be due to a difference in
versions or because the programs are made by different companies. The huge variety of application
software available, and all the versions of the same software, mean there are bound to be
compatibility issues even when people are using the same kind of software. Compatibility issues
can be small, for example certain features not working properly in older versions of the same
software, but they can also be serious, such as when a newer version of the software cannot open a
document created in an older version. In Microsoft Word, for example, documents created in Word
2016 or 2013 can be opened in Word 2010 or 2007, but some of the newer features (such as
collapsed headings or embedded videos) will not work in the older versions. If someone using Word
2016 opens a document created in Word 2010, the document opens in Compatibility Mode;
Microsoft Office does this to make sure that documents created in older versions still work properly.

Flexibility
Flexible operating systems are taken to be those whose designs have been motivated to some degree
by the desire to allow the system to be tailored, either statically or dynamically, to the requirements
of specific applications or application domains.

Module 1: Overview of Operating Systems

Unit 5 : Influences of security, networking, multimedia, windowing systems

Table of Contents
1.0 Introduction
2.0 Objectives
3.0 Main Body
3.1
3.2
4 Study Questions
5 Conclusion/Summary

Introduction

An operating system (OS) is basically a collection of software that manages computer hardware
resources and provides common services for computer programs. The operating system is a crucial
component of the system software in a computer system.
Networking
A network operating system is one of the important types of operating system. A network operating
system runs on a server and gives the server the capability to manage data, users, groups, security,
applications, and other networking functions. The basic purpose of a network operating system is
to allow shared file and printer access among multiple computers in a network, typically a local area
network (LAN) or a private network, or to other networks. Some examples of network operating
systems include Microsoft Windows Server 2003, Microsoft Windows Server 2008, UNIX, Linux,
Mac OS X, Novell NetWare, and BSD.
Advantages
• Centralized servers are highly stable.
• Security is server managed.
• Upgrades to new technologies and hardware can be easily integrated into the system.
• Servers can be accessed remotely from different locations and types of systems.
Disadvantages
• High cost of buying and running a server.
• Dependency on a central location for most operations.
• Regular maintenance and updates are required.
Security
Operating system security plays a primary role in protecting memory, files, user authentication,
and data access. Consistent protection means that the system meets standard security
requirements and has the required functionality to enforce security practices. Every computer
system and software design must address all security concerns and implement the required
safeguards to enforce security policies. At the same time, it is important to keep a balance, since
rigorous security measures can not only increase costs but also limit the user-friendliness,
usefulness, and smooth performance of the system. Hence, system designers have to ensure
effective performance without compromising on security. A computer’s operating system must
concentrate on delivering a functionally complete and flexible set of security mechanisms for
security policies to be effectively enforced.
Multimedia
Multimedia operating systems are operating systems that can deal with multimedia files.
Multimedia files are different from traditional files (e.g., text files): they need special
consideration in process management, secondary storage management, file management, and
so on. A recent trend in technology is the inclusion of multimedia data within computer systems.
Multimedia data consists of continuous media in the form of files (audio or video) as well as
conventional files. Continuous-media data differ from conventional data in that continuous-media
files, such as frames of video, images, or audio, must be delivered according to certain time
restrictions.
Video on demand requires huge servers. The system has to be able to access, say, 1000 disks, and
distribute signals to the distribution network at high speed in real time. The only way for an
operating system to be able to do this is to reserve bandwidth, CPU, memory, etc. ahead of time. So,
the server has an admission control algorithm; it cannot accept a new request unless it can be
reasonably confident that it will be able to satisfy it.
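The admission-control idea above can be sketched as a simple reservation check: a new stream is accepted only if enough reserved capacity remains to guarantee its rate. The class name, capacity figure, and bit rates below are all invented for the sketch.

```python
# Hypothetical admission-control sketch: a video server accepts a new
# request only if it can still reserve enough bandwidth to satisfy it.

class AdmissionController:
    def __init__(self, capacity_mbps):
        self.capacity = capacity_mbps   # total bandwidth reserved ahead of time
        self.reserved = 0.0             # bandwidth promised to admitted streams

    def admit(self, rate_mbps):
        """Accept the request only if the guarantee can still be met."""
        if self.reserved + rate_mbps <= self.capacity:
            self.reserved += rate_mbps
            return True
        return False                    # reject rather than over-commit

server = AdmissionController(capacity_mbps=100)
ok1 = server.admit(60)   # accepted: 60 of 100 reserved
ok2 = server.admit(60)   # rejected: would exceed capacity
ok3 = server.admit(40)   # accepted: exactly fills the reservation
```

A real server would reserve CPU, memory, and disk bandwidth the same way, but the rejection-over-overcommitment principle is the same.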
Uncompressed movies are far too big to transmit. Thus, the system needs a compression (encoding)
algorithm and a decompression (decoding) algorithm. The former can be slow and/or expensive,
because it is only done once, but the latter has to be fast. Unlike a regular file compression
algorithm, an image compression algorithm can be lossy. This means that some data can be lost.
The standard compression algorithm for images is JPEG. An operating system whose primary job is
serving videos would differ from a traditional operating system in three ways.
• process scheduling
• the file system
• disk scheduling
If all that the system is doing is serving videos, and all of the videos have the same frame rate,
video resolution, etc., then process scheduling is very simple. Each video has a thread that reads
one frame at a time and sends it to the user. Round Robin is a perfect scheduling algorithm for this,
but you need a timing mechanism to make sure that each process runs at the correct frequency. Run
a clock with 30 ticks per second. At each tick, all threads are run sequentially in the same order;
when a thread is done, it issues a suspend system call that releases the CPU. This algorithm works
perfectly as long as the number of threads is small enough. But there is a problem: MPEG frame
sizes differ, and processes come and go.
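The clock-driven Round Robin just described can be simulated in a few lines: at each tick, every stream sends one frame in a fixed order and then "suspends" until the next tick. The stream names and tick count are made up for the sketch; a real server would be driven by a hardware timer, not a loop.

```python
# Simulation sketch of clock-driven Round Robin: one frame per stream
# per tick, always in the same order, mimicking the suspend-until-tick
# behavior described in the text.

def run_ticks(streams, ticks):
    """streams: list of stream names; returns the frame-send order."""
    sent = []
    for tick in range(ticks):
        for name in streams:            # same fixed order every tick
            sent.append((tick, name))   # "send one frame, then suspend"
    return sent

order = run_ticks(["movie_a", "movie_b"], ticks=2)
```

Every stream gets exactly one frame slot per tick, which is why the scheme breaks down once variable frame sizes or a changing set of streams make the per-slot work unequal.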
Windows
In 1985 Microsoft came out with its Windows operating system, which gave PC compatibles some
of the same capabilities as the Macintosh. Year after year, Microsoft refined and improved Windows
so that Apple, which failed to come up with a significant new advantage, lost its edge. IBM tried to
establish yet another operating system, OS/2, but lost the battle to Gates’s company. In fact,
Microsoft also had established itself as the leading provider of application software for the
Macintosh. Thus, Microsoft dominated not only the operating system and application software
business for PC-compatibles but also the application software business for the only nonstandard
system with any sizable share of the desktop computer market.

Module 2: Operating System Principles

Unit 1 : Structuring methods (monolithic, layered, modular, micro-kernel models).

Table of Contents
1.0 Introduction
2.0 Objectives
3.0 Main Body
3.1
3.2
4 Conclusion/Summary

2.5.1 Simple Structure


• Operating systems such as MS-DOS and the original UNIX did not have well-defined structures.
• There was no CPU Execution Mode (user and kernel), and so errors in applications could cause
the whole system to crash.
Figure 8: Simple Structure

2.5.2 Monolithic Approach


• Functionality of the OS is invoked with simple function calls within the kernel, which is one large
program.
• Device drivers are loaded into the running kernel and become part of the kernel.

Figure 9: A monolithic kernel, such as Linux and other Unix systems

2.5.3 Layered Approach


This approach breaks the operating system up into different layers.
• This allows implementers to change the inner workings and increases modularity.
• As long as the external interface of the routines does not change, developers have more freedom
to change their inner workings.
• With the layered approach, the bottom layer is the hardware, while the highest layer is the user
interface.
✓ The main advantage is simplicity of construction and debugging.
✓ The main difficulty is defining the various layers.
✓ The main disadvantage is that the OS tends to be less efficient than other implementations.
Figure 10: Layered

2.5.4 Microkernels
This structures the operating system by removing all nonessential portions of the kernel and
implementing them as system and user level programs. Generally, they provide minimal process
and memory management, and a communications facility. Communication between components of
the OS is provided by message passing.
The benefits of the microkernel approach are as follows:
✓ Extending the operating system becomes much easier.
✓ Changes to the kernel tend to be fewer, since the kernel is smaller.
✓ The microkernel also provides more security and reliability.
Its main disadvantage is poor performance, due to the increased system overhead of message
passing.

Figure 11: A Microkernel architecture
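The message-passing communication described above can be sketched with thread-safe queues: a user task asks a separate "server" component for data by sending a request message and waiting for a reply, instead of calling directly into one large kernel. The file-server component and message format are invented for the sketch.

```python
# Sketch of microkernel-style message passing: a user task and a
# (hypothetical) file server communicate only through message queues.
import queue
import threading

requests = queue.Queue()
replies = queue.Queue()

def file_server():
    """A server component: receive one request message, send one reply."""
    op, name = requests.get()            # block until a request arrives
    if op == "read":
        replies.put(("ok", f"contents of {name}"))
    else:
        replies.put(("error", "unknown operation"))

threading.Thread(target=file_server).start()
requests.put(("read", "notes.txt"))      # send the request message
status, payload = replies.get()          # block until the reply arrives
```

Every exchange costs two queue operations and a context switch, which is exactly the overhead behind the performance disadvantage noted above.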

In the Microsoft Windows NT operating system, the lowest level is a monolithic kernel, but many
OS components run at a higher level while still being part of the OS.

Module 2: Operating System Principles

Unit 2 : Abstractions, processes, and resources

Table of Contents
1.0 Introduction
2.0 Objectives
3.0 Main Body
3.1
3.2
4 Study Questions
5 Conclusion/Summary

Abstraction
The process of establishing the decomposition of a problem into simpler and more understood
primitives is basic to science and software engineering. This process has many underlying
techniques of abstraction.
An abstraction is a model. The process of transforming one abstraction into a more detailed
abstraction is called refinement. The new abstraction can be referred to as a refinement of the
original one. Abstractions and their refinements typically do not coexist in the same system
description. Precisely what is meant by a more detailed abstraction is not well defined. There needs
to be support for substitutability of concepts from one abstraction to another. Composition occurs
when two abstractions are used to define another higher abstraction. Decomposition occurs when an
abstraction is split into smaller abstractions.
Information management is one of the goals of abstraction. Complex features of one abstraction are
simplified into another abstraction. Good abstractions can be very useful while bad abstractions can
be very harmful. A good abstraction leads to reusable components.
Information hiding distinguishes between public and private information. Only the essential
information is made public while internal details are kept private. This simplifies interactions and
localizes details and their operations into well-defined units.
Abstraction, in traditional systems, naturally forms layers representing different levels of
complexity. Each layer describes a solution. These layers are then mapped onto each other. In this
way, high level abstractions are materialized by lower-level abstractions until a simple realization
can take place.

Abstraction can be accomplished on functions, data, and processes. In functional abstraction, details
of the algorithms to accomplish the function are not visible to the consumer of the function. The
consumer of the function needs to only know the correct calling convention and have trust in the
accuracy of the functional results.
In data abstraction, details of the data container and the data elements may not be visible to the
consumer of the data. The data container could represent a stack, a queue, a list, a tree, a graph, or
many other similar data containers. The consumer of the data container is only concerned about
correct behavior of the data container, not its internal details. Also, the exact details of the
data elements in the data container may not be visible to the consumer of the data element. An
encrypted certificate is the ultimate example of an abstract data element: the certificate contains
data that is encrypted with a key not known to the consumer. The consumer can use the certificate
to be granted capabilities but can neither view nor modify its contents.
Traditionally, data abstraction and functional abstraction combine into the concept of abstract data
types (ADT). Combining an ADT with inheritance gives the essences of an object-based paradigm.
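The ADT idea above can be sketched with a stack whose consumer relies only on push/pop behavior, while the internal list representation stays private (information hiding). The class is a generic textbook example, not a fragment of any particular system.

```python
# Sketch of an abstract data type: the consumer sees only the public
# operations; the internal representation is a hidden detail.

class Stack:
    def __init__(self):
        self._items = []          # private detail: a Python list

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()  # last in, first out

    def __len__(self):
        return len(self._items)

s = Stack()
s.push(1)
s.push(2)
top = s.pop()   # the consumer relies only on LIFO behavior
```

The representation could be swapped for a linked list without any consumer noticing, which is exactly the substitutability the text describes.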
In process abstraction, details of the threads of execution are not visible to the consumer of the
process. An example of process abstraction is the concurrency scheduler in a database system. A
database system can handle many concurrent queries. These queries are executed in a particular
order, some in parallel while some sequential, such that the resulting database cannot be
distinguished from a database where all the queries are done in a sequential fashion. A consumer of
a query which represents one thread of execution is only concerned about the validity of the query
and not the process used by the database scheduler to accomplish the query.

Process
In an operating system, a process is a program that is currently under execution; an active
program can be called a process. For example, when you want to search for something on the web,
you start a browser, and the running browser is a process. Each process has a process state, which
can be ready, waiting, running, and so on.
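The process states just mentioned can be sketched as a small state machine with a few legal transitions. The transition set below is a simplified, hypothetical subset of a real process life cycle (it omits new and terminated states, for instance).

```python
# Sketch of process states and a few legal transitions between them.

LEGAL = {
    ("ready", "running"),    # dispatched by the scheduler
    ("running", "ready"),    # preempted: time slice expired
    ("running", "waiting"),  # blocked, e.g. waiting on I/O
    ("waiting", "ready"),    # I/O completed; eligible to run again
}

class Process:
    def __init__(self):
        self.state = "ready"

    def move_to(self, new_state):
        if (self.state, new_state) not in LEGAL:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

p = Process()
p.move_to("running")   # scheduler picks this process
p.move_to("waiting")   # it blocks reading from disk
p.move_to("ready")     # the read completes
```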
Resource
An operating system manages a computer’s resources, especially the allocation of those resources
among other programs. Typical resources include the central processing unit (CPU), computer
memory, file storage, input/output (I/O) devices, and network connections. Management tasks
include scheduling resource use to avoid conflicts and interference between programs. Unlike
most programs, which complete a task and terminate, an operating system runs indefinitely and
terminates only when the computer is turned off. Modern multiprocessing operating systems allow
many processes to be active, where each process is a “thread” of computation being used to execute
a program. One form of multiprocessing is called time-sharing, which lets many users share
computer access by rapidly switching between them. Time-sharing must guard against interference
between users’ programs, and most systems use virtual memory, in which the memory, or “address
space,” used by a program may reside in secondary memory (such as on a magnetic hard disk drive)
when not in immediate use, to be swapped back to occupy the faster main computer memory on
demand. This virtual memory both increases the address space available to a program and helps to
prevent programs from interfering with each other, but it requires careful control by the operating
system and a set of allocation tables to keep track of memory use. Perhaps the most delicate and
critical task for a modern operating system is allocation of the CPU; each process is allowed to use
the CPU for a limited time, which may be a fraction of a second, and then must give up control and
become suspended until its next turn. Switching between processes must itself use the CPU while
protecting all data of the processes.
The first digital computers had no operating systems. They ran one program at a time, which had
command of all system resources, and a human operator would provide any special resources
needed. The first operating systems were developed in the mid-1950s. These were small “supervisor
programs” that provided basic I/O operations (such as controlling punch card readers and printers)
and kept accounts of CPU usage for billing. Supervisor programs also provided multiprogramming
capabilities to enable several programs to run at once. This was particularly important so that these
early multimillion-dollar machines would not be idle during slow I/O operations.

Module 2: Operating System Principles

Unit 3 : Concepts of application program interfaces (APIs)

Table of Contents
1.0 Introduction
2.0 Objectives
3.0 Main Body
3.1
3.2
4.0 Study Questions
5.0 Conclusion/Summary

An application program interface (API) is code that allows two software programs to communicate
with each other. An API defines the correct way for a developer to request services from an
operating system (OS) or other application, and expose data within different contexts and across
multiple channels.
Operating system (OS) application programmer interfaces (APIs) are a key enabler of target-
independent software and of seamless upgrades when the underlying hardware changes. This is one
of the significant cost reduction mechanisms promised by the use of open systems.
There are four principal types of API commonly used in web-based applications: public, partner,
private and composite. In this context, the API “type” indicates the intended scope of use. Public
APIs are openly available to outside developers; an enterprise may also seek to monetize a public
API by imposing a per-call cost to utilize it. Partner APIs are shared only with specific business
partners, private APIs are restricted to internal use, and composite APIs bundle several calls into a
single request.
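As a concrete illustration, Python’s built-in os module is an API over operating-system services: the program asks the OS for a service without knowing how the underlying kernel implements it. The calls shown are standard Python, but the exact values returned depend on the machine running the example.

```python
import os

# Request services from the operating system through its API.
pid = os.getpid()          # ask the OS for this process's identifier
cwd = os.getcwd()          # ask the OS for the current working directory
entries = os.listdir(cwd)  # ask the OS's file system for a directory listing

print(f"process {pid} running in {cwd} ({len(entries)} entries)")
```

The same program runs unchanged on Linux, macOS, or Windows: the API stays fixed while the kernel-level implementation differs, which is precisely the hardware-independence benefit described above.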

Module 2: Operating System Principles

Unit 4 : Device organization

Table of Contents
1.0 Introduction
2.0 Objectives
3.0 Main Body
3.1
3.2
4.0 Study Questions
5.0 Conclusion/Summary

Module 2: Operating System Principles


Unit 5 : Interrupts: methods and implementations

Table of Contents
1.0 Introduction
2.0 Objectives
3.0 Main Body
3.1
3.2
4.0 Study Questions
5.0 Conclusion/Summary

Interrupts in operating systems


Interrupts are signals sent to the CPU by external devices, normally I/O devices. They tell the
CPU to suspend its current activity and execute the appropriate part of the operating system.
Software interrupts are generated by programs when they request a system call to be performed by
the operating system. A hardware interrupt can also signal device readiness: when the output device
becomes idle, an interrupt occurs, and the interrupt service routine (ISR) takes the next item from
the FIFO buffer and writes it to the output device. For example, if we are implementing a digital
controller that executes its control algorithm 100 times a second, we set up the internal timer
hardware to request an interrupt every 10 ms.
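The FIFO-draining behaviour described above can be modelled in plain Python. This is a software simulation of the hardware mechanism, with invented names (fifo, isr, device_output); a real ISR would be registered with the interrupt controller and invoked by the hardware itself.

```python
from collections import deque

fifo = deque()           # buffer filled by the main program
device_output = []       # stands in for the physical output device

def isr():
    """Interrupt service routine: called when the output device is idle.
    It takes the next item from the FIFO and writes it to the device."""
    if fifo:
        device_output.append(fifo.popleft())

# The main program queues data; each "device idle" interrupt drains one item.
for byte in b"OK":
    fifo.append(byte)

while fifo:              # simulate repeated device-idle interrupts
    isr()

print(bytes(device_output))   # → b'OK'
```

The key point the simulation captures is the decoupling: the main program only fills the buffer, while the ISR alone talks to the device, one item per interrupt.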
How are interrupts implemented?
Monolithic operating system
A monolithic kernel is an operating system architecture in which the entire operating system runs
in kernel space. A set of primitives, or system calls, implements all operating system services such
as process management, concurrency, and memory management.
A monolithic kernel is a single large process running entirely in a single address space, shipped as
a single static binary file, and the kernel can invoke its own functions directly. Unix and Linux are
examples of monolithic kernel-based OSs. In microkernels, by contrast, the kernel is broken down
into separate processes, known as servers.
A layered operating system groups related functionality together and separates it from unrelated
functionality. Its architectural structure resembles a layer cake: it starts at level 0, the hardware
level, and works its way up to the operator, or user.
Is Linux monolithic or microkernel?
UNIX and Linux are examples of operating systems with monolithic kernels, while QNX, L4, and
HURD use microkernels. Mach (the basis of, but distinct from, Mac OS X) was initially a
microkernel but later evolved into a hybrid kernel. Even MINIX is not a pure microkernel, because
device drivers are compiled as part of the kernel.
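The architectural difference can be caricatured in a few lines: in a monolithic kernel a service is a direct function call inside one address space, while in a microkernel the request is a message sent to a separate server process. The queue-based “message passing” below is only a sketch of the idea; all the names (monolithic_read, micro_read, file_server) are invented for the example.

```python
from queue import Queue

# Monolithic style: the kernel invokes the service directly.
def monolithic_read(block):
    return f"data@{block}"            # direct in-kernel function call

# Microkernel style: the request goes to a file-server process via IPC.
requests, replies = Queue(), Queue()

def file_server():                    # separate server process (simulated)
    block = requests.get()
    replies.put(f"data@{block}")

def micro_read(block):
    requests.put(block)               # send a message to the server
    file_server()                     # (in reality the server runs concurrently)
    return replies.get()              # receive the reply

print(monolithic_read(7))  # → data@7
print(micro_read(7))       # → data@7
```

Both styles return the same result; the trade-off is that the microkernel pays message-passing overhead in exchange for isolating each service in its own process.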
Kernel and OS
The operating system is the software package that communicates directly with the hardware and
with our applications. The kernel is the lowest level of the operating system; it is the core part of
the operating system, responsible for translating requests from applications into operations the
hardware can carry out.
STUDY QUESTIONS
1. What is the mechanism of the client-server model?
2. How does HTTP support the client-server model?
3. What are the types of client-server models?
4. What are the features of a smartphone?
5. Which type of screen is used in today’s handheld devices?
6. What are the uses of handheld devices?
7. What is the difference between a monolithic and a layered structure?
8. How is a microkernel structured?
9. What are the modular kernel and layered approaches in operating systems?
10. Where are microkernels used?
11. When a computer is switched on, where is the operating system loaded?
12. What is an API, and what are the types of API?
13. Is the API part of the operating system?
14. Why use APIs instead of system calls?
15. Is 64 bits better than 32 bits?
16. What is the main function of the BIOS?
17. Which operating system is best, and why?
18. What are the components of an operating system?
19. What is the difference between a client and a server?
20. Compare and contrast a monolithic operating system and a layered operating system.
21. What is the difference between the kernel and the OS?
