
NAME MUKRI MUHAMMAD SAMAD

ROLL NO 2314503234
PROGRAM BCA
SEMESTER II
COURSE CODE DCA_1201
COURSE NAME OPERATING SYSTEM
SESSION MARCH 2024

SET-1

Q1. Explain the evolution of operating systems. Write a brief note on operating system
structures.
Ans.
1. The evolution of operating systems can be traced through the following generations, each described briefly below.
• Simple Batch Operating Systems:
These systems were developed in the 1950s. Jobs were processed sequentially, with no user interaction: they were queued and run one by one. This kept the CPU busy, but the systems were not interactive.

• Multi-programmed Batch Operating Systems:
Introduced in the 1960s, these kept several jobs in memory simultaneously, allowing the CPU to switch to another job whenever the current one waited for I/O, thereby reducing idle time.

• Time-sharing Operating Systems:
Emerging in the 1970s, these systems let many users interact with a computer at the same time by switching between user tasks very quickly, laying the foundation for modern multitasking.

• Personal Computer Operating Systems:
In the 1980s and 1990s, systems such as MS-DOS and Windows were designed for single users, with graphical interfaces and broad application support that made computing widely accessible.

• Multi-processor Operating Systems:
Operating systems of the 1990s supported multiple CPUs working in parallel to improve performance on complex computation and data processing.

• Distributed Systems:
Common since the 2000s, these systems link multiple computers across a network to execute shared tasks. They provide scalability and resource sharing, two important drivers of cloud computing.
• Real-time Systems:
These systems became important in the late 20th century. They are designed to respond within strict time limits, which is critical in applications such as industrial control and medical devices.

2. Operating System Structures


The structure of an operating system is key to administering large, complex systems efficiently. Three common approaches are:
• Layered Approach:
This strategy, proposed by Edsger Dijkstra, structures the operating system as a hierarchy of layers, each with clearly specified functions and interfaces. The lowest layer interacts directly with the hardware, while the topmost layer interacts with the user. The modular design allows each layer to be tested independently, which eases development and maintenance, but it requires careful planning of which functionality goes into which layer to avoid dependency problems.

• Kernel-Based Approach:
Based on a proposal by Brinch Hansen, this approach keeps the kernel as a minimal base component that provides only a small set of low-level services, such as process creation and interprocess communication. The rest of the operating system is built on top of the kernel, leaving higher-level policy decisions flexible. A good kernel design balances providing the essential functionality against overloading the kernel or restricting system flexibility.

• Virtual Machine:
This technique adds a virtual layer that emulates the hardware for every user, giving each the illusion of exclusive access to the resources. It allows several operating systems to run on one platform. A well-known implementation is IBM's VM/370, in which each user is given a separate virtual machine. The challenge is that resources must be managed carefully so that system performance does not suffer.

These structures describe the design and organisation of an operating system, offer a way of managing complexity, and help guarantee robust performance.

Q2. What is Scheduling? Discuss the CPU scheduling algorithms.


Ans.
CPU scheduling is one of the central aspects of any operating system because it determines which process runs whenever the CPU becomes available. The most important algorithms are First-Come-First-Serve (FCFS), Shortest-Job-First (SJF), and Round-Robin (RR).

1. Scheduling: Allocates the CPU to the processes in the ready queue, which may be implemented as a FIFO queue, a priority queue, or a linked list. Efficient scheduling improves overall system performance.

• CPU-I/O Burst Cycle: A process alternates between CPU bursts and I/O waits; a process with long CPU bursts is CPU-bound, while one with short bursts is I/O-bound. This cycle is central to analysing the effects of scheduling.
• Pre-emptive vs non-pre-emptive Scheduling: In non-pre-emptive scheduling, the CPU stays with a process until it leaves the running state. In pre-emptive scheduling, a process can be forced to give up the CPU to another process, which significantly improves the system's responsiveness.
• Dispatcher: It performs the context switch between processes. A fast dispatcher reduces dispatch latency and thereby improves responsiveness.

2. Scheduling Algorithms

• First-Come-First-Serve: Processes are executed in the order in which they arrive. This can cause the "convoy effect", in which short processes wait behind long ones, increasing the average waiting and turnaround times. For example, with processes P1 of 24 ms, P2 of 3 ms, and P3 of 3 ms arriving in that order, the average waiting time under FCFS is 17 ms (a worked sketch for this and the Round Robin example appears after this list).
• Shortest-Job-First: SJF gives priority to the process with the shortest next CPU burst, which minimises both waiting and turnaround times. The strategy works best in batch systems, because predicting CPU bursts in advance is difficult. In pre-emptive SJF, a newly arriving process with a shorter burst preempts the currently running process.
• Priority Scheduling: The CPU is allocated to the highest-priority process first. It can be either preemptive or non-preemptive. Low-priority processes may starve; aging, in which a process's priority increases the longer it waits, is used to prevent this.
• Round Robin (RR): Each process gets a fixed time slice, known as a quantum, in
cycles, to ensure fairness. The size of the quantum becomes critical: a small quantum
will increase context switches, while a large one behaves like FCFS. Example: Given
a 4 ms quantum, with processes P1 having a burst time of 24 ms and processes P2 and
P3 each having a burst time of 3 ms, the average waiting time will turn out to be 5.66
ms.
• Multilevel Queue Scheduling: Processes are divided into separate queues by type or priority, each queue with its own scheduling algorithm. Processes do not move between queues, so starvation can occur among the lower-priority processes.
• Multilevel Feedback Queue Scheduling: Here a process can move from one queue to another. This bridges different scheduling strategies and lets the scheduler adapt to process behaviour, improving both performance and fairness.
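
The two figures quoted above can be reproduced with a short simulation. The sketch below is only illustrative: it assumes all three processes arrive at time 0 and uses the burst times from the example; it prints 17.00 ms for FCFS and 5.67 ms for Round Robin with a 4 ms quantum (the same 17/3 ms value quoted as 5.66 ms above, rounded rather than truncated).

#include <stdio.h>

#define N 3

/* Average waiting time under FCFS: each process waits for all earlier arrivals. */
double fcfs_avg_wait(const int burst[], int n) {
    int wait = 0, total = 0;
    for (int i = 0; i < n; i++) {
        total += wait;      /* waiting time of process i */
        wait += burst[i];   /* the next process also waits for this burst */
    }
    return (double)total / n;
}

/* Average waiting time under Round Robin with a fixed quantum (all arrive at t = 0). */
double rr_avg_wait(const int burst[], int n, int quantum) {
    int remaining[N], finish[N], done = 0, t = 0;
    for (int i = 0; i < n; i++) remaining[i] = burst[i];
    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] == 0) continue;
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            t += slice;
            remaining[i] -= slice;
            if (remaining[i] == 0) { finish[i] = t; done++; }
        }
    }
    int total = 0;
    for (int i = 0; i < n; i++) total += finish[i] - burst[i]; /* wait = turnaround - burst */
    return (double)total / n;
}

int main(void) {
    int burst[N] = {24, 3, 3};   /* P1, P2, P3 from the example */
    printf("FCFS average wait: %.2f ms\n", fcfs_avg_wait(burst, N));
    printf("RR (q=4) average wait: %.2f ms\n", rr_avg_wait(burst, N, 4));
    return 0;
}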

Q3. Discuss Interprocess Communication and critical-section problem along with use of
semaphores.
Ans.
Interprocess Communication and Critical Section Problem: Implementation Using
Semaphores

Interprocess communication: Cooperating processes communicate either through shared memory or through a message-passing system. In the former, cooperation happens by writing to and reading from a common buffer, which the programmer must manage explicitly. By contrast, an IPC facility provided by the operating system lets processes exchange messages in order to cooperate and synchronise their actions; message passing may be direct or indirect, and messages may be fixed or variable in size. In message passing there are no shared variables: communication consists of exchanging messages through operations such as send(message) and receive(message). Direct: processes name each other explicitly, creating a symmetric, bidirectional link between each pair of processes. Indirect: messages are sent to and received from mailboxes, which may be shared by more than two processes. Links in message-passing systems can have different capacities, ranging from zero capacity (no buffering, so message exchange is synchronous) to unbounded capacity (messages are queued and senders are never delayed).
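
As a concrete illustration of message passing without shared variables, the hedged sketch below uses a POSIX pipe between a parent and a child process; write() and read() play the roles of send(message) and receive(message), and the message text and buffer size are arbitrary choices for the example.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];                      /* fd[0]: read end, fd[1]: write end */
    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {                 /* child: the receiving process */
        char buf[64];
        close(fd[1]);               /* child only reads */
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);   /* receive(message) */
        if (n > 0) { buf[n] = '\0'; printf("child received: %s\n", buf); }
        close(fd[0]);
        return 0;
    }

    const char *msg = "hello from parent";
    close(fd[0]);                   /* parent only writes */
    write(fd[1], msg, strlen(msg)); /* send(message) */
    close(fd[1]);
    wait(NULL);                     /* wait for the child to finish */
    return 0;
}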

The critical-section problem arises when many processes must access shared resources. The program segment in which the shared resources are accessed is called the critical section, and it must be protected against concurrent access by more than one process. A solution must satisfy three key requirements:

• Mutual Exclusion: Only one process can execute in its critical section at a time.
• Progress: If no process is in its critical section and some processes wish to enter, the decision of which enters next cannot be postponed indefinitely.
• Bounded Waiting: There is a bound on the number of times other processes may enter their critical sections after a process has requested entry and before that request is granted.

Two-process solutions, such as Algorithm 1 and Algorithm 2, tackle critical sections for a pair of processes. Algorithm 1 strictly alternates the accesses of the two processes and so does not satisfy the progress requirement. Algorithm 2 improves on this by flagging readiness but still does not ensure progress. Algorithm 3, essentially Peterson's algorithm, combines ideas from both, using flags and a turn variable to meet all three requirements, allowing only one process in the critical section while guaranteeing fairness.
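
The sketch below shows a minimal two-thread version of this flag-and-turn scheme (Peterson's algorithm) in C using pthreads and C11 atomics; the shared counter and iteration count are invented for the demonstration, and sequentially consistent atomics are used so the flag/turn handshake behaves as the textbook algorithm assumes.

#include <stdatomic.h>
#include <stdio.h>
#include <pthread.h>

/* Peterson's algorithm for two processes (here: threads 0 and 1). */
atomic_int flag[2];        /* flag[i] == 1 means thread i wants to enter */
atomic_int turn;           /* which thread should yield if both want in */
long counter = 0;          /* shared resource protected by the critical section */

void enter_region(int self) {
    int other = 1 - self;
    atomic_store(&flag[self], 1);          /* announce interest */
    atomic_store(&turn, other);            /* politely let the other thread go first */
    while (atomic_load(&flag[other]) && atomic_load(&turn) == other)
        ;                                  /* busy-wait until it is safe to enter */
}

void leave_region(int self) {
    atomic_store(&flag[self], 0);          /* no longer interested */
}

void *worker(void *arg) {
    int self = *(int *)arg;
    for (int i = 0; i < 100000; i++) {
        enter_region(self);
        counter++;                         /* critical section */
        leave_region(self);
    }
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;
    pthread_create(&t0, NULL, worker, &id0);
    pthread_create(&t1, NULL, worker, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}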

Semaphores are one of the basic synchronization mechanisms that control access to shared resources. They were introduced by Dijkstra as an abstract data type that is accessed only through atomic operations: P() (decrement, or wait), V() (increment, or signal), and Init() for initialization. A semaphore holds a count that controls access to the resource. If the count is zero, the resource is not free, so any process issuing a P operation on that semaphore blocks until the resource becomes free. Semaphores avoid busy waiting by keeping a queue of blocked processes and waking one up once the resource is released.

Semaphores are useful because of their machine independence, their simplicity, and their ability to handle several processes and multiple critical sections. They guarantee mutual exclusion and orderly waiting without depending on special hardware, so they can be applied to many synchronization scenarios, such as controlling access to several resources.
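
As a small illustration of the P and V operations, the hedged sketch below uses POSIX unnamed semaphores, where sem_init, sem_wait, and sem_post correspond to Init(), P(), and V(); the number of threads and the loop bound are arbitrary.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t mutex;                 /* binary semaphore guarding the critical section */
long shared_counter = 0;

void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        sem_wait(&mutex);        /* P(): block until the resource is free */
        shared_counter++;        /* critical section */
        sem_post(&mutex);        /* V(): release the resource */
    }
    return NULL;
}

int main(void) {
    pthread_t threads[4];
    sem_init(&mutex, 0, 1);      /* Init(): count 1 means at most one process inside */
    for (int i = 0; i < 4; i++)
        pthread_create(&threads[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++)
        pthread_join(threads[i], NULL);
    sem_destroy(&mutex);
    printf("shared_counter = %ld (expected 400000)\n", shared_counter);
    return 0;
}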

SET-2

Q4.
a. What is a Process Control Block? What information does it hold and why?
b. What is Thrashing? What are its causes?

Ans.
a. The Process Control Block (PCB) is a very important data structure in any operating system: one PCB represents each process in the system. In the simplest terms, it is a repository of all the information that makes a process an active entity. The PCB is essential for process management and scheduling because everything the OS needs to handle the process is stored in this block.

Information contained within a PCB includes:


• Process State: Indicates whether the process is new, ready, running, waiting, or halted.
• Program Counter: Holds the address of the next instruction to be executed; it is needed to resume the process exactly where it was when an interrupt occurred.
• CPU Registers: Accumulators, index registers, stack pointers, and general-purpose registers. They hold the process's current working data and condition.
• CPU Scheduling Info: The process priority and pointers into the scheduling queues, used to ensure the process receives its share of CPU time.
• Memory Management Info: Base and limit registers, the page table, or the segment table defining the memory the process owns.
• Accounting Info: Usage metrics such as CPU and real time used, time limits, and process identifiers, used for tracking and managing system resources.
• I/O Status Information: The list of I/O devices allocated to the process and the files it has open to support its input and output operations.

The main function of the PCB is to hold all the information about a process so that the OS can manage and schedule its execution.
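
Conceptually the PCB is just a record holding these fields. The sketch below shows a simplified, hypothetical PCB layout in C; the field names and sizes are invented for illustration, and real kernels (for example Linux's task_struct) contain many more fields.

#include <stdint.h>
#include <stdio.h>

/* Simplified, illustrative Process Control Block; real kernels differ. */
typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

typedef struct pcb {
    int            pid;              /* process identifier */
    proc_state_t   state;            /* new, ready, running, waiting, halted */
    uint64_t       program_counter;  /* address of the next instruction */
    uint64_t       registers[16];    /* saved general-purpose CPU registers */
    int            priority;         /* CPU scheduling information */
    uint64_t       page_table_base;  /* memory-management information */
    uint64_t       cpu_time_used;    /* accounting information */
    int            open_files[16];   /* I/O status: descriptors of open files */
    struct pcb    *next;             /* link used by ready/wait queues */
} pcb_t;

int main(void) {
    pcb_t p = { .pid = 1, .state = READY, .priority = 5 };
    printf("pid=%d state=%d priority=%d\n", p.pid, p.state, p.priority);
    return 0;
}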

b. Thrashing occurs when a process spends most of its time paging, that is, moving pages between RAM and disk storage, instead of executing instructions. This excessive paging activity severely degrades system performance, with drastic drops in throughput and CPU utilization. It happens when a process does not have enough frames to hold its working set, which induces frequent page faults and the continual swapping of the process's working set between memory and disk.

Causes of Thrashing:
• Insufficient Frames Allocation: If a process is not given enough frames to hold its working set, it generates page faults continuously because it keeps accessing pages that are not in memory; this repeated paging wastes execution time (a small fault-rate sketch follows this list).
• High Degree of Multiprogramming: The operating system may admit more concurrent processes to better utilize the CPU. However, if these additional processes compete for limited memory, they steal frames from one another; each process then suffers more frequent page faults, leading to thrashing.
• Global Page Replacement Policy: Under a global replacement policy, one process may take frames from another. This cross-process frame stealing can increase the page faults of all processes, exacerbating thrashing.
• Local Page Replacement Policy: Although this confines page faults to the process itself, thrashing can still occur if a process is allocated too few frames, because it cannot keep its working set in memory.
• High Page Fault Frequency: If the page-fault rate becomes too high, processes need more frames. When memory is already overcommitted, as it is during thrashing, the system cannot provide them, so processes keep faulting and swapping pages.
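
The effect of frame allocation on the fault rate can be seen in a tiny simulation. The hedged sketch below replays a fixed page-reference string under LRU replacement and counts faults for different numbers of frames; the reference string and frame counts are invented for illustration, but the pattern, many more faults when too few frames are allocated, is exactly what drives thrashing.

#include <stdio.h>

#define MAX_FRAMES 8

/* Count page faults for a reference string under LRU with the given number of frames. */
int lru_faults(const int refs[], int nrefs, int frames) {
    int page[MAX_FRAMES], last_use[MAX_FRAMES], used = 0, faults = 0;
    for (int t = 0; t < nrefs; t++) {
        int hit = -1;
        for (int i = 0; i < used; i++)
            if (page[i] == refs[t]) { hit = i; break; }
        if (hit >= 0) {                        /* page already resident */
            last_use[hit] = t;
            continue;
        }
        faults++;
        if (used < frames) {                   /* a free frame is available */
            page[used] = refs[t];
            last_use[used] = t;
            used++;
        } else {                               /* evict the least recently used page */
            int victim = 0;
            for (int i = 1; i < used; i++)
                if (last_use[i] < last_use[victim]) victim = i;
            page[victim] = refs[t];
            last_use[victim] = t;
        }
    }
    return faults;
}

int main(void) {
    /* Locality-heavy reference string: too few frames means many faults. */
    int refs[] = {1,2,3,4,1,2,5,1,2,3,4,5,1,2,3,1,2,4,3,5};
    int nrefs = (int)(sizeof(refs) / sizeof(refs[0]));
    for (int frames = 1; frames <= 5; frames++)
        printf("frames=%d -> faults=%d of %d references\n",
               frames, lru_faults(refs, nrefs, frames), nrefs);
    return 0;
}
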
Q5.
a. Discuss the different File Access Methods.
b. What are I/O Control Strategies?

Ans.
a. File access methods prescribe the mode of accessing information in files from secondary
storage. The three major ones are sequential access, direct access, and indexed sequential
access.

• Sequential access: Records are read in order, from the first to the last, similar to a tape. All preceding records must be processed before the target record is reached. This method suits applications that need to process most of the records, such as transaction files.

• Direct access: Records are accessed directly by their key values, without processing the records in order. The storage medium is usually a disk, which supports rapid access to blocks of data. This makes direct access very useful for database and reservation systems where response time is critical. Files usually have to be declared for either direct or sequential access when they are created, and direct access is not supported by all operating systems (a small seek-based sketch appears after this list).

• Indexed Sequential Access: This is a hybrid of sequential and direct access. An index provides direct access to the file blocks; once a block is located, its records are read sequentially. The approach therefore gives efficient access, using the index for quick location of data and balancing the advantages of both sequential and direct access.
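
Direct access, referred to above, amounts to seeking straight to a record by its index or key. The sketch below is illustrative only: it writes a small file of fixed-size records sequentially and then reads record 2 directly with fseek; the record format and file name are invented for the example.

#include <stdio.h>
#include <string.h>

#define RECORD_SIZE 32   /* fixed-size records make direct addressing trivial */

int main(void) {
    const char *names[] = {"alpha", "bravo", "charlie", "delta"};
    char record[RECORD_SIZE];
    FILE *f = fopen("records.dat", "wb+");
    if (!f) { perror("fopen"); return 1; }

    /* Sequential write: records are stored one after another, like a tape. */
    for (int i = 0; i < 4; i++) {
        memset(record, 0, sizeof(record));
        snprintf(record, sizeof(record), "record %d: %s", i, names[i]);
        fwrite(record, sizeof(record), 1, f);
    }

    /* Direct access: jump straight to record 2 by computing its byte offset. */
    fseek(f, 2L * RECORD_SIZE, SEEK_SET);
    fread(record, sizeof(record), 1, f);
    printf("direct read of record 2 -> \"%s\"\n", record);

    fclose(f);
    return 0;
}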

b. I/O control strategies


I/O control strategies define how a computer communicates with its I/O devices and strongly influence performance. The most important are program-controlled I/O, interrupt-driven I/O, and direct memory access (DMA).

• Program-controlled I/O: In program-controlled I/O, also called polled I/O, the CPU itself carries out the I/O operations. It continuously polls each device in turn to find out whether the device needs attention. This allows precise control and prioritisation, but wastes CPU time polling devices that are not ready (a minimal polling sketch appears after this list).
• Interrupt-driven I/O: Devices interrupt the CPU when they need service. The CPU then suspends the current process, saves the system state, and executes the interrupt handler routine. Its main advantage is reduced waiting time, particularly when several devices are connected. Vectored interrupts improve on this by letting the device supply the address of its handling routine, which speeds up the process.
• Direct Memory Access (DMA): DMA lets a high-speed device transfer data to and from memory without involving the CPU in each transfer; a DMA controller manages the transfers, so the CPU carries much less of the load. The CPU only initiates the operation by sending the memory location and the number of bytes to the DMA controller. This approach suits high-speed devices such as hard disks. It relies on advanced controllers and bus arbitration techniques, including daisy-chain arbitration, priority-encoded arbitration, and distributed arbitration.
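
Program-controlled (polled) I/O boils down to a loop that repeatedly tests a device status register. Real device registers are memory-mapped and hardware-specific, so the hedged sketch below only simulates one with ordinary variables; the READY bit and the fake device routine are invented for illustration.

#include <stdio.h>

#define STATUS_READY 0x01              /* hypothetical "data ready" bit */

static unsigned int status_reg = 0;    /* stands in for a memory-mapped status register */
static unsigned int data_reg   = 0;    /* stands in for a memory-mapped data register */

/* Pretend the device becomes ready after a few polls. */
static void simulate_device(int poll_count) {
    if (poll_count >= 3) {
        data_reg = 42;
        status_reg |= STATUS_READY;
    }
}

int main(void) {
    int polls = 0;

    /* Program-controlled I/O: the CPU busy-waits, testing the status register. */
    while (!(status_reg & STATUS_READY)) {
        polls++;
        simulate_device(polls);        /* in real hardware the device sets the bit itself */
    }

    printf("device ready after %d polls, data = %u\n", polls, data_reg);
    return 0;
}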

Q6. Explain the different Multiprocessor Interconnections and types of Multiprocessor
Operating Systems.
Ans.
Multiprocessor Interconnections
Multiprocessor interconnections affect communication bandwidth, complexity and cost, interprocess communication, and scalability. The main interconnection architectures are as follows:

1. Bus-Oriented Systems:
• Architecture: A common bus is shared by the processors and memory.
• Communication: Processors communicate over the shared bus.
• Challenges: The shared bus can become a point of contention; per-processor caches with a high hit ratio reduce bus traffic and improve performance.
• Scalability: Scales to around 10 processors; simple to implement, hence popular.

2. Crossbar-Connected Systems:
• Architecture: Each processor is connected to every memory module through a crossbar switch.
• Communication: Concurrent accesses are supported as long as processors refer to different memory modules.
• Problems: Contention arises when many processors access the same memory module; distributing data across modules reduces this problem.
• Scalability: The number of cross points grows quadratically (n²), which is expensive and limits scalability.

3. Hyper-cubes:
• Architecture: The processors are located at the vertices of a hypercube.
• Communication: Each processor connects point-to-point to log₂N other processors, where N is the number of nodes; neighbouring processors differ in exactly one bit of their binary addresses (a small sketch of this appears after the interconnection list).
• Scalability: Logarithmic growth in complexity. It is best for problems with recursive
structures or locality of reference.
• Advantages: It provides a potentially good base for highly scalable, large-scale
multiprocessors.

4. Multistage Switch-Based Systems:


• Architecture: N inputs are connected to N outputs through log₂N stages, each containing N/2 switches.
• Communication: Routing is fixed and determined by the source and destination addresses.
• Advantages: High bandwidth; every input can reach every output as long as the processors access different memory modules. Buffering in the switches can reduce contention.
• Disadvantages: Contention can still occur, either at the memory modules or within the network.
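
As noted in the hypercube section, a node's log₂N neighbours are the nodes whose binary addresses differ from its own in exactly one bit, so they can be computed by flipping one address bit at a time. The sketch below lists the neighbours of every node of a 3-dimensional (8-node) hypercube; the dimension is an arbitrary choice for the example.

#include <stdio.h>

int main(void) {
    int dim = 3;                 /* 3-dimensional hypercube: N = 2^3 = 8 nodes */
    int n_nodes = 1 << dim;

    for (int node = 0; node < n_nodes; node++) {
        printf("node %d neighbours:", node);
        /* Each neighbour differs from this node in exactly one address bit. */
        for (int bit = 0; bit < dim; bit++)
            printf(" %d", node ^ (1 << bit));
        printf("\n");
    }
    return 0;
}
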
Types of Multiprocessor Operating Systems

1. Separate Supervisors:
• Architecture: Each node's processor runs its own image of the operating system, with its own private memory and I/O.
• Example: Hypercube structures.
• Advantages: Good parallelism, because applications can be split into subtasks that execute in parallel on different nodes.
• Difficulties: More complicated services and data structures are needed to support the multiple processors.

2. Master/Slave Systems:
• Architecture: All but one of the processors perform computational tasks; the remaining (master) processor runs the operating system.
• Advantages: Easy to implement; existing uniprocessor systems can be ported to multiprocessor operation with little effort.
• Challenges: Scalability is limited. The master processor is a single point of failure and spends its processing power exclusively on control tasks.

3. Symmetric Multiprocessing (SMP):


• Architecture: All processors are functionally equivalent; each can access all resources, including memory and I/O devices.
• Operating System: Any processor can execute the OS; the master role, if any, is temporary and workload-driven.
• Advantages: Existing uniprocessor operating systems such as UNIX are comparatively easy to port to shared-memory systems, and applications can execute in parallel using shared memory.
• Difficulties: Concurrent access to shared OS data structures must be managed carefully to achieve efficient parallelism.
