Assignment_DCA1201-(2314503234)
ROLL NO 2314503234
PROGRAM BCA
SEMESTER II
COURSE CODE DCA1201
COURSE NAME OPERATING SYSTEM
SESSION MARCH 2024
SET-1
Q1. Explain the evolution of operating systems. Write a brief note on operating system
structures.
Ans.
1. A brief explanation of each type of operating system is given below, tracing the evolution of operating systems.
• Simple Batch Operating Systems:
These systems were developed in the 1950s. Jobs were queued and processed sequentially, one by one, with no user intervention; the CPU was kept busy, but the system was not interactive.
• Distributed Systems:
These systems, which became common in the 2000s, link multiple computers across a network to execute shared tasks. This provides scalability and resource sharing, two important drivers for cloud computing.
• Real-time Systems:
These systems gained importance in the late 20th century. They are designed to process inputs and respond immediately, which is critical in applications such as industrial control and medical devices.
• Kernel-Based Approach:
The kernel-based approach follows the proposal by Brinch Hansen, whereby the kernel is a minimal base component that provides only a small number of general low-level services, such as process creation and communication. All other parts of the operating system are constructed on top of the kernel, retaining flexibility in the design of higher-level policies. A good kernel design strikes a delicate balance: it must provide the essential functionality without overloading the kernel or restricting the system's flexibility.
• Virtual Machine:
This technique provides a virtual layer that emulates the hardware for every user, giving each the illusion of exclusive access to the resources. It allows many operating systems to run on one platform. A famous implementation is IBM's VM/370, under which each user is provided with a separate virtual machine. The challenge is that the resources must be managed properly so that system performance does not suffer.
These structures describe the design and functionality of an operating system, offer a way of managing complexity, and help guarantee robust performance.
• CPU-I/O Burst Cycle: Process execution alternates between CPU bursts and I/O waits. A process with long CPU bursts is CPU-bound, while one with short CPU bursts and frequent I/O is I/O-bound. The cycle is very important in studying the effects of scheduling.
• Pre-emptive vs non-pre-emptive Scheduling: In non-pre-emptive scheduling, the CPU remains allocated to a process until that process leaves the running state. In pre-emptive scheduling, the operating system may force a process off the CPU so that another can run, significantly enhancing the system's responsiveness.
• Dispatcher: It performs the context switch that hands the CPU from one process to another. A fast dispatcher cuts down dispatch latency and thereby improves the system's responsiveness.
2. Scheduling Algorithms
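As an illustration of a simple non-pre-emptive scheduling algorithm, the following minimal C sketch computes the average waiting time under First-Come, First-Served (FCFS); the burst times are made-up example data.

    #include <stdio.h>

    /* Hedged sketch: average waiting time under First-Come, First-Served
     * (non-pre-emptive). The burst times are made-up example data. */
    int main(void)
    {
        int burst[] = { 24, 3, 3 };             /* CPU burst of each process */
        int n = sizeof burst / sizeof burst[0];
        int wait = 0, total_wait = 0;

        for (int i = 0; i < n; i++) {
            total_wait += wait;   /* process i waits for all earlier bursts */
            wait += burst[i];
        }
        /* Waiting times here are 0, 24 and 27, so the average is 17. */
        printf("average waiting time = %.2f\n", (double)total_wait / n);
        return 0;
    }

The same data also shows why burst order matters: if the two short jobs ran first, the average waiting time would drop from 17 to 3.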
Q3. Discuss Interprocess Communication and critical-section problem along with use of
semaphores.
Ans.
Interprocess Communication and Critical Section Problem: Implementation Using
Semaphores
The critical-section problem arises when many processes must access shared resources. The program segment in which shared resources are accessed is called the critical section, and it must be protected against concurrent access by more than one process. A solution to this problem must satisfy three key requirements:
• Mutual Exclusion: Only one process can execute in its critical section at a time.
• Progress: If no process is in its critical section and some processes wish to enter, the decision on which enters next cannot be postponed indefinitely.
• Bounded Waiting: There is a bound on the number of times other processes may enter their critical sections after a process has requested entry and before that request is granted.
Two-process solutions, such as Algorithm 1 and Algorithm 2, tackle the critical section for a pair of processes. Algorithm 1 strictly alternates access between the two processes and therefore fails the progress requirement. Algorithm 2 improves on this by having each process flag its readiness, but it still does not ensure progress. Algorithm 3 combines ideas from both, using the flags together with a turn variable to meet all three requirements, allowing only one process into the critical section at a time while guaranteeing fairness, as sketched below.
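The logical structure of Algorithm 3 (Peterson's solution) can be sketched in C as follows; the variable names are illustrative, and the sketch omits the memory barriers a real multiprocessor implementation would need.

    #include <stdbool.h>

    /* Minimal sketch of Algorithm 3 (Peterson's solution) for two
     * processes i and j = 1 - i. Shows logical structure only. */
    bool flag[2] = { false, false };  /* flag[i]: process i wants to enter */
    int turn = 0;                     /* which process must yield */

    void enter_critical_section(int i)
    {
        int j = 1 - i;
        flag[i] = true;   /* announce intent to enter */
        turn = j;         /* give the other process priority */
        while (flag[j] && turn == j)
            ;             /* busy-wait until it is safe to proceed */
    }

    void exit_critical_section(int i)
    {
        flag[i] = false;  /* no longer interested */
    }

Because each process sets turn in the other's favour, both cannot pass the while loop at once (mutual exclusion), and neither can be blocked forever by the other (progress and bounded waiting).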
Semaphores are one of the basic synchronization techniques for controlling access to shared resources. They were proposed by Dijkstra as an abstract data type accessed only through atomic operations: P() (decrement, or wait), V() (increment, or signal), and Init() (initialization). A semaphore holds a count that controls access to a resource. If the count is zero, the resource is not free, so any process performing a P operation on that semaphore blocks until the resource is released. Semaphores avoid busy waiting by keeping a queue of blocked processes and waking one up once the resource has been released.
Semaphores are helpful because of their machine independence, simplicity, and ability to handle several processes and multiple critical sections. They guarantee mutual exclusion and bounded waiting without hardware dependence, which allows them to be applied to many synchronization scenarios, such as controlling access to a pool of resources.
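A minimal sketch of semaphore use with the POSIX API, where sem_wait() and sem_post() play the roles of P() and V(); the shared counter and iteration counts are illustrative.

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    sem_t mutex;            /* binary semaphore guarding the shared counter */
    int shared_counter = 0; /* the shared resource */

    void *worker(void *arg)
    {
        for (int k = 0; k < 100000; k++) {
            sem_wait(&mutex);   /* P(): block if count is zero, else decrement */
            shared_counter++;   /* critical section */
            sem_post(&mutex);   /* V(): increment count, wake a waiter */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        sem_init(&mutex, 0, 1);  /* Init(): count 1 gives mutual exclusion */
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %d\n", shared_counter);  /* reliably 200000 */
        sem_destroy(&mutex);
        return 0;
    }

Because the semaphore is initialized to 1, it behaves as a mutex: without the P/V pair around the increment, the two threads would interleave and the final count would be unpredictable.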
SET-2
Q4.
a. What is a Process Control Block? What information does it hold and why?
b. What is Thrashing? What are its causes?
Ans.
a. The Process Control Block (PCB) is a very important data structure in any operating system; one is maintained for every process in the system. In the simplest words, it is the repository of information that makes a process an active entity in the system. The PCB is central to process management and scheduling simply because all the information required to handle and manage the process is stored within the block itself.
It typically holds the process identifier, process state, program counter, saved CPU register contents, scheduling priority, memory-management information (such as page tables), I/O status (such as open files), and accounting information. This information is needed so that the OS can suspend a process, later resume it exactly where it left off, and make scheduling and resource-allocation decisions.
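A simplified C sketch of the kind of record a PCB might be is shown below; the field names are illustrative, and real kernels (for example, Linux's task_struct) hold far more fields.

    /* Simplified, illustrative PCB; field names are assumptions. */
    typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

    typedef struct pcb {
        int            pid;             /* unique process identifier */
        proc_state_t   state;           /* current scheduling state */
        unsigned long  program_counter; /* address of next instruction */
        unsigned long  registers[16];   /* saved CPU register contents */
        int            priority;        /* scheduling priority */
        void          *page_table;      /* memory-management information */
        int            open_files[16];  /* I/O status: open descriptors */
        unsigned long  cpu_time_used;   /* accounting information */
        struct pcb    *next;            /* link in a scheduling queue */
    } pcb_t;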
b. Thrashing is the state of a computer system in which a process spends more time paging, that is, moving data between RAM and disk storage, than executing actual instructions. This excessive paging activity causes a tremendous degradation in system performance, with drastic drops in throughput and CPU utilization. It occurs when a process does not have sufficient frames to hold its working set, inducing frequent page faults and the consequent continual swapping of the process's working set between memory and disk.
Causes of Thrashing:
• Insufficient Frame Allocation: If a process is not given enough frames to hold its working set, it generates page faults at a high rate because it keeps accessing pages that are not in memory at the time. This repeated paging wastes execution time.
• High Degree of Multiprogramming: The operating system may admit more concurrent processes to better utilize the CPU. However, if these additional processes compete for limited memory, they steal frames from one another; each process then suffers more frequent page faults, leading to thrashing.
• Global Page Replacement Policy: Under a global replacement policy, one process may capture frames from another. Such cross-process frame stealing can increase the page faults of all processes, exacerbating thrashing.
• Local Page Replacement Policy: Although this confines page faults to the process itself, thrashing can still occur when a process is allocated too few frames, since it cannot keep its working set in memory.
• High Page Fault Frequency: If page-fault rates get too high, processes need more frames. During thrashing, when processes repeatedly fault and swap pages, the system has no free frames left to give them.
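The response to a high page-fault frequency can be sketched as a simple control loop; the thresholds and the function below are hypothetical, shown only to illustrate the anti-thrashing logic.

    /* Hedged sketch of a page-fault-frequency (PFF) control policy.
     * Thresholds and helper are hypothetical illustrations. */
    #define PFF_UPPER 0.10  /* faults per reference: too high, add frames */
    #define PFF_LOWER 0.01  /* too low: frames can be reclaimed */

    void pff_adjust(double fault_rate, int *frames_allocated)
    {
        if (fault_rate > PFF_UPPER) {
            /* Process lacks frames for its working set: grant one more,
             * or suspend (swap out) a process if no free frames exist. */
            (*frames_allocated)++;
        } else if (fault_rate < PFF_LOWER && *frames_allocated > 1) {
            /* Process has more frames than it needs: reclaim one. */
            (*frames_allocated)--;
        }
    }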
Q5.
a. Discuss the different File Access Methods.
b. What are I/O Control Strategies?
Ans.
a. File access methods prescribe the mode of accessing information in files from secondary
storage. The three major ones are sequential access, direct access, and indexed sequential
access.
• Sequential access: Records are read in order, from first to last, similar to a tape. All preceding records must be processed before the target record can be accessed. This method is best for applications that need to process most records, such as transaction files.
• Direct access: Records are accessed directly by their key values, without processing the records in order. The storage medium is usually a disk, which supports rapid access to arbitrary blocks of data. This makes direct access very useful for database and reservation systems where response time is critical. Note that in many systems a file must be declared for either direct or sequential access when it is created, and direct access is not supported by all operating systems.
• Indexed Sequential Access: A hybrid technique that combines sequential and direct access. An index provides direct access to the file blocks; once a block is reached, its constituent records are read sequentially. The index allows quick location of data, so this approach balances the advantages of both sequential and direct access. A small illustration of the two underlying access modes follows.
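On a POSIX system, the difference between sequential and direct access can be illustrated with read() and lseek(); this is a minimal sketch, and the file name and record size are made-up example values.

    #include <fcntl.h>
    #include <unistd.h>

    #define REC_SIZE 128  /* fixed record length; illustrative */

    int main(void)
    {
        char rec[REC_SIZE];
        int fd = open("records.dat", O_RDONLY);  /* illustrative file name */
        if (fd < 0)
            return 1;

        /* Sequential access: each read() advances the file offset,
         * so records are delivered first to last. */
        while (read(fd, rec, REC_SIZE) == REC_SIZE)
            ;  /* process each record in order */

        /* Direct access: jump straight to record 42 by computing its
         * byte offset; no earlier records need to be read. */
        lseek(fd, 42L * REC_SIZE, SEEK_SET);
        read(fd, rec, REC_SIZE);

        close(fd);
        return 0;
    }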
1. Bus-Oriented Systems:
• Architecture: Common bus shared by processors and memory.
• Communication: Processors communicate through the shared bus.
• Challenges: Sharing the bus can lead to contention; this can be mitigated by giving each processor a cache with a high hit ratio, which improves performance.
• Scalability: Practical up to about 10 processors; simple to implement, hence popular.
2. Crossbar-Connected Systems:
• Architecture: Each processor is connected to all memory modules through a crossbar
switch.
• Communication: Concurrent accesses are supported as long as the processors refer to different memory modules.
• Challenges: Contention arises when several processors access the same memory module; careful data distribution reduces this problem.
• Scalability: The number of cross points grows quadratically (n²), which is expensive and therefore not very scalable.
3. Hyper-cubes:
• Architecture: The processors are located at the vertices of a hypercube.
• Communication: Each processor connects point-to-point to log₂N other processors, where N is the number of nodes.
• Scalability: Logarithmic growth in complexity. It is best for problems with recursive
structures or locality of reference.
• Advantages: It provides a potentially good base for highly scalable, large-scale
multiprocessors.
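The point-to-point structure can be made concrete: the neighbours of a node in a hypercube are exactly the nodes whose binary numbers differ from it in one bit, which is why each of the N nodes has log₂N links. A minimal illustrative C sketch follows; the node and dimension values are example data.

    #include <stdio.h>

    /* Illustrative sketch: in a d-dimensional hypercube (N = 2^d nodes),
     * node n links to the d nodes whose numbers differ from n in
     * exactly one bit. */
    void print_neighbours(unsigned node, unsigned d)
    {
        for (unsigned bit = 0; bit < d; bit++)
            printf("node %u <-> node %u\n", node, node ^ (1u << bit));
    }

    int main(void)
    {
        print_neighbours(5, 3);  /* in a 3-cube: 5 <-> 4, 7, 1 */
        return 0;
    }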
1. Separate Supervisors:
• Architecture: Each node has a processor that runs its own image of the operating system, with its own private memory and I/O devices.
• Example: Hypercube structures.
• Advantages: More parallelism is achieved due to the splitting of applications into subtasks
that execute in parallel at different nodes.
• Difficulties: More complicated services and data structures are required to support multiprocessing.
2. Master/Slave Systems:
• Architecture: One processor runs the operating system; all the others undertake computational tasks.
• Advantages: Easy to implement; uniprocessor systems can be ported to multiprocessor operation with little effort.
• Challenges: Scalability is restricted; the master processor is a single point of failure and devotes its processing power exclusively to control tasks.