unit5paper2AD
Operating System
Operating System lies in the category of system software. It basically manages all the
resources of the computer. An operating system acts as an interface between the
software and different parts of the computer or the computer hardware. The operating
system is designed in such a way that it can manage the overall resources and operations
of the computer.
Operating System is a fully integrated set of specialized programs that handle all the
operations of the computer. It controls and monitors the execution of all other programs
that reside in the computer, which also includes application programs and other system
software of the computer. Examples of Operating Systems are Windows, Linux, Mac OS,
etc.
An Operating System (OS) is a collection of software that manages computer hardware
resources and provides common services for computer programs. The operating system
is the most important type of system software in a computer system.
In a computer system (comprising hardware and software), the hardware can only understand machine code (in the form of 0s and 1s), which doesn't make any sense to a naive user.
We need a system which can act as an intermediary and manage all the processes and
resources present in the system.
A computer system can be divided into the following components:
• Users (people who are using the computer)
• Application Programs (Compilers, Databases, Games, Video player, Browsers,
etc.)
• System Programs (Shells, Editors, Compilers, etc.)
• Operating System ( A special program which acts as an interface between user
and hardware )
• Hardware ( CPU, Disks, Memory, etc)
Factors for Choosing an Operating System
• Price Factor: Price is one of the factors in choosing the correct Operating System, as some operating systems are free, like Linux, while others are paid, like Windows and macOS.
• Accessibility Factor: Some Operating Systems are easy to use, like macOS and iOS, while others are a little more complex to understand, like Linux. So, you should choose the Operating System you are most comfortable with.
• Compatibility Factor: Some Operating Systems support very few applications, whereas others support many more. You should choose an OS that supports the applications you require.
• Security Factor: Security is also a factor in choosing the correct OS; for example, macOS provides some additional security features, while Windows has somewhat fewer.
Examples of Operating Systems
• Windows (GUI-based, PC)
• GNU/Linux (personal computers, workstations, ISP servers, file and print servers, three-tier client/server systems)
• macOS (Macintosh), used for Apple’s personal computers and workstations
(MacBook, iMac).
• Android (Google’s Operating System for smartphones/tablets/smartwatches)
• iOS (Apple’s OS for iPhone, iPad, and iPod Touch)
Process Scheduling
In computing, a process is the instance of a computer program that is being executed by
one or many threads. Scheduling is important in many different computer environments.
One of the most important scheduling decisions is which program will run on the CPU. This task is handled by the Operating System (OS), and there are many different ways in which it can be configured.
What is Process Scheduling?
Process scheduling is the activity of the process manager that handles the removal of the
running process from the CPU and the selection of another process based on a particular
strategy.
Process scheduling is an essential part of a Multiprogramming operating system. Such
operating systems allow more than one process to be loaded into the executable memory
at a time and the loaded process shares the CPU using time multiplexing.
(Figure: the process scheduler)
Categories of Scheduling
Scheduling falls into one of two categories: non-preemptive, where a process keeps the CPU until it terminates or moves to a waiting state, and preemptive, where the OS can take the CPU away from a running process and allocate it to another.
Types of Process Schedulers
1. Long-Term Scheduler (Job Scheduler)
It brings the new process to the ‘Ready State’. It controls the Degree of Multiprogramming, i.e., the number of processes present in a ready state at any point in time. It is important that the long-term scheduler makes a careful selection of both I/O-bound and CPU-bound processes. I/O-bound tasks are those which use much of their time in input and output operations, while CPU-bound processes are those which spend their time on the CPU. The job scheduler increases efficiency by maintaining a balance between the two. It operates at a high level and is typically used in batch-processing systems.
2. Short-Term Scheduler (CPU Scheduler)
It is responsible for selecting one process from the ready state and scheduling it onto the running state. Note: the short-term scheduler only selects the process to schedule; it doesn’t load the process into the running state. This is where all the scheduling algorithms are used. The CPU scheduler is responsible for ensuring that there is no starvation due to processes with high burst times. The dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler, which involves:
• Switching context.
• Switching to user mode.
• Jumping to the proper location in the newly loaded program.
3. Medium-Term Scheduler
It is responsible for suspending and resuming the process. It mainly does swapping
(moving processes from main memory to disk and vice versa). Swapping may be
necessary to improve the process mix or because a change in memory requirements has
overcommitted available memory, requiring memory to be freed up. It is helpful in
maintaining a perfect balance between the I/O bound and the CPU bound. It reduces the
degree of multiprogramming.
Comparison of Schedulers
• Long-Term Scheduler: It is a job scheduler. Generally, its speed is slower than that of the short-term scheduler. It is barely present or nonexistent in a time-sharing system.
• Short-Term Scheduler: It is a CPU scheduler. Its speed is the fastest among all of them. It plays a minimal part in a time-sharing system.
• Medium-Term Scheduler: It is a process-swapping scheduler. Its speed lies in between the short-term and long-term schedulers. It is a component of time-sharing systems.
Context Switching
Context switching is the mechanism of storing and restoring the state (context) of a CPU in the Process Control Block, so that a process's execution can be resumed from the same point at a later time. A context switcher makes it possible for multiple processes to share a single CPU using this method. Context switching is an essential feature of a multitasking operating system. During a context switch, the following information is stored in the process control block of the running process (a small sketch follows the list):
• Program Counter
• Scheduling information
• The base and limit register value
• Currently used register
• Changed State
• I/O State information
• Accounting information
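As a rough illustration of the idea, here is a minimal Python sketch (not an OS implementation) that models how a scheduler might save the context of the running process into its PCB and restore another; the PCB fields, the `cpu` dictionary, and the function names are illustrative assumptions, not part of any real kernel API.

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Illustrative process control block holding a saved CPU context."""
    pid: int
    program_counter: int = 0
    registers: dict = field(default_factory=dict)
    state: str = "ready"              # ready, running, waiting, ...

def context_switch(cpu, old: PCB, new: PCB):
    """Save the CPU context into the old PCB and load the new one."""
    # Save the state of the currently running process.
    old.program_counter = cpu["pc"]
    old.registers = dict(cpu["regs"])
    old.state = "ready"
    # Restore the state of the process selected by the scheduler.
    cpu["pc"] = new.program_counter
    cpu["regs"] = dict(new.registers)
    new.state = "running"

cpu = {"pc": 104, "regs": {"r0": 7}}
p1 = PCB(pid=1, program_counter=104, registers={"r0": 7})
p2 = PCB(pid=2, program_counter=200)
context_switch(cpu, p1, p2)
print(cpu["pc"])   # 200 -> execution continues where process 2 left off
```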
Inter-Process Communication (IPC)
Definition
Inter-process communication (IPC) is a mechanism that allows processes to communicate with each other and synchronize their actions without sharing the same address space. It can be seen as a method of cooperation between processes.
Synchronization in Inter-Process Communication
Synchronization is one of the essential parts of inter-process communication. Typically, it is provided by the inter-process communication control mechanisms, but sometimes it can also be handled by the communicating processes themselves.
The following methods are used to provide synchronization:
1. Mutual Exclusion
2. Semaphore
3. Barrier
4. Spinlock
Mutual Exclusion:-
It is generally required that only one process or thread can enter the critical section at a time. This helps in synchronization and creates a stable state, avoiding race conditions.
Semaphore:-
A semaphore is a type of variable that controls access to shared resources by several processes. Semaphores are further divided into two types (a minimal usage sketch follows the list):
1. Binary Semaphore
2. Counting Semaphore
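As a minimal sketch, the following uses Python's threading.Semaphore as a counting semaphore that limits how many threads may use a shared resource at once; the worker names, sleep time, and the count of 2 are illustrative assumptions (a binary semaphore would simply use an initial value of 1).

```python
import threading
import time

# A counting semaphore initialised to 2: at most two threads may
# hold the resource at the same time.
slots = threading.Semaphore(2)

def worker(name):
    with slots:                      # wait (P) on entry, signal (V) on exit
        print(f"{name} acquired a slot")
        time.sleep(0.1)              # simulate use of the shared resource
    print(f"{name} released its slot")

threads = [threading.Thread(target=worker, args=(f"T{i}",)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```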
Barrier:-
A barrier does not allow an individual process to proceed until all of the processes reach it. It is used by many parallel languages, and collective routines impose barriers.
Spinlock:-
A spinlock is a type of lock, as its name implies. A process trying to acquire the spinlock waits, or "spins", in a loop while repeatedly checking whether the lock is available. This is known as busy waiting because, even though the process is active, it does not perform any useful work while it spins.
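A real spinlock relies on an atomic test-and-set instruction; as a hedged illustration of the busy-waiting idea only, the sketch below repeatedly tries a non-blocking acquire on an ordinary Python lock. The `spin_acquire` helper and the shared counter are hypothetical names introduced for this example.

```python
import threading

lock = threading.Lock()

def spin_acquire(lock):
    # Busy waiting: keep trying to grab the lock without blocking.
    # A real spinlock would use an atomic test-and-set CPU instruction.
    while not lock.acquire(blocking=False):
        pass                         # spin: the thread stays active but does no useful work

def increment(shared):
    spin_acquire(lock)
    try:
        shared["value"] += 1         # critical section
    finally:
        lock.release()

shared = {"value": 0}
threads = [threading.Thread(target=increment, args=(shared,)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared["value"])               # 8
```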
The following approaches are used for inter-process communication:
1. Pipes
2. Shared Memory
3. Message Queue
4. Direct Communication
5. Indirect Communication
6. Message Passing
7. FIFO
Pipe:-
A pipe is a type of data channel that is unidirectional in nature, meaning that data in this channel can move in only a single direction at a time. Still, two channels of this type can be used so that two processes can both send and receive data. Typically, a pipe uses the standard methods for input and output. Pipes are used in all types of POSIX systems and in different versions of Windows operating systems as well.
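As a minimal sketch (POSIX-only, since it uses fork), the following shows a unidirectional pipe between a parent and a child process: the parent writes into one end and the child reads from the other. The message text and buffer size are illustrative.

```python
import os

# A pipe is unidirectional: here the parent writes, the child reads.
read_fd, write_fd = os.pipe()

pid = os.fork()                      # POSIX only (Linux, macOS)
if pid == 0:                         # child process
    os.close(write_fd)               # child only reads
    data = os.read(read_fd, 1024)
    print("child received:", data.decode())
    os._exit(0)
else:                                # parent process
    os.close(read_fd)                # parent only writes
    os.write(write_fd, b"hello from the parent")
    os.close(write_fd)
    os.waitpid(pid, 0)
```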
Shared Memory:-
Shared memory is a region of memory that can be accessed simultaneously by multiple processes; the processes exchange data by reading from and writing to this common region instead of passing it through the kernel.
Message Queue:-
In general, several different processes are allowed to read and write data to the message queue. The messages are stored in the queue until their recipients retrieve them. In short, the message queue is very helpful for inter-process communication and is used by all operating systems.
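As a minimal sketch of both ideas using Python's multiprocessing module, a shared counter (a value living in shared memory) is written directly by one process, while a Queue is used as a simple message queue between processes; the producer/consumer names and the single work item are illustrative.

```python
from multiprocessing import Process, Value, Queue

def producer(shared_counter, queue):
    with shared_counter.get_lock():
        shared_counter.value += 1            # write directly into shared memory
    queue.put("work item")                   # the message stays in the queue until read

def consumer(queue):
    print("received:", queue.get())          # retrieve the message from the queue

if __name__ == "__main__":
    counter = Value("i", 0)                  # an int living in shared memory
    q = Queue()
    p1 = Process(target=producer, args=(counter, q))
    p2 = Process(target=consumer, args=(q,))
    p1.start(); p2.start()
    p1.join(); p2.join()
    print("shared counter:", counter.value)  # 1
```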
Message Passing:-
In message passing, processes communicate with each other without using any kind of shared memory. If two processes want to communicate, they establish a communication link and exchange messages using two basic operations:
• send (message)
• receive (message)
Direct Communication:-
In direct communication, a link is established between exactly one pair of communicating processes, and each process that wants to communicate must explicitly name the sender or receiver of the message.
Indirect Communication
Indirect communication can only be established when processes share a common mailbox, and each pair of processes may share several communication links. These shared links can be unidirectional or bi-directional.
FIFO:-
A FIFO (named pipe) is used for communication between two processes that are not related to each other; unlike an ordinary pipe, it has a name in the file system, so unrelated processes can open and use it.
Some other approaches to inter-process communication are:
• Socket:-
A socket acts as an endpoint for sending or receiving data over a network. It can be used for data sent between processes on the same computer or between different computers on the same network. Hence, it is used by several types of operating systems.
• File:-
A file is a data record or a document stored on disk that can be acquired on demand by the file server. Several processes can access the same file as required.
• Signal:-
As the name implies, signals are used in inter-process communication in a minimal way. Typically, they are system messages sent by one process to another. Therefore, they are not used for sending data but for issuing remote commands between processes.
There are numerous reasons to use inter-process communication for sharing data, such as information sharing between cooperating processes, computation speedup, modularity, and convenience.
Note: IPC cannot be considered a solution to all problems, but what is important is that it does its job very well.
Advantages of IPC:
1. Enables processes to communicate with each other and share resources, leading to
increased efficiency and flexibility.
2. Facilitates coordination between multiple processes, leading to better overall system
performance.
3. Allows for the creation of distributed systems that can span multiple computers or
networks.
4. Can be used to implement various synchronization and communication protocols,
such as semaphores, pipes, and sockets.
Disadvantages of IPC:
1. Increases system complexity, making the system harder to design, implement, and debug.
2. Message passing and synchronization introduce performance overhead.
3. Incorrect use can lead to synchronization problems such as race conditions and deadlocks.
Deadlock Detection and Recovery
Deadlock detection and recovery is the process of detecting and resolving deadlocks in
an operating system. A deadlock occurs when two or more processes are blocked,
waiting for each other to release the resources they need. This can lead to a system-
wide stall, where no process can make progress.
1. Prevention: The operating system takes steps to prevent deadlocks from occurring
by ensuring that the system is always in a safe state, where deadlocks cannot occur.
This is achieved through resource allocation algorithms such as the Banker’s
Algorithm.
2. Detection and Recovery: If deadlocks do occur, the operating system must detect
and resolve them. Deadlock detection algorithms, such as the Wait-For Graph, are
used to identify deadlocks, and recovery algorithms, such as the Rollback and Abort
algorithm, are used to resolve them. The recovery algorithm releases the resources
held by one or more processes, allowing the system to continue to make progress.
Difference Between Prevention and Detection/Recovery: Prevention aims to avoid
deadlocks altogether by carefully managing resource allocation, while detection and
recovery aim to identify and resolve deadlocks that have already occurred.
Deadlock detection and recovery is an important aspect of operating system design and
management, as it affects the stability and performance of the system. The choice of
deadlock detection and recovery approach depends on the specific requirements of the
system and the trade-offs between performance, complexity, and risk tolerance. The
operating system must balance these factors to ensure that deadlocks are effectively
detected and resolved.
Deadlock prevention and avoidance are discussed later in this section; the deadlock detection and recovery technique for handling deadlock is discussed first.
Deadlock Detection
1. If resources have a single instance –
In this case for Deadlock detection, we can run an algorithm to check for the cycle in the
Resource Allocation Graph. The presence of a cycle in the graph is a sufficient
condition for deadlock.
For example, consider a resource allocation graph in which resource 1 and resource 2 each have a single instance and there is a cycle R1 → P1 → R2 → P2 → R1. Since a cycle exists and every resource has only one instance, deadlock is confirmed.
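As a minimal sketch of this check, the following detects a cycle in a resource allocation graph with depth-first search; the adjacency-list encoding and node names simply reproduce the cycle described above and are illustrative.

```python
# A resource allocation graph as adjacency lists. Edges R -> P mean
# "resource R is allocated to process P"; edges P -> R mean "P requests R".
# The graph below encodes the cycle R1 -> P1 -> R2 -> P2 -> R1.
graph = {
    "R1": ["P1"],
    "P1": ["R2"],
    "R2": ["P2"],
    "P2": ["R1"],
}

def has_cycle(graph):
    """Depth-first search; reaching a node already on the current path means a cycle."""
    visited, on_path = set(), set()

    def dfs(node):
        visited.add(node)
        on_path.add(node)
        for nxt in graph.get(node, []):
            if nxt in on_path:
                return True
            if nxt not in visited and dfs(nxt):
                return True
        on_path.remove(node)
        return False

    return any(dfs(n) for n in graph if n not in visited)

print(has_cycle(graph))   # True -> with single-instance resources, deadlock is confirmed
```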
Advantages of Deadlock Detection and Recovery:
1. Improved System Stability: Deadlocks can cause system-wide stalls, and detecting and resolving deadlocks can help to improve the stability of the system.
2. Better Resource Utilization: By detecting and resolving deadlocks, the operating system can ensure that resources are efficiently utilized and that the system remains responsive to user requests.
3. Better System Design: Deadlock detection and recovery algorithms can provide insight into the behavior of the system and the relationships between processes and resources, helping to inform and improve the design of the system.
Disadvantages of Deadlock Detection and Recovery:
1. Performance Overhead: Deadlock detection and recovery algorithms can introduce a significant overhead in terms of performance, as the system must regularly check for deadlocks and take appropriate action to resolve them.
2. Complexity: Deadlock detection and recovery algorithms can be complex to implement, especially if they use advanced techniques such as the Resource Allocation Graph or Timestamping.
3. False Positives and Negatives: Deadlock detection algorithms are not perfect and may produce false positives or negatives, indicating the presence of deadlocks when they do not exist or failing to detect deadlocks that do exist.
4. Risk of Data Loss: In some cases, recovery algorithms may require rolling back the state of one or more processes, leading to data loss or corruption.
Overall, the choice of deadlock detection and recovery approach depends on the
specific requirements of the system, the trade-offs between performance, complexity,
and accuracy, and the risk tolerance of the system. The operating system must balance
these factors to ensure that deadlocks are effectively detected and resolved.
Handling Deadlocks
1. Deadlock Prevention: The idea is to ensure that at least one of the necessary conditions for deadlock can never hold.
• Hold and wait: If a process that is holding some resources requests another resource that cannot be immediately allocated to it, all resources currently being held are released and, if necessary, requested again together with the additional resource.
• No preemption: If a process requests a resource that is currently held by another process, the OS may preempt the second process and require it to release its resources. This works only if both processes do not have the same priority.
• Circular wait: One way to ensure that this condition never holds is to impose a total ordering of all resource types and to require that each process requests resources in increasing order of enumeration, i.e., if a process has been allocated resources of type R, then it may subsequently request only resources of types following R in the ordering.
2. Deadlock Avoidance: The deadlock avoidance algorithm works by proactively looking for potential deadlock situations before they occur. It does this by tracking the resource usage of each process and identifying conflicts that could potentially lead to a deadlock. If a potential deadlock is identified, the algorithm takes steps to resolve the conflict, such as rolling back one of the processes or pre-emptively allocating resources to other processes. The deadlock avoidance algorithm is designed to minimize the chances of a deadlock occurring, although it cannot guarantee that a deadlock will never occur. This approach allows the three necessary conditions of deadlock but makes judicious choices to ensure that the deadlock point is never reached; it therefore allows more concurrency than prevention. A decision is made dynamically whether the current resource allocation request will, if granted, potentially lead to deadlock. It requires knowledge of future process requests. Two techniques to avoid deadlock (a sketch of the resource-allocation-denial check follows the list) are:
1. Process initiation denial
2. Resource allocation denial
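Resource allocation denial is the idea behind the Banker's Algorithm mentioned earlier. As a hedged sketch, the following implements only its safe-state check: a request is granted only if some ordering of the processes can still run to completion afterwards. The matrices below are the classic illustrative example, not data from this text.

```python
def is_safe(available, allocation, need):
    """Return True if every process can finish in some order (a safe state)."""
    work = available[:]                       # resources currently free
    finished = [False] * len(allocation)
    while True:
        progressed = False
        for i, done in enumerate(finished):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # Process i can finish with what is free; reclaim its resources.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progressed = True
        if not progressed:
            return all(finished)              # safe only if every process can finish

available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
print(is_safe(available, allocation, need))   # True -> the state is safe
```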
Advantages of deadlock avoidance techniques:
• This technique does not limit resource access or restrict process action.
• Requested resources are granted to processes whenever possible.
• It never delays the process initiation and facilitates online handling.
• The disadvantage is the inherent pre-emption losses.
3. Deadlock Detection and Recovery: As discussed above, the OS lets deadlocks occur, detects them, and then recovers from them.
4. Deadlock Ignorance: In the deadlock ignorance method, the OS acts as if a deadlock never occurs and completely ignores it, even if a deadlock does occur. This method is applicable only if deadlocks occur very rarely. The algorithm is very simple: "if a deadlock occurs, simply reboot the system and act like the deadlock never occurred." That's why the algorithm is called the Ostrich Algorithm.
Disadvantages:
• The Ostrich Algorithm does not provide any information about the deadlock situation.
• It can lead to reduced performance of the system, as the system may be blocked for a long time.
• It can lead to resource leaks, as resources are not released while the system is blocked due to deadlock.
The term memory can be defined as a collection of data in a specific format. It is used to
store instructions and process data. The memory comprises a large array or group of
words or bytes, each with its own location. The primary purpose of a computer system is
to execute programs. These programs, along with the information they access, should be
in the main memory during execution. The CPU fetches instructions from memory
according to the value of the program counter.
To achieve a degree of multiprogramming and proper utilization of memory, memory
management is important. Many memory management methods exist, reflecting various
approaches, and the effectiveness of each algorithm depends on the situation.
The following topics are covered below: main memory and memory management, logical and physical address space, static and dynamic loading and linking, swapping, contiguous memory allocation, fragmentation, and paging.
Main Memory
What is Memory Management?
In a multiprogramming computer, the Operating System resides in a part of memory, and
the rest is used by multiple processes. The task of subdividing the memory among
different processes is called Memory Management. Memory management is a method in
the operating system to manage operations between main memory and disk during
process execution. The main aim of memory management is to achieve efficient utilization
of memory.
Why Memory Management is Required?
• To allocate and de-allocate memory before and after process execution.
• To keep track of the memory space used by processes.
• To minimize fragmentation issues.
• To ensure proper utilization of main memory.
• To maintain data integrity while a process executes.
Now we are discussing the concept of Logical Address Space and Physical Address
Space
Logical and Physical Address Space
• Logical Address Space: An address generated by the CPU is known as a “Logical Address”. It is also known as a virtual address. Logical address space can be defined as the size of the process. A logical address can be changed.
• Physical Address Space: An address seen by the memory unit (i.e., the one loaded into the memory address register of the memory) is commonly known as a “Physical Address”. A physical address is also known as a real address. The set of all physical addresses corresponding to the logical addresses is known as the physical address space. A physical address is computed by the MMU: the run-time mapping from virtual to physical addresses is done by a hardware device called the Memory Management Unit (MMU). The physical address always remains constant. (A small address-translation sketch follows this list.)
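As a minimal sketch of how an MMU with a relocation (base) register and a limit register could map a CPU-generated logical address to a physical address, consider the following; the register values and the function name are illustrative assumptions.

```python
def mmu_translate(logical_address, base, limit):
    """Map a logical address to a physical address using base and limit registers."""
    if logical_address >= limit:
        raise MemoryError("trap: logical address outside the process's address space")
    return base + logical_address    # relocation: physical = base + logical

base, limit = 14000, 3000            # process loaded at 14000, size 3000 bytes
print(mmu_translate(346, base, limit))     # 14346
# mmu_translate(4000, base, limit) would trap: the address is beyond the limit register
```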
Static and Dynamic Loading
Loading a process into the main memory is done by a loader. There are two different
types of loading :
• Static Loading: Static Loading is basically loading the entire program into a fixed
address. It requires more memory space.
• Dynamic Loading: If the entire program and all data of a process must be in physical memory for the process to execute, the size of a process is limited to the size of physical memory. To gain proper memory utilization, dynamic loading is used. In dynamic loading, a routine is not loaded until it is called. All routines reside on disk in a relocatable load format. One of the advantages of dynamic loading is that an unused routine is never loaded. This is useful when a large amount of code is needed to handle infrequently occurring cases efficiently.
Static and Dynamic Linking
To perform a linking task a linker is used. A linker is a program that takes one or more
object files generated by a compiler and combines them into a single executable file.
• Static Linking: In static linking, the linker combines all necessary program modules
into a single executable program. So there is no runtime dependency. Some
operating systems support only static linking, in which system language libraries are
treated like any other object module.
• Dynamic Linking: The basic concept of dynamic linking is similar to dynamic
loading. In dynamic linking, “Stub” is included for each appropriate library routine
reference. A stub is a small piece of code. When the stub is executed, it checks
whether the needed routine is already in memory or not. If not available then the
program loads the routine into memory.
Swapping
When a process is executed, it must reside in main memory. Swapping is the process of temporarily moving a process from main memory to secondary storage, which is slower than main memory. Swapping allows more processes to be run and fit into memory at one time. The main cost of swapping is the transfer time, and the total transfer time is directly proportional to the amount of memory swapped. Swapping is also known as roll-out/roll-in, because if a higher-priority process arrives and wants service, the memory manager can swap out a lower-priority process and then load and execute the higher-priority process. After the higher-priority work has finished, the lower-priority process is swapped back into memory and continues its execution.
(Figure: swapping in memory management)
Memory Management with Monoprogramming (Without Swapping)
This is the simplest memory management approach: the memory is divided into two sections, one for the operating system and one for the user program.
• In this approach, the operating system keeps track of the first and last locations available for the allocation of the user program.
• The operating system is loaded either at the bottom or at the top of memory.
• Interrupt vectors are often located in low memory; therefore, it makes sense to load the operating system in low memory.
• Sharing of data and code does not make much sense in a single-process environment.
• The operating system can be protected from user programs with the help of a fence register.
(Figure: memory divided into partitions holding the Operating System and processes p1–p4)
Partition Table
Once partitions are defined, the operating system keeps track of the status of the memory partitions; this is done through a data structure called a partition table.
Sample partition table entry: 0k–200k, allocated.
Contiguous Memory Allocation
Memory Allocation
To gain proper memory utilization, memory must be allocated in an efficient manner. One of the simplest methods for allocating memory is to divide memory into several fixed-sized partitions, where each partition contains exactly one process. Thus, the degree of multiprogramming is determined by the number of partitions.
• Fixed partition allocation: In this method, a process is selected from the input queue and loaded into a free partition. When the process terminates, the partition becomes available for other processes.
• Multiple (variable) partition allocation: In this method, the operating system maintains a table that indicates which parts of memory are available and which are occupied by processes. Initially, all memory is available for user processes and is considered one large block of available memory, known as a “hole”. When a process arrives and needs memory, we search for a hole that is large enough to store it. If one is found, memory is allocated to the process and the rest is kept available to satisfy future requests. While allocating memory, a dynamic storage allocation problem arises: how to satisfy a request of size n from a list of free holes. There are some solutions to this problem:
First Fit
In the first fit, the first available free hole that fulfils the requirement of the process is allocated.
For example, a 40 KB memory block is the first available free hole that can store process A (size 25 KB), because the first two blocks do not have sufficient memory space.
Best Fit
In the best fit, the smallest hole that is big enough for the process's requirement is allocated. For this, we search the entire list, unless the list is ordered by size.
For example, after traversing the complete list, the last hole of 25 KB is found to be the best suitable hole for process A (size 25 KB). In this method, memory utilization is maximum compared to the other memory allocation techniques.
Worst Fit
In the worst fit, the largest available hole is allocated to the process. This method produces the largest leftover hole.
For example, process A (size 25 KB) is allocated to the largest available memory block, which is 60 KB. Inefficient memory utilization is a major issue in the worst fit. A small sketch comparing the three strategies follows.
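As a minimal sketch of the three placement strategies, the code below picks a hole for a 25 KB request from a list of free holes; the hole sizes are illustrative, chosen only to match the examples above.

```python
# Free "holes" in memory (in KB): the first two are too small, then 40, 60 and 25.
holes = [10, 15, 40, 60, 25]

def first_fit(holes, size):
    # Return the index of the first hole large enough for the request.
    return next((i for i, h in enumerate(holes) if h >= size), None)

def best_fit(holes, size):
    # Smallest hole that still fits the request.
    candidates = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(candidates)[1] if candidates else None

def worst_fit(holes, size):
    # Largest hole, which leaves the largest leftover fragment.
    candidates = [(h, i) for i, h in enumerate(holes) if h >= size]
    return max(candidates)[1] if candidates else None

request = 25
print("first fit ->", holes[first_fit(holes, request)], "KB hole")   # 40
print("best fit  ->", holes[best_fit(holes, request)], "KB hole")    # 25
print("worst fit ->", holes[worst_fit(holes, request)], "KB hole")   # 60
```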
Fragmentation
Fragmentation occurs when processes are loaded into and removed from memory after execution, leaving behind small free holes. These holes cannot be assigned to new processes because they are not combined or do not fulfill the memory requirements of a process. To maintain the degree of multiprogramming, we must reduce this waste of memory, i.e., the fragmentation problem. In operating systems there are two types of fragmentation:
1. Internal fragmentation: Internal fragmentation occurs when a memory block allocated to a process is larger than its requested size. Due to this, some unused space is left over, creating an internal fragmentation problem. Example: Suppose fixed partitioning is used for memory allocation and there are blocks of 3 MB, 6 MB, and 7 MB in memory. Now a new process p4 of size 2 MB arrives and demands a block of memory. It gets a memory block of 3 MB, but 1 MB of that block is wasted and cannot be allocated to other processes either. This is called internal fragmentation.
2. External fragmentation: In external fragmentation, we have free memory blocks, but we cannot assign them to a process because the blocks are not contiguous. Example: Suppose (continuing the above example) three processes p1, p2, and p3 arrive with sizes 2 MB, 4 MB, and 7 MB respectively. They are allocated memory blocks of sizes 3 MB, 6 MB, and 7 MB respectively. After allocation, the p1 and p2 blocks leave 1 MB and 2 MB unused respectively. Suppose a new process p4 arrives and demands a 3 MB block of memory, which is available in total, but we cannot assign it because the free memory space is not contiguous. This is called external fragmentation.
Both the first-fit and best-fit systems for memory allocation are affected by external
fragmentation. To overcome the external fragmentation problem Compaction is used. In
the compaction technique, all free memory space combines and makes one large block.
So, this space can be used by other processes effectively.
Another possible solution to the external fragmentation is to allow the logical address
space of the processes to be noncontiguous, thus permitting a process to be allocated
physical memory wherever the latter is available.
Paging
Paging is a memory management scheme that eliminates the need for a contiguous
allocation of physical memory. This scheme permits the physical address space of a
process to be non-contiguous.
• The Physical Address Space is conceptually divided into several fixed-size blocks,
called frames.
• The Logical Address Space is also split into fixed-size blocks, called pages.
• Page Size = Frame Size
The address generated by the CPU is divided into (a small translation sketch follows the list):
• Page Number (p): used as an index into the page table, which contains the base address of each frame in physical memory.
• Page Offset (d): the offset within the page, which is combined with the frame's base address to form the physical memory address.
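As a minimal sketch of this translation, the code below splits a logical address into a page number and offset, looks the page up in a page table, and builds the physical address; the page size and the page-table entries are illustrative assumptions.

```python
PAGE_SIZE = 1024                      # frame size = page size (illustrative)

# Page table for one process: page number -> frame number (illustrative values).
page_table = {0: 5, 1: 2, 2: 7}

def translate(logical_address):
    page_number = logical_address // PAGE_SIZE     # the 'p' part of the address
    offset      = logical_address %  PAGE_SIZE     # the 'd' part of the address
    frame       = page_table[page_number]          # page-table lookup
    return frame * PAGE_SIZE + offset              # physical address

print(translate(1500))    # page 1, offset 476 -> frame 2 -> 2*1024 + 476 = 2524
```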
Multiprogramming
As the name suggests, in multiprogramming more than one program can be active at the same time. Before the concept of multiprogramming, there were single-tasking operating systems like MS-DOS that allowed only one program to be loaded and run at a time. These systems were not efficient, as the CPU was not used well; for example, in a single-tasking system, if the current program waits for some input/output to finish, the CPU sits idle. The idea of multiprogramming is to assign the CPU to another process while the current process is waiting. This has the advantages below.
1) The user gets the feeling that he/she can run multiple applications on a single CPU, even though the CPU is running only one process at a time.
2) The CPU is utilized better.
All modern operating systems like MS Windows, Linux, etc., are multiprogramming operating systems.
Features of Multiprogramming
1. Needs only a single CPU for implementation.
2. Context switching between processes.
3. Switching happens when the current process enters a waiting state.
4. CPU idle time is reduced.
5. High resource utilization.
6. High performance.
Disadvantages of Multiprogramming
1. Prior knowledge of scheduling algorithms (an algorithm that decides which process will next get hold of the CPU) is required.
2. If there is a large number of jobs, long jobs will have to wait a long time.
3. Memory management is needed in the operating system because all types of tasks are stored in the main memory.
4. Using multiprogramming to a larger extent can cause heat-up issues.
Scheduling algorithms are of two types (a small round-robin sketch follows).
1. Preemptive scheduling algorithm: In preemptive scheduling, the CPU can be taken away from the currently running process, for example when its time quantum expires or a higher-priority process arrives, and given to another process.
2. Non-preemptive scheduling algorithm: In non-preemptive scheduling, once a process gets the CPU, it keeps it until it terminates or switches to the waiting state.
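As a minimal simulation of one preemptive algorithm, round robin, the sketch below runs each process for at most one time quantum and puts it back in the ready queue if it is not finished; the process names, burst times, and quantum are illustrative.

```python
from collections import deque

def round_robin(processes, quantum):
    """processes: list of (name, burst_time). Returns the order of CPU slices."""
    ready = deque(processes)
    timeline = []
    while ready:
        name, remaining = ready.popleft()
        run = min(quantum, remaining)             # pre-empt after one time quantum
        timeline.append((name, run))
        remaining -= run
        if remaining > 0:
            ready.append((name, remaining))       # back to the ready queue
    return timeline

print(round_robin([("P1", 5), ("P2", 3), ("P3", 1)], quantum=2))
# [('P1', 2), ('P2', 2), ('P3', 1), ('P1', 2), ('P2', 1), ('P1', 1)]
```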
How do Multiprogramming Operating Systems Work?
In a multiprogramming system, multiple programs are stored in memory, and each program is given a specific portion of memory; a program in memory is known as a process. The operating system handles all these processes and their states. Before execution, the operating system selects a ready process by checking which process should be executed next. While the chosen process is executing on the CPU, it may need an input/output operation; at that point the process leaves the CPU for the I/O operation (it may be temporarily moved to secondary storage) and the CPU switches to the next ready process. When the process that went for the I/O operation is ready again after completing its work, the CPU can switch back to it. This switching happens so fast and so repeatedly that it creates an illusion of simultaneous execution.
I/O Operations and Scheduling
I/O operations offer a centralized point of control for managing connectivity in your active I/O configurations. In addition to allowing you to view and change the paths between a processor and an input/output device, which may involve dynamic switching, they also help in identifying unusual I/O conditions. Before understanding I/O scheduling, it is important to get an overview of I/O operations. Some of the disk scheduling algorithms are listed below (a short C-LOOK sketch follows the list):
1. N-Step SCAN: It holds all the pending requests until the arm starts its way back. New requests are grouped for the next cycle of rotation.
2. C-SCAN (Circular SCAN): It provides a uniform wait time, as the arm serves requests only on its way during the inward cycle.
3. C-LOOK (optimized version of C-SCAN): The arm doesn’t necessarily return to the lowest-numbered track; it returns only as far as the lowest request to be served. It optimizes C-SCAN, as the arm doesn’t move to the end of the disk if it is not required.
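As a minimal sketch of the C-LOOK service order, the function below serves all requests at or above the current head position in increasing order, then jumps back to the lowest pending request; the pending track numbers and head position are illustrative.

```python
def c_look(requests, head):
    """Serve requests in one direction, then jump back to the lowest pending request."""
    upper = sorted(r for r in requests if r >= head)   # served on the way up
    lower = sorted(r for r in requests if r < head)    # served after jumping back
    return upper + lower

pending = [82, 170, 43, 140, 24, 16, 190]
print(c_look(pending, head=50))
# [82, 140, 170, 190, 16, 24, 43] -> the arm returns to the lowest request, not to track 0
```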
File Management
File management in Operating Systems is a fundamental and crucial component. The
operating system manages computer system files. Operating systems control all files
with various extensions.
The operating system’s file system can manage single and group files in a computer
system. The operating system’s file management manages all of the files on the
computer system with different extensions(such as .exe, .pdf, .txt, .docx, etc.).
We can also use the file system in the operating system to get details of any file present on our system. The details can be:
• the location of the file (the logical location where the file is stored in the computer system)
• the owner of the file (who can write to or read the particular file)
• when the file was created (the time of file creation and modification)
• the type of the file (the format of the file, for example, docs, pdf, text, etc.)
• the state of completion of the file, etc.
For file management, or for the operating system to understand a file, the file must be in a predefined structure or format. There are three types of file structures present in operating systems:
1. Text file: A text file is a non-executable file containing a sequence of symbols, numbers, and letters organized in the form of lines.
2. Source file: A source file is a file that contains a series of subroutines and functions; in simpler terms, a source file contains the instructions of a program.
3. Object file: An object file is a file that contains object code in the form of assembly language or machine language code. In simpler terms, an object file contains program instructions organized as a series of bytes in blocks.
What is a Distributed Operating System?
A distributed operating system manages a group of independent, networked computers and makes them appear to users as a single coherent system. Distributed systems are commonly organized using the following models:
1. Client-Server Systems
In the client-server model, server nodes provide resources or services, and client nodes request them over the network.
• This model allows for scalable resource utilization, efficient sharing, modular
development, centralized control, and fault tolerance.
• It facilitates collaboration between distributed entities, promoting the development of
reliable, scalable, and interoperable distributed systems.
2. Peer-to-Peer (P2P) Systems
In peer-to-peer systems, all nodes are equal: each node can act as both a client and a server, sharing resources and services directly with other nodes without a central server.
3. Middleware
Middleware is a software layer that sits between applications and the underlying operating systems, enabling communication, interoperability, and data management across the different nodes of a distributed system.
4. Three-Tier
In a three-tier architecture, applications are organized into a presentation tier (user interface), an application or logic tier (business logic), and a data tier.
• The data tier manages storage and retrieval operations, often employing distributed
databases or file systems across multiple nodes.
• This modular approach enables scalability, fault tolerance, and efficient resource
utilization, making it ideal for distributed computing environments.
5. N-Tier
In an N-tier architecture, applications are structured into multiple tiers or layers beyond
the traditional three-tier model. Each tier performs specific functions, such as
presentation, logic, data processing, and storage, with the flexibility to add more tiers as
needed. In a distributed operating system, this architecture enables complex
applications to be divided into modular components distributed across multiple nodes or
servers.
• Each tier can scale independently, promoting efficient resource utilization, fault
tolerance, and maintainability.
• N-tier architectures facilitate distributed computing by allowing components to run on
separate nodes or servers, improving performance and scalability.
• This approach is commonly used in large-scale enterprise systems, web
applications, and distributed systems requiring high availability and scalability.