
Module III

Memory Management & Protection

Main memory and the registers built into the processor itself are the only storage that
the CPU can access directly. There are machine instructions that take memory
addresses as arguments, but none that take disk addresses. Therefore, any instructions
in execution, and any data being used by the instructions, must be in one of these
direct-access storage devices. If the data are not in memory, they must be moved there
before the CPU can operate on them. Registers that
are built into the CPU are generally accessible within one cycle of the CPU clock. Most
CPUs can decode instructions and perform simple operations on register contents at
the rate of one or more operations per clock tick. The same cannot be said of main
memory, which is accessed via a transaction on the memory bus. Memory access may
take many cycles of the CPU clock to complete, in which case the processor normally
needs to stall (obstruct, stop), since it does not have the data required to complete the
instruction that it is executing. This situation is intolerable because of the frequency of
memory accesses. The remedy is to add fast memory between the CPU and main
memory. A memory buffer used to accommodate a speed differential is called a cache.

Not only are we concerned with the relative speed of accessing physical memory, but
we also must ensure correct operation: the operating system has to be protected from
access by user processes. This protection must be provided by the hardware. We first
need to make sure that each process has a separate memory space.
To do this, we need the ability to determine the range of legal addresses that the
process may access and to ensure that the process can access only these legal
addresses. We can provide this protection by using two registers, usually a base and a
limit. The base register holds the smallest legal physical memory address; the limit
register specifies the size of the range.
For example, if the base register holds 300040 and the limit register holds 120900, then
the program can legally access all addresses from 300040 through 420939 (inclusive).

Protection of memory space is accomplished by having the CPU hardware compare
every address generated in user mode with the registers. Any attempt by a program
executing in user mode to access operating-system memory or other users' memory
results in a trap to the operating system, which treats the attempt as a fatal error. This
scheme prevents a user program from accidentally or deliberately modifying the code or
data structures of either the operating system or other users.
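The base/limit hardware check described above can be modelled in a few lines of Python (a sketch only, using the base and limit values from the example above):

```python
# Sketch of the hardware protection check, with the base/limit
# values from the example above (base = 300040, limit = 120900).
BASE, LIMIT = 300040, 120900

def is_legal(addr: int) -> bool:
    """A user-mode address is legal only if it lies in [base, base + limit).
    Any other address causes a trap to the operating system."""
    return BASE <= addr < BASE + LIMIT

assert is_legal(300040)       # first address of the process's space
assert is_legal(400000)       # an address inside the range
assert not is_legal(299999)   # below the base: trap
assert not is_legal(500000)   # beyond base + limit: trap
```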

The base and limit registers can be loaded only by the operating system, which
uses a special privileged instruction. Since privileged instructions can be executed
only in kernel mode, and since only the operating system executes in kernel mode, only
the operating system can load the base and limit registers. This scheme allows the
operating system to change the value of the registers but prevents user programs from
changing the registers' contents.
Memory Management is also known as Storage or Space Management.
Memory management involves subdividing memory to accommodate multiple
processes. It is the process of allocating memory efficiently to pack as many processes
into memory as possible.

THE NEED FOR MEMORY MANAGEMENT

Main memory is generally the most critical resource in a computer system in terms of
the speed at which programs run and hence it is important to manage it as efficiently as
possible.
The requirements of memory management are:
• Relocation
• Protection
• Sharing
• Logical Organization
• Physical Organization

Relocation
The programmer does not know where the program will be placed in memory when it is
executed. While the program is executing, it may be swapped to disk and returned to
main memory at a different location (relocated), so memory references must be
translated to actual physical memory addresses.
Protection
Processes should not be able to reference memory locations of another process without
permission.

Sharing
Allow several processes to access the same portion of memory; that is, allow each
process to access the same copy of a program rather than having its own separate copy.

Logical Organization
Programs are written in modules. Modules can be written and compiled independently,
and different degrees of protection can be given to different modules (read-only, execute-only).

Physical Organization
Overlaying allows various modules to be assigned the same region of memory. The
programmer does not know in advance how much memory will be available.

Address Translation or Address Binding schemes

The binding of instructions and data to memory can be done in any of the
following ways.

1.At compile time


The compiler generates physical addresses at compile time. This requires knowledge
of where the process will reside; if the location changes, it is necessary to recompile
the code. This scheme is used by MS-DOS .COM-format programs.

2.At link-edit time


The compiler generates relocatable addresses for each process unit. Linkage editor
converts the relocatable address to absolute address. A program can be loaded only
where specified and cannot move once loaded.

3.At load time


Similar to at link-edit time, but do not fix the starting address. Program can be loaded
anywhere, but cannot be split.

4.At execution time


Address translation is done dynamically during execution. Hardware is needed to
perform the virtual to physical address translation quickly.

Memory Management Unit(MMU)

An address generated by the CPU is commonly referred to as a logical address,


whereas an address seen by the memory unit—that is, the one loaded into the memory-
address register of the memory—is commonly referred to as a physical address. The
compile-time and load-time address-binding methods generate identical logical and
physical addresses. However, the execution-time address-binding scheme results in
differing logical and physical addresses. In this case, we usually refer to the logical
address as a virtual address. The set of all logical addresses generated by a program is
a logical address space. The set of all physical addresses corresponding to these
logical addresses is a physical address space. Thus, in the execution-time address-
binding scheme, the logical and physical address spaces differ. The run-time mapping
from virtual to physical addresses is done by a hardware device called the memory-
management unit (MMU).
The base register is now called a relocation register. The value in the relocation register
is added to every address generated by a user process at the time the address is sent
to memory.
For example, if the base is at 14000, then an attempt by the user to address location 0
is dynamically relocated to location 14000; an access to location 346 is mapped to
location 14346.

Memory Management Unit maps logical to physical address by adding relocation


register value to every address generated by processor.
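This relocation-register mapping can be sketched directly (Python, using the base value 14000 from the example above):

```python
RELOCATION = 14000  # value the OS loads into the relocation register

def to_physical(logical: int) -> int:
    """MMU model: add the relocation register to every address the CPU generates."""
    return RELOCATION + logical

assert to_physical(0) == 14000    # logical address 0 maps to the base itself
assert to_physical(346) == 14346  # logical address 346 maps to 14346
```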

Swapping
Swapping is the act of moving processes between memory and a backing store. This is
done to free up available memory. Swapping is necessary when there are more
processes than available memory. A process can be swapped temporarily out of
memory to a backing store, and then brought back into memory for continued execution.
Swapping is used to implement multiprogramming in systems with little
hardware support for memory management. Swapping is helpful in improving processor
utilization in a partitioned memory environment.
Consider a multiprogramming environment with a round-robin CPU
scheduling algorithm. When a quantum (time-slice) expires, the memory manager will
start to swap out processes that just finished, and to swap in another process to the
memory space that has been freed. In the meantime, the CPU scheduler will allocate a
time slice to some other process in memory. Thus when each process finishes its
quantum, it will be swapped with another process.
A variant of this swapping policy is used for priority-based scheduling algorithms. If a
high-priority process arrives, the memory manager swaps out a low-priority process so
that it can load and execute the high-priority process. When the high-priority process
finishes, the lower-priority process can be swapped back in and continued. This variant
of swapping is called roll out, roll in.

Swapper
When the scheduler decides to admit a new process for which no suitable free
partition can be found, the swapper is invoked to vacate such a partition. The swapper
is an operating system process whose major responsibilities include:
 Selection of processes to swap out
 Selection of process to swap in
 Allocation and management of swap space
The swapper usually selects a victim among the suspended processes
that occupy partitions large enough to satisfy the needs of the incoming process.
Among the qualifying processes, the swapper selects the one with the lowest priority.
Another important criterion for selection is the time spent in memory by the potential
victim. The choice of the process to swap in is usually based on the amount of time
spent on secondary storage, priority, etc.
Contiguous Allocation
Contiguous literally means adjacent. Here it means that the program is loaded into a
series of adjacent (contiguous) memory locations. In contiguous memory allocation, the
memory is usually divided into two partitions, one for the OS and the other for the user
process.

At any time, only one user process is in memory and it is run to completion and then the
next process is brought into the memory. This scheme is sometimes referred to as the
Single Contiguous Memory Management.

We usually want several user processes to reside in memory at the same time. We
therefore need to consider how to allocate available memory to the processes that are
in the input queue waiting to be brought into memory. In contiguous memory allocation,
each process is contained in a single section of memory that is contiguous to the
section containing the next process.
Memory Protection
If we have a system with a relocation register together with a limit register, we
accomplish our goal. The relocation register contains the value of the smallest physical
address; the limit register contains the range of logical addresses (for example,
relocation = 100040 and limit = 74600). Each logical address must fall within the range
specified by the limit register. The MMU maps the logical address dynamically by
adding the value in the relocation register. This mapped address is sent to memory.

When the CPU scheduler selects a process for execution, the dispatcher loads the
relocation and limit registers with the correct values as part of the context switch.
Because every address generated by a CPU is checked against these registers, we can
protect both the operating system and the other users’ programs and data from being
modified by this running process.

Memory Allocation

1. Fixed sized partition


One of the simplest methods for allocating memory is to divide memory into several
fixed-sized partitions. Each partition may contain exactly one process. Thus, the degree
of multiprogramming is bound by the number of partitions. In this multiple partition
method, when a partition is free, a process is selected from the input queue and is
loaded into the free partition. When the process terminates, the partition becomes
available for another process.
In the fixed-size partition scheme, the system divides memory into fixed-size partitions
(which may or may not all be of the same size). An entire partition is allocated to a
process, and if there is some wasted space inside the partition, it is called internal
fragmentation.
Advantage: Memory management is easy.
Disadvantage: Internal fragmentation
2. Variable size partition
In the variable size partition, the memory is treated as one unit and space allocated to a
process is exactly the same as required and the leftover space can be reused again. In
the variable-partition scheme, the operating system keeps a table indicating which parts
of memory are available and which are occupied. Initially, all memory is available for
user processes and is considered one large block of available memory, a hole.
How OS manages the memory partitions
To manage all the partitions, the OS creates a Partition Description Table
(PDT). Initially all the entries in PDT are marked as ‘FREE’. When a process is loaded
into one of the partitions, the ‘status’ column is changed to ‘ALLOC’. The PCB of each
process contains the Id of the partition in which the process is running.

Eventually, as you will see, memory contains a set of holes of various sizes. As
processes enter the system, they are put into an input queue. The operating system
takes into account the memory requirements of each process and the amount of
available memory space in determining which processes are allocated memory. When a
process is allocated space, it is loaded into memory, and it can then compete for CPU
time. When a process terminates, it releases its memory, which the operating system
may then fill with another process from the input queue.

This procedure is a particular instance of the general dynamic storage allocation
problem, which concerns how to satisfy a request of size n from a list of free holes.
There are many solutions to this problem.
Allocation Policies
The processes are allocated to the partitions based on the allocation
policy of the system. The allocation policies are:
 First Fit
 Best Fit
 Worst Fit
In first fit policy, the memory manager will choose the first available partition that can
accommodate the process even though its size is more than that of the process.
In worst fit policy, the memory manager will choose the largest available partition that
can accommodate the process.
In best-fit policy, the memory manager will choose the partition that is just big enough
to accommodate the process.
For example, suppose the free partitions are Partition 1 and Partition 4, with Partition 1
the larger of the two, and a new process of size 50K arrives. First Fit and Worst Fit will
allocate Partition 1, while Best Fit will allocate Partition 4.
Advantage: There is no internal fragmentation.
Disadvantage: Management is very difficult, as memory becomes badly fragmented
over time, leading to external fragmentation.
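The three allocation policies can be sketched as functions over a list of free-hole sizes (a simplified model; the hole sizes in the example are hypothetical):

```python
def first_fit(holes, size):
    """Index of the first hole large enough for the request, or None."""
    for i, h in enumerate(holes):
        if h >= size:
            return i
    return None

def best_fit(holes, size):
    """Index of the smallest hole that still fits the request."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(fits)[1] if fits else None

def worst_fit(holes, size):
    """Index of the largest hole that fits the request."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return max(fits)[1] if fits else None

holes = [100, 500, 200, 300, 600]   # hypothetical free-hole sizes (in K)
assert first_fit(holes, 212) == 1   # 500K is the first hole that fits
assert best_fit(holes, 212) == 3    # 300K wastes the least space
assert worst_fit(holes, 212) == 4   # 600K is the largest hole
```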

The Single Contiguous Memory Management scheme described earlier has the
following advantages and disadvantages.

Advantages:
 The starting physical address of the program is known at compile time
 Executable machine code has absolute addresses only; they need not be
changed/translated at execution time
 Fast access time, as there is no need for address translation
 Does not have large wasted memory
 Time complexity is small

Disadvantages:
 It does not support multiprogramming and hence has no concept of sharing.

FRAGMENTATION
As processes are loaded and removed from the memory, the free memory space is
broken into little pieces. No other processes can be loaded to this free space because
of its smaller size and hence, it remains unused. This problem is known as
Fragmentation.

Both the first-fit and best-fit strategies for memory allocation suffer from external
fragmentation. As processes are loaded and removed from memory, the free memory
space is broken into little pieces. External fragmentation exists when there is enough
total memory space to satisfy a request but the available spaces are not contiguous:
storage is fragmented into a large number of small holes. This fragmentation problem
can be severe. Wasting of memory between partitions, due to scattering of free space
into a number of discontinuous areas, is called External Fragmentation.
When partitioning is static, memory is wasted in each
partition where a process of smaller size than the partition is loaded. Wasting of
memory within a partition, due to the difference in size between a partition and the
process within it, is called Internal Fragmentation.
One solution to these problems is compaction. It is possible to combine all the
holes (free spaces) into a large block by pushing all the processes downward as far as
possible. Compaction is possible only if relocation is dynamic, and is done at execution
time.

Compaction is usually not done because it consumes a lot of CPU time. It is usually
done on large machines like mainframes or supercomputers, because they have
special hardware support to perform this task (compaction).

NON-CONTIGUOUS ALLOCATION

Non-contiguous memory means that the available memory is not contiguous but is
distributed. This scheme has the benefit of minimizing external fragmentation. The
logical address space of the process is allowed to be non-contiguous, thus allowing a
process to be allocated physical memory wherever the latter is available.

Non-contiguous memory allocation is of different types,

1. Paging
2. Segmentation
3. Segmentation with paging
1. PAGING
A non-contiguous policy with fixed-size partitions is called paging. In paging,
physical memory is broken into fixed-size blocks called frames, and logical memory is
broken into blocks of the same size called pages. When a process is to be executed,
its pages are loaded from the backing store into any available memory frames.

Every process has a separate page table. The entries in the page table are the base
addresses of each page in physical memory. Each entry holds either an invalid marker,
which means the page is not in main memory, or the corresponding frame number.
When the frame number is combined with the offset d, we get the corresponding
physical address. The size of a page table is generally very large, so it cannot be
accommodated inside the PCB; therefore, the PCB contains a register value, the
PTBR (page table base register), which points to the page table.

Every address generated by the CPU is divided into two parts: a page number (p) and a
page offset (d). P is used as an index into the page table, which contains the base
address of each page in physical memory. This base address is combined with d to get
the physical memory address, which is then put into the MAR.
Address translation
Logical address = (Page number, Page offset)
Physical address = (Frame number, Page offset)
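The translation above can be sketched as follows (Python; the page size and page-table contents are hypothetical):

```python
PAGE_SIZE = 1024  # assumed page size in bytes (a power of two)

# Hypothetical page table for one process: page number -> frame number.
page_table = {0: 5, 1: 2, 2: 7}

def translate(logical: int) -> int:
    """Split a logical address into (p, d), look up the frame for page p,
    and combine the frame number with the offset."""
    p, d = divmod(logical, PAGE_SIZE)
    f = page_table[p]            # a missing entry would mean an invalid page
    return f * PAGE_SIZE + d

assert translate(0) == 5 * 1024                      # page 0, offset 0 -> frame 5
assert translate(1 * 1024 + 100) == 2 * 1024 + 100   # page 1, offset 100 -> frame 2
```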
Advantages and disadvantages of paging

 It reduces external fragmentation but still suffers from internal fragmentation.
 It is simple to implement and is regarded as an efficient memory management
technique.
 Due to the equal size of pages and frames, swapping becomes very easy.
 The page table requires extra memory space, so paging may not be good for a
system having a small RAM.
2. SEGMENTATION
In the paging scheme, pages are of a fixed size. An alternative approach, called
segmentation, divides the process's address space into a number of segments, each of
variable size. A program is a collection of segments. A segment is a logical unit such as:
main program, procedure, function, method, object, local variables, global variables,
common block, stack, symbol table, arrays, etc.
The segments of a program are loaded into non-contiguous memory locations. The
logical address of an instruction consists of two parts: segment number and offset. For
address mapping we use the following architecture.

Translation of Logical address into physical address by segment table


CPU generates a logical address which contains two parts:
1. Segment Number
2. Offset
The segment number is used as an index into the segment table. The limit of the
respective segment is compared with the offset. If the offset is less than the limit, the
address is valid; otherwise an error is raised, as the address is invalid.
In the case of a valid address, the base address of the segment is added to the offset to
get the physical address of the actual word in main memory.
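This lookup-and-check sequence can be sketched as follows (Python; the segment-table entries are hypothetical):

```python
# Hypothetical segment table: segment number -> (base, limit).
segment_table = {0: (1400, 1000), 1: (6300, 400)}

def translate(segment: int, offset: int) -> int:
    """Compare the offset with the segment's limit, then add the base."""
    base, limit = segment_table[segment]
    if offset >= limit:
        raise MemoryError("trap: offset beyond segment limit")
    return base + offset

assert translate(0, 53) == 1453    # segment 0: base 1400 + offset 53
assert translate(1, 399) == 6699   # segment 1: base 6300 + offset 399
```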

Advantages of Segmentation

 No internal fragmentation
 Average Segment Size is larger than the actual page size.
 Less overhead
 It is easier to relocate segments than entire address space.
 The segment table is of smaller size as compared to the page table in paging.

Disadvantages
 It can have external fragmentation.
 It is difficult to allocate contiguous memory to variable-sized partitions.
 Costly memory management algorithms.

3. Segmentation with paging


In segmentation with paging, we take advantage of both segmentation and
paging. All the properties are the same as those of paging, because segments are
divided into pages.
In segmented paging,
 Process is first divided into segments and then each segment is divided into
pages.
 These pages are then stored in the frames of main memory.
 The base address of the segment table is stored in the segment table base
register.

CPU generates a logical address consisting of three parts-


1. Segment Number
2. Page Number
3. Page Offset

 Segment Number specifies the segment from which the CPU wants to read
the data.

 Page Number specifies the page of that segment from which the CPU wants
to read the data.

 Page Offset specifies the word on that page that the CPU wants to read.
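The three-part translation can be sketched as a nested lookup (Python; the page size and tables are hypothetical):

```python
PAGE_SIZE = 256  # assumed page size

# Hypothetical tables: each segment has its own page table
# mapping page number -> frame number.
segment_table = {
    0: {0: 9, 1: 4},  # segment 0's page table
    1: {0: 2},        # segment 1's page table
}

def translate(s: int, p: int, d: int) -> int:
    """Segment lookup, then page lookup, then frame number + offset."""
    frame = segment_table[s][p]
    return frame * PAGE_SIZE + d

assert translate(0, 1, 10) == 4 * 256 + 10  # segment 0, page 1 -> frame 4
assert translate(1, 0, 0) == 2 * 256        # segment 1, page 0 -> frame 2
```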

Advantages-
The advantages of segmented paging are-
 Segment table contains only one entry corresponding to each segment.
 It reduces memory usage.
 The size of Page Table is limited by the segment size.
 It solves the problem of external fragmentation.
Disadvantages-

The disadvantages of segmented paging are-


 Segmented paging suffers from internal fragmentation.
 The complexity level is much higher as compared to paging.

VIRTUAL MEMORY

Virtual memory is a memory management technique that allows the execution of


processes that may not be completely in main memory and do not require contiguous
memory allocation. The address space of virtual memory can be larger than the
physical memory.
The virtual memory abstraction is implemented by using
secondary storage to augment (increase) the processor’s main memory. Data is
transferred from secondary to main storage as and when necessary and the data
replaced is written back to the secondary storage according to a predetermined
replacement algorithm. If the data swapped is designated a fixed size, this swapping is
called paging; if variable sizes are permitted and the data is split along logical lines
such as subroutines or matrices, it is called segmentation.
Why Do you Need Virtual Memory?
Storage allocation has always been an important consideration in computer
programming due to the high cost of main memory and the relative abundance and
lower cost of secondary storage.
Program code and data required for execution of a process must reside in main memory
to be executed, but main memory may not be large enough to accommodate the needs
of an entire process. Early computer programmers divided programs into sections that
were transferred into main memory for a period of processing time. As the program
proceeded, new sections moved into main memory and replaced the sections that were
not needed at that time.
Advantages:
• Programs are no longer constrained by the amount of physical memory
that is available.
• Increased degree of multiprogramming.
• Less overhead due to swapping.
Virtual memory can be implemented via:

 Demand paging
 Demand segmentation

Demand Paging
Demand paging is similar to paging with swapping. In demand paging a page of the
process is loaded in main memory only when it is demanded. In pure demand paging
initially the main memory is considered as a set of free frames. We start the execution
of the process with no pages of the process in main memory. Each page of the process
is loaded only when it is demanded i.e., never bring a page until it is required.

To swap a process out of main memory to secondary storage in order to load a new
process, we invoke the services of the swapper. In a paged virtual memory system, in
order to swap out a single page of a process to load a new page, the OS invokes the
services of a lazy swapper.

Page Faults
The logical address is in the range of valid addresses, but the corresponding page is not
currently present in memory; rather, it is stored on disk. The operating system must
bring the page into memory before the process can continue to execute. This condition
is called a page fault.

PAGE REPLACEMENT POLICIES OR ALGORITHMS


If there are no free memory frames to accommodate a newly demanded page, then one
of the existing pages in memory needs to be swapped out and the new page loaded. To
decide “Which page should be swapped out?”, we have page replacement policies or
algorithms.
1. FIFO PAGE REPLACEMENT ALGORITHM:
Replace the page which has been in memory for the longest time. When a page must be
replaced, the oldest one is identified and removed from main memory. To implement
the FIFO replacement algorithm, the memory manager must keep track of the relative
order in which pages were loaded into main memory. One way to accomplish this is to
maintain a FIFO queue.

To illustrate the problems that are possible with a FIFO replacement algorithm, we
consider the reference string.

7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1
For our example reference string, our three frames are initially empty. The first three
references (7,0,1) cause page faults and are brought into these empty frames. The next
reference (2) replaces page 7, because page 7 was brought in first. Since 0 is the next
reference and 0 is already in memory, we have no fault for this reference. The first
reference to 3 results in replacement of page 0, since it is now first in line. Because of
this replacement, the next reference, to 0, will fault. Page 1 is then replaced by page 0.

H represents a ‘HIT’, meaning the referenced page is already in memory (no page
fault). The FIFO page-replacement algorithm is easy to understand and program.
However, its performance is not always good.
Generally, on increasing the number of frames allocated to a process’s virtual memory,
its execution becomes faster, as fewer page faults occur. Sometimes the reverse
happens, i.e. more page faults occur when more frames are allocated to a process.
This most unexpected result is termed Belady’s Anomaly.
Belady’s anomaly is the name given to the phenomenon where increasing the number
of page frames results in an increase in the number of page faults for a given memory
access pattern.
Performance
Hit ratio = 5/20 = 25%
Advantages:
Easy to understand/ implement.
Disadvantages:
It has a poor performance.
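Both the fault count for the reference string above and Belady's anomaly can be checked with a short simulation (a sketch in Python):

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults under FIFO replacement."""
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:           # no free frame: evict the oldest
                frames.discard(queue.popleft())
            frames.add(page)
            queue.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
assert fifo_faults(refs, 3) == 15    # 5 hits out of 20 -> 25% hit ratio

# Belady's anomaly: this string faults MORE with 4 frames than with 3.
b = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
assert fifo_faults(b, 3) == 9
assert fifo_faults(b, 4) == 10
```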

2. OPTIMAL PAGE REPLACEMENT ALGORITHM

One result of the discovery of Belady’s anomaly was the search for an optimal page-
replacement algorithm—the algorithm that has the lowest page-fault rate of all
algorithms and will never suffer from Belady’s anomaly. Such an algorithm does exist
and has been called OPT or MIN. Replace the page that will not be used for the
longest period of time. Use of this page-replacement algorithm guarantees the lowest
possible page fault rate for a fixed number of frames.
Consider the reference string.

7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1

The first three references (7, 0, 1) cause page faults and get loaded into three empty
frames. The reference to page 2 replaces page 7, because 7 will not be used until
reference 18, whereas page 0 will be used at reference 5, and page 1 at reference 14.
The reference to page 3 replaces page 1, as it would be the last of the three pages in
memory to be referenced again.

Performance:
Hit ratio = 11/20 = 55%

Advantages:
Has the lowest page fault rate for a fixed number of frames.

Disadvantages:
Difficult to implement, because it requires future knowledge of the reference string.
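Because OPT needs the full future reference string, it is easy to simulate even though it cannot be built in hardware (a sketch in Python):

```python
def opt_faults(refs, nframes):
    """Count page faults under OPT/MIN replacement."""
    frames, faults = set(), 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) == nframes:
            future = refs[i + 1:]
            # Evict the resident page whose next use is farthest away
            # (or that is never used again).
            def next_use(q):
                return future.index(q) if q in future else float("inf")
            frames.discard(max(frames, key=next_use))
        frames.add(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
assert opt_faults(refs, 3) == 9    # 11 hits out of 20 -> 55% hit ratio
```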

3. LRU PAGE REPLACEMENT ALGORITHM


Replace the page that has not been used for the longest period of time: when a page
has to be replaced, LRU chooses the page whose last use was furthest in the past.
Consider the reference string.

7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1

LRU algorithm sees that, of the three frames in memory, page 2 was used least
recently. The most recently used page was page 0 and just before that, page 3 was
used. Thus LRU replaces page 2 not knowing that page 2 is about to be used again.
When page 2 is again referenced, LRU replaces page 3 since it is the least recently
used.
Performance
Hit ratio = 8/20 = 40%

Advantages:
Reasonable approximation of Optimal Algorithm.

Disadvantages:
Requires substantial hardware assistance.
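The LRU policy can be simulated with an ordered map that tracks recency (a sketch in Python):

```python
from collections import OrderedDict

def lru_faults(refs, nframes):
    """Count page faults under LRU replacement."""
    frames, faults = OrderedDict(), 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)        # refresh recency on a hit
        else:
            faults += 1
            if len(frames) == nframes:
                frames.popitem(last=False)  # evict the least recently used
            frames[page] = True             # newest entry = most recently used
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
assert lru_faults(refs, 3) == 12    # 8 hits out of 20 -> 40% hit ratio
```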

THRASHING

Over-allocation of memory can lead to a serious performance problem known as
thrashing. Thrashing occurs when all of the pages that are memory-resident are
high-demand pages that will be referenced in the near future. Thus, when a page
fault occurs, the page that is removed from memory will soon give rise to a new fault,
which in turn removes a page that will soon give rise to a new fault. In a system that is
thrashing, a high percentage of the system’s resources is devoted to paging, and overall
CPU utilization and throughput drop dramatically.

Security

Security refers to providing a protection system for computer system resources such as
the CPU, memory, disk, software programs and, most importantly, the data/information
stored in the computer system. If a computer program is run by an unauthorized user,
then he/she may cause severe damage to the computer or the data stored in it. So a
computer system must be protected against unauthorized access, malicious access to
system memory, viruses, worms, etc.

 Authentication
 One Time Passwords
 Program Threats
 System Threats
 Computer Security Classifications
Authentication
Authentication refers to identifying each user of the system and associating the
executing programs with those users. It is the responsibility of the operating system to
create a protection system which ensures that a user who is running a particular
program is authentic. Operating systems generally identify/authenticate users in the
following three ways −

 Username / Password − Users need to enter a registered username and
password with the operating system to log in to the system.

 User card/key − Users need to punch a card in the card slot, or enter a key
generated by a key generator, in the option provided by the operating system to
log in to the system.

 User attribute (fingerprint / eye retina pattern / signature) − Users need to
present their attribute via a designated input device used by the operating
system to log in to the system.

One Time passwords


One-time passwords provide additional security along with normal authentication. In a
one-time password system, a unique password is required every time a user tries to
log in to the system. Once a one-time password is used, it cannot be used again.
One-time passwords are implemented in various ways.

 Random numbers − Users are provided cards having numbers printed along
with corresponding alphabets. The system asks for the numbers corresponding
to a few randomly chosen alphabets.

 Secret key − Users are provided a hardware device which can create a secret
id mapped to the user id. The system asks for this secret id, which is to be
generated anew every time prior to login.

 Network password − Some commercial applications send a one-time password
to the user's registered mobile/email, which must be entered prior to login.

Program Threats
An operating system's processes and kernel perform their designated tasks as
instructed. If a user program makes these processes perform malicious tasks, this is
known as a program threat. One common example of a program threat is a program
installed on a computer which can store and send user credentials via the network to
some hacker. The following is a list of some well-known program threats.

 Trojan Horse − Such a program traps user login credentials and stores them to
send to a malicious user, who can later log in to the computer and access
system resources.

 Trap Door − If a program which is designed to work as required has a security
hole in its code and performs illegal actions without the user's knowledge, it is
said to have a trap door.

 Logic Bomb − A logic bomb is a program that misbehaves only when certain
conditions are met; otherwise it works as a genuine program. It is harder to
detect.

 Virus − A virus, as the name suggests, can replicate itself on a computer
system. Viruses are highly dangerous and can modify/delete user files and
crash systems. A virus is generally a small piece of code embedded in a
program. As the user accesses the program, the virus starts getting embedded
in other files/programs and can make the system unusable for the user.

System Threats
System threats refer to the misuse of system services and network connections to put
the user in trouble. System threats can be used to launch program threats across a
complete network; this is called a program attack. System threats create an
environment in which operating system resources and user files are misused. The
following is a list of some well-known system threats.

 Worm − A worm is a process which can choke down a system's performance by
using system resources to extreme levels. A worm process generates multiple
copies of itself, where each copy uses system resources and prevents all other
processes from getting required resources. Worm processes can even shut
down an entire network.

 Port Scanning − Port scanning is a mechanism or means by which a hacker
can detect system vulnerabilities in order to attack the system.

 Denial of Service − Denial of service attacks normally prevent users from
making legitimate use of the system. For example, a user may not be able to
use the internet if a denial of service attack targets the browser's content
settings.
