Unit 2

The document discusses the content of a course on operating systems. It covers 5 units: operating system overview, process management, scheduling and deadlock management, storage management, and storage structures. For the unit on process management, it discusses concepts like processes, process scheduling, operations on processes, interprocess communication, process synchronization using semaphores and monitors, and provides examples from the Windows 10 operating system.


Course Content

UNIT I : OPERATING SYSTEMS OVERVIEW

UNIT II : PROCESS MANAGEMENT

UNIT III :SCHEDULING AND DEADLOCK MANAGEMENT

UNIT IV: STORAGE MANAGEMENT

UNIT V: STORAGE STRUCTURE


Department of Computer Science and Engineering 1


COURSE CONTENT
UNIT II PROCESS MANAGEMENT 9
• Processes: Process Concept
• Process Scheduling
• Operations on Processes
• Interprocess Communication
• Process Synchronization:
• The Critical-Section Problem
• Semaphores
• Classic Problems of Synchronization
• Monitors
• Case Study: Windows 10 operating system

Department of Computer Science and Engineering 2


PROCESS CONCEPTS

Process
• A program in execution, which forms the basis of all computation

• Process execution must progress in a sequential fashion

• An operating system executes a variety of programs:

• Batch systems – jobs

• Time-shared systems – user programs or tasks

Department of Computer Science and Engineering


3
PROCESS CONCEPTS

Batch Operating system


• Users of a batch operating system do not interact with the computer
directly.
• Each user prepares a job on an offline device such as punched cards and
submits it to the computer operator.
• To speed up processing, jobs with similar needs are batched together and
run as a group.
• Programmers leave their programs with the operator, who sorts programs
with similar requirements into batches.
[Figure: users submit jobs to the operator, who groups jobs with similar needs into batches for the computer.]

Department of Computer Science and Engineering


PROCESS CONCEPTS

Disadvantages of Batch operating System


• Lack of interaction between the user and the job.

• The CPU is often idle, because mechanical I/O devices are much slower than the CPU.

• It is difficult to provide the desired priority.

Department of Computer Science and Engineering


PROCESS CONCEPTS

Time sharing Operating Systems


• Enables many users, located at various terminals, to use a particular
computer system at the same time.
Advantages of Time-Sharing Systems
• Provides quick response.
• Avoids duplication of software.
• Reduces CPU idle time.
Disadvantages
• Problems of reliability.
• Questions of security and integrity of user programs and data.
• Problems of data communication.

Department of Computer Science and Engineering


PROCESS CONCEPTS

Difference between a Multiprogrammed Batch System and a Time-Sharing System

• Multiprogrammed batch system: the objective is to maximize processor use.
• Time-sharing system: the objective is to minimize response time.

Department of Computer Science and Engineering


PROCESS CONCEPTS

• Structure of a process in memory


• The program code, also called text section

• Current activity including program counter, processor registers

• Stack containing temporary data

• Function parameters, return addresses, local variables

• Data section containing global variables

• Heap containing memory dynamically allocated during run time

Department of Computer Science and Engineering


Structure of a process in memory

Citations
Abraham Silberschatz, Peter Baer Galvin and Greg Gagne, “Operating
System Concepts”, 9th Edition, John Wiley and Sons Inc., 2012.
Department of Computer Science and Engineering
Difference between a program and a process

PROGRAM
• A passive entity
• A file containing a list of instructions stored on disk (an executable file)

PROCESS
• An active entity
• Has a program counter specifying the next instruction to execute and a set of associated resources
• A program becomes a process when an executable file is loaded into memory

Department of Computer Science and Engineering


Process State
• As a process executes, it changes state
• new: The process is being created
• running: Instructions are being executed
• waiting: The process is waiting for some event to occur
• ready: The process is waiting to be assigned to a processor
• terminated: The process has finished execution
Citations
Abraham Silberschatz, Peter Baer Galvin and Greg Gagne, “Operating
System Concepts”, 9th Edition, John Wiley and Sons Inc., 2012.
Department of Computer Science and Engineering
Process Control Block (PCB)
Information associated with each process
(also called task control block)
• Process state – running, waiting, etc
• Program counter – location of the next instruction to
execute
• CPU registers – contents of all process-centric
registers
• CPU scheduling information- priorities,
scheduling queue pointers
• Memory-management information – memory
allocated to the process
• Accounting information – CPU used, clock time
elapsed since start, time limits
• I/O status information – I/O devices allocated to
process, list of open files
Department of Computer Science and Engineering
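
The PCB fields listed above can be pictured as a C structure. The sketch below is purely illustrative: the field names, types, and sizes are assumptions, not the layout of any real kernel.

/* Illustrative PCB sketch; names, types, and sizes are assumptions,
   not any real kernel's layout. */
typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state;

typedef struct pcb {
    int           pid;              /* process identifier            */
    proc_state    state;            /* process state                 */
    unsigned long program_counter;  /* next instruction to execute   */
    unsigned long registers[16];    /* saved CPU registers           */
    int           priority;         /* CPU-scheduling information    */
    void         *page_table;       /* memory-management information */
    unsigned long cpu_time_used;    /* accounting information        */
    int           open_files[16];   /* I/O status: open descriptors  */
    struct pcb   *next;             /* link in a scheduling queue    */
} pcb;
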
Process Scheduling

I. Objective of Multiprogramming

• Have some process running at all times

• Maximize CPU utilization

II. Objective of Timesharing

• Switch the CPU among processes so frequently that users can interact
with each program while it is running

• To meet these objectives, the process scheduler selects an available process

for program execution on the CPU.

Department of Computer Science and Engineering


Scheduling Queues

The operating system maintains scheduling queues of processes:

Job queue – set of all processes in the system
Ready queue – set of all processes residing in main memory, ready and
waiting to execute
Device queues – set of processes waiting for an I/O device

Processes migrate among the various queues.

Department of Computer Science and Engineering


Representation of Process Scheduling

Queuing diagram represents queues, resources, flows

Department of Computer Science and Engineering


Schedulers

Schedulers

• Long-term scheduler (job scheduler)

• Short-term scheduler (CPU scheduler)

• Medium-term scheduler

Department of Computer Science and Engineering


Schedulers

Short-term scheduler (or CPU scheduler)

• Selects a process from the ready queue and allocates the CPU to it

• Sometimes the only scheduler in a system

• Invoked frequently (milliseconds), so it must be fast

Department of Computer Science and Engineering


Schedulers

Long-term scheduler (or job scheduler)


• Selects processes from the job pool (typically on disk) and loads them into main memory for execution

• Invoked infrequently (seconds or minutes), so it may be slow

• The long-term scheduler controls the degree of multiprogramming

• Processes can be described as either:

• I/O-bound process – spends more time doing I/O than computations, many short
CPU bursts

• CPU-bound process – spends more time doing computations; few very long CPU
bursts
• Long-term scheduler strives for good process mix

Department of Computer Science and Engineering


Schedulers

Medium-term scheduler

• It can be added if the degree of multiprogramming needs to decrease

• Removes a process from memory, stores it on disk, and later brings it back
from disk to continue execution: swapping

Department of Computer Science and Engineering


Operations on Processes

• System must provide mechanisms for:

• process creation, and

• process termination

Department of Computer Science and Engineering


Process Creation

• A process may create several new processes, via a create-process system call,
during the course of execution.
• The creating process is called the parent process, and the new processes are
called the children of that process.
• Each of these new processes may in turn create other processes, forming a
tree of processes.
• Generally, a process is identified and managed via a process identifier
(pid).

Department of Computer Science and Engineering


Process Creation

• Resource sharing options


• Parent and children share all resources
• Children share subset of parent’s resources
• Parent and child share no resources
• Execution options
• Parent and children execute concurrently
• Parent waits until children terminate

Department of Computer Science and Engineering


Process Tree for Linux operating System

init (pid = 1)
  login (pid = 8415)
    bash (pid = 8416)
      ps (pid = 9298)
      emacs (pid = 9204)
  kthreadd (pid = 2)
    khelper (pid = 6)
    pdflush (pid = 200)
  sshd (pid = 3028)
    sshd (pid = 3610)
      tcsch (pid = 4005)

Department of Computer Science and Engineering


Process Creation

• Address space

• Child duplicate of parent

• Child has a program loaded into it

• UNIX examples

• fork() system call creates new process

• exec() system call used after a fork() to replace the process’


memory space with a new program

Department of Computer Science and Engineering


Process Creation

Department of Computer Science and Engineering


C Program Forking Separate Process

Department of Computer Science and Engineering
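The code figure for this slide is not reproduced in the text. Below is a minimal sketch of the fork()/exec()/wait() pattern described on the previous slide; the choice of /bin/ls as the new program is an arbitrary example.

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                 /* create a child process */

    if (pid < 0) {                      /* fork failed */
        fprintf(stderr, "Fork failed\n");
        return 1;
    }
    else if (pid == 0) {                /* child process */
        execlp("/bin/ls", "ls", NULL);  /* replace memory image with ls */
    }
    else {                              /* parent process */
        wait(NULL);                     /* wait for the child to terminate */
        printf("Child complete\n");
    }
    return 0;
}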


Process Termination

• A process terminates when it finishes executing its final statement and asks
the operating system to delete it by using the exit() system call.

• At that point, the process may return a status value (typically an integer) to
its parent process (via the wait() system call).

• All the resources of the process – including physical and virtual memory, open
files, and I/O buffers – are deallocated by the operating system.

Department of Computer Science and Engineering


Process Termination
Termination can occur in other circumstances as well:
• A process can cause the termination of another process via an appropriate
system call.

• Usually, such a system call can be invoked only by the parent of the
process that is to be terminated.

• Otherwise, users could arbitrarily kill each other's jobs.

Department of Computer Science and Engineering


Process Termination

Parent may terminate the execution of one of its children for a variety of
reasons such as these:

• The child has exceeded its usage of some of the allocated resources. (To
determine whether this has occurred, the parent must have a mechanism to
inspect the state of its children.)

• The task assigned to the child is no longer required.

• The parent is exiting, and the operating system does not allow a child to
continue if its parent terminates.

Department of Computer Science and Engineering


Process Termination
• Some operating systems do not allow a child to exist if its parent has terminated.
If a process terminates, then all its children must also be terminated.

• This is cascading termination: all children, grandchildren, etc. are terminated.

• The termination is initiated by the operating system.

• The parent process may wait for the termination of a child process by using the
wait() system call. The call returns status information and the pid of the
terminated process:

pid = wait(&status);

• If no parent is waiting (it has not invoked wait()), the terminated process is a zombie.

• If the parent terminated without invoking wait(), the process is an orphan.


Department of Computer Science and Engineering
Interprocess Communication

• Processes executing concurrently in the operating system may be either


independent processes or cooperating processes.

Independent processes

• They cannot affect or be affected by other processes executing in the system.

Cooperating processes

• They can affect or be affected by other processes executing in the system,
including sharing data.

• Any process that shares data with other processes is a cooperating process.

Department of Computer Science and Engineering


Interprocess Communication
There are several reasons for providing an environment that allows process
cooperation
• Information sharing
• Computation speedup
• Modularity
• Convenience

• Cooperating processes need an interprocess communication (IPC) mechanism

that allows them to exchange data and information.

• There are two fundamental models of IPC

• Shared memory

• Message passing

Department of Computer Science and Engineering


Shared Memory & Message Passing

• In the shared-memory model, a region of memory shared by the cooperating

processes is established.

• Processes can then exchange information by reading and writing data to the
shared region.

• In the message-passing model, communication takes place by means of messages

exchanged between the cooperating processes.

Department of Computer Science and Engineering


Communications Models
(a) Message passing. (b) Shared memory.

Department of Computer Science and Engineering


Shared Memory Systems
• Producer-Consumer Problem
• Paradigm for cooperating processes, producer process produces
information that is consumed by a consumer process

• unbounded-buffer places no practical limit on the size of the buffer

• bounded-buffer assumes that there is a fixed buffer size

Department of Computer Science and Engineering


Bounded Buffer

• Shared data
#define BUFFER_SIZE 10
typedef struct {
...
} item;

item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;

• Solution is correct, but can only use BUFFER_SIZE-1 elements

Department of Computer Science and Engineering


Bounded-Buffer – Producer

item next_produced;
while (true) {
/* produce an item in next produced */
while (((in + 1) % BUFFER_SIZE) == out)
; /* do nothing */
buffer[in] = next_produced;
in = (in + 1) % BUFFER_SIZE;
}

Department of Computer Science and Engineering


Bounded Buffer – Consumer

item next_consumed;
while (true) {
while (in == out)
; /* do nothing */
next_consumed = buffer[out];
out = (out + 1) % BUFFER_SIZE;

/* consume the item in next consumed */


}

Department of Computer Science and Engineering


Interprocess Communication – Message Passing

• Mechanism for processes to communicate and to synchronize their


actions
• Message system – processes communicate with each other without
resorting to shared variables
• IPC facility provides two operations:
• send(message)
• receive(message)
• The message size is either fixed or variable

Department of Computer Science and Engineering
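
As a concrete illustration (not from the slides) of the send()/receive() operations, the sketch below uses POSIX message queues; the queue name /demo_queue, the message size, and the message contents are arbitrary assumptions.

/* Minimal POSIX message-queue sketch (on Linux, compile with -lrt). */
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };

    /* create (or open) the queue; "/demo_queue" is an arbitrary example name */
    mqd_t mq = mq_open("/demo_queue", O_CREAT | O_RDWR, 0600, &attr);
    if (mq == (mqd_t)-1) { perror("mq_open"); return 1; }

    /* send(message) */
    const char *msg = "hello";
    mq_send(mq, msg, strlen(msg) + 1, 0);

    /* receive(message): the buffer must be at least mq_msgsize bytes */
    char buf[64];
    mq_receive(mq, buf, sizeof(buf), NULL);
    printf("received: %s\n", buf);

    mq_close(mq);
    mq_unlink("/demo_queue");
    return 0;
}
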


Interprocess Communication – Message Passing

• If processes P and Q wish to communicate, they need to:


• Establish a communication link between them
• Exchange messages via send/receive
• Implementation issues:
• How are links established?
• Can a link be associated with more than two processes?
• How many links can there be between every pair of communicating
processes?
• What is the capacity of a link?
• Is the size of a message that the link can accommodate fixed or
variable?
• Is a link unidirectional or bi-directional?

Department of Computer Science and Engineering


Interprocess Communication – Message Passing

• Implementation of communication link


• Physical:
• Shared memory
• Hardware bus
• Network
• Logical:
• Direct or indirect – naming
• Synchronous or asynchronous – synchronization
• Automatic or explicit buffering – buffering

Department of Computer Science and Engineering


Direct Communication

• Processes must name each other explicitly:

• send (P, message) – send a message to process P

• receive(Q, message) – receive a message from process Q

• Properties of communication link

• Links are established automatically

• A link is associated with exactly one pair of communicating


processes

• Between each pair there exists exactly one link

• The link may be unidirectional, but is usually bi-directional

Department of Computer Science and Engineering


Indirect Communication
• Messages are directed and received from mailboxes (also referred to as
ports)
• Each mailbox has a unique id
• Processes can communicate only if they share a mailbox
• Properties of communication link
• Link established only if processes share a common mailbox
• A link may be associated with many processes
• Each pair of processes may share several communication links
• Link may be unidirectional or bi-directional

Department of Computer Science and Engineering


Indirect Communication
• Operations
• create a new mailbox (port)
• send and receive messages through mailbox
• destroy a mailbox
• Primitives are defined as:
send(A, message) – send a message to mailbox A
receive(A, message) – receive a message from mailbox A

Department of Computer Science and Engineering


Indirect Communication
• Mailbox sharing
• P1, P2, and P3 share mailbox A
• P1, sends; P2 and P3 receive
• Who gets the message?
• Solutions
• Allow a link to be associated with at most two processes
• Allow only one process at a time to execute a receive operation
• Allow the system to select arbitrarily the receiver. Sender is notified
who the receiver was.

Department of Computer Science and Engineering


Synchronization
• Message passing may be either blocking or non-blocking
• Blocking is considered synchronous
• Blocking send -- the sender is blocked until the message is
received
• Blocking receive -- the receiver is blocked until a message is
available
• Non-blocking is considered asynchronous
• Non-blocking send -- the sender sends the message and continues
• Non-blocking receive -- the receiver receives either a valid message or a
null message
• Different combinations are possible
• If both send and receive are blocking, we have a rendezvous
Department of Computer Science and Engineering
Synchronization
• Producer-consumer becomes trivial

message next_produced;
while (true) {
/* produce an item in next produced */
send(next_produced);
}
message next_consumed;
while (true) {
receive(next_consumed);

/* consume the item in next consumed */


}

Department of Computer Science and Engineering


Buffering

• Queue of messages attached to the link.

• implemented in one of three ways

1. Zero capacity – no messages are queued on the link; the sender must wait for
the receiver (rendezvous)

2. Bounded capacity – finite length of n messages; the sender must wait if the link is full

3. Unbounded capacity – infinite length; the sender never waits

Department of Computer Science and Engineering


Examples of IPC Systems - Mach
• Mach communication is message based
• Even system calls are messages
• Each task gets two mailboxes at creation- Kernel and Notify
• Only three system calls needed for message transfer msg_send(),
msg_receive(), msg_rpc()
• Mailboxes needed for communication, created via port_allocate()
• Send and receive are flexible, for example four options if mailbox
full:
• Wait indefinitely
• Wait at most n milliseconds
• Return immediately
• Temporarily cache a message

Department of Computer Science and Engineering


Process Synchronization
Race Condition
• counter++ could be implemented as
register1 = counter
register1 = register1 + 1
counter = register1
• counter-- could be implemented as
register2 = counter
register2 = register2 - 1
counter = register2
• Consider this execution interleaving with “count = 5” initially:
S0: producer execute register1 = counter {register1 = 5}
S1: producer execute register1 = register1 + 1 {register1 = 6}
S2: consumer execute register2 = counter {register2 = 5}
S3: consumer execute register2 = register2 – 1 {register2 = 4}
S4: producer execute counter = register1 {counter = 6 }
S5: consumer execute counter = register2 {counter = 4}
Department of Computer Science and Engineering
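
The interleaving above can be reproduced (this example is not from the slides) with a small pthreads program: two unsynchronized threads increment a shared counter, and lost updates usually make the final value less than expected. The thread count and iteration count are arbitrary choices.

/* Race-condition sketch: two threads increment a shared counter
   without synchronization (compile with -pthread). */
#include <pthread.h>
#include <stdio.h>

#define ITERATIONS 1000000

/* volatile only keeps the compiler from optimizing the loop away;
   it does NOT fix the race */
volatile int counter = 0;

void *increment(void *arg)
{
    for (int i = 0; i < ITERATIONS; i++)
        counter++;               /* read-modify-write: not atomic */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    /* Expected 2000000, but usually less because updates are lost. */
    printf("counter = %d\n", counter);
    return 0;
}
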
Critical Section Problem
• Consider system of n processes {p0, p1, … pn-1}

• Each process has a critical-section segment of code

• Process may be changing common variables, updating table, writing file,


etc

• When one process is executing in its critical section, no other process may be executing in its critical

section

• The critical-section problem is to design a protocol to solve this

• Each process must ask permission to enter critical section in entry section,
may follow critical section with exit section, then remainder section

Department of Computer Science and Engineering


Critical Section
• General structure of process Pi

Department of Computer Science and Engineering


Algorithm for Process Pi

do {
    while (turn == j)
        ;               /* entry section: busy-wait until it is Pi's turn */

    /* critical section */

    turn = j;           /* exit section: hand the turn to Pj */

    /* remainder section */
} while (true);

Department of Computer Science and Engineering


Solution to Critical-Section Problem
1.Mutual Exclusion - If process Pi is executing in its critical section, then no
other processes can be executing in their critical sections

2.Progress - If no process is executing in its critical section and there exist


some processes that wish to enter their critical section, then the selection of
the processes that will enter the critical section next cannot be postponed
indefinitely

3.Bounded Waiting - A bound must exist on the number of times that other
processes are allowed to enter their critical sections after a process has made
a request to enter its critical section and before that request is granted
• Assume that each process executes at a nonzero speed
• No assumption concerning the relative speed of the n processes
Department of Computer Science and Engineering
Semaphore
• A synchronization tool that provides more sophisticated ways (than mutex locks)
for processes to synchronize their activities.
• Semaphore S – integer variable
• Can only be accessed via two indivisible (atomic) operations
• wait() and signal()
• Originally called P() and V()
• Definition of the wait() operation
wait(S) {
    while (S <= 0)
        ;   // busy wait
    S--;
}

• Definition of the signal() operation

signal(S) {
    S++;
}

Department of Computer Science and Engineering
Semaphore Usage
• Counting semaphore – integer value can range over an unrestricted
domain
• Binary semaphore – integer value can range only between 0 and 1
• Same as a mutex lock
• Can solve various synchronization problems
• Consider P1 and P2 that require S1 to happen before S2
Create a semaphore “synch” initialized to 0
P1:
S1 ;
signal(synch);
P2:
wait(synch);
S2 ;
• Can implement a counting semaphore S as a binary semaphore
Department of Computer Science and Engineering
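
A runnable variant of this ordering idiom (not from the slides) using POSIX unnamed semaphores might look like the sketch below; the statements S1 and S2 are stand-ins implemented as printf calls.

/* Ordering S1 before S2 with a POSIX semaphore (compile with -pthread). */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t synch;                       /* initialized to 0 */

void *p1(void *arg)
{
    printf("S1\n");                /* statement S1 */
    sem_post(&synch);              /* signal(synch) */
    return NULL;
}

void *p2(void *arg)
{
    sem_wait(&synch);              /* wait(synch) */
    printf("S2\n");                /* statement S2: runs only after S1 */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    sem_init(&synch, 0, 0);        /* shared between threads, value 0 */
    pthread_create(&t2, NULL, p2, NULL);
    pthread_create(&t1, NULL, p1, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    sem_destroy(&synch);
    return 0;
}
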
Semaphore Implementation
• Must guarantee that no two processes can execute the wait() and
signal() on the same semaphore at the same time
• Thus, the implementation becomes the critical section problem where the
wait and signal code are placed in the critical section
• Could now have busy waiting in critical section implementation
• But implementation code is short
• Little busy waiting if critical section rarely occupied
• Note that applications may spend lots of time in critical sections and
therefore this is not a good solution

Department of Computer Science and Engineering


Semaphore Implementation with no Busy waiting
• With each semaphore there is an associated waiting queue
• Each entry in a waiting queue has two data items:
• value (of type integer)
• pointer to next record in the list
• Two operations:
• block – place the process invoking the operation on the appropriate
waiting queue
• wakeup – remove one of processes in the waiting queue and place it
in the ready queue
• typedef struct{
int value;
struct process *list;
} semaphore;

Department of Computer Science and Engineering


Semaphore Implementation with no Busy waiting
wait(semaphore *S) {
S->value--;
if (S->value < 0) {
add this process to S->list;
block();
}
}
signal(semaphore *S) {
S->value++;
if (S->value <= 0) {
remove a process P from S->list;
wakeup(P);
}
}

Department of Computer Science and Engineering
Classical Problems of Synchronization

• Classical problems used to test newly-proposed synchronization schemes

• Bounded-Buffer Problem

• Readers and Writers Problem

• Dining-Philosophers Problem

Department of Computer Science and Engineering


Bounded-Buffer Problem
• n buffers, each can hold one item
• Semaphore mutex initialized to the value 1 – binary semaphore
• Semaphore full initialized to the value 0 - counting semaphore
• Semaphore empty initialized to the value n - counting semaphore

Department of Computer Science and Engineering


Bounded-Buffer Problem
• The structure of the producer process
do {
...
/* produce an item in next_produced */
...
wait(empty);
wait(mutex);
...
/* add next produced to the buffer */
...
signal(mutex);
signal(full);
} while (true);
Department of Computer Science and Engineering
Bounded-Buffer Problem
• The structure of the consumer process
do {
wait(full);
wait(mutex);
...
/* remove an item from buffer to next_consumed */
...
signal(mutex);
signal(empty);
...
/* consume the item in next consumed */
...
} while (true);
Department of Computer Science and Engineering
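
For reference (not part of the slides), the two pseudocode loops above can be combined into a runnable pthreads sketch. A pthread mutex stands in for the binary semaphore mutex; BUFFER_SIZE, the integer item type, and the loop counts are illustrative assumptions.

/* Bounded-buffer producer/consumer with POSIX semaphores and a mutex
   (compile with -pthread). */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define BUFFER_SIZE 10
#define ITEMS       20

int buffer[BUFFER_SIZE];
int in = 0, out = 0;

sem_t empty, full;                        /* counting semaphores */
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

void *producer(void *arg)
{
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&empty);                 /* wait(empty) */
        pthread_mutex_lock(&mutex);       /* wait(mutex) */
        buffer[in] = i;                   /* add item to the buffer */
        in = (in + 1) % BUFFER_SIZE;
        pthread_mutex_unlock(&mutex);     /* signal(mutex) */
        sem_post(&full);                  /* signal(full) */
    }
    return NULL;
}

void *consumer(void *arg)
{
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&full);                  /* wait(full) */
        pthread_mutex_lock(&mutex);       /* wait(mutex) */
        int item = buffer[out];           /* remove item from the buffer */
        out = (out + 1) % BUFFER_SIZE;
        pthread_mutex_unlock(&mutex);     /* signal(mutex) */
        sem_post(&empty);                 /* signal(empty) */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void)
{
    sem_init(&empty, 0, BUFFER_SIZE);     /* empty = n */
    sem_init(&full, 0, 0);                /* full  = 0 */

    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);

    sem_destroy(&empty);
    sem_destroy(&full);
    return 0;
}
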
Readers-Writers Problem
• A data set is shared among a number of concurrent processes
• Readers – only read the data set; they do not perform any updates
• Writers – can both read and write
• Problem – allow multiple readers to read at the same time
• Only one single writer can access the shared data at the same time
• Several variations of how readers and writers are considered – all
involve some form of priorities
• Shared Data
• Data set
• Semaphore rw_mutex initialized to 1
• Semaphore mutex initialized to 1
• Integer read_count initialized to 0

Department of Computer Science and Engineering




Readers-Writers Problem
• The structure of a writer process

do {
wait(rw_mutex);
...
/* writing is performed */
...
signal(rw_mutex);
} while (true);

Department of Computer Science and Engineering


Readers-Writers Problem
• The structure of a reader process
do {
wait(mutex);
read_count++;
if (read_count == 1)
wait(rw_mutex);
signal(mutex);
...
/* reading is performed */
...
wait(mutex);
read_count--;
if (read_count == 0)
signal(rw_mutex);
signal(mutex);
} while (true);
Department of Computer Science and Engineering
Dining-Philosophers Problem

• Philosophers spend their lives alternating thinking and eating


• Don’t interact with their neighbors, occasionally try to pick up 2
chopsticks (one at a time) to eat from bowl
• Need both to eat, then release both when done
• In the case of 5 philosophers
• Shared data
• Bowl of rice (data set)
• Semaphore chopstick [5] initialized to 1

Department of Computer Science and Engineering


Dining-Philosophers Problem Algorithm
• The structure of Philosopher i:
do {
wait (chopstick[i] );
wait (chopStick[ (i + 1) % 5] );

// eat

signal (chopstick[i] );
signal (chopstick[ (i + 1) % 5] );

// think
} while (TRUE);
• What is the problem with this algorithm? (If all five philosophers become hungry at the same time and each picks up the left chopstick first, every philosopher waits forever for the right one: a deadlock.)

Department of Computer Science and Engineering


Dining-Philosophers Problem Algorithm

• Deadlock handling

• Allow at most 4 philosophers to be sitting simultaneously at the


table.

• Allow a philosopher to pick up the chopsticks only if both are available

(the picking must be done in a critical section).

• Use an asymmetric solution -- an odd-numbered philosopher picks

up first the left chopstick and then the right chopstick, while an even-
numbered philosopher picks up first the right chopstick and then
the left chopstick (sketched below).

Department of Computer Science and Engineering
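
As an illustration of the asymmetric option above (not from the slides), the following sketch uses one POSIX semaphore per chopstick; only the pickup order differs between odd- and even-numbered philosophers, which breaks the circular wait. The round count is arbitrary.

/* Asymmetric dining-philosophers sketch (compile with -pthread). */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 5
sem_t chopstick[N];                       /* one semaphore per chopstick */

void *philosopher(void *arg)
{
    int i = *(int *)arg;
    int left = i, right = (i + 1) % N;

    for (int round = 0; round < 3; round++) {
        /* think */
        if (i % 2 == 1) {                 /* odd: left, then right */
            sem_wait(&chopstick[left]);
            sem_wait(&chopstick[right]);
        } else {                          /* even: right, then left */
            sem_wait(&chopstick[right]);
            sem_wait(&chopstick[left]);
        }
        printf("philosopher %d eats\n", i);
        sem_post(&chopstick[left]);
        sem_post(&chopstick[right]);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[N];
    int id[N];

    for (int i = 0; i < N; i++)
        sem_init(&chopstick[i], 0, 1);    /* each chopstick available */
    for (int i = 0; i < N; i++) {
        id[i] = i;
        pthread_create(&t[i], NULL, philosopher, &id[i]);
    }
    for (int i = 0; i < N; i++)
        pthread_join(t[i], NULL);
    return 0;
}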


Monitors
• A high-level abstraction that provides a convenient and effective mechanism
for process synchronization
• Abstract data type, internal variables only accessible by code within the
procedure
• Only one process may be active within the monitor at a time
• But not powerful enough to model some synchronization schemes
monitor monitor-name
{
// shared variable declarations
procedure P1 (…) { …. }

procedure Pn (…) {……}

Initialization code (…) { … }


}

Department of Computer Science and Engineering
Schematic view of a Monitor

Department of Computer Science and Engineering


Condition Variables

• condition x, y;

• Two operations are allowed on a condition variable:

• x.wait() – a process that invokes the operation is suspended until


x.signal()

• x.signal() – resumes one of processes (if any) that invoked x.wait()

• If no x.wait() on the variable, then it has no effect on the


variable

Department of Computer Science and Engineering
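
In C (an illustrative mapping, not from the slides), pthread condition variables play the role of x.wait() and x.signal(); a wait must be paired with a mutex and re-checked in a loop.

/* Condition-variable sketch: one thread waits for a flag that another
   thread sets (compile with -pthread). */
#include <pthread.h>
#include <stdio.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t  x    = PTHREAD_COND_INITIALIZER;
int ready = 0;                           /* the condition being waited on */

void *waiter(void *arg)
{
    pthread_mutex_lock(&lock);
    while (!ready)                       /* x.wait(): suspend until signaled */
        pthread_cond_wait(&x, &lock);
    printf("condition signaled\n");
    pthread_mutex_unlock(&lock);
    return NULL;
}

void *signaler(void *arg)
{
    pthread_mutex_lock(&lock);
    ready = 1;
    pthread_cond_signal(&x);             /* x.signal(): resume one waiter */
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t w, s;
    pthread_create(&w, NULL, waiter, NULL);
    pthread_create(&s, NULL, signaler, NULL);
    pthread_join(w, NULL);
    pthread_join(s, NULL);
    return 0;
}

Pthread condition variables follow the signal-and-continue convention discussed on the next slide: the signaled thread resumes only after it reacquires the mutex.
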


Monitor with Condition Variables

Department of Computer Science and Engineering


Condition Variables Choices

• If process P invokes x.signal(), and process Q is suspended in x.wait(), what


should happen next?
• Q and P cannot execute in parallel; if Q is resumed, then P must wait
• Options include
• Signal and wait – P waits until Q either leaves the monitor or it waits for
another condition
• Signal and continue – Q waits until P either leaves the monitor or it waits
for another condition
• Both have pros and cons – language implementer can decide
• Monitors implemented in Concurrent Pascal adopt a compromise:
• P executing signal immediately leaves the monitor, Q is resumed
• Implemented in other languages including Mesa, C#, Java

Department of Computer Science and Engineering


THANK YOU

Department of Computer Science and Engineering
