Unit 5 Amortised Analysis
UNIT 5
AMORTIZED ANALYSIS
➢ Amortized analysis averages the cost of an operation over a whole sequence of operations. This is most commonly the case with data structures, which have state that persists between operations.
➢ The basic idea is that a worst-case operation can alter the state in such a way that the worst case cannot occur again for a long time, thus "amortizing" its cost.
Amortized analysis can be performed by three methods:
1. Aggregate Method
2. Accounting Method
3. Potential Method
➢ In the aggregate method, we show that a sequence of n operations takes worst-case time T(n) in total.
➢ In the worst case, the average cost, or amortized cost, per operation is T(n)/n.
➢ Note that this amortized cost applies to each operation, even when there are several types of operations in the sequence.
➢ Example 1: Stack operations
1. Push(x): pushes object x onto the top of the stack.
2. Pop(): pops the top object off the stack.
3. Multipop(k): while the stack is not empty and k ≠ 0, do Pop() and decrement k; that is, it pops the top k objects, or keeps popping until it reaches an empty stack.
[Figure: a stack holding A, B, C, D (D on top); Push(X) places X on top; Pop() on that stack removes X, leaving D on top; Multipop(3) on the stack X, D, C, B, A removes the top three objects X, D, C, leaving B on top of A.]
➢ The worst-case cost of a MULTIPOP operation in the sequence is O(n), since the stack size is at most n.
➢ The worst-case time of any stack operation is therefore O(n), and hence a sequence of n operations costs O(n²) in the worst case.
➢ Although this analysis is correct, the O(n²) result, obtained by considering the worst-case cost of each operation individually, is not tight.
➢ Using the aggregate method of amortized analysis, we obtain a better upper bound: any sequence of n PUSH, POP, and MULTIPOP operations on an initially empty stack can cost at most O(n). Why?
➢ Claim: any sequence of n Push, Pop, and Multipop operations has worst-case complexity at most O(n).
➢ Each object can be popped at most one time for each time it is pushed.
➢ The number of push operations is at most n, so the number of pops, either from Pop or from Multipop, is also at most n. The total cost is therefore O(n), and the amortized cost per operation is O(n)/n = O(1).
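As a sketch (not from the original notes), the stack with Multipop can be implemented directly, counting every elementary push and pop; the class and counter names are illustrative:

```python
# Sketch of the stack example: PUSH, POP, and MULTIPOP, with a counter
# for the total number of elementary pushes/pops performed.

class MultipopStack:
    def __init__(self):
        self.items = []
        self.total_cost = 0  # total elementary operations so far

    def push(self, x):
        self.items.append(x)
        self.total_cost += 1  # a PUSH costs 1

    def pop(self):
        self.total_cost += 1  # a POP costs 1
        return self.items.pop()

    def multipop(self, k):
        # Pops min(k, size) objects; each individual pop costs 1.
        while self.items and k > 0:
            self.items.pop()
            self.total_cost += 1
            k -= 1

s = MultipopStack()
for x in "ABCD":
    s.push(x)          # 4 pushes
s.push("X")            # 5th push
s.pop()                # removes X
s.multipop(3)          # removes D, C, B
# 7 operations issued; total elementary cost 5 + 1 + 3 = 9,
# which is at most 2 * (number of pushes) = 10, as the claim predicts.
print(s.total_cost)    # 9
```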
➢ Example 2: Incrementing a k-bit binary counter A[0..k-1], where A[0] is the least significant bit:

INCREMENT(A)
    i = 0
    while i < length(A) and A[i] == 1
        A[i] = 0
        i = i + 1
    if i < length(A)
        then A[i] = 1

Counter Value   A[2]   A[1]   A[0]
      0           0      0      0
      1           0      0      1
      2           0      1      0
      3           0      1      1
      4           1      0      0
➢ Some increments flip only one bit, but other operations ripple through the number and flip many bits.
➢ A Cursory Analysis: a single INCREMENT flips at most k bits, so a sequence of n INCREMENT operations takes O(nk) time in the worst case. This bound is correct but not tight: the aggregate method shows the total number of bit flips over n increments is less than 2n, so the amortized cost per INCREMENT is O(1).
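The INCREMENT procedure can be sketched in Python with an added flip counter (an illustrative addition, not part of the pseudocode) to confirm that n increments cause fewer than 2n bit flips:

```python
# INCREMENT on a k-bit binary counter A[0..k-1] (A[0] is the least
# significant bit), returning how many bits this one call flipped.

def increment(A):
    flips = 0
    i = 0
    while i < len(A) and A[i] == 1:
        A[i] = 0          # ripple: reset trailing 1's
        flips += 1
        i += 1
    if i < len(A):
        A[i] = 1          # set the first 0 bit
        flips += 1
    return flips

A = [0, 0, 0]             # counter value 0
total = sum(increment(A) for _ in range(4))
# After 4 increments A = [0, 0, 1], i.e. binary 100 = 4.
# Total bit flips: 1 + 2 + 1 + 3 = 7, which is < 2 * 4 = 8.
print(A, total)
```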
➢ In the accounting method, some operations are charged an amortized cost higher than their actual cost, and the difference is stored as credit on objects in the data structure.
➢ Credits can be used later to help perform other operations whose amortized cost is less than their actual cost.
➢ This is very different from the aggregate method, in which all operations have the same amortized cost.
➢ The total amortized cost for any sequence must be an upper bound on the actual costs.
➢ Example 1 (accounting method): the same stack operations Push, Pop, and Multipop as before.
➢ Assign amortized costs: 2 for Push, 0 for Pop, and 0 for Multipop. Each Push pays 1 for the push itself and stores 1 credit on the pushed object; every later pop of that object is paid by its stored credit.
➢ Since we start with an empty stack, pushes must be done first, and this builds up the amortized credit. All pops are charged against this credit; there can never be more pops (of either type) than pushes.
➢ T(n) ≤ 2n
➢ T(n) = O(n)
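The accounting argument can be checked with a small sketch (the function name and operation-list format are illustrative assumptions): charging 2 per Push and 0 per Pop/Multipop, the stored credit never goes negative:

```python
# Accounting-method bookkeeping for the stack: amortized cost 2 per PUSH
# (1 for the push, 1 stored as credit), 0 per POP/MULTIPOP.

def accounting_credit(ops):
    """ops is a list like [("push", 1), ("pop", 1), ("multipop", 3)],
    where the second field is the actual cost of the operation.
    Returns the credit remaining after the whole sequence."""
    credit = 0
    for name, actual in ops:
        amortized = 2 if name == "push" else 0
        credit += amortized - actual
        assert credit >= 0  # credit never goes negative on a legal sequence
    return credit

seq = [("push", 1)] * 5 + [("pop", 1), ("multipop", 3)]
print(accounting_credit(seq))   # 5*2 - (5 + 1 + 3) = 1 credit left
```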
➢ Example 2 (accounting method): the same binary counter and INCREMENT operation as before.
➢ Charge an amortized cost of 2 to set a bit to 1: one unit pays for the actual flip, and one unit is stored on the bit as credit.
➢ When a bit is reset to 0 in the while loop, the credit stored on it pays for the reset, so resets cost nothing extra.
➢ Amortized Cost: each INCREMENT sets at most one bit to 1, so
➢ T(n) ≤ 2n
➢ T(n) = O(n)
➢ Since the number of 1's is never negative, the amount of credit is also never negative.
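A small sketch (illustrative, not from the notes) confirming that a charge of 2 per INCREMENT always covers the actual number of bit flips:

```python
# Accounting check for the binary counter: after n INCREMENTs the actual
# total number of bit flips never exceeds the 2n charged.

def increment_cost(A):
    """Perform one INCREMENT and return the actual number of bit flips."""
    i = 0
    flips = 0
    while i < len(A) and A[i] == 1:
        A[i], i, flips = 0, i + 1, flips + 1   # reset trailing 1's
    if i < len(A):
        A[i], flips = 1, flips + 1             # set the first 0 bit
    return flips

A = [0] * 8                 # 8-bit counter, initially zero
actual = 0
for n in range(1, 101):
    actual += increment_cost(A)
    assert actual <= 2 * n  # the amortized charge of 2 per call suffices
print(actual)               # total flips after 100 increments
```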
➢ The amortized cost ci' of the ith operation with respect to the potential function ɸ is defined by:
➢ ci' = ci + ɸ(Di) - ɸ(Di-1)
➢ Where
➢ ci is the actual cost of the ith operation, and Di is the data structure after applying the ith operation.
➢ The amortized cost of each operation is therefore its actual cost plus the increase in potential due to the operation.
➢ Example 1 (potential method): the same stack operations Push, Pop, and Multipop, with potential function ɸ(D) = s, the number of objects on the stack. The potential of the initial empty stack is 0, and the potential is never negative.
➢ Push(X):
➢ Potential change: ɸ(Di) - ɸ(Di-1) = (s+1) - s = 1
➢ Amortized Cost: ci' = ci + ɸ(Di) - ɸ(Di-1) = 1 + 1 = 2
➢ ci' = O(1)
➢ Pop():
➢ Potential change: ɸ(Di) - ɸ(Di-1) = (s-1) - s = -1
➢ Amortized Cost: ci' = ci + ɸ(Di) - ɸ(Di-1) = 1 + (-1) = 0
➢ ci' = O(1)
➢ Multipop(k):
➢ Potential change: ɸ(Di) - ɸ(Di-1) = (s-k) - s = -k
➢ Amortized Cost: ci' = ci + ɸ(Di) - ɸ(Di-1) = k + (-k) = 0
➢ ci' = O(1)
➢ The amortized cost of each of the three operations is O(1), so the total cost of any sequence of n stack operations is O(n).
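The per-operation potential bookkeeping can be sketched as follows (the function name and operation encoding are illustrative), with ɸ(D) taken as the stack size:

```python
# Potential-method bookkeeping for the stack, with potential
# phi(D) = number of objects on the stack.

def amortized_costs(ops):
    """ops: list of ("push",), ("pop",) or ("multipop", k) tuples.
    Returns the amortized cost c_i + phi(D_i) - phi(D_{i-1}) of each op."""
    size = 0       # phi = current stack size; empty stack has phi = 0
    result = []
    for op in ops:
        if op[0] == "push":
            actual, delta_phi = 1, +1
        elif op[0] == "pop":
            actual, delta_phi = 1, -1
        else:  # multipop(k) on a stack holding at least k objects
            k = op[1]
            actual, delta_phi = k, -k
        size += delta_phi
        assert size >= 0            # potential never goes negative
        result.append(actual + delta_phi)
    return result

seq = [("push",)] * 5 + [("pop",), ("multipop", 3)]
print(amortized_costs(seq))  # [2, 2, 2, 2, 2, 0, 0] -- every cost is O(1)
```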
➢ Example 2 (potential method): the same binary counter and INCREMENT operation as before, with potential function ɸ(Di) = bi, the number of 1's in the counter after the ith operation.
➢ Suppose the ith INCREMENT resets ti bits to 0. Its actual cost is ci ≤ ti + 1 (ti resets plus at most one bit set to 1).
➢ The potential change is ɸ(Di) - ɸ(Di-1) ≤ 1 - ti.
➢ Amortized Cost: ci' = ci + ɸ(Di) - ɸ(Di-1) ≤ (ti + 1) + (1 - ti) = 2 = O(1).
➢ Therefore, asymptotically, a sequence of n INCREMENT operations on an initially zero counter costs O(n).
A tradeoff is a situation where one thing increases and another thing decreases. It is a way to solve a problem:
1. Either in less time by using more space, or
2. In very little space by spending a larger amount of time.
➢ The best Algorithm is that which helps to solve a problem that requires less space
in memory and also takes less time to generate the output.
➢ But in general, it is not always possible to achieve both of these conditions at the
same time.
➢ The most common situation is an algorithm using a lookup table: the answer to some question can be precomputed and written down for every possible value. One way of solving the problem is to store the entire lookup table, which lets you find answers very quickly but uses a lot of space.
➢ Another way is to calculate each answer without writing anything down, which uses very little space but might take a long time.
➢ Therefore, the more time-efficient an algorithm is, the less space-efficient it tends to be, and vice versa.
➢ Example: the Fibonacci numbers, Fn = Fn-1 + Fn-2, where F0 = 0 and F1 = 1, can be computed either by recomputing every value from scratch (little space, much time) or by storing previously computed values in a table (more space, less time).
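Both strategies can be sketched on the Fibonacci recurrence (the function names are illustrative):

```python
# Time-space tradeoff on F(n) = F(n-1) + F(n-2), F(0) = 0, F(1) = 1.

def fib_slow(n):
    # Little space, lots of recomputation: exponential running time.
    if n < 2:
        return n
    return fib_slow(n - 1) + fib_slow(n - 2)

def fib_table(n, table={0: 0, 1: 1}):
    # Lookup table: each value is computed once, then read back in O(1),
    # trading O(n) extra space for near-linear time.
    if n not in table:
        table[n] = fib_table(n - 1) + fib_table(n - 2)
    return table[n]

print(fib_slow(10), fib_table(10))  # both print 55
```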
➢ A randomized algorithm is an algorithm whose behaviour depends not only on its input but also on a sequence of random bits; its behaviour is determined by the random bits, thus either the running time, or the output (or both) are random variables.
➢ An algorithm that uses random numbers to decide what to do next anywhere in its logic is called a randomized algorithm.
➢ For example, in Randomized Quick Sort, we use a random number to pick the next pivot.
➢ If a randomized algorithm always returns the correct answer but its running time is a random variable, it is called a Las Vegas algorithm.
➢ To compute the expected time taken in the worst case, all possible values of the random variable need to be considered, and the time taken for every possible value needs to be evaluated.
➢ The average of all evaluated times is the expected worst-case time complexity.
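A minimal sketch of Randomized Quick Sort (an illustrative out-of-place variant using list partitioning, not the in-place version):

```python
# Randomized Quick Sort: the pivot is chosen by a random number, so the
# running time (but not the output) is a random variable -- a Las Vegas
# algorithm: the result is always correctly sorted.

import random

def randomized_quicksort(a):
    if len(a) <= 1:
        return a
    pivot = random.choice(a)                 # random pivot choice
    less = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return randomized_quicksort(less) + equal + randomized_quicksort(greater)

print(randomized_quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```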
➢ For many hard problems, instead of solving the optimization problem exactly, we settle for a near-optimal answer: an approximation to the optimization problem.
➢ For the traveling salesperson problem, the optimization problem is to find the shortest tour, and the approximation problem is to find a short tour.
➢ For the vertex cover problem, the optimization problem is to find the vertex cover with fewest vertices, and the approximation problem is to find a vertex cover with few vertices.
➢ An approximation algorithm guarantees a solution close to optimal: it runs in polynomial time and returns an answer whose cost is within a proven factor of the optimal cost.
➢ Approximation algorithms are used to get an answer near the (optimal) solution of an optimization problem when computing the exact optimum is too expensive.
➢ Performance Measure:
An approximation algorithm returns a legal solution, but the cost of that legal solution may not be optimal. The ratio between the cost of the returned solution and the cost of an optimal solution measures the quality of the approximation.
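The vertex cover case can be illustrated with the classical 2-approximation (the edge-list format and names are illustrative):

```python
# 2-approximation for vertex cover: repeatedly pick an uncovered edge and
# add BOTH its endpoints. The result is a legal cover, and its size is at
# most twice the optimum (each chosen edge forces at least one of its two
# endpoints into any optimal cover).

def approx_vertex_cover(edges):
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))   # take both endpoints of the edge
    return cover

edges = [(1, 2), (2, 3), (3, 4), (4, 5)]
c = approx_vertex_cover(edges)
# Every edge has an endpoint in c (a legal solution); here the optimal
# cover {2, 4} has 2 vertices, and the returned cover has 4 <= 2 * 2.
print(sorted(c))  # [1, 2, 3, 4]
```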
➢ A real-time embedded system has hardware the same as a computer, but unlike a computer it is not for general purpose; it is designed for specific functionality only.
➢ Scheduling is how processors and other resources are allocated to processes and tasks.
➢ Each task is assigned some priority, and task scheduling is done in strict order of priority.
➢ A higher priority task is scheduled before a lower priority task, and a higher priority task can preempt a running lower priority task.
Such an algorithm assigns priority to a task at design time, and the priority of each task remains fixed at run time.
When a task is ready, it is moved to the ready queue; a separate queue may be maintained for each priority level.
The task is moved to the exit, blocked, or waiting state according to its execution status.
In deadline-based scheduling, the deadline of the task determines the priority of the task. In real-time systems, meeting deadlines is very essential.
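Deadline-driven priority scheduling can be sketched with a min-heap ready queue (the task names and deadlines are invented for illustration):

```python
# Deadline-driven scheduling sketch: the ready queue is a min-heap keyed
# by deadline, so the task with the earliest deadline (highest priority)
# is always dispatched first.

import heapq

ready_queue = []
for name, deadline in [("log", 50), ("sensor", 10), ("display", 30)]:
    # earlier deadline = higher priority
    heapq.heappush(ready_queue, (deadline, name))

order = []
while ready_queue:
    deadline, name = heapq.heappop(ready_queue)  # dispatch next task
    order.append(name)
print(order)  # ['sensor', 'display', 'log']
```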
➢ The problems which can be solved in polynomial time are called tractable (P class) problems.
➢ Ex.
1. Calculating the greatest common divisor.
2. Searching & Sorting Algorithms
3. Finding a maximum matching.
4. Decision versions of linear programming.
Tractable means that the problems can be solved in theory as well as in practice. But
the problems that can be solved in theory but not in practice are known as
intractable.
➢ Non-tractable (NP class) problems: the NP in NP class stands for Non-deterministic Polynomial time.
➢ Ex.
➢ The solutions of NP class problems are hard to find, since they can be solved only by a non-deterministic machine in polynomial time.
➢ But if a solution is given and we just want to verify whether it is correct or incorrect, then this can be done in polynomial time.
➢ So we can verify a problem's solution in polynomial time, but we cannot (as far as is known) solve the problem in polynomial time.
➢ To understand the relation between P & NP class problems, consider the following three cases:
1: If P == NP, which means
➢ every NP problem would be solvable in polynomial time, which is not believed to be actually feasible.
[Figure: Venn diagrams illustrating the possible relationships between P and NP.]