
Design & Analysis of Algorithms

UNIT -5

AMORTIZED ANALYSIS

Prepared By Mr. Vipin K. Wani


Amortized Analysis

➢ Amortized analysis is a worst-case analysis of a sequence of operations, used to obtain a tighter bound on the overall or average cost per operation in the sequence than is obtained by analysing each operation in the sequence separately.


➢ In an amortized analysis, the time required to perform a sequence of data-structure operations is averaged over all the operations performed.

➢ Amortized analysis differs from average-case analysis in that probability is not involved.

➢ An amortized analysis guarantees the average performance of each operation in the worst case.


➢ Amortized analysis requires knowledge of which series of operations are possible.

➢ This is most commonly the case with data structures, which have state that persists between operations.

➢ The basic idea is that a worst-case operation can alter the state in such a way that the worst case cannot occur again for a long time, thus "amortizing" its cost.



Amortized Vs Probabilistic Analysis
Probabilistic Analysis:
➢ Average-case running time: the average over all possible inputs for one algorithm (operation).
➢ If probability is used, it is called expected running time.

Amortized Analysis:
➢ No involvement of probability.
➢ Average performance on a sequence of operations, even if some operations are expensive.
➢ Guarantees the average performance of each operation in the sequence in the worst case.



Amortized Analysis

➢ There are three methods to perform amortized analysis, as mentioned below:

1. Aggregate Method

2. Accounting Method

3. Potential Method



Aggregate Method

➢ We determine an upper bound T(n) on the total cost of a sequence of n operations.

➢ In the worst case, the average cost, or amortized cost, per operation is T(n)/n.

➢ Note that this amortized cost applies to each operation, even when there are several types of operations in the sequence.


➢ Example 1:

➢ Objective is to Find worst case time, T(n), for n operations

➢ The amortized cost is T(n)/n

➢ The various stack operations are:

1. Push operation
2. Pop operation
3. Multipop operation

(Figure: a stack holding A, B, C, D, with D on top.)


➢ Push operation: Push(X) has complexity O(1).

(Figure: Push(X) places X on top of the stack; the top moves from D to X.)


➢ Pop operation: Pop() has complexity O(1).

(Figure: Pop() removes X from the top of the stack; the top returns to D.)


➢ Multipop operation: Multipop(k)

    While Stack_not_Empty() and k != 0
        do Pop()
           k = k - 1

➢ Complexity: O(min(s, k)), i.e. at most k pops, bounded by the current stack size s.

(Figure: Multipop(3) pops X, D, and C, leaving B on top.)


➢ Analysis of Stack operations:

➢ Let us analyze a sequence of n PUSH, POP, and MULTIPOP operations on an initially

empty stack.

➢ The worst-case cost of a MULTIPOP operation in the sequence is O(n), since the

stack size is at most n.

➢ The worst-case time of any stack operation is therefore O(n), and hence a

sequence of n operations costs O(n²), since we may have O(n) MULTIPOP


operations costing O(n) each.

➢ Although this analysis is correct, the O(n²) result, obtained by considering the

worst-case cost of each operation individually, is not tight.


➢ Analysis of Stack operations:

➢ Using the aggregate method of amortized analysis, we obtain a better upper bound that considers the entire sequence of n operations.

➢ In fact, although a single MULTIPOP operation can be expensive, any sequence of n PUSH, POP, and MULTIPOP operations on an initially empty stack can cost at most O(n). Why?


➢ Analysis of Stack operations:

➢ Claim: any sequence of n Push, Pop, and Multipop operations has worst-case complexity O(n).

➢ Each object can be popped at most once for each time it is pushed.

➢ The number of push operations is at most n, so the number of pops, whether from Pop or Multipop, is also at most n.

➢ The overall complexity is O(n).

➢ The amortized cost: O(n)/n = O(1)
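The aggregate argument above can be checked with a short simulation. This is a sketch of my own, not code from the slides; `CountingStack` is an assumed helper name that simply tallies the actual cost of every operation.

```python
# Sketch: n operations on an initially empty stack, counting the actual
# cost of every Push, Pop, and Multipop in one aggregate total.

class CountingStack:
    def __init__(self):
        self.items = []
        self.total_cost = 0        # aggregate actual cost so far

    def push(self, x):             # actual cost 1
        self.items.append(x)
        self.total_cost += 1

    def pop(self):                 # actual cost 1
        self.total_cost += 1
        return self.items.pop()

    def multipop(self, k):         # actual cost min(stack size, k)
        while self.items and k > 0:
            self.pop()
            k -= 1

s = CountingStack()
n = 0
for i in range(100):               # 100 pushes
    s.push(i)
    n += 1
s.multipop(50)                     # one "expensive" operation
n += 1
s.multipop(100)                    # pops the remaining 50 items, then stops
n += 1

print(s.total_cost)                # 100 pushes + 100 pops = 200
print(s.total_cost / n)            # average cost per operation stays below 2
```

Even though the two Multipop calls look expensive in isolation, the aggregate cost of all 102 operations is only 200, so the average cost per operation stays below the amortized bound of 2.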


➢ Example 2: Counting with a binary counter

    Increment(A)
        i = 0
        While (i < length(A) and A[i] == 1)
            A[i] = 0
            i = i + 1
        If i < length(A)
            Then A[i] = 1

Counter Value | A[2] | A[1] | A[0]
      0       |  0   |  0   |  0
      1       |  0   |  0   |  1
      2       |  0   |  1   |  0
      3       |  0   |  1   |  1
      4       |  1   |  0   |  0


➢ Example 2: Counting with a binary counter

Counter Value | A[2] | A[1] | A[0] | Cost | Total Cost
      0       |  0   |  0   |  0   |  0   | 0
      1       |  0   |  0   |  1   |  1   | 0+1=1
      2       |  0   |  1   |  0   |  2   | 1+2=3
      3       |  0   |  1   |  1   |  1   | 3+1=4
      4       |  1   |  0   |  0   |  3   | 4+3=7
      5       |  1   |  0   |  1   |  1   | 7+1=8
      6       |  1   |  1   |  0   |  2   | 8+2=10
      7       |  1   |  1   |  1   |  1   | 10+1=11


➢ Example 2: Counting with a binary counter

➢ We measure cost as the number of bit flips.

➢ Some operations flip only one bit.

➢ Other operations ripple through the number and flip many bits.

➢ What is the average cost per operation?

➢ A cursory analysis:

➢ The worst case for a single increment is O(k), where k is the number of bits.

➢ A sequence of n increment operations would therefore have worst-case cost O(nk).

➢ Although this is an upper bound, it is not tight.


➢ Example 2: Analysing the binary counter

➢ Not all bits are flipped in each iteration:

➢ A[0] is flipped every iteration.

➢ A[1] is flipped every 2nd iteration.

➢ A[2] is flipped every 4th iteration.

➢ Summing all bit flips, we have

    ∑i=0..⌊log n⌋ ⌊n/2^i⌋ < n · ∑i=0..∞ 1/2^i = 2n

➢ So it turns out the worst case is bounded by O(n).

➢ And hence the amortized cost O(n)/n is O(1).
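The 2n bound above can be checked empirically. The `increment` function below is an assumed sketch of the slide's pseudocode (bit 0 is the least significant bit), counting actual flips over n increments.

```python
# Sketch: count the actual bit flips over n increments of a k-bit counter
# and check the aggregate bound of fewer than 2n total flips.

def increment(bits):
    """Increment the counter in place; return the number of bit flips."""
    flips = 0
    i = 0
    while i < len(bits) and bits[i] == 1:
        bits[i] = 0            # flip a trailing 1 to 0
        flips += 1
        i += 1
    if i < len(bits):
        bits[i] = 1            # flip a single 0 to 1
        flips += 1
    return flips

n, k = 1000, 16
counter = [0] * k
total_flips = sum(increment(counter) for _ in range(n))
print(total_flips)             # 1994 total flips here
print(total_flips < 2 * n)     # below the 2n = 2000 aggregate bound
```

The total matches the sum in the analysis: bit i flips ⌊n/2^i⌋ times, and 1000 + 500 + 250 + ... = 1994 < 2000.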



Accounting Method

➢ Different charges are assigned to different operations.

➢ Overcharges result in a credit.

➢ Credits can be used later to help pay for other operations whose amortized cost is less than their actual cost.

➢ This is very different from the aggregate method, in which all operations have the same amortized cost.


➢ Choice of amortized amounts:

➢ The total amortized cost for any sequence must be an upper bound on the actual

costs.

➢ The total credit assignment must be nonnegative.

➢ So the amortized amounts selected must never result in a negative credit.



Accounting Method
26

➢ Let us analyze a sequence of 4 Push, Pop & multipop operations on initial

empty stack.

Operation Actual cost Amortized Cost


Push 1 2
Pop 1 0
Multipop K 0


➢ Sequence of stack operations:

Operation | Actual Cost | Amortized Cost | Credit
PUSH(A)   | 1           | 2              | 1
PUSH(B)   | 1           | 2              | 2
POP()     | 1           | 0              | 1
PUSH(C)   | 1           | 2              | 2
POP()     | 1           | 0              | 1
POP()     | 1           | 0              | 0
PUSH(D)   | 1           | 2              | 1
POP()     | 1           | 0              | 0
Total amortized cost: 8


➢ Analysis of Stack operations

➢ Let us analyze a sequence of n PUSH, POP, and MULTIPOP operations on an initially

empty stack.

➢ Since we start with an empty stack, pushes must be done first, and these build up the amortized credit. All pops are charged against this credit; there can never be more pops (of either type) than pushes.

➢ Therefore the total amortized cost is O(n).


➢ Analysis of Stack operations

➢ Total amortized cost = (2+2+0+2+0+0+2+0) = 8

➢ For a sequence containing 4 Push operations, the total amortized cost is 2*4 = 8.

➢ For n Push operations, the total amortized cost is 2*n.

➢ T(n) = 2n

➢ T(n) = O(n)
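The accounting scheme above (charge 2 per push, 0 per pop) can be sketched as a simulation. The `run` helper and `AMORTIZED` table are assumed names of my own; the assertion checks the rule from the earlier slide that credit must never go negative.

```python
# Sketch of the accounting method for the stack: Push is charged 2
# (1 pays for the push, 1 is banked as credit on the pushed item);
# Pop and Multipop are charged 0 and paid from banked credit.

AMORTIZED = {"push": 2, "pop": 0, "multipop": 0}

def run(ops):
    """ops is a list like [("push", x), ("pop",), ("multipop", k)]."""
    stack, credit = [], 0
    actual = amortized = 0
    for op in ops:
        amortized += AMORTIZED[op[0]]
        if op[0] == "push":
            stack.append(op[1])
            actual += 1
            credit += 1                # 1 unit banked on the new item
        elif op[0] == "pop" and stack:
            stack.pop()
            actual += 1
            credit -= 1                # pop paid by the item's credit
        elif op[0] == "multipop":
            k = op[1]
            while stack and k > 0:
                stack.pop()
                actual += 1
                credit -= 1
                k -= 1
        assert credit >= 0             # credit must never go negative
    return actual, amortized

ops = [("push", "A"), ("push", "B"), ("pop",), ("push", "C"),
       ("pop",), ("pop",), ("push", "D"), ("pop",)]
actual, amortized = run(ops)
print(actual, amortized)               # 8 actual vs 8 amortized
```

For the 8-operation sequence from the table, the amortized total (2*4 = 8) exactly covers the actual total, and the running credit matches the table's Credit column.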



Accounting Method
➢ Example 2: Counting with a binary counter

➢ Amortized cost:

➢ 2 for setting a bit to 1

➢ 0 for resetting a bit to 0

➢ The credit at any point is the number of 1 bits in the counter.

Counter Value | A[2] A[1] A[0] | Amortized Cost | Actual Cost | Credit
      0       |   0   0   0    |       0        |      0      |   0
      1       |   0   0   1    |       2        |      1      |   1
      2       |   0   1   0    |       2        |      2      |   1
      3       |   0   1   1    |       2        |      1      |   2


➢ Amortized analysis: Counting with a binary counter

Counter Value | A[2] A[1] A[0] | Amortized Cost | Actual Cost | Credit
      0       |   0   0   0    |       0        |      0      |   0
      1       |   0   0   1    |       2        |      1      |   1
      2       |   0   1   0    |       2        |      2      |   1
      3       |   0   1   1    |       2        |      1      |   2
      4       |   1   0   0    |       2        |      3      |   1
      5       |   1   0   1    |       2        |      1      |   2
      6       |   1   1   0    |       2        |      2      |   2
      7       |   1   1   1    |       2        |      1      |   3
Total amortized cost: 14


➢ Amortized analysis: Counting with a binary counter

➢ Analysis of the increment operation:

➢ The total amortized cost for n increments = ∑i=1..n ci'

➢ Total amortized cost = (0+2+2+2+2+2+2+2) = 14 for 7 increments

➢ Total amortized cost = 2*7 for 7 increments

➢ Total amortized cost = 2*n for n increments

➢ T(n) = 2n

➢ T(n) = O(n)


➢ Amortized analysis: Counting with a binary counter

    Increment(A)
        i = 0
        While (i < length(A) and A[i] == 1)
            A[i] = 0
            i = i + 1
        If i < length(A)
            Then A[i] = 1

➢ Analysis of the increment operation:

➢ The while loop that resets bits to 0 is charged against stored credits.

➢ Only one bit is set to 1 per increment (the final A[i] = 1 step), so the total charge is 2.

➢ Since the number of 1s is never negative, the amount of credit is also never negative.

➢ The total amortized cost for n increments is O(n).
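The credit invariant above (banked credit = number of 1 bits) can be checked directly. This sketch is mine, not from the slides; it charges each increment 2 and verifies the invariant after every step.

```python
# Sketch of the accounting argument for the counter: each increment is
# charged 2 (1 pays to set a bit to 1, 1 is banked on that bit and later
# pays to reset it to 0). The banked credit equals the number of 1 bits.

def increment(bits):
    """Increment in place; return the actual number of bit flips."""
    flips = 0
    i = 0
    while i < len(bits) and bits[i] == 1:
        bits[i] = 0            # reset: paid by the credit banked on this bit
        flips += 1
        i += 1
    if i < len(bits):
        bits[i] = 1            # the single set per increment, charged 2
        flips += 1
    return flips

bits = [0] * 8
actual = amortized = 0
for _ in range(100):
    actual += increment(bits)
    amortized += 2
    assert amortized - actual == sum(bits)   # credit == number of 1 bits

print(actual, amortized)       # 197 actual flips vs 200 amortized
```

After 100 increments the counter holds 100 (binary 1100100, three 1 bits), so the actual cost is 200 - 3 = 197, safely below the amortized total of 2n = 200.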



Potential Method

➢ Like the accounting method: something prepaid is used later.

➢ It differs from the accounting method in two ways:

1. The prepaid work is viewed not as credit, but as "potential energy", or simply "potential".

2. The potential is associated with the data structure as a whole rather than with specific objects within the data structure.


➢ The amortized cost ci' of the ith operation with respect to a potential function ɸ is defined by:

    ci' = ci + ɸ(Di) - ɸ(Di-1)

➢ i.e. (actual cost + change in potential)

➢ Where:

➢ ci' is the amortized cost of the ith operation.

➢ ci is the actual cost of the ith operation.

➢ Di is the data structure after the ith operation.

➢ The potential function ɸ : Di → R maps each data structure Di to a real number.

➢ ɸ(Di) is called the potential of Di.


➢ The amortized cost of each operation is therefore its actual cost plus the increase in potential due to the operation.

➢ The total amortized cost of n operations is:

    ∑i=1..n ci' = ∑i=1..n [ci + ɸ(Di) - ɸ(Di-1)] = ∑i=1..n ci + ɸ(Dn) - ɸ(D0)



Potential Method
➢ The potential of a stack is the number of elements in the stack.

➢ So ɸ(D0) = 0 and ɸ(Di) >= 0.

➢ Push: potential change: ɸ(Di) - ɸ(Di-1) = (s+1) - s = 1

➢ Amortized cost: ci' = ci + ɸ(Di) - ɸ(Di-1) = 1 + 1 = 2

➢ ci' = O(1)

(Figure: Push(X) places X on top of the stack.)


➢ Pop: potential change: ɸ(Di) - ɸ(Di-1) = (s-1) - s = -1

➢ Amortized cost: ci' = ci + ɸ(Di) - ɸ(Di-1) = 1 + (-1) = 0

➢ ci' = O(1)

(Figure: Pop() removes X from the top of the stack.)


➢ Multipop(k): potential change: ɸ(Di) - ɸ(Di-1) = (s-k) - s = -k

➢ Amortized cost: ci' = ci + ɸ(Di) - ɸ(Di-1) = k + (-k) = 0

➢ ci' = O(1)

(Figure: Multipop(3) removes the top three elements, leaving B on top.)


Operation | Amortized cost by potential method
Push      | O(1)
Pop       | O(1)
Multipop  | O(1)

So the amortized cost of each operation is O(1), and the total amortized cost of n operations is O(n).
Since the total amortized cost is an upper bound on the actual cost, the worst-case cost of n operations is O(n), i.e. T(n) = O(n).
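The three potential-method calculations above can be reproduced mechanically: measure ɸ (the stack size) before and after each operation and add the difference to the actual cost. `amortized_costs` is an assumed helper name of my own.

```python
# Sketch of the potential method for the stack: ɸ = stack size, and each
# operation's amortized cost is actual cost + ɸ(after) - ɸ(before).

def amortized_costs(ops):
    stack = []
    out = []
    for op in ops:
        before = len(stack)                  # ɸ(D_{i-1})
        if op[0] == "push":
            stack.append(op[1])
            actual = 1
        elif op[0] == "pop":
            stack.pop()
            actual = 1
        else:                                # ("multipop", k)
            k = min(op[1], len(stack))
            if k:
                del stack[-k:]
            actual = k
        after = len(stack)                   # ɸ(D_i)
        out.append(actual + after - before)  # amortized cost of this op
    return out

ops = [("push", 1), ("push", 2), ("push", 3), ("pop",), ("multipop", 5)]
print(amortized_costs(ops))   # [2, 2, 2, 0, 0]
```

Every push comes out as 2 and every pop/multipop as 0, matching the per-operation derivations on the slides, regardless of how large an individual multipop is.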



Potential Method
➢ Example 2: Counting with a binary counter

➢ Define the potential of the counter after the ith Increment() operation to be bi, the number of 1s in the counter after the ith operation.

➢ Therefore ɸ(Di) = bi, and clearly ɸ(Di) ≥ 0.

Counter Value | A[2] A[1] A[0] | Actual Cost | ɸ(Di) | ɸ(Di-1) | Potential Difference | Amortized Cost
      0       |   0   0   0    |      0      |   0   |    0    |          0           |       0
      1       |   0   0   1    |      1      |   1   |    0    |          1           |       2
      2       |   0   1   0    |      2      |   1   |    1    |          0           |       2
      3       |   0   1   1    |      1      |   2   |    1    |          1           |       2


➢ Example 2: Counting with a binary counter

Counter Value | A[2] A[1] A[0] | Actual Cost | ɸ(Di) | ɸ(Di-1) | Potential Difference | Amortized Cost
      0       |   0   0   0    |      0      |   0   |    0    |          0           |       0
      1       |   0   0   1    |      1      |   1   |    0    |          1           |       2
      2       |   0   1   0    |      2      |   1   |    1    |          0           |       2
      3       |   0   1   1    |      1      |   2   |    1    |          1           |       2
      4       |   1   0   0    |      3      |   1   |    2    |         -1           |       2
      5       |   1   0   1    |      1      |   2   |    1    |          1           |       2
      6       |   1   1   0    |      2      |   2   |    2    |          0           |       2
      7       |   1   1   1    |      1      |   3   |    2    |          1           |       2
Total amortized cost: 14


➢ Example 2: Analysing the binary counter

The total amortized cost of 7 operations = 14 = 2*7.

The total amortized cost of n operations = 2*n.

Therefore, asymptotically, the total amortized cost of n operations is O(n).

Thus the worst-case cost is O(n).
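A small check of the potential argument: with ɸ equal to the number of 1 bits, every increment's amortized cost comes out to exactly 2, as in the table above. This sketch is mine, not from the slides.

```python
# Sketch verifying the potential method for the counter: amortized cost
# = actual flips + ɸ(after) - ɸ(before), with ɸ = number of 1 bits.

def increment(bits):
    """Increment in place; return the actual number of bit flips."""
    flips = 0
    i = 0
    while i < len(bits) and bits[i] == 1:
        bits[i] = 0
        flips += 1
        i += 1
    if i < len(bits):
        bits[i] = 1
        flips += 1
    return flips

bits = [0] * 8
for _ in range(50):
    phi_before = sum(bits)                   # ɸ(D_{i-1})
    actual = increment(bits)
    phi_after = sum(bits)                    # ɸ(D_i)
    assert actual + phi_after - phi_before == 2   # always exactly 2

print("all 50 increments had amortized cost 2")
```

An increment that flips t trailing 1s has actual cost t+1 and changes the potential by 1-t, so the amortized cost is always (t+1) + (1-t) = 2 as long as the counter does not overflow.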



Time-Space Trade-Off

A trade-off is a situation where one thing increases and another decreases. It is a way to solve a problem:
1. Either in less time, by using more space, or
2. In very little space, by spending more time.
➢ The best algorithm is one that solves the problem using less memory and also takes less time to generate the output.
➢ But in general, it is not always possible to achieve both of these goals at the same time.
➢ The most common case is an algorithm using a lookup table: the answers to some questions can be written down in advance for every possible value. One way of solving the problem is to write down the entire lookup table, which lets you find answers very quickly but uses a lot of space.
➢ Another way is to calculate each answer without writing anything down, which uses very little space but might take a long time.
➢ Therefore, the more time-efficient an algorithm is, the less space-efficient it tends to be.


Types of space-time trade-off:

1. Compressed or uncompressed data
2. Re-rendering or stored images
3. Smaller code or loop unrolling
4. Lookup tables or recalculation


1. Compressed or uncompressed data:

➢ A space-time trade-off can be applied to the problem of data storage.
➢ If data is stored uncompressed, it takes more space but less time.
➢ If the data is stored compressed, it takes less space but more time to run the decompression algorithm.
➢ There are many instances where it is possible to work directly with compressed data, as in the case of compressed bitmap indices, where it is faster to work with compression than without.


2. Re-rendering or stored images:

➢ Storing only the source and re-rendering it as an image each time takes less space but more time; storing the rendered image in a cache takes more memory but is faster than re-rendering.


3. Smaller code or loop unrolling:

➢ Smaller code occupies less space in memory but requires extra computation time for jumping back to the beginning of the loop at the end of each iteration.
➢ Loop unrolling can optimize execution speed at the cost of increased binary size: it occupies more space in memory but requires less computation time.


4. Lookup tables or recalculation:

➢ An implementation can include the entire lookup table, which reduces computing time but increases the amount of memory needed.
➢ Alternatively, it can recalculate, i.e., compute table entries as needed, increasing computing time but reducing memory requirements.


For example, consider the Fibonacci series. For recursive computation, the sequence F(n) of Fibonacci numbers is defined by the recurrence relation:

    F(n) = F(n-1) + F(n-2), where F(0) = 0 and F(1) = 1.

Time complexity of the recursive method: O(2^N)

Auxiliary space of the recursive method: O(1) (not counting the recursion stack)

The time complexity of the recursive implementation is exponential due to repeated calculation of the same subproblems.


For non-recursive computation of the Fibonacci series, to optimize the above approach, the idea is to use dynamic programming to reduce the complexity:

Time complexity of the non-recursive method: O(N)

Auxiliary space of the non-recursive method: O(N)

The time complexity of this implementation is linear, because it uses auxiliary space to store the overlapping subproblem states so they can be reused when required.
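The two sides of this trade-off can be sketched as follows; `fib_recursive` and `fib_dp` are assumed names for the two implementations discussed above.

```python
# Sketch of the Fibonacci time-space trade-off: the plain recursive version
# uses no table but recomputes subproblems (exponential time), while the
# dynamic-programming version stores every F(i) (linear time, linear space).

def fib_recursive(n):
    """O(2^n) time, no table: recomputes the same subproblems repeatedly."""
    if n < 2:
        return n
    return fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_dp(n):
    """O(n) time, O(n) auxiliary space: each F(i) is computed once, stored."""
    table = [0, 1] + [0] * max(0, n - 1)
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_recursive(20), fib_dp(20))   # both print 6765
```

Both return the same values; the DP version spends O(N) extra memory on the table to avoid the exponential blow-up in time.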



Randomized Algorithm

➢ A Randomized Algorithm is an algorithm that employs a degree of

randomness as part of its logic.

➢ The algorithm typically uses uniformly random bits as an auxiliary input

to guide its behavior, in the hope of achieving good performance in the


"average case" over all possible choices of random bits.

➢ Formally, the algorithm's performance will be a random variable

determined by the random bits; thus either the running time, or the
output (or both) are random variables.


➢ An algorithm that uses random numbers to decide what to do next anywhere in its logic is called a randomized algorithm.

➢ For example, in randomized quicksort, we use a random number to pick the next pivot (or we randomly shuffle the array).

➢ And in Karger's algorithm, we randomly pick an edge.

➢ If a randomized algorithm always returns the correct answer but its running time varies, it is called a Las Vegas algorithm.


➢ Some randomized algorithms have deterministic time complexity.

➢ For example, an implementation of Karger's algorithm has time complexity O(E). Such algorithms are called Monte Carlo algorithms and are easier to analyse for the worst case.

➢ On the other hand, the time complexity of other randomized algorithms (other than Monte Carlo) depends on the value of the random variable. Such randomized algorithms are called Las Vegas algorithms.

➢ These algorithms are typically analysed for the expected worst case.

➢ To compute the expected time taken in the worst case, all possible values of the random variable used must be considered, and the time taken for every possible value must be evaluated.

➢ The average of all evaluated times is the expected worst-case time complexity.


Randomized algorithm example: randomized quicksort

randQuickSort(arr[], low, high)
1. If low >= high, then EXIT.
2. While pivot 'x' is not a central pivot:
   (i)   Choose uniformly at random an index from [low..high]. Let the randomly picked index be x.
   (ii)  Count the elements in arr[low..high] that are smaller than arr[x]. Let this count be sc.
   (iii) Count the elements in arr[low..high] that are greater than arr[x]. Let this count be gc.
   (iv)  Let n = (high-low+1). If sc >= n/4 and gc >= n/4, then x is a central pivot.
3. Partition arr[low..high] around the pivot x.
4. Recur for the smaller elements: randQuickSort(arr, low, low+sc-1)
5. Recur for the greater elements: randQuickSort(arr, high-gc+1, high)
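A runnable sketch of randomized quicksort, simplified from the pseudocode above: instead of retrying until a central pivot is found, it simply picks a uniformly random pivot, which already gives O(n log n) expected time. The function name and partition scheme are my own choices, not from the slides.

```python
# Sketch of randomized quicksort with a uniformly random pivot
# and a standard Lomuto partition.

import random

def rand_quicksort(arr, low=0, high=None):
    if high is None:
        high = len(arr) - 1
    if low >= high:
        return
    x = random.randint(low, high)          # random pivot index
    arr[x], arr[high] = arr[high], arr[x]  # move pivot to the end
    pivot = arr[high]
    i = low
    for j in range(low, high):             # Lomuto partition
        if arr[j] < pivot:
            arr[i], arr[j] = arr[j], arr[i]
            i += 1
    arr[i], arr[high] = arr[high], arr[i]  # pivot into its final position
    rand_quicksort(arr, low, i - 1)        # recur on the smaller side
    rand_quicksort(arr, i + 1, high)       # recur on the greater side

data = [9, 3, 7, 1, 8, 2, 5]
rand_quicksort(data)
print(data)   # [1, 2, 3, 5, 7, 8, 9]
```

Because the pivot is random, no fixed input can force the quadratic worst case; the running time is a random variable, but the output is always correctly sorted (a Las Vegas algorithm).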
Approximate Algorithm

➢ An approximation algorithm is a way of dealing with NP-completeness for optimization problems.

➢ This technique does not guarantee the best solution.

➢ The goal of an approximation algorithm is to come as close as possible to the optimum value in a reasonable amount of time, which is at most polynomial time.

➢ Such algorithms are called approximation algorithms or heuristic algorithms.

➢ For the travelling salesperson problem, the optimization problem is to find the shortest cycle, and the approximation problem is to find a short cycle.

➢ For the vertex cover problem, the optimization problem is to find the vertex cover with the fewest vertices, and the approximation problem is to find a vertex cover with few vertices.


➢ Features of approximation algorithms:

➢ An approximation algorithm is guaranteed to run in polynomial time, though it does not guarantee the optimal solution.

➢ An approximation algorithm aims to find a high-accuracy, high-quality solution (say, within 1% of the optimum).

➢ Approximation algorithms are used to get an answer near the optimal solution of an optimization problem in polynomial time.


➢ Performance measure:

➢ Suppose we work on an optimization problem where every solution carries a cost. An approximation algorithm returns a legal solution, but the cost of that legal solution may not be optimal.

➢ For example, suppose we are looking for a minimum-size vertex cover (VC). An approximation algorithm returns a VC, but its size (cost) may not be minimal.



Embedded Algorithm

➢ Embedded means something that is attached to another thing.

➢ An embedded system can be thought of as a computer hardware system having


software embedded in it.
➢ An embedded system can be an independent system or it can be a part of a large
system.
➢ An embedded system is a microcontroller or microprocessor based system which is
designed to perform a specific task.
➢ For example, a fire alarm is an embedded system; it will sense only smoke.

➢ An embedded system has three components −

 It has hardware.

 It has application software.

 It has Real Time Operating system (RTOS)



➢ Embedded devices are capable of performing computations and handling inputs just like a computer. But unlike a general-purpose computer, they are designed for specific functionality only.

➢ Embedded algorithms are used to perform computations on embedded devices.



Embedded System Scheduling

➢ Scheduling is how processors and other resources are allocated to processes and

threads. Most of the time, embedded devices operate in real-time systems, so


meeting the deadline of the process is crucial.

➢ Each task is assigned some priority and task scheduling is done in strict order of

their priority.

➢ A higher-priority task is scheduled before a lower-priority task. A higher-priority task can even pre-empt a lower-priority task in execution.

➢ Broadly, scheduling algorithms can be classified as fixed priority algorithms and

dynamic priority algorithms.


 Fixed priority algorithms:

 Such an algorithm assigns priority to a task at design time and the priority of each

task remains constant throughout the lifetime of the device.

 Initially, the task would be in the waiting queue.

 When the task is ready, it is moved to the ready queue. The individual queue is

maintained for each priority level.

 The task with the highest priority is scheduled first.

 The task is moved to the exit, blocked, or waiting state according to its execution status.

 This is a kind of static algorithm, and it is easy to implement.


 Dynamic priority algorithms:

➢ In a dynamic priority algorithm, the priority of a task changes dynamically. The deadline of the task determines its priority. In real-time systems, meeting deadlines is essential.

➢ Initially, the task would be in the waiting queue.

➢ When the task is ready, it is moved into the ready queue.

➢ An individual queue is maintained for each priority level.



Sorting Algorithm for Embedded System
Embedded devices are resource constrained. Due to their small size, embedded devices
possess limited memory and computation capacity.
The sorting algorithm for embedded devices should have the following properties:
1. The sorting method should be in place
2. Algorithm should be non-recursive
3. Algorithm should be able to sort the data in reasonably less time.
➢ In-place algorithms require only a small amount of extra memory, i.e., constant or logarithmic in the size of the input data.
➢ A recursive algorithm may lead to deep call chains and hence large memory requirements. Due to stacking, recursive algorithms are memory-intensive and slower.

➢ Insertion sort uses the analogy of sorting cards in hand. It works the way people are used to sorting cards: one card at a time is removed from the deck and inserted at the correct location in the hand. Upcoming cards are processed in the same way.
➢ To insert a new card, all cards in the hand with a value larger than the new card are shifted one position to the right.
➢ The new card is inserted into the space created after moving those k cards to the right.
➢ Insertion sort is an in-place algorithm; it does not require extra memory. Sorting is done in the input array itself, and after iteration k the first k elements are always sorted.
➢ Running time is the number of steps required to solve the problem on the RAM model; each instruction may take a different amount of time.
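The card-sorting description above corresponds to the following in-place, non-recursive implementation (a sketch of my own, not the author's code), which satisfies all three embedded-friendly properties listed earlier.

```python
# Sketch of insertion sort: in place (no extra array) and non-recursive,
# matching the properties asked of an embedded-friendly sort.

def insertion_sort(arr):
    for k in range(1, len(arr)):
        card = arr[k]                 # the "new card" taken from the deck
        j = k - 1
        while j >= 0 and arr[j] > card:
            arr[j + 1] = arr[j]       # shift larger elements one to the right
            j -= 1
        arr[j + 1] = card             # insert into the space created
        # invariant: arr[0..k] is now sorted

data = [31, 41, 59, 26, 41, 58]
insertion_sort(data)
print(data)   # [26, 31, 41, 41, 58, 59]
```

The only extra storage is the single `card` variable and two indices, so the auxiliary space is O(1), and there are no recursive calls to grow a stack.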



Tractable problems

➢ Problems that can be solved in polynomial time are called tractable, or P class, problems.

➢ Ex. O(n), O(n²), O(n log n)

➢ Examples:
1. Calculating the greatest common divisor.
2. Searching and sorting algorithms.
3. Finding a maximum matching.
4. Decision versions of linear programming.

 Features of P Class Problems

 The solution to P problems is easy to find.

 P is the class of computational problems that are solvable and tractable. Tractable means the problems can be solved in theory as well as in practice; problems that can be solved in theory but not in practice are known as intractable.



Non tractable problems

➢ Non-tractable (NP class) problems: the NP stands for Non-deterministic Polynomial time. NP is the collection of decision problems that can be solved by a non-deterministic machine in polynomial time.

➢ Known deterministic algorithms for such problems typically take exponential time, e.g. O(2^n), O(3^n).

➢ Examples:

1. Travelling salesman problem

2. 0/1 knapsack problem


➢ Features of NP class problems:

➢ NP class problems are also called intractable problems.

➢ Solutions of NP class problems are hard to find, since they are solved by a non-deterministic machine, but the solutions are easy to verify.

➢ Problems in NP can be verified by a Turing machine in polynomial time.


➢ Consider a brute-force algorithm to solve a Sudoku puzzle.
➢ Every blank space has 9 options (1 to 9), and approximately 50 empty boxes have to be filled.
➢ So the complexity is about 9^50.

➢ So Sudoku is not known to be a P class problem.


➢ But if a solution is given and we just want to verify whether it is correct or incorrect, that can be done in polynomial time.
➢ So we can verify a solution to the problem in polynomial time, but we do not know how to solve it in polynomial time.



P Vs NP Class problems

➢ To understand the relation between the P and NP classes, consider the following cases:

1. If P == NP: every NP problem would be solvable in polynomial time, which has not been shown to be achievable.

2. If P != NP: some NP problems would not be solvable in polynomial time, which has also not been proven.

➢ So what is the relation between P and NP? It remains an open question; what is known is that P ⊆ NP, so either P is a proper subset of NP or the two classes coincide.

(Figure: Venn diagrams of the two possibilities: P strictly inside NP, and P = NP.)



Thank You…!