
INTRODUCTION

 An algorithm is a set of steps of operations to solve a problem,
performing calculation, data processing, and automated reasoning tasks.

 An algorithm is an effective method that can be expressed within a
finite amount of time and space.

 The important aspects of algorithm design include creating an efficient
algorithm that solves a problem using minimum time and space.

 To solve a problem, different approaches can be followed. Some of
them can be efficient with respect to time consumption, whereas other
approaches may be memory efficient.
PROPERTIES OF ALGORITHM
To evaluate an algorithm we have to satisfy the following criteria:

1. INPUT: The algorithm is given zero or more inputs.

2. OUTPUT: At least one quantity is produced. For each input, the algorithm
produces a value related to its specific task.

3. DEFINITENESS: Each instruction is clear and unambiguous.

4. FINITENESS: If we trace out the instructions of an algorithm, then for all
cases the algorithm terminates after a finite number of steps.

5. EFFECTIVENESS: Every instruction must be very basic, so that it can be
carried out, in principle, by a person using only pencil and paper.
ALGORITHM (CONTD…)

 A well-defined computational procedure that takes some value, or set
of values, as input and produces some value, or set of values, as output.

 Written in pseudo-code, which can be implemented in the language of
the programmer’s choice.

 PSEUDO-CODE: A notation resembling a simplified programming
language, used in program design.
How To Write an Algorithm

Version 1 (nested if):
Step-1: start
Step-2: Read a, b, c
Step-3: if a > b
            if a > c
                print a is largest
            else
                print c is largest
        else
            if b > c
                print b is largest
            else
                print c is largest
Step-4: stop

Version 2 (with go to):
Step-1: start
Step-2: Read a, b, c
Step-3: if a > b then go to step 4, otherwise go to step 5
Step-4: if a > c then print a is largest, otherwise print c is largest; go to step 6
Step-5: if b > c then print b is largest, otherwise print c is largest
Step-6: stop
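The step-wise algorithm above translates directly into C. A minimal sketch; the function name `largest` is ours, not part of the slides:

```c
#include <assert.h>

/* Returns the largest of a, b, c, mirroring the nested-if version. */
int largest(int a, int b, int c)
{
    if (a > b) {
        if (a > c)
            return a;   /* a > b and a > c */
        else
            return c;   /* a > b but c >= a */
    } else {
        if (b > c)
            return b;   /* b >= a and b > c */
        else
            return c;   /* c is at least as large as both */
    }
}
```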
Differences

Algorithm                               Program
1. Written at the design phase          1. Written at the implementation phase
2. Natural language                     2. Written in a programming language
3. The writer needs domain knowledge    3. The writer is a programmer
4. Analyzed                             4. Tested
ALGORITHM SPECIFICATION
An algorithm can be described (represented) in four ways.

1. Natural language like English:
When this way is chosen, care should be taken: we should ensure that
each and every statement is definite (no ambiguity).

2. Graphic representation called flowchart:
This method works well when the algorithm is small and simple.

3. Pseudo-code method:
In this method, we typically describe algorithms as programs that
resemble languages like Pascal and Algol (Algorithmic Language).

4. Programming language:
We use a programming language such as C, C++, or Java to write the
algorithm.
PSEUDO-CODE CONVENTIONS
1. Comments begin with // and continue until the end of the line.

2. Blocks are indicated with matching braces { and }.

3. An identifier begins with a letter. The data types of variables are not
explicitly declared.

4. There are two Boolean values, TRUE and FALSE.
Logical operators: AND, OR, NOT
Relational operators: <, <=, >, >=, =, !=

5. Assignment of values to variables is done using the assignment statement:
<variable> := <expression>;

6. Compound data types can be formed with records. Here is an example:

node = record
{
    data type – 1 data-1;
    .
    .
    data type – n data-n;
    node *link;
}

Here link is a pointer to the record type node. Individual data items of
a record can be accessed with → and period.
Contd…
7. The following looping statements are employed:
for, while, and repeat-until.

While loop:
while <condition> do
{
    <statement-1>
    ...
    <statement-n>
}

For loop:
for (initialization; comparison; increment or decrement) do
for variable := value1 to value2 step step do
{
    <statement-1>
    ...
    <statement-n>
}
Repeat-until:
repeat
    <statement-1>
    ...
    <statement-n>
until <condition>
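The three pseudo-code loop forms map onto C's while, for, and do-while. A minimal sketch summing 1..n each way; the function names are ours:

```c
/* while <condition> do { ... } */
int sum_while(int n)
{
    int s = 0, i = 1;
    while (i <= n) { s += i; i++; }
    return s;
}

/* for variable := value1 to value2 step 1 do { ... } */
int sum_for(int n)
{
    int s = 0;
    for (int i = 1; i <= n; i++)
        s += i;
    return s;
}

/* repeat ... until <condition> maps to do { ... } while (!<condition>);
   note the body always runs at least once, so this assumes n >= 1 */
int sum_repeat(int n)
{
    int s = 0, i = 1;
    do { s += i; i++; } while (i <= n);
    return s;
}
```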
8. A conditional statement has one of the following forms:

if <condition> then <statement>

if <condition> then <statement-1>
else <statement-2>

Case statement:
case
{
    :<condition-1>: <statement-1>
    .
    .
    :<condition-n>: <statement-n>
    :else: <statement-n+1>
}

9. Input and output are done using the instructions read and write. No
format is used to specify the size of input or output quantities.
Contd…
10. There is only one type of procedure: Algorithm. The heading takes the
form:
Algorithm Name(<parameter list>)

Consider an example; the following algorithm finds and returns the
maximum of n given numbers:

1. Algorithm Max(A, n)
2. // A is an array of size n
3. {
4.     Result := A[1];
5.     for i := 2 to n do
6.         if A[i] > Result then
7.             Result := A[i];
8.     return Result;
9. }
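Algorithm Max carries over to C almost line for line; a sketch using 0-based array indexing instead of the pseudo-code's 1-based A[1..n] (the name `max_of` is ours):

```c
/* C version of Algorithm Max(A, n): scans the array once,
   keeping the largest element seen so far. */
int max_of(const int A[], int n)
{
    int result = A[0];            /* Result := A[1]  */
    for (int i = 1; i < n; i++)   /* for i := 2 to n */
        if (A[i] > result)
            result = A[i];
    return result;
}
```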
Issues in the study of algorithms
1. How to create an algorithm.
2. How to validate an algorithm.
3. How to analyze an algorithm.
4. How to test a program.

1. How to create an algorithm: To create an algorithm we have the
following design techniques:
a) Divide & Conquer
b) Greedy method
c) Dynamic Programming
d) Branch & Bound
e) Backtracking
2. How to validate an algorithm: Once an algorithm is created, it is
necessary to show that it computes the correct output for all possible
legal inputs. This process is called algorithm validation.
3. How to analyze an algorithm: Analysis of an algorithm, or
performance analysis, refers to the task of determining how much
computing time and storage an algorithm requires.
a) Computing time – time complexity: frequency or step-count method.
b) Storage space – to calculate space complexity we use the number
of inputs used by the algorithm.
4. How to test a program: A program is an expression of the algorithm
in a programming language. To test a program we need the following:
a) Debugging: the process of executing programs on sample data sets
to determine whether faulty results occur and, if so, correcting them.
b) Profiling (performance measurement): the process of executing a
correct program on data sets and measuring the time and space it
takes to compute the results.
ANALYSIS OF ALGORITHM

A Priori Analysis                       A Posteriori Analysis
1. Done before running the              1. Done after running the
   algorithm on a specific system          algorithm on a system
2. Hardware independent                 2. Dependent on hardware
3. Approximate analysis                 3. Actual statistics of the
                                           algorithm
4. Based on the number of times         4. Based on measured running
   statements are executed                 time and space
PERFORMANCE ANALYSIS
Performance analysis: An algorithm is said to be efficient and fast if it
takes less time to execute and consumes less memory space at run time.

1. SPACE COMPLEXITY:
The space complexity of an algorithm is the amount of memory space
required by the algorithm during the course of its execution. There are
three components of space:
a) Instruction space: space for the executable program.
b) Data space: space required to store all constant and variable data.
c) Environment space: space required to store the environment
information needed to resume a suspended function.

2. TIME COMPLEXITY:
The time complexity of an algorithm is the total amount of time
required by the algorithm to complete its execution.
Space complexity
There are two types of space complexity:

a) Constant space complexity
b) Linear (variable) space complexity

1. Constant space complexity: a fixed amount of space is used for all
input values.

Example:
int square(int a)
{
    return a * a;
}
Here the algorithm requires a fixed amount of space for all input values.

2. Linear space complexity: the space needed by the algorithm grows
with the input size.
 Size of the variable ‘n’ = 1 word
 Array of n values = n words
 Loop variable = 1 word
 Sum variable = 1 word

Example:
int sum(int A[], int n)        // A → n words, n → 1 word
{
    int sum = 0, i;            // sum → 1 word, i → 1 word
    for (i = 0; i < n; i++)
        sum = sum + A[i];
    return sum;
}
Total: n + 3 words, so the space complexity is O(n).
Examples:
1. Algorithm sum(a, b, c)
{
    a := 10;      // a → 1 word
    b := 20;      // b → 1 word
    c := a + b;   // c → 1 word
}
S(P) = C + SP = 3 + 0 = 3 words, a constant, so the space is O(1).

2. Algorithm sum(a, n)
{
    total := 0;                 // total → 1 word
    for i := 1 to n do          // i → 1 word, n → 1 word
        total := total + a[i];  // a → n words
    return total;
}
S(P) = n + 3 words, so the space is O(n).
(Slide figures: Algorithm-1, Algorithm-2, and Algorithm-3 — a recursive procedure.)
1. Constant time complexity: if a program requires a fixed amount of
time for all input values, it has constant time complexity.

Example:
int sum(int a, int b)
{
    return a + b;
}

2. Linear time complexity: if the running time grows as the input values
increase, the time complexity changes with n. Step counts:
 comment = 0 steps
 assignment statement = 1 step
 condition statement = 1 step
 loop condition for n iterations = n + 1 steps
 body of the loop = n steps
TIME COMPLEXITY

The time T(P) taken by a program P is the sum of its compile time and
its run time (execution time).

Statement                    s/e   Frequency   Total
1. Algorithm Sum(a, n)        0       –          0
2. {                          0       –          0
3.     s := 0.0;              1       1          1
4.     for i := 1 to n do     1      n+1        n+1
5.         s := s + a[i];     1       n          n
6.     return s;              1       1          1
7. }                          0       –          0

Total                                          2n+3
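The 2n+3 total can be cross-checked by instrumenting the same loop in C. The function `sum_steps` and its counter are our own illustration, not part of the original slides:

```c
/* Instrumented version of Algorithm Sum(a, n): counts one step for
   the assignment s := 0.0, n+1 steps for the loop test, n steps for
   the loop body, and one step for the return — 2n+3 in total. */
long sum_steps(int n)
{
    long steps = 0;
    double s = 0.0; steps++;           /* s := 0.0          1 step   */
    for (int i = 1; i <= n; i++) {
        steps++;                       /* loop test, true   n times  */
        s = s + i; steps++;            /* s := s + a[i]     n steps  */
    }
    steps++;                           /* final (false) loop test    */
    steps++;                           /* return s          1 step   */
    (void)s;                           /* value unused; only counting */
    return steps;
}
```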
KINDS OF ANALYSIS

1. Worst case (usually):
• T(n) = maximum time of the algorithm on any input of size n.

2. Average case (sometimes):
• T(n) = expected time of the algorithm over all inputs of size n.
• Needs an assumption about the statistical distribution of inputs.

3. Best case:
• T(n) = minimum time of the algorithm on any input of size n.

COMPLEXITY:
Complexity refers to the rate at which the required storage or time
grows as a function of the problem size.
ASYMPTOTIC NOTATION

ASYMPTOTIC NOTATION: the mathematical way of representing time
complexity.
The notations we use to describe the asymptotic running time of an
algorithm are defined in terms of functions whose domains are the set
of natural numbers.

Definition: it is a way to describe the behavior of functions in the
limit, i.e., without bounds.
Asymptotic growth: the rate at which the function grows. The “growth
rate” is the complexity of the function, or the amount of resource it
takes to compute.

Growth rate  time + memory

The following asymptotic notations are mostly used to represent the
time complexity of algorithms:

1. Big oh (O) notation
2. Big omega (Ω) notation
3. Theta (Θ) notation
4. Little oh (o) notation
1. Big oh (O) notation
Big oh (O) notation: asymptotic “less than or equal” (bounds the growth
rate from above). This notation represents an upper bound on the
algorithm’s running time, so it is useful for calculating the maximum
amount of execution time.
Using Big-oh notation we calculate the worst-case time complexity.
Formula: f(n) <= c·g(n) for all n >= n0, with c > 0 and n0 >= 1.

Definition: let f(n) and g(n) be two non-negative functions. Then
f(n) = O(g(n)) if there exist positive constants c and n0 such that
f(n) <= c·g(n) for all n >= n0.
1. Big O-notation
 For a given function g(n), we denote by O(g(n)) the set of functions

O(g(n)) = { f(n) : there exist positive constants c and n0 such that
            0 <= f(n) <= c·g(n) for all n >= n0 }

 We use O-notation to give an asymptotic upper bound of a function,
to within a constant factor.
 f(n) = O(g(n)) means that there exists some constant c such that
f(n) is always <= c·g(n) for large enough n.
Examples
Example: f(n) = 3n + 2 and g(n) = n
Formula: f(n) <= c·g(n) for n >= n0, c > 0, n0 >= 1
Take c = 4: is 3n + 2 <= 4n?
n = 1: 5 <= 4, false
n = 2: 8 <= 8, true
For all n >= 2 with c = 4, f(n) <= c·g(n):
3n + 2 <= 4n for all n >= 2.
The condition is satisfied, so f(n) = O(n). Because O bounds the
maximum time the algorithm may take, it gives the worst-case complexity.
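The witness pair c = 4, n0 = 2 from the example can be checked mechanically; the helper name `big_oh_holds` is ours:

```c
/* Checks the Big-O witness inequality from the example:
   f(n) = 3n + 2 <= 4n = c*g(n) with c = 4, g(n) = n.
   Expected to fail below n0 = 2 and hold for every n >= 2. */
int big_oh_holds(int n)
{
    return 3 * n + 2 <= 4 * n;
}
```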
2. Ω-Omega notation
Ω-Omega notation: asymptotic “greater than or equal” (bounds the
growth rate from below). It represents a lower bound on the algorithm’s
running time.
Using Big Omega notation we calculate the minimum amount of time;
we can say that it gives the best-case time complexity.
Formula: f(n) >= c·g(n) for all n >= n0, with c > 0 and n0 >= 1,
where c is a constant and g(n) a function.
 Lower bound
 Best case
Ω-Omega notation
 For a given function g(n), we denote by Ω(g(n)) the set of functions

Ω(g(n)) = { f(n) : there exist positive constants c and n0 such that
            0 <= c·g(n) <= f(n) for all n >= n0 }

 We use Ω-notation to give an asymptotic lower bound on a function,
to within a constant factor.
 f(n) = Ω(g(n)) means that there exists some constant c such that
f(n) >= c·g(n) for large enough n.
Examples
Example: f(n) = 3n + 2
Formula: f(n) >= c·g(n) for n >= n0, c > 0, n0 >= 1
Take g(n) = n and c = 1: 3n + 2 >= 1·n.
n = 1: 5 >= 1, true, and the inequality holds for all n >= 1.
It means that f(n) = Ω(g(n)) = Ω(n).
3. Θ-Theta notation
Theta (Θ) notation: asymptotic “equality” (same growth rate). It
represents a tight bound on the algorithm’s running time, bounding it
above and below.
Using Theta notation we calculate the average amount of time, so it is
called the average-case time complexity of the algorithm.
Formula: c1·g(n) <= f(n) <= c2·g(n)

where c1 and c2 are constants and g(n) is a function.
 Average (tight) bound
Θ-Theta notation
 For a given function g(n), we denote by Θ(g(n)) the set of functions

Θ(g(n)) = { f(n) : there exist positive constants c1, c2, and n0 such that
            0 <= c1·g(n) <= f(n) <= c2·g(n) for all n >= n0 }

 A function f(n) belongs to the set Θ(g(n)) if there exist positive
constants c1 and c2 such that it can be “sandwiched” between c1·g(n)
and c2·g(n) for sufficiently large n.
 f(n) = Θ(g(n)) means that there exist some constants c1 and c2 such
that c1·g(n) <= f(n) <= c2·g(n) for large enough n.
Examples
Example: f(n) = 3n + 2, g(n) = n
Formula: c1·g(n) <= f(n) <= c2·g(n)
Take c1 = 1 and c2 = 4: 1·n <= 3n + 2 <= 4·n. Now put values of n:
n = 1: 1 <= 5 <= 4, false
n = 2: 2 <= 8 <= 8, true
n = 3: 3 <= 11 <= 12, true
For all n >= 2 the condition is satisfied, so f(n) = Θ(n).
4. Little oh (o) notation
Little o notation is used to describe an upper bound that cannot be
tight; in other words, a loose upper bound on f(n).
f(n) grows strictly slower than g(n).
Let f(n) and g(n) be functions mapping positive integers to positive
real numbers. We say that f(n) is o(g(n)) if for every real positive
constant c there exists an integer constant n0 >= 1 such that
0 <= f(n) < c·g(n) for all n >= n0.
Equivalently, f(n) = o(g(n)) means

lim (n→∞) f(n)/g(n) = 0.

Example on little o asymptotic notation:

1. If f(n) = n² and g(n) = n³, check whether f(n) = o(g(n)) or not.

Sol:
lim (n→∞) f(n)/g(n) = lim (n→∞) n²/n³ = lim (n→∞) 1/n = 0.

The result is 0, which satisfies the limit definition above, so we can
say that f(n) = o(g(n)).
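The limit can be illustrated numerically: the ratio f(n)/g(n) = 1/n keeps shrinking as n grows. A small sketch (the function name `ratio` is ours):

```c
/* For f(n) = n^2 and g(n) = n^3, the ratio f(n)/g(n) = 1/n
   shrinks toward 0 as n grows — the limit test for f(n) = o(g(n)). */
double ratio(double n)
{
    return (n * n) / (n * n * n);   /* simplifies to 1/n */
}
```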
Recurrence and different methods to solve recurrences
Recurrence equation: an equation that defines a sequence recursively.

T(n) = T(n-1) + n, n > 0   -------(1) (recurrence relation)
T(0) = 0                   -------(2) (initial condition)

There are three methods to solve a recurrence equation.
Substitution method
1. Forward substitution
Use the initial condition in the first term, then use each computed
value to find the next term.
Recurrence relation: T(n) = T(n-1) + n, initial condition T(0) = 0.

Sol.: T(n) = T(n-1) + n -----(1)

If n = 1:
T(1) = T(0) + 1 = 0 + 1
T(1) = 1 ----(2)

If n = 2:
T(2) = T(1) + 2 = 1 + 2
T(2) = 3 ----(3)

If n = 3:
T(3) = T(2) + 3 = 3 + 3 = 6
T(3) = 1 + 2 + 3 ----(4)

T(4) = 1 + 2 + 3 + 4 -------(5)
T(5) = 1 + 2 + 3 + 4 + 5 ------(6)

By observing the pattern,
T(n) = 1 + 2 + … + n = n(n+1)/2
     = (n² + n)/2
     = O(n²)

It can be difficult to spot the pattern, so this method is generally
not used.
Backward substitution method
2. Backward substitution
T(n) = T(n-1) + n -----(1)
Substituting n-1 for n:
T(n-1) = T(n-2) + (n-1) ----(2)
Put eqn (2) in eqn (1):
T(n) = T(n-2) + (n-1) + n ------(3)
Substituting n-2 for n:
T(n-2) = T(n-3) + (n-2) ----(4)
Put eqn (4) in eqn (3):
T(n) = T(n-3) + (n-2) + (n-1) + n ------(5)
|
|
After k substitutions:
T(n) = T(n-k) + (n-k+1) + (n-k+2) + ------ + n
Let k = n; then
T(n) = T(n-n) + (n-n+1) + (n-n+2) + ------ + n
T(n) = T(0) + 1 + 2 + 3 + ------ + n
T(n) = 0 + 1 + 2 + 3 + ------ + n
T(n) = n(n+1)/2
     = (n² + n)/2
     = O(n²)
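The closed form n(n+1)/2 derived above can be checked against the recurrence computed directly; a small sketch (both function names are ours):

```c
/* T(n) = T(n-1) + n with T(0) = 0, evaluated directly by recursion. */
long T(int n)
{
    return (n == 0) ? 0 : T(n - 1) + n;
}

/* The closed form obtained by backward substitution. */
long closed_form(int n)
{
    return (long)n * (n + 1) / 2;
}
```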
Master’s Method
For recurrences of the form T(n) = a·T(n/b) + θ(n^k · log^p n), with
a >= 1 and b > 1:

 Case 1: if log_b a > k, then T(n) = θ(n^(log_b a)).
 Case 2: if log_b a = k, then
   If p > -1 then θ(n^k · log^(p+1) n)
   If p = -1 then θ(n^k · log log n)
   If p < -1 then θ(n^k)
 Case 3: if log_b a < k, then
   If p >= 0 then θ(n^k · log^p n)
   If p < 0 then O(n^k)

Ex1. T(n) = 9T(n/3) + n
a = 9, b = 3, f(n) = θ(n¹ log⁰ n), k = 1, p = 0
log_b a = log_3 9 = 2
CASE 1: log_b a > k, so
T(n) = θ(n²).
Master’s Method
Ex2. T(n) = 2T(n/3) + 1
a = 2, b = 3, f(n) = θ(n⁰ log⁰ n), k = 0, p = 0
log_b a = log_3 2 ≈ 0.63
CASE 1: log_b a > k, so
T(n) = θ(n^(log_3 2)) ≈ θ(n^0.63).

(For comparison, T(n) = T(n/2) + 1 has a = 1, b = 2, k = 0, p = 0,
giving log_b a = 0 = k; CASE 2 with p > -1 yields
T(n) = θ(n^0 · log n) = θ(log n).)

Ex3. T(n) = 4T(n/2) + n³
a = 4, b = 2, f(n) = θ(n³ log⁰ n), k = 3, p = 0
log_b a = log_2 4 = 2 < k
CASE 3: log_b a < k,
and p >= 0, so T(n) = θ(n^k · log^p n)
T(n) = θ(n³).
Recursion tree method
T(n) = 4T(n/2) + n³
where n³ = cost of the root node
 T(n/2) = size of each sub-problem
 4 = number of sub-problems
Step 1: find the cost of each level
Step 2: find the depth of the tree
Step 3: find the number of leaves

There are three cases when solving examples; cases 1 and 2 are solved
by the three steps above.
Case 1: the cost of the root node dominates.
Case 2: the cost of the leaf nodes dominates.
Case 3: the cost of each level is the same.

For case 3 (cost of each level is the same):
Step 1: find the cost of each level
Step 2: find the depth of the tree
Step 3: find the number of levels
Step 4: total cost = cost of each level × number of levels

(Slide figures: recursion trees for cases 1, 2, and 3.)
Amortized Analysis
Amortized analysis means finding the average running time per
operation over a worst-case sequence of operations.
Amortized analysis can be used to show that the average cost of an
operation is small, if one averages over a sequence of operations,
even though a single operation within the sequence might be
expensive. In average-case analysis we take the average over all
inputs, but in amortized analysis we take a sequence of operations.
There are three techniques:
 Aggregate analysis: brute force.
 Accounting method: assign costs to each operation so that it is easy
to sum them up, while still ensuring that the result is accurate.
 Potential method: a more sophisticated version of the accounting
method.
Aggregate method
Show that a sequence of n operations takes T(n) time in total.
We can then say that the amortized cost per operation is T(n)/n.
This makes no distinction between operation types.
Stack operations:
• PUSH(S, x): O(1)
• POP(S): O(1)
• MULTIPOP(S, k):
while (not STACK-EMPTY(S) and k > 0)
{
    POP(S)
    k = k - 1
}
Consider a sequence of n PUSH, POP, and MULTIPOP operations.
The worst-case cost of a single MULTIPOP in the sequence is O(n),
since the stack size is at most n.
Thus a naive bound on the cost of the sequence is O(n²). Correct, but
not tight.
Aggregate method
For any value of n, any sequence of n PUSH, POP, and MULTIPOP
operations takes a total of O(n) time, because each object can be
popped at most once for each time it is pushed.
The average cost of an operation is O(n)/n = O(1).
In aggregate analysis, we assign the amortized cost of each operation
to be this average cost.
In this example, therefore, all three stack operations have an
amortized cost of O(1).
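The O(n) aggregate bound can be seen concretely with a toy stack that counts every elementary operation; the implementation below is our own illustration (fixed-capacity array, global counter), not from the slides:

```c
#define MAXN 1000

/* Toy stack for aggregate analysis: across any sequence starting from
   an empty stack, total pops can never exceed total pushes, so the
   whole sequence costs O(n) even though one MULTIPOP may cost O(n). */
static int stack[MAXN];
static int top = 0;        /* number of objects currently on the stack */
static int total_ops = 0;  /* elementary operations performed so far   */

void push(int x)     { stack[top++] = x; total_ops++; }
int  pop(void)       { total_ops++; return stack[--top]; }
void multipop(int k) { while (top > 0 && k-- > 0) pop(); }
```

After n pushes, even MULTIPOP with a huge k performs only n pops, so the whole sequence does at most 2n elementary operations.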
Accounting Method
(Slide figures: accounting-method example.)
The potential method
Same as the accounting method: something prepaid is used later.
Different from the accounting method: the prepaid work is treated not
as credit, but as “potential energy”, or “potential”.
The potential is associated with the data structure as a whole rather
than with specific objects within the data structure.

Stack example:
The potential of a stack is the number of objects in the stack.
So the amortized cost of each operation is O(1), and the total
amortized cost of n operations is O(n).
Since the total amortized cost is an upper bound on the actual cost,
the worst-case cost of n operations is O(n).