
Analysis and Design of Algorithms

BCS401

Module-1: Introduction

Contents
1. Introduction
1.1. What is an Algorithm?
1.2. Fundamentals of Algorithmic Problem Solving
2. Fundamentals of the Analysis of Algorithm Efficiency
2.1. Analysis Framework
2.2. Asymptotic Notations
 Big-Oh notation
 Omega notation
 Theta notation
 Little-oh notation
2.3 Basic asymptotic efficiency Classes
2.4 Mathematical analysis of non-recursive algorithm
2.5 Mathematical analysis of Recursive algorithm

3. Brute force design technique:


3.1. Selection sort and Bubble Sort
3.2. Sequential search and Brute Force String Matching.

ruksana banu [Assistant Professor CS&E] 1


ADA BCS401

Module-1: Introduction
1. Introduction
1.1. What is an Algorithm?
Algorithm: An algorithm is a finite sequence of unambiguous instructions for
solving a particular problem, i.e., for obtaining a required output for any
legitimate input in a finite amount of time.

Figure: The notion of an algorithm. A problem is solved by an algorithm; the computer takes the input, executes the algorithm, and produces the output.

The important points related to algorithm are:


1. The nonambiguity requirement for each step of an algorithm cannot be
compromised.
2. The range of inputs for which an algorithm works must be specified carefully.
3. The same algorithm can be represented in several different ways.
4. There may exist several algorithms for solving the same problem.
5. Algorithms for the same problem can be based on very different ideas and can
solve the problem with dramatically different speeds.

EUCLID’S ALGORITHM:
The greatest common divisor of two nonnegative, not-both-zero
integers m and n, denoted gcd(m, n), is defined as the largest integer that
divides both m and n evenly, i.e., with a remainder of zero. Euclid of
Alexandria (third century B.C.) outlined an algorithm for solving this
problem in one of the volumes of his Elements, most famous for its
systematic exposition of geometry. In modern terms, Euclid's algorithm is
based on repeatedly applying the equality
gcd(m, n) = gcd(n, m mod n),

where m mod n is the remainder of the division of m by n, until m mod n is
equal to 0. Since gcd(m, 0) = m (why?), the last value of m is also the
greatest common divisor of the initial m and n.
For example, gcd(60, 24) can be computed as follows:
gcd(60, 24) = gcd(24, 12) = gcd(12, 0) = 12.

(If you are not impressed by this algorithm, try finding the greatest common
divisor of larger numbers, such as those in Problem 6 in this section’s
exercises.)
Here is a more structured description of this algorithm:
Euclid’s algorithm for computing gcd(m, n)
Step 1 If n = 0, return the value of m as the answer and stop; otherwise, proceed to Step 2.
Step 2 Divide m by n and assign the value of the remainder to r.
Step 3 Assign the value of n to m and the value of r to n. Go to Step 1.
Alternatively, we can express the same algorithm in pseudocode:

ALGORITHM Euclid(m, n)
//Computes gcd(m, n) by Euclid's algorithm
//Input: Two nonnegative, not-both-zero integers m and n
//Output: Greatest common divisor of m and n
while n ≠ 0 do
    r ← m mod n
    m ← n
    n ← r
return m
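
Here is a direct, runnable C rendering of the pseudocode above (the function name and the surrounding program are illustrative, not part of the original notes):

#include <stdio.h>

/* Computes gcd(m, n) by Euclid's algorithm.
   Input: two nonnegative, not-both-zero integers m and n.
   Output: the greatest common divisor of m and n. */
unsigned int euclid_gcd(unsigned int m, unsigned int n) {
    while (n != 0) {
        unsigned int r = m % n;   /* r <- m mod n */
        m = n;                    /* m <- n */
        n = r;                    /* n <- r */
    }
    return m;
}

int main(void) {
    printf("gcd(60, 24) = %u\n", euclid_gcd(60, 24));   /* prints 12 */
    return 0;
}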

1.2. Fundamentals of Algorithmic Problem Solving


We now briefly discuss a sequence of steps one typically goes
through in designing and analyzing an algorithm.

 Understanding the Problem - From a practical perspective, the first thing
you need to do before designing an algorithm is to completely understand the
problem given. An input to an algorithm specifies an instance of the problem
the algorithm solves. It is very important to specify exactly the set of instances
the algorithm needs to handle.
 Ascertaining the Capabilities of the Computational Device - Once you
completely understand a problem, you need to ascertain the capabilities of the
computational device the algorithm is intended for, and select an appropriate
model of computation: sequential or parallel.

 Choosing between Exact and Approximate Problem Solving - The next
principal decision is to choose between solving the problem exactly and solving
it approximately, because there are important problems that simply cannot be
solved exactly for most of their instances, and some of the available algorithms
for solving a problem exactly can be unacceptably slow because of the
problem's intrinsic complexity.
 Algorithm Design Techniques - An algorithm design technique (or
“strategy” or “paradigm”) is a general approach to solving problems
algorithmically that is applicable to a variety of problems from different areas of
computing. They provide guidance for designing algorithms for new problems for
which there is no known satisfactory algorithm.

 Designing an Algorithm and Data Structures - One should pay close attention
to choosing data structures appropriate for the operations performed by the algorithm.
For example, the sieve of Eratosthenes would run longer if we used a linked list
instead of an array in its implementation. Algorithms + Data Structures =
Programs.
 Methods of Specifying an Algorithm - Once you have designed an algorithm,
you need to specify it in some fashion. Natural language and pseudocode are the
two options most widely used nowadays for specifying algorithms. Using a
natural language has an obvious appeal; however, the inherent ambiguity of any
natural language makes a concise and clear description of algorithms surprisingly
difficult. Pseudocode is a mixture of a natural language and
programming-language-like constructs. Pseudocode is usually more precise than
natural language, and its usage often yields more succinct algorithm descriptions.
 Proving an Algorithm’s Correctness - Once an algorithm has been
specified, you have to prove its correctness. That is, you have to prove that the
algorithm yields a required result for every legitimate input in a finite amount
of time. For some algorithms, a proof of correctness is quite easy; for others, it
can be quite complex. A common technique for proving correctness is to use
mathematical induction because an algorithm’s iterations provide a natural
sequence of steps needed for such proofs.
 Analyzing an Algorithm - After correctness, by far the most important is
efficiency. In fact, there are two kinds of algorithm efficiency: time efficiency,
indicating how fast the algorithm runs, and space efficiency, indicating how
much extra memory it uses. Another desirable characteristic of an algorithm is
simplicity. Unlike efficiency, which can be precisely defined and investigated
with mathematical rigor, simplicity, like beauty, is to a considerable degree in
the eye of the beholder.
 Coding an Algorithm - Most algorithms are destined to be ultimately
implemented as computer programs. Implementing an algorithm correctly is
necessary but not sufficient: you would not like to diminish your algorithm’s
power by an inefficient implementation. Modern compilers do provide a
certain safety net in this regard, especially when they are used in their code
optimization mode.

2. Fundamentals of the Analysis of Algorithm Efficiency


2.1. Analysis Framework
Measuring an Input’s Size
It is observed that almost all algorithms run longer on larger inputs. For example,
it takes longer to sort larger arrays, multiply larger matrices, and so on. Therefore, it
is logical to investigate an algorithm's efficiency as a function of some parameter n
indicating the algorithm's input size.
There are situations where the choice of a parameter indicating an input size does
matter. The choice of an appropriate size metric can be influenced by operations of
the algorithm in question. For example, how should we measure an input's size for a
spell-checking algorithm? If the algorithm examines individual characters of its
input, then we should measure the size by the number of characters; if it works by
processing words, we should count their number in the input.
We should make a special note about measuring the size of inputs for algorithms
involving properties of numbers (e.g., checking whether a given integer n is
prime). For such algorithms, computer scientists prefer measuring size by the
number b of bits in n's binary representation: b = ⌊log₂ n⌋ + 1. This metric usually
gives a better idea about the efficiency of the algorithms in question.

Units for Measuring Running Time


To measure an algorithm's efficiency, we would like to have a metric that does not
depend on extraneous factors such as the speed of a particular computer or the
quality of a compiler. One possible approach is to count the number of times each
of the algorithm's operations is executed. This approach is both excessively
difficult and, as we shall see, usually unnecessary. The thing to do is to identify
the most important operation of the algorithm, called the basic operation, the
operation contributing the most to the total running time, and compute the
number of times the basic operation is executed.
For example, most sorting algorithms work by comparing elements (keys) of a list
being sorted with each other; for such algorithms, the basic operation is a key
comparison.
As another example, algorithms for matrix multiplication and polynomial
evaluation require two arithmetic operations: multiplication and addition.
Let c_op be the execution time of an algorithm's basic operation on a particular
computer, and let C(n) be the number of times this operation needs to be executed
for this algorithm. Then we can estimate the running time T(n) of a program
implementing this algorithm on that computer by the formula:

T(n) ≈ c_op · C(n)
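
As an illustration of how this formula is used (a standard textbook example; the count C(n) = (1/2)n(n − 1) is assumed here, not taken from these notes), we can ask how much longer the algorithm will run if we double its input size:

T(2n)/T(n) ≈ [c_op · C(2n)] / [c_op · C(n)] = [(1/2)(2n)(2n − 1)] / [(1/2)n(n − 1)] ≈ 4.

Note that c_op cancels out, so the answer does not depend on the machine; for large n, only the leading n² term of the count matters.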

Unless n is extremely large or very small, the formula can give a reasonable estimate
of the algorithm's running time.
It is for these reasons that the efficiency analysis framework ignores multiplicative
constants and concentrates on the count's order of growth to within a
constant multiple for large-size inputs.

Orders of Growth
Why this emphasis on the count's order of growth for large input sizes? Because for
large values of n, it is the function's order of growth that counts: just look at table
which contains values of a few functions particularly important for analysis of
algorithms.
Table: Values of several functions important for analysis of algorithms

n     log₂n   n     n·log₂n   n²    n³    2ⁿ         n!
10    3.3     10    3.3·10¹   10²   10³   10³        3.6·10⁶
10²   6.6     10²   6.6·10²   10⁴   10⁶   1.3·10³⁰   9.3·10¹⁵⁷

Algorithms that require an exponential number of operations are practical for solving
only problems of very small sizes.

Performance Analysis
There are two kinds of efficiency: time efficiency and space efficiency.
● Time efficiency indicates how fast an algorithm in question runs;
● Space efficiency deals with the extra space the algorithm requires.
In the early days of electronic computing, both resources, time and space, were at a
premium. Research experience has shown that for most problems we can achieve
much more spectacular progress in speed than in space. Therefore, we primarily
concentrate on time efficiency.
Space complexity
The total amount of computer memory required by an algorithm to complete its
execution is called the space complexity of that algorithm. The space required by an
algorithm is the sum of the following components:
● A fixed part that is independent of the input and output. This includes
memory space for code, variables, constants and so on.

● A variable part that depends on the input, output and recursion stack.
(We call these parameters the instance characteristics.)
The space requirement S(P) of an algorithm P is S(P) = c + Sp, where c is a constant
that depends on the fixed part and Sp is the variable part determined by the
instance characteristics.

Example-1: Consider the following algorithm abc(), which computes a simple
arithmetic expression of three values a, b and c. Here the fixed component depends
only on the sizes of a, b and c, and the instance characteristics contribute nothing:
Sp = 0.
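
The body of abc() is not reproduced in these notes; the following C sketch is one common textbook version, so the exact expression is an assumption. It illustrates the point: all space used is fixed, hence Sp = 0.

/* A hypothetical reconstruction of the textbook abc() example. */
float abc(float a, float b, float c) {
    /* Uses only the three parameters and a fixed-size result: Sp = 0 */
    return a + b + b * c + (a + b - c) / (a + b) + 4.0f;
}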
Example-2: Let us consider the algorithm to find the sum of an array. For the
algorithm given here, the problem instances are characterized by n, the number of
elements to be summed. The space needed by a[ ] depends on n, so the space
complexity can be written as Ssum(n) ≥ (n + 3): n locations for a[ ], and one each
for n, i and s.
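
A minimal C sketch of the summing algorithm referred to above (names are illustrative):

/* Returns the sum of a[0..n-1].
   Space: n words for a[], plus one word each for n, i and s. */
float sum(float a[], int n) {
    float s = 0.0f;
    for (int i = 0; i < n; i++)
        s += a[i];
    return s;
}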

Time complexity
Usually, the execution time or run-time of a program is referred to as its time
complexity, denoted tp (instance characteristics). This is the sum of the time taken
to execute all instructions in the program. Exact estimation of the runtime is a
complex task, as the number of instructions executed depends on the input data,
and different instructions take different times to execute. So, for the estimation of
the time complexity, we count only the number of program steps. We can determine
the number of steps needed by a program to solve a particular problem instance in two ways.
Method-1: We introduce a new variable, count, into the program, initialized to
zero. We also introduce statements into the program that increment count by the
appropriate amount. So, each time the original program executes, count is
incremented by the step count.
Example: Consider the algorithm sum(). After the introduction of the count
statements, the program will be as follows. We can estimate that an invocation of
sum() executes a total of 2n + 3 steps.
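
The instrumented program is not reproduced in these notes; the following C sketch shows one way the count statements can be inserted so that the total comes to 2n + 3:

/* sum() instrumented with a step counter, as in Method-1. */
float sum_counted(float a[], int n, int *count) {
    float s = 0.0f; (*count)++;     /* 1 step: the assignment s = 0 */
    for (int i = 0; i < n; i++) {
        (*count)++;                 /* n steps: loop test while it is true */
        s += a[i]; (*count)++;      /* n steps: the additions */
    }
    (*count)++;                     /* 1 step: the final (false) loop test */
    (*count)++;                     /* 1 step: the return */
    return s;                       /* total: 2n + 3 steps */
}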

Method-2: Determine the step count of an algorithm by building a table in which we
list the total number of steps contributed by each statement. An example is shown
below for matrix addition; the same tabulation can be built for the code that finds
the sum of n numbers.
Example: Matrix addition
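
The step-count table itself is not reproduced in these notes; the following C sketch records the usual per-statement counts for adding two m × n matrices as comments (the counting conventions are the standard textbook ones):

/* Step counts (per statement):
     outer loop header:  m + 1   tests
     inner loop header:  m(n + 1) tests
     assignment:         m * n   executions
   Total: 2mn + 2m + 1 steps. */
void mat_add(int m, int n, const double a[m][n], const double b[m][n], double c[m][n]) {
    for (int i = 0; i < m; i++)
        for (int j = 0; j < n; j++)
            c[i][j] = a[i][j] + b[i][j];
}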

The above method is both excessively difficult and usually unnecessary. The practical
thing to do is to identify the most important operation of the algorithm, called the
basic operation, the operation contributing the most to the total running time, and
compute the number of times the basic operation is executed.

Trade-off
There is often a time-space trade-off involved in a problem; that is, the problem
cannot be solved with both little computing time and low memory consumption.
One has to make a compromise and exchange computing time for memory
consumption or vice versa, depending on which algorithm one chooses and how
one parameterizes it.
2.2. Asymptotic Notations
The efficiency analysis framework concentrates on the order of growth of an algorithm's
basic operation count as the principal indicator of the algorithm's efficiency. To compare
and rank such orders of growth, computer scientists use four notations: O (big oh),
Ω (big omega), Θ (big theta) and o (little oh).

Big-Oh notation
Definition: A function t(n) is said to be in O(g(n)), denoted t(n) ∈ O(g(n)), if t(n)
is bounded above by some constant multiple of g(n) for all large n, i.e., if there
exist some positive constant c and some nonnegative integer n0 such that

t(n) ≤ c·g(n) for all n ≥ n0.

Informally, O(g(n)) is the set of all functions with a lower or same order of
growth as g(n). Note that the definition gives us a lot of freedom in choosing
specific values for the constants c and n0.

Examples: n ∈ O(n²), 100n + 5 ∈ O(n²), (1/2)n(n − 1) ∈ O(n²);
n³ ∉ O(n²), 0.00001n³ ∉ O(n²), n⁴ + n + 1 ∉ O(n²).

Strategies to prove Big-O: Sometimes the easiest way to prove that f(n) =
O(g(n)) is to take c to be the sum of the positive coefficients of f(n). We can
usually ignore the negative coefficients.

Example: To prove 100n + 5 ∈ O(n²): 100n + 5 ≤ 100n² + 5n² = 105n² for all n ≥ 1
(c = 105, n0 = 1).
Example: To prove n² + n ∈ O(n³): take c = 1 + 1 = 2; if n ≥ n0 = 1, then
n² + n ≤ 2n³, so n² + n ∈ O(n³).

Omega notation
Definition: A function t(n) is said to be in Ω(g(n)), denoted t(n) ∈ Ω(g(n)), if t(n)
is bounded below by some positive constant multiple of g(n) for all large n, i.e.,
if there exist some positive constant c and some nonnegative integer n0 such that

t(n) ≥ c·g(n) for all n ≥ n0.

Here is an example of the formal proof that n³ ∈ Ω(n²): n³ ≥ n² for
all n ≥ 0, i.e., we can select c = 1 and n0 = 0.

Example: To prove n³ + 4n² ∈ Ω(n²):
We see that, if n ≥ 0, then n³ + 4n² ≥ 4n² ≥ n². Therefore
n³ + 4n² ≥ 1·n² for all n ≥ 0. Thus we have shown
that n³ + 4n² ∈ Ω(n²), where c = 1 and n0 = 0.

Theta notation
Definition: A function t(n) is said to be in Θ(g(n)), denoted t(n) ∈ Θ(g(n)), if t(n)
is bounded both above and below by some positive constant multiples of g(n)
for all large n, i.e., if there exist some positive constants c1 and c2 and some
nonnegative integer n0 such that

c2·g(n) ≤ t(n) ≤ c1·g(n) for all n ≥ n0.

Example: n² + 5n + 7 ∈ Θ(n²). (For all n ≥ 1, 1·n² ≤ n² + 5n + 7 ≤ 13n², so we can
take c2 = 1, c1 = 13 and n0 = 1.)

Strategies for Ω and Θ

● Proving that f(n) = Ω(g(n)) often requires more thought.
– Quite often, we have to pick c < 1.
– A good strategy is to pick a value of c which you think will
work, and then determine which value of n0 is needed.
– Being able to do a little algebra helps.
– We can sometimes simplify by ignoring the terms of f(n) with
positive coefficients.

● The following theorem shows us that proving f(n) = Θ(g(n)) is nothing new:
Theorem: f(n) = Θ(g(n)) if and only if f(n) = O(g(n)) and f(n) = Ω(g(n)).
Thus, we just apply the previous two strategies.

Theorem: If t1(n) ∈ O(g1(n)) and t2(n) ∈ O(g2(n)), then t1(n) + t2(n) ∈ O(max{g1(n),
g2(n)}). (The analogous assertions are true for the Ω and Θ notations as well.)
Proof: The proof extends to orders of growth the following simple fact about four
arbitrary real numbers a1, b1, a2, b2: if a1 ≤ b1 and a2 ≤ b2, then a1 + a2 ≤ 2 max{b1, b2}.
Since t1(n) ∈ O(g1(n)), there exist some positive constant c1 and some nonnegative
integer n1 such that t1(n) ≤ c1·g1(n) for all n ≥ n1.
Similarly, since t2(n) ∈ O(g2(n)), t2(n) ≤ c2·g2(n) for all n ≥ n2.
Let us denote c3 = max{c1, c2} and consider n ≥ max{n1, n2} so that
we can use both inequalities. Adding them yields the following:

t1(n) + t2(n) ≤ c1·g1(n) + c2·g2(n) ≤ c3·g1(n) + c3·g2(n) = c3[g1(n) + g2(n)] ≤ 2c3·max{g1(n), g2(n)}.

Hence, t1(n) + t2(n) ∈ O(max{g1(n), g2(n)}), with the constants c and n0
required by the O definition being 2c3 = 2 max{c1, c2} and max{n1, n2},
respectively.

Little-oh notation
The function f(n) = o(g(n)) [i.e., f of n is little-oh of g of n] if and only if

lim (n→∞) f(n)/g(n) = 0.

This limit is used for comparing orders of growth: if the limit equals 0 (case 1),
f(n) has a smaller order of growth than g(n), and we represent this by little-oh.
Example: 100n + 5 = o(n²), since lim (n→∞) (100n + 5)/n² = 0.

2.3. Basic asymptotic efficiency Classes

Class    Name          Comments
1        constant      Best-case efficiency of many algorithms; very few reasonable examples
log n    logarithmic   Typically results from cutting a problem's size by a constant factor on each iteration
n        linear        Algorithms that scan a list of size n, e.g., sequential search
n log n  linearithmic  Many divide-and-conquer algorithms, e.g., mergesort, fall into this class
n²       quadratic     Typically two embedded loops, e.g., elementary sorting algorithms
n³       cubic         Typically three embedded loops, e.g., the classic matrix multiplication
2ⁿ       exponential   Typical for algorithms that generate all subsets of an n-element set
n!       factorial     Typical for algorithms that generate all permutations of an n-element set

2.4. Mathematical Analysis of Non-recursive & Recursive Algorithms


Analysis of Non-recursive Algorithms

General Plan for Analyzing the Time Efficiency of Nonrecursive Algorithms


1. Decide on a parameter (or parameters) indicating an input’s size.
2. Identify the algorithm's basic operation. (As a rule, it is located in the
innermost loop.)
3. Check whether the number of times the basic operation is executed depends
only on the size of an input. If it also depends on some additional property,
the worst-case, average-case, and, if necessary, best-case efficiencies have to
be investigated separately.
4. Set up a sum expressing the number of times the algorithm's basic operation
is executed.
5. Using standard formulas and rules of sum manipulation, either find a
closed-form formula for the count or, at the very least, establish its order of growth.
Example-1: To find the maximum element in a given array

Here comparison is the basic operation. Note that the number of comparisons will
be the same for all arrays of size n; therefore, there is no need to distinguish the
worst, best and average cases. The total number of basic operations is
C(n) = Σ (i=1 to n−1) 1 = n − 1 ∈ Θ(n).
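
A minimal C sketch of this algorithm (commonly called MaxElement; the C names are illustrative):

/* Returns the largest element of a[0..n-1]; assumes n >= 1. */
int max_element(const int a[], int n) {
    int maxval = a[0];
    for (int i = 1; i < n; i++)
        if (a[i] > maxval)      /* basic operation: executed exactly n - 1 times */
            maxval = a[i];
    return maxval;
}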

Example-2: To check whether all the elements in the given array are distinct

Here the basic operation is comparison. The maximum number of comparisons
happens in the worst case, i.e., when all the elements in the array are distinct and
the algorithm returns true.
The total number of basic operations (comparisons) in the worst case is
C_worst(n) = Σ (i=0 to n−2) Σ (j=i+1 to n−1) 1 = n(n − 1)/2 ∈ Θ(n²).
Other than the worst case, the total number of comparisons is less than n(n − 1)/2.
For example, if the first two elements of the array are equal, only one comparison
is computed. So in general C(n) = O(n²).
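
A minimal C sketch of this algorithm (commonly called UniqueElements; names are illustrative):

#include <stdbool.h>

/* Returns true if all elements of a[0..n-1] are distinct. */
bool unique_elements(const int a[], int n) {
    for (int i = 0; i <= n - 2; i++)
        for (int j = i + 1; j <= n - 1; j++)
            if (a[i] == a[j])   /* basic operation: key comparison */
                return false;   /* an equal pair: not all distinct */
    return true;
}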

Example-3: To perform matrix multiplication

The number of basic operations (multiplications) is
M(n) = Σ (i=0 to n−1) Σ (j=0 to n−1) Σ (k=0 to n−1) 1 = n³.

Total running time: T(n) ≈ c_m · M(n) = c_m · n³, where c_m is the execution time
of one multiplication.

Suppose we also take additions into account; the algorithm has the same number
of additions, A(n) = n³.

Total running time: T(n) ≈ c_m · M(n) + c_a · A(n) = (c_m + c_a) · n³.
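
A minimal C sketch of the three-loop matrix multiplication being analyzed (C99; names are illustrative):

/* C = A * B for n-by-n matrices. The innermost statement performs one
   multiplication and one addition, and executes n * n * n times. */
void mat_mul(int n, const double a[n][n], const double b[n][n], double c[n][n]) {
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++) {
            c[i][j] = 0.0;
            for (int k = 0; k < n; k++)
                c[i][j] += a[i][k] * b[k][j];
        }
}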

Example-4: To count the bits in the binary representation of a positive decimal integer n

The basic operation is count = count + 1, which repeats ⌊log₂ n⌋ times, so C(n) ∈ Θ(log n).
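
A minimal C sketch of the loop being analyzed (names are illustrative):

/* Counts the bits in the binary representation of n >= 1. */
int bit_count(unsigned int n) {
    int count = 1;
    while (n > 1) {
        count = count + 1;   /* basic operation: repeats floor(log2 n) times */
        n = n / 2;
    }
    return count;
}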


2.5. Mathematical Analysis of Recursive Algorithms

General plan for analyzing the time efficiency of recursive algorithms

1. Decide on a parameter (or parameters) indicating an input's size.
2. Identify the algorithm's basic operation.
3. Check whether the number of times the basic operation is executed can vary on
different inputs of the same size; if it can, the worst-case, average-case, and best-case
efficiencies must be investigated separately.
4. Set up a recurrence relation, with an appropriate initial condition, for the number
of times the basic operation is executed.
5. Solve the recurrence or, at least, ascertain the order of growth of its solution.

Example-1: Compute the factorial function F(n) = n! for an arbitrary nonnegative integer n.

Since the function F(n) is computed according to the formula
F(n) = F(n − 1) · n for n > 0, with F(0) = 1,
the number of multiplications M(n) needed to compute it must satisfy the equality
M(n) = M(n − 1) + 1 for n > 0.
Such equations are called recurrence relations.
The condition that makes the algorithm stop is: if n = 0, return 1. Thus the recurrence
relation and initial condition for the algorithm's number of multiplications
M(n) can be stated as
M(n) = M(n − 1) + 1 for n > 0, M(0) = 0.
We can use the backward substitutions method to solve this:
M(n) = M(n − 1) + 1 = [M(n − 2) + 1] + 1 = M(n − 2) + 2 = ... = M(n − i) + i = ... = M(0) + n = n.
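
A minimal C sketch of the recursive factorial algorithm (names are illustrative):

/* Computes F(n) = n! recursively. */
unsigned long long factorial(unsigned int n) {
    if (n == 0)
        return 1;                  /* initial condition: F(0) = 1, no multiplication */
    return factorial(n - 1) * n;   /* one multiplication per call: M(n) = M(n-1) + 1 */
}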

Example-2: Tower of Hanoi puzzle. In this puzzle, there are n disks of different
sizes that can slide onto any of three pegs. Initially, all the disks are on the first peg in
order of size, the largest on the bottom and the smallest on top. The goal is to move all
the disks to the third peg, using the second one as an auxiliary, if necessary. We can
move only one disk at a time, and it is forbidden to place a larger disk on top of a
smaller one. The problem has an elegant recursive solution, which is illustrated in the
figure.

1. If n = 1, we move the single disk directly from the source peg to the destination
peg.
2. To move n>1 disks from peg 1 to peg 3 (with peg 2 as auxiliary),
o we first move recursively n-1 disks from peg 1 to peg 2 (with peg 3 as
auxiliary),
o then move the largest disk directly from peg 1 to peg 3, and,
o finally, move recursively n-1 disks from peg 2 to peg 3 (using peg 1 as
auxiliary).

Figure: Recursive solution to the Tower of Hanoi puzzle

Algorithm: TowerOfHanoi(n, source, dest, aux)

if n == 1 then
    move disk from source to dest
else
    TowerOfHanoi(n − 1, source, aux, dest)
    move disk from source to dest
    TowerOfHanoi(n − 1, aux, dest, source)
end if
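
A runnable C version of this pseudocode (peg names are passed as characters; the function name is illustrative):

#include <stdio.h>

void tower_of_hanoi(int n, char source, char dest, char aux) {
    if (n == 1) {
        printf("Move disk 1 from %c to %c\n", source, dest);
        return;
    }
    tower_of_hanoi(n - 1, source, aux, dest);   /* clear n-1 disks onto the auxiliary peg */
    printf("Move disk %d from %c to %c\n", n, source, dest);
    tower_of_hanoi(n - 1, aux, dest, source);   /* move them back on top of the largest disk */
}

For example, tower_of_hanoi(3, '1', '3', '2') prints the 2^3 − 1 = 7 moves.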

Computation of Number of Moves

The number of moves M(n) depends only on n. Counting one move for the largest
disk and M(n − 1) moves for each of the two recursive calls, we have the following
recurrence relation for the number of moves M(n):
M(n) = 2M(n − 1) + 1 for n > 1, with the initial condition M(1) = 1.
We solve this recurrence by the same method of backward substitutions:
M(n) = 2M(n − 1) + 1 = 2[2M(n − 2) + 1] + 1 = 2^2 M(n − 2) + 2 + 1
     = 2^2 [2M(n − 3) + 1] + 2 + 1 = 2^3 M(n − 3) + 2^2 + 2 + 1.
The pattern of the first three sums on the left suggests that the next one will be
2^4 M(n − 4) + 2^3 + 2^2 + 2 + 1, and generally, after i substitutions, we get
M(n) = 2^i M(n − i) + 2^(i−1) + ... + 2 + 1 = 2^i M(n − i) + 2^i − 1.
Since the initial condition is specified for n = 1, which is achieved for i = n − 1,
we get the following formula for the solution to the recurrence:
M(n) = 2^(n−1) M(1) + 2^(n−1) − 1 = 2^(n−1) + 2^(n−1) − 1 = 2^n − 1.

Example-3: To count the bits of a decimal number in its binary representation (recursive version)

The recurrence relation for the number of additions A(n) can be written as
A(n) = A(⌊n/2⌋) + 1 for n > 1.
Also note that A(1) = 0.

The standard approach to solving such a recurrence is to solve it only for n = 2^k
and then take advantage of the theorem called the smoothness rule, which claims
that under very broad assumptions the order of growth observed for n = 2^k gives
a correct answer about the order of growth for all values of n. Solving
A(2^k) = A(2^(k−1)) + 1 with A(2^0) = 0 by backward substitutions yields
A(2^k) = k, i.e., A(n) = log₂ n ∈ Θ(log n).

3. Brute force design technique
Brute force is a straightforward approach to solving a problem, usually
directly based on the problem statement and definitions of the concepts
involved.

• Selection sort

We start selection sort by scanning the entire given list to find its smallest element
and exchange it with the first element, putting the smallest element in its final position in the
sorted list. Then we scan the list starting with the second element, putting the second
smallest element in its final position. Generally, on the ith pass through the list, which we
number from 0 to n − 2, the algorithm searches for the smallest item among the last n − i
elements and swaps it with A_i:

A_0 ≤ A_1 ≤ ... ≤ A_(i−1) | A_i, ..., A_min, ..., A_(n−1)
(in their final positions)    (the last n − i elements)

After n − 1 passes, the list is sorted.

The number of times the basic operation (the key comparison) is executed depends
only on the array's size and is given by
C(n) = Σ (i=0 to n−2) Σ (j=i+1 to n−1) 1.
After solving using summation formulas, C(n) = n(n − 1)/2.
Thus selection sort has a Θ(n²) time complexity.
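
A minimal C sketch of selection sort as described above (names are illustrative):

void selection_sort(int a[], int n) {
    for (int i = 0; i <= n - 2; i++) {
        int min = i;
        for (int j = i + 1; j <= n - 1; j++)
            if (a[j] < a[min])   /* basic operation: key comparison */
                min = j;
        int tmp = a[i];          /* swap the smallest remaining element into position i */
        a[i] = a[min];
        a[min] = tmp;
    }
}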

• Sequential search

This is also called linear search. Here we start from the initial element of the array and
compare it with the search key. We repeat this with all the elements of the array till we
encounter the search key or till we reach the end of the array.

The time efficiency in the worst case is O(n), where n is the number of elements of the
array. In the best case it is O(1), which occurs when the very first element is the search key.
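
A minimal C sketch of sequential search (returns the index of the key, or -1 if it is absent; names are illustrative):

int sequential_search(const int a[], int n, int key) {
    for (int i = 0; i < n; i++)
        if (a[i] == key)    /* basic operation: comparison with the search key */
            return i;
    return -1;
}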

• String matching algorithm with complexity analysis

Another example of the brute force approach is string matching, where a string of n
characters called the text and a string of m characters (m ≤ n) called the pattern are given.
The job is to find whether the pattern is present in the text or not, i.e., to find i, the index
of the leftmost character of the first matching substring in the text.

We start matching with the very first character; if it matches, then only j is incremented and
the next characters of both strings are compared. If not, then i is incremented and j starts
again from the beginning of the pattern string. If the pattern is found, we return the position
where the match began. The pattern is tried only at positions 0 through n − m; beyond that,
the remaining text characters are fewer than the pattern, so no match is possible. If the
pattern does not match at any of these positions, it is not present in the text.

The worst-case efficiency is Θ(nm); the best case is Θ(m).
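
A minimal C sketch of the brute-force matcher (returns the index of the leftmost match, or -1; names are illustrative):

int brute_force_match(const char *text, int n, const char *pattern, int m) {
    for (int i = 0; i <= n - m; i++) {          /* try each alignment 0 .. n-m */
        int j = 0;
        while (j < m && pattern[j] == text[i + j])
            j++;                                /* advance while characters match */
        if (j == m)
            return i;                           /* full match starting at position i */
    }
    return -1;
}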

