ADA Notes Module 1

ALGORITHMS

In computer science and mathematics, an algorithm is a set of instructions for solving a
problem in a step-by-step manner.

FUNDAMENTALS OF ALGORITHMIC PROBLEM SOLVING:


Algorithmic problem solving is solving problems that require the formulation
of an algorithm for their solution.

Understanding the Problem


This step identifies exactly what problem the algorithm must solve and what
inputs it operates on.
It is very important to specify exactly the set of inputs the algorithm needs
to handle.
 A correct algorithm is not one that works most of the time, but one that
works correctly for all legitimate inputs.

Ascertaining the Capabilities of the Computational Device


If the instructions are executed one after another, it is called a sequential
algorithm.
If the instructions are executed concurrently, it is called a parallel algorithm.

Choosing between Exact and Approximate Problem Solving


The next principal decision is to choose between solving the problem
exactly or solving it approximately.
Based on this, algorithms are classified as exact algorithms and
approximation algorithms.

Deciding on a data structure:


Data structures play a vital role in designing and analyzing algorithms.
Some algorithm design techniques also depend on structuring the data that
specifies a problem's instance.
Algorithms + Data Structures = Programs

Algorithm Design Techniques


An algorithm design technique (or “strategy” or “paradigm”) is a general
approach to solving problems algorithmically that is applicable to a variety
of problems from different areas of computing.
Learning these techniques is of utmost importance for the following
reasons.
First, they provide guidance for designing algorithms for new problems.
Second, algorithms are the cornerstone of computer science.

Methods of Specifying an Algorithm


Pseudo-code is a mixture of a natural language and programming
language-like constructs. Pseudo code is usually more precise than natural
language, and its usage often yields more succinct algorithm descriptions.
In the earlier days of computing, the dominant vehicle for specifying
algorithms was a flowchart, a method of expressing an algorithm by a
collection of connected geometric shapes containing descriptions of the
algorithm’s steps.
Pseudocode and flowcharts cannot be fed into an electronic computer directly.
Instead, the algorithm needs to be converted into a computer program written in a
particular programming language. We can look at such a program as yet another
way of specifying the algorithm, although it is preferable to consider it as the
algorithm’s implementation.

Proving an Algorithm’s Correctness


Once an algorithm has been specified, you have to prove its correctness.
That is, you have to prove that the algorithm yields a required result for every
legitimate input in a finite amount of time.
A common technique for proving correctness is to use mathematical
induction because an algorithm’s iterations provide a natural sequence of
steps needed for such proofs.
It might be worth mentioning that although tracing the algorithm’s
performance for a few specific inputs can be a very worthwhile activity, it
cannot prove the algorithm’s correctness conclusively. But in order to show
that an algorithm is incorrect, you need just one instance of its input for
which the algorithm fails.

Analyzing an Algorithm:

1. Efficiency.
Time efficiency, indicating how fast the algorithm runs,
Space efficiency, indicating how much extra memory it uses.

2. Simplicity.
Unlike efficiency, simplicity cannot be precisely defined and investigated with
mathematical rigor.
Simpler algorithms are easier to understand and easier to program.
Simple algorithms usually contain fewer bugs.

Coding an Algorithm
Most algorithms are destined to be ultimately implemented as computer
programs. Programming an algorithm presents both a peril and an
opportunity.
A working program provides an additional opportunity in allowing an
empirical analysis of the underlying algorithm. Such an analysis is based on
timing the program on several inputs and then analyzing the results
obtained.
CS6404 __ Design and Analysis of Algorithms _ Unit I ______1.9

1.4 FUNDAMENTALS OF THE ANALYSIS OF ALGORITHM EFFICIENCY


The efficiency of an algorithm can be in terms of time and space. The algorithm efficiency
can be analyzed by the following ways.
a. Analysis Framework.
b. Asymptotic Notations and its properties.
c. Mathematical analysis for Recursive algorithms.
d. Mathematical analysis for Non-recursive algorithms.

1.5 Analysis Framework


There are two kinds of efficiencies to analyze the efficiency of any algorithm. They are:
 Time efficiency, indicating how fast the algorithm runs, and
 Space efficiency, indicating how much extra memory it uses.

The algorithm analysis framework consists of the following:


 Measuring an Input’s Size
 Units for Measuring Running Time
 Orders of Growth
 Worst-Case, Best-Case, and Average-Case Efficiencies

(i) Measuring an Input’s Size


 An algorithm’s efficiency is defined as a function of some parameter n indicating the
algorithm’s input size. In most cases, selecting such a parameter is quite straightforward.
For example, it will be the size of the list for problems of sorting, searching.
 For the problem of evaluating a polynomial p(x) = a_n x^n + . . . + a_0 of degree n, the size
parameter will be the polynomial’s degree or the number of its coefficients, which is
larger by 1 than its degree.
 In computing the product of two n × n matrices, the choice of a parameter indicating an
input size does matter.
 Consider a spell-checking algorithm. If the algorithm examines individual characters of its
input, then the size is measured by the number of characters.
 In measuring input size for algorithms solving problems such as checking the primality of a
positive integer n, the input is just one number.
 Here the input size is measured by the number b of bits in n’s binary representation: b = ⌊log2 n⌋ + 1.

(ii) Units for Measuring Running Time


Some standard unit of time measurement such as a second, or millisecond, and so on can be
used to measure the running time of a program after implementing the algorithm.
Drawbacks
 Dependence on the speed of a particular computer.
 Dependence on the quality of a program implementing the algorithm.
 The compiler used in generating the machine code.
 The difficulty of clocking the actual running time of the program.
So, we need metric to measure an algorithm’s efficiency that does not depend on these
extraneous factors.
One possible approach is to count the number of times each of the algorithm’s operations
is executed. This approach is excessively difficult and usually unnecessary.
Instead we identify the operation contributing most to the total running time, called the
basic operation (usually the most time-consuming operation in the innermost loop, e.g., a
comparison or an arithmetic operation such as +, -, *, /).
Computing the number of times the basic operation is executed is easy, and the total running
time is determined by the basic operation count.

(iii) Orders of Growth


 A difference in running times on small inputs is not what really distinguishes efficient
algorithms from inefficient ones.
 For example, when computing the greatest common divisor of two small numbers, it is not
immediately clear how much more efficient Euclid’s algorithm is compared to the other
algorithms; the difference in algorithm efficiencies becomes clear for larger numbers only.
 For large values of n, it is the function’s order of growth that counts just like the Table 1.1,
which contains values of a few functions particularly important for analysis of algorithms.

TABLE 1.1 Values (approximate) of several functions important for analysis of algorithms

n      √n        log2 n   n      n log2 n   n^2       n^3       2^n       n!
1      1         0        1      0          1         1         2         1
2      1.4       1        2      2          4         8         4         2
4      2         2        4      8          16        64        16        24
8      2.8       3        8      2.4·10^1   64        5.1·10^2  2.6·10^2  4.0·10^4
10     3.2       3.3      10     3.3·10^1   10^2      10^3      10^3      3.6·10^6
16     4         4        16     6.4·10^1   2.6·10^2  4.1·10^3  6.5·10^4  2.1·10^13
10^2   10        6.6      10^2   6.6·10^2   10^4      10^6      1.3·10^30 9.3·10^157
10^3   31        10       10^3   1.0·10^4   10^6      10^9      (2^n and n! for n ≥ 10^3
10^4   10^2      13       10^4   1.3·10^5   10^8      10^12      require a very big
10^5   3.2·10^2  17       10^5   1.7·10^6   10^10     10^15      computation)
10^6   10^3      20       10^6   2.0·10^7   10^12     10^18

(iv) Worst-Case, Best-Case, and Average-Case Efficiencies


Consider the SequentialSearch algorithm with some search key K.
ALGORITHM SequentialSearch(A[0..n − 1], K)
//Searches for a given value in a given array by sequential search
//Input: An array A[0..n − 1] and a search key K
//Output: The index of the first element in A that matches K or −1 if there are no
//        matching elements
i ← 0
while i < n and A[i] ≠ K do
    i ← i + 1
if i < n return i
else return −1
Clearly, the running time of this algorithm can be quite different for the same list size n.

In the worst case, there is no matching element, or the first matching element is the
last one on the list. In the best case, the first element of the list matches the key.

Worst-case efficiency
 The worst-case efficiency of an algorithm is its efficiency for the worst case input of size n.
 The algorithm runs the longest among all possible inputs of that size.
 For the input of size n, the running time is Cworst(n) = n.

Best case efficiency


 The best-case efficiency of an algorithm is its efficiency for the best case input of size n.
 The algorithm runs the fastest among all possible inputs of that size n.
 In sequential search, if we search for the first element in a list of size n (i.e., the first
element equals the search key), then the running time is Cbest(n) = 1.
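The best- and worst-case counts above can be checked directly with an instrumented version of SequentialSearch. This is a sketch in Python (not part of the original notes) that returns the comparison count alongside the result:

```python
# Sequential search instrumented with a key-comparison counter, so that
# Cbest(n) = 1 and Cworst(n) = n can be observed directly.
def sequential_search(a, key):
    comparisons = 0
    for i, item in enumerate(a):
        comparisons += 1            # one key comparison per loop iteration
        if item == key:
            return i, comparisons   # best case: key at a[0] -> 1 comparison
    return -1, comparisons          # worst case: key absent -> n comparisons
```

For example, `sequential_search([7, 3, 9], 7)` reports 1 comparison, while an unsuccessful search over the same list reports 3.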

Average case efficiency


 The Average case efficiency lies between best case and worst case.
 To analyze the algorithm’s average case efficiency, we must make some assumptions about
possible inputs of size n.
 The standard assumptions are that
o The probability of a successful search is equal to p (0 ≤ p ≤ 1) and
o The probability of the first match occurring in the ith position of the list is the same
for every i.

Yet another type of efficiency is called amortized efficiency. It applies not to a single run of
an algorithm but rather to a sequence of operations performed on the same data structure.

1.6 ASYMPTOTIC NOTATIONS AND ITS PROPERTIES

Asymptotic notation is notation used to make meaningful statements about the
efficiency of a program.
The efficiency analysis framework concentrates on the order of growth of an algorithm’s
basic operation count as the principal indicator of the algorithm’s efficiency.
To compare and rank such orders of growth, computer scientists use three notations, they
are:
 O - Big oh notation
 Ω - Big omega notation
 Θ - Big theta notation
Let t(n) and g(n) be any nonnegative functions defined on the set of natural numbers.
Here t(n) is the algorithm’s running time (usually indicated by its basic operation count
C(n)), and g(n) is some simple function to compare the count with.

Example 1: a basic operation count t(n) is compared against a simple function such as
g(n) = n^2.



(i) O - Big oh notation


A function t(n) is said to be in O(g(n)), denoted t(n) ∈ O(g(n)), if t(n) is bounded above
by some constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and
some nonnegative integer n0 such that
t(n) ≤ cg(n) for all n ≥ n0.
Where t(n) and g(n) are nonnegative functions defined on the set of natural numbers.
O = Asymptotic upper bound = Useful for worst case analysis = Loose bound

FIGURE 1.5 Big-oh notation: t(n) ∈ O(g(n)).

Example 2: Prove the assertion 100n + 5 ∈ O(n^2).


Proof: 100n + 5 ≤ 100n + n (for all n ≥ 5)
     = 101n
     ≤ 101n^2 (since n ≤ n^2 for all n ≥ 1)
Since the definition gives us a lot of freedom in choosing specific values for the constants c
and n0, we may take c = 101 and n0 = 5.

Example 3: Prove the assertion 100n + 5 ∈ O(n).
Proof: 100n + 5 ≤ 100n + 5n (for all n ≥ 1)
     = 105n
i.e., 100n + 5 ≤ 105n
i.e., t(n) ≤ cg(n)
Hence 100n + 5 ∈ O(n) with c = 105 and n0 = 1.
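The constants c and n0 chosen in the proofs can be sanity-checked numerically. A quick Python sketch (not part of the original notes) verifying both bounds over a range of n:

```python
# Spot-check the constants from the Big-O proofs above:
# 100n + 5 <= 101*n^2 for all n >= 5, and 100n + 5 <= 105*n for all n >= 1.
def t(n):
    return 100 * n + 5

assert all(t(n) <= 101 * n**2 for n in range(5, 1000))   # c=101, n0=5
assert all(t(n) <= 105 * n for n in range(1, 1000))      # c=105, n0=1
```

A numeric check like this cannot prove the assertion for all n, but it catches mistakes in the chosen constants quickly.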

(ii) Ω - Big omega notation


A function t(n) is said to be in Ω(g(n)), denoted t(n) ∈ Ω(g(n)), if t(n) is bounded below by
some positive constant multiple of g(n) for all large n, i.e., if there exist some positive constant c
and some nonnegative integer n0 such that
t (n) ≥ cg(n) for all n ≥ n0.
Where t(n) and g(n) are nonnegative functions defined on the set of natural numbers.
Ω = Asymptotic lower bound = Useful for best case analysis = Loose bound

FIGURE 1.6 Big-omega notation: t (n) ∈ Ω (g(n)).

Example 4: Prove the assertion n^3 + 10n^2 + 4n + 2 ∈ Ω(n^2).


Proof: n^3 + 10n^2 + 4n + 2 ≥ n^2 (for all n ≥ 0)
i.e., by definition t(n) ≥ cg(n), where c = 1 and n0 = 0.

(iii) Θ - Big theta notation


A function t(n) is said to be in Θ(g(n)), denoted t(n) ∈ Θ(g(n)), if t(n) is bounded both above
and below by some positive constant multiples of g(n) for all large n, i.e., if there exist some
positive constants c1 and c2 and some nonnegative integer n0 such that
c2g(n) ≤ t (n) ≤ c1g(n) for all n ≥ n0.
Where t(n) and g(n) are nonnegative functions defined on the set of natural numbers.
Θ = Asymptotic tight bound = Useful for average case analysis

FIGURE 1.7 Big-theta notation: t (n) ∈ Θ(g(n)).

Example 5: Prove the assertion (1/2)n(n − 1) ∈ Θ(n^2).


Proof: First prove the right inequality (the upper bound):
(1/2)n(n − 1) = (1/2)n^2 − (1/2)n ≤ (1/2)n^2 for all n ≥ 0.
Second, we prove the left inequality (the lower bound):
(1/2)n(n − 1) = (1/2)n^2 − (1/2)n ≥ (1/2)n^2 − (1/2)n·(1/2)n (for all n ≥ 2)
             = (1/4)n^2.
Hence (1/2)n(n − 1) ∈ Θ(n^2), where c2 = 1/4, c1 = 1/2, and n0 = 2.
Note: asymptotic notation can be thought of as "relational operators" for functions similar to the
corresponding relational operators for values.
= ⇒ Θ(), ≤ ⇒ O(), ≥ ⇒ Ω(), < ⇒ o(), > ⇒ ω()

Useful Property Involving the Asymptotic Notations


The following property, in particular, is useful in analyzing algorithms that comprise two
consecutively executed parts.

THEOREM: If t1(n) ∈ O(g1(n)) and t2(n) ∈ O(g2(n)), then t1(n) + t2(n) ∈ O(max{g1(n), g2(n)}).
(The analogous assertions are true for the Ω and Θ notations as well.)

PROOF: The proof extends to orders of growth the following simple fact about four arbitrary real
numbers a1, b1, a2, b2: if a1 ≤ b1 and a2 ≤ b2, then a1 + a2 ≤ 2 max{b1, b2}.
Since t1(n) ∈ O(g1(n)), there exist some positive constant c1 and some nonnegative integer
n1 such that
t1(n) ≤ c1g1(n) for all n ≥ n1.
Similarly, since t2(n) ∈ O(g2(n)),
t2(n) ≤ c2g2(n) for all n ≥ n2.
Let us denote c3 = max{c1, c2} and consider n ≥ max{n1, n2} so that we can use
both inequalities. Adding them yields the following:
t1(n) + t2(n) ≤ c1g1(n) + c2g2(n)
             ≤ c3g1(n) + c3g2(n)
             = c3[g1(n) + g2(n)]
             ≤ 2c3 max{g1(n), g2(n)}.

Hence, t1(n) + t2(n) ∈ O(max{g1(n), g2(n)}), with the constants c and n0 required by the
definition of O being 2c3 = 2 max{c1, c2} and max{n1, n2}, respectively.
The property implies that the algorithm’s overall efficiency is determined by the part
with the higher order of growth, i.e., its least efficient part.
Therefore, if t1(n) ∈ O(g1(n)) and t2(n) ∈ O(g2(n)), then t1(n) + t2(n) ∈ O(max{g1(n), g2(n)}).

Basic rules of sum manipulation:
Σ_{i=l}^{u} c·a_i = c Σ_{i=l}^{u} a_i   (R1)
Σ_{i=l}^{u} (a_i ± b_i) = Σ_{i=l}^{u} a_i ± Σ_{i=l}^{u} b_i

Summation formulas:
Σ_{i=l}^{u} 1 = u − l + 1   (S1)
Σ_{i=1}^{n} i = n(n + 1)/2 ≈ (1/2)n^2 ∈ Θ(n^2)

1.7 MATHEMATICAL ANALYSIS FOR RECURSIVE ALGORITHMS


General Plan for Analyzing the Time Efficiency of Recursive Algorithms
1. Decide on a parameter (or parameters) indicating an input’s size.
2. Identify the algorithm’s basic operation.
3. Check whether the number of times the basic operation is executed can vary on different
inputs of the same size; if it can, the worst-case, average-case, and best-case efficiencies
must be investigated separately.
4. Set up a recurrence relation, with an appropriate initial condition, for the number of times
the basic operation is executed.
5. Solve the recurrence or, at least, ascertain the order of growth of its solution.

EXAMPLE 1: Compute the factorial function F(n) = n! for an arbitrary nonnegative integer n.
Since n! = 1 · . . . · (n − 1) · n = (n − 1)! · n for n ≥ 1, and 0! = 1 by definition, we can compute
F(n) = F(n − 1) · n with the following recursive algorithm.
ALGORITHM F(n)
//Computes n! recursively
//Input: A nonnegative integer n
//Output: The value of n!
if n = 0 return 1
else return F(n − 1) * n

Algorithm analysis
 For simplicity, we consider n itself as the indicator of this algorithm’s input size.
 The basic operation of the algorithm is multiplication, whose number of executions we
denote M(n). Since the function F(n) is computed according to the formula F(n) = F(n −1)•n
for n > 0.
 The number of multiplications M(n) needed to compute it must satisfy the equality
M(n) = M(n-1) + 1 for n > 0

M(n) = M(n − 1) + 1: M(n − 1) multiplications are spent to compute F(n − 1), and one more
multiplication is needed to multiply the result by n.

Recurrence relations
The last equation defines the sequence M(n) that we need to find. This equation defines
M(n) not explicitly, i.e., as a function of n, but implicitly as a function of its value at another point,
namely n − 1. Such equations are called recurrence relations or recurrences.
Solve the recurrence relation M(n) = M(n − 1) + 1, i.e., find an explicit formula for
M(n) in terms of n only.
To determine a solution uniquely, we need an initial condition that tells us the value with
which the sequence starts. We can obtain this value by inspecting the condition that makes the
algorithm stop its recursive calls:
if n = 0 return 1.
This tells us two things. First, since the calls stop when n = 0, the smallest value of n for
which this algorithm is executed and hence M(n) defined is 0. Second, by inspecting the
pseudocode’s exiting line, we can see that when n = 0, the algorithm performs no multiplications.
CS6404 __ Design and Analysis of Algorithms _ Unit I ______1.16

Thus, the recurrence relation and initial condition for the algorithm’s number of multiplications
M(n):
M(n) = M(n − 1) + 1 for n > 0,
M(0) = 0 for n = 0.

Method of backward substitutions


M(n) = M(n − 1) + 1      substitute M(n − 1) = M(n − 2) + 1
     = [M(n − 2) + 1] + 1
     = M(n − 2) + 2      substitute M(n − 2) = M(n − 3) + 1
     = [M(n − 3) + 1] + 2
     = M(n − 3) + 3
     ...
     = M(n − i) + i
     ...
     = M(n − n) + n
     = n.
Therefore M(n) = n.
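The closed form M(n) = n can be observed empirically with an instrumented version of the recursive factorial. A Python sketch (not part of the original notes) that returns the multiplication count alongside the value:

```python
# Recursive factorial F(n) instrumented to count multiplications M(n);
# the recurrence M(n) = M(n-1) + 1, M(0) = 0 solved above gives M(n) = n.
def factorial(n):
    if n == 0:
        return 1, 0                  # base case: 0! = 1, no multiplications
    value, mults = factorial(n - 1)
    return value * n, mults + 1      # one multiplication per recursive call
```

For example, `factorial(5)` yields the pair (120, 5): five multiplications for n = 5, matching M(n) = n.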

EXAMPLE 2: Consider the educational workhorse of recursive algorithms: the Tower of Hanoi


puzzle. We have n disks of different sizes that can slide onto any of three pegs: A
(source), B (auxiliary), and C (destination). Initially, all the disks are on the first peg in order of
size, the largest on the bottom and the smallest on top. The goal is to move all the disks to the third
peg, using the second one as an auxiliary.

FIGURE 1.8 Recursive solution to the Tower of Hanoi puzzle.



ALGORITHM TOH(n, A, C, B)
//Moves n disks from source peg A to destination peg C recursively
//Input: n disks and 3 pegs A, B, and C
//Output: Disks moved to destination as in the source order
if n = 1
    Move disk from A to C
else
    TOH(n − 1, A, B, C)      //move top n − 1 disks from A to B using C
    Move disk from A to C    //move the largest disk from A to C
    TOH(n − 1, B, C, A)      //move n − 1 disks from B to C using A

Algorithm analysis
The number of moves M(n) depends on n only, and we get the following recurrence
equation for it: M(n) = M(n − 1) + 1+ M(n − 1) for n > 1.
With the obvious initial condition M(1) = 1, we have the following recurrence relation for the
number of moves M(n):
M(n) = 2M(n − 1) + 1 for n > 1,
M(1) = 1.
We solve this recurrence by the same method of backward substitutions:
M(n) = 2M(n − 1) + 1                        sub. M(n − 1) = 2M(n − 2) + 1
     = 2[2M(n − 2) + 1] + 1
     = 2^2 M(n − 2) + 2 + 1                 sub. M(n − 2) = 2M(n − 3) + 1
     = 2^2 [2M(n − 3) + 1] + 2 + 1
     = 2^3 M(n − 3) + 2^2 + 2 + 1           sub. M(n − 3) = 2M(n − 4) + 1
     = 2^4 M(n − 4) + 2^3 + 2^2 + 2 + 1
     ...
     = 2^i M(n − i) + 2^(i−1) + 2^(i−2) + . . . + 2 + 1 = 2^i M(n − i) + 2^i − 1.

Since the initial condition is specified for n = 1, which is achieved for i = n − 1,
M(n) = 2^(n−1) M(n − (n − 1)) + 2^(n−1) − 1 = 2^(n−1) M(1) + 2^(n−1) − 1
     = 2^(n−1) + 2^(n−1) − 1 = 2^n − 1.
Thus, we have an exponential-time algorithm.
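The move count M(n) = 2^n − 1 can be verified by running the recursion and collecting the moves. A Python sketch (not part of the original notes), following the else-branch of TOH above:

```python
# Tower of Hanoi: record each (source, destination) move and check that
# the total number of moves equals 2**n - 1.
def toh(n, source, destination, auxiliary, moves):
    if n == 1:
        moves.append((source, destination))
    else:
        toh(n - 1, source, auxiliary, destination, moves)  # n-1 disks A -> B
        moves.append((source, destination))                # largest disk A -> C
        toh(n - 1, auxiliary, destination, source, moves)  # n-1 disks B -> C

moves = []
toh(3, 'A', 'C', 'B', moves)
assert len(moves) == 2**3 - 1   # 7 moves for 3 disks
```

Increasing n by one roughly doubles the number of moves, which is exactly the exponential growth derived above.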

EXAMPLE 3: An investigation of a recursive version of the algorithm which finds the number of
binary digits in the binary representation of a positive decimal integer.

ALGORITHM BinRec(n)
//Input: A positive decimal integer n
//Output: The number of binary digits in n’s binary representation
if n = 1 return 1
else return BinRec(⌊n/2⌋) + 1

Algorithm analysis
The number of additions made in computing BinRec(⌊n/2⌋) is A(⌊n/2⌋), plus one more
addition is made by the algorithm to increase the returned value by 1. This leads to the recurrence
A(n) = A(⌊n/2⌋) + 1 for n > 1.
Since the recursive calls end when n is equal to 1 and there are no additions made

then, the initial condition is A(1) = 0.


The standard approach to solving such a recurrence is to solve it only for n = 2^k:
A(2^k) = A(2^(k−1)) + 1 for k > 0,
A(2^0) = 0.

Backward substitutions:
A(2^k) = A(2^(k−1)) + 1          substitute A(2^(k−1)) = A(2^(k−2)) + 1
       = [A(2^(k−2)) + 1] + 1 = A(2^(k−2)) + 2    substitute A(2^(k−2)) = A(2^(k−3)) + 1
       = [A(2^(k−3)) + 1] + 2 = A(2^(k−3)) + 3
       ...
       = A(2^(k−i)) + i
       ...
       = A(2^(k−k)) + k.
Thus, we end up with A(2^k) = A(1) + k = k, or, after returning to the original variable n = 2^k and
hence k = log2 n,
A(n) = log2 n ∈ Θ(log n).
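The recursive algorithm translates directly into Python, where integer division gives the floor. A sketch (not part of the original notes) checking that the result is ⌊log2 n⌋ + 1 binary digits:

```python
import math

# Recursive bit count from ALGORITHM BinRec above; n // 2 is floor(n/2).
# The number of additions A(n) = floor(log2 n), so the result is
# floor(log2 n) + 1 binary digits.
def bin_rec(n):
    if n == 1:
        return 1
    return bin_rec(n // 2) + 1      # one addition per halving

assert bin_rec(16) == 5             # 16 = 10000 in binary: 5 digits
assert bin_rec(1000) == math.floor(math.log2(1000)) + 1
```

Each recursive call halves n, so the recursion depth (and hence the addition count) is logarithmic in n.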

1.8 MATHEMATICAL ANALYSIS FOR NON-RECURSIVE ALGORITHMS


General Plan for Analyzing the Time Efficiency of Nonrecursive Algorithms
1. Decide on a parameter (or parameters) indicating an input’s size.
2. Identify the algorithm’s basic operation (in the innermost loop).
3. Check whether the number of times the basic operation is executed depends only on the size
of an input. If it also depends on some additional property, the worst-case, average-case,
and, if necessary, best-case efficiencies have to be investigated separately.
4. Set up a sum expressing the number of times the algorithm’s basic operation is executed.
5. Using standard formulas and rules of sum manipulation either find a closed form formula
for the count or at the least, establish its order of growth.

EXAMPLE 1: Consider the problem of finding the value of the largest element in a list of n
numbers. Assume that the list is implemented as an array for simplicity.
ALGORITHM MaxElement(A[0..n − 1])
//Determines the value of the largest element in a given array
//Input: An array A[0..n − 1] of real numbers
//Output: The value of the largest element in A
maxval ← A[0]
for i ← 1 to n − 1 do
    if A[i] > maxval
        maxval ← A[i]
return maxval

Algorithm analysis
 The measure of an input’s size here is the number of elements in the array, i.e., n.
 There are two operations in the for loop’s body:
o The comparison A[i]> maxval and
o The assignment maxval←A[i].

 The comparison operation is considered as the algorithm’s basic operation, because the
comparison is executed on each repetition of the loop and not the assignment.
 The number of comparisons will be the same for all arrays of size n; therefore, there is no
need to distinguish among the worst, average, and best cases here.
 Let C(n) denotes the number of times this comparison is executed. The algorithm makes
one comparison on each execution of the loop, which is repeated for each value of the
loop’s variable i within the bounds 1 and n − 1, inclusive. Therefore, the sum for C(n) is
calculated as follows:
C(n) = Σ_{i=1}^{n−1} 1

i.e., sum up 1 repeated n − 1 times:

C(n) = Σ_{i=1}^{n−1} 1 = n − 1 ∈ Θ(n)
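Since the comparison count C(n) = n − 1 is the same for every array of size n, an instrumented version makes this visible. A Python sketch of MaxElement (not part of the original notes):

```python
# MaxElement with a comparison counter; for any array of size n the
# loop performs exactly n - 1 comparisons, matching C(n) = n - 1.
def max_element(a):
    maxval = a[0]
    comparisons = 0
    for i in range(1, len(a)):
        comparisons += 1            # the basic operation: A[i] > maxval
        if a[i] > maxval:
            maxval = a[i]
    return maxval, comparisons
```

For example, an array of 5 elements always takes 4 comparisons, whether it is sorted, reversed, or shuffled.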

EXAMPLE 2: Consider the element uniqueness problem: check whether all the Elements in a
given array of n elements are distinct.
ALGORITHM UniqueElements(A[0..n − 1])
//Determines whether all the elements in a given array are distinct
//Input: An array A[0..n − 1]
//Output: Returns “true” if all the elements in A are distinct and “false” otherwise
for i ←0 to n − 2 do
for j ←i + 1 to n − 1 do
if A[i]= A[j ] return false
return true

Algorithm analysis
 The natural measure of the input’s size here is again n (the number of elements in the array).
 Since the innermost loop contains a single operation (the comparison of two elements), we
should consider it as the algorithm’s basic operation.
 The number of element comparisons depends not only on n but also on whether there are
equal elements in the array and, if there are, which array positions they occupy. We will
limit our investigation to the worst case only.
 One comparison is made for each repetition of the innermost loop, i.e., for each value of the
loop variable j between its limits i + 1 and n − 1; this is repeated for each value of the outer
loop, i.e., for each value of the loop variable i between its limits 0 and n − 2. Accordingly,
the worst-case count is
C_worst(n) = Σ_{i=0}^{n−2} Σ_{j=i+1}^{n−1} 1 = Σ_{i=0}^{n−2} (n − 1 − i) = n(n − 1)/2 ∈ Θ(n^2).
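The double loop translates directly into Python. A sketch of UniqueElements (not part of the original notes); in the worst case (all elements distinct) the inner comparison runs n(n − 1)/2 times:

```python
# UniqueElements: compare every pair (i, j) with i < j; return False on
# the first duplicate. Worst case (all distinct): n(n-1)/2 comparisons.
def unique_elements(a):
    n = len(a)
    for i in range(n - 1):
        for j in range(i + 1, n):
            if a[i] == a[j]:
                return False        # early exit on the first duplicate
    return True
```

The early exit is why the best and worst cases differ, which is exactly why the analysis above restricts itself to the worst case.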

EXAMPLE 3: Consider matrix multiplication. Given two n × n matrices A and B, find the time
efficiency of the definition-based algorithm for computing their product C = AB. By definition, C
is an n × n matrix whose elements are computed as the scalar (dot) products of the rows of matrix A
and the columns of matrix B:

where C[i, j ]= A[i, 0]B[0, j]+ . . . + A[i, k]B[k, j]+ . . . + A[i, n − 1]B[n − 1, j] for every pair of
indices 0 ≤ i, j ≤ n − 1.

ALGORITHM MatrixMultiplication(A[0..n − 1, 0..n − 1], B[0..n − 1, 0..n − 1])


//Multiplies two square matrices of order n by the definition-based algorithm
//Input: Two n × n matrices A and B
//Output: Matrix C = AB
for i ←0 to n − 1 do
for j ←0 to n − 1 do
C[i, j ]←0.0
for k←0 to n − 1 do
C[i, j ]←C[i, j ]+ A[i, k] ∗ B[k, j]
return C
Algorithm analysis
 An input’s size is matrix order n.
 There are two arithmetical operations (multiplication and addition) in the innermost loop.
But we consider multiplication as the basic operation.
 Let us set up a sum for the total number of multiplications M(n) executed by the algorithm.
Since this count depends only on the size of the input matrices, we do not have to
investigate the worst-case, average-case, and best-case efficiencies separately.
 There is just one multiplication executed on each repetition of the algorithm’s innermost
loop, which is governed by the variable k ranging from the lower bound 0 to the upper
bound n − 1.
 Therefore, the number of multiplications made for every pair of specific values of the
variables i and j is
Σ_{k=0}^{n−1} 1 = n.

The total number of multiplications M(n) is expressed by the following triple sum:
M(n) = Σ_{i=0}^{n−1} Σ_{j=0}^{n−1} Σ_{k=0}^{n−1} 1.

Now, we can compute this sum by using formula (S1) and rule (R1):
M(n) = Σ_{i=0}^{n−1} Σ_{j=0}^{n−1} n = Σ_{i=0}^{n−1} n^2 = n^3.

To estimate the running time of the algorithm on a particular machine, we can multiply M(n)
by the time c_m of one multiplication: T(n) ≈ c_m M(n) = c_m n^3.
If we consider the time spent on the additions too, with c_a the time of one addition and the
same count n^3 of additions, then the total time on the machine is
T(n) ≈ c_m n^3 + c_a n^3 = (c_m + c_a) n^3.
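The definition-based algorithm is a direct triple loop. A Python sketch (not part of the original notes) whose innermost statement executes exactly n^3 times:

```python
# Definition-based n x n matrix product C = AB; the innermost statement
# runs n**3 times, one multiplication (and one addition) per execution.
def matrix_multiply(A, B):
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C
```

Multiplying by the 2×2 identity matrix, for instance, returns the other operand unchanged, and doubling n multiplies the work by eight, consistent with M(n) = n^3.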

EXAMPLE 4 The following algorithm finds the number of binary digits in the binary
representation of a positive decimal integer. 
ALGORITHM Binary(n)
//Input: A positive decimal integer n
//Output: The number of binary digits in n’s binary representation
count ← 1
while n > 1 do
    count ← count + 1
    n ← ⌊n/2⌋
return count
Algorithm analysis
 An input’s size is n.
 The loop variable takes on only a few values between its lower and upper limits.
 Since the value of n is about halved on each repetition of the loop, the answer should be
about log2 n.
 The exact number of times the comparison n > 1 is executed is ⌊log2 n⌋ + 1, which is the
number of bits in n’s binary representation.
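The iterative algorithm is a few lines of Python; a sketch (not part of the original notes) where `n //= 2` is the floor division ⌊n/2⌋:

```python
# Iterative bit count from ALGORITHM Binary(n); the loop body runs
# floor(log2 n) times, and the guard n > 1 is tested one more time.
def binary_digits(n):
    count = 1
    while n > 1:
        count += 1
        n //= 2          # floor(n/2)
    return count
```

For instance, 8 = 1000 in binary needs 4 digits, and the loop guard is tested ⌊log2 8⌋ + 1 = 4 times.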
Selection sort
Selection sort is a simple comparison-based sorting algorithm. It divides the input list into
two parts: the sublist of items already sorted, which is built up from left to right at the front
(left) of the list, and the sublist of items remaining to be sorted that occupy the rest of the list.
Here's a step-by-step breakdown of how selection sort works:

1. Start with the first element: Consider the first element of the list as the minimum.
2. Find the minimum element in the unsorted part: Scan the entire list to find the
smallest element.
3. Swap the minimum element with the first unsorted element: Swap the found
minimum element with the first element of the unsorted part.
4. Move the boundary of the sorted part: Increase the boundary of the sorted part by
one, so the sorted part now includes the minimum element found in step 2.
5. Repeat until the list is sorted: Repeat the process for the next unsorted element,
treating the boundary of the sorted and unsorted parts as shifting to the right.

Algorithm

selectionSort(arr, n)
    for i from 0 to n-1
        min_index = i
        for j from i+1 to n-1
            if arr[j] < arr[min_index]
                min_index = j
        if min_index != i
            swap(arr[i], arr[min_index])
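The steps above translate directly into Python. A sketch (not part of the original notes); the outer loop can stop at n − 2, since after that the last element is already in place:

```python
# Selection sort: grow the sorted prefix one element at a time by
# swapping the minimum of the unsorted suffix into position i.
def selection_sort(arr):
    n = len(arr)
    for i in range(n - 1):
        min_index = i
        for j in range(i + 1, n):       # scan the unsorted part
            if arr[j] < arr[min_index]:
                min_index = j
        if min_index != i:              # swap only when needed
            arr[i], arr[min_index] = arr[min_index], arr[i]
    return arr
```

The comparison count is n(n − 1)/2 regardless of the input order, so selection sort is Θ(n^2) in every case.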


Bubble Sort Algorithm
Bubble sort is another simple comparison-based sorting algorithm. It repeatedly steps through
the list, compares adjacent elements, and swaps them if they are in the wrong order. This
process is repeated until the list is sorted. The algorithm gets its name because smaller
elements "bubble" to the top of the list.

How Bubble Sort Works

1. Start at the beginning of the list.


2. Compare each pair of adjacent elements. If the elements are in the wrong order,
swap them.
3. Move to the next pair of adjacent elements and repeat the comparison and swap if
necessary.
4. Continue this process for each pair of elements to the end of the list.
5. Repeat the entire process for the whole list until no swaps are needed, indicating
that the list is sorted.

Pseudo code

bubbleSort(arr, n)
for i from 0 to n-1
for j from 0 to n-i-2
if arr[j] > arr[j+1]
swap arr[j] and arr[j+1]
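The pseudocode above can be written in Python with the early-exit condition from step 5 made explicit. A sketch (not part of the original notes):

```python
# Bubble sort with an early exit: if a full pass makes no swaps, the
# list is already sorted and the outer loop can stop.
def bubble_sort(arr):
    n = len(arr)
    for i in range(n - 1):
        swapped = False
        for j in range(n - i - 1):       # last i elements are in place
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:                  # no swaps -> sorted
            break
    return arr
```

With the early exit, an already-sorted list costs only one pass (n − 1 comparisons), while the worst case remains Θ(n^2).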

Sequential Search Algorithm

Sequential search, also known as linear search, is a simple search algorithm used to find a
particular element in a list. It checks each element of the list sequentially until a match is
found or the end of the list is reached.

How Sequential Search Works

1. Start from the beginning of the list.


2. Compare the target element with the current element of the list.
3. If the current element matches the target, return the index of the current element.
4. If the current element does not match, move to the next element.
5. Repeat steps 2-4 until a match is found or the end of the list is reached.
6. If the end of the list is reached without finding a match, return a sentinel value (e.g., -1)
indicating that the element is not in the list.

Pseudocode
sequentialSearch(arr, target)
for i from 0 to length(arr) - 1
if arr[i] == target
return i
return -1

Time Complexity Analysis

The time complexity of sequential search depends on the position of the target element in the
list or if the element is not present:

 Best Case: O(1)


o The target element is the first element in the list.
 Average Case: O(n)
o On average, the target element might be somewhere in the middle of the list.
 Worst Case: O(n)
o The target element is the last element in the list or not present at all, requiring the
algorithm to check every element.

Brute Force Search Algorithm

Brute force search is a straightforward and simple search algorithm that involves checking
each possible solution until the correct one is found. It's often used when the problem size is
small or when other search algorithms are impractical. The brute force approach is typically
applied to a variety of search problems, including string matching, optimization problems,
and combinatorial problems.

How Brute Force Search Works

1. Generate all possible candidates: Enumerate all possible solutions to the problem.
2. Check each candidate: For each candidate, check if it satisfies the given criteria or matches
the target.
3. Return the solution: If a candidate matches the criteria, return it as the solution.
4. Continue until a solution is found: If no candidates match, continue until all possibilities are
exhausted.

Example: Brute Force Search for String Matching

Let's take an example of brute force search applied to string matching, where we want to find
a substring within a string.

Pseudo code
bruteForceSearch(text, pattern)
n = length of text
m = length of pattern
for i from 0 to n - m
j = 0
while j < m and text[i + j] == pattern[j]
j = j + 1
if j == m
return i
return -1
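The pseudocode runs as-is with minor syntactic changes in Python. A sketch (not part of the original notes):

```python
# Brute-force string matching: slide the pattern over the text and
# compare character by character; return the first match index or -1.
def brute_force_search(text, pattern):
    n, m = len(text), len(pattern)
    for i in range(n - m + 1):          # candidate starting positions
        j = 0
        while j < m and text[i + j] == pattern[j]:
            j += 1
        if j == m:
            return i                    # full match starting at i
    return -1                           # pattern not found
```

For example, searching for "NOT" in "NOBODY_NOTICED_HIM" returns index 7; the inner loop aborts early at every earlier position.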

Time Complexity Analysis

The time complexity of brute force search depends on the specific problem being solved. For
the string matching example:

 Best Case: O(1)


o The pattern is found at the first position of the text.
 Average Case: O((n−m+1)⋅m)
o On average, the algorithm will compare the pattern to many parts of the text.
 Worst Case: O((n−m+1)⋅m)
o The pattern is not in the text, or the text consists of repeated characters requiring
maximum comparisons.

Where n is the length of the text and m is the length of the pattern.
