
UNIT-1

DAA
Algorithm:
An algorithm is a step-by-step procedure that defines a set of instructions to be executed in a certain order to get the desired output. Algorithms are generally created independent of the underlying language, i.e. an algorithm can be implemented in more than one programming language.

An algorithm is defined as follows:


• An algorithm is a set of rules for carrying out calculations either by hand or on a machine.
• An algorithm is a finite step-by-step procedure to achieve a required result.
• An algorithm is a sequence of computational steps that transform the input into the output.
• An algorithm is a sequence of operations performed on data that have to be organized in data
structures.
• An algorithm is an abstraction of a program to be executed on a physical machine (model of
Computation).

Characteristics of an Algorithm:
Not all procedures can be called algorithms. An algorithm should have the following characteristics −
1. Unambiguous − The algorithm should be clear and unambiguous. Each of its steps (or phases), and their inputs/outputs, should be clear and must lead to only one meaning.
2. Input − An algorithm should have 0 or more well-defined inputs.
3. Output − An algorithm should have 1 or more well-defined outputs, and they should match the desired output.
4. Finiteness − Algorithms must terminate after a finite number of steps.
5. Feasibility − The algorithm should be feasible with the available resources.
6. Independence − An algorithm should have step-by-step directions that are independent of any programming code.

ALGORITHM DESIGN TECHNIQUES (APPROACHES, DESIGN PARADIGMS)


These are general approaches to the construction of efficient solutions to problems. Such methods are of interest because:
• They provide templates suited to solving a broad range of diverse problems.
• They can be translated into common control and data structures provided by most high-level
languages.
• The temporal and spatial requirements of the algorithms which result can be precisely
analyzed.
Although more than one technique may be applicable to a specific problem, it is often the case that an
algorithm constructed by one approach is clearly superior to equivalent solutions built using alternative
techniques.

1. Brute Force

Brute force is a straightforward approach to solving a problem based on the problem’s statement and the definitions of the concepts involved. It is considered one of the easiest approaches to apply and is useful for solving small-size instances of a problem. Some examples of brute-force algorithms are (a C sketch of two of them follows the list):

• Computing aⁿ (a > 0, n a nonnegative integer) by multiplying a*a*…*a

• Computing n!

• Selection sort, Bubble sort

• Sequential search

• Exhaustive search: Traveling Salesman Problem, Knapsack problem.
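
As a concrete illustration, here is a minimal C sketch of two of the brute-force examples above: computing aⁿ by repeated multiplication, and sequential search. (The function names power and seq_search are our own, chosen for illustration.)

#include <stdio.h>

/* Brute-force a^n: multiply a by itself n times (a > 0, n >= 0). */
long power(long a, int n) {
    long result = 1;
    for (int i = 0; i < n; i++)
        result *= a;                      /* n multiplications in total */
    return result;
}

/* Sequential search: scan the array until key is found.
   Returns the index of key, or -1 if it is absent. */
int seq_search(const int arr[], int n, int key) {
    for (int i = 0; i < n; i++)
        if (arr[i] == key)
            return i;
    return -1;
}

int main(void) {
    int a[] = {4, 8, 15, 16, 23, 42};
    printf("2^10 = %ld\n", power(2, 10));              /* 1024 */
    printf("index of 23: %d\n", seq_search(a, 6, 23)); /* 4 */
    return 0;
}

Both functions simply follow the problem statement; neither exploits any structure of the input, which is the hallmark of brute force.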

2. Divide-and-Conquer, Decrease-and-Conquer

These are methods of designing algorithms that (informally) proceed as follows:

Given an instance of the problem to be solved, split this into several smaller sub-instances (of the same
problem), independently solve each of the sub-instances and then combine the sub-instance solutions so
as to yield a solution for the original instance. With the divide-and-conquer method the size of the problem instance is reduced by a factor (e.g. half the input size), while with the decrease-and-conquer method the size is reduced by a constant.

Examples of divide-and-conquer algorithms (a binary-search sketch follows the list):

• Computing aⁿ (a > 0, n a nonnegative integer) by recursion

• Binary search in a sorted array (recursion)

• Mergesort algorithm, Quicksort algorithm (recursion)

• The algorithm for solving the fake coin problem (recursion)
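
As an illustration, here is a minimal recursive binary search in C, assuming the array is sorted in ascending order (the function name binary_search is our own). Each call discards half of the remaining instance.

#include <stdio.h>

/* Binary search in arr[lo..hi] (sorted ascending).
   Returns the index of key, or -1 if it is not present. */
int binary_search(const int arr[], int lo, int hi, int key) {
    if (lo > hi)
        return -1;                    /* empty sub-instance */
    int mid = lo + (hi - lo) / 2;     /* split the instance in half */
    if (arr[mid] == key)
        return mid;
    if (key < arr[mid])
        return binary_search(arr, lo, mid - 1, key);  /* left half */
    return binary_search(arr, mid + 1, hi, key);      /* right half */
}

int main(void) {
    int a[] = {3, 7, 11, 19, 27, 31};
    printf("%d\n", binary_search(a, 0, 5, 19));  /* prints 3 */
    return 0;
}

Since the instance is halved at every step, the running time is O(log n).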


3. Greedy Algorithms: the "take what you can get now" strategy

The solution is constructed through a sequence of steps, each expanding a partially constructed solution
obtained so far. At each step the choice must be locally optimal – this is the central point of this
technique.

Greedy is a strategy that works well on optimization problems with the following characteristics:

1. Greedy-choice property: A global optimum can be arrived at by selecting a local optimum.

2. Optimal substructure: An optimal solution to the problem contains optimal solutions to subproblems.

The second property may make greedy algorithms look like dynamic programming. However, the two
techniques are quite different.
Examples:

• Minimal spanning tree

• Shortest distance in graphs

• Greedy algorithm for the Knapsack problem

• The coin exchange problem

• Huffman trees for optimal encoding

Greedy techniques are mainly used to solve optimization problems. They do not always give the best solution, as the coin-exchange sketch below illustrates.
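
Here is a minimal C sketch of the greedy coin-exchange strategy: always take the largest coin that still fits (the function name greedy_change is our own). With denominations such as {25, 10, 5, 1} this happens to be optimal; with arbitrary denominations (e.g. {4, 3, 1} and amount 6, where greedy uses 3 coins but 3+3 uses only 2) it is not.

#include <stdio.h>

/* Greedy change-making: repeatedly take the largest coin <= amount.
   coins[] must be sorted in decreasing order. Returns the coin count. */
int greedy_change(const int coins[], int ncoins, int amount) {
    int count = 0;
    for (int i = 0; i < ncoins && amount > 0; i++) {
        while (amount >= coins[i]) {   /* locally optimal choice */
            amount -= coins[i];
            count++;
        }
    }
    return count;
}

int main(void) {
    int us[] = {25, 10, 5, 1};
    printf("%d\n", greedy_change(us, 4, 63));  /* 25+25+10+1+1+1 = 6 coins */
    return 0;
}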

Examples of such greedy algorithms are:


• Kruskal's algorithm for finding minimum spanning trees

• Prim's algorithm for finding minimum spanning trees

• Huffman's algorithm for finding optimal Huffman trees.

• Greedy routing in networks (explained below).

Greedy algorithms appear in network routing as well. Using greedy routing, a message is forwarded to the
neighboring node which is "closest" to the destination. The notion of a node's location (and hence
"closeness") may be determined by its physical location, as in geographic routing used by ad hoc
networks. Location may also be an entirely artificial construct, as in small-world routing and distributed hash tables.

4. Dynamic Programming

One disadvantage of using Divide-and-Conquer is that the process of recursively solving separate sub-instances can result in the same computations being performed repeatedly, since identical sub-instances may arise.

Dynamic programming is used when the solution can be recursively described in terms of solutions to subproblems (optimal substructure). The algorithm finds solutions to subproblems and stores them in memory for later use. This is more efficient than brute-force methods, which solve the same subproblems over and over again.

• Optimal substructure:
Optimal solution to problem consists of optimal solutions to subproblems

• Overlapping subproblems:
Few subproblems in total, many recurring instances of each

• Bottom up approach:
Solve bottom-up, building a table of solved subproblems that are used to solve larger ones.

Examples:

• Fibonacci numbers computed by iteration (sketched below).

• Warshall’s algorithm implemented by iterations
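
A minimal C sketch of the first example, Fibonacci computed bottom-up: each subproblem F(i) is solved exactly once and stored in a table, instead of being recomputed repeatedly as naive recursion would do.

#include <stdio.h>

#define MAXN 92   /* F(92) is the largest Fibonacci number that fits in a long long */

/* Bottom-up dynamic programming: fill a table of solved subproblems. */
long long fib(int n) {
    long long f[MAXN + 1];
    f[0] = 0;
    f[1] = 1;
    for (int i = 2; i <= n; i++)
        f[i] = f[i - 1] + f[i - 2];   /* reuse stored subproblem solutions */
    return f[n];
}

int main(void) {
    printf("F(40) = %lld\n", fib(40));  /* 102334155, computed in O(n) time */
    return 0;
}

The naive recursive version solves the same overlapping subproblems exponentially many times; the table reduces this to O(n) additions.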

5. Backtracking methods

The method is used for state-space search problems. These are problems where the problem representation consists of:
• initial state

• goal state(s)

• a set of intermediate states

• a set of operators that transform one state into another. Each operator has preconditions and postconditions.

• a cost function – evaluates the cost of the operations (optional)

• a utility function – evaluates how close a given state is to the goal state (optional)

The solving process is based on the construction of a state-space tree, whose nodes represent states, whose root represents the initial state, and in which one or more leaves are goal states. Each edge is labeled with some operator.

If a node b is obtained from a node a as a result of applying the operator O, then b is a child of a
and the edge from a to b is labeled with O.

The solution is obtained by searching the tree until a goal state is found.

Backtracking uses depth-first search, usually without a cost function. The main algorithm is as follows:

1. Store the initial state in a stack

2. While the stack is not empty, do:

a. Pop a node from the stack.

b. While there are available operators do:

i. Apply an operator to generate a child

ii. If the child is a goal state – stop

iii. If it is a new state, push the child into the stack

The utility function is used to tell how close a given state is to the goal state and whether a given state may be considered a goal state.

If no children can be generated from a given node, then we backtrack: we pop the next node from the stack.
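
As a compact illustration of backtracking, here is a recursive N-queens solution counter in C (the recursive form is the common alternative to the explicit stack above; the names safe and solve are our own). Each recursion level places one queen, i.e. applies an operator; when no safe column exists in a row, the call returns and the search backtracks.

#include <stdio.h>

#define N 8

static int col[N];   /* col[r] = column of the queen placed in row r */

/* A queen at (row, c) is safe if no earlier queen shares its
   column or either diagonal. */
static int safe(int row, int c) {
    for (int r = 0; r < row; r++)
        if (col[r] == c || col[r] - r == c - row || col[r] + r == c + row)
            return 0;
    return 1;
}

/* Try every column in this row; recurse on success. Returning from
   a call undoes the choice, i.e. backtracks. */
static int solve(int row) {
    if (row == N)
        return 1;                /* all queens placed: a goal state */
    int count = 0;
    for (int c = 0; c < N; c++)
        if (safe(row, c)) {
            col[row] = c;        /* apply operator: place a queen */
            count += solve(row + 1);
        }
    return count;
}

int main(void) {
    printf("%d solutions for %d queens\n", solve(0), N);  /* 92 for N = 8 */
    return 0;
}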

6. Branch-and-bound

Branch and bound is used when we can evaluate each node using the cost and utility functions. At each step we choose the best node to proceed further. Branch-and-bound algorithms are implemented using a priority queue. The state-space tree is built in a breadth-first manner.

Example: the 8-puzzle problem. The cost function is the number of moves. The utility function evaluates how close a given state of the puzzle is to the goal state, e.g. by counting how many tiles are not in place (see the sketch below).
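
A minimal C sketch of just the evaluation step for the 8-puzzle (the type and function names are our own): a node is scored as f = g + h, where g is the number of moves made so far (the cost function) and h counts the tiles out of place (the utility function). A full branch-and-bound solver would keep the generated nodes in a priority queue ordered by f and always expand the node with the smallest score.

#include <stdio.h>

/* One node of the state-space tree; 0 denotes the blank tile. */
typedef struct {
    int tiles[9];
    int moves;        /* g: moves made so far (cost function) */
} Node;

static const int goal[9] = {1, 2, 3, 4, 5, 6, 7, 8, 0};

/* h: number of tiles not in place (utility function; blank excluded). */
static int misplaced(const Node *n) {
    int h = 0;
    for (int i = 0; i < 9; i++)
        if (n->tiles[i] != 0 && n->tiles[i] != goal[i])
            h++;
    return h;
}

/* The value used to order the priority queue: f = g + h. */
static int bound(const Node *n) {
    return n->moves + misplaced(n);
}

int main(void) {
    Node n = {{1, 2, 3, 4, 5, 6, 0, 7, 8}, 2};
    printf("g=%d h=%d f=%d\n", n.moves, misplaced(&n), bound(&n));  /* g=2 h=2 f=4 */
    return 0;
}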

ALGORITHM ANALYSIS

An algorithm is said to be efficient and fast if it takes less time to execute and consumes less memory space.

The performance of an algorithm is measured on the basis of the following properties:

1. Time Complexity

2. Space Complexity

Suppose X is an algorithm and n is the size of the input data. The time and space used by algorithm X are the two main factors which decide the efficiency of X.

• Time Factor − The time is measured by counting the number of key operations, such as comparisons in a sorting algorithm.

• Space Factor − The space is measured by counting the maximum memory space required by
the algorithm.
The complexity of an algorithm, f(n), gives the running time and/or storage space required by the algorithm in terms of n, the size of the input data.

Space Complexity

Space complexity of an algorithm represents the amount of memory space required by the algorithm during the course of its execution. Space complexity must be taken seriously for multi-user systems and in situations where limited memory is available.
Space required by an algorithm is equal to the sum of the following two components −

• A fixed part: the space required to store certain data and variables that are independent of the size of the problem. For example, simple variables and constants used, program size, etc.

• A variable part: the space required by variables whose size depends on the size of the problem. For example, dynamic memory allocation, recursion stack space, etc.

An algorithm generally requires space for following components:

• Instruction Space: the space required to store the executable version of the program. This space is fixed for a given program, but varies with the number of lines of code in the program.

• Data Space: the space required to store the values of all constants and variables.

• Environment Space: It is the space required to store the environment information needed to
resume the suspended function.

The space complexity S(P) of any algorithm P is S(P) = C + SP(I), where C is the fixed part and SP(I) is the variable part of the algorithm, which depends on instance characteristic I.
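
A small C illustration of the two parts, using two toy functions of our own: the iterative sum uses only a fixed amount of extra space, while the recursive version needs one stack frame per element, so its variable part grows with n.

#include <stdio.h>

/* Fixed part only: a constant number of variables, independent of n.
   Extra space: O(1). */
long sum_iter(const int a[], int n) {
    long s = 0;
    for (int i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* Variable part: each call adds a stack frame, so the recursion
   depth (and hence the stack space) is proportional to n: O(n). */
long sum_rec(const int a[], int n) {
    if (n == 0)
        return 0;
    return a[n - 1] + sum_rec(a, n - 1);
}

int main(void) {
    int a[] = {1, 2, 3, 4};
    printf("%ld %ld\n", sum_iter(a, 4), sum_rec(a, 4));  /* 10 10 */
    return 0;
}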

Time Complexity

The time complexity is a function that gives the amount of time required by an algorithm to run to
completion.

• Worst case time complexity: It is the function defined by the maximum amount of time needed by
an algorithm for an input of size n.
• Average case time complexity: The average-case running time of an algorithm is an estimate of
the running time for an “average” input. Computation of average-case running time entails
knowing all possible input sequences, the probability distribution of occurrence of these
sequences, and the running times for the individual sequences.

• Best case time complexity: It is the minimum amount of time that an algorithm requires for an
input of size n.

There are four rules to count the operations:

Rule 1: for loops - the size of the loop times the running time of the body

The running time of a for loop is at most the running time of the statements inside the loop times the
number of iterations.

for( i = 0; i < n; i++)

sum = sum + i;

a. Find the running time of statements when executed only once:

The statements in the loop heading have a fixed number of operations, hence they have constant running time O(1) when executed only once. The statement in the loop body has a fixed number of operations, hence it has a constant running time when executed only once.

b. Find how many times each statement is executed.

for( i = 0; i < n; i++)    // i = 0; executed only once: O(1)
                           // i < n; executed n + 1 times: O(n)
                           // i++; executed n times: O(n)
                           // total time of the loop heading:
                           // O(1) + O(n) + O(n) = O(n)
    sum = sum + i;         // executed n times: O(n)

The loop heading plus the loop body gives: O(n) + O(n) = O(n).

Loop running time: O(n)

Mathematical analysis of how many times the statements in the body are executed:

If

a) the size of the loop is n (the loop variable runs from 0, or some fixed constant, to n), and

b) the body has constant running time (no nested loops),

then the time is O(n).



Rule 2: Nested loops – the product of the size of the loops times the running time of the body

The total running time is the running time of the inside statements times the product of the sizes of all the
loops

sum = 0;

for( i = 0; i < n; i++)

for( j = 0; j < n; j++)

sum++;

Applying Rule 1 to the nested loop (the ‘j’ loop) we get O(n) for the body of the outer loop. The outer loop runs n times; therefore the total time for the nested loops is

O(n) * O(n) = O(n*n) = O(n²)

Analysis

What happens if the inner loop does not start from 0?


sum = 0;

for( i = 0; i < n; i++)

for( j = i; j < n; j++)

sum++;

Here, the number of times the inner loop is executed depends on the value of i:

i = 0, inner loop runs n times

i = 1, inner loop runs (n-1) times

i = 2, inner loop runs (n-2) times

…

i = n – 2, inner loop runs 2 times

i = n – 1, inner loop runs once.

Thus we get: (1 + 2 + … + n) = n*(n+1)/2, which is O(n²).

General rule for nested loops:

Running time is the product of the sizes of the loops times the running time of the body.

Example:

sum = 0;
for( i = 0; i < n; i++)

for( j = 0; j < 2*n; j++)

sum++;

We have one operation inside the loops, and the product of the sizes is n * 2n = 2n².

Hence the running time is O(2n²) = O(n²).

Note: if the body contains a function call, its running time has to be taken into consideration

sum = 0;
for( i = 0; i < n; i++)

for( j = 0; j < n; j++)

sum = sum + function(sum);


Assume that the running time of function(sum) is known to be log(n).

Then the total running time will be O(n² * log(n)).



Rule 3: Consecutive program fragments

The total running time is the maximum of the running times of the individual fragments.

sum = 0;
for( i = 0; i < n; i++)
sum = sum + i;
sum = 0;

for( i = 0; i < n; i++)

for( j = 0; j < 2*n; j++)

sum++;

The first loop runs in O(n) time, the second in O(n²) time; the maximum is O(n²).

Rule 4: If statement

if C

S1;
else

S2;

The running time is the running time of the condition C plus the maximum of the running times of S1 and S2.
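
For example (a fragment of our own, assuming n, sum and i are declared as in the earlier fragments): if one branch is O(1) and the other contains an O(n) loop, the if statement as a whole is counted as O(n).

if (n % 2 == 0)                  // condition C: O(1)
    sum = 0;                     // S1: O(1)
else
    for (i = 0; i < n; i++)      // S2: O(n)
        sum = sum + i;
// total: O(1) for C plus max(O(1), O(n)) = O(n)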

Summary

Steps in the analysis of non-recursive algorithms:

• Decide on a parameter n indicating the input size.

• Identify the algorithm’s basic operation.

• Check whether the number of times the basic operation is executed depends on some additional property of the input. If so, determine the worst, average, and best cases for input of size n.

• Count the number of operations using the rules above.
