DAA (UNIT 3)

EXHAUSTIVE AND BRUTE-FORCE SEARCH

Exhaustive search is simply a brute-force approach to combinatorial problems
(minimization or maximization in optimization problems, and constraint satisfaction problems).

Brute force is a straightforward approach to solving a problem, usually directly based on
the problem statement and definitions of the concepts involved.
Selection sort, bubble sort, sequential search, string matching, depth-first search and
breadth-first search, and the closest-pair and convex-hull problems can all be solved by brute force.
Examples:
1. Computing a^n : a * a * a * ... * a (n times)
2. Computing n! : n! = n * (n - 1) * ... * 3 * 2 * 1
3. Multiplying two matrices : C = AB
4. Searching for a key in a list of elements (sequential search)
Advantages:
1. Brute force is applicable to a very wide variety of problems.
2. It is very useful for solving small-size instances of a problem, even though it is
inefficient in general.
3. For sorting, searching, and string matching, the brute-force approach yields reasonable
algorithms of at least some practical value, with no limitation on instance size.
Selection Sort
 First scan the entire given list to find its smallest element and exchange it with the first
element, putting the smallest element in its final position in the sorted list.
 Then scan the list, starting with the second element, to find the smallest among the last n − 1
elements and exchange it with the second element, putting the second smallest element in its
final position in the sorted list.
 Generally, on the ith pass through the list, which we number from 0 to n − 2, the algorithm
searches for the smallest item among the last n − i elements and swaps it with A_i:

A_0 ≤ A_1 ≤ . . . ≤ A_{i−1} | A_i, . . . , A_min, . . . , A_{n−1}
in their final positions    | the last n − i elements
 After n − 1 passes, the list is sorted.

ALGORITHM SelectionSort(A[0..n − 1])
//Sorts a given array by selection sort
//Input: An array A[0..n − 1] of orderable elements
//Output: Array A[0..n − 1] sorted in nondecreasing order
for i ← 0 to n − 2 do
    min ← i
    for j ← i + 1 to n − 1 do
        if A[j] < A[min]
            min ← j
    swap A[i] and A[min]
The analysis of selection sort is straightforward. The input size is given by the number of
elements n; the basic operation is the key comparison A[j] < A[min]. The number of times it is
executed depends only on the array size and is given by the following sum:

C(n) = Σ (i = 0 to n−2) Σ (j = i+1 to n−1) 1 = Σ (i = 0 to n−2) [(n − 1) − (i + 1) + 1]
     = Σ (i = 0 to n−2) (n − 1 − i) = (n − 1)n / 2

Thus, selection sort is a Θ(n²) algorithm on all inputs.


Note: The number of key swaps is only Θ(n), or, more precisely n – 1.
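The pseudocode above translates directly into Python; the following is a minimal sketch (the function name and in-place list interface are illustrative, not part of the original notes):

```python
def selection_sort(a):
    """Sort list a in nondecreasing order in place by selection sort."""
    n = len(a)
    for i in range(n - 1):            # passes 0 .. n-2
        min_idx = i
        for j in range(i + 1, n):     # find the smallest among the last n - i elements
            if a[j] < a[min_idx]:
                min_idx = j
        a[i], a[min_idx] = a[min_idx], a[i]   # exactly one swap per pass
    return a
```

Note that the single swap per pass is what keeps the number of key swaps at n − 1.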

Bubble Sort
The bubble sort algorithm compares adjacent elements of the list and exchanges them
if they are out of order. By doing this repeatedly, we end up "bubbling up" the largest element to the
last position on the list. The next pass bubbles up the second largest element, and so on, until after n
− 1 passes the list is sorted. Pass i (0 ≤ i ≤ n − 2) of bubble sort can be represented by the
following:

A_0, . . . , A_j ↔? A_{j+1}, . . . , A_{n−i−1} | A_{n−i} ≤ . . . ≤ A_{n−1}
ALGORITHM BubbleSort(A[0..n − 1])
//Sorts a given array by bubble sort
//Input: An array A[0..n − 1] of orderable elements
//Output: Array A[0..n − 1] sorted in nondecreasing order
for i ← 0 to n − 2 do
    for j ← 0 to n − 2 − i do
        if A[j + 1] < A[j] swap A[j] and A[j + 1]
The action of the algorithm on the list 89, 45, 68, 90, 29, 34, 17 is illustrated as an example.

The number of key comparisons for the bubble-sort version given above is the same for all arrays
of size n; it is obtained by a sum that is almost identical to the sum for selection sort:

C(n) = Σ (i = 0 to n−2) Σ (j = 0 to n−2−i) 1 = Σ (i = 0 to n−2) [(n − 2 − i) − 0 + 1]
     = Σ (i = 0 to n−2) (n − 1 − i) = (n − 1)n / 2

The number of key swaps, however, depends on the input. In the worst case of decreasing
arrays, it is the same as the number of key comparisons:

S_worst(n) = C(n) = (n − 1)n / 2 ∈ Θ(n²)
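The pass structure above can be sketched in Python (the function name is illustrative):

```python
def bubble_sort(a):
    """Sort list a in nondecreasing order in place by bubble sort."""
    n = len(a)
    for i in range(n - 1):            # passes 0 .. n-2
        for j in range(n - 1 - i):    # compare adjacent pairs A[j], A[j+1]
            if a[j + 1] < a[j]:
                a[j], a[j + 1] = a[j + 1], a[j]   # "bubble up" the larger element
    return a
```

After pass i, the last i + 1 elements are in their final sorted positions.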
2.2 EXHAUSTIVE SEARCH
For discrete problems in which no efficient solution method is known, it might be necessary
to test each possibility sequentially in order to determine if it is the solution. Such
exhaustive examination of all possibilities is known as exhaustive search, complete search or direct
search.
Exhaustive search is simply a brute force approach to combinatorial problems (Minimization
or maximization of optimization problems and constraint satisfaction problems).
Reason to choose brute-force / exhaustive search approach as an important algorithm design
strategy
1. First, unlike some of the other strategies, brute force is applicable to a very wide
variety of problems. In fact, it seems to be the only general approach for which it is
more difficult to point out problems it cannot tackle.
2. Second, for some important problems, e.g., sorting, searching, matrix multiplication,
string matching the brute-force approach yields reasonable algorithms of at least some
practical value with no limitation on instance size.
3. Third, the expense of designing a more efficient algorithm may be unjustifiable if only
a few instances of a problem need to be solved and a brute-force algorithm can solve
those instances with acceptable speed.
4. Fourth, even if too inefficient in general, a brute-force algorithm can still be useful
for solving small-size instances of a problem.

Exhaustive Search is applied to the important problems like


 Traveling Salesman Problem
 Knapsack Problem
 Assignment Problem.

TRAVELING SALESMAN PROBLEM


The traveling salesman problem (TSP) is one of the combinatorial problems. The problem
asks to find the shortest tour through a given set of n cities that visits each city exactly once before
returning to the city where it started.

The problem can be conveniently modeled by a weighted graph, with the graph’s vertices
representing the cities and the edge weights specifying the distances. Then the problem can be stated
as the problem of finding the shortest Hamiltonian circuit of the graph. (A Hamiltonian circuit is
defined as a cycle that passes through all the vertices of the graph exactly once).
A Hamiltonian circuit can also be defined as a sequence of n + 1 adjacent vertices
vi0, vi1, . . . , vin−1, vi0, where the first vertex of the sequence is the same as the last one and all the
other n − 1 vertices are distinct. All circuits start and end at one particular vertex
Tour                                Length
a ---> b ---> c ---> d ---> a       l = 2 + 8 + 1 + 7 = 18
a ---> b ---> d ---> c ---> a       l = 2 + 3 + 1 + 5 = 11 (optimal)
a ---> c ---> b ---> d ---> a       l = 5 + 8 + 3 + 7 = 23
a ---> c ---> d ---> b ---> a       l = 5 + 1 + 3 + 2 = 11 (optimal)
a ---> d ---> b ---> c ---> a       l = 7 + 3 + 8 + 5 = 23
a ---> d ---> c ---> b ---> a       l = 7 + 1 + 8 + 2 = 18
Solution to a small instance of the traveling salesman problem by exhaustive search.

Time efficiency
 We can get all the tours by generating all the permutations of the n − 1 intermediate cities
from a particular city, i.e., (n − 1)! tours.
 Consider two intermediate vertices, say, b and c, and take only permutations in which b
precedes c. (This trick implicitly defines a tour's direction.)
 An inspection of Figure 2.4 reveals three pairs of tours that differ only by their direction.
Hence, we can cut the number of vertex permutations in half, because the total length of a cycle
is the same in both directions.
 The total number of permutations needed is still (1/2)(n − 1)!, which makes the exhaustive-
search approach impractical for large n. It is useful only for very small values of n.
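The exhaustive search over all (n − 1)! tours can be sketched in Python with itertools.permutations. The distance matrix below encodes the four-city instance from the tour list (labeling a = 0, b = 1, c = 2, d = 3); the function name is illustrative:

```python
from itertools import permutations

def tsp_brute_force(dist):
    """Exhaustive search: try every tour that starts and ends at vertex 0.

    dist is a symmetric n x n matrix of intercity distances.
    Returns (best_tour, best_length).
    """
    n = len(dist)
    best_tour, best_len = None, float("inf")
    for perm in permutations(range(1, n)):        # permute the n-1 intermediate cities
        tour = (0,) + perm + (0,)
        length = sum(dist[tour[k]][tour[k + 1]] for k in range(n))
        if length < best_len:
            best_tour, best_len = tour, length
    return best_tour, best_len

# The instance from the tour table above (a=0, b=1, c=2, d=3)
dist = [[0, 2, 5, 7],
        [2, 0, 8, 3],
        [5, 8, 0, 1],
        [7, 3, 1, 0]]
```

On this instance the search finds a tour of length 11, matching the optimal tours in the table.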

KNAPSACK PROBLEM
Given n items of known weights w1, w2, . . . , wn and values v1, v2, . . . , vn and a knapsack of
capacity W, find the most valuable subset of the items that fit into the knapsack.

Real time examples:


 A Thief who wants to steal the most valuable loot that fits into his knapsack,
 A transport plane that has to deliver the most valuable set of items to a remote location
without exceeding the plane’s capacity.

The exhaustive-search approach to this problem leads to generating all the subsets of the set
of n items given, computing the total weight of each subset in order to identify feasible subsets (i.e.,
the ones with the total weight not exceeding the knapsack capacity), and finding a subset of the largest
value among them.
Instance of the knapsack problem.

Subset Total weight Total value


Φ 0 $0
{1} 7 $42
{2} 3 $12
{3} 4 $40
{4} 5 $25
{1, 2} 10 $54
{1, 3} 11 not feasible
{1, 4} 12 not feasible
{2, 3} 7 $52
{2, 4} 8 $37
{3, 4} 9 $65 (Maximum-Optimum)
{1, 2, 3} 14 not feasible
{1, 2, 4} 15 not feasible
{1, 3, 4} 16 not feasible
{ 2, 3, 4} 12 not feasible
{1, 2, 3, 4} 19 not feasible
knapsack problem’s solution by exhaustive search. The information about the optimal
selection is in bold.
Time efficiency: Since the number of subsets of an n-element set is 2^n, the exhaustive search
leads to a Ω(2^n) algorithm, no matter how efficiently individual subsets are generated.
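The subset-generation approach above can be sketched in Python with itertools.combinations; the function name and the 0-based item indices are illustrative:

```python
from itertools import combinations

def knapsack_brute_force(weights, values, capacity):
    """Try all 2^n subsets; keep the feasible subset of largest total value."""
    n = len(weights)
    best_subset, best_value = (), 0
    for r in range(n + 1):                         # subsets of every size 0 .. n
        for subset in combinations(range(n), r):
            w = sum(weights[i] for i in subset)
            v = sum(values[i] for i in subset)
            if w <= capacity and v > best_value:   # feasible and better
                best_subset, best_value = subset, v
    return best_subset, best_value
```

For the tabulated instance (weights 7, 3, 4, 5; values $42, $12, $40, $25; W = 10), this returns items {3, 4} (indices 2 and 3) with value $65, matching the optimum marked in the table.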

Note: Exhaustive search of both the traveling salesman and knapsack problems leads to extremely
inefficient algorithms on every input. In fact, these two problems are the best-known examples of
NP-hard problems. No polynomial-time algorithm is known for any NP-hard problem. Moreover,
most computer scientists believe that such algorithms do not exist. Some sophisticated approaches,
such as backtracking and branch-and-bound, enable us to solve some, but not all, instances of
these problems in less than exponential time. Alternatively, we can use one of many approximation
algorithms.
Graphs
A graph is a collection of vertices and edges (arcs) that connect the vertices. A graph G is
represented as G = (V, E), where V is the set of vertices and E is the set of edges.

Graph Representations

Graph data structure is represented using following representations

1. Adjacency Matrix

2. Adjacency List

1. Adjacency Matrix

In this representation, a graph is represented using a matrix of size (total number of vertices) ×
(total number of vertices); e.g., a graph with 4 vertices can be represented using a 4 × 4 matrix.

In this matrix, rows and columns both represent vertices.

This matrix is filled with either 1 or 0. Here, 1 represents there is an edge from row vertex to column vertex and 0
represents there is no edge from row vertex to column vertex.

Adjacency Matrix: Let G = (V, E) with n vertices, n ≥ 1. The adjacency matrix of G is a 2-dimensional n × n
matrix A with A(i, j) = 1 iff (vi, vj) ∈ E(G) (⟨vi, vj⟩ for a digraph), and A(i, j) = 0 otherwise.

Example: adjacency matrix of an undirected graph, and of a directed graph.

The adjacency matrix for an undirected graph is symmetric; the adjacency matrix for a digraph need not be
symmetric.

Merits of Adjacency Matrix:

From the adjacency matrix, it is easy to determine which vertices are connected.

The degree of a vertex i is Σ (j = 0 to n−1) adj_mat[i][j].

For a digraph, the row sum is the out-degree, while the column sum is the in-degree:

ind(vi) = Σ (j = 0 to n−1) A[j, i]        outd(vi) = Σ (j = 0 to n−1) A[i, j]

The space needed to represent a graph using an adjacency matrix is n² bits. To identify the edges in a graph,
adjacency matrices require at least O(n²) time.
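The degree formulas above can be checked with a short Python sketch (the function name and the example digraph are illustrative):

```python
def degrees(adj):
    """Given an n x n adjacency matrix, return (out_degrees, in_degrees).

    Row sums give out-degrees, column sums give in-degrees; for an
    undirected graph both sums equal the vertex degree.
    """
    n = len(adj)
    out_deg = [sum(adj[i][j] for j in range(n)) for i in range(n)]   # row sums
    in_deg = [sum(adj[j][i] for j in range(n)) for i in range(n)]    # column sums
    return out_deg, in_deg
```

For the digraph with edges 0→1, 0→2, 1→2, the matrix [[0,1,1],[0,0,1],[0,0,0]] gives out-degrees [2, 1, 0] and in-degrees [0, 1, 2].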
Adjacency List
In this representation, every vertex of the graph has a list of its adjacent vertices. The n rows of the adjacency
matrix are represented as n chains. The nodes in chain i represent the vertices that are adjacent to vertex i.

It can be represented in two forms. In one form, an array is used to store the n vertices and a chain is used to store
their adjacencies. Example:

With this structure we can access the adjacency list for any vertex in O(1) time: AdjList[i] is a pointer to the first
node in the adjacency list for vertex i.
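A minimal Python sketch of building such an adjacency list from an edge list, using nested lists in place of chains (the helper name is illustrative):

```python
def build_adj_list(n, edges, directed=False):
    """Build an adjacency list for n vertices from a list of (u, v) edges.

    adj[i] holds the vertices adjacent to vertex i, playing the role of chain i.
    """
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        if not directed:          # undirected edge appears in both chains
            adj[v].append(u)
    return adj
```

Indexing adj[i] gives the whole adjacency list of vertex i in O(1) time, as described above.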

Graph Traversal
There are two types of graph traversal, they are
1. Depth First Traversal (DFS)
2. Breadth First Traversal (BFS)

Depth First Traversal (DFS)

DFS traversal of a graph produces a spanning tree as its final result. A spanning tree is a
subgraph that contains all the vertices of the graph and has no cycles (loops). We use a Stack data
structure, with maximum size equal to the total number of vertices in the graph, to implement DFS
traversal of a graph.

We use the following steps to implement DFS traversal...

Step 1: Define a Stack of size equal to the total number of vertices in the graph.

Step 2: Select any vertex as the starting point for traversal. Visit that vertex and push it onto the Stack.

Step 3: Visit any one unvisited vertex adjacent to the vertex on top of the stack, and push
it onto the stack.

Step 4: Repeat step 3 until there is no new vertex to visit from the vertex on top of the stack.

Step 5: When there is no new vertex to visit, backtrack and pop one vertex from the stack.

Step 6: Repeat steps 3, 4 and 5 until the stack becomes empty.

Step 7: When the stack becomes empty, produce the final spanning tree by removing unused edges from the graph.

Example:
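The stack-based steps above can be sketched in Python; this version records the order in which vertices are visited, assuming the adjacency-list representation described earlier (the function name is illustrative):

```python
def dfs(adj, start):
    """Iterative DFS using an explicit stack; returns the visit order."""
    visited = {start}
    order = [start]                   # Step 2: visit the start vertex
    stack = [start]                   # ... and push it onto the stack
    while stack:                      # Step 6: repeat until the stack is empty
        top = stack[-1]
        for v in adj[top]:            # Step 3: find an unvisited adjacent vertex
            if v not in visited:
                visited.add(v)
                order.append(v)
                stack.append(v)
                break
        else:                         # Step 5: no new vertex -- backtrack (pop)
            stack.pop()
    return order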
Breadth-First Search
BFS traversal of a graph likewise produces a spanning tree as its final result. We use a Queue data
structure, with maximum size equal to the total number of vertices in the graph, to implement BFS
traversal of a graph.
In a breadth-first search, we begin by visiting the start vertex v. Next, all unvisited vertices
adjacent to v are visited. Unvisited vertices adjacent to these newly visited vertices are then
visited, and so on.

We use the following steps to implement BFS traversal...


Step 1: Define a Queue of size equal to the total number of vertices in the graph.
Step 2: Select any vertex as the starting point for traversal. Visit that vertex and insert it into the Queue.
Step 3: Visit all unvisited vertices adjacent to the vertex at the front of the Queue, and insert
them into the Queue.
Step 4: When there is no new vertex to visit from the vertex at the front of the Queue, delete that vertex from
the Queue.
Step 5: Repeat steps 3 and 4 until the queue becomes empty.
Step 6: When the queue becomes empty, produce the final spanning tree by removing unused edges from the graph.
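The queue-based steps above can be sketched in Python with collections.deque, again assuming the adjacency-list representation (the function name is illustrative):

```python
from collections import deque

def bfs(adj, start):
    """BFS using an explicit queue; returns the visit order."""
    visited = {start}
    order = [start]                   # Step 2: visit the start vertex
    queue = deque([start])            # ... and insert it into the queue
    while queue:                      # Step 5: repeat until the queue is empty
        front = queue[0]
        for v in adj[front]:          # Step 3: visit all unvisited neighbors
            if v not in visited:
                visited.add(v)
                order.append(v)
                queue.append(v)
        queue.popleft()               # Step 4: delete the front vertex
    return order
```

Unlike DFS, which follows one path as deep as possible before backtracking, BFS visits all neighbors of a vertex before moving on.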
Linear Search:
It is the process of finding the location of an element in a linear array.
This is a simple method in which the element to be searched is
compared with each element of the array one by one, from the beginning to
the end of the array. Since searching proceeds one element after the other,
it is also called sequential search.
Algorithm:
Step 1: Loc = -1
Step 2: For p = 0 to n-1
            if (a[p] = ele)
                Loc = p
                Go to Step 3
        End For
Step 3: if (Loc >= 0)
            Print "Loc"
        Else
            Print "Search is Unsuccessful"
        End If
Step 4: Exit
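The algorithm above can be sketched in Python; returning -1 plays the role of Loc = -1 for an unsuccessful search (the function name is illustrative):

```python
def linear_search(a, ele):
    """Return the index of the first occurrence of ele in a, or -1 if absent."""
    for p in range(len(a)):       # compare with each element, one after the other
        if a[p] == ele:
            return p              # found: this is the location Loc
    return -1                     # search is unsuccessful
```

Note that the index 0 is a valid location, which is why the algorithm's success test must be Loc >= 0 rather than Loc > 0.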