Ignou MCS-031 Design and Analysis of Algorithms Solved Assignment 2012-13

Question 1: Using Insertion Sort, sort the following sequence in increasing order and do the analysis of the algorithm: 35, 37, 18, 15, 40, 12
Ans: The idea behind insertion sort is:
1. Put the first 2 items in correct relative order.
2. Insert the 3rd item in the correct place relative to the first 2.
3. Insert the 4th item in the correct place relative to the first 3.
4. etc.
As with selection sort, a nested loop is used; however, a different invariant holds: after the i-th time around the outer loop, the items in A[0]
through A[i-1] are in order relative to each other (but are not necessarily in their final places). Also, note that in order to insert an item into its
place in the (relatively) sorted part of the array, it is necessary to move some values to the right to make room.
Here's the code:
public static <E extends Comparable<E>> void insertionSort(E[] A) {
    int k, j;
    E tmp;
    int N = A.length;
    for (k = 1; k < N; k++) {
        tmp = A[k];
        j = k - 1;
        while ((j >= 0) && (A[j].compareTo(tmp) > 0)) {
            A[j+1] = A[j]; // move one value over one place to the right
            j--;
        }
        A[j+1] = tmp; // insert kth value in correct place relative
                      // to previous values
    }
}
Now consider the time complexity of insertion sort. Again, the inner loop can execute a different number of times for every iteration of the outer loop. In the worst case:

1st iteration of outer loop: inner executes 1 time
2nd iteration of outer loop: inner executes 2 times
3rd iteration of outer loop: inner executes 3 times
...
(N-1)st iteration of outer loop: inner executes N-1 times

So we get:
1 + 2 + ... + (N-1) = N(N-1)/2
which is still O(N^2).
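Applying this to the given sequence, the array after each outer-loop iteration is:

35, 37, 18, 15, 40, 12   (initial)
35, 37, 18, 15, 40, 12   (k = 1: 37 already in place)
18, 35, 37, 15, 40, 12   (k = 2: insert 18)
15, 18, 35, 37, 40, 12   (k = 3: insert 15)
15, 18, 35, 37, 40, 12   (k = 4: 40 already in place)
12, 15, 18, 35, 37, 40   (k = 5: insert 12)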
Question 2: Write a pseudocode for divide and conquer algorithm for finding the position of an array of n numbers and estimate
the number of key comparisons made by your algorithm.
Ans: Quicksort is the best example of the divide and conquer technique, so let's go through it once, paying attention to each step.
Quick sort
Quick sort was discovered by Tony Hoare in 1962. In this algorithm, the hard work is splitting the array into subsets so that merging the final
result is trivial.
1. Choose a pivot element from the array (any element can serve; here the first element is used).
2. Split the array into three sub-arrays containing the items less than the pivot, the pivot itself, and the items bigger than the pivot.
3. Recursively call the quicksort function on the first and last sub-arrays.
This is exactly what we do: we put the pivot element in its right position and then divide the array into two sub-arrays on either side of that pivot element. Each newly generated sub-array gets its own pivot element; the choice of the pivot element is completely up to the user. Through the recursive mechanism, quicksort is then called on these sub-arrays: their pivot elements are put in their respective positions and the arrays are divided further. This process continues until each sub-array is left with a single element, which is by itself in its right position.
Let me present a pictorial example to explain quicksort. We have to sort the list (11, 10, 8, 12, 4, 3, 9). The steps that the quicksort algorithm performs, using the divide and conquer technique, are as follows.

[Figure: successive partitioning steps for the list (11, 10, 8, 12, 4, 3, 9)]
Here's a more formal specification of the quicksort algorithm. The separate Partition subroutine takes the bounds of the array/sub-array as input and returns the post-partition pivot position as output.
Quicksort(A, p, r):
{
    if p < r:
        q = Partition(A, p, r) // call the partition function, which returns the split point
        Quicksort(A, p, q)     // call quicksort on the first sub-array
        Quicksort(A, q+1, r)   // call quicksort on the second sub-array
}
Partition(A, p, r):
{
    x = A[p]                   // take the first element as the pivot element
    i = p - 1
    j = r + 1
    while 1:
        repeat j = j - 1 until A[j] <= x   // scan from the right for a value <= pivot
        repeat i = i + 1 until A[i] >= x   // scan from the left for a value >= pivot
        if i < j:
            swap(A[i], A[j])
        else:
            return j
}
In the above code of Quicksort we input an array containing the numbers to be sorted, with n as the total count of the numbers in it. The Partition function returns the position about which the array/sub-array is to be partitioned again; this returned value is then used in the recursive calls made by Quicksort.
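As a concrete sketch of the pseudocode above, here is a runnable Java version (class and method names are illustrative, not part of the original answer):

public class QuickSort {
    // sorts A[p..r] using the Hoare-style scheme from the pseudocode above
    public static void quicksort(int[] A, int p, int r) {
        if (p < r) {
            int q = partition(A, p, r); // split point returned by Partition
            quicksort(A, p, q);         // sort the first sub-array
            quicksort(A, q + 1, r);     // sort the second sub-array
        }
    }

    private static int partition(int[] A, int p, int r) {
        int x = A[p];                        // first element as the pivot
        int i = p - 1;
        int j = r + 1;
        while (true) {
            do { j--; } while (A[j] > x);    // scan from the right for a value <= pivot
            do { i++; } while (A[i] < x);    // scan from the left for a value >= pivot
            if (i < j) {                     // out-of-place pair: exchange it
                int tmp = A[i]; A[i] = A[j]; A[j] = tmp;
            } else {
                return j;                    // scans crossed: j is the split point
            }
        }
    }

    public static void main(String[] args) {
        int[] data = {11, 10, 8, 12, 4, 3, 9};   // the example list used earlier
        quicksort(data, 0, data.length - 1);
        System.out.println(java.util.Arrays.toString(data)); // [3, 4, 8, 9, 10, 11, 12]
    }
}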
Question 3: Apply quicksort to sort the following list: Q U I C K S O R T in alphabetical order. Find the element whose position is
unchanged in the sorted list.
Ans:
#include <cctype>
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <iostream>
using namespace std;

const int DATA = 1000;

int compare(const void*, const void*);

int main()
{
    char list[DATA];
    cout << "Enter words:\n";
    fgets(list, DATA, stdin);
    list[strcspn(list, "\n")] = '\0';   // strip the trailing newline
    cout << "You entered:" << endl;
    cout << list << endl;
    int i = 0;
    while (list[i])
    {
        if (isupper(list[i]))
            list[i] = tolower(list[i]);  // normalize to lower case
        putchar(list[i]);
        i++;
    }
    cout << endl;
    // sort only the characters actually read, not the whole buffer
    qsort(list, strlen(list), sizeof(char), compare);
    cout << endl;
    cout << "The sorted list of letters:\n";
    cout << list << endl;
    return 0;
}

int compare(const void* pnum1, const void* pnum2)
// compares two characters: returns -1 if the first is smaller,
// 0 if they are equal, and 1 if the first is larger
{
    int num1 = *(const char*)pnum1; // cast from pointer to void
    int num2 = *(const char*)pnum2; // to pointer to char
    if (num1 < num2)
        return -1;
    else if (num1 == num2)
        return 0;
    else
        return 1;
}
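For the given list, the alphabetical order produced by the sort is C I K O Q R S T U. Comparing it position by position with the original Q U I C K S O R T shows that, for this particular list, no element ends up in the same position it started in.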
Question 4: Write Strassen's matrix multiplication algorithm for obtaining the product of two matrices.
Ans:
Given: Two N by N matrices A and B.
Problem: Compute C = A × B.

Brute Force
for i := 1 to N do
    for j := 1 to N do
        C[i,j] := 0;
        for k := 1 to N do
            C[i,j] := C[i,j] + A[i,k] * B[k,j]

O(N^3) multiplications
Divide and Conquer (Strassen's method)
Split each matrix into four N/2 by N/2 blocks and compute:
P1 = (A11 + A22)(B11 + B22)
P2 = (A21 + A22) B11
P3 = A11 (B12 - B22)
P4 = A22 (B21 - B11)
P5 = (A11 + A12) B22
P6 = (A21 - A11)(B11 + B12)
P7 = (A12 - A22)(B21 + B22)
C11 = P1 + P4 - P5 + P7
C12 = P3 + P5
C21 = P2 + P4
C22 = P1 + P3 - P2 + P6
From the recurrence T(n) = 7T(n/2) + O(n^2) this gives T(n) = O(n^(log2 7)) = O(n^2.81).
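As a minimal sketch (class and method names are illustrative, not part of the original answer), here is one level of this scheme written out in Java for 2 by 2 matrices; it uses the seven products P1..P7 above instead of the eight multiplications of the brute-force method:

public class Strassen2x2 {
    // one level of Strassen's method: multiplies two 2x2 matrices
    public static double[][] multiply(double[][] A, double[][] B) {
        double p1 = (A[0][0] + A[1][1]) * (B[0][0] + B[1][1]);
        double p2 = (A[1][0] + A[1][1]) * B[0][0];
        double p3 = A[0][0] * (B[0][1] - B[1][1]);
        double p4 = A[1][1] * (B[1][0] - B[0][0]);
        double p5 = (A[0][0] + A[0][1]) * B[1][1];
        double p6 = (A[1][0] - A[0][0]) * (B[0][0] + B[0][1]);
        double p7 = (A[0][1] - A[1][1]) * (B[1][0] + B[1][1]);
        return new double[][] {
            { p1 + p4 - p5 + p7, p3 + p5 },            // C11, C12
            { p2 + p4,           p1 + p3 - p2 + p6 }   // C21, C22
        };
    }
}

In the full algorithm the same combination step is applied recursively, with each Aij and Bij being an N/2 by N/2 block rather than a single number.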
Question 5: (i) Define DFS. Explain briefly how it differs from BFS.
Ans:
Depth-first search (DFS) is an algorithm for traversing or searching a tree or graph. One starts at the root (selecting some node as the root in the graph case) and explores as far as possible along each branch before backtracking.
Formally, DFS is an uninformed search that progresses by expanding the first child node of the search tree that appears, going deeper and deeper until a goal node is found or until it hits a node that has no children. Then the search backtracks, returning to the most recent node it hasn't finished exploring. In a non-recursive implementation, all freshly expanded nodes are added to a stack for exploration.
The time and space analysis of DFS differs according to its application area. In theoretical computer science, DFS is typically used to traverse an entire graph, and takes time O(|V| + |E|), linear in the size of the graph. In these applications it also uses space O(|V|) in the worst case to store the stack of vertices on the current search path as well as the set of already-visited vertices. Thus, in this setting, the time and space bounds are the same as for breadth-first search, and the choice between these two algorithms depends less on their complexity and more on the different properties of the vertex orderings the two algorithms produce.
Breadth-first search (BFS) is a graph search algorithm that begins at the root node and explores all the neighboring nodes. Then for each of those nearest nodes, it explores their unexplored neighbor nodes, and so on, until it finds the goal. BFS is an uninformed search method that aims to expand and examine all nodes of a graph by systematically searching through every solution. In other words, it exhaustively searches the entire graph without considering the goal until it finds it; it does not use a heuristic. From the standpoint of the algorithm, all child nodes obtained by expanding a node are added to a FIFO queue. In typical implementations, nodes that have not yet been examined for their neighbors are placed in a container called "open", and once examined they are placed in a container called "closed".
Algorithm
1. Enqueue the root node.
2. Dequeue a node and examine it.
   If the element sought is found in this node, quit the search and return a result.
   Otherwise enqueue any successors (the direct child nodes) that have not yet been discovered.
3. If the queue is empty, every node on the graph has been examined; quit the search and return "not found".
4. Repeat from step 2.
Breadth-first search is complete: if there is a solution, breadth-first search will find it regardless of the kind of graph. However, if the graph is infinite and there is no solution, breadth-first search will diverge. If the shallowest goal node is at some finite depth, say d, breadth-first search will eventually find it after expanding all shallower nodes.
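A minimal sketch of these steps in Java, assuming the same vertexCount field and isEdge(u, v) helper used by the Graph class in part (ii) below:

public int BFS(int root, int goal)
{
    boolean[] discovered = new boolean[vertexCount];
    java.util.Queue<Integer> open = new java.util.ArrayDeque<>();
    open.add(root);                        // step 1: enqueue the root node
    discovered[root] = true;
    while (!open.isEmpty()) {
        int u = open.remove();             // step 2: dequeue a node and examine it
        if (u == goal)
            return u;                      // element found: quit and return a result
        for (int v = 0; v < vertexCount; v++)
            if (isEdge(u, v) && !discovered[v]) {
                discovered[v] = true;      // enqueue undiscovered successors
                open.add(v);
            }
    }
    return -1;                             // step 3: queue empty, report "not found"
}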
(ii) Write pseudocode for DFS and calculate its time complexity
Ans:
public class Graph {

    enum VertexState {
        White, Gray, Black   // undiscovered, in progress, finished
    }

    public void DFS()
    {
        VertexState state[] = new VertexState[vertexCount];
        for (int i = 0; i < vertexCount; i++)
            state[i] = VertexState.White;   // mark every vertex undiscovered
        runDFS(0, state);                   // start the search at vertex 0
    }

    public void runDFS(int u, VertexState[] state)
    {
        state[u] = VertexState.Gray;        // u is currently being explored
        for (int v = 0; v < vertexCount; v++)
            if (isEdge(u, v) && state[v] == VertexState.White)
                runDFS(v, state);           // recurse into each undiscovered neighbor
        state[u] = VertexState.Black;       // all of u's neighbors are finished
    }
}
As noted in part (i), DFS used to traverse an entire graph represented by adjacency lists takes time O(|V| + |E|), linear in the size of the graph, since every vertex is visited once and every edge is examined a constant number of times; for the adjacency-matrix style implementation above, the neighbor scan over all vertices makes the running time O(|V|^2). In either case it uses space O(|V|) in the worst case to store the stack of vertices on the current search path as well as the set of already-visited vertices, the same bounds as for breadth-first search.

Question 6: Apply Kruskal's algorithm to find a minimal spanning tree, with an example.
Ans:
Kruskal's algorithm is an algorithm in graph theory that finds a minimum spanning tree for a connected weighted graph. This means it finds a subset of the edges that forms a tree that includes every vertex, where the total weight of all the edges in the tree is minimized. If the graph is not connected, then it finds a minimum spanning forest (a minimum spanning tree for each connected component). Kruskal's algorithm is an example of a greedy algorithm.
It works as follows:
create a forest F (a set of trees), where each vertex in the graph is a separate tree
create a set S containing all the edges in the graph
while S is nonempty and F is not yet spanning:
o remove an edge with minimum weight from S
o if that edge connects two different trees, then add it to the forest, combining two trees into a single tree
o otherwise discard that edge
At the termination of the algorithm, the forest has only one component and forms a minimum spanning tree of the graph.
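As an illustrative example (a small graph chosen here for illustration): take vertices {A, B, C, D} and weighted edges AB = 1, BC = 2, CD = 3, AC = 4, BD = 5. Scanning the edges in increasing order of weight: AB (1) connects two separate trees, so it is added; BC (2) and CD (3) likewise attach new vertices and are added; AC (4) and BD (5) would each connect two vertices already in the same tree, so they are discarded. The resulting spanning tree {AB, BC, CD} has total weight 1 + 2 + 3 = 6, which is minimal.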
Question 7: Arrange the following growth rates in increasing order:
O(3^n), O(n^2), O(1), O(n log n)
Ans:
Take the following list of functions and arrange them in ascending order of growth rate. That is, if function g(n) immediately follows function f(n) in your list, then it should be the case that f(n) is O(g(n)).

g1(n) = 2^n
g2(n) = n^(4/3)
g3(n) = n (log n)^3
g4(n) = n^(log n)
g5(n) = 2^(2^n)
g6(n) = 2^(n^2)

Solution:
Here are the functions ordered in ascending order of growth rate:

g3(n) = n (log n)^3
g2(n) = n^(4/3)
g4(n) = n^(log n)
g1(n) = 2^n
g6(n) = 2^(n^2)
g5(n) = 2^(2^n)

One way to figure out that g4(n) = n^(log n) is O(2^n) is by taking the log of both functions. Remember that log is a monotonically increasing function, and note that g4(n) = n^(log n) and g1(n) = 2^n are both greater than 2 for large values of n. Taking logs of these functions we get log(n^(log n)) = (log n)^2, which grows slower than log(2^n) = n, and so g4(n) = n^(log n) is O(2^n).
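For the specific growth rates asked about, the increasing order is O(1) < O(n log n) < O(n^2) < O(3^n): a constant grows slowest, n log n is subquadratic, n^2 is polynomial, and the exponential 3^n eventually dominates every polynomial.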
Question 8: Using the Principle of Mathematical Induction, prove that the sum 2^0 + 2^1 + ... + 2^n is 2^(n+1) - 1 for all n >= 1.
Ans:
The principle of mathematical induction is usually stated as an axiom of the natural numbers; see the Peano axioms. However, it can be proved in some logical systems. For instance, it can be proved if one assumes:

The set of natural numbers is well-ordered.
Every natural number is either zero, or n+1 for some natural number n.
For any natural number n, n+1 is greater than n.

To derive simple induction from these axioms, we must show that if P(n) is some proposition predicated of n, and if:

P(0) holds, and
whenever P(k) is true then P(k+1) is also true,

then P(n) holds for all n.
Proof. Let S be the set of all natural numbers for which P(n) is false. Let us see what happens if we assert that S is nonempty. Well-ordering
tells us that S has a least element, say t. Moreover, since P(0) is true, t is not 0. Since every natural number is either zero or some n+1,
there is some natural number n such that n+1=t. Now n is less than t, and t is the least element of S. It follows that n is not in S, and so P(n)
is true. This means that P(n+1) is true, and so P(t) is true. This is a contradiction, since t was in S. Therefore, S is empty.
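Applying the principle to the given sum: let P(n) be the statement that 2^0 + 2^1 + ... + 2^n = 2^(n+1) - 1. For the base case n = 1, we have 2^0 + 2^1 = 1 + 2 = 3 = 2^2 - 1, so P(1) holds. For the inductive step, assume P(k) holds for some k >= 1, i.e. 2^0 + 2^1 + ... + 2^k = 2^(k+1) - 1. Adding 2^(k+1) to both sides gives 2^0 + ... + 2^k + 2^(k+1) = 2^(k+1) - 1 + 2^(k+1) = 2 * 2^(k+1) - 1 = 2^(k+2) - 1, which is exactly P(k+1). By the principle of mathematical induction, the sum 2^0 + 2^1 + ... + 2^n equals 2^(n+1) - 1 for all n >= 1.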
Question 9: Define Knapsack Problem and cite one instance of the problem.
Ans:
The multidimensional knapsack problem (MKP) is a well-known, strongly NP-hard problem and one of the most challenging problems in the class of knapsack problems. In the last few years it has been a favorite playground for metaheuristics, but very few contributions have appeared on exact methods. One exact approach is based on the optimal solution of subproblems limited to a subset of variables. Each subproblem is faced through a recursive variable-fixing process that continues until the number of variables falls below a given threshold (the restricted core problem). The solution space of the restricted core problem is split into subspaces, each containing solutions of a given cardinality, and each subspace is then explored with a branch-and-bound algorithm, with pruning conditions introduced to improve the efficiency of the branch-and-bound routine. In the tested instances, this method was shown to be, on average, more efficient than a recent branch-and-bound method using reduced-cost propagation in an efficient implicit enumeration for the 0-1 multidimensional knapsack problem.
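More basically, the (0-1) knapsack problem is: given n items, where item i has weight wi and value vi, and a knapsack of capacity W, choose a subset of the items that maximizes the total value subject to the total weight not exceeding W. One instance (with numbers chosen here for illustration): W = 10 and four items with (weight, value) pairs (5, 10), (4, 40), (6, 30) and (3, 50); the optimal choice is the items of weights 4 and 3, giving total value 40 + 50 = 90 at total weight 7 <= 10.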
Question 10: Explain the essential idea of Dynamic Programming. How does Dynamic Programming differ from the Divide and Conquer approach for solving problems?
Ans:
Dynamic programming is a method of solving complex problems by breaking them down into simpler steps. It is applicable to problems that exhibit the properties of overlapping subproblems which are only slightly smaller, and optimal substructure (described below). When applicable, the method takes much less time than naive methods.
Top-down dynamic programming simply means storing the results of certain calculations, which are then re-used later because the same calculation is a sub-problem in a larger calculation. Bottom-up dynamic programming involves formulating a complex calculation as a recursive series of simpler calculations. Dynamic programming is both a mathematical optimization method and a computer programming method. In both contexts, it refers to simplifying a complicated problem by breaking it down into simpler subproblems in a recursive manner. While some decision problems cannot be taken apart this way, decisions that span several points in time do often break apart recursively; Bellman called this the "Principle of Optimality". Likewise, in computer science, a problem which can be broken down recursively is said to have optimal substructure.
If subproblems can be nested recursively inside larger problems, so that dynamic programming methods are applicable, then there is a relation between the value of the larger problem and the values of the subproblems. In the optimization literature this relationship is called the Bellman equation.
Dynamic programming in mathematical optimization
In terms of mathematical optimization, dynamic programming usually refers to a simplification of a decision by breaking it down into a sequence of decision steps over time. This is done by defining a sequence of value functions V1, V2, ..., Vn, with an argument y representing the state of the system at times i from 1 to n. The definition of Vn(y) is the value obtained in state y at the last time n. The values Vi at earlier times i = n-1, n-2, ..., 2, 1 can be found by working backwards, using a recursive relationship called the Bellman equation. For i = 2, ..., n, Vi-1 at any state y is calculated from Vi by maximizing a simple function (usually the sum) of the gain from decision i-1 and the function Vi at the new state of the system if this decision is made. Since Vi has already been calculated for the needed states, the above operation yields Vi-1 for all the needed states. Finally, V1 at the initial state of the system is the value of the optimal solution. The optimal values of the decision variables can be recovered, one by one, by tracking back the calculations already performed.
Dynamic programming in computer programming
There are two key attributes that a problem must have in order for dynamic programming to be applicable: optimal substructure and overlapping subproblems which are only slightly smaller. When the overlapping subproblems are, say, half the size of the original problem, the strategy is called "divide and conquer" rather than dynamic programming. This is why merge sort, quick sort, and finding all matches of a regular expression are not classified as dynamic programming problems.
Optimal substructure means that the solution to a given optimization problem can be obtained by the combination of optimal solutions to its
subproblems. Consequently, the first step towards devising a dynamic programming solution is to check whether the problem exhibits such
optimal substructure. Such optimal substructures are usually described by means of recursion. For example, given a graph G=(V,E), the
shortest path p from a vertex u to a vertex v exhibits optimal substructure: take any intermediate vertex w on this shortest path p. If p is truly
the shortest path, then the path p1 from u to w and p2 from w to v are indeed the shortest paths between the corresponding vertices (by the
simple cut-and-paste argument described in CLRS). Hence, one can easily formulate the solution for finding shortest paths in a recursive
manner, which is what the Bellman-Ford algorithm does.
Overlapping subproblems means that the space of subproblems must be small; that is, any recursive algorithm solving the problem should solve the same subproblems over and over, rather than generating new subproblems. For example, consider the recursive formulation for generating the Fibonacci series: Fi = Fi-1 + Fi-2, with base case F1 = F2 = 1. Then F43 = F42 + F41, and F42 = F41 + F40. Now F41 is being solved in the recursive subtrees of both F43 and F42. Even though the total number of subproblems is actually small (only 43 of them), we end up solving the same problems over and over if we adopt a naive recursive solution such as this. Dynamic programming takes account of this fact and solves each subproblem only once. Note that the subproblems must be only slightly smaller (typically taken to mean a constant additive factor) than the larger problem; when they are a multiplicative factor smaller, the problem is no longer classified as dynamic programming.
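As a minimal sketch of top-down dynamic programming applied to the Fibonacci example above (class and method names are illustrative):

import java.util.HashMap;
import java.util.Map;

public class Fib {
    private static final Map<Integer, Long> memo = new HashMap<>();

    // memoized Fibonacci: each subproblem Fi is solved exactly once
    public static long fib(int n) {
        if (n <= 2) return 1;               // base case F1 = F2 = 1
        Long cached = memo.get(n);
        if (cached != null) return cached;  // reuse an already-solved subproblem
        long result = fib(n - 1) + fib(n - 2);
        memo.put(n, result);                // store the result for later re-use
        return result;
    }

    public static void main(String[] args) {
        System.out.println(fib(43));        // 433494437, computed with only 43 subproblems
    }
}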
