Ignou Mcs 031 Solved Paper
Question 1: Using Insertion Sort, sort the following sequence in increasing order and do the analysis of the algorithm: 35, 37, 18, 15, 40, 12
Ans: The idea behind insertion sort is:
1. Put the first 2 items in correct relative order.
2. Insert the 3rd item in the correct place relative to the first 2.
3. Insert the 4th item in the correct place relative to the first 3.
4. etc.
As with selection sort, a nested loop is used; however, a different invariant holds: after the i-th time around the outer loop, the items in A[0]
through A[i-1] are in order relative to each other (but are not necessarily in their final places). Also, note that in order to insert an item into its
place in the (relatively) sorted part of the array, it is necessary to move some values to the right to make room.
Here's the code:
public static <E extends Comparable<E>> void insertionSort(E[] A) {
    int k, j;
    E tmp;
    int N = A.length;
    for (k = 1; k < N; k++) {
        tmp = A[k];                // the value to insert on this pass
        j = k - 1;
        while ((j >= 0) && (A[j].compareTo(tmp) > 0)) {
            A[j+1] = A[j];         // move one value over one place to the right
            j--;
        }
        A[j+1] = tmp;              // insert the kth value in its correct place
                                   // relative to the previous values
    }
}
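As a quick check on the question's data, here is a minimal driver for the method above (assumed to be defined in the same class; the trace comments are mine):

public static void main(String[] args) {
    Integer[] A = {35, 37, 18, 15, 40, 12};
    // k=1: insert 37 -> 35 37 18 15 40 12 (no shifts needed)
    // k=2: insert 18 -> 18 35 37 15 40 12
    // k=3: insert 15 -> 15 18 35 37 40 12
    // k=4: insert 40 -> 15 18 35 37 40 12 (no shifts needed)
    // k=5: insert 12 -> 12 15 18 35 37 40
    insertionSort(A);
    System.out.println(java.util.Arrays.toString(A)); // [12, 15, 18, 35, 37, 40]
}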
Now consider the time complexity of insertion sort. Again, the inner loop can execute a different number of times for every iteration of the outer loop. In the worst case (an array in reverse order), the inner loop on the kth pass shifts all k preceding values, so the total work is 1 + 2 + ... + (N-1) = N(N-1)/2, which is O(N^2). In the best case (an already sorted array) the inner loop does no shifting at all, and the running time is O(N).
Here's a more formal specification of the quicksort algorithm. The separate Partition subroutine takes the original position of the pivot element as input and returns the post-partition pivot position as output.
Quicksort(A, p, r):
{
    if p < r:
        q = Partition(A, p, r)  // call the partition function, which returns the split point
        Quicksort(A, p, q)      // sort the left subarray
        Quicksort(A, q+1, r)    // sort the right subarray
}
Partition(A, p, r):
{
    x = A[p]             // consider the first element to be the pivot element
    i = p - 1
    j = r + 1
    while true:
        repeat
            j = j - 1
        until A[j] <= x  // scan from the right for an element <= pivot
        repeat
            i = i + 1
        until A[i] >= x  // scan from the left for an element >= pivot
        if i < j:
            swap(A[i], A[j])
        else:
            return j     // the point about which the array is split
}
In the above Quicksort code we input an array containing the numbers that are to be sorted, with p and r marking the first and last positions of the current subarray. The Partition function returns a position about which that array/subarray is to be partitioned again, and this returned value is used in the recursive calls made by Quicksort.
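As a hedged Java rendering of the pseudocode above (the class name QuickSortDemo is mine, not part of the original answer), using the same first-element pivot and Hoare-style scans:

// A Java version of the Quicksort/Partition pseudocode above; the class name
// QuickSortDemo is illustrative. partition uses the first element as the
// pivot and returns the split point j, exactly as in the pseudocode.
public class QuickSortDemo {

    static void quicksort(int[] A, int p, int r) {
        if (p < r) {
            int q = partition(A, p, r); // split point
            quicksort(A, p, q);         // sort the left part
            quicksort(A, q + 1, r);     // sort the right part
        }
    }

    static int partition(int[] A, int p, int r) {
        int x = A[p];                     // first element as the pivot
        int i = p - 1;
        int j = r + 1;
        while (true) {
            do { j--; } while (A[j] > x); // scan right-to-left for A[j] <= x
            do { i++; } while (A[i] < x); // scan left-to-right for A[i] >= x
            if (i < j) {                  // exchange the out-of-place pair
                int tmp = A[i]; A[i] = A[j]; A[j] = tmp;
            } else {
                return j;                 // split point
            }
        }
    }

    public static void main(String[] args) {
        int[] A = {9, 3, 7, 1, 8};
        quicksort(A, 0, A.length - 1);
        System.out.println(java.util.Arrays.toString(A)); // [1, 3, 7, 8, 9]
    }
}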
Question 3: Apply quicksort to sort the following list: Q U I C K S O R T in alphabetical order. Find the element whose position is
unchanged in the sorted list.
Ans:
#include <cctype>    // isupper, tolower
#include <cstdio>    // fgets, printf, putchar
#include <cstdlib>   // qsort
#include <cstring>   // strlen, strcspn
#include <iostream>
using namespace std;
const int DATA = 1000;
int compare(const void*, const void*);
int main()
{
    int i = 0;
    char list[DATA];
    cout << "Enter words:\n";
    fgets(list, DATA, stdin);
    list[strcspn(list, "\n")] = '\0';   // strip the trailing newline kept by fgets
    printf("You entered: ");
    cout << endl;
    printf("%s", list);
    cout << endl;
    while (list[i])
    {
        if (isupper(list[i])) list[i] = tolower(list[i]);  // lowercase each letter
        putchar(list[i]);
        i++;
    }
    cout << endl;
    qsort(list, strlen(list), sizeof(char), compare); // sort only the characters read
    cout << endl;
    cout << "The sorted list of letters:\n";
    printf("%s", list);                 // e.g. the input QUICKSORT prints cikoqrstu
    cout << endl;
    return 0;
}
int compare(const void* pnum1, const void* pnum2)
// compares two characters: returns -1 if the first is smaller,
// 0 if they are equal, and 1 if the first is larger
{
    int num1, num2;
    num1 = *(char *)pnum1;  // cast from pointer to void
    num2 = *(char *)pnum2;  // to pointer to char
    if (num1 < num2)
        return -1;
    else if (num1 == num2)
        return 0;
    else
        return 1;
}
Question 4: Write Strassen's matrix multiplication algorithm for obtaining the product of two matrices.
Ans:
Given: Two N-by-N matrices A and B.
Problem: Compute C = A × B.
Brute Force
for i := 1 to N do
    for j := 1 to N do
        C[i,j] := 0
        for k := 1 to N do
            C[i,j] := C[i,j] + A[i,k] * B[k,j]
This uses O(N^3) multiplications.
Divide and Conquer
Partition A, B, and C each into four N/2-by-N/2 blocks (A11, A12, A21, A22, and similarly for B and C); then only seven block products are needed:
P1 = (A11 + A22)(B11 + B22)
P2 = (A21 + A22) B11
P3 = A11 (B12 - B22)
P4 = A22 (B21 - B11)
P5 = (A11 + A12) B22
P6 = (A21 - A11)(B11 + B12)
P7 = (A12 - A22)(B21 + B22)
C11 = P1 + P4 - P5 + P7
C12 = P3 + P5
C21 = P2 + P4
C22 = P1 + P3 - P2 + P6
From the recurrence T(n) = 7T(n/2) + O(n^2) (seven half-size multiplications plus O(n^2) additions), the running time is T(n) = O(n^(log2 7)) = O(n^2.81).
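As a hedged illustration (not part of the original answer), the following Java sketch implements the seven-product scheme above for matrices whose size is a power of two; all names (StrassenDemo, strassen, add, sub, block, combine) are mine:

// A sketch of Strassen's method for n-by-n matrices, n a power of two.
// For clarity it copies blocks instead of using index arithmetic.
public class StrassenDemo {

    static int[][] strassen(int[][] A, int[][] B) {
        int n = A.length;
        if (n == 1)                          // base case: 1x1 product
            return new int[][] {{ A[0][0] * B[0][0] }};
        int h = n / 2;
        int[][] a11 = block(A, 0, 0, h), a12 = block(A, 0, h, h),
                a21 = block(A, h, 0, h), a22 = block(A, h, h, h);
        int[][] b11 = block(B, 0, 0, h), b12 = block(B, 0, h, h),
                b21 = block(B, h, 0, h), b22 = block(B, h, h, h);
        // the seven half-size products P1..P7 from the formulas above
        int[][] p1 = strassen(add(a11, a22), add(b11, b22));
        int[][] p2 = strassen(add(a21, a22), b11);
        int[][] p3 = strassen(a11, sub(b12, b22));
        int[][] p4 = strassen(a22, sub(b21, b11));
        int[][] p5 = strassen(add(a11, a12), b22);
        int[][] p6 = strassen(sub(a21, a11), add(b11, b12));
        int[][] p7 = strassen(sub(a12, a22), add(b21, b22));
        // recombine: C11 = P1+P4-P5+P7, C12 = P3+P5, C21 = P2+P4, C22 = P1+P3-P2+P6
        return combine(add(sub(add(p1, p4), p5), p7), add(p3, p5),
                       add(p2, p4), add(sub(add(p1, p3), p2), p6));
    }

    static int[][] add(int[][] X, int[][] Y) {
        int n = X.length; int[][] Z = new int[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) Z[i][j] = X[i][j] + Y[i][j];
        return Z;
    }

    static int[][] sub(int[][] X, int[][] Y) {
        int n = X.length; int[][] Z = new int[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) Z[i][j] = X[i][j] - Y[i][j];
        return Z;
    }

    static int[][] block(int[][] X, int r, int c, int h) { // copy one h-by-h block
        int[][] Z = new int[h][h];
        for (int i = 0; i < h; i++)
            for (int j = 0; j < h; j++) Z[i][j] = X[r + i][c + j];
        return Z;
    }

    static int[][] combine(int[][] c11, int[][] c12, int[][] c21, int[][] c22) {
        int h = c11.length; int[][] C = new int[2 * h][2 * h];
        for (int i = 0; i < h; i++)
            for (int j = 0; j < h; j++) {
                C[i][j] = c11[i][j];       C[i][j + h] = c12[i][j];
                C[i + h][j] = c21[i][j];   C[i + h][j + h] = c22[i][j];
            }
        return C;
    }

    public static void main(String[] args) {
        int[][] A = {{1, 2}, {3, 4}}, B = {{5, 6}, {7, 8}};
        // expected product: [[19, 22], [43, 50]]
        System.out.println(java.util.Arrays.deepToString(strassen(A, B)));
    }
}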
Question 5: (i) Define DFS. Explain briefly how it differs from BFS.
Ans:
Depth-first search (DFS) is an algorithm for traversing or searching a tree, tree structure, or graph. One starts at the root (selecting some
node as the root in the graph case) and explores as far as possible along each branch before backtracking.
Formally, DFS is an uninformed search that progresses by expanding the first child node of the search tree that appears, going deeper and deeper until it reaches a goal node or a node that has no children. The search then backtracks, returning to the most recent node it hasn't finished exploring. In a non-recursive implementation, all freshly expanded nodes are added to a stack for exploration.
The time and space analysis of DFS differs according to its application area. In theoretical computer science, DFS is typically used to
traverse an entire graph, and takes time O(|V| + |E|), linear in the size of the graph. In these applications it also uses space O(|V|) in the
worst case to store the stack of vertices on the current search path as well as the set of already-visited vertices. Thus, in this setting, the
time and space bounds are the same as for breadth first search and the choice of which of these two algorithms to use depends less on
their complexity and more on the different properties of the vertex orderings the two algorithms produce.
Breadth-first search (BFS) is a graph search algorithm that begins at the root node and explores all the neighboring nodes. Then for each of
those nearest nodes, it explores their unexplored neighbor nodes, and so on, until it finds the goal. BFS is an uninformed search method
that aims to expand and examine all nodes of a graph or combination of sequences by systematically searching through every solution. In
other words, it exhaustively searches the entire graph or sequence without considering the goal until it finds it. It does not use a heuristic
algorithm. From the standpoint of the algorithm, all child nodes obtained by expanding a node are added to a FIFO queue. In typical
implementations, nodes that have not yet been examined for their neighbors are placed in some container called open and then once
examined are placed in the container closed.
Algorithm
1. Enqueue the root node.
2. Dequeue a node and examine it.
   - If the element sought is found in this node, quit the search and return a result.
   - Otherwise, enqueue any successors (the direct child nodes) that have not yet been discovered.
3. If the queue is empty, every node on the graph has been examined; quit the search and return "not found".
4. Repeat from Step 2.
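A minimal Java sketch of these steps, assuming an adjacency-matrix graph like the one used for DFS in the next answer; the names BfsDemo, bfs, and adj are mine, not from the original text:

import java.util.ArrayDeque;
import java.util.Queue;

// A minimal BFS sketch over an assumed adjacency matrix.
// Returns true if target is reachable from start.
public class BfsDemo {
    static boolean bfs(boolean[][] adj, int start, int target) {
        int n = adj.length;
        boolean[] discovered = new boolean[n];     // the open/closed bookkeeping
        Queue<Integer> queue = new ArrayDeque<>();
        queue.add(start);                          // step 1: enqueue the root node
        discovered[start] = true;
        while (!queue.isEmpty()) {
            int u = queue.remove();                // step 2: dequeue and examine
            if (u == target) return true;          // found: quit and return a result
            for (int v = 0; v < n; v++)            // otherwise enqueue undiscovered
                if (adj[u][v] && !discovered[v]) { // successors
                    discovered[v] = true;
                    queue.add(v);
                }
        }
        return false;                              // step 3: queue empty, not found
    }
}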
Breadth-first search is complete: if there is a solution, breadth-first search will find it regardless of the kind of graph. However, if the graph is infinite and there is no solution, breadth-first search will diverge. If the shallowest goal node is at some finite depth, say d, breadth-first search will eventually find it after expanding all shallower nodes.
(ii) Write pseudocode for DFS and calculate its time complexity
Ans:
public class Graph {
    enum VertexState {
        White, Gray, Black
    }

    // assumed representation (not shown in the original): an adjacency matrix
    private boolean[][] adj;
    private int vertexCount;

    private boolean isEdge(int u, int v) {
        return adj[u][v];
    }

    public void DFS()
    {
        VertexState state[] = new VertexState[vertexCount];
        for (int i = 0; i < vertexCount; i++)
            state[i] = VertexState.White;   // every vertex starts undiscovered
        runDFS(0, state);                   // begin the search at vertex 0
    }

    public void runDFS(int u, VertexState[] state)
    {
        state[u] = VertexState.Gray;        // u is discovered but not finished
        for (int v = 0; v < vertexCount; v++)
            if (isEdge(u, v) && state[v] == VertexState.White)
                runDFS(v, state);           // go deeper before trying other neighbors
        state[u] = VertexState.Black;       // u and all its descendants are finished
    }
}
As noted in part (i), DFS over a graph with vertex set V and edge set E takes O(|V| + |E|) time with an adjacency-list representation, since each vertex and each edge is handled a constant number of times. The adjacency-matrix version above takes O(|V|^2), because the loop in runDFS examines every candidate neighbor v of each visited vertex u. In both cases the worst-case space is O(|V|), for the recursion stack (the current search path) and the state array recording already-visited vertices.
Question 6: Apply Kruskal's algorithm to find a minimal spanning tree with an example.
Ans:
Kruskal's algorithm is an algorithm in graph theory that finds a minimum spanning tree for a connected weighted graph. This means it finds
a subset of the edges that forms a tree that includes every vertex, where the total weight of all the edges in the tree is minimized. If the
graph is not connected, then it finds a minimum spanning forest (a minimum spanning tree for each connected component). Kruskal's
algorithm is an example of a greedy algorithm.
It works as follows:
- create a forest F (a set of trees), where each vertex in the graph is a separate tree
- create a set S containing all the edges in the graph
- while S is nonempty and F is not yet spanning:
  - remove an edge with minimum weight from S
  - if that edge connects two different trees, add it to F, combining two trees into a single tree
  - otherwise, discard that edge
At termination the forest has a single component, which is a minimum spanning tree of the graph. A small coded example follows this list.
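Since the question asks for an example, here is a hedged Java sketch (the names KruskalDemo, Edge, find, and mstWeight are mine, not from the original answer) run on a small 4-vertex graph:

import java.util.Arrays;
import java.util.Comparator;

// A sketch of Kruskal's algorithm with a simple union-find. Edges are sorted
// by weight, and an edge is kept whenever it joins two different trees.
public class KruskalDemo {
    static class Edge {
        int u, v, w;
        Edge(int u, int v, int w) { this.u = u; this.v = v; this.w = w; }
    }

    static int[] parent;

    static int find(int x) {                         // root of x's tree
        while (parent[x] != x) x = parent[x];
        return x;
    }

    static int mstWeight(int n, Edge[] edges) {
        parent = new int[n];
        for (int i = 0; i < n; i++) parent[i] = i;   // each vertex is its own tree
        Arrays.sort(edges, Comparator.comparingInt(e -> e.w));
        int total = 0;
        for (Edge e : edges) {
            int ru = find(e.u), rv = find(e.v);
            if (ru != rv) {                          // joins two different trees: keep it
                parent[ru] = rv;                     // merge the trees
                total += e.w;
            }                                        // otherwise discard (would form a cycle)
        }
        return total;
    }

    public static void main(String[] args) {
        // 4 vertices; the MST keeps edges of weight 1, 2, and 3
        Edge[] edges = {
            new Edge(0, 1, 1), new Edge(1, 2, 2),
            new Edge(2, 3, 4), new Edge(0, 3, 3), new Edge(0, 2, 5)
        };
        System.out.println(mstWeight(4, edges));     // prints 6
    }
}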
g1(n) = 2^n
g2(n) = n^(4/3)
g3(n) = n(log n)^3
g4(n) = n^(log n)
g5(n) = 2^(2n)
g6(n) = 2^(n^2)
Solutions:
Here are the functions ordered in ascending order of growth rate:
g3(n) = n(log n)^3, g2(n) = n^(4/3), g4(n) = n^(log n), g1(n) = 2^n, g5(n) = 2^(2n), g6(n) = 2^(n^2)
(A polylogarithmic factor grows more slowly than any fixed power of n; every polynomial grows more slowly than the quasi-polynomial n^(log n) = 2^((log n)^2); and that in turn grows more slowly than each of the exponentials.)
Question 10: Explain the essential idea of Dynamic Programming. How does Dynamic Programming differ from the Divide and Conquer approach for solving problems?
Ans:
Dynamic programming is a method of solving complex problems by breaking them down into simpler steps. It is applicable to problems that exhibit the properties of overlapping subproblems which are only slightly smaller, and optimal substructure (described below). When applicable, the method takes much less time than naive methods.
Top-down dynamic programming simply means storing the results of certain calculations, which are then re-used later because the same
calculation is a sub-problem in a larger calculation. Bottom-up dynamic programming involves formulating a complex calculation as a
recursive series of simpler calculations. Dynamic programming is both a mathematical optimization method and a computer programming
method. In both contexts, it refers to simplifying a complicated problem by breaking it down into simpler subproblems in a recursive manner.
While some decision problems cannot be taken apart this way, decisions that span several points in time do often break apart recursively;
Bellman called this the Principle of Optimality. Likewise, in computer science, a problem which can be broken down recursively is said to
have optimal substructure.
If subproblems can be nested recursively inside larger problems, so that dynamic programming methods are applicable, then there is a
relation between the value of the larger problem and the values of the subproblems. In the optimization literature this relationship is called
the Bellman equation.
Dynamic programming in mathematical optimization
In terms of mathematical optimization, dynamic programming usually refers to a simplification of a decision by breaking it down into a
sequence of decision steps over time. This is done by defining a sequence of value functions V1, V2, ..., Vn, with an argument y representing the state of the system at times i from 1 to n. The definition of Vn(y) is the value obtained in state y at the last time n. The values Vi at earlier times i = n-1, n-2, ..., 2, 1 can be found by working backwards, using a recursive relationship called the Bellman equation. For i = 2, ..., n, Vi-1 at any state y is calculated from Vi by maximizing a simple function (usually the sum) of the gain from decision i-1 and the function Vi at the new state of the system if this decision is made. Since Vi has already been calculated for the needed states, the above operation yields Vi-1 for all the needed states. Finally, V1 at the initial state of the system is the value of the optimal solution. The optimal
values of the decision variables can be recovered, one by one, by tracking back the calculations already performed.
Dynamic programming in computer programming
There are two key attributes that a problem must have in order for dynamic programming to be applicable: optimal substructure and
overlapping subproblems which are only slightly smaller. When the overlapping subproblems are, say, half the size of the original problem, the strategy is called divide and conquer rather than dynamic programming. This is why merge sort, quick sort, and finding all matches of a regular expression are not classified as dynamic programming problems.
Optimal substructure means that the solution to a given optimization problem can be obtained by the combination of optimal solutions to its
subproblems. Consequently, the first step towards devising a dynamic programming solution is to check whether the problem exhibits such
optimal substructure. Such optimal substructures are usually described by means of recursion. For example, given a graph G=(V,E), the
shortest path p from a vertex u to a vertex v exhibits optimal substructure: take any intermediate vertex w on this shortest path p. If p is truly
the shortest path, then the path p1 from u to w and p2 from w to v are indeed the shortest paths between the corresponding vertices (by the
simple cut-and-paste argument described in CLRS). Hence, one can easily formulate the solution for finding shortest paths in a recursive
manner, which is what the Bellman-Ford algorithm does.
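To make this concrete, here is a hedged Java sketch of the Bellman-Ford relaxation (the names BellmanFordDemo, Edge, and shortestPaths are mine, and the edge-list representation is an assumption):

import java.util.Arrays;

// A minimal Bellman-Ford sketch exploiting the optimal-substructure property
// described above: dist[v] converges to the shortest-path weight from source to v.
public class BellmanFordDemo {
    static class Edge {
        int u, v, w;
        Edge(int u, int v, int w) { this.u = u; this.v = v; this.w = w; }
    }

    static int[] shortestPaths(int n, Edge[] edges, int source) {
        int[] dist = new int[n];
        Arrays.fill(dist, Integer.MAX_VALUE / 2);  // "infinity", safe from overflow
        dist[source] = 0;
        // a shortest path uses at most n-1 edges, so n-1 relaxation rounds suffice
        for (int round = 0; round < n - 1; round++)
            for (Edge e : edges)
                if (dist[e.u] + e.w < dist[e.v])   // relax: the path through u is shorter
                    dist[e.v] = dist[e.u] + e.w;
        return dist;
    }

    public static void main(String[] args) {
        Edge[] edges = { new Edge(0, 1, 4), new Edge(0, 2, 1), new Edge(2, 1, 2) };
        System.out.println(Arrays.toString(shortestPaths(3, edges, 0))); // [0, 3, 1]
    }
}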
Overlapping subproblems means that the space of subproblems must be small, that is, any recursive algorithm solving the problem should
solve the same subproblems over and over, rather than generating new subproblems. For example, consider the recursive formulation for
generating the Fibonacci series: Fi = Fi-1 + Fi-2, with base case F1=F2=1. Then F43 = F42 + F41, and F42 = F41 + F40. Now F41 is being
solved in the recursive subtrees of both F43 as well as F42. Even though the total number of subproblems is actually small (only 43 of
them), we end up solving the same problems over and over if we adopt a naive recursive solution such as this. Dynamic programming
takes account of this fact and solves each subproblem only once. Note that the subproblems must be only slightly smaller (typically taken
to mean a constant additive factor) than the larger problem; when they are a multiplicative factor smaller the problem is no longer classified
as dynamic programming.
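To make the Fibonacci example concrete, here is a minimal top-down (memoized) sketch in Java; the names FibDemo, fib, and memo are mine. Storing each F(i) the first time it is computed means F41 is solved once rather than repeatedly, reducing the naive exponential recursion to O(n) calls:

import java.util.HashMap;
import java.util.Map;

// A top-down (memoized) sketch of the recurrence F(i) = F(i-1) + F(i-2),
// with base case F(1) = F(2) = 1. Each subproblem is solved once and reused.
public class FibDemo {
    static Map<Integer, Long> memo = new HashMap<>();

    static long fib(int i) {
        if (i <= 2) return 1L;                 // base cases F(1) = F(2) = 1
        Long cached = memo.get(i);
        if (cached != null) return cached;     // overlapping subproblem: reuse it
        long result = fib(i - 1) + fib(i - 2); // solve it once...
        memo.put(i, result);                   // ...and remember the answer
        return result;
    }

    public static void main(String[] args) {
        System.out.println(fib(43)); // 433494437, computed with only 43 distinct subproblems
    }
}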