
FEDERAL UNIVERSITY BIRNIN KEBBI

DEPARTMENT OF COMPUTER SCIENCE

ASSIGNMENT
BY

ISHAQ ABDULSAMAD

1810204035
CSC 308

QUESTION(S):
Give two algorithms from each of the following classes (excluding the
algorithms already discussed) and explain their complexity.
1. Searching Algorithm
2. Sorting Algorithm
3. Binary Search Tree Algorithm
4. Graph Algorithm
5. Hash trees

LECTURER-IN-CHARGE:
DR. S. A. BAKURA
1. SEARCHING ALGORITHM

A search algorithm is an algorithm designed to solve a search problem. Search
algorithms retrieve information stored within a particular data structure, or
computed in the search space of a problem domain, with either discrete or
continuous values.

Although search engines use search algorithms, they belong to the study
of information retrieval, not algorithmics. Some types of search algorithms are:

 Sentinel Linear Search Algorithm


 Ternary Search Algorithm
 Jump Search Algorithm
 Exponential Search Algorithm
 Meta-Binary Search Algorithm
 Interpolation Search Algorithm

 JUMP SEARCH ALGORITHM is a relatively new algorithm for searching for
an element in a sorted array. The fundamental idea behind this searching
technique is to examine fewer elements than a linear search (which scans every
element in the array to check whether it matches the element being searched
for). This is done by skipping a fixed number of array elements, i.e. jumping
ahead by a fixed number of steps in every iteration.

Example:
Set i = 0 and m = √n.
While A[min(m, n) - 1] < item {
    Set i = m
    Increment m by √n
    If i >= n, return NOT_FOUND
}
While A[i] < item {
    Increment i
    If i = min(m, n), return NOT_FOUND
}
If A[i] == item, return i
Return NOT_FOUND
Time Complexity:
The first while loop executes n/m times because the loop counter advances by m
in every iteration. Since the optimal value is m = √n, we get n/m = √n,
resulting in a time complexity of O(√n).
Space Complexity:
The space complexity of this algorithm is O(1) since it does not require any other
data structure for its implementation.
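The pseudocode above can be turned into a runnable Python sketch; the function name and the convention of returning -1 when the item is absent are our own choices:

```python
import math

def jump_search(arr, item):
    """Search a sorted list by jumping ahead in blocks of size sqrt(n)."""
    n = len(arr)
    if n == 0:
        return -1
    step = int(math.sqrt(n)) or 1
    # Jump ahead until we reach the block that could contain the item.
    prev = 0
    while arr[min(prev + step, n) - 1] < item:
        prev += step
        if prev >= n:
            return -1
    # Linear scan inside that block.
    for i in range(prev, min(prev + step, n)):
        if arr[i] == item:
            return i
    return -1
```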

 TERNARY SEARCH ALGORITHM

A ternary search algorithm is a technique in computer science for finding
the minimum or maximum of a unimodal function. A ternary search determines
either that the extremum cannot be in the first third of the domain or
that it cannot be in the last third of the domain, then repeats on the
remaining two thirds. The same divide-into-thirds idea can also be used to
search for a key in a sorted array, as in the example below.

Example:

int ternary_search(int ar[], int l, int r, int x) {
    if (r >= l) {
        int mid1 = l + (r - l) / 3;
        int mid2 = r - (r - l) / 3;
        if (ar[mid1] == x)
            return mid1;
        if (ar[mid2] == x)
            return mid2;
        if (x < ar[mid1])
            return ternary_search(ar, l, mid1 - 1, x);
        else if (x > ar[mid2])
            return ternary_search(ar, mid2 + 1, r, x);
        else
            return ternary_search(ar, mid1 + 1, mid2 - 1, x);
    }
    return -1;
}

Complexity
O(log₃ N), where N is the size of the array.

At each step, the size of the searchable range is reduced by a constant factor
(in this case 3). If you find your element after n steps, the searchable range
initially had size N = 3^n. Conversely, the number of steps needed to find the
element is the logarithm (base 3) of the size of the collection. That is, the
runtime is O(log N). A little further thought shows that you can always
construct inputs that require all those steps, so the worst-case runtime is
actually Θ(log₃ N).
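The original, unimodal-function form of ternary search described above can be sketched in Python; `ternary_search_max` is a hypothetical helper, not part of the C example:

```python
def ternary_search_max(f, lo, hi, eps=1e-9):
    """Find the x that maximizes a unimodal function f on [lo, hi]."""
    while hi - lo > eps:
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            lo = m1  # the maximum cannot lie in the first third
        else:
            hi = m2  # the maximum cannot lie in the last third
    return (lo + hi) / 2
```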

2. SORTING ALGORITHM

A Sorting Algorithm is used to rearrange a given array or list of elements according
to a comparison operator on the elements. The comparison operator is used to decide
the new order of elements in the respective data structure.
Commonly studied sorting algorithms include:
a. Insertion Sort
b. Selection Sort
c. Bubble Sort
d. Merge Sort
e. Quick Sort

o INSERTION SORT

Insertion sort is a simple sorting algorithm that works similarly to the way you sort
playing cards in your hands. The array is virtually split into a sorted and an unsorted
part.
Example :

Insertion-Sort(A){
    for i = 2 to A.length
        key = A[i]
        // insert A[i] into the sorted sequence A[1, 2, ..., i-1]
        j = i - 1
        while (j > 0 and A[j] > key)
            A[j+1] = A[j]
            j = j - 1
        A[j+1] = key
}

Time Complexity:

 Best case: O(n)


When we initiate insertion sort on an already sorted array, it will only compare
each element to its predecessor, thereby requiring n steps to sort the already sorted
array of n elements.

 Worst case: O(n²)


When we apply insertion sort to a reverse-sorted array, each element is inserted
at the beginning of the sorted sub-array, giving insertion sort its worst-case
time complexity.

 Average case: O(n²)


When the array elements are in random order, the average running time is
O(n²/4) = O(n²).

Space Complexity:

Since we use only a constant amount of additional memory apart from the input
array, the space complexity is O(1).
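The pseudocode above translates into a runnable, 0-indexed Python version; the function name is our own:

```python
def insertion_sort(a):
    """Sort list a in place by inserting each element into the sorted prefix."""
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        # Shift larger elements of the sorted prefix one slot to the right.
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a
```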

o QUICK SORT

Quick Sort is also a Divide and Conquer algorithm. It picks an element as a pivot
and partitions the given array around the picked pivot such that all the smaller
elements are to the left of the pivot and all the greater elements are to the right of
the pivot.

Example:
partition(A, p, r){
    x = A[r]
    i = p - 1
    for (j = p : r-1){
        if (A[j] <= x){
            i = i + 1
            exchange A[i] with A[j]
        }
    }
    exchange A[i+1] with A[r]
    return i+1
}
Space Complexity: O(log n) on average for the recursion stack (the partition
step itself uses only O(1) extra space).

Best case Complexity: O(n log n)

Average Case Complexity: O(n log n)

Worst Case Complexity: O(n²)

In the worst case (for example, an already sorted array with the last element
chosen as pivot), every partition is maximally unbalanced. The first call
performs (n - 1) comparisons, the next (n - 2), and so on down to a single
comparison, so the total number of comparisons sums to n(n - 1)/2 and the
running time is quadratic. With balanced partitions the recursion depth is
O(log n) and each level does O(n) work, which gives the O(n log n) best and
average cases.
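The partition routine above can be made runnable; the following Python sketch (function names are our own) wraps the Lomuto partition in a recursive quick sort:

```python
def partition(a, p, r):
    """Lomuto partition: place the pivot a[r] at its final position."""
    x = a[r]
    i = p - 1
    for j in range(p, r):
        if a[j] <= x:
            i += 1
            a[i], a[j] = a[j], a[i]
    a[i + 1], a[r] = a[r], a[i + 1]
    return i + 1

def quick_sort(a, p=0, r=None):
    """Sort list a in place and return it."""
    if r is None:
        r = len(a) - 1
    if p < r:
        q = partition(a, p, r)
        quick_sort(a, p, q - 1)
        quick_sort(a, q + 1, r)
    return a
```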

3. BINARY SEARCH TREE ALGORITHM

A Binary Search Tree (BST) is a node-based tree data structure that keeps its
keys in sorted order, so that a search can discard half of the remaining tree
at every step. A BST has the following properties:
 The left sub-tree of a node contains only nodes with keys less than the
node’s key.
 The right sub-tree of a node contains only nodes with keys greater than the
node’s key.
 The left and right sub-trees must each also be a binary search tree.

Example:

Search(root, item)
    if (root = NULL) or (item = root → data)
        return root
    else if (item < root → data)
        return Search(root → left, item)
    else
        return Search(root → right, item)
    END if
END

Example:

Iterative Search Tree Algorithm: The recursive version of the search can be
"unrolled" into a while loop. On most machines, the iterative version is found to be
more efficient.

Iterative-Tree-Search(x, key)
    while x ≠ NIL and key ≠ x.key do
        if key < x.key then
            x := x.left
        else
            x := x.right
        end if
    end while
    return x
Time Complexity: O(H), where H is the height of the tree. Each step of the
search descends one level, so a balanced BST is searched in O(log N) time,
while a completely skewed tree degrades to O(N).
Auxiliary Space: O(H) for the recursive version because of the function call
stack; the iterative version uses only O(1) extra space.
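As a runnable sketch (the node class and helper names are our own), the iterative search can be written in Python:

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(root, key):
    """Insert key into the BST rooted at root; return the (new) root."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root

def iterative_tree_search(x, key):
    """Walk down from x, going left or right, until key is found or we fall off."""
    while x is not None and key != x.key:
        x = x.left if key < x.key else x.right
    return x
```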

4. GRAPH ALGORITHM
A Graph is a non-linear data structure consisting of vertices and edges. The
vertices are sometimes also referred to as nodes, and the edges are lines or
arcs that connect any two nodes in the graph. More formally, a Graph is
composed of a set of vertices (V) and a set of edges (E), and is denoted
G = (V, E).

A graph G = (V, E) consists of a set of vertices V and a set of edges E, such
that each edge in E is a connection between a pair of vertices in V. The number
of vertices is written |V|, and the number of edges is written |E|. |E| can
range from zero to a maximum of |V|² − |V|. A graph with relatively few edges
is called sparse, while a graph with many edges is called dense. A graph
containing all possible edges is said to be complete.
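As an illustration of the definition, a graph G = (V, E) can be stored as an adjacency list; `make_graph` is a hypothetical helper, not part of the assignment:

```python
def make_graph(n, edges, directed=False):
    """Build an adjacency-list representation of a graph G = (V, E)
    with vertices 0 .. n-1 and the given list of (u, v) edge pairs."""
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        if not directed:
            adj[v].append(u)  # undirected edges connect both ways
    return adj
```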

Example 1: Ford Fulkerson algorithm


Ford-Fulkerson's algorithm solves the maximum flow graph problem. It finds the
best organisation of flow through the edges of a graph such that you get
maximum flow out on the other end. The source has a specific rate of input, and
each edge has a weight associated with it, which is the maximum substance that
can be passed through that edge.

When the augmenting path is always chosen with breadth-first search, the
Ford-Fulkerson method becomes the Edmonds-Karp algorithm, which was provided in
complete specification by Jack Edmonds and Richard Karp.

It works by creating augmenting paths, i.e. paths from source to sink that have
a non-zero flow. We pass flow through these paths and update the limits. This
can lead to a situation where we have no more moves left. That's where the
'undo' ability of this algorithm plays a big role: in case of being stuck, we
decrease the flow and open up the edge to pass our current substance.

Steps
1. Set zero flow for all edges.

2. While there is a path from source to sink do,

3. Find the minimum weight on the path; let it be limit.


4. For all edges (u, v) on the path do,
1. Add limit to flow from u to v. (For current move)
2. Subtract limit from flow from v to u. (For undo in later move)
Complexities
Ford Fulkerson algorithm has time complexity O(|E| * f), where |E| is the number
of edges and ‘f’ is the maximum flow of a network because it depends on the
maximum flow of the network.
The Edmonds-Karp algorithm has time complexity O(V · E²), where E is the number
of edges and V is the number of vertices; it is independent of the maximum flow
of the network. The space complexity of the Edmonds-Karp algorithm is O(V) in
the worst case: creating a queue and inserting each node into it takes O(V)
extra space.
Python implementation
from collections import deque

def bfs(graph, source, sink):
    # BFS for an augmenting path; returns the parent array describing
    # the path, or None if the sink is unreachable.
    n = len(graph)
    parent = [-1] * n
    parent[source] = source
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in range(n):
            if parent[v] == -1 and graph[u][v] > 0:
                parent[v] = u
                if v == sink:
                    return parent
                queue.append(v)
    return None

def maximum_flow(graph, source, sink):
    # graph is a capacity matrix; it is updated in place to hold the
    # residual capacities.
    max_flow = 0
    parent = bfs(graph, source, sink)
    while parent is not None:
        # Find the bottleneck capacity along the augmenting path.
        limit = float("inf")
        v = sink
        while v != source:
            u = parent[v]
            limit = min(limit, graph[u][v])
            v = u
        # Update residual capacities (reverse edges give the 'undo' ability).
        v = sink
        while v != source:
            u = parent[v]
            graph[u][v] -= limit
            graph[v][u] += limit
            v = u
        max_flow += limit
        parent = bfs(graph, source, sink)
    return max_flow

Example 2: Dinic’s Algorithm
This algorithm was given by Yefim Dinitz in 1970. It is used to solve the
maximum flow problem described above. The algorithm consists of several phases.
In each phase, construct the layered network of the residual network of the
graph G, then find an arbitrary blocking flow in the layered network and add it
to the current flow.
Complexities
The time complexity of Dinic’s algorithm is O(V² · E): there are fewer than V
phases, and each phase takes O(V · E) time. The space complexity of Dinic’s
algorithm is O(V + E), as it stores the data of all the edges and vertices.

5. HASH TREE
A hash tree or Merkle tree is a tree in which every "leaf" (node) is labelled with
the cryptographic hash of a data block, and every node that is not a leaf (called
a branch, inner node, or inode) is labelled with the cryptographic hash of the labels
of its child nodes. A hash tree allows efficient and secure verification of the
contents of a large data structure. A hash tree is a generalization of a hash list and
a hash chain.
The main idea in this approach is to build the hash of each node depending on the
hash values of its children. When we get the hash of the root node, it’ll represent
the hash of the whole tree.
We’ll dive into the given tree using DFS traversal. The moment we reach a leaf
node, we return the hash of that single node as zero.

Then, as we go back up the DFS traversal, we can get the hash of the current
node by applying a hash function to the sequence of hash values of its
children. In our approach, we’ll use this formula to get the hash of a
sequence S of length N:

Hash(S) = (S[0] × SEED^0 + S[1] × SEED^1 + … + S[N−1] × SEED^(N−1)) mod MOD

Algorithm 1: Hash Tree Algorithm


Function Hash(node):
    nodeHash ← 0;
    for i = 0 to node.children.size − 1 do
        nodeHash ← (nodeHash + Hash(node.children[i]) × SEED^i) mod MOD;
    end
    return nodeHash;
end

Initially, we define the Hash function that will return the hash of the subtree that’s
rooted at node. The function will have one parameter, node, which will represent
the root of the subtree we want to hash.
First, we declare nodeHash, which will store the hash of the current subtree that’s
rooted at node. Second, we iterate over the children of the current node and get the
hash value of each one of them.
Third, we use the children’s hashes to get the hash value of the subtree,
applying the same polynomial formula to the sequence of child hashes.

Algorithm 2: Sorted Hash Tree Approach


Function Hash(node):
    children_hash ← ∅;
    for child ∈ node.children do
        children_hash.add(Hash(child));
    end
    sort(children_hash);
    nodeHash ← 0;
    for i = 0 to children_hash.size − 1 do
        nodeHash ← (nodeHash + children_hash[i] × SEED^i) mod MOD;
    end
    return nodeHash;
end
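Algorithm 2 can be sketched in runnable Python. The SEED and MOD values are our own assumptions, and we offset each child's hash by 1 inside the polynomial: the text defines a leaf's hash as zero, and without the offset every tree would hash to zero.

```python
SEED = 131        # assumed base for the polynomial hash
MOD = 10**9 + 7   # assumed large prime modulus

def tree_hash(node):
    """Order-insensitive hash of the subtree rooted at node.

    node is a dict whose optional "children" key holds a list of child
    nodes; a leaf hashes to 0 as in the text. Child hashes are sorted so
    that reordering children does not change the result.
    """
    children_hash = sorted(tree_hash(child) for child in node.get("children", []))
    node_hash = 0
    for i, h in enumerate(children_hash):
        # Offset each child's hash by 1 so zero-valued leaves still count.
        node_hash = (node_hash + (h + 1) * pow(SEED, i, MOD)) % MOD
    return node_hash
```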

Complexity

Operation          Complexity
Space              O(n)
Searching          O(log n)
Traversal          O(n)
Insertion          O(log n)
Deletion           O(log n)
Synchronization    O(log n)

The complexity of this algorithm is O(N × log₂(N)), where N is the number of
nodes in the given tree. We visit each node and each edge of the tree only
once, which alone costs O(N); however, for each node we also sort its
children’s hashes in increasing order, because the initial order of the
children doesn’t matter. Summed over all nodes, this sorting is what raises the
overall complexity to O(N × log₂(N)).

Note that each node is a child of exactly one node, so it takes part in a sort
only once.
