Vet. Assignment
ASSIGNMENT
BY
ISHAQ ABDULSAMAD
1810204035
CSC 308
QUESTION(S):
Give two algorithms from each of the following classes (excluding the
algorithms discussed in class) and explain their complexity.
1. Searching Algorithm
2. Sorting Algorithm
3. Binary Search Tree Algorithm
4. Graph Algorithm
5. Hash trees
LECTURER-IN-CHARGE:
DR. S. A. BAKURA
1. SEARCHING ALGORITHM
Although search engines use search algorithms, they belong to the study
of information retrieval, not algorithm design. Two search algorithms are given below.
o JUMP SEARCH
Jump search is a searching algorithm for sorted arrays. Instead of checking every
element, it jumps ahead in fixed blocks of size √n and then scans linearly inside
the block that may contain the item.
Example:
Jump-Search(A, n, item){
    Set i = 0 and m = √n
    // jump ahead while the last element of the current block is smaller than item
    while A[min(m, n) - 1] < item {
        i = m
        m = m + √n
        if i >= n { return NOT_FOUND }
    }
    // linear scan within the block A[i .. min(m, n) - 1]
    while A[i] < item {
        i = i + 1
        if i == min(m, n) { return NOT_FOUND }
    }
    if A[i] == item { return i }
    return NOT_FOUND
}
Time Complexity:
The first while loop in the above code executes n/m times because the loop counter
increases by m in every iteration. Since the optimal block size is m = √n, we get
n/m = √n jumps, and the linear scan within a block takes at most √n further steps,
resulting in a time complexity of O(√n).
Space Complexity:
The space complexity of this algorithm is O(1) since it does not require any other
data structure for its implementation.
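The pseudocode above can be sketched in Python as follows; the function name and the -1 "not found" convention are illustrative choices, not part of any standard library.

```python
import math

def jump_search(arr, item):
    """Return the index of item in the sorted list arr, or -1 if absent."""
    n = len(arr)
    if n == 0:
        return -1
    step = int(math.sqrt(n))
    # Jump ahead in blocks of size sqrt(n) until the block may contain item.
    i = 0
    while i < n and arr[min(i + step, n) - 1] < item:
        i += step
    # Linear scan within the identified block.
    for j in range(i, min(i + step, n)):
        if arr[j] == item:
            return j
    return -1
```

For example, jump_search([1, 3, 5, 7, 9, 11, 13, 15, 17], 11) jumps over the first block, stops at the block ending in 11, and finds the item at index 5.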
o TERNARY SEARCH
Ternary search works like binary search on a sorted array, but divides the search
range into three parts using two midpoints instead of two parts using one.
Complexity:
O(log₃ N), where N is the size of the array.
At each step, you are reducing the size of the searchable range by a constant factor
(in this case 3). If you find your element after n steps, then the searchable range
has size N = 3ⁿ. Inversely, the number of steps that you need until you find the
element is the logarithm of the size of the collection. That is, the runtime is
O(log N). A little further thought shows that you can also always construct
situations where you need all those steps, so the worst-case runtime is actually
Θ(log₃ N).
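A minimal Python sketch of the idea follows; the function name and the -1 "not found" convention are illustrative.

```python
def ternary_search(arr, item):
    """Return the index of item in the sorted list arr, or -1 if absent."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        # Split the range [lo, hi] into three roughly equal parts.
        third = (hi - lo) // 3
        m1 = lo + third
        m2 = hi - third
        if arr[m1] == item:
            return m1
        if arr[m2] == item:
            return m2
        if item < arr[m1]:
            hi = m1 - 1              # item lies in the first third
        elif item > arr[m2]:
            lo = m2 + 1              # item lies in the last third
        else:
            lo, hi = m1 + 1, m2 - 1  # item lies in the middle third
    return -1
```

Each iteration discards at least two thirds of the remaining range, which is where the log₃ N bound comes from.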
2. SORTING ALGORITHM
o INSERTION SORT
Insertion sort is a simple sorting algorithm that works similarly to the way you sort
playing cards in your hands. The array is virtually split into a sorted and an unsorted
part.
Example :
Insertion-Sort(A){
    for i = 2 to A.length {
        key = A[i]
        // insert A[i] into the sorted sequence A[1, 2, ..., i-1]
        j = i - 1
        while j > 0 and A[j] > key {
            A[j+1] = A[j]
            j = j - 1
        }
        A[j+1] = key
    }
}
Time Complexity:
In the worst and average cases the time complexity is O(n²), since each of the n
elements may be compared with (and shifted past) up to n−1 elements of the sorted
prefix. If the array is already sorted, the inner loop never executes, so the best
case is O(n).
Space Complexity:
Since we use only a constant amount of additional memory apart from the input
array, the space complexity is O(1).
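The pseudocode above translates directly to Python (using 0-based indexing); this is a minimal sketch, not a library routine.

```python
def insertion_sort(a):
    """Sort list a in place and return it."""
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        # Shift larger elements one slot right to make room for key.
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a
```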
o QUICK SORT
Quick Sort is also a Divide and Conquer algorithm. It picks an element as a pivot
and partitions the given array around the picked pivot such that all the smaller
elements are to the left of the pivot and all the greater elements are to the right of
the pivot.
Example:
partition(A, p, r){
    x = A[r]                  // pivot
    i = p - 1
    for j = p to r-1 {
        if A[j] <= x {
            i = i + 1
            exchange A[i] with A[j]
        }
    }
    exchange A[i+1] with A[r]
    return i+1
}
Time Complexity: The partition procedure performs exactly r − p comparisons for a
sub-array of any given size, so it runs in Θ(n). Quick Sort as a whole runs in
O(n log n) on average and O(n²) in the worst case (for example, on an already
sorted array when the last element is always chosen as the pivot).
Space Complexity: O(1) extra space for partitioning; the recursion adds O(log n)
stack space on average.
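A runnable Python sketch of the partition procedure and the surrounding recursion (the wrapper function quick_sort is an illustrative addition):

```python
def partition(a, p, r):
    """Lomuto partition: place pivot a[r] at its final index and return it."""
    x = a[r]
    i = p - 1
    for j in range(p, r):
        if a[j] <= x:
            i += 1
            a[i], a[j] = a[j], a[i]
    a[i + 1], a[r] = a[r], a[i + 1]
    return i + 1

def quick_sort(a, p=0, r=None):
    """Sort a[p..r] in place and return a."""
    if r is None:
        r = len(a) - 1
    if p < r:
        q = partition(a, p, r)
        quick_sort(a, p, q - 1)   # recurse on elements left of the pivot
        quick_sort(a, q + 1, r)   # recurse on elements right of the pivot
    return a
```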
3. BINARY SEARCH TREE ALGORITHM
A Binary Search Tree (BST) is a node-based binary tree data structure that keeps
its keys in sorted order, so that a value can be located by comparing it against
the keys along a path of left and right branches from the root.
BST has the following properties:
The left sub-tree of a node contains only nodes with keys lesser than the
node’s key.
The right sub-tree of a node contains only nodes with keys greater than the
node’s key.
The left and right sub-tree each must also be a binary search tree.
Example:
Iterative Search Tree Algorithm: The recursive version of the search can be
"unrolled" into a while loop. On most machines, the iterative version is found to be
more efficient.
Iterative-Tree-Search(x, key)
    while x ≠ NIL and key ≠ x.key do
        if key < x.key then
            x := x.left
        else
            x := x.right
        end if
    end while
    return x
Time Complexity: O(H), where H is the height of the tree, since the search follows
a single root-to-leaf path and compares the key at each node once. For a balanced
tree this is O(log N); for a degenerate (skewed) tree it is O(N) in the worst case.
Auxiliary Space: O(1) for the iterative version; the recursive version uses O(H)
extra space for the function call stack.
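The iterative search above can be sketched in Python as follows; the Node class is a minimal illustrative representation of a BST node.

```python
class Node:
    """A binary search tree node."""
    def __init__(self, key, left=None, right=None):
        self.key = key
        self.left = left
        self.right = right

def tree_search(x, key):
    """Iteratively descend from root x; return the node holding key, or None."""
    while x is not None and key != x.key:
        # Go left for smaller keys, right for greater ones.
        x = x.left if key < x.key else x.right
    return x
```

For instance, in a tree with root 8, left subtree {3, 1, 6} and right subtree {10, 14}, searching for 6 follows the path 8 → 3 → 6.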
4. GRAPH ALGORITHM
A Graph is a non-linear data structure consisting of vertices and edges. The
vertices are sometimes also referred to as nodes and the edges are lines or arcs
that connect any two nodes in the graph. More formally a Graph is composed
of a set of vertices( V ) and a set of edges( E ). The graph is denoted by G(E,
V).
A graph G = (V;E) consists of a set of vertices V and a set of edges E, such that
each edge in E is a connection between a pair of vertices in V. The number of
vertices is written | V| , and the number of edges is written | E| . | E| can range
from zero to a maximum of | V| 2 - | V| . A graph with relatively few edges is
called sparse, while a graph with many edges is called dense. A graph
containing all possible edges is said to be complete.
Example 1: Edmonds-Karp Algorithm
The Edmonds-Karp algorithm is a BFS-based implementation of the Ford-Fulkerson
method for computing the maximum flow in a network. It works by creating
augmenting paths, i.e. paths from source to sink that have non-zero residual
capacity. We pass flow through these paths and update the residual limits. This
can lead to a situation where we have no more moves left; that is where the
'undo' ability of this algorithm plays a big role. When stuck, we decrease the
flow on an edge and open it up again to pass our current substance.
Steps
1. Set zero flow for all edges.
2. Use BFS to find an augmenting path from source to sink in the residual graph.
3. Compute the bottleneck, i.e. the minimum residual capacity along that path.
4. Push the bottleneck amount of flow along the path, updating the residual
capacities of the forward and reverse edges, and add it to the total flow.
5. Repeat from step 2 until no augmenting path remains.
Time Complexity: O(V * E²), since each BFS takes O(E) time and at most O(V * E)
augmenting iterations are needed.
Space Complexity: The space complexity of this algorithm is O(V) in the worst
case. Creating a queue and inserting each node in it takes O(V) extra space.
Python implementation

from collections import deque

def bfs(graph, source, sink):
    # Return the parent array of a shortest augmenting path, or None.
    n = len(graph)
    parent = [-1] * n
    parent[source] = source
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in range(n):
            if parent[v] == -1 and graph[u][v] > 0:
                parent[v] = u
                if v == sink:
                    return parent
                queue.append(v)
    return None

def maximum_flow(graph, source, sink):
    # graph is a residual-capacity adjacency matrix; it is modified in place.
    max_flow = 0
    parent = bfs(graph, source, sink)
    while parent:
        # Bottleneck: minimum residual capacity along the augmenting path.
        path_flow = float('inf')
        v = sink
        while v != source:
            u = parent[v]
            path_flow = min(path_flow, graph[u][v])
            v = parent[v]
        max_flow += path_flow
        # Update residual capacities along the path.
        v = sink
        while v != source:
            u = parent[v]
            graph[u][v] -= path_flow
            graph[v][u] += path_flow
            v = parent[v]
        parent = bfs(graph, source, sink)
    return max_flow
Example 2: Dinic’s Algorithm
This algorithm was given by Yefim Dinitz in 1970. It is used to solve the maximum
flow problem described above. The algorithm consists of several phases. In each
phase, we construct the layered (level) network of the residual graph, say G,
then find an arbitrary blocking flow in the layered network and add it to the
current flow.
Complexities
The time complexity of Dinic's algorithm is O(V² * E): there are fewer than V
phases, and each phase takes O(V * E) time. The space complexity is O(V + E), as
the algorithm stores the data of all the edges and vertices.
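The phases described above can be sketched in Python. This is a minimal illustrative implementation (class and method names are our own): _bfs builds the layered network, and repeated _dfs calls within a phase accumulate a blocking flow.

```python
from collections import deque

class Dinic:
    """Minimal sketch of Dinic's algorithm on a residual edge list."""
    def __init__(self, n):
        self.n = n
        self.adj = [[] for _ in range(n)]  # adj[u] = list of edge ids
        self.to, self.cap = [], []         # edge endpoint and residual capacity

    def add_edge(self, u, v, capacity):
        # Forward edge and paired reverse edge (id ^ 1 gives the partner).
        self.adj[u].append(len(self.to)); self.to.append(v); self.cap.append(capacity)
        self.adj[v].append(len(self.to)); self.to.append(u); self.cap.append(0)

    def _bfs(self, s, t):
        # Build the layered network: level[v] = BFS distance from s.
        self.level = [-1] * self.n
        self.level[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for e in self.adj[u]:
                v = self.to[e]
                if self.level[v] == -1 and self.cap[e] > 0:
                    self.level[v] = self.level[u] + 1
                    q.append(v)
        return self.level[t] != -1

    def _dfs(self, u, t, pushed):
        # Push flow along level-increasing edges only (part of a blocking flow).
        if u == t:
            return pushed
        while self.it[u] < len(self.adj[u]):
            e = self.adj[u][self.it[u]]
            v = self.to[e]
            if self.cap[e] > 0 and self.level[v] == self.level[u] + 1:
                f = self._dfs(v, t, min(pushed, self.cap[e]))
                if f > 0:
                    self.cap[e] -= f
                    self.cap[e ^ 1] += f
                    return f
            self.it[u] += 1
        return 0

    def max_flow(self, s, t):
        flow = 0
        while self._bfs(s, t):             # one phase per layered network
            self.it = [0] * self.n         # per-phase edge pointers
            f = self._dfs(s, t, float('inf'))
            while f:
                flow += f
                f = self._dfs(s, t, float('inf'))
        return flow
```

On a small network with source 0, sink 3, and edges 0→1 (3), 0→2 (2), 1→2 (2), 1→3 (2), 2→3 (3), the maximum flow is 5.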
5. HASH TREE
A hash tree or Merkle tree is a tree in which every "leaf" (node) is labelled with
the cryptographic hash of a data block, and every node that is not a leaf (called
a branch, inner node, or inode) is labelled with the cryptographic hash of the labels
of its child nodes. A hash tree allows efficient and secure verification of the
contents of a large data structure. A hash tree is a generalization of a hash list and
a hash chain.
The main idea in this approach is to build the hash of each node depending on the
hash values of its children. When we get the hash of the root node, it’ll represent
the hash of the whole tree.
We’ll dive into the given tree using DFS traversal. The moment we reach a leaf
node, we return the hash of that single node as zero.
Then, when we go back to the DFS traversal, we can get the hash of the current
node by using any hash algorithm on the sequence of hash values of its children. In
our approach, we'll use a formula of the polynomial-hash form to get the hash of a
sequence S of length N:
Hash(S) = (S₁ · P⁰ + S₂ · P¹ + … + S_N · P^(N−1)) mod M,
where P and M are fixed constants (for example, a small prime P and a large prime
modulus M).
Initially, we define the Hash function that will return the hash of the subtree that’s
rooted at node. The function will have one parameter, node, which will represent
the root of the subtree we want to hash.
First, we declare nodeHash, which will store the hash of the current subtree that’s
rooted at node. Second, we iterate over the children of the current node and get the
hash value of each one of them.
Third, we use the children's hashes, sorted in increasing order, to get the hash
value of the subtree using the formula described above.
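A minimal Python sketch of this procedure, assuming the tree is given as a mapping from each node to its list of children and using illustrative constants P and M (the +1 offset on each child hash is our own addition, so that leaf-only sequences of different lengths hash differently):

```python
def tree_hash(node, children, P=31, M=10**9 + 7):
    """Return the hash of the subtree rooted at node.

    children maps a node to its child list; P and M are illustrative
    polynomial-hash constants.
    """
    kids = children.get(node, [])
    if not kids:
        return 0  # leaf nodes hash to zero, as described in the text
    # Sort child hashes so the children's initial order doesn't matter.
    child_hashes = sorted(tree_hash(c, children, P, M) for c in kids)
    # Polynomial hash of the sorted sequence of child hashes.
    h = 0
    for i, value in enumerate(child_hashes):
        h = (h + (value + 1) * pow(P, i, M)) % M
    return h
```

Because the child hashes are sorted before hashing, two trees that differ only in the order of children receive the same hash.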
Complexity

Operation          Complexity
Space              O(n)
Searching          O(log n)
Traversal          O(n)
Insertion          O(log n)
Deletion           O(log n)
Synchronization    O(log n)
The complexity of this algorithm is O(N · log₂(N)), where N is the number of
nodes in the given tree. We iterate over each node and each edge of the tree only
once, which alone would cost O(N). However, for each node we also sort its
children's hashes in increasing order, because the initial order of the children
doesn't matter; summed over the whole tree, this sorting costs O(N · log₂(N)).
Note that each node is a child of exactly one node, so it participates in sorting
only once.