22GE002_Unit_2_LM
UNIT II
ALGORITHMIC DESIGN THINKING
2.1 ANALYSIS AND VERIFICATION
An algorithm is a sequence of unambiguous instructions for solving a problem, i.e., for
obtaining a required output for any legitimate input in a finite amount of time.
About Algorithms
Computational Complexity:
The branch of theoretical computer science where the goal is to classify algorithms
according to their efficiency and computational problems according to their inherent difficulty
is known as computational complexity. Paradoxically, such classifications are typically not
useful for predicting performance or for comparing algorithms in practical applications, because
they focus on order-of-growth worst-case performance. Here, we focus on analyses that can be
used to predict performance and to compare algorithms.
Analysis of Algorithms:
A complete analysis of the running time of an algorithm involves the following steps:
• Identify unknown quantities that can be used to describe the frequency of execution of
the basic operations.
• Develop a realistic model for the input to the program.
• Analyze the unknown quantities, assuming the modelled input.
• Calculate the total running time by multiplying the time by the frequency for each
operation, then adding all the products.
Classical algorithm analysis on early computers could result in exact predictions of
running times. Modern systems and algorithms are much more complex, but modern analyses
are informed by the idea that exact analysis of this sort could be performed in principle.
Basic analysis operations in the context of algorithms involve understanding and
evaluating the performance and behavior of algorithms. These operations are essential for
assessing factors such as time complexity, space complexity, and correctness. Here are some
key basic analysis operations:
Correctness Analysis:
Correctness analysis ensures that an algorithm produces the correct output for all
possible inputs. It involves providing mathematical proofs or logical arguments to demonstrate
the correctness of the algorithm under different scenarios. Techniques such as loop invariants,
mathematical induction, and proof by contradiction are commonly used for correctness
analysis.
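As a small illustration (not from the original), here is a loop invariant argument for a simple summation routine, written in Python:

def array_sum(arr):
    total = 0
    # Loop invariant: at the start of iteration i, total == sum(arr[:i]).
    # It holds initially (i = 0, total = 0), each pass preserves it, and at
    # termination (i = len(arr)) it implies total == sum(arr), proving correctness.
    for i in range(len(arr)):
        total += arr[i]
    return total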
Asymptotic Analysis:
Asymptotic analysis focuses on the behavior of an algorithm as the input size
approaches infinity. It aims to capture the growth rate of the algorithm's time or space
requirements without being concerned with specific constants or lower-order terms.
Asymptotic analysis helps in comparing the relative efficiency of algorithms and understanding
their scalability.
Empirical Analysis:
Empirical analysis involves practical experimentation and measurement of an
algorithm's performance on real-world or synthetic datasets. It includes benchmarking the
algorithm's execution time, memory usage, and other metrics using actual implementations and
input data. Empirical analysis complements theoretical analysis and provides insights into the
algorithm's performance in real-world scenarios.
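For instance, a minimal empirical measurement using Python's standard timeit module might look like this (the input sizes chosen here are arbitrary):

import timeit

# Time the built-in sort on reverse-ordered inputs of doubling size
for n in (10_000, 20_000, 40_000):
    t = timeit.timeit(lambda: sorted(range(n, 0, -1)), number=10)
    print(f"n = {n:6d}: {t:.4f} s")

Comparing how the measured time grows as n doubles gives an empirical check on the theoretical order of growth.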
By conducting these basic analysis operations, researchers, developers, and analysts
can gain a comprehensive understanding of algorithms, identify optimization opportunities,
and make informed decisions regarding algorithm selection and usage.
Asymptotic notations are mathematical tools used to express the time complexity of algorithms
for asymptotic analysis.
• O Big O Notation
• Ω Omega Notation
• θ Theta Notation
Big O Notation O
If f(n) describes the running time of an algorithm, f(n) is O(g(n)) if there exist a positive
constant c and a natural number n0 such that 0 ≤ f(n) ≤ c*g(n) for all n ≥ n0.
It returns the highest possible output value (big-O) for a given input. The execution time
serves as an upper bound on the algorithm's time complexity.
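For example, f(n) = 3n + 2 is O(n), since 3n + 2 ≤ 4n for all n ≥ 2 (take c = 4 and n0 = 2).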
Big O notation is used to describe the upper bound on the growth rate of an algorithm.
It represents the worst-case scenario, indicating how the algorithm's time or space requirements
grow as the size of the input increases. For example, if an algorithm has a time complexity of
O(n), it means that the algorithm's execution time grows linearly with the size of the input.
By analyzing the order of growth of algorithms, developers and analysts can make
informed decisions about algorithm selection, optimization, and scalability, ensuring efficient
problem-solving in various computational tasks.
Omega Notation Ω
Omega notation represents the lower bound of the running time of an algorithm. Thus,
it provides the best case complexity of an algorithm.
5
Computational Problem Solving
The execution time serves as a lower bound on the algorithm’s time complexity.
It is defined as the condition that allows an algorithm to complete statement execution
in the shortest amount of time.
Let g and f be functions from the set of natural numbers to itself. The function f is
said to be Ω(g), if there is a constant c > 0 and a natural number n0 such that c*g(n) ≤ f(n) for
all n ≥ n0.
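For example, f(n) = 3n + 2 is Ω(n), since 3n + 2 ≥ 3n for all n ≥ 1 (take c = 3 and n0 = 1).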
Theta Notation θ
Theta notation bounds a function from above and below. Since it represents both the
upper and the lower bound of the running time of an algorithm, it is used for analyzing the
average-case complexity of an algorithm. In the average case, you add the running times for
each possible input combination and take the average.
Let g and f be functions from the set of natural numbers to itself. The function f is
said to be Θ(g), if there are constants c1, c2 > 0 and a natural number n0 such that c1*g(n) ≤
f(n) ≤ c2*g(n) for all n ≥ n0.
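For example, f(n) = 3n + 2 is Θ(n), taking c1 = 3, c2 = 4, and n0 = 2.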
In general, if there is a problem P1, then it may have many solutions, and each of
these solutions is regarded as an algorithm. So there may be many algorithms, such as A1, A2,
A3, …, An.
Before you implement any algorithm as a program, it is better to find out which among
these algorithms is good in terms of time and memory. It is best to analyze every algorithm in
terms of time, which relates to which one executes faster, and memory, which corresponds to
which one takes less memory.
So, the design and analysis of algorithms is about how to design various algorithms
and how to analyze them. After designing and analyzing, choose the best algorithm that takes
the least time and the least memory, and then implement it as a program in C.
Generally, we make three types of analysis, which are as follows:
Worst-case time complexity: For 'n' input size, the worst-case time complexity can be
defined as the maximum amount of time needed by an algorithm to complete its
execution. Thus, it is nothing but a function defined by the maximum number of steps
performed on an instance having an input size of n.
Average case time complexity: For 'n' input size, the average-case time complexity
can be defined as the average amount of time needed by an algorithm to complete its
execution. Thus, it is nothing but a function defined by the average number of steps
performed on an instance having an input size of n.
Best case time complexity: For 'n' input size, the best-case time complexity can be
defined as the minimum amount of time needed by an algorithm to complete its
execution. Thus, it is nothing but a function defined by the minimum number of steps
performed on an instance having an input size of n.
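For instance, for a linear search over n elements, the best case takes 1 step (the target is
the first element), the worst case takes n steps (the target is last or absent), and the average
case takes about n/2 steps.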
Complexity of Algorithm
The term algorithm complexity measures how many steps are required by the algorithm
to solve the given problem. It evaluates the order of count of operations executed by an
algorithm as a function of input data size. To assess the complexity, the order (approximation)
of the count of operations is always considered instead of counting the exact steps.
O(f) notation represents the complexity of an algorithm, and is also termed asymptotic
notation or "Big O" notation. Here f is a function of the size of the input data. The asymptotic
complexity O(f) determines the order in which resources such as CPU time, memory, etc. are
consumed by the algorithm, articulated as a function of the size of the input data.
The complexity can be of any form, such as constant, logarithmic, linear, n*log(n),
quadratic, cubic, exponential, etc.; that is, the number of steps needed to complete a particular
algorithm is of constant, logarithmic, linear (and so on) order. To make it even more precise,
we often refer to the complexity of an algorithm as its "running time".
Logarithmic Complexity:
It imposes a complexity of O(log(N)). The algorithm executes on the order of log(N)
steps; when operating on N elements, the logarithm is usually taken to base 2. For
N = 1,000,000, an algorithm with complexity O(log(N)) would take about 20 steps (to within a
constant factor), since log2(1,000,000) ≈ 20. The logarithmic base does not affect the order of
the operation count, so it is usually omitted.
Linear Complexity:
It imposes a complexity of O(N). The algorithm takes about the same number of steps as
the total number of elements: to implement an operation on N elements, for example
500 elements, it takes about 500 steps. In linear complexity, the number of steps depends
linearly on the number of elements; for example, the number of steps for N elements
can be N/2 or 3*N.
N*log(N) Complexity:
It imposes a run time of O(N*log(N)). The algorithm executes on the order of
N*log(N) steps on N elements to solve the given problem. For 1000 elements, an
N*log(N) algorithm will execute about 10,000 steps.
Cubic Complexity: It imposes a complexity of O(N^3). For an input of size N, it executes
the order of N^3 steps on the N elements to solve a given problem.
For example, if there are 100 elements, it is going to execute 1,000,000 steps.
The factorial function N! grows even faster: for example, N = 5 results in 5! = 120;
likewise, N = 10 results in 10! = 3,628,800, and so on.
Since constant factors do not significantly affect the order of the operation count, it is
better to ignore them. Thus, algorithms that perform N, N/2, or 3*N operations on the same
number of elements are all considered linear and roughly equally efficient.
Space and time complexity are fundamental concepts in algorithm analysis, which help
in understanding how algorithms behave in terms of memory usage and execution time as the
input size grows.
Time Complexity:
Time complexity refers to the amount of time an algorithm takes to complete as a
function of the size of its input. It provides an estimation of the number of operations performed
by the algorithm relative to the input size. Time complexity is typically expressed using Big O
notation.
def linear_search(arr, target):
    for i in range(len(arr)):
        if arr[i] == target:
            return i
    return -1
# Time Complexity: O(n) - Linear Time
In this example, the time complexity of the linear search algorithm is O(n), where 'n'
represents the size of the input array. This is because the algorithm iterates through each element
of the array once to find the target element.
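The binary search example referred to below is not reproduced in the source; a minimal Python sketch, assuming a sorted input list, is:

def binary_search(arr, target):
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2      # middle of the current search space
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            low = mid + 1            # discard the lower half
        else:
            high = mid - 1           # discard the upper half
    return -1
# Time Complexity: O(log n) - Logarithmic Time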
In this example, the time complexity of the binary search algorithm is O(log n), where 'n'
represents the size of the input array. This is because the algorithm halves the search space in
each iteration, resulting in a logarithmic growth rate.
Space Complexity:
Space complexity refers to the amount of memory space required by an algorithm to
execute as a function of the size of its input. It includes both the space required by the algorithm
itself (e.g., variables, data structures) and any additional space used during its execution (e.g.,
recursion stack, auxiliary space). Like time complexity, space complexity is also expressed
using Big O notation.
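As an illustration (not from the original), two ways of summing a list differ in space complexity even though both run in O(n) time:

def sum_iterative(arr):
    total = 0              # O(1) auxiliary space: a single accumulator
    for x in arr:
        total += x
    return total

def sum_recursive(arr):
    if not arr:            # O(n) auxiliary space: one stack frame per element
        return 0
    return arr[0] + sum_recursive(arr[1:])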
The brute force approach solves a problem in the most direct way, by trying the
possibilities one by one.
Pros:
The brute force method is ideal for solving small and simple problems.
It is known for its simplicity and can serve as a comparison benchmark.
Cons:
The brute force approach is inefficient: for real-time problems, its order of growth
often reaches O(N!) or beyond.
This method relies more on the raw power of a computer system than on a good
algorithm design to solve a problem.
Brute force algorithms are slow.
Brute force algorithms are neither constructive nor creative compared to algorithms
that are constructed using other design paradigms.
String Matching:
The problem of matching patterns in strings is central to database and text
processing applications. The problem will be specified as: given an input text string t of length
n, and a pattern string p of length m, find the first (or all) instances of the pattern in the text.
The simplest algorithm for string matching is a brute force algorithm, where we simply
try to match the first character of the pattern with the first character of the text, and if we
succeed, try to match the second character, and so on; if we hit a failure point, slide the pattern
over one character and try again. When we find a match, return its starting location. Java code
for the brute force method:
for (int i = 0; i <= n - m; i++) {       // try each alignment of the pattern
    int j = 0;
    while (j < m && t[i + j] == p[j]) {  // extend the match while characters agree
        j++;
    }
    if (j == m) return i;                // full match found at position i
}
System.out.println("No match found");
return -1;
The outer loop is executed at most n-m+1 times, and the inner loop at most m times for
each iteration of the outer loop. Therefore, the running time of this algorithm is in O(nm).
Travelling Salesman Problem:
The Travelling Salesman Problem is based on a real-life scenario: a salesman from a
company has to start from his own city, visit all the assigned cities exactly once, and return
home by the end of the day. The exact problem statement goes like this: "Given a set of cities
and the distance between every pair of cities, the problem is to find the shortest possible route
that visits every city exactly once and returns to the starting point."
There are two important requirements in this problem statement:
Visit every city exactly once
Cover the shortest path
We need to find the shortest path covering all the nodes exactly once, which is highlighted
in the figure below for the above graph.
Run a loop num_nodes times, each time taking two inputs, first_node and second_node,
as two nodes having an edge between them, and set the edges_list[first_node][second_node]
position equal to 1.
After the loop executes, we have an adjacency matrix available, i.e., edges_list.
Generate the permutations of the remaining cities. Suppose we have N nodes in total and
have considered one node as the source; then we need to generate the remaining (N-1)!
(factorial of N-1) permutations.
Calculate the edge sum (path sum) for every permutation and keep track of the
minimum path sum over all permutations.
Return the minimum edge cost.
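A minimal Python sketch of this brute-force procedure follows; it assumes the graph is supplied as an n × n cost matrix edges_list, and uses itertools.permutations in place of the next_permutation function described next (the function name is illustrative):

from itertools import permutations

def tsp_brute_force(edges_list, source=0):
    n = len(edges_list)
    other_nodes = [v for v in range(n) if v != source]
    best_cost = float("inf")
    # Try all (n-1)! orderings of the remaining cities
    for perm in permutations(other_nodes):
        cost, current = 0, source
        for v in perm:
            cost += edges_list[current][v]
            current = v
        cost += edges_list[current][source]   # return to the starting city
        best_cost = min(best_cost, cost)
    return best_cost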
Working Mechanism:
This function (C++'s next_permutation) rearranges the objects in the range from
nodes.begin() to nodes.end(), i.e., all elements of nodes, into the next lexicographical order.
If there exists a greater lexicographical arrangement than the current arrangement, the function
returns true; otherwise it returns false.
Lexicographical order is also known as dictionary order in mathematics.
Complexity:
The time complexity of the algorithm depends upon the number of nodes. If the
number of nodes is n, then the time complexity will be proportional to n! (factorial of n), i.e.,
O(n!).
Most of the space in this graph algorithm is taken by the adjacency matrix, which
is an n × n two-dimensional matrix, where n is the number of nodes. Hence the space complexity
is O(n^2).
The Divide and Conquer approach solves a problem using the following three steps:
1. Divide the original problem into a set of subproblems.
2. Conquer: solve every subproblem individually, recursively.
3. Combine: put together the solutions of the subproblems to get the solution of the whole
problem.
Two classic divide-and-conquer sorting algorithms are:
Quicksort: It is a highly efficient sorting algorithm, also known as partition-exchange
sort. It starts by selecting a pivot value from the array and then divides the rest of the array
elements into two subarrays. The partition is made by comparing each element with the pivot
value, checking whether it holds a greater or lesser value than the pivot, and the subarrays
are then sorted recursively.
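A minimal Python sketch of this partitioning idea (choosing the first element as the pivot and partitioning via new lists is a simplification of the usual in-place scheme):

def quicksort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[0]                                  # pivot value
    less = [x for x in arr[1:] if x < pivot]        # elements below the pivot
    greater = [x for x in arr[1:] if x >= pivot]    # elements at or above it
    return quicksort(less) + [pivot] + quicksort(greater)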
Merge Sort: It is a sorting algorithm that sorts an array by making comparisons. It starts
by dividing the array into subarrays and then recursively sorts each of them. After the
sorting is done, it merges them back together.
Since the subproblems in these algorithms are solved independently, they naturally
exhibit parallelism and can be handled, without modification, by systems incorporating
parallel processing.
Merge Sort
Merge sort is a sorting algorithm that falls under the category of the Divide and
Conquer technique. It is one of the best sorting techniques and is built as a recursive
algorithm.
1. Step 1: The merge sort algorithm recursively divides the array into equal halves until we
reach atomic values. If there is an odd number of elements in the array, one of the halves
will have one more element than the other.
2. Step 2: After dividing the array into two subarrays, notice that the order of the elements
is the same as in the original array. We then further divide these two arrays into halves.
3. Step 3: Again, we divide these arrays until we reach atomic values, i.e., values that
cannot be divided further.
4. Step 4: Next, we merge them back in the same way as they were broken down.
5. Step 5: For each pair of lists, we first compare the elements and then combine them to
form a new sorted list.
6. Step 6: In the next iteration, we compare the lists of two data values and merge them
back into lists of four data values, all placed in sorted order.
Merge sort is a sorting algorithm that follows the divide-and-conquer approach. It
works by recursively dividing the input array into smaller subarrays and sorting those subarrays
then merging them back together to obtain the sorted array.
In simple terms, we can say that the process of merge sort is to divide the array into two
halves, sort each half, and then merge the sorted halves back together. This process is repeated
until the entire array is sorted.
Example:
Divide: Divide the list or array recursively into two halves until it can be divided no
further.
Conquer: Each subarray is sorted individually using the merge sort algorithm.
Merge: The sorted subarrays are merged back together in sorted order. The process
continues until all elements from both subarrays have been merged.
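A compact Python sketch of this divide-sort-merge process (assuming a list of comparable values):

def merge_sort(arr):
    if len(arr) <= 1:                 # an atomic value is already sorted
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])      # Divide and Conquer each half
    right = merge_sort(arr[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:       # Merge: take the smaller front element
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])           # append whatever remains
    merged.extend(right[j:])
    return merged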
Efficiency:
The divide and conquer strategy is a widely used algorithmic paradigm that solves a
problem by breaking it into smaller subproblems, solving each subproblem recursively, and
then combining the results. Its efficiency can be evaluated in terms of time complexity and
space complexity:
1. Time Complexity
The time complexity of a divide and conquer algorithm depends on:
• The number of subproblems into which the problem is divided.
• The size of each subproblem.
• The cost of dividing the problem and combining the sub-solutions.
Examples:
Merge Sort: T(n) = 2T(n/2) + O(n) → Time Complexity: O(n log n).
Binary Search: T(n) = T(n/2) + O(1) → Time Complexity: O(log n).
2. Space Complexity
Space complexity in divide and conquer depends on:
1. Auxiliary Space: memory required to hold intermediate results (for example, the
temporary arrays used while merging).
2. Recursive Stack Space: memory required to maintain the recursive function calls.
Analysis:
Each recursive call consumes stack space proportional to the depth of the recursion.
For most divide and conquer algorithms, the depth of recursion is O(log n) (when the
problem size reduces geometrically).
Examples:
Merge Sort: requires O(n) auxiliary space for merging and O(log n) stack space →
Total space: O(n + log n).
Quick Sort: requires O(log n) stack space in the best case and O(n) in the worst case →
Total space: O(log n) in the best case.
• Step 3: Create an iterative process for going over all subproblems and creating an
optimum solution.
Let’s take up a real-world problem and formulate a greedy solution for it.
Problem:
Alex is a very busy person. He has set aside time T to accomplish some interesting
tasks, and he wants to do as many tasks as possible in this allotted time. For that, he has created
an array A of the times required to complete the items on his itinerary. We need to figure out
how many things Alex can complete in the time T he has.
Approach to Build a Solution:
This is a straightforward greedy problem. In each iteration, we pick the item from
array A that takes the least amount of time, while keeping two variables updated:
current_Time and number_Of_Things. To generate a solution, we carry out the following steps
(a code sketch follows the list).
• Sort the array A in ascending order.
• Select one timestamp at a time.
• After picking up the timestamp, add the timestamp value to current_Time.
• Increase number_Of_Things by one.
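A short Python sketch of this greedy selection (names follow the problem statement; the loop stops as soon as the next-cheapest task no longer fits, which is the assumed stopping rule):

def max_tasks(A, T):
    A.sort()                          # ascending order of task times
    current_time = 0
    number_of_things = 0
    for duration in A:
        if current_time + duration > T:
            break                     # the next-cheapest task no longer fits
        current_time += duration
        number_of_things += 1
    return number_of_things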
Greedy Solution (route finding): As a second example, consider finding a route to a
destination city. In order to tackle this problem, we need to maintain a graph structure, and
over that graph structure we build a tree, which serves as the answer to this problem.
The steps to generate this solution are given below:
• Start from the source vertex.
• Pick one vertex at a time with a minimum edge weight (distance) from the source
vertex.
• Add the selected vertex to a tree structure if the connecting edge does not form a cycle.
• Keep adding adjacent fringe vertices to the tree until you reach the destination vertex.
The animation given below explains how paths will be picked up in order to reach the
destination city.
The greedy method works by making the locally optimal choice at each stage in the
hope of finding the global optimum. This can be done by either minimizing or maximizing the
objective function at each step. The main advantage of the greedy method is that it is relatively
easy to implement and understand. However, there are some disadvantages to using this
method. First, the greedy method is not guaranteed to find the best solution. Second, it can be
quite slow. Finally, it is often difficult to prove that the greedy method will indeed find the
global optimum.
One of the most famous examples of the greedy method is the knapsack problem. In
this problem, we are given a knapsack of fixed capacity and a set of items, each with a weight
and a value. We want to find the subset of items that maximizes the total value without
exceeding the capacity. The greedy method would simply take the item with the highest value
at each step. However, this might not be the best solution. For example, consider a knapsack
of capacity 6 and the following set of items:
Item 1: Weight = 5, Value = 10
Item 2: Weight = 3, Value = 6
Item 3: Weight = 3, Value = 6
The greedy method would take Item 1, the highest-value item, for a total value of 10;
no other item then fits. However, the optimal solution would be to take Item 2 and Item 3, for
a total value of 12. Thus, the greedy method does not always find the best solution.
There are many other examples of the greedy method. One of the most famous is the
Huffman coding algorithm, which is used to compress data. In this algorithm, we are given a
set of symbols, each with a weight (its frequency), and we want to assign codes to the symbols
so as to minimize the average length of the code. The greedy method repeatedly merges the
two symbols with the lowest weights; for Huffman coding, this greedy choice can be proven
to produce an optimal prefix code. For example, consider the following set of symbols:
Symbol 1: Weight = 2, Code = 00
Symbol 2: Weight = 3, Code = 010
In general, a greedy algorithm has four components:
1. A set of candidate solutions.
2. A way of ranking the candidates according to certain criteria.
3. A selection function that picks the best candidate from the set, according to the ranking.
4. A way of "pruning" the set of candidates, so that it doesn't contain any solutions that
are worse than the one already chosen.
The first two components are straightforward - the candidate solutions can be anything,
and the ranking criteria can be anything as well. The selection function is usually just a matter
of picking the candidate with the highest ranking.
The pruning step is important, because it ensures that the algorithm doesn't waste time
considering candidates that are already known to be worse than the best one found so far.
Without this step, the algorithm would essentially be doing a brute-force search of the entire
solution space, which would be very inefficient.
The pseudo code for the chooseNextState() function shows how we can choose the next
state that is closest to the goal state. First, we find all of the possible next states that we could
move to from the current state. Next, we loop through all of the possible next states and
compare each one to see if it is closer to the goal state than the best state that we have found so
far. If it is, then we set the best state to be equal to the new state. Finally, we return the best
state that we have found.
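The pseudocode itself is not reproduced in the source; a hedged Python sketch of the described logic might look like this, where get_successors and distance are caller-supplied helpers (assumptions of this sketch):

def choose_next_state(current_state, goal_state, get_successors, distance):
    best_state = None
    best_distance = float("inf")
    for state in get_successors(current_state):   # all possible next states
        d = distance(state, goal_state)
        if d < best_distance:                     # closer to the goal than the best so far
            best_state = state
            best_distance = d
    return best_state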
The above pseudo code shows how a greedy algorithm works in general. However, it
is important to note that not all problems can be solved using a greedy algorithm. In some
cases, it may be necessary to use a different type of algorithm altogether.
Example (fractional knapsack): we are given items with {profit, weight} pairs {60, 10},
{100, 20}, {120, 30} and a knapsack of capacity W = 50.
Sorting: Initially sort the array in decreasing order of the profit/weight ratio. The sorted array
will be {{60, 10}, {100, 20}, {120, 30}}.
Iteration:
For i = 0, weight = 10 which is less than W. So add this element in the knapsack. profit = 60
and remaining W = 50 – 10 = 40.
For i = 1, weight = 20 which is less than W. So add this element too. profit = 60 + 100 = 160
and remaining W = 40 – 20 = 20.
For i = 2, weight = 30 is greater than W. So add 20/30 fraction = 2/3 fraction of the element.
Therefore profit = 2/3 * 120 + 160 = 80 + 160 = 240 and remaining W becomes 0. So the final
profit becomes 240 for W = 50.
Follow the given steps to solve the problem using the above approach:
• Sort the items in decreasing order of the profit/weight ratio and traverse them one by one.
• If the weight of the current item is less than or equal to the remaining capacity, then add
the value of that item into the result.
• Else add as much of the current item as fits (the fraction remaining capacity / weight)
and break out of the loop.
• Return res (see the code sketch below).
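A compact Python sketch of these steps, using the (profit, weight) pairs from the example above:

def fractional_knapsack(items, W):
    # items: list of (profit, weight) pairs; W: knapsack capacity
    items.sort(key=lambda pw: pw[0] / pw[1], reverse=True)
    res = 0.0
    for profit, weight in items:
        if weight <= W:                  # the whole item fits
            res += profit
            W -= weight
        else:                            # take the fraction that fits, then stop
            res += profit * (W / weight)
            break
    return res

print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))   # 240.0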
1. In this case, S represents the problem's starting point. You start at S and work your way
to solution S1 via the midway point M1. However, you discover that solution S1 is
not a viable solution to our problem. As a result, you backtrack (return) from S1 to
M1, then to S, and then look for the feasible solution S2. This process is repeated
until you arrive at a workable solution.
2. S1 and S2 are not viable options in this case; only S3 is a viable solution. Looking at
this example, you can see that we go through all possible combinations until we find
a viable solution. This is why backtracking is considered a brute-force algorithmic
technique.
3. The tree representation of a problem like the one above is called a "state space tree". It
represents all possible states of a given problem (solution or non-solution).
The final algorithm is as follows:
• Step 1: Return success if the current point is a viable solution.
• Step 2: Otherwise, if all paths have been exhausted (i.e., the current point is an
endpoint), return failure because there is no feasible solution.
• Step 3: If the current point is not an endpoint, backtrack and explore other points, then
repeat the preceding steps.
Applications of Backtracking
N-queen problem
Sum of subset problem
Graph coloring
Hamiltonian cycle
N-Queens problem
• The N-Queens problem is to place N queens on an N x N chessboard in such a manner
that no two queens attack each other by being in the same row, column, or diagonal.
• Here, we solve the problem for N = 4 queens.
• Before solving the problem, let's know about the movement of the queen in chess.
• In the chess game, a queen can move any number of steps in any direction like vertical,
horizontal, and diagonal.
• The only constraint is that it can’t change its direction while it’s moving.
• In the 4-Queens problem we have to place 4 queens, Q1, Q2, Q3, and Q4, on the
chessboard in such a way that no two queens attack each other.
• To solve this problem, the backtracking algorithm or approach is generally used.
• In backtracking, we start with one possible move out of many available moves and try to
solve the problem.
• If we can solve the problem with the selected move, we print the solution; else we
backtrack, select some other move, and try to solve it.
• If none of the moves work out, we claim that there is no solution for the problem.
• Repeat this process of placing a queen and backtracking until all the N queens
are placed successfully.
Step 1
As this is the 4-Queens problem, therefore, create a 4×4 chessboard.
Step 2
• Place the Queen Q1 at the left-most position which means row 1 and column 1.
• Mark the cells of the chessboard with cross marks that are under attack from a queen
Q1.
• (horizontal, vertical, and diagonal move of the queen)
Step 3
• The possible safe cells for Queen Q2 in row 2 are in columns 3 and 4, because these cells
do not come under attack from queen Q1.
• So, here we place Q2 at the first possible safe cell which is row 2 and column 3.
• Mark the cells of the chessboard with cross marks that are under attack from a queen
Q2.
• (horizontal, vertical, and diagonal move of the queen)
• The chessboard looks as follows after placing Q2 at [2, 3] position:
Step 4
• We see that no safe place is remaining for the Queen Q3 if we place Q2 at position [2,
3]. Therefore make position [2, 3] false and backtrack.
Step 5
• So, we place Q2 at the second possible safe cell which is row 2 and column 4.
• Mark the cells of the chessboard with cross marks that are under attack from a queen
Q2. (horizontal, vertical, and diagonal move of the queen)
• The chessboard looks as follows after placing Q2 at [2, 4] position:
Step 6
• The only possible safe cell for Queen Q3 is remaining in row 3 and column 2.
• Therefore, we place Q3 at the only possible safe cell which is row 3 and column 2.
• Mark the cells of the chessboard with cross marks that are under attack from a queen
Q3. (horizontal, vertical, and diagonal move of the queen)
• The chessboard looks as follows after placing Q3 at [3, 2] position:
Step 7
We see that no safe place is remaining for the Queen Q4 if we place Q3 at position [3,
2]. Therefore, make position [3, 2] false and backtrack.
Step 8
• This time we backtrack to the first queen Q1.
• Place the Queen Q1 at column 2 of row 1.
• Mark the cells of the chessboard with cross marks that are under attack from a queen
Q1. (horizontal, vertical, and diagonal move of the queen)
• The chessboard looks as follows after placing Q1 at [1, 2] position:
Step 9
• The only possible safe cell for Queen Q2 is remaining in row 2 and column 4.
• Therefore, we place Q2 at the only possible safe cell which is row 2 and column 4.
• Mark the cells of the chessboard with cross marks that are under attack from a queen
Q2. (horizontal, vertical, and diagonal move of the queen)
• The chessboard looks as follows after placing Q2 at [2, 4] position:
Step 10
• The only possible safe cell for Queen Q3 is remaining in row 3 and column 1.
• Therefore, we place Q3 at the only possible safe cell which is row 3 and column 1.
• Mark the cells of the chessboard with cross marks that are under attack from a queen
Q3. (horizontal, vertical, and diagonal move of the queen)
• The chessboard looks as follows after placing Q3 at [3, 1] position:
Step 11
• The only possible safe cell for Queen Q4 is remaining in row 4 and column 3.
• Therefore, we place Q4 at the only possible safe cell which is row 4 and column 3.
• The chessboard looks as follows after placing Q4 at [4, 3] position:
Step 12
• Now we have the solution to the 4-queens problem, because all 4 queens are placed
so that no two queens attack each other, with exactly one queen in each row and column.
• In this example we placed the queens row by row; the same can be done column-wise
as well. A compact code sketch of this backtracking procedure follows.
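A compact Python sketch of this row-by-row backtracking (cols[r] records the column of the queen placed in row r):

def solve_n_queens(n):
    cols = []

    def safe(col):
        row = len(cols)
        for r, c in enumerate(cols):
            if c == col or abs(c - col) == row - r:   # same column or diagonal
                return False
        return True

    def place(row):
        if row == n:
            return True                # all queens placed
        for col in range(n):
            if safe(col):
                cols.append(col)       # try this cell
                if place(row + 1):
                    return True
                cols.pop()             # backtrack: undo and try the next column
        return False                   # no safe cell in this row

    return cols if place(0) else None

print(solve_n_queens(4))   # [1, 3, 0, 2]: columns 2, 4, 1, 3, matching the steps above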