
Group 24

Comparison among the following algorithm design approaches: Greedy, Divide and Conquer, Dynamic Programming, and Backtracking. Python code to demonstrate a simple aspect of each approach. Time and space complexity.
• The term "algorithm design approach" refers to the process by which an algorithm is created to solve a particular problem. In computer science, a variety of design strategies are frequently employed, including:
I. Greedy,
II. Divide and Conquer,
III. Dynamic Programming, and
IV. Backtracking
General Comparison
Divide and conquer
• A method of solving problems that entails splitting a big problem into smaller, more manageable sub-problems that can be solved individually, then combining those individual solutions to solve the main problem.
• It is recursive in nature.
The divide and conquer strategy of designing algorithms has two fundamentals:
• Relational formula
• Stopping condition
Divide and conquer solves a problem in three stages:
⮚ Divide the problem into several smaller sub-problems that are similar to the original but smaller in size.
⮚ Conquer the sub-problems by solving them recursively. If they are small enough, just solve them in a straightforward manner.
⮚ Combine the solutions of the two or more sub-problems to obtain a solution to the larger problem.
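As an illustration of the three stages, here is a minimal merge sort sketch in Python (the function names and example list are ours, chosen only for demonstration):

def merge(left, right):
    # Combine: merge two already-sorted lists into one sorted list
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

def merge_sort(arr):
    # Stopping condition: a list of 0 or 1 elements is already sorted
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2           # Divide: split the list into two halves
    left = merge_sort(arr[:mid])  # Conquer: sort each half recursively
    right = merge_sort(arr[mid:])
    return merge(left, right)     # Combine the sorted halves

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]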
Applications of Divide and Conquer

Divide and conquer has many applications in different fields, which include:

• Merge sort: the algorithm splits the array into two halves, sorts each half recursively, and then merges the two sorted halves.
cont..
● Binary Search: binary search is also implemented by the divide and conquer strategy. It is used to find a particular element in a sorted array.
● Quicksort: a sorting algorithm that works by selecting a pivot element and rearranging the array elements so that all elements smaller than the pivot go to its left side and all elements greater than the pivot move to its right side.
● Strassen’s matrix multiplication: an efficient method for multiplying large matrices using fewer scalar multiplications than the standard algorithm.
Implementation
Python code for Binary Search
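A minimal recursive sketch (the function and variable names are ours):

def binary_search(arr, target, low, high):
    # Stopping condition: the search range is empty, so target is absent
    if low > high:
        return -1
    mid = (low + high) // 2
    if arr[mid] == target:
        return mid  # target found at index mid
    elif arr[mid] < target:
        # Conquer the right half
        return binary_search(arr, target, mid + 1, high)
    else:
        # Conquer the left half
        return binary_search(arr, target, low, mid - 1)

data = [2, 5, 8, 12, 16, 23, 38]
print(binary_search(data, 23, 0, len(data) - 1))  # 5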
Time and Space complexity
● Recursive binary search solves the problem in O(log n) time,
● with O(log n) space for the recursion stack (an iterative version needs only O(1) extra space).
Advantages
● It separates the main problem into smaller sub-problems that can be solved concurrently, enabling multiprocessing.
● Dividing the problem makes it easier, since the work and the resources are divided.
● It is often much faster than straightforward alternatives for the same problem.
● The divide and conquer approach lends itself to parallelism; parallel processing in operating systems handles such workloads very efficiently.
● Small sub-problems can fit in cache memory, which is faster than main memory, without occupying much main memory.
● It can reduce the time complexity of the problem.
● The divide-and-conquer strategy frequently aids in the search for efficient algorithms.
Disadvantages

● Most divide and conquer designs use recursion, so they require careful memory management.
● Memory overuse is possible when an explicit stack is used.
● It may crash the system if the recursion is too deep or not terminated properly.
● Recursion is sometimes slow.
● Dividing a problem into smaller sub-problems can increase the complexity of the overall solution.
GREEDY ALGORITHM
• It refers to a problem-solving technique that selects the best option available (the locally optimal choice) at each moment, in the hope of reaching a global optimum.

• It does not consider whether the current best option will actually lead to an optimal overall result.

• It never reverses an earlier decision, even if that decision turns out not to be the best one.

• It is a top-down approach, meaning it divides the problem into smaller sub-problems: we start from the top of the problem and gradually work toward the bottom.
GREEDY ALGORITHM
Applications of the greedy approach include:
● Kruskal’s algorithm for finding minimum spanning trees
● Prim’s algorithm for finding minimum spanning trees
● Huffman’s algorithm for building Huffman coding trees
● Scheduling tasks to minimize the total time required
● Finding the shortest path in a graph
● Selecting a subset of items to maximize a certain value, subject to a constraint on total weight or cost
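A minimal sketch of the coin changing problem from the comparison later in this document (the denominations and target amount are our own example); note that the greedy choice is only guaranteed optimal for canonical coin systems such as this one:

def greedy_coin_change(amount, coins):
    # Greedy choice: always take the largest coin that still fits,
    # and never reconsider a coin once it has been taken
    result = []
    for coin in sorted(coins, reverse=True):
        while amount >= coin:
            result.append(coin)
            amount -= coin
    return result

print(greedy_coin_change(67, [1, 5, 10, 25]))  # [25, 25, 10, 5, 1, 1]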
TIME AND SPACE COMPLEXITY

For many problems, the greedy algorithm runs in O(n log n) time, typically dominated by an initial sort, with space complexity of the same order or lower.
ADVANTAGES

1. Simplicity: greedy algorithms are often simple and easy to understand.
2. Efficiency: they are efficient in terms of time and space complexity.
3. Approximation: they can provide a good approximation to the optimal solution.
4. Flexibility: they can be applied to a wide range of problems, from scheduling to graph traversal to data compression.
DISADVANTAGES
1. Lack of backtracking: once a decision is made, it is difficult to undo.
2. Sensitivity to input: changing the input can result in a different output, making it difficult to generalize the algorithm to new contexts.
3. Difficulty of analysis: analyzing the correctness and efficiency of greedy algorithms can be hard.
BACKTRACKING
● A general algorithm for finding solutions to some computational problems within defined constraints. It is a more exhaustive and flexible way to solve an optimization problem.

● It works by exploring all possible choices at each step, then backtracking whenever a choice leads to a dead end or a worse solution.

● Backtracking fits problems with the feasibility property: you can quickly check whether a partial solution is valid or not.
BACKTRACKING
Basic outline of the algorithm:
1. Start with an empty solution
2. Choose the next component to add to the solution
3. Check if the solution is valid
4. If invalid, backtrack to the previous step and try a
different candidate solution
5. If the solution is valid and complete, return it
6. Otherwise, repeat steps 2–5 until a valid and complete solution is found or all possibilities have been exhausted
APPLICATIONS

1. It solves constraint satisfaction problems such as the n-queens problem, Sudoku puzzles, and graph coloring problems.
2. Combinatorial optimization problems such as the travelling salesman problem and the knapsack problem.
3. Game playing: backtracking can be used to search for optimal moves in games such as chess, checkers, and Go.
N-queens problem
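A minimal Python sketch of the n-queens problem that follows the six-step outline above (the row-by-row representation and all names are ours):

def solve_n_queens(n):
    # Partial solution: solution[r] is the column of the queen in row r
    solution = []

    def is_valid(row, col):
        # Check the candidate queen against all previously placed queens
        for r, c in enumerate(solution):
            if c == col or abs(c - col) == abs(r - row):
                return False  # same column or same diagonal
        return True

    def place(row):
        if row == n:
            return True           # valid and complete: all queens placed
        for col in range(n):      # choose the next candidate component
            if is_valid(row, col):
                solution.append(col)
                if place(row + 1):
                    return True
                solution.pop()    # backtrack: undo the choice, try another
        return False              # dead end: no column works in this row

    return solution if place(0) else None

print(solve_n_queens(8))  # [0, 4, 7, 5, 2, 6, 1, 3]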
Advantages of Backtracking

● Flexibility: it can be adapted to a wide range of problem domains.
● Guaranteed solutions: backtracking is guaranteed to find a solution to the problem, provided that one exists.
● Exhaustive search: it is an exhaustive search technique that can systematically explore all possible solutions to a problem; this is useful for problems where the solution space is large and complex.
Disadvantages of Backtracking
● Backtracking may need to explore a large number of possibilities before finding a solution, which gives it exponential time complexity.
● That makes it impractical for solving large or complex problems.
● Backtracking can also be memory intensive if it generates and stores a large number of partial solutions. This can be a significant issue if the problem is large or memory resources are limited.
● Backtracking may not always find the best solution, because it may stop at the first solution found or may not investigate all possible solutions.
Dynamic Programming
• A computer programming technique in which an algorithmic problem is first broken down into sub-problems.

• It solves sub-problems recursively and stores their solutions to avoid repeated calculations.

• The solutions of the sub-problems are combined to achieve the best overall solution.

• These algorithms are mostly used for optimization.

• Dynamic programming algorithms use memoization to remember the output of already-solved sub-problems.
What are sub-problems ?
Sub-problems are smaller versions of the original problem. Let’s see an example with the sum below:

Original problem: 1+2+3+4

We can break this down into:

Sub-problem 1: 1+2
Sub-problem 2: 3+4

● Once we solve these two smaller problems, we can add the solutions to
these sub-problems to find the solution to the overall problem.
● Notice how these sub-problems break down the original problem into
components that build up the solution.
Applications
The following computer problems can be solved using the dynamic programming approach:
● Fibonacci number series
● Knapsack problem
● Tower of Hanoi
● All-pairs shortest path (Floyd–Warshall)
● Shortest path (Dijkstra)
● Project scheduling
Implementation
[Recursion tree for computing F(5) naively: F(5) calls F(4) and F(3); F(4) calls F(3) and F(2); and so on down to F(1) and F(0). Sub-problems such as F(3), F(2), and F(1) appear repeatedly, which is exactly the recomputation that memoization avoids.]
“Those who cannot remember the past are condemned to repeat it” – Dynamic Programming
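In that spirit, a minimal memoized Fibonacci sketch in Python (the cache-based approach is ours; the slides' original listing is not reproduced in this text version):

from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Memoization: each F(k) is computed once, then served from the cache
    if n < 2:
        return n  # base cases F(0) = 0, F(1) = 1
    return fib(n - 1) + fib(n - 2)

print(fib(5))   # 5
print(fib(50))  # 12586269025 -- instant, unlike the naive exponential version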
Applications
The following computer problems can be solved using the dynamic programming approach:
• Optimization problems: such as knapsack problems, the Fibonacci number series, the travelling salesman problem, and the longest common subsequence problem.
• Path finding: used to find the shortest path in a graph or a network, as in Dijkstra’s algorithm or the Bellman–Ford algorithm.
• Robotics: for motion planning and control.
• Game theory: for solving problems such as the minimax algorithm, alpha-beta pruning, etc.
• Natural language processing: for tasks such as speech recognition, machine translation, and text segmentation.
General Comparison
Applications

Greedy
• Optimization problems, e.g. shortest path algorithms such as Dijkstra’s algorithm, and Kruskal’s algorithm.
• Coin changing problem: find the minimum number of coins required to make the target amount.
• Minimum spanning tree of a graph: such as Prim’s algorithm and Kruskal’s algorithm.
• Job scheduling: used to optimize job scheduling problems.

Divide and Conquer
• Sorting algorithms: such as merge sort and quicksort.
• Searching algorithms: used in binary search.
• Matrix multiplication: such as Strassen’s algorithm.
• Closest pair of points: find the closest pair of points in a set of points in a plane.
• Maximum subarray: find the maximum subarray in an array of numbers.

Backtracking
• Solving constraint satisfaction problems.
• Optimization problems: used to find the best solution that can be applied.
• Enumeration problems: used to find the set of all feasible solutions of the problem.
• Decision problems: used to find a feasible solution of the problem.
• Generating permutations and combinations.

Dynamic Programming
• Optimization problems: such as knapsack problems, the Fibonacci number series, the travelling salesman problem, and the longest common subsequence problem.
• Path finding: used to find the shortest path in a graph or a network, such as Dijkstra’s algorithm or the Bellman–Ford algorithm.
Time Complexity
NB: The time complexity of these algorithmic designs depends on many factors, including the specific problem being solved, the implementation details of the algorithm, and the size and structure of the input. Therefore, it is important to analyze the time complexity of each algorithm on a case-by-case basis to determine the most efficient algorithm for a particular problem.

Greedy
1. Typically O(n log n) or O(n) for problems that can be solved by making locally optimal choices at each step, such as the Huffman coding problem and the minimum spanning tree problem.
2. Can be higher, as in the case of the travelling salesman problem, which is NP-hard.

Divide and Conquer
1. Typically O(n log n) for problems that can be divided into sub-problems of equal size, such as sorting algorithms like merge sort and quicksort.
2. Can be higher: a simple divide-and-conquer closest pair of points algorithm runs in O(n log² n).

Backtracking
Typically exponential, O(b^d), where b is the branching factor and d is the depth of the search tree, for problems that involve searching through all possible solutions, such as the n-queens problem and the graph coloring problem.

Dynamic Programming
1. Typically O(n²) or O(n³) for problems that involve computing optimal solutions for all possible sub-problems, such as the knapsack problem and the matrix chain multiplication problem.
2. Can be higher, as in the case of the sequence alignment problem, where the time complexity is O(n²m).
Space Complexity
NB: The space complexity of these algorithmic designs depends on many factors, including the specific problem being solved, the implementation details of the algorithm, and the size and structure of the input.

Greedy
Typically O(n) for problems that can be solved by making locally optimal choices at each step, such as the Huffman coding problem and the minimum spanning tree problem. This is because the algorithm typically requires only a constant amount of memory beyond the input data to store the current solution.

Divide and Conquer
Typically O(n log n) for problems that can be divided into sub-problems of equal size, such as sorting algorithms like merge sort and quicksort. This is because the algorithm typically requires a stack of size O(log n) to keep track of the recursion depth, and each level of recursion requires O(n) space for temporary storage. However, for some problems the space complexity differs, as in the case of the closest pair of points algorithm, where the space complexity is O(n).

Backtracking
Typically O(b·d) for problems that involve searching through all possible solutions, such as the n-queens problem and the graph coloring problem. This is because the algorithm typically requires a stack of size O(d) to keep track of the recursion depth, and each level of recursion requires O(b) space to store the current solution.

Dynamic Programming
Typically O(n²) or O(n³) for problems that involve computing optimal solutions for all possible sub-problems, such as the knapsack problem and the matrix chain multiplication problem. This is because the algorithm typically requires a table of size O(n²) or O(n³) to store the solutions to the sub-problems. However, some dynamic programming algorithms can be optimized to use less space by storing only the necessary information.
THANK YOU
